• 1 Post
  • 38 Comments
Joined 1 year ago
Cake day: July 5th, 2023


  • It’s a good start to a long path :) I’m not a doctor of medicine, and this isn’t medical advice, but I know it was really helpful for me when I started recognizing I was on a path to helping myself - not the ADHD, not the trauma, not whatever else it may be diagnosed as, but me, my experiences, my patterns, my brain.

    The labels can be helpful for seeing, noticing, understanding, approaching, and getting medical support where needed, but ultimately it’s great that the symptoms were validated, and congrats on taking the steps! It’s hard work to identify the need, hard work to reach out and get support, and it means you’re very likely on a good path.


  • The Remembrall idea is really smart! I might need to find a version of that that works for me.

    Being really open is also great; radical authenticity and openness (with those where it’s appropriate and comfortable) has helped me learn and help others, and it’s earned me acceptance from people I’d struggled with. “Let’s assume I’ve been living underground for a while, how exactly do you go about X, if you’re comfortable answering?” Also great for those with absent or developmentally lacking childhood experiences.


  • Yeah, a lot of my systems have been built up by noticing bad patterns and finding easier alternatives. A frozen curry that takes 10 minutes of effort tops, with pre-made masala paste - it may not be the most satisfying, but it’s costing me about $4, I’ll be eating in less time than ordering in, and I won’t get stuck looking at menus for an hour.


  • Yeah, I fortunately had a Magic Bullet (not great for it, but works) from years ago that I received as a gift. The other comment nailed it; any time I’ve added water, it’s been bland. Milk, some yogurt, and a healthy mix of fruits is really flavourful; it might throw off the texture, but the oats and spinach add a nice nutrient punch.


  • Less of everything is real. I’ll regularly and unintentionally mentally itemize what I have and what my options are, and finding ways to limit that is always helpful. I have a nearly empty fridge and pantry, a meal plan which is more like a two-week menu of options, only a few pieces of each type of clothing, and so on. Fewer opportunities to fall into bad patterns. There was a time when my solution to not doing laundry for two weeks was to buy more cheap “backup” clothes.

    Then the purge happened.

    Now it’s a few good things, bought with the intention of easy maintenance and minimized choice, while still allowing a bounded spontaneity.




  • I agree with the recommendation for talking to the doctor; I’m in a similar position. This might be completely unrelated to your situation, so take it with many grains of salt.

    What I’ve been learning is that a lot of my own burnout cycling seems to be cycles of intense and constant masking. I struggle with social situations and with following “chronos” time instead of “kairos” time (if you’ll entertain the misuse of these), and eventually I get so buried in obligations and in actively resisting my natural impulses that my wants and needs get muddied and untended to, and the pain of pretending or entering social modes builds up too much. Being in a mode - “professor mode”, “friend mode”, “colleague mode”, “spouse mode” - they all build up the tension, and while I can seem socially adept, if I’m not in a mode I’m pretty useless. My Uber drivers can attest. Sometimes I do need a break from social activities while I break down the mask that builds up. A decalcifying of the brain. My friends seem to understand.

    I’m a teaching professor, and I love the work; it has high flexibility but strict accountability, lots of room to experiment and find novelty, but a massive social burden and a ridiculous workload. The only thing I’ve found to help so far, besides medication, is doing less. Fewer work obligations meant more time to de-mask. It meant I could take the extra time on my tasks that I’d refused to admit to myself I needed compared to colleagues, and more time to do stupid random impulsive (but safe) BS, which I’ve found is the most relaxing thing for me. That naturally led to meditating, exercising, and eating a bit healthier, which made things feel more manageable.

    I don’t know what the long-term prospects of this realization are for me, but consider that ADHD usually means that tasks will take longer and require more effort than they do for typical people. Admitting to myself that it’s a disability, and that I don’t need to work twice or three times as hard as other people to make up for it all the time, has been really important.

    You also mentioned trauma; a lifetime of letting people down without knowing why really turned me into an over-supporter as an adult. A fawning response to stress means feeling the stress build up and instinctively doing whatever you can to help other people at your own cost, rather than fighting, running away, or freezing up - and then, when you’re alone, you fight yourself, freeze, or run away from everything. I’ve been told it’s a form of invisible self-harm, and it’s nefarious because the goal is to make everyone else see things as all right.

    So I don’t know if any of this rings true for you, and I don’t fully know the solution to these things, but awareness of my own issues has helped a lot, and I think awareness and recognition are key to getting started. Years of therapy, meditation, and medication are all making progress, but it’s slow, and awareness has been key to any of the positives. For me, it seems like working less, admitting to myself I have a disability, undoing years of traumatic people-pleasing at my own expense, and learning to unmask more in social interactions and at home are key, however that path gets trodden.


  • I sit somewhere tangential on this - I think Bret Victor’s thoughts are valid here, or at least my interpretation of them: that we need to start revisiting our tooling. Our IDEs should be doing a lot more heavy lifting to suit our needs and reduce the kind of cognitive load that’s better suited to the computer anyway. I get that it’s not as applicable here as in other use cases, but there’s some room for improvement.

    Having it in separate functions is more testable, more maintainable, and more readable when we’re thinking about control flow. Sometimes we want to look at a function and understand the nuts and bolts, and sometimes we just want to know the overall flow. Why can’t we swap between views and inline the functions in our IDE when we want to see the full flow? In fact, why can’t we see the function inline, but with the parameter variables replaced by the passed values, to get a feel for how the function will flow and to compute what can be easily computed (assuming no global state)? There’s a rough sketch of what I mean at the end of this comment.

    I could be completely off base, but more and more recently - especially after years of teaching introductory programming - I’m leaning toward the idea that our IDEs should be doubling down on taking advantage of language features, live computation, and co-operating with our coding style… and not just OOP. I’d love to hear about the angles I might be overlooking. Maybe this is all a moot point, but I think code design and tooling should go hand in hand.
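    Here’s the rough sketch I mentioned - purely hypothetical, with made-up function names and numbers - of the two views I’m imagining: the call site as written, and what an IDE-generated “expanded” view could render, with the arguments substituted in and the easy arithmetic pre-computed:

    ```python
    # Hypothetical illustration only - not a real IDE feature. The names and
    # numbers below are made up for the example.

    def shipping_cost(weight_kg: float, express: bool) -> float:
        """Nuts-and-bolts view: the function as written."""
        base = 5.00 + 1.25 * weight_kg
        return base * 2 if express else base

    def checkout_total(cart_subtotal: float) -> float:
        # Call-site view: today we only see the call itself...
        return cart_subtotal + shipping_cost(weight_kg=2.0, express=True)

        # ...but an "expanded" view could render that same line roughly as:
        #
        #   return cart_subtotal + (5.00 + 1.25 * 2.0) * 2   # == cart_subtotal + 15.0
        #
        # with the parameters replaced by the passed values and the constant
        # parts evaluated live (assuming no global state).

    if __name__ == "__main__":
        print(checkout_total(20.00))  # 35.0
    ```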





  • I appreciate the comment, and it’s a point I’ll be making this year in my courses. More than ever, students have been struggling to motivate themselves to do the work. The world’s on fire, and it’s hard to find intrinsic motivation to do hard things for the sake of learning - I get it. Get a degree to get a job to survive; learning is secondary. But this survival mindset means the easiest way gets treated as the best way, and that’s going to crumble long-term.

    It’s like jumping into an MMORPG and using a bot to play the whole game. Sure, you end up with a max-level character, but you have no idea how to play, no idea how to build a character, and you don’t get any of the references anyone else is making.


  • This is a very output-driven perspective. Another comment put it well, but essentially when we set up our curriculum we aren’t just trying to get you to produce the one or two assignments that the AI could generate - we want you to go through the motions and internalize secondary skills. We’ve set up a four year curriculum for you, and the kinds of skills you need to practice evolve over that curriculum.

    This is exactly the perspective I’m trying to get at with my comment - if you go to school to get a certification to get a job and don’t care at all about the learning, of course it’s nonsense to “waste your time” on an assignment that ChatGPT can generate for you. But if you’re there to learn and develop mastery, the additional skills you would have picked up by doing the hard thing - maybe with a chat AI supporting you in a productive way - are really where the learning is.

    If a 5-year-old can generate a university-level essay on the implications of thermodynamics for quantum processing using AI, that’s fun, but does the 5-year-old even know whether that’s a coherent thesis? Does it imply anything about their understanding of these fields? Are they able to connect this information to other places?

    Learning is an intrinsic task that’s been turned into a commodity. Get a degree to show you can generate that thing your future boss wants you to generate; knowing and understanding is secondary. This is the fear with generative AI - further losing sight of the fact that we learn through friction and that the final output isn’t everything. Note that this is coming from a professor who wants to mostly do away with grades, but who recognizes that larger systemic changes need to happen.


  • 100%, and this is really my main point. Because it should be hard and tedious, a student who doesn’t really want to learn - or doesn’t have trust in their education - will use the AI to bypass those tedious bits, rather than working through the tedious, auxiliary skills they’re expected to pick up and using the AI as a personal tutor, not as a replacement for those skills.

    So often students are concerned with getting a final grade, a final result, and think that was the point - thus, “If ChatGPT can just give me the answer, what was the point?” But no, there were a bunch of skills along the way that are part of the scaffolding, and you’ve bypassed them through improper use of the available tools. For example, in some of our programming classes we intentionally make you use worse tools early, to build a fundamental understanding of how the language’s ergonomics evolved, or to understand the underlying processes that power the more advanced, but easier to use, concepts. It helps you generalize later: you don’t just learn how to solve this problem in this programming language, you learn how to solve the problem in a messy way that translates to many languages before you learn this language’s powerful tools. As a student, you may get upset that you’re using something tedious or out of date, but as a mentor I know it’s a beneficial step in your learning career.

    Maybe it would help to teach students about learning early, and how learning works.


  • Education has a fundamental incentive problem. I want to embrace AI in my classroom. I’ve been studying ways of using AI for personalized education since I was in grade school. I wanted personalized education, the ability to learn off of any tangent I wanted, to have tools to help me discover what I don’t know so I could go learn it.

    The problem is, I’m the minority. Many of my students don’t want to be there. They want a job in the field, but don’t want to do the work. Your required course isn’t important to them, because they aren’t instructional designers who recognize that this mandatory tangent is scaffolding the next four years of their degree. They have a scholarship, and can’t afford to fail your assignment just to get feedback. They have too many courses, and have to budget which ones to ignore. The university has a duty to validate that those passing its courses have met a standard and can reproduce their knowledge outside of a classroom environment. And they’re on a strict timeline - every year they don’t certify their knowledge to satisfaction is another year of tuition and random other fees to pay.

    If students were going to university to learn, or going to high school to learn, instead of being forced there by societal pressures - if they were allowed to learn at their own pace without fear of financial ruin - if they were allowed to explore the topics they love instead of the topics that are financially sound - then there would be no issue with any of these tools. But the truth is much bleaker.

    Great students are using these tools in astounding ways to learn, to grow, to explore. Other students - not bad ones necessarily, but ones whose pressures make education motivated purely by extrinsic factors rather than intrinsic ones - have a perfect crutch available to accidentally bypass the necessary steps of learning. Because learning can be hard, and tedious, and expensive, and if you don’t love it, you’ll take the path of least resistance.

    In game design, we talk about not giving the player the tools to optimize the fun away. I love the new wave of AI - I’ve been waiting for this level of natural language processing and generation capability for a very long time - but these are tools that let students optimize the learning away. We need to reframe learning and education. We need to bring learning front and center instead of certification. Employers need to recognize this, universities need to recognize this, and high schools and students and parents need to recognize this.


  • Hmm… Nothing off the top of my head right now. I checked out the Wikipedia page for Deep Learning and it’s not bad, but there’s quite a bit of technical info and jumping around the timeline, though its history section does go all the way back to the 1920s, which makes for good jumping-off points. Most of what I know came from grad school and from researching creative AI around 2015-2019, plus being a bit obsessed with it growing up, before and during my undergrad.

    If I were to pitch some key notes: the page details lots of the cool networks that dominated from the 60s through the 2000s, but it’s worth noting that there were lots of competing models besides neural nets at the time. Then, around 2011, two things happened at right about the same time: the ReLU (a simple activation that helps the learning signal survive through many layers, allowing more complexity), which, while established in the 60s, only swept deep learning in 2011; and, majorly, Nvidia’s cheap graphics cards with parallel processing and CUDA, which were found to massively boost the efficiency of training and running networks.
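    As a toy back-of-the-envelope sketch of why that ReLU point matters (this is just the intuition, not a real network; the input and layer count are arbitrary): a sigmoid’s slope is at most 0.25, so chaining many layers multiplies the learning signal toward zero, while the ReLU’s slope is exactly 1 for positive inputs, so the signal survives:

    ```python
    import math

    # Toy sketch of the vanishing-signal intuition behind ReLU (not a real network).

    def sigmoid_grad(x: float) -> float:
        s = 1.0 / (1.0 + math.exp(-x))
        return s * (1.0 - s)          # never larger than 0.25

    def relu_grad(x: float) -> float:
        return 1.0 if x > 0 else 0.0  # exactly 1 for any positive input

    x, layers = 1.0, 30
    sig, rel = 1.0, 1.0
    for _ in range(layers):
        sig *= sigmoid_grad(x)  # shrinks every layer
        rel *= relu_grad(x)     # passes through untouched

    print(f"signal after {layers} sigmoid layers: {sig:.2e}")  # vanishingly small
    print(f"signal after {layers} ReLU layers:    {rel:.2e}")  # 1.00e+00, preserved
    ```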

    I found a few links with some cool perspectives: Nvidia post with some technical details

    Solid and simplified timeline with lots of great details

    It does exclude a few of the big popular-culture events, like Watson on Jeopardy in 2011. To me it’s fascinating because Watson’s architecture was an absolute mess by today’s standards - over 100 different algorithms working in conjunction, mixing tons of techniques together to get a pretty specifically tuned question-and-answer machine. It took 2,880 CPU cores to run, and it could win about 70% of the time at Jeopardy. Compare that to today’s GPT models: ChatGPT requires massively more processing power to run, but the structure is otherwise elegant, and I can run awfully competent ones on a $400 graphics card. I was actually in a gap year, waiting to start my undergrad in AI and robotics, during the Watson craze, so seeing it and then seeing the 2012 big bang was wild.
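    If anyone wants to poke at the “competent on a $400 graphics card” point themselves, here’s a minimal sketch using the small, openly released GPT-2 model via the Hugging Face transformers library (my tooling assumption - it’s not what powers ChatGPT, just a taste of the same architecture family):

    ```python
    # Minimal sketch: run a small GPT-class model locally.
    # Assumes `pip install transformers torch`; "gpt2" is the openly released
    # 124M-parameter model, which runs fine on a modest GPU or even a CPU.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "In 2011, Watson needed thousands of CPU cores to play Jeopardy; today",
        max_new_tokens=40,   # how much text to generate beyond the prompt
        do_sample=True,      # sample rather than always taking the top token
    )
    print(result[0]["generated_text"])
    ```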




  • For me, it’s the next major milestone in what’s been a roughly decade-long trend of research, and the groundbreaking part is how rapidly it accelerated. We saw a similar boom in 2012-2018, and now it’s just accelerating.

    Before 2011/2012, if your network was too deep - too many layers - it would just break down and give pretty random results; it couldn’t learn, so networks had to perform relatively simple tasks. Then a few techniques were developed that enabled deep learning, the ability to really stretch the number of patterns a network could learn if given enough data. Suddenly, things that were jokes in computer science became reality. The move to deep networks took image recognition to roughly 95% accuracy, for example: about a year to halve the error rate, and about five years to go from roughly 35-40% incorrect classifications down to around 5%. That’s the same stuff that powered all the hype around AI beating Go champions and professional Starcraft players.

    The Transformer (the T in GPT) came out in 2017, around the peak of the deep learning boom. Two years later, GPT-2 was released, and while it’s funny to look back on now, it practically revolutionized coherence across long sequences and showed that throwing lots of data at this architecture didn’t break it, like it had broken previous ones. Then they kept throwing more and more and more data at it, and it kept going and improving. With GPT-3 about a year later, like in 2012, we saw an immediate spike in previously impossible challenges being destroyed, and the gains seemingly haven’t degraded with more data yet. Even if that’s unsustainable, it’s the same kind of puzzle piece that pushed deep learning into the forefront in 2012, and the same concepts are being applied to other domains like image generation, which has also seen massive boosts thanks in part to the 2017 research.

    Anyways, small rant, but yeah - its hype lies in its historical context, for me. The chatbot is a flashy demonstration of the incredible underlying advancements in data processing made over the past decade, and if working out patterns from massive quantities of data is a pointless endeavour, I have sad news for all folks with brains.