There’s a certain category of often-repeated sayings that, even though you’ve heard them all your life, only truly make sense many years after you first heard them. “Life’s not fair”, or “money doesn’t buy happiness”, or, hey, meta: “you’ll understand when you’re older”. As a six-year-old, you say “totally” when someone says “money doesn’t buy happiness”, but only once you have or earn money do you start to see what the phrase is getting at.
One of them, for me, was “don’t lie to yourself.” As a six-year-old, I went “roger that, why would I?” and shrugged it off. It took me about twenty-five more years and starting to lift weights to understand what it means. Because when you have 150kg on your back and someone tells you to do as many reps as you can, it’s very tempting to tell yourself “that’s it, two reps, can’t do more.” It’s very easy to tell yourself you did something with good form. It’s very comfortable to believe that you did as much as you could. It’s convenient to say “no, I’m not ready for this weight today, can’t do it, let’s drop the weight down” when, in fact, you were ready, but scared.
So, yes, even though this will cause many eyes to be rolled, I recommend lifting, if for no other reason than that there’s nothing but you, the bar, the weights, and what you tell yourself, and that combination will make you grow, not just in size.
But the reason I bring this up: two days ago, when deadlifting, I did lie to myself, believing that I was in good form (even though I hadn’t slept that well) and that I could do the three reps, easily (even though the first few sets weren’t smooth). Then, on the second rep, I felt the tiniest of all twinges in my lower back and, having felt it before, immediately knew that I had lied to myself and that it was over. I woke up yesterday unable to stand up straight, because my whole back had locked up.
So, take my recommendation with a grain of salt.
My favorite piece of writing this week: Thank you for being annoying, by Adam Mastroianni. There are too many things to quote here. So, please, read the whole thing. It’s very good.
And this is the most impressive piece of meta-writing I’ve come across in a long time: Wikipedia’s Signs of AI writing. It’s fascinating to read through the examples and recognize some you’ve already clearly identified as signs of AI, but then also find new ones and go “ahh, yes, that one!” In my case, it was the negative parallelisms (“it’s not just about …, it’s …”) that have made me mad for a long time but only now have a name.
I finally watched the General Magic movie and it was great. Highly recommend it.
Wonderful piece by Michael Lynch: The Software Essays that Shaped [Him]. Great list. Some of them I hadn’t read, and this one, Choices by Joel Spolsky, made me think all kinds of thoughts about software developers and designers and product managers.
A good pairing would be to read the Choices article first, then this one: You Want Technology With Warts. And then ponder the difference between users and software developers as users.
I learned about Progress Quest, a game that apparently only consists of the player watching fancy progress bars. Yup, that’s it, but, still: try it. And then read the FAQ. And then read the Manual: “Progress Quest belongs to a new breed of ‘fire and forget’ RPG’s. There is no need to interact with Progress Quest at all; it will make progress with you or without you.”
Incredible blog post that immediately went on my ever-growing list of “things to work through once I do that sabbatical where I go into a shed in the woods for eight months and am magically not distracted by anything”: Inside NVIDIA GPUs: Anatomy of high performance matmul kernel.
“Some people have called into question whether AI coding agents are actually increasing developer productivity. It even led one person to ask, ‘Where’s the shovelware?’ in a widely circulated blog post. [...] Because I keep running into people in apparent disbelief that coding agents can do Real Programming, I decided to wear it loud and proud by creating a GitHub badge for all my projects that wouldn’t have existed without coding agents as Certified Shovelware.”
Domenic Denicola is retiring, which not only makes me feel old (I’ve been following him for well over a decade) but also poor (I think of Dune every time I see an ex-Googler retire after a decade: “Eighty years of owning the spice fields. Can you imagine the wealth?”). But it also left me amazed, yet again, that the Internet, and the Web, are a thing at all: “Just like I was thrilled to learn after university that people will pay well for something as fun as programming, I’m amazed that we’ve managed to harness the will of the market and large corporate budgets to nurture an artifact as impressive as the web.”
I didn’t know who Scott Aaronson was before this tweet about one of his blog posts made the rounds with the caption: “Yet more evidence that a pretty major shift is happening, this time by Scott Aaronson.” Here’s what Scott wrote: “Instead, though, I simply asked GPT5-Thinking. After five minutes, it gave me something confident, plausible-looking, and (I could tell) wrong. But rather than laughing at the silly AI like a skeptic might do, I told GPT5 how I knew it was wrong. It thought some more, apologized, and tried again, and gave me something better. So it went for a few iterations, much like interacting with a grad student or colleague. Within a half hour, it had suggested to look at the function […] And this … worked, as we could easily check ourselves with no AI assistance. And I mean, maybe GPT5 had seen this or a similar construction somewhere in its training data. But there’s not the slightest doubt that, if a student had given it to me, I would’ve called it clever.” Wild times.
And here’s Terence Tao, who I do know: “Here, the AI tool use was a significant time saver - doing the same task unassisted would likely have required multiple hours of manual code and debugging (the AI was able to use the provided context to spot several mathematical mistakes in my requests, and fix them before generating code). Indeed I would have been very unlikely to even attempt this numerical search without AI assistance (and would have sought a theoretical asymptotic analysis instead).” One thing that kept coming up in conversations I’ve had over the last few weeks is that this is just the start.
But then here’s some cold water: AI isn’t replacing radiologists. Cold, and interesting, and nuanced water. It’s a very good article that shows how you can’t just inject “intelligence” into the real world.
And more cold water: “’Doom loops’, when we go round and round in circles trying to get an LLM, or a bunch of different LLMs, to fix a problem that it just doesn’t seem to be able to, are an everyday experience using this technology. Anyone claiming it doesn’t happen to them has either been extremely lucky, or is fibbing. It’s pretty much guaranteed that there will be many times when we have to edit the code ourselves. The ‘comprehension debt’ is the extra time it’s going to take us to understand it first. And we’re sitting on a rapidly growing mountain of it.”
Even if you don’t know any of the words involved here (including Minecraft, including redstone), you will be impressed, I guarantee it: “I built ChatGPT with Minecraft redstone!”
“Is 90% of code going to be written by AI? I don’t know. What I do know is that, for me, on this project, the answer is already yes.”
As someone who once tried to make it as a musician, I’ve read my fair share about the music industry, and streaming, and the collapse of this and the rise of that in the last twenty years, and these numbers were very surprising: “In 2011, the same year Spotify debuted in U.S. markets, the music recording industry’s revenue was around $15 billion—40% less than the $24 billion in sales it logged ten years earlier. Fast forward to 2025, thanks to streamers like Spotify, annual recording sales are near $30 billion, with streaming accounting for $20 billion of the total. Spotify pays 70% of its revenue to musicians and rights holders. In 2024, it paid out $10 billion.” I wonder when the narrative will shift, though.
Marc Brooker: “The current generation of LLMs can’t do this type of reasoning alone, but systems composed of LLMs and other tools (SMT solvers in this case) can do it. The hype and buzz around LLMs makes it easy to forget this point, but it’s a critical one. LLMs are more powerful, more dependable, more efficient, and more flexible when deployed as a component of a carefully designed system.”
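To make that “component of a carefully designed system” point concrete, here’s a minimal sketch of the pattern Brooker is gesturing at: an untrusted proposer paired with a trusted checker. This is my illustration, not his code; it assumes Python with the z3-solver package, and ask_llm is a hypothetical placeholder, not a real API.

```python
# Minimal sketch of the "LLM proposes, solver verifies" composition.
# Requires: pip install z3-solver
from z3 import Int, Solver, sat

def ask_llm(prompt: str) -> dict:
    """Hypothetical stand-in for a real model call. Hardwired to return
    a wrong first guess so the fallback path below actually runs."""
    return {"x": 3, "y": 4}

def check(candidate: dict) -> bool:
    """Trusted step: verify the candidate against hard constraints with Z3.
    Toy problem: find integers x, y with x + y == 10 and x * y == 21."""
    x, y = Int("x"), Int("y")
    s = Solver()
    s.add(x == candidate["x"], y == candidate["y"])
    s.add(x + y == 10, x * y == 21)
    return s.check() == sat

candidate = ask_llm("Find integers x, y with x + y = 10 and x * y = 21.")
if not check(candidate):
    # In a real system you'd feed the failure back to the model and retry;
    # here we just let Z3 solve the constraints outright.
    x, y = Int("x"), Int("y")
    s = Solver()
    s.add(x + y == 10, x * y == 21)
    assert s.check() == sat
    m = s.model()
    candidate = {"x": m[x].as_long(), "y": m[y].as_long()}

print(candidate)  # {'x': 3, 'y': 7} or {'x': 7, 'y': 3}
```

The shape is the point: the LLM never gets the final word, because a deterministic tool sits between its output and the answer.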