Joy & Curiosity #62
Interesting & joyful things from the previous week
Here we go again, packing for a trip. I’m flying to San Francisco for the week. Or as my dad (who’s never been to the U.S., and nobody else I’ve spoken to in the last decade) calls it: Frisco.
And this time? Hand luggage only. But I’m bringing the black shirt.
Incredible: Dithering, Part 1. Incredible in the literal sense: after the tenth or so illustration you start to think, no way, no way they built all of this. But yes they did, they did. What a gem! And exactly like a Bartosz Ciechanowski post, this also made me wonder: imagine if all learning could look like this.
Yours truly wrote about the context window and context management in Amp. I had a lot of fun with this: making the diagrams in Monodraw was fun, creating the orbs with Midjourney and GPT-5 was fun, using this one model that I’m falling in love with to add the orbs was fun, building a script to invert the colors of the diagrams was fun. Hope you have fun reading it.
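The color-inverting script itself isn’t shown in the post, but the core of such a thing is tiny. A minimal sketch in Python, assuming the diagrams are plain RGB pixels (the function name is mine, not from the post):

```python
def invert_rgb(pixel):
    """Invert a single (r, g, b) pixel: white becomes black and vice versa."""
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)

# A white pixel flips to black; a light color flips to a dark one.
print(invert_rgb((255, 255, 255)))  # -> (0, 0, 0)
print(invert_rgb((200, 230, 240)))  # -> (55, 25, 15)
```

Map that over every pixel of an exported diagram and you have a dark-mode version for free.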
“People wouldn’t watch the robot Olympics, for example. People don’t watch the chess computer world championships. They watch the chess world championships because they’re interested in other people going through the journey of life and wrestling with the same things they’re wrestling with.” This is from this interview with comedy writer Madeleine Brettingham. And here’s one of Drew Breunig’s thoughts on the interview: “I had never really thought about what virtuosity meant as a concept, but the way it was discussed in this forum has since solidified the idea of it in my brain: virtuosity can only be achieved when the audience can perceive the risks being taken by the performer.” Both are worth reading.
Yes: “The fundamental number that has defined software development is a simple one: $150/hour. $150/hour is a reasonable approximation of the cost of a professional senior software developer in the United States at this time. That number is large, and the reasons for it are many, but fundamentally it is a simple question of supply and demand. [...] Virtually every aspect of how software development is done has evolved around that $150/hour number. With developers being rare and expensive, every line of code has to justify a very high cost. Decisions around how software should be designed, built, and tested are made not with respect to how to make the software the best it can be, but rather to optimize around that grinding $150/hour number. [...] So what happens when that brutal economics changes? Five months ago, it did, with the initial release of agentic AI for software development. While software developers have to do many more things at their jobs than coding, that $150/hour was justified purely by the fact that only software developers could create code. Worse they could only create it through essentially handcrafted processes that were only some constant factor better than scribing it into punch cards. As of five months ago, that justification became false.” This is from Software Development in the Time of Strange New Angels. Read the whole thing. It’s very good. If you scoffed at that quote: yes, read it.
Nano Banana can be prompt engineered for extremely nuanced AI image generation. As someone who only recently started diving into image generation and now sends images back and forth between Midjourney and ChatGPT: this was fantastic! But even as someone who’s generated a few images, the kicker at the end, when the model takes HTML and produces a render of the page… Well, that’s something else entirely. Wow.
I’ve heard Tyler Cowen say we “should write for the AIs” before, but I could never really make sense of it — I’m writing on the Internet, aren’t I? Isn’t that writing for the AIs? Should I address them? Say hello and thank you? This article here — Baby Shoggoth Is Listening — digs into the idea. Gwern is quoted too. But… I don’t know, I don’t know. I still don’t think I know what it means. But it’s an interesting thought, so here we are.
“Personality basins are a mental model that I use to reason about humans within their environment: from modelling why people are the way they are, how they change over time, how mental illnesses and addiction function along with how we should look for their cures, and how the attention economy optimizes itself to consume all of your free time.”
“Google are killing XSLT!” is the headline at xslt.rip and… Look, I didn’t know what XSLT was, and now that I do I’m not sure whether it’s a bad thing to kill? But what I am sure about is this: that website is amazing. Click on that link.
Here’s another amazing website with a URL to match: how-did-i-get-here.net. Very well done. And the writing, too: “The Internet is often described as an open, almost anarchistic network connecting computers, some owned by people like you and me, and some owned by companies. In reality, the Internet is a network of corporation-owned networks, access and control to which is governed by financial transactions and dripping with bureaucracy.” And now I’m listening to this again.
Tom MacWright, one of the co-founders and the CTO of val.town, wrote this honest, direct, unfluffy, can’t-believe-how-honest-actually retrospective on Val Town 2023-2025. “One thing I’ve thought for a long time is that people building startups are building complicated machines. They carry out a bunch of functions, maybe they proofread your documents or produce widgets, or whatever, but the machine also has a button on it that says ‘make money.’ And everything kind of relates to that button as you’re building it, but you don’t really press it.”
I really, really, really wanted to scoff at curated.supply and say something like “who the hell puts a Porsche 911 and a Rolex and a freaking tea kettle on the same page?” but then I got sucked in and now I want to kind of buy this orb lamp.
“If you are having a problem with some code and seeking help, preparing a Short, Self Contained, Correct Example (SSCCE) is very useful. But what is an SSCCE?” If you haven’t worked on a popular open-source project, your guess as to how many people struggle with producing a proper bug report is likely off, very off. I had always assumed everyone knows what a good ticket looks like — until I worked on Zed. Now, if someone submits a bug report with an SSCCE, I treat it as if I had found a gold coin in the pocket of my jacket.
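To make it concrete, here’s a hypothetical example (mine, not from the linked article) of what an SSCCE could look like in Python: a few lines that run on their own and demonstrate exactly one behavior.

```python
# Hypothetical SSCCE: "sorting a list that contains None raises TypeError."
# Short: a handful of lines. Self-contained: no project code, no fixtures.
# Correct: running this file reproduces the exact behavior being reported.

data = [3, None, 1]
error = None

try:
    data.sort()
except TypeError as e:
    error = e

print("reproduced:", error)
```

The point is that a maintainer can run the file as-is and see the failure, without cloning a repo, installing dependencies, or setting up half your environment first.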
When Your Hash Becomes a String: Hunting Ruby’s Million-to-One Memory Bug. This was great. I’m fascinated that they even managed to reproduce it. I guess at some point you’d run into it, but, wow, someone got lucky there in a very unlucky situation.
I always love listening to comedians talk shop and this one was great: Louis C.K. on a podcast with David Spade and Dana Carvey.
This post about lazygit was very interesting. I’ve not really used it, except for starting it a handful of times, but the section “What’s amazing in lazygit?” is interesting because, yes, it’s about lazygit, but it’s also about TUIs and terminal programs in general and right now a lot of coding agents are in the terminal and… well, it’s interesting, isn’t it?
Anthropic is reporting that they have been “disrupting the first reported AI-orchestrated cyber espionage campaign” and while, as you know, I love to read “state-sponsored group” in connection with cyber attacks, this one was… strange. Take these two paragraphs: “At this point they had to convince Claude—which is extensively trained to avoid harmful behaviors—to engage in the attack. They did so by jailbreaking it, effectively tricking it to bypass its guardrails. They broke down their attacks into small, seemingly innocent tasks that Claude would execute without being provided the full context of their malicious purpose. They also told Claude that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing. The attackers then initiated the second phase of the attack, which involved Claude Code inspecting the target organization’s systems and infrastructure and spotting the highest-value databases. Claude was able to perform this reconnaissance in a fraction of the time it would’ve taken a team of human hackers. It then reported back to the human operators with a summary of its findings.” Now, tell me, why did they put this sentence in: Claude was able to perform this reconnaissance in a fraction of the time it would’ve taken a team of human hackers. Are you reporting on an attack that you averted, while telling us that your “extensively trained” model has been jailbroken, and then, kind of, brag? Is this a security report, or an advertisement? I was asking myself that until I made it to this paragraph: “Overall, the threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically (perhaps 4-6 critical decision points per hacking campaign). The sheer amount of work performed by the AI would have taken vast amounts of time for a human team. 
At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match.”
Jason Bateman talking to Marc Maron about being a director: “Just exercise taste. Just sit there and watch the results of other people’s work and say yes to this, no to that, a little more of this, a little less of that. It’s the one person on the set who doesn’t have a job. So you don’t actually need to do anything. I’m not suggesting that’s the right way to be a director but I worked with a million who worked that way. It’s very common. You need not be overwhelmed by ‘I gotta do a bunch of shit.’ Walk before you run. So just sit there and be the arbiter of taste.”
“Yes: the Referendum gets unattractively self-righteous and judgmental. Quite a lot of what passes itself off as a dialogue about our society consists of people trying to justify their own choices as the only right or natural ones by denouncing others’ as selfish or pathological or wrong. So it’s easy to overlook that hidden beneath all this smug certainty is a poignant insecurity, and the naked 3 A.M. terror of regret. The problem is, we only get one chance at this, with no do-overs. Life is, in effect, a non-repeatable experiment with no control.”
I’ve had thoughts similar to those expressed here — things have changed dramatically in the last two years, most people apparently haven’t realized it yet, and things will change even more — but, man, was I surprised that it’s Will Larson writing this: “In the 2010s, the morality tale was that it was all about empowering engineers as a fundamental good. Sure, I can get excited for that, but I don’t really believe that narrative: it happened because hiring was competitive. In the 2020s, the morality tale is that bureaucratic middle management have made organizations stale and inefficient. The lack of experts has crippled organizational efficiency. Once again, I can get behind that–there’s truth here–but the much larger drivers aren’t about morality, it’s about ZIRP-ending and optimism about productivity gains from AI tooling.” Highly, highly recommend reading this. If you haven’t noticed the shift yet, I hope this gives you a glimpse.


