Back in Germany, after two weeks in the Bay Area with my wife and kids. What a trip! Here are the important bits:
Apparently I made a huge mistake by eating a corndog without dipping it into something. (Most people say ketchup is the way to go; others, controversially, say mustard.)
Mill Valley and Marin are brain-breakingly (“such a place exists?!”) beautiful.
Peanut Butter Pretzels are still amazing. Unrelated: I’m dieting now.
When I was in SF in February I bought a new shirt. An overshirt, I think you call it. The first day I wore it, I received two compliments from male colleagues (“that’s a nice shirt”), neither of whom had ever complimented or even commented on anything I’ve worn before. And, generally speaking, male colleagues commenting on your fashion choices in a positive way is incredibly rare, isn’t it? “wow,” I wrote to my wife, knowing that she’d probably be sitting at home, hands folded, waiting eagerly for News on the Shirt, “I got two compliments on my new shirt. it’s a hit.” And then, guess what, when I travelled back to Germany, I left the shirt in the hotel closet. Big hit that it was, though, I bought it again and brought it to SF again. And… now guess what. Yes. Exactly. SOMEWHERE between Twin Peaks and Noe Valley and our Airbnb I must’ve lost it again.
Speaking of loss: on the last day in SF, during the last Uber ride, my Bluetooth headphones must’ve slipped out of my pocket and I left them in the car.
Stanford campus made me want to be 21 and a student there.
Muir Woods is beautiful. The Presidio still feels magical to me.
I really like San Francisco.
Having coffee with really smart people who think about software as much as you do and know every reference you bring up is beautiful. (Admittedly, I also enjoy having coffee with people who think I “do something with computers.”)
And here, shorter than usual, because I was travelling and half on vacation: some links for you!
New addition to my personal Now this is what I call engineering collection: this GitHub post by Michael Knyszek and Austin Clements on the new “green tea garbage collector” in Go. Technical writing of the highest quality about very proper software engineering. Imagine if all technical papers were written like this. “The core idea behind the new parallel marking algorithm is simple. Instead of scanning individual objects, the garbage collector scans memory in much larger, contiguous blocks. The shared work queue tracks these coarse blocks instead of individual objects, and the individual objects waiting to be scanned in a block are tracked in that block itself. The core hypothesis is that while a block waits on the queue to be scanned, it will accumulate more objects to be scanned within that block, such that when a block does get dequeued, it’s likely that scanning will be able to scan more than one object in that block. This, in turn, improves locality of memory access, in addition to better amortizing per-scan costs.”
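To make the idea concrete for myself, here is a toy sketch (mine, not the actual runtime code, with all names made up): a shared queue holds coarse blocks rather than objects, each block collects the objects waiting to be scanned inside it, and a block only gets enqueued once.

```go
// Toy illustration of the "queue blocks, not objects" idea described above.
// Not the real Go runtime code; just a sketch of the data flow.
package main

import "fmt"

type block struct {
	id      int
	queued  bool  // is this block already on the shared queue?
	pending []int // object offsets waiting to be scanned within this block
}

type collector struct {
	queue []*block // shared work queue of coarse blocks, not individual objects
}

// markObject records an object for scanning. The block is enqueued only the
// first time; later objects simply accumulate inside the block while it waits.
func (c *collector) markObject(b *block, offset int) {
	b.pending = append(b.pending, offset)
	if !b.queued {
		b.queued = true
		c.queue = append(c.queue, b)
	}
}

// drain dequeues one block at a time and scans every object that piled up in
// it in a single pass, which is where the locality and amortization win comes from.
func (c *collector) drain() {
	for len(c.queue) > 0 {
		b := c.queue[0]
		c.queue = c.queue[1:]
		fmt.Printf("scanning block %d: %d objects in one pass\n", b.id, len(b.pending))
		b.pending = b.pending[:0]
		b.queued = false
	}
}

func main() {
	c := &collector{}
	b := &block{id: 1}
	c.markObject(b, 0x10)
	c.markObject(b, 0x40) // same block: no extra queue entry
	c.markObject(&block{id: 2}, 0x08)
	c.drain()
}
```

The payoff is in drain: by the time a block is dequeued, several objects have usually accumulated in it, so one pass scans them all.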
So, so, so good: What if you could search every visible word on New York City’s streets? One of the handful of pages where I’m delighted that it makes use of scrolling. And then there are the beautiful visualizations! And, of course, the content!
Told you before: I’m a sucker for lists like this one, 50 things I know. It has entries on writing (“Writing defensively is a loser’s game. It lets people who won’t like your writing anyway win in advance.”), on life (“It’s possible for someone to have a motivational system very different from your own and still be a force for good in the world. I’m turned off when people are motivated primarily by prestige, but many great works have been produced at the altar of social status.”), and on self-knowledge (“the traits that make you exceptional are the very same traits that show up in your neuroses and limitations”); it’s a bit cheesy in some lines, funny in others. Good read.
Mitchell and the Ghostty contributors (great band name) rewrote the Ghostty GTK application. Interesting, technical read, but this is, to me, the most interesting point: “This is now my 5th time writing the GUI part of Ghostty from scratch: once with GLFW, once on macOS with SwiftUI, then on macOS with AppKit plus SwiftUI, once on Linux with GTK procedurally, and now on Linux with GTK and the full GObject type system. Each time, I've learned something new and valuable, and I've carried that experience into each iteration (and across platforms). Even this time, I've learned some new tricks that I plan on taking back over to macOS.”
Somehow ended up reading this François Chollet article from 2023, about how Chollet thinks about prompt engineering, and man, I don’t know whether I read that article two years ago or not, but it’s how I think about it, too, except my thinking is less sophisticated, of course: “The first difference is that a LLM is a continuous, interpolative kind of database. Instead of being stored as a set of discrete entries, your data is stored as a vector space — a curve. You can move around on the curve (it’s semantically continuous, as we discussed) to explore nearby, related points. And you can interpolate on the curve between different data points to find their in-between. This means that you can retrieve from your database a lot more than you put into it — though not all of it is going to be accurate or even meaningful. Interpolation can lead to generalization, but it can also lead to hallucinations.”
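The “interpolate on the curve” part sounds abstract, but it maps onto something very concrete: take two embedding vectors and walk the straight line between them. A toy sketch (the vectors here are made up; a real system would get them from a model):

```go
// Toy illustration of the "interpolative database" framing: take two made-up
// embedding vectors and linearly interpolate between them to explore the in-between.
package main

import "fmt"

// lerp returns a + t*(b-a) componentwise, i.e. a point on the straight line
// between the two embedding vectors.
func lerp(a, b []float64, t float64) []float64 {
	out := make([]float64, len(a))
	for i := range a {
		out[i] = a[i] + t*(b[i]-a[i])
	}
	return out
}

func main() {
	formal := []float64{0.9, 0.1, 0.3} // pretend embedding of a formal tone
	casual := []float64{0.2, 0.8, 0.5} // pretend embedding of a casual tone

	// Each step is a point that was never stored explicitly but can still be
	// decoded into something: sometimes a useful generalization, sometimes a hallucination.
	for _, t := range []float64{0.0, 0.5, 1.0} {
		fmt.Println(t, lerp(formal, casual, t))
	}
}
```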
Really enjoyed Robin Sloan’s musings here, in this part of his newsletter, on whether “with the arrival of the AI language models, we have entered a new era of technological change”. I’ll spoil it for you (he doesn’t think so), but that also doesn’t make it less interesting.
Derek Thompson on how “AI Conquered the US Economy”.
This post, stating that “vibe coding is the fast fashion industry of software engineering”, was very interesting. I don’t agree with the, say, moral indictment here, and I think it missed the most interesting question that follows from this realization: “As code production cheapens, we should examine parallels in other industries and my mind automatically goes to fast fashion: cheaper production led to affordable but low-quality clothes that are cheaper to discard than reuse or even repair.” My mind didn’t jump to the author’s next sentence (“Similarly, I foresee a future of cheap software flooding the market, polluting ecosystems, and harming users.”), but rather wondered, once again: if we can now have code that’s “affordable but low-quality” and “cheaper to discard than reuse or even repair” — what can we build with that? how does it change what we build? Because I don’t think we should keep building the way we have.
My colleague Matt sent me a link to one of his blog posts last week, while we were talking about trying something new: a culture of experimentation. It’s a great post that will stick with me (“In metal working there is a process called annealing where cycles of heating and cooling are used to modify the chemical structure of the metal to achieve desired properties. […] What does this exploration of annealing have to do with running a software team? The connection is that a software team may be stuck in a local optimum.”) and, man, I love it when someone pulls out a URL in a discussion.
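If you want the annealing metaphor in runnable form, here is a tiny simulated-annealing loop (the textbook algorithm with a made-up cost function, nothing from Matt’s post): early on, the high “temperature” lets you accept worse states, which is exactly what gets you out of a shallow local optimum before things cool down and settle.

```go
// Toy simulated annealing, just to make the metaphor concrete. The cost
// function is invented: it has a shallow valley near x = 2 and a deeper one
// near x = -3. Starting in the shallow one, the early high temperature lets
// the search accept worse states and hop over the barrier.
package main

import (
	"fmt"
	"math"
	"math/rand"
)

func cost(x float64) float64 {
	return math.Pow(x-2, 2)*math.Pow(x+3, 2)/10 + x
}

func main() {
	x := 2.0    // start stuck in the shallow valley
	temp := 5.0 // initial "heat"
	for i := 0; i < 5000; i++ {
		candidate := x + rand.NormFloat64()
		delta := cost(candidate) - cost(x)
		// Always accept improvements; accept regressions with a probability
		// that shrinks as the temperature cools.
		if delta < 0 || rand.Float64() < math.Exp(-delta/temp) {
			x = candidate
		}
		temp *= 0.999
	}
	fmt.Printf("settled at x = %.2f (cost %.2f)\n", x, cost(x))
}
```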
And my ex-colleague at Zed, Conrad, wrote about why LLMs Can't Really Build Software and while I do think he has some good points (“When a person runs into a problem, they are able to temporarily stash the full context, focus on resolving the issue, and then pop their mental stack to get back to the problem in hand. […] We don't just keep adding more words to our context window, because it would drive us mad.”) I also can’t help but think: well, if it quacks like a duck…? Two weeks ago, I found a bug in Amp, a regression. I was jetlagged, sitting in the office, trying to get into the zone, hoping to find the bug. Right after sitting down, kinda as a too-early Hail Mary pass, I sent off Amp and asked it to figure out where the bug came from. I also told the agent to consult the oracle, knowing that I needed all the help I could get. While Amp ran, I also looked into the code, ignoring it, essentially racing it. At some point, after a few minutes, it looked like it was stuck. I decided to give it a few more minutes, also because I was pulled into an actual, real-life, in-person discussion in the office. So for the next 30-40 minutes or so, I sat there talking and kept side-eyeing Amp, trying to see whether it was still running (note: the oracle has been improved by now and shows better progress). Once I dove back into the code, I noticed that the oracle was still running (“wow, it must’ve been running for 45min now”), but I had a hunch that I was also close to finding the bug. So I dug in some more, added logging, reran the program, came up with three different hypotheses and threw them away again, then began to think “huh, the code is correct, it has to be something that’s wired up the wrong way” and — ding! — Amp was done. The oracle had come up with an answer. It took 61min and 108 inference calls, and when I read the first line of its hypothesis I knew it was right. The main agent changed one line, I tested it, and the bug was fixed. 61min, more than half of which I spent doing something else. Millions of tokens. Not cheap, but the bug was fixed. Now… maybe LLMs can’t really build software, like Conrad says, who knows, but I think the more interesting question is: does it matter?
I honestly don’t know what to make of it — how and whether it should influence frameworks — but I do think it’s important for developers to know that this exists, that this is an option, that there’s this way to do it too: The Best "Hello World" in Web Development.
Neat: is OpenBSD 10x faster than Linux? (Had a similar moment years ago.)
Happy 50th, J&C. I love this series and am hugely thankful for your continued meaningful content! To 50 more!