This week, there’s not much to say here. I’ve been writing code like a maniac.
All week long, many hours every day, consistently ignoring my watch telling me it’s time to move, staring like a dog without a brain when my wife walks in and says she just wanted to check in on me because I haven’t left the room in so many hours.
“It’s been a while since I’ve been this addicted to writing code,” I told someone in a DM. I’ve never felt more in the right place at the right time, I thought to myself.
I urge you to read this post by Simon Willison on how he uses LLMs to write code. This is the link I need you to click on this week. It’s a calm post, a practical post, a thorough post, a post that — fuck it, I need to use the big words now — contains experience and wisdom. That’s right. I first wanted to write here that the Context is king section is the one I’d underline. But then I realized I’d also like to underline the Tell them exactly what to do section. And the one after that and, you know what, actually the whole thing. So, here we are — go and read.
Last week I tried using ChatGPT in deep research mode for the first time. I asked it how many hours people at Bell Labs worked during its peak innovation years, when Unix and C were developed. It surfaced this personal memoir that ends with the following observation: “People who worked at Bell Labs wanted to be there – it was hard to keep them away. They were there at night, weekends, and even holidays.” Now here’s the interesting bit. A day or two later I came across this very interesting post that asks “How did places like Bell Labs know how to ask the right questions?” and contains some very interesting observations on the realities of research & engineering but also this bit: “He noted that to get ahead at Bell Labs, ‘you were supposed to work on more than you were asked to work on.’ Still a bit of a newcomer to Bell Labs, he was right on this point. Mervin Kelly used to often tell new hires at Labs, ‘You get paid for the seven and a half hours a day you put in here, but you get your raises and promotions on what you do in the other sixteen and a half hours.’” Serendipitous is the word, I think.
“The point I'm trying to make is that in classes, there is a ceiling on how well you can do. You get an A, or maybe an A+ if the professor does that sort of thing. […] When you are actually doing something in real life though, there is no ceiling.”
Patrick and Mariano’s book is finally available: WebAssembly from the Ground Up. I haven’t read it yet, but over the 2.5 years they worked on it I’ve seen bits and pieces — even got to review parts of a chapter! — and, man, the amount of care that went into it, consistently, over such a period of time? It’s impressive and I’m looking forward to digging into it.
Big, big compiler and language news this week: the TypeScript compiler is being rewritten in Go. A lot of people with a lot of opinions wrote a lot of comments the day this was announced, yet few of them managed to reach the thoughtfulness of Anders Hejlsberg’s post in the GitHub discussion. That’s engineering. I did like this comment though: “I looked at the repo and the story seems clear to me: 12 people rewrote the TypeScript compiler in 5 months, getting a 10x speed improvement, with immediate portability to many different platforms, while not having written much Go before in their lives (although they are excellent programmers). This is precisely the reason why Go was invented in the first place. ‘Why not Rust?’ should not be the first thing that comes to mind.”
The TypeScript compiler in Go also immediately nerd-sniped someone into making escape analysis in the Go compiler for that project faster by a factor of 5x. Now if this isn’t it, I don’t know what is.
Nelson Elhage wrote about the performance of the Python 3.14 tail-call interpreter and the post goes very well with my suggested pairing from last week on performance: “If you’d asked me, a month ago, to estimate the likelihood that an LLVM release caused a 10% performance regression in CPython and that no one noticed for five months, I’d have thought that a pretty unlikely state of affairs!”
After I posted about a “workflow” (these are airquotes) I sometimes use when debugging, with the intention being that I make fun of myself for being horribly inefficient because I don’t really know how to make the best use of debuggers, Rasmus said something wise: “Debugger is for when things go wrong, printf is for understanding how things behave”.
Maggie Appleton’s post is titled Why You Own an iPad and Still Can't Draw and yes, it’s about drawing, and yes, it mentions iPads, but it’s about much more than that, I’d say. It’s very good. “The Meat is the whole point of your illustration. What is your drawing about? What are you saying? Why does it matter?”
This post says it’s about Cursor and how to get a lot out of it, but I think most of the tips in there apply even if you don’t use Cursor. This prompt, for example: “Write tests first, then the code, then run the tests and update the code until tests pass.” Everybody can one-shot 300 lines of code if they never run the compiler or the tests. The real magic happens when your AI has tests to validate its code against.
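That prompt is just test-first development, applied to an AI. A minimal sketch of what the loop produces (the `slugify` function and its test are hypothetical examples, not from the post):

```python
# Test-first, the way the prompt asks the AI to work: the test below was
# written first (and failed), then the implementation was adjusted until
# it passed.

def slugify(title):
    # Implementation written only after the test existed.
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   everywhere ") == "spaces-everywhere"

test_slugify()  # passes silently once the implementation is correct
```

The point isn’t the code itself — it’s that the failing test gives the AI (or you) something concrete to iterate against.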
This is the hardest I’ve ever seen Gruber hit: “Concept videos are bullshit, and a sign of a company in disarray, if not crisis. The Apple that commissioned the futuristic ‘Knowledge Navigator’ concept video in 1987 was the Apple that was on a course to near-bankruptcy a decade later.” There are more lines like this one in the piece. Hell, the title alone: “Something Is Rotten in the State of Cupertino”.
Just added googly eyes to my macOS menu bar. I’m very glad I did — it has an “eyes size” slider in the settings! Highly recommend you also install it. Live a little. (I’m also now waiting to add a virtual pet to my macOS Dock.)
Always do Extra. Great post.
Remember when I wondered whether we’d see CONTEXT.md files popping up, becoming the robots.txt for AI tooling? Well, this week I found this: svelte.dev/llms-small.txt — how neat is that? Then someone pointed me to llmstxt.org. It’s happening.
It took me working in the Zed codebase to realize what an effect compile times can have on overall happiness. I knew about long compile times, of course, but that was theory. In practice, before Zed, I had never experienced what it’s like having to wait 45 seconds to try out a tiny visual change. It changes a lot of things: which changes you make and when, what you value in tooling, and so on. And all of that is to say: Zack’s piece here — “I spent 181 minutes waiting for the Zig compiler this week” — gives a glimpse of what it can be like. “This means I spend even more time waiting on the compiler, because I need to use it just to check if my code is correct. This issue is made worse by the fact that when Zig encounters a compiler error in a particular function scope, it will stop running semantic analysis on that scope and report it back to you. This means that in a function scope with multiple errors, you need to discover and fix each error one by one, waiting 90 seconds in between.”
Mary Rose Cook wrote another great post on developing software with AI: Explore, expand, exploit. “Learning to build software with AI feels completely different. It’s much closer to learning a new discipline. Certainly, the old way of programming is relevant. But all the power comes from the new techniques in this new field that doesn’t even really have a name.”
We were talking about how MCP and agents and AI might change developer tooling at large software companies (“why shouldn’t you be able to ask an AI how to roll back a deploy on your first day? And why can’t the agent do it for you, after also getting permissions for you?”). I started to wonder whether “custom AI setup with all tools wired up” won’t end up on a list like The Joel Test in the future. That in turn made me re-read Joel Spolsky’s post and, wow, published in 2000, it’s older than I remember and, wow, it’s mostly still valid, isn’t it?
It was only yesterday that I wrote some code that uses either Claude 3.7 or Gemini 2.0, depending on configuration. I tried to wrap both clients as much as possible, thinking that it sure would be nice if I didn’t have to worry about any API-specific things at all (the oldest of all programming dreams, isn’t it?). Then today I come across RubyLLM and just reading through the README brought up all those feelings that made me enjoy using Ruby many years ago. Nice example of what programming in Ruby can look like.
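The wrapper idea is simple enough to sketch. Here’s a minimal version in Python, with hypothetical `ClaudeClient` and `GeminiClient` classes standing in for the real SDKs — the actual Anthropic and Google APIs look nothing like this; only the shape of the abstraction is the point:

```python
from dataclasses import dataclass
from typing import Protocol


class ChatClient(Protocol):
    # The one interface the rest of the code is allowed to see.
    def complete(self, prompt: str) -> str: ...


@dataclass
class ClaudeClient:
    model: str = "claude-3-7"

    def complete(self, prompt: str) -> str:
        # A real implementation would call the provider's SDK here.
        return f"[{self.model}] {prompt}"


@dataclass
class GeminiClient:
    model: str = "gemini-2.0"

    def complete(self, prompt: str) -> str:
        # A real implementation would call the provider's SDK here.
        return f"[{self.model}] {prompt}"


def make_client(provider: str) -> ChatClient:
    # Configuration picks the provider; callers never touch SDK specifics.
    if provider == "claude":
        return ClaudeClient()
    return GeminiClient()


client = make_client("claude")
print(client.complete("hello"))  # → [claude-3-7] hello
```

The configuration-driven factory is the whole trick: swap the provider in one place, and nothing downstream changes.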
Quinn shared this post with me a couple of days ago: Build a Team that Ships. It’s fantastic. I love it. I want to print it out. It perfectly describes what I’m doing at Sourcegraph right now. If you want to join me, live in Europe, and have seen the light and now know that AI will change developer tooling completely: let me know.