Short one this week, lovely readers. It’s been an incredibly busy week and tomorrow I’m flying to San Francisco for what I expect to be an even busier week, so I haven’t read as much as I’d like to.
Highlight of the week: 28 slightly rude notes on writing. I’d love to quote something from every paragraph here, but instead, here’s one thing this article did particularly well: it finally gave me a good explanation for why I can’t stand AI-generated prose. Here it is, in #14 and #15: “Some people think that writing is merely the process of picking the right words and putting them in the right order, like stringing beads onto a necklace. But the power of those words, if there is any, doesn’t live inside the words themselves. On its own, ‘Love the questions’ is nearly meaningless. Those words only come alive when they’re embedded in this rambling letter from a famous poet to a scared kid, a kid who is choosing between a life where he writes poems and a life where he shoots a machine gun at Bosnian rebels. The beauty ain’t in the necklace. It’s in the neck. Maybe that’s my problem with AI-generated prose: it’s all necklace, no neck.” Reading that made me realize that great writing, to me, isn’t about the words, it’s about who wrote them and why.
This post by Sean Goedecke introduces a term that I now proudly proclaim I will forever remember: wicked features. Wicked features are “features that must be considered every time you build any other feature” and just reading that made me go ohhhh. Great post.
Ben Kuhn on impact, agency, and taste: “I’ve noticed a lot of people underestimate their own taste, because they expect having good taste to feel like being very smart or competent or good at things. Unfortunately, I am here to tell you that, at least if you are similar to me, you will never feel smart, competent, or good at things; instead, you will just start feeling more and more like everyone else mysteriously sucks at them. For this reason, the prompt I suggest here is: what does it seem like everyone else is mysteriously bad at?”
Another Sean Goedecke post that I read this week and that I’m sharing here as a good summary of the recent Sycophancy Troubles at OpenAI (yes, I did have to look up sycophancy): Sycophancy is the first LLM "dark pattern". OpenAI’s second article, which came out after Goedecke’s, contains some more information about the whole process of evaluating a new model version: Expanding on what we missed with sycophancy.
Grant Slatton on different methodologies to develop software, one of them being: “Start working on the feature at the beginning of the day. If you don't finish by the end of the day, delete it all and start over the next day. You're allowed to keep unit tests you wrote.” Give-it-5-minutes kind of stuff, very good.
This brought back a lot of bad memories from my (short) time at university: “Papers are expected to use author-year citations. Author-year citations may be used as either a noun phrase, such as ‘The lambda calculus was originally conceived by Church [1932]’, or a parenthetic phrase, such as ‘The lambda calculus [Church 1932] was intended as a foundation for mathematics’.” The article is about more than bibliographic references, though: “One may say, ‘What’s wrong with a standard?’ Well, innovation proceeds by departures from the standard. Recently, I have been re-reading some milestone papers in CS, SE, and logic, and was struck by how unlike they are to each other. True, ‘They would not be accepted today’ is not an interesting argument, since the state of the art evolves; but one cannot help thinking that if the standards at the time had been as focused on form over substance as they are today, some of these papers would have been rejected back then.” I’ve never worked in academia, so my impressions of it might be completely wrong, but every time I hear or read about academia I think to myself that it’s the last thing I’d want to do.
Jason Fried on motivation: “I can fake enough. I can fake a lot. But I’ve noticed there’s one thing in particular I can’t fake: Motivation.” What I’ve slowly (painfully slowly) come to learn over the last few years is that motivation, for me, is the crucial element to getting anything done. Yes, sometimes it’s about brute force, and discipline, and just getting through it, but for the big pieces, the important pieces, if I don’t understand why I’m doing them, I’ll waste time. Learning this was painful, because it turns out there isn’t a quick fix to make motivation appear. It takes me a lot of back-and-forth with others, reading and writing, reformulating my thoughts, trying to come up with a story in my head. I haven’t found a shortcut. But once I have a story I can tell myself about what we’re doing and why, I’ve noticed that I can use it very well to pull others along. Lately I’ve been wondering what making that effort — the effort to find motivation, to figure out the why — would look like if it was an explicit effort.
Very interesting paper that examines whether it’s possible to get an LLM to output prose as if it were, well, “living” in 1913. They compare training their own model, fine-tuning, and prompting, and the results are kinda what you’d expect: you get more period-conforming output by only training the model on text from that period. But the ideas in that paper are very interesting regardless of results. For example, this line about the researchers’ attempts to assess the model’s outputs: “One might sum up by saying that readers in 2025 are not especially good at assessing a passage’s congruity with the state of the world a hundred-odd years ago.”
Related is this paper that came out this week: “On the generalization of language models from in-context learning and finetuning: a controlled study”. Surprising result, at least to me: “Overall, we find that the models generalize better on average along several dimensions from in-context learning. Using in-context learning to augment the finetuning dataset can exploit the complementary benefits of both to yield better performance.” Will need to dig in more.
How to make something great: “In the end, greatness is less a checklist than a delicate alignment of mindsets, methods, and morals. You begin with something half-seen and half-known, build with others who share your faith, wander widely before settling on a direction, learn by doing rather than by empty theorizing, protect nascent ideas from premature judgment, and persist with agility, refusing to sacrifice excellence on the altar of speed. Each of these principles, taken alone, is just a note. Together, they form a chord whose resonance can reshape the world.”
“Beginning in the 2010s, and accelerating in the 2020s, reality began to conform to the cyberpunk visions I grew up with. […] I stopped reading new cyberpunk about a decade ago. Around that time it became clear that the pace of real technological change had overtaken authors’ imaginations; newly written cyberpunk fiction began to feel retrofuturistic, like someone writing about the present and getting it wrong. Meanwhile all I had to do to see fantastic techno-futures unfold around me was to read the news.” Last week we stayed at a hotel that wasn’t as fancy as the following will make you think it was. They had two robots in the restaurant taking away dirty dishes. My daughter and I stopped and inspected one, because she was interested in why there was a robot driving around! But it felt like we were the only ones even remotely interested in them! People just stepped aside for the robot, but otherwise didn’t acknowledge it. Sure, yes, the “robots” were basically “just” driving carts full of dirty dishes, but still: driving carts full of dirty dishes! With LED eyes! While we’re having breakfast! The next day I saw that the hotel had cleaning robots too that would mop the floor before people went to breakfast. And then, of course, you walk outside and there’s robots mowing the lawn. What a time.