One of the questions I’ve been throwing around in my head this week: when does it make sense to invest in meta-work? It’s a surprisingly tricky question.
Say work means working on your product: adding features, fixing bugs, talking to customers, marketing, and so on. Meta-work would be work to support the work: documenting how you add features or fix bugs, for example; or coming up with a system to prioritize bugs; migrating from a TODO text file to a ticketing system.
Meta-work does have value. It can make the effort you put into the non-meta work more efficient: the investment in prioritizing a list of bugs will hopefully keep anyone from wasting time working on the wrong thing.
The tricky thing, though: when does it make sense to invest in meta-work?
Most engineers I know wouldn’t say “always” when asked like this, but in practice they tend toward “always”, because the lure of making something correct and efficient is so appealing.
But it doesn’t always make sense. When you have, for example, so much to do that it’s not even possible to build the wrong thing — in the early stages of a startup or a new product — investing in meta-work is wasteful, because by the time you’re done with your prioritization or labelling of tickets, the work itself has changed again.
The question is: when does that flip? When does investing in meta-work start to make the work more efficient? And how do you recognize that you’ve reached that point?
But, anyway, meta-work schmeta-work, here’s some important stuff: no newsletter next week — I’ll be in Spain, vacationing, vacaciones. It’ll just be me, my family, and the 600 pages of The Power Broker I still have left.
The tide is shifting. Now here’s Thomas Ptacek, also known as tptacek, also known as tqbf, writing that My AI Skeptic Friends Are All Nuts.
Useful reminder: “It should be taken for given that of course it's really hard to switch your mindset from ‘debugging this thing is really important’ to ‘should I debug this thing at all’, while you're in the middle of debugging said thing. But you know, you just try your best, sometimes your brain refuses to let go, and other times, maybe you get enough space to decide going off for a beer is the better bet.” Admittedly, I have the opposite problem: I rarely get stuck in rabbit holes, but more often than I should, I tread water on the surface.
Why Bell Labs Worked. There are some good lines in there. Here, this one on academia: “People who can survive this system aren't necessarily the same as people who can do great work. Most of the great names of the past would be considered unemployable today;” Or this one, on management and supervision: “After all, ‘what's stopping someone from just slacking off?’ Kelly would contend that's the wrong question to ask. The right question is, ‘Why would you expect information theory from someone who needs a babysitter?’” Or this one, on people with drive: “Bell Labs' pantheon was built on the backs of those who can't escape having dark nights of the soul. People who wake up in the middle of the night every night and ask ‘what am I doing with my life? I've accomplished nothing worthwhile.’”
There’s so much interesting stuff in here: “tbsp is an awk-like language that operates on tree-sitter syntax trees.” The tool itself, of course, but the code is also neat and readable, there are examples, which are always great, and my favorite part is the README, which has very little formatting and is mainly just a guide on how to use the tool.
Another tool based on tree-sitter that made me lean forward: srgn, a “grep-like tool which understands source code syntax and allows for manipulation in addition to search.” Reminds me of Comby, but it’s different. Look at this example: you can find the string “age” only inside the scope of a Python class.
This post on implementing a Forth (side-note: before my personal Summer of Lisp in 2016, I had a Spring of Forth and went through the two Leo Brodie books on Forth — highly recommended activity for spring or, heck, even summer; live a little) brought me to PlanckForth, a project that “aims to bootstrap a Forth interpreter from hand-written tiny (1KB) ELF binary.” The README then continues with the two sentences that make all of us go hell yeah: “This is just for fun. No practical use.” But then, if you scroll down, you can actually find a colored, annotated dump of said hand-written ELF binary. Hell yeah.
Talking about colors: scroll through this. Me, personally, I couldn’t live like this. But I respect it.
Dwarkesh Patel explained why he doesn’t think AGI is around the corner. Worth reading to understand how he thinks about AI timelines (“we’re underestimating the difficulty of solving the much gnarlier problem of computer use, where you’re operating in a totally different modality with much less data”), but also because it contains some very interesting thoughts around AI and learning in general. I loved this paragraph: “How do you teach a kid to play a saxophone? You have her try to blow into one, listen to how it sounds, and adjust. Now imagine teaching saxophone this way instead: A student takes one attempt. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student.”
After last week’s newsletter, to my big surprise, three different people reached out to me and asked how I find the time to read so much. It surprised me because, well, that issue only had, what, ten links to articles in it and ten articles in a week isn’t that much. I’m working a lot right now and don’t even read that much, but still, I replied, I try to constantly read: in the morning, in the evening, in the bathroom. A friend then said he lost the habit and now often reads the news. “The news?!”, I said, “Are you crazy?” And then I sent him a link to Aaron Swartz’s post on the news, I Hate the News.
I’ve heard about Section 174 and knew roughly what was going on, but if you had asked me to explain it to you, I couldn’t have. This article changed that: the hidden time bomb in the tax code that's fueling mass tech layoffs.
Absolutely incredible “wait what” feeling when I read this paragraph: “To train a model and evaluate its performance, we needed a realistic dataset of answerable email queries. Luckily for us, when notorious energy trader Enron was sued for massive accounting fraud in 2001, 500K of their emails were made public in the litigation (pro tip: if you're engaging in massive accounting fraud, maybe don't save all your emails).” (If you haven’t, listen to the Acquired episode on Enron.)
Everybody should write more blog posts like this one: Too Many Open Files. Yes, everybody, including you. And yes, there wasn’t a lot in that post I didn’t already know, but I’m glad I read another person’s explanation of file descriptors, saw how they approached debugging the problem, had a “hmm, nice” moment when reading through the script, and I bet the author of that post learned a ton.
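If you want to poke at this yourself, here’s a small sketch (mine, not from the post) that provokes a “too many open files” error in a controlled way, using Python’s standard `resource` module to temporarily lower the process’s file-descriptor limit:

```python
import resource
import tempfile

# Remember the current limits so we can restore them afterwards.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower the soft limit for this process only. 64 is arbitrary and small
# enough to hit quickly.
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

handles = []
try:
    while True:
        # Each open file consumes one descriptor; eventually open() fails
        # with EMFILE ("Too many open files").
        handles.append(tempfile.TemporaryFile())
except OSError as e:
    print(f"hit the limit after {len(handles)} open files: {e}")
finally:
    for h in handles:
        h.close()
    # Restore the original soft limit.
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```

The count will be a bit below 64 because stdin, stdout, stderr, and whatever else the interpreter has open also count against the limit. (`resource` is Unix-only; on Linux you can also inspect a process’s open descriptors via `/proc/<pid>/fd`.)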
Every time I come across this Žižek video I have to rewatch it. I don’t have to tell you that I very much believe in what follows after the colon at the end of this sentence, because I know you know I do, but for future generations: the same thing happens in software.
Enjoy your vacation, and in honor of the 42nd newsletter: remember where your towel is. A towel is just about the most massively useful thing an interstellar hitchhiker can carry.