Occasionally I come across comments that make me think “wait, there’s still developers out there who don’t use AI? Like, not at all?” Not just programmers who think that AI is overhyped, no, but programmers who don’t use AI in any shape or form: no ChatGPT, no Copilot, no Cody, no local models, nothing.
And often those comments have a subtle note to them that says “I don’t buy into the AI hype, so I’m not going to use it.” That is what confuses me the most. Because here I am, programming day in day out, using AI sometimes and not at other times, using it like I use Google, StackOverflow, or any other tool, and I use it while not having a strong opinion on whether AI is Unquestionably Good or Unquestionably Bad. I use it and say “thank you”, just in case.
Here’s what that looks like, on two normal working days, including my side project hacking hours in the morning.
Thursday, 6:30am, ChatGPT4. I can’t let go of this shell oddity that I ran into and have been investigating it in my spare time. I open ChatGPT and ask: “How do I print what process group the shell process I am belongs to?” Not a great sentence, I’ll give you that, but good enough for 6:30am. Answer I get: pretty good. Says I can use `ps` with some special flags and `$$`.
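For the curious, the suggestion probably looks something like this (a sketch; the exact flags vary between Linux procps and macOS, where the session column is `sess` instead of `sid`):

```shell
# $$ is the shell's own PID; ask ps for its process group and session.
ps -o pid,pgid,sid -p $$
```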
Thursday, 6:37am, Zed inline assist. Asking it to rewrite a line I just added to a `Cargo.toml` file to enable the `process` feature flag. Did it in a second. Saved me a cmd-tab to the browser and a search.
Thursday, 6:41am, ChatGPT4. "How do I find out what the foreground job ID is of a given process group id?"
My grammar’s improving. Answer is interesting and contains a shell snippet to analyze process groups, but while reading I realize that my understanding of how process groups work might be off.
Thursday, 6:45am, ChatGPT4. I don’t understand why ZSH behaves this way. A Hail Mary attempt: I throw in my whole Rust program and ask ChatGPT why the problem pops up. I just need to get a pointer, anything really. (My coffee hasn’t kicked in yet. Maybe I need a second cup.) The answer repeats a lot of what I already know, because I didn’t give it the output of the program when run. Duh. But there are 1-2 good pointers in there. Some back and forth happens. God, it’s verbose. But it gives me some Rust code I can use to debug further. The code has `unsafe` at the wrong spot, but `rust-analyzer` corrects me.
Thursday, 6:52am, ChatGPT4. Turns out the Rust code that ChatGPT4 gave me (to set a new session ID in a `pre_exec` hook for `process::Command`) actually does change things and moves me one step closer to figuring out this thing (and to enlightenment, I feel). Can’t believe it really and say “motherfucker…” to myself (silently, in my head – kids are still asleep.)
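For reference, the shape of that code is roughly this (a sketch, not ChatGPT’s exact output; it assumes a Unix target and uses a raw `setsid` binding so it compiles without the libc crate, which real code would use instead):

```rust
use std::os::unix::process::CommandExt;
use std::process::Command;

// Raw binding so this sketch needs no external crates;
// real code would use libc::setsid().
extern "C" {
    fn setsid() -> i32;
}

fn main() -> std::io::Result<()> {
    let mut cmd = Command::new("sh");
    cmd.args(["-c", "ps -o pid,pgid,sid -p $$"]);
    unsafe {
        // pre_exec runs in the forked child, before exec: setsid(2)
        // makes the child a session leader with its own process
        // group, detached from the parent shell's.
        cmd.pre_exec(|| {
            if setsid() == -1 {
                return Err(std::io::Error::last_os_error());
            }
            Ok(())
        });
    }
    let output = cmd.output()?;
    print!("{}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}
```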
Thursday, 6:56am, ChatGPT4. Happy about its help, I continue our conversation and casually say: “Motherfucker! That fixes it! Why?” Right after sending I consider whether that wasn’t rude and then, unexpectedly, two boxes with answers pop up. Did I land in swearing jail? No, turns out it’s just two alternate answers. That’s kinda neat.
Kids are awake now. School drop off, then gym, then work.
Thursday, 10:36am, GPT4 via Raycast. Asking it to translate one word from German into English, while typing a Slack message.
Thursday, 11:35am, Zed inline assist. I’m looking at a ticket in which a user reported they can’t use Zed’s `open permalink to this line` action because they’re on Bitbucket. I set up a Bitbucket account (wow, do you have to fill out many textboxes — IDs, names, workspace names, IDs again) to confirm the bug and yes, we don’t support that. Should’ve looked in the code earlier. Then I think “how hard could it be to add this?” Then I see the git remote URL parsing that the code does and… It’s not that it’s hard, but you have to handle git URLs, https URLs, with and without usernames, and, hey, maybe I’ve parsed too many URLs in my life. I’m this close to giving up when it hits me: “no wait, let the robots do it.” I manually (humanly!) type out the following tests to get started:
```rust
#[test]
fn test_parse_git_remote_url_bitbucket_https_with_username() {
    let url = "https://thorstenballzed@bitbucket.org/thorstenzed/testingrepo.git";
    // TODO: fill in the rest of the test
}

#[test]
fn test_parse_git_remote_url_bitbucket_https_without_username() {
    let url = "https://bitbucket.org/thorstenzed/testingrepo.git";
    // TODO: fill in the rest of the test
}

#[test]
fn test_parse_git_remote_url_bitbucket_git() {
    let url = "git@bitbucket.org:thorstenzed/testingrepo.git";
    // TODO: fill in the rest of the test
}
```
Then I select the three test cases and tell Zed’s inline assist to “fix these tests.” Turns out the robots are also lazy, because what it adds is this:
```rust
let parsed = parse_git_remote_url(url);
assert!(parsed.is_none(), "Bitbucket is not supported, but somehow a value was returned.");
```

Fair enough. That’s one way to do TDD: `assert(it_does_not_work_yet())`.
I try again, this time with a better prompt:
```rust
// TODO: I want to use TDD and have tests first, before adding support for Bitbucket. Fill in the rest of the test to assert how it should work once we have Bitbucket support.
```
And voila! It works. It fills out the tests.
Thursday, 12:04pm, Zed inline assist. Asking it to implement the thing for which it just wrote tests. It does generate some could-be-correct code, but it’s wrong. Not completely, but… wrong. Wrong in the “what you’re doing there is wishful thinking” sense. Implementing it myself now, which is fine now that I have the tests.
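To give a rough idea of what’s involved, here’s a standalone sketch of that kind of parsing (this is not Zed’s actual `parse_git_remote_url`, which has its own types and handles more hosts; the function name and return type here are made up for illustration):

```rust
/// Extract (owner, repo) from a Bitbucket remote URL.
/// Handles https with and without a username, and the scp-like git@ form.
fn parse_bitbucket_url(url: &str) -> Option<(String, String)> {
    let path = if let Some(rest) = url.strip_prefix("git@bitbucket.org:") {
        rest
    } else if let Some(rest) = url.strip_prefix("https://") {
        // Drop the optional "username@" in front of the host.
        let rest = rest.split_once('@').map_or(rest, |(_, host)| host);
        rest.strip_prefix("bitbucket.org/")?
    } else {
        return None;
    };
    let path = path.strip_suffix(".git").unwrap_or(path);
    let (owner, repo) = path.split_once('/')?;
    Some((owner.to_string(), repo.to_string()))
}

fn main() {
    for url in [
        "https://thorstenballzed@bitbucket.org/thorstenzed/testingrepo.git",
        "https://bitbucket.org/thorstenzed/testingrepo.git",
        "git@bitbucket.org:thorstenzed/testingrepo.git",
    ] {
        // All three parse to ("thorstenzed", "testingrepo").
        println!("{:?}", parse_bitbucket_url(url));
    }
}
```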
Thursday, 12:34pm, ChatGPT plugin in Raycast. I’m investigating a memory issue and am feeling lazy and quickly ask “Give me a Unix command to find the longest file in a directory.” Its reply just uses `ls` and `awk` and right away I know that’s not what I want. I want recursion. Ugh. Okay. I go to the terminal and type in `find . -name "*.rs" | xargs wc -l | sort -n`. That’s what I want.
Thursday afternoon: no more AI for the rest of the day. I investigate language server issues and shenanigans in Zed. Lots of reading of code. Lots of jump-to-definition. Lots of Googling and issue tracking and writing. Nothing makes me think “I could use AI for that.”
Friday, 6:25am, reading again what ChatGPT4 wrote yesterday about Unix process session IDs. Asking it a follow-up question: “So what do you think happens here? Does ZSH, when it launches a non-built-in command, somehow become the process group leader and receive signals? My biggest question is: why isn’t that cleaned up after `zsh` exits? (Which it does!)” ChatGPT’s answer is surprisingly good. Except its formatting (as a lover of bulleted lists: these god damn bulleted lists).
Friday, 6:34am, ChatGPT4. Asking it how to confirm the hypothesis in its previous answer. It says “use `strace`” or `dtruss` and yeah, that’s also what I thought of, but it’s 6:34am and I’ve never really gotten `dtruss` to work on macOS with good results (`strace` on Linux is nice and easy). But it also gives me some Rust code that uses libc to query the foreground process group, using a libc call I didn’t know existed:
```rust
use std::io;

// (Uses the libc crate.)
fn get_foreground_process_group(fd: i32) -> io::Result<libc::pid_t> {
    let pgid = unsafe { libc::tcgetpgrp(fd) };
    if pgid == -1 {
        Err(io::Error::last_os_error())
    } else {
        Ok(pgid)
    }
}
```
So I just learned that you can get “the process group ID of the foreground process group on the terminal associated to fd”. That’s neat.
Now I’m thinking all that’s left to confirm the hypothesis fully (hypothesis being: ZSH doesn’t properly clean up the process group, or something goes wrong there) is to print the PID of the subprocess we’re launching.
Let’s ask GPT again.
Friday, 6:42am, ChatGPT4. Eh, the code it comes up with and how to confirm this doesn’t make a lot of sense. The process would be dead when it wants to confirm it. But it gives me enough ideas to play around a bit. Copilot (which is still on somehow, in Zed on my private machine) helps me with some `to_str` and `format!` shenanigans.
Friday, 6:50am, ChatGPT4. Double-checking and asking it: can I get at the PID from Rust’s Command after it exited? I don’t think it’s possible. I don’t want to walk through the Rust docs. Let’s try. GPT4 says “yes you can” and then shows me how to do it before it exited. Okay, I guess I could switch from `.output()` to `.spawn()`. But that’s not what I wanted. Well.
Now I’m gonna ask Zed’s inline assist to rewrite the code for me.
Ah, no, a 2-year-old just walked in.
Friday morning: don’t use AI at all. I was hopping on some contributor PRs and changing code in tiny ways. Then paired and mostly watched Antonio write code.
Friday, 2:43pm, Zed inline assist, hacking on Zed’s git integration, asked it to turn a `&str` into a `Path`, because I’m lazy. It did it!
Friday, 3:40pm, GPT4 in Zed. Asking it in Zed (because it’s right there) how to set a different git committer on the CLI, because I want to simulate a git commit from a different person. I know it’s 3-4 env vars I have to set but, yes, I’m lazy and don’t want to google and click through cookie banners and ugh. GPT4 in Zed gives me exactly what I want. 4 lines to set 4 env vars and then do a git commit, ready to be pasted into the terminal. I copy, I paste, I commit.
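For the record, the four variables in question are standard git behavior (the exact snippet GPT4 produced may have looked slightly different; names and emails here are placeholders):

```shell
export GIT_AUTHOR_NAME="Other Person"
export GIT_AUTHOR_EMAIL="other@example.com"
export GIT_COMMITTER_NAME="Other Person"
export GIT_COMMITTER_EMAIL="other@example.com"
git commit -m "a commit that looks like it came from someone else"
```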
Saturday, 10:10am, ChatGPT4. I’m writing an email I’m afraid to send. I hand the email over to GPT4 with the question: “what could I remove from this email to make it shorter?” Its answer surprisingly (painfully) points out a lot of fluff I added. Its rewrite, though, feels flat and dull. I go back to my email and manually remove some of the things it mentioned.
There you go. Mundane and magical. It can be both.