> In one of his podcasts, Ezra Klein said that he thinks the “message” of generative AI (in the McLuhan sense) is this: “You are derivative.” In other words: all your creativity, all your “craft,” all of that intense emotional spark inside of you that drives you to dance, to sing, to paint, to write, or to code, can be replicated by the robot equivalent of 1,000 monkeys typing at 1,000 typewriters. Even if it’s true, it’s a pretty dim view of humanity and a miserable message to keep pounding into your brain during 8 hours of daily software development.
I think this is a fantastic point, well summarised. I see people coming out of the woodwork here on HN, especially when copyright is discussed in relation to LLMs, to say that there's no difference between human creativity and what LLMs do. (And therefore, of course, training LLMs on everything is fair use.) I'm not here to argue against that point of view, just to illustrate what this "message" means.
I feel fairly similarly to Nolan, and to this day I haven't really started using LLMs in any major way in my work.
I do occasionally use one where I might previously have gone to Stack Overflow. Today I asked it a mildly tricky TypeScript generic-wrangling question that ended up using the Extract helper type.
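(The actual question isn't shown here, so this is just a minimal sketch of the kind of generic wrangling where Extract tends to show up; the event union and the onEvent helper are hypothetical.)

```typescript
// A discriminated union of events (hypothetical example).
type AppEvent =
  | { type: "click"; x: number; y: number }
  | { type: "keypress"; key: string }
  | { type: "scroll"; delta: number };

// Extract narrows the union to the members whose `type` matches T,
// so the handler's payload type follows from the event name.
function onEvent<T extends AppEvent["type"]>(
  type: T,
  handler: (event: Extract<AppEvent, { type: T }>) => void
): void {
  // registration logic omitted
}

// Here `e` is typed as the "keypress" member only, so `e.key` is a string.
onEvent("keypress", (e) => console.log(e.key));
```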
However, I'm also finding that the joy of coding isn't quite what it used to be as I move along in my career. I really feel great about finding the right architecture for a problem, or optimising something that used to be a roadblock for users until it's hardly noticeable. But so much of the work is just making another form, another database table, etc. And I'm always teetering back and forth between "just write the easy code (or get an AI to generate it!)" and "you haven't found the right architecture that makes this trivial".