I recently tried a 7-day trial version of Claude Code. I had 3 distinct experiences with it: one obviously positive, one bad, and one neutral-but-trending-positive.
The bad experience was asking it to build a non-trivial feature in an existing Python module.
I have a bunch of classes for writing PDF files. Each class corresponds to a page template in a document (TitlePage, StatisticsPage, etc). Under the hood these classes use functions like `draw_title(x, y, title)` or `draw_table(x, y, data)`. One of the tables needed to be split across multiple pages whenever the number of rows exceeded the space on a page. So I needed Claude Code to write some sort of recursive top-level driver that would keep adding new pages to a document until it exhausted the input data.
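Roughly, the driver I had in mind looks like the sketch below. This is illustrative only: `add_statistics_pages`, `document.add_page`, and `ROWS_PER_PAGE` are made-up names, and the real version splits on remaining page space rather than a fixed row count.

```python
ROWS_PER_PAGE = 40  # made-up capacity; the real check was based on page space

def add_statistics_pages(document, rows):
    """Keep adding StatisticsPage instances until the row data is exhausted."""
    if not rows:
        return
    chunk, rest = rows[:ROWS_PER_PAGE], rows[ROWS_PER_PAGE:]
    document.add_page(StatisticsPage(chunk))   # one page per chunk of rows
    add_statistics_pages(document, rest)       # recurse on the remaining rows
```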
I spent about an hour coaching Claude through the feature, and in the end it produced something that looked superficially correct but didn't run. After spending some time debugging, I moved on and wrote the thing by hand. The feature was not trivial even for me to implement; it took about two days, and it broke the existing pattern in the module. The module was designed around the idea that `one data container = one page`, so splitting data across multiple pages was a new pattern the rest of the module had to be adapted to. I think that's why Claude did not do well.
+++
The obviously good experience with Claude was getting it to add new tests to a well-structured suite of integration tests. Adding tests to this module is a boring chore, because most of the effort goes into setting up the input data. The pattern in the test suite is something like this: an IntegrationTestParent class that contains all the test logic, and a bunch of IntegrationTestA/B/C/D classes that do the data setup and then call the parent's test method.
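Sketched out, the structure is roughly this. The class names are the real ones from the suite; the bodies and the `build_input`/`run_pipeline` helpers are stand-ins for the actual setup and system under test:

```python
import unittest

def build_input(rows):                      # stand-in for the tedious data setup
    return [{"id": i, "value": i * 10} for i in range(rows)]

def run_pipeline(data):                     # stand-in for the system under test
    return {"ok": len(data) > 0}

class IntegrationTestParent(unittest.TestCase):
    """All of the actual test logic lives in the parent."""

    def check(self, data):
        result = run_pipeline(data)
        self.assertTrue(result["ok"])

class IntegrationTestA(IntegrationTestParent):
    def test_small_dataset(self):
        self.check(build_input(rows=10))

class IntegrationTestB(IntegrationTestParent):
    def test_large_dataset(self):
        self.check(build_input(rows=10_000))
```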
Claude knocked this one out of the park. There was a clear pattern to follow, and the code it produced was perfect. It saved me an hour or two, but the cool part was that it was doing this in its own terminal window while I worked on something else. This is the type of simple task I'd give to new engineers to expose them to existing patterns.
+++
The last experience was asking it to write a small CLI tool from scratch in a language I don't know. The tool works like this: you point it at a directory, and it checks that there are 5 or 6 files in that directory, that the files are named a certain way, and that they are formatted a certain way. If any files are missing or incorrectly formatted, it throws an error.
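The checks themselves are simple. Here is the same idea sketched in Python just for illustration (the actual tool was Go, and the file names and format check here are placeholders, not the real spec):

```python
import csv
import sys
from pathlib import Path

EXPECTED_FILES = ["orders.csv", "customers.csv", "inventory.csv",
                  "returns.csv", "shipments.csv"]        # placeholder names

def validate(directory):
    root = Path(directory)
    for name in EXPECTED_FILES:
        path = root / name
        if not path.is_file():
            sys.exit(f"missing file: {name}")
        with path.open(newline="") as f:
            header = next(csv.reader(f), None)           # placeholder format check
            if not header:
                sys.exit(f"{name} is empty or not readable as CSV")
    print("all files present and formatted correctly")

if __name__ == "__main__":
    validate(sys.argv[1])
```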
The tool was for another team to use, so they could check the files before forwarding them to me. That meant I needed an executable binary I could throw up onto Dropbox or something, which the other team could just download and use. I primarily code in Python/JavaScript, and making a shareable tool like that with an interpreted language is a pain.
So I had Claude whip something up in Golang. It took about 2 hours, and the tool worked as advertised. Claude was very helpful.
On the one hand, this was a clear win for Claude. On the other hand, I didn't learn anything. I want to learn Go, and I can't say that I learned any Go from the experience. Next time I have to code a tool like that, I think I'll just write it from scratch myself, so I learn something.
+++
Eh. I've been using "AI" tools since they came out. I was the first at my company to get the original Copilot autocomplete, and when ChatGPT became available I became a heavy user overnight. I've tried Cursor (I hate the VSCode nature of it), I've tried the re-branded Copilot, and now I've tried Claude Code.
I am not an "AI" skeptic, but I still don't get the foaming hype. I feel like these tools at best make me 1.5x as productive -- which is a lot, so I will always stay on top of new tooling -- but I don't feel like I am about to be replaced.