Responding to "The highest quality codebase"

This post made its way to the Hacker News front page today. The premise was interesting: the author asked Claude to "improve" a codebase 200 times in a loop.

First things first: I’m a fan of running Claude in a loop, overnight, unattended. I’ve written real customer-facing software this way, with a lot of success. I’ve sold fixed-bid projects where 80% of the code was written by Claude unsupervised - and delivered too. (Story for another day!)

This writer did NOT have the same success. In fact, quite the opposite: the codebase grew from 47k to 120k lines of code (a 155% increase), tests ballooned from 700 to 5,369, and comment lines grew from 1,500 to 18,700. The agent optimized for vanity metrics rather than real quality. Any programmer will tell you that lines of code written do NOT equal productivity.

The premise of the post is genuinely intriguing - but once you dig in and actually look at the prompt, all of the intrigue falls away:

Ultrathink. You’re a principal engineer. Do not ask me any questions. We need to improve the quality of this codebase. Implement improvements to codebase quality.

Let’s deconstruct the prompt here.

First, “ultrathink” is a magic word in Claude Code that means “think about this problem really hard” - it dials the model’s thinking budget up to its maximum.

Second, the rest of the prompt - “improve the codebase, don’t ask questions” - was almost bound to fail (if we define success as better test coverage where it was lacking, fewer lines of code, and bugs fixed), and anyone who uses LLMs to write code would see this right away.

This post is equivalent to someone saying LLMs are useless because they can’t count the R’s in strawberry - it ignores the fact that LLMs are very useful in somewhat narrow ways.

To be fair, I think the author knew this was just a funny experiment, and wanted to see what Claude would actually do. As a fun exercise and/or as a way to gather data, I think it’s interesting.

I do fear that people will see this post, continue to blithely say “LLM bad”, and go about their day. Hey, if inertia is your thing, go for it!

How would I improve this prompt?

  • Have the LLM first write a memory - an architecture Markdown file, a list of tasks to do, and so on. The author literally threw it a codebase and said “go”. No senior engineer would start “improving” a codebase before getting a grasp on the project at hand.
  • Define for the LLM what success looks like. What constitutes high quality? I’d say adding tests for particularly risky parts and reducing lines of code would be a good start.

While there are justifiable comments here about how LLMs behave, I want to point out something else: There is no consensus on what constitutes a high quality codebase. —mbesto

  • Give the LLM the ability to check its own work. In this case, I’d have run Claude twice: once to improve the codebase, and once to check that the new code was actually “high quality”. Claude can use command-line tools to run a git diff, so why not instruct it to do so? Better yet, have it run the test suite after each iteration and fix any failures. (A sketch of all three fixes follows this list.)
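
Putting those three fixes together, here’s a minimal sketch of the loop I’d run instead - in Python, for concreteness. It assumes the Claude Code CLI is on your PATH (`claude -p` runs a single non-interactive session) and that the project’s tests run via `make test`; the file names, prompt wording, and the `make test` target are my own illustrative choices, not anything from the original post.

```python
#!/usr/bin/env python3
"""A minimal sketch of the improved loop.

Assumptions (mine, not the original author's): the Claude Code CLI is on
PATH, `claude -p` runs one non-interactive session, the repo's tests run
via `make test`, and the file names and prompts are illustrative.
"""
import subprocess


def claude(prompt: str) -> None:
    """Run one non-interactive Claude Code session with the given prompt."""
    # --dangerously-skip-permissions lets it edit files unattended;
    # only do this in a sandboxed or disposable checkout.
    subprocess.run(
        ["claude", "-p", "--dangerously-skip-permissions", prompt],
        check=True,
    )


def tests_pass() -> bool:
    """Return True if the project's test suite passes."""
    return subprocess.run(["make", "test"]).returncode == 0


# Step 0: build a memory before touching any code.
claude(
    "Read this codebase and write ARCHITECTURE.md describing the major "
    "modules, how they interact, and the riskiest areas. Then write "
    "TASKS.md: a prioritized list of concrete quality improvements. "
    "Do not change any other files."
)

for i in range(200):
    # Step 1: improve, against an explicit definition of success.
    claude(
        "Ultrathink. Pick the top unfinished task in TASKS.md. Success "
        "means fewer lines of code, tests added for the risky areas named "
        "in ARCHITECTURE.md, and no behavior changes. If nothing left is "
        "worth doing, say so and stop."
    )

    # Step 2: run the tests and make the model fix its own breakage.
    if not tests_pass():
        claude("The test suite is failing. Read the failures and fix them.")

    # Step 3: a separate pass to check the work against the actual diff.
    claude(
        "Run `git diff`, review the changes critically, and revert "
        "anything that adds complexity without clear benefit. Then mark "
        "the task done in TASKS.md."
    )

    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(
        ["git", "commit", "--allow-empty", "-m", f"iteration {i + 1}"],
        check=True,
    )
```

Note the structure: the model builds its memory (ARCHITECTURE.md, TASKS.md) before editing anything, every iteration carries an explicit definition of success, and a separate review pass reads the diff and runs the tests instead of trusting the first output.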

The Hacker News Discussion

The HN thread had some (surprisingly) good takes:

hazmazlaz: “Well of course it produced bad results… it was given a bad prompt. Imagine how things would have turned out if you had given the same instructions to a skilled but naive contractor who contractually couldn’t say no and couldn’t question you. Probably pretty similar.”

samuelknight: “This is an interesting experiment that we can summarize as ‘I gave a smart model a bad objective’… The prompt tells the model that it is a principal engineer, then contradicts that role [with] the imperative ‘We need to improve the quality of this codebase’. Determining when code needs to be improved is a responsibility for the principal engineer but the prompt doesn’t tell the model that it can decide the code is good enough.”

xnorswap: “There’s a significant blind-spot in current LLMs related to blue-sky thinking and creative problem solving. It can do structured problems very well, and it can transform unstructured data very well, but it can’t deal with unstructured problems very well… But right now, the best way to help an LLM is have a deep understanding of the problem domain yourself, and just leverage it to do the grunt-work that you’d find boring.”

asmor: “I asked Claude to write me a python server to spawn another process to pass through a file handler ‘in Proton’, and it proceeded [into] a long loop of trying to find a way to launch into an existing wine session from Linux with tons of environment variables that didn’t exist. Then I specified ‘server to run in Wine using Windows Python’ and it got more things right… Only after I specified ‘local TCP socket’ it started to go right. Had I written all those technical constraints and made the design decisions in the first message it’d have been a one-hit success.”

ericmcer: “LLMs are good at mutating a specific state in a specific way. They are trash at designing what data shape a state should be, and they are bad at figuring out how/why to propagate mutations across a system.”

The experiment feels interesting, but in reality it isn’t anything noteworthy - bad prompting gets bad results. Garbage in, garbage out has been drilled into every developer since the dawn of computing. I mean, it’s kind of cute. But anyone concluding “Claude can’t write good code” from this misses the point entirely.

LLMs are tools. As with any tool, the results depend on how you use them. Give vague instructions to a circular saw and you’ll get messy cuts. Give vague instructions to Claude and you’ll get 18,700 lines of comments.