Posts tagged "Claude Code"

2 posts


Nov 24, 2025

Thoughts on Recent New Models: Claude Opus 4.5, GPT 5.1 Codex, and Gemini 3

Three major model releases hit in the past two weeks. All three are gunning for the developer market - here’s what stood out to me.

IMO, we’re beyond the point where major model releases move the needle significantly, but they’re still interesting to me.

We’ll start with the one I’m most excited about:

Claude Opus 4.5

Anthropic announced Opus 4.5 today. If you’re not aware, Anthropic’s models, from most to least powerful, are Opus (most capable, most expensive), Sonnet (mid-range, great price/performance), and Haiku (cheapest, fastest, and smallest - still a decent model). Pricing dropped to $5/million input tokens, $25/million output tokens. That’s a third of what Opus 4.1 cost.

Anthropic claims Opus 4.5 handles ambiguity “the way a senior engineer would.” Bold claim. (Claude has already shown itself to be “senior” when it estimates a 10-minute task will take a couple of weeks. 🥁)

The claim that it will use fewer tokens than Sonnet on a given task because of its better reasoning is interesting, but I’ll have to see how that plays out in actual use.

Why am I most excited about this one, you might ask? Mainly because Claude Code is my daily driver, and I see no reason to change that right now.

GPT 5.1 Codex

OpenAI’s GPT-5.1 Codex-Max also dropped last week. I really like the idea of the Codex models - models specifically tuned to use the Codex toolset. That just makes a ton of sense to me.

Still… the name. Remember when GPT-5 was supposed to bring simpler model names? We now have gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-mini, and gpt-5.1-codex-max. Plus reasoning effort levels: none, minimal, low, medium, high, and now “xhigh.” For my money, I’m almost always on medium, but I’m interested to try xhigh (sounds like Claude’s Ultrathink?).

I don’t think this does much to alleviate confusion, but all APIs trade flexibility against simplicity, and I’d rather have more levers to pull, not fewer.
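For what it’s worth, those levers come down to a couple of request parameters. Here’s a minimal Python sketch against OpenAI’s Responses API - the model and effort names are the ones from the announcement, and I haven’t verified which effort values each Codex variant actually accepts:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Model name and effort levels are taken from OpenAI's naming above; treat
# this as a sketch rather than a verified matrix of what each variant supports.
response = client.responses.create(
    model="gpt-5.1-codex",
    reasoning={"effort": "medium"},  # swap in "xhigh" if the variant accepts it
    input="Refactor this function and explain what you changed: ...",
)
print(response.output_text)
```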

One highlighted feature is “compaction,” which lets the model work across multiple context windows by summarizing and pruning history. Correct me if I’m wrong, but Claude Code has been doing this for a while: when your context runs out, it summarizes previous turns and keeps going (you can also trigger compaction manually, which I do - that and /clear for a fresh context window). Nice to see Codex get on this; “less is more” is a rather basic tenet of LLM work, so compaction frankly should have been there from the get-go.
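If you’ve never seen it spelled out, the idea behind compaction is simple. Here’s a rough Python sketch - toy token counting and a stubbed-out summarizer, not anyone’s actual implementation:

```python
def rough_token_count(messages):
    # Crude heuristic: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Stub: a real agent would ask the model itself to write this summary.
    joined = " ".join(m["content"] for m in messages)
    return {"role": "system", "content": "Summary of earlier work: " + joined[:500]}

def compact(messages, max_tokens=150_000, keep_recent=8):
    # Once the transcript nears the window limit, fold older turns into a
    # single summary message and keep only the most recent ones verbatim.
    if rough_token_count(messages) < max_tokens:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(older)] + recent
```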

I think the cost is the same as 5.1 Codex - honestly the blog post doesn’t make that super clear.

Gemini 3

Google’s Gemini 3 launched November 18th alongside Google Antigravity, their new coding IDE. I’m happily stuck with CLI coding tools + JetBrains Rider, and IDEs come and go frequently, so I haven’t tried it (and probably won’t unless people tell me it’s amazing).

Before release, X was filled with tweets saying Gemini 3 was going to be a game changer. As far as I can tell, it’s great, but it’s not anything crazy. I did use it to one-shot a problem that Claude Code had been stuck on for a while on a personal project (a new .NET language - more on that later!) - which was really cool. I’m excited to use it more, alongside Codex CLI.

The pricing is aggressive: $2/million input, $12/million output. Cheapest of the three for sure.

What Actually Matters

The benchmark numbers are converging. Every announcement leads with SWE-bench scores that differ by a few percentage points. That doesn’t excite me, and it never has. Simon Willison put it well while testing Opus 4.5:

I’m not saying the new model isn’t an improvement on Sonnet 4.5—but I can’t say with confidence that the challenges I posed it were able to identify a meaningful difference in capabilities between the two. … “Here’s an example prompt which failed on Sonnet 4.5 but succeeds on Opus 4.5” would excite me a lot more than some single digit percent improvement on a benchmark with a name like MMLU or GPQA Diamond.

Show me what the new model can do that the old one couldn’t. That’s harder to market but more useful to know. Right now, I’m reliant on vibes for the projects that I run.

What am I still using?

For my money, Sonnet 4.5 remains the sweet spot. I haven’t missed Opus 4.1 since Sonnet launched. These new flagship releases might change that, but the price-to-performance ratio still favors the tier below.

IMO, the dev tooling race is more interesting than the model race at this point. As I pointed out in my article “Claude Code and GitHub Copilot using Claude are not the same thing,” models tuned to a specific toolset will outperform a “generic” toolset in most cases.


Nov 21, 2025

Claude Code and GitHub Copilot using Claude are not the same thing

Stop telling me “GitHub Copilot can use Claude, so why would I buy Claude Code?” There are about a million reasons why you should stop using GitHub Copilot, and the main one is that it’s not a good product. Sorry not sorry, Microsoft.

Over and over again I’ve heard how much coding agents suck (often from .NET devs), and the bottom line is they’re doing it wrong. If you aren’t at least TRYING multiple coding tools, you’re doing yourself a disservice.

This may sound a bit contemptuous, but I mean it with love - .NET devs LOVE to be force-fed stuff from Microsoft, including GitHub Copilot. Bonus points if there is integration with Visual Studio. (The “first party” problem with .NET is a story for another time.)

I ran a poll on X asking .NET devs what AI-assisted coding tool they mainly use. The results speak for themselves - nearly 60% use GitHub Copilot, with the balance being a smattering across different coding tools.

(I know I’m picking on .NET devs specifically, but this applies equally to anyone using a generic one-size-fits-all coding tool. The points I’m making here are universal.)

Here’s the bottom line: GitHub Copilot will not be as good as a model-specific coding tool like OpenAI’s Codex, Claude Code (which is my preferred tool), or Google’s Gemini.

Why?

  • Sonnet 4.5 is trained to specifically use the toolset that Claude Code provides
  • GPT-5-Codex is trained to specifically use the toolset that Codex CLI provides
  • Gemini is trained to specifically use the toolset that Gemini CLI provides

OpenAI has explicitly said this is the case, even if the others haven’t.

“GPT‑5-Codex is a version of GPT‑5 further optimized for agentic software engineering in Codex. It’s trained on complex, real-world engineering tasks such as building full projects from scratch, adding features and tests, debugging, performing large-scale refactors, and conducting code reviews.” (Source)

Why not Copilot?

  • Giving several models the same generic toolset (with maybe some prompt tweaks per model) simply will NOT work as well as training a model for a specific toolset.
  • Model selection paralysis - which model is best suited to which task is really left up to the user, and .NET devs are already struggling with AI as is. (This is totally anecdotal of course, but I talk to LOTS of .NET devs.)
  • Microsoft has married itself to OpenAI a little too much, which means its own model development is behind. I know it feels good to back the winning horse, but I’d love to see custom models come out of Microsoft/GitHub, and I see no signs of that happening anytime soon.

My advice

  • PAY THE F***ING $20 A MONTH AND TRY Claude Code, or Codex, or Gemini, or WHATEVER. I happily pay the $200/month for Claude Code.
  • Get comfortable with the command line, and stop asking for UI integration for all the things. Visual Studio isn’t the be-all and end-all.
  • Stop using GitHub Copilot. When it improves, I’ll happily give it another go.