15× vs. ~1.37×: Recalculating GPT-5.3-Codex-Spark on SWE-Bench Pro (twitter.com/nvanlandschoot)
31 points by nvanlandschoot 13 days ago | hide | past | favorite | 17 comments



> The narrative from AI companies hasn’t really changed, but the reaction has. The same claims get repeated so often that they start to feel like baseline reality, and people begin to assume the models are far more capable than they actually are.

This has been the case for people who buy into the hype and don’t actually use the products, but I’m pretty sure those who do are largely disillusioned with the claims by now. The only somewhat reliable method is to test the things on your own use case.

That said: I always expected the tradeoff of Spark to be accuracy vs. speed. That it’s still significantly faster at the same accuracy is wild. I never expected that.


I believe a lot of the speed-up is due to a new chip they use [1]. Since the speedup comes from hardware rather than from cutting the number of operations, that's likely why the accuracy has changed so little.

1. https://www.cerebras.ai/blog/openai-codexspark


The people I know who use them the most also seem the most likely to buy into the hype. The coworker who no longer answers questions by talking about code, but instead by talking about which skills are the best, is the same one who posts all the hype.

Method: I used OpenAI’s published SWE-Bench Pro chart points and matched GPT-5.3-Codex-Spark to the baseline model at comparable accuracy levels by reasoning effort. At similar accuracy, the effective speedup is closer to ~1.37× rather than 15×.
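Roughly, the accuracy-matching calculation described above looks like the sketch below. The (accuracy, time) pairs are made up purely to illustrate the method; they are not the published SWE-Bench Pro numbers.

```python
# Sketch of an accuracy-matched speedup calculation. The chart points
# here are HYPOTHETICAL -- chosen only to show the method, not to
# reproduce OpenAI's actual SWE-Bench Pro data.

# (accuracy %, seconds per task) at increasing reasoning effort
baseline = [(40.0, 411.0), (45.0, 822.0), (50.0, 1644.0)]
spark = [(40.0, 300.0), (45.0, 600.0), (50.0, 1200.0)]

def effective_speedup(base_points, fast_points, tol=1.0):
    """Pair points at comparable accuracy and average the time ratios."""
    ratios = []
    for (acc_b, t_b), (acc_f, t_f) in zip(base_points, fast_points):
        if abs(acc_b - acc_f) <= tol:  # only compare like with like
            ratios.append(t_b / t_f)
    return sum(ratios) / len(ratios)

print(f"effective speedup: ~{effective_speedup(baseline, spark):.2f}x")
# prints "effective speedup: ~1.37x" with these made-up points
```

The point is that comparing times at matched accuracy, rather than at each model's fastest setting, is what collapses a headline 15× into something much smaller.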

Unless I'm missing it, the page they're referring to (https://openai.com/index/introducing-gpt-5-3-codex-spark/) never claims Spark is 15x faster.

It looks like it only appears in the snippet the Google result shows, presumably taken from the meta tags. It's possible an earlier draft claimed a 15x speed boost and they forgot to remove the claim from the tags.


I think they modified the page. If you search for GPT-5.3-Codex-Spark, Google still has it indexed with 15x. Searching: GPT-5.3-Codex-Spark + "15x" will show all the downstream sites that picked up the claim.

The Google snippet isn't outdated. It's from the <meta> tag. It's still there, and it still says "Introducing GPT-5.3-Codex-Spark—our first real-time coding model. 15x faster generation, 128k context, now in research preview for ChatGPT Pro users."

I don't think the visible page text ever said 15x faster. It's possible they modified it before I saw it, but it's not in the oldest Internet Archive version either.

The other news sites that mention 15x faster are probably either getting it from the same <meta> tag that shows up in search snippets, or from the RSS feed. Both would be generated from the same source text in whatever platform they use to write their posts.


Something I find odd in the AI space is that almost all journalists republish vendor benchmark claims without question.

Why not just benchmark the models yourself?

Tiny little YouTube channels will spend weeks benchmarking every motherboard from every manufacturer to detect even the tiniest differences!

Car reviews will often test drive the cars and run their own dyno tests.

Etc…

AI reviews meanwhile are just copy-paste from the market blurb.


Even the 3rd-party AI benchmarks that are published [0] are a sham too. They're run by a paid shill (SemiAnalysis), and the results are all highly tuned by the vendors to make themselves look good.

[0] https://github.com/InferenceMAX/InferenceMAX/


>Why not just benchmark the models yourself?

Because their incentives are to churn out stupid articles fast to get more views, and to stay in the good graces of major AI companies and potential advertisers. That, and their integrity and passion for what they do are minimal, plus they're paid peanuts.

Doesn't help that most brain-rotted readers hardly call them out for it, if they even notice.


It's not free to run those benchmarks, especially on the big models.

Ideally journalists / their employers would swallow that as the cost of doing business, but it's a hard sell if they're feeling the squeeze and aren't making much in the first place.


This is the best sort of correct, in that it’s technically correct. The thing is, we don’t need 5.3 xxhigh reasoning for everything. Giving up some intelligence, and then taking the hit on some inevitable re-runs / re-prompts at 15x, ends up with, I bet, more than a 37% speed improvement on a lot of tasks.
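For what it's worth, that retry arithmetic is easy to sketch. The run time and per-attempt success rate below are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope check of the retry argument, with made-up numbers:
# the big model solves a task in one slow run, while Spark runs 15x
# faster per attempt but fails more often, so some tasks need re-prompts.

big_time = 600.0             # HYPOTHETICAL: seconds for one big-model run
spark_time = big_time / 15   # 40 s per attempt at a 15x raw token rate
p_success = 0.5              # HYPOTHETICAL: per-attempt success rate

# Expected attempts until success (geometric distribution): 1 / p
expected_spark_total = spark_time / p_success  # 80 s on average

print(f"effective speedup with retries: {big_time / expected_spark_total:.1f}x")
# prints "effective speedup with retries: 7.5x"
```

Even with a coin-flip success rate per attempt, the expected wall-clock win in this toy model stays well above 1.37×, which is the commenter's point.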

There are two ways to run this, and I’m curious which is better (in time or quality; either would be interesting): you could run 5.3 xxhigh as the coordinator, spinning up some eager-beaver coders that need wrangling, or you could run Spark as the coordinator and probably the code drafter, farming work out to the big brains where it runs into trouble.

Now that I think about it, corporations use both models as well. It would be nice for the user if the fast coordinator worked well; that reduces turns and ultimately could let you stay in the zone while pairing with a coding agent. But I really don’t know which is better.


>The fair comparison is where the models are basically equivalent in intelligence

I don't agree with this premise. I think it is fair to say that Haiku is a faster model than Opus.


“Haiku is faster than Opus” is fine as a simple statement. But if you’re going to say “15× faster” in a model card, it should be at similar accuracy. Otherwise you’re mostly comparing different settings, as opposed to a technological leap in model performance. It’s not technically wrong, it’s just not very useful and a bit misleading as a headline.

I still disagree. They clearly stated it was a smaller model and scored lower on benchmarks. It was clear that this model is for people who want to trade quality for speed.

Efficiency per token has tanked, but it's still faster. Given this is the first generation on Cerebras hardware, this is the worst it's ever going to be.

When it reaches mainline 5.3 Codex efficiency at this token rate, these kinds of articles will seem silly in retrospect.


Yeah, the progress is still incredibly impressive even if 15× is overstated. Curious to see how far it goes in the future.


