Reacting to: ChatGPT voice mode is a weaker model

Look, I get it. You talk to the voice assistant and assume you’re chatting with the best model in the building. Then it fumbles a basic question and you start side‑eyeing the whole product. Simon’s post nails the uncomfortable reality: the “coolest” interface is often the weakest brain behind the glass. Here’s the link if you want the source: https://simonwillison.net/2026/Apr/10/voice-mode-is-weaker/

I’ve seen this up close. In dev land, a top‑tier model will spend 45 minutes untangling a nasty Express.js mess and spit back a clean refactor. Then you open voice mode and it can’t remember what you said 30 seconds ago. That gap isn’t a bug; it’s priorities. B2B work has tests, revenue, and clear rewards. Voice chat has vibes and ambiguity. Guess which one gets the training budget.

So yeah, the UI is lying to you a little. It looks like the smartest thing because it feels human, but it’s a different model with a different job. If you care about results, pick the endpoint, not the interface.
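What does "pick the endpoint" actually look like? A minimal sketch, assuming an OpenAI‑style chat completions API; the model name here is a placeholder, so swap in whichever model your provider actually documents as strongest for your task:

```python
import json

# Standard chat-completions endpoint (OpenAI-style; check your provider's docs).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o") -> str:
    """Serialize a chat request that pins the model explicitly.

    The point: YOU name the model in the request body, instead of
    accepting whatever the voice UI silently routes you to.
    "gpt-4o" is a placeholder, not a recommendation.
    """
    payload = {
        "model": model,  # explicit choice, not an interface default
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_request("Refactor this Express.js handler...")
print(json.loads(body)["model"])  # the model you picked, not a silent default
```

POST that body (with your API key) and you know exactly which brain answered. The voice UI gives you no such guarantee.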

P.S. If your “assistant” feels dumb, you might just be talking to the wrong model. Been there.
