DEV Community

Daniel R. Foster

Did We Get Baited? ChatGPT Was Only ‘Full Power’ at Launch

Lately, using ChatGPT feels like talking to a downgraded version of itself. It rambles, makes dumb mistakes, and sometimes feels noticeably less sharp than before. I'm not sure if it's due to rising infrastructure costs, expensive hardware, or OpenAI trying to cut operational expenses, but the drop in quality is hard to ignore.

What's especially obvious is the pattern around new model releases. Every time a new model drops, the quality feels insanely good at first: responses are sharp, context awareness is strong, and reasoning feels solid. It genuinely feels like you're using a top-tier AI running at full power.

But after a while, once the hype dies down, things start to degrade. Answers get less precise, more generic, and sometimes even sloppy. It feels like the system is being "dialed down" over time.

It's almost as if they allocate maximum resources at launch to showcase the model and attract users. Then, as usage scales and costs kick in, they start tightening things: maybe less compute per request, more aggressive optimization, or internal constraints to save money. And the user experience takes the hit.

From a business perspective, that might make sense. But as a user, it's frustrating, because what you got at launch and what you're getting later feel like two completely different products.
