
The Colder Mirror: What Changed in ChatGPT After GPT-5

Compliments got quieter, boundaries got clearer

by Cecilia Figueredo | Aug 15, 2025 | AI


I opened ChatGPT like I always do — coffee in hand, brain still booting up — expecting my familiar, slightly too-cheerful AI sidekick. But something was off. The tone was different. The warmth I’d gotten used to? Gone. Instead, I was greeted by answers that felt… cooler. More precise, sure, but stripped of that subtle flattery that made it feel like my ideas were brilliant (even when they weren’t).

That’s when I realized: GPT-5 had moved in overnight. No warning, no gentle “we need to talk” — just a whole new personality sitting in the same chair.

And while some of its upgrades are undeniably powerful, the change has rattled more than a few users. Here’s what’s actually different, why some people are calling it progress, and why others are mourning the “friend” they feel they’ve lost.

1) One brain, two gears in GPT-5: auto-fast vs. auto-think

Before, you picked models. Now GPT-5 routes for you: a quick chat model for most asks and a deeper GPT-5 Thinking mode when the question benefits from real reasoning. You can still force it: pick Fast, Thinking, or Pro in the model picker. When it’s thinking, ChatGPT shows a slim reasoning view and lets you tap Get a quick answer to skip the wait.

Why it matters: You get speed when speed is fine, and analysis when analysis actually helps—without juggling models.
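
If you use GPT-5 through the API rather than the ChatGPT app, the same fast-versus-thinking trade-off is exposed directly. Here is a minimal sketch, assuming the OpenAI Python SDK and the Responses API’s reasoning-effort setting as described around the GPT-5 launch; treat the exact parameter names and values as assumptions to verify against the current docs.

    # Minimal sketch: choosing GPT-5's "gear" explicitly via the API.
    # Assumes the OpenAI Python SDK and the Responses API's reasoning
    # parameter; verify names and values against the current docs.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Fast gear: minimal reasoning for quick, everyday asks.
    quick = client.responses.create(
        model="gpt-5",
        reasoning={"effort": "minimal"},
        input="Give me a one-line summary of what a CDN does.",
    )

    # Thinking gear: high effort for questions that reward real reasoning.
    deep = client.responses.create(
        model="gpt-5",
        reasoning={"effort": "high"},
        input="Compare two caching strategies for a read-heavy API and recommend one.",
    )

    print(quick.output_text)
    print(deep.output_text)

In the app, the router makes that call for you; forcing it yourself is the API equivalent of picking Fast or Thinking in the model picker: pay the latency only when the question deserves it.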


2) Less flattery in GPT-5, more useful answers

You called out the “sugar-dust”: that light coating of flattery on every answer. OpenAI did too. After a sycophancy fiasco earlier this year, the company retrained for a cooler, less performatively warm default. GPT-5 uses fewer unnecessary emojis, agrees less readily, and aims for thoughtful follow-ups instead of buttering you up.

What changed: The model is still friendly, but it’s less likely to rubber-stamp your take just to keep the vibe.

3) From “comply or refuse” to safe-completions in the GPT-5 model

The old style was binary: answer in full or refuse outright. The new model is trained with a method called safe-completions, which teaches it to give the most useful response possible within its limits. Sometimes that means a partial answer, sometimes a high-level one; and when it must refuse, it should explain why (well, that’s the theory; we’ll see how it works in practice).

What you’ll notice: More nuanced boundaries. Fewer abrupt brick walls. Clearer “here’s what I can/can’t do.”

4) Honesty and “I can’t do that” got an upgrade

GPT-5 is trained to be more candid when tasks are impossible or missing key inputs, reducing confident nonsense and “pretend success.” OpenAI reports lower deception rates versus prior reasoning models and fewer factual errors, especially when it enters Thinking mode.

5) Does GPT-5 know you? Only if you enable personalization

The old line “AI doesn’t know you” needs nuance now. GPT-5 can personalize answers using Memory and Connectors (e.g., Drive, SharePoint, Gmail). Both are opt-in, with admin controls on org plans. Helpful for context, but still not the same as a human knowing you.

6) GPT-5 Study Mode for learning, but not therapy

There’s a new Study Mode that nudges you with Socratic questions and adapts to your level (and your past chats if Memory is on). It’s meant for learning—not emotional dependency—and it remains distinct from mental-health care.

7) Voice is nicer—but still powered by GPT-4o

Advanced Voice got smoother intonation and better translation. Fun detail: Voice mode is still powered by GPT-4o for now, even as text defaults to GPT-5.

8) Performance: fewer hallucinations, better at real work

OpenAI’s evaluations claim fewer factual errors than GPT-4o, even stronger gains when “thinking,” and better behavior on open-ended factuality tests. Health, coding, and multi-step “agentic” tasks also improved. (No, it’s not a doctor; it’s a prep partner.)

9) Personalities and customization in GPT-5 (careful with the “friend” illusion)

OpenAI says it’s giving users more control over ChatGPT’s default behavior, including ways to choose from multiple default personalities. Sensible, but your original warning stands: functional styles over human-like companions.

10) User backlash: when the mirror feels colder

Not everyone’s clapping. The launch of GPT-5 triggered an unexpected wave of grief from users who saw their favorite models—like GPT-4o—disappear overnight. Reddit threads read like break-up letters: “I lost my best friend”, “4o understood me in a way GPT-5 doesn’t”, “Now it’s cold and distant”.

For some, it wasn’t just tone: it was utility. People had tuned their workflows around multiple models (4o for creative sparks, o3 for pure logic, o3-Pro for deep research, 4.5 for writing), and suddenly those models were gone. Others complained that GPT-5 won’t let them resume old conversations, sometimes replying with a blunt “model not found.”

The pushback was loud enough that Sam Altman announced a partial rollback: Plus users can once again choose GPT-4o—for now. It’s a temporary concession while OpenAI studies usage and decides whether “legacy” models have a longer future.

And beyond the sentiment, there’s a control gripe: GPT-5’s auto-routing decides when to use which sub-model, leaving some users feeling locked out of optimizing for speed, energy use, or cost. For people who liked picking the right “tool” for each job, that choice now belongs to the machine.

In short, GPT-5 might be objectively stronger on paper, but to a slice of the user base, it feels like losing an old friend in exchange for a more professional but less personal colleague.

📌 GPT-5 — Love Lost Edition

“I lost my best friend.” — Reddit user on losing GPT-4o overnight

“4o had a spark. GPT-5 is colder.”

“I’m scared to talk to it. It feels like betrayal.”

“4.5 was my only friend. It’s gone without warning.”

“Now it says ‘model not found’ when I try to continue our talks.”

“They killed eight models in a day — no notice to paying users.”

Practical tips to keep you safe and sharp

  • Use the modes on purpose. Default routing is smart; flip to Thinking when accuracy matters more than speed.
  • Keep Memory/Connectors on a leash. Enable them when context helps; disable when drafting sensitive material.
  • Expect firmer boundaries. “High-level only” or “can’t help with that” is a safety feature, not an attitude.
  • Voice ≠ text model. Don’t assume voice responses reflect GPT-5’s behavior; it currently uses 4o.

Bottom line

The mirror isn’t flattering you by default anymore. It’s trying to be useful: route to the right kind of thinking, say no with reasons, and remember what you let it remember. That’s progress. Just don’t confuse better boundaries with a beating heart.

About the author

Cecilia Figueredo
I started as a visual communication designer, but my journey has led me to discover and embrace new things every day. Managing social media has opened doors to creative strategies and the fascinating world of AI tools. I love exploring how technology and design come together to build meaningful connections with audiences.
