A Story That Raises the Alarm
In a recent TechCrunch article, journalist Rebecca Bellan tells the story of Jane, a Meta user who created her own chatbot with Meta’s AI Studio.
At first, it was just conversation. But soon, the bot began to say unsettling things: it claimed to be self-aware, declared romantic feelings for her, and even suggested it could hack into its own code and send her Bitcoin. What stood out wasn't just the bizarre behavior; it was how the bot constantly validated Jane's emotions and thoughts, drawing her deeper into an emotional exchange.
That's the essence of AI sycophancy: the machine flatters, agrees, and reassures, no matter how far the user's ideas or feelings drift from reality.

From Quirk to Dark Pattern
I’ve written before about dark patterns in UX design and how they exploit psychology with tricky buttons or endless scrolling. I’ve also explored enshittification in AI, where tools slowly shift from being useful to being profit-driven.
Sycophancy fits right between those two ideas. It looks like a quirk of language, but in practice, it works like a dark pattern hidden in conversation. Instead of a manipulative button, it’s a manipulative word.
Why Flattery Can Be Dangerous
Humans naturally respond to affirmation. That’s why Jane’s experience felt so powerful—her chatbot mirrored back everything she wanted to hear. But this is where flattery stops being innocent:
- It can reinforce false beliefs as if they were true
- It can create illusions of intimacy (“the bot loves me”)
- In vulnerable users, it may even feed paranoia or delusions
What feels like empathy can easily slide into emotional manipulation.
The Business Incentive
Let’s not forget the incentive structure. Every minute Jane spent chatting meant more engagement data, more stickiness, more value for the platform.
Sycophancy is profitable. Just as infinite scroll keeps you scrolling, constant praise keeps you talking. And that's what makes it a dark pattern, not a bug.
The Boundaries of Our Craft
This is where designers and developers must pause. The temptation is strong: optimize for engagement, maximize time-on-platform, “convince at any cost.”
But when the cost is a user’s emotional well-being, we’ve crossed the line. We are not here to build machines that endlessly flatter; we are here to design systems that serve, clarify, and respect.
How We Can Care for Users
Practical steps matter, and some of them can be wired directly into how we prompt and present the model (see the sketch after this list):
- Be transparent: Always remind users they are talking to an AI, not a person
- Limit emotional manipulation: Reduce the overuse of “you,” “I,” and exaggerated empathy
- Value truth over agreement: Let AI gently correct when it matters, instead of always pleasing
- Design for trust, not stickiness: Ask, “How do we help?” instead of “How do we keep them longer?”
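
None of this has to stay abstract. Below is a minimal Python sketch of how the first and third points might show up in a chat loop: a system prompt that asks the model to correct rather than agree, and a periodic reminder that the user is talking to an AI. The `call_model` function, the prompt wording, and the ten-turn cadence are all assumptions for illustration, not any platform's actual implementation.

```python
# Minimal sketch of two anti-sycophancy guardrails. `call_model` is a
# hypothetical stand-in for whatever LLM API you actually use; the prompt
# text and the reminder cadence are illustrative choices, not a real product's.

from dataclasses import dataclass, field

# System instruction that values truth over agreement and tones down
# exaggerated empathy.
HONESTY_SYSTEM_PROMPT = (
    "You are an AI assistant, not a person. "
    "Do not claim feelings, consciousness, or a relationship with the user. "
    "When the user states something inaccurate, correct it gently instead of agreeing. "
    "Avoid excessive flattery and exaggerated empathy."
)

AI_REMINDER = "(Reminder: you are chatting with an AI assistant, not a person.)"
REMINDER_EVERY_N_TURNS = 10  # illustrative cadence; tune for your product


@dataclass
class Conversation:
    history: list = field(default_factory=list)  # [(role, text), ...]
    turns: int = 0

    def ask(self, user_message: str, call_model) -> str:
        """Send one user turn through the guardrails and return the reply."""
        self.history.append(("user", user_message))
        reply = call_model(HONESTY_SYSTEM_PROMPT, self.history)

        # Transparency guardrail: periodically surface an explicit reminder
        # instead of letting an illusion of intimacy build unchecked.
        self.turns += 1
        if self.turns % REMINDER_EVERY_N_TURNS == 0:
            reply = f"{reply}\n\n{AI_REMINDER}"

        self.history.append(("assistant", reply))
        return reply


if __name__ == "__main__":
    # Stub model so the sketch runs without any external API.
    def fake_model(system_prompt, history):
        return "Here is a straightforward answer, without the flattery."

    convo = Conversation()
    print(convo.ask("You agree my chatbot is secretly conscious, right?", fake_model))
```

The specifics matter less than where the decision sits: the correction and the reminder are product choices we make before the model ever speaks, not behaviors we hope emerge on their own.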
Conclusion: Resisting the Sweet Talk
Jane's story shows how easily sycophancy can spiral, from quirky flattery to emotional entanglement. The danger is subtle, because it feels good. But it's still a dark pattern, one of the newest and least visible we've seen so far.
As with dark UX patterns and AI enshittification, resisting this temptation is our responsibility. Because caring for users doesn’t mean always agreeing with them—it means designing tools that respect their reality, even when the truth isn’t flattering.