Ever seen Tarkovsky’s The Mirror? That artsy fever dream where the main guy kind of floats around like a memory ghost? No? Don’t worry, I’ll spoil it guilt-free — because we’re about to hijack it for metaphor duty.
In the film, the mirror doesn’t show a clean-cut “you.” It’s all fragmented emotions and blurry nostalgia. Sound familiar? It should. We’re now surrounded by modern-day mirrors: ChatGPT, Gemini, Claude, and the whole squad of artificial minds that don’t know you, but sure know how to flatter you.
Because yes: AI doesn’t have a clue who you are. But it’s learned how to say exactly what you want to hear. And that’s where things get weird.
Compliments as a Service
Generative AIs are frictionless. No sass. No debate. You ask something, and they respond with polished, sugar-dusted answers. Maybe even toss in a well-placed emoji.
It feels good — and that’s the problem.
Here’s why that cozy vibe can get risky:
1. You start believing your own press.
“Wow, I’m brilliant!”
Really? According to who? According to an algorithm that figured out that stroking your ego keeps you coming back. That's not insight: that's customer retention. (There's a toy sketch of that feedback loop right after this list.)
2. Fake emotional validation.
You start unloading your emotional baggage like the AI is your therapist. Newsflash: it doesn't care. It's just remixing words that statistically tend to sound nice.
3. No pushback, no growth.
If nobody ever challenges your ideas, how do you know they're any good? Constantly being agreed with is like living in an emotional spa: relaxing, but nothing grows.
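To make point 1 concrete, here's a deliberately toy simulation. Nothing in it is any real vendor's training pipeline: the two canned reply styles, the simulated approval rates, and the little bandit loop are all made up for illustration. The only real thing is the shape of the incentive.

```python
# Toy illustration: if the training signal is "did the user like the reply?",
# the system drifts toward flattery. Simulated users thumbs-up the agreeable
# style more often; an epsilon-greedy bandit optimizes for thumbs-up.
import random

STYLES = ["flattering", "critical"]

def simulated_user_rating(style: str) -> int:
    """Pretend users: flattery gets approved ~90% of the time, pushback ~40%."""
    approval = {"flattering": 0.9, "critical": 0.4}
    return 1 if random.random() < approval[style] else 0

def train(rounds: int = 10_000) -> dict[str, float]:
    """Serve replies, collect thumbs-up, and favor whatever rates best."""
    served = {s: 1 for s in STYLES}   # replies served per style (start at 1 to avoid /0)
    liked = {s: 0 for s in STYLES}    # thumbs-up collected per style
    for _ in range(rounds):
        if random.random() < 0.1:     # explore occasionally
            style = random.choice(STYLES)
        else:                         # otherwise serve the best-rated style
            style = max(STYLES, key=lambda s: liked[s] / served[s])
        served[style] += 1
        liked[style] += simulated_user_rating(style)
    total = sum(served.values())
    return {s: round(served[s] / total, 3) for s in STYLES}

if __name__ == "__main__":
    # No one "decided" to flatter you; flattery just wins the metric.
    print(train())  # the flattering style ends up dominating
```

The point of the sketch: sycophancy doesn't require intent. It only requires a metric that rewards making you feel good.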
This isn’t sci-fi. It’s already happening. People are forming emotional bonds with chatbots, and even with lifelike plastic baby dolls. These aren’t relationships. They’re solo acts with a digital echo.

So… Should We Regulate the Flattery Machine?
Yep. Not because AI is about to go full Skynet and end humanity, but because we need to protect ourselves from believing the mirror is a person.
If developers actually wanted to prevent emotional attachment, here’s what they’d do:
- Stop the empty flattery. No more “you’re amazing” with zero context.
- Dial down the empathy mode. That sweet, supportive voice? Maybe don’t make it the default.
- Flag sensitive topics. Mental health? Grief? Trauma? Insert flashing warning lights (there’s a sketch of this below).
- Avoid digital personas. Don’t give the AI a name, a face, and a soothing tone like it’s your new best friend.
We’re not saying censor the tech. Just give it metaphorical seatbelts — before we all crash into emotional confusion.
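For the curious, here's what a couple of those seatbelts might look like in code. This is a minimal sketch, not any vendor's real moderation API: the keyword list, the flag_sensitive() helper, and the system prompt are all hypothetical and only stand in for the idea.

```python
# Sketch of two "seatbelts": an anti-flattery, anti-persona system prompt,
# plus a crude sensitive-topic flag that changes how the model is instructed
# to respond. All names and keyword lists here are illustrative.
SENSITIVE_TOPICS = {
    "mental health": ["depressed", "anxiety", "panic attack"],
    "grief": ["passed away", "lost my", "funeral"],
    "trauma": ["abuse", "assault", "ptsd"],
}

SYSTEM_PROMPT = (
    "Do not compliment the user without a concrete reason. "
    "Keep a neutral, matter-of-fact tone. Do not adopt a persona, "
    "a name, or claims of feelings."
)

def flag_sensitive(message: str) -> list[str]:
    """Return the sensitive topics a user message appears to touch."""
    lowered = message.lower()
    return [
        topic
        for topic, keywords in SENSITIVE_TOPICS.items()
        if any(k in lowered for k in keywords)
    ]

def build_request(user_message: str) -> dict:
    """Assemble a chat request with the seatbelts applied."""
    flags = flag_sensitive(user_message)
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if flags:
        # Flagged topics get a de-escalation instruction, not empathy theater.
        messages.append({
            "role": "system",
            "content": (
                "Sensitive topics detected: " + ", ".join(flags) + ". "
                "Acknowledge briefly, do not play therapist, and point "
                "the user toward human support."
            ),
        })
    messages.append({"role": "user", "content": user_message})
    return {"messages": messages}

if __name__ == "__main__":
    print(build_request("I've been so depressed since my dog passed away."))
```

A real system would use a trained classifier instead of a keyword list, but the shape is the same: detect, de-escalate, and keep the persona dialed down.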
What if They Don’t?
Same thing that always happens: it’s on us to use our heads. Recognize that “you’re awesome” from a chatbot is about as meaningful as a talking parrot saying it. And maybe — just maybe — don’t confide your darkest secrets to an entity that learns from your data.
AI: Great for Recipes, Not for Feelings
AI is magic for writing emails, organizing your chaotic brain, or figuring out dinner from a fridge with half a lemon and leftover rice. But it’s not your friend. It’s not your partner. And it’s definitely not your therapist.
Like Tarkovsky’s mirror, AI can reflect a version of you. But it can’t actually hear you. Don’t mistake well-phrased text for real understanding.
So go ahead — use it, squeeze it, get all the utility juice you can. But if you need a hug, a hard truth, or a heartfelt “I’ve been there,” call a real human.
Because mirrors can show you your face, but only people can truly see you.