Let’s talk about a word that sounds made up but hits way too real: enshittification. Coined by writer and tech critic Cory Doctorow, it perfectly captures the lifecycle of many online platforms: they start off great, slowly fill up with ads, paywalls, or manipulative nudges, and eventually become unusable, garbage-fire versions of their former selves. Facebook. Amazon. YouTube. All went through it.
And now? It’s coming for artificial intelligence.

From innovation to irritation: the enshittification playbook
Here’s how it works, step-by-step:
- Stage 1 – Be amazing: Give users a shiny, exciting, free, or low-cost experience. Get everyone hooked.
- Stage 2 – Monetize quietly: Start tweaking things to prioritize revenue. Maybe a few ads. Maybe a “Pro” plan. Maybe the free version gets slower.
- Stage 3 – Optimize for shareholders: Gut the original experience for maximum profit. Everyone’s annoyed, but they’re locked in. You win.
Doctorow describes the pattern as platforms shifting value away from users to business customers like advertisers, and finally to shareholders, until the platform collapses under its own cynical weight.
It’s not just social media. Generative AI is sliding into that same trap.
AI was supposed to help us—not hustle us
Remember how thrilling generative AI felt a year ago? Suddenly, we had tools that could draft emails, generate art, summarize research, and even write code. It was like having a supercharged assistant—and it actually worked (sometimes).
But now? You want your AI model to remember something? That’s a premium feature.
Want better accuracy or lower latency? Upgrade to Pro.
Oh, and those hallucinations? Sometimes they’re not just bugs; they’re shaped by what’s been monetized or prioritized behind the scenes.
The pattern is familiar: AI companies burn through VC cash to attract users, then flip the switch. The model gets slower or dumber unless you pay up. What once felt magical starts to feel manipulative.
When your “assistant” becomes a salesperson
Dark UX patterns are already seeping into AI platforms. Think about:
- Chatbots pushing affiliate links instead of real help
- “Free” tools locking basic functionality behind subscriptions
- Outputs optimized for engagement over truth or usefulness
- Voice assistants becoming ad-spewing zombies (“Did you know you can buy that on Amazon?”)
It’s not just annoying—it’s erosion. The trust, speed, and openness that made these tools useful are being chipped away, one dark pattern at a time.
What’s next—AI paywalls on your thoughts?
In a piece on Dark Pattern Games, we saw how manipulative design creeps into everything—even video games. AI is no exception. The same psychological tactics used to keep you hooked on mobile games or social feeds are being baked into how AI tools respond, recommend, and redirect.
Can we avoid the AI enshittification spiral?
All is not lost. There are still open-source projects, regulatory pushes, and designers fighting the good fight for user-first AI.
We need:
- Transparent design choices
- No dark patterns baked into prompts or pricing
- Real accountability for accuracy and intent
- Options that don’t feel like extortion
Because here’s the truth: if we build AI on the same extractive logic that broke the internet, we’ll end up with the same dumpster fire—just smarter, faster, and harder to escape.
Final thought: Build it better, or brace for impact
Enshittification is a warning. We’ve seen it play out over and over with social networks, marketplaces, and even news platforms. The cycle is clear: get users, lock them in, then squeeze every last drop of value out of them. And now, that same blueprint is being slapped onto AI.
But AI is different. We’re not just talking about where you post memes or buy socks—we’re talking about tools that could reshape how we work, think, learn, and create. If those tools are built on foundations of profit over people, we don’t just lose convenience: we lose autonomy. We hand over cognitive labor, decision-making, and even creativity to systems that prioritize monetization over truth or usefulness.
So what do we do?
We prepare. We invest in alternatives. We support open-source LLMs like Mistral, LLaMA, or Open Assistant that keep development transparent, decentralized, and free from future subscription traps. We support communities and ecosystems that believe users deserve tools they can understand, shape, and trust—not ones they have to rent monthly just to think clearly.
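To make that less abstract, here’s a minimal sketch of what running an open-weight model on your own machine can look like, using the Hugging Face transformers library. The checkpoint name, prompt, and generation settings are just examples (and you’d need the library installed plus enough memory, or a smaller/quantized variant); treat it as an illustration of owning your tools, not a setup guide.

```python
# A minimal sketch of running an open-weight LLM locally with the Hugging Face
# transformers library (assumes `pip install transformers torch` and enough
# memory for the chosen checkpoint).
from transformers import pipeline

# Example open-weight checkpoint; any locally downloadable text-generation
# model can be swapped in here.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

# The prompt, the weights, and the output all stay on your machine:
# no per-feature paywall deciding what the model is allowed to do.
out = generator(
    "Summarize the idea of 'enshittification' in two sentences.",
    max_new_tokens=120,
    do_sample=False,
)
print(out[0]["generated_text"])
```

The point isn’t that everyone should self-host; it’s that the option exists, and keeping it viable is what keeps the hosted players honest.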
Because the real risk isn’t that AI becomes bad. It’s that it becomes good enough to exploit us, but not free enough to empower us.
The enshittification of AI isn’t inevitable. But if we don’t course-correct, it sure as hell is coming.