A few weeks ago, a small storm broke out on X (formerly Twitter—but let’s be honest, I still call it Twitter) over the origin of a protest photo. And as usual, Grok—the platform’s built-in AI—jumped into the middle of it, adding more confusion than clarity.
What happened?
This time, the debate centered around photos related to protests in Los Angeles in June 2025. One image showed National Guard troops sleeping on the ground.
Cue Grok.
When asked, the AI confidently identified the photo of the troops as coming from Afghanistan in 2021, specifically during Operation Allies Refuge.
But it was wrong.
A reverse image search revealed that the troop photo had been taken recently in downtown L.A., not overseas.
Still, Grok stuck to its version, even when corrected with credible sources. Only after extended back-and-forth and user pressure did the AI adjust its response.
What Does CAPTCHA Have to Do With It?
We’ve all had to solve them: CAPTCHAs, those blurry words, distorted numbers, or endless grids of fire hydrants.
The acronym stands for: Completely Automated Public Turing test to tell Computers and Humans Apart.
Their job? Simple: To verify that the person on the other side of the screen is a human, not a bot.
And why do they work? Because the human brain can do something that machines still struggle with: recognize and interpret visual details, even when they’re messy or unclear.
Machines do it too—but differently. Instead of seeing the way we do, they rely on statistical patterns and massive training sets.
There’s even a term for how eager our brains are to play this trick: pareidolia, the tendency to find familiar shapes in clouds, faces in electrical sockets, or animals in ink blots.
CAPTCHAs tap into this deeply human skill. They’re hard for bots, not because they’re complex, but because machines don’t “see” the way we do.
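For the curious, here’s roughly what that “verify you’re human” step looks like from the website’s side. This is a minimal sketch assuming a reCAPTCHA-style flow; the secret key and the token variable are placeholders, not anything X or Grok actually uses:

```python
# Minimal sketch of server-side CAPTCHA verification (reCAPTCHA-style).
# Assumption: SECRET_KEY is issued by the CAPTCHA provider and
# client_token is what the browser sends back after the challenge.
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key"  # placeholder

def is_probably_human(client_token: str) -> bool:
    """Ask the CAPTCHA provider whether this token came from a human."""
    resp = requests.post(
        VERIFY_URL,
        data={"secret": SECRET_KEY, "response": client_token},
        timeout=5,
    )
    result = resp.json()
    # "success" means the challenge was passed; invisible variants also
    # return a behavioral "score" between 0.0 (bot-like) and 1.0 (human-like).
    return result.get("success", False) and result.get("score", 1.0) >= 0.5
```

Note the word "probably" in the function name: even the verification step is a statistical bet, not a certainty.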

Grok’s Image Mistake Is the CAPTCHA Problem in Reverse
AI systems have gotten pretty good at beating CAPTCHA in recent years. Some models now solve them faster than humans. That’s why CAPTCHA has evolved—sometimes becoming invisible and relying on behavior instead of visual tests.
But Grok’s protest-photo fail shows us something else: Even when AI is not asked to “beat” a challenge, it can still misinterpret what it sees.
It saw people sleeping—and imagined a different war, on a different continent.
Why? Because it doesn’t “know” in the human sense. It just makes the most probable guess, based on training data. That’s how you get confident-sounding answers that are completely false.
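To see what “most probable guess” means in practice, here’s a toy sketch. The labels and scores are invented for illustration; the point is that the arithmetic behind a model’s confidence never checks whether the answer is true:

```python
# Toy illustration: a model's "confidence" is just the largest probability
# after a softmax over its raw scores. Labels and numbers are invented;
# high confidence carries no guarantee of truth.
import numpy as np

labels = ["Kabul airlift, 2021", "Los Angeles protest, 2025", "military exercise"]
raw_scores = np.array([6.1, 3.8, 1.2])  # hypothetical logits from a model

probs = np.exp(raw_scores) / np.exp(raw_scores).sum()  # softmax
best = int(np.argmax(probs))

print(f"Model says: {labels[best]} ({probs[best]:.0%} confident)")
# -> "Model says: Kabul airlift, 2021 (90% confident)" ... and still wrong.
```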
And when we trust that confidence too quickly, the mistake is ours, not the machine’s.
Conclusion: Use Your Brain (Still)
Artificial intelligence can save us time and help us sort through complex information—but it can’t replace human judgment. At least, not yet.
What’s dangerous is that we’re often tempted to believe the machine just because it sounds certain.
And when it tells us something we already suspected, well, confirmation bias kicks in. We nod and move on, thinking, “See? Even AI agrees with me.”
But that doesn’t mean it’s right.
Grok didn’t prove the photo was from Afghanistan. It simply confirmed a narrative someone already wanted to believe: not merely that the image wasn’t from Los Angeles, but that it couldn’t have been.
Maybe I don’t fall into that particular trap, but I’m sure I fall into others. Because when something as “objective” as a machine tells me I’m right, the temptation to believe it is as human as that thing is artificial.