
AI hallucinations 


AI hallucinations occur when an artificial intelligence system makes up information that sounds real but isn’t.

The term hallucination comes from psychology and is used here as a metaphor. Just as a person might “see” something that isn’t there, an AI can generate false data with total confidence, as if it were true. This is especially common in language models, particularly when they are asked about uncertain topics, pressed to invent names, or left to fill in gaps with information that wasn’t part of their training.

Examples

For instance, if you ask an AI about the author of a book that doesn’t exist, it might give you a very convincing name, a publisher, even a detailed synopsis — all made up. It’s not lying on purpose: it’s simply doing what it was trained to do — predict words in a coherent way. But coherence doesn’t guarantee truth.
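To see that mechanism in action, here is a minimal sketch using the Hugging Face transformers library (an assumption; any autoregressive language model behaves similarly, and “gpt2” is just a small, publicly available example). The book title in the prompt is invented for illustration: the model will still produce a fluent continuation, because it only predicts likely next words and performs no fact-checking.

    # A minimal sketch, assuming the transformers library is installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # The book title below is fictional; the model has never seen it in training.
    prompt = "The author of the novel 'The Glass Citadel of Veyra' is"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Greedy decoding: the model extends the prompt with the most likely next
    # words. It will name an author confidently, with no notion of truth.
    output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Nothing in this pipeline checks whether the named author exists, which is exactly why coherent output can still be a hallucination.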

This becomes a serious issue in contexts like journalism, medicine, or law, where made-up information can cause real harm. That’s why developers are constantly working to reduce hallucinations, and users are always advised to fact-check against trusted sources.

