
Hallucination

When an AI model generates false or fabricated information presented as fact.


Definition

Hallucination refers to AI models producing content that sounds plausible but is factually incorrect, fabricated, or nonsensical.

Why Hallucinations Happen:

  • Models predict probable next tokens, not truth (see the sketch after this list)
  • Training data may contain errors
  • Models lack real-world grounding
  • Overconfidence in generation
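
A minimal sketch of the first point, using made-up logits for continuations of a prompt: the model converts scores into a probability distribution and samples from it, and nothing in that process checks which continuation is factually true. The prompt, tokens, and numbers here are purely illustrative assumptions.

```python
import math
import random

# Hypothetical, made-up logits for continuations of
# "The Eiffel Tower was completed in ..." (numbers are illustrative only).
logits = {"1889": 2.1, "1887": 1.4, "1901": 0.9, "next week": -3.0}

def softmax(scores):
    """Turn raw scores into a probability distribution over tokens."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(logits)
# Sampling picks a continuation in proportion to probability; nothing here
# checks which year is actually correct, so wrong answers keep nonzero odds.
next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)
print("sampled:", next_token)
```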

Types of Hallucinations:

  • Factual Errors: Wrong dates, names, statistics
  • Fabricated Sources: Made-up citations or quotes
  • Logical Inconsistencies: Self-contradicting statements
  • Confident Nonsense: Plausible-sounding gibberish

Mitigation Strategies:

  • Retrieval-Augmented Generation (RAG), sketched below
  • Fact-checking pipelines
  • User verification
  • Temperature reduction
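
The sketch below shows the RAG idea in miniature: retrieve relevant passages and instruct the model to answer only from them. The toy corpus, the word-overlap retriever, and the prompt wording are assumptions for illustration, not any particular library's API.

```python
# Toy retrieval-augmented generation (RAG) prompt builder.
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    return sorted(corpus,
                  key=lambda doc: len(q_words & set(doc.lower().split())),
                  reverse=True)[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages and instruct the model to stay inside them."""
    context = "\n".join(retrieve(query, corpus))
    return ("Answer using only the context below. If the answer is not in "
            "the context, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

corpus = [
    "The Eiffel Tower was completed in 1889 for the Exposition Universelle.",
    "The Statue of Liberty was dedicated in 1886 in New York Harbor.",
]
print(build_grounded_prompt("When was the Eiffel Tower completed?", corpus))
```

Temperature reduction works on the sampling side instead: dividing logits by a temperature below 1 before the softmax sharpens the distribution toward the model's highest-probability continuation, trading diversity for consistency.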

Examples

An LLM citing a non-existent research paper or inventing historical events.
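A hedged illustration of how a fact-checking step might catch the first kind of failure: compare cited titles against a trusted list. The title set and similarity threshold below are placeholder assumptions; a real pipeline would query a bibliographic database instead.

```python
from difflib import SequenceMatcher

# Trusted titles and the similarity threshold are illustrative placeholders.
KNOWN_TITLES = {
    "attention is all you need",
    "language models are few-shot learners",
}

def looks_fabricated(cited_title: str, threshold: float = 0.85) -> bool:
    """Flag a citation whose title closely matches nothing in the trusted list."""
    cited = cited_title.lower()
    best = max(SequenceMatcher(None, cited, known).ratio() for known in KNOWN_TITLES)
    return best < threshold

print(looks_fabricated("Attention Is All You Need"))        # False: real paper
print(looks_fabricated("Quantum Attention for Reasoning"))  # True: likely invented
```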

