Lesson 8 of 15

AI Hallucinations

When AI confidently makes things up

4 min read


AI models sometimes generate confident, plausible-sounding information that is completely false. This is called hallucination—and it's one of the most important limitations to understand.

What Is a Hallucination?

A hallucination is when an AI:

  • Invents facts that don't exist
  • Cites fake sources or studies
  • Creates fictional quotes from real people
  • Describes events that never happened

The dangerous part: It sounds just as confident as when it's correct.

Why Hallucinations Happen

1. LLMs Are Pattern Matchers, Not Knowledge Bases

LLMs predict likely next words based on patterns. They don't "know" facts—they recognize what sounds plausible.

If "Research by Harvard scientists shows..." often precedes certain types of statements, the model might generate that pattern even when no such research exists.

2. Training Data Limits

Models can't verify information against the real world. Their training data contains claims about facts, not the facts themselves.

Ask about obscure topics → less training data → more hallucination risk.

3. Pressure to Provide Answers

Models are trained to be helpful. Sometimes being helpful means generating an answer when "I don't know" would be more accurate.

Real Examples

Fake Citations:

"According to a 2019 study published in Nature by Dr. James Wilson..."

No such study. No such author. Sounds completely real.

Invented History:

"The Treaty of Westphalia in 1648 established the concept of 'digital sovereignty'..."

The treaty is real. The claim is nonsense.

Fictional Code Libraries:

"You can use the fastparse npm package for this..."

Package doesn't exist (or does something completely different).
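If an AI recommends a package you don't recognize, it only takes a moment to confirm it's real before installing anything. A minimal sketch in Python, using the public npm registry API (swap in PyPI or whatever index you use):

```python
import requests

def npm_package_exists(name: str) -> bool:
    """Return True if `name` is published on the public npm registry."""
    resp = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
    return resp.status_code == 200

# Existence alone isn't proof: a real package may still do something
# completely different from what the AI described, so read its README too.
print(npm_package_exists("some-package-an-ai-suggested"))
```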

How to Protect Yourself

1. Verify Critical Information

Never cite AI outputs without checking sources. Google the claim. Find the original.

2. Ask for Sources

When the AI claims something, ask: "What's your source for this?"

If it provides a source, verify it exists. Often the hallucination extends to the citation.
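For academic citations, one quick check is to search a bibliographic index such as Crossref for the paper the AI described. A minimal sketch in Python, where the query string is just whatever details the AI gave you:

```python
import requests

def search_crossref(citation_text: str, rows: int = 5) -> list[str]:
    """Return titles of works in Crossref that best match a citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# If nothing resembling the claimed paper appears, treat the citation as
# unverified until you can locate the original yourself.
for title in search_crossref("2019 Nature study James Wilson"):
    print(title)
```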

3. Use Retrieval-Augmented Generation (RAG)

RAG systems:

  1. Search real documents first
  2. Give relevant excerpts to the AI
  3. AI answers based on actual sources

This dramatically reduces hallucinations by grounding responses in real data.
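Here's a minimal sketch of that flow. The retriever is deliberately naive (keyword overlap), and `call_llm` is a placeholder stub standing in for whatever model API you actually use:

```python
documents = [
    "The 2023 handbook allows 25 days of paid leave per year.",
    "Remote work requests must be approved by a line manager.",
    "The office is closed on national public holidays.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder -- replace with a real model API call."""
    return f"[model answer, grounded in the prompt below]\n{prompt}"

def answer(question: str) -> str:
    excerpts = retrieve(question, documents)
    prompt = (
        "Answer using only the excerpts below. "
        "If they don't contain the answer, say you don't know.\n\n"
        + "\n".join(f"- {e}" for e in excerpts)
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How many days of paid leave do I get?"))
```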

4. Recognize High-Risk Scenarios

More likely to hallucinate:

  • Obscure topics
  • Recent events (after training cutoff)
  • Specific numbers, dates, quotes
  • Technical details in unfamiliar domains

Less likely to hallucinate:

  • Common knowledge
  • Well-documented topics
  • General concepts
  • Code syntax for popular languages

5. Temperature and Sampling

Lower temperature = less randomness = more conservative output, which tends to mean fewer hallucinations (though it's no guarantee of accuracy).

For factual tasks, use lower temperature settings (0.0-0.3).
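A toy sketch of what the temperature setting actually does to the model's choice. The scores are invented for illustration; real APIs simply expose `temperature` as a request parameter:

```python
import math
import random

def sample_with_temperature(scores: dict[str, float], temperature: float) -> str:
    """Sample one token from a temperature-scaled softmax over raw scores."""
    scaled = {tok: s / max(temperature, 1e-6) for tok, s in scores.items()}
    top = max(scaled.values())
    weights = [math.exp(s - top) for s in scaled.values()]  # numerically stable softmax
    return random.choices(list(scaled), weights=weights, k=1)[0]

# Invented scores for the next word after "The treaty was signed in ..."
scores = {"1648": 2.0, "1748": 0.5, "the digital age": 0.1}

print(sample_with_temperature(scores, temperature=0.2))  # almost always "1648"
print(sample_with_temperature(scores, temperature=1.5))  # unlikely options appear more often
```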

The Psychology of Hallucinations

Why we fall for them:

  • Confidence inspires trust
  • Details make things believable
  • We often can't verify specialized knowledge
  • Confirmation bias—we accept what sounds right

The danger: Students cite fake papers. Lawyers submit fake case law. Doctors get fictional drug interactions. These aren't hypotheticals—they've all happened.

What AI Companies Are Doing

Current approaches:

  • Better training to say "I don't know"
  • Connecting to search for real-time info
  • Citation features that link to sources
  • Confidence scores (experimental)

The hard truth: Hallucinations are fundamental to how LLMs work. They can be reduced but likely not eliminated entirely.

Your Responsibility

As an AI user:

  • Never blindly trust AI for facts
  • Always verify before sharing or acting
  • Be extra careful with high-stakes decisions
  • Warn others when sharing AI-assisted work

The AI is a tool, not an oracle. You're responsible for what you do with its output.


Next up: Open vs Closed AI Models — Understanding the AI ecosystem
