Hallucination is the tendency of LLMs to generate plausible-sounding but false information — stated with the same confidence as accurate facts.
Why Hallucinations Happen
- LLMs are trained to predict plausible next tokens, optimizing for fluency rather than factual accuracy
- They have no built-in mechanism for checking claims against a source of truth
- They generalize from patterns in training data even when the specific knowledge a question requires is absent
High-Risk Hallucination Domains
- Legal citations (lawyers have been sanctioned for filing briefs that cite AI-hallucinated cases)
- Medical dosages and drug interactions
- Financial data and statistics
- Historical facts and biographical details
Mitigation
Retrieval-augmented generation (RAG) substantially reduces hallucinations by grounding responses in retrieved, verified documents instead of relying solely on the model's internal knowledge. A minimal sketch of the flow follows.
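The sketch below shows the basic pattern in Python: retrieve the passages most relevant to the question, then constrain the prompt so the model answers only from that retrieved context. The document strings, the keyword-overlap retriever, and the prompt wording are illustrative assumptions rather than any specific library's API; a production system would swap in an embedding-based retriever and an actual LLM call where the final print appears.

```python
# Minimal RAG sketch: retrieve supporting passages, then ground the prompt in them.
# The toy documents, naive retriever, and prompt template are illustrative placeholders.

from typing import List

DOCUMENTS = [
    "Acetaminophen: the usual adult dose is 325-650 mg every 4-6 hours.",
    "Ibuprofen: the usual adult dose is 200-400 mg every 4-6 hours.",
    "Warfarin interacts with ibuprofen and other NSAIDs, raising bleeding risk.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by keyword overlap with the query (stand-in for a real retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str, passages: List[str]) -> str:
    """Instruct the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What is the usual adult dose of ibuprofen?"
    passages = retrieve(question, DOCUMENTS)
    prompt = build_grounded_prompt(question, passages)
    print(prompt)  # This grounded prompt would then be sent to the LLM (call not shown).
```

The instruction to answer only from the retrieved context, and to admit when that context is insufficient, is what curbs fabricated specifics; the retrieval mechanism itself can be as simple or sophisticated as the application requires.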