Apply the "too good to be true" test
If an AI response contains a surprisingly specific statistic, a perfect quote, or information that conveniently supports your argument, verify it immediately. AI hallucinations often look compelling precisely because the model generates what sounds right rather than what is right.
Why It Works
Confirmation bias makes us less likely to question information that tells us what we want to hear. Deliberately applying skepticism to convenient answers catches the most dangerous type of AI error: the one you hope is true.
Tips
- Be extra skeptical of specific numbers, percentages, and dates
- Watch for fabricated citations — the AI may invent a plausible-sounding journal article that does not exist
- If you cannot find the claim anywhere else, it is likely a hallucination
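The tips above can be sketched as a simple pre-verification filter. The patterns below are illustrative assumptions, not a complete hallucination detector; they flag the kinds of "too good to be true" details (specific percentages, dates, citation-like strings) that deserve a manual check:

```python
import re

# Heuristic patterns that often accompany suspiciously specific claims.
# These are illustrative, not exhaustive: a hit means "verify before
# trusting", never "this is false".
PATTERNS = {
    "percentage": re.compile(r"\b\d{1,3}(?:\.\d+)?\s?%"),
    "year": re.compile(r"\b(?:19|20)\d{2}\b"),
    "citation": re.compile(r"\bet al\.|\bJournal of\b|\bdoi:", re.IGNORECASE),
}

def flag_for_verification(text: str) -> list[str]:
    """Return the names of heuristics that fire on `text`.

    A non-empty result means the claim should be checked against
    an independent source before you rely on it.
    """
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

A convenient claim like "Exactly 87.3% of users agreed (Smith et al., 2021)." trips all three heuristics, while plain hedged prose trips none; the point is to route the former to a manual source check, not to auto-reject it.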
Created: 3/23/2026, 2:22:33 AM