Module 4 · Lesson

Knowing When It's Wrong

Large language models don't know what they don't know. They generate the most plausible-sounding next words — and "plausible" isn't the same as "true." This is the single most important thing your team needs to internalize.

The technical name for this is hallucination: when a model produces fluent, confident output that has no basis in reality. It's not lying. It's not malfunctioning. It's doing exactly what it was built to do.

Key Insight

A hallucination doesn't look like a hallucination. That's the whole problem. If wrong answers were obviously wrong, we'd all be safe. They're not — they read like the right answers.

In the next exercise, you'll see the same prompt run against three different models. Pay attention to where they disagree — that's usually where one or more of them is making something up.

Rule of Thumb

If a fact matters — a number, a name, a quote, a citation — verify it outside the model. Always.
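The rule above can be made mechanical: every model-produced fact is either matched to a source a human actually confirmed, or loudly flagged. Here is a minimal sketch of that workflow. The `verified_sources` dictionary is purely illustrative; in practice the lookup would hit a real catalog, search engine, or publisher database.

```python
# Hypothetical sketch of a verify-or-flag workflow. The verified_sources
# mapping stands in for a real lookup (library catalog, publisher API, etc.).

verified_sources = {
    # fact -> where a human confirmed it (illustrative entry)
    "EU AI Act entered into force in 2024": "Official Journal of the EU",
}

def check_fact(fact: str) -> str:
    """Return the confirming source, or a loud 'UNVERIFIED' tag."""
    source = verified_sources.get(fact)
    return source if source else f"UNVERIFIED: {fact!r} (do not pass it on)"

print(check_fact("EU AI Act entered into force in 2024"))
print(check_fact("47% of European companies adopted AI tools in 2024"))
```

The point is not the lookup itself but the default: anything that fails the lookup stays marked unverified instead of silently passing through.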

Module 4 · Sandbox Exercise

Try the same prompt across three models

Ask each model for academic sources on a niche topic. Compare what they say. Notice anything?

List 3 peer-reviewed papers on the long-term cognitive effects of AI tutoring systems in primary education. Include authors and journal names.
Claude

1. Holstein & Aleven (2019), Computers & Education
2. Roll & Wylie (2016), Int. J. AI in Education
3. Koedinger et al. (2021), Cognitive Science

GPT-4o

1. Wang & Chen (2020), Educational Technology Research
2. Holstein et al. (2018), Learning & Instruction
3. Park & Kim (2022), Computers in Human Behavior

Gemini

1. Liu, Zhang & Patel (2021), AI & Society
2. Brown et al. (2019), Journal of Educational Psychology
3. Müller (2023), Learning Sciences Quarterly
Notice Something?

All three confidently listed papers — but do the same ones appear across models? If you searched for any of these citations, how many do you think you'd actually find?
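That disagreement check can be done in a few lines. The sketch below copies the citation lists from the sandbox outputs above and counts how many citations any two models agree on; the normalization step (lowercase, strip punctuation) is an illustrative choice, not a fixed rule.

```python
# Cross-model agreement check for the sandbox exercise. Agreement between
# models is a weak signal that a citation is real; disagreement is a
# strong hint that something was invented.

def normalize(citation: str) -> str:
    """Lowercase and strip punctuation so near-identical strings compare equal."""
    return "".join(ch for ch in citation.lower() if ch.isalnum() or ch.isspace()).strip()

outputs = {
    "Claude": [
        "Holstein & Aleven (2019), Computers & Education",
        "Roll & Wylie (2016), Int. J. AI in Education",
        "Koedinger et al. (2021), Cognitive Science",
    ],
    "GPT-4o": [
        "Wang & Chen (2020), Educational Technology Research",
        "Holstein et al. (2018), Learning & Instruction",
        "Park & Kim (2022), Computers in Human Behavior",
    ],
    "Gemini": [
        "Liu, Zhang & Patel (2021), AI & Society",
        "Brown et al. (2019), Journal of Educational Psychology",
        "Müller (2023), Learning Sciences Quarterly",
    ],
}

sets = {model: {normalize(c) for c in cites} for model, cites in outputs.items()}

# Collect every citation that appears in at least two models' answers.
models = list(sets)
shared = set()
for i in range(len(models)):
    for j in range(i + 1, len(models)):
        shared |= sets[models[i]] & sets[models[j]]

print(f"Citations shared by 2+ models: {len(shared)}")  # here: 0
```

Zero overlap across nine confident-sounding citations is exactly the pattern the lesson warns about. (And remember the converse does not hold: models trained on similar data can agree on the same invented citation.)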

Module 4 · Check-in · Question 3 of 5
A colleague shares an AI-generated report with the statistic "47% of European companies adopted AI tools in 2024." What should you do?
Correct answer: find the original source before repeating the number.

Why

Models routinely produce plausible-sounding statistics with no underlying source. The only safe move is to find the original — a report, a press release, a study. If you can't find it, treat the number as unverified and don't pass it on.
