Stop Calling It a “Hallucination”: What AI Errors Reveal About Writing, Work, and Education
We’ve all seen it by now: an AI confidently generates a fake citation, invents a statistic, or spins a story that never happened. The common term for this is hallucination, a word that frames the machine as simply conjuring up wild visions. But what if that’s the wrong metaphor?
A piece published on Medium a few days ago argues that so-called hallucinations are better described as distance calculations gone wrong. Large language models don’t “dream” in the human sense; they calculate the most likely next word based on statistical distance in high-dimensional space. When there’s a gap in the data or ambiguity in the query, they don’t panic; they just guess. And sometimes that guess lands far from the truth.
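To make the “distance” framing concrete, here is a toy sketch of my own (the vocabulary and vectors are invented, and no real model works on five hand-picked words): a query lands in a sparsely covered region of the space, and the nearest candidate still gets chosen, no matter how far away it is.

```python
# Toy illustration of "distance calculations gone wrong" (not a real model;
# the vocabulary and vectors below are invented for the example).
import numpy as np

rng = np.random.default_rng(0)

# Pretend embedding space: a handful of candidate "next words" as vectors.
vocab = ["citation", "1987", "Smith et al.", "approximately", "unknown"]
embeddings = rng.normal(size=(len(vocab), 8))

def pick_next(query_vec, embeddings, vocab):
    """Return the candidate whose vector is closest to the query (cosine distance)."""
    sims = embeddings @ query_vec / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query_vec)
    )
    best = int(np.argmax(sims))
    return vocab[best], 1.0 - sims[best]  # distance = 1 - cosine similarity

# A query that falls in a sparsely covered region of the space:
query = rng.normal(size=8) * 5.0

word, distance = pick_next(query, embeddings, vocab)
print(f"chosen: {word!r}, distance from query: {distance:.2f}")
# Even when every candidate is far away, *something* is still chosen.
```

The point isn’t the code; it’s that “pick the nearest thing” has no built-in notion of “too far away to answer.”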
That’s a much less mystical framing: not spooky visions, just math under strain. Perhaps the most important thing about distance calculations gone wrong? They’re impossible to avoid, and they will always be part of how these systems work. So what does this mean for how we think about writing, especially research writing, in the age of AI?
What AI Errors Reveal About Research Writing
When you ask an AI to generate a research paper, you’re not tapping into a deep well of understanding. You’re prompting a pattern-matching engine. And in contexts that demand precision—like scholarship—that’s risky.
Here’s why:
Gaps in the training data
If your paper requires an obscure citation, nuanced historical detail, or a niche dataset, the AI may not have “seen” it before. So it synthesizes something plausible. That’s not creativity; that’s miscalculated distance.
Overconfidence as default
Even when the model is uncertain, it outputs an answer with fluency and polish. Unlike a student who might say, “I’m not sure, I need to check,” the AI rarely withholds (the toy sketch after this list shows why).
Mismatch with scholarly standards
Academia thrives on careful sourcing, reproducibility, and intellectual honesty. AI thrives on sounding right. Those are not the same thing.
Fluency masks failure
The biggest trap: a sentence that sounds good but is conceptually empty or factually wrong. Unless the reader has a strong grasp of the subject, they may not notice.
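Here is a second toy sketch (again my own illustration, with invented citations and numbers rather than real model outputs) showing why fluency is the default: a single decoding step turns even nearly flat scores into one confidently stated answer.

```python
# Why fluency masks uncertainty: one toy decoding step (illustrative only;
# the candidate citations and logits below are invented).
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

candidates = ["Smith (2019)", "Jones (2020)", "Lee (2018)", "Brown (2021)", "Patel (2017)"]

# Nearly flat scores: the model has almost no basis for preferring one citation.
logits = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
probs = softmax(logits)

top = int(np.argmax(probs))
print(f"output: {candidates[top]}  (probability {probs[top]:.0%})")
# -> the single most likely option is emitted as confident prose, even though
#    it barely beats the alternatives. There is no built-in "I'm not sure,
#    I need to check" step; that judgment has to come from the reader.
```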
In other words, AI isn’t misbehaving when it invents a citation. It’s doing what it was built to do—fill gaps with probabilities. The problem is that research writing doesn’t reward “close enough.”
From Research Papers to “Workslop”
This problem doesn’t end in the classroom. It’s already spilling into the workplace.
Harvard Business Review recently published a piece with a striking headline: “AI-Generated Workslop Is Destroying Productivity.” The term “workslop” captures the deluge of low-quality drafts, memos, and reports that AI can churn out. This content looks complete but requires significant cleanup.
The promise of AI in the workplace was speed. But speed without accuracy or judgment can become a time sink. Companies are finding that workers spend hours editing, verifying, and restructuring AI outputs. In fact, “a recent report from the MIT Media Lab found that 95% of organizations see no measurable return on their investment in these technologies.” The result: productivity gains vanish, and sometimes efficiency actually drops.
Why? Because workers who lack strong foundational skills—writing, critical thinking, domain knowledge—struggle to tell good output from bad. They accept mediocrity, or worse, fail to recognize errors at all. AI becomes a crutch rather than a tool.
The Education Connection
Education can step in here, not as an abstract virtue, but as the bedrock of effective AI use.
Domain knowledge lets you see when the AI veers off course.
Critical reading teaches you to interrogate claims, not just nod along.
Writing skills help you shape, refine, and structure ideas instead of settling for the AI’s first draft.
Statistical literacy allows you to understand uncertainty and bias in model outputs.
Ethical reasoning reminds us that not everything that can be automated should be.
Without these foundations, AI is less like an assistant and more like a saboteur: flooding workflows with polished nonsense. With them, AI can become a powerful partner—drafting outlines, brainstorming counterarguments, accelerating revision cycles, or simulating alternative perspectives.
Beyond Tools: A Call for Integration
So where does this leave us? A few takeaways for both students and professionals:
Rethink metaphors. AI doesn’t “hallucinate.” It calculates and misses. Treating it as mystical makes us too trusting; treating it as math keeps us critical.
Use AI as scaffolding, not oracle. Ask it to help you brainstorm or reframe, but keep final judgment—and truth claims—firmly human.
Invest in foundational education. The better we are at reading, writing, and reasoning, the better we can harness AI without drowning in “workslop.”
Redefine productivity. It’s not about how many words we generate—it’s about how many useful words survive revision.
Building a Future With, Not Against, AI
The conversation around AI too often splits between hype and fear: AI will either save us or destroy us. The reality is more mundane but more urgent: AI will amplify whatever skills we bring to it.
If we bring rigor, discernment, and foundational knowledge, it can accelerate high-quality work. If we bring complacency and overreliance, it will bury us in polished junk.
Education isn’t just preparation for a world with AI. It’s the difference between being empowered by these tools and being undermined by them.
So, the next time someone says AI “hallucinated,” remember: the real hallucination would be thinking we can outsource thinking itself.
