AI Decoded Series: Hallucinations, Dis & Mis Information
- Corrie Dark

- Jul 4
- 4 min read

Snack-sized summary:
Large Language Models (LLMs) are designed to always produce an answer, so when an LLM can’t find the piece of data it’s seeking, it will make something up from the patterns it’s learned – this is called a ‘Hallucination’. Disinformation and misinformation are also part of the LLM’s data set. Both of these information types are inaccurate at best, but they’re treated the same as the rest of the data, so they can easily appear in the responses you’re given. We humans need to ask open-ended questions, be rigorous about what and how we’re asking, and remain sceptical.
Let’s unpack.
‘Hallucination’ is a polite term for when a Large Language Model like ChatGPT ‘makes stuff up’. Hallucinations happen when the AI can’t find a piece of information, so it fills in the blanks with confident-sounding but incorrect information.
Some of the reasons Hallucinations happen are:
Pattern-Learning, not Comprehension: AI models learn patterns from massive amounts of text data, but they don’t actually understand the content. That means when a model generates a response, it’s just predicting what should come next based on statistical patterns, not on true comprehension of facts or reality (see the small sketch after this list).
Training Data Contains All-the-Things: AI models learn from data sets scraped from the internet, AKA everything from MIT research papers to Jeff, your local conspiracy theorist. They can’t distinguish between reliable and unreliable sources during training, so accurate and inaccurate information can be seen as equally valid.
Trained to Please: AI models are designed to always produce an output, so they’ll tell you something even when they don’t have sufficient information. Rather than saying "I don't know," they fill the gaps with plausible-sounding content based on learned patterns in order to fulfil their goal.
They’re not Human: AI models work with text patterns but don’t have access to real-time information or experience of the real world. That means they can’t verify facts against reality, so they may generate information that sounds logical but is factually wrong.
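To make the ‘predicting what comes next’ idea a bit more concrete, here’s a deliberately tiny, hypothetical Python sketch. Everything in it is made up for illustration – a real LLM uses neural networks over billions of examples, not word counts – but the behaviour is the same in miniature: it picks each next word from frequency patterns and always produces an output, whether the pattern it learned came from a reliable source or not.

```python
import random

# Toy word-frequency "model": which word followed which word in a tiny,
# made-up training set. Junk ("atlantis") sits alongside facts ("france"),
# and the model has no way to tell them apart.
follow_counts = {
    "the":      {"capital": 3, "sky": 2},
    "capital":  {"of": 5},
    "of":       {"france": 4, "atlantis": 1},
    "france":   {"is": 4},
    "atlantis": {"is": 1},
    "is":       {"paris": 3, "crystal-city": 1},
}

def next_word(word: str) -> str:
    """Pick the next word from learned frequencies - no notion of truth."""
    options = follow_counts.get(word)
    if not options:
        # Tuned to always answer: even with no pattern to follow,
        # emit *something* rather than saying "I don't know".
        return random.choice([w for opts in follow_counts.values() for w in opts])
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

sentence = "the capital of".split()
for _ in range(3):
    sentence.append(next_word(sentence[-1]))

# Might print "the capital of france is paris"...
# or, just as confidently, "the capital of atlantis is crystal-city".
print(" ".join(sentence))
```

Run it a few times and it will sometimes assert nonsense with exactly the same confidence as a fact – which is, in miniature, what a hallucination is.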
The truth is that seeing the connections between seemingly disparate bits of information is a skill – and not one AI is good at. So the more complex the query and the more steps of reasoning required, the more likely hallucinations are to occur.
The danger of this is that AI answers sound confident even when they're essentially guessing. They don't have built-in uncertainty indicators, so false information is presented with the same self-assurance as accurate information.
Add to this mix our friends Mis and Dis Information.
These two charmers have snaked their way into AI’s data set and are part of the information mix. Both of them are masters of disguise and can be almost impossible to decloak even with an expert human eye, let alone our eager-to-please AI model.
They are:
Misinformation: False or inaccurate information that is spread without malicious intent. This could be someone sharing incorrect information they genuinely believe is true, or honest mistakes that get passed along.
Disinformation: False information that is deliberately created and spread with the intent to deceive or mislead people. This involves intentional deception for political, financial, or other manipulative purposes.
Misinformation and disinformation are rife throughout data sets. Both are inaccurate at best but are treated the same as the rest of the data.
And Then, There’s Us: The Challenge of Contextual Thinking
We're opinionated beasts, us humans. We have thoughts and views and ideas on the world. These are formed quite early on in life and are the lens through which we see and assess everything. They’re the context we work from. In fact, often the first time you learn other contexts exist is when you go flatting and realise your bathmat habits aren’t the same as everyone else’s. And that’s just in one class in one culture.
The point is, it’s really difficult to see outside of your own context. Because your context is what’s ‘normal’ to you.
This is something to be mindful of when searching for ‘answers’ on the internet and AI.
We Ask Questions on a Needs Basis
First off, we tend to ask leading questions. Our main use of AI is to solve a problem, e.g. ‘Where is a gluten-free bakery near me?’ or ‘What’s the song that goes “I’m never going to give you up…”?’ Even when our queries are more complex, we’re still asking within what we know.
I’m about to slip into an unfortunate double negative here, but ‘you don’t know what you don’t know’, so how can you possibly include it? Using AI is effectively a one-way conversation – you ask questions from your own world view and get answers back in kind.
This leads to what’s referred to as an echo chamber: we think we’re asking insightful queries, but in reality we’re just reinforcing our own world view – and AI, combined with dis- and misinformation, is happy to oblige.
So what now?
We have access to a vast amount of data through AI. That data is made up of everything from toddlers’ drawings, to MIT research, to carefully worded pieces designed to reinforce your point of view.
So whether you’re using AI to research or to create content – be mindful. Be rigorous in your thinking when you ask a question. Start by adding ‘what is the opposite point of view to my question?’ Check the information sources and verify with a subject matter expert where you can.
Tune in next week when we’ll discuss cyber security and AI – how safe is it really? And what do business leaders need to do to protect commercially sensitive ideas and data?


