WHY AI CAN'T REASON (AND WHAT THAT ACTUALLY MEANS): EXPLAINED FOR OLDER ADULTS
Why AI can sound logical but isn't actually thinking, and what that means in practice.
Introduction
One of the most persistent myths about AI is that it can reason. It can't, and understanding why it can't (and what that limitation actually means) is crucial to using it properly.
AI can produce text that looks like reasoning: walking you through a problem step by step, laying out premises and conclusions, even seeming to follow logic. But that's not the same as reasoning. It's mimicry. It's just a copycat, really!
What reasoning actually is
Reasoning is the ability to work through a problem logically, which means understanding premises and drawing conclusions from them. It means knowing that if X is true, and Y follows from X, then Y must also be true.
Humans reason because we don't just pattern-match. Most of us actually think through implications, recognise contradictions, and adjust our thinking when we realise we've made a mistake. We understand cause and effect rather than just correlation, and reasoning requires a model of the world where you grasp what things mean instead of just what words tend to appear together.
AI doesn't have any of that. It has statistics. (And you know what they say about 'lies' and 'statistics', right?)
What AI does instead
When you ask AI to solve a problem, it doesn't reason through it but instead predicts what a good answer would look like based on patterns in its training data (we covered this in detail in Why AI sounds intelligent but isn't).
If the problem is similar to ones it's seen before, it'll do well, because it's seen thousands of examples of people explaining things step by step and knows what that structure looks like. But it's not thinking; it doesn't have a mind in which to do any thinking. It's still just predicting.
Here's an example: if you ask AI "If all cats are animals, and Fluffy is a cat, is Fluffy an animal?", it'll say yes. Not because it reasoned through the logic, but because it's seen that pattern of argument (a syllogism) in its training data and knows the answer that usually follows. But give it a novel problem that doesn't match a clear pattern it's seen, and as likely as not it will fail, because it can't work through the logic from first principles; it can only guess what an answer would probably look like.
Where this shows up in practice
Give AI a logic puzzle it hasn't seen the pattern for, and it'll often get it wrong by producing an answer that looks plausible and follows a sensible structure but doesn't actually hold up when you check the logic step by step. The conclusion doesn't follow from the premises, or it's made an error that a human would catch immediately.
AI will sometimes contradict itself without noticing, saying "It's always green" in one sentence and "It's always blue" two sentences later, because each sentence is being predicted independently. A human reasoning through a problem would spot the contradiction and fix it; AI doesn't, because it's not holding the whole argument in mind, just predicting what comes next.
AI is particularly bad at understanding causation because it can tell you that two things are correlated (they appear together in its training data) but doesn't understand why one thing causes another. If you ask it to explain why something happened, it'll give you an answer that sounds plausible but might have the causation backwards or confuse correlation with cause entirely.
Ask AI "What would have happened if X had been different?" and it struggles because while it can generate plausible-sounding speculation (it's seen lots of counterfactual reasoning in its training data), it's not actually reasoning through the implications of the change. The result might sound intelligent but be completely wrong about what the actual consequences would have been.
Why "chain of thought" helps (but doesn't solve it)
There's a technique called "chain of thought" prompting: instead of asking AI for an answer directly, you ask it to explain its reasoning step by step before giving the final answer (for example, by adding "Let's work through this step by step" to your question). This often improves performance, sometimes dramatically.
Why does this work? Because AI's training data includes lots of examples of people working through problems step by step, so by prompting AI to follow that structure, you're nudging it toward patterns it's seen that tend to lead to correct answers. The act of generating intermediate steps makes it more likely that the final answer will be right because the structure itself acts as a guide.
This is still prediction, not real reasoning: the AI is generating text that looks like step-by-step thinking. Because that structure happens to reduce certain types of errors, the final answer is more likely to be correct. That makes it genuinely useful (Yay! At last!), but it's a trick rather than a solution. The AI still doesn't understand what it's doing; it's just following a pattern that happens to work more often than not.
What this means for you
The practical upshot is straightforward: don't trust AI with tasks that require reasoning unless you're checking its work carefully.
Don't use AI for legal analysis unless a qualified lawyer is reviewing everything it produces, or for medical diagnosis unless a doctor is verifying the conclusions. Don't use it for complex decision-making (hiring decisions, investment strategies, business strategy) where the cost of getting it wrong is high and the reasoning needs to be sound.
You can use AI for explaining concepts, as long as you're able to verify that the explanation is correct and makes sense (as I'm doing here, all the time!). You can use it for brainstorming ideas, because you're not trusting it to be right; you're just using it to spark your own thinking. And you can use it for drafting arguments or analyses, which you then review, check, and fix yourself.
The key principle is this: AI can be useful in tasks that involve reasoning if you treat it as a tool that generates suggestions rather than a tool that thinks. Use it to speed up your own reasoning, to give you a framework or starting point, or to show you what a solution might look like. But don't use it to replace your reasoning, because it can't do that.
Why this won't be fixed soon
Some people assume that as AI gets better (more data, bigger models, smarter training) it'll eventually develop the ability to reason. Maybe, but it's not a given and it's definitely not imminent.
Current AI is fundamentally built on pattern prediction, which is a different approach to problem-solving than reasoning from first principles. You can make pattern prediction better by giving it more data, more parameters, more computing power, and better training techniques, but you're still doing pattern prediction and still generating text based on what's likely to come next rather than working through logical implications.
Building an AI that actually reasons would require a different architecture or a breakthrough in how we approach machine intelligence that we haven't had yet. It might happen (there are researchers working on it) but it's not around the corner, and if someone tells you it is, they're either mistaken or have something to gain from you believing them!
For now, treat AI as a very sophisticated pattern-matcher that's brilliant at some tasks, hopeless at others, and completely incapable of reasoning in any meaningful sense.