WHAT AI CAN AND CANNOT DO: A GUIDE FOR OLDER ADULTS

The practical capabilities and hard limits of current AI systems.


Introduction

There's a lot of noise about AI - some people think it's about to solve every problem humanity has, while others think it's going to destroy civilisation. Both groups are wrong.

AI is a tool that's very good at some things and terrible at others, and most of the confusion comes from people not knowing which is which.

What AI is good at

AI excels at tasks that involve pattern recognition, prediction, and generation based on existing examples. Here's what that actually looks like in practice.

Summarising and drafting. Give AI a long document and ask for a summary, and it'll do a decent job - not perfect (it sometimes misses nuance or emphasises the wrong things) but much faster than reading it yourself. Same with first drafts: AI can turn "write me an email declining this meeting politely" into something coherent. You still need to edit it, but it's a useful starting point. Trust me - I have edited every single page on this site!

Answering routine questions. If the answer to a question exists in AI's training data and isn't too specific, it'll probably get it right. "What's the capital of France?" - fine. "How do I reset my password on this specific platform?" - fine, if that platform is well-documented online. "What should I do about my legal dispute?" - not fine.

Spotting patterns in data. AI can look at thousands of transactions and flag the ones that look like fraud, or scan medical images and highlight areas that might be tumours. It's not making a diagnosis - it's saying "this looks like the pattern we trained it to recognise" - but a human still needs to check. It's faster than doing it manually though.

Generating plausible text, images, or code. AI can (almost) write a blog post, generate an image of a cat wearing a hat, or produce a chunk of Python code. Whether any of it is good depends on how much you know about the subject and how carefully you check it, but for rough drafts or brainstorming, it's useful.
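For example, ask an AI to "write Python that averages a list of numbers" and you might get something like this (a made-up illustration, not the output of any particular system):

```python
def average(numbers):
    # Add the numbers up, then divide by how many there are.
    total = sum(numbers)
    return total / len(numbers)

print(average([10, 20, 30]))  # prints 20.0
```

This works for the example shown - but notice it would crash if given an empty list, because you can't divide by zero. That's exactly the sort of gap you only catch by checking the output yourself, which is why "plausible" and "correct" aren't the same thing.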

Translation. AI translation has become very good for common languages - it's not perfect (idioms and context still trip it up) but for straightforward text, it's often better than older tools like Google Translate.

What AI cannot do

Here's where people get into trouble: AI is bad (sometimes catastrophically bad) at anything requiring actual reasoning, judgement, or verification.

Reasoning. AI can't think through a problem step by step in any meaningful way. It can produce text that looks like reasoning because it's seen lots of examples of logical arguments, but if you give it a novel problem that requires working through implications, it'll often fail. It's guessing what a good answer looks like, not figuring it out.

Fact-checking itself. AI has no idea whether what it's saying is true - it generates text that sounds plausible based on patterns in its training data. If the pattern says "Paris is the capital of France," it'll say that, but if the pattern says "Paris is the capital of Germany" (because some badly written webpage said so), it might say that too. It doesn't check because it doesn't know what truth is.

Understanding context deeply. AI can pick up on surface-level context, but it misses subtlety - sarcasm, ambiguity, cultural references, implied meaning. It's especially bad when context depends on something unstated or when the meaning shifts depending on who's speaking.

Making moral or ethical judgements. AI has no values. It can tell you what arguments people make about a moral issue because it's seen those arguments in its training data, but it can't weigh them or tell you what's right. If you ask it to, it'll just generate text that sounds like the kind of thing people say when making ethical judgements, and that's not the same as having ethics.

Anything requiring up-to-date information. AI's knowledge freezes at the point it finished training, so if something happened after that, the AI doesn't know about it unless it's explicitly given access to live data (which some systems do, but not all). Even then, it's still prone to errors.

Creativity in the human sense. AI can generate novel combinations of things it's seen before - it can write a poem, design a logo, or suggest a plot twist - but it's not creating in the way humans do. It's remixing, which is still useful but not the same as original thought.

Why this matters

The danger isn't that AI is useless; it's that it's useful enough to be tempting and limited enough to be dangerous if you trust it too much.

If you use AI to draft an email and read it before sending, that's fine, but if you use AI to draft a legal contract and send it without checking, that's a disaster waiting to happen. If you use AI to suggest ideas for a project, that's helpful, but if you use AI to make a final decision about hiring someone, you're outsourcing judgement to a system that has no judgement.

The rule is simple: use AI for tasks where getting it wrong isn't catastrophic, or where you're checking the output carefully. Don't use it for anything where accuracy, reasoning, or accountability actually matter.

And if someone tells you AI can do something you now know it can't, assume they're either confused or selling you something.
