HOW OLDER ADULTS CAN SPOT AI SCAMS
How scammers use AI for voice cloning, fake emails, and deepfakes, plus how to protect yourself.
Introduction
AI has made scams significantly more convincing, and that's not an exaggeration. The old tells that used to give scams away (bad grammar, generic messages, obviously fake voices) don't work as reliably anymore because AI can produce professional-looking text, clone voices, and personalize messages at scale.
This doesn't mean you're defenseless, but it does mean you need to update how you spot scams because the old rules aren't enough anymore.
Why AI makes scams more convincing
Traditional scams had obvious flaws that made them easy to spot if you knew what to look for. Phishing emails often had spelling mistakes and awkward phrasing because they were poorly translated or written by people unfamiliar with the target language. Fake phone calls sounded stilted or used obviously recorded messages. Impersonation attempts were crude because the scammer didn't have enough information to sound convincing.
AI has changed all of that in a few specific ways:
Voice cloning means scammers can now create convincing audio of someone you know using just a few seconds of their voice from social media, voicemail, or public videos. The technology is good enough that you won't necessarily spot it as fake, especially over a phone line where quality is already degraded. This has led to "grandparent scams*" where someone calls claiming to be your grandchild in trouble, and it genuinely sounds like them.
Personalized phishing means AI can write emails that sound natural, use proper grammar, and include details about you scraped from public sources to make the message seem legitimate. Instead of generic "Dear Customer" emails, you get messages that reference your actual bank, use your name correctly, and mention recent purchases or activities.
Image and video manipulation means scammers can create fake images of documents, fake video calls, or deepfake videos of people you trust. The quality varies, but it's good enough to fool people who aren't specifically looking for signs of manipulation.
Common AI-powered scams targeting older adults
Here are the specific scams that are becoming more prevalent and harder to spot thanks to AI.
The emergency call scam uses voice cloning: someone calls claiming to be a family member in urgent trouble (arrested, in an accident, stranded abroad) who needs money immediately. The voice sounds convincingly like your grandchild or child, and the story is designed to make you panic and act without thinking. The pressure to send money quickly via wire transfer or gift cards is the real tell here.
The authority impersonation scam involves someone calling and claiming to be from your bank, the tax office, or law enforcement with AI making the caller sound professional and knowledgeable. They might have details about your accounts or personal information to make the call seem legitimate, then claim there's a problem that requires immediate action like moving money to a "safe account" or paying a fine.
Investment and financial opportunity scams use AI to create convincing websites, fake testimonials, and personalized pitches based on your interests. The scammer uses AI chatbots to answer questions quickly and professionally, making the whole operation seem legitimate. These often target people looking to grow retirement savings or generate income.
Tech support scams have evolved: instead of obviously fake pop-ups, scammers now use AI to create convincing-looking error messages, fake customer service interactions, and professional-sounding phone support. They walk you through "fixing" your computer while actually installing malware or stealing information.
Romance and relationship scams now use AI to maintain multiple conversations simultaneously. The scammer uses AI to generate personalized messages, remember details from previous conversations, and keep the relationship feeling genuine over weeks or months. The eventual request for money comes only after trust has been established.
Red flags that still work
Despite AI making scams more sophisticated, certain patterns still give them away if you know what to look for.
Urgency and pressure remain the biggest tells. Legitimate organisations don't demand immediate action or threaten consequences if you don't respond within hours. If someone is pushing you to act quickly without time to think or verify, that's a scam regardless of how professional they sound. The scam's success depends on goading you into panic-driven action, so try to stay calm, and if in any doubt, just hang up if you are on a telephone call, or close the browser on your phone, tablet or computer. Any genuine caller will try to contact you again, by which time you will have composed yourself and can ask pertinent questions.
Requests for unusual payment methods are always suspicious because real banks, government agencies, and legitimate businesses don't ask for payment via gift cards, wire transfers to individuals, cryptocurrency, or other irreversible methods. If they're pushing for payment methods that can't be traced or reversed, it's a scam.
Resistance to verification is a clear warning sign. If someone claims to be your bank but gets defensive when you say you'll call the bank directly to verify, that's not your bank. Legitimate callers will understand that you need to verify their identity and won't pressure you to stay on the line.
Requests for information they should already have are suspicious. If someone claiming to be your bank asks you to verify your account number or password, remember that your actual bank already has this information and won't ask you for it. If they're asking, they're not your bank.
"Too good to be true" still applies. If an investment promises guaranteed returns with no risk, a stranger offers you money for no clear reason, or you've won a prize you don't remember entering, it's almost certainly a scam. AI can make these offers sound more plausible, but the underlying logic hasn't changed: if it sounds too good to be true, it almost certainly is.
How to protect yourself
The good news is that protecting yourself from AI-powered scams doesn't require technical knowledge but instead relies on habits and verification processes.
Never act on urgent requests without verification, and this is the single most important rule. If someone calls claiming to be family in trouble, hang up and call that person directly on a number you already have. If your bank calls about suspicious activity, hang up and call the number on your bank card. If an email claims to be from a company you do business with, go to their website directly rather than clicking links in the email.
It's a good idea to establish a family code word for emergencies. If someone calls claiming to be your grandchild in trouble, ask them a question only the real person would know, or use a pre-agreed code word. This defeats voice cloning because the scammer won't know the answer. Just don't forget the code!
Be suspicious of any request for payment via gift cards, wire transfers, or cryptocurrency because these are irreversible and untraceable, which is exactly why scammers love them. No legitimate business or government agency will ask for payment this way.
Trust your instincts because if something feels off (even if you can't articulate why), treat it as suspicious until proven otherwise. Your gut reaction to "this doesn't feel right" is often picking up on subtle inconsistencies that your conscious mind hasn't identified yet.
It bears repeating: you must verify independently, using contact information you already have rather than information provided by the person contacting you. If your bank calls, hang up and call the number on your card. If you get an email, go to the company's website directly. Don't trust links, phone numbers, or email addresses provided in suspicious messages.
Why being sceptical isn't being paranoid
The world where you could assume most people were honest and most calls were legitimate is gone, and that's not your fault or a sign you're being unreasonably suspicious. Scammers have industrialized fraud, using AI to operate at scale, and being cautious is the rational response. You only have to turn on the news to hear daily stories of huge scam centres being built in SE Asia, and every time one is raided and closed, they simply pick up and move somewhere else. It doesn't help that some governments which are, shall we say, on the wrong end of the corruption index actually aid and abet these scammers - but that is another tale for another time.
It's better to verify and feel slightly awkward about it than to lose money or personal information. Legitimate callers will understand why you need to verify their identity, and if they don't, that tells you everything you need to know.
The goal isn't to live in fear but to build verification into your routine until it becomes automatic. Hang up and call back, ask questions, take time to think, and don't let anyone pressure you into immediate action. Those habits will protect you from the vast majority of scams regardless of how sophisticated the AI gets.
*Yes, grandparent scams are absolutely real and well-documented. In 2023, senior citizens in the US were conned out of roughly $3.4 billion in various financial crimes according to FBI data, and voice cloning AI scams specifically target older adults (CBS News). Canadians reported losing nearly $3 million to grandparent scams in 2024, according to the Canadian Anti-Fraud Centre (CBC News).
The term "grandparent scam" is the actual name used by law enforcement and fraud prevention organisations. McAfee's global study found that one in four people surveyed had experienced an AI voice cloning scam or knew someone who had, and 70% of people said they weren't confident they could tell the difference between a cloned voice and the real thing (McAfee).