INVESTIGATION: WHY ARE THE MOST EXPERIENCED WORKERS SUDDENLY THE ONES WHO TRUST AI THE LEAST?

Boomers' AI confidence has crashed 35% even as workplace adoption accelerates. They're using it more but trusting it less. This is a mini-investigation into what they're seeing.

The Paradox Nobody Saw Coming

The latest ManpowerGroup Global Talent Barometer (January 2026, nearly 14,000 workers across 19 countries) has a twist that should make every manager pause.

AI use at work jumped 13% in a year. Now 45% of people are using it regularly. You'd expect confidence to ride that wave. Instead, it crashed. Overall confidence in using tech tools dropped 18%. The steepest fall? Boomers, down 35%. Gen X isn't much better off at 25%.

This isn't the tired stereotype of older workers digging in their heels against new tech. These people are actually using AI more than before and trusting it a whole lot less.

They’re Using It—and Watching It Fail

Mara Stefan, VP of global insights at ManpowerGroup, nailed it in Fortune: “AI adoption is accelerating, but confidence is collapsing.” Workers get handed shiny tools with basically zero training, no real explanation, and no support. The training that does exist often assumes you already know what "prompting" means, that you're comfortable with trial-and-error on something affecting real work, and that you'll just "figure it out" the way you figured out Instagram. Except you didn't figure out Instagram (I think I still haven't!), and now you're supposed to use this thing for client deliverables with your name on them.

The numbers tell the story clearly: 89% of people still feel confident they can do their actual job well. But zoom in on AI specifically and confidence evaporates. Only 36% of Boomers say they feel comfortable using AI and keeping up with advancing tech. Gen X sits at 52%. Younger workers score higher, even though they're the ones most anxious about job loss.

So the problem isn’t skill or willingness. It’s experience. People who’ve spent decades in a field can spot when the AI is confidently wrong in ways that sound plausible but aren’t. Subtle mistakes in reports, forecasts, and client advice. Stuff that looks fine at first glance but could cause real damage later.

Example: An AI tool generates a client proposal that cites a regulation repealed three years ago. It looks authoritative and uses the right terminology, but it's completely wrong. A junior colleague might send it out. Someone with 20 years' experience catches it before it tanks the pitch.

Or the AI summarises a technical document and reverses a critical "do not" instruction, turning a safety protocol into a liability lawsuit waiting to happen. You need experience to spot that kind of error, and it happens more often than anyone wants to admit.

The Hidden Time-Suck and Burnout Loop

Catching those errors costs time. Hours spent babysitting the AI, fixing its confident mistakes, and cleaning up after it. Productivity doesn't rise. It stalls or drops. Then management wonders why you're "not adapting quickly enough," even though you're the one preventing client disasters.

Vicious cycle: more time fixing AI means less time on actual work, which means more stress and longer hours trying to catch up. The burnout stats back this up. 63% of workers report feeling burned out, mostly from crushing workloads and constant pressure.

Meanwhile, companies keep promising the moon from AI. PwC's survey shows the reality: most firms see basically zero measurable gains in revenue or costs. They deployed fast to look cutting-edge, then act shocked when half the workforce quietly resents the tools that made everything harder instead of easier.

Job Hugging: The New Great Stagnation

The fear is real and it’s changing behavior. 64% of workers are now “job hugging,” staying in roles they’re unhappy in or outright miserable because they’re terrified of jumping ship. Why risk it when the next application might get auto-filtered by an AI screener that quietly penalizes older resumes, or the new job demands AI fluency nobody’s properly taught you?

43% think automation could wipe out their job in the next two years (up from last year). For Boomers, this isn’t vague anxiety. It’s pattern recognition. They’ve seen how these systems work in practice.

Can This Actually Be Fixed?

Optimists believe so: better training that starts from where people really are, not where tech evangelists assume they should be. Give them time to learn without punishing the curve, reward accuracy and error-catching over raw speed, and actually listen when veterans say "this thing is hallucinating again."

Pessimists say we're already too far down the road, with AI being built by and for younger users. Interfaces, shortcuts, and assumptions are baked in from the start. As these tools become mandatory, older workers face a brutal choice: struggle forever or get pushed out entirely.

My take? It's probably fixable, but it won't fix itself. Without deliberate effort (training that respects experience gaps, metrics that value quality over velocity, and real support instead of lip service), we're going to lose a ton of institutional knowledge just when we need it most.

The bottom line is that the Boomer confidence crash isn’t stubbornness. It’s a rational, hard-earned reaction to tools that were oversold, under-supported, and still don’t quite work the way the hype promised.

People aren’t afraid of AI.
They’re afraid of being left behind by a system that’s sprinting ahead without them.


Sources: