DOES AI IN EVERYDAY TECH DISCRIMINATE AGAINST OLDER USERS OR REINFORCE AGEISM?

Yes, and the evidence is substantial, though it's often unintentional rather than deliberate.

Introduction

AI systems embedded in everyday technology frequently discriminate against older users, reinforcing societal ageism through biased datasets, design processes that exclude older adults, and algorithms trained on data that underrepresents anyone over 60. This isn't conspiracy or paranoia but documented reality backed by research, legal cases, and regulatory warnings from bodies like the World Health Organization and the European Union.

The discrimination shows up in hiring software that filters out older applicants, healthcare algorithms that undervalue the needs of older patients, voice assistants that struggle to understand age-related speech patterns, and facial recognition that performs worse on older faces. These aren't isolated glitches but systemic problems stemming from how AI gets built, what data it trains on, and who gets included (or excluded) from the design process.

This matters because AI is increasingly embedded in systems that affect whether you get a job interview, what healthcare you receive, how advertising targets you, and whether everyday technology actually works for you. When these systems carry age bias, they create real barriers that extend job searches, reduce care quality, and exclude older people from services that claim to be for everyone.

How it shows up in hiring and employment

AI recruitment tools have been caught explicitly and implicitly discriminating against older workers, leading to legal settlements and ongoing cases that reveal just how embedded age bias has become in automated hiring.

In 2022, the U.S. Equal Employment Opportunity Commission sued iTutorGroup for programming its AI hiring software to automatically reject female applicants over 55 and male applicants over 60, affecting over 200 qualified tutors. The case settled in 2023 for $365,000, and what's striking is that this wasn't subtle proxy discrimination but explicit age cutoffs built directly into the system.

The Mobley v. Workday case advanced through U.S. courts in 2025, with allegations that Workday's AI-based screening tools discriminate against job seekers over 40 by rejecting applications based on proxies for age like graduation dates or inferred experience levels. A U.S. District Court granted conditional certification for a collective action potentially covering millions of applicants since 2020, emphasising that AI tools can act as "agents" of employers, making vendors liable for discriminatory outcomes even if the bias wasn't intentional.

Research from Stanford published in Nature in 2025 found that large language models like ChatGPT generate resumes for older women that portray them as less experienced or qualified compared to older men, amplifying both age and gender biases in automated hiring. The systems aren't just screening candidates but actively generating discriminatory content that reinforces stereotypes about older workers.

The practical impact is measurable. OECD data shows that job seekers over 50 face job search times twice as long as younger workers, and while that's not entirely down to AI, automated screening tools that filter out older applicants before human recruiters even see them are making the problem worse rather than better.

How it shows up in healthcare and insurance

AI systems in healthcare frequently undervalue the risks or needs of older patients because they're trained on datasets skewed toward younger populations, creating what researchers call the "AI cycle of health inequity."

The World Health Organization issued a policy brief in 2022 warning that unchecked biases in AI could perpetuate societal ageism, undermine care quality for older people, and create health inequities. The problem stems from older adults being underrepresented in medical datasets, which means diagnostic tools, risk assessments, and treatment recommendations trained on that data perform worse for older patients.

A review published in 2023 identified "digital ageism" in AI where older adults get grouped broadly as "50+" or "60+" regardless of actual health variations, leading to representation bias and evaluation bias. When everyone over 60 gets treated as a homogeneous group, the AI can't account for individual differences and makes inaccurate predictions about health risks or treatment needs.

This isn't theoretical. Research from 2019 (with follow-ups through to 2025) revealed algorithmic bias in U.S. hospital systems that underestimated health risks for certain patient groups, and similar issues apply to age where data imbalances lead to under-prioritisation in care allocation. Precision medicine datasets often skew toward people under 60, causing AI to misinterpret symptoms in older patients or fail to account for age-related variations.

The practical result is older patients potentially getting less accurate diagnoses, inappropriate risk assessments, or lower priority in care allocation because the AI literally doesn't have enough good data about their age group to make reliable predictions.

How it shows up in voice assistants and everyday devices

Smart speaker ownership is now relatively common across age groups, so the divide in access to these devices has narrowed significantly. But even when older adults own them, barriers to effective use remain substantial.

The discrimination isn't primarily about who buys these devices but about how well they work once purchased. Voice recognition systems can struggle with age-related speech variations such as slower speech patterns, different accents influenced by age, or changes in voice quality, leading to higher error rates and frustration when the device simply doesn't understand what you're saying.

Research through to 2025 shows that voice assistants perform worse for older voices, with designs that assume younger users' speech patterns and levels of tech literacy. When the system is trained primarily on younger voices, it simply doesn't recognise older speech as well, creating exclusion through poor performance rather than explicit discrimination.

Privacy concerns compound the problem, with many older adults expressing distrust about intrusive data collection. These aren't irrational fears but reasonable responses to systems that collect enormous amounts of data without clear consent mechanisms designed for people who didn't grow up with this technology. Research from 2022 shows that 45% of people over 70 feel technology isn't designed with people of all ages in mind, reflecting a broader sense of exclusion from the design process.

The feedback loop is vicious. Poor performance for older users means frustration and lower daily use rates, which means less data from older users gets fed back into training, which means future systems continue to be optimised primarily for younger voices, which perpetuates poor performance and reinforces the exclusion. Nobody's deliberately excluding older users, but the system design creates exclusion as an outcome.

How it shows up in advertising and facial recognition

Advertising algorithms on platforms like Facebook have enabled age-based targeting that excludes older users from seeing job advertisements. ProPublica's investigation in 2017 (with follow-up reporting through to 2025) exposed how Facebook's ad tools let employers screen out users over 40 from job postings. The findings led to settlements and policy changes, but the core technology that enables age-based exclusion is still built into these recommendation systems.

Facial recognition systems struggle with wrinkles and age-related facial changes, reducing accuracy for older users. This creates real problems when facial recognition gets used for security, authentication, or access control, as older users face higher rates of false rejections or failures compared to younger users.

Content recommendation systems perpetuate stereotypes by depicting older adults as frail or dependent in generated images and content, which influences how advertising gets personalised and what assumptions platforms make about older users' interests and capabilities. The result is older users receiving fewer relevant recommendations and being treated as less tech-savvy by default.

Why this happens systemically

The discrimination isn't usually deliberate malice but structural bias built into how AI systems get developed, trained, and deployed.

Training data underrepresents older adults or groups them too broadly. When datasets skew toward people under 60, the AI learns patterns that work for younger users and fails to account for variations in older populations. Machine learning systems optimise for the majority of their training data, which means minorities in the dataset (including older users) get worse performance by default.
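This majority-optimisation effect can be seen in a toy sketch (all numbers here are hypothetical): a classifier with a single decision threshold, fitted on a pooled dataset dominated by one group, lands near that group's optimum and performs worse on the minority group, even though a better threshold exists for that group alone.

```python
def accuracy(threshold, samples):
    """Fraction of (value, label) pairs correct when predicting 1 for value >= threshold."""
    return sum((v >= threshold) == bool(y) for v, y in samples) / len(samples)

def best_threshold(samples):
    """Pick the threshold (from observed values) that maximises accuracy."""
    return max(sorted({v for v, _ in samples}), key=lambda t: accuracy(t, samples))

# Hypothetical feature (say, a speech-clarity score) with labels 1 = "recognised".
# The boundary that works for the majority group sits lower than the one that
# works for the minority group.
young = [(3, 0), (4, 0), (6, 1), (7, 1)] * 10   # majority of the training data
old   = [(6, 0), (7, 0), (9, 1), (10, 1)]       # underrepresented group

t = best_threshold(young + old)                 # fit on the pooled dataset
print(t, accuracy(t, young), accuracy(t, old))  # perfect for young, poor for old
```

Fitting on the pooled data picks the majority group's boundary, so the minority group's accuracy suffers by default; a threshold fitted on the minority group alone would classify it perfectly.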

Design processes exclude older adults from testing and development. When development teams are predominantly young and don't include older users in design testing, they create interfaces, voice interactions, and features that work well for themselves but create barriers for people with different needs or capabilities. This isn't ageism in the sense of active discrimination but ageism in the sense of not considering older users as part of the core audience.

Proxy discrimination through seemingly neutral factors creates age bias without explicitly using age. Graduation dates, years of experience, gaps in employment, or assumptions about tech familiarity all correlate with age, so algorithms using these factors as screening criteria effectively discriminate based on age without explicitly saying so.
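A minimal sketch shows how this works (the screening rule and candidates are hypothetical): a filter on "years since graduation" never mentions age, yet because people typically graduate in their early twenties, it behaves almost exactly like an age cut-off.

```python
from datetime import date

def passes_screen(grad_year, max_years_since_grad=15, today=date(2025, 1, 1)):
    """Hypothetical screening rule: reject candidates whose degree is 'stale'.

    Age never appears, but with a typical graduation age of ~22 this
    approximates the rule "age <= 37".
    """
    return today.year - grad_year <= max_years_since_grad

candidates = [
    {"name": "A", "grad_year": 2018, "age": 29},
    {"name": "B", "grad_year": 2004, "age": 43},
    {"name": "C", "grad_year": 1995, "age": 52},
]
shortlist = [c["name"] for c in candidates if passes_screen(c["grad_year"])]
print(shortlist)  # only the youngest candidate survives the "neutral" filter
```

Nothing in the rule's code mentions age, which is exactly why this kind of proxy discrimination is hard to spot without auditing outcomes by age group.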

The feedback loop of exclusion makes the problem self-perpetuating. Low adoption among older users leads to less data from that demographic, which leads to worse performance, which leads to lower adoption, which continues the cycle. Breaking this requires intentional intervention rather than hoping market forces solve it.

The honest assessment

AI ageism is real, documented, and systemic rather than isolated to a few bad actors. The evidence from legal cases, research studies, and regulatory warnings shows that current AI implementations in everyday technology frequently reflect and amplify societal ageism rather than challenge it.

The discrimination shows up across hiring tools, healthcare algorithms, voice assistants, facial recognition, and advertising platforms, with documented impacts on job searches, care quality, device usability, and access to services. This isn't theoretical concern but measurable harm affecting millions of people.

The problem stems from structural issues in how AI gets built: training data that underrepresents older adults, design processes that exclude them from testing, and optimisation for the majority demographic in datasets. These create systems that work worse for older users by default, even when there's no intention to discriminate.

What makes this particularly insidious is that the discrimination often hides behind claims of objectivity. AI decision-making gets presented as neutral and data-driven, which makes it harder to challenge than obvious human bias. When an algorithm rejects your job application or a voice assistant doesn't understand you, there's no human to appeal to, no explanation of why the decision was made, and often no way to know that age bias was the determining factor.

Regulatory bodies are starting to address this. The EU AI Act classifies employment AI as high-risk and mandates bias assessments. The WHO has issued warnings about ageism in healthcare AI. Legal cases like Mobley v. Workday are establishing that vendors can be held liable for discriminatory outcomes. But regulation lags behind deployment, which means older users are currently experiencing discrimination from systems already embedded in everyday life.

For older adults dealing with these systems, the practical reality is that AI tools may work worse for you not because you're less capable but because the systems weren't built with you in mind. Voice assistants might struggle with your voice, hiring tools might filter you out before humans see your application, and healthcare algorithms might underestimate your needs. This isn't your failure to adapt to technology but technology's failure to be designed inclusively.

The solution requires intentional effort: diverse datasets that properly represent older populations, design processes that include older users in testing, algorithmic audits for age bias, and regulation that holds developers accountable for discriminatory outcomes. None of this happens automatically, which means the current trajectory is toward more embedded ageism rather than less unless there's active intervention.
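As a sketch of what a basic algorithmic audit for age bias might look like (the selection numbers below are hypothetical), one common heuristic is the "four-fifths rule" from US employment-selection guidance: flag the tool if any age band's selection rate falls below 80% of the most-favoured band's rate.

```python
def selection_rates(outcomes):
    """outcomes: {age_band: (selected, total)} -> {age_band: selection rate}"""
    return {band: sel / tot for band, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, ratio=0.8):
    """True for bands whose selection rate is at least `ratio` of the best band's."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {band: rate / best >= ratio for band, rate in rates.items()}

# Hypothetical audit data: (candidates selected, candidates screened) per band.
outcomes = {"18-39": (120, 400), "40-54": (60, 300), "55+": (15, 150)}
print(four_fifths_check(outcomes))  # only the youngest band clears the 80% bar
```

A failed check doesn't prove intentional discrimination, but it gives auditors and regulators a concrete, repeatable signal that a screening tool needs scrutiny.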

AI offers genuine potential benefits for aging populations through health monitoring, accessibility features, and assistance with daily tasks. But realising those benefits requires building systems that actually work for older users rather than treating them as an afterthought or assuming they're not part of the core audience. Until that changes, everyday AI will continue to discriminate against older users while claiming to be objective and neutral.

References and Further Reading

Employment Discrimination Cases:

Mobley v. Workday (2025) - Age discrimination lawsuit against AI hiring tools

iTutorGroup EEOC Settlement (2023) - First EEOC AI discrimination case, settled for $365,000

Research Studies:

Stanford University Study (2025) - Gender and age bias in AI resume generation, published in Nature

Healthcare and Policy:

World Health Organization (2022) - Policy brief on ageism in AI for health

Technology Adoption and User Experience:

AARP Tech Trends Reports - Annual surveys of technology use among adults 50+
