WHAT IS THE EU AI ACT, AND DOES IT AFFECT ME?
The EU AI Act is the world's first comprehensive AI regulation, banning dangerous uses and heavily regulating high-risk systems.
Introduction
The European Union passed the world's first comprehensive AI law in August 2024. If you're reading this from Auckland, Sydney, Johannesburg, Toronto, or London and thinking "I'm not in Europe, what's this got to do with me?", here's the short answer: plenty.
This law has extraterritorial reach. Any company serving EU customers must comply with it. That means the AI in your healthcare app, your bank's fraud detection system, or the care monitoring device you use is probably being designed to EU standards - whether the company is based in Brussels or Brisbane.
The EU market is large enough that global companies often standardise on EU requirements everywhere rather than maintaining separate systems for different regions.
What does this law actually do?
The Act sorts AI systems into four categories based on risk.
Banned entirely: Some AI uses are prohibited outright as of February 2025. Social scoring systems that rank citizens (like China's approach), AI that exploits vulnerabilities related to age or disability to manipulate people, subliminal manipulation techniques you can't consciously detect, scraping faces from the internet to build facial recognition databases, and most real-time biometric surveillance in public spaces. These are the "no-go zones."
High-risk, heavily regulated: This is the big category. AI used in healthcare diagnosis, employment decisions, credit scoring, educational admissions, law enforcement, and border control faces strict requirements. Providers must document everything, test for bias, ensure human oversight, and get certified. These rules take full effect in August 2026.
Limited risk, transparency required: Chatbots and similar systems need to tell you they're AI. You've probably seen these notices already: "You're chatting with a bot." AI-generated content, particularly deepfakes, must be labelled.
Minimal risk, no special rules: Spam filters, AI-enabled video games, recommendation algorithms on shopping sites—these face no specific regulation under the Act.
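For the programmatically minded, the four tiers above can be sketched as a simple lookup. This is purely illustrative, using only the example use cases named here; real classification under the Act depends on its detailed annexes, not a table like this.

```python
# Illustrative sketch of the Act's four risk tiers, populated with the
# example use cases from the text. NOT a legal classification tool.
RISK_TIERS = {
    "prohibited": {"social scoring", "subliminal manipulation", "face scraping"},
    "high": {"healthcare diagnosis", "credit scoring", "employment screening"},
    "limited": {"chatbot", "deepfake generation"},
    "minimal": {"spam filter", "video game AI", "shopping recommendations"},
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known example use case."""
    for tier, uses in RISK_TIERS.items():
        if use_case in uses:
            return tier
    return "unclassified"

print(risk_tier("credit scoring"))  # high
print(risk_tier("spam filter"))     # minimal
```

The point of the sketch is simply that obligations flow from the tier, not from the technology itself: the same underlying model could be "minimal risk" in a game and "high risk" in a hiring tool.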
The penalties aren't trivial. Using banned AI could cost a company €35M or 7% of global turnover, whichever is higher. Violating high-risk rules hits €15M or 3% of turnover. For a multinational corporation, that's enough to concentrate minds.
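The "whichever is higher" rule is worth spelling out, because it is what makes the fines bite for large firms. A minimal sketch, using the figures quoted above:

```python
# The Act's fines are the greater of a fixed amount and a percentage of
# global annual turnover. Figures as quoted in the text; not legal advice.

def max_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the maximum possible fine in euros: fixed sum or
    percentage of turnover, whichever is higher."""
    return max(fixed_eur, pct * turnover_eur)

# Banned-AI violation: €35M or 7% of turnover.
# For a firm with €1bn turnover, 7% (€70M) exceeds the €35M floor.
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# For a smaller firm with €100M turnover, the €35M floor applies.
print(max_fine(100_000_000, 35_000_000, 0.07))    # 35000000.0
```

In other words, the fixed sums act as a floor for smaller companies, while the percentage scales the penalty up for multinationals.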
I have my concerns about deepfakes, though. The very fact that they are fakes - intended to deceive - makes it highly improbable that anyone will declare or label them, right?
The case for regulation
The Act flat-out bans AI designed to exploit elderly people or people with disabilities. This isn't theoretical - there are already AI-powered scams specifically targeting seniors for personal data or money, toys that manipulate children into purchases, and systems that prey on vulnerable people. The ban offers genuine protection that didn't exist before.
High-risk AI used in sectors that matter to older people, such as healthcare, finance, and care services, must meet transparency and accuracy standards. Your GP's diagnostic AI, your bank's fraud detection, and AI involved in care home decisions will all face oversight. These systems must be tested for bias, maintain documentation, and allow human review. In principle, this should reduce the risk of being denied credit or medical treatment by a faulty algorithm nobody can explain.
The EU's previous privacy law, GDPR, influenced privacy legislation worldwide. New Zealand's Privacy Act, Australia's Privacy Act, Canada's privacy laws - all show elements of GDPR influence. Companies serving global markets often find it simpler to adopt one high standard everywhere rather than maintaining different systems for different countries. The same pattern could emerge with AI regulation. If you're using a healthcare app made by a company serving EU customers, you might benefit from EU-level AI safety requirements even if your own country hasn't passed equivalent laws yet.
The EU AI Act also includes support for innovation, such as regulatory sandboxes where companies can test AI safely, and €307 million in funding (announced January 2026) to help smaller firms meet compliance costs. There is a genuine attempt to foster responsible development, not simply to restrict it.
The case against regulation
Compliance costs are steep. Analysis suggests high-risk AI systems face annual costs ranging from around €30,000 to over €50,000 per system, covering documentation, auditing, testing, and legal review. For startups, this can gobble up 40% of profit margins—ouch. Even large companies feel the burden across dozens of systems.
On top of that, the complexity could stifle innovation. The Act runs to hundreds of pages. Detailed rules are still being drafted. The definition of "AI system" remains contentious. Companies face a choice: invest heavily in compliance or avoid the EU market. Some already have—both Meta and Apple delayed rolling out new AI products in the EU, citing regulatory uncertainty, though it must be said that these delays related more to other EU laws like GDPR and the Digital Markets Act than the AI Act itself.
Enforcement capacity is questionable. As of early 2026, EU member states are still designating the bodies responsible for auditing and certifying high-risk systems, and the number appointed so far is insufficient for the flood of certification requests expected before the August 2026 deadline. Without proper enforcement, the Act's protections exist on paper but may not materialise in practice. Like a big barking dog with no teeth.
Lobbying has created loopholes. Civil society groups have documented gaps, particularly around biometric surveillance. Law enforcement exceptions remain broad. Fundamental rights impact assessments aren't always public. Critics argue that the Act prioritises industry interests over genuine protection.
Beneficial AI deployment could slow down. If compliance delays helpful diagnostic tools, fraud prevention systems, or accessibility features, the safety benefits come at the cost of slower access to improvements people actually need.
What this means in practice outside Europe
In practical terms, this law changes how many AI systems are built, not just where they are sold.
Any company that wants access to the EU market has to comply with the Act for certain types of AI. Because the EU market is huge and compliance is pricey, many firms will just build to EU specs globally, skipping separate versions.
As a result, AI tools used outside Europe may be shaped by EU rules even when local law does not require it. A healthcare app, a fraud detection system, or a care monitoring device used in New Zealand or Australia may be built to EU standards because the same product is also sold to European customers. The same logic applies to banks, insurers, and technology suppliers that operate internationally.
For the UK, the position is awkward but predictable. The UK is not legally bound by the EU AI Act, but UK companies that serve EU customers are. In practice, many of those companies are likely to adopt EU standards across all their systems, rather than maintaining two separate regulatory regimes. That means EU rules may influence UK AI tools by default, even without equivalent UK legislation.
Outside Europe, the impact will not be uniform. High-risk uses in areas such as healthcare and finance are the most likely to be affected, because the penalties for non-compliance are significant. Lower-risk applications, including marketing tools and consumer software, will continue to vary more widely between countries.
The important point is that this is not about Europe dictating policy to the rest of the world. It is about how global companies respond when one major market sets enforceable rules. Where EU standards become the baseline for certain AI systems, people elsewhere will feel the effects whether their own governments choose to regulate AI or not.
What this means for people over 60
The ban on AI exploiting age-related vulnerabilities is significant. Systems designed to manipulate elderly users are prohibited. This should offer some protection from increasingly sophisticated AI-powered scams.
High-risk AI in healthcare, finance, and care settings faces stronger oversight. Your doctor's diagnostic AI, your bank's fraud detection, AI in care home decisions—all must meet transparency, accuracy, and human oversight requirements. You should be less likely to face discriminatory treatment from biased algorithms or unexplainable automated decisions.
Healthcare monitoring apps, medication reminders, and assistive technologies will have clearer safety requirements if they serve EU markets. The standards around data quality and testing should reduce the risk of faulty AI causing harm.
There is, of course, a trade-off: if compliance costs delay beneficial AI tools (e.g. better diagnostics, improved fall detection, more effective fraud prevention), then safety protections come at the expense of slower development and provision of helpful technology. For someone waiting for a new medical device, regulatory delays will matter.
The complexity also means you can't easily verify compliance yourself. You will be relying on regulators to do their job properly, and enforcement capacity remains uncertain.
The honest assessment
The EU AI Act is a serious attempt to regulate powerful technology before it causes widespread harm. The risk-based framework makes sense: ban genuinely dangerous uses, heavily regulate high-risk applications, and leave low-risk uses alone.
The protections for vulnerable groups, including provisions against age-based manipulation, are genuine improvements over no regulation at all, and if you are concerned about AI scams, biased healthcare algorithms, or unexplainable credit denials, the Act offers real safeguards. But the execution has problems - compliance costs are deterring smaller companies, enforcement infrastructure is still being built, lobbying may have undermined key protections, and the complexity means even well-intentioned companies are struggling to navigate the requirements.
The extraterritorial reach means this European law will shape AI tools globally, because companies serving international markets will often adopt EU standards everywhere rather than maintaining separate systems for different regions.
The Act won't solve all AI problems. It won't make biased algorithms disappear overnight, won't prevent every scam, won't guarantee perfect safety. What it hopefully WILL establish is a baseline framework with meaningful penalties for violations. Whether that baseline is too high, too low, or riddled with gaps is the question that will have to be answered over the next few years as the Act rolls out.
For now, the reality is that the world's first comprehensive AI law is in force, it affects companies globally, and the AI tools you use in healthcare, banking, and daily life are increasingly being built to meet its requirements. So, whether you're in Brussels or Brisbane, this EU rulebook is quietly rewriting the AI game for all of us.
Australia: Released National AI Plan in December 2025. Explicitly moved AWAY from comprehensive AI-specific legislation (they were previously planning EU-style mandatory guardrails). Now relying on existing laws (privacy, consumer protection, discrimination, online safety). Launching AI Safety Institute in early 2026.
New Zealand: Released AI Strategy in July 2025 (was the LAST OECD country to do so). Taking explicit "light-touch" approach. Will NOT introduce AI-specific legislation. Relying on existing frameworks (Privacy Act 2020, Fair Trading Act, consumer protection). Released voluntary guidance for businesses.
SOURCES:
EU AI Act official text: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
High-level summary: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Roland Berger opportunities and challenges analysis: https://www.rolandberger.com/en/Insights/Publications/European-AI-Act-Opportunities-and-challenges.html
EU AI Act extraterritorial reach (William Fry): https://www.williamfry.com/knowledge/a-practical-guide-to-the-extraterritorial-reach-of-the-ai-act/
EU AI Act extraterritorial reach (Morgan Lewis): https://www.morganlewis.com/pubs/2024/07/the-eu-artificial-intelligence-act-is-here-with-extraterritorial-reach
Brookings Institution global impact analysis: https://www.brookings.edu/articles/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/
UK impact analysis (Centre for European Reform): https://www.cer.eu/insights/uks-plans-ai-brussels-still-looms-large
UK impact analysis (eyreACT): https://www.eyreact.com/ai-act-uk/
Australia and New Zealand impact (Hamilton Locke): https://hamiltonlocke.com.au/the-worlds-first-ai-rulebook-the-eu-ai-act-and-the-impact-on-australia-and-new-zealand/
Compliance costs analysis: https://medium.com/@arturs.prieditis/the-eu-ai-acts-hidden-market-how-high-risk-ai-compliance-became-a-17-billion-opportunity-734cea9b41e2
Age discrimination protections: https://artificialintelligenceact.eu/article/5/
Penalties structure: https://artificialintelligenceact.eu/article/99/
Healthcare implications: https://pmc.ncbi.nlm.nih.gov/articles/PMC11319791/
Civil society loopholes critique: https://ecnl.org/news/packed-loopholes-why-ai-act-fails-protect-civic-space-and-rule-law