What Does the EU AI Act Require of Regulated Organisations?
If your AI system affects anyone in the EU, you're in scope. That includes non-EU companies whose systems produce outputs used by people in the EU. It doesn't matter whether you built the system or bought it: if you're deploying it, you're accountable.
The Act uses a risk-based framework. AI systems used for things like credit scoring or life and health insurance pricing are explicitly classified as high risk. Other use cases, such as transaction monitoring or claims processing, aren't automatically high risk but could be, depending on how they're used and whether they materially influence outcomes.
What regulators will look for is evidence. Not policies sitting in a drawer, but operational proof that risk management, human oversight, traceability, and monitoring are actually embedded into how you build and run AI systems. If you can't produce that evidence without scrambling to reconstruct it after the fact, that's a problem.
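To make "traceability" concrete: one common pattern is to emit a structured, append-only record for every automated decision at the moment it is made, instead of reconstructing that evidence after the fact. The sketch below is illustrative only; the schema and names (DecisionRecord, model_version, overridden_by and so on) are assumptions of ours, not terminology from the Act.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One append-only audit record per automated decision (illustrative schema)."""
    system_id: str            # which AI system in your inventory produced this
    model_version: str        # exact model/version, so the decision is reproducible
    input_hash: str           # hash of the inputs the decision was based on
    outcome: str              # what the system decided
    decided_at: str           # UTC timestamp, ISO 8601
    overridden_by: Optional[str] = None  # human reviewer ID if a person overrode it

def record_decision(system_id: str, model_version: str,
                    inputs: dict, outcome: str) -> DecisionRecord:
    """Create the audit record at decision time, as part of normal operations."""
    payload = json.dumps(inputs, sort_keys=True).encode()
    return DecisionRecord(
        system_id=system_id,
        model_version=model_version,
        input_hash=hashlib.sha256(payload).hexdigest(),
        outcome=outcome,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

# Example: a credit-scoring decision logged the moment it is made.
record = record_decision("credit-scoring-v2", "model-2024.11",
                         {"income": 52000, "term": 36}, "declined")
print(asdict(record))
```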
In financial services, regulators will want to see how decisions flow through your systems, who can override them, and what happens after a decision is made. For insurers, the questions will centre on whether you can explain a decision to an individual policyholder and whether you're tracking whether outcomes drift or show signs of bias over time. In health and life sciences, it comes down to how well you're governing your data, who's responsible when AI supports a clinical decision, and whether your documentation would hold up under audit or investigation.
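As a rough illustration of the drift point: one simple operational check compares an outcome rate per group in a recent window against a baseline and raises an alert when the gap exceeds a threshold. This is a minimal sketch with made-up numbers and a naive threshold; real monitoring would use proper statistical tests and thresholds suited to your portfolio.

```python
from collections import Counter

def approval_rates(decisions: list[tuple[str, str]]) -> dict[str, float]:
    """Approval rate per group, from (group, outcome) pairs."""
    totals, approved = Counter(), Counter()
    for group, outcome in decisions:
        totals[group] += 1
        if outcome == "approved":
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def drift_alerts(baseline: dict[str, float], current: dict[str, float],
                 threshold: float = 0.10) -> list[str]:
    """Flag groups whose approval rate moved more than `threshold` from baseline."""
    return [
        f"{group}: baseline {baseline[group]:.0%} -> current {rate:.0%}"
        for group, rate in current.items()
        if group in baseline and abs(rate - baseline[group]) > threshold
    ]

# Example with made-up numbers: group B's approval rate has drifted downward.
baseline = {"A": 0.62, "B": 0.60}
recent = ([("A", "approved")] * 61 + [("A", "declined")] * 39 +
          [("B", "approved")] * 45 + [("B", "declined")] * 55)
print(drift_alerts(baseline, approval_rates(recent)))
# ['B: baseline 60% -> current 45%']
```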
The practical takeaway is that organisations need to start integrating these obligations into their existing governance and delivery processes now. That means maintaining an inventory of AI systems, linking controls to risk classification, building compliance checks into development workflows, and generating audit evidence as part of normal operations rather than as an afterthought.
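To show what "an inventory with controls linked to risk classification" might look like in practice, here is a minimal sketch. The schema, tier names, and required-control sets are assumptions of ours, not a prescribed format; the point is that a check like this can run in your CI pipeline and fail the build whenever a high-risk system is missing a required control.

```python
from dataclasses import dataclass, field

# Controls assumed per risk tier; map these onto your own control framework.
REQUIRED_CONTROLS = {
    "high": {"risk_management", "human_oversight", "logging", "bias_monitoring"},
    "limited": {"transparency_notice"},
    "minimal": set(),
}

@dataclass
class AISystem:
    name: str
    risk_tier: str                           # "high", "limited", or "minimal"
    controls: set[str] = field(default_factory=set)

def compliance_gaps(inventory: list[AISystem]) -> dict[str, set[str]]:
    """Return missing controls per system; an empty dict means the check passes."""
    gaps = {}
    for system in inventory:
        missing = REQUIRED_CONTROLS[system.risk_tier] - system.controls
        if missing:
            gaps[system.name] = missing
    return gaps

# Example inventory: credit scoring is high risk; a support chatbot is limited risk.
inventory = [
    AISystem("credit-scoring", "high",
             {"risk_management", "human_oversight", "logging"}),
    AISystem("support-chatbot", "limited", {"transparency_notice"}),
]
gaps = compliance_gaps(inventory)
if gaps:
    # Fails the build in CI: credit-scoring is missing bias_monitoring.
    raise SystemExit(f"Compliance check failed: {gaps}")
print("All systems have their required controls.")
```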