The EU’s AI Act shifts the focus from technology itself to context-specific risk, asking whether organizations can actually control systems that affect human rights in legal work.
AI’s Growing Role in Legal Work
AI drafts legal documents, summarizes case files, analyzes case law, prepares offers, and increasingly answers phone calls. In many firms, artificial intelligence operates almost unnoticed, as “natural” support for lawyers.
The problem is that the AI Act does not ask whether lawyers use AI. It asks whether organizations understand where a system begins to affect human rights, and whether they can control it.
The AI Act’s Shift: From Technology to Risk Context
This is a fundamental change in perspective. The AI Act makes explicit that regulation targets not the technology itself, but AI systems operating in specific contexts and generating real risk.
Whether the model is “large” or “small,” cloud-based or locally hosted, is irrelevant. What matters is whether its output affects an individual’s legal status, rights, obligations, property, or access to legal protection.