The European Union’s planned risk-based framework for regulating artificial intelligence includes powers for oversight bodies to order the withdrawal of a commercial AI system or require that an AI model be retrained if it’s deemed high risk, according to an analysis of the proposal by a legal expert.

That suggests there’s significant enforcement firepower lurking in the EU’s (not yet adopted) Artificial Intelligence Act, assuming the bloc’s patchwork of Member State-level oversight authorities can effectively direct it at harmful algorithms to force product change in the interests of fairness and the public good.

The draft Act continues to face criticism over a number of structural shortcomings, and may still fall far short of the goal of fostering broadly “trustworthy” and “human-centric” AI that EU lawmakers have claimed for it. But, on paper at least, it contains some potent regulatory powers.

The European Commission put out its proposal for an AI Act just over a year ago, presenting a framework that prohibits a small set of AI use cases considered too dangerous to people’s safety or to EU citizens’ fundamental rights to be allowed (such as a China-style social credit scoring system), while regulating other uses based on perceived risk, with a subset of “high risk” use cases subject to a regime of both ex ante (before market) and ex post (after market) surveillance.

In the draft Act, high-risk systems are explicitly defined as:

- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes

Under the original proposal, almost nothing is banned outright, and most use cases for AI won’t face serious regulation under the Act, as they would be judged to pose “low risk” and so largely left to self-regulate, with a voluntary code of standards and a certification scheme to recognize compliant AI systems.

There is also another category of AIs, such as deepfakes and chatbots, which are judged to fall somewhere in the middle and are subject to specific transparency requirements to limit their potential to be misused and cause harm.

The Commission’s proposal has already attracted a fair amount of criticism, such as from civil society groups who warned last fall that the proposal falls far short of protecting fundamental rights from AI-fuelled harms like scaled discrimination and black-box bias.

A number of EU institutions have also called explicitly for a more comprehensive ban on remote biometric identification than the Commission chose to include in the Act (which is limited to law enforcement use and riddled with caveats).