From the AI Act to a European AI Agency: Completing the Union's Regulatory Architecture

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

As artificial intelligence (AI) technologies continue to advance, effective risk assessment, regulation, and oversight are necessary to ensure that AI development and deployment align with ethical principles while preserving innovation and economic competitiveness. The adoption of the EU AI Act marks an important step in this direction, establishing a harmonised legal framework that includes detailed provisions on AI governance, as well as the creation of the European AI Office. This paper revisits the question of whether a more robust supranational agency dedicated to AI is still warranted and explores how such a body could enhance policy coherence, improve risk assessment capacities, and foster international cooperation. It further argues that a strengthened EU-level agency would serve the Union's strategic aim of securing digital and technological sovereignty.


💡 Research Summary

The paper examines the regulatory challenges posed by the rapid diffusion of artificial intelligence (AI) across a wide range of sectors and argues that the European Union’s newly adopted AI Act, while a significant step forward, still leaves a substantial gap in effective oversight. The AI Act introduces a risk‑based classification system that distinguishes high‑risk, limited‑risk, and minimal‑risk AI systems and assigns corresponding obligations. However, the enforcement body created by the Act – the European AI Office – is limited in resources, authority, and scope, functioning mainly as a coordination and support unit rather than a full‑fledged regulator.

The author first outlines the economic magnitude of AI, citing market forecasts that predict a rise from a $428 billion valuation in 2022 to over $2 trillion by 2030, and an estimated contribution of $15.7 trillion to global GDP. This growth is accompanied by heightened concerns about bias, discrimination, transparency, safety, and concentration of power, especially in high‑stakes domains such as healthcare, credit scoring, policing, and the criminal justice system. The paper stresses that these risks cannot be mitigated by soft‑law guidelines alone; binding, enforceable rules are required.

A comparative analysis of international approaches shows that the United States, China, Brazil, Canada, and the United Kingdom are each pursuing their own legislative paths, resulting in a fragmented global regulatory landscape. While organisations such as the OECD and UNESCO have issued non‑binding recommendations, they lack the coercive power needed to ensure compliance across borders. Even within the EU, divergent national implementations threaten the harmonisation that the AI Act seeks to achieve.

To bridge this gap, the paper proposes the creation of a dedicated European AI Agency (EAA) that would supersede the current AI Office. The EAA would be endowed with an independent governance structure, a robust budget, and expanded enforcement powers, enabling it to:

  1. Integrate standards, certification, and supervision – centralising pre‑market conformity assessments, high‑risk AI approvals, and post‑market monitoring.
  2. Deploy technical assessment tools – maintaining an AI risk database, providing algorithmic transparency and audit mechanisms, and offering automated risk‑scoring models for continuous oversight.
  3. Exercise binding enforcement – imposing fines, corrective orders, and market‑access restrictions to ensure consistent application of the AI Act across all Member States.
  4. Serve as an international cooperation hub – liaising with the OECD, UNESCO, the Council of Europe, and other multilateral bodies to harmonise standards, share best practices, and coordinate cross‑border investigations.

The author argues that such an agency would directly support the EU’s strategic objective of digital and technological sovereignty. By consolidating regulatory functions, the EU could reduce reliance on external technology providers, protect its domestic AI innovation ecosystem, and enhance its global competitiveness. Moreover, the EAA would enable a dynamic, risk‑based regulatory framework that can be regularly updated to reflect emerging AI capabilities and societal concerns, thereby maintaining policy coherence and reducing legal uncertainty for businesses.

In conclusion, the paper contends that while the AI Act establishes a solid foundation for risk‑based AI governance, its effectiveness hinges on the existence of a strong, supranational enforcement body. The proposed European AI Agency would fill this institutional void, offering a comprehensive platform that combines regulation, standard‑setting, supervision, and international collaboration. Such a structure is presented as essential for safeguarding fundamental rights, ensuring market fairness, and achieving the EU’s broader ambition of technological self‑reliance in the age of AI.

