Building Trust in AI: EU’s First Draft Guidelines Aim to Tame Risks and Boost Transparency

November 15, 2024

The European Union has taken a monumental step in regulating artificial intelligence by releasing its first draft of a Code of Practice for General Purpose AI (GPAI) models. This move aims to clarify how companies developing compute-intensive AI systems must align with the EU’s AI Act, which came into effect on August 1, 2024. Stakeholders now have until November 28 to provide feedback on this draft, with final guidelines expected by May 1, 2025.

Defining GPAI and Its Scope

GPAI models, defined as systems trained with over 10²⁵ FLOPs of computing power, include cutting-edge platforms developed by tech giants like OpenAI, Google, Meta, Anthropic, and Mistral. The draft anticipates more companies entering this regulatory framework as the field evolves.
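To give a sense of scale, the 10²⁵ FLOP threshold can be checked against a training run using the widely cited rule of thumb that transformer training consumes roughly 6 × parameters × tokens in compute. A minimal sketch, with purely illustrative parameter and token counts that do not correspond to any real model:

```python
# EU threshold above which a model falls under the GPAI rules.
GPAI_THRESHOLD_FLOPS = 1e25

def training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute via the ~6*N*D rule of thumb."""
    return 6 * num_parameters * num_tokens

def is_gpai(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated training compute exceeds the EU threshold."""
    return training_flops(num_parameters, num_tokens) > GPAI_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15 trillion tokens lands
# at about 6.3e24 FLOPs, just under the threshold; a hypothetical
# 400B-parameter model on the same data would cross it.
print(is_gpai(7e10, 1.5e13))   # False
print(is_gpai(4e11, 1.5e13))   # True
```

The 6·N·D approximation ignores many real-world factors (architecture, precision, retries), so this is only a back-of-the-envelope check, not how the EU would assess a model.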

The guidelines focus on four core areas: transparency, copyright compliance, risk assessment, and technical/governance risk mitigation. Developers are now required to disclose the sources of data used for training their models, addressing longstanding concerns from copyright holders and creators.

Risk Management Frameworks to Mitigate Threats

The draft proposes a Safety and Security Framework (SSF), urging companies to assess and manage risks proportionate to their AI models’ potential impact. Key risks include cyber offenses, discriminatory outputs, and loss of control over AI systems. Developers must implement failsafe mechanisms, secure model data, and continuously evaluate their systems’ resilience.

Transparency and Accountability at the Forefront

AI makers are now tasked with providing detailed documentation about their training methods, including the use of web crawlers. The EU stresses that such transparency not only mitigates risks but also protects creators’ intellectual property. Additionally, the governance guidelines mandate ongoing risk assessments and the involvement of external experts to ensure robust accountability.

Severe Penalties for Non-Compliance

Non-compliance comes at a hefty cost: companies face fines of up to €35 million (approximately $36.8 million) or seven percent of their global annual revenue—whichever is higher. These penalties underline the EU’s commitment to making AI development safe and ethical.
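The "whichever is higher" rule means the effective cap scales with company size. A minimal sketch of the arithmetic (the revenue figures are hypothetical examples, not data about any company):

```python
# Maximum AI Act penalty: the greater of EUR 35 million or 7% of
# global annual revenue.
FLAT_CAP_EUR = 35_000_000

def max_fine_eur(global_annual_revenue_eur: int) -> float:
    """Return the maximum possible fine under the 'whichever is higher' rule."""
    return max(FLAT_CAP_EUR, global_annual_revenue_eur * 7 / 100)

# For a company with EUR 2 billion in revenue, 7% (EUR 140M) exceeds
# the EUR 35M floor; for EUR 100M in revenue, the floor applies.
print(max_fine_eur(2_000_000_000))  # 140000000.0
print(max_fine_eur(100_000_000))    # 35000000
```

In other words, the flat €35 million figure only binds for companies whose global revenue is below €500 million; above that, the seven-percent prong dominates.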

A Collaborative Approach

Stakeholders are invited to contribute feedback via the Futurium platform to shape the draft further. This collaborative process is intended to keep the guidelines adaptable and inclusive as the AI landscape evolves.

To dive deeper into the EU’s regulatory vision for AI, read the full article at https://www.engadget.com/ai/the-eu-publishes-the-first-draft-of-regulatory-guidance-for-general-purpose-ai-models-223447394.html