EU introduces draft regulatory guidance for AI models

The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.

The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:

  • Working Group 1: Transparency and copyright-related rules
  • Working Group 2: Risk identification and assessment for systemic risk
  • Working Group 3: Technical risk mitigation for systemic risk
  • Working Group 4: Governance risk mitigation for systemic risk

The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes international approaches into account, strives for proportionality to risks, and aims to be future-proof by accommodating rapid technological change.

Key objectives outlined in the draft include:

  • Clarifying compliance methods for providers of general-purpose AI models
  • Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
  • Ensuring compliance with Union copyright law, especially concerning the use of copyrighted material for model training
  • Continuously assessing and mitigating systemic risks associated with AI models

Recognising and mitigating systemic risks

A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.

As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.

The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.

Taking a proactive stance on AI regulatory guidance

The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.

As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.

While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.

This draft is open for written feedback until 28 November 2024. 

See also: Anthropic urges AI regulation to avoid catastrophes
