On August 2, 2025, new obligations for providers of general-purpose AI models begin to apply under the EU AI Act, the world’s first comprehensive law regulating artificial intelligence. These rules affect the models behind systems such as ChatGPT, Claude, and Gemini — platforms used by millions of people every day.


What Are General-Purpose AI Models?

General-purpose AI (GPAI) models are designed to handle a wide range of tasks, from writing and translating to analyzing data and generating creative content. Because they can be applied in both everyday and high-risk settings, these models face specific regulatory attention under the EU’s new framework.

AI Usage Notice: In preparing this article, AI tools were used with careful human oversight and editing. We believe in transparency regarding the use of AI in our work.

A Voluntary Code as a Bridge

In July 2025, the European Commission introduced a voluntary Code of Practice for general-purpose AI. Although not legally binding, this document offers essential guidance for companies working toward compliance with the AI Act, especially Articles 53 and 55, which set out obligations for providers of GPAI models and for providers of GPAI models with systemic risk.

The Code acts as a transitional tool until formal technical standards are adopted (expected by 2027), helping companies navigate transparency, risk management, and responsible deployment.

Who Must Comply?

The obligations apply to all providers of general-purpose AI models placed on the EU market — regardless of whether the company is based in Europe or elsewhere.

  • New models released after August 2, 2025, must comply immediately.
  • Existing models released before that date have a grace period and must comply by August 2, 2027.

What Comes Next?

This milestone is part of a phased rollout of AI regulation in the EU. Following the adoption of the AI Act in 2024, which banned certain “unacceptable-risk” practices (such as social scoring and manipulative techniques), with enforcement starting on February 2, 2025, the spotlight now shifts to general-purpose AI.

Future phases will include:

  • 2026 – enforcement of rules for high-risk AI systems (e.g., in health, education, and law enforcement)
  • 2027 – adoption of full technical standards and further obligations