On 1 August 2024, the European Artificial Intelligence Act (AI Act) enters into force. The Act aims to promote the responsible development and deployment of artificial intelligence in the EU.
The AI Act, proposed by the European Commission in April 2021 and approved by the European Parliament and the Council in December 2023, addresses potential risks to citizens’ health, safety and fundamental rights. It sets out clear requirements and obligations for developers and deployers regarding specific uses of AI, while reducing administrative and financial burdens for businesses.
The AI Act will introduce a harmonised framework for all EU countries, based on a forward-looking definition of AI and a risk-based approach.
The Act defines four levels of risk:
Minimal risk: Most AI systems, such as spam filters or AI-enabled video games, face no obligations under the AI Act, though companies can voluntarily adopt additional codes of conduct.
Specific transparency risk: Systems such as chatbots must clearly inform users that they are interacting with a machine, and certain AI-generated content must be labelled as such.
High risk: High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must comply with strict requirements, including risk-mitigation systems, high-quality datasets, clear user information and human oversight.
Unacceptable risk: AI systems considered a clear threat to people’s fundamental rights, for example those that enable “social scoring” by governments or companies, are prohibited.
The EU aspires to be a world leader in safe AI. By building a strong regulatory framework based on human rights and fundamental values, the EU can develop an AI ecosystem that benefits everyone. For citizens, this means better healthcare, safer and cleaner transport, and improved public services. For businesses, it means innovative products and services, particularly in energy, security and healthcare, as well as higher productivity and more efficient manufacturing; and for governments, cheaper and more sustainable services such as transport, energy and waste management.
Recently, the European Commission launched a consultation on a Code of Practice for providers of general-purpose artificial intelligence (GPAI) models. The Code, envisaged in the AI Act, will address key areas such as transparency, copyright-related rules and risk management. GPAI providers operating in the EU, businesses, civil society representatives, rights holders and academic experts have been invited to submit their views, which will feed into the Commission’s upcoming draft Code of Practice for GPAI models.
The provisions on GPAI will become applicable 12 months after entry into force, and the Commission is expected to finalise the Code of Practice by April 2025. Feedback from the consultation will also feed into the work of the AI Office, which will oversee the implementation and enforcement of the AI Act’s rules on GPAI models.
For more information
European Artificial Intelligence Act comes into force – Press Release
Artificial Intelligence – Q&A
More on the European AI Act
Excellence and Trust in Artificial Intelligence
AI Act: Have your say on trustworthy general-purpose AI
European AI Office