The European Union’s risk-based rulebook for artificial intelligence, better known as the EU AI Act, has been years in the making. Expect to hear a lot more about the regulation in the coming months (and years) as key compliance deadlines kick in. In the meantime, read on for an overview of the law and its aims.
So what is the EU trying to achieve? Turn the clock back to April 2021, when the European Commission published its original proposal. At the time, lawmakers framed the legislation as a way to strengthen the bloc’s ability to innovate in AI by fostering public trust. The framework would ensure AI technologies remain “human-centric”, the EU said, while also giving businesses clear rules to work their machine learning magic.
The increased adoption of automation across industries and society certainly has the potential to turbocharge productivity in all sorts of areas. But it also carries the risk of harms scaling fast if outputs are poor and/or if AI intersects with individual rights and fails to respect them.
The goal of the bloc’s AI legislation is therefore to drive adoption of AI and grow a regional AI ecosystem by setting conditions intended to shrink the risk of things going horribly wrong. That’s it. Lawmakers reckon that having guardrails in place will boost public trust in, and uptake of, AI.
This idea of fostering an ecosystem through trust was largely uncontroversial at the beginning of the decade when this law was being debated and drafted. However, some critics argued that it was too early to regulate AI and that it could harm Europe’s innovation and competitiveness.
Of course, few would say it’s too early now, considering the technology has exploded into mainstream consciousness thanks to the boom in generative AI tools. However, despite the inclusion of supportive measures such as a regulatory sandbox, there remain opponents who argue that the law will hinder the prospects of homegrown AI entrepreneurs.
Still, how to regulate AI is a live debate for lawmakers the world over, and the EU set the tone with its AI Act. The next few years will hinge on how well the bloc executes its plan.
What does the AI Act require?
Most uses of AI are not regulated at all under the AI Act, as they fall outside the scope of the risk-based rules. (It’s also worth noting that military uses of AI are entirely out of scope, since national security is a legal competence of member states rather than of the EU.)
For in-scope uses, the Act’s risk-based approach deems a small number of potential use cases (e.g. “harmful subliminal, manipulative and deceptive techniques” or “unacceptable social scoring”) to pose “unacceptable risk” and bans them outright. However, the list of prohibited uses is riddled with exceptions, meaning even the law’s small number of bans carries plenty of caveats.
For example, law enforcement use of real-time remote biometric identification in publicly accessible spaces is not the blanket ban some lawmakers and many civil society groups had pushed for; its use is permitted by exception for certain specific crimes.
The next tier down from unacceptable risk/banned uses covers “high risk” use cases, such as AI apps used for critical infrastructure; law enforcement; education and vocational training; healthcare; and more. Here, app makers must carry out conformity assessments before market deployment and on an ongoing basis (such as when they make substantial updates to a model).
This means developers must be able to demonstrate that they meet the law’s requirements in areas such as data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness. They must also put quality and risk-management systems in place so they can demonstrate compliance if an enforcement authority comes knocking to carry out an audit.
High-risk systems implemented by public authorities must also be registered in the EU’s public database.
There is also a third, “medium risk” category, which applies transparency obligations to AI systems such as chatbots and other tools that can be used to produce synthetic media. The concern here is that they could be used to manipulate people, so this type of tech requires that users be informed they are interacting with, or viewing, AI-generated content.
All other uses of AI are automatically classed as low/minimal risk and are not regulated. That means, for example, activities like using AI to sort and recommend social media content or target advertising carry no obligations under these rules. But the bloc encourages all AI developers to voluntarily follow best practices to boost user trust.
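As a rough illustration of how the tiering fits together, here is a minimal Python sketch mapping example use cases mentioned above onto the four tiers. The tier names and the mapping are assumptions for illustration only; how any given system is actually classified depends on the law’s detailed criteria.

```python
# Illustrative sketch of the AI Act's four-tier, risk-based structure.
# The mapping below uses example use cases from the article; it is not a
# legal classification tool.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (subject to narrow exceptions)"
    HIGH = "conformity assessments before and after market deployment"
    MEDIUM = "transparency obligations (users must be informed)"
    MINIMAL = "no obligations; voluntary best practice encouraged"

EXAMPLE_USE_CASES = {
    "unacceptable social scoring": RiskTier.UNACCEPTABLE,
    "AI in critical infrastructure": RiskTier.HIGH,
    "AI in education and vocational training": RiskTier.HIGH,
    "chatbots / synthetic media tools": RiskTier.MEDIUM,
    "social media content recommendation": RiskTier.MINIMAL,
    "ad targeting": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```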
This tiered set of risk-based rules makes up the bulk of the AI Act. But there are also some dedicated requirements for the versatile models that underpin generative AI technologies, which the law refers to as “general purpose AI” models (or GPAIs).
This subset of AI technologies, which the industry sometimes calls “foundation models”, typically sits upstream of many apps that implement artificial intelligence. Developers tap APIs for GPAIs to deploy those models’ capabilities into their own software, often fine-tuned for a specific use case to add value. All of which means GPAIs can quickly gain a powerful position in the market, with the potential to influence AI outcomes at scale.
GenAI has joined the chat…
The rise of GenAI reshaped more than just the debate around the EU’s AI rules; it led to changes to the rulebook itself, as the bloc’s lengthy legislative process coincided with the hype around GenAI tools like ChatGPT and Members of the European Parliament seized the opportunity to respond.
MEPs proposed adding extra rules for GPAIs, the models underpinning GenAI tools. That, in turn, sharpened the tech industry’s attention on what the EU was doing with the law, leading to intense lobbying for a carve-out for GPAIs.
French AI company Mistral was one of the loudest voices, arguing that rules on model makers would hold back Europe’s ability to compete against AI giants from the US and China. OpenAI’s Sam Altman also chipped in, hinting in an aside to journalists that his company might pull its technology out of Europe if the law proved too onerous, before hastily falling back on traditional lobbying of regional powerbrokers after the EU called him out over that clumsy threat.
Altman getting a crash course in European diplomacy counts as one of the more notable side effects of the AI Act.
The upshot of all that noise was an uphill battle to conclude the legislative process. It took months, and a marathon final negotiating session between the European Parliament, the Council and the European Commission, to get the file over the line last year. The political agreement was clinched in December 2023, paving the way for adoption of the final text in May 2024.
The EU touts its AI Act as a “world first”. But being first with such cutting-edge technology means a lot of detail still has to be worked out, from setting the specific standards the law applies to, to drafting detailed compliance guidance (codes of practice), so that the oversight and ecosystem-building regime the law is designed to be can actually function.
So, as far as judging its success goes, the law remains a work in progress, and will be for a long time to come.
For GPAIs, the AI Act continues its risk-based approach, with (only) lighter-touch requirements applying to most of these models.
For commercial GPAIs, that means transparency rules, including technical documentation requirements and disclosures about the copyrighted material used to train models. These provisions are intended to help downstream developers meet their own AI Act obligations.
There is also a second tier for the most powerful (and potentially riskiest) GPAIs: the law dials up obligations on makers of models deemed to pose “systemic risk”, requiring proactive risk assessment and mitigation.
Here the EU is worried about, for example, very powerful AI models that could pose risks to human life, or even the risk that model makers lose control over the continued development of self-improving AI.
Lawmakers opted to rely on a compute threshold for model training as the classifier for this systemic-risk tier: a GPAI falls into this bracket if the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), exceeds 10^25.
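For a sense of scale, here is a hedged sketch of how one might roughly estimate whether a training run approaches that threshold, using the common “6 × parameters × training tokens” rule of thumb for dense transformers. Both the heuristic and the example figures are assumptions for illustration; the Act itself does not prescribe a calculation method.

```python
# Rough sketch: estimating whether a training run might cross the AI Act's
# 10^25 FLOP systemic-risk threshold. The 6 * params * tokens heuristic is a
# widely used approximation for dense transformers, not part of the law.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold set out in the AI Act

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Systemic-risk tier?", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```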
As of now, no models are believed to be in scope of this tier, though of course that could change as GenAI keeps developing.
AI safety experts involved in overseeing the AI Act also have scope to flag concerns about systemic risks that may arise elsewhere. (For more on the governance structure the bloc has devised for the AI Act, including the various roles of the AI Office, see our earlier report.)
Following lobbying by Mistral and others, the GPAI rules were watered down, with lighter requirements on open source providers, for example (lucky Mistral!). R&D was also carved out, meaning GPAIs that have not yet been commercialized fall entirely outside the scope of the law, without even the transparency requirements applying.
The long march towards compliance
The AI Act officially entered into force across the EU on August 1, 2024. That date effectively fired the starting gun, with compliance deadlines for its various components staggered from early next year through to around mid-2027.
Some of the key compliance deadlines, counted from entry into force, are: six months in for the rules on prohibited use cases; nine months in for the codes of practice starting to apply; 12 months in for transparency and governance requirements; 24 months in for other AI requirements, including obligations for some high-risk systems; and 36 months in for other high-risk systems. A rough sketch of how these milestones map onto calendar dates follows below.
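Here is that sketch: a minimal Python snippet translating the staggered milestones into calendar dates, counting from the August 1, 2024 entry into force. The month offsets come from the list above; treat the resulting dates as approximate illustrations rather than legal deadlines.

```python
# Minimal sketch mapping the staggered compliance milestones onto dates,
# counting from the Act's entry into force on August 1, 2024. Illustrative only.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = {
    "Prohibited use cases": 6,
    "Codes of practice apply": 9,
    "Transparency and governance requirements": 12,
    "Other requirements, incl. some high-risk systems": 24,
    "Remaining high-risk systems": 36,
}

def add_months(start: date, months: int) -> date:
    """Add a whole number of months to a date (day of month preserved; here it is the 1st)."""
    year, month_index = divmod(start.year * 12 + (start.month - 1) + months, 12)
    return date(year, month_index + 1, start.day)

for label, offset in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, offset)}: {label}")
```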
Part of the reason for this staggered approach is to give companies enough time to get their houses in order. But beyond that, it’s clear regulators also need time to work out what compliance looks like in such a cutting-edge context.
At the time of writing, the bloc is busy formulating guidance on various aspects of the law ahead of those deadlines, including a code of practice for makers of GPAIs. The EU is also consulting on the law’s definition of “AI systems” (i.e. which software will or won’t be in scope) and on clarifications relating to prohibited uses of AI.
The full picture of what the AI Act will mean for in-scope companies is still coming into focus, but key details are expected to be locked down in the coming months and into early next year.
Another consideration: given the pace of development in the AI field, what’s required to stay on the right side of the law will likely keep shifting as these technologies (and their associated risks) continue to evolve. So this is one rulebook that may well need to remain a living document.
AI rule enforcement
Oversight of GPAIs is centralized at the EU level, with the AI Office playing a key role. Penalties the European Commission can reach for to enforce these rules can run to up to 3% of a model maker’s global turnover.
Elsewhere, enforcement of the law’s rules for AI systems is decentralized, meaning it will fall to member state-level authorities (plural, as more than one oversight body may be designated) to assess and investigate compliance issues for most AI apps. How workable this structure will prove remains to be seen.
On penalties, breaches of the prohibited uses can, in theory, attract fines of up to 7% of global turnover (or €35 million, whichever is greater). Breaches of other AI obligations can be sanctioned with fines of up to 3% of global turnover, dropping to up to 1.5% for supplying incorrect information to regulators. So there is a sliding scale of sanctions enforcement authorities can reach for.
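To make that sliding scale concrete, here is a small illustrative sketch of the penalty ceilings described above, applied to a hypothetical company’s turnover. The turnover figure is an assumption; only the percentages and the €35 million floor come from the description above.

```python
# Illustrative sketch of the sliding penalty scale described in the article.
# The turnover figure is a made-up example; the ceilings are those cited above.

def max_fine_prohibited_use(global_turnover_eur: float) -> float:
    """Breaches of prohibited uses: up to 7% of turnover or EUR 35M, whichever is greater."""
    return max(0.07 * global_turnover_eur, 35_000_000)

def max_fine_other_obligations(global_turnover_eur: float) -> float:
    """Breaches of other AI obligations: up to 3% of global turnover."""
    return 0.03 * global_turnover_eur

def max_fine_incorrect_information(global_turnover_eur: float) -> float:
    """Supplying incorrect information to regulators: up to 1.5% of global turnover."""
    return 0.015 * global_turnover_eur

# Hypothetical company with EUR 2B in global annual turnover.
turnover = 2_000_000_000
print(max_fine_prohibited_use(turnover))         # 140,000,000
print(max_fine_other_obligations(turnover))      # 60,000,000
print(max_fine_incorrect_information(turnover))  # 30,000,000
```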