The European Union is so far the only jurisdiction globally to push forward comprehensive rules for artificial intelligence with its AI Act.
The European Union on Sunday officially began enforcing its landmark artificial intelligence law, paving the way for tough restrictions and potentially large fines for violations.
The EU AI Act officially entered into force in August 2024.
Sunday marked the deadline for complying with the law's prohibitions on certain artificial intelligence systems, as well as with requirements to ensure staff have sufficient technology literacy.
In other words, companies must now comply with those restrictions, and those that do not risk penalties.
The AI Act bans certain applications of AI that it deems to pose an "unacceptable risk" to citizens.
These include social scoring systems; real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes; and "manipulative" AI tools.
Companies face fines of up to 35 million euros ($35.8 million) or 7% of their global annual revenues, whichever is higher, for breaches of the EU AI Act.
The size of the penalties will depend on the infringement and the size of the company being fined.
That is higher than the fines possible under the GDPR, Europe's strict digital privacy law, under which companies face penalties of up to 20 million euros or 4% of annual global turnover.
Not perfect, but "very much needed"
It is worth stressing that the AI Act is not yet in full force; Sunday's milestone is only the first step in a series of many upcoming developments.
Tasos Stampelos, head of EU public policy and government relations at Mozilla, told CNBC that while it is "not perfect," the EU's AI Act is "very much needed."
"It's quite important to recognize that the AI Act is predominantly a product safety piece of legislation," Stampelos said in a CNBC-moderated panel in November.
"With product safety rules, it's not a done deal the moment the law is adopted. A lot of things come after the adoption of the act," he said.
"Right now, compliance will depend on standards, guidelines, secondary legislation or derivative instruments that follow the AI Act, which will actually stipulate what compliance looks like," Stampelos said.
In December, the EU AI Office, a newly created body regulating the use of models in accordance with the AI Act, published a second draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI's GPT family of large language models, or LLMs.
The second draft contained exemptions for providers of certain open-source AI models, while also including a requirement for developers of "systemic" GPAI models to undergo rigorous risk assessments.
Setting a global standard?
Some tech executives and investors are unhappy with some of the more burdensome aspects of the AI Act and worry that it could strangle innovation.
In June 2024, Prince Constantijn of the Netherlands told CNBC in an interview that he was "really concerned" about Europe's focus on regulating AI.
"Our ambition seems to be limited to being good regulators," Constantijn said. "It's good to have guardrails. We want to bring clarity to the market, predictability and all that. But it's very hard to do that in such a fast-moving space."
Still, some think that having clear rules for AI could give Europe an edge in leadership.
"While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones," Diyan Bogdanov, director of engineering intelligence and growth at Bulgarian fintech firm Payhawk, said via email.
"The EU AI Act's requirements around bias detection, regular risk assessments and human oversight aren't limiting innovation. They are defining what good looks like," he added.