U.S. President-elect Donald Trump and Elon Musk watch the sixth test flight of the SpaceX Starship rocket on November 19, 2024 in Brownsville, Texas.
Brandon Bell | via Reuters
The U.S. political landscape will undergo some changes in 2025, and those changes will have a significant impact on the regulation of artificial intelligence.
President-elect Donald Trump will be inaugurated on January 20. He will be joined in the White House by a number of top advisers from the business world, including Elon Musk and Vivek Ramaswamy, who are expected to influence policy thinking around nascent technologies such as artificial intelligence and cryptocurrencies.
Across the Atlantic, regulatory thinking in the UK and the European Union is diverging, and a tale of two jurisdictions is emerging. The EU has taken a tougher stance on the Silicon Valley giants behind the most powerful AI systems, while the UK has adopted a lighter-touch approach.
2025 is likely to see a major overhaul of the AI regulatory landscape around the world. CNBC takes a look at some of the key developments to watch, from the evolution of the EU's groundbreaking AI Act to what a Trump administration could mean for the United States.
Musk’s influence on US policy
Elon Musk walks through the Capitol Building on the day of his meeting with incoming Senate Republican Leader John Thune (R-SD) on December 5, 2024 in Washington, United States.
Benoît Tessier | Reuters
Although it wasn’t a major issue during Trump’s campaign, artificial intelligence is expected to be one of the key sectors to benefit under the incoming US administration.
For one, Trump has named Musk, CEO of electric car maker Tesla, to co-lead his “Department of Government Efficiency” alongside Ramaswamy, an American biotech entrepreneur who withdrew from the 2024 presidential race to back Trump.
Appian CEO Matt Calkins told CNBC that Trump’s close relationship with Musk could put the US in an advantageous position when it comes to AI, citing the billionaire’s experience as a co-founder of OpenAI and CEO of his own AI lab, xAI, as positive indicators.
“We finally have someone in the U.S. government who truly knows and has an opinion about AI,” Calkins said in an interview last month. Musk is one of Trump’s most prominent supporters in the business world and has appeared at some of his campaign rallies.
There is no confirmation yet of what Trump plans in terms of executive orders on AI. But Calkins believes Musk is likely to propose guardrails to ensure AI development does not endanger civilization, a risk Musk has warned about many times in the past.
“He definitely has a reluctance to allow AI to have catastrophic consequences for humanity,” Calkins told CNBC. “He was talking about it long before he had a position.”
Currently, there is no comprehensive federal AI legislation in the United States. Rather, a patchwork regulatory framework exists at the state and local level, with numerous AI bills introduced in 45 states, plus Washington, DC, Puerto Rico, and the US Virgin Islands.
EU AI law
The European Union is so far the only global jurisdiction to pursue comprehensive rules on artificial intelligence in its AI law.
Jack Silva | NurPhoto | Getty Images
The European Union is so far the only jurisdiction globally to push for comprehensive legal rules for the AI industry. Earlier this year, the bloc’s AI Act — the first AI regulatory framework of its kind — officially entered into force.
Although the law has not yet been fully implemented, it has already caused tension among major U.S. tech companies, which are concerned that some aspects of the regulation are too strict and could stifle innovation.
In December, the EU AI Office, the newly created body overseeing models under the AI Act, published the second draft of its code of practice for general-purpose AI (GPAI) models. This refers to systems like OpenAI’s GPT family of large language models, or LLMs.
The second draft included exemptions for providers of certain open-source AI models, which are typically made publicly available so that developers can build their own custom versions. It also included a requirement that developers of “systemic” GPAI models undergo rigorous risk assessments.
The Computer & Communications Industry Association — whose members include Amazon, Google and Meta — warned that the draft “contains measures going far beyond the Act’s agreed scope, such as far-reaching copyright measures.”
The AI Office did not immediately respond to a request for comment from CNBC.
It is worth noting that the EU’s AI Act is still far from fully implemented.
As Shelley McKinley, chief legal officer of the popular code repository platform GitHub, told CNBC in November, “the next phase of the work has begun,” which “could mean there’s more ahead of us at this point than there is behind us.”
For example, the first provisions of the act become applicable in February, targeting “high-risk” AI applications such as remote biometric identification, loan decisions and educational scoring. A third draft of the code for GPAI models is expected to be published the same month.
European tech leaders are concerned that punitive EU measures against US tech firms could provoke a backlash from Trump — which could in turn push the bloc to soften its approach.
Take antitrust as an example. Andy Yen, CEO of Swiss VPN company Proton, said the EU has been active in trying to rein in the dominance of US tech giants, but that this could draw a negative reaction from Trump.
“[Trump’s] view is that he probably wants to regulate the tech companies himself,” Yen told CNBC in an interview at the Web Summit tech conference in Lisbon, Portugal, in November. “He doesn’t want Europe to get involved.”
UK copyright review
British Prime Minister Keir Starmer responds to a media interview while attending the 79th United Nations General Assembly held at the United Nations Headquarters in New York, United States, on September 25, 2024.
Leon Neal | via Reuters
One country worth watching is the United Kingdom. The UK has so far refrained from imposing legal obligations on AI model makers, over concerns that new legislation could prove too restrictive.
Keir Starmer’s government has said it plans to draw up AI legislation, though details remain thin. The general expectation is that the UK will take a more principles-based approach to AI regulation, in contrast to the EU’s risk-based framework.
Last month, the government delivered its first major indication of where regulation may be heading, announcing a consultation on measures to govern the use of copyrighted content for training AI models. Copyright is a particularly big issue for generative AI and LLMs.
Most LLMs are trained on public data from the open web, which often includes artwork and other copyrighted material. Artists and publishers like The New York Times allege that these systems are unfairly scraping their valuable content, without consent, to generate original output.
To address this issue, the UK government is considering creating an exception to copyright law for AI model training, while allowing rights holders to opt out of having their works used for training purposes.
Appian’s Calkins said the UK could eventually become a “global leader” on the issue of copyright infringement by AI models, adding that, unlike the US, the country is not subject to an “overwhelming” lobbying push from domestic AI leaders.
US-China tensions may escalate
U.S. President Donald Trump (right) and Chinese President Xi Jinping walk past members of the People’s Liberation Army (PLA) during a welcome ceremony outside the Great Hall of the People in Beijing, China, Thursday, November 9, 2017.
Qilai Shen | Bloomberg | Getty Images
Finally, there is a risk that geopolitical tensions between the United States and China will escalate under the Trump administration, as governments around the world seek to regulate rapidly growing AI systems.
During his first term as president, Trump pushed through a number of hawkish policies toward China, including the decision to add Huawei to a trade blacklist restricting its dealings with US technology suppliers. He also launched a bid to ban TikTok, owned by Chinese company ByteDance, in the United States — although his stance on TikTok has since softened.
China is competing to beat the US for dominance in the field of AI. At the same time, the United States has taken steps to restrict China’s access to key technologies needed to train more advanced AI models, primarily chips like those designed by Nvidia. China is responding by building its own chip industry.
Technologists are concerned that the geopolitical rift between the United States and China over artificial intelligence could create other risks, such as the possibility that one of the two countries develops a form of AI smarter than humans.
Max Tegmark, founder of the nonprofit Future of Life Institute, believes the US and China could each one day create a form of AI that can improve itself and design new systems without human supervision — potentially forcing both governments to draw up their own rules on AI safety.
“My optimistic path forward is that the US and China unilaterally impose national safety standards to prevent their own companies from doing harm and building uncontrollable AGI — not to appease a rival superpower, but just to protect themselves,” Tegmark said in a November interview with CNBC.
Governments are already trying to work together on regulations and frameworks around AI. In 2023, the UK hosted a global AI safety summit, attended by both the US and Chinese governments, to discuss potential guardrails around the technology.
– CNBC’s Arjun Kharpal contributed to this report