Efforts to regulate the creation and use of artificial intelligence (AI) tools in the United States have been slow to achieve results, but President Joe Biden’s administration has tried to outline how the federal government should use AI and to push AI companies to ensure their tools are safe and secure.
But the incoming Trump administration has very different views on how to approach AI, which could end up reversing some of the progress made over the past few years.
In October 2023, President Biden signed an executive order aimed at promoting “the development and use of safe, secure, and reliable artificial intelligence” within the federal government. President-elect Donald Trump has vowed to repeal the executive order, saying it stifles innovation.
Biden was also able to get seven major AI companies to agree to guidelines on how to safely develop AI going forward. Other than that, there are no federal regulations that specifically address AI. Experts say the Trump administration is likely to take a more hands-off approach to the industry.
“I think the biggest thing we’ll see is a wholesale repeal of the kind of early steps the Biden administration has taken toward meaningful AI regulation,” said Cody Wenzke, senior policy advisor at the ACLU’s National Political Advocacy Office. “I think there’s a real threat that we’re going to see the growth of AI without significant guardrails. It’s going to be a little bit more open-ended.”
The industry has historically grown without guardrails, creating a kind of wild west in AI. That could lead to problems such as the proliferation of deepfake pornography and political deepfakes unless lawmakers restrict how the technology can be used.
One of the biggest concerns of the Biden administration and those in the tech policy field was how generative AI could be used to mount disinformation campaigns, including deepfakes. This type of content could be used to sway election results. Wenzke said he doesn’t expect the Trump administration to focus on preventing the spread of disinformation.
AI regulation may not be a major focus for the Trump administration, but it is worth watching, Wenzke said. Just this week, Trump selected Andrew Ferguson to head the Federal Trade Commission (FTC), and Ferguson is likely to oppose regulation of the industry.
According to a report in Punchbowl News, FTC Commissioner Ferguson said he aims to “put an end to the FTC’s attempts to become an AI regulator,” and argued that the agency, which is independent and accountable to the U.S. Congress, should instead answer fully to the president’s office. He also suggested that the FTC should investigate companies that refuse to run ads next to hateful and extremist content on social media platforms.
Wenzke said Republicans believe Democrats are trying to regulate AI to make it “woke” — which, in their framing, would mean acknowledging things like the existence of transgender people and man-made climate change.
AI’s ability to “inform decisions”
However, artificial intelligence does more than just answer questions and generate images and videos. Kit Walsh, director of the Electronic Frontier Foundation’s AI and Access to Knowledge Legal Project, told Al Jazeera that AI is being used in a variety of ways that threaten people’s personal freedoms, including in court cases, and that it needs to be regulated to prevent harm.
People assume that computers can eliminate bias from decision-making, but when AI is built on historical data that is itself biased, it can actually entrench those biases further. For example, an AI system created to determine who gets parole could draw on data from cases in which Black Americans were treated more harshly than white Americans.
“The most important issue with AI right now is using it to inform decisions about people’s rights,” Walsh says. “That ranges from predictive policing to deciding who gets government housing and health benefits. It’s also the private use of algorithmic decision-making about hiring, firing, housing, and more.”
Walsh said there is a lot of “optimism and solutionism around technology” among some of the people Trump is recruiting into his administration, and that they may ultimately try to use AI to promote “more efficiency in government.”
This is the goal of figures like Elon Musk and Vivek Ramaswamy, who lead what appears to be an advisory committee called the Department of Government Efficiency.
“It’s true that if you can tolerate less accurate decisions [with AI tools], you can save money or lay off some employees,” Walsh said. “But I would not recommend that, because it would harm people who rely on government agencies for essential services.”
If Trump’s first term as U.S. president, from 2017 to 2021, is any indication, his administration will spend more time removing regulations than creating new ones — including any rules governing the creation and use of AI tools.
“We want smart regulations that pave the way for socially responsible development, deployment, and use of AI,” said Shyam Sundar, director of Penn State’s Center for Socially Responsible Artificial Intelligence. “At the same time, regulations should not be so strong that they stifle innovation.”
Sundar said the “new revolution” sparked by generative AI has created “a bit of a Wild West mentality among technologists.” He said future regulations should focus on putting guardrails in place where necessary and encouraging innovation in areas where AI can be useful.