New York (CNN) –
Elon Musk’s government efficiency team is reportedly using artificial intelligence to guide its cost-cutting decisions, an approach that AI experts warn could lead to security breaches, biased firing decisions and the loss of highly qualified, critical government staff.
“Relying on AI systems for something like this is extremely complicated and difficult, and it carries a huge risk of violating people’s civil rights,” said David Evan Harris, an AI researcher who previously worked on Meta’s responsible AI team. “With the AI systems we have today, doing something like this is simply a bad idea.”
Musk says he aims to rapidly cut at least $1 trillion from the federal deficit. But in the process, his work with the Department of Government Efficiency (DOGE) has sparked uncertainty, confusion and frustration across the government.
Several recent media reports citing unnamed sources indicate that Musk’s DOGE team is using AI to accelerate those cuts.
Experts say the approach reflects the same “cut first, fix later” playbook Musk brought to his acquisition of Twitter two years ago. There, thousands of workers lost their jobs, and technical glitches, lawsuits and controversial policy changes alienated users and undermined the platform’s core advertising business. But the consequences of dismantling government agencies, systems and services could be far broader and more serious than those of slimming down a tech company.
“If you have a private company, that’s a little different,” John Hutton, vice president of policy and programs for the National Active and Retired Federal Employees Association, told CNN. “You do that in the federal government, and people may die.”
The move also comes as Musk attempts to establish himself and his startup xAI as leaders in the AI industry. It is not clear whether the company’s technology is being used by DOGE.
Representatives for Musk, DOGE and the US Office of Personnel Management did not respond to requests for comment.
In early February, DOGE staffers fed sensitive Department of Education data into AI software accessed through Microsoft’s cloud services to analyze the agency’s programs and spending, the Washington Post reported, citing two unnamed people familiar with the group’s actions.
DOGE staffers are also developing a custom AI chatbot for the US General Services Administration called GSAi, Wired reported last month, citing two people familiar with the project. One unnamed source said the tool would help “analyze huge swaths of contract and procurement data.”
And after the Office of Personnel Management sent an email to federal workers on February 22 asking them to reply with five bullet points detailing what they accomplished last week, DOGE staffers considered using AI to analyze the responses, NBC News reported, citing unnamed sources familiar with the plans. The AI system would evaluate the responses to determine which positions are no longer needed, according to the report, which did not specify what AI tool would be used.
Musk said in a post on X that AI was not “needed” to review the responses and that the email was “basically a check to see if the employee had a pulse.”
Wired also reported last month that DOGE operatives have edited software developed by the Department of Defense known as AutoRIF, or Automated Reduction in Force.
Last week, 21 employees of the US Digital Service (USDS), the agency that was converted into DOGE under the Trump administration, said they had resigned in protest. The group did not specifically mention AI but wrote: “We will not use our skills as technologists to compromise core government systems, jeopardize Americans’ sensitive data, or dismantle critical public services.” The group shared the letter, addressed to White House chief of staff Susie Wiles, online.
White House press secretary Karoline Leavitt responded to the resignations in a statement, saying that anyone who thinks “protests, lawsuits, and lawfare” will deter President Trump “must have been sleeping under a rock for the past several years,” according to an Associated Press report.
In a post on X, Musk called the USDS employees who resigned “Dem political holdovers who refused to return to the office.”
Part of the problem is that building effective, useful AI tools requires a deep understanding of the data used to train them, an understanding the DOGE team may not have, according to Amanda Renteria, CEO of Code for America, a nonprofit that works with governments to build digital tools and improve technical capacity.
“You can’t train (AI tools) on a system you don’t know very well,” Renteria told CNN, or the tool’s output may not make sense. AI tools can also sometimes get things wrong or make things up, a phenomenon known as “hallucination,” and people unfamiliar with the underlying data who are seeking analysis from the technology may not catch those mistakes.
“Because government systems are old, we often can’t expect to deploy new technology and get the right results,” she said.

In their letter, the former USDS employees said they were interviewed by people wearing White House visitor badges who “demonstrated limited technical ability,” and they accused DOGE of “mishandling sensitive data and breaking critical systems.”
Among the employees working at DOGE, CNN and others have reported, are a number of young men in their early 20s, some of them brought over from Musk’s other companies.
The White House has said that Amy Gleason, who has a healthcare background and worked for USDS during President Donald Trump’s first term, is DOGE’s acting administrator, though Leavitt has said Musk is overseeing the group’s efforts.
On Monday, Democracy Forward, a left-leaning nonprofit focused on the US executive branch, said it had submitted a series of Freedom of Information Act requests as part of an investigation into reported AI use by DOGE and the Trump administration. “Americans deserve to know what is happening, including whether and how artificial intelligence is being used to reshape the departments and agencies people rely on every day,” Skye Perryman, CEO of Democracy Forward, said in a statement.
Many of the concerns surrounding DOGE’s use of AI mirror those raised about the technology in other settings, including the risk that it will replicate biases that often exist among humans.
For example, some AI hiring tools have been shown to favor White, male applicants over other candidates. Big tech companies have been accused of discrimination over how their algorithms served job and housing ads. AI-powered facial recognition technology used by police has led to wrongful arrests. And various AI image generation tools have taken heat for producing inaccurate or offensive portrayals of different races.
If AI is used to determine which roles and projects to cut from the government, critical staff could be removed because of how they look, whom they serve or simply how they work, and women and people of color could be disproportionately harmed, Harris said.
Consider, for example, the idea of using AI to evaluate the email responses in which federal employees outlined their weekly accomplishments. Harris said responses from “really talented” federal workers whose first language is not English could be judged less favorably by an AI system than writing by native English speakers.
“Even if the AI system is not programmed to be biased, it may still prefer the type of language that a particular group uses over other groups,” he said.
These concerns are not new, but the potential fallout from using AI to determine sweeping government cuts could be more severe than in other settings.
Musk has acknowledged that DOGE can make mistakes, and it has already cut crucial efforts such as Ebola prevention. It is not clear whether or how AI was involved in that decision.
AI does offer efficiency benefits; it can quickly analyze and synthesize huge amounts of information. But if not used carefully, it could also put sensitive government data and people’s personal information at risk, experts say. Without adequate protections and restrictions on who has access to the system, data fed into an AI program as part of one query could surface unexpectedly in the response to another request.
Harris is particularly concerned about how DOGE handles personnel records, which he described as among the “most sensitive types of documents in any organization.”
The idea that the people in this group, who have had little time to be trained on how to handle very sensitive documents, suddenly not only have access to the HR records of a wide range of public agencies but can also use those records to make rapid firing decisions troubles him, he said.
And Renteria said the consequences of lax government data security could be significant.
When society loses confidence that the government is taking care of people’s data, or at least trying to, Renteria said, it starts to erode basic functions, from people filing their taxes to accessing food assistance.
But perhaps the most pressing concern is the lack of transparency around DOGE’s reported use of AI. Which AI tools are being used? How have they been vetted? And are humans overseeing and auditing the results? CNN sent these questions to DOGE and received no response.
Julia Stoyanovich, an associate professor of computer science and director of the Center for Responsible AI at New York University, said that for AI to be effective, its users need to be clear about their goals for the technology and to properly test whether the AI system meets those needs.
“I’m really interested to hear the DOGE team articulate how they are measuring performance and how they are measuring the correctness of the results,” she said.