Scale AI CEO Alexandr Wang testifies before the House Armed Services Subcommittee on Cyber, Information Technologies and Innovation during a hearing on battlefield AI on Capitol Hill in Washington, July 18, 2023.
Jonathan Ernst | Reuters
Scale AI on Wednesday announced a contract with the Department of Defense for its flagship AI agent program, a significant move in the controversial military use of artificial intelligence.
The startup, which provides training data to major AI players including OpenAI, Google, Microsoft and Meta, has been awarded a prototype contract from the DOD for its flagship program, Thunderforge, according to the release.
Sources familiar with the matter, who asked not to be named due to the confidential nature of the contract, said it was a multimillion-dollar deal.
Led by the Defense Innovation Unit, the program will develop and deploy AI agents, incorporating a team of “global technology partners” that includes Anduril and Microsoft. Uses include modeling and simulation, decision support, proposed courses of action and even automated workflows. The program’s deployment will begin with U.S. Indo-Pacific Command and U.S. European Command, then expand to other areas.
According to a release from DIU, “Thunderforge marks a critical shift toward AI-powered, data-driven warfare, ensuring that we can predict and respond to threats with speed and accuracy.”
“Our AI solutions will transform today’s military operational processes and modernize America’s defense. … DIU’s enhanced speed offers the greatest technological advantage to our nation’s military leaders,” CEO Alexandr Wang said in a statement.
Both Scale and DIU emphasized speed, highlighting how AI can help military units make faster decisions. DIU’s release mentioned the need for speed, or a synonym, eight times.
DIU Director Doug Beck highlighted “machine speed” in a statement, while DIU Thunderforge program lead and contractor Bryce Goodman said there is “a fundamental mismatch between the speed of modern warfare and the ability to respond.”
Scale says the program will operate under human oversight, though DIU did not emphasize that point.
AI Military Partnerships
Scale’s announcement is part of a broader trend of AI companies not only walking back bans on military use of their products, but also signing partnerships with defense industry giants and the Department of Defense.
In November, Anthropic, the Amazon-backed AI startup founded by former OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to provide U.S. defense and intelligence agencies access to Anthropic’s Claude 3 and 3.5 family of models on AWS. This fall, Palantir signed a new five-year contract worth up to $100 million to expand U.S. military access to its Maven AI warfare program.
In December, OpenAI and Anduril announced a partnership allowing the defense technology company to deploy advanced AI systems for “national security missions.”
The OpenAI-Anduril partnership focuses on “improving the nation’s counter-unmanned aircraft systems (CUAS) and the ability to detect, assess and respond to deadly airborne threats in real time.”
Anduril, co-founded by Palmer Luckey, did not answer CNBC’s questions at the time about whether reducing the burden on human operators would translate to fewer humans in the loop on high-stakes warfare decisions.
At the time, OpenAI told CNBC it stands by the policy in its mission statement prohibiting the use of its AI systems to harm others.
But according to some industry experts, that is easier said than done.
“The problem is that you don’t have control over how the technology is actually used. If not in its current usage, then certainly in the long run once the technology has already been shared,” said Margaret Mitchell, chief ethics scientist at Hugging Face. “So I’m a bit curious about how companies actually handle that. Does someone with a security clearance literally investigate its usage and verify that it stays within limits that are not directly harmful?”
According to Mitchell, Hugging Face, the AI startup and OpenAI competitor where she works, has previously turned down military contracts, including ones that did not involve the possibility of direct harm. She said the team understood that such work would be “a step away from direct harm,” adding, “It was very clear that even what appears harmless is one part of a surveillance pipeline.”
Scale AI CEO Alexandr Wang speaks on CNBC’s “Squawk Box” outside the World Economic Forum in Davos, Switzerland, on January 23, 2025.
CNBC
Mitchell said even summarizing social media posts is considered only a step away from direct harm, as such summaries could potentially be used to identify and target enemy combatants.
“If it’s one step away from harm but helps propagate the harm, is it actually good?” Mitchell said. “It’s a somewhat arbitrary line in the sand, one that feels better suited to the company’s PR and employee morale than to actually being a better ethical situation. … You can say, ‘We told them not to use this technology for harm.’”
Mitchell called it “a game of words that provides some sort of veneer of acceptance and non-violence.”
Tech’s Military Pivot
In February, Google removed its pledge to refrain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company’s updated “AI Principles.” This is a change from prior versions, which said Google would not pursue “weapons or other technologies that are primarily intended or implemented to cause or directly facilitate injury to people” and “technologies that collect or use information for surveillance in violation of internationally accepted norms.”
In January 2024, Microsoft-backed OpenAI quietly removed its ban on the military use of ChatGPT and its other AI tools, just as it began working with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools.
Until then, OpenAI’s policy page specified that the company did not allow its models to be used for weapons development or for “highly risky activities” such as military and warfare. In the updated language, OpenAI removed the specific reference to the military, though its policy still states that users should not “use the service to harm themselves or others,” including by “developing or using weapons.”
The news of military partnerships and mission statement changes follows years of controversy over technology companies developing technology for military use, highlighted by concerns among tech workers, especially those working on AI.
Those concerns have surfaced at almost every tech giant involved in military contracts, after thousands of Google employees protested the company’s involvement in the Pentagon’s Project Maven.
Palantir later took over the contract.
Microsoft employees protested a $480 million Army contract to supply soldiers with augmented reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the tech giants would provide cloud computing services and data centers.
“The pendulum is always swinging with these kinds of things,” Mitchell said. “Employees within tech companies are speaking up less than they did a few years ago, so it’s now something like a buyer’s market. The interests of the company are weighted much more heavily than the interests of individual employees.”