China declined to sign an international “blueprint” agreed to this week by about 60 countries, including the United States, that aims to establish guidelines for the military use of artificial intelligence (AI).
More than 90 countries took part in the Responsible Artificial Intelligence in the Military Domain (REAIM) summit in South Korea on Monday and Tuesday, but roughly a third of the attendees did not support the non-binding proposal.
Arthur Herman, an AI expert who is a senior fellow and director of the Quantum Alliance Initiative at the Hudson Institute, told Fox News Digital that the decision by roughly 30 countries not to endorse the blueprint is not necessarily a cause for concern, and that in Beijing’s case the refusal more likely stems from its general opposition to signing multilateral agreements.
“The bottom line is that China is always wary of any international agreement where it is not the architect, where it has no role in creating or orchestrating how that agreement is formed or implemented,” he said. “I think China sees all of these multilateral efforts as a way to try to limit and constrain China’s ability to use AI to advance its military advantage.”
Herman described the summit and the blueprint agreed to by some 60 countries as an attempt to safeguard the expanding technologies surrounding AI by always ensuring “human control” over existing systems, particularly as it relates to military and defense issues.
“The algorithms that run our defense systems and our weapons systems depend heavily on how fast they can move,” he said. “They can move quickly to gather information and data, send it back quickly to command and control, and then make decisions.”
“The speed at which AI can move is crucial on the battlefield,” he added. “If decisions made by AI-driven systems involve the loss of human life, humans should have the final say on those decisions.”
Countries leading the way in AI development, like the United States, say maintaining the human element in critical decisions on the battlefield is crucial to avoiding unintended casualties and preventing machine-led conflict.
The summit, co-hosted by the Netherlands, Singapore, Kenya and the United Kingdom, was the second of its kind; more than 60 countries attended the first conference, held in The Hague last year.
It remains unclear why China, along with around 30 other countries, declined to endorse the blueprint’s building blocks for establishing AI safeguards after backing a similar “call to action” at last year’s summit.
Asked about the summit at a press conference on Wednesday, Chinese Foreign Ministry spokeswoman Mao Ning said that, following an invitation, China had sent a delegation to the summit to “elaborate on China’s principles of AI governance.”
Mao pointed to the “Global AI Governance Initiative” proposed by Chinese President Xi Jinping last October, saying it “offers a systematic view of China’s governance proposals.”
The spokeswoman did not explain why China did not support the non-binding blueprint presented at this week’s REAIM summit, but added that “China will maintain an open and constructive attitude in cooperation with other countries to bring more tangible results to mankind through AI development.”
Herman warned that while the US and its allies will seek to forge a multilateral agreement governing military AI practices, such an agreement is unlikely to do much to deter adversaries like China, Russia and Iran from developing nefarious technologies.
“When we’re talking about nuclear proliferation or missile technology, the most effective restraint is deterrence,” the AI expert explained. “Those who are determined to push forward with the use of AI, even to the point of using it as an automated killing machine, because they believe it is in their interest to do so, can only be restrained by making it clear that if they develop such weapons, we can use the same weapons against them.”
“We can’t expect their altruism or high ethical standards to restrain them. They don’t work that way,” Herman added.
Reuters contributed to this report.