While the messy withdrawal and the Taliban's subsequent seizure of power have many questioning the lessons learned from Afghanistan's two decades of war, one major American achievement from the years spent fighting the Taliban has emerged: the use of artificial intelligence to predict terrorist attacks.
In 2019, U.S. and coalition forces began drawing down in the country, leaving the remaining troops without the capacity to maintain a human intelligence network to monitor Taliban movements.
By the end of 2019, the number of Taliban attacks against U.S. and coalition forces had skyrocketed to levels not seen in a decade, prompting American intelligence officers to develop an AI program called "Raven Sentry."
In a report released earlier this year, Col. Thomas Spahr, chief of the School of Military Strategy, Plans, and Operations at the U.S. Army War College, quoted A.J.P. Taylor as saying, "War has always been the mother of invention." Spahr pointed to the development of tanks during World War I, atomic weapons during World War II, and the use of AI to track open-source intelligence as America's longest war drew to a close.
Raven Sentry aimed to ease the burden on human analysts by sorting through vast amounts of data from “weather patterns, calendar events, increased activity around mosques and madrasahs, and activity around historical sites.”
Despite initial challenges when the technology was first developed, a team of intelligence officers came together, forming a group they called the "Nerd Lockers," to build a system that could "reliably predict" terrorist attacks.
"By 2019, the digital ecosystem infrastructure had advanced, along with sensors and prototype AI tools, making it possible to detect and rapidly organize the scattered signals of insurgent attacks," wrote Spahr, who also worked on the program, which was first reported on by The Economist.
The AI program was cut short by the U.S. withdrawal on Aug. 30, 2021, but its success was credited to a "culture" of tolerance for early failure and to technical expertise.
Spahr said the Raven Sentry development team "was aware of concerns from senior military and political leaders about proper oversight and the relationship between humans and algorithms in combat systems."
He also noted that AI testing is “doomed to fail” if leadership doesn’t tolerate experimentation when developing the program.
By October 2020, less than a year into its development, Raven Sentry had reached a 70% accuracy threshold for predicting when and where an attack was likely to occur. Such technology has since proven crucial in the large-scale wars underway in both the Middle East and Ukraine.
"Advances in generative AI and large language models are improving AI capabilities, and the ongoing wars in Ukraine and the Middle East demonstrate new advances," the U.S. Army colonel wrote.
Spahr also said that if the United States and its allies want to remain competitive in AI technology, they need to "balance the tension between computer speed and human intuition" by educating leaders who are skeptical of the ever-evolving technology.
Despite the AI program's success in Afghanistan, the Army colonel warned that "war is ultimately human, and our adversaries will adapt to cutting edge technology and often resort to simple, common-sense solutions."
“Just as Iraqi insurgents learned that burning tires in the roads would degrade the optical performance of U.S. military aircraft, or Vietnamese guerrillas dug tunnels to avoid aerial surveillance, America’s adversaries will learn to fool AI systems and falsify data inputs,” he added. “After all, the Taliban defeated U.S. and NATO advanced technology in Afghanistan.”