WASHINGTON — The National Geospatial-Intelligence Agency is seeking to establish guidelines and standards for using artificial intelligence (AI) technology in critical areas, such as using satellite imagery to identify potential targets.
Based in Springfield, Virginia, NGA collects, analyzes and disseminates geospatial information derived from satellite and aerial imagery in support of national security, military operations and disaster response activities.
NGA Director Vice Adm. Frank Whitworth announced last week that the agency is launching a pilot program to ensure the reliability and trustworthiness of AI models used by analysts. The pilot will develop guidelines for evaluating the performance and accuracy of computer vision models used to analyze satellite imagery and other geospatial data.
“Certification provides a standardized assessment framework, implements risk management, promotes a responsible AI culture, increases trust in AI, accelerates AI adoption and interoperability, and recognizes high-quality AI while identifying areas for improvement,” Whitworth said in a press briefing.
The move comes as NGA and other intelligence agencies have become increasingly reliant on AI-powered computer vision to quickly process the vast amounts of satellite imagery and geospatial data collected every day. By developing a consistent methodology for evaluating these AI models, NGA aims to strengthen confidence in the AI-generated insights that inform military operations and national security decision-making.
Targeting is “one of the hardest things we do”
Whitworth stressed that accuracy in intelligence gathering is important because lives are at stake. “We’re trying to make sure we can distinguish between combatants and non-combatants, between enemy and non-enemy,” he said. “It’s difficult. In my 35-plus years of experience, one of the hardest things we do is targeting.”
The pilot program is still in its early stages, with many details being worked out, but Whitworth said that, broadly speaking, it aligns with the Department of Defense's guidelines for the ethical use of AI and responds to a recent White House executive order on the issue.
The agency has also established a training program on responsible AI for all coders and users of geospatial intelligence data, with the goal of fostering a culture of responsible AI use across the intelligence community.
The new AI programs come at a time when geospatial data is growing in both volume and complexity. AI can help manage this deluge by automating the detection and classification of objects in imagery, freeing human analysts to focus on higher-level analysis and interpretation.
Whitworth emphasized the agency’s role in current global conflicts, noting that NGA provides geospatial intelligence support to Israel in its war against Hamas-led Palestinian militant groups in the Gaza Strip and to Ukraine in its defense against Russian aggression. “Our responsibility is to ensure that Israel and Ukraine can defend themselves,” he said, emphasizing the importance of accurate, reliable intelligence in such sensitive situations.
“We have to remember that there are very clever adversaries out there who will perturb some of the training data and perturb some of these model solutions,” Whitworth warned, underscoring the need for rigorous evaluation of AI systems.