In response to a request for information on the White House's AI Action Plan, Anthropic has submitted recommendations to the Office of Science and Technology Policy (OSTP). Our recommendations are designed to better prepare America to capture the economic benefits of powerful AI systems and to address their national security implications.
As our CEO Dario Amodei writes in "Machines of Loving Grace," we expect powerful AI systems to emerge in late 2026 or early 2027. Powerful AI systems will have the following characteristics:
- Intellectual capabilities matching or exceeding those of Nobel Prize winners across most fields, including biology, computer science, mathematics, and engineering.
- The ability to navigate all the interfaces available to people doing digital work today, including the ability to process and generate text, audio, and video, and the ability to autonomously operate tools like a mouse and keyboard.
- The ability to autonomously complete tasks spanning extended periods (hours, days, or even weeks), seeking clarification as needed, much like a highly capable employee.
- The ability to control laboratory equipment, robotic systems, and manufacturing tools through digital connections.
Our own recent research adds further evidence that powerful AI will arrive soon. The recently released Claude 3.7 Sonnet and Claude Code, like systems released by other frontier labs, demonstrate significant improvements in capability and increasing autonomy.
We believe the United States must take decisive action to maintain technological leadership. Our submission focuses on six key areas for addressing the economic and security impacts of powerful AI while maximizing benefits for all Americans.
- National security testing: The government should develop robust capabilities to evaluate both domestic and foreign AI models for potential national security implications. This includes creating standard evaluation frameworks, building secure testing infrastructure, and establishing teams of experts to analyze vulnerabilities in deployed systems.
- Strengthened export controls: We recommend tightening semiconductor export restrictions, including controls on H20 chips, requiring government-to-government agreements for countries hosting large chip deployments, and reducing the no-license-required export thresholds.
- Enhanced lab security: We recommend establishing closer collaboration between frontier AI labs and the intelligence community to strengthen security at the labs developing powerful AI systems.
- Energy infrastructure: To keep AI development at the cutting edge in America, we recommend setting an ambitious target of adding 50 gigawatts of power dedicated to the AI industry by 2027.
- Government adoption of AI: We recommend accelerating the adoption of AI across the federal government.
- Economic impacts: To ensure that AI's benefits are widely shared across society, we recommend modernizing mechanisms for economic data collection, such as the Census Bureau's surveys, and preparing for the possibility of large-scale changes to the economy.
These recommendations build on Anthropic's previous policy work, including our advocacy for responsible scaling policies and for rigorous testing and evaluation. Our goal is a balanced approach: one that avoids hampering innovation while reducing the serious risks posed by increasingly capable AI systems.
Our full submission, available here, provides further detail on these recommendations and practical implementation strategies to help the U.S. government navigate this critical technological transition.