GAITHERSBURG, Md. — Today, the U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements with Anthropic and OpenAI that enable formal collaboration on AI safety research, testing, and evaluation.
Each company’s Memorandum of Understanding (MOU) establishes a framework for the U.S. AI Safety Institute to receive access to that company’s major new models both before and after their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.
“Safety is essential to spurring breakthrough innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to responsibly steward the future of AI.”
Additionally, the U.S. AI Safety Institute plans to work closely with its partners at the U.K. AI Safety Institute to provide feedback to Anthropic and OpenAI on potential safety improvements to their models.
The U.S. AI Safety Institute builds on NIST’s more than 120-year legacy of advancing measurement science, technology, standards, and related tools. Evaluations conducted under these agreements will foster closer collaboration and exploratory research on advanced AI systems across a range of risk areas, furthering NIST’s broader work on AI.
The evaluations conducted pursuant to these agreements will help advance the development and use of safe, secure, and trustworthy AI, building on the Biden-Harris Administration’s Executive Order on AI and voluntary commitments made to the Administration by leading AI model developers.
About the U.S. AI Safety Institute
The U.S. AI Safety Institute, housed within the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), was established in response to the Biden-Harris Administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. It is tasked with developing the testing, evaluations, and guidelines that will help accelerate safe AI innovation in the United States and around the world.