US Government Teams with AI Startups for Safety Testing

The US government has reached agreements with leading AI startups OpenAI and Anthropic to test their technologies for safety. Announced on Thursday, the partnerships will allow upcoming AI models to be thoroughly evaluated before their public release, with assessments focused on safety concerns. This proactive approach aims to catch potential issues with new AI technologies early, and both companies will work closely with the government on the evaluations.

Early Access for Safety Evaluation

Under these agreements, the US AI Safety Institute, a division of the Commerce Department’s National Institute of Standards and Technology (NIST), will receive early access to the companies’ major AI models. This initiative will enable detailed assessments of the models’ capabilities and potential risks, while developing strategies to mitigate identified issues.

The initiative allows the US AI Safety Institute to assess and mitigate risks in major AI models, according to The Wall Street Journal.

Collaboration with the UK

The US AI Safety Institute will also work closely with the UK's AI Safety Institute, with the two bodies exchanging feedback on safety enhancements. The collaboration follows recent regulatory developments, including California's contentious AI safety bill, SB 1047, which was recently approved by the state Assembly. The partnership aims to improve AI safety practices in both countries.

Focus on Safety and Technological Breakthroughs

"Safety is crucial for driving technological breakthroughs," stated Elizabeth Kelly, director of the AI Safety Institute. "These agreements represent a significant step, showing our commitment to guiding the future of AI responsibly. Our focus remains on advancing safety measures, and this initiative marks a pivotal moment for AI development."

Standardized Testing Initiatives

Both the US and UK institutes are committed to standardized testing procedures and to implementing them effectively. Anthropic previously tested its Claude 3.5 Sonnet model with the UK institute, setting a precedent for collaborative evaluations and establishing a benchmark for AI safety. The new collaboration aims to enhance these evaluation processes, underscoring both organizations' commitment to rigorous safety standards.


Support from OpenAI

OpenAI’s Chief Strategy Officer, Jason Kwon, expressed strong support for the initiative. “We fully support the US AI Safety Institute’s mission and are eager to collaborate on establishing safety best practices and standards for AI models,” he said. “Our joint efforts are crucial in shaping US leadership in responsible AI development and setting a global benchmark.”

Anthropic’s Commitment to Safety

Anthropic emphasized the importance of effective AI model testing. “Safe, reliable AI is essential for its positive impact,” said Jack Clark, Anthropic co-founder and head of policy. “This effort will enhance our ability to identify and address risks, advancing responsible AI development. We are proud to contribute to setting new standards for safe and trustworthy AI.”

These agreements mark a proactive approach to AI safety, aiming to balance innovation with rigorous testing to ensure the responsible development of advanced technologies.

