Summarised by Centrist
AI companies OpenAI and Anthropic have agreed to give the US government early access to their AI models to ‘mitigate potential issues.’
Under this agreement, the US AI Safety Institute, part of the National Institute of Standards and Technology (NIST), will examine upcoming AI models from both companies.
Elizabeth Kelly, the director of the institute, noted, “Safety is essential to fuelling breakthrough technological innovation…these agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
OpenAI’s Chief Strategy Officer Jason Kwon said, “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence.”
Meanwhile, Anthropic’s Jack Clark remarked, “Safe, trustworthy AI is crucial for the technology’s positive impact… We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.”