AI Safety Institute
In 2023, the concept of AI safety gained prominence due to concerns about existential risks posed by advanced AI. The UK and US established their respective AI Safety Institutes (AISIs) during the AI Safety Summit in November 2023. These institutes aim to evaluate and ensure the safety of frontier AI models. In May 2024, at the AI Seoul Summit, international leaders agreed to form a global network of AI Safety Institutes, comprising those from the UK, US, Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union.

The UK's AISI evolved from the Frontier AI Taskforce, which was established in April 2023 with an initial budget of £100 million. Led by Ian Hogarth, the institute focuses on balancing safety and innovation. Unlike the European Union, which adopted the AI Act, the UK has been cautious about early legislation, fearing it might hinder growth or become obsolete amid rapid technological advancement. In May 2024, the UK AISI open-sourced a tool called "Inspect" for evaluating AI model capabilities.

The US AISI was founded under the National Institute of Standards and Technology (NIST) in November 2023, following President Joe Biden's Executive Order on AI safety. Elizabeth Kelly, a former economic policy adviser, leads the institute. The US government also created the AI Safety Institute Consortium (AISIC) in February 2024, involving over 200 organizations, including major tech companies such as Google and Microsoft. However, the $10 million budget allocated to the AISI has been criticized as insufficient, especially given the significant ...