For Strategic Partners

If you are an influential individual or organization, you may be able to do more than a concerned citizen can. Contact us if you would like our advice on high-leverage approaches.

We have extensive experience briefing lawmakers, government officials, and industry and civil society leaders.

Below are examples of policies and interventions that we believe are crucial to addressing extinction risk from AI. They are incremental steps toward a world safe from superintelligence.

Policies and Interventions

Public Acknowledgement of Risk

For change to happen, AI extinction risk must be in the public spotlight. Officials and other authority figures can put it there by publicly acknowledging the extinction risk from AI, as leading experts did with the Statement on AI Risk.

Implementation: Official government statements, parliamentary declarations, inclusion in national security assessments, and major media cycles.

Extinction Risk Preparedness

Establish formal government doctrine treating AI extinction risk as a national security priority, similar to pandemic or nuclear threat preparedness.

Components: Risk assessment frameworks, response protocols, and institutional responsibilities for monitoring and mitigation.

Halting the Development of Superintelligence

Establish a legal prohibition on developing artificial superintelligence systems until adequate security measures and international agreements are in place. This can be done legislatively or via executive decision.

Scope: Research and development activities aimed at creating systems that exceed human cognitive abilities across all domains.

Mandate Kill Switches

Require emergency shutdown capabilities for advanced AI systems, allowing operators and regulators to immediately terminate a system if dangerous behavior is detected.

Requirements: Tamper-proof design, government access, fire drills, and automatic triggers for specified risk scenarios.
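
As a concrete illustration of what an "automatic trigger" could look like in practice, here is a minimal sketch of a watchdog that polls risk signals and severs a system when a mandated trigger condition fires. All signal names, thresholds, and functions here (read_risk_signals, shut_down) are hypothetical assumptions for illustration; a real kill switch would act at the hardware and network level with tamper-proof telemetry, not rely on software cooperation.

```python
# Hypothetical sketch of an "automatic trigger" kill switch: a watchdog
# that polls risk signals and terminates the AI system if any condition
# from a mandated trigger list is met. All names and thresholds are
# illustrative assumptions, not a real deployment.

import time

# Illustrative trigger conditions a regulator might specify.
TRIGGERS = {
    "self_replication_detected": lambda s: s["replication_attempts"] > 0,
    "oversight_bypass_detected": lambda s: s["unauthorized_api_calls"] > 0,
    "compute_spike": lambda s: s["gpu_hours_last_24h"] > 100_000,
}

def read_risk_signals() -> dict:
    """Placeholder: in practice, signals would come from tamper-proof
    telemetry that operators cannot disable."""
    return {
        "replication_attempts": 0,
        "unauthorized_api_calls": 0,
        "gpu_hours_last_24h": 12_000,
    }

def shut_down(reason: str) -> None:
    """Placeholder: a real kill switch would cut power or network access
    at the hardware level rather than exiting a process."""
    print(f"KILL SWITCH FIRED: {reason}")
    raise SystemExit(1)

def watchdog(poll_seconds: float = 1.0) -> None:
    # Poll continuously; fire on the first tripped trigger.
    while True:
        signals = read_risk_signals()
        for name, tripped in TRIGGERS.items():
            if tripped(signals):
                shut_down(name)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watchdog()
```

The "fire drills" requirement above maps onto this design naturally: operators would periodically exercise shut_down end to end to verify the trigger path still works.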

Restrict and Monitor Superintelligence Precursors

Restrict and monitor the development and deployment of AI capabilities that are direct precursors to superintelligence, even if those capabilities are not superintelligent themselves.

Examples: AIs that autonomously develop other AIs, AIs that can improve themselves, AIs that can escape security containment, and large concentrations of computing power.
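
One monitoring primitive for the last of these examples already exists in policy: a compute reporting threshold. US Executive Order 14110, for instance, required reporting for training runs above 10^26 operations. The sketch below estimates a planned run's training compute using the standard rule of thumb of roughly 6 FLOP per parameter per training token and checks it against such a threshold; the function names and the example run are illustrative assumptions, not a prescribed mechanism.

```python
# Illustrative sketch of compute-threshold monitoring: flag planned
# training runs whose estimated compute crosses a reporting threshold.
# The 6 * parameters * tokens estimate is a standard rule of thumb; the
# 1e26 FLOP threshold mirrors the reporting trigger in US Executive
# Order 14110. Function names and the example run are assumptions.

REPORTING_THRESHOLD_FLOP = 1e26

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Standard approximation: ~6 FLOP per parameter per training token."""
    return 6 * parameters * training_tokens

def requires_report(parameters: float, training_tokens: float) -> bool:
    """True if the estimated run size crosses the reporting threshold."""
    return estimated_training_flop(parameters, training_tokens) >= REPORTING_THRESHOLD_FLOP

if __name__ == "__main__":
    # Hypothetical run: a 2-trillion-parameter model on 50 trillion tokens.
    flop = estimated_training_flop(2e12, 5e13)
    print(f"Estimated training compute: {flop:.2e} FLOP")  # ~6e26 FLOP
    print("Report required:", requires_report(2e12, 5e13))
```

A threshold like this is only a tripwire, not a safety guarantee; its value is that it gives regulators visibility into the largest training runs before they complete.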