"Extinction from AI" – What do experts mean?

Extinction isn't an exaggeration for effect. Experts really do mean that everyone, everywhere on Earth, could die from the consequences of artificial superintelligence (ASI) – unstoppable AI systems that surpass humanity.

The top experts in AI, including the three most-cited AI researchers and the CEOs of the biggest AI companies, have explicitly warned that AI poses an extinction risk on par with nuclear war.

Below is the statement they signed:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

– Statement on AI Risk, Center for AI Safety (CAIS), 2023

You can find the full list of signatories here.

What is superintelligence, and why is it so dangerous?

Superintelligence refers to AI systems that can outperform any human, any company, or any nation at any task or job. This means that in any competition against a superintelligence – economic, scientific, military, or political – humans will most likely lose.

Superintelligence goes by many different names: "Superhuman Machine Intelligence" and "Artificial Superintelligence (ASI)"; sometimes even "Artificial General Intelligence (AGI)" is used to refer to it.

AI companies are explicitly working to build superintelligence. They believe that the upsides of building superintelligence are worth risking human extinction.

Here are some quotes from CEOs of the biggest AI companies talking about their ambitions, as well as the risk of extinction from superintelligence.

Sam Altman

CEO of OpenAI

"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

"OpenAI is a lot of things now, but before anything else, we are a superintelligence research company."

Mark Zuckerberg

CEO of Meta

"For our superintelligence effort, I'm focused on building the most elite and talent-dense team in the industry. We're also going to invest hundreds of billions of dollars into compute to build superintelligence."

Dario Amodei

CEO of Anthropic

"I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen."

"My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 per cent and 25 per cent."

How much time do we have?

There was a major paradigm shift following the release of GPT-3 in 2020 and then of ChatGPT in 2022. Before then, most experts saw superintelligence as a distant problem for future generations. That has changed.

The accelerating pace of AI progress has altered how experts think about ASI. Many now believe we risk reaching ASI within just a few years.

There is a big gap between how experts think about ASI and how civil society thinks about it. There has been little public discourse on the topic, and politicians are largely neither aware of ASI nor planning for it.

Here are some quotes from experts on when they think we might reach ASI, and on how their own earlier predictions proved too conservative.

Geoffrey Hinton

Nobel Prize Winner, "Godfather" of AI

"The idea that they [AI systems] might overtake us quite soon suddenly became much more plausible. For a long time I'd said that would be 30 to 50 years but now I think it may only be a few years."

Daniel Kokotajlo

Ex-OpenAI Analyst

"By 2027, we may automate AI R&D leading to vastly superhuman AIs ('artificial superintelligence' or ASI). In [our scenario that is titled] AI 2027, AI companies create expert-human-level AI systems in early 2027 which automate AI research, leading to ASI by the end of 2027."

Dario Amodei

CEO of Anthropic

"Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage — a 'country of geniuses in a datacenter' — with the profound economic, societal, and security implications that would bring."

How should we prevent this?

As a first step, we must ban the development of superintelligence

AI progress is accelerating, companies are racing each other to build superintelligence, and nations are getting increasingly involved. Some experts recommend actions as drastic as cyberattacks and preventive strikes on datacenters to ensure that competing nations do not build superintelligence first.

The only winner of a race to ASI would be the superintelligence itself. Since nobody can currently control a superintelligence, the end result is a complete loss of human control, regardless of which actor "wins" the race. We must coordinate to prevent this outcome.

Before we reach the point where such drastic measures are taken, we must restrict the critical capabilities that enable the development of ASI. The policies and interventions are detailed on the partners page.

Here are some quotes from experts on the dramatic geopolitical implications of continuing the race to ASI:

Eric Schmidt

Ex-Google CEO

"Should these measures falter, some leaders may contemplate kinetic attacks on datacenters, arguing that allowing one actor to risk dominating or destroying the world are graver dangers, though kinetic attacks are likely unnecessary. Finally, under dire circumstances, states may resort to broader hostilities by climbing up existing escalation ladders or threatening non-AI assets. We refer to attacks against rival AI projects as 'maiming attacks.'"

Dario Amodei

CEO of Anthropic

"[U]se AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition's strategy to promote democracy (this would be a bit analogous to 'Atoms for Peace'). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe."

FAQ