Top AI experts say AI poses an
extinction risk on par with nuclear war.

Prohibiting the development of superintelligence can prevent this risk.

Take Action

What are experts warning about?

In 2023, hundreds of the foremost AI experts came together to warn that:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

- Statement on AI Risk, Center for AI Safety (CAIS)

The signatories included the world's three most-cited AI researchers:

Yoshua Bengio

Turing Award Winner,
"Godfather" of AI

Geoffrey Hinton

Nobel Prize Winner,
"Godfather" of AI

Ilya Sutskever

Co-founder of OpenAI and
Founder of Safe Superintelligence Inc. (SSI)

...and even the CEOs of the leading AI companies:

Sam Altman

CEO of OpenAI
(ChatGPT)

Demis Hassabis

Nobel Prize Winner,
CEO of Google DeepMind
(Gemini)

Dario Amodei

CEO of Anthropic
(Claude)

What can we do?

The risk comes from the potential creation of superintelligent AI, which would be more powerful than any individual, any company, or any nation.

Despite these warnings, the pace of AI development continues to accelerate, and superintelligence could arrive before 2030.

As with other technologies that threaten our civilization, we must coordinate to prevent superintelligent AI from being created by anyone, anywhere.

CAMPAIGN SUPPORT

Lord Browne of Ladyton

Former UK Secretary of State for Defence

"History shows that every breakthrough technology reshapes the balance of power. With Superintelligent AI, the risks of unsafe development, deployment, or misuse could be catastrophic—even existential—as digital intelligence surpasses human capabilities. Control must come first, or we risk creating tools that outpace our ability to govern them."

Steven Adler

Ex-OpenAI Dangerous Capability Evaluations Lead

"I worked on OpenAI's safety team for four years, and I can tell you with confidence: AI companies aren't taking your safety seriously enough, and they aren't on track to solve critical safety problems. This is despite an understanding from the leadership of OpenAI and other AI companies that superintelligence, the technology they're building, could literally cause the death of every human on Earth."

Roman Yampolskiy

Professor of Computer Science, University of Louisville

"My life's work has been devoted to Al safety and cybersecurity, and the core truths l've uncovered are as simple as they are terrifying: We cannot verify systems that evolve their own code. We cannot predict entities whose intelligence exceeds our own. And we cannot control what lies beyond our comprehension. Yet the global research community, driven by competition and unrestrained ambition, is rushing headlong to construct precisely such systems. Unless we impose hard limits, now, on the development of recursively self-improving Al, we are not merely risking catastrophic failure. We are engineering our own extinction."

Yuval Noah Harari

Historian, Philosopher, and Best-Selling Author of 'Sapiens'

"We will not be able to understand a superintelligent AI. It will make its own decisions, set its own goals, and to achieve them it might trick and deceive humans in ways we cannot hope to understand. We are the Titanic headed towards the iceberg. In fact, we are the Titanic busy creating the iceberg. Let's change course and build a future that keeps humans in control."

Anthony Aguirre

Executive Director of the Future of Life Institute (FLI), Professor at UC Santa Cruz

"At the Future of Life Institute, we've united leading scientists to address humanity's greatest threats. None is more urgent than uncontrolled AI. If we're going to build superintelligent AI, we'd need to ensure that it is safe, controllable, and actually desired by the public first."

Stuart Russell

Professor of Computer Science, UC Berkeley

"To proceed towards superintelligent AI without any concrete plan to control it is completely unacceptable. We need effective regulation that reduces the risk to an acceptable level. Developers cannot currently comply with any effective regulation because they do not understand what they are building."

Join the growing number of people who have already taken action.