Global Leaders, Experts Demand Moratorium on Human-Surpassing AI Development

More than 700 global figures, including scientists, political leaders, and celebrities, have jointly urged governments to halt the development of artificial intelligence systems capable of outperforming human intelligence until strong safety measures and public consent are in place.
The demand was contained in an open letter released on Wednesday by the Future of Life Institute (FLI), a United States-based nonprofit organisation campaigning against the risks posed by unchecked AI advancement.
The group called for “a prohibition on the development of superintelligence until the technology is reliably safe and controllable, and has public buy-in.”
Among the signatories are leading AI researchers Geoffrey Hinton, the Nobel Prize-winning “Godfather of AI”; Stuart Russell, Professor of Computer Science at the University of California, Berkeley; and Yoshua Bengio, a renowned AI scientist at the University of Montreal.
Also signing the petition are Virgin Group founder Richard Branson, Apple co-founder Steve Wozniak, former US presidential adviser Steve Bannon, former National Security Adviser Susan Rice, Prince Harry and Meghan Markle, and US musician will.i.am.
AI ethicist Paolo Benanti of the Vatican also endorsed the call, reflecting the growing moral and ethical concerns over artificial intelligence research.
FLI President Max Tegmark said companies must not pursue human-level or superintelligent AI without an enforceable international framework to regulate the technology.
“We cannot afford a race toward systems that could outthink or override humanity without clear accountability,” he warned.
In the same vein, FLI co-founder Anthony Aguirre said that while AI has great potential to improve medicine, science, and productivity, the current race for dominance among corporations poses serious societal dangers.
“The path AI corporations are taking, racing toward smarter-than-human AI designed to replace people, is completely out of step with what the public wants or what scientists think is safe,” he said.
The call follows similar appeals made during the United Nations General Assembly in September, when AI researchers and industry workers asked governments to set binding international rules, including red lines on superintelligence, by 2026.
Industry observers say the latest move reflects growing unease within the scientific community as major technology firms such as OpenAI, Google DeepMind, and Anthropic compete to create artificial general intelligence (AGI), a form of AI that could match or surpass all human intellectual abilities.