Over 700 Experts in Science and Tech Demand a Halt on Advanced AI Systems
A new open letter calls for a ban on developing "superintelligent" artificial intelligence systems, one that would remain in place until broad scientific consensus holds that such technologies can be built safely and with the public's support.
Published by the nonprofit Future of Life Institute, the letter has drawn more than 700 signatories, including Nobel laureates, veteran technology industry figures, policymakers, artists, and public figures such as Prince Harry and Meghan Markle, the Duke and Duchess of Sussex.
The letter reflects deep and growing concern over efforts by major technology companies, including Google, OpenAI, and Meta Platforms, to build AI that outperforms humans at nearly every cognitive task. That pursuit, the letter argues, has stoked fears of mass job displacement, loss of human control and dignity, national security risks, and the possibility of catastrophic harm to society, even threats to human survival.
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” the statement reads.
Signatories include AI pioneers Yoshua Bengio and Geoffrey Hinton, both Turing Award winners; Apple co-founder Steve Wozniak; entrepreneur Richard Branson; and actor Joseph Gordon-Levitt. From the political sphere, the list features Steve Bannon, former White House chief strategist under Donald Trump; Susan Rice, national security adviser under Barack Obama; and Mike Mullen, former chairman of the U.S. Joint Chiefs of Staff.
The letter acknowledges AI's "unprecedented" potential to improve health and prosperity, but argues that building superintelligent systems carries risks that have not yet been adequately addressed. Its organizers warn that the race among major tech companies could push development past the point where it can be monitored or controlled. Concerns about national security, civil liberties, and human disempowerment are central, as are warnings of unpredictable harms if machines reach or exceed human-level intelligence.
The Future of Life Institute published a widely circulated letter in 2023 calling for a pause on the development of powerful AI models, a request the leading tech companies did not heed. Organizers of the new effort say the problem has only grown more urgent, pointing to surveys indicating that most people are skeptical of pursuing superintelligence without assurances of safety and strong regulation. Whether governments will intervene in time, or whether companies will regulate themselves, remains unclear.
The full list of signatories is available on the Future of Life Institute's website.