OpenAI, Google & Microsoft urge regulation to prevent “the risk of extinction from AI”

Joel Loynds

A new open letter from hundreds of AI researchers and leaders calls for further regulations to prevent humanity’s extinction.

Tech CEOs, leaders, and many more have banded together once again to call for further regulation of AI technology. As the AI boom continues, there is growing concern that artificial intelligence could prove disastrous for the human race.

Sam Altman, CEO of OpenAI, has already signed the letter, alongside the head of Google DeepMind and a representative from Microsoft.

Among the signatories is Geoffrey Hinton, who recently left his post at Google. Despite spending years in AI, he has been outspoken in his warnings about the technology’s future if it goes unmonitored.

The letter comes from the Center for AI Safety, which researches potential risks surrounding AI.

Altman recently said that the work being done with AI needs a watchdog similar to the one that oversees nuclear weapons. That comparison is raised again here.

Rather than a full letter, there is currently just a single statement:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Center for AI Safety

This letter differs from the earlier version asking for a “pause” in development, which was signed by Elon Musk and Apple co-founder Steve Wozniak. That letter was largely ignored due to the lack of backing from AI labs.

AI leaders want regulations to prevent extinction

However, the tide has turned, and companies actively developing AI are now seeking regulation in the event that they – or someone else – create something with superintelligence.

The risks involved were detailed in a previous blog post by Altman, in which he argues that the creation of such a system is impossible to prevent, given the wide range of publicly available tools.

OpenAI has been at the forefront of this regulatory push, having previously been questioned by the US Congress and agreed that the technology does, in fact, need further restrictions.

AI is advancing faster than anyone expected, though some have pointed out that existing language models aren’t as smart as they’re made out to be. Meanwhile, others are using the current tools to build research machines for exploring the dark web, or bots instructed to destroy humanity itself.


About The Author

E-Commerce Editor. You can get in touch with him over email: joel.loynds@dexerto.com. He's written extensively about video games and tech for over a decade for various sites. Previously seen on Scan, WePC, PCGuide, Eurogamer, Digital Foundry and Metro.co.uk. A deep love for old tech, bad games and even jankier MTG decks.