AI Experts From Google, OpenAI And Elsewhere Issue Dire Extinction Warning

Some of the world's top AI scientists and contributors have jointly signed a brief but ominous warning about the "urgent risks" that loom as artificial intelligence technologies gain a stronger foothold, including the "risk of extinction" (presumably of flesh-and-blood humans). Among those who signed the startling statement are OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

There are actually quite a few signatures from individuals at OpenAI, which is behind ChatGPT, including co-founders John Schulman and Ilya Sutskever, along with the firm's head of policy research, Miles Brundage, and governance lead, Jade Leung.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the succinct statement reads.

This isn't the first time that top AI experts have banded together to issue a warning. Just two months ago, an open letter signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, and others called for a temporary pause on large-scale AI experiments. The reason? Because "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control."

Separately, Google and Alphabet CEO Sundar Pichai recently admitted that his company doesn't fully understand how AI works and that society at large is not ready for the upcoming rapid development of AI. And if that weren't enough, Dr. Geoffrey Hinton, widely considered the godfather of AI, issued an ominous warning of his own after quitting Google.

"It is hard to see how you can prevent the bad actors from using it for bad things," Dr. Hinton remarked.

Perhaps part of it is a knee-jerk reaction to all the buzz surrounding AI. OpenAI started the frenzy by opening up its ChatGPT chatbot to the public, followed by Google releasing Bard and Snapchat joining the fray, to name just a few.

It's not all fear and doubt, though. Microsoft is building AI features into Windows 11 with its Windows Copilot feature, and NVIDIA has been showing off some cool stuff like its ACE for Games technology, which enables hyper-realistic conversations with non-player characters (NPCs).

So what's all the fuss really about? To an extent, we're wading into uncharted waters. Or more like cannon-balling, given the recent rush to ride the AI wave. And hey, when an AI jumps straight to human extinction after being tasked with writing an AI horror story, well, we can't look back and say the writing was never on the wall.

Of course, the succinct statement signed by figures at OpenAI, Google, and elsewhere doesn't say what kind of mitigations we should collectively explore.

"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement...aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously," the Center for AI Safety, which posted the statement, explains.

Let the discussion begin.