AI Poses Threat To Human Existence, Experts Warn In New Letter The warning letter stated that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war
By Teena Jose
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
Artificial intelligence (AI) experts, scientists and industry leaders on Tuesday issued a new warning about the risks that AI poses to human existence.
Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety's website.
The statement noted that, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
More than 1,000 researchers and technologists, including Elon Musk, had signed a letter earlier this year calling for a six-month pause on AI development because, they said, it poses profound risks to society and humanity, as per reports. This letter was a response to OpenAI's release of a new AI model, GPT-4, but leaders at OpenAI, its partner Microsoft and Google didn't sign on and rejected the call for a voluntary industry pause.
The recent statement, however, has reportedly been endorsed by Microsoft's chief technology and science officers, as well as Demis Hassabis, CEO of Google's AI research lab DeepMind, and two Google executives who lead its AI policy efforts. The statement doesn't propose specific remedies, but some signatories, including Altman, have proposed an international regulator along the lines of the U.N. nuclear agency.
The latest warning was intentionally succinct, just a single sentence, to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to prevent them. "There's a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority. So we had to get people to sort of come out of the closet, so to speak, on this issue because many were sort of silently speaking among each other," said Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, which organized the effort.