Read This Terrifying One-Sentence Statement About AI's Threat to Humanity Issued by Global Tech Leaders
The nonprofit Center for AI Safety says new tech has the potential to lead to humanity's extinction.
By Dan Bova
AI experts, journalists, and policymakers are among those who signed an ominous one-sentence statement issued by the nonprofit Center for AI Safety:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Yikes. In an email sent to CBS MoneyWatch, Dan Hendrycks, the director of the Center for AI Safety, outlined the "numerous pathways to societal-scale risks from AI."
His list included AI being used to create bioweapons more deadly than natural pandemics and its ability to spread dangerous misinformation on a global scale. Hendrycks also noted that humanity's increasing dependence on AI could "make the idea of simply 'shutting them down' not just disruptive, but potentially impossible, leading to a risk of humanity losing control over our own future."
Related: Elon Musk Says We Should Stop Rapid AI Development Right Now — Here's Why
Humans are not completely rolling over for AI just yet: streaming platforms have pulled down tens of thousands of AI-generated songs, including ones that eerily mimic Drake and The Weeknd. But it sort of feels like building a sandcastle wall in front of a tsunami, right? A report released by Goldman Sachs earlier this year estimates that AI could automate — and eliminate — as many as 300 million full-time jobs worldwide.
But not to worry, write the Goldman Sachs researchers: "The good news is that worker displacement from automation has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth."
(Phew! For someone who is keenly aware that this same story could have been written by ChatGPT in 2 seconds, I certainly feel relieved.)
In April, Elon Musk and a group of AI experts wrote an open letter calling for a six-month pause in developing new AI systems, warning that AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."