Elon Musk Predicted Artificial Intelligence Would Be 'Seriously Dangerous' by 2019. How Close Is That to Reality?

Whether you are worried about human-like AI or eagerly awaiting it, I believe the kind of technology we see in the movies is a century or more away.
By Byron Reese | Edited by Dan Bova
In 2014, Elon Musk posted, then quickly deleted, a comment on an article about artificial intelligence. The text of that comment has been saved and reads, in part, as follows: "The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. ... The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most."
I have no idea why Musk made that comment or why he deleted it. But, the "five-year timeframe" it mentions will be reached in 2019, so it is fair to ask: Should we fear artificial intelligence?
The term "AI" means one of two different things. The first is narrow AI, which is an AI designed to do one thing, such as keep spam out of your email inbox. The second is AGI, artificial general intelligence, which is an AI that is as smart and creative as you and me. Musk makes it clear that he is talking about this latter kind, for I don't think anyone worries that their spam filter will go rogue.
AGI doesn't exist, and no one has shown they know how to build it. But, there are credible people working on the problem. So, how far away are we from having this kind of AI? Estimates range from five to 500 years, and I find myself on the high end of that range. Whether you are worried about AGI or eagerly awaiting it, I believe the kind of technology we see in the movies is a century or more away.
Why do I say this? A number of reasons. First, we don't know how our brains work. No one knows, for instance, how the color of your first bicycle is stored in the brain. We don't even know how the simplest brains work, such as that of the lowly nematode worm. This worm has just 302 neurons in its entire nervous system, and dedicated scientists have spent decades trying to model that tiny brain in a computer; even today, some speculate that the task cannot be done.
Beyond not knowing how the brain works, we don't know how the mind works. The mind is all the inexplicable stuff the brain does. For instance, your liver doesn't have a sense of humor, but your brain does. No single neuron in your brain is creative, and yet you somehow are. Where does that come from? Where do emotions come from? How are we able to imagine things? We don't know.
Then there is the problem of consciousness. Consciousness is the experience of being you. A computer can measure temperature, but you can feel warmth. How mere matter comes to experience the world is a question we don't even know how to pose scientifically, let alone answer, and it may well be that consciousness is a prerequisite for the kind of intelligence we have.
So, where are we with AI today?
Great strides have been made, but when I call my airline and say my frequent flier number to the computer, it has a hard time telling an "A" from an "H" from an "8." That's where we really are. Computers know how to do one thing: simple math. And there is a long road from there to the kind of intelligence a human has.
AI as we practice it right now is actually a simple technology. The idea goes like this: Take a bunch of data about the past, study it and make projections about the future. That's it. It is a big deal, to be sure, because it is the beginning of us forming a planet-wide collective memory. For the first time, everyone can learn from the experience of everyone else by using AI to study all the data we currently collect. It is a powerful technology in that regard, but a conceptually simple one. The idea that this simple technique somehow produces something as smart and versatile as a human is, to my mind, far-fetched or, at the very least, unproven.
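To make that concrete, here is a minimal sketch of the pattern described above: study the past, then project the future. The scenario and every number in it are invented for illustration; real systems use vastly more data and more elaborate models, but the shape of the idea is the same.

```python
# A minimal sketch of "study the past, project the future."
# All figures here are made up for illustration.
import numpy as np

# Twelve months of (hypothetical) historical sales data.
months = np.arange(1, 13)
sales = np.array([110, 115, 123, 130, 138, 141, 150, 158, 163, 170, 178, 185])

# "Study" the past: fit a straight line to the historical data.
slope, intercept = np.polyfit(months, sales, 1)

# "Project" the future: extend that same line to month 13.
prediction = slope * 13 + intercept
print(f"Projected sales for month 13: {prediction:.0f}")
```

Everything the model "knows" is contained in those 12 numbers. It finds a pattern and extends it, nothing more -- which is why the leap from this technique to human-level intelligence is so hard to credit.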
So, why do people think this is something we are on the cusp of? Two reasons, I believe.
The first is the movies and TV we see. Look, I'll pay $11 to see Will Smith fight off a rogue AI bent on eliminating humanity. That sounds like a fun afternoon. Those movies are so realistic and seemingly plausible that over time we all do something called "reasoning from fictional evidence." The images in those movies bleed into reality.
But, the second reason I think some people expect an AGI soon is that it is the logical conclusion of a single core belief: that people are machines. Our brains are machines, our minds are machines and consciousness, whatever it is, is a mechanistic process. This reductionist view says that since we are machines, it is inevitable that we can build a mechanical you. And once we build that, it will improve rapidly and soon exceed us.
I have found in my research an almost universal acceptance of the "humans are machines" idea among people in the technology industry, and an almost universal rejection of it from everyone else. You don't have to appeal to some kind of spiritualism to hold the latter view. Humans might simply be an emergent phenomenon, a quantum phenomenon or something else that cannot be replicated in a factory. We certainly have abilities that we do not understand -- and the idea that we can replicate those abilities in silicon remains unproven.