It is not uncommon these days to open the news and find that yet another of Silicon Valley’s famous CEOs is expressing concern about AI. AI has (rather unfortunately) developed a negative reputation, thanks to Hollywood productions about robots taking over the world and wiping out humans. The general argument that people pose against AI is this:
When computers realise that they are superior to human beings, they will want to wipe out the entire human species.
Now, there are a few problems with that assertion, which I shall elaborate on in this blog post. But first, we have to understand what AI actually is. I don’t want to bore the readers of this post with the details, so this will just be a brief overview of artificial intelligence. If, however, you do want to get into the details, you have the Internet at your fingertips. Google away!
An important thing that most people miss is that they associate artificial intelligence with physical humanoid robots (or other types of hardware). This is a fallacy, because at the core of artificial intelligence is software: the algorithms matter more than the machines they run on. Therefore, throughout this post, when I use the word “computer”, I’m referring to the software rather than the hardware.
Truth is, we already have artificial intelligence. It’s just that it’s highly specialized and not equivalent to human intelligence. For example, chess computers have been able to defeat world champions. Does that mean that computers are already smarter than humans? At playing chess, probably. But can the same set of algorithms play checkers or solitaire? Of course not! Chess computers are examples of specialized artificial intelligence, good at doing only the one particular task they’re programmed to do. Chess computers possess weak AI, in contrast to strong AI, since they’re not sentient. This also raises the question of whether computers can really “think”, or whether they are really “aware” of their own existence. The legendary Alan Turing devised an interesting test to find out, which he called “the imitation game”. I’ll probably write an entire blog post explaining the Turing Test in detail in the future.
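To make the idea of specialized AI concrete, here is a minimal sketch of the kind of game-tree search that chess engines are built on, applied to the much smaller game of Nim. This is my own toy example, not any real chess engine: the program plays this one game perfectly and knows nothing about anything else.

```python
# Toy "specialized AI": a minimax-style search that plays Nim perfectly.
# Rules (assumed for this sketch): players alternately take 1-3 stones;
# whoever takes the last stone wins.

def can_win(stones):
    """True if the player to move can force a win from this position."""
    # A position is winning if some legal move leaves the opponent in a
    # losing position. With no stones left, the previous player has won.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Return a winning number of stones to take, or 1 if the position is lost."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # losing position: every move is equally bad
```

Swap in checkers or solitaire and this code is useless; that narrowness is exactly what “weak AI” means.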
Is it possible for the same computer (the same set of algorithms) to master multiple games and be adept at playing them? A field that tackles this (and many other problems) is machine learning. The core philosophy of machine learning is that a computer can learn almost anything when it’s given a set of training examples. When the computer has “seen” enough instances of a particular game, experiment or phenomenon, it “learns” the rules. The possibilities of a machine learning algorithm are endless, and it has many applications today. Ever wondered how Google knows which emails are spam, which are promotions and which are important? Well, that’s machine learning working right there. The algorithm analyses the content of each email and is able to “predict” its category and classify it. Smart, isn’t it? That’s why machine learning is considered so important to AI. If we improve our hardware and have computers process large amounts of data, the possibilities include highly accurate weather and stock market predictions.
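To make the spam example concrete, here is a minimal sketch of one classic approach, a naive Bayes classifier, in plain Python. The tiny training emails are invented for illustration; Google’s actual system is of course far more sophisticated.

```python
import math
from collections import Counter

# Invented toy training data: (words in the email, label).
TRAINING = [
    ("win money now free prize".split(), "spam"),
    ("free offer win cash".split(), "spam"),
    ("meeting agenda for monday".split(), "ham"),
    ("project report attached see agenda".split(), "ham"),
]

def train(examples):
    """Count word occurrences per label -- the 'learning' step."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for words, label in examples:
        label_counts[label] += 1
        word_counts[label].update(words)
    return word_counts, label_counts

def classify(words, word_counts, label_counts):
    """Pick the label with the highest log-probability for these words."""
    total = sum(label_counts.values())
    vocab = len({w for counts in word_counts.values() for w in counts})
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood of each word, with add-one smoothing
        score = math.log(label_counts[label] / total)
        n = sum(word_counts[label].values())
        for w in words:
            score += math.log((word_counts[label][w] + 1) / (n + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING)
```

The classifier never sees a rule like “the word ‘free’ means spam”; it infers that entirely from the examples, which is the whole point.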
You may see this sentence in every news post or blog article by paranoid authors: “We are closer to AI than you think.” What they don’t do is follow that statement up with sufficient proof or evidence. They take past trends of development and merely extend them into the future. This is a rather rudimentary way of looking at things, mainly because it ignores the evidence we have at hand, including recent experiments that come closest to the general artificial intelligence we’re looking for.
One such experiment is the Google Brain project, where a large-scale machine learning algorithm was distributed over 16,000 CPU cores. These computers were fed 10 million training examples drawn from YouTube video thumbnails and, not so surprisingly, the computer “learnt” to identify a cat. What’s amazing about this experiment is that the computer was never told what a cat was! If you’re as stupid as me, you wouldn’t have understood what this means at first. Well, similar to a little baby that learns to distinguish between a cat, a human and anything else, the same part of the network “lights up” whenever it’s shown a picture of a cat. Of course, the baby doesn’t know the English word for a cat, but it knows that it’s a separate category of “thing” that it sees. Pretty cool stuff, isn’t it? Well, it’s not quite close to a real human baby. For one thing, the experiment took those 16,000 CPU cores three days. That’s right. Days. Although we might not know exactly how long it takes a human to register something as a cat, it’s safe to say it isn’t a whopping three days. That’s probably because the actual human brain has approximately 10⁶ (a million) times as many neurons and synapses as the network they had replicated. If we were to simulate the human brain for this particular experiment, it would take a lot more hardware and processing power. I’ll probably write a mini-post exploring the exact number in the future.
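The Google Brain network was vastly more sophisticated, but the core idea of unsupervised learning, finding structure in data without labels, can be sketched with a much simpler algorithm: k-means clustering. In this toy example (data and code are my own invention), the program is never told which points belong together, yet it discovers the two groups on its own.

```python
# Toy illustration of unsupervised learning: k-means clustering.
# No point carries a label; the algorithm discovers the groups itself.

def kmeans(points, k, iterations=20):
    """Cluster 2-D points into k groups, returning the group centres."""
    centers = list(points[:k])  # simple deterministic initialisation
    for _ in range(iterations):
        # Assign every point to its nearest centre...
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # ...then move each centre to the mean of its assigned points.
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two made-up groups of points -- no labels attached.
points = [(0, 0), (1, 0), (0, 1), (1, 1),
          (10, 10), (11, 10), (10, 11), (11, 11)]
centers = sorted(kmeans(points, 2))
```

Like the cat detector, the program ends up with a notion of “these things go together” without ever being given a name for either group.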
Computer vision is an integral part of the hunt for general AI. There’s a theory that a single “super” algorithm will be able to imitate true human-like intelligence. It’s based on a very interesting experiment that was conducted on ferrets: researchers rewired the optic nerves that initially connected the eye to the visual cortex so that they fed into the auditory cortex instead, and surprisingly enough, the ferrets learned to see! This leads us to conclude that these two parts of the brain process inputs in very similar (if not exactly the same) ways. I don’t wish to go into the details, but if you’re as curious as I am, here’s a link just for you.
If we do get to the general AI that we’re scared about, do we know why computers would try to wipe us out? Are they going to be programmed to be predatory, or programmed to be selfish and evil? Why would they want the whole world to be theirs? We must understand that, unlike humans, computers make sense. They are completely logical, and even if they do come to possess emotions, it’s safe to say that these too can be programmed. Plus, they do not have the same desire for survival that humans do, so they won’t feel any need to kill humans when their “life” is under threat.
So we’ve taken care of human-level intelligence. What about the super-intelligence that surpasses humans? The answer’s simple: we don’t know, because we’re not that intelligent. But we don’t have to worry about that right now. I say that based on the tests conducted so far, which suggest we’re still pretty far from human-level intelligence itself. After that, I guess it’ll be a race between humans and computers to see who comes up with algorithms that reach superhuman intelligence first.
Remember the chess computers that play only chess? Well, there’s a company called DeepMind, recently acquired by Google, that developed an algorithm which can learn any of the old, simple two-dimensional Atari video games, like the Pong we had in the 70s. It learns them so well that it can actually beat humans! But of course, it takes a while to get there: it took nearly three hours to master Pong. Naturally, because of this, you’ll see headlines like “Google DeepMind AI outplays humans at video games”. Hold on! These are simple, two-dimensional video games that took the computer three hours to master. Five-year-old kids definitely learn them faster. DeepMind has also admitted that the algorithm had difficulty with games like Pac-Man, and that they are far from getting it to play the complex three-dimensional games we have today.
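DeepMind’s system combined reinforcement learning with a deep neural network reading raw pixels. The tabular version of the underlying idea, Q-learning, can be sketched on a made-up five-cell corridor (entirely my own toy environment, nothing like Atari): the agent is only ever told its reward, yet it learns to walk towards the goal.

```python
import random

# Tabular Q-learning on an invented 5-cell corridor. The agent starts in
# cell 0, can move left (0) or right (1), and earns reward 1 for reaching
# cell 4, which ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """One move in the corridor: returns (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    """Learn a Q-table purely from reward, with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < EPSILON:          # explore occasionally
                action = rng.choice(ACTIONS)
            else:                               # otherwise act greedily
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Standard Q-learning update towards the bootstrapped target.
            target = reward + (0.0 if done else GAMMA * max(q[nxt]))
            q[state][action] += ALPHA * (target - q[state][action])
            state = nxt
    return q

q = train()
```

After training, “move right” has the higher value in every cell, even though the agent was never told where the goal is. Scaling this idea from a five-cell corridor to raw Pong pixels is, in essence, what made DeepMind’s result hard.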
Now, I’m not trying to undermine the great achievements of the computer scientists currently working on AI. They’re making significant progress in the field this very moment. However, our worry is a little premature. In fact, one of the lead researchers on the Google Brain project and a professor at Stanford, Andrew Ng, says:
I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.
Now, if that doesn’t comfort you, I don’t know what will.
The concern that AI will be responsible for the fall of the human race comes too soon. That concern might stop us from doing solid research in the field, and it’s important for people to understand that. Who wants the government to impose restrictions on research? I think we should be more worried about that than about AI killing us all.