One way or another, we’ve all heard of Artificial Intelligence. It’s been around since before we were born and it will surely outlive us. It’s a promise of a Utopian future and could be a harbinger of our own demise. It could help us end poverty, disease and pollution, or it could see us as a threat and decide to take us out. Whatever the future might hold, one thing’s certain: Artificial Intelligence is (or will be) the Pyramids of a generation and most likely mankind’s greatest creation.
In order to understand what AI is, we must first understand, well, Intelligence. Odd as it may seem, defining this term is quite a challenge, but scientists and philosophers alike are working relentlessly to come up with a proper answer. Whatever that answer may be, it’ll have a profound effect on how we determine whether a machine is intelligent or not.
Father of Artificial Intelligence
Alan Turing, considered the father of AI, designed a machine – the Bombe – that is credited with shortening the Second World War by several years. Turing’s machine managed to decrypt a vast number of messages enciphered on Germany’s Enigma machine. It was so successful because it could accomplish the task of decrypting a message several thousand times faster than any human. But was his machine intelligent?
To answer this question, Turing devised a deceptively simple test: a human judge, called the interrogator, holds text-only conversations with both another human and a machine, without knowing which is which. If the interrogator cannot reliably tell them apart, the machine is said to have passed the test.
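The protocol is simple enough to sketch in a few lines of code. Below is a toy version in Python – the questions, the canned answers and the interrogator’s heuristic are all illustrative assumptions of mine, not anything Turing specified:

```python
import random

# A toy sketch of the imitation game described above. Both "players"
# are illustrative stand-ins: the human's answers are hard-coded and
# the machine is a deliberately shallow bot, purely for demonstration.

def machine_reply(question):
    # The bot handles arithmetic and deflects everything else.
    if question == "What is 2 + 2?":
        return "4"
    return "I'd rather not say."

def human_reply(question):
    # Hypothetical human answers for the same questions.
    answers = {
        "What is 2 + 2?": "Four.",
        "Describe a starry night.": "Cold air and a scatter of small lights.",
    }
    return answers.get(question, "Hmm, let me think.")

def interrogate(questions):
    """Run the test: the interrogator questions players 'A' and 'B'
    (assigned at random) over text only, then guesses which one is the
    machine. Returns True if the machine was correctly identified."""
    players = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        players = {"A": human_reply, "B": machine_reply}

    guess = None
    for question in questions:
        for name, reply in players.items():
            # Naive interrogator heuristic: a player that deflects
            # an open-ended question is probably the machine.
            if reply(question) == "I'd rather not say.":
                guess = name

    actual = next(name for name, reply in players.items()
                  if reply is machine_reply)
    return guess == actual

print(interrogate(["What is 2 + 2?", "Describe a starry night."]))
```

A real test would of course involve free-form conversation and a human judge; the point of the sketch is only the blinding – the interrogator sees labels ‘A’ and ‘B’, never ‘human’ and ‘machine’.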
Somewhat Intelligent Machines
Since Turing’s machine – and others like it, such as calculators, automated traffic-light systems and autopilots – cannot pass the test, scientists prefer to call them somewhat intelligent. These machines are able to perform a simple task (or a predefined set of tasks) faster than any human. What they can’t do is imagine a starry night, ponder the vastness of the Universe or, thankfully, create a better version of themselves.
Clearly we’re not there yet, and some scientists and even philosophers think that we never will be. To create Artificial Intelligence, we would first have to understand completely how the brain works. From a scientific point of view nothing is impossible, and one day we may be able to fully explain, through equations, how our neurons communicate with each other. But some argue that it would be illogical for a human brain to state “I totally understand how I work”, and thus that AI can never be accomplished.
Risks
Leaving all philosophical and technological limitations aside for a moment, how can we be sure that an AI won’t start a genocide the very second it starts breathing? While sci-fi scenarios such as those in The Terminator and The Matrix are highly unlikely, brilliant contemporary minds such as Stephen Hawking and Elon Musk are concerned that the way we program AI might bring about our destruction. For example, take a machine with a single red button whose sole purpose is to have that button pressed. Given that humans won’t always be around to press it, the machine might connect to the Internet and try to build another robot that can press the button at will. What if it then assumes that humans might stop the newly built robot from achieving its task? You can see how this scenario could cause some issues for us humans.
Supercomputers vs Artificial Intelligence
Until we get there, we do currently have our very awesome supercomputers. Contrary to popular belief, Artificial Intelligence and supercomputers are separate concepts that serve different purposes. While supercomputers are great at storing huge amounts of data and making many more computations per second than any human could, they are enormous, power-hungry machines – and compactness and power efficiency are exactly what AI needs. Supercomputers are designed to help us in areas such as climate or medical research and do not need to be mobile, while the purpose of AI is to replace, or at least imitate, humans.
Thus, engineers and scientists need to come up with a computer that matches human levels of storage and computation speed while being no bigger than our brain. Once we reach that point, the possibilities are endless: intelligent robots could take over dangerous jobs such as policing, firefighting and soldiering, and even jobs that require softer skills, such as babysitting or teaching.
And speaking of robots, some argue that the only way to replicate human intelligence is to actually build robot babies. They would start off with a nearly blank hard drive and then gradually develop their wits just as humans do: through physical and verbal interaction. But since we do not yet fully understand how our own children grow and develop, it seems that AI’s birth is even farther into the future than we imagined.
Conclusion
Which gives us plenty of time to think about other important aspects. While the technical details of Artificial Intelligence are being worked out, we also need to address the ethical and moral elephant in the room. Can or should AIs have rights? Should they be considered persons? Are they conscious? Who’s responsible if an AI does something bad? These are crucial questions we need to answer before the first one is born – questions that many writers have already wrestled with, from Isaac Asimov in “I, Robot” to Philip K. Dick in “Do Androids Dream of Electric Sheep?”.
I hope these words sparked some interest in this very complex and daunting field, one that we’ve only just begun to explore. Let me know what you think in the comments section below.