Artificial intelligence – when it all began
Some of you might think AI is the coolest and hottest thing happening in software right now: we talk about driverless cars and smart homes, and we even carry Siri in our hands today. But if you know the history of computing, you will know that AI is one of the oldest fields in Computer Science, started in the 1950s when electronic computers came into being after World War II. That is when people first saw computers playing chess, solving algebraic equations and answering puzzles. People truly believed that Artificial Intelligence would become a reality very soon, yet the fact is we are still waiting for truly intelligent machines.
In the 1950s Alan Turing even defined the ‘Turing Test’ to decide whether a machine is ‘intelligent’. The test itself is very simple: a computer (software or algorithm) passes if a human, after posing questions, cannot tell whether the responses came from another human or from the machine. According to Peter Norvig (Director of Research at Google), serious AI researchers actually devote little time to passing the Turing Test; instead they focus on the underlying principles needed to achieve true intelligence. He jokes that chasing the test is like aeronautical engineers defining their goal as making machines that fly so exactly like pigeons that they can fool even other pigeons. In fact, the real breakthrough in ‘artificial flight’ came when engineers such as the Wright Brothers stopped imitating birds and focused on aerodynamics.
In 1965 Herbert Simon, one of the pioneers of the field, predicted that within 20 years machines would be able to do anything a human can do; many 20-year spans have come and gone since. If we heard the exact same statement today, that in 20 years truly human-like machines would be around us, that we could carry on a conversation with them, share a joke and ask them to make dinner, wouldn’t we be delighted?
Researcher Paul Abraham said that AI in the 50s was like building a tower to the moon: each year the tower gets taller, but the moon gets no closer. AI is really the oldest dream of Computer Scientists, and one we are yet to realize completely.
Intelligence vs. Simulation of Intelligence
Given that such capabilities already existed in the late 40s and 50s, you might wonder why artificial intelligence is still so hard to achieve with all the computing power we have today. The answer is not an easy one, but we can appreciate the difficulty if we distinguish between the ‘simulation of intelligence’ and ‘intelligence’ itself.
Some of you may have seen the movie 2001: A Space Odyssey, which features the truly intelligent HAL computer: the computer sent as a companion to the crew on their long journey to Jupiter, so intelligent that it eventually turns evil (for some reason, human-like machines in movies tend to turn bad). Compare that with the Automaton in the movie Hugo, which is basically a clockwork machine shaped like a little boy that draws one specific picture when wound up and given a pen (the images of HAL and the Automaton are featured at the top of this post). The difference between the Automaton and HAL is the difference between the simulation of intelligence and real artificial intelligence.
In 1997, when IBM’s Deep Blue beat the then world chess champion Garry Kasparov, people thought it was a great breakthrough in artificial intelligence, but in fact it was only a simulation of intelligence. Deep Blue did not beat the champion by thinking like a human but by crunching 11 billion operations per second; all it could do was play chess, nothing more. It was not really a step towards artificial intelligence. On the other hand, IBM’s Watson, the computer that beat the Jeopardy! champions, is a significant step towards AI: it actually learns from books, and there are serious machine learning capabilities behind it. Hence the name ‘Cognitive Computing System’ for Watson.
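To make the distinction concrete, here is a minimal sketch of brute-force game-tree search, the style of exhaustive look-ahead that chess engines in the Deep Blue mould rely on. This is not IBM’s actual code, and evaluate, legal_moves and apply_move are hypothetical stand-ins for a real game model; the point is that nothing here ‘understands’ the game, the strength comes purely from examining huge numbers of positions quickly.

# A toy sketch (assumed, not IBM's code) of brute-force game-tree search.
def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    """Score a position by exhaustively exploring moves to a fixed depth."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static score; no 'understanding' involved
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves]
    return max(scores) if maximizing else min(scores)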
Computer vs. Brain
Let’s explore what it takes to run serious computers compared with what it takes to run our brain. Physicist Prof. Michio Kaku explains it well in his book ‘Physics of the Future’: IBM’s Blue Gene computer, which simulates the thinking process of a mouse brain with about 2 million neurons, operates at 500 trillion operations per second and occupies a quarter of an acre; another computer, called Dawn, which could simulate 100 percent of a rat’s brain with 55 million neurons, has 147,456 processors, consumes 1 million watts of power and needs over 6,000 tons of air-conditioning to keep it cool.
Compare that with the human brain, which has 100 billion neurons (yes, billion), uses just 20 watts, generates heat you cannot even feel, and comes in a fairly compact package.
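A quick back-of-the-envelope calculation, using only the figures quoted above, shows just how lopsided the comparison is:

# Figures quoted above (approximate).
dawn_power_watts  = 1_000_000        # Dawn: ~1 MW to simulate a rat-scale brain
brain_power_watts = 20               # human brain: ~20 W
rat_neurons       = 55_000_000
human_neurons     = 100_000_000_000

print(dawn_power_watts / brain_power_watts)   # -> 50000.0
print(human_neurons / rat_neurons)            # -> ~1818
# The rat-scale simulator draws roughly 50,000 times the power of a human brain
# that holds roughly 1,800 times as many neurons.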
It becomes clear that, fundamentally, our brain does not operate like a computer.
Finally…
What is AI? AI has many definitions: machines that can think like humans, machines that act rationally, a computer system that can perceive, reason and act, and so on. To put it a little more definitively, yet in simple terms: a computer or a program that produces a result its programmer has not foreseen, based on what the program has learnt in the course of its execution, can be said to be exhibiting some form of intelligence. Essentially, it is software that learns like a baby and builds its knowledge over a period of time.
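As a toy illustration of that definition (a minimal sketch, not a real AI system), here is a program whose answer comes from what it has accumulated at run time rather than from rules written in advance; the ‘grey clouds’ associations are made-up examples:

# A toy sketch of the definition above: the response is not hard-coded by the
# programmer; it emerges from what the program has learned during execution.
from collections import defaultdict

class TinyLearner:
    def __init__(self):
        self.memory = defaultdict(list)        # knowledge built up over time

    def learn(self, situation, outcome):
        self.memory[situation].append(outcome)

    def respond(self, situation):
        seen = self.memory.get(situation)
        if not seen:
            return "I don't know yet"          # nothing learned for this case
        return max(set(seen), key=seen.count)  # most frequent outcome seen so far

bot = TinyLearner()
bot.learn("grey clouds", "rain")
bot.learn("grey clouds", "rain")
bot.learn("grey clouds", "sun")
print(bot.respond("grey clouds"))              # -> 'rain', learned, not programmed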
Considering the current technology landscape, with amazing portable computing power and user interfaces in the form of mobile devices, high-bandwidth internet everywhere, cloud computing that makes enormous computing power accessible, and technologies that can synthesize big data, we can say that the enabling conditions for AI are rich. We are already starting to experience AI in our day-to-day lives, with Siri or Google Now in our palms: a good example of NLP (Natural Language Processing, one of the AI capabilities) being nearly taken for granted. We are also seeing AI games (check out anki.com), and scientists are researching swarm robots that act autonomously, learn from each other and collectively work towards a common goal.
Mobile computing, cloud and social computing have caused today’s technology disruption. Artificial Intelligence holds the potential for tomorrow’s disruptions; we must watch out and get ready, as we are sure to see something more than just fancy automatons.
Source: B2C