Where's My Robot Housekeeper?

July 30, 2009 | Tagged Toastmasters

I wonder where R2-D2 and C-3PO from the Star Wars movies are. With all the research on artificial intelligence, why don't we have robots anywhere near as human-like as they are?

Or, where's HAL 9000, the spaceship computer that Stanley Kubrick and Arthur C. Clarke predicted would be here by now in their movie "2001: A Space Odyssey"? At the very least, shouldn't we have computers that communicate fluently in English, like HAL 9000 did?

In fact, I would even settle for a robot that could fix me dinner and do the dishes! Why can't I just go to the nearest electronics store and buy a robotic housekeeper?

It sometimes seems like AI research has produced nothing of interest. But I suggest that there are three key facts that help explain, or dissolve, this illusion of AI's failure.

First, we have to remember that AI is still an incredibly young research field.

The term Artificial Intelligence was coined by John McCarthy, who is considered one of the founders of AI. He coined the term in 1956 while organizing the very first conference on the topic, which has become known as the "Dartmouth Conference" because it was held at Dartmouth College in Hanover, New Hampshire.

And the founder, John McCarthy, is still an active researcher and professor at Stanford. In fact, he was present at the Commonsense Symposium at Stanford, which I attended in 2007. The fact that I could present a research paper with the founder of the field in the audience helped me grasp how young the field really is.

Secondly, the computer hardware we use is massively inferior to what "natural intelligence" uses.

We can get some idea of the speed of computers compared to the speed of animal brains by estimating how much information a neuron can process and how many neurons there are in a given brain. Hans Moravec has done so and found that today's powerful personal computers are comparable to insect brains. A human brain is about a million times more powerful than today's computers.

But hardware is quickly improving. Moore's law predicts an exponential improvement in PC performance. If it holds up, we can expect PCs with the capacity of a mouse brain by 2010 and human-level capacity before 2025.
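The arithmetic behind such projections is easy to sketch. The snippet below is a rough back-of-the-envelope illustration, not Moravec's actual calculation: the million-fold gap and the doubling period are assumed parameters, and the resulting year turns out to be very sensitive to the doubling period you pick.

```python
import math

def year_of_parity(factor_needed, start_year=2009, doubling_years=1.5):
    """Year when compute has grown by factor_needed, assuming a fixed
    Moore's-law doubling period (both parameters are rough assumptions)."""
    doublings = math.log2(factor_needed)  # how many doublings are required
    return start_year + doublings * doubling_years

# Human brain assumed ~1,000,000x a 2009 PC (Moravec's rough figure):
print(round(year_of_parity(1_000_000, doubling_years=1.5)))  # -> 2039
print(round(year_of_parity(1_000_000, doubling_years=1.0)))  # -> 2029
```

That sensitivity is why published estimates vary so much; forecasts as early as 2025 assume that price-performance doubles faster than the classic 18-month figure.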

Thirdly, we have to take into account a phenomenon that has come to be known as the "AI effect."

The AI effect is the unfortunate (at least for AI researchers) observation that as soon as something that seemed to require intelligence has been implemented in a computer, it doesn't seem intelligent anymore. When you look at the algorithm, it's just doing calculations, none of which seems very "intelligent" by itself.

For example, chess used to be seen as a game that requires intelligence. Many thought that a computer could never beat humans at chess because computers can't be intelligent.

Well, in 1997 IBM's Deep Blue computer beat the human world chess champion, Garry Kasparov. But no one said "Wow, look at that, we now have intelligent computers!" Instead they said "Well, chess doesn't really require that much intelligence. It just requires evaluating a lot of possible moves and selecting the best one."

These three factors, the fact that AI is still a very young research field, the fact that our computers are still much too slow, and the fact that we tend to explain away the progress that we do make, help explain our impatience with the progress of AI research.

Yes, there was some early overoptimism, like Kubrick and Clarke's movie 2001 predicting that computers would speak fluent English by now. But we should not replace premature optimism with equally premature pessimism. The greatest achievements of AI are still to come. And I believe that, if you keep your eyes on the development of AI in the near future, you will be witness to some truly amazing things!

Logical Agents that Plan, Execute, and Monitor Communication

July 12, 2009 | Tagged Scientific Publications, Andi-Land

Martin Magnusson, David Landén, and Patrick Doherty (2009). Logical Agents that Plan, Execute, and Monitor Communication. 2nd Workshop on Logic and the Simulation of Interaction and Reasoning (LSIR-2).

Abductive Reasoning with Filtered Circumscription

July 11, 2009 | Tagged Scientific Publications

Martin Magnusson, Jonas Kvarnström, and Patrick Doherty (2009). Abductive Reasoning with Filtered Circumscription. Proceedings of the 8th Workshop on Nonmonotonic Reasoning, Action and Change (NRAC 2009).