Tuesday 19 January 2016

From a tape, an eye and a pencil to artificial intelligence (Part 2)

The story of defining intelligence is not finished yet and, as it appears, far from over. We have started distinguishing multiple types of it, like sports or social intelligence. And this is, unfortunately, not a separate discussion. If we do not know what requirements our machines need to fulfil before we can consider them intelligent, then we are never going to finish arguing. But this "definition of done" is also important for another reason - we need to know what we are trying to model with our machines.

Back in the 1940s, quite early in the history of informatics, another model of computation was born. Threshold Logic, as its authors Warren McCulloch and Walter Pitts called it, inspired a different approach: if we use our brains to think, maybe we should recreate that to build "thinking" machines? Pretty soon another researcher, Donald Hebb, suggested that plasticity should be considered the key to learning. And since Turing's approach was already known back then, people (including Turing himself) started mixing Turing machines with those brain-inspired computational methods. That is often considered to be the birth of Artificial Neural Networks.
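To make Threshold Logic a bit more concrete, here is a minimal sketch of a McCulloch-Pitts-style unit in Python; the weights and the threshold below are my own illustrative choices, not values from the original paper. The unit simply fires when the weighted sum of its binary inputs reaches the threshold.

    def threshold_unit(inputs, weights, threshold):
        """McCulloch-Pitts-style unit: outputs 1 if the weighted sum
        of binary inputs reaches the threshold, otherwise 0."""
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # With unit weights and a threshold of 2, the unit behaves like a logical AND.
    print(threshold_unit([1, 1], [1, 1], threshold=2))  # prints 1
    print(threshold_unit([1, 0], [1, 1], threshold=2))  # prints 0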

In spite of an early spark of interest, the plasticity approach took some time to gain speed. It was not until the late 1950s that the Perceptron, a model of a neuron still used today, was brought to life by Frank Rosenblatt. Almost another two decades went by before Paul Werbos described the backpropagation learning algorithm. And that was still just the beginning of very extensive research on neural networks that continues to this day.
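Since the Perceptron is mentioned above, here is a minimal sketch of Rosenblatt-style perceptron training in Python; the data set, learning rate and number of epochs are illustrative choices of mine. The weights get nudged towards every example the unit currently misclassifies.

    def train_perceptron(samples, epochs=10, lr=0.1):
        """Rosenblatt-style perceptron learning on (inputs, target) pairs
        with targets in {0, 1}. Returns the learned weights and bias."""
        n = len(samples[0][0])
        weights, bias = [0.0] * n, 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                # Predict with the current weights, then nudge them by the error.
                total = sum(w * x for w, x in zip(weights, inputs)) + bias
                prediction = 1 if total >= 0 else 0
                error = target - prediction
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Learn the logical AND function from its truth table.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    weights, bias = train_perceptron(data)
    for inputs, _ in data:
        fired = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0
        print(inputs, "->", fired)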

And so it is 2016 now. Many different neuron models and learning algorithms have been created, massive amounts of data required to train our networks have been gathered, and plenty of successful applications run on our machines. For example, we have neural networks able to recognize dogs in pictures and tell you their breed (better than most humans, at least). So here we go: if intelligence is the ability to gather some knowledge and apply it to new cases - we already have an artificial version of it! On the other hand, you would probably expect more from intelligent machines (even though the demonstrations are absolutely breathtaking to me as an information scientist). After all, humans can do so many things, so why can't those networks do the same? Well, that leads us back to "how many different types of intelligence do we have", I guess. What we can't deny, though, is that at least some form of intelligence is achieved by our machines and algorithms today.

So if we consider neural networks to work pretty well, are we done searching for AI? Absolutely not. It has been argued by many that "this is how our brains work, so we should pursue that direction", but I dare disagree. First of all, we do not really know how our brains work. Second, it is not always the best idea to bring out the big guns, and neural networks may require enormous amounts of training data (which is sometimes not even possible to gather!). Third, we only know that neural networks appear to be a great model for some kinds of intelligence, not for just any problem. Finally, we already have different approaches that bring good results as well.

It may appear entirely natural to us humans, but detecting faces surely can be seen as a kind of intelligence. We have developed classifiers that handle it very well! We use clustering algorithms to help detect cancer. Algorithms called support vector machines are used in financial forecasting. We can even use relatively simple mathematical tools like regression to forecast the weather. All these applications can be considered to implement some form of intelligence, because they include elements of reasoning: they use something they have seen before to make an informed guess about the future.
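To illustrate that last sentence, here is a tiny least-squares linear regression in Python; the temperature readings are made up for the example. It fits a straight line to past observations and uses it to make an informed guess about the next day.

    # Made-up noon temperatures (in Celsius) for five consecutive days.
    days = [1, 2, 3, 4, 5]
    temps = [3.0, 4.1, 4.9, 6.2, 7.1]

    # Ordinary least-squares fit of a line: temp = slope * day + intercept.
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(temps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, temps))
             / sum((x - mean_x) ** 2 for x in days))
    intercept = mean_y - slope * mean_x

    # An "informed guess" about day 6, based purely on what was seen before.
    print("Forecast for day 6:", slope * 6 + intercept)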

I am not going into too much detail about how those different (and many more!) models were born, because the point of this post is different. I wanted to show you that our computers, from simple calculators to the most powerful super-machines, are all equivalents of a Turing machine (and actually quite imperfect ones, in some ways!), and yet somehow people have managed to take that tape, eye and pencil and create applications so complex and amazing that people can't tell them apart from humans just by talking to them!

So are we far from building artificial intelligence? Or are we on the brink of discovering it? Or maybe we already have it and use it everyday? That weird journey through ideas and connections is not finished yet. Because hey, what is that intelligence again?

(The end)
