Friday, March 30, 2007

"Why Smart Machines need Emotions"

In the 24 February 2007 issue of New Scientist, Marvin Minsky, one of the fathers of artificial intelligence research, gives a brief interview on why he thinks emotions should now be regarded as a necessity in making more intelligent machines.

As many have pointed out over the years, artificial intelligence (AI) has proven very successful at solving problems traditionally thought of as difficult, such as those involving complex mathematical equations or game playing (e.g. chess). However, what AI has found extremely difficult are the tasks that we as humans find very easy, such as understanding the story in a children's book or making the bed. The problem is one of inflexibility: if one method fails, then where a human is capable of formulating alternatives, an agent imbued with artificial intelligence rarely is. According to Minsky, a human's ability to do this is strongly influenced by emotions. Furthermore, he maintains that emotions are usually 'simpler' than other forms of what is usually known as thought.

If I may make use of a quote, Minsky gives an example of how this may occur: "When someone gets angry, we can see that some of their mental resources switch off. They abandon some long-range plans and goals, and become less cautious and thoughtful. This frees them to be stronger and to think on their feet, making it easier for them to intimidate others." I'm not sure I quite agree with the switching on and off of mental resources, but the example illustrates the concept nicely.
Minsky's view is that rational thought does not exist in isolation, with emotion merely providing additional features; in fact, quite the reverse: emotion is an integral part of thought processes. In a scheme he calls the critic-selector model of the mind, the brain is made up of a number of resources, each of which is a structure or process responsible for a mode of thought, and which may be activated for the currently required behaviour. These resources may be considered either 'emotional' or 'intellectual' processes. Furthermore, critics and selectors are present: the former recognise problems, or potential ones, and the latter select which resources to activate at any given moment. This is as far as the theory is explained in this brief interview, though mention is of course made of his recently released book, "The Emotion Machine", in which the critic-selector model of the mind is further expounded.
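To make the idea concrete, the critic-selector loop described above can be sketched in a few lines of code. This is a toy illustration only: the critics, problem labels, and resource names below are my own inventions for the sake of the example, not Minsky's formal model.

```python
# Toy sketch of a critic-selector loop: critics inspect the current
# state and flag problem types; a selector maps each problem type to a
# resource (a mode of thought) to activate. All names are illustrative.

def critic_stuck(state):
    # Flags an impasse when the current method has failed repeatedly.
    return "impasse" if state.get("failures", 0) >= 3 else None

def critic_threat(state):
    # Flags a threat when some danger signal is high.
    return "threat" if state.get("threat_level", 0) > 5 else None

CRITICS = [critic_stuck, critic_threat]

# Selector: which resource to switch on for each recognised problem.
SELECTOR = {
    "impasse": "try_analogy",    # switch to analogical reasoning
    "threat": "react_quickly",   # drop long-range planning, act fast
    None: "deliberate",          # default careful, 'intellectual' mode
}

def select_resource(state):
    for critic in CRITICS:
        problem = critic(state)
        if problem is not None:
            return SELECTOR[problem]
    return SELECTOR[None]

print(select_resource({"failures": 4}))                    # try_analogy
print(select_resource({"failures": 0, "threat_level": 9})) # react_quickly
print(select_resource({"failures": 0}))                    # deliberate
```

Note how the "anger" example from the quote above fits this shape: a threat critic fires, and the selected resource trades long-range planning for quick reaction.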

As a closing note of interest, the final question of the mini-interview concerned Minsky's view of the future of AI. He says that there is a need for schemes which "combine multiple ways of thinking", much like the one he proposes, and he encourages students in both AI and neuroscience to look into this as a matter of importance, although he does acknowledge that the resources are not in place to promote this sort of blue-skies research.

2 comments:

Chris Chatham said...

This critic-selector idea is interesting in the sense that it appears to relate to the actor-critic architecture used in some temporal difference learning algorithms, and to recent evidence that the dorsal and ventral striatum may be doing something similar (http://psych.colorado.edu/~oreilly/papers/AtallahEtAl07.pdf).
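For readers unfamiliar with the architecture this comment refers to, here is a minimal actor-critic temporal-difference sketch on a toy two-state environment. The environment, learning rates, and update rules are a standard textbook form of the idea, not taken from the linked paper; everything here is illustrative.

```python
import math
import random

# Minimal actor-critic TD sketch on a two-state chain.
# The critic learns state values V(s) from the TD error; the actor
# nudges its action preferences using that same TD error as a signal.

random.seed(0)
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.9  # critic rate, actor rate, discount

V = [0.0, 0.0]                    # critic: value estimate per state
pref = [[0.0, 0.0], [0.0, 0.0]]   # actor: preference per (state, action)

def step(s, a):
    # Toy dynamics: action 1 in state 0 leads to state 1 with reward 1;
    # everything else returns to state 0 with no reward.
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

def softmax_choice(prefs):
    # Sample an action in proportion to exp(preference).
    exps = [math.exp(p) for p in prefs]
    r = random.random() * sum(exps)
    for i, e in enumerate(exps):
        r -= e
        if r <= 0:
            return i
    return len(exps) - 1

s = 0
for _ in range(2000):
    a = softmax_choice(pref[s])
    s2, reward = step(s, a)
    td_error = reward + GAMMA * V[s2] - V[s]  # the critic's "surprise"
    V[s] += ALPHA * td_error                  # critic update
    pref[s][a] += BETA * td_error             # actor update
    s = s2

# After training, the actor should prefer action 1 in state 0.
print(pref[0][1] > pref[0][0])
```

The loose analogy to the critic-selector model is that one component evaluates how things are going while a separate component decides what to do about it.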

Ian Parker said...

The basic problem is one of understanding natural language. There must be flexibility on the Web, since there are thousands of different approaches to a particular task.

I find it surprising in many ways that a computer cannot "make a bed", as making a bed is a fairly straightforward problem of dynamical simulation and prediction.

I am in fact more interested in what we have in our bed: "¿Quieres dormir con fósforo?" (literally, "Do you want to sleep with a match?"). Google Translate has to be able to recognize words in context and differentiate between the different senses of "match".