"The possibility of intelligent behaviour is indicated by its manifestation in biological systems. It seems logical then that a suitable starting point for the study of behaviour-based robotics should begin with an overview of biological behaviour. First, animal behaviour defines intelligence. Where intelligence begins and ends is an open-ended question, but we will concede in this text that intelligence can reside in subhuman animals. Our working definition will be that intelligence endows a system (biological or otherwise) with the ability to improve its likelihood of survival within the real world and where appropriate to compete or cooperate successfully with other agents to do so. Second, animal behaviour provides an existence proof that intelligence is achievable. It is not a mystical concept, it is a concrete reality, although a poorly understood phenomenon. Thirdly, the study of animal behaviour can provide models that a roboticist can operationalise within a robotic system. These models may be implemented with high fidelity to their animal couterparts or may serve only as an inspiration for the robotics researcher."
These three points provide the basis for the view that animal behaviour has a lot to offer the robotics community (behaviour-based robotics specifically), and hint at the potential feedback that such work may offer biologists. Without actually mentioning the term, this description could just as well be applied to artificial (or computational) ethology. The first point I find particularly interesting. In my opinion, the working definition of 'intelligence' it introduces is not particularly controversial - however, the implication of the phrase '...improve its likelihood of survival...' for cognitive/autonomous robotics is that without some form of actual physical dependency on the environment (e.g. 'food' - and, I hesitantly add, some form of concept of 'life and death' for the agent concerned), intelligence for an artificially created being means nothing (see a related concept in embodiment: organismic embodiment). The second point is one which is generally assumed but not usually explicitly stated, and something which I think is useful to remind oneself of occasionally. The third point I think is self-evident, stated many times, and with plenty of examples in the literature. In fact, I think it is the basis for most cognitive robotics work.
The next part of the introduction to chapter two lists two reasons why the robotics community has traditionally resisted using the previously mentioned methods to create artificial agents with 'useful' behaviours (e.g. perceiving and acting in an environment):
"First, the underlying hardware is fundamentally different. Biological systems bring a large amount of evolutionary baggage unnecessary to support intelligent behaviour in their silicon based counterparts. Second, our knowledge of the functioning of biological hardware is often inadequate to support its migration from one system to another. For these and other reasons, many roboticists ignore biological realities and seek purely engineering solutions."
The second point is, I feel, perfectly justified. One only has to consider, for example, the complexity of natural neurons and networks in comparison to the most advanced artificial neural networks, which use population-based firing rates, to see that this is true. The first point, however, I don't think is necessarily true, especially if one considers that the biological hardware which 'produces' the intelligent behaviour we seek holds many of the answers. In this case, an understanding of the 'evolutionary baggage' which produces the biological hardware would be of importance when seeking to understand the intelligent behaviour itself. Or so I think, anyway.
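To make that comparison a little more concrete, here is a minimal sketch of my own (not anything from the book) contrasting a standard rate-coded artificial unit with a leaky integrate-and-fire spiking neuron - itself already a drastic simplification of a real neuron. All parameter values are arbitrary and chosen purely for demonstration.

# Illustrative sketch only: a rate-coded ANN unit versus a leaky
# integrate-and-fire (LIF) neuron. Parameters are arbitrary demo values.

import math

def rate_unit(inputs, weights, bias=0.0):
    """Typical rate-coded unit: a single number summarises activity."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid 'firing rate'

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
               v_reset=-65.0, v_threshold=-50.0, r_m=10.0):
    """LIF neuron: membrane voltage evolves over time and emits
    discrete spikes when it crosses a threshold."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Euler step of: dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_threshold:
            spike_times.append(t)  # record spike time
            v = v_reset            # reset voltage after spiking
    return spike_times

if __name__ == "__main__":
    print("rate unit output:", rate_unit([0.5, 0.8], [1.2, -0.4]))
    print("LIF spike times:", lif_neuron([2.0] * 100))

Even the spiking model here ignores dendritic structure, ion channel dynamics, neuromodulation and so on, which is really the point: our most common artificial abstractions sit a long way below the complexity of the biological hardware.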