Autonomy is a concept often used but not always clearly defined. Indeed, a number of definitions are in use, often dependent on the context in which the term appears. For example, "autonomy" may be used of a mobile robot in the sense that it can move around on its own (whatever the control system used), but the same term may also be applied to a biological agent capable of defining its own goals and surviving in the real world. As these examples indicate, the concepts of embodiment and emotion are also important in the debate on autonomy, as they help explain the mechanisms involved. In recent times, emotion has become a hot topic in a wide range of disciplines, from neuroscience and psychology to cognitive robotics. In order to elucidate the role of emotion in autonomy, Tom Ziemke reviews the concepts concerned and outlines a promising course for future research.
First comes a discussion of the difference between robotic and biological autonomy. This discussion is especially pertinent given the problem mentioned in the first paragraph: the widely differing definitions of autonomy used in robotics work. Central to biological autonomy is the concept of autopoiesis. Broadly speaking, an autopoietic agent is one capable of maintaining its own organisation: it has the ability to produce the components which define it. For example, a multicellular organism has the ability to create individual cells, which in turn form the organism itself. Despite a range of slightly different versions of the term, all emphasise this self-constitutive property - and thereby exclude all current robotic technology. In robotics, by contrast, autonomy generally refers to independence from human control: the aim is that the robot determines its own goals in the environment. This use of the term has some problems, particularly with regard to the biological definition, but is in widespread use. An important point raised, though, is that robotic autonomy refers to systems embodied in mobile robots acting in the real world, as opposed to the mostly disembodied decision-making systems of more traditional AI methods.
With embodiment comes the issue of grounding. Following Harnad's formulation of the symbol grounding problem, and Searle's Chinese room argument, the grounding of meaning for artificial agents is an important issue. A large amount of work was carried out in this area throughout the 1990s as a means of improving behaviour. However, the mere imposition of a physical body does not necessarily result in intelligent behaviour, since this form of embodiment emphasises sensorimotor interaction, and not the many other aspects which are highly relevant for biological agents. The question then is: what is missing from robotic setups?
The argument is that robotic models, in addition to implementing the sensorimotor interactions previously emphasised, must also link these to an equivalent of homeostatic processes: i.e. linking the body to cognition, not just the body's sensors and motors. An example of this may be a need to keep a system variable (perhaps battery level) within a certain range - behaviour must then be modulated in order to achieve this. A number of theorists have likened this connection to a hierarchical organisation, with homeostatic processes (or metabolism) providing a base for the more 'cognitive' sensorimotor processes, thus supposedly resulting in more complex, and meaningful, emergent behaviour. Homeostatic processes are often implemented in robotic systems as emotion or value systems, which are often ill-defined and not usually grounded in homeostatic processes, but arbitrarily added as externally defined (or observer-defined) variables. The widely differing definitions used for emotion are problematic when it comes to comparisons between architectures. One definition, provided by Damasio, breaks down the broad notion of emotion as displayed by humans into "different levels of automated homeostatic regulation": the term "emotion" can thus be applied to a range of behaviours, from metabolic regulation, through drives and motivations, to feelings (e.g. anger, happiness). In this way, these somewhat arbitrarily defined implementations of emotion may be seen as higher levels of the emotion hierarchy, which may ultimately be tied to bodily processes (e.g. somatic theories of emotion).
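The battery-level example above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not an implementation from Ziemke's paper: the class name, thresholds, and update rules are all assumptions invented for illustration. It shows the core idea that behaviour selection is driven by an internal homeostatic variable rather than by external commands, and that the selected behaviour in turn feeds back on that variable.

```python
# Illustrative sketch of homeostatically driven behaviour selection.
# All names and numbers here are hypothetical, chosen only to make
# the battery-level example from the text concrete.

class HomeostaticAgent:
    """Agent that switches behaviour to keep a 'battery' variable in range."""

    def __init__(self, battery=1.0, low=0.3, high=0.9):
        self.battery = battery  # internal homeostatic variable in [0, 1]
        self.low = low          # below this, recharging becomes the drive
        self.high = high        # above this, free exploration resumes
        self.mode = "explore"

    def step(self):
        # Homeostatic regulation: behaviour is selected as a function
        # of the internal variable, not of an external command.
        if self.battery < self.low:
            self.mode = "recharge"
        elif self.battery > self.high:
            self.mode = "explore"
        # The selected behaviour in turn affects the internal variable.
        if self.mode == "recharge":
            self.battery = min(1.0, self.battery + 0.2)
        else:
            self.battery = max(0.0, self.battery - 0.1)
        return self.mode
```

The two thresholds give the switch a simple hysteresis: the agent explores until the battery drops below the lower bound, then recharges until it exceeds the upper bound, rather than oscillating around a single set point.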
Bringing this discussion of autonomy and emotion in artificial (robotic) systems together, it is clear that current technologies are neither autonomous in the narrow biological sense, nor equipped with grounded emotions (given the supposed basis of emotion in biological homeostatic processes). However, it has been argued that the narrow biological definitions do not provide sufficient conditions for cognition, and that higher-level cognitive processes do not necessarily emerge from these constitutive processes alone: interactive processes are also necessary. Similarly, the necessity of such autopoietic properties for self and consciousness is not established. Robotic models may then be used as models of autonomy without having to rely on such philosophical concerns. The emergent conclusion, though, is that embodied cognition of the form favoured in cognitive robotics work commits itself to a central role for the body, not just in sensorimotor terms, but also in homeostatic terms. The interplay between the two is then of central importance, and its investigation is proposed as a promising avenue for future research.
Ziemke, T. (2008). On the role of emotion in biological and robotic autonomy. BioSystems, 91(2), 401-408.