Thursday, May 03, 2007

Mechanistic and Phenomenal Embodiment

As may be seen from some of my previous posts, I consider the concept of embodiment to be a key one, both in the understanding of neural intelligence and cognition, and in the creation of artificial intelligence and cognitive robotics. The question of what embodiment actually means is thus a vital initial step. This paper, by Sharkey and Ziemke (ref at end of post), distinguishes two types of embodiment, mechanistic and phenomenal, which lie at opposite ends of a continuum, and assesses whether either, given current robotic technologies, is sufficient for creating strong AI (as defined by Searle, strong AI is the claim that "the implemented program, by itself, is constitutive of having a mind. The implemented program, by itself, guarantees mental life." - Searle, 1997).

Mechanistic Embodiment:
• Is the view that "cognition is embodied in the control architecture of a sensing and acting machine." This is the entirety of the agent: there is nothing separable from this mechanism - no internal representations or symbols (the symbol grounding problem is thus not an issue).
• This form of control stems from the work of Sherrington (who studied the nervous system and reflexes) and Loeb (who worked on tropisms - directed movements towards or away from stimuli, later termed taxis - the manner in which the environment directs the actions of the agent).
• Fraenkel and Gunn (1940) proposed that "the behaviour of many organisms can be explained as a combination of taxes working together and in opposition." The authors of this paper indicate that it was this general point of view which "heralded behaviour-based robotics".
• An excellent example of mechanistic embodiment, combining Sherrington's and Loeb's ideas, is the set of electronic tortoises built by Grey Walter in the 1950s. These tortoises were electromechanical devices consisting of basic electronic components, which enabled them to exhibit relatively complex behaviours - including light attraction or aversion depending on battery level, obstacle avoidance, and constant movement.
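The tortoise's behavioural repertoire can be sketched as a handful of condition-action rules. This is only an illustrative sketch - the sensor names, thresholds, and sign conventions below are my own assumptions, and the real tortoises were analogue circuits, not digital rule-followers:

```python
def tortoise_step(light_left, light_right, battery_level, obstacle):
    """One control step for a Grey Walter-style tortoise.

    Returns (forward_speed, turn_rate); positive turn_rate means
    turn right. All thresholds and gains are illustrative guesses.
    """
    if obstacle:
        # Obstacle avoidance: back off and turn.
        return (-0.5, 1.0)
    if battery_level < 0.3:
        # Low battery: light attraction - steer toward the brighter
        # side (in the real tortoise, toward the lit recharging hutch).
        return (1.0, light_right - light_left)
    # Charged: constant movement with light aversion - steer away
    # from the brighter side.
    return (1.0, light_left - light_right)
```

Note that the "complex" behaviour arises entirely from which fixed rule is currently active; there is nothing in the mechanism over and above the sensing-acting couplings themselves.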

Phenomenal Embodiment:
• As opposed to mechanistic embodiment, which is wholly concerned with the physical, phenomenal embodiment is the embodiment of a mental or subjective world. It finds its roots in von Uexkull's Umwelt (the subjective or phenomenal world), which brings together an organism's perceptual and motor worlds. This view is nevertheless somewhat compatible with Sherrington and Loeb's view of the organism as an integral part of its environment, although here perception and action are of central importance, rather than physical form.
• The study of embodied cognition has seen a reassessment of the relevance of "life and biological embodiment". In this case, the biological (physical) embodiment defines the perceptual world, thus fundamentally affecting the phenomenal embodiment.
• This leads to a definition of cognition: "...cognition is viewed as embodied action by which Varela et al (1991) mean '...first, that cognition depends upon the kinds of experience that come from having a body with various sensorimotor capacities, and second, that these individual sensorimotor capacities are themselves embedded in a more encompassing biological, psychological, and cultural context.'".
• One more quote: "Thus phenomenal embodiment is the notion of cognition as an intrinsic part of a living body; as a process of that body."

Given these two extreme types of embodiment, the paper then turns to the question of whether either can lead to strong AI. The following is only the briefest of summaries of the arguments presented, focusing instead on the conclusions drawn.

The first point of discussion on phenomenal embodiment was the presence of a living body. The uncontroversial argument is put forward that (current) robots do not have bodies which act in the same way as the bodies of biological agents. Thus a robot is constructed from previously fabricated parts brought together in a predefined configuration, whereas "the construction of an animal starts centrifugally; animal organs grow outwards from single cells." A further argument, more important for this discussion, is that a living system, such as a single cell, is capable of creating and maintaining its identity in an environment subject to constant perturbation, despite the fact that the individual components of the living system are themselves continually changing or being renewed.

It is from this that the concept of autopoiesis (which essentially means self-creating or self-producing), first laid out by Maturana and Varela (1980), is introduced. A quote: "An autopoietic machine, such as a living system, is a special type of homeostatic machine for which the fundamental variable to be maintained constant is its own organisation. This is unlike regular homeostatic machines which typically maintain single variables, such as temperature or pressure." It is argued that it is this concept that forms the largest barrier between current robots and living systems in terms of embodiment. In similar terms, robots may be described as allopoietic: the organisation of the system is defined by elements which are themselves not part of that organisation, and in this sense machines cannot be described as truly autonomous. Indeed, a number of people have made comments along these lines - essentially that strong phenomenal embodiment is not possible on a robot, because it does not have the prerequisite living body. To get round this problem, the authors acknowledge that simulation may be used.
However, since this would, in the authors words, "...view an allopoietic machine 'as if' it were an autopoietic machine..." it would be weak, and not strong embodiment.

Concerning mechanistic embodiment, the argument essentially revolves around the attribution of meaning by observers - or what is termed in the paper the "clever AutoHans error". Most will be familiar with the story of Clever Hans, a horse apparently capable of basic arithmetic. The horse stunned audiences of scientists and the public alike with its ability to tap out with its hoof the answers to simple sums displayed in front of it on a piece of paper. Hans always got the answer right, which led to a number of serious theories concerning the horse's use of mental arithmetic. To cut a long story short, it was eventually ascertained that Hans was picking up on subtle cues given by the observers once it had tapped its hoof the correct number of times (when the sum was hidden from the observers, Hans tapped an arbitrary number of times). The 'moral of the story' is that it was the observers and the observers alone who attributed the capability of mental arithmetic to the horse - the number of times Hans tapped likely had no inherent meaning for it, merely that stopping on a certain cue brought a reward.

Similarly, if one were observing a small mobile robot - an AutoHans, one could say - programmed to turn towards light and away from obstacles while always moving forward, using light sensors and ultrasonic sensors respectively (essentially a type of Braitenberg vehicle), its behaviour could be described as light-following and obstacle-avoiding. An uninformed observer may even state that the robot liked light, and didn't want to hurt itself by hitting a wall. However, these explanations only have meaning for the observer, just as with Hans the horse. Likewise, the behaviour of Grey Walter's tortoises would only have meaning for the observer - the mechanical agent itself merely acts according to pre-defined rules.
Essentially, the agent is created to perform in a certain way in a certain environment, even if this is not the intention of the designer. The goal of behaviour is thus defined by the researcher and not by the agent - hence the same situation as before: behaviour only has meaning for the observer.
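The point is easy to see in an AutoHans-style Braitenberg controller. In the sketch below (sensor names, wiring, and gains are my own illustrative assumptions), there is nothing that could bear the description "likes light" - only fixed sensor-motor couplings:

```python
def autohans_step(light_left, light_right, sonar_left, sonar_right):
    """Map sensor readings directly to wheel speeds.

    Light excites the opposite wheel (crossed wiring -> the faster
    wheel swings the robot toward the light); a near obstacle excites
    the same-side wheel (-> turn away). Sonar readings are distances
    in [0, 1], so closeness is (1 - distance). There are no goals or
    symbols here; any 'light-seeking' is in the observer's description.
    """
    base = 0.5  # constant forward drive
    left_wheel = base + light_right + (1.0 - sonar_left)
    right_wheel = base + light_left + (1.0 - sonar_right)
    return (left_wheel, right_wheel)
```

An observer watching this robot veer toward a lamp and around a wall may attribute desires and fears to it, but the attribution lives entirely on the observer's side of the glass.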

This brief discussion of the two extremes of embodiment had a common element for the authors: with current technologies, robots do not have living bodies. The implications for phenomenal embodiment were questions of physical, and hence mental, autonomy; for mechanistic embodiment the same issues of autonomy arise, albeit from a different source: that of goal creation. The authors' conclusion, based on these deficiencies, is that strong embodiment is not in principle possible for robots using current technologies. However, it is acknowledged that weak embodiment of both types exists, and that it maintains utility in the study of animal behaviour (as I have previously posted on, here and here). Essentially, one studies the allopoietic robotic system as if it were an autopoietic system. This approach may yield useful insights into behaviour, even if it cannot be claimed that the created agents are equivalent to their biological counterparts.

Can't seem to add the refs yet (Blogger isn't letting me edit my own posts...) - apologies. Ref to paper below:

Sharkey, N. E. and T. Ziemke (2001). "Mechanistic versus phenomenal embodiment: can robot embodiment lead to strong AI?" Cognitive Systems Research 2(4): 251-262.