Tuesday, February 19, 2008

Encephalon returns

I haven't posted for a while due to other commitments, but for now, here's a link to the new incarnation of Encephalon, the first issue of which is now up at Sharp Brains. It has more posts than usual (24 in all), with a wide range of subjects: from God to stress, and free will to depression, via a healthy sprinkling of other neuroscience-related matters...

I guess it could be the 39th edition of Encephalon? (Unless I've missed one somewhere...)

Link to new Encephalon Home

Thursday, February 07, 2008

Simulation versus reality, and the reliance of cognition on embodiment

Cognitive robotics work makes extensive use of both real-world robots and environments, and their simulated equivalents. Simulations are useful in that development time is shorter (or at least has the potential to be), so proof-of-concept experiments are readily implemented, and all of the variables are under the control of the designer, allowing better testing and debugging, for example. However, from a practical point of view, there are a number of reasons why the use of a physical robotic agent is necessary. Brooks suggested through the “physical grounding hypothesis” [1, 2] that since simulations are by their very nature simplifications of the real world, they omit details which may be important to the complexity of the problem faced by the virtual agent. On the other hand, a high-fidelity simulation model may consume more resources (both human and computational) than a real robot would, thereby defeating the purpose of using a simulation at all. Relatedly, it has been suggested that the designers of a simulation make assumptions about what is required, unintentionally introducing biases into the model which affect its validity. One effect of this may be unrealistic behaviours (or ones which would not map onto real-world behaviour). It is acknowledged, however, that when a simulator designed to be independent of any particular theory is used, this last point is effectively rendered void [3].
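
To make the simplification point concrete, consider how easily a simulated sensor idealises what real hardware measures. The following is a minimal Python sketch (the class names and noise figures are illustrative assumptions, not taken from the cited papers): a controller tuned against the idealised reading may fail once the offsets and noise of a physical sensor appear, which is one reason noise is sometimes deliberately injected into simulations to narrow this gap.

    import random

    class IdealisedLightSensor:
        """A simulated sensor returning a clean inverse-square reading."""
        def read(self, distance):
            return 1.0 / (distance ** 2)

    class NoisyLightSensor(IdealisedLightSensor):
        """The same sensor with a fixed offset and Gaussian noise added -
        a crude stand-in for the real-world detail a simulation omits."""
        def __init__(self, offset=0.02, noise_sd=0.05):
            self.offset = offset
            self.noise_sd = noise_sd

        def read(self, distance):
            ideal = super().read(distance)
            return max(0.0, ideal + self.offset + random.gauss(0.0, self.noise_sd))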

In addition to the practical problems outlined in the previous paragraph, there are more philosophical concerns when considering embodiment, which will now be briefly stated. The assertion that embodiment is necessary for cognition is now generally accepted, as evidenced by [4] for example. However, the definition of the notion of embodiment is far from clear. Numerous definitions have been used, eight of the most frequent of which are reviewed in [5]. Among these are general definitions, such as embodiment as structural coupling to the environment, or as physical instantiation as opposed to software agents (as argued for in the previous paragraph). More restrictive definitions also exist, such as organismoid embodiment, which requires an organism-like (life-like, but not necessarily alive) body, or organismic embodiment, which holds that only living bodies allow true embodiment. However, even if the most restrictive definition becomes generally accepted (strong embodiment: that a living body is required), it has been argued that studying 'weakly' embodied systems as if they were strongly embodied would still be a worthwhile research path [6].

One particularly persuasive argument regarding the essential elements of embodied cognition states that “...the sharing of neural mechanisms between sensorimotor processes and higher-level cognitive processes” is of central importance [7]. This view, which is supported by a wide range of empirical evidence, highlights the necessity of 'low-level' sensorimotor contingencies for 'high-level' cognitive processes. In this way, cognition is fundamentally grounded in the sensory and motor capacities of the body in which it is instantiated; cognition cannot exist without embodiment – a point emphasised in [8].

References:
[1] Brooks, R.A., Elephants don't play chess. Robotics and Autonomous Systems, 1990. 6: p. 3-15.
[2] Brooks, R.A., Intelligence without Representation. Artificial Intelligence, 1991. 47: p. 139-159.
[3] Bryson, J., W. Lowe, and L.A. Stein. Hypothesis Testing for Complex Agents. in Proceedings of the NIST Workshop on Performance Metrics for Intelligent Systems. 2000.
[4] Pfeifer, R. and C. Scheier, Understanding Intelligence. 2001, Cambridge, Massachusetts: MIT Press.
[5] Ziemke, T. What's that thing called Embodiment? in 25th Annual Meeting of the Cognitive Science Society. 2003. (review)
[6] Sharkey, N.E. and T. Ziemke, Mechanistic versus Phenomenal embodiment: can robot embodiment lead to strong AI? Journal of Cognitive Systems Research, 2001. 2: p. 251-262. (review)
[7] Svensson, H. and T. Ziemke. Making sense of Embodiment: simulation theories and the sharing of neural circuitry between sensorimotor and cognitive processes. in 26th Annual Cognitive Science Society Conference. 2004. Chicago, IL.
[8] Clark, A. and R. Grush, Towards a cognitive robotics. Adaptive Behavior, 1999. 7(1): p. 5-16. (review)

Wednesday, February 06, 2008

A short note on Artificial Ethology

In attempting to understand the workings of a complex system such as the human brain, psychology has analysed the behaviour of individuals performing certain tasks in order to infer the internal processes at work. The study of behaviour is thus an important aspect of brain research. In zoology, the term ‘ethology’ describes the study of animal behaviour; ‘artificial ethology’ thus describes the study of the behaviour of artificial agents [1], and has been described as an important aspect of research in autonomous [2] and developmental robotics [3].

Robots have been used extensively in the past to explore biological questions, with the observed behaviour of the artificial agents serving as a means of identifying functional requirements. ‘Grey’ Walter’s tortoises were created as a means of investigating goal-seeking behaviour, with numerous parallels drawn to simple animal behaviour, and with biological inspiration used in much the same way as it is today (as reviewed in [4]). Similarly, Braitenberg vehicles [5], particularly the simpler ones, have a strong biological influence (Valentino Braitenberg is himself a brain researcher, who proposed the vehicles as a thought experiment), and provide a strong example of how the environment, as coupled through the physical agent, plays just as important a role in the behaviour (and ‘autonomy’) of an agent as the control mechanism does (as discussed in chapter six of “Understanding Intelligence” [6]). These two examples (many others are described and discussed in [6] and [1]) demonstrate that using robotic agents, and particularly their behaviour, to examine theoretical problems from the animal sciences is an established success. Indeed, it has been suggested that the ultimate aim of artificial agent research is to contribute to the understanding of human cognition [7].
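
As a concrete illustration of how simple such sensorimotor coupling can be, here is a minimal Python sketch of the excitatory Braitenberg vehicles 2a ('fear') and 2b ('aggression'). The function and parameter names are my own; the book [5] presents the vehicles as wiring diagrams, not code.

    def braitenberg_step(left_sensor, right_sensor, crossed=False, gain=1.0):
        """One control step of a Braitenberg vehicle 2a/2b.

        Each wheel speed is simply a scaled sensor reading. With uncrossed
        wiring (vehicle 2a) the wheel nearer the stimulus spins faster, so
        the vehicle turns away ('fear'); with crossed wiring (vehicle 2b)
        it turns towards the stimulus while speeding up ('aggression').
        """
        if crossed:
            left_motor, right_motor = gain * right_sensor, gain * left_sensor
        else:
            left_motor, right_motor = gain * left_sensor, gain * right_sensor
        return left_motor, right_motor

All of the apparent 'psychology' emerges from how this trivial mapping interacts with the layout of stimuli in the environment, which is precisely the point about environmental coupling made in [6].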

References:
[1] Holland, O. and D. McFarland, Artificial Ethology. 2001, Oxford: Oxford University Press (summary)
[2] Sharkey, N.E. and T. Ziemke, Mechanistic versus Phenomenal embodiment: can robot embodiment lead to strong AI? Journal of Cognitive Systems Research, 2001. 2: p. 251-262 (review)
[3] Meeden, L.A. and D.S. Blank, Introduction to Developmental Robotics. Connection Science, 2006. 18(2): p. 93-96
[4] Holland, O., Exploration and high adventure: the legacy of Grey Walter. Philosophical Transactions Of the Royal Society of London A, 2003. 361: p. 2085-2121
[5] Braitenberg, V., Vehicles, experiments in synthetic psychology. 1984, Cambridge, Massachusetts: MIT Press (review)
[6] Pfeifer, R. and C. Scheier, Understanding Intelligence. 2001, Cambridge, Massachusetts: MIT Press
[7] Guillot, A. and J.-A. Meyer, The Animat contribution to Cognitive Systems Research. Journal of Cognitive Systems Research, 2001. 2: p. 157-165 (review)

Tuesday, February 05, 2008

What is autonomy?

In yesterday's post, I reviewed a paper which discussed the role of emotion in autonomy. The concept of autonomy itself was found to be quite fuzzy, with definitions being dependent on the field of research in which the term is used. In an attempt to elucidate the concept, the editorial of the special issue of BioSystems on autonomy (of which the previously reviewed paper was a part) explores some of the issues involved.

Starting from the broad definition of autonomy as self-determination (or the ability to act independently of outside influence), it can be seen that this description applies to many levels of a system (be it biological or artificial). However, the role of external (environmental) influences cannot be discounted: the reactive nature of autonomous systems is an essential part of the picture, to the extent that some theorists have argued that there is no sharp distinction between system and environment; this is the case even in the theory of autopoiesis. So, even from a theoretical standpoint, autonomy does not mean isolation from the environment, but rather an emphasis on independence.

Even here, though, the term 'independence' is problematic. Three aspects are pointed out as being of importance to the present discussion: (1) the reactive extent of interactions with the environment, (2) the extent to which the control mechanisms are self-generated, and (3) the extent to which these inner processes can be reflected upon. From these three properties of independence, it can be seen that autonomy lies on a sliding scale, rather than being a binary property.

The final notion of relevance to the present discussion of autonomy is self-organisation, as it is a central element of life, and of those properties which we desire artificial systems to have. While some have shied away from the term because of the connotation of something doing the organising, self-organisation is generally used to refer to the spontaneous emergence of organisation, and/or the maintenance of the system's organisation once in this state. An interesting aspect of the term is this: a self-organising system cannot be broken down into constituent parts for analysis, since those parts are interdependent (an aspect likely to be emphasised by autopoietic approaches).

An additional aspect of the discussion of autonomy covered in this editorial is the theoretical tension between ALife (artificial life) and GOFAI (good old-fashioned AI) techniques. While the latter has often been pilloried, the author points out a number of theoretical successes it has had in describing autonomy and agency which have not been matched by ALife, due to the latter's emphasis on lower-level processes – an approach which, in its own way, has proven enormously successful in accounting for a number of the mechanisms involved.

While this discussion of the term autonomy has not resulted in a hard and fast definition, the consideration of two closely related concepts (independence and self-organisation) has placed the term into a context applicable to a wide range of research fields. Indeed, this lack of a definite definition may prove to be more productive than counter-productive.

BODEN, M. (2008). Autonomy: What is it? Biosystems, 91(2), 305-308. DOI: 10.1016/j.biosystems.2007.07.003

Monday, February 04, 2008

On the role of emotion in biological and robotic autonomy

Autonomy is a concept often used, but not always clearly defined. Indeed, a number of definitions are in use, often dependent on the context. For example, "autonomy" may be used to refer to a mobile robot in the sense that it can move around on its own (whatever the control system used), but the same term may also be applied to a biological agent capable of defining its own goals and surviving in the real world. In the debate on autonomy, and as indicated by these examples, the concepts of embodiment and emotion are also important in explaining the mechanisms involved. In recent times, emotion has become a hot topic in a wide range of disciplines, from neuroscience and psychology to cognitive robotics. In order to elucidate the role of emotion in autonomy, Tom Ziemke reviews the concepts concerned and outlines a promising course of future research.

First comes a discussion of the difference between robotic and biological autonomy. This discussion is especially pertinent given the problem mentioned in the first paragraph: the widely differing definitions of autonomy used in robotics work. Important for biological autonomy is the concept of autopoiesis. Broadly speaking, an autopoietic agent is one which is capable of maintaining its own organisation: it has the ability to produce the components which define it. For example, a multicellular organism has the ability to create individual cells, which in turn form the organism itself. Although slightly different versions of the term exist, they all emphasise this self-constitutive property – and thereby exclude all current robotic technology. In robotics, autonomy generally refers to independence from human control: the aim is that the robot determines its own goals in the environment. This use of the term has some problems, particularly with regard to the biological definition, but is in widespread use. An important point raised, though, is that robotic autonomy refers to systems embodied in mobile robots acting in the real world, as opposed to the mostly disembodied decision-making systems of more traditional AI methods.

With embodiment comes the issue of grounding. Following Harnad's formulation of the symbol grounding problem, and Searle's Chinese room argument, the grounding of meaning for artificial agents is an important issue. A large amount of work was carried out in this area throughout the 1990s as a means of improving robot behaviour. However, merely imposing a physical body does not necessarily result in intelligent behaviour, since this form of embodiment emphasises sensorimotor interaction and not the many other aspects which are highly relevant for biological agents. The question then is: what is missing from robotic setups?

The argument is that robotic models, in addition to implementing the sensorimotor interactions which have previously been emphasised, must also link these to an equivalent of homeostatic processes: i.e. linking cognition to the body itself, not just to the body's sensors and motors. An example of this may be the need to keep a system variable (perhaps battery level) within a certain range, such that behaviour must be modified in order to achieve this. A number of theorists have likened this connection to a hierarchical organisation, with homeostatic processes (or metabolism) providing a base for the more 'cognitive' sensorimotor processes, supposedly resulting in more complex, and more meaningful, emergent behaviour. Homeostatic processes are often implemented in robotic systems as emotion or value systems, which are often ill-defined, and usually not grounded in homeostatic processes but arbitrarily added as externally defined (or observer-defined) variables. The widely differing definitions used for emotion are problematic when it comes to comparisons between architectures. One definition, provided by Damasio, breaks down the broad notion of emotion as displayed by humans into "different levels of automated homeostatic regulation": basically, the term "emotion" can be applied to a range of behaviours, from metabolic regulation, through drives and motivations, to feelings (e.g. anger, happiness). In this way, these somewhat arbitrarily defined implementations of emotion may be seen as higher levels of the emotion hierarchy, which may ultimately be tied to bodily processes (e.g. somatic theories of emotion).
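
As a minimal sketch of the battery-level example above (the thresholds and behaviour names are illustrative assumptions, not Ziemke's proposal), such a homeostatic drive might look like this in Python:

    def select_behaviour(battery_level, low=0.3, high=0.9):
        """Toy homeostatic regulation: keep one 'bodily' variable
        (battery level, in [0, 1]) within a viable range by switching
        behaviour - loosely analogous to a drive or motivation."""
        if battery_level < low:
            return "seek_charger"    # 'hunger': self-maintenance overrides other goals
        elif battery_level > high:
            return "explore"         # needs satisfied: free to pursue other activity
        else:
            return "continue_task"   # viable range: carry on as before

    # A falling battery level changes the selected behaviour:
    for level in (0.95, 0.6, 0.2):
        print(level, "->", select_behaviour(level))

The point of the hierarchical view is that such a variable would be grounded in the agent's actual bodily processes, rather than added as an arbitrary, observer-defined value.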

Bringing this discussion of autonomy and emotion in artificial (robotic) systems together, it is clear that current technologies are neither autonomous in the narrow biological sense, nor implement grounded emotions (given their supposed basis in biological homeostatic processes). However, it has been argued that the narrow biological definitions do not provide sufficient conditions for cognition, and that higher-level cognitive processes do not necessarily emerge from these constitutive processes alone: interactive processes are also necessary. Similarly, the necessity of such autopoietic properties for self and consciousness has not been established. Robotic models may then be used as models of autonomy without having to rely on such philosophical concerns. The conclusion which emerges, though, is that embodied cognition of the form favoured in cognitive robotics work commits itself to a central role for the body, not just in sensorimotor terms, but also in homeostatic terms. The interplay between the two is then of central importance, and its investigation is proposed as a promising avenue for future research.

Ziemke, T. (2008). On the role of emotion in biological and robotic autonomy. BioSystems, 91(2), 401-408.

Sunday, February 03, 2008

The Week of Science

As most probably know, the Week of Science is due to start tomorrow, and will last five days instead of last year's seven. As last year, I'll update this post as an index to my posts over the five-day period.

Day 1: On the role of emotion in biological and robotic autonomy

Day 2: What is Autonomy?

Day 3: A short note on Artificial Ethology

Day 4: Simulation versus reality, and the reliance of cognition on embodiment

Day 5: Due to other commitments, I wasn't able to produce a post of adequate quality for the final day of Just Science week. Bring on next year!

The RSS feed for the Just Science aggregator can be found here.