Tuesday, February 19, 2008

Encephalon returns

I haven't posted for a while due to other commitments, but for now, here's a link to the new incarnation of Encephalon, the first issue of which is now up at Sharp Brains. It has more posts than usual (24 in all), covering a wide range of subjects: from God to stress, and free will to depression, via a healthy sprinkling of other neuroscience-related matters...

I guess it could be the 39th edition of Encephalon? (Unless I've missed one somewhere...)

Link to new Encephalon Home

Thursday, February 07, 2008

Simulation versus reality, and the reliance of cognition on embodiment

Cognitive robotics work makes extensive use of both real-world robots and environments, and their simulated equivalents. Simulations are useful in that development time is shorter (or at least has the potential to be), so proof-of-concept experiments are readily implemented, and all of the variables are under the control of the designer, allowing, for example, better testing and debugging. However, from a practical point of view, there are a number of reasons why the use of a physical robotic agent is necessary. Brooks suggested through the “physical grounding hypothesis” [1, 2] that since simulations are by their very nature simplifications of the real world, they omit details which may be important to the complexity of the problem faced by the virtual agent. However, by attempting to implement a high-fidelity simulation model, one may use more resources (both human and computational) than by using a real robot – hence defeating the purpose of using a simulation at all. Relatedly, it is also suggested that the designers of a simulation make assumptions about what is required, thereby unintentionally introducing biases into the model, which would affect the validity of the simulation. One effect of this may be unrealistic behaviours (or behaviours which would not map onto real-world behaviour). It is acknowledged, however, that when a simulator designed to be independent of any particular theory is used, this last point is effectively rendered void [3].
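To make the practical trade-off above a little more concrete, a common pattern in cognitive robotics is to write the controller against an abstract body interface, so that exactly the same code can drive either a simulated or a physical robot. The following is only a minimal sketch of that pattern: every class, method and value here is invented for illustration, and is not drawn from any particular robotics framework.

```python
from abc import ABC, abstractmethod
import random


class RobotBody(ABC):
    """Abstract interface: the controller never knows whether it is
    driving a simulation or a physical robot."""

    @abstractmethod
    def read_proximity(self) -> list[float]:
        """Return left/right proximity readings in [0, 1]."""

    @abstractmethod
    def set_wheel_speeds(self, left: float, right: float) -> None:
        """Command the two wheel motors."""


class SimulatedBody(RobotBody):
    """Idealised simulation: cheap to run, but the sensor model is an
    assumption made by the designer, which is exactly where hidden
    biases can creep in (cf. the physical grounding hypothesis)."""

    def read_proximity(self) -> list[float]:
        return [random.random(), random.random()]  # toy sensor model

    def set_wheel_speeds(self, left: float, right: float) -> None:
        pass  # a real simulator would update the simulated pose here


def avoid_obstacles(body: RobotBody, steps: int = 100) -> None:
    """Simple reactive controller: slow the wheel opposite the nearer
    obstacle. The same function could be handed a physical-robot
    implementation of RobotBody without modification."""
    for _ in range(steps):
        left, right = body.read_proximity()
        body.set_wheel_speeds(1.0 - right, 1.0 - left)


if __name__ == "__main__":
    avoid_obstacles(SimulatedBody())
```

The point of the sketch is that the simplifications (and potential designer biases) live entirely inside the simulated implementation of the interface, while the controller itself is untouched when moving to hardware.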

In addition to the practical problems outlined in the previous paragraph, there are more philosophical concerns regarding embodiment, which will now be briefly stated. The assertion that embodiment is necessary for cognition is now generally accepted, as evidenced by [4] for example. However, the definition of the notion of embodiment is far from clear. Numerous definitions have been used, eight of the most frequently used of which are reviewed in [5]. Among these are general definitions such as embodiment as structural coupling to the environment, or as physical instantiation as opposed to software agents (as argued for in the previous paragraph). More restrictive definitions also exist, such as organismoid embodiment (requiring organism-like bodies, which are life-like but not necessarily alive), or organismic embodiment, which holds that only living bodies allow true embodiment. However, even if the most restrictive definition becomes generally accepted (strong embodiment: that a living body is required), it has been argued that studying 'weakly' embodied systems as if they were strongly embodied would still be a worthwhile research path [6].

One particularly persuasive argument regarding the essential elements of embodied cognition states that “...the sharing of neural mechanisms between sensorimotor processes and higher-level cognitive processes” is of central importance [7]. This view, which is supported by a wide range of empirical evidence, highlights the necessity of 'low-level' sensorimotor contingencies for 'high-level' cognitive processes. In this way, cognition is fundamentally grounded in the sensory and motor capacities of the body in which it is instantiated; cognition cannot exist without embodiment – a point emphasised in [8].
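As a rough computational illustration of what 'sharing of neural mechanisms' can mean, the sketch below routes both a sensorimotor forward model and a 'higher-level' categorisation read-out through the same hidden layer, so that the cognitive output is computed over the sensorimotor representation rather than in isolation. This is only a toy NumPy example; the layer sizes and names are my own assumptions, not an implementation of the model in [7].

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS, N_MOTORS, N_HIDDEN, N_CATEGORIES = 8, 2, 16, 4

# One shared hidden layer...
W_shared = rng.normal(size=(N_SENSORS + N_MOTORS, N_HIDDEN))
# ...feeding two read-outs: a sensorimotor forward model and a
# 'higher-level' categorisation of the current situation.
W_forward = rng.normal(size=(N_HIDDEN, N_SENSORS))
W_category = rng.normal(size=(N_HIDDEN, N_CATEGORIES))


def step(sensors: np.ndarray, motors: np.ndarray):
    """Both outputs are computed from the same hidden representation,
    so the 'cognitive' judgement is grounded in sensorimotor state."""
    hidden = np.tanh(np.concatenate([sensors, motors]) @ W_shared)
    predicted_sensors = np.tanh(hidden @ W_forward)   # sensorimotor prediction
    category_scores = hidden @ W_category             # 'cognitive' read-out
    return predicted_sensors, category_scores


pred, scores = step(rng.normal(size=N_SENSORS), rng.normal(size=N_MOTORS))
print(pred.shape, scores.argmax())
```

Damage or changes to the shared weights would affect both the sensorimotor prediction and the categorisation, which is the intuition behind treating the two as inseparable.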

References:
[1] Brooks, R.A., Elephants don't play chess. Robotics and Autonomous Systems, 1990. 6: p. 3-15.
[2] Brooks, R.A., Intelligence without Representation. Artificial Intelligence, 1991. 47: p. 139-159.
[3] Bryson, J., W. Lowe, and L.A. Stein. Hypothesis Testing for Complex Agents. in Proceedings of the NIST Workshop on Performance metrics for intelligent systems. 2000.
[4] Pfeifer, R. and C. Scheier, Understanding Intelligence. 2001, Cambridge, Massachusetts: MIT Press.
[5] Ziemke, T. What's that thing called Embodiment? in 25th Annual Meeting of the Cognitive Science Society. 2002. (review)
[6] Sharkey, N.E. and T. Ziemke, Mechanistic versus Phenomenal embodiment: can robot embodiment lead to strong AI? Journal of Cognitive Systems Research, 2001. 2: p. 251-262. (review)
[7] Svensson, H. and T. Ziemke. Making sense of Embodiment: simulation theories and the sharing of neural circuitry between sensorimotor and cognitive processes. in 26th Annual Cognitive Science Society Conference. 2004. Chicago, IL.
[8] Clark, A. and R. Grush, Towards a cognitive robotics. Adaptive Behavior, 1999. 7(1): p. 5-16. (review)

Wednesday, February 06, 2008

A short note on Artificial Ethology

In attempting to understand the workings of a complex system such as the human brain, psychology has analysed the behaviour of individuals performing certain tasks in order to infer the internal processes at work as those tasks are completed. The study of behaviour is thus an important aspect of brain research. In zoology, the term ‘ethology’ describes the study of animal behaviour. ‘Artificial ethology’ thus describes the study of the behaviour of artificial agents [1], and has been described as an important aspect of research in both autonomous [2] and developmental [3] robotics.

Robots have been used extensively in the past to explore biological issues, with the observed behaviour of the artificial agents used as a means of identifying functional requirements. ‘Grey’ Walter’s tortoises were created as a means of investigating goal-seeking behaviour, with numerous parallels drawn to simple animal behaviour (as reviewed in [4]), and with biological inspiration used in much the same way as it is today. Similarly, Braitenberg vehicles [5], particularly the simpler vehicles, have a strong biological influence (Valentino Braitenberg is himself a brain researcher, who proposed the vehicles as a thought experiment), and provide a strong example of how the environment, as coupled through the physical agent, plays just as important a role in the behaviour (and ‘autonomy’) of an agent as the control mechanism does (as discussed in chapter six of “Understanding Intelligence” [6]). These two examples (many others are described and discussed in [6] and [1]) demonstrate that the use of robotic agents, and particularly of their behaviour, to examine theoretical problems from the animal sciences is an established success. Indeed, it has been suggested that the ultimate aim of artificial agent research is to contribute to the understanding of human cognition [7].
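For readers who have not come across Braitenberg's vehicles, the toy simulation below shows how much behavioural work the agent-environment coupling can do: the only difference between a light-'approaching' and a light-'avoiding' vehicle is whether the two sensor-to-motor connections are crossed. All parameter values are arbitrary choices for illustration, not anything taken from [5].

```python
import numpy as np

def light_intensity(sensor_pos, light_pos):
    """Toy sensor: intensity falls off with squared distance to the light."""
    d2 = np.sum((sensor_pos - light_pos) ** 2)
    return 1.0 / (1.0 + d2)

def simulate(crossed: bool, steps: int = 200, dt: float = 0.1) -> float:
    """Braitenberg vehicle 2: two light sensors drive two wheels.
    crossed=True  -> vehicle 2b ('aggression'): turns towards the light.
    crossed=False -> vehicle 2a ('fear'): turns away from the light."""
    pos = np.array([0.0, 0.0])
    heading = 0.5                      # radians
    light = np.array([5.0, 5.0])
    for _ in range(steps):
        # Sensors sit slightly to the left and right of the heading.
        offset = lambda a: pos + 0.2 * np.array([np.cos(a), np.sin(a)])
        s_left = light_intensity(offset(heading + 0.5), light)
        s_right = light_intensity(offset(heading - 0.5), light)
        # Excitatory connections, crossed or uncrossed.
        v_left, v_right = (s_right, s_left) if crossed else (s_left, s_right)
        # Differential drive: a faster right wheel turns the vehicle left.
        speed = 0.5 * (v_left + v_right)
        heading += (v_right - v_left) * dt * 5.0
        pos += speed * dt * np.array([np.cos(heading), np.sin(heading)])
    return float(np.linalg.norm(pos - light))

print("crossed (2b), final distance to light:  ", simulate(crossed=True))
print("uncrossed (2a), final distance to light:", simulate(crossed=False))
```

Nothing in the controller refers to 'seeking' or 'fleeing'; those descriptions only make sense once the wiring is coupled to an environment through a body, which is precisely the point made in [6].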

References:
[1] Holland, O. and D. McFarland, Artificial Ethology. 2001, Oxford: Oxford University Press. (summary)
[2] Sharkey, N.E. and T. Ziemke, Mechanistic versus Phenomenal embodiment: can robot embodiment lead to strong AI? Journal of Cognitive Systems Research, 2001. 2: p. 251-262. (review)
[3] Meeden, L.A. and D.S. Blank, Introduction to Developmental Robotics. Connection Science, 2006. 18(2): p. 93-96.
[4] Holland, O., Exploration and high adventure: the legacy of Grey Walter. Philosophical Transactions of the Royal Society of London A, 2003. 361: p. 2085-2121.
[5] Braitenberg, V., Vehicles: experiments in synthetic psychology. 1984, Cambridge, Massachusetts: MIT Press. (review)
[6] Pfeifer, R. and C. Scheier, Understanding Intelligence. 2001, Cambridge, Massachusetts: MIT Press.
[7] Guillot, A. and J.-A. Meyer, The Animat contribution to Cognitive Systems Research. Journal of Cognitive Systems Research, 2001. 2: p. 157-165. (review)

Tuesday, February 05, 2008

What is autonomy?

In yesterday's post, I reviewed a paper which discussed the role of emotion in autonomy. The concept of autonomy itself was found to be quite fuzzy, with definitions being dependent on the field of research in which the term is used. In an attempt to elucidate the concept, the editorial of the special issue of BioSystems on autonomy (of which the previously reviewed paper was a part) explores some of the issues involved.

Starting from the broad definition of autonomy as self-determination (or the ability to act independently of outside influence), it can be seen that this description applies to many levels of a system (be it biological or artificial). However, the role of external (environmental) influences cannot be discounted: the reactive nature of autonomous systems is an essential part of proceedings - to the extent that some theorists have argued that there is no distinction between the two - and this is the case even in the theory of autopoiesis. So, even from a theoretical standpoint, autonomy is not isolated from the environment, but rather emphasises independence from it.

Even here, though, the term independence is problematic. Three aspects are pointed out as being of importance to the present discussion: (1) the reactive extent of interactions with the environment, (2) the extent to which the control mechanisms are self-generated, and (3) the extent to which these inner processes can be reflected upon. From these three properties of independence, it can be seen that autonomy lies on a sliding scale, rather than being a binary property.

The final notion of relevance to the present discussion of autonomy is self-organisation, due to its being a central element of life, and of those properties which we desire artificial systems to have. While some have shied away from the term because of the connotation of something doing the organising, the concept of self-organisation is generally used to refer to the spontaneous emergence of organisation, and/or the maintenance of the system's organisation once in this state. An interesting aspect of the term self-organising is this: a self-organising system cannot be broken down into constituent parts for analysis, since these parts are interdependent (an aspect likely to be emphasised by autopoietic approaches).
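A standard toy illustration of organisation emerging spontaneously, with nothing doing the organising, is the Kuramoto model of coupled oscillators (this is not an example taken from Boden's paper, and the parameter values below are arbitrary). Each oscillator adjusts only in response to the others, yet the population as a whole drifts from disorder into near-synchrony and then maintains that state.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50
phases = rng.uniform(0, 2 * np.pi, N)   # initially disordered phases
freqs = rng.normal(1.0, 0.05, N)        # slightly different natural frequencies
K, dt = 1.5, 0.01                       # coupling strength, time step

def order_parameter(theta):
    """Magnitude of the mean phase vector: 0 = disorder, 1 = full synchrony."""
    return np.abs(np.exp(1j * theta).mean())

print("before:", round(order_parameter(phases), 2))
for _ in range(5000):
    # Each oscillator nudges its phase towards the others; no oscillator
    # (and no external controller) is 'in charge' of the global pattern.
    coupling = (K / N) * np.sin(phases[None, :] - phases[:, None]).sum(axis=1)
    phases += (freqs + coupling) * dt
print("after: ", round(order_parameter(phases), 2))
```

The global order here is a property of the interactions, not of any single oscillator taken on its own, which echoes the point that a self-organising system resists decomposition into independent parts.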

An additional aspect of the discussion of autonomy covered in this editorial is the theoretical tension between ALife (artificial life) and GOFAI (good old-fashioned AI) techniques. While the latter has often been pilloried, the author points out a number of theoretical successes it has had in describing autonomy and agency which have not been achieved by ALife, due to ALife's emphasis on lower-level processes - an approach which, in its own way, has proven enormously successful in accounting for a number of the mechanisms involved.

While this discussion of the term autonomy has not resulted in a hard and fast definition, the consideration of two closely related concepts (independence and self-organisation) has placed the term into a context applicable to a wide range of research fields. Indeed, this lack of a definite definition may prove to be more productive than counter-productive.

BODEN, M. (2008). Autonomy: What is it? Biosystems, 91(2), 305-308. DOI: 10.1016/j.biosystems.2007.07.003