
Tuesday, September 23, 2014

DREAM

At the beginning of this month, I formally started work on the EU FP7 DREAM project (although the project itself started in April this year). Given that ALIZ-E finished at the end of August, this fits very well for me personally, as it means that I am able to stay in Plymouth. The project is coordinated by the University of Skövde (Sweden), with Plymouth (PI Tony Belpaeme) as one of seven partners who between us cover a wide range of expertise. Two standard robot platforms will be used as part of the project: the Aldebaran Nao (with which I have plenty of experience from ALIZ-E), and Probo (a green, soft-bodied, trunked robot developed by VUB), although the Nao will be the primary focus of development.


(From the nice new flashy project website) The DREAM project...
...will deliver the next generation robot-enhanced therapy (RET). It develops clinically relevant interactive capacities for social robots that can operate autonomously for limited periods under the supervision of a psychotherapist. DREAM will also provide policy guidelines to govern ethically compliant deployment of supervised autonomy RET. The core of the DREAM RET robot is its cognitive model which interprets sensory data (body movement and emotion appearance cues), uses these perceptions to assess the child’s behaviour by learning to map them to therapist-specific behavioural classes, and then learns to map these child behaviours to appropriate robot actions as specified by the therapists.
My work on this will focus on the (robot) cognitive and behavioural aspects of this goal (a toy sketch of the two-stage mapping described above follows). While this is a slight departure from my memory-centred work in ALIZ-E, it remains in the context of child-robot interaction, retains a focus on application-driven development (though for autistic children rather than diabetic children), and maintains an emphasis on achieving autonomous operation (albeit in the context of supervised interactions). There is an exciting programme of aims and goals in place, and a very good group of partner institutions, so I'm looking forward to it!
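To make that two-stage mapping concrete, here is a minimal toy sketch (entirely my own illustration; the feature names, behaviour classes and actions are invented, not the project's actual design): a classifier learns to map perceptual cues to therapist-specified behaviour classes, and a simple lookup maps each class to a robot action.

```python
# Toy sketch of a two-stage mapping: perceptual cues -> behaviour class ->
# robot action. All features, classes and actions here are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Stage 1: learn therapist-specified behaviour classes from annotated cues.
X_train = np.array([
    [0.9, 0.2],   # [gaze-at-robot proportion, body-movement energy]
    [0.1, 0.8],
    [0.5, 0.5],
])
y_train = ["engaged", "distressed", "neutral"]
classifier = DecisionTreeClassifier().fit(X_train, y_train)

# Stage 2: map each behaviour class to an action specified by the therapists.
action_policy = {
    "engaged": "continue_current_exercise",
    "distressed": "pause_and_alert_therapist",
    "neutral": "offer_prompt",
}

cues = np.array([[0.8, 0.3]])               # current sensory estimate
behaviour = classifier.predict(cues)[0]     # stage 1: classify the child
print(action_policy[behaviour])             # stage 2: select the action
```

In the real system the first stage would of course be learned from far richer sensory streams and under therapist supervision; the point here is only the shape of the pipeline.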

Friday, August 30, 2013

Cognitive Architecture for Social Human-Robot Interaction

It's now the last day of the summer school in Cambridge, and it's been a very interesting if packed week of talks, activities and discussions. Just what a summer school should be in my opinion.

I gave my short special session talk yesterday to a small group (all 25-30 of them, to whom I am grateful for not leaving when I invited them to do so towards the beginning of my talk*). It was an introductory overview of the application of cognitive architectures to the development of autonomous systems for social human-robot interaction. Here's the abstract I used to try to draw people in:
What is Cognitive Architecture and why is it important for HRI? The ongoing developments towards social companion robots raise questions of information integration, behavioural control, etc., in coordination and collaboration with humans. While introducing cognitive architecture, I will emphasise fundamental organisation and common operating principles, specifically based on inspiration from human cognition: learning from the agents with which the robots must socially interact. In this special interest session, these issues will be explored, taking in examples from existing architectures along the way. I would like to put forward the idea that a consideration of Social HRI from the perspective of cognitive architecture enables a different take on the design of social robots - one that emphasises holistic human-robot interacting systems. In doing so, the intention is to leave participants with more questions than are answered, in the hope that some of the issues raised find themselves being further developed in ongoing work.
It was only a short talk, and I intentionally focused on the motivations for taking a cognitive architecture perspective, rather than trying to persuade people to use one particular approach or another (even refraining from stating my own views on the matter as much as possible). Nevertheless, we had some interesting discussions, including one on the organisation of behaviour: a few people insisted that the classic "perception -> cognition -> action" pipeline model was the only thing that needed to be considered. While I respectfully disagreed (as does a great deal of the literature on robotics, enaction, active perception, embodied cognition, etc.), it did remind me that this assumption does seem to be implicit in many different perspectives, cognitive architecture or not.

In any case: we've just had a great talk from Prof. Roger Moore (University of Sheffield) on the motivation and basis for his mathematical model of the Uncanny Valley effect, so well known in the popular media. The paper is well worth a look, as it has a number of fundamental consequences for the HRI domain.

* I always start my talks with the conclusion...

Monday, October 15, 2012

The work of von Foerster: summary of a summary


The academic work of Heinz von Foerster was, and remains, highly influential in a number of disciplines, largely due to the pervasive implications of his distinction between first- and second-order cybernetics (and its antecedent ideas). Where first-order cybernetics may be simply described as the study of feedback systems by observation, second-order cybernetics extends this to incorporate the observer itself: it is reflexive, in that the observation of the feedback system is itself a feedback system to be explained. While I am familiar with this concept, I am not particularly familiar with the body of work von Foerster produced to instantiate it, although I have encountered numerous references to him, particularly where the subject relates to enactivism.

In 2003, Bernard Scott republished a summary of von Foerster's work that he originally published in 1979. The original paper appeared just a few years after von Foerster's official retirement, though (as many an academic has before and since) he continued working for many subsequent years. It serves as a summary of the breadth of his work and its contribution, and was republished partly in recognition of the continuing, and expanding, influence it exerts. This post is a very brief summary of this summary paper.

In general terms, von Foerster's views on computation and cognition seem to be inherently integrated and holistic, proposing dynamic interactions between the micro, the macro and the global. This view thus contrasts with functional models of cognitive processes, which, by their nature, can only be static snapshots of the dynamic interactions at play: cf. autopoietic theory, which extends this notion with the principles of self-reconstitution and organisational closure. In particular, he emphasises the necessity of considering perception, cognition and memory as indivisible aspects of a complete, integrated cognitive system, cf. enactivism.

With this consideration as a consistent thread, four primary phases in the development of von Foerster's research are identified. First is his consideration of large molecules, rather than the then-prevailing focus on neural networks, as the basis for biological computation, and the idea that 'forms' of computation underlie all computational systems. Second is the exploration of self-organisation, and the reconciliation of organisation with the potential paradox of self-reference. In this sense, a system that increases in order (organisation) requires that its observer adapt its frame of reference to incorporate this: if this were not required of the observer, the system could not be regarded as self-organising. The resulting infinite recursion provides an account of the conditions necessary for social communication and interaction: a consequence of second-order cybernetics. Third is a focus on the nature of memory as key to understanding cognition and consciousness. Returning to the notion of holistic cognition described above, this contrasts with the perspective of memory as a static storage mechanism, which was prevalent among behaviourist psychologists and remains prevalent in the work of designers of synthetic cognitive models and architectures (the countering of which is a key theme of my own research). The fourth and final identified phase (of the original 1979 paper, that is) is the formalisation of the concept of self-referential systems and their analysis as recursive computation, and the extension of this to apply also to the observer.
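That fourth phase is easier to grasp with a toy example. Von Foerster described the stable outcomes of such recursive computation as 'eigenbehaviours': values that reproduce themselves when an operation is applied to its own output. A numerical illustration (my own, not from Scott's paper): repeatedly applying cos() converges on the self-reproducing value cos(x) = x, whatever the starting point.

```python
# Toy illustration of an 'eigenbehaviour' of a recursive computation:
# repeatedly feeding an operation its own output until the value reproduces
# itself. Iterating cos(x) settles on x* with cos(x*) == x*, from any start.
import math

x = 2.0                      # arbitrary starting point
for _ in range(100):
    x = math.cos(x)          # apply the operation to its own output
print(round(x, 6))           # -> 0.739085, the stable self-reproducing value
```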

The threads of self-reference and a holistic perspective have, as noted above, had a wide influence, and continue to do so. I did not realise before that Maturana and Varela's well-known formulation of autopoiesis was done at the lab that von Foerster led (the Biological Computer Laboratory, University of Illinois). The relationship is of course clear now that I know about it (!): autopoiesis builds upon the self-reference and holism with self-reconstitution and organisational closure to form a fully reflexive theory. Similarly, enactivism seems to owe much to von Foerster's influence, with its integrated consideration of agent and environment, embodiment and cognition - a theme that has become increasingly prevalent in recent years among those working on cognitive robotics from a more theoretical perspective - extending to the consideration of social behaviour. In all, the principle of second-order cybernetics, and the theoretical perspectives upon which it is based, remains important in the consideration of cognition and human behaviour despite its seemingly abstract nature, and Heinz von Foerster played a rather prominent role in providing its underpinnings.

Some of the 'buzzwords' raised in the summary of von Foerster's research which carry through as such today (among others - and I use the term buzzwords without any pejorative intent, merely as a 'note-to-self'):
- second order cybernetics
- self-organisation
- the holistic nature of cognition (developed as enactivism)

Paper reference:
Scott, B. (2003), "Heinz von Foerster - an appreciation (revisited)", Cybernetics and Human Knowing, 10(3/4), pp. 137-149

Tuesday, December 20, 2011

CFP: AISB Symposium on Computing and Philosophy

An upcoming event with which I have a minor involvement is the 4th incarnation of the AISB Symposium on Computing and Philosophy, due to take place in Birmingham, U.K., between the 2nd and 6th of July 2012. The AISB convention is this year being held in conjunction with the International Association for Computing And Philosophy (IACAP), and will mark the 100th anniversary of Alan Turing's birth. There are 16 different symposia at this convention in all, with varying emphases on the interaction between AI/computing and philosophy. For those whose interests bend in that particular direction, there will be plenty to attract your attention!

An overview of the Symposium on Computing and Philosophy, from the website:

Turing’s famous question ‘can machines think?’ raises parallel questions about what it means to say of us humans that we think. More broadly, what does it mean to say that we are thinking beings? In this way we can see that Turing’s question about the potential of machines raises substantial questions about the nature of human identity. ‘If’, we might ask, ‘intelligent human behaviour could be successfully imitated, then what is there about our flesh and blood embodiment that need be regarded as exclusively essential to either intelligence or human identity?’. This and related questions come to the fore when we consider the way in which our involvement with and use of machines and technologies, as well as their involvement in us, is increasing and evolving. This is true of few more than those technologies that have a more intimate and developing role in our lives, such as implants and prosthetics (e.g. neuroprosthetics).

The Symposium will cover key areas relating to developments in implants and prosthetics, including:
  • How new developments in artificial intelligence (AI) / computational intelligence (CI) look set to develop implant technology (e.g. swarm intelligence for the control of smaller and smaller components)
  • Developments of implants and prosthetics for use in human, primate and non-primate animals
  • The nature of human identity and how implants may impact on it (involving both conceptual and ethical questions)
  • The identification of, and debate surrounding, distinctions drawn between improvement or repair (e.g. for medical reasons), and enhancement or “upgrading” (e.g. to improve performance) using implants/prosthetics
  • What role other emerging, and converging, technologies may have on the development of implants (e.g. nanotechnology or biotechnology)
But the story of identity does not end with human implants and neuroprosthetics. In the last decade, huge strides have been made in ‘animat’ devices. These are robotic machines with both active biological and artificial (e.g. electronic, mechanical or robotic) components. Recently one of the organisers of this symposium, Slawomir Nasuto, in partnership with colleagues Victor Becerra, Kevin Warwick and Ben Whalley, developed an autonomous robot (an animat) controlled by cultures of living neural cells, which in turn were directly coupled to the robot's actuators and sensory inputs. This work raises the question of whether such ‘animat’ devices (devices, for example, with all the flexibility and insight of intelligent natural systems) are constrained by the limits (e.g. those of Turing Machines) identified in classical a priori arguments regarding standard ‘computational systems’. 
Both neuroprosthetic augmentation and animats may be considered as biotechnological hybrid systems. Although seemingly starting from very different sentient positions, the potential convergence in the relative amount and importance of biological and technological components in such systems raises the question of whether such convergence would be accompanied by a corresponding convergence of their respective teleological capacities; and what indeed the limits noted above could be.

For more information, see the symposium website. For those interested in submitting a paper, the deadline for submissions is the 1st of February 2012.

Friday, December 10, 2010

Thesis finally published

I am pleased to say that my thesis has now finally been submitted and published, with my graduation ceremony next week. After that, I will have truly reached the end of the long and arduous road that was my PhD. Since I've stayed in academia (so far at least...), I'm going to engage in a little self-promotion (what's a blog for otherwise?), and give the thesis abstract for those (very few) of you who may be interested.

Foundations of a Constructivist Memory-Based approach to Cognitive Robotics:

Synthesis of a Theoretical Framework and Application Case Studies

Cognitive robots are applicable to many aspects of modern society. These artificial agents may also be used as platforms to investigate the nature and function of cognition itself, through the creation and manipulation of biologically-inspired cognitive architectures. However, the flexibility and robustness of current systems are limited by their restricted use of previous experience.

Memory thus has a clear role in cognitive architectures, as a means of linking past experience to present and future behaviour. Current cognitive robotics architectures typically implement a version of Working Memory - a functionally separable system that forms the link between long-term memory (information storage) and cognition (information processing). However, this division of function gives rise to practical and theoretical problems, particularly regarding the nature, origin and use of the information held in memory and used in the service of ongoing behaviour.

The aim of this work is to address these problems by synthesising a new approach to cognitive robotics, based on the perspective that cognition is fundamentally concerned with the manipulation and utilisation of memory. A novel theoretical framework is proposed that unifies memory and control into a single structure: the Memory-Based Cognitive Framework (MBCF). It is shown that this account of cognitive functionality requires the mechanism of constructivist knowledge formation through ongoing environmental interaction, the explicit integration of agent embodiment, and a value system to drive the development of coherent behaviours.

A novel robotic implementation - the Embodied MBCF Agent (EMA) - is introduced to illustrate and explore the central features of the MBCF. By encompassing aspects of both network structures and explicit representation schemes, neural and non-neural inspired processes are integrated to an extent not possible in current approaches.

This research validates the memory-based approach to cognitive robotics, providing the foundation for the application of these principles to higher-level cognitive competencies.


This work was conducted at the University of Reading (U.K.) under the supervision of Dr. Will Browne. While I enjoyed my time there, it was a fairly lonely research process, and I am very much appreciating the opportunity for frequent and open discussions that I now have in Plymouth.

Wednesday, August 18, 2010

ALIZ-E videos and dancing robot

With my work on the ALIZ-E project, I get to play around (by play, I mean work...) with the Nao humanoid robot (it's a cute little thing). One of the things I've been getting it to do recently is dance. Actually, a summer project student did most of the low-level implementation (the time-consuming definition of joint angles, etc.), so I've just been dealing with how to use these behaviours. This video is the first and really simplistic example:


The video is actually pretty poor quality, and I've somehow managed to squash the picture (my first awful attempt at a YouTube video...), but it shows the Nao moving around (even if calling it 'dancing' is a bit of a stretch at the moment). I've set up a YouTube channel on which I will put more (and better quality) videos of our Nao robots engaging in interesting behaviours related to the ALIZ-E project.
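For anyone curious what the low-level definition looks like, it boils down to timed joint-angle trajectories sent through the NAOqi API. A minimal sketch along those lines (the robot's IP address and the particular angles and timings are invented for illustration, and this is not the student's actual code):

```python
# Minimal sketch of a timed joint-angle 'dance' move via the NAOqi Python SDK.
# The IP address, angles and timings below are invented for illustration.
from naoqi import ALProxy

motion = ALProxy("ALMotion", "192.168.1.10", 9559)  # hypothetical robot IP
motion.setStiffnesses("Body", 1.0)                  # enable the motors

names = ["LShoulderPitch", "RShoulderPitch"]        # joints to move
angles = [[1.0, -0.5, 1.0], [-0.5, 1.0, -0.5]]      # radians, one list per joint
times = [[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]]          # seconds, one list per joint
motion.angleInterpolation(names, angles, times, True)  # True = absolute angles
```

Chaining a handful of such keyframe sequences together, synchronised to music, is essentially all the 'dancing' amounts to at this stage.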

Tuesday, November 25, 2008

Encephalon #59 @ Ionian Enchantment

The latest edition of Encephalon has recently been put up at Ionian Enchantment. I'll get around to posting something other than links to the Encephalon editions sooner or later, but I'm afraid my blog is less of a priority than getting work done for my PhD... :-) Anyway, three posts of particular interest to me in this fortnight's compilation:

- From Mind Hacks is a short piece on the Ganzfeld procedure: a method often used to induce hallucinations. Is it just me, or is the last hallucination that Vaughan mentions slightly disturbing...

- Physical exercise and 'brain health' from Sharp Brains

- Something that I found of particular interest is a review of a paper by Clark and Wheeler on embodied cognition and cultural evolution, at Neuroanthropology. It's a long (but very good) review of the paper and the concepts involved, but helpfully some of the more important points are highlighted. Essentially, the problem is as follows (copied from the article):
Whereas embodied cognition models the brain as a product of dynamic interplay among processes at different time-scales — evolutionary, developmental, and immediate — evolutionary psychologists tend to assume the existence of underlying, enduring structures in the brain, shaped by natural selection and encoded (even where we cannot find evidence) in genetic structures.

As far as I'm concerned, a fascinating review of a paper which I will now endeavour to get my grubby little hands on...

Tuesday, September 16, 2008

Encephalon #54

I'm a day late, but the 54th instalment of Encephalon yesterday returned home to Neurophilosophy. A few of my picks from this edition:

- Also the 'editor's pick' (something we'll no doubt see in future editions): from neurobiotaxis comes a nicely written post on the development of modularity in the brain, covering developmental processes and evolution.

- From Neuronism comes a piece on the Blue Brain project, specifically the recently observed persistent oscillatory activity in the gamma range in the simulated cortical columns.

- Dan Peterson reviews a paper on embodied cognition: particularly the link between motor skills and the language comprehension of those skills.

- Finally, from Neurophilosophy is a review of a paper on the reactivation of hippocampal cells during recall tasks.

Tuesday, August 19, 2008

Encephalon #52

The 52nd issue of Encephalon has now been put up at Ouroboros. With a nice Q&A layout, it covers the usual wide range of subjects, from neurogenesis to grannies, and from perception to culture. A couple that are, in my view, among the most interesting:

- From Neurophilosophy, a review of a paper on brain plasticity, particularly the visual cortex: visual experience can modulate the production of proteins which can influence plasticity along the visual pathway.

- From Neuroscientifically Challenged comes a look at the reason for sleep, and how the humble fruit fly has helped to shed some light on the problem.

- Finally, from Neuroanthropology is a lengthy review of a paper which lays the foundation of "cultural neuroscience": the influence of cultural and social factors on neural mechanisms, and how this may be taken into account in neuroimaging studies. My first thought when reading this, though, was that it would be of more immediate concern to somehow account for individual differences in bodily morphology and personal history - these, I would suggest, have a more direct influence on development, and hence on present neural mechanisms, than cultural influences do, even though the latter are, as this paper evidences, clearly present. But then again, I'm not a neuroscientist, and I have not yet studied the paper in great detail, so I may have missed something.

Thursday, August 07, 2008

Hemispherical 'electronic eye' - and some implications...

The BBC News website yesterday reported on the development of a camera with a hemispherical detection surface, rather than the traditional flat 2D array. The paper on which this article is based (in Nature - link) proposes that this new technology will enable devices with "...a wide field of view and low aberrations with simple, few-component imaging optics", including bio-inspired devices for prosthetic purposes, and biological systems monitoring. This figure from the paper gives a very nice overview of the construction and structural properties of the system. Note that the individual sensory detectors are the same shape throughout - it is the interconnections between them which are modified. Abstract:
The human eye is a remarkable imaging device, with many attractive design features. Prominent among these is a hemispherical detector geometry, similar to that found in many other biological systems, that enables a wide field of view and low aberrations with simple, few-component imaging optics. This type of configuration is extremely difficult to achieve using established optoelectronics technologies, owing to the intrinsically planar nature of the patterning, deposition, etching, materials growth and doping methods that exist for fabricating such systems. Here we report strategies that avoid these limitations, and implement them to yield high-performance, hemispherical electronic eye cameras based on single-crystalline silicon. The approach uses wafer-scale optoelectronics formed in unusual, two-dimensionally compressible configurations and elastomeric transfer elements capable of transforming the planar layouts in which the systems are initially fabricated into hemispherical geometries for their final implementation. In a general sense, these methods, taken together with our theoretical analyses of their associated mechanics, provide practical routes for integrating well-developed planar device technologies onto the surfaces of complex curvilinear objects, suitable for diverse applications that cannot be addressed by conventional means.
From the cognitive/developmental robotics point of view, this sort of sensory capability has (to my mind) some pretty useful implications. Given that the morphology of the robot, including that of its sensory systems, is of central importance to the development (or learning) the robot may perform, these more 'biologically plausible' shapes may allow better comparisons to be made between robotic agent models and animals. Furthermore, from the morphological computation point of view (e.g. here, and here), this sort of sensory morphology may remove the need for a layer of image pre-processing - motion parallax, for example (see the relation below). As seen in flighted insects, the shape of the eye and the arrangement of the individual visual detectors upon it remove the need for complex transformations when the insect is flying through an environment - an example of how morphology reduces 'computational load'. If effects similar to these can be exploited in cognitive and developmental robotics research, then greater understanding and functionality may be gained. The development of this type of camera may be an additional step in this direction.
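To pin down the motion-parallax example with a standard textbook relation (not taken from the Nature paper): for an observer translating at speed $v$, a stationary point at distance $d$ and at bearing $\theta$ from the direction of travel sweeps across the visual field at angular velocity

\[ \omega = \frac{v \sin\theta}{d} \]

On a hemispherical sensor whose detectors are spaced uniformly in angle, $\omega$ maps directly onto the timing of activity passing from one detector to the next; a flat array sees the same motion only after undoing its perspective projection - exactly the pre-processing layer that the curved morphology makes redundant.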

Thursday, February 07, 2008

Simulation versus reality, and the reliance of cognition on embodiment

Cognitive robotics work makes extensive use of both real-world robots and environments, and their simulation equivalents. Simulations are useful in that development time is shorter (or at least has the potential to be), so proof-of-concept experiments are readily implemented, and all of the variables are under the control of the designer, allowing better testing and debugging, for example. However, from a practical point of view, there are a number of reasons why the use of a physical robotic agent is necessary. Brooks suggested through the "physical grounding hypothesis" [1, 2] that since simulations are by their very nature simplifications of the real world, they miss out details which may be important to the complexity of the problem faced by the virtual agent. However, by attempting to implement a high-fidelity simulation model, one may use more resources (both human and computational) than by using a real robot - hence defeating the purpose of using a simulation at all. Relatedly, it is also suggested that the designers of the simulation make assumptions as to what is required, thereby unintentionally introducing biases into the model, which would affect the validity of the simulation. An effect of this may be unrealistic behaviours (or ones which would not map to real-world behaviour). However, it is acknowledged that when a simulator designed to be independent of any particular theory is used, this last point is effectively rendered void [3].

In addition to the practical problems outlined in the previous paragraph, there are more philosophical concerns regarding embodiment, which will now be briefly stated. The assertion that embodiment is necessary for cognition is now generally accepted, as evidenced by [4] for example. However, the definition of the notion of embodiment is far from clear. Numerous definitions have been used, eight of the most frequent of which are reviewed in [5]. Among these are general definitions, such as embodiment as structural coupling to the environment, or as physical instantiation as opposed to software agents (as argued for in the previous paragraph). More restrictive definitions also exist, such as organismoid embodiment (organism-like bodies: life-like, but not necessarily alive), or organismic embodiment, which holds that only living bodies allow true embodiment. However, even if the most restrictive definition becomes generally accepted (strong embodiment: that a living body is required), it has been argued that studying 'weakly' embodied systems as if they were strongly embodied would still be a worthwhile research path [6].

One particularly persuasive argument regarding the essential elements of embodied cognition states that "...the sharing of neural mechanisms between sensorimotor processes and higher-level cognitive processes" is of central importance [7]. This view, which is supported by a wide range of empirical evidence, highlights the necessity of 'low-level' sensorimotor contingencies for 'high-level' cognitive processes. In this way, cognition is fundamentally grounded in the sensory and motor capacities of the body in which it is instantiated; cognition cannot exist without embodiment - a point emphasised in [8].

References:
[1] Brooks, R.A., Elephants don't play chess. Robotics and Autonomous Systems, 1990. 6: p. 3-15.
[2] Brooks, R.A., Intelligence without Representation. Artificial Intelligence, 1991. 47: p. 139-159.
[3] Bryson, J., W. Lowe, and L.A. Stein. Hypothesis Testing for Complex Agents. in Proceedings of the NIST Workshop on Performance metrics for intelligent systems. 2000.
[4] Pfeifer, R. and C. Scheier, Understanding Intelligence. 2001, Cambridge, Massachusetts: MIT Press.
[5] Ziemke, T. What's that thing called Embodiment? in 25th Annual Meeting of the Cognitive Science Society. 2003. (review)
[6] Sharkey, N.E. and T. Ziemke, Mechanistic versus Phenomenal embodiment: can robot embodiment lead to strong AI? Journal of Cognitive Systems Research, 2001. 2: p. 251-262. (review)
[7] Svensson, H. and T. Ziemke. Making sense of Embodiment: simulation theories and the sharing of neural circuitry between sensorimotor and cognitive processes. in 26th Annual Cognitive Science Society Conference. 2004. Chicago, IL.
[8] Clark, A. and R. Grush, Towards a cognitive robotics. Adaptive Behavior, 1999. 7(1): p. 5-16. (review)

Wednesday, February 06, 2008

A short note on Artificial Ethology

In attempting to understand the workings of a complex system such as the human brain, psychology has analysed the behaviour of individuals performing certain tasks, in order to infer the internal processes at work as those tasks are completed. The study of behaviour is thus an important aspect of brain research. In zoology, the term ‘ethology’ describes the study of animal behaviour. ‘Artificial ethology’ thus describes the study of the behaviour of artificial agents [1], and has been described as an important aspect of research in autonomous [2] and developmental [3] robotics.

Robots have been used extensively in the past for exploring biological issues, using the observed behaviour of the artificial agents as a means of identifying functional requirements. ‘Grey’ Walter’s tortoises were created as a means of investigating goal-seeking behaviour, with numerous parallels drawn to simple animal behaviour (as reviewed in [4]), using biological inspiration in the same way as it is used currently. Similarly, Braitenberg vehicles [5], particularly the simpler ones, have a strong biological influence (Valentino Braitenberg is himself a brain researcher, who proposed the vehicles as a thought experiment), and provide a strong example of how the environment, as coupled through the physical agent, plays just as important a role in the behaviour (and ‘autonomy’) of an agent as the control mechanism does (as discussed in chapter six of “Understanding Intelligence” [6]; a toy sketch follows below). These two examples (many others are described and discussed in [6] and [1]) demonstrate that the use of robotic agents, and particularly of the behaviour of those agents, to examine theoretical problems from the animal sciences is an established success. Indeed, it has been suggested that the ultimate aim of artificial agent research is to contribute to the understanding of human cognition [7].
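As a toy illustration of that environment-body coupling (my own sketch, not taken from the cited books): a Braitenberg vehicle of type 2b crosses each light sensor to the motor on the opposite side, and light-seeking behaviour 'emerges' with no internal model at all.

```python
# Toy Braitenberg vehicle 2b: each light sensor drives the motor on the
# opposite side, so the more strongly lit side turns the vehicle towards the
# light. The 'intelligence' lies in the wiring plus the environment.
import math

LIGHT = (5.0, 5.0)                       # position of the light source
x, y, heading = 0.0, 0.0, 0.0            # vehicle starts at the origin

def intensity(px, py):
    """Sensor reading: light intensity falls off with squared distance."""
    d2 = (LIGHT[0] - px) ** 2 + (LIGHT[1] - py) ** 2
    return min(1.0, 10.0 / (1.0 + d2))

for _ in range(500):
    # Two light sensors, mounted half a radian left and right of the heading.
    left = intensity(x + 0.5 * math.cos(heading + 0.5),
                     y + 0.5 * math.sin(heading + 0.5))
    right = intensity(x + 0.5 * math.cos(heading - 0.5),
                      y + 0.5 * math.sin(heading - 0.5))
    heading += 0.5 * (left - right)      # crossed wiring: turn towards light
    speed = 0.1 * (left + right)         # more light, more drive
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)

# Remaining distance to the light: far smaller than the ~7.07 at the start.
print(round(math.hypot(LIGHT[0] - x, LIGHT[1] - y), 2))
```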

References:
[1] Holland, O. and D. McFarland, Artificial Ethology. 2001, Oxford: Oxford University Press (summary)
[2] Sharkey, N.E. and T. Ziemke, Mechanistic versus Phenomenal embodiment: can robot embodiment lead to strong AI? Journal of Cognitive Systems Research, 2001. 2: p. 251-262 (review)
[3] Meeden, L.A. and D.S. Blank, Introduction to Developmental Robotics. Connection Science, 2006. 18(2): p. 93-96
[4] Holland, O., Exploration and high adventure: the legacy of Grey Walter. Philosophical Transactions Of the Royal Society of London A, 2003. 361: p. 2085-2121
[5] Braitenberg, V., Vehicles, experiments in synthetic psychology. 1984, Cambridge, Mass.: MIT Press (review)
[6] Pfeifer, R. and C. Scheier, Understanding Intelligence. 2001, Cambridge, Massachusetts: MIT Press
[7] Guillot, A. and J.-A. Meyer, The Animat contribution to Cognitive Systems Research. Journal of Cognitive Systems Research, 2001. 2: p. 157-165 (review)

Tuesday, February 05, 2008

What is autonomy?

In yesterday's post, I reviewed a paper which discussed the role of emotion in autonomy. The concept of autonomy itself was found to be quite fuzzy, with definitions being dependent on the field of research in which the term is used. In an attempt to elucidate the concept, the editorial of the special issue of BioSystems on autonomy (of which the previously reviewed paper was a part) explores some of the issues involved.

Starting from the broad definition of autonomy as self-determination (or the ability to act independently of outside influence), it can be seen that this description applies to many levels of a system (be it biological or artificial). However, the role of external (environmental) influences cannot be discounted: the reactive nature of autonomous systems is an essential part of proceedings - to the extent that some theorists have argued that there is no distinction between the two; this is the case even in the theory of autopoiesis. So, even from a theoretical standpoint, autonomy is not isolation from the environment, but an emphasis on independence.

Even here, though, the term independence is problematic. Three aspects are pointed out as being of importance to the present discussion: (1) the reactive extent of interactions with the environment, (2) the extent to which the control mechanisms are self-generated, and (3) the extent to which these inner processes can be reflected upon. From these three properties of independence, it can be seen that autonomy lies on a sliding scale, rather than being a binary property.

The final notion of relevance to the present discussion of autonomy is self-organisation, it being a central element of life, and of those properties which we desire artificial systems to have. While some have shied away from the term because of the connotation of something doing the organising, the concept is generally used to refer to the spontaneous emergence of organisation, and/or the maintenance of the system's organisation once in this state. An interesting aspect of the term self-organising is this: a self-organising system cannot be broken down into constituent parts for analysis, since these parts are interdependent (an aspect likely to be emphasised by autopoietic approaches).

An additional aspect of the discussion of autonomy covered in this editorial is the theoretical tension between ALife (artificial life) and GOFAI (good old-fashioned AI) techniques. Where the latter has often been pilloried, the author points out a number of theoretical successes it has had in describing autonomy and agency which have not been achieved by ALife, due to ALife's emphasis on lower-level processes - an approach which, in its own way, has proven enormously successful in accounting for a number of the mechanisms involved.

While this discussion of the term autonomy has not resulted in a hard and fast definition, the consideration of two closely related concepts (independence and self-organisation) has placed the term into a context applicable to a wide range of research fields. Indeed, this lack of a definite definition may prove to be more productive than counter-productive.

BODEN, M. (2008). Autonomy: What is it? Biosystems, 91(2), 305-308. DOI: 10.1016/j.biosystems.2007.07.003

Thursday, January 10, 2008

Internal Representations: a metaphor

What follows is a brief note on why I don't believe that internal representations necessarily mean complex modelling capabilities, through the use of a (slightly suspect) metaphor. This isn't based on any peer reviewed work, just some thoughts I jotted down in a rare moment of effective cerebral activity :-)

Consider the following scenario: I have a large flat piece of plasticine. I also have a small rock. Let us assume that, for some incredibly important reason (which has somehow slipped my mind at this moment in time), I wish to create a representation of the rock using the plasticine, to fulfil some vital task. The two main options are:

(1) I use my hands and my eyes, and I create a model of the rock using the plasticine which is visually accurate. This would be difficult (my skills as an artist are non-existent) and time-consuming. The result would be a reasonably accurate (though not perfect) representation of the rock - the advantage of this method is that anybody else would be able to look at my model and say, without very much effort, that I have a model of a rock.

(2) I let the rock fall onto my piece of plasticine, such that it leaves an impression (assuming the rock is heavy enough and/or the plasticine soft enough). The resulting representation of the rock would be far more accurate, though incomplete, and dependent on exactly how the rock made contact (how deep it was pushed, the angle, etc.). Furthermore, whilst I may know exactly what it is (with limitations due to my very limited knowledge of the conditions), somebody else would take longer to recognise what is being represented. However, it is so much easier to do this than to create a sculpture.

Of course, there are variations on the above two methods. The most interesting/important being:

(3) I take the piece of plasticine in my hand and push it onto the surface of the rock. In this way, I control (to a certain extent at least) what imprint is left, as I vary the force and the angle. I'm still just left with an impression in a piece of plasticine, but the benefit is that I have more information about that impression. The effort expended is almost as little as in case two, though. Of course, another observer would have just as much difficulty in recognising what this impression was a representation of.

What we have here is a very coarse, and not very subtle, metaphor for how I see the three main views of so-called 'internal representations'. Basically, despite the adverse connotations that the term 'representation' (with regard to cognition and intelligence) may conjure for some, I don't believe that it necessarily implies highly complex internal modelling facilities. I wouldn't want to take the metaphor much further than I've elaborated above for fear of it failing utterly, but the three options may be seen to roughly correspond to the following.

In point one, you have the GOFAI (good old fashioned artificial intelligence) point of view, which, supported by personal introspection, uses complex internal modelling processes to create a high-fidelity internal representation upon which various mathematical planning processes may be performed. In the second point you have reactive or behavioural robotics, where the environment pushes itself onto the agent, and shapes its behaviour directly. The metaphor is failing already - there shouldn't be any representation in this case (not an explicit one anyway - though I would argue that it is implicit in the resulting behaviour - plenty of room for argument on that one!) - but the point is that the information gleaned from the environment isn't of much use in itself, only in terms of the effect it has. It's far easier to do this though, in terms of computational load etc.

If you view these two approaches as extremes, then point three may be seen as a 'middle road' - a vastly reduced amount of computational effort through taking advantage of what is there (both in terms of the environment and the agent itself), but with the presence of more information due to the additional knowledge of how that representation was acquired. So, perhaps the analogue for this point would be active perception, or perhaps more generally, embodied cognition. As with most things, I feel this 'middle road' to have more potential than the two extremes - although that is of course not to say that they are devoid of merit, for they both have produced many interesting and useful results.

But why do I think that the concept of internal representations is important? Because I think that internal simulation (as Germund Hesslow called it; otherwise generally referred to as imagination, simulation, etc.) is central to many, if not all, cognitive tasks, which in turn are dependent on previous experience and internal knowledge: i.e. internal representations.

Finally, I would very much like to hear anybody's views on this topic - and my use of this suspect metaphor. I'm not aware of anyone else having used a similar metaphor (as undoubtedly they have, but then I haven't looked), so would appreciate it if someone could tell me if they have heard of one. I think I could do with reading the work of someone who's formulated this properly :-)

Friday, December 21, 2007

Embodiment and the Mind

In the following extract, taken from the introduction to the seminal work "Descartes' Error", Damasio describes the often unmentioned link between the brain (and the mind) and the body: generally assumed, but rarely stated - although this emphasis on the role of the body is of course more central in embodied cognition, cognitive robotics, and the like.

"Surprising as it may sound, the mind exists in and for an integrated organism; our minds would not be the way they are if it were not for the interplay of body and brain during evolution, during individual development, and at the current moment. The mind had to be first about the body, or it could not have been. On the basis of the ground reference that the body continuously provides, the mind can then be about many other things, real and imaginary.


This idea is anchored in the following statements: (1) the human brain and the rest of the body constitute an indissociable organism, integrated by means of mutually interactive biochemical and neural regulatory circuits (including endocrine, immune, and autonomic neural components); (2) the organism interacts with the environment as an ensemble: the interaction is neither of the body alone nor of the brain alone; (3) the physiological operations that we call mind are derived from the structural and functional ensemble rather than from the brain alone: mental phenomena can be fully understood only in the context of an organism's interacting in an environment. That the environment is, in part, a product of the organism's activity itself, merely underscores the complexity of interactions we must take into account."


The central thesis of the rest of the book is well known: that 'logical' reasoning and decision making does not, indeed cannot, exist without the central role of emotion processes. This directly contradicted the traditional view of emotion clouding the 'cold', logical reasoning processes - a premise also used by 'traditional' AI (i.e. that not based on environmental situatedness, or some other form of grounding).

Antonio Damasio, "Descartes' Error: emotion, reason and the human brain", 1994, New York: Grosset/Putnam

Tuesday, November 06, 2007

Behaviour-based robotics and artificial ethology

The following are quotes from the introductory paragraphs to chapter two of "Behaviour-based Robotics", Ronald C. Arkin, 1998 (MIT Press):

"The possibility of intelligent behaviour is indicated by its manifestation in biological systems. It seems logical then that a suitable starting point for the study of behaviour-based robotics should begin with an overview of biological behaviour. First, animal behaviour defines intelligence. Where intelligence begins and ends is an open-ended question, but we will concede in this text that intelligence can reside in subhuman animals. Our working definition will be that intelligence endows a system (biological or otherwise) with the ability to improve its likelihood of survival within the real world and where appropriate to compete or cooperate successfully with other agents to do so. Second, animal behaviour provides an existence proof that intelligence is achievable. It is not a mystical concept, it is a concrete reality, although a poorly understood phenomenon. Thirdly, the study of animal behaviour can provide models that a roboticist can operationalise within a robotic system. These models may be implemented with high fidelity to their animal couterparts or may serve only as an inspiration for the robotics researcher."

These three points provide the basis for the view that animal behaviour has a lot to offer the robotics community (behaviour-based robotics specifically), and hint at the potential feedback that such work may offer biologists. Without actually mentioning the term, this description could just as well be applied to artificial (or computational) ethology. The first point I find particularly interesting. In my opinion, the working definition of 'intelligence' it introduces is not particularly controversial - however, the implication of the phrase '...improve its likelihood of survival...' for cognitive/autonomous robotics is that without some form of actual physical dependency on the environment (e.g. 'food' - and, I hesitantly add, some form of concept of 'life and death' for the agent concerned), intelligence for an artificially created being means nothing (see a related concept in embodiment: organismic embodiment). The second point is one which is generally assumed, but not usually explicitly stated, and something which I think it is useful to remind oneself of occasionally. The third point I think is self-evident, stated many times, and with plenty of examples in the literature. In fact, I think it is the basis for most cognitive robotics work.

The next part of the introduction to chapter two lists two reasons why the robotics community has traditionally resisted the use of the previously mentioned methods of creating artificial agents with 'useful' behaviours (e.g. perceiving and acting in an environment):

"First, the underlying hardware is fundamentally different. Biological systems bring a large amount of evolutionary baggage unnecessary to support intelligent behaviour in their silicon based counterparts. Second, our knowledge of the functioning of biological hardware is often inadequate to support its migration from one system to another. For these and other reasons, many roboticists ignore biological realities and seek purely engineering solutions."

The second point is, I feel, perfectly justified. One only has to consider, for example, the complexity of natural neurons and networks in comparison to the most advanced artificial neural networks which use population-based firing rates, to see that this is true. The first point, however, I don't think is necessarily true, especially if one considers that the biological hardware which 'produces' the intelligent behaviour we seek holds many of the answers. In this case, an understanding of the 'evolutionary baggage' which produces the biological hardware would be important when seeking to understand the intelligent behaviour itself. Or so I think, anyway.

Wednesday, August 08, 2007

Emotion understanding from the perspective of autonomous robots research

The following bullet points are the outline of a paper review I gave at the Neuro-IT summer school, which I was recently fortunate enough to attend. It was given in a workshop led by Tom Ziemke on whether robots need emotions. This review paper was one of three covered during the workshop. Written by Lola Cañamero, currently at the University of Hertfordshire, it essentially looks at how emotion research in robotics can aid the understanding of emotions, whilst also aiding the development of more 'intelligent' robots, by reviewing work that has occurred in the field. Hope it's of interest; reference at the end as usual.

Overview
• The contribution of emotion modelling in autonomous robots to emotion research
– The Questions that need answering (‘Bottlenecks’)
– Current/past approaches
– Interdisciplinary issues
– Challenges and goals for the future

Introduction
• Advantages of affective features in robots:
– human-robot interaction
– improved performance and adaptation in the ‘real world’
• How are these features related to emotions in biology?
• Focus on physical robots - not simulation
• The contributions that modelled emotions can make to emotion research:
– Human perception of emotions
– ‘Virtual Laboratories’
– Understand by building (the synthetic approach)
– The value of simplification (although the risk of oversimplification must be kept in mind)
• Contribution to emotion research in general, not just to human emotion research

Interdisciplinary action and Aims
• The necessity of long-term interdisciplinary efforts to achieve “principled emotion-based architectures”
• Two additional aims:
– finding solutions to problems arising in autonomous robots research
– production of tools to test emotion theories and gain insight

Questions
• Regarding models:
– scope and limitations of emotion theories?
– is a general definition of emotion required?
• Regarding mechanisms:
– plausible underlying emotion mechanisms?
– How can the different postulated mechanisms be reconciled and integrated?
• Applications:
– what emotions can be implemented in autonomous and interactive robots?
– are different models suitable for different tasks?
• Assessment:
– how can emotional states/processes be quantified?
– does observed behaviour aid understanding?

Current and past approaches
• Adaptation to environment - two time scales for autonomous robots:
– Emotion in Action Selection
• behaviour control
• emergent emotions
– Learning, and
– Memory
Emotion in Action Selection
• Behavioural control:
– emotions grounded in an internal value system: at the heart of autonomous behaviour (survival)
– motivations may be used to drive behaviour selection
• Emergent emotions:
– emotions in the eye of the beholder
– emergent from interaction with the environment, and dependent on morphology (Braitenberg)
Emotion and Learning
• Typically follows association or reinforcement learning models
– typically uses external reward signals
– how to make these signals ‘meaningful’?
• A more biologically plausible approach: an internal ‘value system’
– the learning of responses to reward and punishment as indicated by the value system (a minimal sketch follows at the end of this section)
Emotion and Memory
• Memory management: must be both timely and accurate
• Using emotion:
– ‘mood congruent recall’ in humans
– the priming of memories relevant to the current emotional state
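To ground the 'internal value system' idea from the learning bullets above, here is a minimal sketch (my own construction, not from the paper): the robot learns action values from a reward signal generated internally, from the effect an outcome has on its own homeostatic variable, rather than from an experimenter-supplied reward.

```python
# Minimal sketch of learning driven by an internal value system: reward is
# the internally sensed change in a homeostatic variable ('energy'), not an
# external signal. All numbers are invented for illustration.
import random

random.seed(0)
ACTIONS = ["approach", "avoid"]
EFFECTS = {"approach": 0.2, "avoid": -0.05}  # effect of each action on energy
q = {a: 0.0 for a in ACTIONS}                # learned value of each action
energy = 0.5                                 # homeostatic variable in [0, 1]

for step in range(500):
    energy = max(0.0, energy - 0.1)          # metabolic cost of each step
    # Epsilon-greedy selection over the learned action values.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=q.get)
    new_energy = min(1.0, max(0.0, energy + EFFECTS[action]))
    reward = new_energy - energy             # the internal value signal
    q[action] += 0.1 * (reward - q[action])  # running-average value update
    energy = new_energy

print(q)  # 'approach' ends up valued higher: it improves internal wellbeing
```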

Interdisciplinary Issues
• Many parallels between autonomous robot emotion research, and emotion theories:
– Mechanisms underlying involvement of emotions in cognition and action
– Emotion elicitors
– Emotions as cognitive modes
– Emotions, value systems and motivation
Emotions in cognition and action
• How does emotion influence cognition and behaviour?
• ‘Circuit Models’:
– postulate set of neural mechanisms - promising for study of specific neural circuits, but difficulty in integrating at a global scale
• ‘Adaptational Models’:
– emotions as dynamic patterns of neuromodulations - can’t make contributions to human neural process examination, but allows study of the ‘global picture’
Emotion Elicitors
• What mechanisms are in place to allow influences to cause emotions?
– Establishing causal relations and possible implementation approaches is a problem
– similar problem with Appraisal theories
• Gap between level of abstraction and implementation details too large
• Neuroscience feedback required concerning ‘valence’
Emotions as Cognitive modes
• The view that emotions have a global and synchronised influence on the relation with the world
• Issues in implementing this view:
– The aspect and mechanism of emotion required
– How to account for cultural and individual differences?
– How to model relation between cognitive modes and action tendencies?
Emotions, value systems and motivation
• The role emotion plays in the production of action in autonomous robots:
– emotions allow more varied and flexible behaviour (related to goals)
– emotions as second-order control systems
– motivation factors and value systems
• Many different architectural implementations
• A quantitative assessment of utility of emotions?

Challenges and goals for the future
• The author identified research directions, or challenges to be overcome:
– the grounding problem of artificial emotions
– dissolving the ‘mind-body’ problem
– linking emotion and intelligence
– how to measure progress?
Grounding emotions
• The drawbacks of a priori design of emotion constructs/mechanisms:
– over-attribution (over-design)
– lack of grounding (no ‘meaning’ for the robot)
• The emergent approach is promising
– counters over-attribution
• Computational models incorporating developmental and/or evolutionary perspectives:
– helps overcome the grounding problem
Dissolving the mind-body problem
• Investigating the links between ‘higher’ and ‘lower’ levels of cognition and action, and the influence of emotion
– “Symbolic AI” and “Embodied AI”
– The need for overlap between the two
• Problems that need to be addressed:
– role that emotion plays in synchronisation
– mechanisms for bridging the gap between internal and external aspects of emotion
– the integration of multiple levels of emotion generation
Linking emotion and intelligence
• Emotions are now considered pervasive in cognition and action, and an essential element of intelligence
– should not become an unquestioned assumption
• The modelling and study of individual cognitive and emotional systems necessary but not sufficient to understanding both:
– they are deeply intertwined, and should also be studied as such - in parallel
Measuring Progress
• What are the contributions of emotions, and how can this be quantified?
• “An obvious way of doing this is by running control expt’s in which the robot performs the same task ‘with’ and ‘without’ emotions and comparing the results.”
• Quantitative evaluations necessary in addition to qualitative ones

Summary
• The dual potential:
– The use of these robotic models as tools and ‘virtual laboratories’
– A modelling approach that “fosters conceptual clarification”
• The field is in its infancy; but progress is evident
• Necessity for interdisciplinary effort for understanding emotions in general

Reference: "Emotion understanding from the perspective of autonomous robots research", Lola CaƱamero, Neural Networks 18 (2005) 445-455

Tuesday, August 07, 2007

New Blog: Issues in the philosophy of mind and cogsci

A few days ago I came across a recently started blog called "Issues in the Philosophy of Mind and Cognitive Science". Written by Jack Josephy, a final-year Philosophy and Cognitive Science student at the University of Sussex (see here for some other articles he's written), it currently has only a few posts, but each looks at some of the philosophical issues involved in its topic. So far, there are posts on intelligence, the nature/nurture debate, and machine consciousness. I particularly like the central emphasis he places on embodiment, something to which I attach great importance.

Thursday, August 02, 2007

Life, Consciousness, and Brain Evolution

The February issue of Discover Magazine (07/03/07) carries an interview with Gerald Edelman, Nobel prize winner (for work on the structure of antibodies), and founder/director of the Neurosciences Institute. In this interview, conducted by Susan Kruglinski, he discusses his views on consciousness and the work he, and collaborators, are conducting with robots to shed more light on its mysteries. In doing so, the concept of life and the evolution of the brain is also briefly discussed.

From the outset of the interview, Edelman states his belief that consciousness can be created in artificial systems. He does, however, make a distinction between living conscious artefacts and non-living conscious artefacts. He takes 'living' to be "the process of copying DNA, self-replication, under natural selection". Anything with these properties is a living system - all else is not. Consciousness created in an artificial system would then be fundamentally different from our own (human) consciousness - although he does say that he would personally treat it as though it were alive: according it the same basic respect ("...I'd feel bad about unplugging it.").

When it comes to giving a definition of what consciousness is, Edelman starts by turning to properties described by the psychologist and philosopher William James: (1) it's the thing you lose when you fall into a deep dreamless sleep, and regain when you wake up, (2) it's continuous and changing, and (3) it's modulated or modified by attention, and so not exhaustive. From this, Edelman describes two states of consciousness. The first is primary consciousness. This supposedly arose with the evolution of a neuronal structure which allowed an interaction between perceptual categorisation and memory. In this way, an internal scene could be created which could be linked to past scenes (i.e. memory). Built on this is secondary consciousness - resulting from the development of another neural structure (or structures, apparent in humans, and to a certain extent in chimps), which enabled conceptual systems to be connected: enabling the development of semantics and "true language", resulting in higher-order consciousness. A more simplified view is that consciousness requires the internalisation of stimuli, the remembering of them, and the interactions of these processes (not only perception and memory, but also things such as emotion). From this theory, Edelman says that a further understanding of consciousness would allow a clearer picture of how knowledge is acquired, which has importance in many different respects.

It is based on this view of consciousness that he describes the Neurosciences Institute's approach to understanding it. They construct what are described as Brain-Based Devices (BBDs), which are essentially robots with simulated nervous systems. This artificial nervous system is modelled on a vertebrate or mammalian brain - although of course the number of neurons and synaptic connections simulated is many orders of magnitude smaller than in the natural counterparts. Nonetheless, one of their BBDs, called Darwin VII, is capable of undergoing conditioning: learning to associate objects in its environment with 'good' or 'bad' taste (where these 'tastes' have been defined a priori as fundamental properties of the environment). An important point regarding this experimentation is that it was conducted using real physical robots in the 'real world' (albeit simplified for the purpose of the task; it wasn't a simulation environment). Edelman points out that a big problem with simulated environments is the difficulty of replicating reality, or in his words: "...you can't trace a complete picture of the environment." As demonstrated by the conditioning experiment, these BBDs are capable of learning: an example given in the interview is a Segway-football match between a BBD-controlled Segway and one programmed using 'traditional' AI techniques. Five matches were played, and the BBD-based device won each time. Edelman puts this down to its learning capabilities and behavioural flexibility: it learned all of its actions, rather than merely implementing a set of algorithms (as a 'traditional AI' system does).

The fact that the BBDs are controlled by artificial nervous systems leads to questions regarding the specifics of implementation. Instead of individually simulating the million or so neurons that make up the simulated nervous system, groups of around 100 neurons are simulated together, with the mean firing rate of each sub-population being taken (mean firing-rate models). This average firing rate is a reflection of synaptic change. According to Edelman, this sort of response is not just biologically plausible, it is identical: "The responses are exactly like those of [biological] neurons" (square brackets added).
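For readers unfamiliar with mean firing-rate models, the basic update is simple. A generic textbook form (not the Institute's actual implementation): each unit carries the average activity of a whole neuronal group, driven by weighted input from the other groups and passed through a saturating nonlinearity.

```python
# Generic mean firing-rate network update (a textbook form, not the
# Neurosciences Institute's code): each unit stands for the average activity
# of a group of ~100 neurons rather than a single spiking cell.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                       # number of neuronal groups
W = rng.normal(0.0, 0.5, size=(n, n))       # synaptic weights between groups
rates = np.zeros(n)                         # mean firing rate of each group
external = np.array([1.0, 0.0, 0.0, 0.0])   # sensory drive to the first group

for t in range(50):
    drive = W @ rates + external            # total input to each group
    # Saturating nonlinearity keeps rates bounded, mimicking the bounded
    # average activity of a real population.
    rates = np.tanh(np.clip(drive, 0.0, None))

print(rates.round(3))                       # settled mean rates per group
```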

The final part of the interview looks at other work going on at the institute. Currently, work is progressing on Darwin 12, the latest incarnation of the BBDs. This version is new in that it looks at how embodiment affects the development of learning in the artificial nervous system, and its general functionality. It has both wheels and legs, and nearly 100 sensors in each of its legs. Mention is also made of other work concerning rhythm and melody as intrinsically human capabilities, more so than of any other animal, and how these may have led to the development of language. This aspect of the work is only loosely brushed over in the interview, so I do likewise.
I think that this interview, albeit reasonably short, covered a number of very interesting concepts on a wide range of subjects. However, I feel that it doesn't entirely succeed in bringing all of these elements together. An interesting read nonetheless.

Wednesday, July 25, 2007

Cronos and Cognitive Reserve

Over the past couple of days, I've come across two very interesting posts which I thought I'd share: the first is the concept of "Cognitive Reserve", by way of an interview with one of its main proponents, and the second is an interesting look at robot ethics and machine consciousness, with some nice links to some very interesting people.

Cognitive Reserve: SharpBrains has an interview with Yaakov Stern (Division Leader of the Cognitive Neuroscience Division of the Sergievsky Center, and Professor of Clinical Neuropsychology, at the College of Physicians and Surgeons of Columbia University, New York) on Cognitive Reserve. The idea is basically that some people are better able to withstand the effects of Alzheimer's because they have a greater cognitive reserve - i.e. a greater number of neurons - which makes up for the deficit. Also central to the theory is that mental and physical training helps build up this cognitive reserve in a cumulative way.

Robot Ethics: A nice post I came across a few days ago at Bloggetiblog - a discussion of robot ethics. It mentions the humanoid robot Cronos in relation to Owen Holland's machine consciousness project, includes a quote from Murray Shanahan, and takes a brief look at the ethical issues facing robots and their designers. Altogether a nice read.