
Wednesday, April 13, 2016

A new start in Lincoln

The past few months have been busy - I attended the Human-Robot Interaction conference in Christchurch (New Zealand), and then I packed up and left Plymouth, after six (near enough) very happy and relatively productive years with Tony Belpaeme's HRI group.

Now, I have joined the University of Lincoln (U.K.) as a lecturer (as far as I can tell, roughly equivalent to an Assistant Prof in the US and elsewhere, but permanent rather than tenure-track...) in the School of Computer Science. I'm also a member of the Lincoln Centre for Autonomous Systems, a relatively large research group covering a range of (you guessed it) autonomous robotic systems, human-robot interaction/collaboration, and bio-inspired machine vision. It's an exciting place to join - the School and the research group are expanding rapidly, both in terms of student numbers and research projects/income, and we're due to move into a new purpose-built building over the next year (I'm moving in the next few weeks...).

So, what are my plans for the future? Broadly speaking, to research and teach, or to teach and research, whichever way you choose to look at it. In terms of research, I intend to continue my general line of work combining cognitive/developmental robotics and social human-robot interaction. It's going to be a little while before I get the things in my head up and running in the real world, but hopefully not too long.

An exciting, and yet fairly daunting, time for me - learning a new place, new colleagues, new procedures. However, I'm looking forward to it! Apparently this is my 200th post on this blog - apt, perhaps, that it marks a new start.

Tuesday, January 12, 2016

2nd Workshop on Cognitive Architectures for Social Human-Robot Interaction


As a follow-up to the first iteration, I'm organising the next version of the workshop on Cognitive Architectures for Social HRI (with Greg Trafton and Severin Lemaignan). It will take place immediately before the International Conference on Human-Robot Interaction, in Christchurch (New Zealand), on Monday the 7th of March 2016.

You can find all the necessary details on the workshop website.

The focus this time will specifically be on how social interaction (between robots and humans in particular, but not necessarily exclusively) can be supported by cognitive architectures - and what functions and mechanisms are required for this. To this end, we are asking that all authors answer a set of six specific questions in their submissions, to provide a basis for comparison, and to initiate discussions at the workshop.

I hope to see you there!

Monday, August 24, 2015

Social HRI Summer School: talk on experimental challenges

I'm in Aland, Finland, at the moment, taking part in the 2nd Summer School on Social Human-Robot Interaction (the first took place in Cambridge in 2013). We're at the end of the first day, and what a fascinating first day it has been: great talks from Tony Belpaeme, David Vernon and Yiannis Demiris. In between, I played the support act, filling a slot in the programme with some background and observations on performing HRI experiments. The title/abstract of my talk:

"Experimental HRI: a wander through the challenges"
Running HRI experiments is difficult. Running HRI experiments outside of the lab, in the real world, can introduce even more difficulties, and having to deal with real people's quirks and foibles just adds to the challenges! However, there is so much of interest in doing just that: developing better social robots, and supporting the creation of robotic assistants and tools that can help people in their daily lives.
In this talk, an overview will be given of some of the constraints and trade-offs that may be encountered when implementing and running HRI experiments, but also of the opportunities that arise, and the effects that can be taken advantage of. Examples from Child-Robot Interaction studies will serve to highlight these, including robots to help children learn, and running experiments in schools and hospitals.
Some of these issues may already be familiar or intuitive, and the coverage will certainly be non-exhaustive, but the intention is to outline the basis of a toolkit of experimental HRI considerations that can be thrown at any attempt to release experiments into 'the wild'.

Tuesday, September 23, 2014

DREAM

At the beginning of this month, I formally started work on the EU FP7 DREAM project (although the project itself started in April this year). Given that ALIZ-E finished at the end of August, this fitted very well for me personally, as it means that I am able to stay in Plymouth. The project is coordinated by the University of Skovde (Sweden), with Plymouth (PI Tony Belpaeme) one of seven partners who between us cover a wide range of expertise. Two standard robot platforms will be used as part of the project: the Aldebaran Nao (with which I have plenty of experience from ALIZ-E), and Probo (a green soft-bodied and trunked robot developed by VUB), although the Nao will be the primary focus of development.


(From the nice new flashy project website) The DREAM project...
...will deliver the next generation robot-enhanced therapy (RET). It develops clinically relevant interactive capacities for social robots that can operate autonomously for limited periods under the supervision of a psychotherapist. DREAM will also provide policy guidelines to govern ethically compliant deployment of supervised autonomy RET. The core of the DREAM RET robot is its cognitive model which interprets sensory data (body movement and emotion appearance cues), uses these perceptions to assess the child’s behaviour by learning to map them to therapist-specific behavioural classes, and then learns to map these child behaviours to appropriate robot actions as specified by the therapists.
My work on this will be on the (robot) cognitive and behavioural aspects of this goal. While this is a slight departure from my memory-centred work in ALIZ-E, it remains in the context of child-robot interaction, retains a focus on application-driven development (though for autistic children rather than diabetic children), and maintains an emphasis on achieving autonomous operation (albeit in the context of supervised interactions). There is an exciting programme of aims and goals in place, and a very good group of partner institutions, so I'm looking forward to it!
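For the curious, here's roughly how I picture that perception-to-action pipeline - a minimal Python sketch of my own, purely illustrative (the feature values, behaviour classes and action names are invented for the example; this is not DREAM project code):

# Illustrative toy only - not DREAM code. A percept (a feature tuple) is
# mapped to a behaviour class, which is mapped to a therapist-specified action.

BEHAVIOUR_CENTROIDS = {            # stands in for the learned mapping from
    "engaged":    (0.8, 0.7),      # sensory features to behaviour classes
    "distracted": (0.2, 0.3),      # (features: e.g. gaze-on-task, movement)
}

THERAPIST_ACTIONS = {              # behaviour class -> robot action,
    "engaged":    "continue_task", # as specified by the therapist
    "distracted": "prompt_child",
}

def classify(percept):
    """Assign the percept to the nearest behaviour class."""
    def sq_dist(c):
        return sum((p - q) ** 2 for p, q in zip(percept, BEHAVIOUR_CENTROIDS[c]))
    return min(BEHAVIOUR_CENTROIDS, key=sq_dist)

def select_action(percept):
    """Perception -> behaviour class -> robot action, as in the project blurb."""
    return THERAPIST_ACTIONS[classify(percept)]

print(select_action((0.75, 0.6)))   # -> continue_task
print(select_action((0.10, 0.4)))   # -> prompt_child

The real system will of course learn these mappings from therapist-annotated interactions, rather than using hand-set values as here.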

Friday, December 20, 2013

HRI 2014 Workshop on Cognitive Architectures for Human-Robot Interaction

I am co-organising a half-day workshop at the 9th ACM/IEEE International Conference on Human-Robot Interaction next year (HRI'14), to be held on the 3rd of March 2014 in Bielefeld, Germany. If you have any interest in this topic, or would just like to find out more, please consider joining us! We intend this to be an inclusive event, with a high discussion content and an emphasis on the dissemination of ideas that will hopefully influence ongoing (social) Human-Robot Interaction research.

I've been interested in cognitive systems, cognitive robotics and cognitive architectures for a while now, as my interest (and subsequent research) lies in exploring general principles of cognition/intelligence, both for understanding natural systems and for the development of 'better' robotic systems (to test theories and accomplish tasks). I think that Human-Robot Interaction provides a fascinating context in which to explore cognitive architectures, as it presents a very different set of challenges to theory and implementation than have typically been considered. Hence the workshop!


*******************************************************************
CALL FOR WORKSHOP SUBMISSIONS AND PARTICIPATION

HRI 2014 Workshop on Cognitive Architectures for Human-Robot Interaction

Monday 3rd March, 2014 (Bielefeld, Germany)

http://www.tech.plym.ac.uk/socce/staff/paulbaxter/cogarch4hri/

*******************************************************************

IMPORTANT DATES
***************
** Submission deadline: Friday 10th January, 2014
** Notification of acceptance: Monday 20th January, 2014
** Final (accepted) submission: Friday 7th February, 2014
** Workshop: Monday 3rd March, 2014 (half day)

DESCRIPTION
***********
Cognitive Architectures are constructs (encompassing both theory and models) that seek to account for cognition (over multiple timescales) using a set of domain-general structures and mechanisms. Typically inspired by human cognition, the emphasis is on deriving a set of principles of operation that are not constrained to a specific task or context. This therefore presents a holistic perspective: it forces the system designer to initially take a step back from diving into computational mechanisms, and to consider what sort of functionality needs to be present and how it relates to other cognitive competencies. Thus the very process of applying such an approach to HRI may yield benefits, such as the integration of evidence from the human sciences in a principled manner, the facilitation of comparisons between different systems (abstracting away from specific computational algorithms), and a more principled means of verifying and refining the resultant autonomous systems.

For HRI, such an approach to building autonomous systems based on Cognitive Architectures - 'cognitive integration' - would emphasise first those aspects of behaviour that are common across domains, before applying these to specific interaction contexts for evaluation. Furthermore, given its inspiration from human cognition, it can also inherently take into account the behaviour of the humans with which the system should interact, with the intricacies and sub-optimality that this entails.

To date, there have been relatively few efforts to apply such ideas to the context of HRI in a structured manner. The aim of this workshop is therefore to provide a forum to discuss the reasons and potential for the application of Cognitive Architectures to autonomous HRI systems. It is expected that by attending this workshop and engaging in the discussions, participants will gain further insight into how a consideration of Cognitive Architectures complements the development of autonomous social robots, and contribute to the cross-fertilization of ideas in this exciting area.

SUBMISSION AND PARTICIPATION
****************************
Contributions are sought from all who are interested in participating. A light-touch review process will be applied to check for relevance - the emphasis of the workshop is on inclusion, discussion and dissemination. Prior to the workshop, the organizers will integrate the submissions into a list of perspectives that will form the basis for the discussions.

Please prepare a 2-page position paper on your research-informed perspective on cognitive architectures for human-robot interaction (particularly social). The HRI template should be used for this submission (ACM SIG Proceedings). Submissions should be sent to: paul.baxter(a)plymouth.ac.uk. All accepted position papers will be archived on the workshop website.

ORGANISERS AND CONTACT
**********************
** Paul Baxter (Plymouth University, U.K.) and Greg Trafton (Naval Research Laboratory, USA)
** Email: paul.baxter(a)plymouth.ac.uk

Tuesday, August 27, 2013

Summer School and Research Details

The Summer School on Social Human-Robot Interaction is now in full swing! Great talks so far, and some really good hands-on sessions. Last night was a workshop given by Aardman Animations on building models with plasticine - brilliant fun, and a nice insight into how to give the illusion of life to inanimate objects, which is of course a goal of social robotics research!

...see #hrisummerschool on Twitter...

And in other news, I've finally started to update my Research Details page, on which I outline in a little more detail the general research themes I am interested in. Please wander along and have a look!

Monday, April 22, 2013

Summer School on Social Human-Robot Interaction

On the off-chance that there's anyone reading this who may be interested, but who hasn't heard this elsewhere (...), I'd like to mention a research-oriented summer school on Social Human-Robot Interaction (HRI) that will take place at Cambridge University, U.K., from the 26th to the 30th of August 2013.

Organised primarily under the purview of the project that employs me (ALIZ-E), and also involving the Accompany project, the school aims to provide both theoretical background and practical skills to support researchers in the area of Social HRI. In saying 'social', the emphasis is moved away from industrial robots (or robots in manufacturing/automation contexts), for which interaction with humans is also necessary, and towards the use of robots in contexts where the characteristics of human-human interaction are more important (for example, companion robots, education support, care in hospital/home, etc).


The application process has already started, with the deadline for submitting applications on the 30th of April. With support from the IEEE and EuCognition, there will be a limited number of scholarships available for participants.


A list of the topics to be covered is now available on the conference website, though the programme is yet to be finalised. A little taster:

The summer school will have a wide-ranging programme of lectures, discussions and hands-on ateliers on topics such as social signal processing, robotics and autism, child-robot interaction, multi-modal communication, natural language interaction, smart environments, robot-assisted therapy, interaction design for robots, tools and technologies, and ethics. The school is aimed at participants who seek background and hands-on experience in the interdisciplinary science and technology supporting social human-robot interaction.
I'll be doing a little something on Cognitive Architecture for Social HRI at the school, and so will emphasise aspects of cognitive processing and organisation for robot control and behaviour that are relevant (or at least of interest) to social interaction. Which is of course a fascinating subject that you would be foolish to miss :-p

Monday, October 15, 2012

The work of von Foerster: summary of a summary


The academic work of Heinz von Foerster was, and remains, highly influential in a number of disciplines, not least due to the pervasive implications of his distinction between first- and second-order cybernetics (and its antecedent ideas). Where first-order cybernetics may be simply described as the study of feedback systems by observation, second-order cybernetics extends this observation of a system to incorporate the observer itself: it is reflexive, in that the observation of the feedback system is itself a feedback system to be explained. While I am familiar with this concept, I am not particularly familiar with the body of work von Foerster produced to instantiate it, although I have encountered numerous references to him, particularly when the subject is related to enactivism.

In 2003, Bernard Scott republished a summary of von Foerster's work that he had originally published in 1979. The original paper appeared just a few years after von Foerster's official retirement; he apparently (as many an academic has before and since) continued his work for many subsequent years. It serves as a summary of the breadth of the work and its contribution, and was republished partly in recognition of the continuing, and expanding, influence it exerts. This post is a very brief summary of this summary paper.

In general terms, von Foerster's views on computation and cognition seem to be inherently integrated and holistic, proposing dynamic interactions between the micro, the macro and the global. This view thus contrasts with functional models of cognitive processes, which, by their nature, can only be static snapshots of the dynamic interactions at play: cf. autopoietic theory, which extends this notion with the principles of self-reconstitution and organisational closure. In particular, he emphasises the necessity of considering perception, cognition and memory as indivisible aspects of a complete integrated cognitive system, cf. enactivism.

With this consideration as a consistent thread, four primary phases in the development of von Foerster's research are identified. First is his consideration of large molecules as the basis for biological computation, rather than the prevailing focus on neural networks, and the idea that 'forms' of computation underlie all computational systems. Second is the exploration of self-organisation, and the reconciliation of organisation with the potential paradox of self-reference. In this sense, a system that increases in order (organisation) requires that its observer adapt its frame of reference to incorporate this: if this were not required of the observer, then the system could not be regarded as self-organising. The resulting infinite recursion provides an account of the conditions necessary for social communication and interaction: a consequence of second-order cybernetics. Third is a focus on the nature of memory as being key to understanding cognition and consciousness. Returning to the notion of holistic cognition described above, this is in contrast to the perspective of memory as a static storage mechanism, which was prevalent among behaviourist psychologists, and still remains prevalent in the work of designers of synthetic cognitive models and architectures (the countering of which is a key theme of my own research). The fourth and final identified phase (of the original 1979 paper, that is) is the formalisation of the concept of self-referential systems and their analysis as recursive computation, and the extension of this to apply also to the observer.
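As an aside, that fourth phase has a rather compact computational illustration: von Foerster's 'eigenbehaviours' are the stable values that emerge when an operation is recursively applied to its own output. A toy Python sketch (my own example, not drawn from Scott's paper):

import math

def eigenbehaviour(op, x, tol=1e-10, max_iter=1000):
    """Iterate x -> op(x) until the value stabilises; return the fixed point."""
    for _ in range(max_iter):
        nxt = op(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no stable eigenbehaviour found")

# cos(x) = x has a single stable fixed point (~0.739), whatever the start:
print(eigenbehaviour(math.cos, 0.0))
print(eigenbehaviour(math.cos, 5.0))   # converges to the same stable value

The recursion stabilises on a value that is a product of the process itself - a (very) loose computational echo of self-referential systems settling into stable 'objects'.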

The threads of self-reference and a holistic perspective have, as noted above, had a wide influence, and continue to do so. I did not realise before that Maturana and Varela's well-known formulation of autopoiesis was developed at the lab that von Foerster led (the Biological Computing Laboratory, University of Illinois). The relationship is of course clear now that I know about it (!): autopoiesis builds upon the self-reference and holism with self-reconstitution and organisational closure to form a fully reflexive theory. Similarly, enactivism seems to owe much to von Foerster's influence, with its integrated consideration of agent and environment, embodiment and cognition - a theme that has become increasingly prevalent in recent years among those working on cognitive robotics from a more theoretical perspective - extending to the consideration of social behaviour. In all, the principle of second-order cybernetics and the theoretical perspectives upon which it is based remain important in the consideration of cognition and human behaviour, despite their seemingly abstract theoretical nature, and Heinz von Foerster played a rather prominent role in providing the underpinnings.

Some of the 'buzzwords' raised in the summary of von Foerster's research that carry through as such today (among others - and I use the term buzzwords without any pejorative intent, merely as a 'note-to-self'):
- second order cybernetics
- self-organisation
- the holistic nature of cognition (developed as enactivism)

Paper reference:
Scott, B. (2003), "Heinz von Foerster - an appreciation (revisited)", Cybernetics and Human Knowing, 10(3/4), pp137-149

Tuesday, December 20, 2011

CFP: AISB Symposium on Computing and Philosophy

An upcoming event with which I have a minor involvement is the 4th incarnation of the AISB Symposium on Computing and Philosophy, due to take place in Birmingham, U.K., between the 2nd and 6th of July 2012. The AISB convention is this year being held in conjunction with the International Association for Computing And Philosophy (IACAP), and will mark the 100th anniversary of Alan Turing's birth. There are 16 different symposia at this convention in all, each with its own emphasis on the interaction between AI/computing and philosophy. For those of a bent in that particular direction, there will be plenty to attract your attention!

An overview of the Symposium on Computing and Philosophy, from the website:

Turing’s famous question ‘can machines think?’ raises parallel questions about what it means to say of us humans that we think. More broadly, what does it mean to say that we are thinking beings? In this way we can see that Turing’s question about the potential of machines raises substantial questions about the nature of human identity. ‘If’, we might ask, ‘intelligent human behaviour could be successfully imitated, then what is there about our flesh and blood embodiment that need be regarded as exclusively essential to either intelligence or human identity?’. This and related questions come to the fore when we consider the way in which our involvement with and use of machines and technologies, as well as their involvement in us, is increasing and evolving. This is true of few more than those technologies that have a more intimate and developing role in our lives, such as implants and prosthetics (e.g. neuroprosthetics).

The Symposium will cover key areas relating to developments in implants and prosthetics, including:
  • How new developments in artificial intelligence (AI) / computational intelligence (CI) look set to develop implant technology (e.g. swarm intelligence for the control of smaller and smaller components)
  • Developments of implants and prosthetics for use in humans, primates and non-primate animals
  • The nature of human identity and how implants may impact on it (involving both conceptual and ethical questions)
  • The identification of, and debate surrounding, distinctions drawn between improvement or repair (e.g. for medical reasons), and enhancement or “upgrading” (e.g. to improve performance) using implants/prosthetics
  • What role other emerging, and converging, technologies may have on the development of implants (e.g. nanotechnology or biotechnology)
But the story of identity does not end with human implants and neuroprosthetics. In the last decade, huge strides have been made in ‘animat’ devices. These are robotic machines with both active biological and artificial (e.g. electronic, mechanical or robotic) components. Recently one of the organisers of this symposium, Slawomir Nasuto, in partnership with colleagues Victor Becerra, Kevin Warwick and Ben Whalley, developed an autonomous robot (an animat) controlled by cultures of living neural cells, which in turn were directly coupled to the robot's actuators and sensory inputs. This work raises the question of whether such ‘animat’ devices (devices, for example, with all the flexibility and insight of intelligent natural systems) are constrained by the limits (e.g. those of Turing Machines) identified in classical a priori arguments regarding standard ‘computational systems’. 
Both neuroprosthetic augmentation and animats may be considered as biotechnological hybrid systems. Although seemingly starting from very different sentient positions, the potential convergence in the relative amount and importance of biological and technological components in such systems raises the question of whether such convergence would be accompanied by a corresponding convergence of their respective teleological capacities; and what indeed the limits noted above could be.

For more information, see the symposium website. For those interested in submitting a paper, the deadline for submissions is the 1st of February 2012.

Monday, December 19, 2011

Me: films and robots

My first post in over a year, so let's start with something silly :-)

I'm sure that anyone who is working, or has worked, with robots has been influenced in some way (if not inspired) by some depiction in a work of fiction - most likely film - whether they choose to admit it or not (those who don't are probably lying). I'm quite happy to admit to this - and can point to two such intelligent robotic devices. What precisely about them gave rise to this influence I don't know - and I don't really want to deconstruct it in case it turns out to be ridiculous and/or trivial - but here they are nonetheless for you to assess.

Johnny 5 is alive!
The first one is the amazing - and actually fairly realistic (in terms of achievable mechanical complexity) - Johnny 5 from Short Circuit. I can't really say enough about this dude - I did really want one of the little mini-me's from the second film though! I can't really remember the first time I watched this, but I do know that over the many occasions I've watched the films I'm still drawn to it, despite the occasionally dodgy special effects (I'm thinking of the dancing)... 

The second one is 'Max', the intelligent spaceship/robot-arm thing in Flight of the Navigator. I'm not entirely sure if this is supposed to be an AI robot, or an alien-being-controlling-a-robot, but any device that can fly a spaceship, go manic, and time-travel is alright in my book. The single eye-on-an-arm thing was a bit strange, though even with such a fairly simple setup, the array of emotional expression was really quite impressive.

(I've only just realised that both of these films were released in '86 - this is just coincidence, as I watched both on TV a number of years afterwards - I didn't see them in the cinema or anything.) I'm not sure these would be the choices of most people - and I'm not going to bring age into it - but they're mine :-)


/rant
Having said all that though, there is a bit of a cautionary note, I think. As much as the portrayal of the robot in science fiction is of course hugely beneficial in terms of building and maintaining interest in these synthetic devices, I do wonder sometimes whether this actually has the long-term reverse effect: building expectations of what such devices can do, not just beyond that which is currently possible, but beyond that which is even plausibly possible. In the end, would this not just turn people off when they realise that the real state-of-the-art is actually fairly mundane? Or that what people like me think of as really quite exciting developments just pale in comparison with the vividly recreated imaginations of script writers and graphic designers? In the end, surely such levels of unfulfilled expectation will serve as a damper on funding initiatives (I'm thinking of potential career prospects here...!) - "but what you are trying to do isn't really exciting, they were talking about it in the '70s/'80s/'90s/etc...". Either that, or the reality drifts so far from expectation that most people don't understand what's going on, and you end up in the same place. But that is perhaps for another discussion, on public engagement with science...

Or maybe I'm reading far too much into all of this, and should really just sit back, relax, and enjoy the view...

/endrant

Friday, December 10, 2010

Thesis finally published

I am pleased to say that my thesis has now finally been published and submitted, with my graduation ceremony next week. After that, I will have truly finished the long and arduous road that was my PhD. Since I've stayed in academia (so far at least...) I'm going to engage in a little self-promotion (what's a blog for otherwise?), and give the thesis abstract for those (very few) of you who may be interested.

Foundations of a Constructivist Memory-Based approach to Cognitive Robotics:

Synthesis of a Theoretical Framework and Application Case Studies

Cognitive robotics is applicable to many aspects of modern society. These artificial agents may also be used as platforms to investigate the nature and function of cognition itself, through the creation and manipulation of biologically-inspired cognitive architectures. However, the flexibility and robustness of current systems are limited by the restricted use of previous experience.

Memory thus has a clear role in cognitive architectures, as a means of linking past experience to present and future behaviour. Current cognitive robotics architectures typically implement a version of Working Memory - a functionally separable system that forms the link between long-term memory (information storage) and cognition (information processing). However, this division of function gives rise to practical and theoretical problems, particularly regarding the nature, origin and use of the information held in memory and used in the service of ongoing behaviour.

The aim of this work is to address these problems by synthesising a new approach to cognitive robotics, based on the perspective that cognition is fundamentally concerned with the manipulation and utilisation of memory. A novel theoretical framework is proposed that unifies memory and control into a single structure: the Memory-Based Cognitive Framework (MBCF). It is shown that this account of cognitive functionality requires the mechanism of constructivist knowledge formation through ongoing environmental interaction, the explicit integration of agent embodiment, and a value system to drive the development of coherent behaviours.

A novel robotic implementation - the Embodied MBCF Agent (EMA) - is introduced to illustrate and explore the central features of the MBCF. By encompassing aspects of both network structures and explicit representation schemes, neural and non-neural inspired processes are integrated to an extent not possible in current approaches.

This research validates the memory-based approach to cognitive robotics, providing the foundation for the application of these principles to higher-level cognitive competencies.


This work was conducted at the University of Reading (U.K.) under the supervision of Dr. Will Browne. While I enjoyed my time there, it was a fairly lonely research process, and I am very much appreciating the opportunity for frequent and open discussions that I now have in Plymouth.

Tuesday, August 26, 2008

On cognition

A while ago now, Andy at Figural Effect posted a concise summary (by way of quotations) of four definitions of 'cognition' from researchers in different disciplines: Williamson, LeDoux, Clark & Grush, and Neisser. Because I see the Clark & Grush definition as most clearly defining a computationally implementable framework (as I've discussed previously), I find it the most appealing; however, as Andy notes, the views from the different disciplines are interesting to compare.

Related to this, and as mentioned in the comments of the Figural Effect post, is an interesting compilation of definitions of intelligence as viewed from psychology, AI, and others. Personally though, the view of intelligence given by H.G. Wells in 'The Time Machine' is possibly the most elegant I have seen.

Thursday, August 14, 2008

The Animat project at Reading University

As reported on the BBC news website (and now many other news channels), and as subsequently nicely summarised by Mo at Neurophilosophy, the Animat project at the University of Reading aims to use a biological neuron culture not just to control a mobile robot, but also to receive sensory signals from it (in this case from 4 sonar sensors) - thus forming a closed-loop system. The work is being done by CIRG (of which I am a part), in collaboration with the Pharmacy department, and hopes to allow the biological neuronal culture to learn control of the mobile robot through this feedback mechanism, in order to produce meaningful real-world behaviours - obstacle avoidance, for example.

The official press release is here, and further information can be found here.

Thursday, August 07, 2008

Hemispherical 'electronic eye' - and some implications...

The BBC News website yesterday reported on the development of a camera with a hemispheric detection surface, rather than the traditional 2D array. The paper on which this article is based (in Nature - link) proposes that this new technology will enable devices with "...a wide field of view and low aberrations with simple, few-component imaging optics", including bio-inspired devices for prosthetics purposes, and biological systems monitoring. This figure from the paper gives a very nice overview of the construction and structural properties of the system. Note that the individual sensory detectors are the same shape throughout - it is the interconnections between them which are modified. Abstract:
The human eye is a remarkable imaging device, with many attractive design features. Prominent among these is a hemispherical detector geometry, similar to that found in many other biological systems, that enables a wide field of view and low aberrations with simple, few-component imaging optics. This type of configuration is extremely difficult to achieve using established optoelectronics technologies, owing to the intrinsically planar nature of the patterning, deposition, etching, materials growth and doping methods that exist for fabricating such systems. Here we report strategies that avoid these limitations, and implement them to yield high-performance, hemispherical electronic eye cameras based on single-crystalline silicon. The approach uses wafer-scale optoelectronics formed in unusual, two-dimensionally compressible configurations and elastomeric transfer elements capable of transforming the planar layouts in which the systems are initially fabricated into hemispherical geometries for their final implementation. In a general sense, these methods, taken together with our theoretical analyses of their associated mechanics, provide practical routes for integrating well-developed planar device technologies onto the surfaces of complex curvilinear objects, suitable for diverse applications that cannot be addressed by conventional means.
From the cognitive/developmental robotics point of view, this sort of sensory capability has (to my mind) some pretty useful implications. Given that the morphology of the robot concerned - including the morphology of its sensory systems - takes central importance in the development (or learning) that the robot may perform, these more 'biologically plausible' shapes may allow better comparisons to be made between robotic agent models and animals. Furthermore, from the morphological computation point of view (e.g. here, and here), this sort of sensory morphology may remove the need for a layer of image pre-processing - motion parallax, for example. As seen in flighted insects, the shape of the eye and the arrangement of the individual visual detectors upon it remove the need for complex transformations when the insect is flying through an environment - an example of how morphology reduces 'computational load'. If effects similar to these can be taken advantage of in cognitive and developmental robotics research, then greater understanding and functionality may be gained. The development of this type of camera may be an additional step in this direction.

Thursday, February 07, 2008

Simulation versus reality, and the reliance of cognition on embodiment

Cognitive robotics work makes extensive use of both real-world robots and environments, and their simulation equivalents. Simulations are useful in that development time is shorter (or at least has the potential to be), so proof-of-concept experiments are readily implemented, and all of the variables are under the control of the designer, allowing better testing and debugging, for example. However, from a practical point of view, there are a number of reasons why the use of a physical robotic agent is necessary. Brooks suggested through the “physical grounding hypothesis” [1, 2] that since simulations are by their very nature simplifications of the real world, they miss out details which may be important in terms of the complexity of the problem faced by the virtual agent. However, by attempting to implement a high-fidelity simulation model, one may use more resources (both human and computational) than by using a real robot – hence defeating the object of using a simulation at all. Related to this, it is also suggested that the designers of the simulation make assumptions as to what is required, thereby unintentionally introducing biases into the model, which would have an effect on the validity of the simulation. An effect of this may be unrealistic behaviours (or ones which would not map to real-world behaviour). However, it is acknowledged that when a simulator designed to be independent of any particular theory is used, this last point is effectively rendered void [3].

In addition to the practical problems outlined in the previous paragraph, there are more philosophical concerns when considering embodiment, which will now be briefly stated. The assertion that embodiment is necessary for cognition is now generally accepted, as evidenced by [4] for example. However, the definition of the notion of embodiment is far from clear. Numerous definitions have been used, eight of the most frequently used concepts of which are reviewed by [5]. Among these are general definitions such as embodiment as structural coupling to the environment, or as physical instantiation as opposed to software agents (as argued for in the previous paragraph). More restrictive definitions also exist, such as organismoid embodiment, in organism-like bodies (life-like, but not necessarily living, bodies), or organismic embodiment, which holds that only living bodies allow true embodiment. However, even if the most restrictive definition becomes generally accepted (strong embodiment: that a living body is required), it has been argued that studying 'weakly' embodied systems as if they were strongly embodied would still be a worthwhile research path [6].

One particularly persuasive argument regarding the essential elements of embodied cognition states that “...the sharing of neural mechanisms between sensorimotor processes and higher-level cognitive processes” is of central importance [7]. This view, which is supported by a wide range of empirical evidence, highlights the necessity of 'low-level' sensorimotor contingencies for 'high-level' cognitive processes. In this way, cognition is fundamentally grounded in the sensory and motor capacities of the body in which it is instantiated; cognition cannot exist without embodiment – a point emphasized in [8].

References:
[1] Brooks, R.A., Elephants don't play chess. Robotics and Autonomous Systems, 1990. 6: p. 3-15.
[2] Brooks, R.A., Intelligence without Representation. Artificial Intelligence, 1991. 47: p. 139-159.
[3] Bryson, J., W. Lowe, and L.A. Stein. Hypothesis Testing for Complex Agents. in Proceedings of the NIST Workshop on Performance metrics for intelligent systems. 2000.
[4] Pfeifer, R. and C. Scheier, Understanding Intelligence. 2001, Cambridge, Massachusetts: MIT Press.
[5] Ziemke, T. What's that thing called Embodiment? in 25th Annual Meeting of the Cognitive Science Society. 2002. (review)
[6] Sharkey, N.E. and T. Ziemke, Mechanistic versus Phenomenal embodiment: can robot embodiment lead to strong AI? Journal of Cognitive Systems Research, 2001. 2: p. 251-262. (review)
[7] Svensson, H. and T. Ziemke. Making sense of Embodiment: simulation theories and the sharing of neural circuitry between sensorimotor and cognitive processes. in 26th Annual Cognitive Science Society Conference. 2004. Chicago, IL.
[8] Clark, A. and R. Grush, Towards a cognitive robotics. Adaptive Behavior, 1999. 7(1): p. 5-16. (review)

Wednesday, February 06, 2008

A short note on Artificial Ethology

In attempting to understand the workings of a complex system such as the human brain, psychology has analysed the behaviour of individuals performing certain tasks to infer the internal processes at work as those tasks are completed. The study of behaviour is thus an important aspect of brain research. In zoology, the term ‘ethology’ describes the study of animal behaviour. ‘Artificial ethology’ thus describes the study of the behaviour of artificial agents [1], and has been described as an important aspect of research in the development of autonomous [2] or developmental robotics [3].

Robots have been used extensively in the past for exploring biological issues, using the observed behaviour of the artificial agents as a means of identifying functional requirements. ‘Grey’ Walter’s tortoises were created as a means of investigating goal-seeking behaviour, with numerous parallels drawn to simple animal behaviour (as reviewed in [4]), and used biological inspiration in much the same way as it is used today. Similarly, Braitenberg vehicles [5], particularly the simpler vehicles, have a strong biological influence (Valentino Braitenberg is himself a brain researcher, who proposed the vehicles as a thought experiment), and provide a strong example of how the environment, as coupled through the physical agent, plays just as important a role in the behaviour (and ‘autonomy’) of an agent as the control mechanism (as discussed in chapter six of “Understanding Intelligence” [6]). These two examples (many others are described and discussed in [6] and [1]) demonstrate that the use of robotic agents, and particularly the behaviour of those agents, to examine theoretical problems from the animal sciences is an established success. Indeed, it has been suggested that the ultimate aim of artificial agent research is to contribute to the understanding of human cognition [7].

References:
[1] Holland, O. and D. McFarland, Artificial Ethology. 2001, Oxford: Oxford University Press (summary)
[2] Sharkey, N.E. and T. Ziemke, Mechanistic versus Phenomenal embodiment: can robot embodiment lead to strong AI? Journal of Cognitive Systems Research, 2001. 2: p. 251-262 (review)
[3] Meeden, L.A. and D.S. Blank, Introduction to Developmental Robotics. Connection Science, 2006. 18(2): p. 93-96
[4] Holland, O., Exploration and high adventure: the legacy of Grey Walter. Philosophical Transactions Of the Royal Society of London A, 2003. 361: p. 2085-2121
[5] Braitenberg, V., Vehicles, experiments in synthetic psychology. 1984, Cambridge, Mass.: MIT Press (review)
[6] Pfeifer, R. and C. Scheier, Understanding Intelligence. 2001, Cambridge, Massachusetts: MIT Press
[7] Guillot, A. and J.-A. Meyer, The Animat contribution to Cognitive Systems Research. Journal of Cognitive Systems Research, 2001. 2: p. 157-165 (review)

Tuesday, February 05, 2008

What is autonomy?

In yesterday's post, I reviewed a paper which discussed the role of emotion in autonomy. The concept of autonomy itself was found to be quite fuzzy, with definitions being dependent on the field of research in which the term is used. In an attempt to elucidate the concept, the editorial of the special issue of BioSystems on autonomy (of which the previously reviewed paper was a part) explores some of the issues involved.

Starting from the broad definition of autonomy as self-determination (or the ability to act independently of outside influence), it can be seen that this description applies to many levels of a system (be it biological or artificial). However, the role of external (environmental) influences cannot be discounted: the reactive nature of autonomous systems is an essential part of proceedings - to the extent that some theorists have argued that there is no distinction between the two - and this is the case even in the theory of autopoiesis. So, even from a theoretical standpoint, autonomy is not isolation from the environment, but an emphasis on independence.

Even here, though, the term independence is problematic. There are three aspects which are pointed out as being of importance to the present discussion: (1) the reactive extent of interactions with the environment, (2) the extent to which the control mechanisms are self-generated, and (3) the extent to which these inner processes can be reflected upon. From these three properties of independence, it can be seen that autonomy is on a sliding scale, rather than being a binary property.

The final notion of relevance to the present discussion of autonomy is self-organisation, due to its being a central element of life, and of those properties which we desire artificial systems to have. While some have shied away from the use of this term because of the connotation of something doing the organising, the concept of self-organisation is generally used to refer to the spontaneous emergence of organisation, and/or the maintenance of the system's organisation once in this state. An interesting aspect of the term self-organising is this: a self-organising system cannot be broken down into constituent parts for analysis, since these parts are interdependent (an aspect likely to be emphasised by autopoietic approaches).

An additional aspect of the discussion of autonomy covered in this editorial paper is the theoretical tension between ALife (artificial life) and GOFAI (good old-fashioned AI) techniques. Where the latter has often been pilloried, the author points out a number of theoretical successes it has had in terms of describing autonomy and agency which have not been achieved by ALife, due to its emphasis on lower-level processes - an approach which, in its own way, has proven enormously successful in accounting for a number of the mechanisms involved.

While this discussion of the term autonomy has not resulted in a hard and fast definition, the consideration of two closely related concepts (independence and self-organisation) has placed the term into a context applicable to a wide range of research fields. Indeed, this lack of a definite definition may prove to be more productive than counter-productive.

BODEN, M. (2008). Autonomy: What is it? Biosystems, 91(2), 305-308. DOI: 10.1016/j.biosystems.2007.07.003

Monday, February 04, 2008

On the role of emotion in biological and robotic autonomy

Autonomy is a concept often used, but not always clearly defined. Indeed, there are a number of definitions in use, often dependent on the context. For example, "autonomy" may be used to refer to a mobile robot in the sense that it can move around on its own (whatever the control system used), but the same term may also be applied to a biological agent capable of defining its own goals and surviving in the real world. In the debate on autonomy, and as indicated by these examples, the concepts of embodiment and emotion are also important in being able to explain the mechanisms involved. In recent times, emotion has become a hot topic in a wide range of disciplines, from neuroscience and psychology to cognitive robotics. In order to elucidate the role of emotion in autonomy, Tom Ziemke reviews the concepts concerned and outlines a promising course of future research.

First comes a discussion of the difference between robotic and biological autonomy. This discussion is especially pertinent given the problem mentioned in the first paragraph: the widely differing definitions of autonomy used in robotics work. Important for biological autonomy is the concept of autopoiesis. Broadly speaking, an autopoietic agent is one which is capable of maintaining its own organisation: it has the ability to produce the components which define it. For example, a multicellular organism has the ability to create individual cells, which in turn form the organism itself. Despite a range of slightly different versions of the term, they all emphasise this self-constitutive property - and thereby exclude all current-technology robots. Concerning robotics, autonomy generally refers to independence from human control. The aim is thus that the robot determines its own goals in the environment. This use of the term autonomy has some problems, particularly with regard to the biological definition, but is in widespread use. An important point raised, though, is that robotic autonomy is used to refer to systems which are embodied in mobile robots acting in the real world, as opposed to the mostly disembodied decision-making systems of more traditional AI methods.

With embodiment comes the issue of grounding. Following Harnad's formulation of the symbol grounding problem, and Searle's Chinese room argument, the grounding of meaning for artificial agents is an important issue. A large amount of work was carried out in this area throughout the 90's as a means of improving agent behaviour. However, merely imposing a physical body does not necessarily result in intelligent behaviour, since this form of embodiment emphasises sensorimotor interaction, and not the other aspects (of which there are many) which are highly relevant for biological agents. The question then is: what is missing from robotic setups?

The argument is that robotic models, in addition to implementing the sensorimotor interactions which have previously been emphasised, must also link these to an equivalent of homeostatic processes: i.e. linking cognition to the body, not just to the body's sensors and motors. An example of this may be the need to keep a system variable (perhaps battery level) in a certain range - behaviour must then be modulated in order to achieve this. A number of theorists have likened this connection to a hierarchical organisation, with homeostatic processes (or metabolism) providing a base for the more 'cognitive' sensorimotor processes, thus supposedly resulting in more complex, and meaningful, emergent behaviour. Homeostatic processes are often implemented in robotic systems as emotion or value systems, which are often ill-defined, and not usually grounded in homeostatic processes, but arbitrarily added as externally defined (or observer-defined) variables. The widely differing definitions used for emotion are problematic when it comes to comparisons between architectures. One definition, provided by Damasio, breaks down the broad notion of emotion as displayed by humans into "different levels of automated homeostatic regulation" - basically, the term "emotion" can be applied to a range of behaviours, ranging from metabolic regulation, through drives and motivations, to feelings (e.g. anger, happiness). In this way, these somewhat arbitrarily defined implementations of emotion may be seen as higher levels of the emotion hierarchy, which may ultimately be tied to bodily processes (e.g. somatic theories of emotion).
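To make the battery-level example concrete, here is a deliberately simplistic Python sketch of my own (not from Ziemke's paper) of how a homeostatically-grounded variable might modulate action selection:

# Toy sketch: a regulated bodily variable (battery level) generates a 'drive'
# that biases action selection, so behaviour serves internal regulation rather
# than an arbitrarily imposed, observer-defined value.

SETPOINT = 0.7   # desired battery level (normalised 0..1)

def drive(battery):
    """Signed deviation from the homeostatic setpoint: the urge to recharge."""
    return SETPOINT - battery

def select_action(battery, task_value=0.5):
    """Choose the action with the highest value once the drive is included."""
    actions = {
        "do_task":  task_value,
        "recharge": drive(battery),   # grows as battery falls below setpoint
    }
    return max(actions, key=actions.get)

print(select_action(battery=0.9))   # -> do_task (no homeostatic pressure)
print(select_action(battery=0.1))   # -> recharge (the drive dominates)

The point of the toy is only that the 'emotional' variable is grounded in a regulated bodily process, rather than being added as an external label.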

Bringing this discussion of autonomy and emotion in artificial (robotic) systems together, it is clear that current technologies are neither autonomous in the narrow biological sense, nor implement grounded emotions (given their supposed basis in biological homeostatic processes). However, it has been argued that the narrow biological definitions do not provide sufficient conditions for cognition, and that higher-level cognitive processes are not necessarily emergent from these constitutive processes alone: interactive processes are also necessary. Similarly, the necessity of such autopoietic properties for self and consciousness has not been established. Robotic models may then be used as models of autonomy without having to rely on such philosophical concerns. The emergent conclusion, though, is that embodied cognition of the form favoured in cognitive robotics work commits itself to a central role for the body, not just in sensorimotor terms, but also in homeostatic terms. The interplay between the two is then of central importance, and its investigation is proposed as a promising avenue for future research.

Ziemke, T. (2008). On the role of emotion in biological and robotic autonomy. BioSystems, 91(2), 401-408.

Thursday, January 10, 2008

Internal Representations: a metaphor

What follows is a brief note on why I don't believe that internal representations necessarily mean complex modelling capabilities, through the use of a (slightly suspect) metaphor. This isn't based on any peer reviewed work, just some thoughts I jotted down in a rare moment of effective cerebral activity :-)

Consider the following scenario: I have a large flat piece of plasticine. I also have a small rock. Let us assume that for some incredibly important reason (which has somehow slipped my mind at this moment in time), I wish to create a representation of the rock using the plasticine, to fulfill some vital task. The two main options are:

(1) I use my hands and my eyes, and I create a model of the rock using the plasticine that is visually accurate. This would be difficult (my skills as an artist are non-existent) and time-consuming. The result would be a reasonably accurate (though not perfect) representation of the rock - the advantage of this method is that anybody else would be able to look at my model and say, without very much effort, that I have a model of a rock.

(2) I let the rock fall onto my piece of plasticine, such that it leaves an impression (assuming the rock is heavy enough and/or the plasticine is soft enough). The resulting representation of the rock would be far more accurate, though incomplete, and dependent on exactly how the rock made contact (how deep it was pushed, the angle, etc). Furthermore, whilst I may know exactly what it is (with limitations due to my very limited knowledge of the conditions), somebody else would take longer to recognise what is being represented. However, it is so much easier to do this than to create a sculpture.

Of course, there are variations on the above two methods. The most interesting/important being:

(3) I take the piece of plasticine in my hand and I push it onto the surface of the rock. In this way, I control (to a certain extent at least) what imprint is left, as I vary the force and the angle. I'm still just left with an impression in a piece of plasticine, but the benefit is that I have more information about that impression. The effort expended is almost as little as in case two, though. Of course, another observer would have just as much difficulty in recognising what this impression is a representation of.

What we have here is a very coarse, and not very subtle, metaphor for how I see the three main views of so-called 'internal representations'. Basically, despite the adverse connotations that the term 'representation' (with regard to cognition and intelligence) may conjure for some, I don't believe that it necessarily implies highly complex internal modelling facilities. I wouldn't want to take the metaphor much further than I've elaborated above for fear of it failing utterly, but the three options may be seen to roughly correspond to the following.

In point one, you have the GOFAI (good old fashioned artificial intelligence) point of view, which, supported by personal introspection, uses complex internal modelling processes to create a high-fidelity internal representation upon which various mathematical planning processes may be performed. In the second point you have reactive or behavioural robotics, where the environment pushes itself onto the agent, and shapes its behaviour directly. The metaphor is failing already - there shouldn't be any representation in this case (not an explicit one anyway - though I would argue that it is implicit in the resulting behaviour - plenty of room for argument on that one!) - but the point is that the information gleaned from the environment isn't of much use in itself, only in terms of the effect it has. It's far easier to do this though, in terms of computational load etc.

If you view these two approaches as extremes, then point three may be seen as a 'middle road' - a vastly reduced amount of computational effort through taking advantage of what is there (both in terms of the environment and the agent itself), but with the presence of more information due to the additional knowledge of how that representation was acquired. So, perhaps the analogue for this point would be active perception, or perhaps more generally, embodied cognition. As with most things, I feel this 'middle road' to have more potential than the two extremes - although that is of course not to say that they are devoid of merit, for they both have produced many interesting and useful results.
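For the programmatically inclined, the three options can be caricatured in a few lines of Python - a toy of my own making, stretching the metaphor still further, so treat it as nothing more than an illustration (the 'rock' is just a number each approach tries to recover):

# Toy caricature of the three views; invented for this post, not taken from
# any cited work. Each function returns its 'representation' of the rock.

def gofai_agent(observations):
    """(1) The sculpture: an explicit, high-effort model built from many looks."""
    return sum(observations) / len(observations)   # costly model-building

def reactive_agent(imprint):
    """(2) The dropped rock: the world presses directly into behaviour."""
    return imprint                                 # no explicit model retained

def active_agent(imprint, press_correction):
    """(3) The guided press: a cheap imprint plus knowledge of how it was made."""
    return imprint + press_correction              # probe context adds information

print(gofai_agent([7.2, 6.9, 7.1, 6.8]))   # accurate, but took much effort
print(reactive_agent(7.0))                  # cheap; useful mainly via its effect
print(active_agent(6.5, 0.5))               # cheap, corrected by acquisition context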

But why do I think that the concept of internal representations is important? Because I think that internal simulation (as Germund Hesslow called it; otherwise generally referred to as imagination, simulation, etc) is central to many, if not all, cognitive tasks, which in turn is dependent on previous experience and internal knowledge: i.e. internal representations.

Finally, I would very much like to hear anybody's views on this topic - and on my use of this suspect metaphor. I'm not aware of anyone else having used a similar metaphor (although undoubtedly someone has - I haven't looked), so I would appreciate it if someone could tell me if they have heard of one. I think I could do with reading the work of someone who's formulated this properly :-)

Monday, January 07, 2008

The simple brain?

There's a nice post up at Thinking as a Hobby on the possibility that the brain (more specifically the neocortex, rather than the brain as a whole) just isn't as complex as it may appear. Basically, the idea is that while the evolutionarily older parts of the brain are specialised, the newer neocortex is more uniform and generally generic. The question then arises as to how this generic structure gives rise to functions such as language, which only appears in the species with the most developed neocortex (i.e. us humans). The solutions are either that the assumption of uniformity is wrong, or that emergence plays a huge role (where simple rules give rise to complex behaviour). A very nice thought-provoking post.

I was thinking, though, from the complex-behaviour-as-emergent standpoint above, that this wouldn't be enough to explain something like language. The environment would have to play a disproportionately large role in proceedings (compared to it simply being a matter of brain complexity): specifically inter-human interaction or, more broadly, societies, would have to be taken into account. Essentially, the complexity of behaviour that undoubtedly exists comes from the external world rather than the internal 'rules'. The consequence of this would be that to study the emergence of language (for example), inter-agent interaction would be just as, if not more, important than the internal complexity of an individual agent. So instead of the relatively simple neocortex making things easier in terms of describing complex behaviour such as language, it would actually become more difficult, since there would be multiple concurrent levels of analysis.

Just a thought, mind, I could be missing the point :-)

Back to the beginning though, this post by Derek James is very interesting.