My area of work is cognitive robotics, which I basically take to mean the development of a 'cognitive' system on a robotic platform, an approach which enables the incorporation of theories of embodiment (and which I believe is becoming more prevalent). I am unclear, however, on a more precise definition of the term 'cognitive robotics': what does it actually mean? My intention here is to look at this definition more closely and to explore my interpretation of the term. My thoughts will undoubtedly not fit with everybody's ideas, but this is not meant as an objective definition, more as a personal research guide.
Starting with a popular definition, "cognitive robotics" is defined by Wikipedia as being: "concerned with endowing robots with high-level cognitive capabilities to enable the achievement of complex goals in complex environments using limited computational resources. Robotic cognitive capabilities include perception processing, attention allocation, anticipation, planning, reasoning about other agents, and reasoning about their own mental states. Robotic cognition embodies the behaviour of intelligent agents in the physical world (or a virtual world, in the case of simulated CR)." The rest of the article gives slightly more detail, but essentially revolves around this statement. This definition is high-level and very general: in my humble opinion, too general to be of much practical use, and I believe it also misses an important element. A more detailed definition is therefore needed.
This question (what does cognitive robotics actually mean?) was a discussion topic at last summer's COGRIC, which I was extremely fortunate to attend. The discussion at one of the ideas sessions turned to the definition provided by Clark and Grush in their landmark 1999 paper "Towards a cognitive robotics" (full reference below). In this paper, the authors discuss the 'conditions' required for a science (in their discussion, robotics) to be considered cognitive. The paper is perhaps best summarised through quotations, although reading the full paper is of course recommended.
• Truly cognitive phenomena are those that involve off-line reasoning, vicarious environmental exploration, and the like.
• Cognizers, on our account, must display the capacity for environmentally decoupled thought and the contemplation of options. The cognizer is thus a being who can think or reason about its world without directly engaging those aspects of the world that its thoughts concern.
• We hold that fluent, coupled real-world action-taking is a necessary component of cognition.
• Cognition, we want to say, requires both fluent real-world coupling and the capacity to improve such engagements by the use of de-coupled, off-line reasoning.
• We do not advocate the focus on top level cases (such as mental arithmetic, chess playing and so on) that characterised early cognitivist research. The motor emulation case was chosen precisely because it displays one way in which the investigation of quite biologically basic, robotics-friendly problems can nonetheless phase directly into the investigation of agents that genuinely cognize their worlds.
• It is these “Cartesian Agents”, we believe, that must form the proper subject matter of any truly cognitive robotics.
So, to summarise: cognition requires not only real-time interaction with the real world (thus incorporating the concept of embodiment), but also the ability to internally improve one's interactions with the environment without the environment actually being present. The cognitive agent must therefore be able to internally simulate its interactions with the world in some way, and be able to learn from this process. The first part is a pervasive element of autonomous robotics design: the ability to interact with the environment in real time, and to learn from this interaction. The importance of the second part was not at first apparent to me, but upon reflection, and after examining Hesslow's simulation hypothesis (and its use by eminent researchers, such as Murray Shanahan, in their work), it does seem an essential extension of the first.
The second-to-last bullet point is, I think, particularly important too. It is this that separates cognitive robotics from what I will term classical AI, which attempted, and possibly still attempts, to recreate human high-level behaviour in the form of mathematical algorithms and the like. Again, the idea of embodiment is central: a genuinely cognitive agent requires real-time interaction with the real world, so not only the planning of a behaviour is necessary, but also its actual execution. Robotics is thus a very apt choice for the implementation of these ideas: all variables (figuratively speaking...) are controllable, affording good scientific practice and relatively easy measurement of both behaviour and internal functioning (which is obviously not possible with biological agents).
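To make this two-part requirement concrete, here is a minimal sketch in Python. It is entirely my own illustration, not anything from Clark and Grush or Hesslow: a toy agent in a hypothetical one-dimensional corridor first acts in the 'real' world and learns a forward model from that coupling, and then, with the world absent, rehearses action sequences against its own model to find a plan. All names (World, Agent, explore, rehearse) are invented for the example.

    import random

    class World:
        """The 'real' environment: a corridor of cells with a goal at one end."""
        def __init__(self, size=10, goal=9):
            self.size, self.goal, self.pos = size, goal, 0

        def step(self, action):  # action is -1 (left) or +1 (right)
            self.pos = max(0, min(self.size - 1, self.pos + action))
            return self.pos

    class Agent:
        def __init__(self):
            # Learned forward model: (state, action) -> observed next state
            self.model = {}

        def explore(self, world, steps=500):
            """Real-time coupling: act in the world, record what actually happens."""
            for _ in range(steps):
                state = world.pos
                action = random.choice([-1, 1])
                next_state = world.step(action)
                self.model[(state, action)] = next_state

        def rehearse(self, start, goal, horizon=20, trials=500):
            """Off-line reasoning: simulate action sequences internally, with no
            access to the world, and keep the shortest plan that reaches the goal."""
            best = None
            for _ in range(trials):
                state, plan = start, []
                for _ in range(horizon):
                    action = random.choice([-1, 1])
                    state = self.model.get((state, action), state)  # imagined transition
                    plan.append(action)
                    if state == goal:
                        if best is None or len(plan) < len(best):
                            best = list(plan)
                        break
            return best

    world = World()
    agent = Agent()
    agent.explore(world)                    # coupled phase: learn from real interaction
    plan = agent.rehearse(start=0, goal=9)  # decoupled phase: vicarious exploration
    print("rehearsed plan:", plan)

The point of the sketch is simply that the off-line rehearsal only works because the model was built through real interaction: the 'vicarious environmental exploration' of the first bullet point is grounded in the coupled phase.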
Just to finish, I think most of the ideas covered here are discussed in a paper I reviewed a few days ago by Warwick and Nasuto. Within the context set by that paper, cognitive robotics (and embodied cognition in general) seems to be emerging as a very important research field. I would of course appreciate any comments on this subject, as this post is not comprehensive and probably overlooks some aspects.
REF: A. Clark and R. Grush, "Towards a cognitive robotics," Adaptive Behavior, vol. 7, pp. 5-16, 1999.
UPDATE (17/01): Further discussion in my following post
2 comments:
Sorry to comment on an old post, but I reached this via your post linking to the four definitions of cognition, and this somehow seemed the more appropriate place for my comment.
So, to summarise: cognition requires not only real-time interaction with the real world (thus incorporating the concept of embodiment), but also the ability to internally improve one's interactions with the environment without the environment actually being present. The cognitive agent must therefore be able to internally simulate its interactions with the world in some way, and be able to learn from this process.
I think this makes sense, but with one caveat. The ability to "interact" with the environment even when it isn't present, the ability, in other words, to think abstractly, must itself be constituted by (or at least depend on) our real-time coping with the environment.
A robot, for example, that avoids obstacles using Brooks' subsumption architecture and plans its routes using predicate calculus is certainly an engineering achievement, but it would tell us very little about cognition.
Thanks for your comment - comments that spark discussion are welcome, however old the post is!
I agree with your caveat - it is implicit in the statement you quoted, and you're quite right to make it explicit. From a developmental point of view, since all 'knowledge' (be it sensorimotor contingencies or more abstract declarative information) has been (and would have to have been) learned through experience, it follows that any 'internal simulation' would have to be based on this: the same competencies used for real-world behaviours, though with the necessary differences to prevent overt actions.
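To illustrate that last point with a toy sketch of my own (hypothetical names throughout, and very much a caricature): one and the same controller can drive both overt behaviour and covert simulation, with the only difference being whether its output reaches the actuators or a learned forward model.

    def controller(state, goal):
        """Shared competency: decide which way to move."""
        return 1 if goal > state else -1

    def forward_model(state, action):
        """Learned internal model standing in for the world during covert rehearsal."""
        return state + action

    def act(state, goal, covert=False, actuate=None):
        action = controller(state, goal)
        if covert:
            return forward_model(state, action)  # overt action inhibited: imagine the outcome
        return actuate(action)                   # overt mode: the command reaches the world

    # Covert rehearsal from state 3 toward goal 7, with no actuators involved:
    state = 3
    while state != 7:
        state = act(state, 7, covert=True)
    print("imagined final state:", state)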
Thanks again for your comment.