In the following notes, the term 'animat' is used often. It essentially refers to an artificial cognitive agent, whether simulated or embodied in a robot. I think many of the principles and views expressed here are highly relevant to my discussion of the term 'cognitive robotics'.
Traditional AI was confronted with the Symbol Grounding Problem (Harnad: how do arbitrary symbols come to mean anything in relation to the external world/environment?), which is related to the Frame Problem (given everything I can sense, what is actually relevant?). Furthermore, these traditional AI systems were prone to the Brittleness Problem (Holland: performance is only guaranteed within a very tightly defined domain). The animat approach emphasises the fact that all known cognitive agents are embodied in the real world: the control system must be situated in its environment.
In a paper (full reference below) reviewing the SAB2000 conference in Paris, seven major areas of animat research were identified:

(1) The embodiment of cognitive agents (the work of Pfeifer et al. and Krichmar et al. being prime examples).

(2) Perception and motor control (mainly focused on sensor and actuator design).

(3) Action selection and behavioural sequences (how to choose what action to perform next; the influence of prediction methods is apparent here).

(4) Internal modelling, or mapping, of the environment (this section seems heavily influenced by the place-cell phenomenon discovered in rats; see my previous post for further comments on this).

(5) Learning (a number of elements in this part, including classifier systems, reinforcement learning, the influence of emotions, and biologically plausible Hebbian learning).

(6) Evolution (the evolution of controllers is the obvious presence, but the co-evolution of agent morphology and controller, i.e. genotype and phenotype, also features).

(7) Collective behaviours (studies on cooperation between agents, communication, and even the emergence of the competition approach to multi-agent interactions, where hostile competition is used instead of cooperation).
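As a toy illustration of the biologically plausible Hebbian learning mentioned under (5) — this is my own sketch, not something from the paper — the core idea is that a connection weight is strengthened in proportion to correlated pre- and post-synaptic activity (Δw = η·x·y). The plain rule is unstable, so stabilised variants such as Oja's rule add a decay term, which has the side effect of making the weight vector converge towards the principal component of the input:

```python
import numpy as np

def hebbian_update(w, x, eta=0.01):
    """One learning step using Oja's stabilised Hebbian rule:
    dw = eta * y * (x - y * w), where y = w . x is the
    post-synaptic activity. The -y*w decay keeps |w| bounded."""
    y = float(np.dot(w, x))
    return w + eta * y * (x - y * w)

# Toy demo: inputs whose dominant direction of variation is (1, 1).
# Repeated Hebbian updates align w with that principal direction.
rng = np.random.default_rng(0)
w = rng.normal(size=2)
for _ in range(5000):
    x = rng.normal() * np.array([1.0, 1.0]) + 0.1 * rng.normal(size=2)
    w = hebbian_update(w, x)

w_unit = w / np.linalg.norm(w)
print(w_unit)  # components roughly equal in magnitude (~0.707 each)
```

The constants (learning rate, number of steps, input distribution) are arbitrary choices for the demo; the point is only that a local, correlation-driven update with no error signal still extracts structure from the input stream, which is what makes such rules attractive as biologically plausible learning mechanisms for animats.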
The paper ends with an interesting review of the short-, medium-, and long-term goals of animat research. Given that the paper was written six years ago, it is notable that (from my point of view, anyway) all three remain highly relevant. The short-term goal is stated as being to "devise architectures and working principles that allow a real animal, a simulated animal, or a robot to exhibit a behaviour that solves a specific problem of adaptation in a specific environment". From my (albeit limited) perspective, a large amount of current work is at this stage, or towards the end of it. The intermediate-term goal would be to "generalise this practical knowledge and make progress towards understanding what architectures and working principles can allow an animat to solve what kinds of problems in what kinds of environments". I believe a notable example of work at this stage is that of Pfeifer and associates on embodied cognition. This stage would require systematic comparison between different architectures on the same problem domain in order to establish some general (and perhaps universal?) principles. As the authors put it: "...the fact is that the number of architectures and working principles has grown much faster than the number of comparisons...". Finally, the consensus at SAB2000 seems to have been that the long-term (ultimate?) goal of animat research is to contribute to our understanding of human cognition. In my limited experience, this point is not often made explicit, despite the implications it would necessarily have for how the work progresses and for the methodologies used.
REF: Guillot, A. & Meyer, J.-A. (2001), "The animat contribution to cognitive systems research", Journal of Cognitive Systems Research, vol. 2, pp. 157-165.