Tuesday, May 22, 2007

Encephalon #23

The 23rd issue of Encephalon is now up at Madam Fathom! Another excellent installment, which this time has four main themes: Approaches to understanding the mind, Evolution, Cognitive abilities, and Language and behaviour.

My picks from this issue:
- A brief review at The Neurocritic of the problems with the modular view of mind/brain that led Fuster to develop his Network Memory theory (something I've posted on a number of times).
- A review of the Human Brain Evolving Symposium at The Thinking Meat Project. In particular the last paragraph, which actually has nothing to do with the symposium itself - it's a general comment on the utility of scientific method and discourse, and one which I think makes an excellent point.

Friday, May 18, 2007

Video lectures on How the Mind Works

It's a link that many have referenced in their posts over the past week or so (I found out through MindHacks and MindBlog) - a series of short lectures on how the mind works, covering a wide variety of topics, including consciousness (by Dan Dennett) and 11 others which sound equally interesting. They're between 15 and 25 minutes long - perfect for having on whilst eating lunch (which is what I intend to do...).

Link to the lectures

Wednesday, May 16, 2007

Processing at the top of the cortical hierarchy and the Perception-Action cycle

As I've reviewed previously, Fuster’s Network Memory Theory proposes that hierarchies of cortical neural networks may be formed through experience, and that this arrangement explains the full range of behaviour, from the most basic of movements to complex movements and planning. The development of these hierarchies depends not only on the direct experience of the individual, but also on the 'memory of the species' upon which these may be based: "Memory is formed in the cortex from the bottom up, along ontogenetic gradients, from primary areas to progressively higher areas of association." The purpose of the present paper (ref at end) is to describe the perception-action cycle (by which the hierarchies are integrated with the environment), and what goes on at the upper levels of the cortical hierarchies. I consider this latter point an important one, as an undefined 'head' of the respective hierarchies may leave the theory open to the problem of having to explain seemingly all-powerful executive constructs (my opinion - I may be mistaken here).

As may be self-evident (though not according to the enactive perception view, for example), sensory information is necessary to determine which actions are appropriate at any given moment. These actions would change the state of the environment in some way, which in turn would change the sensory information obtained. This process is thus circular, leading to the term 'The Perception-Action cycle'. In the paper, neuroanatomical and neuroimaging evidence from humans and other primates is presented in support of this view: "there is anatomical evidence of an orderly descent of connections from prefrontal to premotor to motor cortex...", and, "... as Koechlin et al. also show, the activation of progressively lower frontal areas that process the action is cumulative." The evidence then points to a situation where "Automatic and well-rehearsed actions in response to simple stimuli are integrated at low levels of the cycle, in sensory areas of the posterior (perceptual) hierarchy and in motor areas of the frontal (executive) hierarchy." Furthermore, it also supports the prediction of the perception-action cycle that sensory information flows from posterior sensory regions towards frontal controlling (or executive) areas of the hierarchy structure.
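The circularity of the cycle can be sketched in a few lines of code - a minimal toy model, not anything from the paper, with all names and values invented for illustration:

```python
# A minimal sketch of the perception-action cycle: perception selects an
# action, the action changes the environment, and the changed environment
# in turn changes what is perceived on the next pass. Purely illustrative.

def perceive(environment):
    """Sensory information is read from the current environmental state."""
    return environment["light_level"]

def select_action(percept):
    """Sensory information determines which action is appropriate."""
    return "approach" if percept < 5 else "withdraw"

def act(environment, action):
    """Acting changes the state of the environment in some way."""
    delta = 1 if action == "approach" else -1
    environment["light_level"] += delta
    return environment

environment = {"light_level": 2}
for _ in range(6):                 # the process is circular, not one-shot
    percept = perceive(environment)
    action = select_action(percept)
    environment = act(environment, action)

print(environment["light_level"])  # settles into oscillation around 5
```

The point of the sketch is only that perception and action cannot be separated: each iteration's percept is a consequence of the previous iteration's action.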

So, what happens at the top of the hierarchies? As has been mentioned, the lower levels of the hierarchy are responsible for the relatively simple actions - as one 'rises' in the hierarchy, the actions and perceptual stimuli represented would be more complex, and more abstract from the real world grounding of representations lower down. Cooperation, and eventually integration, of the hierarchies would thus become more important at these higher levels. The cortical areas postulated to be at the top are the prefrontal cortex (particularly the rostral area thereof), and areas of the higher sensory association cortex. The Koechlin et al study mentioned previously also showed, using fMRI analysis, the descending activation down the hierarchies, supporting the theorised order of processing in the frontal hierarchy (the upper levels influencing the functioning of the lower levels).

Given that this difference exists as one rises up the hierarchy, how are actions decided when, in some instances, there may be competing options from either end of the hierarchy? Fuster proposes that the next action in a sequence of actions is determined under two influences: the current sensory situation, and "the processing of the global aspects of the sequence in upper frontal areas...". As mentioned in the previous paragraph, the upper layers of both hierarchies 'inform' those below - before this may occur, the information in both hierarchies is integrated with previous information. In this way, the prefrontal cortex integrates temporal associations which are stored in the networks of the perceptual and executive cortex.
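Fuster's two-influence proposal can be caricatured as a two-level controller - a deliberately crude sketch, with all class and method names invented here rather than taken from the paper:

```python
# Illustrative sketch of the idea that the next action is decided under two
# influences: the global aspects of the sequence (held at the top of the
# executive hierarchy) and the current sensory input (handled lower down).
# All names are assumptions for illustration only.

class UpperFrontal:
    """Top of the executive hierarchy: holds the global plan/sequence."""
    def __init__(self, plan):
        self.plan = plan
        self.step = 0

    def next_goal(self):
        goal = self.plan[self.step % len(self.plan)]
        self.step += 1
        return goal                # descends to 'inform' lower levels

class LowerMotor:
    """Lower level: integrates the descending goal with current sensation."""
    def select(self, goal, percept):
        if percept == "obstacle":
            return "avoid"         # automatic response, handled at a low level
        return goal                # otherwise the descending goal wins

upper = UpperFrontal(["reach", "grasp", "lift"])
lower = LowerMotor()

actions = [lower.select(upper.next_goal(), p)
           for p in ["clear", "obstacle", "clear"]]
print(actions)                     # ['reach', 'avoid', 'lift']
```

Note how the automatic, well-rehearsed response to the obstacle is resolved at the lower level without consulting the plan, while the remaining steps follow the global sequence - a (very loose) analogue of integration at different levels of the cycle.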

As a final comment, a note is made on serial and parallel processing in the hierarchies. "Contrary to common misconception, nowhere in an ascending or descending cortical hierarchy does processing need to be exclusively serial. In part because of the dependence on feedback throughout, the processing in the perception-action cycle takes place not only in series but also in parallel."

J.M. Fuster (2004), "Upper processing stages of the perception-action cycle", Trends in Cognitive Sciences, vol. 8, no. 4, pp. 143-145
Koechlin et al. (2003), "The architecture of cognitive control in the human prefrontal cortex", Science, vol. 302, pp. 1181-1185

Monday, May 14, 2007

Conscious Entities

Discovered a blog today called "Conscious Entities", which I think is very interesting in general, and which discusses a wide variety of concepts related to consciousness. What brought me there was a post which is essentially on embodiment, even though that term isn't mentioned explicitly. It is concerned with the Blue Brain project, where an attempt has been made to simulate a mouse brain (an extremely good attempt, in my opinion). As I've mentioned in numerous previous posts, I believe embodiment to be essential for the emergence (or otherwise) of intelligence: Peter (of Conscious Entities) mentions that "How the body shapes the way we think" (by Pfeifer and Bongard) elaborates on this view by providing some compelling examples - although he seems unconvinced by the arguments, and makes some valid points.

Related post: Mechanistic and Phenomenal Embodiment

Relevant reference: "Understanding Intelligence", R. Pfeifer and C. Scheier, MIT Press, 2001

Thursday, May 10, 2007

Encephalon #22

The latest edition of Encephalon is now up at The John Hawks Weblog.

Only 11 made it in this time, and they're all excellent as usual. I've two picks from this edition:
- The Neurocritic has two posts on the neural correlates of attention: Bottoms-up (which looks at a recent paper in Science), and Tops-down.
- Sensory substitution, and what it has to do with bats, over at Developing Intelligence

Thursday, May 03, 2007

Mechanistic and Phenomenal Embodiment

As may be seen from some of my previous posts, I consider the concept of embodiment to be a key one not only in the understanding of neural intelligence and cognition, but also in the creation of artificial intelligence and cognitive robotics. The question of what embodiment actually means is thus a vital initial step. This paper, by Sharkey and Ziemke (ref at end of post), distinguishes two types of embodiment, mechanistic and phenomenal, which lie at opposite ends of a continuum, and assesses whether either, given current robotic technologies, is sufficient for creating strong AI (as defined by Searle, strong AI is the claim that "the implemented program, by itself, is constitutive of having a mind. The implemented program, by itself, guarantees mental life." - Searle, 1997).

Mechanistic Embodiment:
• Is the view that "cognition is embodied in the control architecture of a sensing and acting machine." This is the entirety of the agent: there is nothing separable from this mechanism - no internal representations or symbols (the symbol grounding problem is thus not an issue).
• This form of control stems from the work of Sherrington (who studied the nervous system and reflexes) and Loeb (who worked on tropisms - directed movements towards or away from stimuli, later termed taxis - the manner in which the environment directs the actions of the agent).
• Fraenkel and Gunn (1940) proposed that "the behaviour of many organisms can be explained as a combination of taxes working together and in opposition." The authors of this paper indicate that it was this general point of view which "heralded behaviour-based robotics".
• An excellent example of mechanistic embodiment as a combination of Sherrington's and Loeb's ideas is the electronic tortoises built by Grey Walter in the 1950s. These tortoises were electromechanical devices, consisting of basic electronic components, which enabled them to exhibit relatively complex behaviours - including light aversion and attraction depending on battery level, obstacle avoidance, and constant movement.

Phenomenal Embodiment:
• As opposed to mechanistic embodiment, which is wholly concerned with the physical, phenomenal embodiment is the embodiment of a mental or subjective world. It finds its roots in von Uexkull's Umwelt (the subjective or phenomenal world), which brings together an organism's perceptual and motor worlds. This view is nevertheless somewhat compatible with Sherrington and Loeb's view of the organism as an integral part of its environment - though here perception and action are of central importance, rather than physical form.
• The study of embodied cognition has seen a reassessment of the relevance of "life and biological embodiment". In this case, the biological (physical) embodiment defines the perceptual world, thus fundamentally affecting the phenomenal embodiment.
• This leads to a definition of cognition: "...cognition is viewed as embodied action by which Varela et al (1991) mean '...first, that cognition depends upon the kinds of experience that come from having a body with various sensorimotor capacities, and second, that these individual sensorimotor capacities are themselves embedded in a more encompassing biological, psychological, and cultural context.'".
• One more quote: "Thus phenomenal embodiment is the notion of cognition as an intrinsic part of a living body; as a process of that body."

Given these two extreme types of embodiment, the paper then turns to the question of whether either can lead to strong AI. The following is only the briefest of summaries of the arguments presented, and instead focuses on the conclusions drawn.

The first point of discussion on phenomenal embodiment was the presence of a living body. The uncontroversial argument is put forward that (current) robots do not have bodies which act in the same way as the bodies of biological agents: a robot is constructed from previously fabricated parts brought together in a predefined configuration, whereas "the construction of an animal starts centrifugally; animal organs grow outwards from single cells." A further argument, and a more important one for this discussion, is that a living system, such as a single cell, is capable of creating and maintaining its identity in an environment of constant perturbation, despite its individual components continually changing and being renewed. It is from this that the concept of autopoiesis (which essentially means self-creating or self-producing) is introduced, first laid out by Maturana and Varela (1980). A quote: "An autopoietic machine, such as a living system, is a special type of homeostatic machine for which the fundamental variable to be maintained constant is its own organisation. This is unlike regular homeostatic machines which typically maintain single variables, such as temperature or pressure." It is argued that it is this concept that forms the largest barrier between current robots and living systems in terms of embodiment. In similar terms, robots may be described as allopoietic, where the organisation of the system is defined by elements which are themselves not part of the organisation - in this sense, machines cannot be defined as truly autonomous. Indeed, a number of people have made comments along these lines - essentially that strong phenomenal embodiment is not possible on a robot, because it does not have the prerequisite living body. To get round this problem, the authors acknowledge that simulation may be used.
However, since this would, in the authors' words, "...view an allopoietic machine 'as if' it were an autopoietic machine...", it would be weak, and not strong, embodiment.

Concerning mechanistic embodiment, the argument essentially revolves around the attribution of meaning by observers - what is termed in the paper the "clever AutoHans error". Most will be familiar with the story of Hans, a horse apparently capable of basic arithmetic. The horse stunned audiences of scientists and the public alike with its ability to tap out with its hoof the answers to simple sums displayed in front of it on a piece of paper. Hans always got the answer right, which led to a number of serious theories concerning the use of mental arithmetic by the horse. To cut a long story short, it was eventually ascertained that Hans was picking up on subtle cues given by the observers once it had tapped its hoof the correct number of times (Hans tapped its hoof an arbitrary number of times when the sum was hidden from the observers). The 'moral of the story' is that it was the observers, and the observers alone, who attributed the capability of mental arithmetic to the horse - the number of times Hans tapped its hoof likely had no inherent meaning to it, merely that if it stopped tapping on a certain cue it would be rewarded. Similarly, if one were observing a small mobile robot - an 'AutoHans', one could say - which was programmed, using light sensors and ultrasonic sensors respectively, to turn towards light and away from obstacles while always moving forward (essentially a type of Braitenberg vehicle), the behaviour of the robot could be described as light-following and obstacle-avoiding. An uninformed observer may even state that the robot liked light, and didn't want to hurt itself by hitting a wall. However, these explanations only have meaning for the observer, just as with Hans the horse. Similarly, the behaviour of Grey Walter's tortoises would only have meaning for the observer - the mechanical agent itself merely acts according to pre-defined rules.
Essentially, the agent is created to perform a certain way in a certain environment, even if this is not the intention of the designer. The goal of the behaviour is thus defined by the researcher and not by the agent - hence the same situation as before: the behaviour only has meaning for the observer.
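The whole control architecture of such an 'AutoHans' can be written down in a few lines, which makes the point vivid. This is my own toy sketch, with invented sensor values and gains, not anything from the paper:

```python
# A sketch of a Braitenberg-style 'AutoHans': wiring that steers the robot
# towards the brighter side and swerves it away from close obstacles.
# Nothing in these rules encodes 'liking' light or 'fearing' walls - those
# descriptions exist only for the observer. All values are invented.

def wheel_speeds(light_left, light_right, obstacle_distance):
    """Pre-defined rule: bias the wheel speeds towards the brighter side;
    swerve hard when an obstacle is close. Returns (left, right) speeds."""
    base = 1.0                                 # always moving forward
    turn = 0.5 * (light_right - light_left)    # steer towards the light
    if obstacle_distance < 0.2:                # ultrasonic sensor threshold
        turn += 2.0                            # hard swerve away
    # a faster left wheel turns the robot to the right, and vice versa
    return base + turn, base - turn

# Light is stronger on the right, no obstacle nearby: the left wheel runs
# faster, so the robot turns towards the light.
left, right = wheel_speeds(light_left=0.2, light_right=0.8,
                           obstacle_distance=1.0)
print(left > right)   # True
```

An observer watching this robot would see purposeful 'light-seeking', yet the entire mechanism is a fixed arithmetic rule chosen by the designer - which is exactly the attribution error the paper describes.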

This brief discussion of the two extremes of embodiment had a common element for the authors: that with current technologies, robots do not have living bodies. For phenomenal embodiment the implications were questions of physical, and hence mental, autonomy; for mechanistic embodiment these same issues of autonomy arise, albeit from a different source: that of goal creation. The authors' conclusion, based on these deficiencies, is that strong embodiment is not in principle possible for robots using current technologies. However, it is acknowledged that weak embodiment of both types exists, and that it maintains utility in the study of animal behaviour (as I have previously posted on, here and here). Essentially, one studies the allopoietic robotic system as if it were an autopoietic system. This approach may yield useful insights into behaviour, even if it cannot be claimed that the created agents are equivalent to their biological counterparts.