Monday, January 29, 2007

Encephalon #15 and the Capgras Delusion

The 15th Edition of Encephalon is now out at SharpBrains!

An excellent roundup of blog posts as usual; however, one post caught my eye in particular: The limits of rational thought at Neurontic (author: Orli). It discusses a book by Richard Powers ("The Echo Maker"), whose protagonist suffers from a rare condition known as the Capgras Delusion (which manifests itself after severe head trauma). In this condition, the sufferer sincerely believes that those closest to them (a sister in this story) are imposters - i.e. the person's face is recognised as familiar, but the sufferer concludes that someone is impersonating the loved one. If I might use a quote from the blog post, which succinctly describes the neurological fault:

"Capgras Delusion is now believed to be a neurological syndrome caused by faulty wiring between the two areas of the brain involved in facial recognition: the temporal lobe, which contains pathways specializing in identifying faces, and the limbic system, which is responsible for attributing emotional significance to these faces. "

The first time I came across this syndrome was as a case study explored by V.S. Ramachandran ("Phantoms in the Brain"), where he described a young man, also suffering from the Capgras Delusion, who thought his parents were imposters. Going back to the blog post, it mentions that Damasio studied people whose brain damage had destroyed the regions known to be required for emotion-related processing. His findings are quite astonishing: these patients were unable to make even the simplest of decisions. Again if I may, a quote of a quote from the blog post:

"And Damasio's conclusion is that unless you have the emotional inputs, you can never evaluate between the logical possibilities."

Orli's post finishes with a comment on making snap decisions: research has indicated that snap decisions are often better than those made with the input of the 'higher cognitive processes'. My thoughts naturally turn to the implications of Damasio's conclusions for my own work: cognitive robotics. We, as humans, consider ourselves to be rational beings, capable of 'cold, hard logic'. However, the evidence provided by sufferers of the Capgras Delusion suggests otherwise: that emotion is intimately linked to cognitive processing and, more importantly, to decision making. This may have profound implications for the way in which cognitive models in general are approached - it seems as though an internal means of evaluating between the 'rational' choices is required, a value function if you will. In other words, emotions, or emotion-like constructs. From my perspective, cognitive models at the moment largely view the human cognitive system from the 'cold, hard logic' point of view, although I believe that this is slowly changing.
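
To make this a little more concrete, here is a minimal sketch (in Python, with entirely made-up actions and valence numbers) of what I mean by a value function: when several options are all equally 'logical', an emotion-like valence score is what lets the agent choose at all.

```python
# A toy illustration (hypothetical names/values): several candidate actions
# are all logically acceptable, so pure logic cannot choose between them.
# An emotion-like valence score acts as the tie-breaking value function.

candidate_actions = {
    "take_route_a": {"logically_valid": True, "valence": 0.2},
    "take_route_b": {"logically_valid": True, "valence": 0.7},
    "wait":         {"logically_valid": True, "valence": -0.1},
}

def choose(actions):
    # Filter to the logically acceptable options...
    valid = {a: v for a, v in actions.items() if v["logically_valid"]}
    # ...then let the emotion-like value function decide between them.
    return max(valid, key=lambda a: valid[a]["valence"])

print(choose(candidate_actions))  # -> "take_route_b"
```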

As always, thoughts and comments are more than welcome - I seek to learn.

Friday, January 26, 2007

The Binding Problem

Pure Pedantry has reviewed a paper on the binding problem by Bodelon, Fallah and Reynolds. It uses a method involving an LCD screen and varying visual cue frequencies to determine whether or not the binding of stimuli separated in the visual processing stream requires a distinct amount of time (i.e. a separate resource). The findings indicate a strong 'yes', the implications of which may be very important. As soon as I can get my hands on a copy, I'll attempt a review, with the possible implications for cognitive robotics and related subjects.

The Animat Contribution to Cognitive Systems Research

In the following notes, the term 'animat' is often used: it essentially means an artificial cognitive agent, be it simulated or embodied in a robot. I think many of the principles and views expressed here are highly relevant to my discussion of the term 'cognitive robotics'.

Traditional AI was confronted with the Symbol Grounding Problem (Harnad - from arbitrary symbols, what do these actually mean in relation to the external world/environment?), which is related to the Frame Problem (from what I can sense, what is relevant?). Furthermore, these traditional AI systems were prone to the Brittleness Problem (Holland - performance only guaranteed in a very tightly defined domain). The animat approach emphasises the need to take into account the fact that all known cognitive agents are embodied in the real world: the control system must be situated in the environment.

In a paper (full ref below) reviewing the SAB2000 conference in Paris, seven major areas of animat research were identified:
- The embodiment of cognitive agents (with the work of Pfeifer et al. and Krichmar et al. being prime examples);
- Perception and motor control (mainly focused on sensor and actuator design);
- Action selection and behavioural sequences (how to choose what action to perform next - the influence of prediction methods is apparent);
- Internal modelling, or mapping, of the environment (this section seems to be heavily influenced by the place-cell phenomenon discovered in rats - see previous post for further comments on this);
- Learning (a number of elements in this part, including classifier systems, reinforcement learning, the influence of emotions, and biologically plausible Hebbian learning);
- Evolution (the evolution of controllers is the obvious presence, but the joint evolution of agent morphology and controller is also present);
- Collective behaviours (studies on cooperation between agents, communication, and even the emergence of the competition approach to multi-agent interactions, where hostile competition is used instead of cooperation).

An interesting end to the paper reviews the short-, medium-, and long-term goals of animat research. Given that this paper was written six years ago, it is notable that these three (from my point of view anyway) are still highly relevant. The short-term goal is stated to be to "devise architectures and working principles that allow a real animal, a simulated animal, or a robot to exhibit a behaviour that solves a specific problem of adaptation in a specific environment". From my (albeit limited) perspective, a large amount of work is at this stage, or towards the end of it. The intermediate-term goal would be to "generalise this practical knowledge and make progress towards understanding what architectures and working principles can allow an animat to solve what kinds of problems in what kinds of environments". I believe that a notable example of work in this stage is that of Pfeifer and associates on embodied cognition. In this stage, a systematic comparison between different architectures (in the same problem domain) would be required in order to establish some general (and universal?) principles. According to the authors of this paper: "...the fact is that the number of architectures and working principles has grown much faster than the number of comparisons...". Finally, it seems that the consensus at SAB2000 was that the long-term (ultimate?) goal of animat research is to contribute to our understanding of human cognition. From my limited experience, this point is not often made explicit, despite the implications it would necessarily have for the progress of the work, and for the methodologies used in pursuing it.

REF: A. Guillot and J.-A. Meyer, "The animat contribution to cognitive systems research," Journal of Cognitive Systems Research, vol. 2, pp. 157-165, 2001.

Thursday, January 25, 2007

Artificial Neural Network and Implementation tutorial

Thanks to Mind Hacks for pointing out the following tutorial (by Sacha Barber) on Artificial Neural Networks (ANNs) and their implementation in C# (which is similar to C++ in many respects).

From what I've read of it so far, it includes a good overview of the biological inspiration behind ANNs before moving on to progressively more complex network topologies. One thing I like about this tutorial is that it covers how to actually code these networks - something I find lacking in other sources. The tutorial is split into three parts: the first provides the biology background and introduces the perceptron; the second goes through multilayer perceptrons and backpropagation (with C# implementation notes); and the third explores the use of genetic algorithms for ANN training.
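
As a taster of the first part's material, here is a minimal sketch of the classic perceptron learning rule - in Python rather than the tutorial's C#, and my own illustration rather than code from the tutorial - learning the linearly separable AND function.

```python
# A minimal perceptron sketch: weighted inputs, a hard threshold, and the
# classic error-driven weight update.
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Weighted sum followed by a hard threshold.
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            # Adjust weights in proportion to the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the logical AND function, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)
```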

And judging from the comments provided at the end of the tutorial, it's well worth a read!

Link to the first part of the tutorial.

Public Debate on "The Future of Science"

Yet again, the BBC has provided a story of interest to me: a government initiative launched today - called ScienceHorizons - has been created to promote debate on the future of science, or more particularly, people's "hopes and fears for future technologies". The project is due to run until the autumn, when the results are to be given to the government to help with policy making.

Science and Innovation Minister Malcolm Wicks: "Over the coming decades, we're going to have some huge ethical debates about science as new discoveries are made and new technologies emerge."..."We will all need to be part of making informed decisions about how we develop and use scientific and technological advances..."

The idea is that people download discussion packs, which include cartoons and other information, to kick-start group debates and discussions. These would be recorded and sent back to ScienceHorizons for processing. The aim is to have as wide a range of people as possible participating, not just those involved in the sciences. The ScienceHorizons website has all the information needed to get started, and what looks like a fairly simple interface both for requesting discussion packs and for reporting back the results. Here's hoping that this initiative, which sounds great in principle, actually produces decent results which will help the future government make the right decisions.

Tuesday, January 23, 2007

A brief overview of Mirror Neurons

Notes on "Mirrors in the Mind", by Rizzolatti, Fogassi and Gallese, Scientific American Magazine, November 2006 issue, p30-37.

Complex (human) behaviours quite probably require the deliberate deduction of meaning; however, the ease with which simple behaviours and actions are understood suggests that there is another, simpler, explanation. This line of thought (among others) has led to the discovery and characterisation of so-called 'mirror neurons'. This paper follows the line of research central to this new understanding, and is written by some of those who were, and still are, actively involved in its progress.

The first indications of these neurons came from primate studies. It was noticed that certain neurons fired in a monkey's brain when the individual was engaged in simple goal-directed behaviour. Even more interestingly, these same neurons fired when the monkey observed another individual performing the same actions. It was because of this behaviour that they were called mirror neurons. They were studied through single-unit recordings and behavioural analysis in monkeys, which provided strong evidence for the presence of these special neurons. A second possible method of analysing the functions of mirror neurons would be to remove the mirror neuron system; however, because of the very wide-ranging cognitive deficiencies which would result, this method would not have provided any conclusive evidence. Instead, an alternative method was used. Given their supposed function in understanding, their activity should reflect the meaning of an action rather than the action itself. Two experiments were run on this basis. The first tested whether the activity of mirror neurons could be induced by only the sound of an action being performed - the results showed that a subset of mirror neurons which were activated by the sight and sound of an action were also activated by the sound alone. This subset was dubbed audiovisual mirror neurons. In the next experiment, even fewer cues were provided for the monkey - merely sufficient for it to create an internal representation of what was going on. Again, the results confirmed that mirror neurons underlie the understanding of motor actions: when enough information is present for an action to be inferred, mirror neurons fire as if recreating the parts of the action which are not directly observed.

Based on these observations in monkeys, a natural progression of the research was to assess whether mirror neurons also exist in humans. Cortical signal measurements provided an indication that they were present, but accurately localising the neurons in human subjects is more difficult than in monkeys (no invasive procedures are possible). Brain imaging was thus used to address this question. Using PET (Positron Emission Tomography), further support for this view was obtained. A further question raised by this experiment was to explore the possibility that mirror neurons allow one to understand the intention of an action, by distinguishing a particular action from others which have different goals.

To address this question, single-unit recordings (of parietal neurons) were again taken in monkeys. In one experiment, the monkey's task was to grasp a piece of food and bring it to its mouth. Its next task was to grasp a piece of food, but this time place it into a container. For most of the neurons there was a distinct difference in readings between the two cases, despite the similarity of the actions, because of the different goals. This result "illustrated that the motor system is organised in neuronal chains, each of which encodes the intention of the act." The second part of the experiment then attempted to address the issue of whether this neural behaviour explains how we understand the intentions of others. Using a similar procedure to before, a monkey watched an experimenter perform one of two actions - grasp and then either bring food to the mouth, or place it in a container - while single-unit recordings were taken. The results were quite striking (to me at least): "The patterns of firing in the monkey's brain exactly matched those we observed when the monkey itself performed the acts...". This was true for both tasks, thus illustrating a link between the neural representations of motor actions and the capacity to understand the intention of those same actions in others. Brain imaging suggests that these results also hold in humans.

Further studies showed that emotions may also be communicated in this way. Given that the communication of emotions is of vital importance in a social network (such as the societies and interpersonal relationships in which humans persist), the action of mirror neurons may explain this - "Thus, when people use the expression 'I feel your pain' to indicate understanding and empathy, they may not realise just how literally true their statement could be". It is acknowledged that this mechanism does not explain all possible aspects of social cognition, but it does go a long way in providing a neural basis for interpersonal interactions. Deficiencies in this system may then be responsible for conditions such as autism spectrum disorders, in which difficulties with interpersonal communication and with picking up social cues are characteristic. (V.S. Ramachandran, among others, has written on this subject.)
On a side note, philosophers have long held that one must experience something 'within oneself' in order to truly comprehend it - the discovery of mirror neurons provides a physical basis for this view. More than this, it has the potential to dramatically change "the way we understand the way we understand". More recent evidence has suggested that the mirror neuron mechanism plays an important part in how people learn new skills. Imitation is very important for young humans learning skills and knowledge, much more so than for any of the other primates. Could the development of the mirror neuron system in our phylogenetic past be responsible for this? Studies are ongoing to address the question of whether mirror neurons exist in other species, or whether they are a more recent evolutionary development. Whatever the answer, it is clear that mirror neurons have an important role to play in communication and learning - their more detailed characterisation is the subject of ongoing research.

Saturday, January 20, 2007

The Week of Science Challenge

For some reason, I've signed up to the "Week of Science", coordinated by Just Science. The aim is to make one post a day on one of the following topics, and to avoid subjects popular with 'anti-science' groups/individuals:

- Published, peer-reviewed research and their own research.
- Their expert opinion on actual scientific debates - think review articles.
- Descriptions of natural phenomena (e.g., why slugs dissolve when you put salt on them, or what causes sun flares; scientific knowledge that has reached the level of fact)

A sign-up page is here, and a current list of participants, here.

Looking at the list of participants so far, I feel a little in awe, as they are what I consider to be giants of science blogging. I hope I can keep up... I have to admit though that part of my reason for doing this is very selfish - I have a pile of unread papers and books sitting on my desk, and I intend on using the week as an additional incentive to get through some of them (and in so doing, post my notes).

UPDATE (20/01): I should have mentioned that The Week of Science runs from Monday, February 5 until Sunday, February 11.

Friday, January 19, 2007

The rise of Asian science research

Found a story on the BBC about a report published by the think-tank Demos on the rise of Asian science research, and the 'risk' this poses to UK research.

The acceleration of Indian, Chinese, and South Korean research is shifting dominance from the west (Europe and the US) to the east, with a number of contributing factors: for example, blossoming economies, increases in state spending, and the return of academics who formerly worked in the west. The report proposes a 'join them' approach, encouraging the fostering of close research links to enable collaborative work instead of competition, as the only way of maintaining an important role on the world scientific stage. Of course, hand-in-hand with this must come a significant increase in government funding. If new cooperation schemes come about, it'll certainly be interesting. I imagine that a look at the scientific impact of nations in only a couple of years' time will show quite a different picture to this...

And on a completely different, and irrelevant, note: the visitor numbers to this blog nearly doubled just last night - which I am attributing completely to a mention on Cognitive Daily (on Science Blogs), and on Mind Hacks. Thanks! :-)

Thursday, January 18, 2007

"Artificial Ethology"

A book which I need to look at, and which others may also find interesting:

"Artificial Ethology"
Owen Holland and David McFarland, 2001
ISBN: 0198510578

Summary (From Amazon):

"Artificial ethology is an exciting new field of research that explores the ways that we can use robots and robotics to enhance our understanding of how real animals behave. Modelling and computer simulations combined with empirical research are the traditional tools of animal behaviour. This new text sets out to show how experimentation with animal-like robots can add a new dimension to our understanding of behavioural questions. Introductory chapters explain the history of theuse of models in animal behaviour, and describe how animal like mobile robots 'evolved' during the development of the discipline. Then thematic chapters scrutinise sensory processes and orientation, motor co-ordination, and motivation and learning in turn. Each thematic exploration is exemplified by a series of case studies, written by some of the leading researchers in artificial ethology. From robotic lobsters to robot crickets and robot 'sheepdogs', each of these case studies give a detaileddescription of a particular problem, research approach, and robot application. The examples bring the text to life, and will enable students to get an in- depth picture of the potential and the practicalities of this research. The text concludes with a discussion of general points arising from the use of robots in biological research, and the rationale for using real robots as opposed to simulation. Aimed at advanced students taking courses in animal behaviour, the text should also be ofinterest to computer scientists and engineers interested in robotics, artificial intelligence, and the study of biological systems. "

Wednesday, January 17, 2007

Further note on "Cognitive Robotics"

In my previous post, I discussed the preliminaries of a definition of the term "cognitive robotics". There are a few further points I'd like to mention now, which I neglected before.

In the discussion, no mention was made of what sort of implementation the cognitive architecture should use - that is, the definition does not specify symbolic rule-based systems, artificial neural networks, or some hybrid. My interpretation of this is that it is the functionality of the system which is important, and not the computational substrate. The current state of affairs seems to indicate that it is the neural network-like architectures which hold the most promise, but that does not necessarily mean (in my opinion) that other approaches are without merit (see this for an alternative point of view, and this for some follow-up discussion).

Monday, January 15, 2007

What does Cognitive Robotics mean?

The area of my work is cognitive robotics - which I basically take to mean the development of a 'cognitive' system based on a robotic platform, and which thus enables the incorporation of theories of embodiment (an approach I believe is becoming more prevalent). I am unclear, however, on a more precise definition of the term 'cognitive robotics': what does it actually mean? My intention here is to look at this definition more closely, to further explore my interpretation of the term. My thoughts will undoubtedly not fit with everybody's ideas, but this is not meant as an objective definition, more a personal research guide.

Starting with a more popular definition, “cognitive robotics” is defined by Wikipedia as being: "concerned with endowing robots with high-level cognitive capabilities to enable the achievement of complex goals in complex environments using limited computational resources. Robotic cognitive capabilities include perception processing, attention allocation, anticipation, planning, reasoning about other agents, and reasoning about their own mental states. Robotic cognition embodies the behaviour of intelligent agents in the physical world (or a virtual world, in the case of simulated CR).” The rest of the article gives slightly more detail, but this statement is essentially what it revolves around. This definition is quite a high-level one, and is very general - in my humble opinion, too general to be of much practical use; in addition, I believe it misses an important element. So, a more detailed definition is needed.

This question (what does cognitive robotics actually mean?) was a discussion topic at last summer's COGRIC, which I was extremely fortunate to have attended. The discussion at one of the ideas sessions turned to the definition provided by Clark and Grush in their landmark 1999 paper "Towards a cognitive robotics" (full reference below). In this paper, the authors discuss the 'conditions' required for a science (in their discussion, robotics) to be considered cognitive. A summary in the form of quotations is quite appropriate, although reading the full paper is of course recommended.

• Truly cognitive phenomena are those that involve off-line reasoning, vicarious environmental exploration, and the like.
• Cognizers, on our account, must display the capacity for environmentally decoupled thought and the contemplation of options. The cognizer is thus a being who can think or reason about its world without directly engaging those aspects of the world that its thoughts concern.
• We hold that fluent, coupled real-world action-taking is a necessary component of cognition.
• Cognition, we want to say, requires both fluent real-world coupling and the capacity to improve such engagements by the use of de-coupled, off-line reasoning.
• We do not advocate the focus on top level cases (such as mental arithmetic, chess playing and so on) that characterised early cognitivist research. The motor emulation case was chosen precisely because it displays one way in which the investigation of quite biologically basic, robotics-friendly problems can nonetheless phase directly into the investigation of agents that genuinely cognize their worlds.
• It is these “Cartesian Agents”, we believe, that must form the proper subject matter of any truly cognitive robotics.

So, to summarise: cognition requires not only real-time interaction with the real world (thus incorporating the concept of embodiment), but also the ability to internally improve one's interaction with the environment without the environment actually being present. The cognitive agent must therefore be able to internally simulate, in some way, its interactions with the world, and be able to learn from this process. The first part of this is a pervasive element of autonomous robotics design - the ability to interact in real time with the environment, and to learn from this interaction. The importance of the second part was not at first apparent to me, but upon reflection, and after examining Hesslow's simulation hypothesis (and its use by eminent researchers, such as Murray Shanahan, in their work), it does seem an essential extension to the first. The second-to-last bullet point is, I think, particularly important too. It is this that separates cognitive robotics from what I will term classical AI - which attempted, and possibly still attempts, to recreate human high-level behaviour in the form of mathematical algorithms and the like. Again, the idea of embodiment is central: a central element of a genuinely cognitive agent is real-time interaction with the real world - not only the planning of a behaviour, but also its actual execution. Robotics is thus a very apt choice for the implementation of these ideas - all variables (figuratively speaking...) are controllable, affording good scientific practice, and relatively easy measurement of both behaviour and internal functioning (which is obviously not possible with biological agents).
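
To illustrate the second requirement - decoupled, off-line reasoning - here is a toy sketch in Python (the one-dimensional world and the forward model are entirely made up; a real system would learn the model from its own interactions) of an agent that simulates candidate action sequences internally before committing to one.

```python
# A toy sketch of 'decoupled' cognition: the agent evaluates candidate
# action sequences against an internal forward model instead of executing
# them in the world.
import itertools

def forward_model(state, action):
    # Stand-in for a *learned* prediction of the world's response.
    return state + action

def goodness(state, goal=10):
    return -abs(goal - state)

def plan_offline(state, horizon=3, actions=(-1, 0, 1, 2)):
    best_seq, best_score = None, float("-inf")
    for seq in itertools.product(actions, repeat=horizon):
        s = state
        for a in seq:              # simulate, don't act
            s = forward_model(s, a)
        if goodness(s) > best_score:
            best_seq, best_score = seq, goodness(s)
    return best_seq

print(plan_offline(state=4))       # -> (2, 2, 2), reaching the goal of 10
```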

Just to finish, I think most of the ideas covered here are discussed in a paper I reviewed a few days ago, by Warwick and Nasuto. Within the context set by this paper, cognitive robotics (and embodied cognition in general) seems to be emerging as a very important research field. I would of course appreciate any comments that people would have on this subject, as this isn’t comprehensive and probably overlooks some aspects.

REF: A. Clark and R. Grush, "Towards a cognitive robotics," Adaptive Behavior, vol. 7, pp. 5-16, 1999.

UPDATE (17/01): Further discussion in my following post

Encephalon #14

The 14th Installment of Encephalon is now available at Mixing Memory!

The Homepage of Encephalon is here.

Friday, January 12, 2007

Why Brains?

Found a great post on "Brain Hammer" entitled "Why Brains?", which briefly overviews the arguments for the proposal that any reduction of mental states should be a neural reduction - essentially the proposal that (if I've understood correctly) only neural systems are responsible for mental states. Three reasons are given: (1) there are no non-controversial examples of minds that are not based on a neural system; (2) there are no reasons to doubt that the seat of the mind is in the brain; and (3) no other approach has had even a small amount of success in explaining mental phenomena compared to the neurocentric approach.

An alternative view to this position is that other systems, with no neural-system properties, are also capable of having mental states. The post is obviously against this. I was just wondering whether this insistence on neural systems precludes the possibility of systems with similar properties (i.e. massively parallel processing, etc.) from having mental states at all. For instance, if one were able to reduce (an aspect of) mental states, would it not be possible to 'model' it in a way that doesn't rely on a strictly neural system? I would think that certain properties of neural systems would be required, but does that mean that neural systems alone are capable of 'mental states'? Just idle thoughts, I might be getting the wrong end of the stick :-)

Tuesday, January 09, 2007

Interference with Bottom-Up Feature Detection by Higher-Level Object Recognition

Carrying on very nicely from yesterday's post on enactive perception comes this paper. It has found its way onto the BBC news pages, where the authors discuss the utility of making 'snap' decisions. I've not read it fully yet, but the abstract is below:

"Drawing portraits upside down is a trick that allows novice artists to reproduce lower-level image features, e.g., contours, while reducing interference from higher-level face cognition. Limiting the available processing time to suffice for lower- but not higher-level operations is a more general way of reducing interference. We elucidate this interference in a novel visual-search task to find a target among distractors. The target had a unique lower-level orientation feature but was identical to distractors in its higher-level object shape. Through bottom-up processes, the unique feature attracted gaze to the target. Subsequently, recognizing the attended object as identically shaped as the distractors, viewpoint invariant object recognition interfered. Consequently, gaze often abandoned the target to search elsewhere. If the search stimulus was extinguished at time T after the gaze arrived at the target, reports of target location were more accurate for shorter presentations. This object-to-feature interference, though perhaps unexpected, could underlie common phenomena such as the visual-search asymmetry that finding a familiar letter N among its mirror images is more difficult than the converse. Our results should enable additional examination of known phenomena and interactions between different levels of visual processes. "

Link to abstract, here.

Monday, January 08, 2007

From MindBlog: Constructive memory a tool for anticipating futures.

A very interesting post summarising a paper on the importance of memory for anticipation - an essential element of normal functioning. Indeed, if you are a subscriber to the enactive perception viewpoint, then this is central (and essential) to the way in which we (as humans) perceive and act in the world.

MindBlog: Constructive memory a tool for anticipating futures.

Machine Intelligence

Notes on "Historical and current Machine Intelligence", K. Warwick and S.J. Nasuto, IEEE Instrumentation and Measurement Magazine, vol 9, issue 6, pp20-26, December 2006

This paper aims to provide a realistic assessment of the present state of machine intelligence (which is more widely known as artificial intelligence - AI). In doing so, a brief history is provided, and the potential future discussed.

Before the discussion of machine intelligence, the first point covered is the question of intelligence itself - what is it? Definitions abound, differing depending on the context of use. For the purposes of this paper, the authors take a very general and basic definition: intelligence is "...the variety of information processing processes that collectively enable a being to pursue autonomously its survival". This definition allows one to study intelligence regardless of the species or type of agent in which it is embodied - it requires only that the processing capability mentioned affords survival. This definition is characterised further throughout the remainder of the paper.

Current AI thinking embraces the artificial neural network, the foundations of which were laid by McCulloch and Pitts, whose model neuron attempted to approximate the functioning of a biological neuron: weighted inputs coming together at a single threshold unit, the output of which is binary (the basis of the later perceptron). This use of naturally-inspired processing provides a link between neurophysiology and information processing: "It has been shown that one layer of suitable nonlinear neurons [in a multilayer perceptron], ..., can approximate any nonlinear function with arbitrary accuracy, given enough nonlinear neurons. This means that a multilayer perceptron network can be a universal function approximator."
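
The quoted universal-approximation claim refers to a structure like the following minimal Python sketch (the weights here are made up for illustration, not trained): a single layer of nonlinear (sigmoid) hidden units whose outputs are linearly combined.

```python
# Sketch of the structure behind the universal-approximation claim: one
# layer of nonlinear (sigmoid) hidden units, linearly combined at the output.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp(x, hidden, output):
    """hidden: list of (weight, bias) pairs; output: list of output weights."""
    h = [sigmoid(w * x + b) for (w, b) in hidden]
    return sum(v * hi for v, hi in zip(output, h))

# With enough hidden units (and suitably trained weights - these are made
# up), such a weighted sum can approximate any reasonable nonlinear function.
hidden_params = [(5.0, -2.5), (5.0, -7.5), (-5.0, 2.5)]
output_weights = [1.0, -1.0, 0.5]
print(mlp(1.0, hidden_params, output_weights))
```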

This way of thinking was replaced by a behaviourist perspective, in which behaviour was the important factor, with little importance attached to how the behaviour came about. This put AI in closer relation to the cognitive sciences, and was exemplified by the Turing test, in which behaviour alone is the subject of the test. Another example of this type is the expert system, typified by the IF...THEN rule construction. A major criticism of this type of system is that it cannot store every possible eventuality. More recently, some of these drawbacks have been addressed by the use of Bayesian networks, which merge the operation of these symbolic systems with the sub-symbolic operation of neural networks in a framework based on graph theory, probability, and statistics.
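
For illustration, here is a toy IF...THEN 'expert system' in Python (the rules are entirely hypothetical); the brittleness criticism is visible in the fall-through case, where no rule covers the situation.

```python
# A toy IF...THEN expert system (hypothetical rules). Its brittleness is
# visible: any situation without a matching rule simply falls through.
rules = [
    (lambda f: f["temperature"] > 38.0, "suspect fever"),
    (lambda f: f["temperature"] <= 38.0 and f["coughing"], "suspect cold"),
]

def infer(facts):
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return "no rule applies"  # the uncovered eventuality

print(infer({"temperature": 39.2, "coughing": False}))  # -> suspect fever
print(infer({"temperature": 36.5, "coughing": False}))  # -> no rule applies
```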

The popularity of artificial neural networks returned in the mid-1980s, producing architectures which roughly fell into two categories: feedforward networks (e.g. the multilayer perceptron) and recurrent networks (e.g. Hopfield networks). They replaced the central control of expert systems with distributed control, thereby realigning with the neurophysiology of biological systems.

Evolutionary algorithms address one of the problems with artificial neural networks, namely their fixed topology, by implementing a distributed system of 'agents', where each agent may be as simple as a single rule. There are no a priori relationships between these agents, and the population is acted upon by a pseudo-evolutionary process (a genetic algorithm) tuned to maximise performance in the task at hand. An example is the Learning Classifier System, where a population of rules is created and modified by a genetic algorithm.
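
A bare-bones genetic algorithm, for illustration only (the bit-string genomes and the toy 'count the ones' fitness function are my own stand-ins, not from the paper):

```python
# A bare-bones genetic algorithm: a population of bit-string 'rules' is
# selected, recombined and mutated to maximise a fitness function.
import random

def fitness(genome):
    return sum(genome)            # toy task: maximise the number of 1s

def evolve(pop_size=20, length=10, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation with probability p_mut per gene.
            child = [g ^ 1 if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(evolve())   # should approach all 1s
```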

Another form of distributed processing used in AI is swarm intelligence. Based on observations of populations in nature (e.g. ants, flocks of birds, etc.), this method uses numerous individual agents, each of which explores a potential solution. The overall solution then emerges as a result of interactions among the individuals. Examples of these algorithms include ant algorithms, particle swarm optimisation, and stochastic diffusion search. From this description, the similarities between this and the evolutionary approach should be apparent.
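
Here is a minimal particle swarm optimisation sketch (the one-dimensional objective is a toy, and the parameter values are just common textbook defaults, not taken from the paper):

```python
# A minimal particle swarm optimisation sketch: each particle explores a
# candidate solution; the swarm's best finding pulls the others along.
import random

def cost(x):
    return (x - 3.0) ** 2        # toy objective: minimum at x = 3

def pso(n_particles=15, iterations=50, w=0.7, c1=1.5, c2=1.5):
    xs = [random.uniform(-10, 10) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    best_x = list(xs)                       # per-particle best positions
    g = min(xs, key=cost)                   # global best position
    for _ in range(iterations):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Inertia + pull toward personal best + pull toward global best.
            vs[i] = w * vs[i] + c1 * r1 * (best_x[i] - xs[i]) + c2 * r2 * (g - xs[i])
            xs[i] += vs[i]
            if cost(xs[i]) < cost(best_x[i]):
                best_x[i] = xs[i]
            if cost(xs[i]) < cost(g):
                g = xs[i]
    return g

print(pso())   # should approach 3.0
```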

The paper moves on to make brief mention of reinforcement learning. In this approach the agent receives a qualitative evaluation of its actions from the environment, rather than from a supervisor as is otherwise normally the case. Trial and error is thus often used by the agent to obtain a 'value function', which is used to evaluate the positive or negative effects of its actions. Reinforcement learning has been proposed to underlie the reward mechanism in the biological nervous system. Recent developments include extensions to multiagent setups.
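
A tiny tabular Q-learning example of such a trial-and-error value function (the one-dimensional corridor world is my own toy, not from the paper): the agent builds up Q-values purely from a scalar reward delivered by the environment.

```python
# Tabular Q-learning: the agent builds a value function from trial and
# error, using only a scalar reward signal from the environment.
import random

N_STATES, GOAL = 6, 5            # a 1-D corridor; reward at the right end
ACTIONS = (-1, +1)               # step left or right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the value function, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Temporal-difference update of the value function.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda act: Q[(0, act)]))   # learned action from state 0: +1
```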

A general problem with all of the preceding approaches to machine intelligence is that they provide disembodied information processing. In the 1990s, however, it was realised that this is insufficient: embodied cognition stems from the realisation that a theory of intelligence must involve a physically embodied agent, interacting in real time with its environment using sensory-motor contingencies. This approach requires that the agent is autonomous, and that it arrives at 'intelligent behaviour' through its own interactions with the environment (rather than being pre-programmed or similar). Embodiment is thus of central importance in this approach. Also, "Cognitive robotics puts an emphasis on using psychological, cognitive, and neuroscience findings in the construction of robots".

The development of machine intelligence is of immense practical use in itself, in addition to its potential utility in gaining insight into its biological counterpart. Applications include, among others, interactive games and other forms of entertainment, military and agricultural uses, and mine clearing.

The final part of the paper centres on the fundamental nature of machine intelligence. It proposes that the concept of intelligence needs to be revised in the light of recent developments in human and non-human studies, paralleling the views of Patricia Churchland on consciousness. It goes on to state that such a revision has already started to take place, as seen through the developments discussed in the earlier parts of the paper. Furthermore, it is suggested that intelligence would be better described by its characteristics: the distributed nature of the underlying processing, agent autonomy, embodiment, and so on. This list of characteristics would have to include features identified by human psychological and cognitive research, animal research, and machine research, without which a complete list cannot be hoped for. This approach to intelligence is called the rational approach, which views human and machine intelligence as but different versions of intelligence.

Friday, January 05, 2007

Celebrities and Science

This article on the BBC news website reports on a pamphlet by the charity "Sense about Science", which essentially lists statements made by "celebrities" (in quotation marks because imo some of them are only well known for their stupidity) in support of, or against, various topics in the scientific domain - such as organic food, avoiding cancer, etc. The aim of the pamphlet is to urge those in the public domain to check their facts before weighing into arguments on one side or the other, especially since (an unfortunate side effect of the society we live in) they have a disproportionate amount of publicity, and hence responsibility. I think the discussion of science in the public domain can only do good - however, misleading information from people with wide media access will form misconceptions which may do harm (e.g. the MMR jab situation a few years ago?). And all it would take to avoid this is a bit of fact checking and background reading...

Lessons for Symbolic and Sub-Symbolic Architectures from Biology

Notes on "Symbolic and Sub-Symbolic representations in computational models of human cognition: what can be learned from biology?", T.D. Kelley, Theory and Psychology, vol 13(6), pp 847-860, 2003

There has been considerable debate concerning the role of symbols in the human cognitive architecture: does it use symbols as a representation of knowledge, or does it use distributed representations of knowledge? This paper proposes a third option - a hybrid of the two. Lessons from biology are examined and the ACT-R architecture is proposed as the happy medium. The paper first reviews symbolic and sub-symbolic representations separately before discussing their hybrid in ACT-R. The biological evidence supporting this is then presented, before the concluding remarks.

Classic cognitive psychology has argued that knowledge is represented as a series of symbols. This idea may be understood using the metaphor of a computer. In its most basic terms, a computer performs an input-process-output function. The input may take the form of a series of symbols, which are representations of other concepts or constructs. These may be manipulated using a predetermined set of instructions, which then generates an output symbol or symbols. Besides the relationships between symbols which are explicitly supplied, symbolic relationships may also be inferred; however, the process of making such inferences is complex and difficult to implement computationally.

Sub-symbolic systems have traditionally been supported by those in the field of artificial intelligence, particularly those in the connectionist 'movements'. They are most often associated with artificial neural networks as metaphors of biological neural networks. The artificial neurons (or perceptrons in some implementations) operate in parallel to recognise a given input, learning by adjusting the weights between the individual nodes. Such a network of nodes may thus be considered an autonomous learning system - which is one of its greatest strengths.

Each of these two systems thus shows certain advantages in the study of the human cognitive system - the sub-symbolic system as an autonomous learning system (though a predetermined training algorithm is needed), and the symbolic system as a means of easily representing complex relationships (though it has difficulties with gaining the knowledge in the first place - the symbol grounding problem). The two approaches may be considered opposite ends of a single continuum. Keeping this in mind for the next part of the paper, sub-symbolic systems may be viewed as recognising inputs, which then get passed to the symbolic system. First, a summary of the differences:
· symbolic systems process in series, sub-symbolic systems operate in parallel.
· sub-symbolic systems have distributed knowledge, whereas symbolic systems do not (e.g. 4+5 is explicitly represented in a symbolic system at a particular location, but not in its sub-symbolic counterpart).
· sub-symbolic systems learn to recognize inputs, and respond to the recognised input in accordance with learning rules, whereas symbolic systems are not concerned with the recognition of the input, only with the manipulation of the symbols following the recognition.

From these last points, the benefit of a hybrid of the two methods may be seen, provided that the advantages of both are combined. This brings the paper to the discussion of ACT-R, which is a hybrid, or integration, of sub-symbolic and symbolic systems (developed by Anderson and Lebiere, 1998). It has a production system architecture, where the main processing occurs within an if-then format - including a declarative memory (memory for facts) and a procedural memory (a skills memory, which for us is not easily verbalised). In ACT-R, each symbolic component is linked to underlying sub-symbolic processes, which are continuously varying and operate in parallel, while the symbolic part operates in series. However, while this hybrid system takes the advantages of both individual methods, it is still prone to the deficiencies of each. So, much of the 'knowledge' of the architecture is essentially hard-coded by the programmer, and not learned by the sub-symbolic network. Later versions of the ACT-R architecture have started to incorporate perceptual modules which perform this task, thus reducing this particular deficiency. Hybrid architectures in general have become more and more popular as a way of overcoming the shortcomings of the individual approaches.
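
To make the hybrid idea concrete, here is a toy Python sketch - emphatically not actual ACT-R code, and with made-up facts and activation values - of a serial, symbolic if-then production whose retrievals are settled by continuously-valued, sub-symbolic activations.

```python
# A toy hybrid (NOT actual ACT-R code): symbolic if-then productions fire
# serially, while a sub-symbolic activation value attached to each fact
# decides which one is retrieved.
declarative_memory = [
    {"fact": ("3", "+", "4", "=", "7"), "activation": 0.9},
    {"fact": ("3", "+", "4", "=", "8"), "activation": 0.1},  # an old error
]

def retrieve(cue):
    # Sub-symbolic side: among matching chunks, the highest activation wins.
    matches = [c for c in declarative_memory if c["fact"][:3] == cue]
    return max(matches, key=lambda c: c["activation"]) if matches else None

def production_cycle(goal):
    # Symbolic side: a serial IF-THEN production.
    if goal["type"] == "add":
        chunk = retrieve((goal["a"], "+", goal["b"]))
        return chunk["fact"][-1] if chunk else "no answer"

print(production_cycle({"type": "add", "a": "3", "b": "4"}))  # -> "7"
```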

The rest of the paper is devoted to presenting the biological (human) cognitive system, and using this information to provide support for the symbolic/sub-symbolic hybrid architecture. First, the biology and capabilities of the human cognitive system are examined; then the same aspects are considered across different species.

Again using the idea of a continuum, the human cognitive system has at its highest level a symbol-processing system, supposedly located in the prefrontal cortical regions of the brain. This region has long been seen as the highest 'level' of the brain, representing "the zenith of human cognitive capabilities". Furthermore, it (the neocortex) has developed on top of phylogenetically older and simpler systems/brain regions. As an example, language is a symbolic system known to heavily involve the frontal regions - damage to these regions results in severe impairment of language capabilities. At the other end of the continuum, the lowest cognitive mechanism is the reflex, the result of the most basic neural networks, where the synaptic weights have been set over the course of evolution (this idea has been discussed in a previous post).

Similar to the continuum of simple to complex processes within a single human brain, a continuum exists across different species within the animal kingdom, from single-celled organisms and those with the simplest of nervous systems (which could potentially be replicated by a sub-symbolic architecture), to mammals and primates. The learning capabilities of insects may be described as associative: the simple association of a stimulus with a response. This type of learning has frequently been criticised as not being rich enough to support complex relationships - although this seems at odds with the comments of Joaquin Fuster (reviewed in a previous post). It is on this basis that the need for symbolic systems is proposed. However, in using them, the possibility of coming up against the 'Chinese room argument' (Searle, 1980) becomes likely - i.e. the 'blind' manipulation of symbols without knowing their true meaning.
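
As an aside, this kind of simple associative (stimulus-response) learning can be captured by something as small as the Rescorla-Wagner update rule; a minimal sketch (my own illustration, not from the paper):

```python
# A sketch of simple associative learning, in the spirit of the
# Rescorla-Wagner rule: the associative strength V moves toward the
# outcome actually received on each stimulus-outcome pairing.
def rescorla_wagner(trials, lr=0.3):
    V = 0.0                       # associative strength of the stimulus
    history = []
    for outcome in trials:        # 1.0 = reward present, 0.0 = absent
        V += lr * (outcome - V)   # prediction-error driven update
        history.append(round(V, 3))
    return history

# Ten stimulus-reward pairings: the association strengthens, then levels off.
print(rescorla_wagner([1.0] * 10))
```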

The conclusions of the paper essentially reiterate the point that a hybrid of symbolic and sub-symbolic approaches to cognitive architectures appears to be the best approach, and such hybrids have been gaining popularity. The SOAR architecture, a fully symbolic architecture, has recently been supplemented with sub-symbolic features - activation levels (as reviewed in a previous post) - to great effect. One argument that arises, however, is that sub-symbolic architectures will eventually 'catch up' with their symbolic counterparts, and so should not be considered the bottom of the hierarchy. The author acknowledges this as possible, although he notes that traditional connectionist approaches (by which I assume he means multilayer perceptrons) have been shown to be inadequate for representing complex symbolic relationships (Fodor and Pylyshyn, 1988). Having said this, the symbolic approach still holds numerous attractive benefits for the study of complex cognition, especially when used in combination with a sub-symbolic system.

Thursday, January 04, 2007

The Duck-Rabbit Illusion


I like it because it’s a good and simple demonstration of how perception isn’t simply a one-way street from the senses to the brain. A degree of expectation is also involved – information flows in the brain-to-senses direction as well.


This picture is from Wikipedia, in the "Illusions" article (www.wikipedia.org/en).

Wednesday, January 03, 2007

Happy New Year

I wish all a happy, productive and successful new year :-)

(and I'm testing out the publish-by-email feature of Blogger...)