The February issue of Discover Magazine (07/03/07) carries an interview with Gerald Edelman, Nobel prize winner (for his work on the structure of antibodies) and founder/director of the Neurosciences Institute. In this interview, conducted by Susan Kruglinski, he discusses his views on consciousness and the work he and his collaborators are conducting with robots to shed more light on its mysteries. In doing so, the concept of life and the evolution of the brain are also briefly discussed.
From the outset of the interview, Edelman states his belief that consciousness can be created in artificial systems. He does, however, make a distinction between living conscious artefacts and non-living conscious artefacts. He takes 'living' to be "the process of copying DNA, self-replication, under natural selection". Anything with these properties is a living system; all else is not. Consciousness created in an artificial system would then be fundamentally different from our own (human) consciousness - although he does say that he would personally treat it as though it were alive and accord it the same basic respect ("...I'd feel bad about unplugging it.").
When it comes to defining what consciousness is, Edelman starts by turning to properties described by the psychologist and philosopher William James: (1) it's the thing you lose when you fall into a deep dreamless sleep and regain when you wake up, (2) it's continuous and changing, and (3) it's modulated or modified by attention, and so not exhaustive. From this, Edelman describes two states of consciousness. The first is primary consciousness. This supposedly arose with the evolution of a neuronal structure which allowed an interaction between perceptual categorisation and memory. In this way an internal scene could be created which could be linked to past scenes (i.e. memory). Built on this is secondary consciousness, resulting from the development of another neural structure (or structures) - apparent in humans, and to a certain extent in chimps - which enabled conceptual systems to be connected, allowing the development of semantics and "true language" and resulting in higher-order consciousness. A more simplified view of this consciousness is that it requires the internalisation of stimuli, the remembering of them, and the interactions of these processes (not only perception and memory, but also things such as emotion). From this theory of consciousness, Edelman says that further understanding would give a clearer picture of how knowledge is acquired, which is important in many different respects.
It is based on this view of consciousness that he describes the Neurosciences Institute's approach to understanding it. They construct what are described as Brain-Based Devices (BBDs), which are essentially robots with simulated nervous systems. This artificial nervous system is modelled on that of a vertebrate or mammalian brain - although of course the number of neurons and synaptic connections in the simulation is many orders of magnitude smaller than in its natural counterpart. Nonetheless, one of their BBDs, called Darwin VII, is capable of undergoing conditioning: learning to associate objects in its environment with 'good' or 'bad' taste (where these 'tastes' have been defined a priori as fundamental properties of the environment). An important point regarding this experimentation is that it was conducted using real physical robots in the 'real world' (albeit one simplified for the purpose of the task), not in a simulation environment. Edelman points out that a big problem with simulated environments is the difficulty of replicating reality, or in his words: "...you can't trace a complete picture of the environment." As demonstrated by the conditioning experiment, these BBDs are capable of learning: an example given in the interview is a Segway-football match between a BBD-controlled Segway and one programmed using 'traditional' AI techniques. Five matches were played, and the BBD-based device won each time. Edelman puts this down to its learning capabilities and behavioural flexibility: it learned all of its actions, rather than merely implementing a set of algorithms (as a 'traditional AI' system does).
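The conditioning described above can be sketched as a simple value-learning loop. This is a hypothetical illustration only, not Darwin VII's actual architecture: the `condition` function, the stimulus names, and the learning rate are all my own assumptions, chosen to show the general idea of associating stimuli with an innate 'taste' value through repeated experience.

```python
def condition(trials, lr=0.3):
    """Prediction-error (Rescorla-Wagner-style) update: each association
    weight moves toward the 'taste' value experienced on contact.
    Hypothetical sketch - not the BBD's actual learning rule."""
    weights = {}
    for stimulus, taste in trials:  # taste: +1 for 'good', -1 for 'bad'
        w = weights.get(stimulus, 0.0)
        weights[stimulus] = w + lr * (taste - w)  # move toward experienced value
    return weights

# After repeated pairings, the learned weights let the device approach
# objects it found 'good' and avoid those it found 'bad'.
trials = [("striped_block", +1), ("spotted_block", -1)] * 10
w = condition(trials)
print(w["striped_block"] > 0.9, w["spotted_block"] < -0.9)  # True True
```

The point of the sketch is that the behaviour emerges from the learned weights rather than from a hand-written rule for each object, which is the contrast Edelman draws with 'traditional' AI.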
The fact that the BBDs are controlled by artificial nervous systems leads to questions regarding the specifics of implementation. Instead of individually simulating the million or so neurons that make up the simulated nervous system, groups of around 100 neurons are simulated together, with the mean firing rate of each sub-population being taken (mean firing-rate models). This average firing rate is a reflection of synaptic change. According to Edelman, this sort of response is not just biologically plausible, it is identical: "The responses are exactly like those of [biological] neurons" (square brackets added).
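A mean firing-rate unit of this kind can be sketched as follows. The dynamics, time constant, and sigmoid nonlinearity here are generic textbook assumptions, not the institute's actual model; the idea is just that one simulated unit stands in for a group of ~100 neurons, and its state variable is the group's mean firing rate rather than individual spikes.

```python
import math

def step(rate, inputs, weights, tau=0.1, dt=0.01):
    """One Euler step of tau * dr/dt = -r + sigmoid(weighted input).
    'rate' is the mean firing rate of a whole neuronal group.
    Generic rate-model assumptions, not the BBDs' actual equations."""
    drive = sum(w * r for w, r in zip(weights, inputs))
    target = 1.0 / (1.0 + math.exp(-drive))  # squashing nonlinearity
    return rate + (dt / tau) * (target - rate)  # relax toward the driven rate

# Let one unit, driven by two input groups, settle to its steady state.
rate = 0.0
for _ in range(1000):
    rate = step(rate, inputs=[0.8, 0.2], weights=[2.0, 1.0])
print(round(rate, 2))  # converges to sigmoid(1.8) ~ 0.86
```

Simulating at the group level like this is what makes a million-neuron nervous system tractable: ten thousand rate units are cheap to update where a million spiking neurons would not be.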
The final part of the interview looks at other work going on at the institute. Currently, work is progressing on Darwin 12, the latest incarnation of the BBDs. This version is new in that it is intended to look at how embodiment affects the development of learning in the artificial nervous system, and its general functionality. It has both wheels and legs, and nearly 100 sensors in each of its legs. Mention is also made of other work on rhythm and melody as intrinsically human capabilities - more so than in any other animal - and how these may have led to the development of language. This aspect of the work seems to be only loosely brushed over, so I do likewise.
I think that this interview, albeit reasonably short, covered a number of very interesting concepts across a wide range of subjects. However, I feel that it doesn't entirely succeed in bringing all of these elements together. An interesting read nonetheless.