A couple of links which I've found quite interesting:
- A blog called "Reality Apologetics": written by Jon Lawhead (a recent philosophy graduate), it covers a wide range of subjects, though focusing primarily on philosophy of mind. I've only just found it, but there appears to be some interesting stuff.
- A post entitled "Robot thoughts" at Saint Gasoline: a quick review of limitations in artificial consciousness, although the concepts of consciousness and thought seem to be confused, followed by a link to the widely reported story on Floreano's evolving robotic colonies (and the emergence of liars and altruists) - actually, I found the resulting comments more interesting than the post itself...
- A post called "Nature inspires creepy robot swarms" at Environmental Graffiti which seems to indicate that most robotics work is aimed at producing agents capable of world domination. While I feel it was over-the-top (although maybe I missed the well-hidden sarcasm), it does raise some interesting issues over the public perception of the research field in which I find myself - 'intelligent' robotics. Having said that, the post was a good read :-)
And now for a short rant... As some may have noticed, I have a Technorati search chart on the right hand side of my blog. When I first put it in, it provided me with a quick and easy way to keep track of some very interesting blog posts. I even had the pleasure of watching the number of "cognitive robotics" posts linking here increase after my posts on the definition of cognitive robotics. However, in recent months (since last November), the system has let me down. It no longer updates the graph (an annoyance since I quite like the visual aspect), and most importantly, the search seems to be flooded with posts from dud/porn blogs. I suppose that's not Technorati's fault - no, wait, it is: my search term is "cognitive robotics", and I'd expect a search engine to work moderately well at finding relevant posts (the occasional dud may slip through the net, but this is just silly). So, I'm about to remove the once mighty keyword chart. Apologies, rant over...
UPDATE 01/02/08: I've tried to remove the keyword chart, but Blogger won't let me. Grr...
Thursday, January 31, 2008
Sunday, January 27, 2008
New low-power MRI machine
As reported in January's issue of the IEEE Spectrum, what is essentially a very low power MRI (magnetic resonance imaging) machine has produced its first images of a human brain. Whereas a standard MRI machine produces magnetic fields of around 1.5 tesla, this new version produces only around 46 microtesla - a more than thirty-thousand-fold reduction, and a field apparently comparable in strength to the Earth's magnetic field. This reduction in power results in a slightly different method for producing the images.
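As a quick sanity check on the numbers quoted above (a sketch using only the figures from the article; the Earth-field value is a typical mid-range estimate, not from the article):

```python
# Field strengths quoted in the article.
standard_field_T = 1.5      # conventional MRI scanner
low_power_field_T = 46e-6   # new low-power device (46 microtesla)

# The reduction factor works out to roughly 32,600-fold,
# consistent with "more than thirty-thousand-fold".
reduction = standard_field_T / low_power_field_T
print(f"Reduction factor: {reduction:,.0f}x")

# The Earth's surface field is on the order of 25-65 microtesla,
# so 46 microtesla is indeed comparable to it.
typical_earth_field_T = 50e-6  # assumed mid-range value
print(f"Ratio to a typical Earth field: {low_power_field_T / typical_earth_field_T:.2f}")
```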
In a standard MRI machine, a strong magnetic field is used to align the proton in each of the hydrogen atoms before using an RF pulse to knock them out of alignment. As they snap back into alignment with the magnetic field, they emit a signal which can be detected and used to create a 3D image. In the new version, the very small magnetic field isn't enough to align the protons, so a short-duration (1 second) magnetic pulse of slightly higher magnitude (30 millitesla) is applied instead. The resulting signals are very small, so an array of highly sensitive magnetometers is used (so-called superconducting quantum interference devices, or SQUIDs). A hugely important additional advantage of using these SQUIDs is that they are also used in the MEG (magnetoencephalography) imaging technique. This potential for MRI and MEG using the same machine raises the intriguing possibility of producing simultaneous structural images (using the MRI) and brain activation maps (using the MEG).
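To see why such sensitive detectors are needed, it helps to look at the proton precession (Larmor) frequency, which scales linearly with the field strength. A minimal sketch, using the standard Larmor relation and the commonly quoted proton gyromagnetic ratio (neither taken from the article):

```python
# Proton gyromagnetic ratio divided by 2*pi, approximately 42.58 MHz per tesla.
GAMMA_MHZ_PER_T = 42.58

def larmor_freq_hz(field_T):
    """Larmor (precession) frequency of protons in a field of the given strength."""
    return GAMMA_MHZ_PER_T * 1e6 * field_T

# In a 1.5 T scanner, protons precess at ~64 MHz (a convenient RF frequency);
# in a 46 microtesla field, the signal drops to roughly 2 kHz,
# which is far too weak and slow for conventional RF detection coils.
print(f"1.5 T scanner: {larmor_freq_hz(1.5) / 1e6:.1f} MHz")
print(f"46 uT scanner: {larmor_freq_hz(46e-6):.0f} Hz")
```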
One other major advantage of using this low-power MRI technique is its potential to image tumors. The differences between cancerous and non-cancerous tissue are subtle, and not readily captured by standard MRI images - whereas the low-power version can pick them up. Furthermore, the possibility arises of using this type of imaging during operations themselves, as the very low magnetic fields used would not interfere with the use of metal surgical implements. As with any newly developed technology though, it will be a fair few years before it is in full use - although this situation will be helped by the comparatively low cost of the new device: since there is no need for high magnetic fields, the new machines may cost as little as one tenth as much as their high-powered counterparts.
UPDATE 29/01: Vaughan at MindHacks has pointed out the downsides of using SQUIDs, which I didn't mention.
Thursday, January 10, 2008
Internal Representations: a metaphor
What follows is a brief note on why I don't believe that internal representations necessarily mean complex modelling capabilities, through the use of a (slightly suspect) metaphor. This isn't based on any peer reviewed work, just some thoughts I jotted down in a rare moment of effective cerebral activity :-)
Consider the following scenario: I have a large flat piece of plasticine. I also have a small rock. Let us assume that for some incredibly important reason (which has somehow slipped my mind at this moment in time), I wish to create a representation of the rock using the plasticine, to fulfil some vital task. The two main options are:
(1) I use my hands and my eyes, and I create a model of the rock using the plasticine which is visually accurate. This would be difficult (my skills as an artist are non-existent) and time consuming. The result would be a reasonably accurate (though not perfect) representation of the rock - the advantage of this method is that anybody else would be able to look at my model and be able to say without very much effort that I have a model of a rock.
(2) I let the rock fall onto my piece of plasticine, such that it leaves an impression (assuming the rock is heavy enough and/or the plasticine is soft enough). The resulting representation of the rock would be far more accurate, though incomplete, and dependent on exactly how the rock made contact (how deep it was pushed, the angle, etc). Furthermore, whilst I may know exactly what it is (with limitations due to my very limited knowledge of the conditions), somebody else would take longer to recognise what is being represented. However, it is so much easier to do this than to create a sculpture.
Of course, there are variations on the above two methods. The most interesting/important being:
(3) I take the piece of plasticine in my hand and I push it onto the surface of the rock. In this way, I control (to a certain extent at least) what imprint is left as I vary the force and the angle. I'm still just left with an impression in a piece of plasticine, but the benefit is that I have more information on that impression. The effort expended is almost as little as in case two though. Of course another observer would have just as much difficulty recognising what this impression was a representation of.
What we have here is a very coarse, and not very subtle, metaphor for how I see the three main views of so called 'internal representations'. Basically, despite the adverse connotations that the term 'representation' (with regard to cognition and intelligence) may conjure for some, I don't believe that it necessarily implies highly complex internal modelling facilities. I wouldn't want to take the metaphor much further than I've elaborated above for fear of it failing utterly, but the three options may be seen to roughly correspond to the following.
In point one, you have the GOFAI (good old fashioned artificial intelligence) point of view, which, supported by personal introspection, uses complex internal modelling processes to create a high-fidelity internal representation upon which various mathematical planning processes may be performed. In the second point you have reactive or behavioural robotics, where the environment pushes itself onto the agent, and shapes its behaviour directly. The metaphor is failing already - there shouldn't be any representation in this case (not an explicit one anyway - though I would argue that it is implicit in the resulting behaviour - plenty of room for argument on that one!) - but the point is that the information gleaned from the environment isn't of much use in itself, only in terms of the effect it has. It's far easier to do this though, in terms of computational load etc.
If you view these two approaches as extremes, then point three may be seen as a 'middle road' - a vastly reduced amount of computational effort through taking advantage of what is there (both in terms of the environment and the agent itself), but with the presence of more information due to the additional knowledge of how that representation was acquired. So, perhaps the analogue for this point would be active perception, or perhaps more generally, embodied cognition. As with most things, I feel this 'middle road' to have more potential than the two extremes - although that is of course not to say that they are devoid of merit, for they both have produced many interesting and useful results.
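The three stances might be caricatured in code. This is purely my own toy sketch (none of the names or structures come from any published work): the 'world' is a hypothetical object whose surface the agent can probe, and each function stands for one of the three approaches above.

```python
class World:
    """A hypothetical environment: a surface described by a tuple of heights."""
    def __init__(self, shape):
        self.shape = shape

# (1) GOFAI-style: build an explicit, observer-readable internal model.
def model_builder(world):
    return {"type": "rock", "shape": tuple(world.shape)}  # full internal copy

# (2) Reactive/behavioural: no stored model; the world drives behaviour directly.
def reactive_agent(world):
    return ["turn" if h > 0.5 else "forward" for h in world.shape]

# (3) Active perception: a partial imprint, plus a record of how it was acquired.
def active_agent(world, pressure):
    imprint = [min(h, pressure) for h in world.shape]  # only what the probe felt
    return {"imprint": imprint, "pressure": pressure}  # action context retained

w = World(shape=(0.2, 0.7, 0.4))
print(model_builder(w))            # explicit model: costly but legible
print(reactive_agent(w))           # behaviour only: cheap but no stored model
print(active_agent(w, pressure=0.5))  # partial imprint + knowledge of the action
```

The point of the sketch is only the contrast: (1) stores everything, (2) stores nothing, (3) stores a partial trace together with how it was obtained.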
But why do I think that the concept of internal representations is important? Because I think that internal simulation (as Germund Hesslow called it, otherwise generally referred to as imagination, simulation, etc) is central to many, if not all, cognitive tasks, which in turn is dependent on previous experience and internal knowledge: i.e. internal representations.
Finally, I would very much like to hear anybody's views on this topic - and my use of this suspect metaphor. I'm not aware of anyone else having used a similar metaphor (as undoubtedly they have, but then I haven't looked), so would appreciate it if someone could tell me if they have heard of one. I think I could do with reading the work of someone who's formulated this properly :-)
Tuesday, January 08, 2008
Getting published and dealing with rejection...
Just came across a post from SCLin's neuroscience blog listing a few links to resources giving hints and tips on how to write a cover letter to accompany a journal paper submission, how to review papers, and my two favorites: how to deal with rejection of a paper (part 1 and part 2).
Link to post
Monday, January 07, 2008
The simple brain?
There's a nice post up at Thinking as a Hobby on the possibility that the brain (actually more the neocortex than the brain as a whole) just isn't as complex as it may appear. Basically, the idea is that while the evolutionarily older parts of the brain are specialised, the newer neocortex is more uniform and generally generic. The question then arises as to how this generic structure gives rise to functions such as language, which only appears in the species with the most developed neocortex (i.e. us humans). The solutions are either that the assumption of uniformity is wrong, or that emergence plays a huge role (where simple rules give rise to complex behaviour). A very nice thought-provoking post.
I was thinking though, from the complex-behaviour-as-emergent standpoint above, that this wouldn't be enough to explain something like language. The environment would have to play a very large role in proceedings (compared to it simply being a matter of brain complexity): specifically, inter-human interaction, or more broadly, societies, would have to be taken into account. Essentially, the complexity of behaviour that undoubtedly exists comes from the external world rather than the internal 'rules'. The consequence of this would be that to study the emergence of language (for example), inter-agent interaction would be just as, if not more, important than the internal complexity of an individual agent. So instead of the relatively simple neocortex making things easier in terms of describing complex behaviour such as language, it would actually become more difficult, since there would be multiple concurrent levels of analysis.
Just a thought, mind, I could be missing the point :-)
Back to the beginning though, this post by Derek James is very interesting.
Thursday, January 03, 2008
The Week of Science returns!
Happy New Year to all, may it bring good fortune and happiness to all :-)
After a resounding success last year, Just Science Week is returning at the beginning of February this year (4th - 8th). It's two days shorter this year, covering Monday to Friday, to ensure that there isn't a drop-off in posts over the weekend as before. I will be participating again, and hope to post some decent material. The aim for the participants is to post at least one post a day on scientific topics only - and not to post on non-science issues for that week. All of the posts will be aggregated, so a single feed will let you keep up with everything. What's the difference between science and non-science?
What counts as science and non-science? A post which discusses the political implications of science is not science; a post which discusses the cognitive psychology or neuroscience of individual political orientation is science. A post which uses a reference to Creationism before elucidating a biological topic is science; a post which discusses the social and religious dynamics of Creationism is not.
Taken from Just Science 2008 website - visit to sign up!