With robotic devices increasingly prevalent in 'real life', and the prospect of ever more autonomous robots, there is a need for legislation to be updated to reflect the changing conditions. An article I read on Wired a few days ago reminded me of an EU project that started last year: RoboLaw, which has the aim of exploring how emerging robotics technologies influence and are affected by the law (see also this, which I've just come across). This issue is brought into sharper focus when something goes wrong, and the question of responsibility arises. For instance, there's been a lot going around in recent months on the autonomous car efforts of Google and others. If there were to be a crash, who would take the blame? Would it be the manufacturer in the obvious absence of driver error, or perhaps those responsible for road/signalling maintenance? Indeed, would the technically possible autonomy be scaled back to maintain direct human oversight, in order to mitigate the potential legal minefield? While there have been some legislative attempts to address this, there is clearly some way to go.
Individual researchers actually working on the supporting technologies have increasingly considered the implications, and potential implications, of their own field of research, typically focusing on the ethics involved in the (proposed) applications. Indeed, a couple of years ago now, I wrote something on the consequences of my ongoing work on memory in the context of human-robot interaction, though my effort was more directed at the possible legal implications of memory system technologies than ethics. The paper considered the implications of new computational means of providing the function of memory (specifically the use of sub-symbolic networks). It specifically proposed that as a consequence of the details of the technologies potentially used, current privacy legislation may not be suitable to account for new generations of autonomous social robots.
In my view, this is a small example of a wider need to consider the actual technologies in (proposed) use when considering legislative requirements - hence a need for the scientific/engineering community to engage with the legislative process (and vice versa). However, in order for this to be effected, I feel that it would be beneficial to have a common perspective or approach on the part of the researchers, which even if not unified is at least coherent. With one of the main deliverables of the project intended to be a white paper recommending regulatory guidelines to European legislators, RoboLaw has the potential to help provide this.
Showing posts with label Comment. Show all posts
Tuesday, March 19, 2013
Thursday, July 12, 2012
Uncertainty in Science
Just came across an interesting article on Wired: Science today, written by Stuart Firestein, a biological scientist who is active in the public understanding of science. It's on how uncertainty and doubt are actually good things, fundamental drivers of the scientific method, and not something to be brushed under the carpet or made out to indicate complete certainty or ignorance, as they frequently are by politicians and activists on all sides of a politically charged argument, or jumped on by the media (e.g. the MMR jab fiasco a few years ago).
Taking the hot topic (hehe...) of global warming as an example, Firestein notes that the lack of clear-cut, unambiguous answers isn't an indication that science cannot provide anything of utility in the debate, and should not be discarded as a result: "Revision is a victory in science, and that is precisely what makes it so powerful." If science is the search for knowledge, then what is often overlooked is that newly acquired knowledge is a means for forming and framing new questions; each step is just that, and not a certain end in itself.
A little extract:
"We live in a complex world that depends on sophisticated scientific knowledge. That knowledge isn’t perfect and we must learn to abide by some ignorance and appreciate that while science is not complete it remains the single best method humans have ever devised for empirically understanding the way things work."
From http://www.wired.com/wiredscience/2012/07/firestein-science-doubt/
Sunday, January 15, 2012
Book: "Dreaming in Code"
[Image: Cover of "Dreaming in Code", by Scott Rosenberg]
In going through the development (evolution is perhaps a better word) of an open-source software project - a PIM called Chandler - a number of general software development issues are encountered and discussed in the context of historical software engineering developments. This is what I particularly liked about it: the central story of a particular software development project, with deeper consideration and historical context for some of the central features of its ongoing work, both technical and personal. And this is what seems to persistently emerge as a fundamental confounding factor of software development: the computer is as precise as could be wished for, but those telling it what to do are decidedly not so (an observation familiar to anyone who has written, or attempted to write, any piece of code).
Having said this, unlike some other reviews, I found the chapter dedicated to methods (chapter 9) to be a bit too contrived and preachy - particularly given the easy-going style of the rest of the book. This isn't to say the subject matter is out of place. I think it's no coincidence that this chapter seems to depart furthest from the Chandler story narrative, a form that suited the rest of the book so well. I suppose it depends on your perspective: I enjoyed the Chandler story, with it providing the context for forays into general software engineering principles and history, so when this context fell away in chapter 9, I was left missing it.
So, would I recommend reading this? I think it depends (which is a cop-out of course...). I'm technically minded, I write code regularly, but I'm not a software engineer, and I've had no real prior knowledge of how big software projects are run. As such, the book provides a little glimpse of this. I think that while it would be too much to say that this book provides full coverage of software development techniques by means of little examples, it does provide enough that I learned something from it. If, however, you have already been embedded within a formal software development setting, then I don't think you will gain anything new by reading this - apart from confirmation that software development (closed-source, open-source or otherwise) is hard work, with many pitfalls, and that having more money to do something actually seems to make it even more difficult. This of course could be reason enough though! For me, I enjoyed reading the book, for the little asides to the main story as much as anything. However, what is obviously minimally required is some form of interest in the way computers are cajoled to do our (collective) bidding.
It seems as though, since the book was written, Chandler has fallen to the very criticisms that the book makes of closed-source, commercial software developments: closed development, lack of user testing, and changing specifications (among others). However, despite the departure of Mitch Kapor, seemingly the book's main protagonist, Chandler does still seem to be going, having reached the 1.0 release that was the focus of so much discussion in the book, though with not much activity since (e.g. planning pages on the project wiki last updated in 2008, and the project blog not updated since 2009...).
Scott Rosenberg (2007), "Dreaming in Code", Crown Publishing, ISBN: 978-1400082469. Website accompanying the book is here.
Wednesday, December 28, 2011
Software Quality
I do not regard myself as a software engineer. Sure, I 'do' programming (usually with some C-based language if you're interested), but such programming is to implement some functionality that I want or need for some purpose - to implement a model or a data processing tool for instance - which has been for my own use. As such, there has been minimal effort put into inherent extensibility or modularisation of my code. I've always paid relatively good attention to code commenting and some flexibility of use (personal, that is), but that mainly stems from my inability to remember implementational details rather than some grand vision of code reuse. I've also never really been bothered about efficiency or speed. I think this stems from the fact that as an undergrad I did some programming with some fairly limited processors, on which resources (particularly memory) were limited and had to be at least considered, if not actively managed - moving from those things to a proper desktop led me to stop worrying about resources, to the extent that I stopped even considering them. I would, and do, implement code in the way that it looked like it worked in my head, not in a way that would be particularly efficient either in terms of computational resources or (my) time. A side-benefit, given the aforementioned lack of memory on my part, was that I could look back on my code and almost reconstruct my line of thought. In summary, I was more a hacker than a crafter of code - it was more important what the stuff did and how it corresponded to the things swirling in my head, than how it looked on the screen, or how it actually worked. And I thought that this was all that was required, and was to be honest a little smug in thinking so.
However, in more recent times, my programming has had a number of additional constraints imposed. These are probably so fundamental to most normal people (by which I mean most programmers and software engineers) that me mentioning them may border on the ridiculous, but I think that they are probably not to your bog-standard academic-type, like me. It basically boils down to a simple fact: that there are other people out there, and that under certain circumstances they might actually need to use or modify your code, or collaborate with you on it. This process has only relatively recently begun to directly affect me and my work, since a little way into starting as part of the ALIZ-E project, but it's one that I am increasingly having to take into account, and one that I seemingly find myself a little reluctant to embrace. Basically, the idea is that putting effort into those aspects of software development that are not directly related to the desired functionality, but to more general infrastructure and usability from the perspective of the programmer (as well as the notional user of course), is very beneficial in the long run. Put like that, it seems sensible. But it's not particularly obvious when you're actually trying to throw together something that works in the face of a deadline. At least it wasn't to me.
Anyway, this issue was raised in my mind over the past week or two for two reasons: firstly, I noticed that a project colleague has put up (or updated?) a page on his website about software quality; secondly, I just happen to be reading a book (given to me by some very dear friends) about programming (which is far more interesting than it may sound, but that is I think for another post when I've finished reading the book...).
This project colleague is Marc Schroder, at DFKI in Germany, and this is the page I am referring to. The first time I met him was at the first ALIZ-E software integration meeting, at which he kept talking about good programming practice, and good practice regarding implementation structures and methods. To be perfectly honest, I viewed a lot of this as programming idealism, distracting from the task at hand: implementing a (complex) system that would do what we wanted. Speaking to him during one of the meeting intervals, I made the point that a lot of academic programmers were hackers like me - good at making code that did what we wanted, but not necessarily good software engineers. I have no doubt that he'd heard and experienced this before, and indeed he was probably rather exasperated at the situation with the academic-hacker-types like me. He, on the other hand, has a good deal of experience not just with good quality programming, but also with multi-party project management, of which I had no experience. So, he knows what is involved in making software actually work in a collaborative environment in which the participants interact remotely.
From the description on his software quality page, and the various evangelist-style talks he's given us in the project on good coding practice (and I don't mean this in a negative manner - it's just descriptive of the style in which academic-types speak to each other on subject matters that deeply interest them...), I have subsequently expanded my list of coding requirements. Or at least, I've added them to my desiderata, and am trying to actually incorporate them into the stuff I normally do. The list is in rough order of importance for me at the moment, and misses things out (from Marc's list at least), probably because I don't (yet!?) understand their necessity.
[Image: Sometimes I feel that this is what programming actually is...]
- Error handling - as in proper error handling, returning descriptions of what actually went wrong so that you can figure it out, not just some way of not crashing the entire computer when something doesn't go to plan...
- Test-driven development - I think I understand the main principles involved, but to be honest, the practical details of how this should actually be approached are still tantalisingly out of reach... The idea of it as something like a living specification that keeps up to date with the code, serves as an actually useful tool in verifying updates, and replaces (to a certain extent) an external body of documentation, seems like a good idea all round.
- Refactoring - now this is something I have actually been doing for a while, though not for the efficiency aspects: more for matching the operation of the code to my internal cognitive machinations, and for some (limited) future flexibility (so I can easily change parameters and rerun, for example).
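As a toy illustration of the first two points, here's roughly how I picture descriptive error handling and a test-as-living-specification fitting together. This is my own sketch in Python, not anything from Marc's page - the parser, its message wording, and the input format are all invented for the example:

```python
# A hypothetical parser that fails with a description of what went wrong,
# rather than silently returning a default or crashing opaquely.
def parse_sensor_reading(line):
    """Parse a 'name:value' sensor line into (name, float value)."""
    if ":" not in line:
        raise ValueError(f"expected 'name:value', got {line!r}")
    name, _, raw = line.partition(":")
    try:
        return name.strip(), float(raw)
    except ValueError:
        raise ValueError(f"sensor {name.strip()!r}: non-numeric value {raw.strip()!r}")

# The test doubles as a living specification of the accepted format:
# if the format changes, this is the first thing that breaks.
def test_parse_sensor_reading():
    assert parse_sensor_reading("temp: 21.5") == ("temp", 21.5)
    try:
        parse_sensor_reading("garbage")
        assert False, "malformed input should raise"
    except ValueError as err:
        assert "expected 'name:value'" in str(err)

test_parse_sensor_reading()
```

The point being that when something does go wrong six months later, the error message tells you which line and which value were at fault, rather than leaving you to rediscover the format by reading the code.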
I'd say that I now understand the necessity for these three things at least. And that I know that I need to apply it to my work as a fundamental feature rather than a mere after-thought. But I am also aware that this process has just begun for me, and that there is far more that I need to learn about testing regimes, interface definitions, etc, that are as yet just too unfamiliar to me. And yet there remains this vestigial resistance to such practice in favour of the hack-together-for-application methodology...
Monday, December 19, 2011
Me: films and robots
My first post in over a year, so let's start with something silly :-)
I'm sure that anyone who is working, or has worked, with robots has been influenced in some way (if not inspired) by some depiction in a work of fiction - most likely film - whether they choose to admit it or not (those who don't are probably lying). I'm quite happy to admit to this - and can point to two such intelligent robotic devices. What precisely about them gave rise to this influence I don't know - and I don't really want to deconstruct it in case it turns out to be ridiculous and/or trivial - but here they are nonetheless for you to assess.
The first one is the amazing - and actually fairly realistic (in terms of achievable mechanical complexity) - Johnny 5 from Short Circuit. I can't really say enough about this dude - I did really want one of the little mini-me's from the second film though! I can't really remember the first time I watched this, but I do know that over the many occasions I've watched the films I'm still drawn to it, despite the occasionally dodgy special effects (I'm thinking of the dancing)...
[Image: Johnny 5 is alive!]
The second one is the intelligent space-ship/robot arm thing in Flight of the Navigator - 'Max'. I'm not entirely sure whether this is supposed to be an AI robot, or an alien-being-controlling-a-robot, but any device that can fly a spaceship, go manic, and time-travel is alright in my book. The single eye-on-an-arm thing was a bit strange, though even with such a fairly simple setup, the array of emotional expression was really quite impressive.
(I've only just realised that both of these films were released in '86 - this is just coincidence, as I watched both on TV a number of years afterwards - I didn't watch them in the cinema or anything). I'm not sure these would be the choices of most people - and I'm not going to bring age into it - but they're mine :-)
/rant
Having said all that though, there is a bit of a cautionary note I think. As much as the portrayal of the robot in science fiction is of course hugely beneficial in terms of building and maintaining interest in these synthetic devices, I do wonder sometimes whether this actually has the long-term reverse effect: building expectations of what such devices can do, not just beyond that which is currently possible, but beyond that which is even probable. In the end, would this not just turn people off when they realise that the real state-of-the-art is actually fairly mundane? Or when what people like me think of as really quite exciting developments pale in comparison with the vividly recreated imaginations of script writers and graphic designers? Surely such levels of unfulfilled expectation will serve as a damper on funding initiatives (I'm thinking of potential career prospects here...!) - "but what you are trying to do isn't really exciting, they were talking about it in the '70s/'80s/'90s/etc...". Either that, or the reality drifts so far from expectation that most people don't understand what's going on, and you end up in the same place. But that is perhaps for another discussion, on public engagement with science...
Or maybe I'm reading far too much into all of this, and should really just sit back, relax, and enjoy the view...
/endrant
Thursday, October 14, 2010
On Memory, from St. Augustine
I've been sitting on this quote for a while. I somehow came across it a few years ago (though I can no longer remember how I first found it), and used part of it as the opening quote of my thesis (in that quest to find a really old, obscure, but relevant way of opening the first chapter - I guess as a means of 'showing off' your supposed breadth of research...):
"There are all things preserved distinctly and under general heads, each having entered by its own avenue, as light, and all colors and forms of bodies, by the eyes; by the ears all sorts of sounds; all smells by the avenue of the nostrils; all tastes by the mouth; and by the sensation of the whole body, what is hard or soft, hot or cold, smooth or rugged, heavy or light, either outwardly or inwardly to the body. All these doth that great harbor of the memory receive in her numberless secret and inexpressible windings, to be forthcoming, and brought out at need; each entering in by his own gate, and there laid up. Nor yet do the things themselves enter in; only the images of the things perceived, are there in readiness, for thought to recall. Which images, how they are formed, who can tell, though it doth plainly appear by which each hath been brought in and stored up? For even while I dwell in darkness and silence, in my memory I can produce colors, if I will, and discern betwixt black and white, and what others I will. Nor yet do sounds break in and disturb the image drawn in by my eyes which I am reviewing, though they also are there, lying dormant, and laid up, as it were, apart. For these too I call for, and forthwith they appear."
From the 1976 Translation version of "The Confessions of St. Augustine" (398 A.D.), Book 10, by Edward B. Pusey.
I've not found a more poetic description of the introspective function of memory, which is of course as relevant now as it was when written in the 4th century A.D.
Thursday, August 20, 2009
A brief update - and Twittering
I've not posted on here for a while now. I'm in the process of writing up my PhD thesis now, and additional writing for this blog is beyond me at the moment. I don't, however, want this blog to die as I wish to continue using it to post notes and thoughts as I (hopefully at least...) begin a career in academia.
Incidentally, I have recently given in to a certain amount of hype, and a personal recommendation, and signed up to Twitter (with the alias one_paulie). Now, at this moment in time, I don't see it being a particularly useful tool (or even a very interesting one), but there was one potential use which I thought deserved further investigation: the use of Twitter as a live-update conference tool. Not been to a conference yet where I've noticed its use, but I'm going to two in September (ECAL and ICAIS if anyone's interested, and wants to have a chat with me there?) and want to see if it actually has some practical use.
I've put a sidebar (on the right hand side of this page) with my Twitter feed; it'll most likely get more activity than this blog for the time being (though probably mostly focused on conference and/or paper related activities)...
Monday, February 09, 2009
Kary Mullis: Celebrating the scientific experiment
Over on TED, I've just watched a talk given by Kary Mullis (a Nobel Prize winner for his invention of the PCR technique) on the history of the scientific experiment. The video was posted only in January, but was apparently filmed some time earlier. It's a fairly rambling talk, but he tells some interesting and funny stories, from history-discussing surfers to space-faring frogs (well, nearly), and seems generally critical of the state of reporting in modern science.
Sunday, February 08, 2009
Agents or Programs: a definition of "Agent"
Another of those terms around which care is needed is 'agent'. It's one of those words which is defined in almost every paper in which it is used. This ensures that its use is clear in each instance, but also results in a multitude of definitions, most of which bear great similarity whilst maintaining some fundamental differences. One attempt to provide a unified definition, and a general taxonomy to encompass the different contexts in which the term 'agent' is used, is provided by Franklin and Graesser (1996). In a short review of a number of contemporary agents in research and development use, a general definition is constructed and then extended.
From the review of the different agents in use at the time, a number of general characteristics become apparent. These are compiled into the following definition proposed by the authors:
"An Autonomous Agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future."

One immediate point to notice is the dependence of this definition on the concept of autonomy: agency inherently requires the system in question to be autonomous (with all the vagaries that this brings...). Furthermore, agents are situated in environments: being both affected by them, and having an effect on them. Without this flow of information, there is no agent: using one of the authors' examples, if an agent equipped with light sensors is placed in an environment with no light, then it cannot be considered to be an agent. Another important point that should be mentioned is the "...over time..." part: this insists on temporal persistence, i.e. the ability to sense, effect, and thereby influence what is sensed in the future. Finally, the point about "...its own agenda" follows on from the concern over autonomy: the agenda need not be a single task which needs fulfilling, or even a particularly well-defined one (consider the artificial life goal of 'survival').
An immediate application of this definition is to distinguish between software agents and general computer programs. A conventional program (a payroll program in the authors' example) might have a task and an environment, but there is no persistence (the program's output does not affect its subsequent inputs) and no autonomy (or at least it is to be hoped that there isn't!). While this distinction can be made, the definition is clearly fairly broad-brush. In the remainder of the paper, the authors expand upon it to allow a classification of different types of agent within this overarching definition.
Agents may be classed in a variety of ways: by functionality, architecture, sensors, effectors, and so on. As long as the four properties previously mentioned are satisfied, any additional properties may be used to delineate types. This, however, does not provide a very structured means of classification, as it is limited only by the imaginable agent properties. The authors instead propose a hybrid between a biologically inspired taxonomic tree and a mathematical binary classification tree. The former classifies the major classes of agents (task-specific agents, for example); further delineations are then provided by a binary scheme, for example planning vs non-planning, learning vs non-learning, mobile vs non-mobile, etc. This type of classification can then be defined as a topological space, which opens up further classification methods.
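A minimal sketch of how such a binary classification scheme might look in Python - the specific property names below are my illustrative choices, not the paper's; each agent maps to a leaf of a binary tree given by its tuple of properties:

```python
# Each agent is tagged with binary properties; its classification is the
# ordered tuple of those properties, i.e. a leaf of a binary tree.
PROPERTIES = ("planning", "learning", "mobile")  # illustrative choices

def classify(agent_traits):
    """Map an agent's trait set to its position in the binary taxonomy."""
    return tuple(prop in agent_traits for prop in PROPERTIES)

# Hypothetical example agents:
thermostat = set()                                # reactive, fixed, stationary
mail_filter = {"learning"}
mars_rover = {"planning", "learning", "mobile"}

print(classify(thermostat))   # (False, False, False)
print(classify(mail_filter))  # (False, True, False)
print(classify(mars_rover))   # (True, True, True)
```

The arbitrariness the paper's scheme inherits is visible here: the taxonomy is entirely determined by whichever properties one chooses to put in the tuple.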
In summary, the authors have provided a clear delineation between software agents and programs, and have described a possible taxonomy for classifying different types of autonomous agent. The taxonomy in itself doesn't seem to settle much, since the categories for classification are arbitrarily chosen, and may be (even among the examples chosen in the paper) contentious in themselves. However, the general definition of "agent" extracted from the examples can serve as a basic guide for discussion - and it is interesting to note that this definition inherently assumes autonomy: for an agent to be considered an agent, it must to some degree be autonomous (and that's a whole separate question...).
Stan Franklin and Art Graesser (1996). "Is it an Agent, or just a Program? A Taxonomy for Autonomous Agents". Third International Workshop on Agent Theories, Architectures, and Languages.
Friday, February 06, 2009
Encephalon #63
Of Two Minds has just put up the 63rd edition of Encephalon. A nice round-up as usual, with congratulations of course going to Omnibrain on his good fortune... :-p My three highlights as usual:
- Brain Blogger brings us a short piece on serotonin: specifically, how the genetic basis of the serotonin system, and the natural variation in it resulting from individual genetic differences, leads to different susceptibilities to anxiety, depression, etc. It may also be responsible for certain differences in personality (which seems sensible to me, though I imagine there are many drivers of one's personality). The interesting question then arises of whether medicine should be used to make up for some of these individual differences - a fascinating question, whose answer has wide-ranging consequences.
- Are birth defects the result of traumas to which the mother has been subjected? This question, with its seemingly obvious negative answer, is tackled by Vaughan over at Mind Hacks. The theory, known as 'maternal impression', was widespread in the 19th century, but fell out of favour towards its end. In the second half of the 20th century, however, a number of studies showed that severe maternal stress does have an effect on children's brain development. Vaughan's article is well worth the read.
- And finally, the question of free will! PsyBlog reviews three fascinating studies on how manipulating people's views on free will (by getting them to read statements either for or against the concept) modified their behaviour. A decreased belief in free will led to people being less helpful towards others, and even increased aggression - which suggests that a belief in free will is beneficial to society. The concepts of free will and determinism (as promoted by science) must nevertheless be reconciled, since our society is fundamentally based on the former, and increasingly on the latter.
Tuesday, August 19, 2008
Encephalon #52
The 52nd issue of Encephalon has now been put up at Ouroboros. With a nice Q&A layout, it covers the usual wide range of subjects, from neurogenesis to grannies, and from perception to culture. A few of the most interesting, in my view:
- From Neurophilosophy, a review of a paper on brain plasticity, particularly the visual cortex: visual experience can modulate the production of proteins which can influence plasticity along the visual pathway.
- From Neuroscientifically Challenged comes a look at the reason for sleep, and how the humble fruit fly has helped to shed some light on the problem.
- Finally, from Neuroanthropology comes a lengthy review of a paper laying the foundation of "cultural neuroscience": the influence of cultural and social factors on neural mechanisms, and how this may be taken into account in neuroimaging studies. My first thought on reading this was that it would be of more immediate concern to account for individual differences in bodily morphology and personal history - these, I would suggest, have a more direct influence on development, and hence on present neural mechanisms, than cultural influences, even though the latter are, as this paper shows, clearly present. But then again, I'm not a neuroscientist, and haven't yet studied the paper in great detail, so I may have missed something.
Labels:
Blogging,
Cognition,
Comment,
Embodiment,
Encephalon,
Links
Thursday, August 07, 2008
Sense about Science
Something I blogged about over a year ago: I've just noticed that the Sense about Science campaign (a charitable trust "promoting good science and evidence in public debates") has a leaflet "...to help people to query the status of science and research reported in the media". They've also produced a button which links to it. Hopefully, it will help prevent this sort of thing (from Jan 2007)...
Hemispherical 'electronic eye' - and some implications...
The BBC News website yesterday reported on the development of a camera with a hemispheric detection surface, rather than the traditional 2D array. The paper on which this article is based (in Nature - link) proposes that this new technology will enable devices with "...a wide field of view and low aberrations with simple, few-component imaging optics", including bio-inspired devices for prosthetics purposes, and biological systems monitoring. This figure from the paper gives a very nice overview of the construction and structural properties of the system. Note that the individual sensory detectors are the same shape throughout - it is the interconnections between them which are modified. Abstract:
The human eye is a remarkable imaging device, with many attractive design features. Prominent among these is a hemispherical detector geometry, similar to that found in many other biological systems, that enables a wide field of view and low aberrations with simple, few-component imaging optics. This type of configuration is extremely difficult to achieve using established optoelectronics technologies, owing to the intrinsically planar nature of the patterning, deposition, etching, materials growth and doping methods that exist for fabricating such systems. Here we report strategies that avoid these limitations, and implement them to yield high-performance, hemispherical electronic eye cameras based on single-crystalline silicon. The approach uses wafer-scale optoelectronics formed in unusual, two-dimensionally compressible configurations and elastomeric transfer elements capable of transforming the planar layouts in which the systems are initially fabricated into hemispherical geometries for their final implementation. In a general sense, these methods, taken together with our theoretical analyses of their associated mechanics, provide practical routes for integrating well-developed planar device technologies onto the surfaces of complex curvilinear objects, suitable for diverse applications that cannot be addressed by conventional means.
From the cognitive/developmental robotics point of view, this sort of sensory capability has (to my mind) some pretty useful implications. Given that the morphology of the robots concerned - including the morphology of their sensory systems - takes central importance in the development (or learning) that the robot may perform, these more 'biologically plausible' shapes may allow better comparisons to be made between robotic agent models and animals. Furthermore, from the morphological computation point of view (e.g.
here, and here), this sort of sensory morphology may remove the need for a layer of image pre-processing - motion parallax for example. As seen in flighted insects, the shape of the eye and arrangements of the individual visual detectors upon it remove the need for complex transformations when the insect is flying through an environment - an example of how morphology reduces 'computational load'. If effects similar to these can be taken advantage of in cognitive and developmental robotics research, then a greater understanding and functionality may be gained. The development of this type of camera may be an additional step in this direction.
Tuesday, August 05, 2008
On Grandmother Cells
Grandmother cells - single neurons held to respond to one highly specific percept, such as one's grandmother - were originally proposed at the opposite end of the spectrum from ensemble or population coding, where it is the pattern of activity across a group of neurons that codes for a sensory percept. Although the term was coined by Jerry Lettvin in the late 1960s, after which its use quickly proliferated, the idea had been proposed as a scientific theory a number of years earlier by the Polish neurophysiologist Jerzy Konorski.
In his work "Integrative Activity of the Brain" (1967), Konorski predicted the existence of individual neurons sensitive to complex sensory stimuli such as faces, hands, and emotional expressions, naming them "gnostic" neurons. These were proposed to be located in specific areas of the cortex (in "gnostic fields"), such as the ventral temporal cortex (for the face field) and the posterior parietal cortex (for space fields). These predictions have proven reasonably similar to current proposals for the extra-striate visual cortex in monkeys.
Naturally, however, Konorski's work was influenced by that of others. Firstly, in the early 1960s, Hubel and Wiesel demonstrated the hierarchical processing of sensory information in the geniculo-striate system: from simple receptive fields up to the ability to generalise selectively across the retina. Secondly, there was research by Pribram and Mishkin on what was then known as the association cortex: lesions of the inferior temporal cortex produced specific visual cognition impairments in monkeys.
These two bodies of evidence, along with his own familiarity with the various agnosias that follow human cortical lesions, led to Konorski's proposal of gnostic cells as a means of accounting for these cognitive impairments. Despite the publication of these ideas, and the subsequent coining of the term 'grandmother cell', for at least a decade afterwards gnostic cells were taken up only in the learning literature, not the perception literature. The term has since, however, seen greater use in general textbooks and the pattern-recognition literature.
Two features of the gnostic cells have long histories in neuroscience research. Firstly, they are examples of labeled line coding. Labeled line coding refers to a neuron property that allows it to code a particular stimulus property, such as line orientation in the visual field. Secondly, gnostic cells were held to be at the top of a 'hierarchy of increasing convergence'. This concept of convergence hierarchies had, for example, been proposed by William James (the pontifical cell), C.S. Sherrington (in "Man on His Nature", 1940), and Barlow (with the slightly modified concept of cardinal cells, 1972).
In conclusion, the paper notes that the idea of convergence of neural input onto one cell seems to have arisen independently a number of times - and that contemporary human brain imaging has revealed cortical regions (e.g. the inferior temporal cortex) that resemble the gnostic fields proposed by Konorski.
As an example of more recent research in this converging-hierarchy vein, Quiroga et al. ("Sparse but not 'grandmother-cell' coding in the medial temporal lobe", TICS, 12(3), 2008) - the work behind the somewhat infamous 'Jennifer Aniston cells' - identified very sparse coding of visual percepts in the medial temporal lobe. That paper, though, presented a number of arguments for why these cannot be considered grandmother cells - a view which I think may be widespread: sparse encoding, but not convergence onto a single cell.
Gross, C. (2002). Genealogy of the "Grandmother Cell". The Neuroscientist, 8(5), 512-518.
Friday, April 04, 2008
Science and Engineering as Art
From the BBC news website (again...) comes a short opinion piece by Mark Miodownik, head of the Materials Research Group at King's College London, who (as seems all too common these days) bemoans the lack of development of young scientists, researchers and engineers in present-day Britain. Pointing, as many others do, towards flaws in the education system, and a lack of willingness among commercial enterprises to put money into keeping people in science (rather than letting them be tempted by the larger salaries available in other lines of work), he makes a comparison between science and art which is new to me, but which I rather like:
"Science is like poetry in this respect: it is an expression of something sublime. Engineering likewise is an expression of human emotions and passions - cars, hip replacements and even washing machines are as much expressions of our soul as paintings, literature and music."
Thursday, January 31, 2008
Links, and Technorati search...
A couple of links which I've found quite interesting:
- A blog called "Reality Apologetics": written by Jon Lawhead (a recent philosophy graduate), it covers a wide range of subjects, though focusing primarily on philosophy of mind. I've only just found it, but there appears to be some interesting stuff.
- A post entitled "Robot thoughts" at Saint Gasoline: a quick review of limitations in artificial consciousness, although the concepts of consciousness and thought seem to be confused, followed by a link to the widely reported story on Floreano's evolving robotic colonies (and the emergence of liars and altruists) - actually, I found the resulting comments more interesting than the post itself...
- A post called "Nature inspires creepy robot swarms" at Environmental Graffiti which seems to indicate that most robotics work is aimed at producing agents capable of world domination. While I feel it was over-the-top (although maybe I missed the well hidden sarcasm), it does raise some interesting issues over the public perception of the research field in which I find myself - 'intelligent' robotics. Having said that, the post was a good read :-)
And now for a short rant... As some may have noticed, I have a Technorati search chart on the right hand side of my blog. When I first put it in, it provided me with a quick and easy way to keep track of some very interesting blog posts. I even had the pleasure of watching the number of "cognitive robotics" posts linking here increase after my posts on the definition of cognitive robotics. However, in recent months (since last November), the system has let me down. It no longer updates the graph (an annoyance since I quite like the visual aspect), and most importantly, the search seems to be flooded with posts from dud/porn blogs. I suppose that's not Technorati's fault - no, wait, it is: my search term is "cognitive robotics", and I'd expect a search engine to work moderately well at finding relevant posts (the occasional dud may slip through the net, but this is just silly). So, I'm about to remove the once mighty keyword chart. Apologies, rant over...
UPDATE 01/02/08: I've tried to remove the keyword chart, but Blogger won't let me. Grr...
Monday, January 07, 2008
The simple brain?
There's a nice post up at Thinking as a Hobby on the possibility that the brain (or rather the neocortex, more than the brain as a whole) just isn't as complex as it may appear. Basically, the idea is that while the evolutionarily older parts of the brain are specialised, the newer neocortex is more uniform and generally generic. The question then arises as to how this generic structure gives rise to functions such as language, which only appears in the species with the most developed neocortex (i.e. us humans). The solutions are either that the assumption of uniformity is wrong, or that emergence plays a huge role (simple rules giving rise to complex behaviour). A very thought-provoking post.
I was thinking though, from the standpoint of complex behaviour as emergent, that this alone wouldn't be enough to explain something like language. The environment would have to play a dominant role in proceedings (rather than it simply being a matter of brain complexity): specifically, inter-human interaction, or more broadly society, would have to be taken into account. Essentially, the complexity of behaviour that undoubtedly exists would come from the external world rather than the internal 'rules'. The consequence is that to study the emergence of language (for example), inter-agent interaction would be just as important as, if not more important than, the internal complexity of an individual agent. So instead of a relatively simple neocortex making it easier to describe complex behaviour such as language, it would actually become more difficult, since there would be multiple concurrent levels of analysis.
Just a thought, mind, I could be missing the point :-)
Back to the beginning though, this post by Derek James is very interesting.
Monday, October 22, 2007
On Robots and Psychology
Valentino Braitenberg's Vehicles are often the first lesson in (cognitive) robotics courses, as a prime example of the interaction between an agent and its environment, and of how complex behaviour does not necessarily imply a complex control architecture (alongside Rodney Brooks' work, of course). The fourteen vehicles (or agents) of increasing complexity demonstrate an 'evolution' of behaviours, dependent as much on the environment as on the morphology and control architecture of the agents themselves. I think the following paragraphs, quoted from the introduction to the book (full reference at the end), illustrate how this examination may lead to insight into the biological organisms that inspired them (whilst not claiming to 'solve' any of the problems):
I have been dealing for many years with certain structures within animal brains that seemed to be interpretable as pieces of computing machinery because of their simplicity and/or regularity. Much of this work is only interesting if you are yourself involved in it. At times, though, in the back of my mind, while I was counting fibres in the visual ganglia of the fly or synapses in the cerebral cortex of the mouse, I felt knots untie, distinctions dissolve, difficulties disappear, difficulties I had experienced much earlier when I still held my first naive philosophical approach to the problem of the mind. This process of purification has been, over the years, a delightful experience. The text I want you to read is designed to convey some of this to you, if you are prepared to follow me not through a world of real brains but through a toy world that we create together.
We will talk only about machines with very simple internal structures, too simple in fact to be interesting from the point of view of mechanical or electrical engineering. Interest arises, rather, when we look at these machines or vehicles as if they were animals in a natural environment. We will be tempted, then, to use psychological language in describing their behaviour. And yet we know very well that there is nothing in these vehicles that we have not put in ourselves. This will be an interesting educational game.
I like this, as I feel that it represents in some way (albeit more poetically than is generally stated) one of the aims of cognitive robotics as a field: elucidating issues in psychology/neuroscience, via a process (which I have vastly oversimplified here) of model creation on the basis of some biological system, implementation of said model (embodiment in the real world, or simulation thereof), and then evaluation of the resulting system against the original biological system. Hence the emphasis on behaviour - as a means of performing this comparison (since the 'computational substrate' is so obviously different) - which leads to the field of Artificial Ethology.
Ref: Valentino Braitenberg, "Vehicles: experiments in synthetic psychology", 1984
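For the curious, the simplest of these vehicles can be sketched in a handful of lines. The following Python toy is my own construction, loosely following the book's vehicles 2a and 2b: it shows how merely crossing the sensor-to-motor wiring flips 'fear' of a light source into 'aggression' towards it, with no change to the controller itself.

```python
import math

def sensor_reading(sensor_pos, light_pos):
    """Light intensity decays with distance from the source."""
    dx = light_pos[0] - sensor_pos[0]
    dy = light_pos[1] - sensor_pos[1]
    return 1.0 / (1.0 + math.hypot(dx, dy))

def wheel_speeds(left_sensor, right_sensor, light_pos, crossed):
    """Vehicle 2: each sensor excites one motor.
    Uncrossed wiring (2a) turns the vehicle away from the light ('fear');
    crossed wiring (2b) turns it towards the light ('aggression')."""
    left_in = sensor_reading(left_sensor, light_pos)
    right_in = sensor_reading(right_sensor, light_pos)
    if crossed:
        return right_in, left_in   # left wheel driven by right sensor
    return left_in, right_in

light = (-3.0, 5.0)                          # light ahead and to the left
left_s, right_s = (-1.0, 1.0), (1.0, 1.0)    # sensors on the vehicle's front

# Uncrossed: the left sensor (nearer the light) drives the left wheel
# harder, so the vehicle veers right, away from the light.
l, r = wheel_speeds(left_s, right_s, light, crossed=False)
print(l > r)   # True

# Crossed: the stronger left reading drives the right wheel,
# steering the vehicle towards the light.
l, r = wheel_speeds(left_s, right_s, light, crossed=True)
print(l < r)   # True
```

The 'psychology' is all in the observer: nothing here but two subtractions and a swap, yet the behaviours invite emotional labels - which is precisely Braitenberg's point.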
Monday, October 08, 2007
Hammers and Distributed Memory
In "The Feeling of What Happens" (1999), Antonio Damasio describes, among many other things, the distributed nature of memory. The following quote describes how the concept of an object (in this case a hammer) may be represented in the brain (from page 220 of the book):
The brain forms memories in a highly distributed manner. Take, for instance, the memory of a hammer. There is no single place of our brain where we will find an entry with the word hammer followed by a neat dictionary definition of what a hammer is. Instead, as current evidence suggests, there are a number of records in our brain that correspond to different aspects of our past interaction with hammers: their shape, the typical movement with which we use them, the hand shape and the hand motion required to manipulate the hammer, the result of the action, the word that designates it in whatever many languages we know. These records are dormant, dispositional, and implicit, and they are based on separate neural sites located in separate high-order cortices. The separation is imposed by the design of the brain and by the physical nature of our environment. Appreciating the shape of a hammer visually is different from appreciating its shape by touch; the pattern we use to move the hammer cannot be stored in the same cortex that stores the pattern of its movement as we see it; the phonemes with which we make the word hammer cannot be stored in the same place, either. The spatial separation of the records poses no problem, as it turns out, because when all the records are made explicit in image form they are exhibited in only a few sites and are coordinated in time in such a fashion that all the recorded components appear seamlessly integrated.
If I give you the word hammer and ask you to tell me what hammer means, you come up with a workable definition of the thing, without any difficulty, in no time at all. One basis for the definition is the rapid deployment of a number of explicit mental patterns concerning these varied aspects. Although the memory of separate aspects of our interaction with hammers are kept in separate parts of the brain, in dormant fashion, those different parts are coordinated in terms of their circuitries such that the dormant and implicit records can be turned into explicit albeit sketchy images, rapidly and in close temporal proximity. The availability of all those images allows us, in turn, to create a verbal description of the entity and that serves as a base for the definition.
This 'story' of the recall of an abstract concept (by which I mean something not explicitly tied to a particular sensory experience) describes memory, as it is generally thought of (recalling objects and events), as fully distributed throughout the brain, not localisable to a particular region. The idea (which I believe was first proposed by Lashley in the early 20th century) has gained much supporting empirical evidence in recent years, including that reviewed in last week's series of posts.
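Damasio's architecture - separate dormant records per modality, bound together at recall - can be caricatured in a few lines of Python; the store names and descriptions below are purely illustrative, not taken from the book:

```python
# Toy model: a concept is not stored in one place, but as separate
# aspect records; 'recall' reactivates and assembles them.
memory_stores = {
    "visual":  {"hammer": "heavy head on a wooden handle"},
    "motor":   {"hammer": "gripped and swung in an arc"},
    "lexical": {"hammer": "the word 'hammer'"},
}

def recall(concept):
    """Make each dormant record explicit and bind them together."""
    records = {store: entries[concept]
               for store, entries in memory_stores.items()
               if concept in entries}
    if not records:
        raise KeyError(f"no records for {concept!r}")
    return records

print(recall("hammer")["motor"])  # gripped and swung in an arc
```

The point of the caricature: deleting any one store degrades the description of the hammer, but no single store is the hammer.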
Wednesday, September 12, 2007
Encephalon #31
The thirty-first edition of Encephalon is up at Dr. Deb. It has been up since Monday, but I've only just got round to reading it. A good one as usual, and this one even starts with a picture of Spock... :-) There are three contributions to this edition which I found particularly interesting:
- Vaughan at Mind Hacks brings us a look at the psychology of believing news reports. Case studies are reviewed in support of the view that if false information is presented first it is likely to be believed, even in the face of subsequent corrections - indeed, these corrections may even further embed the initially presented incorrect information (perhaps through the effect by which repeated information is more likely to be believed). The implications of this effect are quite wide-ranging, as pointed out by Vaughan: "As I'm sure these principles are already widely known among government and commercial PR departments, bear them in mind when evaluating public information."
- From Neurobiotaxis comes a review of the "Triune brain" theory, espoused by Paul MacLean. This theory of brain evolution proposes the well-known three "layers" (for want of a better word on my part) of brain organisation, from the evolutionarily primitive structures of the spinal cord and brain stem, through the midbrain structures of the limbic system, to the cerebral cortex, which is proposed as the most advanced structure in evolutionary terms. The post deals with it in terms of affective neuroscience, and asks whether the "triune brain" view is appropriate and relevant as a major theory. The conclusion after very detailed discussion (it took me a while to take in all the information) is essentially no, but it is noted that in the area of emotional behaviour, it is still defended by some (though at times as a useful conceptualisation rather than a prediction-producing model). A good read.
- Finally, a post on synaesthesia by Mo at Neurophilosophy, particularly the recently discovered mirror-touch synaesthesia (MTS). After a review of the neuropsychological basis of synaesthesia (possibly excess cross-modal neural connectivity, or impaired inhibition across regions), MTS is introduced as a condition whose 'carriers' experience tactile sensations when they see another person being touched. Synaesthesia has long been something I have been interested in, partly as an example of how (if one subscribes to the cross-modal connections view) an 'error' in the development of the brain doesn't lead to impaired performance of any sort, and in some cases quite the opposite - demonstrating the amazing flexibility of the system that is the brain.