Thursday, January 10, 2008

Internal Representations: a metaphor

What follows is a brief note on why I don't believe that internal representations necessarily imply complex modelling capabilities, through the use of a (slightly suspect) metaphor. This isn't based on any peer-reviewed work, just some thoughts I jotted down in a rare moment of effective cerebral activity :-)

Consider the following scenario: I have a large flat piece of plasticine. I also have a small rock. Let us assume that for some incredibly important reason (which has somehow slipped my mind at this moment in time), I wish to create a representation of the rock using the plasticine, to fulfill some vital task. The two main options are:

(1) I use my hands and my eyes, and I create a model of the rock using the plasticine which is visually accurate. This would be difficult (my skills as an artist are non-existent) and time-consuming. The result would be a reasonably accurate (though not perfect) representation of the rock - the advantage of this method is that anybody else could look at my model and say, without very much effort, that it is a model of a rock.

(2) I let the rock fall onto my piece of plasticine, such that it leaves an impression (assuming the rock is heavy enough and/or the plasticine is soft enough). The resulting representation of the rock would be far more accurate, though incomplete, and dependent on exactly how the rock made contact (how deep it was pushed, the angle, etc.). Furthermore, whilst I may know exactly what it is (with limitations due to my very limited knowledge of the conditions), somebody else would take longer to recognise what is being represented. However, it is so much easier to do this than to create a sculpture.

Of course, there are variations on the above two methods. The most interesting/important being:

(3) I take the piece of plasticine in my hand and I push it onto the surface of the rock. In this way, I control (to a certain extent at least) what imprint is left, as I vary the force and the angle. I'm still just left with an impression in a piece of plasticine, but the benefit is that I have more information about that impression. The effort expended is almost as little as in case two, though. Of course, another observer would have just as much difficulty in recognising what this impression was a representation of.

What we have here is a very coarse, and not very subtle, metaphor for how I see the three main views of so-called 'internal representations'. Basically, despite the adverse connotations that the term 'representation' (with regard to cognition and intelligence) may conjure for some, I don't believe that it necessarily implies highly complex internal modelling facilities. I wouldn't want to take the metaphor much further than I've elaborated above for fear of it failing utterly, but the three options may be seen to roughly correspond to the following.

In point one, you have the GOFAI (good old fashioned artificial intelligence) point of view, which, supported by personal introspection, uses complex internal modelling processes to create a high-fidelity internal representation upon which various mathematical planning processes may be performed. In the second point you have reactive or behavioural robotics, where the environment pushes itself onto the agent and shapes its behaviour directly. The metaphor is already failing here: strictly speaking there shouldn't be any representation in this case (not an explicit one anyway, though I would argue that one is implicit in the resulting behaviour - plenty of room for argument on that one!). The point is that the information gleaned from the environment isn't of much use in itself, only in terms of the effect it has. It is far easier to do this, though, in terms of computational load and so on.
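To make the contrast a little more concrete, here is a toy sketch of my own in Python - the object and function names are entirely made up for illustration, not taken from any real robotics system - of how I'd caricature the two control loops:

```python
# Toy caricature of the two extremes. All names are hypothetical illustrations.

def deliberative_step(sensors, world_model, planner):
    """GOFAI-style loop: sense -> update a detailed internal model -> plan -> act."""
    world_model.update(sensors)          # costly: build/refine a high-fidelity model
    plan = planner.search(world_model)   # costly: plan over that model
    return plan.next_action()

def reactive_step(sensors):
    """Reactive loop: the environment 'pushes' directly onto behaviour, no explicit model."""
    # e.g. a Braitenberg-style coupling of light sensors to motor commands
    left_motor = sensors["right_light"]
    right_motor = sensors["left_light"]
    return left_motor, right_motor
```

The reactive loop is obviously far cheaper, but all the 'knowledge' it embodies lives in the wiring, not in anything the agent could inspect afterwards.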

If you view these two approaches as extremes, then point three may be seen as a 'middle road': computational effort is vastly reduced by taking advantage of what is already there (both in the environment and in the agent itself), yet more information is available because the agent also knows how the representation was acquired. So perhaps the analogue for this point would be active perception, or perhaps more generally, embodied cognition. As with most things, I feel this 'middle road' has more potential than the two extremes - although that is of course not to say that they are devoid of merit, for both have produced many interesting and useful results.
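Again purely as an illustrative sketch (hypothetical names, nothing more), the 'middle road' loop might look something like this - the point being that the probe action is stored alongside the imprint it produced:

```python
# Toy sketch of the 'middle road': the agent chooses how it probes the world,
# so each stored imprint is tagged with the action that produced it.

def active_perception_step(agent, environment):
    probe = agent.choose_probe()              # e.g. pick an angle and a force to press with
    imprint = environment.respond_to(probe)   # the 'impression left in the plasticine'
    agent.memory.append((probe, imprint))     # representation = imprint + how it was obtained
    return imprint
```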

But why do I think that the concept of internal representations is important? Because I think that internal simulation (as Germund Hesslow called it; otherwise generally referred to as imagination, simulation, etc.) is central to many, if not all, cognitive tasks, and that it in turn depends on previous experience and internal knowledge: i.e. internal representations.
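One final hand-wavy sketch, to show what I mean by internal simulation depending on stored representations - the forward model here is entirely hypothetical, just a stand-in for whatever the agent has learned from its previous (probe, imprint) experience:

```python
# Toy sketch of internal simulation: instead of acting, the agent rolls a learned
# forward model over candidate actions, producing 'imagined' consequences.

def simulate(forward_model, initial_state, candidate_actions):
    state = initial_state
    trajectory = []
    for action in candidate_actions:
        state = forward_model.predict(state, action)  # imagined consequence, no motor output
        trajectory.append(state)
    return trajectory  # an 'imagined' episode built entirely from prior internal knowledge
```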

Finally, I would very much like to hear anybody's views on this topic - and on my use of this suspect metaphor. I'm not aware of anyone else having used a similar metaphor (though undoubtedly someone has - I just haven't looked), so I would appreciate it if someone could point me to one. I think I could do with reading the work of someone who's formulated this properly :-)

4 comments:

Pat Parslow said...

I think I am more or less with you on this. Extensive internal representations are probably not even necessary for cognition if the subject of the process is within your perceptual grasp, but the ability to create detailed representations within the 'mind' is, presumably, necessary to reason about things which are absent, or abstract.

I must admit, I tend to think of the internal representation as including the processes which go alongside the 'absolute' representation - so the abstraction of common patterns, any reasoning about what this 'thing' might mean in relevant contexts etc. all come together in my (internal representation of) representation. Why do I think like this? Through experience, I guess, and my (internal) representation of systems which, frankly, would not achieve much if the (absolute) representation interpretation were used. And I wonder why I make my head ache ;-)

Pat Parslow said...

Just re-reading my position as stated, it would appear that I actually disagree with your opening statement, precisely because I agree with your reasoning!

Anonymous said...

I am mostly with the previous commenter -- that is, I believe that, for conscious grown adults, an "internal representation" is a lot more than just a slightly actively modified imprint. The metaphor only works for things you have never encountered before and which you cannot integrate into previous experience, i.e. cannot connect with other internal representations.

Besides, the first metaphor does not seem like GOFAI (but I am not really qualified): in GOFAI, someone external tries to accurately model the whole world and then inputs that into a system, so there is no active forming of a representation, just recognition -- a GOFAI metaphor would involve searching through all the images of rocks someone external had put in your memory.

Derek said...

Interesting post. If I could put labels on your metaphorical categories, they could be:

active
passive
blended (active/passive)

I'm also reminded of bottom-up vs. top-down processes, as well as learning approaches (supervised, unsupervised, reinforcement).

The metaphor seems like it might be a useful conceptualization. The active/passive distinction also made me think of sonar systems. With a passive sonar system you just sit back and let the waves come to you, but active sonar systems emit their own pings which bounce off other stuff and send the signal back to the sensors. Those seem analogous to your two instances of making an impression of the rock by either dropping it onto the clay or actively molding the clay around it.

So I generally like the metaphor, even if the mapping doesn't quite fit. I'm less convinced by your application of it to different research programs.