What follows is a brief note on why I don't believe that internal representations necessarily mean complex modelling capabilities, through the use of a (slightly suspect) metaphor. This isn't based on any peer reviewed work, just some thoughts I jotted down in a rare moment of effective cerebral activity :-)
Consider the following scenario: I have a large flat piece of plasticine. I also have a small rock. Let us assume that for some incredibly important reason (which has somehow slipped my mind at this moment in time), I wish to create a representation of the rock using the plasticine, to fulfil some vital task. The two main options are:
(1) I use my hands and my eyes, and I create a model of the rock using the plasticine which is visually accurate. This would be difficult (my skills as an artist are non-existent) and time-consuming. The result would be a reasonably accurate (though not perfect) representation of the rock - the advantage of this method is that anybody else would be able to look at my model and say, without very much effort, that I have a model of a rock.
(2) I let the rock fall onto my piece of plasticine, such that it leaves an impression (assuming the rock is heavy enough and/or the plasticine is soft enough). The resulting representation of the rock would be far more accurate, though incomplete, and dependent on exactly how the rock made contact (how deep it was pushed, the angle, etc.). Furthermore, whilst I may know exactly what it is (with limitations due to my very limited knowledge of the conditions), somebody else would take longer to recognise what is being represented. However, it is so much easier to do this than to create a sculpture.
Of course, there are variations on the above two methods. The most interesting/important being:
(3) I take the piece of plasticine in my hand and I push it onto the surface of the rock. In this way, I control (to a certain extent at least) what imprint is left, as I vary the force and the angle. I'm still just left with an impression in a piece of plasticine, but the benefit is that I have more information about that impression. The effort expended is almost as little as in case two, though. Of course, another observer would have just as much difficulty in recognising what this impression was a representation of.
What we have here is a very coarse, and not very subtle, metaphor for how I see the three main views of so-called 'internal representations'. Basically, despite the adverse connotations that the term 'representation' (with regard to cognition and intelligence) may conjure for some, I don't believe that it necessarily implies highly complex internal modelling facilities. I wouldn't want to take the metaphor much further than I've elaborated above for fear of it failing utterly, but the three options may be seen to roughly correspond to the following.
In point one, you have the GOFAI (good old fashioned artificial intelligence) point of view, which, supported by personal introspection, uses complex internal modelling processes to create a high-fidelity internal representation upon which various mathematical planning processes may be performed. In the second point you have reactive or behavioural robotics, where the environment pushes itself onto the agent and shapes its behaviour directly. The metaphor is failing already - there shouldn't be any representation in this case (not an explicit one anyway, though I would argue that it is implicit in the resulting behaviour - plenty of room for argument on that one!) - but the point is that the information gleaned from the environment isn't of much use in itself, only in terms of the effect it has. It's far easier to do this, though, in terms of computational load etc.
If you view these two approaches as extremes, then point three may be seen as a 'middle road' - a vastly reduced amount of computational effort, achieved by taking advantage of what is there (both in terms of the environment and the agent itself), but with more information present due to the additional knowledge of how that representation was acquired. So, perhaps the analogue for this point would be active perception, or perhaps more generally, embodied cognition. As with most things, I feel this 'middle road' has more potential than the two extremes - although that is of course not to say that they are devoid of merit, for they have both produced many interesting and useful results.
But why do I think that the concept of internal representations is important? Because I think that internal simulation (as Germund Hesslow called it, otherwise generally referred to as imagination, simulation, etc.) is central to many, if not all, cognitive tasks, which in turn is dependent on previous experience and internal knowledge: i.e. internal representations.
Finally, I would very much like to hear anybody's views on this topic - and on my use of this suspect metaphor. I'm not aware of anyone else having used a similar metaphor (though undoubtedly someone has - I just haven't looked), so I would appreciate it if someone could point me to one. I think I could do with reading the work of someone who's formulated this properly :-)