Tuesday, November 07, 2006

The notion of "chunking" for cognitive models

These notes are taken largely from Allen Newell's observations as described in his William James Lectures (and the subsequent book, "Unified Theories of Cognition", 1990).

Chunking has a long history in psychology - the classic study of the capacity of short-term memory by Miller (1956) introduced the term in its most commonly used context. A chunk is a unit of memory organisation, formed by bringing together a set of already-formed chunks into a larger unit. This implies the ability to build up structures recursively, leading to a hierarchical organisation of memory, and it appears to be a ubiquitous feature of human memory (which sits reasonably well with Joaquin Fuster's view of memory as being based upon associations at all levels).
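As a minimal sketch of the idea (my own illustration, not Newell's or Soar's actual data structures), a chunk can be modelled as a unit that is either a primitive element or a grouping of already-formed chunks, which gives the recursive, hierarchical organisation described above. The `Chunk` class and the example labels below are purely hypothetical.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Chunk:
    """A memory unit: either a primitive element or a grouping of sub-chunks."""
    label: str
    parts: Tuple["Chunk", ...] = ()

    def height(self) -> int:
        """Depth of this chunk in the hierarchy (primitives have height 0)."""
        return 0 if not self.parts else 1 + max(p.height() for p in self.parts)


# Example: digits chunked into a familiar number, which is itself
# chunked into a larger unit - chunks built from chunks.
one, nine, five, six = Chunk("1"), Chunk("9"), Chunk("5"), Chunk("6")
year = Chunk("1956", (one, nine, five, six))               # a chunk of digits
citation = Chunk("Miller 1956", (Chunk("Miller"), year))   # a chunk of chunks

print(citation.height())  # 2 -- the hierarchy grows by recursive composition
```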

Using the example of the effect of practice on task completion time, this view of memory organisation leads to three general propositions. Firstly, chunking occurs constantly and at a (fairly?) constant rate: more experience results in more chunking. Secondly, performance of a task is faster if there are more chunks relevant to that task. This second point presents a direct contrast with processing in current digital computers, where the more rules there are, the greater the computational overhead and thus the slower the processing. Thirdly, the hierarchical structure predicts that higher-level chunks will apply to fewer situations than lower-level chunks: the higher a chunk sits in the hierarchy, the more sub-patterns it contains, and thus the less likely it is to match the current situation exactly.
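A rough toy model (again my own, not Soar's recognition memory) can illustrate the second and third propositions: once larger chunks are learned, an input can be encoded in fewer steps, while a higher-level chunk, being more specific, matches fewer situations exactly. The `encode` function and the example chunks are assumptions for illustration only.

```python
from typing import Dict, List


def encode(sequence: str, chunks: Dict[str, int]) -> List[str]:
    """Greedily cover the sequence with the longest known chunk at each point.

    `chunks` maps a learned pattern to its level in the hierarchy; single
    characters act as the level-0 primitives and are always available.
    """
    out, i = [], 0
    while i < len(sequence):
        # Prefer the longest learned chunk that matches here, else a primitive.
        best = sequence[i]
        for pattern in chunks:
            if sequence.startswith(pattern, i) and len(pattern) > len(best):
                best = pattern
        out.append(best)
        i += len(best)
    return out


learned = {"195": 1, "1956": 2}         # hypothetical chunk hierarchy
print(len(encode("1956", {})))          # 4 steps with no learned chunks
print(len(encode("1956", learned)))     # 1 step once "1956" is itself a chunk

# The higher-level chunk "1956" matches fewer strings than "195": it applies
# to fewer situations, but saves more work when it does apply.
```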

These initial observations in general, and this application as an example, provide the theoretical basis of the SOAR cognitive architecture. The observation of human behaviour thus leads to general behavioural traits, which may (or may not) be considered "laws of operation"; modelling these most general "laws" may in turn provide further insight into human functional operation.
