Some very brief thoughts and notes on this article: In April's issue of IEEE Spectrum, Jeff Hawkins (of Palm Pilot fame, and author of the book "On Intelligence") briefly discusses the Hierarchical Temporal Memory (HTM) theory/framework as a novel engineering tool, along with its actual and potential applications. The HTM framework is based on the proposed operation of the human neocortex, which makes up around 60% of the brain. Although the neocortex is remarkably uniform at both the macroscopic and microscopic levels, different regions of it are 'responsible' for a wide range of functions and differing modalities, leading to the view that its underlying functionality (that is, the 'mode of neural processing') is also uniform - that it is a general-purpose learning machine. Hawkins proposes (as further detailed in his book) that the interconnectivity of the neocortex traces out a hierarchy. It is this hierarchical connectivity, between elements of equivalent functionality, that forms the basis of the HTM. It is worth quoting one of the concluding paragraphs, which gives insight into the motivation of the work:
"HTM is not a model of a full brain or even the entire neocortex. Our system doesn't have desires, motives, or intentions of any kind. Indeed, we do not even want to make machines which are humanlike. Rather, we want to exploit a mechanism that we believe to underlie much of human thought and perception. This operating principle can be applied to many problems of pattern recognition, pattern discovery, prediction and, ultimately, robotics. But striving to build machines that pass the Turing Test is not our mission."
This emphasis on HTMs as an engineering solution is made quite explicit, with a number of examples mentioned where tools developed on these principles have been used. These include modelling networks (e.g. computer or social networks) and image processing (e.g. for use in the automotive industry). The limitations of the system, on the other hand, are also acknowledged: HTMs work best when there is inherent hierarchical structure in the data, a consequence of the hierarchical nature of the HTMs themselves. Also, in their current state, HTMs are unable to deal with long memory sequences or with specific timing of events, rendering them unsuited to natural language processing or robotics; however, there is nothing in principle that makes these applications impossible at some point in the future.
The basic principles of operation are of course not completely novel in themselves, as the author himself acknowledges. A number of other efforts and theories have used similar hierarchical structures, including Hierarchical Hidden Markov Models and the Network Memory theory of Professor Fuster, which I have written about previously. I think this serves as another example of how this type of hybrid between parallel processing (a central tenet of the theory of brain information processing) and hierarchical structure is a promising line of research (indeed, it is the area in which I am working), and one which may be considered not biologically implausible given our current understanding of the brain.
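To make the core architectural idea concrete, here is a minimal toy sketch of a hierarchy built from identical learning elements. This is an illustration of the general principle only, not Numenta's actual HTM algorithm (real HTM nodes also learn temporal sequences and pass beliefs both up and down the hierarchy); the `Node` and `Hierarchy` classes and their behaviour are my own invented simplification, in which each node merely memorizes the spatial patterns it sees and reports which one it recognized to its parent.

```python
# Toy sketch: identical "nodes" wired into a tree, loosely inspired by
# the HTM idea that one general-purpose learning element is replicated
# across a hierarchy. Invented for illustration; not the real algorithm.

class Node:
    """A single learning element, identical at every level of the tree."""

    def __init__(self):
        self.patterns = []  # patterns seen so far, in order of first appearance

    def process(self, pattern):
        """Return the index of a stored pattern, learning it if new."""
        pattern = tuple(pattern)
        if pattern not in self.patterns:
            self.patterns.append(pattern)
        return self.patterns.index(pattern)


class Hierarchy:
    """Two-level tree: several child nodes feed one parent node."""

    def __init__(self, n_children):
        self.children = [Node() for _ in range(n_children)]
        self.parent = Node()

    def process(self, chunks):
        # Each child sees one chunk of the raw input; the parent then
        # sees the combined child outputs as its own input pattern, so
        # higher levels operate on increasingly abstract representations.
        child_out = [c.process(chunk) for c, chunk in zip(self.children, chunks)]
        return self.parent.process(child_out)


h = Hierarchy(n_children=2)
a = h.process([(0, 1), (1, 0)])  # first composite pattern -> index 0
b = h.process([(1, 1), (1, 0)])  # novel left chunk -> new composite pattern
c = h.process([(0, 1), (1, 0)])  # repeat of the first -> same index as a
```

Even this crude version shows why hierarchical structure in the data matters: the parent can only form useful categories if the children's chunks recur in stable combinations, which echoes the article's point that HTMs work best on inherently hierarchical data.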