Notes on "Historical and current Machine Intelligence", K. Warwick and S.J. Nasuto, IEEE Instrumentation and Measurement Magazine, vol 9, issue 6, pp20-26, December 2006
This paper aims to provide a realistic assessment of the present state of machine intelligence (more widely known as artificial intelligence, or AI). In doing so, it provides a brief history and discusses the potential future of the field.
Before discussing machine intelligence, the paper first addresses the question of intelligence itself - what is it? Definitions abound, and they differ depending on the context of use. For the purposes of this paper, the authors adopt a very general and basic definition: intelligence is "...the variety of information processing processes that collectively enable a being to pursue autonomously its survival". This definition allows one to study intelligence regardless of the species or type of agent in which it is embodied - it requires only that the processing capability mentioned affords survival. The definition is further characterised through the remaining parts of the paper.
Current AI thinking embraces the artificial neural network, the foundations of which were laid by McCulloch and Pitts. Their model neuron (later developed into the perceptron) attempted to approximate the functioning of a biological neuron: weighted inputs are brought together at a single threshold unit, the output of which is binary. This use of processing inspired by natural systems provides a link between natural neurophysiology and information processing: "It has been shown that one layer of suitable nonlinear neurons [in a multilayer perceptron], ..., can approximate any nonlinear function with arbitrary accuracy, given enough nonlinear neurons. This means that a multilayer perceptron network can be a universal function approximator."
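As a concrete illustration, the sketch below implements a threshold unit of the kind just described; the hand-picked weights, the threshold value, and the AND-gate example are illustrative assumptions rather than details from the paper.

```python
# Minimal McCulloch-Pitts style threshold unit: weighted inputs are
# summed and compared against a threshold, giving a binary output.
def threshold_unit(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Illustrative use: with these hand-picked weights and threshold,
# the unit computes a logical AND of two binary inputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_unit([a, b], [1.0, 1.0], 1.5))
```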
This way of thinking was later displaced by a behaviourist perspective, in which behaviour was the important factor, with little regard for how the behaviour came about. This brought AI into closer relation with the cognitive sciences, and was exemplified by the Turing test, in which behaviour alone is the subject of the test. Another example of this type is the expert system, typified by the IF...THEN rule construction. A major criticism of such systems is that they cannot encode every possible eventuality. More recently, some of these drawbacks have been addressed by Bayesian networks, which merge the operation of these symbolic systems with the sub-symbolic operation of neural networks in a framework based on graph theory, probability, and statistics.
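A toy forward-chaining rule engine makes the IF...THEN construction concrete; the medical-sounding rules and facts below are purely illustrative, not taken from the paper.

```python
# Toy expert system: IF all conditions hold THEN assert the conclusion,
# repeating until no rule can fire any more (forward chaining).
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule fires
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules))
```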
The popularity of artificial neural networks returned in the mid-1980s, producing architectures which fell roughly into two categories: feedforward networks (e.g. the multilayer perceptron) and recurrent networks (e.g. Hopfield networks). These replaced the central control of expert systems with a distributed control system, thereby realigning with the neurophysiology of biological systems.
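To illustrate the recurrent category, here is a minimal Hopfield-style associative memory; the Hebbian storage rule is standard, but the six-unit pattern and the noisy probe are illustrative assumptions.

```python
import numpy as np

# Minimal Hopfield network: patterns are stored with a Hebbian
# outer-product rule, then recalled by repeated threshold updates.
def train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)           # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):  # asynchronous unit updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train(stored)
probe = np.array([1, 1, 1, -1, 1, -1])  # stored pattern with one bit flipped
print(recall(W, probe))                 # settles back to the stored pattern
```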
Evolutionary algorithms address one of the problems with artificial neural networks, namely their fixed topology, by implementing a distributed system of 'agents', each of which may be as simple as a rule. There are no a priori relationships between these agents, and the population of agents is acted upon by a pseudo-evolutionary process (a genetic algorithm) tuned to maximise performance on the task at hand. An example is the Learning Classifier System, in which a population of rules is created and modified by a genetic algorithm.
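The genetic algorithm at the heart of such systems can be sketched in a few lines; the bit-string individuals and the 'count the 1s' (OneMax) fitness function are illustrative assumptions, not the Learning Classifier System itself.

```python
import random

# Minimal genetic algorithm: a population of bit-strings evolves
# through selection, crossover, and mutation to maximise fitness.
GENES, POP, GENERATIONS, MUT_RATE = 20, 30, 50, 0.02

def fitness(ind):
    return sum(ind)                 # OneMax: count the 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]        # single-point crossover

def mutate(ind):
    return [1 - g if random.random() < MUT_RATE else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]        # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(max(fitness(ind) for ind in pop))  # approaches the optimum, GENES
```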
Another form of distributed processing used in AI is swarm intelligence. Based on observations of populations in nature (e.g. ant colonies, flocks of birds), this method uses numerous individual agents, each of which explores a potential solution. The overall solution then emerges from the interactions among individuals. Examples of such algorithms include ant algorithms, particle swarm optimisation, and stochastic diffusion search. From this description, the similarities between this and the evolutionary approach are evident.
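A minimal particle swarm optimisation loop shows the idea of a solution emerging from interactions among individuals; the one-dimensional objective (minimise x squared) and the parameter values are illustrative assumptions.

```python
import random

# Minimal particle swarm optimisation: each particle is drawn towards
# its own best position and towards the swarm's best position so far.
W, C1, C2, N = 0.7, 1.5, 1.5, 20     # inertia, attraction weights, swarm size

def objective(x):
    return x * x                     # toy objective: minimise x^2

positions = [random.uniform(-10, 10) for _ in range(N)]
velocities = [0.0] * N
personal_best = positions[:]
global_best = min(positions, key=objective)

for _ in range(100):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        velocities[i] = (W * velocities[i]
                         + C1 * r1 * (personal_best[i] - positions[i])
                         + C2 * r2 * (global_best - positions[i]))
        positions[i] += velocities[i]
        if objective(positions[i]) < objective(personal_best[i]):
            personal_best[i] = positions[i]
            if objective(positions[i]) < objective(global_best):
                global_best = positions[i]

print(global_best)                   # converges towards 0
```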
The paper then makes brief mention of reinforcement learning. In this approach the agent receives a qualitative evaluation of its actions from the environment, rather than from a supervisor as is otherwise normally the case. The agent thus typically uses trial and error to obtain a 'value function', which it uses to evaluate the positive or negative effects of its actions. Reinforcement learning has been proposed to underlie the development of a reward mechanism in the biological nervous system. Recent developments include its extension to multiagent setups.
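Tabular Q-learning is one standard way of obtaining such a value function by trial and error; the five-state corridor with a reward at the far end is an illustrative assumption, not an example from the paper.

```python
import random

# Tabular Q-learning on a toy corridor: the agent learns state-action
# values from scalar rewards delivered by the environment.
N_STATES, ALPHA, GAMMA, EPSILON = 5, 0.1, 0.9, 0.1
ACTIONS = (-1, 1)                           # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(500):                        # episodes of trial and error
    state = 0
    while state != N_STATES - 1:
        if random.random() < EPSILON:       # occasional exploration
            action = random.choice(ACTIONS)
        else:                               # otherwise exploit the values
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = nxt

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # learned choice: move right
```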
A general problem with all of the preceding approaches to machine intelligence is that they provide disembodied information processing. In the 1990s, it was realised that this is insufficient: embodied cognition stems from the realisation that a theory of intelligence must involve a physically embodied agent, interacting in real time with its environment through sensory-motor contingencies. This approach requires that the agent be autonomous and that it arrive at 'intelligent behaviour' through its own interactions with the environment (rather than being pre-programmed or similar). Embodiment is thus of central importance in this approach. As the authors note, "Cognitive robotics puts an emphasis on using psychological, cognitive, and neuroscience findings in the construction of robots".
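A caricature of such a sensory-motor loop can convey the idea; everything here (the one-dimensional world, the light source, the proportional motor) is an illustrative assumption. The approach behaviour arises from the agent-environment coupling rather than from internal symbolic planning.

```python
# Toy sense-act loop: a "robot" on a line repeatedly senses the
# direction of a light and moves towards it; the approach behaviour
# emerges from the coupling, not from an explicit plan.
LIGHT = 10.0                         # position of the light source
robot = 0.0

def sense(position):
    return LIGHT - position          # signed distance to the light

def act(position, reading):
    return position + 0.2 * reading  # motor speed proportional to sensor

for _ in range(30):                  # the real-time sense-act cycle
    robot = act(robot, sense(robot))

print(round(robot, 2))               # the robot has approached the light
```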
The development of machine intelligence is of immense practical use in itself, in addition to its potential for providing insight into its biological counterpart. Applications include, among others, interactive games and other forms of entertainment, military and agricultural uses, and mine clearing.
The final part of the paper centres on the fundamental nature of machine intelligence. It proposes that the concept of intelligence needs to be revised in the light of recent developments in human and nonhuman studies, paralleling the views of Patricia Churchland on consciousness. It goes on to state that such a revision has already begun, as seen in the developments discussed in the earlier parts of the paper. Furthermore, it is suggested that intelligence would be better described by its characteristics: the distributed nature of the underlying processing, agent autonomy, embodiment, and so on. Such a list of characteristics would have to include features identified by human psychological and cognitive research, by animal research, and by machine research; without all of these, a complete list cannot be compiled. This approach to intelligence is called the rational approach, and it views human and machine intelligence as but different versions of intelligence.