Sunday, February 08, 2009

Agents or Programs: a definition of "Agent"

A little while ago, a friend of mine asked me what a robot was. I thought about it for a while before coming to the conclusion that there wasn't actually a satisfactory answer: it depends on who you're speaking to and what you're talking about. There's the standard populist view of a robot as an autonomous mechanical device, often in human or animal form, and then the less interesting definitions which hold robots to be man-made machines for automating some industrial process. None of these definitions is any more correct than the others; they merely serve different purposes in different contexts. There seem to be a number of these ambiguous terms floating around. Nothing wrong with that as such, until you have two people communicating with slightly differing definitions, each carrying a slightly different set of assumptions, which are best not left undiscussed.

Another term that needs similar care is 'agent'. It's one of those words which is defined in almost every paper in which it is used. This ensures that its use is clear in each instance, but also results in a multitude of definitions, most of which bear great similarity whilst maintaining some fundamental differences. One attempt to provide a unified definition, and a general taxonomy to encompass the different contexts in which the term 'agent' is used, is provided by Franklin and Graesser (1996). In a short review of a number of contemporary agents in research and development use, a general definition is constructed and then extended.

From the review of the different agents in use at the time, a number of general characteristics become apparent. These are compiled into the following definition proposed by the authors:

"An Autonomous Agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in persuit of its own agenda and so as to effect what it senses in the future."
One immediate point to notice is the dependence of this definition on the concept of autonomy: agency inherently requires the system in question to be autonomous (with all the vagaries that this brings...). Furthermore, agents are situated in environments: being both affected by them, and having an effect on them. Without this flow of information, there is no agent: using one of the authors' examples, if an agent equipped with light sensors is placed in an environment with no light, then it cannot be considered to be an agent. Another important point is the "...over time..." part: this insists on temporal persistence, i.e. the ability to sense, act, and thereby influence what is sensed in the future. Finally, the point about "...its own agenda" follows on from the concern over autonomy: the agenda need not be a single task which needs fulfilling, or even a particularly well-defined one (consider the artificial life goal of 'survival').
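To make those four requirements concrete, here is a minimal sketch in Python (my own toy illustration, not something from the paper): a tiny environment with a single light level, and an agent whose whole agenda is to keep that level near a preferred value. All the class and parameter names here are invented for the example.

```python
class Environment:
    """A toy environment: a single 'light level' the agent can raise or lower."""
    def __init__(self):
        self.light = 5

    def sense(self):
        return self.light

    def apply(self, action):
        # The agent's action changes the environment, so future sensing is affected.
        self.light = max(0, self.light + action)


class MinimalAgent:
    """Sketch of the definition: senses, acts, persists over time,
    and pursues its own agenda (here: keep the light near a preferred level)."""
    def __init__(self, preferred=3):
        self.preferred = preferred  # the agent's "own agenda"

    def act(self, percept):
        # Choose an action so as to affect what will be sensed in the future.
        if percept > self.preferred:
            return -1
        if percept < self.preferred:
            return +1
        return 0


env = Environment()
agent = MinimalAgent()
for step in range(10):               # temporal persistence: a continuing sense-act loop
    percept = env.sense()            # the agent senses its environment...
    action = agent.act(percept)      # ...picks an action in pursuit of its agenda...
    env.apply(action)                # ...and acts on it, shaping its future percepts
    print(step, percept, action)
```

Remove any one of the ingredients - the sensing, the acting, the loop over time, or the agenda - and what's left is just a program.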

An immediate application of this definition for the authors is to distinguish between software agents and general computer programs. A conventional program (a payroll program in the example used) might have a task and an environment, but there is no persistence (its output doesn't affect its subsequent inputs), and there is no autonomy (or at least it is hoped that there isn't!). Whilst this distinction between software agents and programs can be made, it is clear that the definition is fairly broad-brush. In the remainder of the paper, the authors attempt to expand upon this in order to allow a classification of different types of agent within this overarching definition.
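The payroll program comes from the paper; the contrast below is my own sketch of the point (the thermostat-style agent and its details are invented for illustration). The program is a one-shot mapping from input to output, whereas the agent's actions feed back into what it senses next:

```python
# A "program" in the authors' sense: a one-shot mapping from input to output.
# Its output never feeds back into its future inputs, so there is no persistence
# and nothing resembling an agenda of its own.
def payroll(hours_worked, hourly_rate):
    return hours_worked * hourly_rate


# An agent, by contrast, sits in a feedback loop with its environment:
# what it does now changes what it senses next.
class ThermostatAgent:
    def __init__(self, set_point=20.0):
        self.set_point = set_point  # its (fixed) agenda: hold the temperature here

    def act(self, temperature):
        return "heat_on" if temperature < self.set_point else "heat_off"


print(payroll(40, 12.5))  # one call, one answer, no loop

room_temp = 17.0
agent = ThermostatAgent()
for _ in range(5):
    action = agent.act(room_temp)                       # sense and act...
    room_temp += 1.0 if action == "heat_on" else -0.5   # ...the action alters the environment...
    print(action, round(room_temp, 1))                  # ...which alters what is sensed next
```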

Agents may be classed in a variety of ways, for example by functionality, architecture, sensors, effectors, etc. As long as the four properties previously mentioned are satisfied, any additional properties may be used to provide delineations of type. This, however, does not provide a very structured means of classification, as it is limited only by the imaginable agent properties. The authors propose using a hybrid between a biologically inspired taxonomic tree and a mathematical binary classification tree. Using the former, a scheme which classifies the major classes of agents can be constructed (task-specific agents, for example). Further delineations can then be provided by a binary classification scheme, for example planning vs non-planning, learning vs non-learning, mobile vs non-mobile, etc. This type of classification can then be viewed as a topological space, which leads to further classification methods.
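As a toy illustration of the binary part of the scheme (again my own sketch; the property names follow the paper's examples, but the agents and the representation are invented), each agent can be placed at a point in a space of yes/no properties:

```python
from itertools import product

# Binary properties used for sub-classification, following the paper's examples
# (planning vs non-planning, learning vs non-learning, mobile vs non-mobile).
PROPERTIES = ("planning", "learning", "mobile")

def classify(features):
    """Place an agent at a point in the binary classification space."""
    return tuple(prop in features for prop in PROPERTIES)

# The whole space: every combination of the binary properties.
space = list(product((False, True), repeat=len(PROPERTIES)))

# Two invented agents placed in that space.
examples = {
    "web_crawler": {"mobile"},
    "chess_player": {"planning", "learning"},
}
for name, features in examples.items():
    print(name, "->", classify(features))
print("points in the classification space:", len(space))
```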

In summary, the authors have provided a clear delineation between software agents and programs, and have described a possible taxonomy for the classification of different types of autonomous agent. The taxonomy in itself, however, doesn't seem to solve very much, since the categories for classification are arbitrarily chosen, and may be (even among the examples chosen in the paper) contentious in themselves. However, the general definition of the term "agent" extracted from a number of examples can be used as a basic guide for discussion - and it is interesting to note that the definition inherently assumes autonomy: in order for an agent to be considered an agent, it must to some degree be autonomous (and that's a whole separate question...).

Stan Franklin and Art Graesser (1996). "Is it an Agent, or just a Program? A Taxonomy for Autonomous Agents." Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages.
