A model is a Kripke structure (Q, R, L), where:
• Q is a nonempty set of states
• R is a binary relation on Q, i.e., R ⊆ Q × Q, which shows which states are related to other states
• L: Q → 2^Prop is a truth assignment function that shows which propositions are true in each state, where Prop is a set of atomic propositions
Temporal logic formulae, e.g., CTL* formulae, are constructed from operators combined with the path quantifiers A (meaning “for all paths”) or E (meaning “there exists a path”). The five basic CTL* operators are as follows (Emerson & Halpern, 1986):
• X (next time) requires that a property hold in the next state
• F (eventually) requires that a property hold at some state on a path
• G (always) requires that a property hold at every state on a path
• U (until) requires that a property p hold on a path until another property q holds
• R (release), the dual operator of U
The above logic can be extended to accommodate past, present, and future properties of an agent system, as in FML (Fisher & Wooldridge, 1997).
Having constructed a model of an agent as an X-machine, it is possible to apply existing model checking techniques to verify its properties. This requires the transformation of the X-machine into another model that resembles a Kripke structure. Such a process exists, called the exhaustive refinement of an X-machine to an FSM, and it results in a model to which CTL* formulae may be applied. However, exhaustive refinement suffers from two major disadvantages:
• The loss of the expressiveness that the X-machine possesses
• Combinatorial explosion
The former has to do with the memory structure attached to the X-machine. In an equivalent FSM resulting from the refinement process, the memory is implicitly contained in the states. It is therefore impossible to verify that certain properties are true for some memory instances of the states of the original X-machine model, because this information is lost during refinement. The latter has to do with properties that are contained in the model’s memory but play no role in model checking with respect to some other properties. If such properties are included in the equivalent FSM, exhaustive refinement may result in a possibly infinite state space, making model checking impossible.
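To make the Kripke-structure view concrete, the following sketch (illustrative only, not taken from the chapter; the states and labels are invented) represents a small Kripke structure in Python and evaluates an EF property by the standard backward fixpoint computation over the relation R.

Q = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}
Lab = {0: {"holding"}, 1: {"holding"}, 2: {"dropped"}}

def ef(prop):
    # States satisfying EF prop: least fixpoint of sat := sat ∪ pre(sat)
    sat = {q for q in Q if prop in Lab[q]}
    while True:
        pre = {q for (q, r) in R if r in sat}
        if pre <= sat:
            return sat
        sat |= pre

# Every state can eventually reach a state where "dropped" holds.
print(ef("dropped") == {0, 1, 2})   # True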
In order to apply model checking to X-machines, temporal logic is extended with memory quantifier operators:
• Mx, for all memory instances
• mx, there exist memory instances
which, together with the basic temporal operators of CTL*, can form expressions suitable for checking the properties of an agent model. The resulting logic, XmCTL, can verify the model expressed as an X-machine against the requirements, because it can prove that certain properties, which implicitly reside in the memory of the X-machine, are true (Eleftherakis & Kefalas, 2001). For example, in an agent whose task is to carry food to its nest, as in the example of Figure 3, model checking can verify whether food will eventually be dropped in the nest by the formula:
AG [¬Mx (m1 ≠ none) ∨ EF Mx (m1 = none)]
where m1 indicates the first element of the memory tuple. The formula states that in all states of the X-machine, either the ant does not hold any food or there exists a path after that state where eventually the ant does not hold any food. Another example is the following formula:
E [Mx (m1 = none) U Mx (m1 ≠ none)]
i.e., there exists a path in which the ant eventually holds food and in all previous states holds nothing. Another useful property to be checked is:
¬EF mx [(m1 ≠ none) ∧ (m3 = nil)]
i.e., if the ant holds something, then the food list is not empty. The new syntax and semantics facilitate model checking of X-machines in two ways:
• Expressiveness suited to the X-machine model
• Effective reduction of the state space through selective refinement of the original X-machine model
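The memory quantifiers can be pictured as quantification over the memory instances attached to each state. The sketch below is illustrative only (the states, memory sets, and predicate are invented for the example); it evaluates Mx and mx for a predicate over states that each carry the set of memory tuples reachable in them.

# Each state carries the set of reachable memory tuples; Mx/mx quantify over them.
mem = {
    "at_nest": {("none", (0, 0))},
    "at_food": {("seed", (3, 4)), ("crumb", (5, 1))},
}

def Mx(state, pred):
    return all(pred(m) for m in mem[state])   # for all memory instances

def mx(state, pred):
    return any(pred(m) for m in mem[state])   # there exists a memory instance

holds_food = lambda m: m[0] != "none"
print(Mx("at_food", holds_food))   # True: every instance carries food
print(mx("at_nest", holds_food))   # False: the ant holds nothing at the nest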
COMPLETE TESTING OF AGENTS
In the previous section, we focused on the modeling of an agent and the verification of the specified models with respect to the requirements. Having ensured that
the model is “correct,” we also need to ensure that the implementation is “correct,” this time with respect to the model. This can be achieved through testing, but only under one important assumption, i.e., that testing is complete. To guarantee “correctness” of the implementation, one must be certain that all tests are performed and that the results correspond to what the model has specified. Holcombe and Ipate (1998) presented a testing method, which is a generalization of Chow’s W-method (Chow, 1978) for FSM testing. It is proved that this testing method finds all faults in the implementation (Ipate & Holcombe, 1998). The method works based on the following assumptions:
• The specification and the implementation of the system can be represented as X-machines
• The X-machine corresponding to the specification and the X-machine corresponding to the implementation have the same type Φ
Assuming the above, the method also requires that:
• The X-machine satisfies the design for test conditions
• Its associated automaton is minimal
The associated automaton of an X-machine is the conversion of the X-machine to an FSM by treating the elements of Φ as abstract input symbols. The design for test conditions state that the type Φ of the two machines is complete with respect to memory and output distinguishable. A function ϕ ∈ Φ is called complete with respect to memory if:
∀m ∈ M, ∃σ ∈ Σ such that (m, σ) ∈ dom ϕ
A type Φ is called complete with respect to memory M if every basic function is able to process all memory values, that is, if:
∀ϕ ∈ Φ, ϕ is complete with respect to M
A type Φ is called output distinguishable if any two different processing functions produce different outputs on each memory/input pair, that is, if:
∀ϕ1, ϕ2 ∈ Φ, if ∃m ∈ M, σ ∈ Σ such that for some m1', m2' ∈ M, γ ∈ Γ, ϕ1(m, σ) = (γ, m1') and ϕ2(m, σ) = (γ, m2'), then ϕ1 = ϕ2.
If Φ is not complete, then additional input symbols may be introduced so as to make the processing functions complete (Holcombe & Ipate, 1998).
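For finite enumerations of M, Σ, and Φ, both design for test conditions can be checked mechanically. The sketch below is illustrative only (the encoding of Φ as a dictionary of Python functions and the toy example are invented); each ϕ returns an (output, new memory) pair for arguments in its domain and None otherwise.

from itertools import combinations

def complete_wrt_memory(phi, M, Sigma):
    # every function can process every memory value with some input
    return all(any(f(m, s) is not None for s in Sigma)
               for f in phi.values() for m in M)

def output_distinguishable(phi, M, Sigma):
    # two distinct functions never emit the same output on the same (m, s)
    for f, g in combinations(phi.values(), 2):
        for m in M:
            for s in Sigma:
                rf, rg = f(m, s), g(m, s)
                if rf is not None and rg is not None and rf[0] == rg[0]:
                    return False
    return True

def inc(m, s): return ("out1", m + s) if s > 0 else None
def dec(m, s): return ("out2", m - s) if s > 0 else None

phi, M, Sigma = {"inc": inc, "dec": dec}, {0, 1}, {1, 2}
print(complete_wrt_memory(phi, M, Sigma))     # True
print(output_distinguishable(phi, M, Sigma))  # True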
Figure 4: An X-machine that satisfies the design for test conditions. [The figure shows four states — AT NEST, MOVING FREELY, FOLLOW TRAIL, and HAVE FOOD — connected by the processing functions move, move_to_food, move_to_nest, lift_food, found_nest, stay_at_nest, ignore_food, ignore_nest, and ignore_space.]
In Figure 4, the X-machine illustrates a model of an ant that looks for food at random or follows the pheromone trail to find food and the nest. The input set is Σ = {space, nest, pheromone, food}. The X-machine satisfies the design for test conditions, and its associated automaton is minimal. When these requirements are met, the W-method may be employed to produce the k-test set X of the associated automaton, where k is the difference between the numbers of states of the two associated FSMs. The test-set X consists of sequences of processing functions for the associated automaton, and it is given by the formula:
X = S(Φk+1 ∪ Φk ∪ … ∪ Φ ∪ {ε})W
where W is a characterization set and S a state cover. Informally, a characterization set W ⊆ Φ* is a set of processing-function sequences for which any two distinct states of the machine are distinguishable. The state cover S ⊆ Φ* is a set of processing-function sequences such that all states are reachable from q0. The W and S sets of the agent X-machine in Figure 4 are as follows:
W = {stay_at_nest, move move_to_food, found_nest}
S = {ε, move, move move_to_food, move_to_food lift_food}
The derived test-set X, for k = 0, i.e., when model and implementation are considered as FSMs with the same number of states, is the following:
X = {move move move_to_food, move move move move_to_food, move ignore_nest move move_to_food, move lift_food found_nest, move move_to_food lift_food found_nest, move_to_food lift_food found_nest, move_to_food lift_food ignore_space found_nest, …}
The fundamental test function is defined recursively and converts these sequences of processing functions into sequences of inputs of the X-machine. Let XM = (Σ, Γ, Q, M, Φ, F, q0, m0) be a deterministic stream X-machine with Φ complete with respect to M, and let q ∈ Q, m ∈ M. A function tq,m: Φ* → Σ*, called a test function of XM with respect to q and m, is defined recursively as follows (Ipate & Holcombe, 1998):
tq,m(ε) = ε (the empty input sequence)
tq,m(ϕ1…ϕnϕn+1) = tq,m(ϕ1…ϕn)σn+1, if there exists a path q, q1, …, qn-1, qn in XM starting from q, where σn+1 is such that (mn, σn+1) ∈ dom ϕn+1 and mn is the final memory value computed by the machine along the above path on the input sequence tq,m(ϕ1…ϕn)
tq,m(ϕ1…ϕnϕn+1) = tq,m(ϕ1…ϕn), otherwise
The test-set containing sequences of inputs for the ant X-machine is the following:
{space space pheromone, space space space, pheromone, space nest space pheromone, space food nest, space pheromone food nest, pheromone food nest, pheromone food space nest, …}
The test-set so produced is proved to find all faults in the agent implementation. The testing process can therefore be performed automatically, by checking whether the output sequences produced by the implementation are identical to the ones expected from the agent model.
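The construction X = S(Φk+1 ∪ … ∪ Φ ∪ {ε})W is plain concatenation of sets of sequences, so it is easy to mechanize. The sketch below is illustrative only (sequences are modeled as tuples of function names, and only a subset of Φ is listed); it produces the function sequences, which the test function would then translate into input sequences.

from itertools import product

def phi_power(phi, n):
    # all sequences over phi of length exactly n, as tuples of names
    return set(product(phi, repeat=n))

def w_method(S, W, phi, k):
    # X = S (Phi^{k+1} ∪ ... ∪ Phi ∪ {ε}) W, by concatenating sequence sets
    middle = set().union(*(phi_power(phi, n) for n in range(k + 2)))
    return {s + m + w for s in S for m in middle for w in W}

phi = {"move", "move_to_food", "lift_food", "found_nest"}
S = {(), ("move",), ("move", "move_to_food"), ("move_to_food", "lift_food")}
W = {("stay_at_nest",), ("move", "move_to_food"), ("found_nest",)}
print(len(w_method(S, W, phi, k=0)))  # number of function sequences to translate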
AGENTS AS AGGREGATION OF BEHAVIORS
Agents can be modeled as a stand-alone (possibly complex) X-machine, as shown in the previous section. However, an agent can also be viewed as a set of simpler components, which model various different behaviors of the
agent. This fits with the three principles of complex agent systems: decomposition, abstraction, and organization (Jennings, 2001). Another approach for reactive agents is described in the subsumption architecture (Brooks, 1991), in which behaviors can communicate with each other in order to result in a situated agent with the desired overall robust performance. Similarly, Collinot et al. (1996) developed the Cassiopeia method, in which agents are defined by following three steps:
• Identifying the elementary behaviors that are implied by the overall task
• Identifying the relationships between elementary behaviors
• Identifying the organizational behaviors of the system
A methodology for building communicating X-machines from existing stand-alone X-machines has been developed, so that modeling can be split into two separate activities:
• The modeling of X-machine components
• The description of the communication between these components
The approach has several advantages for the developer, who:
• Does not need to model a communicating system from scratch
• Can reuse existing models
• Can consider modeling and communication as two separate, distinct activities
• Can use existing tools for stand-alone and communicating X-machines
Let us discuss the above one by one. Certain approaches to building a communicating system require a brand new conceptualization and development of the system as a whole. This has a major drawback, i.e., one cannot reuse existing models that have already been verified and tested for their “correctness.” Often, in agent systems, components from other agent systems are required. A desirable approach would be to conceptualize the system as a set of independent smaller models, which need to communicate with each other. Thus, one does not need to worry about the individual components, to which model-checking techniques and testing have been applied, but only about appropriately linking those components. This leads to a disciplined development methodology, which implies two distinct and largely independent development activities, i.e., building models and employing communication between them. It also means that existing languages and tools for modeling, model checking, and testing are still useful and can be further extended to support larger communicating systems.
Several theoretical approaches for communicating X-machines have been proposed (Balanescu et al., 1999; Cowling et al., 2000; Barnard, 1998). In this section, we will describe the one that focuses on the practical development of communicating systems but also subsumes all the others (Kefalas et al., 2001). In this approach, the functions of an X-machine, if so annotated, read input from a communicating stream instead of the standard input stream. Also, the functions may write to a communicating input stream of another X-machine. The normal output of the functions is not affected. The annotation used is a solid circle (IN port) and a solid box (OUT port), indicating that input is read from another component and that output is directed to another component, respectively. For example, function ϕ in Figure 5 accepts its input from the model x-m1 and writes its output to model x-m2. Multiple communication channels for a single X-machine may exist. Another example is a simple form of communication between two ants. Assume that one ant is responsible for notifying another ant about the position of the food source. In order to achieve communication, the X-machines are modified as illustrated in Figure 6.
Figure 5: An abstract example of a communicating X-machine component. [The figure shows an X-machine with its standard input and output streams, an IN port on a channel for receiving messages from x-m1, and an OUT port on a channel for sending messages to x-m2 via function ϕ.]
Figure 6: Ant2 X-machine sends a message about the food position to Ant1 X-machine, by utilizing a communicating port. [The figure shows the ANT1 and ANT2 machines, with states such as MOVING FREELY, AT FOOD, and GOING BACK TO NEST and functions such as move, find_food, lift_food, move_to_nest, and drop_food; the lift_food function of ANT2 is connected through a communication channel to ANT1.]
The function lift_food of the X-machine model ant2 becomes:
lift_food((f,x,y), (none,(x,y),foodlist)) → (OUT^x-m ant1(f,x,y), (f,(x,y),<(x,y)::foodlist>)), if f ∈ FOOD ∧ (x,y) ∉ foodlist
and the function find_food of the X-machine model ant1 becomes:
find_food(IN^x-m ant2(f,fpx,fpy), (food,(x,y),foodlist)) → (“more food”, (food,(x,y),<(fpx,fpy)::foodlist>)), if f ∈ FOOD ∧ (fpx,fpy) ∉ foodlist
Function find_food of ant2 may be modified accordingly to write a message to the OUT port, if needed. The approach is practical, in the sense that the designer can separately model the components of an agent and then describe the way in which these components communicate. This allows a disciplined development of situated agents. Practically, as we shall see later, components can be reused in other systems, because the only thing that needs to change is the communication part. In the following, we will use communicating X-machines to model the collective foraging behavior of a colony of honeybees; the model is fully compatible with the rules used by foraging honeybees (Vries & Biesmeijer, 1998), which include specifications for:
• Traveling from the nest to the source
• Searching for the source
• Collecting nectar from the source
• Traveling back to the nest
• Transmitting the information about the source (the dancing of the returning bee)
• The reaction of a bee in the nest to the dancing of a nest mate
A foraging bee can be modeled according to a set of independent behaviors, which constitute the components of the overall agent. Figure 7 shows some of the behaviors of a foraging bee modeled as simple X-machines, each with an input set Σ and a memory tuple M. Each machine has different memory, inputs (percepts), and functions. Some states and functions were named differently to show the modularity of the approach. It is assumed that the bee perceives:
• Empty space to fly (space)
• The hive (nest)
• The source of nectar (source)
• An amount of nectar (nectar)
• Other bees, i.e., foraging bees (fbee) or receiving bees (rbee)
• That it has lost its orientation (lost)
Figure 7: The behaviors of a foraging bee modeled separately as X-machine components.

Behavior | Σ | M | q0
Traveling from nest to source | {space, source} | (bee_pos, source_pos) | at source
Searching for the source | {space, source} | (bee_pos, source_pos) | flying
Collecting nectar from the source | {nectar, rbee} | (nectar_amount) | carrying nothing
Traveling back to the nest | {nest, space} | (nest_pos) | at hive
Transmitting information about the source (dancing) | {fbee, space, nest, source} | (bee_pos, source_pos) | in the nest
Reacting to the information transmitted by the dancing | {space, lost, source_pos} | (status, source_pos) | flying freely

[Each row of the figure also shows the corresponding X-machine model, with states and functions such as fly_to_source, find_source, keep_flying, detect_source, fly_back, collect_nectar, transfer_nectar, detect_hive, fly_out, fly_in, dancing, get_info_from_dance, loose_source_info, and ignore_dance.]
The memory of each X-machine holds information on the bee, the source, and the nest positions (bee_pos, source_pos, and nest_pos), the amount of nectar carried (nectar_amount), and the bee’s status (employed or unemployed). For example, consider the X-machine modeling the dancing behavior. Its functions are defined as follows (Gheorghe et al., 2001):
dancing(fbee, (bee_pos, source_pos)) → (“dancing”, (bee_pos, source_pos))
fly_out(space, (bee_pos, source_pos)) → (“flying out”, (bee_pos’, source_pos))
fly_in(nest, (bee_pos, source_pos)) → (“flying in”, (bee_pos’, source_pos))
find_source(source, (bee_pos, source_pos)) → (“source found”, (source_pos, source_pos))
keep_fly_out(space, (bee_pos, source_pos)) → (“keep flying”, (bee_pos’, source_pos))
where bee_pos, bee_pos’, source_pos ∈ Set_of_positions. The bee position can be calculated by some external function or some other X-machine. Figure 8 shows in detail how communication can be achieved directly between various honeybees, e.g., an employed foraging bee sends the source position to another foraging bee through the dancing behavior:
dancing(fbee, (bee_pos, source_pos)) → (OUT^x-m reacting(source_pos), (bee_pos, source_pos))
while an unemployed foraging bee reads the source position by the function:
Figure 8: An example of two communicating behaviors; an employed bee sends information about the source position to an unemployed bee. [The figure shows the X-M DANCING machine of a foraging employed bee (states IN THE NEST and OUT OF NEST; functions dancing, fly_out, fly_in, keep_fly_out, find_source) connected through a communication channel to the X-M REACTING machine of a foraging unemployed bee (states FLYING FREELY and FLYING TO SOURCE; functions get_info_from_dance, ignore_dance, loose_source_info, fly).]
get_info_from_dance(IN^x-m dancing(source_pos), (unemployed, nil)) → (“getting source info”, (employed, source_pos)).
If the foraging bee is currently employed, it just ignores the message:
ignore_dance(IN^x-m dancing(source_pos), (employed, source_pos)) → (“ignoring source info”, (employed, source_pos)).
The same communication takes place when a foraging bee transfers the amount of nectar that it is carrying to a receiving bee waiting at the hive. The separate behaviors can be put together in a communicating X-machine model. Figure 9 shows the complete foraging bee system, which is made up of component X-machines that communicate via channels. Each machine works separately and concurrently in an asynchronous manner. Each machine can read inputs from a communication channel instead of its standard input stream. Also, each machine can send a message through a communication channel that will act as input to functions of another component. The figure shows an extra component, i.e., the perception system of the bee, which provides percepts to the various behaviors. In addition, more machines can be modeled, for example, an X-machine that builds an environment map (positions of obstacles, nest, food items, etc.). Information held in the memory of this machine could be used to move efficiently around the environment, or even to model proactive behavior for the agent. Thus, modeling of an agent can be incremental, by adding components that further advance the level of intelligent behavior. A minimal sketch of this message-passing scheme is given below.
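The IN/OUT annotation amounts to functions consuming from and producing to named channels. The following sketch is illustrative only (queues stand in for communication streams, and the names mirror the dancing/reacting example); it is not the chapter’s implementation.

from queue import Queue

channel = Queue()  # communication stream from x-m "dancing" to x-m "reacting"

def dancing(percept, mem):
    bee_pos, source_pos = mem
    if percept == "fbee":              # annotated function: writes to OUT port
        channel.put(source_pos)
        return "dancing", (bee_pos, source_pos)

def get_info_from_dance(mem):
    status, _ = mem
    if status == "unemployed" and not channel.empty():
        source_pos = channel.get()     # annotated function: reads from IN port
        return "getting source info", ("employed", source_pos)
    return "ignoring source info", mem

print(dancing("fbee", ((0, 0), (5, 7))))
print(get_info_from_dance(("unemployed", None)))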
Figure 9: Communicating X-machine modeling of agent bee through aggregation of behaviors. [The FORAGING BEE system comprises a PERCEPTING component (detect_space, detect_nest, detect_bee, detect_source, detect_nectar, got_lost) channeling percepts to the behavior components: space, source to traveling from nest to source and to searching for source; rbee to collecting nectar from source and transferring it; nest, space to traveling back to nest; ubee, space, nest, source to dancing; and space, lost to reacting to dancing.]
The whole system works as follows: an employed bee accepts inputs from the environment, which cause transitions in the X-machine components that model its individual behaviors. Having found a source and collected nectar (the appropriate transitions have been performed in the source and environment X-machines), the bee returns to the hive and, on the sight of another foraging bee, performs the dancing which, as shown earlier, transmits the information on the source position. An unemployed bee accepts the input from its communication port and changes its status to employed. It can then perceive inputs from the environment and travel to the source. The whole process may then be repeated. In parallel, other bees can do the same in an asynchronous manner. The approach is practical, in the sense that the developer can separately model the components of an agent and then describe the way in which these component behaviors communicate. Also, components can be reused in other systems, because the only thing that needs to change is the communication part. For example, the behavior for avoiding obstacles is a component of any biology-inspired agent, e.g., foraging bees or ants. The major advantage is that the methodology also lends itself to modular model checking and testing strategies, in which X-machines are individually tested as components, while communication is tested separately with the existing methodologies mentioned earlier.
MULTI-AGENT MODELS
Modeling multi-agent systems requires consideration of the means of communication between agents, in order to coordinate tasks, cooperate, etc. Also, modeling of artificial environments in which agents act imposes the need to exchange “messages” between agents and the environment. For example, a number of ants modeled as X-machines need to interact with their environment, which contains a few seeds (food items) that are also modeled as X-machines. Two such ants, which may be instances of the same model class, can communicate with the environment in order to achieve the desired behavior, i.e., to lift a heavy seed that is far beyond the abilities of a single agent (Figure 10). Several behaviors are omitted for the sake of exposition. The ant is capable of lifting a food item only if the strength it possesses is greater than the weight of the food item. In any other case, cooperation between ants is necessary, which can be achieved by communication between the ants and the food item machine. The method used in the previous section to describe communicating X-machines can also serve this purpose.
Figure 10: Ant models cooperating in the lifting task through communication with the environment. [The figure shows two ant machines, ANT1 and ANT2, with states AT NEST, MOVING FREELY, AT FOOD, and CARRYING FOOD and functions free_walk, continue_free_walk, find_food, attempt_lift, lift_food, walk_to_nest, find_nest, and store, communicating with a SEED machine whose states ON GROUND, BECOME LIGHTER, BECOME HEAVIER, and LIFTED are connected by the transitions found, lift, force_applied, force_released, and put_down.]
In addition, one may require agents that resemble one another, i.e., they have a common set of behaviors but extra individual behaviors that determine some task characterizing their individuality. For example, in a colony of foraging bees, some bees are responsible for collecting nectar from a source as well as having the ability to “inform” others about the location of the source (the dance of the foraging bee), while other bees are responsible for storing the nectar in the hive (Seeley & Buhrman, 1999). Nevertheless, all have the ability to fly, receive nectar, etc. Such situations can be modeled with X-machines, as long as there is a way to define classes of models and instances of these classes, which can inherit generic behaviors through the chain of hierarchy. Figure 11 demonstrates the whole multi-agent system that models the colony of honeybees, as well as the environment and its various components, such as the source and the nest. The same happens when coordination is achieved by some other agent through scheduling and decomposition of a large task into smaller tasks that are manageable by individual agents. Ready-made components may be used to complete the multi-agent system, as discussed before. If, however, these components bear some incompatibility with the rest of the agents, communication and interaction protocols may be required. One can easily imagine X-machines that act as a synthetic glue between agents, modeling, for example,
Figure 11: The model of the honeybees’ multi-agent system and its interaction with the environment. [The ENVIRONMENT supplies the percepts space, nest, source, nectar, rbee, ubee, ebee, and lost to the agents. An employed FORAGING BEE contains a PERCEPTING component (detect_space, detect_nest, detect_bee, detect_source, detect_nectar, got_lost) and the behaviors traveling from nest to source, searching for source, collecting nectar from source and transferring it, traveling back to nest, and dancing; the dancing behavior passes source_pos to the reacting-to-dancing behavior of an unemployed FORAGING BEE, and a RECEIVING BEE collects nectar from the other bee. The NEST and the SOURCE are also modeled as components.]
KQML parsers (Finin et al., 1997) or the Contract Net Protocol (Davis & Smith, 1983).
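The class/instance requirement above maps naturally onto ordinary object-oriented inheritance, which is also one way to read the instance_of annotation of XMDL shown in the next section. The following toy sketch is illustrative only (the class and behavior names are invented):

class ForagingBee:                      # generic behaviors shared by all bees
    def fly(self):            return "flying"
    def receive_nectar(self): return "receiving nectar"

class EmployedBee(ForagingBee):         # adds the individual dancing behavior
    def dance(self, source_pos):
        return ("dancing", source_pos)

class ReceivingBee(ForagingBee):        # adds the individual storing behavior
    def store_nectar(self):
        return "storing nectar in hive"

bee = EmployedBee()
print(bee.fly(), bee.dance((5, 7)))     # inherited and individual behaviors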
TOOLS
X-machine modeling is based on a mathematical notation which, however, implies a certain degree of freedom, especially as far as the definition of functions is concerned. In order to make the approach practical and suitable for the development of tools around X-machines, a standard notation has been devised and its semantics fully defined (Kapeti & Kefalas, 2000). Our aim was to use this notation, namely the X-machine Definition Language (XMDL), as an interchange language between developers who could share models
Figure 12: The use of XMDL in system development. [An X-machine model is coded into an XMDL model, which is checked by a syntax analyzer; parser tools then feed the XMDL model to a development tool, to model checking and completeness checking algorithms, to testing, and to a compiler.]
written in XMDL for different purposes (Figure 12). To avoid complex mathematical notation, the language symbols are defined entirely in ASCII. A model developed with XMDL consists of the following:
• The model of a component X-machine
• The coding referring to possible communication of this component with other X-machines
Briefly, an XMDL-based model is a list of definitions corresponding to the 8-tuple of the X-machine. The language also provides syntax for:
• The use of built-in types such as integers, Booleans, sets, sequences, bags, etc.
• The use of operations on these types, such as arithmetic, Boolean, and set operations
• The definition of new types
• The definition of functions and the conditions under which they are applicable
In XMDL, the functions take two parameter tuples, i.e., an input symbol and a memory value, and return two new parameter tuples, i.e., an output and a new memory value. A function may be applicable under conditions (if-then) or unconditionally. Variables are denoted by “?”. The informative where, in combination with the operator “<-”, is used to describe operations on memory values. The full syntax and semantics of XMDL can be found in Kefalas (2000). For example, the following list presents part of the XMDL code for the agent model described earlier (Figure 3):
#model ant.
#type coord = {-10 … 10}.
#type position = (coord, coord).
#basic_types = [FOOD].
…
#input = {space, nest} union FOOD.
…
#memory (carrying, ant_position, food_positions).
#init_memory (none, (0,0), nil).
#init_state {at_nest}.
#states = {at_nest, at_food, moving_freely, going_back_to_nest, looking_for_food}.
#fun lift_food( (?f,?x,?y), (none,(?x,?y),?foodlist) ) =
  if ?f belongs FOOD and (?x,?y) not belongs ?foodlist
  then ((“lifting food”), (?f,(?x,?y),<(?x,?y) :: ?foodlist>)).
#fun find_food( (?f,?fpx,?fpy), (?food,(?x,?y),?foodlist) ) =
  if ?f belongs FOOD and ?f not belongs ?foodlist
  then ((“more food”), (?food,(?x,?y),<(?fpx,?fpy) :: ?foodlist>)).
#fun drop_food( (nest,0,0), (?food,(?x,?y),?foodlist) ) =
  ((“dropping food”), (none,(0,0),?foodlist)).
#fun find_nest( (nest,0,0), (none,(?x,?y),?foodlist) ) =
  ((“found nest again”), (none,(0,0),?foodlist)).
…
#transition (at_nest, ignore_food) = at_nest.
#transition (at_nest, move) = moving_freely.
#transition (moving_freely, lift_food) = at_food.
…
#end.
In order to incorporate the semantics of communicating X-machines, the syntax of XMDL provides the following annotation:
#model <model name> instance_of <model name> [with:
  #init_state = <state>.
  #init_memory <memory>].
#communication of <model name>:
reads from <model name>
writes <message> to <model name>
[where <expression> from (memory|input|output) <expression>].
For example, the following list represents the communication part of an ant agent:
#model ant1 instance_of ant.
#communication of ant1:
find_food reads from ant2.
#model ant2 instance_of ant.
#communication of ant2:
lift_food writes (?f,?x,?y) to ant1 where (?f,?x,?y) from input (?f,?x,?y).
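Each XMDL #fun translates naturally into a guarded function over an (input, memory) pair, which is essentially what the automatic translations to Prolog and Java mentioned below do. The following hand translation is illustrative only (the representation choices are assumptions, not the output of the XMDL tools):

# Hand translation of two XMDL #fun definitions (illustrative only).
# Memory is the tuple (carrying, ant_position, food_positions).
FOOD = {"seed", "crumb"}

def lift_food(inp, mem):
    (f, x, y), (carrying, ant_pos, foodlist) = inp, mem
    if carrying is None and ant_pos == (x, y) \
            and f in FOOD and (x, y) not in foodlist:
        return "lifting food", (f, (x, y), [(x, y)] + foodlist)

def drop_food(inp, mem):
    food, ant_pos, foodlist = mem
    if inp == ("nest", 0, 0):
        return "dropping food", (None, (0, 0), foodlist)

mem = (None, (3, 4), [])
out, mem = lift_food(("seed", 3, 4), mem)
print(out, mem)   # lifting food ('seed', (3, 4), [(3, 4)])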
The XMDL language is:
• Orthogonal, because there are rather few primitive structures that can be combined in rather few ways
• Mark-up, because it supplies users with the freedom of a non-positioning language
• Strongly typed, because it performs type checking, set checking, and function checking, as well as exception handling
• Very close to the mathematical notation
DISCUSSION
We have demonstrated how formal methods, and specifically X-machines, can be used as the core method in the intelligent agent engineering process. First, we argued that X-machines are better suited than simple automata to the requirements of agent modeling. The memory structure and the functions provide a better mapping to the requirements of an agent model, and they prove essential where simple automata fail to provide a finite model. Second, the relation of X-machines to FSMs gives the opportunity to exploit existing techniques for verification and testing. Verification is achieved through a variant of temporal logic adapted to X-machines, namely XmCTL, which can be utilized for model checking of agents, i.e., to verify whether certain desired properties hold in the agent model. Testing refers to the agent implementation, and it is complete, in the sense that a variant of the W-method can automatically generate all possible test cases for the agent.
Third, a methodology for building complex agent systems through aggregation of behaviors, as well as multi-agent systems, was discussed. Communicating X-machines were demonstrated as a powerful extension of X-machines that facilitates modeling of large-scale systems without losing the ability to use model checking and testing on each individual component. Finally, a notation that makes X-machines practical was presented. The X-Machine Definition Language provides the appropriate syntax and semantics to make the formal method:
• Practical, in the sense that complex mathematical definitions can be written as simple text
• General, because the notation becomes standard, and therefore the same model can be used throughout a variety of tools
• Modular, because it can accommodate component-based communicating X-machine model development
• Disciplined, because modeling of components and communication between them are regarded as two separate development activities
Currently, XMDL forms the basis for the development of various tools, such as automatic translations to programming languages such as PROLOG and JAVA (Kefalas, 2000) or transformations to other formal notations such as Z (Kefalas & Sotiriadou, 2000). Various other tools, such as animators, test-case generators, and model checkers, also use XMDL as an interchange language. Finally, XMDL has recently been extended to accommodate certain object-oriented features of X-machine modeling, as well as cardinality of communication in a multi-agent system (Kefalas et al., 2001).
CONCLUSION
Further work is required in developing the tools further and in building more examples of agent systems on which to carry out the testing and verification described. Because the X-machine method is fully grounded in the theory of computation, it is fully general and applicable to any type of computational task. The X-machine paradigm is also convenient when it comes to implementing the models in an imperative programming language; in fact, the translation is more or less automatic. The existence of the powerful testing method described lays the foundation for the method to be used in potentially critical applications. Finally, the model checking developments will lead to a situation in which one of the key issues in agent software engineering, namely,
how can we guarantee that the agent system constructed will exhibit the desired emergent behavior, can be solved, or at least substantial progress toward this goal will be achieved.
REFERENCES
Attoui, A. & Hasbani, A. (1997). Reactive systems developing by formal specification transformations. Proceedings of the 8th International Workshop on Database and Expert Systems Applications (DEXA 97), (pp. 339–344).
Balanescu, T., Cowling, A.J., Georgescu, H., Gheorghe, M., Holcombe, M., & Vertan, C. (1999). Communicating stream X-machines systems are no more than X-machines. Journal of Universal Computer Science, 5 (9), 494–507.
Barnard, J. (1998). COMX: a design methodology using communicating X-machines. Journal of Information and Software Technology, 40, 271–280.
Benerecetti, M., Giunchiglia, F., & Serafini, L. (1999). A model checking algorithm for multiagent systems. In J.P. Muller, M.P. Singh, & A.S. Rao (Eds.), Intelligent Agents V (LNAI Volume 1555), (pp. 163–176). Heidelberg: Springer-Verlag.
Brazier, F., Dunin-Keplicz, B., Jennings, N., & Treur, J. (1995). Formal specification of multi-agent systems: a real-world case. Proceedings of International Conference on Multi-Agent Systems (ICMAS’95), (pp. 25–32). Cambridge, MA: MIT Press.
Brooks, R.A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics Automation, 2 (7), 14–23.
Brooks, R.A. (1991). Intelligence without reason. In J. Mylopoulos & R. Reiter (Eds.), Proceedings of the 12th International Joint Conference on Artificial Intelligence (pp. 569–595). Morgan Kaufmann.
Burch, J.R., Clarke, E.M., McMillan, K.L., Dill, D.L., & Hwang, J. (1992). Symbolic model checking: 10^20 states and beyond. Information and Computation, 98 (2), 142–170.
Chow, T.S. (1978). Testing software design modeled by finite-state machines. IEEE Transactions on Software Engineering, 4 (3), 178–187.
Clarke, E. & Wing, J.M. (1996). Formal methods: state of the art and future directions. ACM Computing Surveys, 28 (4), 626–643.
Clarke, E.M., Emerson, E.A., & Sistla, A.P. (1986). Automatic verification of finite state concurrent systems using temporal logic specifications. ACM Transactions on Programming Languages and Systems, 8 (2), 244–263.
Collinot, A., Drogoul, A., & Benhamou, P. (1996). Agent oriented design of a soccer robot team. In Proceedings of the 2nd International Conference on Multi-Agent Systems, (pp. 41–47).
Cowling, A.J., Georgescu, H., & Vertan, C. (2000). A structured way to use channels for communication in X-machines systems. Formal Aspects of Computing, 12, 485–500.
Davis, R. & Smith, R. (1983). Negotiation as a metaphor for distributed problem solving. Artificial Intelligence, 20 (1), 63–109.
Deneubourg, J.-L., Aron, S., Goss, S., & Pasteels, J.-M. (1990). The self-organizing exploratory pattern of the Argentine ant. Journal of Insect Behavior, 3, 159–168.
Dorigo, M. & Di Caro, G. (1999). The ant colony optimization meta-heuristic. In D. Corne, M. Dorigo, & F. Glover (Eds.), New Ideas in Optimization (pp. 11–32). New York: McGraw-Hill.
Eilenberg, S. (1974). Automata, Machines and Languages. Vol. A. New York: Academic Press.
Eleftherakis, G. & Kefalas, P. (2001). Model checking safety critical systems specified as X-machines. Matematica-Informatica, Analele Universitatii Bucharest, 49 (1), 59–70.
Eleftherakis, G. & Kefalas, P. (2001). Towards model checking of finite state machines extended with memory through refinement. In G. Antoniou, N. Mastorakis, & O. Panfilov (Eds.), Advances in Signal Processing and Computer Technologies (pp. 321–326). World Scientific and Engineering Society Press.
Emerson, E.A. & Halpern, J.Y. (1986). Sometimes and not never revisited: on branching time versus linear time. Journal of the ACM, 33, 151–178.
Ferber, J. (1996). Reactive distributed artificial intelligence: principles and applications. In Foundations of Distributed Artificial Intelligence (pp. 287–314). New York: John Wiley & Sons.
Finin, T., Labrou, Y., & Mayfield, J. (1997). KQML as an agent communication language. In J.M. Bradshaw (Ed.), Software Agents (pp. 291–316). AAAI Press.
Fisher, M. & Wooldridge, M. (1997). On the formal specification and verification of multi-agent systems. International Journal of Cooperating Information Systems, 6 (1), 37–65.
Futatsugi, K., Goguen, J., Jouannaud, J.-P., & Meseguer, J. (1985). Principles of OBJ2. In B. Reid (Ed.), Proceedings, Twelfth ACM Symposium on Principles of Programming Languages (pp. 52–66). Association for Computing Machinery.
Gheorghe, M., Holcombe, M., & Kefalas, P. (2001). Computational models for collective foraging. Biosystems, 61, 133–141.
Harel, D. (1987). Statecharts: a visual approach to complex systems. Science of Computer Programming, 8 (3).
Holcombe, M. (1988). X-machines as a basis for dynamic system specification. Software Engineering Journal, 3 (2), 69–76.
Holcombe, M. & Ipate, F. (1998). Correct Systems: Building a Business Process Solution. London: Springer-Verlag.
Inverno, d’ M., Kinny, D., Luck, M., & Wooldridge, M. (1998). A formal specification of dMARS. In M.P. Singh, A. Rao, & M.J. Wooldridge (Eds.), Intelligent Agents IV (LNAI Volume 1365) (pp. 155–176). Heidelberg: Springer-Verlag.
Ipate, F. & Holcombe, M. (1998). Specification and testing using generalised machines: a presentation and a case study. Software Testing, Verification and Reliability, 8, 61–81.
Jennings, N.R. (2000). On agent-based software engineering. Artificial Intelligence, 117, 277–296.
Jennings, N.R. (2001). An agent-based approach for building complex software systems. Communications of the ACM, 44 (4), 35–41.
Jones, C.B. (1990). Systematic Software Development Using VDM (2nd ed.). New York: Prentice Hall.
Kapeti, E. & Kefalas, P. (2000). A design language and tool for X-machines specification. In D.I. Fotiadis & S.D. Nikolopoulos (Eds.), Advances in Informatics (pp. 134–145). World Scientific Publishing Company.
Kefalas, P. (2000). Automatic translation from X-machines to Prolog. Technical Report TR-CS01/00, Department of Computer Science, CITY Liberal Studies.
Kefalas, P. (2000). XMDL user manual: version 1.6. Technical Report TR-CS07/00, Department of Computer Science, CITY Liberal Studies.
Kefalas, P., Eleftherakis, G., & Kehris, E. (2001). Modular modeling of large-scale systems using communicating X-machines. In Y. Manolopoulos & S. Evripidou (Eds.), Proceedings of the 8th Panhellenic Conference in Informatics (pp. 20–29). Livanis Publishing Company.
Kefalas, P. & Sotiriadou, A. (2000). A compiler that transforms X-machines specification to Z. Technical Report TR-CS06/00, Department of Computer Science, CITY Liberal Studies.
Kripke, S. (1963). A semantical analysis of modal logic I: normal modal propositional calculi. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 9, 67–96.
McMillan, K.L. (1993). Symbolic Model Checking. Dordrecht: Kluwer.
Odell, J., Parunak, H.V.D., & Bauer, B. (2000). Extending UML for agents. In Proceedings of the Agent-Oriented Information Systems Workshop at the 17th National Conference on Artificial Intelligence.
Rao, A.S. & Georgeff, M.P. (1993). A model-theoretic approach to the verification of situated reasoning systems. In R. Bajcsy (Ed.), Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI’93), (pp. 318–324). Morgan Kaufmann.
Rao, A.S. & Georgeff, M. (1995). BDI agents: from theory to practice. In Proceedings of the 1st International Conference on Multi-Agent Systems (ICMAS-95), (pp. 312–319).
Reisig, W. (1985). Petri Nets — An Introduction. EATCS Monographs on Theoretical Computer Science, 4. Heidelberg: Springer-Verlag.
Rosenschein, S.R. & Kaelbling, L.P. (1995). A situated view of representation and control. Artificial Intelligence, 73 (1–2), 149–173.
Seeley, T.D. & Buhrman, S.C. (1999). Group decision making in swarms of honey bees. Behavioral Ecology and Sociobiology, 45, 19–31.
Spivey, M. (1989). The Z Notation: A Reference Manual. New York: Prentice Hall.
Vries, H. de & Biesmeijer, J.C. (1998). Modeling collective foraging by means of individual behavior rules in honey-bees. Behavioral Ecology and Sociobiology, 44, 109–124.
Wooldridge, M. & Ciancarini, P. (2001). Agent-oriented software engineering: the state of the art. LNCS (Vol. 1957, pp. 1–28). Heidelberg: Springer-Verlag.
Wulf, W.A., Shaw, M., Hilfinger, P.N., & Flon, L. (1981). Fundamental Structures of Computer Science. New York: Addison-Wesley.
Young, W.D. (1991). Formal methods versus software engineering: is there a conflict? In Proceedings of the 4th Testing, Analysis, and Verification Symposium, (pp. 188–189).
Chapter V
Engineering Emotionally Intelligent Agents

Penny Baillie
University of Southern Queensland, Australia

Mark Toleman
University of Southern Queensland, Australia

Dickson Lukose
Mindbox Inc., USA
ABSTRACT
Interacting with intelligence in an ever-changing environment calls for exceptional performances from artificial beings. One mechanism explored to produce intuitive-like behavior in artificial intelligence applications is emotion. This chapter examines the engineering of a mechanism that synthesizes and processes an artificial agent’s internal emotional states: the Affective Space. Through use of the affective space, an agent can predict the effect certain behaviors will have on its emotional state and, in turn, decide how to behave. Furthermore, an agent can use the emotions produced by its behavior to update its beliefs about particular entities and events. This chapter explores the psychological theory used to structure the affective space, the way in which the strength of emotional states can be diminished over time, how emotions influence an agent’s perception, and the way in which an agent can migrate from one emotional state to another.
INTRODUCTION
This chapter examines the affective, core mechanism of the Emotionally Motivated Artificial Intelligence (EMAI) architecture: the Affective Space. The design of the affective space is motivated by research into the affective agent domain and by the shortage of agent architectures with the capacity for decision making influenced by a mechanism that simulates human emotional intelligence (Goleman, 1995). Picard (1997) affirms the importance of this type of emotionally influenced decision making in computers. She suggests that if affective decision making were integrated into computers, it would provide a competent solution to emulating the intelligence of humans, where decisions are often made with insufficient knowledge, limited memory, and relatively slow processing speeds. Emotions are an integral part of human decision making, and giving machines a similar mechanism could help in problem solving where options cannot be fully explored, data is incomplete, and processing time is short.
In recent times, a number of architectures have been designed to produce artificial agents capable of expressing and processing emotions: Silas T. Dog (Blumberg, 1997), PETEEI (El-Nasr, 1998), the EBC Framework (Velasquez, 1999), Emotional Agents (Reilly, 1996), and Creatures (Grand et al., 1997). These models cover a wide range of affective phenomena and differ broadly between implementations. As a complete examination of these architectures would constitute a publication in its own right, a comprehensive review of these models does not appear in this chapter.
This chapter begins with a brief overview of the EMAI architecture, followed by an in-depth examination of the affective space, the architecture’s primary emotion-producing mechanism. The chapter continues by examining how emotions are produced and processed by the affective space. Finally, some future trends for the use of emotional agents are examined.
OVERVIEW OF THE EMAI ARCHITECTURE
The EMAI architecture consists of several major processing and knowledge representation areas. These areas work together in a complex network of information gathering, manipulation, and updating. As shown in Figure 1, any agent implemented using the EMAI architecture receives external sensory data from its environment. It also processes internal sensory data from the Motivational Drive Generator in the Knowledge Area. Internal State Registers
simulate low-level biological mechanisms (such as hunger and fatigue). The sensory processor and the affective space integrate both types of sensory data into the agent’s belief system via an emotional filtering process. Sensory input (internal and external) received by the sensory processor may activate goals in the agent. The goals are processed by the agent’s constructive area, where plans are chosen that will satisfy these goals. These plans are generated by the Event Space Generator, which produces a series of competing events that could be performed by the agent to satisfy its goals. Before the agent schedules the events for execution in the Deliberate Area, the events are ordered by the Intention Generator, in collaboration with the affective space, and sorted from most liked to least liked. Once the agent has the list of events sorted by emotional affect, the behavior actuator begins executing them in order. The events executed by the behavior actuator at any moment in time constitute the EMAI agent’s outward behavior.
Although the goal-orientated nature of the EMAI architecture is equally as important as the emotional mechanisms, the focus of this chapter is on the engineering of the emotional aspects of the architecture; goals and plans will therefore not be discussed here. For further information on the goal setting and planning mechanisms of the EMAI architecture, the reader is encouraged to see Baillie (2002).
The affective space, shown in Figure 1, is the focal point of the agent architecture. It acts as an emotional filter that influences an EMAI agent’s perception of its beliefs and the environment and, as a consequence, how it behaves. While a number of emotional agents have preceded EMAI, none have structured emotions in the unrivaled multidimensional sense that the EMAI’s affective space does.
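The decision cycle just described — generate competing events, predict each event’s emotional effect, sort from most to least liked, execute in order — can be compressed into a few lines. The sketch below is illustrative only (all names and the scoring scheme are invented; it is not the EMAI implementation):

def choose_behavior(candidate_events, predict_emotion, liking):
    # Order competing events from most liked to least liked, as the
    # Intention Generator does in collaboration with the affective space.
    return sorted(candidate_events,
                  key=lambda ev: liking(predict_emotion(ev)),
                  reverse=True)

events = ["greet user", "ignore user"]
predict = {"greet user": "happiness", "ignore user": "boredom"}.get
liking = {"happiness": 1.0, "boredom": -0.5}.get
print(choose_behavior(events, predict, liking))  # most liked first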
THE AFFECTIVE SPACE
The affective space is a new and unique concept in the domain of affective computing. Smith and Ellsworth’s experimentation and analysis (Smith & Ellsworth, 1985) identify six orthogonal appraisal dimensions that describe the scope of the affective space. These dimensions are pleasantness, P; anticipated effort, E; certainty, C; attentional activity, A; responsibility, R; and control, O, across 15 emotions: happiness, sadness, anger, boredom, challenge, hope, fear, interest, contempt, disgust, frustration, surprise, pride, shame, and guilt. Each emotion exists within the affective space as a discrete emotion point comprised of values for each of the six appraisal dimensions, as shown in Table 1.
Figure 1: Emotionally motivated artificial intelligence architecture
Table 1: Mean locations of emotional points

Emotion       P      E      C      A      R      O
Happiness    -1.46  -0.33  -0.46   0.15   0.09  -0.21
Sadness       0.87  -0.14   0     -0.21  -0.36   1.51
Anger         0.85   0.53  -0.29   0.12  -0.94  -0.96
Boredom       0.34  -1.19  -0.35  -1.27  -0.19   0.12
Challenge    -0.37   1.19  -0.01   0.52   0.44  -0.2
Hope         -0.5   -0.18   0.46   0.31   0.15   0.35
Fear          0.44   0.63   0.73   0.03  -0.17   0.59
Interest     -1.05  -0.07  -0.07   0.7   -0.13   0.41
Contempt      0.89  -0.07  -0.12   0.08  -0.5   -0.63
Disgust       0.38   0.06  -0.39  -0.96  -0.5   -0.19
Frustration   0.88   0.48  -0.08   0.6   -0.37   0.22
Surprise     -1.35  -0.66   0.73   0.4   -0.97   0.15
Pride        -1.25  -0.31  -0.32   0.02   0.81  -0.46
Shame         0.73   0.07   0.21  -0.11   1.13  -0.07
Guilt         0.6    0     -0.15  -0.36   1.13  -0.29
As the agent’s emotional state or mood is stored as six attributes, each representing a value in one of the appraisal dimensions of the affective space, the agent’s mood can be compared (by a distance measure) to any pure emotion (for example, happiness or sadness) in the affective space. The pure emotion physically closest to the agent’s emotional state is used to describe the agent’s mood. Through the implementation of the affective space, an EMAI agent can not only synthesize a mood for itself, but also associate emotions with environmental elements.
ASSIGNING A MULTIDIMENSIONAL EMOTIONAL STATE
In the EMAI architecture, an emotional state can be assigned to any entity, be it an event element (person, place, object, etc.), a whole event, or a partially ordered set of events (a plan). By assessing an element, event, or plan using the six appraisal dimensions, a six-coordinate point can be determined. This point, when plotted in the affective space, can be compared to the 15 pure emotion points that exist within it (see Table 1). For any element e, an Ωe value can be determined that represents the emotional state Ω evoked by the item. The Pe, Ee, Ce, Ae, Re, and Oe values are calculated for the item. The representation of Ωe is shown in Expression 1:
Ωe = {Pe, Ee, Ce, Ae, Re, Oe}    (1)
Given Ωe for an element, the emotional state can be deduced, for the purpose of expression in natural language, by determining the distance of Ωe from each of the 15 pure emotion points. To determine the natural language terminology that best describes the emotion of Ωe, the distance between the element’s emotional state point and each of the pure emotions (Ω1…Ω15) is calculated using Expression 2:
ΔΩj = √[(Pe − Pj)² + (Ee − Ej)² + (Ce − Cj)² + (Ae − Aj)² + (Re − Rj)² + (Oe − Oj)²]    (2)
where 15 values are calculated, for j = 1…15. The pure emotion for the element expressed in English, Eme, closest to Ωe is then determined by using Expression 3.
Eme = emotion_name( min( ∪j=1…15 ΔΩj ) )    (3)
where the function min returns the pure emotion point in closest proximity to Ωe, and the function emotion_name converts that pure emotion point into English terminology. For example, assume an element with an emotional state point of Ωe = [15, 87, 35, -30, 10, -50]. To find the name of the emotion that best describes the element’s associated emotional state, the first step is to find the distance between this point and the 15 pure emotion points in the affective space (shown in Table 1) using Expression 2. The results are shown in Table 2. In this example, Ωe is associated with the emotion shame.
The multidimensional affective decision-making model of the EMAI agent uses an emotional prioritizing procedure to select behaviors. However, before the prioritizing can begin, an event must be assigned an emotion. The emotion assigned to the event is calculated by considering the agent’s emotional responses to each of the elements in the event. As defined in Baillie et al. (2000), an event E is made up of a set of actions a, a set of objects o, occurs at a time t, and has a context c, as in Expression 4:
E = {a, o, c, t}    (4)
Table 2: Distance between an item's emotional state and pure emotions in affective space

Emotion       ∆Ω
happiness     2.08
sadness       2.22
anger         2.18
boredom       2.15
challenge     1.59
hope          1.46
fear          1.70
interest      2.11
contempt      1.68
disgust       1.74
frustration   1.93
surprise      2.68
pride         1.64
shame         0.80
guilt         0.84
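As an illustration of Expressions 2 and 3 (a sketch, not the EMAI implementation), the nearest-emotion lookup can be coded as follows. Because Table 1 is not reproduced in this excerpt, the pure emotion coordinates registered below are placeholders; only the happiness vector is taken from the worked example later in this section.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of Expressions 2 and 3: find the pure emotion nearest to an
// item's six-dimensional emotional state point.
public class AffectiveSpace {

    // Pure emotion points would come from Table 1; these two entries are
    // placeholders (the happiness vector is the one used in the text's
    // worked example; the shame vector is invented for illustration).
    private final Map<String, double[]> pureEmotions = new LinkedHashMap<>();

    public AffectiveSpace() {
        pureEmotions.put("happiness", new double[] {-146, -33, -46, 15, 9, -21});
        pureEmotions.put("shame", new double[] {12, 80, 30, -28, 8, -47});
        // ... the remaining 13 pure emotions would be registered here.
    }

    // Expression 2: Euclidean distance between two points in the space.
    private static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int d = 0; d < a.length; d++) {
            double diff = a[d] - b[d];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }

    // Expression 3: emotion_name(min over all pure emotion points).
    public String nearestEmotion(double[] omegaE) {
        String best = null;
        double bestDistance = Double.POSITIVE_INFINITY;
        for (Map.Entry<String, double[]> entry : pureEmotions.entrySet()) {
            double d = distance(omegaE, entry.getValue());
            if (d < bestDistance) {
                bestDistance = d;
                best = entry.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // The element from the worked example; with the real Table 1
        // coordinates, Table 2 shows the nearest pure emotion is shame.
        double[] omegaE = {15, 87, 35, -30, 10, -50};
        System.out.println(new AffectiveSpace().nearestEmotion(omegaE));
    }
}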
For each element e in an event E, the emotional state Ωe is defined as per Expression 1. Based on the outcome of event E, the agent will assign a weighting w to the emotional state of each of the elements e. As the weighting of an element and the resulting emotional state with respect to an event are dynamic, the time t at which the emotional state is calculated must also be taken into consideration. Therefore, the emotional state resulting from an event E, written ΩE,t, is calculated as in Expression 5.
ΩE,t = Σ (e = 1 to n) we,t Ωe,t    (5)

where n is the number of elements associated with event E, and

0 ≤ we ≤ 1 and Σ (e = 1 to n) we = 1
Once an event has been completed, each of the elements involved in the event has its emotional state updated with respect to the change in the emotional state of the agent (or the agent's mood) evoked by the outcome of the event, ΩO,t+1. ΩO,t+1 represents the emotional state of the agent after an event has occurred, where O stands for the outcome emotion and t + 1 is the time at which the event ended. This value is not the same as the emotional state assigned to the event after execution. A change in the emotional state of the agent occurs when the values for each of the appraisal dimensions (P, E, C, A, R, O) are updated during and after an event. While each of the six appraisal values of an individual element involved in the event influences how these values change for the agent's emotional state, what the final emotional state will be cannot be determined before the event occurs. The agent can only speculate. For example, an event that the agent believes will make the agent happy may fail during execution, may take longer to complete than initially calculated, or may require extra effort. The resulting emotional state in this example may be sad rather than the expected happy. Therefore, ΩO,t+1 cannot be calculated by combining the appraisal dimensions of the elements of an event but can only be determined after the event has been executed. The change in the emotional state of an event is calculated using Expression 6.
∆ΩE = ΩO,t+1 − ΩE,t    (6)
After this has been calculated, the emotional state for each element in the event set can be updated as in Expression 7.
Ωe,t+1 = Ωe,t + we,t+1 ∆ΩE    (7)
Instead of the element taking on the final emotional state of the event, the previous emotional state of the element is taken into account, along with the weighting of the effect the element had on the resulting emotional state for the event. For example, assume an event E with two elements A and B. At the beginning of the event, elements A and B have emotional states of happiness, and weightings of 0.2 and 0.8, respectively, thus:
ΩA,t = [-146, -33, -46, 15, 9, -21]
ΩB,t = [-146, -33, -46, 15, 9, -21]
wA,t = 0.2
wB,t = 0.8
This would result in the emotional state for the event before execution as:
ΩE,t = wA,t ΩA,t + wB,t ΩB,t
     = 0.2 × [-146, -33, -46, 15, 9, -21] + 0.8 × [-146, -33, -46, 15, 9, -21]
     = [-146, -33, -46, 15, 9, -21]
In other words, a happy event. Assuming that, after the event has occurred, the outcome results in an emotional state of happiness, and that A and B are still weighted the same, A and B can be updated as shown below.
ΩA,t+1 = ΩA,t + wA,t+1 (ΩO,t+1 − ΩE,t)
       = [-146, -33, -46, 15, 9, -21] + 0.2 × ([-146, -33, -46, 15, 9, -21] − [-146, -33, -46, 15, 9, -21])
       = [-146, -33, -46, 15, 9, -21]
ΩB,t+1 = ΩB,t + wB,t+1 (ΩO,t+1 − ΩE,t)
       = [-146, -33, -46, 15, 9, -21] + 0.8 × ([-146, -33, -46, 15, 9, -21] − [-146, -33, -46, 15, 9, -21])
       = [-146, -33, -46, 15, 9, -21]
Figure 2: Emotional State of an Event
When the same event’s emotional state needs to be calculated in the future, it will again be evaluated to be happy. However, if the final emotional state of the event changes from its initial state, the emotional states for the elements will change. For example, imagine that the same event is to occur again. This time, the elements A and B are weighted initially as before; however, after the event, the emotional state is sad, and the weightings of A and B are now 0.4 and 0.6, respectively. Figure 2 graphically represents the elements A and B and the event E before execution with a vector representing the change in emotional state of the agent after the execution of the event E. Given the new emotional state of E and the weighting of A and B after the event, the new emotional states for A and B can be calculated as:
ΩA,t+1 = ΩA,t + wA,t+1 (ΩO,t+1 − ΩE,t)
       = [-146, -33, -46, 15, 9, -21] + 0.4 × ([87, -14, 0, -21, -36, 151] − [-146, -33, -46, 15, 9, -21])
       = [-146, -33, -46, 15, 9, -21] + [93.2, 7.6, 18.4, -14.4, -18, 68.8]
       = [-52.8, -25.4, -27.6, 0.6, -9, 47.8]
ΩB,t+1 = ΩB,t + wB,t+1 (ΩO,t+1 − ΩE,t)
       = [-146, -33, -46, 15, 9, -21] + 0.6 × ([87, -14, 0, -21, -36, 151] − [-146, -33, -46, 15, 9, -21])
       = [-146, -33, -46, 15, 9, -21] + [139.8, 11.4, 27.6, -21.6, -27, 103.2]
       = [-6.2, -21.6, -18.4, -6.6, -18, 82.2]
This moves the emotional point for each of the elements closer to the final emotional state for the event by using the weightings to add a portion of the vector from ΩE,t to ΩO,t+1 onto the emotional states for A and B. The result of this is represented graphically in Figure 3.
Figure 3: Emotional state of event elements after event execution
Once each element of the event has had its emotional state updated, the emotional state for the event is updated using the new values for A and B, thus:
ΩE,t+1 = wA,t+1 ΩA,t+1 + wB,t+1 ΩB,t+1
       = 0.4 × [-52.8, -25.4, -27.6, 0.6, -9, 47.8] + 0.6 × [-6.2, -21.6, -18.4, -6.6, -18, 82.2]
       = [-24.84, -23.12, -22.08, -3.72, -14.4, 68.44]
This results in a new emotional state that, in this example, lies between the elements A and B, as shown in Figure 4. Having found the emotional states (Ω values) for both elements, A can be said to be associated with surprise, and B with sadness. Once the agent has calculated the emotional points for each of the events, and in turn each element involved in an event, it can use these points in its affective decision-making process.

Figure 4: Emotional state of event after event execution
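The two-element example above can be checked with a short sketch of Expressions 5 through 7. This illustrates the update arithmetic only; the class and method names are ours, not part of EMAI.

// Sketch of Expressions 5-7, replaying the worked example above.
public class EventUpdate {

    // Expression 5: Omega_{E,t} = sum over elements of w_{e,t} * Omega_{e,t}.
    static double[] weightedSum(double[][] points, double[] weights) {
        double[] result = new double[points[0].length];
        for (int e = 0; e < points.length; e++) {
            for (int d = 0; d < result.length; d++) {
                result[d] += weights[e] * points[e][d];
            }
        }
        return result;
    }

    // Expressions 6 and 7:
    // Omega_{e,t+1} = Omega_{e,t} + w_{e,t+1} * (Omega_{O,t+1} - Omega_{E,t}).
    static double[] updateElement(double[] element, double weight,
                                  double[] outcome, double[] eventBefore) {
        double[] result = element.clone();
        for (int d = 0; d < result.length; d++) {
            result[d] += weight * (outcome[d] - eventBefore[d]);
        }
        return result;
    }

    public static void main(String[] args) {
        double[] happy = {-146, -33, -46, 15, 9, -21};  // A and B before the event
        double[] sad = {87, -14, 0, -21, -36, 151};     // outcome of the second run
        double[] eventBefore = weightedSum(new double[][] {happy, happy},
                                           new double[] {0.2, 0.8});
        // Post-event weightings of 0.4 and 0.6, as in the example.
        System.out.println(java.util.Arrays.toString(
                updateElement(happy, 0.4, sad, eventBefore)));
        System.out.println(java.util.Arrays.toString(
                updateElement(happy, 0.6, sad, eventBefore)));
    }
}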
AFFECTIVE DECISION MAKING WITH EMAI
An EMAI agent can decide which of its planned events to execute by calculating the resulting emotional effect that performing the event will have on the agent's mood. The agent's mood, ΩEMAI, is the result of weighting the importance of each event, wE, and summing the resultant emotional points for all episodes of all m events performed by the agent, as defined by Expression 8.

ΩEMAI = Σ (E = 1 to m) wE ΩE    (8)

where 0 ≤ wE ≤ 1 and Σ (E = 1 to m) wE = 1
Given a number of competing events that have the same priority, the agent will select an event that will most likely update the agent’s emotional state to a more preferred emotional state. If the agent would prefer to have an emotional state closer to happy, it would select the event that would, when combined with its current emotional state, make the agent happy. During the course of affective decision making, the inherent nature of the affective space deals with two concepts that have been difficult to handle in other contemporary affective agent architectures: emotion blending and emotion decay. These are discussed in the following sections.
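A sketch of this selection rule is shown below, combining the mood of Expression 8 with a nearest-preferred-emotion test. The chapter does not specify how a candidate event's predicted emotional effect is combined with the current mood, so the simple averaging used here is an assumption for illustration, as are all the names.

import java.util.Arrays;
import java.util.List;

// Sketch of affective decision making: compute the mood (Expression 8)
// and pick the candidate event whose predicted effect lands nearest a
// preferred pure emotion point.
public class AffectiveChooser {

    // Expression 8: Omega_EMAI = sum over events of w_E * Omega_E.
    static double[] mood(List<double[]> eventEmotions, double[] weights) {
        double[] m = new double[6];
        for (int i = 0; i < eventEmotions.size(); i++) {
            for (int d = 0; d < 6; d++) {
                m[d] += weights[i] * eventEmotions.get(i)[d];
            }
        }
        return m;
    }

    // Choose the event whose blend with the current mood is closest to the
    // preferred emotion (the averaging here is an illustrative assumption).
    static int choose(double[] mood, List<double[]> candidateEffects,
                      double[] preferredEmotion) {
        int best = -1;
        double bestDistance = Double.POSITIVE_INFINITY;
        for (int i = 0; i < candidateEffects.size(); i++) {
            double dist = 0.0;
            for (int d = 0; d < 6; d++) {
                double predicted = (mood[d] + candidateEffects.get(i)[d]) / 2.0;
                double diff = predicted - preferredEmotion[d];
                dist += diff * diff;
            }
            if (dist < bestDistance) {
                bestDistance = dist;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] happy = {-146, -33, -46, 15, 9, -21};
        double[] sad = {87, -14, 0, -21, -36, 151};
        double[] current = mood(Arrays.asList(happy, sad), new double[] {0.5, 0.5});
        // Prints 0: the event predicted to move the mood toward happiness.
        System.out.println(choose(current, Arrays.asList(happy, sad), happy));
    }
}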
Emotion Blending
Emotion blending is a concept devised by psychologists such as Plutchik (Lefton, 1994), in which it is stated that there exist several basic emotions and that more complex emotions can be created by blending different combinations of the basic emotions. The problem for any emotion synthesis in affective computing is determining which emotions, when blended together, produce other emotions. In affective agent architectures such as Em (Reilly, 1996) and PETEEI (El-Nasr, 1998), emotions are represented by individual variables or gauges within the architecture. The problem occurs when the gauges need to be coordinated. This requires complex programming within the agent to describe the relationships that exist between the individual variables.

The EMAI architecture eliminates the need for defining the complex relationships and programming coordination between emotion state variables because of the nature of the affective space. The relationships that exist among the pure emotions are defined by their separating distances. Rather than having individual variables that represent the emotions integrated into the architecture and using these same variables to record the strength of the emotion of the agent, the EMAI architecture separates the two. The 15 pure emotions are fixed points within the affective space. They represent the relationship between the pure emotion and the appraisal dimensions but do not record the agent's mood or the emotional state of any assessed item. The emotional state of an item or the agent is separate from each pure emotion. Because of this representation, theoretically, an infinite number of pure emotions could be added to the architecture with little effort. Furthermore, as the emotional state point for an item (or the agent) is independent of the pure emotion points, the value for the emotional state point can freely change and not affect the pure emotion points.

Although a discrete point within the affective space represents each pure emotion, the pure emotions exist within an affective continuum. Unlike other affective agent architectures that model pure emotions as individual and discrete variables with complex rules for blending their values, the affective space naturally defines blended emotions, as they are represented by the space that exists between the pure emotion points. While the emotional state for an item may fall close to a pure emotion within the affective space, most often it will not fall exactly at the same spot as a pure emotion. Table 3 displays the results of blending equal parts of two of the 15 pure emotion points.

Table 3: Resultant emotions from blending pure emotions
Emotional State Decay
The same characteristics of the affective space that allow for a natural emotion blending process also address the issue of emotional state decay. Emotional decay is the mechanism by which an agent that is experiencing an emotional state will, over time, have the strength of that emotional state reduced. This ensures that the agent does not stay in the same emotional mood indefinitely. Contemporary affective agent models [PETEEI (El-Nasr, 1998), Em (Reilly, 1996), Silas T. Dog (Blumberg, 1997), Yuppy (Velasquez, 1999)] address the issue of emotional state decay using decay rate functions. Because these models represent emotions and their associated strengths as individual variables, emotional decay must be dealt with manually. This means coordinating the rate of decay of an emotion variable with other emotion variables. For example, an agent that is experiencing happiness will need to have the value of the happiness variable decreased as the value of the variable representing sadness is increased. Otherwise, the agent will be experiencing two opposing emotions at the same time.

The closest that an emotional state point can get to a pure emotion is to be positioned at the exact same location in the affective space as the pure emotion.
If this is the case, the agent is said to be experiencing a pure emotion. When the emotional state of the item being assessed changes, the position of the emotional state point in the affective space also changes. As the emotional state point moves away from one pure emotion in the affective space, it moves toward another. Therefore, the emotional state point for the assessed item cannot represent two opposing emotional states at the same time. The affective space, by its nature, generates naturally decaying emotions without any extra programming or coordination effort. When the emotional state point for the item is at its most pleasant, the strength of the emotion happiness is the greatest, because the point for the pure emotion of happiness is closest to the pleasant end of the pleasantness axis. As the emotional state point moves toward, for example, sadness, the strength of the happiness emotion with respect to the item's emotional state point is reduced.
FUTURE TRENDS FOR AFFECTIVE AGENTS
A number of application areas for affective agents have been developed in recent years. The majority of these areas focus on the computer's ability to predict the emotional state of the user. Several projects undertaken by the MIT Media Laboratory focus on wearable computers and affective tutoring systems that rely on the ability to determine the user's emotional state from a series of physiological changes. The research into the EMAI agent has taken a different perspective. Rather than predicting the emotions of the user, the computer generates emotions that affect its own behavior. This type of behavior would be beneficial in the creation of an intelligent tutoring system that not only predicts the user's emotional state but also adapts to it, providing an emotional and personalized interface that appears sympathetic to the user.

Another obvious application area for the EMAI architecture is in the development of computer game and interactive fiction characters. Stern, a leading computer game developer, comments in Stern (1999) that what computer gamers are really seeking is affective AI and, in particular, the creation of believable artificial life, or A-life. The EMAI architecture takes a partial step toward affective AI in computer characters by modeling an emotion-rich decision-making mechanism that gives a computer character a perceived personality and the ability to use emotions to adapt to its environment.

Another area in which the EMAI architecture could be applied is e-commerce. E-commerce systems increasingly recognize the importance of giving additional value to users by providing personalized transactional experiences (Meech & Marsh, 2000). Two specific factors apparent in the impact of personalized transactions in e-commerce are trust and personality, and how these may be integrated into Web-based interfaces. The principle behind a lifelike intelligent agent acting as a host at a Web site is the
same as for the creation of computer game characters. The aim is to create a suspension of disbelief for the user. The qualities necessary for this are personality, emotions, social relationships, and the illusion of life (Mateas, 1997). With the ability to create an artificial personality with emotions, the EMAI architecture could be used to develop a personalized face for an e-commerce business, providing a face for the company and a personal shopping guide.

Another area worthy of research is that of the appraisal dimensions selected for constructing the affective space. The dimensions chosen for EMAI's initial affective space belonged to an existing appraisal model by Smith and Ellsworth (1985). As the objective of the research herein was not to evaluate the appraisal dimensions but to evaluate the use of such a model in an intelligent agent architecture, different appraisal dimensions were not evaluated. It has become evident during this research that while the appraisal dimensions used for EMAI's affective space have been successful in representing emotional states with respect to events, there are many other dimensions that could be equally effective. Furthermore, things other than events can evoke emotional states, and in these cases, an entirely different set of appraisal dimensions may be more appropriate.

Besides the six appraisal dimensions included in the Smith and Ellsworth (1985) model used for the definition of EMAI's affective space, psychologists have suggested many others. These include legitimacy (Roseman et al., 1990); compatibility, novelty, and perceived obstacle (Scherer, 1982); and danger or threat (Smith & Lazarus, 1990). Because the Smith and Ellsworth model was developed through experimental procedures and evaluated as a successful theory for relating its chosen appraisal dimensions to 15 separate emotional states, it formed an excellent basis for the affective space of the EMAI architecture.

As the objective of the engineering of the EMAI architecture was to develop an emotionally intelligent artificial being and not to expand the psychological theory of emotion, further research into other appropriate appraisal dimensions and other emotional states has not been explored. This is not to suggest that this type of research would not be beneficial to the advancement of affective computing. In fact, the appraisals in the EMAI architecture are restricted to the emotional states evoked by events. However, emotional states can be evoked by a number of other cognitive appraisals. Music, for example, can evoke a variety of emotional states in the listener (Juslin & Madison, 1999). Tempo, timing, articulation, key, tone, and instrument choice influence the emotions experienced by the listener. Color is
another concept that can also elicit an emotional response from the viewer (Jacobson & Bender, 1996). Emotions experienced as the result of viewing a specific color differ culturally. For example, black is associated with funerals and mourning in the Western world, while white and purple are associated with funerals and mourning in the Eastern world. However, the research clearly shows that differing colors can evoke different emotional states. This suggests that emotional experiences are not restricted to the cognitive appraisals of Smith and Ellsworth's model. Exactly which appraisal dimensions are appropriate for assessing concepts other than events is ill defined in the current literature.

It could be the case that in some domains, cognitive appraisal models of emotion theory are inadequate for the full realization of a reasonable, emotionally intelligent artificial being. For example, besides the Western philosophy of emotion, there are also a number of Eastern views. The traditional Chinese medicine of acupuncture defines 12 main Qi channels that run through the body (Rogers, 2000). These channels relate to major organs and intestinal parts of the body, such as the stomach, lungs, and large intestine. The lungs are said to relate to the emotion of sadness, the stomach corresponds to pensiveness, and the heart to happiness. In the Chinese martial art of Tai Chi, emotions relate to the universal five elements: wood, fire, earth, water, and metal. Each element corresponds to a negative and a positive emotional state. For wood, the positive emotion is humor, and the negative emotion is anger. Exactly how these views on emotional theory could be integrated into AI is unclear, but it is worth further investigation. It may be that some Western philosophies are suited to a particular set of application areas and Eastern philosophies to others.

As can be seen from the aforementioned application areas, the EMAI architecture is ideal for integration into systems that require an interface with personality and emotional capabilities. Its emotional abilities allow it to create a personalized and nonthreatening interface for computer systems and Web interfaces. Its emotion development mechanisms allow it not only to meet the current needs of the user but also to perceive future needs.
CONCLUSION
Although emotions were originally thought to impede rational thinking, new schools of thought have arisen that consider emotions a vital part of rational intelligent behavior, necessary not only for survival but also for intellectual success.
In recent years, important emphasis has been placed on the concept of emotional intelligence. This has led many researchers in the area of artificial intelligence to examine the contribution that theories of emotion could make to the development of intelligent agents. Thus, the domain of affective computing was born. The contributions presented in this chapter endeavor to further enhance the domain of affective computing by providing a different perspective on the way in which emotions are modeled in an artificial agent. The continuous affective space in the EMAI agent replaces the discrete variables used to represent individual emotions in EMAI's predecessors. In short, the EMAI architecture dispenses with the traditional view that emotional states are to be modeled as individual variables, and it integrates a new model of a continuous multidimensional affective space. The inherent nature of this unique representation addresses a number of enigmas associated with the traditional depiction: the significant programming effort and complex relational algorithms previously needed to update the valences of emotional states are no longer necessary. With the use of the affective space, this chapter has demonstrated the ability to scale up the number of emotional states of an agent with considerable ease. Furthermore, emotional decay and emotion blending are dealt with in an efficient and effective manner.

What the future holds for the field of affective computing is unclear. The complexities of human emotions may be too extreme to include them holistically within an artificial intelligence at this time. Only those segments of emotional behavior that are advantageous to the goals of an artificial being should be considered. Further categorization of emotional behaviors is necessary to identify domains in which particular aspects of emotion are of an advantage. As more and more effort is put into small projects that contribute to the understanding of emotion, the field of affective computing comes closer and closer to a model for general, emotional, artificial intelligence.
REFERENCES
Baillie, P. (2002). An agent with a passion for decision making. In Proceedings of Agents in Simulation 2002. Passau, Germany: University of Passau.
Baillie, P., Toleman, M. & Lukose, D. (2000). Emotional intelligence for intuitive agents. In Proceedings of AISAT 2000 (pp. 134-139). Hobart, Australia: University of Tasmania.
Blumberg, B.M., Todd, P.M. & Maes, P. (1996). No bad dogs: Ethological lessons for learning in Hamsterdam. In P. Maes, M.J. Mataric, J.-A. Meyer, J. Pollack, & S.W. Wilson (Eds.), From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior (pp. 295-304). Cambridge, MA: MIT Press/Bradford Books.
El-Nasr, M.S. (1998). Modeling Emotion Dynamics in Intelligent Agents. Master of Science Dissertation, American University in Cairo.
Goleman, D. (1995). Emotional Intelligence. New York: Bantam Books.
Grand, S., Cliff, D. & Malhotra, A. (1997). Creatures: Artificial life autonomous software agents for home entertainment. In Proceedings of the 1st International Conference on Autonomous Agents (pp. 22-29). New York: ACM Press.
Jacobson, N. & Bender, W. (1996). Color as a determined communicator. IBM Systems Journal, 35(3), 526-538.
Juslin, P. & Madison, G. (1999). The role of timing patterns in recognition of emotional expression from musical performance. Music Perception, 17, 197-221.
Mateas, M. (1997, June). An Oz-centric review of interactive drama and believable agents (Technical Report CMU-CS-97-156). Pittsburgh, PA: School of Computer Science, Carnegie Mellon University.
Meech, J. & Marsh, S. (2000). Social factors in e-commerce personalization. In Proceedings of the CHI 2000 Workshop on Designing Interactive Systems for 1-to-1 E-commerce. The Hague, NRC 43664.
Picard, R. (1997). Affective Computing. London: The MIT Press.
Reilly, W.S.N. (1996). Believable Social and Emotional Agents. PhD Dissertation, Carnegie Mellon University.
Rogers, P.A.M. (2000). Acupuncture and homeostasis of body and adaptive systems. The Web Journal of Acupuncture. Available at http://users.med.auth.gr/karanik/english/hels/helsfram.html.
Roseman, I.J., Jose, P.E. & Spindel, M.S. (1990). Appraisals of emotion-eliciting events: Testing a theory of discrete emotions. Journal of Personality and Social Psychology, 59(5), 899-915.
Scherer, K.R. (1982). Emotion as process: Function, origin and regulation. Journal of Experimental Psychology, 29, 497-510.
Smith, C.A. & Ellsworth, P.C. (1985). Patterns of cognitive appraisal in emotion. Journal of Personality and Social Psychology, 48(4), 813-838.
Smith, C.A. & Lazarus, R.S. (1990). Emotion and adaptation. In L.A. Pervin (Ed.), Handbook of Personality: Theory and Research (pp. 609-637). New York: Guilford.
Stern, A. (1999). AI beyond computer games. In Proceedings of the 1999 AAAI Spring Symposium on Artificial Intelligence and Computer Games (pp. 77-80). Menlo Park, CA: AAAI Press.
Velasquez, J.D. (1999). From affect programs to higher cognitive emotions: An emotion-based control approach. In Proceedings of the Workshop on Emotion-Based Agent Architectures (pp. 10-15). Seattle, WA.
Chapter VI
Evolutionary Computation as a Paradigm for Engineering Emergent Behavior in Multi-Agent Systems

Robert E. Smith, The University of the West of England, UK
Claudio Bonacina, The University of the West of England, UK
ABSTRACT
In the multi-agent system (MAS) context, the theories and practices of evolutionary computation (EC) have new implications, particularly with regard to engineering and shaping system behaviors. Thus, it is important that we consider the embodiment of EC in "real" agents, that is, agents that involve the real restrictions of time and space within MASs. In this chapter, we address these issues in three ways. First, we relate the foundations of EC theory to MAS and consider how general interactions
among agents fit within this theory. Second, we introduce a platform independent agent system to assure that our EC methods work within the generic, but realistic, constraints of agents. Finally, we introduce an agent-based system of EC objects. Concluding sections discuss implications and future directions.
INTRODUCTION
With the advance of computational power and communications speed, we now live in a computational world where a large number of software agents may be acting on behalf of even the most casual user: searching for music, comparing pension schemes, purchasing goods and services, identifying chat partners, etc. Moreover, these agents may be collaborating with those of other users, while spawning and managing agents of their own. In more formal settings, a business, academic, or government user may simultaneously employ many software agents to manage workflow, trade goods or information, collaboratively solve problems, etc. In the future, even relatively simple household appliances may play a role in this churning system of interacting, computational agents.

The behavior of a multi-agent system (MAS) is a result of the repeated (usually asynchronous) action and interaction of the agents. Understanding how to engineer adaptation and self-organization is thus central to the application of agents in the computational world of the future. Desirable self-organization is observed in many biological, social, and physical systems. However, fostering these conditions in artificial systems proves to be difficult and offers the potential for undesirable behaviors to emerge. Thus, it is vital to be able to understand and shape emergent behaviors in agent-based systems. Current mathematical and empirical tools give only partial insight into emergent behavior in large, agent-based societies. Evolutionary Computation (EC) (Back et al., 1997) provides a paradigm for addressing this need. Moreover, EC techniques are inherently based on a distributed paradigm (natural evolution), making them particularly well suited for adaptation in agents.

In the MAS context, EC theories and practices have new implications. Agents that interact according to these theories are no longer locked inside the laboratory conditions imposed by EC researchers and users. Thus, it is important that we consider the embodiment (in a sense similar to that in Brooks et al., 1998) of EC in "real" agents, that is, agents that involve the real restrictions of time and space within a MAS. We address this issue in two ways. First, we have developed a platform independent agent system, to assure that we work within the generic, but
realistic, constraints of agents. Second, we have developed an agent-based system of EC objects. The prime thrust of our research with these tools is to facilitate understanding of EC within agents and understanding of more general agent interactions in the light of EC theories. The following sections describe the foundations of EC theory and practice and relate these foundations to MAS, both in terms of applying EC to MAS and in terms of understanding general MASs as evolving systems, through EC theory. The platform independent agent system is presented, along with the generic software framework for EC in MAS. Final sections include implications and future directions.
BACKGROUND
The key qualities exhibited by software agents are autonomy, reactivity, and proactivity (Franklin & Graesser, 1997; Wooldridge & Jennings, 1996). Moreover, agents have the possibility of mobility in complex network environments, putting software functions near the computational resources they require. Agents can also explicitly exploit the availability of distributed, parallel computation facilities. However, these qualities ultimately depend on the potential for agent adaptation. For instance, if an agent is to operate with true autonomy in a complex, dynamic environment, it may have to react to a spectrum of circumstances that cannot be foreseen by the agent's designer. Autonomous agents may need to explore alternative reactive and proactive strategies, evaluate their performance online, and formulate new, innovative strategies without user intervention.

Areas where agents could benefit from adaptation are addressed by active research in machine learning (e.g., classification of unforeseen inputs, strategy acquisition through reinforcement learning, etc.). However, many machine learning techniques are focused on centralized processing of databases to formulate models or strategies. In contrast, EC techniques are inherently based on a distributed paradigm (natural evolution), making them particularly well suited for online, ongoing adaptation in agents. Note that in the following description, we will intentionally avoid providing the typical description of EC, to show that the connection to agents emerges from less method-specific rationale. Also note that (while explicitly avoiding entering the endless debate on what constitutes an agent), we refer to a fine-grained system of many agents and assume no particularly detailed "intelligence" or "cognitive ability" in agents.
Instead, we merely assume that MASs are filled with self-interested agents, which can change, adapt, and learn. Given this, the agents can be expected to interact in ever-changing ways that range from the competitive to the cooperative. One should ask: what models of system behavior have considered systems comprised of such agents? Clearly, the theories that have developed in the field of evolutionary computation qualify. This section will try to concisely restate some of the basic theories of EC, using an agent perspective, to indicate a direction along which EC theories can have implications for general agents. Many of the developments follow those in Goldberg (1989). Although these theories have been much debated in the EC literature (Radcliffe, 1997), we believe their conceptual conclusions provide a valuable perspective on self-organizing evolvable systems of general agents.

Let us consider a set of agents, each of which selfishly seeks to exploit some limited set of resources. Imagine that each agent's behavior is dictated by a set of (discrete) features, but assume an agent has no particular intelligence about how those features relate to its own success. Given this situation, a method of biased perturbations seems an obvious avenue for progress (evolution) for the individual agent. However, this model fails to consider how agents may benefit from one another. Let us assume that agents can exchange information in the form of messages (possibly in a market of information). How might a selfish agent exploit such information? Clearly, a viable strategy is for an agent to attempt to reason about the effectiveness and features of other agents and use any information obtained to bias its perturbation strategy toward more effective regions of the feature space.

Given this outline of an agent's localized perspective, let us consider the resulting global effects. Using notation similar to that in Goldberg (1989), let the expected proportion of existing agents containing some subset of features H at some time t be p(H,t). The expected proportion at time t + 1 is given by:

p(H, t + 1) = Ps(H) [1 − Pd(H)]

where Ps(H) is the probability of any individual agent selecting feature H, and Pd(H) is the probability of the feature being disrupted by the selection of other features or other effects of agents perturbing their feature sets. Note that each of these probabilities may be a function of the proportions of features. This simple expression makes no assumptions that can be said to be "genetic." Our first assumption that vaguely relates to biological analogy is to cast the formula above in terms of a reproductive plan (Holland, 1975). That is, we will assume that the probability of selecting a feature is proportional (or
just directly related) to the proportion of agents that contain that feature. In other words, the more agents that persist in having a feature, the greater the likelihood that agents will adopt or retain that feature. In its simplest (proportional) form, this gives:

Ps(H) = p(H, t) R(H)

where R(H) is a reproductive rate related to the feature's perceived utility across the population of agents. Note that this simple "proportional selection" form is often used in EC, but any increasing function of proportion would yield conceptual conclusions similar to those presented here. Substituting yields:

p(H, t + 1) = p(H, t) R(H) [1 − Pd(H)]

which is a proportional form of Holland's schema theorem (Holland, 1975). This formula does not depend explicitly on the form of the internal workings of the agents (i.e., the method of encoding features, or the operators within the agents). It only depends on the assumption of a reproductive plan.

Why a reproductive plan? This "bandwagoning" onto apparently useful features in other agents is certainly not the only rational approach from an agent perspective. Agents may find it useful to run counter to what other agents do. However, a reproductive plan is certainly one reasonable strategy, and worthy of examination. Moreover, other plans not explicitly reproductive in character, but that use perceived utility of features in other agents to bias feature selection, may yield similar mathematical forms.

Assume that the agent's reasoning about desirable features is generally correct for some desirable feature H, and that R(H)[1 − Pd(H)] remains greater than one for that feature. Ignoring constraints on proportions, this dictates an exponential increase in p(H,t) with respect to t. Is this form of increase desirable? Holland's k-armed bandit argument (Holland, 1975) shows that, regardless of the distributions of utilities of H and competing (mutually exclusive) features, a near-optimal rate of increase should be of exponential form with respect to time. A reproductive plan, like that stated above, yields this exponential form for certain features. This is an emergent effect at the system level, which only involves interactions at the agent level. The features that show this near-optimal, exponential effect are those with low rates of disruption,
Pd(H), relative to the magnitude of R(H). In EC, such features are often referred to as building blocks.

At this point, note that the previous discussion is not inherently genetic and could as easily apply to memes as genes (Blackmore & Dawkins, 1999; Dawkins, 1990). Memes are replicators and are defined as units of cultural information, such as cultural practices or ideas, which are transmitted via communication and imitation between one entity and another. Clearly, memes are subjected to a reproductive plan, in the sense of Holland (1975). The primary difference between genes and memes is that we have an understanding of the underlying encoding of genes, but we have no such understanding (in general) of memes. Otherwise, the two entities behave in a conceptually identical fashion, that is, selfishly trying to maximize their reproductive success. The remaining technical discussion in this section concentrates on the assumption of some atomic encoding unit of an agent's features. Although we may not understand what this unit is for agent memes, much of the reasoning will still ultimately hold. Moreover, the EC offshoot field of memetic algorithms (Krasnogor & Smith, 2000) can provide some insight in light of the perspective presented here. Because all interactions between agents could be categorized as potential transmissions of genes or memes, we believe that the EC-based perspective here can provide insight into general interactions between agents that may not be specifically genetic in character.

All building blocks are treated in the emergent, yet near-optimal fashion indicated above, under a reproductive plan. Therefore, we should consider how many of these building blocks exist in a population of individuals. However, to maintain a general agent focus, we will do this without specific reference to EC details (e.g., genetic encoding). We only assume that there is a set of (discrete) atomic features, from which all other features are constructed. These atomic features are (roughly) analogous to genes in biological systems (or memes), but we are not assuming any particular, underlying encoding. With these assumptions, and the assumption of a population size that ensures moderate-sized building blocks have at least one copy in the population (Smith et al., 2000), one can show that the overall number of building blocks (Nbb) in a population of size N has the following lower bound (where p is the probability of any particular atomic feature value):

Nbb ≥ (1/2) N^(2 log2(1/p) + 1)
For binary atomic features, where p = 0.5, this gives a form of the N 3 lower bound often associated with genetic algorithms (Holland, 1975). However, regardless of the assumed form of atomic features, the general estimate shows that (under certain restrictive assumptions) a large number of building blocks are implicitly treated in the near optimal fashion indicated by the k-armed bandit argument, as an emergent phenomenon of reproductive plans. This key emergent effect of reproductive plans is referred to in EC as implicit parallelism. The discussion above only alludes to the simplest aspects of evolutionary models. More modern theories consider how features come into balance with respect to limited resources consumed, as another emergent consequence of selfish agents under reproductive plans. Some EC models have explicitly exploited this emergent “resource balancing” effect, as in multiobjective optimization problems (Deb & Goldberg, 1989). Such effects have also been observed as emergent artifacts of EC system dynamics (Horn, Goldberg, & Deb, 1994). EC models seem an appropriate way to gain insight on the propagation of features through a system of agents, and possibly to shape that propagation. Moreover, EC processes implicitly exploit parallelism, while remaining trivial to explicitly parallelize (as in an autonomous agent context). This is a strong argument for the advantages of MASs in general. However, the effects of EC in laboratory environments are not necessarily the same as the effects implied above for generalized agent systems. Thus, we must begin considering the EC analogy within these standards and within real agents based on these standards.
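As a quick numerical illustration of the growth behavior described above (an arbitrary parameterization, not data from any agent system), iterating the proportional form with R(H)(1 − Pd(H)) > 1 shows the near-exponential uptake of a building block:

// Iterate p(H, t+1) = p(H, t) * R(H) * (1 - Pd(H)), capping at 1.0
// since p is a proportion; the parameter values are illustrative only.
public class SchemaGrowth {
    public static void main(String[] args) {
        double p = 0.01;   // initial proportion of agents carrying feature H
        double r = 1.5;    // reproductive rate R(H)
        double pd = 0.1;   // disruption probability Pd(H)
        for (int t = 0; t <= 20; t++) {
            System.out.printf("t=%2d  p(H,t)=%.4f%n", t, p);
            p = Math.min(1.0, p * r * (1.0 - pd));
        }
    }
}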
EC IN MAS
A clear example of a system of agents, with evolutionary computation as an emergent effect, is Holland's ECHO system (Holland, 1992). Another notable example is Tierra (Ray, 1990), which involves the coevolution of pieces of computer code. Other schemes suggest the idea of evolving agents (Adami, 1998; Balabanovic, 1997; Menczer & Monge, 1999; Moukas, 1996). However, these systems are not designed within general software agent frameworks. With the emergence of standards-based frameworks for general agents, it is becoming possible to consider EC as a form of interaction between such general agents. We suggest that by examining EC-specific interactions in such frameworks, one can gain insight into general agent interactions in the
appropriately embodied context. The following sections introduce a framework for examining embodied EC effects in general, standards-based agents.
A PLATFORM-INDEPENDENT AGENT SYSTEM
To provide the appropriate context for general agent interactions, we first introduce our platform independent agent system, PIA. Because it is our desire to better understand EC when it is thoroughly embodied in real agents, and to also better understand general agent interactions through an EC perspective, we feel it is necessary to use a system of "real" agents, but one that is independent of the specifics of any given agent development platform.

Different MAS platforms provide a large number of services, and different research communities are interested in different subsets of these services. We focused our attention on the basic characteristics that a MAS should have, and with PIA, we provide a simpler interface with respect to the services provided by available platforms. Moreover, the PIA framework provides an easy way of developing and engineering research-grade applications using a prototyping approach, and it facilitates the migration of applications from one platform to another.

In order to achieve platform independence, we modeled the PIA architecture using appropriate and well-known design patterns (Gamma, 1995). We define PIA services that address the basic characteristics of MAS so that they can be mounted on the largest possible number of platforms.
Basic MAS Characteristics in PIA
In our modeling process, we identified the following actors:
• The Agent should be able to register itself in the system, obtaining a unique agent ID, which is usable in a distributed environment; deregister itself from the system; send messages; and receive messages.
• The Message should provide these services: getting the sender and receiver IDs; getting the message's send and receive times; setting the message's receive time; getting the message's type; and getting the message's body.
• The Agent Factory should allow one to create an agent or destroy an agent. The agent factory is a concept present in all the platforms available to develop MAS. For example, in Aglets (the free development agent platform provided by IBM), it is possible to find several analogies between the Aglet Context and our agent factory concept.
• The Directory Service offers White and Yellow Pages and provides an interface for registering and deregistering agents to and from categories; getting all agents registered in a category; getting all agents present in the system; getting all categories present in the system; testing if an agent and a category are present in the system; and testing if an agent belongs to a category.
Obviously, it will often be necessary to define some rules that restrict agent creation and destruction. Such rules are application dependent and are grouped into PIA's rule service.
• The Rule Service contains all the interaction and permission rules needed in the framework and in the applications using it. The generic framework contains just the following basic rules: a permission rule to deregister an agent and a permission rule to destroy an agent. The interface provided by the rule service is highly application dependent and will be specialized in particular applications.
Figure 1: Bridge and abstract factory patterns
• The Message Delivery Service provides agents with an asynchronous message manager. It can be compared with an Internet Mail Provider that manages messages regardless of whether the user is connected or not. The services are forwarding a message and taking in a message.
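Read as code, the actors above suggest a set of interfaces along the following lines. This is a paraphrase of the service descriptions; the method names are assumptions, not PIA's actual signatures.

import java.util.Date;
import java.util.List;

// Hypothetical Java renderings of the PIA actors described above.
interface PiaAgent {
    String register();                 // obtain a unique, distributable agent ID
    void deregister();
    void send(PiaMessage message);
    PiaMessage receive();
}

interface PiaMessage {
    String getSenderId();
    List<String> getReceiverIds();
    Date getSendTime();
    Date getReceiveTime();
    void setReceiveTime(Date time);
    String getType();
    Object getBody();
}

interface DirectoryService {
    void register(String agentId, String category);
    void deregister(String agentId, String category);
    List<String> getAgents(String category);
    List<String> getAllAgents();
    List<String> getAllCategories();
    boolean contains(String agentId);
    boolean belongsTo(String agentId, String category);
}

interface MessageDeliveryService {
    void forward(PiaMessage message);  // asynchronous, mailbox-style delivery
    void takeIn(PiaMessage message);
}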
PIA does not deal with the features of the low levels of MAS architecture (e.g., the transport layers or scheduling processes). These functionalities are necessary; hence, PIA exploits the low-level services provided by an underlying platform, through a series of design patterns that connect the basic PIA actors to the platform. These are described in the following section.
Patterns in the PIA Framework
We modeled the PIA architecture using the well-known Bridge Pattern (Gamma, 1995), which permits us to supply implementation-independent services. All the bridged classes implement the interface Bridged. In Figure 1, we give an example of the agent factory service as a bridged class. PIA applies the same pattern to the concepts of agent, message, and directory services. Not all the services provided by the framework are implementation dependent; hence, not all of them have been bridged (e.g., the rule service is not bridged).

The representation of the Abstract Factory Pattern appears at the bottom part of Figure 1. This is the only component of PIA with which the client application directly interacts. There must be a concrete factory class for each different platform PIA is able to exploit. To add a supported platform to PIA, the user adds a new subclass of AbstractFactory, implementing all the services it provides. AbstractFactory supplies a class method for the instantiation of the appropriate concrete factory. The client application calls this method (indicating which developing platform the user has decided to exploit) and retrieves the reference to the corresponding object. The AbstractFactory class supplies all the necessary methods to access all the framework services (e.g., the agent factory's services, directory services, and the creation of messages). A concrete factory provides a platform-related implementation of these services.

The kind of agent to instantiate is defined by passing the class of the agent as a parameter to the agent creation method of the agent factory. Indeed, because the user can specialize the agent abstraction, PIA provides the possibility of creating an agent as an instance of any subclass of the Agent class. For instance, here is an example of a method call to create a HelloAgent instance:
HelloAgent anAgent = (HelloAgent) theFactory.getAgentFactory().createAgent(Class.forName("HelloAgent"));

There is a third pattern widely used in PIA: the Visitor Pattern. It represents an operation to be performed on the elements of an object structure; visitors let one define new operations without changing the classes of the elements on which they operate. Subclassing and the Visitor Pattern are the two main ways to extend the framework's services with application-dependent features.
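To make the bridge arrangement concrete, a minimal sketch in the spirit of Figure 1 follows. The class names echo the figure, but the bodies are illustrative, not PIA source code.

// Sketch of the Bridge pattern as used for the agent factory: the
// abstraction delegates to a platform-specific implementor chosen at run time.
interface AgentFactoryImplementor {
    Object createAgentImp(Class<?> agentClass) throws Exception;
}

class AgletsAgentFactoryImp implements AgentFactoryImplementor {
    public Object createAgentImp(Class<?> agentClass) throws Exception {
        // A real implementor would call the underlying platform's API here.
        return agentClass.getDeclaredConstructor().newInstance();
    }
}

class AgentFactory {
    private AgentFactoryImplementor imp;   // the bridge to the platform

    public void setImplementor(AgentFactoryImplementor implementor) {
        this.imp = implementor;
    }

    public Object createAgent(Class<?> agentClass) throws Exception {
        return imp.createAgentImp(agentClass);  // delegate to the platform
    }
}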
EC WITHIN A FRAMEWORK OF GENERAL AGENTS
To implement an EC system, a programmer typically writes computer code that creates a population of software entities and manipulates those entities in a prespecified way. This is clearly unlike the intended design principles of MASs for the future.

Figure 2: Structure of a typical, centralized genetic algorithm (GA), a basic form of EC
Figure 3: Structure of the EC embodied within a general MAS
In the authors' view of an MAS/EC system, one must turn the typical EC software design on its head. In common EC software, a centralized program stores individuals as data structures and imposes the genetic operations that generate successive populations (Figure 2). In natural genetics, the equivalent of the centralized GA does not exist. Instead, the evolutionary effects emerge from individual agents. This makes designing an agent-based EC system a matter of straightforward analogy (see Figure 3). Note that there are two perspectives one can take on this figure. One is to see it as a literal design of EC objects and methods for a general agent environment. Another is as a viewpoint for examining general interactions of agents. By considering the former, one may be able to gain insight into the latter. However, to consider the possible complexities of general-purpose agents, one must embody those agents in a realistic context.

We now introduce a framework like that pictured in Figure 3 (formerly called Egglets in Smith & Taylor, 1998, but now called EvoAgents). The EvoAgent framework is based on PIA; by using PIA, the EvoAgent framework can be used on any MAS development platform. It is important to be clear about the purposes of this framework. It will serve as a platform for examining and exploiting explicit EC interactions of
autonomous, standards-based agents. However, it is hoped these examinations will allow for broader understanding of an EC model of general agent interactions, where EC data structures and operations may not be explicit.

The most fundamental aspect of EvoAgents is the addition of EC packages. Some of these address basic EC features; for instance, two interfaces that can be implemented for general objects are Randomizable and Mutatable. Simply stated, the system can be assured that any object that implements Randomizable has a randomize method, which generates a random instance of the object, and that any object that implements the Mutatable interface has a mutate method, which randomly perturbs the object in some fashion. The two interfaces allow an agent to pursue a strategy of biased perturbation of data objects that implement them. To account for the exchange of pseudo-genetic feature sets, there is an interface Sperm (which describes genetic feature sets sent as messages between agents). To account for biased combination of Sperm with an agent's own features, there is an interface Egg. These interfaces specify the behaviors suggested by their names but are also arbitrary data objects that can be structured in any fashion. However, note that the individual agent who creates an object that implements these interfaces determines the specific details of how an Egg operates, and what sort of Sperm data organization is understandable and processable. An Egg need not simply recombine two feature sets; it may process arbitrary numbers of feature sets in arbitrary ways. Another key element of the framework is a Plumage interface, which agents use to advertise themselves for "mating." Note that how an agent evaluates and selects mates (through evaluation of Plumage objects) is entirely an internal function of that agent. Moreover, the model allows for different agents to have entirely different mating and recombination strategies.

A complete hierarchy of EC agent classes and interfaces has been developed, and part of it is shown in Figure 4. A BehaviouralAgent is able to run a specific strategy. A GAgent can be genetically initialized, because it implements the GeneticInitAgent interface. The behavior of a GAgent may be driven by its genetic material, but the agent cannot mate. In order to reproduce itself, an agent must extend the class BAgent, which implements the interface BreederAgent. This part of the class hierarchy defines all the services necessary to the agent for breeding. Note that the interfaces in this system could be applied to many different types of data objects, ranging from the simple strings of bits typically used in GAs, to tree structures, or more complex data objects.
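The chapter names these interfaces but does not print their signatures, so the following sketch is a reconstruction; every method signature here is an assumption made for illustration.

// Hypothetical renderings of the EvoAgents EC interfaces described above.
interface Randomizable {
    void randomize();                  // generate a random instance of the object
}

interface Mutatable {
    void mutate();                     // randomly perturb the object in some fashion
}

interface Sperm {                      // a genetic feature set sent as a message
    Object getFeatures();
}

interface Egg {                        // agent-specific, biased recombination
    Object combine(Sperm... mates);    // may process any number of feature sets
}

interface Plumage {                    // an agent's advertisement for "mating"
    Object getAdvertisedTraits();
}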
Figure 4: The complete class and interface hierarchy used to develop EC-related agents [class diagram showing Phenotype, Agent, GeneticInitAgent, BehaviouralAgent, BreederAgent, GAgent, and BAgent]
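In code terms, the hierarchy of Figure 4 might be sketched as follows, reusing the interfaces sketched earlier. The class and interface names come from the text, but the method bodies and signatures are assumptions added for illustration.

    // Hypothetical Java sketch of the agent hierarchy in Figure 4.
    class Agent { /* base, platform-level agent */ }

    class BehaviouralAgent extends Agent {
        void runStrategy() { /* execute a specific behavioral strategy */ }
    }

    interface GeneticInitAgent {
        void initialize(Object genome);    // genetic initialization (assumed signature)
    }

    // A GAgent's behavior may be driven by its genetic material, but it cannot mate.
    class GAgent extends BehaviouralAgent implements GeneticInitAgent {
        protected Object genome;
        public void initialize(Object genome) { this.genome = genome; }
    }

    interface BreederAgent {
        Plumage advertise();               // publish a mating advertisement
        void receive(Sperm s);             // accept a counterpart's feature set
    }

    // Only a BAgent, which implements BreederAgent, can reproduce itself.
    abstract class BAgent extends GAgent implements BreederAgent { }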
Note that in the embodied agent context, mating is not actively controlled from outside the agent. Instead, each agent undertakes its own internal mating strategy, as sketched below. The dynamics of this asynchronous (and possibly diverse) activity are an important aspect of embodied agents that deserves careful consideration.
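As an illustration of such an internal strategy, the following hypothetical method shows one possible asynchronous mating step. The greedy best-Plumage policy and the helper names evaluate and sendSpermTo are invented for the example; nothing in the model requires any two agents to use the same policy.

    // One possible (invented) internal mating step for a breeding agent.
    abstract class ExampleBreeder extends BAgent {
        abstract double evaluate(Plumage p);      // agent-specific, private scoring
        abstract void sendSpermTo(Plumage mate);  // transmit own feature set (assumed)

        void matingStep(java.util.List<Plumage> adverts) {
            Plumage best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (Plumage p : adverts) {           // evaluation is internal to this agent
                double s = evaluate(p);
                if (s > bestScore) { bestScore = s; best = p; }
            }
            if (best != null) sendSpermTo(best);  // court the highest-scoring advertiser
        }
    }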
RESULTS
Results with the EC-based agents discussed above have already shown behaviors similar to those of conventional, centralized EC systems, while also showing that there are interesting, distributed effects of agent-based mate selection that deserve careful analytical and empirical investigation (Smith et al., 2000). Results in a system of trading agents have also shown the emergence of a complex interplay of agents, whose evaluations of genetic features are a dynamic function of their population's composition (see Figure 5). For the sake of brevity, details of this simulation are omitted, but each of the prices shown in the graph is controlled by the genetically determined production behavior of simple agents in an economy with three "goods."
Figure 5: Prices of three intermediate goods as a function of trading day in an evolving economy of producers and consumers (Smith et al., 2000) [line plot; y-axis: prices, roughly 0 to 18; x-axis: trading day, 1 to 2832]
The oscillations seen here are a by-product of the ongoing coevolution of the agents. The agents tend to "chase" one another to profitable areas of their economy, deplete them, and then chase toward the new profitable area this depletion creates. In this sense, these complex behaviors can be seen as similar to those observed in the El Farol Bar Problem (Arthur, 1994). Our current work is extending these results into an EC/MAS-based workflow system. In this system, agents negotiate for work items (through a standardized bidding process) and, on a longer time scale, mate and recombine through EC operations. The results will show the evolution of agents and the emergent effects on the workflow process.
FUTURE TRENDS
The key points we intend to emphasize are the following:
• General agent interactions, and related changes in agent behavior, can be viewed as EC-like interactions, whether or not explicit EC is being employed.
• This suggests that experience with "contained" EC theory and practice has important implications for the open, more universal, agent-based systems of the future.
• However, to adequately consider these effects, we must begin to embody EC capabilities within "real" (standards-based) agents, such that realistic effects can be appropriately considered, utilized, and transferred to a general understanding of EC-like agent behavior.
The final point is key to future research in this area. When general agents exchange genetic/memetic-like information (or information that we wish to view as genetic/memetic-like), many agent-based systems issues must be directly considered. Specifically, market-based system issues are of importance (Kearney & Merlat, 1999), e.g.:
• Perceived market value of the material exchanged and the related issues of:
- Trading protocols
- Exchange contracts
- Secondary information markets
- Mechanisms for ensuring trust
In considering the extension of EC paradigms to general agent interactions, each of these facets will constitute an important area for future research.
CONCLUSION
EC theories need not be thought of as limited to the specific class of contained algorithms we now know as evolutionary computation. With the possibility of a future in which ubiquitous, semiautonomous agents interact in a vast computational sea, these theories take on new meanings that require us to refocus consideration of EC within an agent context. The emergence of the fields of memetics and artificial societies (Gilbert & Conte, 1995) further emphasizes the importance of this connection between EC and agents.
ACKNOWLEDGMENTS Authors Smith and Bonacina gratefully acknowledge support provided through a research grant from British Telecom. The authors would also like to thank Lashon Booker and Rick Riolo for many useful comments that contributed to this paper.
REFERENCES
Adami, C. (1998). Introduction to Artificial Life. Heidelberg: Springer-Verlag.
Arthur, W.B. (1994). Inductive reasoning and bounded rationality (the El Farol problem). American Economic Review, 84, 406–411.
Back, T., Fogel, D.B., & Michalewicz, Z. (1997). The Handbook of Evolutionary Computation. Oxford: Oxford University Press.
Balabanovic, M. (1997). An adaptive web page recommendation service. In Proceedings of the First International Conference on Autonomous Agents.
Blackmore, S.J. & Dawkins, R. (1999). The Meme Machine. Oxford: Oxford University Press.
Brooks, R.A., Breazeal, C. (Ferrell), Irie, R., Kemp, C., Marjanovic, M., Scassellati, B., & Williamson, M. (1998). Alternate essences of intelligence. In Proceedings of the AAAI-98 Conference. AAAI Press.
Dawkins, R. (1990). The Selfish Gene. Oxford: Oxford University Press.
Deb, K. & Goldberg, D.E. (1989). An investigation of niche and species formation in genetic function optimization. In Proceedings of the Third International Conference on Genetic Algorithms (42–50).
Eymann, T., Padovan, B., & Schoder, D. (1998). Simulating value chain coordination with artificial life agents. In Demazeau, Y. (Ed.), Proceedings of ICMAS'98 (423–424). Los Alamitos: IEEE Computer Society Press.
FIPA Specifications. http://www.fipa.org/spec/fipa99/fipa99Kawasaki.htm.
Franklin, S. & Graesser, A. (1997). Is it an agent, or just a program?: A taxonomy for autonomous agents. In Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages (21–35). Heidelberg: Springer-Verlag.
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1995). Design Patterns. Reading, MA: Addison-Wesley.
Gilbert, G.N. & Conte, R. (Eds.). (1995). Artificial Societies: The Computer Simulation of Social Life. London: UCL Press.
Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.
Grand, M. (1998). Patterns in Java, Volume 1. New York: Wiley.
Holland, J.H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor, MI: The University of Michigan Press.
Holland, J.H. (1992). Adaptation in Natural and Artificial Systems (2nd ed.). Cambridge, MA: MIT Press.
Horn, J., Goldberg, D.E., & Deb, K. (1994). Implicit niching in a learning classifier system: Nature's way. Evolutionary Computation, 2(1): 37–66.
IBM Aglets. http://www.trl.ibm.co.jp/aglets/index.html.
Kapsalis, A., Smith, G.D., & Rayward-Smith, V.J. (1994). A unified paradigm for parallel genetic algorithms. In Fogarty, T. (Ed.), Evolutionary Computing: AISB Workshop (131–149). Heidelberg: Springer-Verlag.
Kearney, P. & Merlat, W. (1999). Modeling market-based decentralised management systems. BT Technology Journal, 17(4), October.
Kearney, P., Smith, R., Bonacina, C., & Eymann, T. (2000). Integration of computational models inspired by economics and genetics. BT Technology Journal, 18(4): 150–161.
Krasnogor, N. & Smith, J. (2000). MAFRA: A Java memetic algorithms framework. In Workshop on Memetic Algorithms, GECCO.
Menczer, F. & Monge, A.E. (1999). Scalable Web search by adaptive online agents: An InfoSpiders case study. In Klusch, M. (Ed.), Intelligent Information Agents. Heidelberg: Springer-Verlag.
Moukas, A. (1996). Amalthaea: Information discovery and filtering using a multiagent evolving ecosystem. In Proceedings of the Conference on Practical Applications of Agents and Multiagent Technology, London.
Parunak, H.V.D., Savit, R., & Riolo, R. (1998). Agent-based modeling vs. equation-based modeling: A case study and users' guide. In Proceedings of the Workshop on Multiagent Systems and Agent-Based Simulation (MABS'98) (10–25). Heidelberg: Springer-Verlag.
Radcliffe, N.J. (1997). Schema processing. In Back, T., Fogel, D.B., & Michalewicz, Z. (Eds.), The Handbook of Evolutionary Computation (B2.5:1–10). Oxford: Oxford University Press.
Ray, T. (1990). An approach to the synthesis of life. In Langton, C., Taylor, C., Farmer, J.D., & Rasmussen, S. (Eds.), Artificial Life II (371–408). Reading, MA: Addison-Wesley.
Rosin, C.D. & Belew, R.K. (1997). New methods in competitive coevolution. Evolutionary Computation, 5(1): 1–29.
Smith, R.E. & Taylor, N. (1998). A framework for evolutionary computation in agent-based systems. In Looney, C. & Castaing, J. (Eds.), Proceedings of the 1998 International Conference on Intelligent Systems (221–224). ISCA Press.
Smith, R.E., Bonacina, C., Kearney, P., & Merlat, W. (2000). Embodiment of evolutionary computation in general agents. Evolutionary Computation, 8(4): 475–493.
Wooldridge, M. & Jennings, N.R. (1996). Software agents. IEE Review, January, 17–20.
Chapter VII
The Logic Behind Negotiation: From Pre-Argument Reasoning to Argument-Based Negotiation
Luís Brito, Universidade do Minho, Portugal
Paulo Novais, Universidade do Minho, Portugal
José Neves, Universidade do Minho, Portugal
ABSTRACT
The use of agents in Electronic Commerce environments leads to the necessity of introducing some formal analysis and definitions. A four-step method is introduced for developing EC-directed agents, which are able to take into account nonlinearities such as gratitude and agreement. Negotiations that take into account a multistep exchange of arguments provide extra information, at each step, for the intervening agents, enabling them to react accordingly. This argument-based negotiation among
agents has much to gain from the use of Extended Logic Programming mechanisms. Incomplete information is common in EC scenarios; therefore, arguments must also take into account the presence of statements with an unknown valuation.
INTRODUCTION
The amount of ambiguity present in real-life negotiations is intolerable for automatic reasoning systems. Concepts present in each intervening party of a real-life negotiation need to be objectively formalized in order for an automatic approach to be feasible. Logic, and especially Extended Logic Programming (ELP) (Baral & Gelfond, 1994), presents itself as a powerful tool for achieving the desired formality without compromising comprehension, readability, and the ability to easily build an executable prototype for agents. Logical formulas are extremely powerful, unambiguous, and possess a set of interesting advantages (McCarthy, 1959):

Expressing information in declarative sentences is far more modular than expressing it in segments of computer programs or in tables. Sentences can be true in a much wider context than specific programs can be used. The supplier of a fact does not have to understand much about how the receiver functions or how or whether the receiver will use it. The same fact can be used for many purposes, because the logical consequences of collections of facts can be available.

However, in a dynamic environment such as the one found in electronic commerce (EC), the simple use of logical formulas is not enough. The use of nonmonotonic characteristics is self-evident (and is, in some ways, found in ELP) (Neves, 1984). In general logic programs, negative information is provided by the closed-world assumption (i.e., everything that cannot be proven to be true is false); in extended logic programs, however, that is not so. In ELP, a query may fail because the information needed to support it is not available or, on the other hand, because its negation succeeds. The knowledge base (KB), which serves as the basis for the agent's reasoning, can be seen as an extended logic program (P), that is, a collection of rules of the form:

L0 ← L1, ..., Lm, not Lm+1, ..., not Ln
where Li (0 ≤ i ≤ n) is a literal (i.e., a formula of the form p or ¬p, where p is an atom). This general form is reduced to L0 ← (also represented as L0) in the case of facts. The strategy for getting a consistent and sound approach to the use of agents in EC is based on Novais et al. (2001) and is composed of a four-step development methodology:
• Architecture definition: define and specify the agent's modules or functionalities, and design the flow of information [e.g., the Experience-Based Mediator (EBM) agent (Novais et al., 2000), mobile agents for virtual enterprises (Brito et al., 2000b)]
• Process quantification: quantify each metric and subprocess that the agents may have to deal with; establish the mechanisms and protocols for an efficient approach to a wide range of problems (Brito & Neves, 2000; Brito et al., 2000a)
• Reasoning mechanism: each agent needs a formal (logical) set of rules that will serve as the main guidelines for the negotiation processes; the agent needs to reason about the surrounding world before it acts through argumentation (Brito et al., 2001a)
• Process formalization: the process of (logical) argumentation needs to proceed via a formal specification to a consistent implementation in order to set the agents to act or react in a reasonable (logical) way; arguing during an EC negotiation has many similarities to legal arguing (Prakken, 1993; Sartor, 1994), and logic presents itself, once again, as a powerful specification and implementation tool
This methodology stands as a particular case of the use of formal methods in Agent-Oriented Software Engineering (AOSE) (Wooldridge & Ciancarini, 2001). This chapter is organized according to the proposed four-step approach to the development of agents for EC. In the section Architecture Development, architectures for EC are presented. In the section on Process Quantification, examples of objective process quantifications are given. In the Reasoning Formalization section, the basic building blocks of the negotiation process (tools for preargument reasoning), such as theorem solvers, restrictions, and null values, are introduced, aiming at a proper formalization; the process of reasoning with incomplete information is then extended to include temporality and priorities, giving way to the formalization of concepts such as delegation, gratitude, and agreement. In the section on Process Formalization, the process of argumentation is formalized. Finally, some conclusions are drawn, and future work is proposed.
The main contributions of this work are: the definition of a common ground on which to situate the agent's reasoning mechanisms in EC environments; the use of formal tools (logic) to describe the rational behavior of agents involved in EC; the description of a reasoning mechanism necessary for a consistent and sound development of agents for EC; the use of incomplete information in the reasoning process; the bridging of legal argumentation and argument-based negotiation; and the establishment of sound syntactic and semantic tools for argument-based negotiation.
ARCHITECTURE DEVELOPMENT
The development of agent architectures for EC needs to take into account the particular reasoning characteristics to be addressed. The EBM agent (Novais et al., 2000) provides a logic-based framework (with a well-defined set of modules) for preargument reasoning and argument generation. However, this architecture is in no way the final solution. Agents oriented toward price manipulation (and other econometric approaches) represent an interesting alternative (although one with limited reasoning capabilities).
The EBM (Experience-Based Mediator) Agent
The EBM agent is a general module-oriented architecture aimed at the development of intelligent agents for EC. Taking previous experience as a starting point, the agent's knowledge is complemented by general and introspective knowledge: the former comprises information about the system and the prices and rules practiced by counterpart agents, while the latter embraces psychological values such as beliefs, desires, intentions, and obligations. Dynamic and static knowledge are therefore embedded at this level. An agent must be able to reason about general or even incomplete information, on the one hand, and it must also be able to explain its own behavior or acquire new knowledge, on the other. But, in the present context, these procedures are not enough. The ability to deal with the market's specificities is paramount [e.g., the ability to form prices, to evaluate a good or service, or to cartelise (Brito & Neves, 2000; Brito et al., 2000a)].
Other Approaches
The EBM agent embodies a functional approach to agent architecture in EC. Agents in EC were primarily seen as information gatherers (price gatherers) and price-adapters (through mathematical or functional techniques). The use of agents in
EC scenarios has also been approached through theories of economic implication, i.e., economic models and theories that condition the behavior of an agent. As the transition toward the information economy takes place, Kephart et al. (2000) proposed two kinds of agents to enable that transition: pricebots and shopbots. Shopbots (also called comparison-shopping agents) are the answer to intelligent price comparison across online providers. Pricebots, on the other hand, are the providers' counterpart to shopbots, i.e., they manipulate prices, taking into account the market conditions. These two kinds of agents are a step toward so-called "frictionless" commerce.
PROCESS QUANTIFICATION
Through the mass media, EC has been, in the eyes of the public, indisputably reduced to a Business-to-Consumer (B2C) perspective; furthermore, this short-sighted vision was reduced even further to publicized catalog sales (Guttman et al., 1998). In spite of this, the Business-to-Business (B2B) perspective is also endorsed by EC, although the lack of well-established standards and reluctance on the managerial side have hindered its success. EC can be seen under two perspectives: Virtual Marketplaces (VMs) and Virtual Organizations (VOs). VMs fall into the popular view of the subject, i.e., buying or selling in auction or nonconcurrent dealings. VOs are traditionally seen as the network of commercial interests established among different businesses in order to provide some sort of good or service. The VO view can be extended beyond the common definition: a network of interests can also be established within an organization, i.e., functional units or work areas can be found in enterprises, giving way to a network of internal interests driven by the ultimate goal of providing, with maximum quality and minimum costs and delivery time, the contracted goods or services. The simple VM that spawns from a company trying to sell its products on the Internet may be seen as an atomic element of a wider VO. This recursive construction is made possible by the fact that agents, being similar to their real-world counterparts, should play a mediator role, i.e., an agent is either a buyer or a seller, depending upon the prevailing circumstances. The definition of a business strategy is of paramount importance for the future success of any company. Planning must rely on a series of tools that enables the elaboration of a short-, medium-, or long-term strategy. In the real world, the main tools are mediation, agreement, and gratitude. Mediation enables a company to play a dual part in the market, i.e., the experiences
gathered as buyer may be used to extrapolate future actions as seller, and vice versa. Agreement enables a feeling of trust in an agent, whether in truthful or untruthful voting scenarios (Brito & Neves, 2000). Gratitude is important for the creation of interorganizational dependencies (debts) that condition future deals (Brito et al., 2000a). Typical approaches to EC are based on the assumption of one-to-one negotiation without any spurious influences from third-party entities, i.e., negotiations are conducted in a one-provider to one-customer way, such that there is an absence of dialogue among providers and, therefore, all negotiations are statistically independent.
Gratitude
One can establish gratitude as a tacit obligation that influences the decision-making process (Brito et al., 2000a) (e.g., in the real world, an agent may be forced to pull out from a negotiation if so requested by someone to whom it owes some value). Gratitude may arise from one of two main situations: a gift of some sort is given to someone (e.g., a Christmas present), giving rise to nonnegotiable gratitude; or, during the process of setting up a settlement or agreement, an agent offers some form of compensation for the pull-out of a competitor (e.g., monetary compensation for an unsure transaction), giving rise to negotiable gratitude (Brito et al., 2000a). The significance of this concept for VOs spawns from the fact that agents are now able to influence future decisions on the part of the other companies' counterparts, i.e., the debt of gratitude is influenced by personal standards and does not configure itself as a universal value. Gratitude may be measured marginally in the form:

g_m(x, y, NI) =
    value_offer − W_strategy(NI, y),   if NI.grt = non-negotiable
    (1 − α) · F_gains(NI),             if NI.grt = negotiable
    0,                                 otherwise

where g_m(x, y, NI), W_strategy(NI, y), value_offer, α, and F_gains(NI) stand, respectively, for the marginal gratitude of agent x toward agent y, taking into account the negotiation information NI; the function that weights the influence of NI and agent y in the strategy of agent x; the value attributed by x to the offer that creates a gratitude debt; the percentage of the gains offered to y as compensation for a drop-out; and the forecast of the gains, taking into account NI. NI is a composite structure that includes fields such as the kind of gratitude (grt).
The overall agent's debt toward the community is given by:

G(x) = Σ_{i ∈ Agents\{x}} g_m(x, i, ·)
where G(x) and Agents stand, respectively, for the aggregate gratitude debt of agent x and the set of agents that represent the community. This aggregate value must be thoroughly controlled, so that an agent x does not enter a situation where the debt of gratitude is greater than the gains expected to be obtained in future dealings.
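As a purely illustrative computation (the figures are invented): if agent x agrees to drop out of a deal in exchange for a compensation corresponding to α = 0.1 of forecast gains F_gains(NI) = 500, the negotiable case of the formula yields g_m = (1 − 0.1) × 500 = 450, and this debt then enters the aggregate G(x).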
Agreement
Inside an organization, one may see the need for agreement, and when using collaborative agents, one is faced with a truthful environment (Brito & Neves, 2000). In such environments, one is able to formalize agreements in order to provide an integrated picture of the system to the outside world. This image can be achieved by gathering all the agents' opinions on a particular subject and by using a majority vote. The agreement strategy can, therefore, be rewritten in order to weight each agent's specificity. This may be expressed as:

agreement_w(value) = majority[w_1 opinion_1(value), w_2 opinion_2(value), ..., w_n opinion_n(value)], value ∈ ℜ

with 0 ≤ w_i ≤ 1, ∀i ∈ Agents, where w_i and Agents stand, respectively, for the weight of agent i in the making of value, and the community of agents. w_i, in turn, may be a function of time, which can be expressed as

w_i(t) = β w_i(t − 1) + (1 − β) compliance(v_{p,i}(t), v_{r,i}(t)), with 0 ≤ β ≤ 1

where β, v_{p,i}(t), v_{r,i}(t), and compliance(x, y) stand, respectively, for the weight of the historic information in the making of w_i(t); the judgement of agent i at time t on the value; the value attained by agent i at time t; and a measure of the reciprocity or relation between v_{p,i}(t) and v_{r,i}(t). The higher the value of β, the higher the significance of historic values and the smaller the influence of sporadic noncompliances. However, in an open market, one cannot assume that the agents are always truthful, i.e., telling or expressing the truth.
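As a sketch of these two formulas (hypothetical helper code, not part of any cited system), a weighted boolean vote and the compliance-based weight update could be written as:

    // Weighted majority over boolean opinions: the value is agreed upon iff the
    // summed weight of agreeing agents exceeds half of the total weight.
    final class AgreementSketch {
        static boolean agreement(double[] w, boolean[] opinion) {
            double yes = 0, total = 0;
            for (int i = 0; i < w.length; i++) {
                total += w[i];
                if (opinion[i]) yes += w[i];
            }
            return yes > total / 2;
        }

        // w_i(t) = beta * w_i(t-1) + (1 - beta) * compliance, with 0 <= beta <= 1;
        // compliance measures how well agent i's judgement matched the attained value.
        static double updateWeight(double wPrev, double beta, double compliance) {
            return beta * wPrev + (1 - beta) * compliance;
        }
    }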
To reach an agreement in untruthful environments, a round-based protocol is needed in order to isolate the pernicious influence of untruthful voting. These protocols often rely on the existence of a minimum number of intervening parties. Typically, one must have n ≥ 3m + 1, where n is the number of truthful agents and m is the number of untruthful ones (Brito & Neves, 2000). This natural capability to generate agreements using round-based protocols in real-world environments makes a case for strategic planning in interbusiness relationships. One is able to form alliances among companies to forge better prices for specific products. On the other hand, it makes possible the gathering of information of vital importance for the definition of market strategies, determining the limits of the common ground on which the different enterprises stand.
Strategic Planning
Every organization, in order to evolve in a sustained and sound way, must define strategic guidelines that will enable competitiveness and the definition of management goals. The concept of strategy is important in areas that range from the military to commercial organizations, either virtual or real, i.e., the necessity for feasible and clear planning is vital at every point of the production and consumption chain. On the other hand, the companies that position themselves closer to the consumer suffer from the impact of production defects and delivery delays, while being, at the same time, pressured by the consumers. Typical EC systems are unaware of these shortcomings, i.e., they function on a per-deal basis, which may render interesting profits on a single deal but, in the long term, may decrease the company's negotiable rates. One may now formalize the difference between overall profit (arising from strategy-driven systems) and local profitability, in the form:

Profit(Agents) ≠ Σ_{i ∈ Agents} Σ_{j ∈ Deals} profit(i, j)
where Profit(Agents) and profit(i, j) stand, respectively, for the overall profit obtained by the set of agents Agents, and the marginal profit acquired in a per-business strategy executed by agent i for deal j. In order to define a sound strategy, one must be able to gather the counterparts and guarantee their standing on particular issues, i.e., agreement plays an important role in market placement. On the other hand, one must be
aware that the market evolves within truthful environments (where there is agreement among the parties that make up an organization) or within untruthful ones (the case where typical real-world, self-interested entities may utter conflicting opinions). The definition of strategic lines must take into account punctual alliances arising from gratitude debts, which may be used in order to secure the expected behavior from counterparts.
REASONING FORMALIZATION
Some of the most important features of preargument reasoning are temporality, priorities, delegation, gratitude, and agreement (the last two already quantified in the previous section). An agent weighs its knowledge base, its temporal validity, and its relative priorities, and then decides whether delegation is in order. As for gratitude and agreement, reasoning takes into account the quantification provided at the previous stage of the present methodology. The general process of negotiation must be clearly distinguished from the argumentation stage (Brito et al., 2001b). The process of argumentation is tightly coupled with the process of logically founded attacks on the arguments put forward by a counterpart; it deals with price-formation issues and deal finalization. Negotiation, on the other hand, is a wider concept coupled with specific forms of reasoning, dealing with the high-order, pre-arguing relationships that may be established among agents.
Right to Deal
During a negotiation process, each agent, although able to deal with a counterpart, may be inhibited from doing so. Therefore, a distinction must be established between capability (i.e., an agent has the necessary expertise to do something) and right (i.e., an agent has the capability to do something, and it may proceed with that course of action) (Norman et al., 1998). In the case of an EBM agent, it is assumed that it has the ability to deal with every product, under any scenario. However, any EBM agent has its behavior conditioned by the right-to-deal premise. Consider the predicates capability-to-deal: Product, Conditions, Counterpart → {true, false} (representing the capability to deal) and right-to-deal: Product, Conditions, Counterpart → {true, false} (representing the right to deal), where Product, Conditions, and Counterpart stand, respectively, for the product to be traded, the conditions associated with that operation, and the counterpart agent involved in the deal. It may now be stated that:
∀Product ∀Conditions ∀Counterpart : capability-to-deal(Product, Conditions, Counterpart)

i.e., the capability to deal is a tautology within EBM agents, and the presence of such knowledge in the KB of an agent can be taken as implicit. Therefore, the knowledge about the right to deal (right-to-deal: Product, Conditions, Counterpart → {true, false}) rises in importance. A logical theory (upon which the KB of each agent is based) can now be defined:

Definition 1 (A Logical Theory for Negotiation Agents)
A Logical Theory for Negotiation Agents is defined as the quadruple TNA = 〈R, C, BP, ≺〉, where R, C, BP, and ≺ stand, respectively, for the set of predicates on the right to deal (right-to-deal: Product, Conditions, Counterpart → {true, false}), the set of invariants (A :+ restriction :: P), the set of behavioral predicates (including the theorem provers), and a noncircular order relation stating that, if P ≺ Q, then P occurs prior to Q, i.e., has precedence over Q.
Using Incomplete Information
Typically, commerce-oriented agents (such as the EBM one) act in situations where dealing with a given agent is forbidden or, in some way, the set of conditions to be followed in a deal is not completely defined. These situations involve the use of null values (Analide & Neves, 2000). A special theorem solver can be developed in order to cope with this kind of information. With the use of incomplete information with null values, a simple three-valued logic is set into place. Using this framework, it is now possible to assert the conditions under which a given product or service may be traded. The use of a null value from an unknown set of values (Baral & Gelfond, 1994) can state the ability to deal some product with some counterpart, knowing only that the set of conditions governing such a deal belongs to an unknown set of values. For this case, the KB of an agent must contain clauses such as the following:

exceptionrtd(P, -, CP) ←
    nullunknown-set(X),
    right-to-deal(P, X, CP).
¬right-to-deal(P, C, CP) ←
    not right-to-deal(P, C, CP),
    not exceptionrtd(P, C, CP).

The KB of an agent must contain an instantiation of nullunknown-set [e.g., nullunknown-set(cond)] and right-to-deal() clauses that may use the null value [e.g., right-to-deal(p4, cond, cp2)].
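To see the three-valued behavior this pair of clauses induces, consider a hypothetical KB containing only the facts nullunknown-set(cond) and right-to-deal(p4, cond, cp2). Then right-to-deal(p4, cond, cp2) is provable (true); right-to-deal(p7, c1, cp9) is false, since no matching fact exists and no exception applies, so the negative clause succeeds; and right-to-deal(p4, c9, cp2) is unknown, since it cannot be proven, yet the null value cond raises exceptionrtd(p4, c9, cp2) and thereby blocks the negative conclusion as well.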
Temporality
The concept of temporality is connected to the temporal validity of possible inferences over the KB of an agent, i.e., a fact may be valid only over a well-defined time period. Taking a nondestructive KB and a nonmonotonic logic, different conclusions may be reached when the temporal validity of information is taken into account (e.g., John has the right to deal with Paul, but only from 10/05/2001 to 12/05/2001) (Neves, 1984). Taking set R (right-to-deal clauses) from logical theory TNA, an extension is to be made in order for these elements to encompass temporal validity. An agent will therefore reason about validity, taking into account the information present at the fact level. An example of validity, for a specific clause, is shown in Figure 1.

Figure 1: Example of time validity for a right-to-deal clause

Definition 2 (Clauses with Temporality)
A factual clause, represented as P, where P is an atomic formula, is represented, in order to encompass temporal validity, as P::[i1, i2, ..., in]., where each ij = [ta, tb] is one of the following elements:
1. Temporal instant: ta = tb, with ta, tb ≥ 0 and ta, tb ∈ TD, where TD = {t | t ∈ N0} ∪ {forever} and forever represents the end of times.
2. Temporal interval: ta < tb, with ta ≥ 0 and ta, tb ∈ TD, where TD = {t | t ∈ N0} ∪ {forever} and forever represents the end of times.
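As an illustration (the clause is hypothetical): given bk1 : right-to-deal(p9, [c2], cp7) :: [[0, 10], [20, forever]]., the right holds at CT = 5 and at CT = 25, but not at CT = 15, since 15 falls inside no validity interval of the clause.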
In the case where positive and negative information is present in the KB, set R of theory TNA should be consistent, i.e., the following condition should be verified:

∃P::T1 ∧ ∃¬P::T2 → T1 ∩ T2 = ∅
Priorities
In logic programming languages such as Prolog, some priority is established through the ordering of clauses. However, this kind of priority is too weak, giving way to the definition of new priority rules with well-specified semantics. The necessity to establish priorities within the set of clauses that composes an agent's KB arises either from computational reasons or from the necessity of establishing new semantics. The solution, for a feasible priority treatment, lies in embedding priority rules in the KB of each agent (Brito et al., 2001a, 2001b). Therefore, logical theory TNA is to be changed into a new logical theory (TNAP) in which the organization of factual clauses is given by the semantics of priority rules.

Definition 3 (A Logical Theory for Negotiation Agents with Priorities)
The Logical Theory for Negotiation Agents with Priorities is defined as TNAP = 〈R, C, BP, PR, ≺〉, where R, C, BP, PR, and ≺ stand, respectively, for the set of predicates on the right to deal (right-to-deal: Product, Conditions, Counterpart → {true, false, unknown}), the set of assertion restrictions/invariants (A :+ restriction :: P.), the set of behavioral predicates (including all demonstrators/theorem solvers), the set of embedded priority rules, and the noncircular order relation established among the different clauses in a KB, which derives from the time of their insertion. Relation ≺ determines, in the case of P ≺ Q, that P is earlier than Q, thus ordering the set of clauses and providing a fail-safe priority mechanism beneath the one provided by the set PR.
Although priorities can be established between single clauses, it is usual, at least as a first-level approach, to consider priorities among bodies of knowledge (e.g., information about mary has priority over information about john). These bodies of knowledge are nothing more than a high-level classification of factual clauses (e.g., agy : bk1 : right-to-deal(p2, [c5], cp4) :: [[0, 10]].). Notice, however, that this classification has variable granularity, giving way to a per-clause priority if so needed (with a consequent increase in complexity). The previous definitions on the use of incomplete information, temporal information, and priorities culminate in the creation of a theorem solver that enables preargumentative reasoning.

Definition 4 (An LP Theorem Solver for Incomplete and Temporal Information with Priorities)
Taking factual clauses with temporal validity and body-of-knowledge classification (represented by BK::P::[i1, i2, ..., in].) and rule clauses (represented by P ← Q and read as "P if Q") as the components of the KB present in each agent, the predicate demoLPITP: T, CT, V → {true, false}, where T, CT, V, and {true, false} stand, respectively, for a logical theorem, the current time, the theorem valuation (true, false, or unknown), and the possible valuations of the demoLPITP predicate, represents the LP theorem solver for incomplete and temporal information over the KB, governed by the following set of rules:

demoLPITP(P, CT, true) ←
    priority(BK1, BK2), testpriority(BK1, BK2, P, T), intime(CT, T).
demoLPITP(P, CT, false) ←
    priority(BK1, BK2), testpriority(BK1, BK2, P, T), ¬intime(CT, T).
demoLPITP(P, CT, false) ←
    priority(BK1, BK2), ntestpriority(BK1, BK2, P, T), intime(CT, T).
demoLPITP(P, -, unknown) ←
    priority(BK1, BK2),
    not testpriority(BK1, BK2, P, -),
    not ntestpriority(BK1, BK2, P, -).
testpriority(BK1, -, P, T) ← (BK1::P::T).
testpriority(-, BK2, P, T) ← (BK2::P::T).
ntestpriority(BK1, -, P, T) ← ¬(BK1::P::T).
ntestpriority(-, BK2, P, T) ← ¬(BK2::P::T).

where the predicates intime: CT, LT → {true, false}, testpriority: BKa, BKb, P, T → {true, false}, and ntestpriority: BKa, BKb, P, T → {true, false} stand, respectively, for the verification of the presence of time CT in the list of validity intervals LT, the prioritized demonstration of theorem P for the bodies of knowledge BKa and BKb, and the prioritized demonstration of theorem P through negative information for the bodies of knowledge BKa and BKb.
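As a usage illustration over a hypothetical KB containing priority(bk1, bk2) and the fact bk1 : right-to-deal(p2, [c3, c4], cp3) :: [[0, 60]]., the query demoLPITP(right-to-deal(p2, [c3, c4], cp3), 30, V) yields V = true, since the theorem is found in the prioritized body of knowledge and 30 ∈ [0, 60]; the same query at CT = 70 yields V = false; and a query on a theorem absent from both bodies of knowledge yields V = unknown.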
Delegation
Delegation can be seen as the delivery (assimilation) of a valid negotiation from one agent to another. Negotiation tasks may only be delivered to a third party if there is sufficient knowledge relating to the right to deal with that same agent. Delegation acts as a way to undertake indirect negotiations, i.e., to use a proxy agent, taking advantage of its particular characteristics, such as gratitude debts and agreements established between the proxy and the other agents (Brito et al., 2000a). Therefore, formalizing the delegation process is equivalent to formalizing the generation of a "middle-man" approach to business. A logical perspective is given by considering that the act of delegating deals involving product P, conditions C, and counterpart CP to agent Y (considering time CT) is only possible if the delegating agent is able to deal the product with the final counterpart (valid for the present view of delegation); the delegating agent is able to deal with the proxy agent; and the proxy agent is able to deal with the final counterpart by itself. Formally:

agx : delegate(P, C, CP, Y, CT) ←
    agx : demoLPITP(right-to-deal(P, C, CP), CT, true),
    agx : demoLPITP(right-to-deal(P, -, Y), CT, true),
    Y : validassimilation(Y : right-to-deal(P, C, CP), CT).
Gratitude
Although gratitude quantification is possible and even desirable in order to enable computational manipulation, it is still a subjective element and a nonlinearity that has a decisive influence on the outcome of many business strategies (e.g., in strategic planning) (Brito et al., 2000a). A quantified characterization of the marginal gratitude concept is given in the section on Process Quantification.
When analyzing the two gratitude situations, it can be seen that the first (nonnegotiable gratitude) occurs with gratitude value Value when, taking into account the negotiation information NI, the counterpart agent Y, and the reasoning time CT, a specific offer (e.g., a gift) from the counterpart agent takes place. Dealing with that counterpart is authorized, and the "subjective" (although quantified) value of the offer is taken into account when updating the negotiation information, through which a strategic evaluation (conditioning further action) is made. The second situation (negotiable gratitude) is viable when it is possible to deal with a specific counterpart agent, which, in turn, states in its own KB that it is able to drop a specific negotiation (probably a competing one) in exchange for some compensation (reflected in terms of gratitude). Formally:

agx : gratitude(Value, NI, Y, CT) ←
    agx : offer(Y, Description),
    agx : demoLPITP(right-to-deal(-, -, Y), CT, true),
    agx : evaluateoffer(Y, Description, CT, Valueoffer),
    agx : update(NI, Y, Description, Valueoffer, CT, NNI),
    agx : evaluatestrategy(NNI, Y, CT, S, Sweight),
    agx : gratitudemarginal(NNI, [Valueoffer, Sweight], Value),
    Value > 0.
agx : gratitude(Value, NI, Y, CT) ←
    agx : demoLPITP(right-to-deal(-, -, Y), CT, true),
    Y : dropcompensation(agx, NI, CT, α),
    agx : forecastgains(Y, NI, CT, F),
    agx : gratitudemarginal(NI, [α, F], Value),
    Value > 0.
Agreement
Like gratitude, agreement can be seen, in many cases, as a subjective element that introduces nonlinearities into the negotiation process (Brito & Neves, 2000; Brito et al., 2000a). The simplest case of agreement is reached through the use of a majority of votes. This democratic approach relies on the existence of fully veridic agents, i.e., agents that convey their opinion in a consistent manner to their peers. This majority approach is quantified in the section on Process Quantification. In logical terms, an agreement can only be reached among agents that are able to deal with each other, i.e., if an agent is unable to assert the right to deal with other agents, it can never establish any sort of commitment (agreement).
An agreement is reached on a specific subject (S), among a set of entities (E), with a set of opinions (O), at a specific time (CT). By definition, an agent is in agreement with itself on every subject. As for the other counterparts, an agreement with them is reached if the agent is authorized to deal with every one of them, their opinions are gathered, and, finally, a summary is produced, i.e., it is possible to establish an agreement situation weighing the set of opinions. Formally:

agx : agreement(-, [agx], -, -).
agx : agreement(S, E, O, CT) ←
    agx : can-deal-with(E, CT),
    agx : gatheropinions(S, E, CT, LO),
    agx : summarize(S, O, LO, CT).
agx : can-deal-with([A], CT) ←
    agx : demoLPITP(right-to-deal(-, -, A), CT, true).
agx : can-deal-with([A|T], CT) ←
    agx : demoLPITP(right-to-deal(-, -, A), CT, true),
    agx : can-deal-with(T, CT).
Example
Assume the following KB, defined according to the noncircular theory TNAP. Reasoning about delegation will now involve the set of restrictions embedded in the KBs. The clauses are:

agx : bk1 : right-to-deal(p1, [c1], cp2) :: [[0, forever]].
agx : bk2 : right-to-deal(p2, [c3, c4], cp3) :: [[0, 50]].
% exceptions agx
% theorem provers agx
% priorities agx
agx : priority(bk1, bk2).

agy : bk1 : right-to-deal(p2, [c3, c4], cp3) :: [[0, 60]].
agy : bk1 : right-to-deal(p2, [c5], cp4) :: [[0, 10]].
agy : money(900).
% exceptions agy
% theorem provers agy
% priorities agy
agy : priority(bk1, bk2).
Agent agx is able to negotiate product p1 under conditions [c1] with counterpart agent cp2, permanently. In the case of product p2, conditions [c3, c4] are established for counterpart agent cp3, but only over the interval [0, 50]; the knowledge in bk1 overpowers the knowledge in bk2. Agent agy is able to negotiate product p2 under conditions [c3, c4] with counterpart agent cp3, but only over the interval [0, 60]; furthermore, it is able to negotiate product p2 under conditions [c5] with counterpart agent cp4, but only over the interval [0, 10]. Agent agy has 900 monetary units expressed in its KB, and a new assertion is conditioned on the existence of 1000 monetary units (due to an assertion restriction). Its priority rule establishes that the knowledge in bk1 overpowers that in bk2. The KB can be queried in order to determine the validity of a delegation process:

? agx : delegate(p1, [c1], cp2, agy, 10).        false
? agx : delegate(P, C, cp2, agy, 10).
P = {p1}, C = {[c1]}
The second column expresses the possible variable valuations or the valuation of the query. In the first query, although the right to deal is well established in agent agx, it is impossible to assert the necessary knowledge in the proxy agent (agy) due to the assertion restriction. In the second query, delegation to agent agy of a negotiation with cp2, at time instant 10, is possible only for product p1 and conditions [c1].
PROCESS FORMALIZATION
Argument-Based Negotiation
The use of logic for the formalization of argument-based negotiation does not aim at the definition of the best dealing strategies (although the construction of problem-solving methods for that purpose may turn out to be more stable when built upon the concepts stated in the formal theory). There are two main objectives: offers and counteroffers are logically justified, and the definition of conflict or attack among opposing parties is made clearer. Without arguments, each agent has no way of ascertaining why its proposals and counterproposals are accepted or rejected, due to the limited amount of exchanged information (Jennings et al., 1998).
Global Versus Local Knowledge
Each element that composes an argument may come from one of two main sources: global or local knowledge. Global knowledge is shared by the intervening entities and is, therefore, independent of any particular experience or local state. Local knowledge derives from sources that are not common to every agent, giving way to the possibility of contradictory conclusions upon confrontation. Contrary to the definitions found in logical formalizations of law (Prakken, 1993), the KBs embedded in different agents may be quite different. The use of global or local knowledge conditions the capacity to determine the winner of a confrontation. As expected, local knowledge is not the best starting point for a premise denial attack (e.g., a claim such as "my experience tells me I sold item X for Y monetary units" is difficult for the counterpart agent to refute, because it cannot know the particular experiences of the other agent). In many Business-to-Business (B2B) or Business-to-Consumer (B2C) argumentations, there is often no winner or loser; however, the exchange of arguments among agents is essential so that a situation acceptable to both parties can be reached (even if an agent decides to drop the negotiation at any time). Local knowledge is important for an agent to reach another agent's acceptability region faster (Jennings et al., 1998).
Negotiation Arguments
After a theory and a language have been established in order to represent each agent's knowledge and information (from which it will draw the justification for each offer and counteroffer), a definition of argument must be reached. An argument is constructed progressively, the antecedent of each rule being composed of the consequents of previous rules. This definition is, perhaps, the most important in the logical formalization of argument-based negotiation.

Definition 5 (Negotiation Argument with an Implicit Meta Theorem-Solver)
Taking ordered theory TNAP, a negotiation argument is a finite, nonempty sequence of rules 〈r1, ..., demo(ri, Vi), ..., rn〉 such that, for each sequence rule rj with P as part of the antecedent, there is a sequence rule ri (i < j) whose consequent is P.

The use of such arguments, extended by a threefold logic, is important due to their informative nature, i.e., one of the advantages of using argument-based
negotiation lies in the fact that information is conveyed in such a way that the counterpart agents are able to evolve their counterarguments in parallel (reaching a cooperative usage of knowledge) (Brito et al., 2001b; Jennings et al., 1998). The conclusion of an argument is the consequent of the last rule used in that argument. Formally:

Definition 6 (Argument Conclusion)
The conclusion of an argument A1 = 〈r1, ..., rn〉, conc(A1), is the consequent of the last rule (rn).

As stated, the nature of the knowledge each agent has (local and global) is relevant for arguments and counterarguments. By composing an argument with rules or facts that spawn from local knowledge (e.g., previous experiences), the attack or counterargument launched by the opposing agent during its round is conditioned (due to the fact that local knowledge is hard to deny). Taking into account the two forms of argument attack (conclusion denial and premise denial), a conflict between two opposing agents (e.g., buyer and seller) can be formally specified.

Definition 7 (Conflict/Attack over Negotiation Arguments)
Let A1 = 〈r1,1, ..., r1,n〉 be the argument of agent 1 and A2 = 〈r2,1, ..., r2,m〉 be the argument of agent 2. Then:
(1) if some r1,i ∈ A1 or r2,j ∈ A2 is local, the arguments are said to be in "probable conflict"
(2) A1 attacks A2 iff A1 executes a conclusion denial attack or a premise denial attack over A2
(3) A1 executes a conclusion denial attack over A2 iff there is no local knowledge involved and conc(A1) is contrary to conc(A2)
(4) A1 executes a premise denial attack over A2 iff there is no local knowledge involved and conc(A1) is contrary to some r2,j ∈ A2

Having in mind the use of rational agents (i.e., agents that do not undermine their own actions and are able to formulate coherent arguments), a proper definition of coherency must be formulated.
Definition 8 (Argument Coherency)
An argument A1 = 〈r1, ..., rn〉 is said to be "coherent" iff ¬∃ai, aj : ai, aj ∈ subarguments(A1) ∧ i ≠ j ∧ ai attacks aj.

Taking into account the definition of conflict and attack and the concept of round, it is possible to logically define the victory/defeat pair.

Definition 9 (Victory/Defeat of Negotiation Arguments)
Let A1 = 〈r1,1, ..., r1,n〉 be the argument of agent 1 and A2 = 〈r2,1, ..., r2,m〉 be the argument of agent 2, with A2 presented at a later "round" than A1. Then, A1 is defeated by A2 (or A2 is victorious over A1) iff:
(1) A2 is coherent and A1 is incoherent
(2) A2 is coherent, executes a conclusion denial attack over A1 (coherent), and the conclusion rule of A2 has priority (taking into account the TNAP theory) over that of A1
(3) A2 is coherent, executes a premise denial attack over A1 (coherent), and the conclusion rule of A2 has priority (taking into account the TNAP theory) over that of A1
Example
Some examples may be presented to illustrate the previous definitions. Let agents E and F be engaged in the process of buying or selling product p1 in an environment with priority rules embedded in the KBs. Agents E and F share general knowledge, market knowledge, and the set of priority rules.

Agent E:
PE : r5 : price(p1, 143).      % (experience) price for p1 is 143
MK : r7 : price(p1, 147).      % (market) price for p1 is 147
GK : r1 : price(p1, 150).      % (global) price for p1 is 150
PRIO : r4 : priority(PE, GK).  % (priority) PE overpowers GK
PRIO : r6 : priority(MK, PE).  % (priority) MK overpowers PE

Agent F:
MK : r7 : price(p1, 147).      % (market) price for p1 is 147
GK : r1 : price(p1, 150).      % (global) price for p1 is 150
PRIO : r4 : priority(PE, GK).  % (priority) PE overpowers GK
PRIO : r6 : priority(MK, PE).  % (priority) MK overpowers PE
The argument given by agent E might be AE = 〈r4, r5〉; however, agent F might counter with AF = 〈r6, r7〉, representing a conclusion denial attack, taking into account the priority rules shared by the community. Agent F is considered the winner due to the fact that it uses a rule of higher priority in the set of priority rules.
CONCLUSIONS
As previously stated, logic stands as an important tool for formalizing approaches to the development of agent-based software. Logic provides a way to eliminate (or at least reduce) ambiguity and, in the particular case of ELP, is close to a working prototype. EC is an area posing particular problems for the use of agent-based software. Though applications in this area are particularly suited to being solved by agents, no formal development process had been devised for this field of expertise. However, as seen above, building agents for EC purposes can be approached in four steps. Starting with the definition of an agent architecture, the processes that take place within and among agents are quantified, the reasoning mechanisms are formally stated, and the flow of knowledge is established. The processes involved in EC that are difficult to assimilate into traditional systems revolve around subjective business parameters. Parameters such as gratitude and agreement among parties are nonlinearities that need to be considered in order to develop a feasible EC system. This information is to be taken into account when drawing up a strategic plan of action. However, once subjective parameters have been quantified, some reasoning must take place before any argument is exchanged with potential counterparts. This stage, which has been referred to as "pre-negotiation reasoning," deals with the existence of incomplete information and draws logical conclusions over an agent's KB (e.g., is agent A able to deal product P with agent B at time T?). The exchange of arguments among agents serves the main purpose of information exchange. Exchanging justified information provides an agent's counterpart with enough knowledge to try to reach a common understanding much faster. Formalizing the ways an agent can attack an argument (and which knowledge to use for an effective "victory") culminates the most important steps in the development of EC-directed agent software.
REFERENCES
Analide, C. & Neves, J. (2000). Representation of incomplete information. In CAPSI, Conferência da Associação Portuguesa de Sistemas de Informação, Universidade do Minho, Guimarães, Portugal.
Baral, C. & Gelfond, M. (1994). Logic programming and knowledge representation. Journal of Logic Programming, 19/20: 73–148.
Brito, L. & Neves, J. (2000). Agreement and coalition formation in multiagent-based virtual marketplaces. In Proceedings of IEA/AIE-2000: The 13th International Conference on Industrial & Engineering Applications of Artificial Intelligence & Expert Systems, New Orleans, LA.
Brito, L., Neves, J., & Moura, F. (2000b). A mobile-agent-based architecture for virtual enterprises. In Proceedings of PRO-VE 2000: The 2nd IFIP/Massyve Working Conference on Infrastructures for Virtual Enterprises, Florianópolis, Brazil.
Brito, L., Novais, P., & Neves, J. (2000a). Mediation, agreement and gratitude in strategic planning for virtual organisations. In Proceedings of ICEIS 2000: The 2nd International Conference on Enterprise Information Systems, Stafford, UK.
Brito, L., Novais, P., & Neves, J. (2001a). Temporality, priorities and delegation in an e-commerce environment. In Proceedings of the 14th Bled Electronic Commerce Conference, Bled, Slovenia.
Brito, L., Novais, P., & Neves, J. (2001b). On the logical aspects of argument-based negotiation among agents. In Proceedings of CIA 2001: The Fifth International Workshop on Cooperative Information Agents, Modena, Italy.
Guttman, R., Moukas, A., & Maes, P. (1998). Agent-mediated electronic commerce: A survey. Knowledge Engineering Review.
Jennings, N.R., Parsons, S., Noriega, P., & Sierra, C. (1998). On argumentation-based negotiation. In Proceedings of the International Workshop on Multiagent Systems, Boston, MA.
Kephart, J., Hanson, J., & Greenwald, A. (2000). Dynamic pricing by software agents. Computer Networks.
McCarthy, J. (1959). Programs with common sense. In Proceedings of the Teddington Conference on the Mechanization of Thought Processes (75–91), London.
Neves, J. (1984). A logic interpreter to handle time and negation in logic data bases. In Proceedings of the ACM 1984 Annual Conference (50–54), San Francisco, CA.
Norman, T., Sierra, C., & Jennings, N. (1998). Rights and Commitment in Multi-Agent Agreements. Proceedings of the 3rd International Conference on Multiagent Systems (ICMAS-98) (222–229), Paris, France.
Novais, P., Brito, L., & Neves, J. (2000). Experience-Based Mediator Agents as the Basis of an Electronic Commerce System. Proceedings of Workshop 2000 — Agent-Based Simulation, Passau, Germany.
Novais, P., Brito, L., & Neves, J. (2001). Developing Agents for Electronic Commerce — A Constructive Approach. Proceedings of SCI 2001 — 5th World Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL.
Prakken, H. (1993). Logical Tools for Modeling Legal Argument. Doctoral Dissertation, Free University, Amsterdam.
Sartor, G. (1994). A Formal Model of Legal Argumentation. Ratio Juris, 7, 212–226.
Wooldridge, M. & Ciancarini, P. (2001). Agent-Oriented Software Engineering: The State of the Art. In Agent-Oriented Software Engineering, Lecture Notes in AI, Volume 1957, Heidelberg: Springer-Verlag.
Chapter VIII
Designing Agent-Based Process Systems — Extending the OPEN Process Framework
J. Debenham, University of Technology, Sydney, Australia
B. Henderson-Sellers, University of Technology, Sydney, Australia
ABSTRACT
Originally a development methodology targeted at object technology, the OPEN Process Framework (OPF) is found to be a successful basis for extensions that support agent-oriented software development. Here we describe the process components necessary for agent-oriented support and illustrate the extensions by means of two small case studies covering both task-driven processes and goal-driven processes. The additional process components for Tasks and Techniques are all generated from the OPF's metamodel, which gives the OPF its flexibility and tailorability to a wide variety of situations — here, agent orientation.
INTRODUCTION AND BACKGROUND
Modern software engineering encompasses a wide range of areas of interest. Three of particular interest are object orientation, component-based development, and intelligent multiagent systems. While all three have a different genesis, they all have elements in common, notably, modularity and information hiding, with a clear distinction between interface (or specification) and implementation. Intelligent agents have emerged from artificial intelligence research (Wooldridge, 1997), whereas intelligent agent software engineering and methodologies have a firmer relationship to object technology (e.g., Wooldridge et al., 2000). The majority of agent-oriented methodologies have an object-oriented (OO) heredity, despite observations that previously published OO methodologies provide inadequate support for agents, while other methodologies adapt knowledge engineering techniques (Wooldridge & Ciancarini, 2001).
Of course, OO methodologies that are highly prescriptive and overly specified are hard to extend when a new variant or a new paradigm appears. What is required is a more flexible approach to building methodologies or processes. One such approach will be utilized here: OPEN (Object-oriented Process, Environment and Notation; Graham et al., 1997). Using OPEN, process components are selected from a repository, and the actual methodology (or process) is constructed using identified construction and tailoring guidelines. We identify here what support must be added to OPEN to fully support agents. Following this identification, we give an example application in the context of the development of business process management systems in which the processes may be decomposed into conditional sequences of goals. Goal orientation, as embodied in this class of applications, is readily accommodated in the existing and new tasks and techniques of the OPEN Process Framework (OPF; Firesmith & Henderson-Sellers, 2002), leading to a specification of the individual agents in the system.
DESIGNING AGENT-BASED SYSTEMS
A multiagent system is a society of autonomous components, each of which may be constructed independently. They may also be mobile (across hardware platforms and networks); this aspect is outside the scope of this paper and is not discussed further. Agents can be regarded as powerful versions of objects, constructed using intelligent machinery that supports their autonomous nature and their capability to "take the initiative."
The high-level specification of a multiagent system then, additionally, requires the description of the roles of the agents in their society and the relationships they have with one another, with external agents (including humans and other systems), and with system resources. This high-level specification constrains, but does not necessarily determine, the agent interaction protocol. The specification of the agent interaction protocol is the second step in the design process. Following that, the final step is to specify the design of the agents. An advocated approach to designing a multiagent system (Jennings & Wooldridge, 2001) is sequenced as follows:
• The System Objective: What is the system intended to do?
• The Environment: Of what does the system environment consist? To what extent will the agents be able to "see" the things in the environment and to "effect" them? Consideration should be given to people, computer systems, hardware and software sensors and effectors, and existing traditional components.
• The System Organization: What are the agents in the system, and what are their roles? To whom — in the system or in the environment — do they report, and to what extent? To whom can they talk, and about what? To what resources (e.g., databases) do they have access? Evaluating the system organization for a multiagent system is highly analogous to the problem of deciding how a (human) office works.
• The Interaction Protocol: Because the individual agents are autonomous, they require some form of motivation to act. The agent interaction protocol specifies the negotiation mechanisms considered acceptable, e.g., "an English auction with payment made in cash." Another well-known, general mechanism is "contract nets," in which proposals are requested and then evaluated. The interaction protocol is crucial in that it provides the agents' motivation for doing something. In some systems, the goal of the system may not be represented in any of the individual agents, in which case it will be implicitly buried in the interaction protocol.
• The Communication Protocol: This is fairly mundane in comparison to the interaction protocol. It includes the actual form of the messages and the language used to encode them. In distributed systems, it also includes the specification of ontologies (e.g., Nowostawski et al., 2000) that determine what the messages actually mean. In multiagent systems, communication is usually asynchronous.
• The Conceptual Architecture: What architectural components will the agents have? What form of reasoning will each agent require: reactive (feedback), deliberative (proactive or feed-forward), or both (called "hybrid")? How will this reasoning be represented? For example, one form of deliberative reasoning is plans [= state charts of goals], in which case we can ask what form of plan is required. This is the start of a highly detailed design for the agents.
• The Control Architecture: How will the agents work? What mechanism will they use to manage their commitments? What mechanism will they use to schedule their actions? How will the whole thing go? If the system is widely distributed, then how do you turn it all on?
• The System Operation: How will the agents "learn" to improve their performance? How will they manage their belief revision? (This is substantially a matter of identifying beliefs that are no longer valid — a complex problem even for humans.)
A STARTING POINT: THE OPEN PROCESS FRAMEWORK
Having described the design needs for building a multiagent system, we now identify a suitable, existing process infrastructure upon which to build a facility to support the design of multiagent systems. As noted above, OPEN (Graham et al., 1997; Henderson-Sellers et al., 1998) provides a useful starting point, because it is not only defined at the metamodel level but is also itself componentized. Thus, adding further support for the design of intelligent agents is straightforward, such potential extensions having been designed a priori into the metamodel architecture of the OPEN Process Framework (OPF). The OPF is a process metamodel, or framework, from which an organizationally specific process (instance) can be generated. Some of the major elements in this metamodel are Work Units (Activities, Tasks, and Techniques, wherein Activities and Tasks say "what" is to be done, and Techniques say "how" it will be accomplished), Work Products, and Producers. Together, Work Units and Producers create Work Products, and the whole process is structured temporally by the use of Stages (phases, cycles, etc.). Each process instance is created by choosing specific instances of Activities, Tasks, Techniques, etc., from the OPF Repository (Figure 1) and specific configurations thereof (created by the application of the Construction Guidelines). OPEN thus provides a high degree of flexibility to the user organization.
Figure 1: A personalized OO development process is created from a class repository of process components using supplied construction guidelines. These are all created as instances of metalevel elements in the OPF metamodel (after Henderson-Sellers, 2001; originally published in the Journal of Object-Oriented Programming, October/November 2001, p. 10).
EXISTING OPEN SUPPORT FOR AGENT-ORIENTED SOFTWARE ENGINEERING
If we consider the similarities between object-oriented and agent-oriented development at the granularity of OPEN’s Activities, the Tasks relevant to the Activities of Project Initiation, Implementation Planning, and Project Planning will remain relatively unchanged. These are the same for any project. Business approval must be obtained, feasibility studies must be undertaken, and other general tasks must be completed. Activities such as Requirements Engineering and Build will be most affected, because this is where the project domain affects the process. Because OPEN is a full life-cycle process model, it takes into account business, training, and personnel issues. The Activities, Tasks, and
Techniques associated with these issues may vary but will not be considered in this paper. Rather, we seek specific existing support for agents. None appears in the original books on OPEN, but more recent publications (Henderson-Sellers, 2001) give a preliminary indication of the need to support agents. A newly proposed Task named Identify intelligent agents is described there. With some minor modification, it states:
Identification of agents is in some ways an extension of "finding the objects." Agents are autonomous entities that have many similarities to objects. A major difference is that whereas an object receiving a request for service must deliver that service, an agent is empowered to say "no." Agents act when "they feel like it," and not necessarily when they receive a communication or other stimulus. Agents play roles with responsibilities. These responsibilities are not only equivalent to those for objects (responsibilities for doing, knowing, and enforcing) but are also directed towards achieving organizational goals (Jennings, 2001). Agents more closely mimic the behavior of people and their decision-making strategies than objects can. Consequently, there is a greater emphasis on the roles that are played by agents. Each role is defined by four attributes: responsibilities, permissions, motivations, and protocols (Wooldridge et al., 2000).
Roles are already well supported in OPEN; thus, agent modeling has parallels with the use of roles in object modeling. An overall "community of agents" can therefore be well modeled at the highest level of abstraction using techniques of role modeling, collaboration diagrams, and other standard OO structuring techniques and diagrams. Some extensions to the Unified Modeling Language have recently been proposed by Odell et al. (2000) in order that the modeling language may be considered applicable for agent modeling. These standard modeling techniques and design notations will therefore not be discussed further here, because we will instead focus on the new additions to the developer's suite of tools and techniques.
EXTENDING OPEN SUPPORT FOR AGENT ORIENTATION
In this section, we outline the various Tasks and Techniques proposed as additions and modifications to the OPF, and especially to its Repository, to better support intelligent agent software engineering. These are given mostly in summary form, due to space limitations and the need for further refinement as additional research results become available. In particular, the
proposed Techniques for Market mechanisms, Activity scheduling, Commitment management, Learning Strategies, and Belief revision are all unresolved research issues. Nonetheless, it is clear that it is highly appropriate to create at least a placeholder of an OPEN Technique for each of these leading-edge techniques.
New Tasks

Task: Identify Agents' Roles
A good metaphor for assisting in agent design is to identify what roles the agents are intended to play. Roles are applied to agents by, e.g., Wooldridge and Ciancarini (2001), in much the same way as they are applied to objects in OOram (Reenskaug et al., 1996) and supported in OPEN (Henderson-Sellers et al., 1998). In Gaia (Wooldridge et al., 2000), roles are defined by four attributes. Responsibilities are similar to those defined for objects but are classified by liveness and safety properties. A set of permissions is then allocated to each agent to support these responsibilities. Computations are performed by a set of internal Actions (also called Activities), and there are a number of associated Protocols that govern external Interactions. This task focuses on the identification of the roles per se, together with their associated attribute set.

Task: Undertake Agent Personalization
In process management applications, agents often represent actual people. People may undertake tasks in their business with different styles (say, within an office environment). The software agent must reflect the style of the person it represents. Because these differences may be significant, this is an important task, which contributes to the overall agent design.

Task: Identify Emergent Behavior
Emergent behavior is behavior that exists at the system level but cannot be predetermined by inspection of the behavior of the individual elements of the system. For example, a colony of ants has an overall behavior not reflected in the behavior of any one individual, whose behavior, when studied, appears to be almost random. Because these emergent properties are not deterministic, identifying likely emergent behavior is a challenge for the designer. Such emergent behaviors are often critical for the success of a system.

Task: Model the Agent's Environment
Agents, unlike objects, are situated in an environment: they interact with it by observing it and by changing it. The "robot" paradigm is often used to illustrate
the difference between agents and objects — an agent's "sensors" and "effectors" are the physical or virtual devices by which it interacts with its environment.

Task: Determine Agent Interaction Protocol
The agent interaction protocol determines how the agents in a system may converse (Kraus, 2001). It specifies what they are allowed to "say" to each other. Agents may cooperate or compete with each other (Durfee, 2001). For competitive agents, the objective of the interaction protocol may be to maximize the utility of the individual agents; for cooperative agents, a key issue in designing the interaction protocol is to ensure that the whole system behaves in a coherent way without stifling the autonomy of the individual agents. Another issue for cooperative agent interaction is how to coordinate their activities, particularly when access to resources is limited; this is achieved with a coordination protocol. In process management applications, a coordination protocol is required for personal diary management of the human users (Wobcke & Sichanie, 2000). A common strategy for cooperative interaction protocols is the decomposition and distribution of tasks. This is achieved in the process management applications described here by a "delegation strategy" coupled with the contract net mechanism with focused addressing. Each agent manages the work of its user and deals with the delegation of responsibility for subprocesses to other agents. This delegation is achieved by inviting a selected set of nodes to bid for work. A bid from a node contains information on the firm and preferred constraints that that user presently has, together with information about that user's work, including an estimate of the cost, in time, that that user may incur. This employs the OPEN Technique: Contract Nets and is documented by the use of Agent UML (Odell et al., 2000).

Task: Determine Agent Communication Protocol
Communication protocols underpin the interaction protocols by determining the actual form of messages plus the languages and ontologies to be used for their coding. KQML (Knowledge Query and Manipulation Language) is an example of an agent communication language (ACL) (Finin et al., 1997); a second is the Foundation for Intelligent Physical Agents (FIPA) ACL (Odell, 2000, 2001). Both are based on speech acts. If Agent A wishes to tell Agent B something, it does so by posting a message to Agent B's message area (direct interagent communication is also possible but is infrequently utilized; Odell, 2001).
Task: Determine Delegation Strategy
The delegation strategy is part of the interaction protocol; it determines "what should be delegated to whom." The delegation strategies described here use the performance knowledge parameters described below. The material in this section and the next is nonstandard and is described in some detail. In process management applications, delegation may involve forming a group (e.g., a committee). Estimating the effectiveness of every possible group of individuals in every possible situation, and maintaining the currency of those estimates, is not feasible. Instead, to measure the performance of groups, the effectiveness of individuals at forming and managing groups is estimated; this is feasible. In this way, to form a group, an individual is selected to whom the responsibility of forming the group is delegated. In the applications, selection of what to do next may be handled manually by the user or automatically by the system. Delegation of what to whom may be handled manually, semimanually (when the user provides a short list of people to be approached), or automatically. Given a subprocess, suppose that we have some expectation of the payoff Di as a result of choosing the ith individual (i.e., agent and user pair) from the set of candidates {X1, ..., Xi, ..., Xn} to take responsibility for it. A delegation strategy at time τ is specified as S = {P1, ..., Pi, ..., Pn}, where Pi is the probability of delegating responsibility at time τ for a given task to individual Xi chosen from {X1, ..., Xi, ..., Xn}. For example, the delegation strategy best maximizes expected payoff:

Pi = 1/m   if Xi is such that Pr(Xi ») is maximal
Pi = 0     otherwise
where Pr(Xi ») means "the probability that Xi will have the highest payoff" and there are m individuals for whom Pr(Xi ») is maximal. Another strategy, prob (defined by Pi = Pr(Xi »)), also favors high payoff but gives all individuals a chance, sooner or later. It is an admissible delegation strategy, that is, one with the following properties:
• if Pr(Xi ») > Pr(Xj »), then Pi > Pj
• if Pr(Xi ») = Pr(Xj »), then Pi = Pj
• Pi > 0 (for all i)
So, the strategy best is not admissible. The strategy prob is admissible and is the default delegation strategy. It provides a balance between favoring individuals who perform well and giving occasional opportunities to poor performers to improve their performance.
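The two strategies follow directly from these definitions. The following is a minimal Python sketch, assuming the probabilities Pr(Xi ») have already been estimated; the list p_best and all function names are ours, not the chapter's.

import random

def best(p_best):
    # 'best': share probability 1 equally among the m candidates for
    # whom Pr(Xi ») is maximal; every other candidate gets zero.
    top = max(p_best)
    m = sum(1 for p in p_best if p == top)
    return [1.0 / m if p == top else 0.0 for p in p_best]

def prob(p_best):
    # 'prob': Pi = Pr(Xi »); admissible, since every Pi > 0 whenever
    # every candidate has some chance of having the highest payoff.
    total = sum(p_best)
    return [p / total for p in p_best]

def delegate(strategy, p_best):
    # Draw one individual i according to the distribution {Pi}.
    return random.choices(range(len(p_best)), weights=strategy(p_best))[0]

For example, delegate(prob, [0.5, 0.3, 0.2]) usually selects the strongest candidate but still gives the others an occasional opportunity, which is the socially desirable behavior described above.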
The strategy prob is not based on any model of user improvement, and so it cannot be claimed to be optimal in that sense, although it has socially desirable properties.

Task: Gather Performance Knowledge
To deal with selection and delegation, performance knowledge is gathered. This comprises performance statistics on the operation of every agent, plan, and activity. A set of basic parameters is described here. More recent work includes observations on the relationships between the human users in the system, in particular, the extent to which one human can trust or rely on another. These higher-level parameters will be included in future versions of the method. In the case of a parameter p that can reasonably be assumed to be normally distributed, an estimate for the mean of p, µp, is revised on the basis of the ith observation obi to:

µp(new) = (1 − α) × obi + α × µp(old)

which, given a starting value µp(initial) and some constant α, 0 < α < 1, approximates a geometrically weighted mean of all observations to date. Similarly, an estimate for the standard deviation of p, σp, is revised on the basis of the ith observation obi to:

σp(new) = (1 − α) × |obi − µp(old)| + α × σp(old)

which, given a starting value σp(initial) and some constant α, 0 < α < 1, approximates a geometrically weighted mean of the modulus of the difference between the observations and the mean to date. The constant α is chosen on the basis of the stability of the observations. Each individual agent or user pair maintains estimates for three parameters — time, cost, and likelihood of success — for the execution of all of its plans, subplans, and activities. "All things being equal," these parameters are assumed to be normally distributed; the case when "all things are not equal" is considered below. Time is the total time taken to termination. Cost is the actual cost of the resources allocated, for example, time used. The likelihood-of-success observations are binary (i.e., "success" or "fail"), and so the likelihood-of-success parameter is binomially distributed, which is approximately normally distributed under the standard conditions. Unfortunately, value is difficult to measure in process management. The system does not attempt to measure value; each individual represents the perceived value of each other individual's work as a constant for that individual. Finally, the delegate parameter estimates the amount of work delegated to each individual in each discrete time period. The delegate parameter is not normally distributed. The delegate and value estimates are associated with individuals.
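The two revision rules translate directly into code. A minimal sketch follows; the value of ALPHA is purely illustrative, and the function names are ours.

ALPHA = 0.8  # smoothing constant, 0 < ALPHA < 1, chosen on the basis
             # of the stability of the observations

def revise_mean(mu_old, ob):
    # µp(new) = (1 - α) * ob_i + α * µp(old)
    return (1 - ALPHA) * ob + ALPHA * mu_old

def revise_sd(sigma_old, mu_old, ob):
    # σp(new) = (1 - α) * |ob_i - µp(old)| + α * σp(old)
    return (1 - ALPHA) * abs(ob - mu_old) + ALPHA * sigma_old

# Example: both estimates are revised from the same (old) mean, since
# the right-hand side is evaluated before the tuple assignment.
mu, sigma = 10.0, 2.0
for ob in (11.0, 9.5, 12.5):
    mu, sigma = revise_mean(mu, ob), revise_sd(sigma, mu, ob)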
The time, cost, and likelihood of success estimates are attached to plans and activities. These three parameters are assumed to be normally distributed. If working conditions are reasonably stable, then this assumption is acceptable, but the presence of external environmental influences may invalidate it. One virtue of the assumption of normality is that it provides a statistical basis on which to query unexpected observations. If an observation lies outside the expected confidence interval, then there are grounds, to the chosen degree of certainty, to ask why it is outside. Inferred reasons Γ for why an observation is outside expected limits may sometimes be extracted by observing the interactions with the users and other agents involved. If the effect of such a reason can be quantified (perhaps by simply asking a user), then the perturbed values of {obi} are corrected to {obi | Γ}. Performance knowledge is historical. If it is used to support future decisions, then some allowance should be made for how those performance estimates are expected to have changed in time. For example, if A was good yesterday and B was bad 6 months ago, then how should we rate their expected relative performance tomorrow? The probability of A being better than B will be greater than 0.5. The standard deviation of a parameter can be interpreted as a measure of lack of confidence in its mean. It may be shown that if ρ is the expected range of values for A and B, and if σB = ρ, then the probability of A being better than B will be less than 0.79, no matter what µB, µA, and σA are. If σB = 2ρ, then this probability is less than 0.66. Thus, to allow for the historical estimate of B, determine a period by which the estimates should be "moderately useless," say 1 year, and increase σB linearly by half of the difference between its value and 2ρ (because 6 months is half of one year). This has the effect of giving B the "benefit of the doubt," as B has not been given an opportunity for 6 months.
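This "benefit of the doubt" adjustment can likewise be sketched in code. The one-year horizon follows the text, while the function and parameter names are ours.

def aged_sd(sigma, rho, age, horizon=1.0):
    # Inflate a stale standard deviation linearly towards 2 * rho, the
    # point at which an estimate is "moderately useless"; rho is the
    # expected range of values for the parameter, and age and horizon
    # are in the same units (years, following the text).
    fraction = min(age / horizon, 1.0)
    return sigma + fraction * (2.0 * rho - sigma)

For age = 0.5 (B's six-month-old estimate), σB increases by half of the difference between its value and 2ρ, exactly as described above.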
Task: Determine Conceptual Architecture
The agents used here for process management are three-layer, BDI hybrid agents (Müller, 1996); they exhibit deliberative and reactive reasoning. In other applications, it may be possible to use just one of these forms of reasoning. For example, in agent terms, an expert system is a reactive system.

Task: Determine Control Architecture
The three-layer, BDI hybrid agents used here employ the nondeterministic procedure: "on the basis of current beliefs, identify the current options; on the basis of current options and existing commitments, select the current commitments (or goals); for each newly committed goal, choose a plan for that goal; from the selected plans, choose a consistent set of things to do next (called the agent's intentions)."
Task: Determine System Operation
Determining the system operation for agents remains a research issue. By including this Task in the OPEN Process Framework, we acknowledge the need for such a Task and affiliated Techniques, even though their details are not yet fully understood or described, being currently the focus of much research worldwide. We can, however, identify a set of new OPEN Techniques that will be necessary to support this Task. These are Activity scheduling, Commitment management, Learning strategies for agents, Belief revision of agents, and Task selection by agents. Furthermore, documenting such internal designs can be accomplished by the use of the emerging agent modeling language AUML (Agent UML), as described by Odell et al. (2000). In particular, their description of agent-focused state charts and activity diagrams will likely be found useful.

Task: Determine Security Policy for Agents
Distributed systems attract additional security risks. Risks include unauthorized disclosure or alteration, denial of service, and spoofing (Odell, 2001). A policy must be determined on how to access resources in a secure manner. Most policies are based on agent identity, itself based on the notion of a credential.

Task: Code
The need to code is already identified within the OPF as OPEN Task: Code. While this was originally intended to describe the coding of object-oriented classes, it is equally applicable to agents.
New Techniques

Technique: Market Mechanisms
OPEN includes various market mechanisms that are useful in the construction of the interaction protocol for competitive agents. These include voting mechanisms, negotiation mechanisms, and various auction-based mechanisms. The negotiation and auction mechanisms are useful for building the interaction protocol for economically rational agents. (Note: this Technique is not used in our subsequent case study but is included here for completeness.)
Technique: Contract Nets
Contract nets with focused addressing (Chapter 3 by Durfee in Weiss, 1999) are used here to manage semimanual or automatic delegation and have been a mainstay in handling heterogeneity (Durfee, 2001). In the second case study herein, a bid consists of five pairs of real numbers: (Constraint, Delegate, Success, Cost, Time). The Constraint pair comprises an estimate of the earliest time at which the individual could address the task (i.e., ignoring other nonurgent things to be done) and an estimate of the time at which the individual would normally address the task if it "took its place in the in-tray." The Constraint estimates require reference to the user's diary (e.g., Wobcke & Sichanie, 2000). The Delegate pair represents delegations "in" and "out." The pairs Success, Cost, and Time are estimates of the mean and standard deviation of the corresponding parameters, which are described above under Task: Gather performance knowledge. The receiving agent then:
• attaches a subjective view of the value of the bidding individual;
• assesses the extent to which a bid should be downgraded — or not considered at all — because it violates process constraints; and
• selects an acceptable bid, if any, possibly by applying its "delegation strategy."
If there are no acceptable bids, then the receiving agent "thinks again."
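The bid structure and the receiving agent's screening step might be sketched as follows. The dataclass fields mirror the five pairs above; the scoring rule (subjective value weighted by expected likelihood of success) is our illustrative choice, not the chapter's.

from dataclasses import dataclass

@dataclass
class Bid:
    constraint: tuple  # (earliest possible start, usual "in-tray" start)
    delegate: tuple    # (work delegated in, work delegated out)
    success: tuple     # (mean, standard deviation)
    cost: tuple        # (mean, standard deviation)
    time: tuple        # (mean, standard deviation)

def screen_bids(bids, deadline, value_of):
    # Drop bids whose earliest start violates the process constraints,
    # then rank the rest by the receiver's subjective value of each
    # bidder weighted by that bidder's expected likelihood of success.
    feasible = [(name, bid) for name, bid in bids.items()
                if bid.constraint[0] <= deadline]
    return sorted(feasible,
                  key=lambda nb: value_of[nb[0]] * nb[1].success[0],
                  reverse=True)

If screen_bids returns an empty list, the receiving agent "thinks again," as described above.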
Technique: Commitment Management
Another aspect of agents is their need to manage their commitments, of which there are two forms:
• the goals that an agent is permitted to achieve — these are commitments that an agent makes to itself; and
• the things that an agent promises to do for another agent (via the contract net protocol) — these commitments are made only if the agent that asks for bids accepts the other agent's bid, whereupon the bidding agent is committed to the other agent to do something.
In the process applications described here, a key feature of commitments is their value relative to other commitments. This value typically changes substantially as new process instances are created and as circumstances change. Unfortunately, this value can only be assessed by a (human) principal, and that is how it is done. The system, though, keeps track of the cost (in time and effort) applied to a process. This enables the system to determine priorities that, for example, ensure that a (now) low-value instance in which substantial investment has been made and which is near to completion is not overlooked. Tools to estimate the parameters that support this prioritization of commitments are to be provided in OPEN.
Technique: Activity Scheduling
An agent's commitments lead to the selection of plans for those commitments. Achievement of the goals and subgoals in those plans requires that activities and tasks be performed. The members of the resulting set of tasks may conflict with each other. Furthermore, they may conflict with the current intentions of other agents, in which case a coordination protocol is required to resolve such conflict. Thus, in order to schedule these tasks, conflicts are resolved and priorities are set. This technique is used to support the OPEN Task: Determine system operation. In the process applications described here, the scheduling of activities is substantially governed by the ever-changing availability of the humans in the system. This availability is hard to estimate, as the activities of the humans are not restricted to involvement in processes managed by the system. Each user's agent maintains an estimate of its user's capacity and the priorities of the items being dealt with. The problem of scheduling activities is a finer-grained view of the commitment management technique.

Technique: Deliberative Reasoning: Plans
In the goal-driven process management applications described in the case study, a plan cannot necessarily be relied upon to achieve its goal, even if all of the subgoals on a chosen path through the plan have been achieved. On the other hand, if a plan has failed to execute, then it is possible that the plan's goal may still have been achieved. So, a necessary subgoal in every high-level plan body is a subgoal called the "success condition." The success condition (SC) is a procedure whose goal is to determine whether the plan's goal has been achieved; it is the final subgoal on every path through a plan. The execution of this procedure may succeed (✓), fail (✖), or abort (A). If the execution of the success condition fails, then the overall success of the plan is unknown (?). So, there are four possible plan exits resulting from an attempt to execute a plan, as shown in Figure 2. A plan body is represented as a directed AND/OR graph or as a state-transition diagram, in which some of the nodes are labeled with subgoals. The plan body may contain the usual conditional constructs such as if...then and iteration constructs such as while...do. A plan body has one start state (activation condition "ac" and activation action "α") and stop states labeled as success states "✓" (success action "σ"), fail states "✖" (fail action "φ"), unknown states "?" (unknown action "υ"), or abort states "A" (abort condition "ab"; abort action "ω").
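The four exits and the role of the success condition can be captured in a small sketch; treating a failed SC as a raised exception is our simplification, and all names are illustrative.

from enum import Enum

class Exit(Enum):
    SUCCESS = "✓"   # SC executed and reports the goal achieved
    FAIL = "✖"      # SC executed and reports the goal not achieved
    UNKNOWN = "?"   # the SC itself failed to execute
    ABORT = "A"     # the abort condition [ab] fired

def execute_plan(body, success_condition, abort_condition):
    # Pursue the plan body, then decide the exit via the success
    # condition, which is the final subgoal on every path.
    if abort_condition():
        return Exit.ABORT
    body()
    try:
        achieved = success_condition()
    except Exception:
        return Exit.UNKNOWN  # overall success of the plan is unknown
    return Exit.SUCCESS if achieved else Exit.FAIL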
Figure 2: The process agent plan. A plan Plan Name[Plan Goal] runs from a start state [ac]/α through the plan body to the exits ✓ (/σ), ✖ (/φ), ? (/υ), and A ([ab]/ω).
Each plan contains an optional abort condition [ab], as shown in Figure 2. These abort conditions are realized as procedural abort triggers that are activated when their plan is active. Active abort triggers scan the agent's beliefs for the presence of their abort condition. Abort triggers are only active if the goal of the plan to which they are attached is a goal that the agent is presently committed to achieving. If a plan is aborted, then any active subgoals of that plan are also aborted.

Technique: Reactive Reasoning
In reactive reasoning, a trigger is activated by the agent, which, for example, might lead to the continued execution of an active but stalled plan. As well as firing predetermined procedures, the plan can also be aborted should an abort action be activated. The basis of reactive reasoning is a rule of the format:

if ⟨trigger⟩ and ⟨belief⟩ then ⟨action⟩

where the ⟨trigger⟩ is a device to determine whether the trigger is active or not, and the ⟨belief⟩ is something that the agent may believe; the ⟨action⟩ may be simply to transfer some value to a partly executed plan, or may be more profound, such as to abort a plan and decommit a goal.
If an agent A has an active plan P that requires input from its user or another agent B, then a procedure sends a request message directly to B with a unique identifier #I, and a reactive procedure trigger is activated (i.e., made "active"):

if ⟨active⟩ and ⟨believes B's response to #I is Z⟩ then ⟨pass Z to P⟩ and ⟨not active⟩

In this way, data are passed to partly executed plans using reactive triggers. Reactive triggers of this form are associated with belief states of the form "B's response to #I is known." Such a procedure trigger is active when its associated subgoal is committed to but has not been realized.
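A one-shot trigger of this form might look as follows; plan.resume and the beliefs mapping are hypothetical stand-ins for the agent machinery.

def response_trigger(request_id, plan):
    # Reactive procedure trigger for "B's response to #I is Z": once
    # the awaited belief appears, pass Z to the stalled plan P and
    # deactivate the trigger.
    state = {"active": True}
    def fire(beliefs):
        if state["active"] and request_id in beliefs:
            plan.resume(beliefs[request_id])  # pass Z to P ...
            state["active"] = False           # ... and not active
    return fire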
The abort triggers have a higher priority than the reactive triggers. So, if a plan's abort trigger fires, and if an active subgoal in that plan is the subject of a reactive trigger, then that subgoal will be deactivated, thus preventing the reactive trigger from firing, even if the required belief is present in the world beliefs. This leads to a more detailed design [OPEN Task: Determine conceptual architecture].

Technique: Learning Strategies for Agents
A multiagent system should "learn" to improve its performance. In the process management applications described here, the agents are continually revising their estimates of the performance of the other agents by re-estimating values for each other agent's performance parameters. Thus, the agents are continually adapting to the changing behavior of the other agents. In other words, each agent is "watching" all the others; in this way, they learn to adapt to changes in circumstances. This technique is a placeholder for future research results to be used to support the OPEN Task: Determine system operation. A set of basic parameters used in our applications is described under Task: Gather performance knowledge.

Technique: Control Architecture
The control architecture used here is essentially the INTERRAP control architecture (Müller, 1996). In brief, the deliberative reasoning mechanism employs the nondeterministic procedure: "on the basis of current beliefs, identify the current options; on the basis of current options and existing commitments, select the current commitments (or goals); for each newly committed goal, choose a plan for that goal; from the selected plans, choose a consistent set of things to do next (called the agent's intentions)."
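One pass of this procedure can be sketched as follows; every agent.* call is a hypothetical selection hook that the designer must supply.

def deliberative_cycle(agent):
    # 1. On the basis of current beliefs, identify the current options.
    options = agent.identify_options(agent.beliefs)
    # 2. Select the current commitments (goals); a commitment that is
    #    no longer among the options is dropped (decommitted).
    agent.commitments = agent.select_commitments(options, agent.commitments)
    # 3. For each newly committed goal, choose a plan for that goal.
    for goal in agent.newly_committed():
        agent.plans[goal] = agent.choose_plan(goal)
    # 4. From the selected plans, choose a consistent set of things to
    #    do next: the agent's intentions.
    agent.intentions = agent.consistent_subset(agent.plans)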
If the current options do not include a current commitment, then that commitment is dropped. So, if agent A sends agent B a message M asking agent B to do something, agent B may commit to do this. If agent A then removes M from B's message area, then, at B's next deliberative cycle, B should decommit to that task.

Technique: Belief Revision of Agents
In the BDI (beliefs, desires, intentions) model of agents, the beliefs of an agent are derived from reading messages passed to it by a user, from reading documents involved in the process instance, or from reading messages in an allocated message area. This message area is analogous to an office worker's in-tray, in that messages placed there can be read, put in abeyance, or simply ignored. These beliefs often have to be modified with time, because they identify facts that are no longer true. There is thus a need to specify the technique by which this belief revision is undertaken. This is still to a large extent unresolved, because there is no obvious solution to the problem of deleting beliefs that are no longer true; agents are just like humans in this respect. In the context of the impact of beliefs on an agent's parameters, this is not too problematical, though, because estimates will eventually be revised. This technique is often used to support the OPEN Task: Determine system operation. Belief revision has been managed in the case studies described below by making the sender of any message responsible for the removal of that message when it is no longer useful or valid. This is achieved conceptually by each agent having a "notice board" that is freely available to any agent in the system. An agent may then choose not to import the contents of a message into its beliefs if there are inconsistencies. Messages sent in the context of a particular process instance are removed by a garbage collection process when that instance is fully resolved. This simple method works well as long as these responsibilities are clear. The default message format in OPEN includes slots that are used to achieve this.
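The sender-managed message area underlying this convention can be sketched as a small class; all names are ours.

class MessageArea:
    # A persistent "notice board": senders post messages and remain
    # responsible for removing those that are no longer useful or valid.
    def __init__(self):
        self.messages = {}  # message id -> (sender, content, instance)

    def post(self, msg_id, sender, content, instance=None):
        self.messages[msg_id] = (sender, content, instance)

    def withdraw(self, msg_id, sender):
        # Only the original sender may remove its own message.
        if self.messages.get(msg_id, (None,))[0] == sender:
            del self.messages[msg_id]

    def collect_garbage(self, instance):
        # Remove all messages tied to a fully resolved process instance.
        self.messages = {k: v for k, v in self.messages.items()
                         if v[2] != instance}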
Technique: Task Selection by Agents
There may be a number of different tasks that can be invoked to achieve a goal. The selection of tasks by agents is dealt with by the plans and, if there is a choice, by applying a selection strategy. A default selection strategy here is "the probability of choosing a task is the probability that that task is the best task." This strategy relies on a mechanism to determine the probability that a task is the best task. Mechanisms have been described for estimating the expected cost and value of tasks coupled with a set of human contributors.
This allows the system to pick appropriate tasks to progress the job at hand, given the availability and compatibility of staff, the potential value of the task to a process instance, and its estimated cost. This technique is used by default to support the OPEN Task: Determine system operation. Tools that manage the parameters that underpin these mechanisms are to be made available in OPEN. In the following sections, we exemplify the use of several of the newly proposed Tasks and Techniques in the context of business processes.
TWO CATEGORIES OF BUSINESS PROCESS
Business process management is a suitable area for multiagent systems (Jennings et al., 1998, 2000; Koudouridis et al., 2001). The two categories of business process are as follows:
• A task-driven process can be associated with a (possibly conditional) sequence of activities such that execution of the corresponding sequence of tasks "always" achieves the process goal. Each of these activities has a goal and is associated with a task that on its termination "always" achieves this goal. Production workflows are often task-driven processes.
• A goal-driven process has a process goal and can be associated with a (possibly conditional) sequence of subgoals such that achievement of this sequence "always" achieves the process goal. Achievement of a subprocess goal is the termination condition for that subprocess. Each of these subgoals is associated with at least one activity and so with at least one task. Some of these tasks may work better than others, and there may be no way of knowing which is which. A task for an activity may fail outright or may be otherwise ineffective at achieving its goal. In other words, unpredictable task failure is a feature of goal-driven processes. If a task fails, then another way to achieve its subgoal may be sought.
Figure 3 shows a simplified view of the management of goal-driven processes, in which the primitives are goals and plans. Some goals are associated with executable activities and so with tasks. If a goal is not associated with an activity, then it should be the subject of at least one plan. Figure 3 presents a simplified view, because a subgoal of a goal-driven process goal will not necessarily be goal-driven; aborting plans is also ignored. Two case studies follow: the first considers a task-driven process management system, and the second considers a goal-driven process management system.
Figure 3: Goal-driven process management (simplified view). The loop initializes the process goal, selects a plan (using performance knowledge of how effective plans are), identifies the next goal or activity to try, executes the corresponding procedure, and evaluates the outcome, adding to both process knowledge (how much the instance has/should cost) and performance knowledge.
CASE STUDY 1
To avoid a lengthy description, the subject of both case studies involves the processing of a loan application by a bank. This is typically a workflow, or task-driven, process. The same example is used for the design of a single-agent system and for the design of a multiagent system (see the second case study). State charts [OPEN Technique: State modeling] are used to model task-driven processes (Figure 4). To do this, first construct a node labeled with the activity that creates the process. From that node, directed arcs lead to other nodes labeled with activities, so that every possible sequence of activities that leads to a node that destroys the process is represented. If more than one arc follows a node, then those arcs are labeled with the condition under which they should be followed.

Figure 4: Statechart for a task-driven process (two states, A and B, joined by an arc labeled α(C)/D)
Figure 5: Partial statechart for a loan application process. States include "application being assessed," "application being scrutinised," "application assessed urgently," and "offer being made"; transitions include "application checked(OK)" (remove from checker, enter on assessor), "assessment complete(risky)" (enter on scrutiniser), "assessment complete(not risky)" (enter on loans officer), and "assessment timed out" (remove from assessor, enter on supervisor).
No arcs lead from a node that destroys a process. Then, relabel the arcs as α(C)/D, where α is the event that the activity that precedes the arc has terminated, C is the arc condition, if any, and D is the set of actions that the management system should perform prior to the activity following the arc. Some of what a Web-based process management system has to do is to add or delete pointers to virtual documents to or from the users' work areas. Operations of this sort are represented as actions D on the state chart. For example, Figure 5 shows part of a state chart for a loan application, where the primitives "remove" and "enter" delete and add pointers in this way. For a task-driven process, the completion of an activity is equated to the realization of the activity's goal. Thus, the only way that a process instance will not progress is if its activity instance is aborted for some reason, such as time constraints. In Figure 5, the event "assessment timed out" deals with such an eventuality. The resulting state chart is implemented simply as event–condition–action state-transition rules of the form:

if in state A and event α occurs and condition C is true
then perform action D and enter state B

Task-driven process management can be effected using a single reactive agent or expert system containing rules of this form. If the rules in such a knowledge base are indexed by their "from" state, then the maintenance of the knowledge base is quite manageable. For example, the "top" transition in Figure 5 is:

if in state(application being assessed) and event(assessment complete) occurs and condition(application assessed risky) is true
then perform action(remove from assessor's "Out Tray" and add to scrutiniser's "In Tray") and enter state(application being scrutinised)
Figure 6: State label to manage a "circulating" document
circulate to:
  Fred     considering, sent 6 Feb
  John     not yet sent
  Jane     verdict: "no"
  Mary     verdict: "yes"
  Peter    considering, sent 8 Feb
  Michael  not yet sent
The state label can be quite complex. For example, a state label for a process that is to be circulated among n people, two at a time, until some event occurs can be represented as an n × 2 matrix; an example is shown in Figure 6.
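Such a reactive agent reduces to a small rule interpreter. The following is a minimal sketch, with only the "top" transition in its rule table and a print statement standing in for the pointer manipulation; all names are ours.

RULES = [
    {"state": "application being assessed",
     "event": "assessment complete",
     "cond": lambda inst: inst["risk"] == "risky",
     "action": lambda inst: print("remove from assessor; enter on scrutiniser"),
     "next": "application being scrutinised"},
]

def handle(instance, event):
    # Fire the first rule matching the current state, the event, and
    # the rule's condition; perform its action and enter the new state.
    for rule in RULES:
        if (rule["state"] == instance["state"] and rule["event"] == event
                and rule["cond"](instance)):
            rule["action"](instance)
            instance["state"] = rule["next"]
            return True
    return False  # no applicable rule: the event is ignored

Indexing RULES by their "from" state, as suggested above, would keep the lookup (and the maintenance of the rule base) manageable as the rule set grows.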
System Objective, Environment, and Organization
The system objective is to manage the processes modeled above. The environment [OPEN Task: Model the agent's environment] consists of users, who are assumed to have personal computing equipment with a Web browser. The system organization consists of a single-agent system simply managing the (task-driven) processes.
The Conceptual Architecture, the Control Architecture, and System Operation
The conceptual architecture is a reactive architecture; i.e., it does not support proactive, feed-forward reasoning, because there is no such need in task-driven processes. All that the system has to do is to implement the event–condition–action state-transition rules described above. The control architecture triggers rules in the order in which the triggering events occur. In this simple example, there are no issues to decide for system operation, whereas in Case Study 2, the system operation receives considerably more attention.
CASE STUDY 2
Goal-driven processes [OPEN Task: Identify agents' goals] may be modeled as state and activity charts (Muth et al., 1998). The primitives of that model are activities and states. An activity chart specifies the data flow between activities; it is a directed graph in which the arcs are annotated with data items. A state chart is a representation of a finite state machine in which the transitions are annotated with event–condition–action rules (see Figure 4). Muth et al. (1998) show that the state and activity chart representation may be decomposed to pre-empt a distributed implementation. Each event on a state chart may be associated with a goal to achieve that event, and so a state chart may be converted to a plan whose nodes are labeled with such goals. Unlike task-driven processes, the successful execution of a plan for a goal-driven process is not necessarily related to the achievement of its goal. One reason for this is that an instance may make progress outside the process management system — two players could go for lunch, for example. That is, when managing goal-driven processes, there may be no way of knowing the "best" task to do next. Each high-level plan for a goal-driven process should therefore terminate with a check of whether its goal has been achieved. To represent goal-driven processes, a form of plan is required that can accommodate failure; this is discussed below. Thus, goal-driven process management requires both a software architecture that can cope naturally with failure and some technique for intelligently selecting the "best" task to do next (Debenham, 2000) [OPEN Task: Determine conceptual architecture; OPEN Technique: Task selection by agents]. Any general-purpose agent architecture can achieve the first requirement, but the process architecture described below is particularly suitable.
The System Objective, Environment, and Organization
In the goal-driven process management system, an agent supports each (human) user. These agents manage their users' work and also manage the work that a user has delegated to another user/agent pair. Subprocess delegation is the transfer of responsibility for a subprocess from one agent to another [OPEN Technique: Delegation analysis]. A delegation strategy decides who should be given responsibility for doing what. Delegation strategies in manual systems can be quite elementary; delegation is a job that some humans are not very good at. A user of the system may specify the delegation strategy and may either permit the agent to delegate or delegate manually. In doing this, the user has considerable flexibility, first in defining payoff and second in specifying the strategy.
Figure 7: A system node in the multiagent system
Work area
other agents Agent
User Diary
virtual documents
Figure 7. A system node in the multiagent system
A delegation strategy may attempt to balance some of three conflicting principles: maximizing payoff, maximizing opportunities for poor performers to improve, and balancing the workload (Wooldridge & Jennings, 1998). The objective of the system is to manage goal-driven processes with a specified interaction protocol and communication protocol [OPEN Tasks: Determine agent interaction protocol; Determine agent communication protocol]. The system's organization consists of one agent for each (human) user; the role of each agent is that of an assistant to its user. The components of each node in this system are illustrated in Figure 7. The user interacts with a virtual work area and a virtual diary. The work area contains three components: the process instances awaiting the attention of the user, the process instances for which the user has delegated responsibility to another agent, and the process instances that the agent does not understand. The diary contains the scheduled commitments of the user. The agent manages the work area and may also interact with the diary.
The Conceptual Architecture
The conceptual architecture of the agents belongs to a well-documented class. Wooldridge describes a variety of architectures (Chapter 1 in Weiss, 1999). One well-documented class of hybrid architectures is the three-layer BDI agent architectures. One member of this class is the INTERRAP architecture (Müller, 1996), which has its origins in the work of Rao and Georgeff (1995).
Figure 8: Conceptual architecture. A three-layer BDI agent (social, self, and world beliefs; cooperative, local, and procedural goals and intentions) together with a message area managed by a message manager, connected to its user, documents, and the other agents.
In the goal-directed process management system, the agent's conceptual architecture differs slightly from the INTERRAP conceptual architecture; it is intended specifically for business process applications. This conceptual architecture is shown in Figure 8. It consists of a three-layer BDI architecture together with a message area managed by a message manager. Access to the message area is given to the other agents in the system, who may post messages there and, if they wish, may remove messages that they have posted. The idea behind the message area is to establish a persistent part of the agent to which the other agents have access. This avoids other agents tampering directly with an agent's beliefs and enables agents to freely remove their messages from a receiving agent's message board if they wish. The message area is rather like a person's office "in-tray," into which agents may place documents and from which they may remove those documents if they wish. The agent's world beliefs are derived from reading messages received from a user, from reading the documents involved in the process instance, or from reading messages in the message area. These activities are fundamentally different in that documents are "passive": they are read only when information is required, whereas users and other agents send messages when they feel like it. Beliefs play two roles. First, they may be partly or wholly responsible for activating a local or cooperative trigger that leads to the agent committing to a goal and may thus initiate an intention (e.g., a plan to achieve what a message asks, such as "please do xyz"). This is part of the deliberative reasoning mechanism [OPEN Task: Determine reasoning strategy for agents]. Second, they can be partly or wholly responsible for activating a reactive procedure trigger that, for example, enables the execution of an active plan to progress. This is part of the reactive reasoning mechanism.
Figure 9: Plan for assessment example illustrated in Figure 5 (plan Assess_application, with overall goal "Application X has been dealt with in time τ": application X is assessed (RESULT = OK or RESULT = ¬OK) by time (NOW + τ – ∆), and acknowledgements for application X are awaited from the Loans, Supervisor, and Scrutiniser agents by time (NOW + ∆); failure to receive an acknowledgement triggers the respective fail action AdU(#1), AdU(#2), or AdU(#3))

Deliberative Reasoning
The form of plan used here is slightly more elaborate than the form of agent plan described in Rao and Georgeff (1995), where plans are built from single-entry, triple-exit blocks. Though that approach is powerful, it is inappropriate for process management, because in this case, whether a plan has executed successfully is not necessarily related to whether that plan's goal has been achieved. In the first case study, we described the inclusion of time and space constraints in a state-chart-based process model. The inclusion of these constraints substantially increases the complexity of that model. Consider now the management of the same process (Figure 5), and consider the agent for the Assessor. That agent may have a plan similar to that shown in Figure 9, which shows how constraints are dealt with in this formalism. That plan may appear to be conceptually no simpler than the representation shown in Figure 4, but it also has to cope with the interagent communication [OPEN Task: Determine communication protocol]. Three "fail actions" are shown in Figure 9: "AdU" is a hard-wired action that means "advise the agent's user," with the corresponding error message signaling that some calamity has occurred. In this example, the three hard-wired actions indicate that no acknowledgment was received from the three respective agents within some preset time ∆. No abort or unknown states are shown in Figure 9; for each of the four subgoals, the fail ("✖") exit is entered in lieu of an abort or unknown state.
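As a concrete reading of the deadline handling in Figure 9, the following is a minimal sketch, assuming a subgoal that must be acknowledged within a preset time ∆; the class and names are ours for illustration, not the chapter's.

```python
import time

def adu(message):
    """Hard-wired fail action: advise the agent's user."""
    print(f"AdU: {message}")

class Subgoal:
    """One subgoal of an Assess_application-style plan: an acknowledgment
    is expected within `delta` seconds, otherwise the fail action fires."""
    def __init__(self, description, delta, fail_action):
        self.description = description
        self.deadline = time.time() + delta   # NOW + delta
        self.acknowledged = False
        self.fail_action = fail_action

    def acknowledge(self):
        self.acknowledged = True

    def check(self):
        if not self.acknowledged and time.time() > self.deadline:
            self.fail_action(f"no acknowledgment: {self.description}")
            return False
        return self.acknowledged

ack = Subgoal("Acknowledgement for application X from Loans agent",
              delta=0.01, fail_action=adu)
time.sleep(0.02)              # the deadline passes with no acknowledgment...
ack.check()                   # ...so the hard-wired AdU fail action fires
```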
Reactive Reasoning
Reactive reasoning plays two roles in this case study: first, if a plan is aborted, then its abort action is activated; second, if a procedure trigger fires, then its procedure is activated—this includes hard-wired procedure triggers that deal with urgent messages such as "the building is on fire!" Of these two roles, the first takes precedence over the second.
The Control Architecture
The control architecture is essentially the INTERRAP control architecture. In outline, the deliberative reasoning mechanism employs the following nondeterministic procedure: on the basis of current beliefs, identify the current options; on the basis of current options and existing commitments, select the current commitments (or goals); for each newly committed goal, choose a plan for that goal; and from the selected plans, choose a consistent set of things to do next (called the agent's intentions). If the current options do not include a current commitment, then that commitment is dropped. So if agent A sends agent B a message M asking agent B to do something, agent B may commit to do this. If agent A then removes M from B's message area, then, at B's next deliberative cycle, B should decommit from that task. The reactive reasoning mechanism takes precedence over the deliberative reasoning mechanism. The reactive frequency is the frequency at which an attempt is made to fire all active reactive triggers; here it is once every 30 seconds. The deliberative frequency is the frequency at which the deliberative reasoning mechanism is activated. To maintain some stability in each user's work area, the deliberative frequency is once every 5 minutes.
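The cycle just outlined can be made concrete with a short sketch. The following is illustrative only (the chapter prescribes no code): a message-area-driven deliberative step that adopts commitments from posted requests and drops any commitment whose originating message has been withdrawn. All class and field names are our own.

```python
class ProcessAgent:
    """Sketch of the two-frequency control regime described above: a
    reactive pass every 30 seconds, a deliberative cycle every 5 minutes."""

    REACTIVE_PERIOD = 30        # seconds between reactive passes
    DELIBERATIVE_PERIOD = 300   # seconds between deliberative cycles

    def __init__(self):
        self.message_area = []  # other agents may post and remove messages here
        self.commitments = set()
        self.intentions = []
        self.triggers = []      # (condition, action) pairs for the reactive layer

    def deliberate(self):
        # Current options are derived from current beliefs; here they are
        # simply the requests still present in the message area.
        options = {m["task"] for m in self.message_area}
        # Drop commitments no longer among the current options (e.g., the
        # requesting agent withdrew its message), then adopt new ones.
        self.commitments &= options
        self.commitments |= options
        # One plan per goal; a fuller agent would choose among alternative
        # plans and keep only a consistent subset as its intentions.
        self.intentions = sorted(self.commitments)

    def react(self):
        # Reactive reasoning takes precedence and runs on every pass.
        for condition, action in self.triggers:
            if condition(self):
                action(self)

# Agent B commits to a task requested by agent A, then decommits once A
# removes its message from B's message area.
b = ProcessAgent()
b.message_area.append({"sender": "A", "task": "do xyz"})
b.deliberate()
assert "do xyz" in b.commitments
b.message_area.clear()          # A removes message M
b.deliberate()                  # at B's next deliberative cycle...
assert "do xyz" not in b.commitments
```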
The System Operation
For goal-directed processes, there may be no way of knowing what the "best" thing to do next is, and that next thing may involve delegating the responsibility for a subprocess to another agent (Lind, 2001). This raises two related issues. The first is selection: given a goal, select the "best" plan or activity for that goal. The second is delegation: deciding whom to ask to take responsibility for what and then following up on the delegated responsibility to make sure that the work is done. The sense in which "best" is used here does not mean that selection and delegation are optimization problems. A process management system is one part of an organization. Ideally, the goal of a process management system
should be that of its organization, such as "to maximize corporate profits." But, unless measurements on the full range of corporate activity are available to it, the management system is unable to address such a global goal. On the other hand, attempts to optimize the performance of the process management system alone can lead, for example, to overuse of the best-performing staff. So, if the only measurements available are derived from within the process management function, then the meaning of "best" should take note of global implications, such as equity in the working environment [OPEN Task: Model the agent's environment], as well as the quality of the process output. A definition of "best" in functional process management terms that attempts to address corporate priorities may lead to conflicting principles, such as maximizing payoff, providing opportunities for poor performers to improve, and balancing workload. In the absence of a satisfactory meaning of "best," and with only the performance knowledge to guide the decisions, the approach taken to plan/activity selection is to ask the user to provide a utility function defined in terms of the performance parameters. If this utility function is a combination of (assumed) normally distributed parameters, then a reasonable plan/activity selection strategy [OPEN Task: Identify agents' goals] is, given a goal, to choose each plan (or activity) from the available plans (activities) with the probability that that plan (activity) has the highest expected utility. Using this strategy, even poor plans have a chance of being selected and, maybe, of performing better than expected. Contract nets [OPEN Technique: Contract nets] with focused addressing are often used to manage semimanual or automatic delegation.
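One way to realize this strategy (ours, not the authors') is Thompson-sampling-style selection: draw one sample from each plan's assumed normal utility distribution and pick the argmax, which selects each plan exactly with the probability that its draw is highest. The plan names and numbers below are hypothetical.

```python
import random

def select(plans):
    """plans maps name -> (mean utility, standard deviation). One sample
    per plan, then argmax: each plan is chosen with the probability that
    its sampled utility is the highest."""
    draws = {name: random.gauss(mu, sigma) for name, (mu, sigma) in plans.items()}
    return max(draws, key=draws.get)

plans = {"plan_a": (0.70, 0.10),   # strong and reliable
         "plan_b": (0.65, 0.25),   # weaker on average, but uncertain
         "plan_c": (0.40, 0.05)}   # poor, and known to be poor

counts = {name: 0 for name in plans}
for _ in range(10_000):
    counts[select(plans)] += 1
print(counts)  # plan_b is still tried fairly often; plan_c almost never
```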
ASSESSMENT
The applications built are distributed multiagent systems. This enables complex tasks to be managed, because each node is individually responsible for the way in which it goes about its business. That is, the plan in each agent only has to deal with the goals that agent has to achieve. For example, a complete high-level plan for an Assessor agent is shown in Figure 9. This simplifies the design of plans for the agents. As a system built from autonomous components, each node in the goal-driven system has to cope with the unexpected failure of other nodes; this complicates the design of the plans. The delegation strategy was discussed above. An overriding principle is required to determine how delegation is to be dealt with, no matter what parameters are used to support it. For example, if A delegates the responsibility for a subprocess to B, who, in turn, delegates the same subprocess to C, then
should B advise A of this second delegation—thus removing B from the responsibility chain—or should B remain in the responsibility chain? Distributed, goal-driven systems are considerably more expensive (approximately four times the programming effort) to build than task-driven systems. Having made this investment, dividends flow from the comparative ease with which new processes are included, in that only those agents involved in a process need to develop plans to cope with it. There is also a negative here. The system has grown around a principle of personalization, i.e., each individual is responsible for deciding how their agent operates. This means that similar plans may be constructed at a number of nodes by the users at those nodes to deal with the same subprocess. One way of managing this is to publish solutions as they are constructed, but that option has not been pursued. The application of the extended OPEN Process Framework has been exemplified only in terms of its support for particular tasks and techniques, mostly at the design level. This worked well. Future evaluations need to create a full life-cycle development process, which will include not only technical but also management issues, including a well-specified life-cycle model, such as the Contract-Driven Life-cycle (CDLC) model generally advocated for use in OPEN process instances. The CDLC (or another) life-cycle model adds concepts such as stages, phases, builds, and milestones as project-management-focused sequencing superimposed upon the technical Activity/Task interfacing and sequencing discussed here.
FUTURE TRENDS
Agents are beginning to enter the mainstream of software engineering (Jennings, 2001), emerging from the research laboratory into commercial utilization. In order to build real agent-oriented systems for commercial applications, their design must encompass all aspects of traditional software engineering, not just focus (as in the past) on internal design issues. The type of methodological approach described here and exemplified by the OPEN Process Framework (Firesmith & Henderson-Sellers, 2002) or by the Gaia approach (Wooldridge et al., 2000), which binds together agent technology and solid OO-based processes and methodologies, should provide, in the near future, the required substantive support. Value is also seen in codeveloping the "theory" of the process-focused approach and its application, with evaluation in case studies undertaken in parallel.
SUMMARY
A first attempt to augment the OPF with support for agents has been described. While the initial results are promising, there is still much methodology refinement necessary before full methodological support for agent-oriented software development becomes commercially available. The proposed extensions (new Tasks and Techniques for OPEN) are illustrated in two case studies of business processes.
ENDNOTES
1. Note that this is an entirely different meaning of the word Activity from that used in OPEN.
2. This is particularly true for an agent-oriented system, when agents are (potentially at least) mobile [a future OPEN Task: Model agent mobility].
REFERENCES
Debenham, J.K. (2000). Supporting strategic process. In Proceedings Fifth International Conference on the Practical Application of Intelligent Agents and Multi-Agents (237–256). Manchester, UK.
Durfee, E.H. (2001). Scaling up agent coordination strategies. IEEE Computer, 34(7), 39–46.
Finin, T., Labrou, Y., & Mayfield, J. (1997). KQML as an agent communication language. In J. Bradshaw (Ed.), Software Agents. Cambridge, MA: MIT Press.
Firesmith, D.G. & Henderson-Sellers, B. (2002). The OPEN Process Framework: An Introduction. Harlow, UK: Addison-Wesley.
Graham, I., Henderson-Sellers, B., & Younessi, H. (1997). The OPEN Process Specification. Harlow, UK: Addison-Wesley.
Henderson-Sellers, B. (2001). Enhancing the OPF repository. JOOP, 14(4), 10–12, 22.
Henderson-Sellers, B., Simons, A.J.H., & Younessi, H. (1998). The OPEN Toolbox of Techniques. Harlow, UK: Addison-Wesley.
Jennings, N. (2001). Agent of change. Application Development Advisor, 5(3), 6.
Jennings, N.R. & Wooldridge, M. (2001). Agent-oriented software engineering. In J. Bradshaw (Ed.), Handbook of Agent Technology (in press). Cambridge, MA: AAAI/MIT Press.
Jennings, N.R., Faratin, P., Norman, T.J., O'Brien, P., & Odgers, B. (2000). Autonomous agents for business process management. Int. J. Applied Artificial Intelligence, 14(2), 145–189.
Jennings, N.R., Sycara, K., & Wooldridge, M. (1998). A roadmap of agent research and development. Int. Journal of Autonomous Agents and Multi-Agent Systems, 1(1), 7–38.
Koudouridis, G., Corley, S., Dennis, M., Ouzounis, V., Van Laenen, F., Garijo, F., & Reynolds, H. (2001). Communications management process integration using software agents: A specification of a framework for agent-oriented workflow management systems. EURESCOM Project P815. http://www.eurescom.de/public/projectresults/P800-series/815d1.htm.
Kraus, S. (2001). Strategic Negotiation in Multiagent Environments. Cambridge, MA: MIT Press.
Lind, J. (2001). Iterative Software Engineering for Multiagent Systems: The Massive Method. Berlin, Germany: Springer-Verlag.
Müller, J.P. (1996). The Design of Intelligent Agents. Berlin, Germany: Springer-Verlag.
Muth, P., Wodtke, D., Weißenfels, J., Kotz, D.A., & Weikum, G. (1998). From centralized workflow specification to distributed workflow execution. J. Intelligent Information Systems, 10(2).
Nowostawski, M., Bush, G., Purvis, M., & Cranefield, S. (2000). Platforms for agent-oriented software engineering. In Proceedings 7th Asia Pacific Software Engineering Conference (480–488). Los Alamitos, CA: IEEE Computer Society Press.
Odell, J. (2000). Objects and agents: How do they differ? JOOP, 13(6), 50–53.
Odell, J. (2001). Key issues for agent technology. JOOP, 13(9), 23–27, 31.
Odell, J., Van Dyke Parunak, H., & Bauer, B. (2000). Extending UML for agents. In G. Wagner, Y. Lesperance, & E. Yu (Eds.), Proceedings Agent-Oriented Information Systems Workshop, 17th National Conference on Artificial Intelligence (3–17). Austin, TX.
Rao, A.S. & Georgeff, M.P. (1995). BDI agents: From theory to practice. In Proceedings First International Conference on Multiagent Systems (312–319). San Francisco, CA.
Reenskaug, T., Wold, P., & Lehne, O.A. (1996). Working with Objects: The OOram Software Engineering Manual. Greenwich, CT: Manning.
Weiss, G. (Ed.). (1999). Multiagent Systems. Cambridge, MA: The MIT Press.
Wobcke, W. & Sichanie, A. (2000). Personal diary management with fuzzy preferences. In Proceedings Fifth Int. Conf. on the Practical Application of Intelligent Agents and Multi-Agents. Manchester, UK.
Wooldridge, M. (1997). Agent-based software engineering. IEE Procs. Software Eng., 144, 26–37.
Wooldridge, M. & Ciancarini, P. (2001). Agent-oriented software engineering: The state of the art. In P. Ciancarini & M. Wooldridge (Eds.), Agent-Oriented Software Engineering (1–22). Berlin, Germany: Springer-Verlag.
Wooldridge, M. & Jennings, N.R. (1998). Pitfalls of agent-oriented development. In Procs. 2nd Int. Conf. on Autonomous Agents (385–391). Minneapolis/St. Paul, MN.
Wooldridge, M., Jennings, N.R., & Kinny, D. (2000). The Gaia methodology for agent-oriented analysis and design. J. Autonomous Agents and Multi-Agent Systems, 3, 285–312.
Chapter IX
Toward an Organization-Oriented Design Methodology for Agent Societies

Virginia Dignum, Utrecht University, The Netherlands
Hans Weigand, Tilburg University, The Netherlands
ABSTRACT
In this chapter, we present a framework for the design of agent societies that considers the influence of social and organizational aspects on the functionality and objectives of the agent society and specifies the development steps for the design and development of an agent-based system for a particular domain. Our approach provides a generic frame that directly relates to the organizational perception of the problem. Based on the coordination characteristics of a domain, the methodology provides three frameworks for societies (market, hierarchy, and network). These frameworks relate to the organizational perception of a problem and allow existing methodologies to be used for the development, modeling, and formalization of each step. The methodology supports the development of increasingly detailed models of the society and its components.
INTRODUCTION
In an increasing number of domains, organizations need to work together in transactions, tasks, or missions. Work relationships between people and enterprises are shifting from the "job-for-life" paradigm to project-based virtual enterprises in which people and organizations become independent contractors. These considerations lead to an increasing need for a transparent representation and implementation of work processes. In such settings, the ability to organize and maintain business processes, the support of communication and collaboration, and the management of knowledge are increasingly important issues for ensuring the survival and sustainable advantage of organizations. The fact that business processes are highly dynamic and unpredictable makes it difficult to give a complete a priori specification of all the activities that need to be performed, what their knowledge needs are, and how they should be ordered.

In organizations, there is often a decentralized ownership of the data, expertise, control, and resources involved in business processes. Different groups within organizations are relatively autonomous, in the sense that they control how their resources are created, managed, or consumed, and by whom, at what cost, and in what time frame. Often, multiple, physically distributed organizations (or parts thereof) are involved in one business process. Each organization, or part of an organization, attempts to maximize its own profit within the overall activity. There is a high degree of natural concurrency (many interrelated tasks and actors are working simultaneously at any given point of the business process), which makes it imperative to be able to monitor and manage the overall business process (e.g., total time, total budget, etc.).

Software agents, characterized as autonomous entities with reasoning and communicative capabilities, are eminently suitable to implement, simulate, or represent autonomous real-life entities and, therefore, are an ideal means to model organizations. It is commonly accepted that agents are an effective solution in situations where the domain involves a number of distinct problem-solving entities, data sources, and other resources that are physically or logically distributed and that need to interact to solve a problem. Therefore, because of the proactive and autonomous behavior of agents, it is natural to design organizational support systems using agent societies that mimic the behavior and structure of human organizations (Zambonelli et al., 2001).

In order to make agent technology widely accepted and used in industry, it is necessary to clearly specify the types of problems suitable for an agent approach and the benefits of agents over other technologies. Furthermore, it
is necessary to develop engineering methodologies that focus not only on the internal organization of each of the intervening agents but also on the social aspects of the domain (Omicini, 2001). However, as yet, there is no well-established and all-encompassing agent-oriented methodology that covers the whole development process of agent systems, from requirements acquisition to implementation and testing. Most existing methodologies concentrate on just one part of the total picture or are too formal to be applicable in practice. A methodology for designing multiagent systems must be specific enough to allow engineers to design the system and generic enough to allow the acceptance and implementation of multiagent systems within an organization, allowing for the involvement of users, managers, and project teams.
Objectives
We propose a framework that describes all the stages of development of a multiagent system, takes an organizational perspective on systems design, and specifies all the development steps for the design and development of an agent-based system for a particular domain. Specific agent-oriented methodologies can be used for the development and modeling of each of the development steps. We believe that such a generic framework, based on the organizational view, will contribute to the acceptance of multiagent technology by organizations. Following the development criteria proposed by Sycara (1998), we define a social framework for agent communities based on organizational coordination models that "implement" the generic interaction, cooperation, and communication mechanisms that occur in the problem domain. The proposed methodology allows a generic coordination model to be tailored to a given application, and its specific agent roles and interactions to be determined.
BACKGROUND
Social Concepts in Agent Research
Nowadays, there is a rising awareness that multiagent systems and cybersocieties can best be understood and developed if they are inspired by human social phenomena (Artikis et al., 2001; Castelfranchi, 2000; Zambonelli et al., 2001). This is in many ways a novel concept within agent research, even if sociability has always been considered an important characteristic of agents. Until recently, the relation between environment and agent has been considered from an individualistic perspective, that is, from the perspective of the agent, in terms of how it can affect the environment or be affected by it.
In an individualistic view of multiagent systems, agents are individual entities socially situated in an environment; that is, their behavior depends on and reacts to the environment and to the other agents in it (Dautenhahn, 2000). It is therefore not possible to impose requirements and objectives on the global aspects of the system, which is paramount in business environments. When multiagent systems, or agent societies, are considered from an organizational point of view, the concept of desirable social behavior becomes of utmost importance. In a business environment, the behavior of the global system and the collective aspects of the domain, such as stability over time, predictability, and commitment to aims and strategies, must be considered. Organization-oriented agent societies take a collectivist view that considers agents as being socially embedded (Edmonds, 1999). If an agent is socially embedded, it needs to consider not only its own behavior but also the behavior of the total system and how the two influence each other. Multiagent systems that are developed to model and support organizations need coordination frameworks that mimic the coordination structures of the particular organization. The organizational structure determines important autonomous activities that must be explicitly organized into autonomous entities and relationships in the conceptual model of the agent society (Dignum et al., 2001). Furthermore, the multiagent system must be able to adapt dynamically to changes in organization structure, aims, and interactions.
Society Models—A Brief Overview
The term society is used in a similar way in agent society research as in human or ecological societies. The role of any society is to allow its members to coexist in a shared environment and pursue their respective roles in the presence of, or in cooperation with, others. The main aspects in the definition of a society are purpose, structure, rules, and norms. Structure is determined by roles, interaction rules, and communication language. Rules and norms describe the desirable behavior of members and are established and enforced by institutions that often have a legal standing and thus lend legitimacy and security to members. A further advantage of the organization-oriented view on designing multiagent systems is that it allows for heterogeneity of languages, applications, and architectures during implementation. AALAADIN (Ferber & Gutknecht, 1998) is a model for agent societies based on the organizational perspective. This model is based on the basic notions of agent, role, and group. Groups in AALAADIN are defined as atomic sets of agents and do not incorporate the notion of goal, which we feel is an important aspect of societies, because usually, societies are created and
maintained to realize a certain objective, dependent on the domain goals and requirements. Artikis et al. (2001) provide a formal characterization of agent societies that views societies as normative systems and describes agent behavior and society rules in terms of the normative consequences of the agent's role in the society. This model is neutral with respect to the internal architecture of the agents and explicitly represents the communication language, norms, behavior rules, and agent ownership as parameters of the society model. Recently, Davidsson (2001) proposed a classification for artificial societies based on the following characteristics:
• Openness—Describing the possibilities for any agent to join the society
• Flexibility—Indicating the degree to which agent behavior is restricted by society rules and norms
• Stability—Defining the predictability of the consequences of actions
• Trustfulness—Specifying the extent to which agent owners may trust the society
Based on this classification, two new types of agent societies, semiopen and semiclosed, are introduced that combine the flexibility of open agent societies with the stability of closed societies. This balance between flexibility and stability results in a system in which trust is achieved by mechanisms that enforce ethical behavior between agents.
Coordination
Relating society models to the organizational perception of the problem can facilitate the development of organization-oriented multiagent systems. That is, a common ground of understanding must be found between agent engineers and organizational practitioners. Coordination is the ideal candidate for this common ground. It is generally recognized that coordination is an important problem inherent to the design and implementation of multiagent systems (Bond & Gasser, 1998), but the implications of coordination models for the architecture and design of agent societies are not often considered. Based on ideas from organizational science research, we propose a framework for agent societies that considers and reflects the implications of the coordination model of the real-life organization being modeled. In the following, we highlight some views on coordination that currently hold in economics and organizational sciences and in computer science and distributed artificial intelligence.
Table 1: Comparison of organizational forms

                        MARKET                 NETWORK                      HIERARCHY
Coordination form       Price mechanism        Collaboration                Supervision
Relation                Competition            Mutual interest              Authority
Primary means of
communication           Prices                 Relationships                Routines
Tone or climate         Precision/suspicion    Open-ended/mutual benefits   Formal/bureaucratic
Conflict resolution     Haggling               Reciprocity                  Supervision
                        (resort to courts)     (reputation)
Coordination and Organizational Forms
Economics and organizational theory consider that relationships between and within organizations are developed for the exchange of goods, resources, information, and so on. Transaction costs and interdependencies in organizational relationships determine different models for organizational coordination, such as markets, hierarchies, and networks (Nouwens & Bouwman, 1995). Coordination in markets is achieved mainly through a price mechanism that allows independent actors to search for the best bargain. Hierarchies are coordinated mainly by supervision; that is, actors involved in power-dependent relationships act according to routines. Networks achieve coordination through mutual interest and interdependency. Table 1 summarizes the characteristics of these organizational forms.

Coordination and Interaction Forms
In computer science and distributed artificial intelligence, coordination is usually defined as the art of managing interactions and dependencies among activities. Coordination languages are a class of programming notations that offer a solution to the problem of specifying and managing the interactions among computing agents. From this point of view, coordination models can be divided into two classes: control-driven and data-driven (Papadopoulos & Arbab, 1998). Control-driven models suit systems made up of a well-defined number of entities and functions, in which the flow of control and the dependencies between entities need to be regulated. The data-driven model is more suited to open societies, in which the number of entities and functions is not known a priori and cooperation is an important issue. Where the classification of cooperation provided by organizational theory stems from social considerations and transaction costs, this classification is concerned with the way interactions between agents happen.
Agent-Oriented Software Engineering Methodologies
The application of the agent paradigm to the development of different applications calls for a development methodology that focuses not only on the internal organization of each of the intervening agents but also on the social aspects of the domain. Such a methodology should provide models and methods for all types of activities throughout all phases of the software life cycle. Because of similarities between agents and objects, it has often been claimed that existing object-oriented (OO) methodologies can be used for the development of agent-based systems. However, it has also been noted that agents possess specific characteristics that are not covered by traditional OO methodologies (Omicini, 2000; Jennings et al., 1998). One fundamental difference between agents and objects is autonomy, which refers to the principle that agents have control over their own actions and internal state. That is, agents can decide whether or not to perform a requested action. Objects have no control over their own methods; that is, once a publicly accessible method is invoked, the corresponding actions are performed (Wooldridge, 1997). Another characteristic of multiagent systems that calls for specific methodological approaches is openness. Because components and relationships in an open system can change at any time, designers cannot be certain of the system's behavior at design time. Frederiksson argues that a methodological cycle for the engineering of agent societies must comprise principles of observation and construction of systems (Frederiksson & Gustavsson, 2001). From an organizational point of view, the behavior of individual agents in a society can only be understood and described in relation to the social structure. Therefore, the engineering of agent societies needs to consider the interacting and communicating abilities of agents as well as the environment in which agent societies are situated. Furthermore, in open societies, the "control" over the design of participating agents lies outside the scope and design of the society. That is, the society cannot rely on the embedding of organizational and normative elements in the intentions, desires, and beliefs of participating agents. These considerations lead to the following requirements for engineering methodologies for agent societies (Dignum & Dignum, 2001):
• The methodology must include formalisms for the description, construction, and control of the organizational and normative elements of a society (roles, norms, and goals).
• The methodology must provide mechanisms to describe the environment of the society and the interactions between agents and the society, and to formalize the expected outcome of roles in order to verify the overall animation of the society.
• The organizational and normative elements of a society must be explicitly specified, because an open society cannot rely on its embedding in the agents' internal structure.
• Methods and tools are needed to verify whether or not the design of an agent society satisfies its design requirements and objectives.
• The methodology should provide building directives concerning the communication capability and the ability to conform to the expected role behavior of participating agents.
In our opinion, none of the currently existing agent-oriented engineering methodologies, such as Gaia (Wooldridge et al., 2000) and SODA (Omicini, 2001), yet fulfills all of the above requirements.
AGENT SOCIETY FRAMEWORKS
The way in which organizations describe and achieve coordination is decisive for the specification of coordination in agent societies. Different application contexts exhibit different needs with respect to coordination, and the choice of a coordination model has great impact on the design of the agent society. The overall goals of a society are domain dependent, but all societies depend on a facilitation layer that provides the social backbone of the organization (Dellarocas, 2000). This layer deals with the functioning of the society and relates to the underlying coordination model. Therefore, we argue that the first step in the development of agent societies is to identify the underlying coordination model. We have specified generic facilitation and interaction frameworks for agent societies that implement functionality derived from the type of coordination holding in the domain. The coordination model determines the interaction patterns and the functionality of the facilitation layer of the agent society; that is, the interaction primitives and agent roles necessary to implement the facilitation layer are specific to each type of society (market, network, or hierarchy). Moreover, coordination models provide a framework to express interaction between the activities of agents and the social behavior of the system (Ciancarini et al., 1999). In the following, we describe the characteristics of agent society frameworks based on different coordination models.
Market Framework
The main goal of markets is to facilitate exchange between agents. In a market, heterogeneous agents strive to find partners or clients with whom to trade their services. Being open systems, market architectures assume the heterogeneity of their members in structure, goals, and ways of acting. Markets are particularly suitable for situations in which resources overlap and agents need to compete for them; they are, therefore, a good choice for modeling product or service allocation problems. Being self-interested, agents will first try to solve their own local problem, after which they can potentially negotiate with other agents to exchange services or goods in shortage or in excess. The decision to enter into or cancel a transaction is usually left to the agent. The facilitation activities of such an agent society are mainly limited to helping agents find suitable partners through identification and matchmaking. Matchmakers keep track of the agents in the system, and of their needs and possibilities, and mediate in the matching of demand and supply of services. Identification and reputation facilities are meant to build the confidence of customers as well as offer guarantees to society members. Furthermore, it is necessary to define ways to value the goods to be exchanged and to determine the profit and fairness of exchanges, which can be accomplished by providing banking facilities and currency specification. Interaction in markets occurs through communication and negotiation. A specific kind of market interaction is the auction, which uses a highly structured negotiation protocol.
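As an illustration of the matchmaking facility just described, here is a minimal sketch under our own assumptions (the chapter specifies no interface); the method names register_offer, register_request, and match are invented.

```python
from collections import defaultdict

class Matchmaker:
    """Illustrative matchmaker for a market society: tracks offers and
    requests per service and mediates the matching of supply and demand."""

    def __init__(self):
        self.offers = defaultdict(set)    # service -> agents supplying it
        self.requests = defaultdict(set)  # service -> agents demanding it

    def register_offer(self, agent, service):
        self.offers[service].add(agent)

    def register_request(self, agent, service):
        self.requests[service].add(agent)

    def match(self, service):
        """Return (supplier, demander) pairs; negotiation of price and
        conditions is left to the agents themselves."""
        return [(s, d) for s in self.offers[service]
                for d in self.requests[service] if s != d]

mm = Matchmaker()
mm.register_offer("agent_a", "translation")
mm.register_request("agent_b", "translation")
print(mm.match("translation"))   # [('agent_a', 'agent_b')]
```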
Network Framework
Networks are coalitions of self-interested agents that agree to collaborate for some time in order to achieve a mutual goal. Relationships between agents depend on clear communication patterns and social norms. The society is responsible for making its rules and norms known to potential members. Agents in a network society are self-interested but still willing to trade some of their freedom for secure relations and trust. Relationships between agents are described by contracts. Agents also enter a social contract with the network society, in which they commit themselves to act within and according to the norms and rules of the society and of the role they will assume. Besides matchmakers, as in market frameworks, other types of facilitation roles in networks are gatekeepers, notaries, and monitoring agents. Gatekeepers are responsible for accepting and introducing new agents to the society. Gatekeepers negotiate the terms of a social contract between the applicant and the members of the society. Notaries keep track of collaboration contracts between agents. Monitoring agents are trusted third parties. Appointing a
monitoring agent to a contract is equivalent to setting up a supercontract between the contracting agents and the environment (personified by the monitoring agent). This supercontract specifies that the monitoring agent is allowed to check the actions of the contracting agents and that the contracting agents must submit to the sanctions imposed.
Hierarchy Framework
In a hierarchy, the flow of resources or information is coordinated through adjacent steps by controlling and directing it at a higher level in the managerial hierarchy. Managerial decisions, and not negotiation and communication as in markets, determine the interaction possibilities and the design of hierarchical societies. Demand parties do not select a supplier from a group of potential suppliers; they simply work with a predetermined one. In a hierarchy, interaction lines are well defined, and the facilitation level assumes the function of global control of the society and coordination of interaction with the outside world. Environments such as automated manufacturing planning and control are well suited to the hierarchical model. In such systems, reliable control of resources and information flow requires central entities that manage local resources and data but also need quick access to global ones. In the hierarchy model, agents are usually cooperative, guided not by their self-interest but by their orientation toward a common global goal. In this architecture, communication lines between agents are predefined, and agents are usually not free to enter or leave their roles in the system. Agents have a local perspective, and their actions are therefore determined by their local states. In a hierarchical architecture, facilitation-layer agents are mainly dedicated to the overall control and optimization of the system's activities. Facilitation roles are controllers, which monitor and orient the overall performance of the system or of a part of the system, and interface agents, which regulate communication between the society and the outside world.
Table 2: Coordination in agent societies

                     MARKET          NETWORK                         HIERARCHY
Type of society      Open            Trust                           Closed
Agent 'values'       Self-interest   Mutual interest/collaboration   Dependency
Facilitation roles   Matchmaking,    Gate-keeping, matchmaking,      Interface,
                     banking         notary, monitoring              control
APPLICATION EXAMPLE
We are currently applying the methodology described in this paper to the development of a system that supports knowledge exchange between nonlife insurance experts at Achmea, a large financial and insurance services company in the Netherlands. This Knowledge Exchange Network preserves existing knowledge, rewards knowledge owners, and reaches knowledge seekers on a "just in time, just enough" basis. The network will serve as a knowledge repository as well as a means of supporting and encouraging communication and collaboration. In this section, we briefly describe the aims of the agent society being developed to model the knowledge-sharing activities. In the next section, this example will serve as illustration for the different methodological steps. In this setting, knowledge seekers and knowledge owners want to be able to decide on trade partners and conditions. Sharing is not centrally controlled but is greatly encouraged by the management. The best-suited partner, according to each participant's own conditions and judgement, will get the "job." However, factors such as privacy, secrecy, and competitiveness between brands and departments may influence the channels and possibilities of sharing and must thus be considered. The project stakeholders have expressed the following requirements:
• The organization aims at supporting collaboration, extending synergy, and preserving existing knowledge and making it available organization-wide.
• Knowledge owners are willing to share their knowledge within a group they feel they can trust; that is, they wish to be able to decide on sharing decisions and conditions. Furthermore, added value of the sharing effort and fair exchange are a must (that is, the feeling that one is rewarded for sharing).
• Knowledge seekers are not aware of existing knowledge and knowledge owners; they also wish to be able to decide on acquisition conditions and partners. Furthermore, an accreditation and certification mechanism is desired that enables them to check the level of trust and knowledge of partners.
These requirements identify a distributed system in which different actors, each acting autonomously on behalf of a user and pursuing its own goals, need to interact in order to achieve those goals. Communication and negotiation are paramount. Furthermore, the number and behavior of participants cannot be
fixed a priori, and the system can be expected to expand and change during operation, in number of participants as well as in the amount and kind of knowledge shared. These characteristics indicate a situation for which the agent paradigm is well suited, and therefore, the methodology we propose can be applied.
DEVELOPMENT METHODOLOGY
We propose a methodology for the modeling and construction of agent societies based on an organizational, collectivist view that specifies coordination through preestablished roles, responsibilities, and norms. Adapting the ideas of Frederiksson and Gustavsson (2001), our methodology comprises the phases of observation, construction, and verification. A model results from the application of a set of explanatory principles to the observed properties of an existing system. The model includes the description of the coordination, environment, and behavior characteristics of the observed system. Using this model, an agent system can be constructed by populating the model with agents that perform the modeled functionality. The resulting system can again be observed, and the original model verified and possibly adapted. That is, the introduction of agents will influence the behavior of the observed system, creating the need for a dynamic engineering cycle.
Modeling Agent Societies
The modeling process starts with the analysis of the domain, resulting in the elicitation of functional (what) and interaction (how) requirements. Interaction requirements specify the coordination structure (market, hierarchy, or network) of the society. Functional requirements determine the behavior of the society and its relationship with the environment. These requirements are the basis for a model society, in which behavior and animation can be verified and compliance with the domain requirements can be checked. This process can be compared to designing a generic enterprise model, including such roles as accountants, secretaries, and managers, as well as their job descriptions and relationships, and then extending it with the functions necessary to achieve the objectives of the given enterprise. These are, for example, designers and carpenters if the firm is going to manufacture chairs, or programmers and analysts when the company is a software house. More specifically, the modeling part of the methodology consists of the following levels, which are further described in the following subsections:
• Coordination—The structure of the domain is determined, and a model is designed based on the collection of coordination models available in the library.
• Environment—Based on the coordination model designed in the previous step, this level describes the interaction between the society and its environment in terms of global requirements and domain ontology.
• Behavior—Based on the models above, in this level, the intended behavior of the society is described in terms of agent roles and interaction patterns. This process is supported by a library of roles and interaction patterns.
Coordination Level
The coordination level results in the choice of a coordination model applicable to the problem. Table 3 gives an overview of the specific characteristics of each coordination model, which can be used to determine the applicable model for the domain. The identification of the appropriate model will point out the type of social laws and norms of conduct in the domain and describe the interaction patterns and facilitation needs of the agent society. In order to determine the type of coordination applicable to our example of a knowledge exchange network, we need to look at the wishes expressed by the stakeholders. The desired system should support collaboration and synergy and still enable participants to fulfil their own objectives. That is, collaboration and certification mechanisms are necessary. Furthermore, participants want to be able to determine their own exchange rules and to be assured that there is control over who the other participants are in the environment.
Table 3: Social characteristics of different coordination frameworks

                    MARKET                     NETWORK                      HIERARCHY
Society purpose     Exchange                   Collaboration                Production
Society goals       Individual goals           Both are possible            Determined by the global
                    (determined by the agent)                               goals of the society
Relation forms      Negotiation (e.g.,         Negotiable within society    Fixed (e.g., Action/
                    Contract Net Protocol)     norms and rules              Workflow loop)
Communication       Interaction based on       Both the interaction         Specified at design time
capabilities of     standards; communication   procedures and the exchange
agents              concerns exchange only     can be negotiated
Interface to        Usually open for agents    Admittance procedure         Closed for agents; open
outside world       (after identification)     for agents                   for data (input and output)
In this situation, a market framework is not really suitable, because negotiation in a market follows fixed rules to which participants must adhere. Moreover, participation is open to any agent, and restriction of role or access is not possible. The hierarchical model can also be rejected, because it imposes a fixed partnership relation, which is not possible here, because partners and sources are not known a priori. The expressed requirements and wishes of the stakeholders, however, point clearly to the network framework (cf. Table 3).

Environment Level
At the environment level, we describe the external behavior of the society, that is, the interaction between the society and its environment. This includes the identification of the global functionality of the society and the specification of domain ontologies. In our agent framework, the environment consists basically of other agent societies. That is, interaction between the society and its environment means essentially interaction between societies, that is, interaction between agents from different societies. However, because interaction must be governed by the rules of some society, interaction across societies is not directly possible. So, how do societies interact?

We propose to draw on the linking-pin concept developed by Rensis Likert (1961) in management theory. Likert redefined the role of managers in organizations by realizing that managers are members of at least two groups, and their behaviors reflect the values, norms, and objectives of both groups—a manager is a subordinate in one group and a superior in another. So rather than seeing the manager as a node in a hierarchical tree, Likert puts the manager at the intersection of two groups. Because of this dual membership, a manager can forward information or control from one group to the other. Groups may have different norms, which leaves the manager with the task of "translating" between them. A Likert model of an organization is pictured as a set of tiles, where each tile has one or more overlaps with other tiles. Moreover, not only managers can be linking pins, and the set of tiles is not necessarily hierarchically ordered.

We have applied the linking-pin principle to solve the problem of interaction between agent societies. Assuming that every agent is owned by a (human) subject, different agent societies will be linked by agents belonging to the same subject. In this way, the problem of communication between societies becomes an intrasubject problem. It is the responsibility of the subject to implement the communication between its various agents and to resolve potential conflicts.
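The linking-pin principle can be sketched in a few lines. The following is our own illustration with invented names, not the authors' design: a subject owns one agent per society, and messages cross society boundaries only by being relayed inside the subject.

```python
class Agent:
    def __init__(self, name, subject):
        self.name = name
        self.subject = subject   # the (human) owner of this agent
        self.inbox = []

class Subject:
    """A (human) owner of several agents, one per society. The subject is
    the linking pin: cross-society communication is intrasubject."""

    def __init__(self, name):
        self.name = name
        self.agents = {}         # society name -> agent

    def agent_in(self, society):
        return self.agents.setdefault(society, Agent(f"{self.name}@{society}", self))

    def relay(self, from_society, to_society, message):
        # The subject translates/forwards between its own agents;
        # conflict resolution would live here as well.
        self.agents[to_society].inbox.append(
            {"via": self.agents[from_society].name, "body": message})

alice = Subject("alice")
alice.agent_in("product_development")
alice.agent_in("knowledge_network")
alice.relay("product_development", "knowledge_network", "need: claims expertise")
print(alice.agents["knowledge_network"].inbox)
```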
Table 4: Stakeholder table for the knowledge exchange network

STAKEHOLDER        SOCIETY                                               STAKE
Knowledge owner    {product development team}                            Disseminate knowledge
Knowledge seeker   {product development team, management, call-centre}   Collect knowledge for tasks
Expert             {product development team, actuaries team,            Generate knowledge
                   legal team}
Editor             {system management team}                              Consolidate knowledge
All relevant societies in the environment, and more in particular, the stakeholders, those agents in adjacent societies that have a certain goal or expectation toward the society, must be described. The way in which a linking pin connects two societies is related to the coordination structure. In markets, each agent brings in a goal from outside. In hierarchies, each agent has a certain contribution to the overall goals, and hence, its presence in another society must be instrumental to these goals as well. The case of networks is mixed again: the agent contributes to the network, according to the contracts, but besides that, the network can be instrumental for the linking-pin agent to fulfil his role in the other society or vice versa. A stakeholder table specifies in detail the arrangements (“contract”) between the society and its adjacent societies. Table 4 shows an example of a stakeholder table for the knowledge exchange network. The next step in the environment level is to identify the functional requirements of the domain and the concepts and relationships relevant in the domain. The different stakes users will have on the society determine the requirements. The aim of the knowledge exchange network is to exchange knowledge represented as (XML)-documents describing reports, people, applications, web sites, projects, questions, etc. (This type of exchange “goods” imposes constraints to the task and communicative components of agents, because it demands a complex matching mechanism, because matches are not only at keyword level but require knowledge about relationships, processes, etc. However, this lies outside the scope of this article and will not be further discussed). Ontologies are needed to describe the different concepts relevant to the system. There are two types of ontologies necessary: • Society ontology describes concepts related to exchange to the coordination framework of the society. In the network case, these are owner, seeker, source, etc. • Domain ontology describes concepts related to the application domain. In our example, concepts related to nonlife insurance and specific concepts used at Achmea are part of this ontology.
Copyright © 2003, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Behavior Level
The purpose of the behavior level is to populate the society model obtained in the previous levels with the functional agent roles and interaction patterns needed to achieve the aims of the system. Here, we are concerned with the high-level definition of agent types. The social rules and norms of conduct associated with each coordination framework will determine the interaction patterns between agents in the society. To support the identification and specification of agent roles, role catalogs providing commonly occurring role models will be developed. At this level, environment requirements (identified in the environment level) are translated into domain roles and interaction rules of the society. Facilitation roles were identified in the coordination level as a result of the coordination model chosen. In the behavior level, the specific roles needed to achieve the aims of the society, as well as the characteristics and constraints of each role, are specified. The tasks and objectives of each role are derived from the stake that role holds, as described in the stakeholder table in the environment level. Furthermore, the society must impose mechanisms for collaboration and certification between roles. For instance, in our example, a special kind of knowledge owner is responsible for gathering information on a known, fixed list of competitors and disseminating it to interested knowledge seekers. In this case, society norms must enforce that such agents provide all the information they are aware of. This also means that monitors tracing this type of contract have the task of checking whether information on all companies in the list is indeed provided. In our example, the roles of matchmaker, notary, monitor, and gatekeeper were determined by the choice of a network model during the coordination level. The gatekeeper determines whether an agent can participate in the exchange and what kind of role it can fulfil; the matchmaker matches supply and demand of knowledge between participants; and the notary registers and oversees the exchange commitments agreed between participants. From the domain requirements identified in the environment level, the roles of knowledge owner and knowledge seeker, already introduced in the previous section, and the roles of editor and expert can be deduced. Editors are responsible for determining the validity and degree of expertise of knowledge items and knowledge owners, and experts can be consulted about a certain area and will be requested to answer questions from knowledge seekers. Furthermore, the role of visitor can be defined, referring to participants who are browsing through the system and are able to consult the knowledge repository,
but cannot request some of the services. Figure 1 shows a fragment of the architecture of the society, indicating roles and possible interaction procedures. Interactions in Figure 1 are described for the binary case (one seeker, one owner); however, it is possible to form multipartner contracts. Furthermore, interactions are usually multiaction processes and not one single action. That is, the result of the interaction is usually achieved after a whole conversation. Similar to what Esteva et al. (2001) refer to as a scene, interaction between agents is defined through conversations following a well-defined protocol. The individual interactions are itemized below, followed by an illustrative sketch.

Figure 1: Fragment of the knowledge exchange network architecture
[The figure shows the roles Applicant, Knowledge owner, Knowledge seeker, Matchmaker, Notary, Gatekeeper, and Monitor in the facilitation layer for the network society, linked by the interactions membership_application, register, request_partner, negotiate_partnership, make_contract, appoint, and apply_sanction.]
• membership_application(X, gatekeeper): a negotiation between any agent X and the gatekeeper of the society, resulting in either an acceptance (X becomes a member of the society) or a rejection. The role the agent will play is also determined in this scene.
• register(M, matchmaker): knowledge owners or seekers register their requests with the matchmaker, who will use this information in future matches.
• request_partner(M, matchmaker): knowledge owners or seekers request possible partners for an exchange. Results in a possibly empty list of potential partners.
• negotiate_partnership(M, N): owners and seekers check the viability of an exchange and determine its conditions.
• make_contract(M, N, notary): when an agreement is reached, the partners register their commitments with the notary.
• appoint(notary, monitor): the notary appoints a monitor for a contract and delegates the agreed tasks to it. The monitor will keep track of the contract status and will act when an undesired state is reached.
• apply_sanction(monitor, M): when a breach of contract occurs, the monitor will contact the faulty party and apply the sanctions agreed upon (either described in the contract or standard in the institution).
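The chapter prescribes no implementation for these scenes; as a purely illustrative sketch (all names and structure are our assumptions), the contract life cycle implied by the last three scenes could be driven by a notary object like the following:

    class Monitor:
        """Tracks a contract's commitments and sanctions breaching parties."""
        def check(self, contract):
            return [p for p, done in contract["commitments"].items() if not done]

        def apply_sanction(self, contract, party):
            print(f"Sanctioning {party}: {contract['sanction']}")

    class Notary:
        """Registers exchange commitments and delegates tracking to a monitor."""
        def make_contract(self, owner, seeker, terms, sanction):
            contract = {"commitments": {owner: False, seeker: False},
                        "terms": terms, "sanction": sanction}
            return contract, self.appoint(contract)

        def appoint(self, contract):
            return Monitor()

    # Usage: the owner delivers, the seeker does not; the monitor then
    # sanctions the seeker as agreed in the contract.
    contract, monitor = Notary().make_contract(
        "owner_A", "seeker_B",
        terms="deliver competitor report",
        sanction="exclusion from the next exchange round")
    contract["commitments"]["owner_A"] = True
    for party in monitor.check(contract):
        monitor.apply_sanction(contract, party)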
Building Agent Societies
The design part of the methodology described above results in a generic model for an agent society. This model specifies the necessary components and relationships that describe the domain. In the next step, an actual agent society is built by populating the society model with real agents, so that the behavior of the resulting multiagent system mimics the original system.

We assume that the design and implementation of the agents is independent from the design of the society. That is, participating agents are built elsewhere, so their capabilities cannot be determined a priori, and the design of the behavior of the society cannot rely on specific architectural characteristics of the participating agents. Therefore, the society model must impose conditions on the internal model of agents intended to participate in a society (such as the communication language allowed). The functionality of each agent role can be specified in the model in terms of requirements for interaction, communication, behavior, and interface (a minimal sketch of such a role interface follows Figure 2 below). However, the agent actually performing the role will act according to its own design, and therefore, the behavior of the system may differ from the expected behavior specified in the model, as illustrated in Figure 2. The structure of agent societies will evolve over time as a result of interaction between agents. Therefore, the methodology must describe and verify the patterns and processes that model the system dynamics.

Figure 2: Influence of agent interpretation in system behavior
[The figure contrasts modelled roles 1–4 in the society model with the roles as actually performed by agents A and B in the running system.]
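One way to make such role requirements concrete is as an abstract interface that an externally built agent must implement while remaining free in how it does so. The sketch below is entirely our own illustration (the chapter gives no code, and all identifiers are assumptions):

    from abc import ABC, abstractmethod

    class KnowledgeSeekerRole(ABC):
        """Requirements the society imposes on whoever performs this role."""

        COMMUNICATION_LANGUAGE = "FIPA-ACL"  # e.g., the ACL the society allows

        @abstractmethod
        def request_partner(self, matchmaker):
            """Ask the matchmaker for possible exchange partners."""

        @abstractmethod
        def negotiate_partnership(self, owner):
            """Check viability and conditions of an exchange with an owner."""

    class Matchmaker:
        def match(self, seeker):
            return []  # a possibly empty list of potential partners

    class ExternalAgent(KnowledgeSeekerRole):
        # How the role is performed is up to the agent's own design, which
        # is why performed behavior may deviate from the modelled role.
        def request_partner(self, matchmaker):
            return matchmaker.match(self)

        def negotiate_partnership(self, owner):
            return {"partner": owner, "conditions": "to be negotiated"}

    print(ExternalAgent().request_partner(Matchmaker()))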
CONCLUSIONS
We presented a global methodology for the design of agent societies that takes the organizational perspective as the starting point and describes the implications of the coordination model of the organization for the architecture and design method of the agent society being developed. The approach specifies the development steps for the design and development of an agent-based system for a particular domain. It provides a generic frame that directly relates to the organizational perception of a problem and allows existing methodologies to be used for the development, modeling, and formalization of each step. Although several agent-based software engineering methodologies are available, these are often either too specific or too formal and are not easily used and accepted. We believe that, because of its organization-oriented approach, our methodology will contribute to the acceptance of multiagent technology by organizations.

We are currently applying the ideas described in this paper to develop a Knowledge Exchange Network at Achmea. In the future, a system for agent-based mediation for health care will be developed. Experience gained from these applications will be used to improve the design methodology and the coordination frameworks used.

Research has to be continued in several directions. Work is needed on the formal description of agent societies based on the coordination frameworks presented. We also intend to develop libraries of conceptual interaction patterns and agent roles. These libraries will improve and facilitate the design of agent societies. Finally, we plan to look at the compatibility and integration of our ideas with current standardization efforts for agent development, such as Agent UML.
REFERENCES
Artikis, A., Kamara, L., & Pitt, J. (2001). Towards an Open Agent Society Model and Animation. Proceedings of the Workshop on Agent-Based Simulation II, Passau, 48–55.
Bond, A. & Gasser, L. (1988). Readings in Distributed Artificial Intelligence. Morgan Kaufmann.
Brazier, F., Jonker, C., & Treur, J. (2000). Compositional Design and Reuse of a Generic Agent Model. Applied Artificial Intelligence Journal, 14, 491–538.
Castelfranchi, C. (2000). Engineering Social Order. In Omicini, A., Tolksdorf, R., & Zambonelli, F. (Eds.), Engineering Societies in the Agents World, Lecture Notes in Computer Science 1972, Heidelberg: Springer-Verlag, pp. 1–19.
Ciancarini, P., Omicini, A., & Zambonelli, F. (1999, July). Coordination Models for Multiagent Systems. AgentLink News, 3.
Dautenhahn, K. (2000). Reverse Engineering of Societies — A Biological Perspective. Proceedings of the AISB Symposium "Starting from Society — Application of Social Analogies to Computational Systems," Convention of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB-00), University of Birmingham, England.
Davidsson, P. (2000). Emergent Societies of Information Agents. In Klusch, M. & Kerschberg, L. (Eds.), Cooperative Information Agents IV — The Future of Information Agents in Cyberspace, Lecture Notes in Artificial Intelligence 1860, Heidelberg: Springer-Verlag, pp. 143–153.
Davidsson, P. (2001). Categories of Artificial Societies. In Omicini, A., Petta, P., & Tolksdorf, R. (Eds.), 2nd International Workshop on Engineering Societies in the Agents World (ESAW'01), Prague.
Dellarocas, C. (2000). Contractual Agent Societies: Negotiated Shared Context and Social Control in Open Multiagent Systems. Proceedings of the Workshop on Norms and Institutions in Multiagent Systems, Autonomous Agents 2000, Barcelona.
Dignum, V. & Dignum, F. (2001, December). Modeling Agent Societies: Coordination Frameworks and Institutions. Submitted to MASTA 2001, 2nd Workshop on Multiagent Systems: Theory & Applications, Porto, Portugal.
Dignum, V., Weigand, H., & Xu, L. (2001). Agent Societies: Towards Framework-Based Design. In Wooldridge, M., Ciancarini, P., & Weiss, G. (Eds.), Proceedings of the 2nd Workshop on Agent-Oriented Software Engineering, Autonomous Agents 2001, 25–31.
Edmonds, B. (1999). Capturing Social Embeddedness: A Constructivist Approach. Adaptive Behavior, 7, 323–348.
Esteva, M., Padget, J., & Sierra, C. (2001). Formalising a Language for Institutions and Norms. Proceedings of the 8th International Workshop on Agent Theories, Architectures and Languages, ATAL-2001, Seattle.
Ferber, J. & Gutknecht, O. (1998). A Meta-model for the Analysis and Design of Organizations in Multiagent Systems. Proceedings of the 3rd International Conference on Multiagent Systems (ICMAS'98), IEEE Computer Society.
Frederiksson, M. & Gustavsson, R. (2001). A Methodological Perspective on Engineering of Agent Societies. In Omicini, A., Petta, P., & Tolksdorf, R. (Eds.), 2nd International Workshop on Engineering Societies in the Agents World (ESAW'01), Prague.
Jennings, N.R., Sycara, K., & Wooldridge, M. (1998). A Roadmap for Agent Research and Development. Autonomous Agents and Multi-Agent Systems, 1, Boston: Kluwer Academic Press, pp. 275–306.
Jonker, C., Klusch, M., & Treur, J. (2000). Design of Collaborative Information Agents. In Klusch, M. & Kerschberg, L. (Eds.), Cooperative Information Agents IV — The Future of Information Agents in Cyberspace, Lecture Notes in Artificial Intelligence 1860, Heidelberg: Springer-Verlag, pp. 262–283.
Likert, R. (1961). New Patterns of Management. New York: McGraw-Hill.
Nouwens, J. & Bouwman, H. (1995). Living Apart Together in Electronic Commerce: The Use of Information and Communication Technology to Create Network Organizations. In Steinfield, C. (Ed.), Journal of Computer-Mediated Communication, Special Issue on Electronic Commerce, 1(3). Retrieved August 3, 2001, from http://www.ascusc.org/jcmc/vol1/issue3/nouwens.html.
Omicini, A. (2000). From Objects to Agent Societies: Abstractions and Methodologies for the Engineering of Open Distributed Systems. In Corradi, A., Omicini, A., & Poggi, A. (Eds.), WOA 2000 — Dagli Oggetti agli Agenti: Tendenze evolutive dei sistemi software, Bologna: Pitagora Editrice.
Omicini, A. (2001). SODA: Societies and Infrastructures in the Analysis and Design of Agent-Based Systems. In Ciancarini, P. & Wooldridge, M. (Eds.), Agent-Oriented Software Engineering, Lecture Notes in Computer Science 1957, Heidelberg: Springer-Verlag, pp. 185–194.
Papadopoulos, G. & Arbab, F. (1998). Coordination Models and Languages. In Zelkowitz, M.V. (Ed.), Advances in Computers, 46, Academic Press, pp. 329–400.
Powell, W. (1990). Neither Market nor Hierarchy: Network Forms of Organization. Research in Organizational Behavior, 12, 295–336.
Sycara, K. (1998). Multiagent Systems. AI Magazine, 19(2), 79–92.
Williamson, O. (1975). Markets and Hierarchies: Analysis and Antitrust Implications. New York: Free Press.
Wooldridge, M. (1997). Agent-Based Software Engineering. IEE Proceedings — Software Engineering, 144(1), 26–37.
Wooldridge, M., Jennings, N., & Kinny, D. (2000). The Gaia Methodology for Agent-Oriented Analysis and Design. Autonomous Agents and Multi-Agent Systems, 3(3), 285–312.
Zambonelli, F., Jennings, N., Omicini, A., & Wooldridge, M. (2001). Agent-Oriented Software Engineering for Internet Applications. In Omicini, A., Zambonelli, F., Klusch, M., & Tolksdorf, R. (Eds.), Coordination of Internet Agents: Models, Technologies, and Applications, Heidelberg: Springer-Verlag, pp. 326–346.
Chapter X
Cooperation Between Agents to Evolve Complete Programs

Ricardo Aler, Universidad Carlos III, Madrid, Spain
David Camacho, Universidad Carlos III, Madrid, Spain
Alfredo Moscardini, University of Sunderland, UK
ABSTRACT
In this paper, we present a multiagent system approach for building computer programs. Each agent in the multiagent system is in charge of evolving a part of the program, which, in this case, can be the main body of the program or one of its subroutines. There are two kinds of agents: the manager agent and the genetic programming (GP) agents. The former is in charge of starting the system and returning the results to the user. The GP agents include skills for evolving computer programs, based on the genetic programming paradigm. There are two sorts of GP agents: some evolve the main body of the program, and the others evolve its subroutines. Both kinds of agents cooperate by telling each other the best results they have found so far, so that the search for a good computer program is made more efficient. In this paper, this multiagent approach is presented and tested empirically.
INTRODUCTION
Nowadays, the field of multiagent systems shows a lot of interest in agents with intelligent skills, so that the agents can collaborate to solve complex problems. For instance, there are successful multiagent and agent-based applications that allow a user to solve complex problems through the use of artificial intelligence (AI) techniques in several domains, like the following:
• Industrial applications — CAPlan (Muñoz-Avila, 2000), NaCoDAE (Breslow, 1998), Paris (Bergmann, 1996)
• Information retrieval and knowledge integration — SIMS (Knoblock, 1997), TSIMMIS (Chawathe, 1994)
• Web domains — WebPlan (Hullen, 1999), Ariadne (Knoblock, 1998), Heracles (Knoblock, 1998), Letizia (Lieberman, 1995), MetaCrawler (Selberg, 1995), MAPWeb (Camacho, 2001)

Several AI techniques have been used in those systems, like planning (Allen, 1990), case-based reasoning (Aamodt, 1994), or case-based planning (Hammond, 1986). Those techniques have been successfully integrated in agent-based or multiagent systems, obtaining excellent results. In particular, genetic algorithms (GAs) have also been applied in multiagent environments, for instance, to evolve agent behaviors and agent groups that solve complex problems. In one approach, a single genetic programming (GP) algorithm is used to evolve the behavior of all the agents in the society (this is called the homogeneous approach, because all the agents in the society share the same controlling program) (Arora, 1997; Haynes, 1997; Mundhe, 2000). Another approach is the coevolution of agents, in which different populations each evolve one agent in the team (Bull, 1997; Puppala, 1998).

In this paper, we use a collaborative multiagent system to generate a computer program that is able to solve a particular problem (i.e., we follow the coevolutionary approach, but in our case, the task is to build a computer program). A computer program is made of different parts: the main program and several subroutines. A different agent will generate each of these parts. Finally, a "collecting" agent will put all of the parts together.

A way of finding good subroutines (and main programs) is to carry out a search in the space of subroutines. But in order to do so, it is necessary to have a measure to determine whether a subroutine is good or not (the so-called heuristic function, in the search paradigm). However, in many cases, it is not possible to evaluate different parts of a program separately; only a whole computer program can be properly evaluated. This can be done, for instance,
by running it and determining how well it solves a problem. But in order to do so, an agent specialized in searching for the first subroutine of a program must also have the rest of the subroutines and the main program, so that a complete program can be run. At this point, the agents can collaborate by sending each other the best result they have found so far. By doing so, two things can be achieved:
• An agent can evaluate a subroutine (or a main program) being considered at that moment by putting it together with the other parts sent by the other agents.
• Different agents can send their best subroutine (or main program) found so far, so that the other agents can use them. This way, some agents can use improvements found by other agents as soon as they are available.

By doing this, we hope that the search process will be more efficient. An example of this is science, where discoveries by one scientist are quickly transferred to other scientists, leading to quick advance. If discoveries were transferred only by word of mouth, advance would be much slower. However, it is not clear whether other agents can easily and always adopt the best results obtained by one agent, although we expect that, in the long run, the different agents would adapt to each other. In this paper, we study this issue empirically.

Many techniques could be used to search in the space of computer programs (or subroutines), which is the basic skill of our agents. We consider that the most appropriate is genetic programming (GP) (Koza, 1994). GP is a genetic algorithm that starts with a population of randomly generated computer programs and evolves them by using the Darwinian principle of the survival of the fittest. This amounts to a particular beam-search algorithm (Tackett, 1994). Usually, computer programs in GP are represented as parse trees, made of functions that return a value, although this need not necessarily be the case.
GENETIC PROGRAMMING AND AUTOMATICALLY DEFINED FUNCTIONS

Genetic Programming
Genetic programming is a technique to automatically find computer programs that solve specific problems (Koza, 1994). It uses a genetic algorithm to search the space of possible computer programs. It has three basic elements:
• A population P of individuals i ∈ P. Every individual is a computer program. Computer programs are usually represented by using primitive functions selected from a set F and terminals chosen from a set T. For instance, the following is an example of a genetic programming individual: IF X<5 THEN X=(X+5), where F = {IF-THEN, <, =, +} and T = {X, 5}.
• The genetic operators O = {m, c, r}, which can transform a computer program into another computer program. Thus, a genetic programming system can find better and better individuals by transforming good ones into better ones, by means of the genetic operators and according to the fitness function. The most commonly used operators are mutation (m: I → I) and crossover (c: I × I → I × I), which takes two individuals, combines them, and produces two children programs. As individuals are represented by means of parse trees, recombination is achieved by exchanging subtrees, as displayed in Figure 1. Also, programs can reproduce without any change (this is useful to preserve good individuals into the next generation). Each operator has a probability P(o) of being chosen.
• A fitness function f(i): I → R, which measures how well an individual solves a problem, where I is the space of possible computer programs. In order to evaluate an individual, the computer program is run in different contexts or fitness cases (i.e., giving different inputs to the algorithm), and it is then checked whether the output is correct or not. Usually, an individual is evaluated by counting how many fitness cases are correctly solved. Better individuals have a larger probability of being chosen to have offspring and being the seed of future search.

Figure 1: Crossover between two individuals represented as parse trees

The algorithm followed by genetic programming can be seen in Figure 2.
Figure 2: Genetic programming algorithm

1. Start in generation g = 0
2. Generate population P(0) by randomly creating individuals with elements chosen from F and T
3. Until a solution is found or a number of generations is exceeded, do:
   3.1. ∀ i ∈ P(g), obtain its fitness f(i)
   3.2. Until P(g + 1) is full, do:
        3.2.1. Select o ∈ O according to P(o)
        3.2.2. If o = m or o = r:
               3.2.2.1. Choose an individual i according to f(i)
               3.2.2.2. Compute the offspring s = o(i)
               3.2.2.3. Put s into P(g + 1)
        3.2.3. If o = c:
               3.2.3.1. Choose i_1 and i_2 according to f(i_1) and f(i_2)
               3.2.3.2. Compute the offspring (s_1, s_2) = c(i_1, i_2)
               3.2.3.3. Put s_1 and s_2 into P(g + 1)
   3.3. g = g + 1
4. Return the best individual found so far
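As a concrete rendering of the loop in Figure 2, the following minimal, self-contained sketch (our own, not the authors' code; mutation is omitted for brevity) evolves parse trees over F and T chosen, for concreteness, to attack the tiny EVEN-2-PARITY problem:

    import random

    F = {"AND": lambda a, b: a and b, "OR": lambda a, b: a or b,
         "NAND": lambda a, b: not (a and b), "NOR": lambda a, b: not (a or b)}
    T = ["D0", "D1"]
    CASES = [(d0, d1) for d0 in (False, True) for d1 in (False, True)]

    def random_tree(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(T)                  # terminal node
        op = random.choice(list(F))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def run(tree, d0, d1):
        if tree == "D0": return d0
        if tree == "D1": return d1
        op, left, right = tree
        return F[op](run(left, d0, d1), run(right, d0, d1))

    def fitness(tree):                               # hits over all cases
        return sum(run(tree, d0, d1) == ((d0 + d1) % 2 == 0)
                   for d0, d1 in CASES)

    def select(pop):                                 # fitness-proportional
        return random.choices(pop, weights=[fitness(i) + 1 for i in pop])[0]

    def crossover(a, b):                             # swap random subtrees
        if isinstance(a, str) or isinstance(b, str):
            return b, a
        ia, ib = random.randint(1, 2), random.randint(1, 2)
        a2, b2 = list(a), list(b)
        a2[ia], b2[ib] = b[ib], a[ia]
        return tuple(a2), tuple(b2)

    M, G = 50, 30
    pop = [random_tree() for _ in range(M)]          # step 2
    best = max(pop, key=fitness)
    for g in range(G):                               # step 3
        nxt = []
        while len(nxt) < M:                          # step 3.2
            if random.random() < 0.9:                # o = c (crossover)
                nxt.extend(crossover(select(pop), select(pop)))
            else:                                    # o = r (reproduction)
                nxt.append(select(pop))
        pop = nxt[:M]
        best = max(pop + [best], key=fitness)        # best found so far
    print(best, "scores", fitness(best), "of", len(CASES), "hits")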
Automatically Defined Functions (ADFs)
In order to improve the performance of standard GP, Koza developed a new paradigm in which an individual contains the main program and a set of subroutines that can then be called from the main program (Koza, 1994). The crossover operator has been modified so that it only happens between homologous parts of the individuals. For instance, the first subroutine of an individual can only be crossed over with the first subroutine of another individual (but not with its second subroutine or its main program). Koza showed that for a variety of problems, GP requires less computational effort to solve a problem with ADFs than without them, provided the difficulty of the problem is above a certain relatively low problem-specific breakeven point for computational effort (i.e., ADFs should be used with difficult enough problems).
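The homologous restriction on crossover can be illustrated with a short sketch (again our own, assuming the tree representation from the previous listing): an individual is a record of named branches, and crossover only ever pairs a branch with the same-named branch of the mate.

    import random

    def branch_crossover(tree_a, tree_b):
        # Placeholder subtree exchange; any standard GP crossover would do.
        return tree_b, tree_a

    def adf_crossover(ind_a, ind_b):
        """Cross two ADF individuals branch by homologous branch."""
        branch = random.choice(list(ind_a))      # e.g., "main", "adf0", "adf1"
        child_a, child_b = dict(ind_a), dict(ind_b)
        child_a[branch], child_b[branch] = branch_crossover(
            ind_a[branch], ind_b[branch])
        return child_a, child_b

    parent_a = {"main": ("ADF0", "D0", "D1"), "adf0": ("AND", "ARG0", "ARG1")}
    parent_b = {"main": ("ADF0", "D1", "D0"), "adf0": ("NOR", "ARG0", "ARG1")}
    print(adf_crossover(parent_a, parent_b))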
MULTIAGENT SYSTEM ARCHITECTURE
Our multiagent system (MAS) implementation has two kinds of agents: the manager agent and the GP agents (genetic programming agents). The purpose of the manager agent is to create as many GP agents as necessary and
to give them enough information so that they can communicate with each other. It also tells each GP agent what its purpose is: to evolve a main program or to evolve a subroutine. Finally, the manager agent will collect the results obtained by each of the agents, put them together, and return the whole program to the user. Figure 3 shows the architecture of the multiagent genetic system and the interaction points between the agents.

Figure 3: Multiagent genetic system architecture and interaction points between different GP agents and the manager agent
[The figure shows the manager agent connected to GP_agent0, GP_agent1, and GP_agent2, which are also connected to each other.]

As Figure 3 shows, all the GP agents can communicate with any GP agent in the system and with the manager agent. The GP agents have two main skills. First, they can use the GP paradigm to evolve computer code, be it a main program or a subroutine. Second, they can communicate to the other GP agents their best main programs or subroutines found so far. This is used by the other agents to evaluate their individuals. Agents can run on different machines, and, although we have not done so, a single agent could run a parallelized version of GP on different machines (so, single agents could also be distributed).

In order to keep things simple, let us suppose that the structure of a computer program able to solve a particular problem consists of just two parts: a main body and one ADF (ADF0). Each part of the computer program will be evolved by a different agent. The GPmain agent will evolve the population of main bodies, and the GPsub0 agent will evolve the ADF0s. Figure 4 shows how the GP agents communicate their best individuals so that the other agents can evaluate their own. In order to evaluate a member (a main body) of the agent that evolves main programs, it will be coupled with the best ADF0 supplied by the agent responsible for evolving ADF0. Then, a whole computer program will be built, which can be run. Then, it is possible to determine how well that individual performs.
Figure 4: Interrelations between the manager agent and the GP agents involved in the coevolution of the complete program
[The figure shows the manager agent initializing the GPmain and GPsub0 agents; each agent runs its GP algorithm, the two agents exchange their best main body and best ADF, and the best complete program is returned.]
Similarly, in order to evaluate an individual of the ADF agent, the ADF will be coupled with the best main body supplied by the agent responsible for evolving main programs, and the resulting individual will be run. Of course, it is impossible to evaluate an individual of one agent until a best individual has been obtained by the other agent. But this is also true for individuals belonging to the other agent. As the process must start somewhere, at the beginning, each agent will supply one of its initial randomly generated individuals until the agent finds something better. Therefore, all individuals belonging to the agent responsible for finding good main programs will be coupled and evaluated with the same ADF (the best one obtained so far by the ADF agent), and vice versa. Although Figure 4 displays only two agents, many more ADF agents could be used in problems requiring more ADFs.

The next section shows some experimental results obtained by our system for a scalable problem: that of determining whether, in N binary inputs, the number of ones is even or odd. This is a scalable problem known in the literature as EVEN-N-PARITY. We have tested it for N = 5 and N = 6.
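To make the coupling scheme concrete, the sketch below (our own illustration, reusing the tree encoding assumed in the earlier listings; all names are assumptions) shows how a GPmain agent might score one of its candidate main bodies by pairing it with the best ADF0 received from the GPsub0 agent, using EVEN-N-PARITY hits as fitness:

    from itertools import product

    PRIMS = {"AND": lambda a, b: a and b, "OR": lambda a, b: a or b,
             "NAND": lambda a, b: not (a and b), "NOR": lambda a, b: not (a or b)}

    def run(tree, env, adf0=None):
        if isinstance(tree, str):
            return env[tree]                 # terminal: D0..Dn or ARG0/ARG1
        op, *args = tree
        vals = [run(a, env, adf0) for a in args]
        if op == "ADF0":                     # call into the coupled subroutine
            return run(adf0, {"ARG0": vals[0], "ARG1": vals[1]})
        return PRIMS[op](*vals)

    def hits(main_body, best_adf0, n):
        """Fitness cases solved over all 2^n combinations (EVEN-n-PARITY)."""
        total = 0
        for bits in product([False, True], repeat=n):
            env = {f"D{i}": b for i, b in enumerate(bits)}
            total += run(main_body, env, best_adf0) == (sum(bits) % 2 == 0)
        return total

    # The ADF below computes XOR; NOR(x, x) negates, so the main body is
    # XNOR, i.e., a perfect EVEN-2-PARITY program: 4 hits out of 4.
    best_adf0 = ("NOR", ("NOR", "ARG0", "ARG1"), ("AND", "ARG0", "ARG1"))
    main_body = ("NOR", ("ADF0", "D0", "D1"), ("ADF0", "D0", "D1"))
    print(hits(main_body, best_adf0, 2))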
EXPERIMENTAL RESULTS
The system has been tested with the EVEN-5-PARITY and EVEN-6-PARITY problems, described in Koza (1994). Table 1 shows the tableau for the EVEN-5-PARITY problem with ADFs (the EVEN-6-PARITY problem is similar). This table is similar to Koza's except for M and G.1 Koza's M is 16,000, whereas we use much smaller population sizes of 200 for EVEN-5-PARITY and 400 for EVEN-6-PARITY. Besides, as we wanted to explore the behavior of the system in long runs, our G has been extended to 150 (Koza's being G = 51). As we used different parameters, we performed a series of experiments for Koza's ADFs as well, so that they could be compared to our system. From now on, Koza's ADF results will be referred to as Kadf and our system's results as Iadf ("Independent ADFs").

As Koza states in Koza (1994), a good way to determine how well an adaptive system performs (for a given problem and chosen parameters) is to obtain the computational effort (E) for that problem. The computational effort I(M, i, p) is the number of individuals that must be processed by GP in order to yield a solution with probability 0.99 by the ith generation (or before). Details about how this is computed can be found in Koza (1994); the smaller the computational effort, the better. Computational effort and related data for Kadf and Iadf are shown in Table 2. Graphs displaying computational effort per generation are shown in Figures 5 and 6. Also, Figures 7 and 8 show the cumulative probabilities of solving EVEN-5-PARITY and EVEN-6-PARITY, respectively (i.e., the probability of solving the problem by generation i).

In order to understand these graphs, it has to be taken into account that the results for the different GP agents have been interleaved. That is, if we let each one of the n GP agents run for 150/n generations, the fitness of the best individual found by the first agent appears at generation 0, the fitness of the second agent appears at generation 1, and so on. And again, the current fitness of the first agent will appear at generation n + 1, and so on.
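For reference, the effort statistic mentioned above is conventionally computed as follows (this is our reconstruction of the standard definition in Koza (1994), which the chapter only cites): estimate the cumulative probability of success P(M, i) from the runs, derive the number of independent runs R needed to reach confidence z, and multiply by the individuals processed per run, I(M, i, z) = M(i + 1)R.

    import math

    def computational_effort(success_by_gen, n_runs, M, z=0.99):
        """success_by_gen[i] = number of runs solved by generation i."""
        best = math.inf
        for i, successes in enumerate(success_by_gen):
            p = successes / n_runs                     # P(M, i)
            if p > 0:
                R = (math.ceil(math.log(1 - z) / math.log(1 - p))
                     if p < 1 else 1)
                best = min(best, M * (i + 1) * R)      # I(M, i, z)
        return best

    # e.g., runs of population 200 where 120 of 200 runs solved by gen 30:
    print(computational_effort([0] * 30 + [120], n_runs=200, M=200))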
DISCUSSION
It turns out that Iadf performs slightly better than Kadf (see Table 2) for the EVEN-5-PARITY problem (the effort ratio being E = 1.134594) and much better for the more complex EVEN-6-PARITY problem (E = 2.471301). However, that is the effort that the system would have spent had we chosen G = i*. But i* is not a datum we can know a priori.
Table 1: Tableau with ADFs for the EVEN-5-PARITY problem

Objective: Find a program that produces the value of the Boolean EVEN-5-PARITY function as its output when given the values of the five independent Boolean variables as its input
Architecture: One result-producing branch and two two-argument function-defining branches, with ADF1 hierarchically referring to ADF0
Parameters: Branch typing
Terminal set for the result-producing branch: D0, D1, D2, D3, D4
Function set for the result-producing branch: ADF0, ADF1, AND, OR, NAND, and NOR
Terminal set for the function-defining branch ADF0: ARG0 and ARG1
Function set for the function-defining branch ADF0: AND, OR, NAND, and NOR
Terminal set for the function-defining branch ADF1: ARG0 and ARG1
Function set for the function-defining branch ADF1: AND, OR, NAND, NOR, and ADF0 (hierarchical reference to ADF0 by ADF1)
Fitness cases: All 2^5 = 32 combinations of the five Boolean arguments D0, D1, D2, D3, D4
Raw fitness: The number of fitness cases for which the value returned by the program equals the correct value of the EVEN-5-PARITY function
Standardized fitness: The sum, over the 32 fitness cases, of the Hamming distance (error) between the value returned by the program and the correct value of the Boolean EVEN-5-PARITY function
Hits: Same as raw fitness
Wrapper: None
Parameters: M = 200, G = 150
Success predicate: A program scores the maximum number of hits
Table 2: Computational effort results (and related data) for Kadf and Iadf

                                    EVEN-5-PARITY         EVEN-6-PARITY
                                    Kadf       Iadf       Kadf        Iadf
Number of experiments               194        200        111         84
Population size M                   200        200        400         400
Effort E = min_i I(M, i, 0.99)      408,000    359,600    1,550,000   627,200
Best generation i*                  33         30         30          48
E_Kadf / E_Iadf                     1.134594              2.471301
Had we started our runs without this knowledge, we could have chosen any other G and spent a different computational effort I. In order to have a better picture of what happens for different values of G, graphs displaying computational effort are shown in Figures 5 and 6. Also, Figures 7 and 8 show the cumulative probabilities of solving EVEN-5-PARITY and EVEN-6-PARITY, respectively. Results in these graphs can be easily summarized:
• Iadf has a smaller computational effort than Kadf for all generations, especially late generations (see Figure 5). This fact is even more noticeable for the more difficult problem, EVEN-6-PARITY (see Figure 6).
• Iadf manages to keep a steady rate of improvement (in terms of cumulative probability of success) for longer than Kadf (see Figure 7). Kadf's rate diminishes by generation 30, while Iadf continues improving at a good pace for much longer. Again, this is even more noticeable in the EVEN-6-PARITY problem (see Figure 8).

Figure 5: Computational effort (I) for Kadf and Iadf, given that EVEN-5-PARITY should be solved by generation i with probability z = 0.99
Figure 6: Computational effort (I) for both Kadf and Iadf, given that EVEN-6-PARITY should be solved by generation i with probability z = 0.99
Figure 7: Cumulative probability of solving EVEN-5-PARITY by generation i with M = 200 for Kadf and Iadf
Figure 8: Cumulative probability of solving EVEN-6-PARITY by generation i for Kadf and Iadf
RELATED WORK
The system we studied in this paper can be considered an extreme case of coevolution (Hillis, 1992), albeit a strange one, because interaction between populations happens only through the best individual of each population. Coevolution of a main program and several independent ADF populations has been dealt with by Ahluwalia et al. (1997). They do not use the multiagent framework, though. Also, we are more interested in how different agents can collaborate to build complete programs than in studying general ADF coevolution. Other differences are that we use a generational model for all evolving populations instead of a steady-state model, and that we favor program-level fitness evaluation instead of directly evaluating the individuals in the ADF subpopulations (which cannot always be done). Also, our results are in terms of the computational effort to solve the problem rather than of average results per generation, as in their case. Some initial results can be seen in Aler (1998). Finally, hierarchical coevolution of subroutines has been studied in Racine (1998) and Aler (2001).
A related work reports the use of shared memory (initially proposed in Teller, 1994) between all the members of a population (Spector, 1996). That is, they use a global memory as a form of culture, to transfer data to all individuals in a population, with positive results. Although the aim is similar to ours, the mechanism used in the two cases is different: in our case, what is transferred is not data but actual pieces of code.
CONCLUSIONS
This article has two main contributions. First, we proposed a multiagent framework in which several agents can build a program together by searching different parts of the program separately. For instance, the main program will be searched for by one agent, and the subroutines of the program will be looked for by other agents. In this manner, each agent can become specialized in part of the problem.

The agents in the system have two different skills. First, they can search in the space of computer programs by using GP.2 Second, they can communicate their best results so far (i.e., best main program or best subroutine) to the other agents in the system. In this way, good results are used by other agents as soon as they are available. Also, the other agents can run complete programs to evaluate their fitness, which is required by GP. They use the parts of the program supplied by the other agents in order to evaluate every individual in their own populations.

Experiments have been carried out to determine whether dissemination of the results obtained by the agents to the other agents improves search or not. It has been found that, in the EVEN-5,6-PARITY problems, performance (in terms of minimum computational effort) is better. Our system shows a curious effect: the cumulative probability of success keeps increasing at a good rate for longer than Kadf. That is, it does not seem to stagnate as soon as GP (or GP + ADF) does. The approach scales well, obtaining better results for the more complex problem than for the simpler one.

Our approach is another way to parallelize GP, with the advantage that communication between populations happens at a very small rate: all the information populations need to exchange is the best individual obtained so far, which changes rather slowly. This kind of parallelism would be useful for problems requiring many different ADFs.
ENDNOTES
1. M is the size of the population, and G is the number of generations.
2. However, other techniques could be used within this multiagent framework.
REFERENCES
Ahluwalia, M., Bull, L., & Fogarty, T.C. (1997). Coevolving Functions in Genetic Programming: A Comparison in ADF Selection Strategies. Genetic Programming 1997: Proceedings of the Second Annual Conference, 3–8. Stanford University, CA: Morgan Kaufmann.
Aler, R. (1998). Immediate Transference of Global Improvements to All Individuals in a Population in Genetic Programming Compared to Automatically Defined Functions for the EVEN-5,6-PARITY Problem. Proceedings of the First European Workshop on Genetic Programming, Vol. 1391 of LNCS, 60–70. Paris. Heidelberg: Springer-Verlag.
Aler, R., Blazquez, F., & Camacho, D. (2001). Experimentación en Programación Genética Multinivel. Revista Iberoamericana de Inteligencia Artificial, 13, 10–22.
Allen, J.F., Hendler, J., & Tate, A. (1990). Readings in Planning. Morgan Kaufmann.
Arora, N. & Sen, S. (1997). Resolving Social Dilemmas Using Genetic Algorithms: Initial Results. Proceedings of the 7th International Conference on Genetic Algorithms, 689–695. San Mateo, CA: Morgan Kaufmann.
Bergmann, R. & Wilke, W. (1996). Paris: Flexible Plan Adaptation by Abstraction and Refinement. Workshop on Adaptation in Case-Based Reasoning (ECAI-96). New York: John Wiley & Sons.
Breslow, L. & Aha, D.W. (1998). NaCoDAE: Navy Conversational Decision Aids Environment (AIC-97-018). Technical report, NCARAI, Washington, DC.
Bull, L. (1997). Evolutionary Computing in Multiagent Environments: Partners. Proceedings of the Seventh International Conference on Genetic Algorithms, 370–377.
Camacho, D., Borrajo, D., Molina, J.M., & Aler, R. (2001). Flexible Integration of Planning and Information Gathering. Proceedings of the European Conference on Planning (ECP-01), Toledo, Spain. Heidelberg: Springer-Verlag.
Chawathe, S., Garcia-Molina, H., Hammer, J., Ireland, K., Papakonstantinou, Y., Ullman, J.D., & Widom, J. (1994). The TSIMMIS Project: Integration of Heterogeneous Information Sources. 16th Meeting of the Information Processing Society of Japan, 7–18. Tokyo.
Hammond, K. (1986). CHEF: A Model of Case-Based Planning. Proceedings of the Fifth National Conference on Artificial Intelligence, 267–271.
Hillis, D. (1992). Coevolving Parasites Improve Simulated Evolution as an Optimization Procedure. Artificial Life II, Vol. X of Santa Fe Institute Studies in the Sciences of Complexity, 313–324. Addison-Wesley, Santa Fe Institute, New Mexico.
Hullen, J., Bergmann, R., & Weberskirch, F. (1999). WebPlan — Dynamic Planning for Domain-Specific Search in the Internet. Workshop Planen und Konfigurieren (PuK-99).
Knoblock, C.A. & Ambite, J.L. (1997). Agents for Information Gathering. AAAI/MIT Press, Menlo Park, CA.
Knoblock, C.A. & Minton, S. (1998). The Ariadne Approach to Web-Based Information Integration. IEEE Intelligent Systems, 13(5).
Knoblock, C.A., Minton, S., Ambite, J.L., Muslea, M., Oh, J., & Frank, M. (2001). Mixed-Initiative, Multi-Source Information Assistants. The Tenth International World Wide Web Conference (WWW10). ACM.
Koza, J.R. (1994). Genetic Programming II. Cambridge, MA: MIT Press.
Lieberman, H. (1995). Letizia: An Agent that Assists Web Browsing. International Joint Conference on Artificial Intelligence (IJCAI-95), 924–929.
Mundhe, M. & Sen, S. (2000). Evolving Agent Societies that Avoid Social Dilemmas. Proceedings of GECCO-2000, 809–816, Las Vegas, NV.
Muñoz-Avila, H., Hendler, J.A., & Aha, D.W. (1999). Conversational Case-Based Planning. Review of Applied Expert Systems, 5, 163–174.
Plaza, E. & Aamodt, A. (1994). Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AICom — Artificial Intelligence Communications, 7(1), 39–59.
Puppala, N., Gordin, N., & Sen, S. (1998). Shared Memory Based Cooperative Coevolution. Proceedings of the International Conference on Evolutionary Computation '98. IEEE Press.
Racine, A., Schoenauer, M., & Dague, P. (1998). A Dynamic Lattice to Evolve Hierarchically Shared Subroutines: DL-GP. Proceedings of the First European Workshop on Genetic Programming, 220–232. Paris. Heidelberg: Springer-Verlag.
Selberg, E. & Etzioni, O. (1997). The MetaCrawler Architecture for Resource Aggregation on the Web. IEEE Expert, 8–14. IEEE.
Smith, A.J. & Brown, C.J. (1991). Organizations and Database Management. Data Source, 10(4), 77–88.
Sen, S. & Haynes, T. (1997). Crossover Operators for Evolving a Team. Proceedings of Genetic Programming 97: The Second Annual Conference, 162–167, San Francisco, CA: Morgan Kaufmann.
Spector, L. & Luke, S. (1996). Cultural Transmission of Information in Genetic Programming. Genetic Programming 1996: Proceedings of the First Annual Conference, 209–214, Stanford University, CA. Cambridge, MA: MIT Press.
Tackett, W.A. (1994). Recombination, Selection, and the Genetic Construction of Computer Programs. PhD thesis, University of Southern California, Department of Electrical Engineering Systems.
Teller, A. (1994). Turing Completeness in the Language of Genetic Programming with Indexed Memory. Proceedings of the 1994 IEEE World Congress on Computational Intelligence, 1, 136–141, Orlando, FL. IEEE Press.
About the Authors

Valentina Plekhanova is a senior lecturer in computing at the School of Computing, Engineering and Technology at the University of Sunderland, UK. Dr. Plekhanova holds a M.Sc./M.Phil. in applied mathematics and theoretical mechanics from the Novosibirsk State University, Academgorodok, Russia. Her Ph.D. is in the application of computer technology, mathematical modelling and mathematical methods in scientific research, from the Institute of Information Technologies and Applied Mathematics, Russian Academy of Sciences. She has held a number of research and lecturer positions in Russia and Australia. Dr. Plekhanova has international experience in lecturing on subjects such as software engineering, knowledge engineering, artificial intelligence, theory of probability, computational science, and optimisation. Dr. Plekhanova was a software engineering consultant with P-Quant in Sydney, Australia. She was a project investigator in several international research and industrial projects in Russia and Australia. Research results were published in international journals and conference proceedings. Her research interests include engineering the cognitive processes, learning processes, machine learning, knowledge engineering, modelling intelligence in software systems, quantitative software project management, software process analyses and process improvement.

* * *

Ricardo Aler is a lecturer in the Department of Computer Science at Universidad Carlos III in Spain. He has researched several areas, including automatic control knowledge learning, genetic programming, and machine learning. He has also participated in international projects about automatic machine translation and optimising industry processes. He holds a Ph.D. in computer science from Universidad Politécnica de Madrid (Spain) and a M.Sc. in decision support systems for industry from Sunderland University (UK). He graduated in computer science at Universidad Politécnica de Madrid.
Eduardo Alonso is a lecturer in the Department of Computing at City University. He is a member of the agents@city group and of the School of Informatics Distributed and Intelligent Systems Group. He is a member of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB) Committee. Dr. Alonso's research is focused on implementing learning algorithms for complex, dynamic applications such as e-commerce scenarios. Industrial collaborators are BTexact and MINOS-97. Dr. Alonso's complete CV can be found at http://www.soi.city.ac.uk/~eduardo/resume.html.

Penny Baillie is currently a lecturer of computer science with the Department of Mathematics and Computing at the University of Southern Queensland, Australia, where she teaches in multimedia systems, computer graphics, and artificial intelligence. She holds a B.InfoTech and a Ph.D. (computer science) from the University of Southern Queensland, and an Hons. (computer science) from the University of New England. Her main research is in affective computing, and online teaching delivery and assessment management.

Claudio Bonacina obtained his master's degree in computer engineering at the Politecnico di Milano, Milan, Italy, in 1999. His graduation thesis considered fuzzy and crisp knowledge representation in learning classifier systems applied to autonomous agents. Since October 1999, he has been working as a Ph.D. student in the Intelligent Computer Systems Centre at the University of the West of England (UWE) in Bristol, UK. His research investigates the possibility of applying evolutionary computation to multiagent systems. His Ph.D. is sponsored by the Future Technologies Group at BTexact Technologies. He is a member of the Learning Classifier System Group (LCSG) at UWE. In September 2001, he co-founded the Web-based ECoMAS (Evolutionary Computation in MultiAgent Systems) community. Since then he has been an active co-organizer of ECoMAS.

Luís Brito received a D.Eng. diploma of computer science and systems engineering from the University of Minho, Portugal, in 1999. He was awarded several prizes for academic achievement (Senado Universitário Prize, Governo Civil Prize and Sociedade Martins Sarmento Prize) and is now a Ph.D. student in the Computer Science Department of the School of Engineering at the University of Minho. His research interests focus on theoretical foundations of argumentation, argument exchange in heterogeneous environments, intelligent agents, electronic commerce, and logic programming. He has published several articles at international conferences and workshops. He is a member of the
American Association for Artificial Intelligence (AAAI), of the International Association for Science and Technology for Development (IASTED), and of the Portuguese Association for Artificial Intelligence (APPIA). His web page is available at http://alfa.di.uminho.pt/~lbrito.

David Camacho is a lecturer in the Department of Computer Science at Universidad Carlos III de Madrid in Spain (UC3M), where he is a member of the Complex and Adaptive Systems Group (SCALab). He holds a Ph.D. in computer science from Universidad Carlos III de Madrid. He has a B.Sc. in physics from the Universidad Complutense de Madrid. He has carried out research in several areas, including planning, inductive logic programming, fuzzy logic, and multiagent systems. He has also participated in international projects about automatic machine translation and optimising industry processes.

Don Cruickshank is a researcher in the field of pervasive computing and networks in the Intelligence Agents Multimedia group at the University of Southampton, UK. His research interests include open hypermedia architectures, multiagent systems, visual languages and temporal metadata processing systems, with a particular focus on systems to support collaboration between human and artificial societies. He is actively working with Next Generation Networks, including mobile IPv6 deployment and routing architectures for multimedia streams.

David De Roure is professor of computer science at the University of Southampton, UK, where he leads the distributed systems activity in the Intelligence Agents Multimedia group. He obtained a Ph.D. in computer science in 1990. His research interests include large-scale distributed systems and pervasive computing, with a particular focus on adaptive information systems and the application of knowledge technologies to support collaborative working. He is actively involved in semantic Web and grid computing research projects.

Darryl N. Davis is a lecturer in the Department of Computer Science at the University of Hull, England. He has a B.Sc. in experimental psychology, a M.Sc. in knowledge base systems and a Ph.D. in diagnostic and investigative medicine. Dr. Davis has worked in human visual perception and has over 14 years' experience in artificial intelligence systems. These have been successfully applied to classification and computer vision problems in business, medicine and geology. He has research interests in cognitive science (in particular
cognition, motivation and emotion), agent technology as a metaphor for the mind and as a vehicle for domain applications, the application of computational intelligence to medical domains, and machine vision. He has published widely in all these fields.

J. Debenham is a professor of computer science at the University of Technology, Sydney, Australia. He is the author of two books on the design of intelligent systems. His recent research has focussed on multiagent systems, with business process management as his chosen application domain. That work is now being extended into distributed eMarkets, where all transactions are managed as business processes by smart management systems. Prof. Debenham is chair of the Australian Computer Society's National Committee for Artificial Intelligence.

Virginia Dignum studied mathematics and computer science at the University of Lisbon, Portugal, and the Free University of Amsterdam, The Netherlands. Currently, she works for Achmea as a Ph.D. researcher in co-operation with the Intelligent Systems Group of the Institute of Information and Computing Sciences at the University of Utrecht, The Netherlands. Her professional experience includes consultancy and development of knowledge and information systems. Her research focuses on the role of knowledge in organisations, and the applicability of the agent paradigm to knowledge creation, sharing and representation. She participated in the ESPRIT project KARE (Knowledge Acquisition and Sharing for Requirements Engineering) and is co-organiser of the AAAI Spring Symposium on agent-mediated knowledge management.

G. Eleftherakis holds a B.Sc. in physics and a M.Sc. in computer science. His current interests are in formal methods and especially in formal verification techniques. He is conducting research in the area of temporal logic and model checking, and particularly in the development of a formal verification technique for the X-machine formal model. He also has a special interest in Internet applications and has dealt with network programming and databases, especially using Java. He has published more than 10 papers, especially in the area of formal methods. He is currently a lecturer in the Computer Science Department at CITY College, an affiliated institution of the University of Sheffield in Greece.

M. Gheorghe holds a B.Sc. in mathematics and computer science and a Ph.D. in computer science, both from Bucharest University, Romania. Dr. Gheorghe has a long-standing interest in computational models, both on their theoretical
side as well as in their applications in various fields of computer science. He has published around 40 papers in journals and international conference proceedings on subjects such as the generative power of various formal grammars, closure properties of the families of languages generated by them, formal specifications in software engineering, software testing, computational models of agents based on grammar systems, and natural computing. Dr. Gheorghe is now a lecturer in the Department of Computer Science, Sheffield University, UK, where he teaches object-oriented methodologies for software analysis and design.

B. Henderson-Sellers is director of the Centre for Object Technology Applications and Research and a professor of information systems at the University of Technology, Sydney (UTS). He is the author of nine books on object technology and is well known for his work in OO methodologies (MOSES, COMMA and OPEN) and in OO metrics. Brian was the founder of the Object-Oriented Special Interest Group of the Australian Computer Society (NSW Branch) and is a frequent, invited speaker at international OT conferences. In July 2001, Prof. Henderson-Sellers was awarded a Doctor of Science (D.Sc.) from the University of London for his research contributions in object-oriented methodologies.

M. Holcombe, B.Sc., M.Sc., Ph.D., C.Eng., F.B.C.S., C.Math., and F.I.M.A., is a professor of computer science and the dean of the Faculty of Engineering at the University of Sheffield, UK. He conducts research in the following areas: software and systems engineering, software testing, formal methods in systems engineering, formal specification and test generation for software and systems, requirements engineering, specification and analysis of hybrid systems, theoretical computer science, algebraic theory of general machines, formal models of user behaviour and human-computer interface design, visual formal specification languages and visual reasoning, biological systems, biocomputing and the computational modelling of cellular processing, metabolic systems theory, computational models of immunological systems, and developmental modelling of plant growth. He has published more than 80 papers and several books on theoretical computer science, software engineering and computational biology.

Dimitar Kazakov is a lecturer in machine learning at the University of York. He graduated from the Czech Technical University, Prague, with a degree in control engineering in 1993, and received a Ph.D. in artificial intelligence and
biocybernetics from the same university in 2000. Dr. Kazakov's main research contributions are at the intersections of machine learning, natural language, and multiagent systems, with a particular interest in unsupervised acquisition of language and the emergence of communication in societies of agents. In 2002, Dr. Kazakov was a co-chair of the second AISB symposium on adaptive agents and multiagent systems in London. The author's home Web page is http://www-users.cs.york.ac.uk/~kazakov/.

P. Kefalas is the vice principal at CITY College, Thessaloniki, Greece. He holds a M.Sc. in artificial intelligence and a Ph.D. in computer science, both from the University of Essex, UK. He has conducted research in parallel logic programming and search algorithms in artificial intelligence. He has published around 40 papers in journal and conference proceedings and co-authored a Greek textbook in artificial intelligence. He is currently involved in investigating the applicability of formal methods for specifying, verifying and testing agent systems. He is currently a reader in the Computer Science Department of CITY College, and teaches logic programming, artificial intelligence and intelligent agents.

Daniel Kudenko is a lecturer in the Artificial Intelligence Group at the University of York. He received a Ph.D. in machine learning in 1998 from Rutgers University, NJ, USA. Dr. Kudenko has been a visiting researcher at the German Research Center for Artificial Intelligence (DFKI). Dr. Kudenko's interests include machine learning, multiagent systems, and knowledge representation. His current research focuses on reinforcement learning of coordination and distributed inductive learning. Dr. Kudenko has co-chaired the first and second AISB Symposium on adaptive agents and multiagent systems, and co-edited the AISB journal special issue on agent technology. Furthermore, he has served on the program committee of ICML '00 and CIA '02.

Dickson Lukose is currently a consultant knowledge engineer with Mindbox Inc., USA, where he holds the position of principal knowledge engineer. He is also a member of the Australian Computer Science National Committee for Artificial Intelligence and Expert Systems (ACS-AIES). He holds a Ph.D. (computer science) from Deakin University, Australia, and was a Leverhulme Research Fellow at Loughborough University, UK. His main research interest is in automated planning and reasoning, conceptual structures, knowledge
engineering, affective computing, and commercial applications of artificial intelligence-based techniques in the financial, utility, and transportation industries.

Luc Moreau is a reader in the Department of Electronics and Computer Science at the University of Southampton, UK. He obtained a Ph.D. in computer science in 1994 from the University of Liège, Belgium. He has been conducting research on a range of distributed algorithms, including reference counting, directory services, and message routing for mobile agents. His investigation covers the spectrum of software engineering: design, specification, proof of correctness, implementation, performance evaluation, and application. He is the chief architect of SoFAR, the SOuthampton Framework for Agent Research.

Alfredo Moscardini has been a professor of mathematical modelling at the University of Sunderland, UK, for 10 years. His research includes cybernetics, system dynamics and their application to economics. He is also leader of the Research Group in Decision Support Systems at the university. Recently he has been working with colleagues in the area of neural networks. He is presently leading a computational intelligence team and is responsible for the creation of a new master’s course in this area. He has been working for 15 years with universities in Eastern Europe and is responsible for introducing many of these ideas in Bulgaria, Estonia and Ukraine.

José Neves received a D.Eng. in chemical engineering from the University of Coimbra, Portugal, in 1976, an M.Sc. in software development and analysis from the Computer Science Department at Heriot-Watt University in Edinburgh, Scotland, UK, in 1981, and a Ph.D. from the same department in 1983. He is now a full professor in the Computer Science Department of the School of Engineering at the University of Minho and a research member of the Computer Engineering and Artificial Intelligence Research Group (Algoritmi). His research interests focus on intelligent agents and learning, logic programming, and knowledge representation and reasoning. He has published several articles in national and international conferences and workshops, as well as in well-known journals. He is a member of the American Association for Artificial Intelligence (AAAI) and of the Portuguese Association for Artificial Intelligence (APPIA). His web page is available at http://www.di.uminho.pt/~jneves.
Paulo Novais received a D.Eng. diploma in computer science and systems engineering from the University of Minho, Portugal, in 1992, and an M.Sc. degree in computer science from the School of Engineering at the University of Minho in 1998, and is now finishing his Ph.D. at the Computer Science Department of the School of Engineering at the University of Minho. Since 1996, he has been an assistant in the Computer Science Department of the School of Engineering at the University of Minho and a research member of the Computer Engineering and Artificial Intelligence Research Group (Algoritmi). His research interests focus on intelligent agents, electronic commerce, logic programming, knowledge representation and reasoning, and case-based reasoning. He has published several articles in national and international conferences and workshops. He is a member of the Computing Engineering Society (Engineering Society/Portugal) and a member of the Portuguese Association for Artificial Intelligence (APPIA). His web page is available at http://www.di.uminho.pt/~pjn.

Robert E. Smith is director of the Intelligent Computing Systems Centre at the University of the West of England. Dr. Smith leads ongoing research efforts in evolutionary algorithms, particularly evolutionary computation theory, evolving network agents, and cooperative computation. He has also conducted research on the application of neural networks. He has authored 18 journal articles, 7 invited book chapters, and more than 40 conference papers on these subjects. He is a former associate professor of aerospace engineering at the University of Alabama. He has conducted research projects for the US Army Strategic Defense Command, the Center for Nonlinear Studies, Los Alamos National Laboratory, Oak Ridge National Laboratories, NASA, Boeing, NSF, British Aerospace and British Telecom. He is a former associate editor of the IEEE Transactions on Evolutionary Computation and a current associate editor of the journal Evolutionary Computation.

Mark Toleman is currently an associate professor of information systems in the Faculty of Business at the University of Southern Queensland, Australia. He is deputy chair of the USQ academic board and chair of its course review committee. He is also an associate academic with the Software Verification Research Centre at the University of Queensland. Dr. Toleman is currently Queensland state chair of the Computer-Human Interaction Special Interest Group of the Ergonomics Society of Australia. He holds a Ph.D. (computer science) from the University of Queensland, an M.Sc. (mathematics) from James Cook University, and a Grad.Dip.Inf.Proc. and B.App.Sc. (mathematics) from
the Darling Downs Institute of Advanced Education. His main research interests include the usability of computer-aided software engineering (CASE) tools, software agents, e-commerce issues, programming language issues, and web aesthetics.

Hans Weigand studied computer science and mathematics at the Free University of Amsterdam, The Netherlands, where he received a Ph.D. in 1989. Since then, he has been working in the Faculty of Economics at Tilburg University, currently as an associate professor in computer science. His research interests include the language/action perspective, formal models of communication, agents, and electronic commerce. He has been involved in several ESPRIT projects with industry, one of which was aimed at supporting negotiation and contracting in e-commerce (MEMO).

Norliza Zaini is a research assistant and Ph.D. student under the supervision of Dr. Luc Moreau in the Electronics and Computer Science Department, University of Southampton, UK. She received her B.Eng. in computer systems engineering from the University of Kent at Canterbury in 1999. Before joining the University of Southampton, she worked as a systems analyst at Multimedia University, Malaysia.
Index

A
Abstract Factory Pattern 127
active learning 5
activity scheduling 173
adaptation 119
affective space 99
agency compiler 52
agent 69
agent communication language (ACL) 56
agent interaction protocol 162
agent personalization 166
agent societies 194
agent-oriented methodology 161, 193
agent-oriented software engineering (AOSE) 139
agent’s environment 166
agents’ roles 166
agreement 141, 145
animation 198
any-time algorithms 17
argument-based negotiation 140
argumentation 139
artificial life 28
autonomous agents 120
autonomous behavior 193
autonomous components 161
autonomous entities 192
autonomy 51

B
background knowledge 8
Baldwin effect 14
batch learning 5
BDI architecture 71
belief revision 176
Bridge Pattern 127
building blocks 123
business process management 177

C
Cassiopeia method 81
close loop machine learning (CLML) 6
cognition 28
collective foraging behavior 84
commitment management 172
communicating X-machine 82
communication 12
communication language 194
communication protocol 162
component-based development 161
computational grid 52
concept language 3
conceptual architecture 163, 170, 182
concurrent METATEM 71
conflict simulation (CS) 17
contract net protocol 90
contract nets 172
control architecture 170, 175, 185
CTL 77

D
Darwinian evolution 14
delegation 145
delegation strategy 168
deliberative reasoning 173, 184
DESIRE framework 71
distributed inductive learning 15
Distributed Information Management (DIM) 50
drives 28

E
EC-specific 124
electronic commerce (EC) 138
emergent properties 166
emotion blending 109
emotional state decay 110
Emotionally Motivated Artificial Intelligence (EMA) 100
endpoint 55
engineering methodologies 194
EvoAgent 129
evolution 14
evolutionary computation (EC) 119
explanation-based learning 7
Extended Logic Programming (ELP) 138

F
finite state machines 70
formal methods 70

G
generic framework 193
genetic programming (GP) algorithm 214
genetic algorithms (GA) 214
goal-driven process 177
gratitude 141, 145

H
heterogeneity 13
heterogeneous team 13
hierarchies 196
homogeneous team 13
human social phenomena 193

I
implicit parallelism 124
incremental learning 5
inductive learning 5
intelligent agents 161
intelligent multiagent systems 161
intelligent skills 215
interaction rules 194

K
k-armed bandit 122
KQML parsers 90
Kripke structure 76

L
Lamarckian evolution 14, 15
language bias 3
learning bias 3
learning strategies 175

M
machine learning (ML) 2
markets 196
mediation 141
memes 123
memetic algorithms 123
minimal description length (MDL) 4
model checking 76
modeling 70
motivations 28
multiagent system (MAS) 119, 193, 214
multicast mode 58
multidimensional emotional state 103
multiple single-agent learning 10

N
negotiation 138
networks 197

O
object language 3
object orientation 161
Occam’s razor 4
ontology compiler 52
OPEN (Object-oriented Process, Environment and Notation) 161
OPEN process framework 161
organization-oriented 194
organizational coordination models 193
organizational perspective 193

P
performance knowledge 169
performatives 56
Petri Nets 70
plan body 173
platform independent agent system 119
preference bias 4
priorities 145
proactivity 51
Procedural Reasoning System (PRS) 71

Q
Q-learning 2, 6
query_if 56
query_ref 56

R
reactive agent 74
reactive reasoning 174, 185
reactivity 51
registration 60
registry agent 60
reinforcement learning (RL) 2, 5
reproductive plan 121
roles 194

S
search bias 4
security policy for agents 171
self-interested agents 121
self-organization 119
simulated ecosystem 20
social ability 51
social framework 193
social multiagent learning 10
software agents 192
Southampton Framework for Agent Research (SOFAR) 51
startpoint 55
statecharts 70
strategic planning 144
subscription 60
subsumption architecture 81
supervised learning (SL) 2
supported protocols 55
symbolic model checking 76
system behavior 121

T
task selection 176
task-driven process 177
temporality 145
testing 70

U
UML 71
unregister 56

V
verification 70
virtual marketplaces (VMs) 141
virtual organizations 141
visitor pattern 128

W
W-method 78
weak agency 51

X
X-machine 71
X-machine Definition Language (XMDL) 90
XmCTL 78