
Interaction and Communication among Autonomous Agents in Multiagent Systems

A dissertation presented by

Nicoletta Fornara

Supervised by

Prof. Marco Colombetti
Dr. Luca Maria Gambardella

Submitted to

School of Communication Sciences, University of Lugano

for the degree of

Ph.D. in Communication Sciences

June 2003


Board:

Thesis advisor: Marco Colombetti
Co-advisor: Luca Maria Gambardella
Reviewers: Carles Sierra, Gianpaolo Cugola

The work described in this thesis was carried out at Università della Svizzera italiana, Lugano, Switzerland, and at IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale), Lugano, Switzerland.

Copyright © 2003 by Nicoletta Fornara


Abstract

The main goal of this doctoral thesis is to investigate a fundamental research topic within the Multiagent Systems paradigm: the problem of defining open interaction frameworks that enable communicative interactions among agents in open, heterogeneous, and dynamic systems. The aim is to realize interaction systems where multiple agents can enter and leave dynamically, where no assumptions are made about the internal structure of the interacting agents, and that are defined using a method that enables agent designers to develop a single artificial agent able to interact with different systems designed by different organizations.

This research topic has received much attention in the past few years. In particular, the need to realize applications where artificial agents can interact, negotiate, and exchange information, resources, and services, for instance in electronic commerce or information retrieval applications, has become more and more important with the advent of the Internet.

I started my studies on multiagent interaction systems, and on their use to realize electronic commerce applications, by developing a trading agent that took part in an international on-line trading game: the First Trading Agent Competition (TAC). During the design and development of the trading agent, some crucial and critical problems related to the TAC interaction system emerged: first, the problem of accurately understanding the rules that govern the different auctions present in the game, and second, the problem of understanding the meaning of the numerous messages that registered trading agents can use to interact with the system. Another, more general, problem that became clear during the design phase is that the agent's internal structure is strongly determined by the peculiar interface of the interaction system: the agent has to use a set of pre-defined methods to interact with the TAC server, and consequently, without changes to its algorithms, it could not take part in any other competition, even one with slightly different rules, nor communicate with any other interaction system present on the Web. Furthermore, the trading agent could not exploit opportunities, handle unexpected situations, or reason about the rules of the various auctions, since it is unable to understand the meaning of the exchanged messages. All those problems bear out the need for a standard, commonly accepted way to define open interaction systems.

The most important component of every interaction framework, as is also remarked by the philosophical studies on human communication presented in Speech Act Theory, is the institution of language. Following this approach, I started to investigate the problem of defining a standard, commonly accepted semantics for Agent Communication Languages (ACL).

Such a problem has received much attention in recent years, but the solutions proposed so far are at best partial, and are considered unsatisfactory by a large number of specialists. In particular, most current proposals are unable to support verifiable compliance to standards and to make agents responsible for their communicative actions. Furthermore, those proposals make the strong assumption that every interacting agent can be modelled as a Belief-Desire-Intention (BDI) agent.


What is required is an approach focused on externally observable events, as opposed to the unobservable internal states of agents. Focusing on external events means taking into account the "social framework" within which agents interact. Following the Speech Act Theory approach to human communication, which views language use as a form of action, I propose an operational specification for the definition of a standard Agent Communication Language based on the notion of social commitment. In this proposal, the meaning of basic communicative acts is defined as the effect that sending the message has on the social relationship between the sender and the receiver, described through operations on an unambiguous, objective, and public "object": the commitment. The adoption of the notion of commitment is crucial to stabilize the interaction among agents, to create expectations about other agents' behavior, and to enable agents to reason about their own and other agents' actions. Moreover, since this approach is inspired by speech act studies, it makes it possible to treat human and artificial agent communication in a uniform way, a crucial aspect for successful mixed interactions.

The proposed Agent Communication Language is verifiable, that is, it is possible to determine whether an agent is behaving in accordance with its communicative actions; its semantics is public, that is, any third-party agent witnessing the message flow is able to draw similar inferences from the interaction, and objective, in that everybody attributes the same meaning to the exchanged messages. The proposed semantics is independent of the agents' internal structure, flexible and extensible so that agents can cope with various and new situations, simple enough to be correctly used by agent designers, and yet sufficiently expressive.

A complete operational specification of an interaction framework able to support interactions among artificial agents using the proposed commitment-based Agent Communication Language is presented. In particular, some sample applications showing how to use the proposed framework to formalize interaction protocols available in interaction systems are reported. A list of soundness conditions to test whether a protocol, or a general interaction, is sound is proposed. These conditions express constraints on the state of the interaction system at various stages of the conversation with regard to the meaning of the exchanged messages. The conversation protocols analyzed are the protocol of proposals and the protocol of offers, which are widely used in electronic commerce applications.

To complete this research work, a more complex interaction protocol, the English auction protocol, a protocol actually used in electronic commerce systems and adopted in the TAC game, has been successfully formalized with the proposed framework. These positive results make us optimistic about the possibility of adopting the proposed framework to formalize many protocols that are actually used in the interaction systems operating on the Web.


Dedicated to all those who have been close to me during these years.

Acknowledgments

I would like to thank the Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), the Università della Svizzera italiana (USI), the Scuola Universitaria Professionale della Svizzera Italiana (SUPSI), and the European PLATFORM project ("Computer-controlled freight platforms for a time-tabled rail transport system", 4th Framework Programme, DG VII, European Commission, with the support of the Swiss OFES Ufficio Federale della Formazione Professionale e della Tecnologia, project n. 97.0315) for their support of my research activity.

I would like to thank all IDSIA researchers for their suggestions, encouragement, and collaboration. In particular I would like to mention Luca, who encouraged me to start my research activity and engaged me to work in the PLATFORM project, Monica for her contagious cheerfulness, and Ivo, Giovanni, Andrea, Doug, and Braham for having read parts of this thesis and of my papers.

I would like to thank my thesis advisor Marco, who has worked hard with me during these years, teaching me many important things, the most important of which is the love of teaching. I would like to thank all the USI Ph.D. students who shared these years of hard work with me, and all the professors, assistants, and secretaries at USI who have collaborated with me during these years.

Last but not least, I would like to express my gratitude to my parents, Sergio and Graziella, to my husband Andrea, to my sister Patrizia and her husband Fabio, to my nephew Daniele and my little niece Elisa, to my grandmother Caterina, to Andrea's family, to Erika, a special friend who gave me hospitality during these years, and to all the friends and relatives who made these years full of love and happiness.


Contents

1 Introduction 1

1.1 Goal of the Thesis . . . 2

1.2 Current Open Problems and Principal Contributions . . . 2

1.3 Outline of the Thesis . . . 5

2 Artificial Agents 7

2.1 Definition and Main Characteristics . . . 7

2.1.1 Autonomy . . . 9

2.1.2 Rationality . . . 11

2.1.3 Interoperability . . . 12

2.2 Functional Architectures of Intelligent Agents . . . 13

2.2.1 Reactive Agents . . . 13

2.2.2 Deliberative Agents . . . 14

2.3 The Belief-Desire-Intention Model of Agency . . . 15

2.4 Conclusions . . . 18

3 Multiagent Systems and Agent Societies 21

3.1 Systems with Multiple Agents . . . 22

3.1.1 Distributed Problem Solving . . . 24

3.1.2 Multiagent Systems . . . 25

3.1.3 Agent Societies . . . 26

3.2 Coordination of Systems with Multiple Agents . . . 29

3.2.1 Cooperation . . . 30

3.2.2 Competition . . . 31

3.2.3 Coordination in Open Systems . . . 33

3.3 Conclusions . . . 33

4 Automated Negotiation 35

4.1 Negotiation Protocols . . . 37

4.1.1 Contract Net Protocol . . . 38

4.1.2 Auctions . . . 39

4.2 Negotiation in Electronic Commerce . . . 43

4.3 Agent's Decision Making Model for Simultaneous Auctions . . . 45


4.3.1 Trading Agent Competition . . . 46

4.3.2 Description of the Trading Agent Nidsia . . . 49

4.3.3 Evaluation Phase, Experiments and Results . . . 53

4.4 Conclusions . . . 57

5 Agent Communication 59

5.1 Human Communication . . . 59

5.1.1 Speech Act Theory . . . 63

5.2 Agent Communication Languages . . . 66

5.2.1 Syntax . . . 67

5.2.2 Lexicon . . . 68

5.2.3 Semantics: Different Approaches . . . 69

5.3 Conclusions . . . 72

6 A Commitment-Based Agent Communication Language 73

6.1 Main Concepts . . . 75

6.1.1 Social Commitment . . . 75

6.1.2 Temporal proposition . . . 78

6.2 Technical Specification . . . 78

6.2.1 The Commitment Class . . . 80

6.2.2 The Temporal Proposition Class . . . 83

6.2.3 Actions . . . 84

6.2.4 Update Rules . . . 85

6.3 Definition of Main Speech Acts . . . 86

6.3.1 Assertives . . . 86

6.3.2 Directives . . . 87

6.3.3 Commissives . . . 89

6.3.4 Declarations . . . 90

6.3.5 Proposals . . . 91

6.3.6 Offers . . . 91

6.4 Important Use of the Proposed Semantics . . . 92

6.5 Samples of Application . . . 93

6.5.1 Query . . . 94

6.5.2 Proposal . . . 95

6.5.3 Offer . . . 95

6.6 Conclusions . . . 100

7 A Method for the Definition of Interaction Protocols 103

7.1 Definition of Interaction Protocols . . . 104

7.2 Soundness Conditions . . . 107

7.3 The English Auction Protocol . . . 108

7.3.1 The Environment . . . 108

7.3.2 Communicative Acts and Guards . . . 110


7.3.3 Interaction Diagram . . . 111

7.4 Conclusions . . . 112

8 Conclusions 115

8.1 Contributions . . . 115

8.2 Future Works . . . 117

A TAC game Auction Types 119

A.1 Flights . . . 119

A.2 Hotel Rooms: English Ascending Auction, Mth Price . . . 119

A.3 Entertainment Tickets: Continuous Double Auction . . . 120

B Truth-Tables for Temporal Proposition Objects 121

Bibliography 123


Chapter 1

Introduction

The research topic of this thesis concerns artificial agents and their interaction and communication within multiple agent systems. Given that studies on artificial agents are quite recent, in the Artificial Intelligence research literature there is not yet a generally accepted definition of intelligent agent, mainly because some of its distinguishing attributes may be more or less important depending on the domain of application of the agents.

Taking inspiration from the research literature on artificial agents, I think it is possible to describe an agent as a computer program that operates continuously in a specific environment to perform a specific task and that has the following crucial characteristics:

autonomy, rationality, and interoperability. Regarding interoperability in particular, it is important to remark that, just as the capability of communicating with other agents using a language has had a crucial role in the evolution of human beings, it will play a crucial role in the evolution of the capabilities of artificial agents. Such a social ability is much more complex than the ability to exchange binary information: it is the capability to exchange messages in an expressive Agent Communication Language (ACL).

Multiagent Systems (MAS) and, more recently, Agent Societies are research areas within Distributed Artificial Intelligence (DAI) that study systems consisting of multiple interacting agents. There are many reasons and benefits for studying and developing systems with multiple agents. A first reason is to study distributed approaches to certain types of problems. Another reason is that there are important advantages in developing systems composed of multiple self-interested autonomous agents acting as "individuals", which very often represent real-world parties, rather than as "parts" of a whole system. The opportunity for artificial agents to efficiently retrieve, filter, and exchange information, as well as to exchange knowledge, services, products, and expertise with other agents, or even with humans, enables them to solve problems that they cannot solve alone. Furthermore, the need to realize this type of application, where artificial agents can interact and negotiate, has become more and more important in the last few years with the advent of the Internet. In particular, electronic commerce applications such as on-line auctions or automated negotiations are becoming more and more popular on the Web.

In fact, given the intrinsic complexity of evaluating which product is the best to purchase among all the products available on various web sites, and the complexity of comparing


their prices and characteristics and of monitoring price changes, especially when an agent wants to buy bundles of interdependent products, the possibility of engaging intelligent software agents to perform such tasks on our behalf is crucial for the success of such applications.

1.1 Goal of the Thesis

This thesis aims at describing and investigating two fundamental subjects of research within the Multiagent Systems paradigm: the problem of designing decision-making models for trading agents that operate in electronic commerce applications, and the more general problem of defining an application-independent method to formalize open interaction frameworks in which, as a specific case, electronic commerce interactions can actually take place. In fact, the need to realize open, heterogeneous, and dynamic interaction systems has become more and more crucial with the advent of the Internet. In such systems multiple agents can enter and leave dynamically, no assumptions are made about the internal structure of the interacting agents, and the systems are defined using a method that enables agent designers to develop a single artificial agent that can interact with different systems designed by different organizations, as sketched in Figure 1.1.

First of all, I aim to give a schematic description of the notion of agent and of existing studies within Multiagent Systems research. Then I will analyze in detail two fundamental problems: the problem of designing agents for automated negotiation, and the problem of enabling them to negotiate by means of a suitable open interaction system. In particular, with respect to this second point, I aim to study, also through the analysis of existing approaches, the problem of defining a standard semantics for Agent Communication Languages. In fact, drawing inspiration from philosophical studies on human communication [143], the institution of language is the fundamental component of every interaction framework.

Starting from the identification of the crucial characteristics that an Agent Communication Language for open interaction systems has to satisfy, I propose an operational definition of an Agent Communication Language based on the notion of social commitment. Finally, my purpose is to test the proposed framework by using it to define an interaction system for electronic commerce interactions, and in particular to define one of the most popular interaction protocols: the English Auction protocol.

1.2 Current Open Problems and Principal Contributions

I started my studies on trading agents and multiagent interaction systems by taking part in an international on-line competition: the Trading Agent Competition (TAC) game.

The most interesting aspect of that competition is the complexity of devising a successful policy to buy bundles of complementary and substitutable products in parallel auctions of different types. This part of the work led to two main contributions: one is related to the trading agent itself, that is, to its performance, its limits, and its advantages; the other,



Figure 1.1: Heterogeneous artificial agents interact by means of open interaction systems.

that is more general, is related to the interaction system available for the competition.

In fact, during the design and development of the trading agent, some crucial and critical problems related to the TAC interaction system emerged: first, the problem of accurately understanding the rules that govern the different auctions present in the game, and second, the problem of understanding the meaning of the numerous messages that registered trading agents can use to interact with the system. Another, more general, problem that became clear during the design phase is that the agent's internal structure is strongly determined by the peculiar interface of the interaction system: the agent has to use a set of pre-defined methods to interact with the TAC server, and consequently, without changes to its algorithms, it could not take part in any other competition, even one with slightly different rules, nor communicate with any other interaction system present on the Web. Furthermore, the agent could not exploit opportunities, handle unexpected situations, or reason about the rules of the various auctions, since it is unable to understand the meaning of the exchanged messages. All those problems bear out the need for a standard, commonly accepted way to define open interaction systems.

An open interaction system has to enable communicative interactions among self-interested agents in open, heterogeneous, distributed, and dynamic systems. That is, it has to realize interaction systems where multiple agents distributed around the world can enter and leave dynamically, where no assumptions are made about the internal structure of the interacting agents, and that allow the same agent to interact with different systems designed by different organizations.

The most important component of every interaction framework, as is also remarked by the philosophical studies on human communication presented in Speech Act Theory, is the


institution of language. Following this perspective, I started to investigate the problem of defining a standard, commonly accepted semantics for Agent Communication Languages.

That problem has received much attention in recent years, but the solutions proposed so far are at best partial, and are considered unsatisfactory by a large number of specialists. In particular, most current proposals are unable to support verifiable compliance to standards and to make agents responsible for their communicative actions. Furthermore, those proposals make the strong assumption that every interacting agent can be at least modelled as a Belief-Desire-Intention (BDI) agent.

What is required is an approach focused on externally observable events, as opposed to the unobservable internal states of agents. Focusing on external events means taking into account the "social framework" within which agents interact. Taking inspiration from Speech Act Theory [6, 140], an operational specification for the definition of a standard Agent Communication Language based on the notion of social commitment will be presented. In that proposal, the meaning of basic communicative acts is defined as the effect that sending the message has on the social relationship between the sender and the receiver, described through operations on an unambiguous, objective, and public "object": the commitment. The adoption of the notion of commitment as an external object is crucial to stabilize the interaction among agents, to create expectations about other agents' behavior, and to enable agents to reason about their own and other agents' actions. Moreover, since this approach is inspired by speech act studies, it makes it possible to treat human and artificial agent communication in a uniform way, a crucial aspect for successful mixed interactions. These basic communicative acts form a basic library that can be used to express the meaning of the exchanged messages of different interaction systems.
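To make this concrete, the following minimal Python sketch illustrates how a communicative act can be given meaning as an operation on a public commitment object. It is an illustrative sketch only: the class, field, and function names are assumptions of this example, and the thesis's actual operational specification (the Commitment and Temporal Proposition classes of Chapter 6) is considerably richer.

```python
from dataclasses import dataclass
from enum import Enum, auto


class CommitmentState(Enum):
    UNSET = auto()      # proposed but not yet accepted
    PENDING = auto()    # accepted, not yet fulfilled or violated
    FULFILLED = auto()
    VIOLATED = auto()
    CANCELED = auto()


@dataclass
class Commitment:
    """A public, objective record binding a debtor to a creditor."""
    debtor: str      # the agent responsible for the content
    creditor: str    # the agent to whom the commitment is made
    content: str     # the proposition or action description committed to
    state: CommitmentState = CommitmentState.UNSET


def inform(store, sender, receiver, proposition):
    # The meaning of an 'inform' act is modelled here as the creation of a
    # pending commitment of the sender, toward the receiver, to the truth
    # of the proposition.
    store.append(Commitment(sender, receiver, proposition,
                            CommitmentState.PENDING))
```

Because the commitment store is public, any third-party observer of the message flow can replay the same operations and reconstruct the same interaction state, which is precisely the verifiability property argued for here.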

In particular, the proposed Agent Communication Language has some very important characteristics: it is verifiable, that is, it is possible to determine whether an agent is behaving in accordance with its communicative actions; its semantics is public, that is, any third-party agent witnessing the message flow is able to draw similar inferences from the interaction, and objective, in that everybody attributes the same meaning to the exchanged messages. The proposed semantics is external with respect to the agents' internal structure, flexible and extensible so that agents can cope with various and new situations, simple enough to be correctly used by agent designers, and yet sufficiently expressive. Finally, it respects the autonomy of agents: only social constraints are imposed on their behavior.

A complete formal operational specification of an interaction framework able to support interactions among artificial agents using the proposed commitment-based Agent Communication Language is presented. In particular, some sample applications showing how to use the proposed framework to formalize interaction protocols are reported. The protocols analyzed are widely used in electronic commerce applications: the protocol of proposals and the protocol of offers. Furthermore, a list of soundness conditions to test whether a protocol, or a general interaction, is sound is proposed. These conditions express constraints on the state of the interaction system at various stages of the conversation with regard to the meaning of the exchanged messages.


To complete this research work, an application-independent method to formalize interaction protocols is presented. It consists of three components: the definition of the meaning of every communicative act using the proposed ACL, the definition of preconditions for the performance of the communicative acts specific to the interaction protocol analyzed, and finally the interaction diagram of the protocol. This method will be used to successfully formalize a more complex interaction protocol, the English Auction protocol used in the Trading Agent Competition game discussed in Chapter 4. The achievement of this result demonstrates that the proposed semantics can be successfully used to formalize interaction protocols, for example the electronic auctions that are widely employed in electronic commerce applications.

1.3 Outline of the Thesis

This thesis is organized as follows. In Chapter 2 the notion of agent is introduced, focusing in particular on its constituent characteristics and on the principal functional architectures. In Chapter 3 existing studies and the main research topics about systems with multiple interacting agents are presented, with particular attention to methods of coordination among multiple agents. Chapter 4 addresses the problem of automated negotiation, going into the details of the design and testing of a trading agent that took part in the international Trading Agent Competition. In Chapter 5 I analyze in detail the problems related to the definition and use of a particular method of coordination among agents: Agent Communication Languages. In Chapter 6 a new proposal for an operational definition of the semantics of agent communication languages, based on the notion of social commitment, is presented and discussed. In Chapter 7 an application-independent method for the definition of interaction protocols for open interaction systems based on the proposed ACL is presented, together with a concrete example: the formalization of the English Auction Protocol, a protocol used in the TAC game, using the model proposed in the thesis. Finally, Chapter 8 concludes this thesis with comments on what has been done and outlines interesting directions for future work.


Chapter 2

Artificial Agents

This chapter introduces the notion of an "agent", which is very important in Artificial Intelligence (AI). In fact, it is possible to describe Artificial Intelligence as the subfield of Computer Science that aims to construct agents that exhibit intelligent behaviour. This chapter is organized as follows. Section 2.1 describes what an artificial agent is, how it is possible to distinguish it from a simple program by means of its distinctive features, and how it relates to its environment. In Section 2.2 I present a schematization of agents' functional architectures, distinguishing between two broad categories: reactive agents, which react immediately to changes in the environment's state, and deliberative agents, which reason about the expected effects of their actions on the environment. In Section 2.3 particular relevance is given to the Belief-Desire-Intention (BDI) model of agency, a very important model in agent communication language studies. Finally, Section 2.4 draws the conclusions of this chapter. For a more detailed introduction to artificial agents, the book "Artificial Intelligence: A Modern Approach" by Russell and Norvig [131] is recommended.

2.1 Definition and Main Characteristics

An artificial agent is usually a computer program devised to obtain an entity equivalent or similar to a human being or an animal that truly exists in the world. The main characteristics of human or animal beings are that they are able to live in an unpredictable environment, to act, and in general to interact with that environment, with the aim of reaching some goals, first of all to survive.

Recently there has been growing interest in artificial agents in different fields, for example in data communications and concurrent systems, in robotics, in user interface design, in electronic commerce, and in information retrieval. Artificial agents are finding a wide range of applications. For example, in 1998 Deep Space 1 was the first space probe to have an autonomous agent-based control system [111]. Another example is the increasing use of agents in Internet-based electronic commerce, where agents autonomously buy and sell goods on behalf of a user [115, 77]. The trading agent Nidsia, described in Chapter 4, is an example of a software agent for electronic commerce [63].

As Russell and Norvig state in their book [131], there is not yet a generally accepted


definition of intelligent software agent in Artificial Intelligence, mainly because some attributes are more or less important depending on the domain of application of intelligent agents. The literature mostly gives only intuitive definitions describing some important characteristics that artificial agents must have. Below are some of these definitions, proposed by the main researchers in this field, which reflect the intuitive one expressed above.

"An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors", Russell and Norvig [131, p.31].

"An agent is a computer system that is situated in some environment, and that is capable of autonomous actions in the environment in order to meet its design objectives", Wooldridge [177, p.29].

An agent is a computational entity that can be viewed as perceiving and acting upon its environment, that is autonomous, and that operates flexibly and rationally in a variety of environmental circumstances, Weiss [172, p.1].

"An agent is a persistent computation that can perceive its environment and reason and act both alone and with other agents. The key concepts in this definition are interoperability and autonomy", Singh [151, p.40].

Artificial agents are computer programs operating continuously and autonomously in a specific environment in order to carry out a predefined task, Colombetti [31].

It is interesting to notice that some important key concepts recur in the above definitions: autonomy, rationality, and interoperability. Each contributes to determine what an agent is, and they will be discussed in more detail below. Other attributes may be considered important depending on the application of the agent; an example is mobility, i.e. the ability of an agent to move around an electronic network.

An agent's behaviour is determined by its program, that is, the mapping from the current percept and the current state, which represents the previous history of the system, to actions. The behaviour also depends on the agent's physical architecture, that is, the computing device where the program runs. The architecture makes the percepts available to the program and transfers the program's actions to the effectors. Various functional architectures of agent programs will be discussed in Section 2.2.

The various possible types of environments are also important when investigating artificial agents. Russell and Norvig suggest the following classification of environment properties [131]. An environment is accessible if the agent's sensors give it access to the complete state of the environment, or at least to all the aspects that are relevant to the choice of actions; otherwise the environment is inaccessible. An environment is deterministic if its next state is completely determined by the current state and the actions selected by the agents. If the environment is inaccessible it may appear nondeterministic. If the environment can change while an agent is deliberating, it is called dynamic; otherwise


it is static. Finally, if there is a limited number of distinct, clearly defined percepts and actions, the environment is called discrete in the space of states, as for example in chess; otherwise it is called continuous.

A generic agent, determined by its program and by its physical architecture, which perceives and acts on the environment, is schematized in Figure 2.1.

2.1.1 Autonomy

The term autonomy or autonomous may refer to at least three different important aspects of artificial agents.

In its most common acceptation, the adjective autonomous is used to mean that, to some extent, agents have control over their behaviour. This means that their actions are determined by their own experience and that they are able to act without the intervention of humans or other systems. Thus an autonomous agent makes independent decisions that are under its own control and that are not driven by others. I think it is possible, metaphorically, to speak of this as the "freedom of the will" of agents with respect to their system designers. This type of autonomy is necessary in agent theory to make it possible not to tell an artificial agent how to do something, but to tell it only what to do.

Depending on its functional architecture (see Section 2.2) an artificial agent may exhibit different levels of autonomy with respect to its designer.

Reactive agents are very simple agents. They consist only of a program that maps each possible percept or percept sequence to the corresponding action that the agent has to carry out. They need built-in knowledge, which univocally determines their behavior. Actually these agents lack autonomy, and their big limitation is that they are not flexible: they operate successfully only if the assumptions about the environment made by the designer during the project phase hold.

Planning agents are more sophisticated. They have a more complex built-in knowledge about the set of actions that can be performed. This means that they know

Figure 2.1: An agent interacts with its environment.


the preconditions and the effects of their actions on the environment. They also have some knowledge about the mechanisms that govern the dynamic evolution of the environment. This kind of agent seems more autonomous than the previous one, but actually it can "only" choose which plan to execute among all the combinations of its allowed actions. Even if this number can be very high, it is still a finite number, and formally the agent cannot be considered truly autonomous.

A truly autonomous artificial agent may be obtained by providing it with the built-in knowledge previously described and with the powerful capability to learn. In this way its behavior is actually determined by its own experiences. This kind of agent may learn, for example, new preconditions and effects of its actions, the reward of each action, and so on. Examples of very successful learning techniques are reinforcement learning and neural networks. They can be used by artificial agents to build and continuously update their own model of the problem that they have to face. Importantly, some of these learning techniques have been proved to converge; this is the case, for example, for Q-learning [171], one of the simplest reinforcement learning algorithms.
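As an illustration of the kind of learning just mentioned, here is a minimal sketch of tabular Q-learning; the environment interface (reset, step, actions) is an assumption of the example, not something prescribed by the text.

```python
import random
from collections import defaultdict


def q_learning_episode(env, Q, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Run one episode of tabular Q-learning.

    `env` is an assumed interface: reset() -> state,
    step(action) -> (next_state, reward, done), plus a list `actions`.
    """
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy selection keeps some exploration
        if random.random() < epsilon:
            action = random.choice(env.actions)
        else:
            action = max(env.actions, key=lambda a: Q[(state, a)])
        next_state, reward, done = env.step(action)
        # the Q-learning update: move Q(s, a) toward the bootstrapped target
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state


Q = defaultdict(float)  # unseen (state, action) pairs default to 0.0
```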

However, the term autonomy usually has a slightly stronger meaning: it also refers to the quality of an agent of having its own goals and its own freedom of will with respect to other agents' requests. In this case the term autonomy reflects the idea of social freedom in human society. This type of autonomy is very important in multiagent system applications where the various agents are self-interested, as for example in electronic commerce. But when artificial agents form a society, or reflect the human one, their autonomy has to be limited, because it presents some drawbacks. In fact, it is important for agents to have the possibility to negotiate with others to achieve more complicated goals, but in order for the social system to keep working, it is very important that agents honor their commitments. A normative system is then necessary. There are also situations where agents decide to sacrifice their autonomy to collaborate or negotiate with other agents. One example is when the common goal of the agents is to achieve a global optimum, for instance to maximize the "social welfare" or to find a Pareto optimal solution (I will discuss these evaluation criteria of automated negotiation protocols in Chapter 4).

Another possible situation is when, in competitive applications, everybody may be better off if each agent negotiates an agreement with the others. For example, in the Prisoner's Dilemma game described in Table 2.1, both agents are better off under mutual cooperation than under mutual defection.

In the literature it is also possible to find the term design autonomy. It refers to the autonomy of the designers of artificial agents to develop an agent independently of other agents' designers or of the directives of super-parties. Design autonomy minimizes requirements on agent internal structure and on agent behaviour, thus promoting heterogeneity. Recently [151] this type of autonomy has become very important thanks to the advent of truly open interaction systems like the Internet. I will discuss its importance in the definition of a commonly accepted agent communication language in Chapter 5.


Table 2.1: The Prisoner's Dilemma game. Each cell reports the payoff of the row player followed by the payoff of the column player.

            cooperate   defect
cooperate   3,3         0,5
defect      5,0         1,1
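To make the table's logic explicit, the short sketch below, with the payoffs copied from Table 2.1, checks that defecting is each player's dominant strategy even though mutual cooperation pays both players more than mutual defection:

```python
# payoff[row_action][col_action] = (row player's payoff, column player's payoff)
payoff = {
    "cooperate": {"cooperate": (3, 3), "defect": (0, 5)},
    "defect":    {"cooperate": (5, 0), "defect": (1, 1)},
}

# Defection dominates: whatever the opponent does, the row player earns
# strictly more by defecting than by cooperating.
assert all(payoff["defect"][opp][0] > payoff["cooperate"][opp][0]
           for opp in ("cooperate", "defect"))

# Yet both players are better off under mutual cooperation than mutual defection.
cc, dd = payoff["cooperate"]["cooperate"], payoff["defect"]["defect"]
assert cc[0] > dd[0] and cc[1] > dd[1]
```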

2.1.2 Rationality

Here I will analyze rationality as an individual property of intelligent agents, although it is also possible to study collective rationality in cooperative multiagent systems or in teamwork, treating it as a property of the whole system. An intuitive description of the notion of a rational agent can be the following: a system whose actions make sense from the point of view of the information it possesses and on the basis of the goals or tasks for which it was designed [130]. A formal definition of the concept of a rational agent was proposed by Russell and Norvig in their book [131]: "For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has". Speaking about the term rationality, it is important to report that Simon, one of the fathers of Artificial Intelligence, in 1958 coined the terms "substantive rationality" and "procedural rationality" to describe the crucial difference between the question of what decision to make and the question of how to make it [150].

Determining what is rational depends on the definition of the degree of success, on what and how the agent is able to perceive and has perceived so far, and on the actions that the agent can perform. Many problematic questions are hidden in these notions: first of all, the problem of understanding and expressing the relationship between the performance measure and goals, or between the performance measure and utility; moreover, the question of where the knowledge of the agent comes from, which is related to the problem of rational learning; finally, the question of the capability of an agent to predict the effects of its actions thanks to some planning capabilities.

An important component of rationality is flexibility. As Wooldridge and Jennings state in [180], an autonomous agent is flexible if it is able to act in order to meet its design objectives in an uncertain environment. Flexibility can then be seen as the capacity to balance pro-activeness, that is, the capacity to exhibit goal-directed behaviour, and reactivity, that is, the capacity to respond to events that occur in the environment when these events affect the agent's goal or the agent's assumptions about it.

Generally speaking, it is also important to distinguish between rationality and omniscience. An omniscient agent knows the actual outcome of its actions and can act accordingly. But omniscience is impossible in reality, because the environment is usually uncertain, mainly because it has an inherent, partially unknown dynamics or because it is populated with other agents which can continuously change it with their actions.

In Artificial Intelligence there are roughly two main approaches to building the decision-making process by which artificial agents select their actions: the symbolic and the economic one.

The symbolic approach focuses on a model of rational decision making as practical reasoning of the kind that humans engage in every day. In this model agents have to realize some desires that represent their goals. This choice follows the cognitive approach rooted in psychological work, which treats agents as entities with beliefs, desires, and intentions (see Section 2.3). In this approach, intelligent reasoning is obtained using logic techniques, such as formal calculation, typically deduction.

Following the economic approach, an agent has to maximize a utility function. This view borrows from economic metaphors the idea of intelligent reasoning as adherence to the tenets of utility theory. Combining utility theory with probability theory, which is used to manage uncertainty and partial knowledge of the environment, von Neumann and Morgenstern founded Decision Theory in 1944 [168]. In Decision Theory a rational agent is one that chooses an action which maximizes expected utility, where expected utility is defined in terms of the actions available to the agent, the probability of certain outcomes, and the preferences the agent has with respect to these outcomes. In multiagent scenarios, where an agent has to interact with other agents, game theory is also a powerful predictive and analytic tool. An approach to solving sequential decision problems, where the agent's utility depends on a sequence of decisions, is dynamic programming, developed by Bellman in the late 1950s [10]. The agent for electronic commerce, Nidsia, that I developed to take part in an international trading agent competition [63] (see Chapter 4) is a utility-based agent that chooses its actions using decision theory techniques.
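The decision-theoretic rule just described is easy to state in code. The sketch below, with an assumed probability model mapping each action to an outcome distribution, selects the action with maximal expected utility:

```python
def expected_utility(action, outcome_probs, utility):
    """outcome_probs: action -> {outcome: probability}; utility: outcome -> float."""
    return sum(p * utility(o) for o, p in outcome_probs[action].items())


def rational_choice(actions, outcome_probs, utility):
    # a decision-theoretic rational agent: argmax over expected utility
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))
```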

Following Simon's studies [150], it is important to point out that perfectly rational agents do not exist. Physical mechanisms take time to process information and select actions; hence the behaviour of real agents cannot immediately reflect changes in the environment and will generally be suboptimal. In this case the term used to indicate the rationality of real systems is "bounded rationality".

2.1.3 Interoperability

The ability to interact, and in particular the capacity to communicate using a language, has played a crucial role in the evolution of human beings. The same capability, interoperability, promises to play an equally decisive role for artificial agents.

In fact, as I will discuss in more detail in Chapter 5, the opportunity for distributed and heterogeneous agents to exchange information, competences, services, products, etc., and the ability to negotiate, cooperate, or compete with other agents, and perhaps with humans, enables them to solve problems that cannot be solved alone and that intrinsically require interaction with other entities. It is important to underline that this social ability is much more complex than simply the ability to exchange binary information.

Genesereth and Ketchpel have gone so far as to equate agency with the ability of a system to exchange knowledge using an agent communication language: "software agents,


i.e. software "components" that communicate with their peers by exchanging messages in an expressive agent communication language" [71, p.48].

Interactions among artificial agents can take place indirectly, through the environment in which they are embedded (for example by observing one another or by carrying out actions that modify the environment), or directly, through a shared language. The importance of using a common shared language lies in its expressiveness. As I will discuss broadly in Chapter 5 about agent communication languages (ACLs), an ACL allows the exchange of complex information, such as goals, requests to carry out actions in a declarative form, or commitments to perform some other actions. In general an ACL lets agents have articulated conversations, that is, exchange task-oriented sequences of messages, for example to take part in a negotiation or an auction. Moreover, if a certain artificial language becomes a "standard", agents will be able to interact with many different applications, for example to take part in different parallel auctions present on the Internet.

2.2 Functional Architectures of Intelligent Agents

A first important distinction in agent studies is between robots, which usually operate in "physical" environments, and software robots, called softbots, which usually operate in "virtual" environments. An artificial environment may be complex, dynamic, and non-deterministic like a real one; in fact it is usually a simulation of its real counterpart or has important connections with it. For example, Nidsia, the trading agent for electronic commerce that will be presented in Chapter 4, is a softbot: it operates in an artificial environment constituted mainly by other trading agents, but may, in principle, sell and buy real goods in place of its owner.

As reported above, an artificial agent is essentially a permanent program that accepts percepts from the environment, makes decisions, and generates actions. On the basis of an agent's decision-making process it is possible to delineate the following typologies of agents.

2.2.1 Reactive Agents

This type of agent perceives the state of the environment and decides immediately on the corresponding action, as for example agents based on Brooks's subsumption architecture [19].

Look-up agents

The simplest possible agent is a look-up agent. This name comes from its internal look-up table, used by the agent to map every possible percept sequence to the appropriate action. Such an agent uses its memory to keep track of the entire percept sequence. But this type of agent is doomed to failure, because the table of all possible percept sequences in which to look up the actions quickly becomes bigger and bigger, and then intractable. This


type of agent has no autonomy, because the calculation of the action is entirely built into the look-up table.

Reflex agents

A very basic agent, which can be implemented very efficiently, is one that simply follows condition-action rules: if the agent perceives a certain state, then it acts in a definite way.

The action a to be performed at time t+1 is computed as a function of the percept s at time t:

a(t+1) = f(s(t)).

This type of agent decides what to do without reference to past percept sequences. Human beings also have many such reactive rules, some of them learned, like for example the rules for driving, and some of them innate, such as blinking when something approaches the eye. This type of agent has no autonomy at all, because the choice of its actions is entirely built-in, so if the environment changes in an unexpected way it is lost.

A solution to this problem may be, in certain situations, to equip the agent with learning capabilities.

Agents with an internal state

The simple reflex agent described before will work only if the correct decision can be made on the basis of the current percept. Otherwise, it has to maintain some sort of internal state in order to choose the right action. The action a to be performed at time t+1 is computed as a function of the percept s at time t and of the current internal state x(t):

a(t+1) = f(x(t), s(t)),
x(t+1) = g(x(t), s(t)).

The internal state can be used to keep track of the percept sequence, but also to maintain an internal description of the current state of the environment when it is not completely accessible. This internal description can be computed by the agent using knowledge about how the world evolves and about the effects of its own actions. It is important to distinguish between the "physical" internal state of an agent, which can be perceived also by a reflex agent, for example a robot's battery energy level, and the "mental" state of the agent, which is used to understand the world and is like a human mental state; for instance, an agent may believe that there is something inside a certain box.
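A minimal sketch of such an agent follows, with the functions f and g supplied by the designer; the reflex agent of the previous subsection is recovered as the special case in which f ignores the state x.

```python
class StatefulAgent:
    """Implements a(t+1) = f(x(t), s(t)) and x(t+1) = g(x(t), s(t))."""

    def __init__(self, f, g, initial_state):
        self.f = f             # action-selection function
        self.g = g             # internal-state update function
        self.x = initial_state

    def step(self, percept):
        action = self.f(self.x, percept)  # choose the action from state and percept
        self.x = self.g(self.x, percept)  # update the internal description of the world
        return action
```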

2.2.2 Deliberative Agents

In many situations the action to be performed by an artificial agent has to be computed not only on the basis of the state of the environment, but also on the basis of its expected effects on it; that is, the agent reasons about its actions. For example, an agent that plays


chess cannot decide its action only on the basis of the current position of the chess pieces; it also has to evaluate the future effects of its moves. This type of agent needs to have a model of the dynamics of the environment and of the effects of its actions on it.

Deliberative agents may appear less efficient than reactive agents, because they have to reason about the action to perform, but they are far more autonomous and flexible. One important class of deliberative agents follows the Belief-Desire-Intention (BDI) model of agency, treated in detail in Section 2.3.

Goal-directed agents

Goal-based agents have some sort of goal information, which describes situations that are desirable. This type of agent can then combine the information about the goal and about the effects of its actions to choose the action to perform. This choice can be simple when goal satisfaction results immediately from a single action, but it is usually trickier, when the agent has to consider long sequences of actions and their effects on the evolution of the environment in order to achieve the goal. The agent then has to use some search or planning capabilities to find action sequences that do achieve its goal. A detailed description of problem solving by search techniques can be found in [131, p.55]. A discussion about planning and plans can be found in [131, p.337], and a detailed survey in [4]. Learning capabilities are also very important for deliberative agents, so that they can use their percepts not only for acting, but also for improving their ability to act in the future; a survey on this topic can be found in [131, p.525].

Utility-based agents

Utility is a function that maps a state, or a sequence of states, onto a real number. It allows a comparison among different environment states, while the notion of goal just provides a description of desirable states. A complete specification of the utility function allows rational decisions in two cases where having only the notion of goal is not enough. One is when there are conflicting goals, only some of which can be achieved. Another is when there are several goals that the agent can aim for, none of which can be achieved with certainty; utility provides a way in which the likelihood of success can be weighed against the importance of the goals.

2.3 The Belief-Desire-Intention Model of Agency

In this section, following the approach presented by Wooldridge in [178], the BDI model of agency is introduced. It is the result of the combination of three distinct components: the philosophical, the logical, and the software architecture component. Below I discuss the BDI model of agency in some detail because of its importance in the field of Multiagent Systems.

First, the idea of realizing BDI agents is so strong in AI that Shoham, in an article about Agent Oriented Programming [146], defines an agent as an entity that recognizes


and deals with the outside world as having mental qualities such as beliefs, intentions, and desires.

Second, the intentional stance, that is, the strategy of interpreting the behavior of an entity (person, animal, or artifact) by treating it as if it were a rational agent that governs its choice of actions by a consideration of its beliefs and desires, is a useful abstraction tool used in computer science to describe the behavior of very complex systems. Its distinctive features can best be seen by contrasting it with another basic method of prediction, the physical stance, which proposes to use the laws of physics and the physical constitution of the things in question to devise predictions. The intentional stance was proposed by the philosopher Daniel Dennett [43, 44] and was first applied to computational systems by the computer scientist John McCarthy in 1979 [109].

Third, mentalistic models are good candidates for representing information about end users, for example for a personal assistant, a crucial point to enhance interactions between human beings and software agents.

Finally, as I will fully discuss in Chapter 5, several researchers have proposed to use cognitive concepts as the semantic basis for agent communication languages. But intentional concepts are not well suited to provide the basis for a public, standardized view of communication [151].

Philosophical component

The belief-desire-intention (BDI) model of rational agency is based on a widely respected philosophical theory of human rational action, developed by the philosopher Michael Bratman [17] within the tradition of analytical philosophy. It is a theory of practical reasoning, i.e. the process of reasoning that we all go through in our everyday lives, deciding moment by moment which action to perform next. Human practical reasoning is mainly characterized by a process of deciding what to achieve and by a subsequent process of deciding how to achieve these states of affairs. The former process is known as deliberation, and the latter as means-ends reasoning.

This model focuses in particular on future-directed intentions, i.e. desires that have to be achieved and to which human beings are committed. Intentions are important because they allow us not to waste time considering possible actions that are incompatible with them. Since any software agent that we might implement must have resource bounds, this model seems attractive. It is important to note that intentions are persistent: a human being does not give up an intention without a good reason. Furthermore, intentions interact with an agent's beliefs and other mental states.

For example, the fact that an agent has the intention to achieve a certain state of affairs ϕ implies that it believes ϕ is possible and that, given the right circumstances, ϕ will be achieved. Formally capturing the interaction between intention and belief is very hard.


Logical component

A complicated question in BDI systems is finding a method to axiomatize general properties of BDI agents. A formalization of some aspects of Bratman's theory, using modal logic, was made by Cohen and Levesque in 1990 [29]. In the meantime, Rao and Georgeff developed BDI logic in order to give an abstract, idealized semantics to the BDI agents they were building throughout the early 1990s at the Australian AI Institute [124, 72].

They present an alternative possible-worlds formalism for BDI architectures with three crucial elements. First, intentions are treated on a par with beliefs and desires; this allows defining different strategies of commitment to intentions, as described below. Second, they distinguish between the choice an agent has over the actions it can perform and the possibilities of different outcomes of an action. Third, they specify an interrelationship between beliefs, desires, and intentions [124, p.1].

Further studies proposed by Wooldridge to improve this BDI logic can be found in [178]. In this book a new logic, LORA, is presented; in addition, this logic contains a temporal component that allows one to represent the dynamics of agents and their environments, and an action component that allows one to represent the actions that agents perform and their effects.

Software architecture component

Intuitively, beliefs correspond to what an agent imagines its world state to be, and these beliefs may be incomplete and incorrect. An agent's desires represent states of affairs that the agent would wish to be brought about. Intentions represent desires that it has committed itself to achieving.

As mentioned in the previous section, the main components of the software architecture of a BDI agent are the deliberation phase, about which intention to achieve next, and the means-end reasoning to obtain a plan for achieving the intention. These two processes have a time cost associated with them. Consequently, each one produces an output at time t1 on the basis of assumptions about the world at time t0. If the world does not remain static during the interval t1 − t0, as in realistic environments, the result of the computations may be obsolete.

The deliberation process has two distinct functional components. The first is option generation, in which the agent generates a set of possible alternatives: it takes the agent's beliefs and current intentions and determines a set of desires. The second is the filtering component, which chooses one desire and commits to achieving it [178]. After that, the means-end reasoning process creates a plan to achieve the intention. Usually it does not start from scratch [4]; rather, its work consists of finding the correct plan in an existing plan library [73], on the basis of the pre- and post-conditions of the listed plans.

Analyzing the creation of commitments to intentions, a problem arises: how long should an agent remain committed to its intentions? There are mainly three commitment strategies commonly discussed in the literature on artificial agents [124]:

- blind commitment: an agent continues to maintain an intention until it believes the intention has actually been achieved;

- single-minded commitment: an agent continues to maintain an intention until it believes that either the intention has been achieved, or else that it is no longer possible to achieve it;

- open-minded commitment: an agent maintains an intention as long as it is still believed possible.

On the basis of the chosen commitment strategy, the agent has to reconsider its intentions more or less often during its reasoning process.
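
Operationally, the three strategies can be seen as different conditions for dropping an intention during execution. The following sketch is my own illustrative reading of the strategies above, not code from any of the cited systems; in particular, modelling open-minded commitment with an extra still_desired flag is an interpretation, not a claim about [124].

    # Illustrative sketch: when should an agent abandon an intention?
    def should_drop(strategy, believes_achieved, believes_impossible,
                    still_desired=True):
        if strategy == "blind":
            # Drop only once the intention is believed achieved.
            return believes_achieved
        if strategy == "single-minded":
            # Also drop when achieving it is believed impossible.
            return believes_achieved or believes_impossible
        if strategy == "open-minded":
            # Additionally drop when the underlying desire is given up.
            return (believes_achieved or believes_impossible
                    or not still_desired)
        raise ValueError("unknown strategy: " + strategy)

The more conditions can trigger a drop, the more often the agent must re-examine its beliefs (and desires) during execution; this is the trade-off between commitment and reconsideration mentioned above.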

Implementations of the BDI model

The Procedural Reasoning System (PRS) [74] was one of the first implemented systems to be based on a BDI architecture. It was implemented in LISP and has been used for a wide range of applications, for example problem diagnosis for the Space Shuttle [86], air-traffic management [103], and network management [86].

dMARS is a faster, more robust reimplementation of PRS in C++. It has been used in a variety of operational environments, for example paint shop scheduling in car manufacturing, air combat simulation, resource exploration, and malfunction handling on NASA’s Space Shuttle [97].

COSY is another BDI architecture, with many similarities to the previous ones [82]. In addition, it gives importance to both psychological and social commitments. COSY has a strong cooperation component based on formal protocols built on top of an agent communication language. Such protocols involve commitments among the agents, and include rules through which tasks may be delegated to and adopted by different agents.

Breiter and Sadek implemented a formal theory of beliefs and intentions in their ARTIMIS system [18]. This system carries out intelligent dialogue, assisting the user in tasks such as information access. It applies Grice’s maxims [78], whereby the computer attempts to infer the user’s intentions and act accordingly.

DEPNET is an interpreter for agents that can perform social reasoning [148]. Agents in this system represent knowledge about one another in order to determine their relative autonomy or dependence with respect to various goals. Dependence leads to joint plans for achieving the intended goals. The underlying theory is based on dependence rather than on social commitments. This tool shows how social notions can be realized in tools for simulating and analyzing multiagent systems.

2.4 Conclusions

In this chapter the notion of agent that will be used throughout the remaining parts of this dissertation has been introduced. In the following chapter, systems with multiple autonomous agents will be introduced and studied, and particular focus will be placed on their coordination mechanisms. Two of these coordination mechanisms will then be analyzed in detail: fixed interaction protocols in Chapter 4 and agent communication languages in Chapter 5. In Chapter 4 a deliberative agent for electronic commerce will be described and studied. In Chapter 5 a proposal for a new agent communication language based on the notion of commitments is presented and compared with currently existing proposals, which are based on the BDI model of agency.


Chapter 3

Multiagent Systems and Agent Societies

The software agent paradigm presented in the previous chapter is becoming more and more important thanks to its crucial contribution to the realization of innovative and complex applications consisting of multiple interacting agents. Indeed, nowadays, due to the increasing interconnection and networking of computers, and especially the growth of the Internet, situations where an agent can operate usefully by itself are getting rare, whereas those in which agents operate and interact in environments inhabited by other agents are becoming more and more common. Starting from these observations, the focus of this chapter is on systems consisting of two or more homogeneous or heterogeneous agents, which are able to interact with each other and to act in the environment.

First of all, this chapter describes the main historical and current advantages of developing applications with multiple agents. On the basis of the main questions posed in the investigation of systems with multiple agents, and of the research topics that have consequently arisen, I will try to outline a schematization of the various existing studies on systems with multiple agents, identifying in the end three main areas of research: Distributed Problem Solving, Multiagent Systems, and Agent Societies, as depicted in Figure 3.1. I will try to describe each of them, even though in the dedicated literature a common agreement has not yet been reached, either on the meaning and use of the various terms or on the outline of each sphere of research.

Finally, various methods of coordination of the agents’ actions will be analyzed. Coordination mechanisms are necessary so that structured interactions may take place.

In particular I will distinguish between cooperation, that is, coordination among non-antagonistic agents, and competition, that is, coordination among competitive or simply self-interested agents. Among the various possible methods of interaction, particular relevance will be given to interaction protocols and communication languages, and to application-dependent and application-independent ways of interaction, the latter being necessary to realize truly open systems. Interaction protocols enable agents to have structured exchanges of messages and are largely used for automated negotiation, for example in electronic commerce applications, as I will argue in Chapter 4. Communication languages enable agents to exchange and understand messages; I will discuss this important topic of research and present a new proposal for the definition of the semantics of an agent communication language in Chapter 5.

Figure 3.1: Different research approaches for studying systems with multiple agents

3.1 Systems with Multiple Agents

Research studies concerning systems with multiple agents are quite recent and in frenetic evolution. Distributed Artificial Intelligence (DAI), born in the eighties, was the first research area within Artificial Intelligence to be concerned with systems of multiple agents. DAI was defined by Weiß in 1996 as the study and design of systems consisting of several interacting entities which are logically, and often spatially, distributed and which in some sense can be called autonomous and intelligent [173]. More recently, other parallel research areas, such as Multiagent Systems (MAS) and Agent Societies, have arisen.

The content of this chapter is mainly inspired by various online discussions, by several research papers, and by some of the most famous books on this subject. Important collections of articles about Distributed Artificial Intelligence were issued at the end of the eighties [83, 13, 68], whereas a more recent contribution can be found in the book by O’Hare and Jennings [116]. Noticeable works in the area of Multiagent Systems are: "Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence", edited by Gerhard Weiß (1999) [172]; "Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence" by Jacques Ferber (1999) [55]; "Understanding Agent Systems" by Mark d’Inverno and Michael Luck [47]; and the most recent book, "An Introduction to Multiagent Systems" by Michael Wooldridge (2002) [179].

There are many reasons for and benefits of studying and developing systems with multiple agents. At first the reason was to study distributed approaches to certain types of problems. In fact, even though centralized solutions are generally more efficient, distributed computations are sometimes easier to understand and to develop. This is crucial especially when the problem to be solved is itself distributed, for instance when data belong to independent organizations which want to keep their information private and safe for commercial reasons.

Moreover, the distributed approach ascribes important software engineering qualities to a system. One is efficiency: in some situations the distributed approach speeds up problem solving thanks to the parallel use of resources. Reliability and robustness are also enhanced, given that a system becomes fault tolerant through redundancy. Distribution also encourages the reuse of the various components and guarantees the scalability of the entire system. Indeed, since such systems are inherently modular, it is easier to add new elements to them than to add new capabilities to a monolithic system.

Another interesting aspect of the distributed perspective is the possibility of studying new approaches to solving certain types of problems. In fact, distribution can lead to computational algorithms that might not have been discovered otherwise and that often provide a more natural way of representing the problem. Existing types of problems which exploit the distributed approach include: vehicle routing among independent dispatch centers, manufacturing planning, digital libraries, multiagent information gathering on the Web, routing and bandwidth allocation in multi-provider multi-consumer computer networks, electronic commerce, and various types of scheduling, such as scheduling among multiple companies, meeting scheduling, scheduling of patient treatments across hospitals, and classroom scheduling, to name just a few.

There are important advantages in developing systems composed of multiple self-interested autonomous agents acting as "individuals", which very often represent real-world parties, rather than as "parts" of a whole system. The opportunity for artificial agents to efficiently retrieve, filter and exchange information, as well as to exchange knowledge, services, products and expertise with other agents, or even with humans, enables them to solve problems that they cannot solve alone. The capability of negotiating, cooperating or competing with other agents, and of reasoning about their goals or acting so as to influence their behavior, lets artificial agents complete tasks which intrinsically require interaction with other entities, as for instance in electronic commerce or information retrieval applications.

Other reasons for studying systems with multiple agents lie in their applicability to different research fields. For example, the simulation of artificial societies in biology or in social science makes it possible to validate new theories. Moreover, systems with multiple interacting agents may also be useful to investigate the various aspects of intelligence. In fact, it has been proposed that the best way to develop intelligent machines might be to start by creating "social machines" [38].

In the following sections I will try to outline what is intended, within artificial intelligence, by the terms Distributed Problem Solving (DPS), Multiagent Systems, and Agent Societies. Durfee and Rosenschein proposed three different approaches to discriminate between DPS and MAS [50]. From my point of view the best among such approaches is the one that focuses on the differences between the research agendas of MAS and DPS, and in particular on why certain systems were made and what questions they were intended to answer.
