

Background and Related Work

2.3 Evolutionary Algorithms

2.3.5 Techniques for Dynamic Environments

Most optimization algorithms assume a static objective function, and this has generally been true of the research in evolutionary algorithms. However, many real-world applications are dynamic in nature, where it becomes essential to adapt solutions due to changes in the environment. Some examples of problems corresponding to dynamic environments are near real-time factory scheduling, resource allocation problems where resource characteristics change with time, and control problems where the system characteristics evolve with time. For these types of problems it is desirable to have available optimization algorithms that do not require restarts whenever environmental changes occur. Recently, evolutionary schemes have been explored for their applicability to such problems [Back, 1998; Grefenstette, 1999; Liles and De Jong, 1999; Trojanowski and Michalewicz, 1999; Weicker and Weicker, 1999]. Applicability analyses of evolutionary techniques to dynamic problems mostly have an experimental flavor, and the notion of adaptation to changes assumes more importance than convergence.

When environmental changes occur, it is advantageous for the algorithm to maintain multiple alternative solutions rather than to have converged to a particular solution. Therefore, any technique for diversity maintenance in evolutionary algorithms can find useful application in dynamic problem contexts. For instance, introducing random solutions into a population, or equivalently hyper-mutating some existing solutions, has been found to be very useful [Grefenstette, 1992]. [Liles and De Jong, 1999] report on the use of speciation for diversity maintenance, and [Trojanowski and Michalewicz, 1999] report on the use of redundant genetic material that serves as a memory. In the latter scheme, an individual has active genetic material and a limited memory (a FIFO queue) that stores genetic material from its predecessors. An individual is first evaluated using its active genetic material and then reevaluated using genetic material from its predecessors. The best genetic material identified during this evaluation process becomes the active genetic material for the next generation.
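The memory-based scheme just described can be sketched as follows. The memory depth, the bit-string genome, and the toy matching objective are illustrative assumptions, not part of the original proposal:

```python
from collections import deque

MEMORY_SIZE = 3  # assumed FIFO depth; the source does not fix a value


def fitness(genome, target):
    # Toy dynamic objective: agreement with a target bit string that
    # may change between generations.
    return sum(g == t for g, t in zip(genome, target))


class MemoryIndividual:
    """Individual with active genetic material plus a FIFO memory of
    predecessor genomes, in the spirit of the redundant-memory scheme."""

    def __init__(self, genome):
        self.active = genome
        self.memory = deque(maxlen=MEMORY_SIZE)  # oldest entries drop off

    def advance(self, target):
        # Evaluate the active genome and every stored predecessor genome,
        # then promote the best of them to be the next active material.
        candidates = [self.active] + list(self.memory)
        best = max(candidates, key=lambda g: fitness(g, target))
        self.memory.append(self.active)  # current material enters the queue
        self.active = best
        return fitness(best, target)
```

When the environment (here, the target) changes, a genome stored in the memory may outscore the active one and be reinstated without restarting the search.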

2.4 Agents

An agent [Russell and Norvig, 1995] is any module or system that has the ability to perceive its environment, and can select an action or action sequence to manipulate the environment. An agent has the ability to construct an internal representation of the environment, and uses reasoning to choose an action. Agents can be designed to act independently or collectively [Lesser, 1999]. Agents and agent-based systems have been around for a long time, but they have recently grown in popularity, mainly due to developments in distributed computation and the mainstream adoption of the object-oriented programming paradigm [Booch, 1994; Stroustrup, 1991], which provides a convenient and logically natural means to structure and construct agent-based systems.
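The perceive-represent-decide cycle described above can be expressed as a minimal object-oriented skeleton. All class and method names here are illustrative, and the thermostat is only a toy concrete agent:

```python
from abc import ABC, abstractmethod


class Agent(ABC):
    """Minimal agent skeleton: perceive the environment, maintain an
    internal representation, and reason to choose an action."""

    def __init__(self):
        self.model = {}  # internal representation of the environment

    @abstractmethod
    def perceive(self, environment):
        """Update the internal representation from the environment."""

    @abstractmethod
    def decide(self):
        """Reason over the internal representation and select an action."""

    def step(self, environment):
        # One perception-action cycle.
        self.perceive(environment)
        return self.decide()


class ThermostatAgent(Agent):
    """Toy concrete agent: requests heat below a fixed set-point."""

    def perceive(self, environment):
        self.model["temperature"] = environment["temperature"]

    def decide(self):
        return "heat_on" if self.model["temperature"] < 20.0 else "heat_off"
```

A collection of such agents, each holding its own internal model, can then be composed into the independent or collective designs mentioned above.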

An agent is considered intelligent if it can choose those actions that accomplish some predefined goal and simultaneously increase some performance measure. An agent is considered autonomous if it has the ability to choose actions based on its perception and experience rather than blindly follow a pre-programmed action schedule. Autonomy widens the scope of tasks that an agent can perform without any reprogramming. However, the task of designing autonomous agents is more challenging than designing non-autonomous agents. An agent is considered mobile if it has the ability to move itself in its environment. For certain agent applications, such as for a pipe inspection robot, mobility is important, while for others, such as an electronic mail management system, mobility may not be necessary. The structure of an intelligent agent is shown in Figure 2.2.

Software agents [Nwana and Ndumu, 1997; Jennings et al., 1998] are also sometimes known as softbots [Russell and Norvig, 1995]. An environment for a software agent essentially consists of information (computer files) and other software agents. Files may exist in repositories that are logically and physically distributed. Software agents can be located in the local memory of a single computer or they can exist in computers that are logically and physically distributed. Software agents have the ability to read (perception) information from their environment and write (action) information to their environment. Their representation mechanism consists of data structures, and their reasoning mechanism consists of algorithms and data.

Fig. 2.2 Structure of an intelligent agent.
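The read/write cycle of a software agent can be sketched as follows. For self-containment, a dictionary stands in for a repository of files; the keyword-indexing task and all names are illustrative assumptions:

```python
class SoftwareAgent:
    """Sketch of a software agent whose environment is a repository of
    files (here a dict mapping file names to contents). Reading files is
    the agent's perception; writing a file is its action."""

    def __init__(self, keyword):
        self.keyword = keyword  # data used by the reasoning mechanism
        self.index = []         # internal representation: matching files

    def perceive(self, repository):
        # Read every file and record those mentioning the keyword.
        self.index = [name for name, text in repository.items()
                      if self.keyword in text]

    def act(self, repository):
        # Write the result back into the environment as a new file.
        repository["index.txt"] = "\n".join(sorted(self.index))
```

The same skeleton applies unchanged whether the repository lives in local memory or behind a distributed file interface; only the perception and action primitives would change.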

Research in the field of software agents and their applications is growing rapidly, and this includes descriptions of numerous prototype systems developed at various universities and research labs. Some have argued that, given the highly multidisciplinary nature of research in agent-based systems, there is a tendency to view the literature in this fast-growing field as chaotic and incoherent [Jennings et al., 1998]. Given this, the aim here is not to attempt to present a complete list of the agent models and applications described in the literature. Instead, some important contributions are highlighted.

An excellent overview of agent-based systems, their essential characteristics, and applications is presented in [Hayes, 1999]. [Maes, 1994] describes the essential features of autonomous agents, and the common characteristics of the agent-based solutions that have been proposed. [Jennings and Woolridge, 1998] describe prototype agent systems in information management, manufacturing, entertainment, process control, telecommunications, air traffic control and transportation systems. Recently, [Talukdar and de Souza, 1995] have proposed A-Teams (Asynchronous Teams) for problem solving. A-Teams are collections of autonomous agents that work iteratively and in parallel on populations of solutions. The motivation for A-Teams is derived from collective problem solving observed in nature: for instance, ants in a colony cooperatively working towards construction of a nest. Such behavior is also seen in human societies: for instance, scientists in disparate fields collectively advancing scientific knowledge. Agents in A-Teams collaborate by modifying one another's results, which collect in shared memories. An A-Team agent consists of five components: an input memory, a scheduler that triggers the agent's work, a selector that chooses one or more solutions from the input memory, an operator that modifies the selected solutions, and an output memory to which the operator writes its results.
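The five components of an A-Team agent can be sketched as follows. The scheduling and selection policies shown (work whenever input exists; pick a solution at random) are illustrative assumptions, not the authors' specification:

```python
import random


class ATeamAgent:
    """Sketch of an A-Team agent's five components: input memory,
    scheduler, selector, operator, and output memory."""

    def __init__(self, operator, output_memory):
        self.input_memory = []             # pool of candidate solutions
        self.operator = operator           # modifies selected solutions
        self.output_memory = output_memory # shared memory for results

    def scheduler(self):
        # Decide whether to work now (here: whenever input is available).
        return bool(self.input_memory)

    def selector(self):
        # Choose one solution from the input memory (here: at random).
        return random.choice(self.input_memory)

    def step(self):
        # One asynchronous unit of work: select, modify, publish.
        if not self.scheduler():
            return
        solution = self.selector()
        result = self.operator(solution)
        self.output_memory.append(result)  # results collect in shared memory
```

Several such agents, each with a different operator, can share memories so that one agent's output memory serves as another's input memory, giving the iterative, parallel collaboration described above.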