
1.13 Pre-Raphaelite Brotherhood

Their desire for fidelity to nature was expressed through detailed observation of flora etc.

[Chilvers and Osborne, 1988]

From his work on robotics with Papert, Minsky developed a theory of agent-based intelligence that he laid out in The Society of Mind [1986]. Similar sentiments are to be found in Arbib's conception of the brain's information processing as a collection of concurrent "schemas" [Arbib, 1988]. Brooks [1986] suggests that the idea that intelligence can be split vertically into tasks such as search and knowledge representation is misguided. He claims a more suitable split is horizontal and based on function. The argument is that biological control is associated with the imposition of constraints and requires consideration of at least two hierarchical levels. At a given level, it is often possible to describe the dynamical properties of the system, such as the possible transitions or search paths. Any description of control entails an upper level imposing constraints on the lower level. For example, the cell as a whole constrains the physicochemical possibilities available to DNA, and it is this that makes DNA the bearer of information. The upper level is the source of an alternative (simpler) description of the lower level in terms of specific functions that emerge, as epiphenomena, from the imposition of constraints. Some otherwise undistinguished molecules in a cell are constrained to bear the function of repressor or activator. These functions are not available in the chemical properties of the molecules but are the result of hierarchical control.
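
Brooks's horizontal decomposition can be caricatured in a few lines of Python: each layer is a complete sensing-to-action behavior, and the layers are arbitrated here by fixed priority, a deliberate simplification of Brooks's suppression wiring. The behaviors and sensor fields below are invented for illustration, not taken from Brooks's papers.

def avoid_obstacles(sensors):
    # Reflex layer: turn away from anything too close.
    return "turn_away" if sensors["range"] < 0.3 else None

def seek_goal(sensors):
    # Purposive layer: head for a target when one is visible.
    return "steer_to_goal" if sensors.get("goal_bearing") is not None else None

def wander(sensors):
    # Default layer: move about when nothing more urgent applies.
    return "move_random"

# Each layer is a complete competence; earlier layers suppress later ones.
LAYERS = [avoid_obstacles, seek_goal, wander]

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control({"range": 1.0, "goal_bearing": 0.5}))   # -> steer_to_goal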

According to Brooks [1991], AI should proceed as evolution does, beginning by constructing primitive autonomous artificial insects and progressing to more sophisticated mechanisms. Turing's universal computing machine suggested to von Neumann [1966] the idea of a universal construction machine: a machine which, given a sufficiently rich environment of components and furnished with suitable instructions, could replicate itself. While at Cambridge, John Conway invented an autonomous computer game, the Game of Life [Gardner, 1970]. A deterministic set of rules served as the physical laws and the microprocessor clock determined the time-scale. The game was designed as a cellular automaton: the screen was divided into cells whose states were determined by the states of their neighbors. The rules determine what happens when neighboring cells are alive or dead, triggering cascades of changes throughout the system. One interesting discovery was the glider, a pattern of cells that moves across the screen. Conway proved that the Game of Life was not predictable; it was undecidable whether the patterns were endlessly varying or repeating. Though it had a small number of deterministic rules, it had the capacity to generate unlimited complexity.
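
The update rule can be stated in a few lines. The following Python sketch applies Conway's two rules, birth on exactly three live neighbors and survival on two or three, to a set of live cells; the wrap-around grid size and the glider coordinates are merely illustrative choices.

from collections import Counter

def step(live, width, height):
    # live: the set of (x, y) coordinates of live cells on a wrap-around grid.
    counts = Counter(
        ((x + dx) % width, (y + dy) % height)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly three live neighbors; survival on two or three.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a five-cell pattern that propels itself across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):   # four steps move the glider one cell diagonally
    glider = step(glider, 20, 20)
print(sorted(glider))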

The aptly named computer virus is a recent manifestation of artificial life. Like its biological counterpart, a computer virus is incapable of replication without being incorporated in a host program [Ferbrache, 1992]. The code of the computer virus can be compared with the codon or nucleotide structure of the DNA of a biological virus. The virus subverts the host program to infect other programs directly. Infection spreads through networks or files on discs. A virus arranges for its code to be executed by subverting machine or operating system initialization, termination, or daemon code. Consequently, computer viruses are machine and operating system specific. A virus's behavior can be relatively benign, only consuming space, or it can wreak havoc.

Some computer viruses, such as the one that disrupted the Internet (the DARPA-sponsored network that links defense, research, and educational sites in the US), operate from a predetermined, declarative instruction set. The Internet virus vividly demonstrated the vulnerability of computer networks to sabotage.

Random mutations of computer viruses caused by data corruption have been recorded, so there is the possibility of evolution. However, computer systems try to prevent evolutionary behavior with error-detecting and error-correcting codes. Genetic algorithms are a deliberate attempt to use evolution as a method of search. A caution about the speed of genetic search is expressed by McCulloch et al. [1962]:

If you want a sweetheart in the spring, don’t get an amoeba and wait for it to evolve.
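
Despite this caution, the mechanics of genetic search are simple to state. The following Python sketch is a minimal illustration, not a production implementation: it assumes a bit-string encoding and a caller-supplied fitness function, and the population size, mutation rate, and number of generations are arbitrary illustrative choices. The example fitness simply counts ones (the toy "OneMax" problem).

import random

def genetic_search(fitness, length=20, pop_size=30, generations=50,
                   mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # selection pressure
        parents = pop[:pop_size // 2]              # fitter half reproduces
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate)   # point mutation
                     for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = genetic_search(sum)   # OneMax: fitness is the number of ones
print(sum(best), best)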

1.14 Renaissance

Term meaning ‘rebirth’ applied to an intellectual and artistic movement.

[Chilvers and Osborne, 1988]

Growing commercialization of the Internet brought about a renaissance in AI, with the distillation of the concept of the software agent. According to Kay [1984]:

The idea of an agent originated with John McCarthy in the mid-1950s, and the term was coined by Oliver G. Selfridge a few years later, when they were both at the Massachusetts Institute of Technology. They had in view a system that, when given a goal, could carry out the details of the appropriate computer operations and could ask for and receive advice, offered in human terms, when it was stuck. An agent would be a 'soft robot' living and doing its business within the computer's world.

The word agent derives from the Latin verb agere: to drive, lead, act, or do. The philosopher Dennett [1987] distinguishes three ways of describing the behavior of systems that cause something to be done: physical, based on physical characteristics and laws; design, based on the system's functions; and intentional, based on the assumption of a rational agent. Doyle [1983] proposed the design of rational agents as the core of AI.

Horvitz et al. [1988] proposed the maximization of utility, in the sense of von Neumann and Morgenstern [1944], as the interpretation of rationality.

In The Nature of Explanation, Craik [1943] proposed a mental, deliberative step between the behaviorists' stimulus and response. He argued that mental categories such as goals, beliefs, and reasoning are bulk properties of intelligence and are just as scientific as the pressure and temperature used to describe gases, despite gases being made of molecules that possess neither. Bratman [1987] introduced the mental states of belief, desire, and intention (BDI). Beliefs express an agent's expectations of its environment.

Desires express preferences over future states of the environment. Intentions are partial plans of action that an agent can perform and that are expected to achieve desired states. Craik proposed that intelligent systems execute a cycle: a stimulus is transformed into an internal representation; the representation is integrated with the existing mental representation; and this is used to effect an action. This model was used as the basis of the influential robotics project Shakey at SRI [Nilsson, 1984]. Shakey added a planning module to produce the sense-model-plan-act (SMPA) architecture. Shakey's world model was based on propositional logic.
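
A minimal sketch of this cycle in Python, with the mental state carved up along Bratman's BDI lines, might look as follows. The functions revise and plan and the agent and environment objects are placeholders the reader would supply; they are not part of any published architecture.

def smpa_loop(agent, environment):
    # Sense-model-plan-act: Craik's cycle with Shakey's planning step added.
    while True:
        stimulus = environment.sense()                         # sense
        agent.beliefs = revise(agent.beliefs, stimulus)        # model
        agent.intentions = plan(agent.beliefs, agent.desires)  # plan
        environment.act(agent.intentions.next_action())        # act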

Following Wittgenstein [1953], Austin [1962] noted that natural language utterances could be understood as actions that change the state of belief in the same way that physical actions change physical state. Searle [1969] derived necessary and sufficient conditions for the successful performance of speech acts and distinguished five types of speech act. Cohen and Perrault [1979] recast this work in linguistic philosophy as an AI planning problem. Cohen and Levesque [1995] developed a theory in which rational agents perform speech acts in furtherance of their desires.
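
To give the flavor of the planning treatment, a speech act can be written as an operator with preconditions and effects over mental states. The Python sketch below is a loose illustration in this spirit; the predicate strings are invented for the example rather than drawn from Cohen and Perrault's formulation.

from dataclasses import dataclass

@dataclass
class SpeechAct:
    name: str
    preconditions: set    # mental-state facts required before the act
    effects: set          # mental-state facts holding after the act

inform = SpeechAct(
    name="INFORM(speaker, hearer, p)",
    preconditions={"believes(speaker, p)",
                   "wants(speaker, knows(hearer, p))"},
    effects={"believes(hearer, believes(speaker, p))"},
)

def applicable(act, state):
    # A planner applies a speech act exactly like a physical action.
    return act.preconditions <= state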

The Renaissance Movement is characterized by situatedness: the aim of building autonomous intelligent systems embedded in real environments. This is exemplified by the SOAR agent architecture [Laird et al., 1987; Newell, 1990]. It can be seen as related to the empiricist movement started by Francis Bacon's Novum Organum, a philosophical movement characterized by the philosopher John Locke's dictum:

Nothing is in the understanding, which is not first in the senses.

The theory was taken to an extreme by Carnap [1928] and the Vienna Circle, who introduced logical positivism. This doctrine holds that all knowledge can be characterized by logical theories ultimately connected to observation sentences that correspond to sensory input. Logical positivism held that all meaningful statements could be verified or falsified either by analyzing the meaning of the words or by experiment. Popper [1972] refuted this claim with an argument which essentially comes from Hume's A Treatise of Human Nature. Hume proposed that general rules cannot be proved but are acquired by exposure to repeated associations between their elements – the principle of induction.

1.15 Hindsight

Everything of importance has been said before by somebody who did not discover it.

A.N. Whitehead (1861–1947)

The factions of AI have been presented here by analogy with the movements of Fine Art, elaborating suggestions of Jackson [1986] and Maslov [1987]. Some may feel this is pushing an amusing metaphor too far, but like Fine Art, AI has its fashions and counterfashions, examples and counterexamples, claims and refutations. Papert now claims his and Minsky's attacks on connectionism have been misinterpreted: they were directed not against neural nets but against universality, the idea that there is a single mechanism that has universal application:

The desire for universality was fed also by the legacy of the scientists, largely mathematicians, who created AI, and it was nurtured by the most mundane material circumstances of funding.

[Papert, 1988]
