
… used to cover a reaction from Abstract Expressionism in favor of a revival of naturalistic figuration imbued with a spirit of objectivity.

[Chilvers and Osborne, 1988]

A new realism was heralded in the 1980s by the report of a team headed by Rumelhart and McClelland, submitted to DARPA and its civilian counterpart, the National Science Foundation. The report argued that parallel distributed processing (PDP), a new name for artificial neural nets, had been seriously neglected for at least a decade.

They advocated a switch of resources into the PDP arena. In 1985, a special issue of Cognitive Science was devoted to the subject of connectionism, another new name for the field. (When an area of endeavor has been disparaged, an important technique for erasing that memory and suggesting that there is something new is to give it a new name. This technique had been successfully deployed with knowledge-based systems.)

There were several reasons for the neural net renaissance, not the least of which was that it presented an opportunity for the US to regain the leading edge of computer science that had been seized by the Japanese Fifth Generation. The theoretical limitations identified by Minsky and Papert [1969] applied only to a single layer of neurons. In the intervening period, learning algorithms for multilayer systems, such as the back-propagation or generalized delta rule [Rumelhart, Hinton and Williams, 1986], emerged. (Ironically, back-propagation had been discovered in the earlier movement [Bryson and Ho, 1969].) Hopfield's work [1982] lent rigor to neural nets by relating them to lattice statistical thermodynamics, at the time a fashionable area of physics.

Lastly, demands for greater power appeared to expose the sequential limitations of von Neumann computer architectures.
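The generalized delta rule mentioned above admits a compact illustration. The following sketch (in Python; the network size, learning rate and task are invented for illustration, not taken from Rumelhart, Hinton and Williams) trains a two-layer network on XOR, the kind of function Minsky and Papert showed a single layer of neurons cannot compute:

```python
import numpy as np

# Sketch of the generalized delta rule (back-propagation) for a two-layer
# network. The task is XOR: not computable by a single layer of neurons,
# but learnable once a hidden layer is added.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through the
    # hidden layer -- the "delta" of the generalized delta rule.
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)

print(out.round(2))  # converges towards [[0], [1], [1], [0]]
```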

An apparently convincing proof of concept was provided by the Nettalk system [Sejnowski and Rosenberg, 1987]. Nettalk is a text-to-speech translator that takes 10 hours to "learn to speak." The transition from Nettalk's childlike babbling to slightly alien but recognizable pronunciation has been described as eerily impressive. By contrast, a (symbolic) rule-based system for the same task, DECtalk, required 100 person-years of development effort.

By the mid-1980s, the connectionist renaissance was well under way. This prompted Minsky and Papert [1969, 1987] to issue a revised edition of their book. In the new prologue they state:

Some readers may be shocked to hear it said that little of significance has happened in this field [neural nets].

Talk of a sixth generation of connectionism was stifled in Japan, so as not to compromise the much-heralded, but late-arriving, Fifth Generation.

The euphoria of the New Realism was short-lived. Despite some impressive exemplars, large neural networks simulated on von Neumann hardware are slow to learn and tend to converge to metastable states. On data outside its training set, Nettalk's accuracy falls to 78%, a level that is intelligible but worse than that of commercially available programs. Other techniques such as hidden Markov models require less development time but perform just as well. Connectionist systems are unable to explain their reasoning and show little sign of common sense. An anecdotal example is a military application of tank recognition. A neural net had been trained to recognize tanks in a landscape. When tested on non-training data, the system failed to recognize tanks reliably. It turned out that all the training photographs with tanks in the scene were taken on sunny days and those without tanks were taken on dull days. The network had learnt to distinguish sunny days from dull days reliably.

1.12 Baroque

The emphasis is on balance, through the harmony of parts in subordination to the whole.

[Chilvers and Osborne, 1988]

Brachmann [1985] claims that the use of frames for common-sense reasoning is fraught with difficulties. The formalism suggested by Minsky was widely criticized as, at best, a trivial extension of the techniques of object-oriented programming, such as inheritance and default values [Dahl et al., 1970; Birtwistle et al., 1973]. General problem-solving systems like GPS [Newell and Simon, 1963] had fared no better than machine translation in naive physics. As the programs were expanded to handle more classes of problems, they performed less satisfactorily on any single one. Minsky [1975] remarked:

Just constructing a knowledge base is a major intellectual problem ... We still know far too little about the contents and structure of commonsense knowledge. A "minimal" commonsense system must "know" something about cause-effect, time, purpose, locality, process and types of knowledge ... We need a serious epistemological research effort in this area.

Expert systems are now so established in industry that they are rarely considered as AI. A general disillusionment with expert systems and AI grew because of the inability to capture naive physics.

That human intelligence is the result of a number of coordinating, possibly competing, intelligences grew out of the work of the Swiss psychologist Piaget. Piaget's observations of his own children suggested they go through distinct stages of intellectual development. According to Papert:

... children give us a window into the ways the mind really works because they are open ... I think we understand ourselves best by looking at children.

Piaget influenced Papert in the development of a programming language (Logo) for children. One of Piaget's well-known experiments [Flavell, 1963] involves two drinking glasses, one tall and thin, the other short and fat. A child's choice of the tall glass when the same volume of lemonade is contained in each is attributed to the intuitive mentality that develops in early childhood. In the second, kinesthetic stage, children learn by manipulating objects. In the final stage what they learn is dominated by language and becomes more abstract. The American psychologist Bruner developed Piaget's thesis to the point where these mentalities behave as semi-independent processes in the brain that persist through adult life. They exist concurrently and can cooperate or be in conflict.

A first attempt to coordinate multiple expert systems emerged in the 1970s, when DARPA launched a national effort to develop a natural speech understanding system.

The result of this effort was Hearsay, a program that met its limited goals after five years. It was developed as a natural language interface to a literature database: its task was to answer spoken queries about documents and to retrieve documents from a collection of abstracts of artificial intelligence publications. Hearsay gave a major push to the technology of speech understanding and additionally led to new sources of inspiration for AI: sociology and economics. Hearsay [Erman, 1976] comprised several knowledge sources (acoustic, phonetic, phonological, lexical, syntactic and pragmatic) and featured a blackboard system for communication between them. In a blackboard system, a set of processes or agents, typically called knowledge sources (abbreviated KSs), share a common database. Each KS is an expert in a particular area, and they cooperate, communicating with each other via the database. The metaphor is of a group of academic specialists gathered around a blackboard, collectively solving a problem: one specialist writing an idea or fact on the blackboard can trigger another expert to contribute the next part of the solution.
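The mechanism can be conveyed by a toy sketch (the task and knowledge sources below are invented for illustration and bear no relation to Hearsay's actual code; Hearsay's KSs were acoustic, phonetic, lexical and so on):

```python
# Toy blackboard system: knowledge sources (KSs) share a common data
# store and fire opportunistically whenever a fact they can extend
# appears on the board.

blackboard = {"raw": "3 4 +"}   # the initial problem, posted to the board

def tokenizer(bb):
    """Expert on raw text: posts a token list."""
    if "raw" in bb and "tokens" not in bb:
        bb["tokens"] = bb["raw"].split()
        return True
    return False

def parser(bb):
    """Expert on token streams: posts a parsed expression."""
    if "tokens" in bb and "expr" not in bb:
        a, b, op = bb["tokens"]
        bb["expr"] = (op, int(a), int(b))
        return True
    return False

def evaluator(bb):
    """Expert on arithmetic: posts the answer."""
    if "expr" in bb and "answer" not in bb:
        op, a, b = bb["expr"]
        bb["answer"] = a + b if op == "+" else a * b
        return True
    return False

knowledge_sources = [evaluator, parser, tokenizer]   # order is immaterial

# Control loop: keep offering the board to every KS until none can add
# anything new -- each posting may act as the trigger for another expert.
progress = True
while progress:
    progress = any(ks(blackboard) for ks in knowledge_sources)

print(blackboard["answer"])   # -> 7
```

Note that control is opportunistic: no KS calls another, and the order in which the experts are consulted does not matter, since each fires only when something it can use appears on the board.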

An early reference to the blackboard metaphor was Newell [1962]. The short-term memory of the Impressionist Movement can be viewed as a bulletin board that provides a channel of communication between rules. If autonomous agents use production rules, the workspace becomes a means of synchronization and communication.

Newell pointed out that, in conventional single agent problem solving paradigms, the agent wanders over a goal net much as an explorer may wander over the countryside, having a single context and taking it with them wherever they go. The single agent view led AI researchers to concentrate on search or reasoning with a single locus of control. As noted by Newell, the blackboard concept is reminiscent of Selfridge's (neural network) Pandemonium [Selfridge, 1955], where a set of demons independently look at a situation and react in proportion to what they see that fits their natures. Kilmer, McCulloch, and Blum [1969] offered a network in which each node was itself a neural network. From its own input sample, each network forms initial estimates of the likelihood of a finite set of modes. The networks then communicate, back and forth, with other networks to obtain the consensus that is most appropriate. The notion of organizing knowledge into unitary wholes was a theme of Kant's Critique of Pure Reason [1787], which was revived in the 20th century by Bartlett [1932].
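Selfridge's demons, described above, are simple enough to caricature in a few lines (the letter features, weights and situation below are invented; the original demons responded to visual patterns):

```python
# Caricature of Selfridge's Pandemonium: each demon inspects the same
# situation and "shouts" in proportion to how well it fits its nature;
# a decision demon picks the loudest shout.

situation = {"strokes": 3, "closed_loops": 0, "has_crossbar": True}

demons = {
    "A": lambda s: 0.3 * s["strokes"] + (1.0 if s["has_crossbar"] else 0.0),
    "O": lambda s: 1.5 * s["closed_loops"],
    "X": lambda s: 0.5 if s["strokes"] == 2 else 0.0,
}

# Every demon shouts independently; the decision demon hears them all.
shouts = {name: demon(situation) for name, demon in demons.items()}
print(max(shouts, key=shouts.get))   # -> A
```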

Hewitt [1977] developed the idea of control as a pattern of communication (message passing) amongst a collection of computational agents. Hewitt [1985] uses the term open system to describe a large collection of services provided by autonomous agents.

Agents use each other without central coordination, trust, or complete knowledge. He argues that networks of interconnected and interdependent computers are qualitatively different from the self-contained computers of the past. The biologist von Bertalanffy expressed a similar sentiment in considering living organisms. In living systems, the whole is always more complex than the union of the parts. Von Bertalanffy drew attention to the distinction between systems that are open to their environment and those that are closed. He defined an open system [1940] as one having to import and export material from its environment.

In the same period after the war that Post published his results on deductive systems, von Bertalanffy proposed a holist view of biology: biology could not be reduced to chemistry. Biological evolution has developed one solution to the management of complex dynamic systems. In living things there is a hierarchy of structures: molecules, organelles (entities making up a cell), cells, organs, and organisms. By dissecting an organism into its representative parts, the form and function of each organ and its chemical components can be discovered. In this reductionist process, the living entity vanishes in the search for elements. Reproduction is not within the power of any single molecule or organelle by itself. The reductionist philosophy of Descartes tries to explain social science with psychology, psychology with neurophysiology, neurophysiology with chemistry and chemistry with physics. Longuet-Higgins et al. [1972] satirically carry the argument further, reducing history to economics and economics to sociology.

Biological systems maintain control by confining processes and their data in self-contained cells. These cells act on each other by sending "messages" carried by chemical messengers. The cell membrane protects its data, DNA, from inappropriate processes. General Systems Theory is a logico-mathematical study whose objective is the formulation and derivation of those principles that are applicable to general systems, not only biological ones. The name is due to von Bertalanffy [1956]. Although von Bertalanffy introduced the terminology verbally in the 1930s, the first written presentations only appeared after World War II. General Systems Theory is founded on two pairs of ideas: hierarchy and emergence; communication and control. The theory challenges the use of reductionist philosophy in the theory of organizations and biology.

Organisms, von Bertalanffy pointed out, are unlike the closed systems usually studied in physics, in which unchanging components settle to a state of equilibrium. Organisms can reach steady states that depend upon continuous exchanges with the environment. Whereas closed systems evolve towards increasing disorder (higher entropy), open systems may take up organized, yet improbable, steady states. Maintenance of a hierarchy, such as molecule to organism, entails a set of processes in which there is communication of information for purposes of regulation and control.

The blackboard technique is a form of opportunistic search. Partial information discovered by one knowledge source can be of sufficient use to guide another, so that the two may solve a problem faster than by combining their knowledge. Hewitt and Kornfield [1980] called this accelerator effect combinatorial implosion. The blackboard technique has the same problem as expert systems: it does not scale. If there is only one blackboard it becomes a severe bottleneck [Hewitt and Liebermann, 1984].

Cooperative Knowledge Based Systems (CKBS) exploit other forms of sociological cooperation. Axelrod [1984] explores the evolution of cooperation. The idea that competition through market forces is a more efficient search strategy than centralized control was an economic dogma of the same (Thatcher) decade.