
3.3 THE HISTORY OF GENETIC ALGORITHMS

1903 was a good year for technology. The Marconi Company began the first regular trans-Atlantic radio broadcast, the Wright brothers successfully completed their first airplane flight, and Neumann Janos was born in Budapest, Hungary.

Neumann's genius displayed itself at a young age in his voracious reading and his mathematical aptitude. His parents, both of whom were from educated upper-class families, recognized early on that he was a prodigy, but they were careful not to push him too hard. By the time he was 23 years old, he had an undergraduate degree in chemical engineering and a PhD in mathematics. He continued to be productive as an academic professional, and in 1929 accepted a faculty position at Princeton University in New Jersey. His name was now John von Neumann, and in 1933 he became one of the original members of Princeton's Institute for Advanced Study.

During his wide-ranging career at Princeton, von Neumann made fundamental contributions to mathematics, physics, and economics. He was one of the leaders of the atomic bomb effort during World War II, and he was also one of the pioneers of the invention of the digital computer. There were others who were just as influential (or perhaps even more influential) in the development of digital computing - for example, Alan Turing (who worked with von Neumann at Princeton), and John Mauchly and John Eckert (who led the construction of ENIAC, the first computer, in the 1940s). But it was von Neumann who first realized that program instructions should be stored in the same way in a computer as program data. To this day, such machines are called "von Neumann machines."

After the war, von Neumann became interested in artificial intelligence. In 1953 he invited Italian-Norwegian mathematician Nils Barricelli to Princeton to study artificial life. Barricelli used the new digital computers to write simulations of evolutionary processes. He was not interested in biological evolution, and he was not interested in solving optimization problems. He wanted to create artificial life inside a computer by using processes that are found in nature (e.g., reproduction and mutation). In 1953 he wrote, "A series of numerical experiments are being made with the aim of verifying the possibility of an evolution similar to that of living organisms, taking place in an artificially created universe" [Dyson, 1998, page 111]. Barricelli became the first person to write genetic algorithm software.

His first work on the subject was published in Italian in 1954 with the title "Esempi numerici di processi di evoluzione" (Numerical examples of evolutionary processes) [Barricelli, 1954].

Alexander Fraser, born in London in 1923, followed shortly after Barricelli and used computer programs to simulate evolution. His education and career took him to Hong Kong, New Zealand, Scotland, and finally, in the 1950s, to the Commonwealth Scientific and Industrial Research Organisation in Sydney, Australia. Fraser was not an engineer; he was a biologist, and he was interested in evolution.

He couldn't observe evolution happening in the world around him because it was too slow, requiring time periods on the order of millions of years. So Fraser decided that he would study evolution by creating his own universe inside of a digital computer. That way he could speed up the process and observe how evolution really worked. In 1957 Fraser wrote a paper titled "Simulation of genetic systems by automatic digital computers" [Fraser, 1957], becoming the first to use computer simulations for the express purpose of studying biological evolution. He published many papers about his work, mostly in biology journals. In the late 1950s and 1960s, many other biologists followed in his footsteps and began using computers to simulate biological evolution.

Hans-Joachim Bremermann, a mathematician and physicist, also performed early computer simulations of biological evolution. His first work on the subject was published as a technical report in 1958 while he was a professor at the University of Washington, and was titled "The evolution of intelligence" [Fogel and Anderson, 2000]. Bremermann worked for most of his career at the University of California, Berkeley, where in the 1960s he used computer simulations to study the operation of complex systems, especially evolution. But his computer programs didn't just model evolution - they also simulated parasite/host interactions, pattern recognition by the human brain, and immune system response.

George Box, born in 1919 in England, was also interested in artificial evolution, but unlike his predecessors, he wasn't interested in artificial life or evolution for its own sake. He wanted to solve real-world problems. Box used statistics to analyze the design and results of experiments, and then he became an industrial engineer and used statistics to optimize manufacturing processes. What's the best way to lay out the machines on the plant floor to maximize the production of widgets? What's the best way to schedule the flow of material through the plant? During the 1950s, Box developed a technique that he called "evolutionary operation" as a way of optimizing an industrial process while it was operating. His work was not a GA per se, but it did use the idea of evolution via an accumulation of many incremental changes to optimize an engineering design. His first paper on the subject was published in 1957 with the title "Evolutionary operation: A method for increasing industrial productivity" [Box, 1957].

George Friedman, like George Box, was also a practical man. For his 1956 Master's thesis at UCLA, he designed a robot that could learn how to build electric circuits to control its own behavior. The title of his thesis was "Selective Feedback Computers for Engineering Synthesis and Nervous System Analogy" [Friedman, 1998], [Fogel, 2006]. His work was similar to today's GAs, although he used the term "selective feedback computer" to describe his approach. The last paragraph of his conclusion states, "The concepts and schematic illustrations in this paper, while not conclusively demonstrating the usefulness of [GAs] ... did at least indicate a possible area for further investigation." Indeed! Now, more than a half century after Friedman's thesis, thousands of technical articles are published every year on the topic of genetic algorithms.

Another pioneer in the area of genetic algorithms was Lawrence Fogel, who began working on GAs in 1962. In 1966, along with Alvin (Al) Owens and Michael (Jack) Walsh, he wrote the first book about GAs: Artificial Intelligence through Simulated Evolution [Fogel et al., 1966]. Fogel's early work in genetic algorithms was motivated by engineering problems such as the prediction of signals, modeling combat, and controlling engineering systems. Lawrence Fogel's son, David Fogel, edited an important volume that contains 31 foundational papers about GAs and related topics [Fogel, 1998].

After the seminal work of Barricelli, Fraser, Bremermann, Box, and Friedman in the 1950s, others began using genetic algorithms to study biological evolution and to solve engineering problems. Some important advances in genetic algorithms were made in the 1960s by John Holland, a professor of psychology, electrical engineering, and computer science at the University of Michigan. In the 1960s Holland was interested in adaptive systems. He wasn't necessarily interested in evolution or optimization, but rather in how systems adapt to their surroundings. He began teaching and conducting research in these areas, and in 1975 he wrote his famous book Adaptation in Natural and Artificial Systems [Holland, 1975]. The book became a classic because of its presentation of the mathematics of evolution. Also in 1975, Holland's student Kenneth De Jong finished his doctoral dissertation, titled "An analysis of the behavior of a class of genetic adaptive systems." De Jong's dissertation was the first systematic and thorough investigation of the use of GAs for optimization. De Jong used a set of sample problems to explore the effects of various GA parameters on optimization performance. His work was so thorough that for a long time any optimization paper that did not include De Jong's benchmark problems was considered inadequate.

It was in the 1970s and 1980s that GA research increased exponentially. This was probably due to several factors. One factor was the exponentially increasing computing power that became available with the popularization and commercialization of the transistor in the 1950s. Another factor was the increased interest in biologically motivated algorithms as researchers saw the limitations of conventional computing. Research in fuzzy logic and neural networks, two other biologically motivated computing paradigms, also increased exponentially in the 1970s and 1980s, even though those paradigms do not require much computing power.
