Tracing back the origin of the notion of complexity is not an easy task. The difficulty is amplified by the fact that the terms “complex” and “complexity”
are used (and sometimes misused) to express their general, non-technical meaning, even in the academic context. Therefore, it is not straightforward to single out those contributions to research that actually deal with complexity in the specific sense that I am discussing here. However, since complexity is a relatively recent scientific paradigm, a dissertation dealing with it also has to discuss it from a historical perspective. In this section I present the history of the notion of complexity. I start by discussing the theories that first dealt with it. Then I review the specific contributions from different scientific disciplines that provided the theoretical basis for the notion of complexity, bridging the natural sciences with the social sciences by means of examples (also) from language studies.
7“A very small cause which escapes our notice determines a considerable effect that we cannot fail to see, and then we say that that effect is due to chance. If we knew exactly the laws of nature and the situation of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment. But, even if it were the case that the natural laws had no longer any secret for us, we could still only know the initial situation approximately. If that enabled us to predict the succeeding situation with the same approximation, that is all we require, and we should say that the phenomenon had been predicted, that it is governed by laws. But it is not always so; it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon.” (Translation by Francis Maitland from “Science et Méthode”, originally published in 1914).
FIGURE 1.1: The genesis of complexity.
The genesis of complexity could be thought of as a tree, in that it started out as a number of different roots (i.e. the contributions from various disciplines) that eventually converged into a robust trunk (complexity). This image is also particularly useful to convey the idea that several disciplines still continue feeding the notion of complexity. Besides, this picture could be further refined by including branches and sub-branches, which represent the different disciplinary developments and treatments of the notion of complexity (e.g. complexity theory, chaos theory, dynamical systems theory, bifurcation theory, catastrophe theory, network theory).8 The resulting picture would approximately look like Figure 1.1.
Bifurcation theory can probably be considered one of the first important
8Some notions or graphic representations brought forward by cognates of complexity theory can come in handy in this chapter to explain the basic ideas of complexity. In such instances, a few introductory words are devoted to presenting other theories.
theories that addressed the issue of complexity. First introduced by Henri Poincaré (1885), bifurcation theory studies the changes caused by small variations in certain parameter values (called bifurcation parameters) within a system. The point where two or more alternative paths can be followed by a system as a consequence of a change in one or more parameter values is called a bifurcation. Interestingly, a bifurcation can be triggered by a change in one variable as well as in n variables at the same time (in which case we speak of a bifurcation with co-dimension n). In particular, a sub-branch of bifurcation theory called catastrophe theory, largely based on the contributions of French mathematician René Thom in the 1960s, studies specific cases of bifurcation that can potentially bring about dramatic changes in the behaviour of a function.
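The mechanics of a bifurcation parameter can be illustrated with the logistic map, a standard textbook example that is not discussed in the text above and is offered here only as a hypothetical sketch: as its parameter r crosses the value 3, the map's single stable fixed point splits into a two-value cycle.

```python
# Illustrative sketch (not from the source text): the logistic map
# x_{n+1} = r * x_n * (1 - x_n) undergoes a bifurcation at r = 3,
# where a stable fixed point gives way to a period-2 cycle.

def logistic_attractor(r, x0=0.5, transient=1000, keep=4):
    """Iterate the logistic map, discard a long transient, and return
    the distinct values the orbit settles on (rounded for comparison)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.add(round(x, 6))
    return sorted(orbit)

# Below the bifurcation point (r < 3): a single stable fixed point.
print(logistic_attractor(2.8))   # one value, near 1 - 1/2.8 = 0.642857
# Just above it (r > 3): the orbit alternates between two values.
print(logistic_attractor(3.2))   # two values
```

A small change in the bifurcation parameter r (from 2.8 to 3.2) thus produces a qualitative change in the system's long-run behaviour, which is exactly what the definition above describes.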
Another important antecedent of complexity theory is chaos theory, which studies systems whose behaviour is heavily dependent on initial conditions.
It should be noted, however, that chaos is not to be confused with randomness. In fact, in chaos theory, the behaviour of systems can be completely deterministic. The focus is on the fact that approximately similar initial conditions can lead to drastically different future scenarios. Indeed, it is entirely meaningful to speak of deterministic chaos. The mathematical counterpart of chaos theory is dynamical systems theory, which studies the behaviour of complex dynamic systems by means of difference and differential equations.
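This sensitivity to initial conditions can be made concrete with a short numerical experiment on the logistic map in its chaotic regime (r = 4); the map and the starting values are standard textbook choices, not taken from the source, and serve only as an illustration. Every step is fully deterministic, yet two starting points that differ by one part in ten billion end up on effectively unrelated trajectories.

```python
# Illustrative sketch (not from the source text): deterministic chaos.
# Two trajectories of the logistic map at r = 4 start almost identically
# and diverge until their difference is of the order of the values themselves.

def trajectory(x0, steps, r=4.0):
    """Return the deterministic orbit of the logistic map from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-10, 60)   # initial difference: 10^-10

for n in (0, 20, 40, 60):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.3e}")
```

The printed differences grow roughly exponentially until they saturate at the size of the state space itself: the rule is deterministic, but long-range prediction from approximate initial data is impossible, precisely Poincaré's point quoted above.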
Although the idea of complexity was analysed from several theoretical perspectives, locating when and where it was first brought forward as a scientific notion is not an easy task. This is especially true because the development of complexity theory did not follow the path from the natural sciences to the social sciences that characterised several other theories, such as Newtonian mechanics, thermodynamics and Darwinian evolution. Each of these prominent natural science paradigms and theories inspired, at different times, new ways of approaching research in the social sciences and provided arguments for methodological justification. A classical example is that of the gravity model of trade, which, drawing from Newton’s law of universal gravitation, explains the size of trade flows between two countries as a function of their economic “masses” and of the geographical distance between them. In the case of complexity, a number of different sciences co-evolved around the same ideas that eventually provided the founding principles of
complexity theory (Room, 2011, p. 15). Speaking of natural sciences in complexity theory, Russian-Belgian chemist Ilya Prigogine is traditionally considered one of the major contributors. During the 1970s, he carried out studies on thermodynamics applied to complex and far-from-equilibrium systems.
Larsen-Freeman and Cameron (2008) also mention Chilean biologists Humberto Maturana and Francisco Varela as important contributors to the evolution of complexity theory (as well as to systems theory and sociology). In the early 1970s, they proposed the idea of “autopoietic” systems, referring to living organisms that continuously change and redefine themselves while maintaining their identity (Maturana and Varela, 1972).
In the social sciences, Austro-Hungarian-born economist and philosopher Friedrich Hayek largely contributed to the development of the notion of complexity, introducing it in many different fields. In particular, he worked on the idea of spontaneous order arising in complex systems (which is discussed at greater length in the next section). Hayek (1964) observed that there is an important distinction between “a prediction for the appearance of a pattern of a certain class” and “a prediction of the appearance of a particular instance of this class”. In other words, there exists a non-negligible gap between the formal description (of the development) of a general phenomenon and the specific circumstances that eventually come about. For example, it is one thing to predict that a certain phenomenon (say, the course of celestial bodies) will follow the mechanics described by a certain (mathematical) formalisation, but it is another thing to predict the manifestation of a specific instance of such mechanics. Hayek goes on to say that this gap is especially significant for issues of life, mind, and society, in that they imply a higher level of complexity and unpredictability than physical phenomena. Conversely, physical issues can be described effectively by means of relatively simple formulae.
In other words, the minimum number of variables that need to be included in a model in order to reproduce the characteristic patterns of a phenomenon is lower for physical matters than it is for social matters. However, this does not mean that the additional challenges raised by social phenomena could and should be addressed by adding more variables; quite the opposite. The extremely high number of variables that would need to be taken into consideration rules out the possibility of relying exclusively on statistical methods and therefore makes it impossible to work out very precise predictions. Interestingly, some have gone as far as to say that it does not make any sense to speak of “variables” when it comes to the description of social phenomena, as systems are so complex and dynamics so nested and intersected that scientists should only look at the system as a whole, without trying to break it down into several variables in the hope of coming across some causal links (Byrne, 2002).
Hayek’s research contributed particularly to the field of economics. He received the Nobel Prize for, among other things, his contributions concerning the notion of self-organization and the emergent properties of markets.
In particular, he adopted the word “catallaxy” to describe “the order brought about by the mutual adjustment of many individual economies in a market”
(Hayek, 2013, p. 269). A similar idea was brought forward by English anthropologist and linguist Gregory Bateson, who was among the first to argue that systems have a multi-layer structure. In particular, speaking of ecological anthropology, he supported the idea of a world made of systems structured on several levels (individuals, societies, and ecosystems) and characterized by mechanisms of continuous adaptation and feedback loops that depend on several variables (Bateson, 1972) (these notions are discussed in greater detail in the next section).
One of the earliest scientific contributions to complexity theory that made the link between the natural and social sciences was given by Warren Weaver, mathematician and, among other things, one of the pioneers of machine translation. He talked about complexity to show how research in biology and the medical sciences was essentially different from research in the physical sciences during the seventeenth, eighteenth, and nineteenth centuries (Weaver, 1948). The physical sciences, he argues, focus on studying phenomena while keeping constant all but a couple of variables of interest. However, research in biology and medicine could not adopt the same methodology, as it is seldom possible to keep things constant in living organisms so as to allow for an isolated observation of the variable(s) of interest. Therefore, measuring effects and variations is often made harder by the confounding effect of other variables. Interestingly, and quite ahead of his time, Weaver also discusses in the very same essay the crucial distinction between disorganized and organized complexity. He argues that problems of disorganized complexity are characterized by a very large number of variables whose behaviour is individually erratic. Conversely, when speaking of organized complexity, the same large number of variables displays interrelated patterns which make them an organic whole. Besides, Weaver also extends the idea to the social sciences, asking questions such as: “To what extent is it safe to depend on the free interplay of such economic forces as supply and demand?” or “How can one explain the behaviour pattern of an organized group of persons such as
a labour union, or a group of manufacturers, or a racial minority?” (Weaver, 1948, pp. 539-540). These speculations clearly laid the ground for future applications of the notion of complexity in the social sciences. Again quite ahead of his time, Weaver observes that problems of organized complexity can be effectively treated in essentially two ways:
1. by means of electronic computing devices (which were in their earliest stages at the time) able to simulate complex processes, and
2. through mixed teams of scholars from different disciplines able to pro-vide insights on the same issue from different analytical perspectives.
Not surprisingly, these two methodologies are still today two of the main pillars of complexity science. Incidentally, one of Weaver’s concluding statements would prove true several decades later, when complexity science would finally reach language disciplines:
“Communication must be improved between peoples of different languages and cultures, as well as between all the varied interests which use the same language, but often with such dangerously differing connotations.” (Weaver, 1948, p. 544)9
Concerning the specific case of machine translation, Weaver was among the first (if not the very first) to recognize its complex nature. In his famous 1949 “Translation” memorandum, he supported the idea of applying the high speed, capacity, and logical flexibility of (then) modern computing devices to translation tasks (Weaver, 1955). The proximity of Weaver’s proposals to today’s founding ideas of machine translation is remarkable, especially considering that they date back to the first half of the last century. Among other things, he made two proposals of particular interest for this discussion on complexity theory. First, he rejected the idea of a word-for-word approach, as a word might have several translations in another language. Instead, recognizing that the meaning of a word is highly dependent on the words alongside which it occurs, he supported the idea of deciding on a translation by looking at the context, a number n of words occurring in proximity to the word to be translated. If n is large enough, then it is possible to translate the word with reasonable accuracy. Therefore, computing devices represent a major ally in the translation job, as they can handle a large corpus of co-occurrences.
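Weaver's context idea can be sketched in a few lines of code. The function and the toy corpus below are invented for illustration and are not part of Weaver's memorandum: for each occurrence of a target word, we collect the words falling within a window of n positions on either side, which is the raw material of the co-occurrence counts he envisaged.

```python
# Hypothetical sketch of a context window of n words around a target word.
# The corpus and all names here are invented for illustration only.
from collections import Counter

def context_counts(tokens, target, n=2):
    """Count the words co-occurring with `target` within n positions."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            window = tokens[max(0, i - n):i] + tokens[i + 1:i + 1 + n]
            counts.update(window)
    return counts

corpus = ("the bank approved the loan but the river bank was flooded "
          "after the loan office closed").split()

# The collected contexts hint at which sense of "bank" is in play
# (financial vs. riverside), which is what makes a large n useful.
print(context_counts(corpus, "bank", n=2))
```

Even in this toy example, the context words ("approved", "loan" versus "river", "flooded") begin to disambiguate the target, which is precisely why Weaver argued that a sufficiently large n allows translation with reasonable accuracy.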
He called the study of statistical co-occurrence of words and the application
9As a matter of fact, this very observation also served as an important source of inspiration for this dissertation.
of statistical methods to translation “statistical semantics”. Second, he proposed that translation should be approached as a problem of cryptography.
This idea was probably the result of his participation in World War II as a mathematician. He suggested that a text to be translated from language A to language B should be seen as a text in language B coded into the “A code”.
He starts his discussion by telling the anecdote of a mathematician (whom Weaver simply refers to as “P”) who had spent a period of time at the University of Istanbul and had learnt the Turkish language. P was asked by a colleague who had come up with a deciphering method to write down a text and then encode it into numbers. P wrote a message in Turkish and then gave the encoded version to his colleague. The day after, the colleague came back with a text that, although not perfect, was accurate enough to be understood without much difficulty by someone who spoke the language well. What Weaver found remarkable was that the person who decoded the numerical text was neither aware that the original text was in Turkish, nor could he or she speak the language. Weaver goes on to observe that human beings all have the same physical endowment to cope with their communication needs (i.e. to come up with a language). As different as languages can be, they share some traits that he names “invariants”, which are largely independent of the language used, such as letter combinations, letter patterns, and so on.
Therefore, he puts forward the idea that machine translation should build on the notion of “invariants”, as they represent the key to decoding across different languages.
In 1984, the Santa Fe Institute was founded. Located in Santa Fe, New Mexico, United States, the Santa Fe Institute is a non-profit theoretical research institution whose mission is to “understand and unify the underlying, shared patterns in complex physical, biological, social, cultural, technological, and even possible astrobiological worlds” in order to “promote the well-being of humankind and of life on earth”.10 As of today, the Santa Fe Institute is deemed to have played a key role in the development, consolidation and diffusion of the notion of complexity (Room, 2011, p. 16).