Even though it was Ross Ashby who first conceived the notion of self-organization in systems, he did not understand its relation to entropy. It would be Heinz von Foerster, in his essay "On Self-Organizing Systems and Their Environments," who would deploy the formula that informs self-organization today.

For von Foerster, a self-organizing system is one whose "internal order" increases over time. This immediately raises the double problem of how this order is to be measured and how the boundary between the system and the environment is to be defined and located. The second problem is provisionally solved by defining the boundary "at any instant of time as being the envelope of that region in space which shows the desired increase in order". For measuring order, von Foerster finds that Claude Shannon's definition of "redundancy" in a communication system is "tailor-made." In Shannon's formula,

R = 1 - H/H_m

where R is the measure of redundancy and H/H_m the ratio of the entropy H of an information source to its maximum value H_m. Accordingly, if the system is in a state of maximum disorder (i.e., H is equal to H_m), then R is equal to zero: there is no redundancy and therefore no order. If, however, the elements in the system are arranged in such a way that, "given one element, the position of all other elements are determined," then the system's entropy H (which is really the degree of uncertainty about these elements) vanishes to zero. R then becomes unity, indicating perfect order. Summarily, "Since the entropy of the system is dependent upon the probability distribution of the elements to be found in certain distinguishable states, it is clear that [in a self-organizing system] this probability distribution must change such that H is reduced".1
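Shannon's redundancy measure can be computed directly from a probability distribution over a system's distinguishable states. The following sketch (the function names are mine, not von Foerster's or Johnston's) illustrates the two limiting cases just described: maximum disorder gives R = 0, and perfect order gives R = 1.

```python
import math

def shannon_entropy(probs):
    """Entropy H = -sum(p * log2(p)) in bits, skipping zero-probability states."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def redundancy(probs):
    """Shannon/von Foerster redundancy R = 1 - H/H_m,
    where H_m = log2(n) is the entropy of the uniform distribution over n states."""
    h_m = math.log2(len(probs))
    return 1 - shannon_entropy(probs) / h_m

# Maximum disorder: uniform distribution, so H = H_m and R = 0
print(redundancy([0.25, 0.25, 0.25, 0.25]))  # 0.0

# Perfect order: one state is certain, so H = 0 and R = 1
print(redundancy([1.0, 0.0, 0.0, 0.0]))  # 1.0
```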

The formula thus leads to a simple criterion: if the system is self-organizing, then the rate of change of R should be positive (i.e., δR/δt > 0). To apply the formula, however, R must be computed for both the system and the environment, since their respective entropies are coupled. Since there are several different ways for the system's entropy to decrease in relation to the entropy of the environment, von Foerster refers to the agent responsible for the changes in the former as the "internal demon" and the agent responsible for changes in the latter as the "external demon." These two demons work interdependently, in terms of both their efforts and results. (ibid. 56)
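The criterion δR/δt > 0 can be made concrete with a toy model. In this sketch (entirely hypothetical, not from von Foerster's essay), an "internal demon" sharpens the system's probability distribution at each time step, and we verify that the redundancy R rises monotonically:

```python
import math

def redundancy(probs):
    """Shannon/von Foerster redundancy R = 1 - H/H_m over n distinguishable states."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return 1 - h / math.log2(len(probs))

def sharpen(probs, rate=0.5):
    """Toy 'internal demon': shift probability mass toward the first state,
    then renormalize so the distribution still sums to 1."""
    boosted = [probs[0] + rate * (1 - probs[0])] + [p * (1 - rate) for p in probs[1:]]
    total = sum(boosted)
    return [p / total for p in boosted]

probs = [0.25] * 4  # start in maximum disorder
history = []
for _ in range(5):
    history.append(redundancy(probs))
    probs = sharpen(probs)

# von Foerster's criterion: R increases at every step (dR/dt > 0)
assert all(later > earlier for earlier, later in zip(history, history[1:]))
```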

Given the fruitfulness of the idea that a complex order can emerge from a system's exposure to "noise" or other disturbances, von Foerster's illustration can only seem disappointing. Or rather, viewed in the light of the sea changes that the concept of self-organization would undergo over the next twenty years, von Foerster's proposal (and some would say the same about Ashby's theorizing) can have at most an anticipatory value. These sea changes followed from the interaction and relay of two series of developments. On the one hand, Ilya Prigogine (chemistry), Hermann Haken (physics), and Manfred Eigen (biology) made groundbreaking empirical discoveries of self-organizing systems, in which instabilities resulting from the amplification of positive feedback loops spontaneously create more complex forms of organization. On the other hand, Stephen Smale, René Thom, Benoît Mandelbrot, and Mitchell Feigenbaum, to name a few, made discoveries in topology and nonlinear mathematics that led to a complete revamping of dynamical systems theory. This story, which involves the discovery of how nonlinear factors can produce deterministically chaotic systems, is now fairly familiar. One simple but telling difference this sea change has made in dynamical systems theory is that the concept of the attractor has replaced notions like stability and equilibrium. As a system moves toward increasing instability it may reach a point where two sets of values for its variables, and hence two different states, are equally viable. (In its phase portrait as a dynamical system this is referred to as a bifurcation.) But which state will it "choose"? There is no way of knowing, since the system's behavior has become indeterminate at this point. The very presence of a bifurcation means that the system is falling under the influence of a different attractor and thus undergoing a dynamical change as a whole. (ibid. 57-58)
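The bifurcation described in the passage above can be seen in the simplest of nonlinear systems, the logistic map (a standard textbook illustration, not an example from Johnston's text). For the parameter r just below 3 the map settles onto a single attracting fixed point; just above 3 the attractor splits into two equally viable values, a period-doubling bifurcation:

```python
def attractor_values(r, x0=0.2, transient=1000, sample=8):
    """Iterate the logistic map x -> r*x*(1-x) past its transient,
    then collect the distinct values it visits (rounded to 6 decimals)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    values = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        values.add(round(x, 6))
    return sorted(values)

print(attractor_values(2.8))  # one value: a single stable fixed point
print(attractor_values(3.2))  # two values: the attractor has bifurcated
```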

To see how these early pioneers shifted the ground both of the sciences and of philosophical speculation is an important lesson. It was pragmatic, practical engineering thinking and empirical approaches, rather than theoretical knowledge, that drove this shift. Yet because these early pioneers lacked the breadth of a theoretical base, they were unable to make the leap necessary to widen their practical, empirical knowledge into other domains.

Either way, the notion that instabilities resulting from the amplification of positive feedback loops spontaneously create more complex forms of organization would come back into play decades later. One should also look back toward thermodynamics. When one imagines how an artificial organism might suddenly bifurcate and make that leap into superintelligence, one can look back and see just what path it took to get to that indeterminate point.

1. John Johnston, The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, p. 55. Kindle Edition.