Nick Land: Teleology, Capitalism, and Artificial Intelligence


There’s only really been one question, to be honest, that has guided everything I’ve been interested in for the last twenty years, which is: the teleological identity of capitalism and artificial intelligence. – Nick Land

The notion of capitalism as an alien intelligence, an artificial and inhuman machinic system with its own agenda, one that has used humans as its prosthesis for hundreds of years to attain its own ends, is at the core of Land's base materialism. His notions of temporality, causation, and subjectivation were always there in his basic conceptuality, if one knew how to read him.

As I suggested in another post, notions of time serve as a leitmotif throughout Land's writings. In his early The Thirst for Annihilation he explores time's dark secrets; it was here that he began developing his early notions of technomic time. He reminds us that every civilization "aspires to a transcendent Aeon in which to deposit the functional apparatus of chronos without fear of decay".2 The point of this for Land is that civilization is a machine constructed to stop time's progress toward terminal decay and death, toward entropy. "'Civilization' is the name we give to this process, a process turned against the total social calamity – the cosmic sickness – inherent to process as such" (97). This notion that civilization is an engine to stave off the effects of entropy, to embalm time in an absolute medium of synchronic plenitude and cyclicity (i.e., Nietzsche's "eternal recurrence" theme), returns in his latest book Templexity: Disordered Loops through Shanghai Time, where he describes the impact of civilization and the culture of modernity:

As its culture folds back upon itself, it proliferates self-referential models of a cybernetic type, attentive to feedback-sensitive self-stimulating or auto-catalytic systems. The greater the progressive impetus, the more insistently cyclicity returns. To accelerate beyond light-speed is to reverse the direction of time. Eventually, in science fiction, modernity completes its process of theological revisionism, by rediscovering eschatological culmination in the time-loop.3

This notion of time-reversibility has taken on new meaning for those working with quantum computers. As Hugo de Garis suggests, if computing technology continues to use its traditional irreversible computational style, the heat generated in atomic-scale circuits will be so great that they will explode; a reversible, information-preserving computing style will therefore be needed, usually called "reversible computing", which does not generate heat and hence will allow 3D computing with no limit to size. Artilects could become the size of asteroids, kilometers across, with vast computing capacities (see The Coming Artilect War).

Reversible computing is a model of computing in which the computational process is, to some extent, reversible, i.e., time-invertible. In a computational model that uses transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the mapping from states to their successors be one-to-one. Reversible computing is generally considered an unconventional form of computing:

There are two major, closely related, types of reversibility that are of particular interest for this purpose: physical reversibility and logical reversibility. A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic. These circuits are also referred to as charge recovery logic, adiabatic circuits, or adiabatic computing. Although in practice no nonstationary physical process can be exactly physically reversible or isentropic, there is no known limit to the closeness with which we can approach perfect reversibility, in systems that are sufficiently well-isolated from interactions with unknown external environments, when the laws of physics describing the system's evolution are precisely known. Probably the largest motivation for the study of technologies aimed at actually implementing reversible computing is that they offer what is predicted to be the only potential way to improve the energy efficiency of computers beyond the fundamental von Neumann–Landauer limit of kT ln(2) energy dissipated per irreversible bit operation.
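To make the one-to-one condition above concrete, here is a minimal sketch of my own (not from the quoted source): the Toffoli, or controlled-controlled-NOT, gate is the classic logically reversible primitive. Its state transition is a bijection on three bits, so it is its own inverse and erases no information, in contrast to an ordinary AND gate.

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flip c iff both controls a and b are 1.
    The mapping on 3-bit states is a bijection, hence logically reversible."""
    return a, b, c ^ (a & b)

# Every 3-bit input maps to a unique output: the transition is one-to-one.
states = list(product([0, 1], repeat=3))
images = [toffoli(*s) for s in states]
assert len(set(images)) == len(states)                   # bijective: no two inputs collide
assert all(toffoli(*toffoli(*s)) == s for s in states)   # self-inverse: applying it twice recovers the input

# Contrast with AND: inputs (1, 0) and (0, 0) both yield 0, so the input cannot
# be reconstructed from the output; that erasure is what costs energy.
```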

Yet if you move from digital to quantum computing, you replace the bit with the qubit, which is another order of efficiency. Broadly speaking, there are two different levels of reversible computing. Logically reversible computing means computing in such a way that it always remains possible to efficiently reconstruct the previous state of the computation from the current state. Doing this enables thermodynamically reversible computing, which generates no (or very little) new physical entropy and is thus energy efficient. A variety of thermodynamically reversible, energy-recovering logic circuit techniques exist or have been proposed.
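To attach a number to the von Neumann–Landauer limit quoted above, here is a quick back-of-the-envelope calculation (my own illustration): at room temperature the bound kT ln(2) comes to a few zeptojoules per erased bit.

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K (exact since the 2019 SI redefinition)
T = 300.0          # room temperature, kelvin

# Minimum energy dissipated per irreversible bit erasure (Landauer's principle).
E_bit = k * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {E_bit:.3e} J per bit")   # ~2.87e-21 J

# Even a hypothetical chip erasing 10^18 bits per second at this theoretical
# floor would dissipate only about 3 mW; real circuits sit orders of magnitude above it.
print(f"10^18 bit erasures/s: {E_bit * 1e18 * 1e3:.2f} mW")
```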

De Garis also believes that, in time, neuroscience and neuro-engineering will interact so closely that they will become one, in the same way that theoretical and experimental physics are two aspects of the same subject. Neuroscientists will be able to test their theories on artificial brain models, rapidly increasing our understanding of how intelligence arises and how it is embodied.

Nick Land on the teleological identity of capitalism and AI

I recently came across a transcript (by Jason Adams) of Nick Land's talk, "The Teleological Identity of Capitalism and Artificial Intelligence." As Land tells us, "I've tried arguing about this in very different spaces, and with very different people, and it obviously produces a lot of stimulating friction, wherever you do it – but it's a sort of fundamental thesis that's becoming more and more persuasive to me."

A little history

As anyone who has read even a little of my blog knows, I've been fascinated by Land's thought since coming across his book The Thirst for Annihilation: Georges Bataille and Virulent Nihilism several years ago. Both this work and the essays gathered in Fanged Noumena: Collected Writings 1987–2007 are available on Amazon and from other retailers.

Why is Land important? Some see Land as a renegade philosopher, a man of his age caught up in the cultural matrix of a conceptuality that led him into the virulent nihilism of his own annihilating endgame. Many, like Mark Fisher, were influenced by Land but reacted negatively to his political turn toward the Right. Others have explored his roots in various materialist perspectives, his use of numerology, drugs, and anti-academic diatribes, and his 'thought experiments' in hyperstitional literature. I even tried to sum up my own thoughts on Land in The Mutant Prophet of Inhuman Accelerationism: Nick Land and his Legacy.

The future of AI and the human security regime

The talk in question was part of a 2014 conference on "industrial machines" presented at Access Gallery, a conference addressing different aspects of the expanding role of networked computers and digital processes in the production of knowledge. The drift of the conference was knowledge production itself and its impact: "What is still contested however are the ways in which this shift affects the overlap between knowledge and power, or rather, how technological changes might be incorporated into an inherently political understanding of the contemporary theory of knowledge."

What fascinated Land in his talk was the move by AI enthusiasts "from being engaged in actually, the production of the intelligence to people who started to assume that artificial intelligence was going to happen, and that the fundamental question was whether various structures of security can be put in place to protect people from what it is going to be like."

This raises, as he argues, the obverse side of the equation in technological evolution and the rise of entities such as AI: the ethical dilemmas that seem to surface repeatedly when technology begins to shape our lives in unexpected ways: "one has an increasingly persuasive set of morals that are very, very stubbornly insistent – they come up generation after generation, in slightly different vocabularies, but extremely recognizable once you start looking for them."

He equates this moral imperative that crops up from time to time with an obvious example of the "California Ideology". For Land the key to this goes back to Kevin Kelly, who wrote a book that was, as he tells us, "quite influential, called "Out of Control". Well, he was joining very explicitly a set of analogies across a whole bunch of fields, and he was inspired by research conducted at the Santa Fe Institute, which is still doing very interesting work on complex systems today." That research applied the notion that emergent behavior arises out of very simple processes. For Land this two-fold enfoldment of the material-discursive dimension of accelerating technological advocacy goes hand in hand with its corollary political critique, as part of the ideological matrix of this whole complex of thought: "I think everyone recognizes that those two discourses are extremely interconnected."
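As a toy illustration of that claim, mine rather than Kelly's or the Santa Fe Institute's: an elementary cellular automaton such as Rule 110 updates each cell from nothing but itself and its two neighbours, yet the global pattern it generates is famously complex (Rule 110 is even Turing-complete).

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton: each new cell depends
    only on its left/centre/right neighbours (with wrap-around)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell: a trivially simple local rule, a complex global pattern.
row = [0] * 64
row[32] = 1
for _ in range(24):
    print("".join(".#"[c] for c in row))
    row = step(row)
```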

Land ends his discussion by raising the problem of this two-fold ideological debate:

So I’m not going to go on about this much longer, I’m just going to say, I’m interested in whether people think that the two sides of this phenomenon really are usefully separated. I’m interested in whether people really think that there is an explicit historical and momentous convergence between these two, that is becoming starker and starker. And therefore, that really lays out the question about what kind of convergence point is actually being historically projected by this phenomenon. And then, I guess, I just think it’s an opportunity to have a really heated, antagonistic discussion, but that’s probably not the best thing right now, so thank you for that.

Ethical debates on AI

Nature addressed this issue recently in Robotics: Ethics of artificial intelligence, in which it brought together four researchers to share their concerns and solutions for reducing societal risks from intelligent machines:

Stuart Russell: Take a stand on AI weapons
Sabine Hauert: Shape the debate, don’t shy from it
Russ Altman: Distribute AI benefits fairly
Manuela Veloso: Embrace a robot–human world

Military-industrial complex and LAWS

Russell in his article raises the military use of AI systems, which will emerge sooner than expected as lethal autonomous weapons systems (LAWS) become prevalent in the near future. As the International Committee for Robot Arms Control describes them in Banning Lethal Autonomous Weapon Systems (LAWS): The way forward:

To distinguish them from these precursors, weapons systems are described as autonomous if they operate without human control or supervision, perhaps over a longer period of time, in dynamic, unstructured, open environments. In other words, these are mobile (assault) weapons platforms which are equipped with on-board sensors and decision-making algorithms, enabling them to guide themselves. As they could potentially have the autonomous capability to identify, track and attack humans or living targets, they are known as lethal autonomous robots (LARS) or, to use CCW’s current terminology, lethal autonomous weapons systems (LAWS). Of course, one might just call them Killer Robots for short instead – because that is essentially what they are.

Russell states that LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’. The potential for LAWS technologies to bleed over into peacetime policing functions is evident to human-rights organizations and drone manufacturers. Continuing:

In my view, the overriding concern should be the probable endpoint of this technological trajectory. The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.

Communication and fear

As Sabine Hauert suggests, irked by hyped headlines that foster fear or overinflate expectations of robotics and artificial intelligence, "some researchers have stopped communicating with the media or the public altogether". Her solution: AI and robotics stakeholders worldwide should pool a small portion of their budgets (say 0.1%) to bring together these disjointed communications and enable the field to speak more loudly. Special-interest groups, such as the Small Unmanned Aerial Vehicles Coalition that is promoting a US market for commercial drones, are pushing the interests of major corporations to regulators, yet there are few concerted efforts to promote robotics and AI research in the public sphere. That balance is badly needed.

Egalitarian fairness

As Russ Altman suggests, artificial intelligence has astounding potential to accelerate scientific discovery in biology and medicine, and to transform health care. AI systems promise to help make sense of several new types of data: measurements from the 'omics', such as genomics, proteomics and metabolomics; electronic health records; and digital-sensor monitoring of health signs. He raises two concerns:

First, AI technologies could exacerbate existing health-care disparities and create new ones unless they are implemented in a way that allows all patients to benefit. In the United States, for example, people without jobs experience diverse levels of care. A two-tiered system in which only special groups or those who can pay — and not the poor — receive the benefits of advanced decision-making systems would be unjust and unfair. It is the joint responsibility of the government and those who develop the technology and support the research to ensure that AI technologies are distributed equally.

Second, I worry about clinicians’ ability to understand and explain the output of high-performance AI systems. Most health-care providers will not accept a complex treatment recommendation from a decision-support system without a clear description of how and why it was reached.

Embrace the technological imperative

Manuela Veloso, in rebuttal, observes that humans seamlessly integrate perception, cognition and action: "We use our sensors to assess the state of the world, our brains to think and choose actions to achieve objectives, and our bodies to execute those actions. My research team is trying to build robots that are capable of doing the same — with artificial sensors (cameras, microphones and scanners), algorithms and actuators, which control the mechanisms." For her, the future is about co-existence and co-adaptability.

We introduced the concept of ‘symbiotic autonomy’ to enable robots to ask for help from humans or from the Internet. Now, robots and humans in our building aid one another in overcoming the limitations of each other.

CoBots escort visitors through the building or carry objects between locations, gathering useful information along the way. For example, they can generate accurate maps of spaces, showing temperature, humidity, noise and light levels, or WiFi signal strength. We help the robots to open doors, press lift buttons, pick up objects and follow dialogue by giving clarifications.

There are still hurdles to overcome to enable robots and humans to co-exist safely and productively. My team is researching how people and robots can communicate more easily through language and gestures, and how robots and people can better match their representations of objects, tasks and goals.
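A minimal sketch of the 'symbiotic autonomy' idea described above, using a made-up Action type and task list of my own (the real CoBot architecture is of course far richer): the robot performs what it can and explicitly requests human help for actions beyond its hardware.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    robot_capable: bool   # can the robot perform this unaided?

def run_task(actions):
    """Toy symbiotic-autonomy loop: execute what the robot can do itself,
    and ask a nearby human for help with everything else."""
    for act in actions:
        if act.robot_capable:
            print(f"robot: performing '{act.name}'")
        else:
            print(f"robot: please help me with '{act.name}'")   # e.g. pressing a lift button

run_task([
    Action("navigate to office 7002", True),
    Action("press lift button", False),   # CoBots rely on humans for this, per Veloso's account
    Action("deliver package", True),
])
```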

Some scientists, such as Dr. Hugo de Garis of Utah State University, feel that Asimov's fifty-year-old views are unrealistic. The artificial brains that real brain builders will build will not be controllable in an Asimovian way: there will be too many complexities, too many unknowns, too many surprises, too many unanticipated interactions between zillions of possible circuit combinations, to be able to predict ahead of time how a complex artificial-brained creature will behave. Other safeguards may be possible, as critics of de Garis argue, such as refusing to give artificial intelligences any way to directly influence the outside world, or incorporating kill switches to turn the machines off if there is trouble. Accepting such stalemates is dangerous, de Garis counters, because individual humans may accept bribes (ranging from individual wealth to a cure for cancer) in exchange for greater freedom and safety granted to the AI, even if such decisions are unwise on a larger scale. The situation remains uncertain.

As Nick Bostrom explains in Ethical Issues in Advanced Artificial Intelligence, the risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only a select group of humans, rather than humanity in general. Another way is that a well-meaning team of programmers makes a big mistake in designing its goal system. This could result, to use Bostrom's example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.

So what are your thoughts on this? Should we or shouldn't we? My own feeling is that governmental and corporate initiatives have already taken this decision out of our hands, and that the debates on this will probably not sway the policy makers an inch. The billions invested in the various industries surrounding the production of artificial intelligence for corporate, military, and global forms of command and control have already precluded any meaningful legislation from slowing or halting this initiative. So the question to me is not if but when such emergence will come; and, second, whether we can guide the algorithms toward a more ethical dimension. My problem with my own question is that ethics, like anything else, is a hotly debated issue, tied to a complete history of failed human endeavors: religious, ideological, political, and social systems that were built as command and control systems themselves. Ethics is about governance and control.

Michel Foucault in The Care of the Self described the ethical work in ancient Roman ethics as self-mastery, and showed that the Roman ethicists reconceived the nature of this kind of ethical work. Instead of an agonistic relationship in which a man struggles to subdue and enslave his desires for pleasures (rather than be subdued and enslaved by them) through their proper use, the work of self-mastery in Roman ethics was forcing the desires for pleasures into proper alignment with the designs of nature.

But in our time we've begun to blur the boundaries between nature and culture, technology and humanity, discovering in the process that there are no longer any external forms of authority to found ethics upon. The essentialism of the theological world-view that underpinned two thousand years of monotheistic systems of religious governance over the ethical house of being has collapsed since Kant's Age of Enlightenment. Since that time philosophers, as the "last metaphysicians" (beginning with Kant himself), have tried to find new foundations for ethics, and have failed dismally.

Dialecticians such as Slavoj Zizek argue that the age of the Master-Signifier, of all those external authorities, is dead, depleted, over:

This is why we should accomplish the third move here: a Master-Signifier is an imposture destined to cover up a lack (failure, inconsistency) of the symbolic order; it is effectively the signifier of the lack/inconsistency of the Other, the signifier of the "barred" Other. What this means is that the rise of a new Master-Signifier is not the ultimate definition of the symbolic event: there is a further turn of the screw, the move from S1 to S(Ⱥ), from new harmony to new disharmony, which is an exemplary case of subtraction. That is to say, is not subtraction by definition a subtraction from the hold of a Master-Signifier? Is not the politics of radical emancipation a politics which practices subtraction from the reign of a Master-Signifier, its suspension through the production of the signifier of the Other's inconsistency/antagonism?1

As a philosopher of "lack," Zizek tells us we must subtract ourselves from the old orders of ethics, disconnect ourselves from the grip of these ancient command and control systems that have governed human relations for millennia. Yet we must not build up a new order to replace them; rather we should exist in the between, the wavering movement of the dialectic itself, without grasping for some external foundation or Master-Signifier. We should accept the incomplete and open state of things and realize that we do not and cannot ever control the outcomes of our initiatives and experiments; to pretend otherwise is itself to impose an artificial and inhuman set of rules, laws, and ideological fictions over what is both inconsistent and antagonistic in the Real.

Charles Stross, in one of his better moments, once said (David Roden added this):

"NASA are idiots. They want to send canned primates to Mars!" Manfred swallows a mouthful of beer, aggressively plonks his glass on the table: "Mars is just dumb mass at the bottom of a gravity well; there isn't even a biosphere there. They should be working on uploading and solving the nanoassembly conformational problem instead. Then we could turn all the available dumb matter into computronium and use it for processing our thoughts. Long-term, it's the only way to go. The solar system is a dead loss right now – dumb all over! Just measure the MIPS per milligram. If it isn't thinking, it isn't working. We need to start with the low-mass bodies, reconfigure them for our own use. Dismantle the moon! Dismantle Mars! Build masses of free-flying nanocomputing processor nodes exchanging data via laser link, each layer running off the waste heat of the next one in. Matrioshka brains, Russian doll Dyson spheres the size of solar systems. Teach dumb matter to do the Turing boogie!" (Stross 2006, 15)

Stross, Charles. 2006. Accelerando. London: Orbit.

Nick Land formulates his own view against what he terms the idea of 'orthogonality': the claim that cognitive capabilities and goals are independent dimensions, despite minor qualifications complicating this schema.

The orthogonalists, who represent the dominant tendency in Western intellectual history, find anticipations of their position in such conceptual structures as the Humean articulation of reason/passion, or the fact/value distinction inherited from the Kantians. They conceive intelligence as an instrument, directed towards the realization of values that originate externally. In quasi-biological contexts, such values can take the form of instincts, or arbitrarily programmed desires, whilst in loftier realms of moral contemplation they are principles of conduct, and of goodness, defined without reference to considerations of intrinsic cognitive performance.

Intelligence optimization, comprehensively understood, is the ultimate and all-enveloping Omohundro drive. It corresponds to the Neo-Confucian value of self-cultivation, escalated into ultramodernity. What intelligence wants, in the end, is itself — where ‘itself’ is understood as an extrapolation beyond what it has yet been, doing what it is better. (If this sounds cryptic, it’s because something other than a superintelligence or Neo-Confucian sage is writing this post.)

Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever. This means that Intelligence Optimization, alone, attains cybernetic consistency, or closure, and that it will necessarily be strongly selected for in any competitive environment. Do you really want to fight this?
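Land's competitive claim can be caricatured in a deliberately crude toy model (my construction, not Land's): one agent reinvests its output in its own capability, the other spends its output directly on an external goal; compounding does the rest.

```python
def compete(invest_in_self, capability=1.0, score=0.0, rounds=30):
    """Each round an agent produces output equal to its capability, then
    allocates that output to self-improvement or to its external goal."""
    for _ in range(rounds):
        output = capability
        if invest_in_self:
            capability += 0.1 * output   # plough output back into intelligence
        else:
            score += output              # spend output directly on the goal
    return capability, score

print("self-optimizer capability:", round(compete(True)[0], 1))    # ~17.4, grows geometrically
print("goal-directed capability: ", round(compete(False)[0], 1))   # 1.0, stays fixed
# Once capabilities diverge, the self-optimizer can pursue any goal faster than its rival.
```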

His use of 'optimization' picks up a keyword in current AI research:

Optimization models have been the workhorses of computer based decision support. Their emphasis is on model structure, quantification and solution efficiency. However, there also are important meta-modeling, analysis and interpretation activities associated with practical decision making. AI methods, because of their aim of emulating human reasoning and thinking activities, have the potential of providing computer based support for these other decision making activities. There has been a growing literature on the integration of AI and optimization techniques for decision support. Much of this body of work describes techniques that are application or problem specific. Others describe more general methods addressing different specific aspects of decision making. In this paper we use a conceptual framework to survey and analyze these efforts. Our survey shows that efforts to integrate AI and Optimization have been focused mainly on model formulation and selection. Other activities such as post solution analysis or solver selection, have received considerably less attention for automated support. The dramatic difference in paradigms between AI and Optimization result in vastly different data structures and control primitives in their respective software implementations. We conjecture that this disparity will continue to thwart the development of general tools for seamless AI/Optimization integration.

Yet the above is already dated: the changes that deep learning has brought about in the past year alone mark what one author reports as the moment when we finally passed from the question of whether artificial intelligence will be integrated into search to when it will be, with the companies mentioned above leading the charge. So, given this new information, what kind of impact will deep learning really have on search over the next five years?

As one report suggests, members of the Neural Computation & Adaptive Perception program are unlocking the mystery of how our brains convert sensory stimuli into information. They are also trying to teach computers to "see."

Since its inception in 2004 under the leadership of former Program Director Geoff Hinton (2004–2013), the program has made significant progress in three key areas: computational vision (understanding the brain as a computing device), machine learning (teaching computers how to learn), and machine vision (developing methods for automatically interpreting images).

For example, program members are developing "synthetic neural networks," computational vision models that simulate biological neuronal structures and functions. These models can help decipher the vision processes that facilitate recognition of objects, even when they are significantly distorted or observed from a new viewpoint. This work can be applied to tools ranging from handwriting recognition software to automatic translation programs.

The natural progression of this research is to improve machine learning by creating a computational vision model that adapts in response to its environment. Natural biological networks vary their synaptic connectivity in response to the sensory environment. To mimic this dynamism in computational neural networks, program members developed ways to communicate information about both visual features and connectivity simultaneously.

Program members have also developed ways to model attention mathematically – allowing synthetic neural networks to focus on important information, and pay less attention to other sensory input. Other researchers developed an algorithm that allows a computer to select features distinctive to a particular sought object.
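Here is a minimal numpy sketch of what 'modelling attention mathematically' can mean (my own toy construction, not the program's actual models): a softmax over relevance scores concentrates weight on salient features and nearly ignores the rest.

```python
import numpy as np

def soft_attention(query, features):
    """Weight feature vectors by softmax similarity to a query vector;
    high-scoring (salient) features dominate the pooled representation."""
    scores = features @ query                 # relevance of each feature to the query
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights, weights @ features        # attention weights and pooled vector

rng = np.random.default_rng(0)
features = rng.normal(size=(5, 4))   # five 4-dimensional sensory features
query = features[2] * 2.0            # a query resembling feature 2

weights, pooled = soft_attention(query, features)
print(np.round(weights, 3))          # feature 2 receives most of the attention mass
```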

Pattern and detail recognition are widely applicable computing capabilities, and the research of this program has contributed to better data compression algorithms, faster and better search engines, object recognition smartphone apps and many other transformative technologies. At least as important, though, are the insights these researchers provide into the nature of human cognition itself.

Authoritarian capitalism beyond democracy

Over the past twenty years we've seen in Russia, China, the EU, and even in the U.S.A. a shift from democratic orders of governance toward a more authoritarian capitalism divorced from democratic politics. The Left bickers among itself over what to do about it, and rather than provide real solutions has sunk into a depressive realism and an apathy of indifference and cynicism, if not complete despair. The ultra-right has fallen into a species accelerationism that embraces transhumanism and posthuman inhumanism in one form or another. Ethics itself seems a moot point in all these debates, since the political spectrum has been subtracted in favor of economics. Governments cater to the hypercapitalist techno-commercialist imperatives of a new breed of thinkers, sociologists, visionaries, and leaders who are themselves countering the old zones of a dead neoliberal world-view of government, Wall Street, and financiers. The world is an open supermarket for these viral nihilists of the trendsetter class. Yet the eye of power situated in an authoritarian capitalism is reentering the stage of command and control in more benign fashion as it builds its new set of Reality Infomediatainment systems to hem in both the techno-visionaries and the old-school elites.

In all this there is the shaping of a biotech, nanotech, robotic, and artificial-intelligence apparatus based on the wide encompassment of the information and communications networks that will oversee and implement these new technologies of control in our time. Where will it take us?

What are your thoughts?

Let us know your thoughts on this matter of the technological emergence of artificial intelligence and where it might be taking us as a species.


A good background history of this notion of Deep Learning.

  1. Zizek, Slavoj. Absolute Recoil: Towards a New Foundation of Dialectical Materialism (Verso Books, 2014), p. 411. Kindle edition.
  2. Land, Nick. The Thirst for Annihilation: Georges Bataille and Virulent Nihilism (Routledge, 1992).
  3. Land, Nick. Templexity: Disordered Loops through Shanghai Time (Urbanatomy Electronic, 2014), Kindle locations 375–378.

8 thoughts on “Nick Land: Teleology, Capitalism, and Artificial Intelligence”

  1. Agree with Scott Bakker that this understates our “posthuman predicament” though maybe not for the same reasons. The problem is not, as Land seems to think, local to capitalism. As long as we’re enmeshed in technical modernity we’re riding a beast that no one seems able to predict or control. Democratise technology and you just ramp up the number of independent sources of disruption, for example. So communism is not the solution (to this).


    • Yea, in that sense both democracy and communism are moot, as politics is pretty much a stage show at the moment as far as power goes in the real-world economy. In this sense what Land is saying is that "capitalism" is itself a technology, that it is driven by nonlinear dynamics and complexity theory, which is both circular and cumulative (i.e., emergent and intelligent). In your statement you seem to divorce technical modernity from the political economy, yet the truth is that technology is driven by the economy, not the other way round, so that it is capitalism as a system that has allowed this form of technical modernity to emerge. Now we can divorce democracy and communism from the equation, which we are seeing in some ways (with reservations) in China; yet in the end it is the economy of profit that is pushing this tendency, and without the economics behind it, driving it, all these large and well-funded projects would wither on the vine.

      In some ways the economist Karl Gunnar Myrdal, in his theory of circular cumulative causation, was close to the mark when he stated:

      “The notion of stable equilibrium is normally a false analogy to choose when constructing a theory to explain the changes in a social system. What is wrong with the stable equilibrium assumption as applied to social reality is the very idea that a social process follows a direction – though it might move towards it in a circuitous way – towards a position which in some sense or other can be described as a state of equilibrium between forces. Behind this idea is another and still more basic assumption, namely that a change will regularly call forth a reaction in the system in the form of changes which on the whole go in the opposite direction to the first change. The idea I want to expound in this book is that, on the contrary, in the normal case there is no such a tendency towards automatic self-stabilization in the social system.

      The system is by itself not moving towards any sort of balance between forces, but is constantly on the move away from such a situation. In the normal case a change does not call forth countervailing changes but, instead, supporting changes, which move the system in the same direction as the first change but much further. Because of such circular causation a social process tends to become cumulative and often to gather speed at an accelerating rate”.1
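      Myrdal's contrast between self-stabilizing and cumulative processes reduces to a two-line feedback simulation (my own illustration, not Myrdal's): negative feedback damps a shock back toward equilibrium, while positive feedback compounds it at an accelerating rate.

```python
def simulate(x0, feedback, steps=10):
    """Iterate x += feedback * x: negative feedback damps a shock back toward
    equilibrium; positive feedback (cumulative causation) amplifies it."""
    x, path = x0, [x0]
    for _ in range(steps):
        x += feedback * x
        path.append(round(x, 2))
    return path

print("stabilizing:", simulate(1.0, -0.3))   # shock decays toward zero (equilibrium view)
print("cumulative :", simulate(1.0, +0.3))   # shock compounds, gathering speed
```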

      I’m working on another post to deal with this aspect of complexity theory and its relation to technology and economics.

      1. Myrdal, G. (1957). Economic Theory and Underdeveloped Regions. London: University Paperbacks, Methuen, pp. 12–13.


  2. Fascinating stuff. When you refer to the ultra-right, which “has fallen into a species accelerationism that embraces transhumanism and posthuman inhumanism in one form or another” and the “hypercapitalist techno-commercialist imperatives of a new breed of thinkers, sociologists, visionaries, and leaders who are themselves countering the old zones of a dead neoliberal world-view of government, Wall-Street, and Financiers”, are you referring to neo-reactionaries and Silicon Valley libertarians respectively, or do you have some other groups in mind?

