The Global Cyberwar: The Algorithms of Intelligent Malware

When the engineer left Natanz and connected the computer to the Internet, the American- and Israeli-made bug failed to recognize that its environment had changed. It began replicating itself all around the world. Suddenly, the code was exposed, though its intent would not be clear, at least to ordinary computer users.1

Wired has an article by Kim Zetter, “An Unprecedented Look at Stuxnet, the World’s First Digital Weapon,” which elaborates on the now widely known collaboration between US and Israeli intelligence agencies seeking a way to infiltrate and slow down or destroy the centrifuges in the Natanz nuclear facility in Iran.

Needless to say they were successful, yet in their success they failed miserably. Why? Read the quoted passage again and you notice that the code, originally carried into the closed facility on flash drives, was released into the facility’s computers, slowly unwound, installed, and phased into its operative mode, and began to work through the networks of the facility until by chance or accident it found itself outside the facility and on the Internet. So that, as James Barrat reminds us, we “do not know the downstream implications of delivering this powerful technology into the hands of our enemies. How bad could it get? An attack on elements of the U.S. power grid, for starters.” (Barrat, 261-262)
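The design asymmetry behind that accident can be sketched abstractly. By most accounts Stuxnet gated its payload on a fingerprint of the target environment (markers standing in for its checks on Siemens control software and specific centrifuge configurations), yet did not gate its replication the same way. The toy sketch below is purely illustrative: every name in it is hypothetical, and no real detection, propagation, or payload mechanics are modeled.

```python
# Purely illustrative sketch of the asymmetry between a fingerprint-gated
# payload and ungated replication. All names are hypothetical stand-ins.

def payload_should_fire(env: dict) -> bool:
    # The payload runs only when the environment resembles the target:
    # hypothetical markers standing in for Stuxnet's reported checks on
    # Siemens Step7 software and centrifuge arrays.
    return env.get("control_software") == "step7" and env.get("centrifuge_array", 0) > 0

def worm_step(env: dict) -> list[str]:
    """Actions taken on one host. Note that replication is NOT gated
    on the target fingerprint; that is the design flaw in question."""
    actions = ["replicate_to_reachable_hosts"]   # always, even off-target
    if payload_should_fire(env):
        actions.append("run_sabotage_payload")
    return actions

# Inside the facility: replication and payload.
inside = {"control_software": "step7", "centrifuge_array": 164}
# An engineer's laptop on the open Internet: the payload stays dormant,
# but replication continues, hence the worldwide spread.
outside = {"control_software": None, "internet": True}

print(worm_step(inside))
print(worm_step(outside))
```

On this reading the flaw is simply that the first action in `worm_step` happens unconditionally while the second is fingerprint-gated: off-target machines stay unharmed but still spread the code.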

The article by Zetter doesn’t mention this fatal flaw in the plan, or that the malware is now spreading across the globe and is available for even our enemies to use against us. As Barrat relates, Sean McGurk, a former head of cyberdefense at DHS, was asked in a CBS 60 Minutes interview whether, had he been consulted, he would have built such a malware application:

MCGURK: [Stuxnet’s creators] opened up the box. They demonstrated the capability. They showed the ability and the desire to do so. And it’s not something that can be put back.
KROFT: If somebody in the government had come to you and said, “Look, we’re thinking about doing this. What do you think?” What would you have told them?
MCGURK: I would have strongly cautioned them against it because of the unintended consequences of releasing such a code.
KROFT: Meaning that other people could use it against you?

(Barrat, 260)

The segment ends with German industrial control systems expert Ralph Langner. Langner “discovered” Stuxnet by taking it apart in his lab and testing its payload. He tells 60 Minutes that Stuxnet dramatically lowered the dollar cost of a terrorist attack on the U.S. electrical grid to about a million dollars. Elsewhere, Langner warned about the mass casualties that could result from unprotected control systems throughout America, in “important facilities like power, water, and chemical facilities that process poisonous gases.”

“What’s really worrying are the concepts that Stuxnet gives hackers,” said Langner. “Before, a Stuxnet-type attack could have been created by maybe five people. Now it’s more like five hundred who could do this. The skill set that’s out there right now, and the level required to make this kind of thing, has dropped considerably simply because you can copy so much from Stuxnet.”

(Barrat, 261-265)

As one analyst put it, Stuxnet is remarkably complex, but hardly extraordinary. Some analysts have described it as a Frankenstein of existing cyber criminal tradecraft – bits and pieces of existing knowledge patched together to create a chimera. The analogy is apt and, just like the literary Frankenstein, the monster may come back to haunt its creators. The virus leaked out and infected computers in India, Indonesia, and even the U.S., a leak that occurred through an error in the code of a new variant of Stuxnet sent into the Natanz nuclear enrichment facility. This error allowed the Stuxnet worm to spread into an engineer’s computer when it was hooked up to the centrifuges, and when he left the facility and connected his computer to the Internet the worm did not realize that its environment had changed. Stuxnet began spreading and replicating itself around the world. The Americans blamed the Israelis, who admitted nothing, but whoever was at fault, the toothpaste was out of the tube.2

Deibert goes on to say the real significance of Stuxnet lies not in its complexity, or in the political intrigue involved (including the calculated leaks), but in the threshold that it crossed: major governments taking at least implicit credit for a cyber weapon that sabotaged a critical infrastructure facility through computer coding. No longer was it possible to counter the Kasperskys and Clarkeses of the world with the retort that their fears were simply “theoretical.” Stuxnet had demonstrated just what type of damage can be done with black code. (Deibert, KL 2728)

Such things are just the tip of the iceberg, too. The world of cybercrime, cyberterrorism, and cyberwar is a thriving billion-dollar industry, flourishing as a full-time aspect of the global initiatives of almost every major player on the planet. As reported in the NY Times article “U.S. Blames China’s Military Directly for Cyberattack,” the Obama administration explicitly accused China’s military of mounting attacks on American government computer systems and defense contractors, saying one motive could be to map “military capabilities that could be exploited during a crisis.” Meanwhile countries like Russia target their former satellites. From “Suspicion Falls on Russia as ‘Snake’ Cyberattacks Target Ukraine’s Government”: according to a report published by the British-based defense and security company BAE Systems, dozens of computer networks in Ukraine have been infected for years by a cyberespionage “tool kit” called Snake, which seems similar to a system that several years ago plagued the Pentagon, where it attacked classified systems.

Bloomberg summarized this concept in the following statement:

“The U.S. national security apparatus may be dominant in the physical world, but it’s far less prepared in the virtual one. The rules of cyberwarfare are still being written, and it may be that the deployment of attack code is an act of war as destructive as the disabling of any real infrastructure. And it’s an act of war that can be hard to trace: Almost four years after the initial NASDAQ intrusion, U.S. officials are still sorting out what happened. Although the American military is an excellent deterrent, it doesn’t work if you don’t know whom to use it on.”

As Deibert warns, we are wrapping ourselves in expanding layers of digital instructions, protocols, and authentication mechanisms, some of them open, scrutinized, and regulated, but many closed, amorphous, and poised for abuse, buried in the black arts of espionage, intelligence gathering, and cyber and military affairs. Is it only a matter of time before the whole system collapses? (Deibert, KL 2819)

President Dwight D. Eisenhower once warned of the growing Military-Industrial Complex of the 1950s; now, Deibert suggests, we have an ever-growing cyber-security industrial complex, a world where a rotating cast of characters moves in and out of national security agencies and the private sector companies that service them. (Deibert, KL 2927) For those in the defense and intelligence services industry this scenario represents an irresistibly attractive market opportunity. Some estimates value the cyber-security military-industrial business at upwards of US $150 billion annually. (Deibert, KL 3022) The digital arms trade for products and services around “active defence” may end up causing serious instability and chaos. Frustrated by their inability to prevent constant penetrations of their networks through passive defensive measures, companies increasingly see retaliatory measures as legitimate. (ibid., 3079)

Malicious software that pries open and exposes insecure computing systems is developing at a rate beyond the capacities of cyber security agencies even to count, let alone mitigate. Data breaches of governments, private sector companies, NGOs, and others are now an almost daily occurrence, and systems that control critical infrastructure – electrical grids, nuclear power plants, water treatment facilities – have been demonstrably compromised. (Deibert, KL 3490) The social forces leading us down the path of control and surveillance are formidable, and sometimes even appear inevitable. But nothing is ever inevitable. (Deibert, KL 3532)

In Mind Factory Slavoj Zizek will ask the question: Are we entering the posthuman era? He will then go on to say that the survival of being-human by humans cannot depend on an ontic decision by humans.3

Instead, he reminds us, we should admit that the true catastrophe has already happened: we already experience ourselves as in principle manipulable, we need only freely renounce ourselves to fully deploy these potentials. But the crucial point is that, not only will our universe of meaning disappear with biogenetic planning, i.e. not only are the utopian descriptions of the digital paradise wrong, since they imply that meaning will persist; the opposite, negative, descriptions of the “meaningless” universe of technological self-manipulation is also the victim of a perspective fallacy, it also measures the future with inadequate present standards. That is to say, the future of technological self-manipulation only appears as “deprived of meaning” if measured by (or, rather, from within the horizon of) the traditional notion of what a meaningful universe is. Who knows what this “posthuman” universe will reveal itself to be “in itself”? (Mind Factory, KL 366-68)

What if there is no singular and simple answer, what if the contemporary trends (digitalisation, biogenetic self-manipulation) open themselves up to a multitude of possible symbolisations? What if the utopia— the pervert dream of the passage from hardware to software of a subjectivity freely floating between different embodiments— and the dystopia— the nightmare of humans voluntarily transforming themselves into programmed beings— are just the positive and the negative of the same ideological fantasy? What if it is only and precisely this technological prospect that fully confronts us with the most radical dimension of our finitude? (Mind Factory, KL 366-83)

With so many things going on in the sciences, military, governments, nations, etc., where are the watchdogs that can discern the trends? Who can give answer to all the myriad elements making up this strange new posthuman era we all seem to be blindly moving toward? Or is it already here? With malware on the loose, algorithms that manipulate, grow, and improve loose around the globe, reprogrammed by various unknown governments, criminal syndicates, and hackers, what does the man or woman on the street do? As Nick Land will say of one of his alter egos:

Vauung seems to think there are lessons to be learnt from this despicable mess.4


1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 261). St. Martin’s Press. Kindle Edition.
2. Deibert, Ronald J. (2013-05-14). Black Code: Inside the Battle for Cyberspace (Kindle Locations 2721-2728). McClelland & Stewart. Kindle Edition.
3. Armand, Louis; Zizek, Slavoj; Critchley, Simon; McCarthy, Tom; Wark, McKenzie; Ulmer, Gregory L.; Kroker, Arthur; Tofts, Darren; Lewty, Jane (2013-07-19). Mind Factory (Kindle Locations 367-368). Litteraria Pragensia. Kindle Edition.
4. Land, Nick (2013-07-01). Fanged Noumena: Collected Writings 1987 – 2007 (Kindle Location 9008). Urbanomic/Sequence Press. Kindle Edition.




Technocapitalism: Creativity, Governance, and Neo-Imperialism

The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.

— Nick Land, Fanged Noumena: Collected Writings 1987 – 2007

Luis Suarez-Villa in his Technocapitalism: A Critical Perspective on Technological Innovation and Corporatism informs us that the major feature that sets technocapitalism apart from previous eras is the vital need to commodify creativity.1 Why is this different from older forms of capitalism? The overarching importance of creativity as a commodity can be found readily in any of the activities that are typical of technocapitalism. With the rise of the NBIC (Nanotech, Biotech, Information, and Communications) technologies, in areas such as biotechnology (genomics, proteomics, bioinformatics, biopharmaceuticals), in nanotechnology, in molecular computing, and in the other sectors that are symbolic of the twenty-first century, the commodification and reproduction of creativity are at the center of their commercialization. None of these activities could have formed, much less flourished, without the unremitting commodification of creativity that makes their existence possible. (Suarez-Villa, KL 365-67)

Nick Land in Fanged Noumena will offer us the latest version of a meltdown in which we all participate in a planet-wide china-syndrome, the dissolution of the biosphere into the technosphere.2 Luciano Floridi will augment this notion in turn, equating this transformation or metamorphosis into the technosphere with technocapital corporatism’s ‘Onlife’ strategy, one in which information becomes our surround, our environment, our reality.3 As Floridi states it, ICTs are re-ontologizing the very nature of the infosphere, and here lies the source of some of the most profound transformations and challenging problems that we will experience in the close future, as far as technology is concerned (Floridi, 6-7). He will expand on this topic, saying:

ICTs are as much re-ontologizing our world as they are creating new realities. The threshold between here (analogue, carbon-based, offline) and there (digital, silicon-based, online) is fast becoming blurred, but this is as much to the advantage of the latter as it is to the former. Adapting Horace’s famous phrase, ‘captive infosphere is conquering its victor’, the digital-online is spilling over into the analogue-offline and merging with it. This recent phenomenon is variously known as ‘Ubiquitous Computing’, ‘Ambient Intelligence’, ‘The Internet of Things’, or ‘Web-augmented Things’. I prefer to refer to it as the onlife experience. (Floridi, 8)

The notion of an Onlife experience is moving us toward that rubicon zone of the posthuman, or becoming inhuman. The Onlife blurs the distinctions between reality and virtuality; blurs the boundaries between human, machine, and nature; reverses information scarcity into information abundance (and, some might say, ‘glut’); and, finally, shifts from substance-based notions of entities to process and relations, or interactions.4 Floridi would have us believe that ICTs are becoming a force for good, that they will break down the older modernist or Enlightenment notions of disembodied autonomous subjects, and will bind us within a democratic enclave of information and creativity.

Yet, as Suarez-Villa warns, control over society at large, and not just governance, is the larger concern involving technocapitalism and corporate power. The globalist agenda is not to create democratic and participatory governance, but rather to impose new forms of control and power using advanced technological systems. Technology has always been a two-edged sword. The quest for corporate and global hegemony coupled with poor social accountability can have far-reaching effects. It would not be shocking to see genetic engineering brought into the human realm to produce individuals with characteristics that are highly desirable to corporatism. The “design” or “engineering” of humans with greater potential for creativity and innovation would be of great interest in this regard. After all, most people want their offspring to be “successful” and “well adjusted.” One can therefore expect corporatism to appeal to such sentiments that suit its need for power. (see Suarez-Villa, KL 1880-83)

As technocapitalist hegemony incorporates its most valuable resource, creativity, it transcends boundaries and restraints. Commodifying creativity therefore acquires a global scope for the technocapitalist corporation, even though it is carried out within the corporate domain. Moreover, as it appropriates the results of creativity, the technocapitalist corporation becomes a powerful entity in the context of globalization. Its power takes up a supranational character that transcends the governance of any nation or locale. Corporate intellectual property regimes that are increasingly global in scope and enforcement magnify that power to an unprecedented extent. Thus, given the contemporary importance of technology, corporate technocapitalism is in a position to impose its influence around the world, particularly on societies with a limited possibility to create new technology. (Suarez-Villa, KL 2017-23)

This sense that technocapital corporatism is constructing a global hegemony outside the strictures of the older nation-states, one that can bypass the regulatory mechanisms of any one sovereignty, is at the heart of this new technological imperative. The technocorporatism of the 21st century seeks to denationalize sovereignty, to eliminate the borders and barriers between rival factions. Instead of the ancient battle between China, Russia, the EU, America, etc., it seeks a strategy to circumvent nations altogether and build new relations of trust beyond the paranoia of national borders.

The globalists seek to appropriate the results of creativity on a global scale. Research is the corporate operation through which such appropriation typically occurs. Appropriating the results of creativity has therefore become a major vehicle to sustain and expand the global ambitions of corporate power. Intellectual property rights that confer monopoly power, such as patents, are now a very important concern of corporatism. The fact that corporate intellectual property has become a major component of international trade, and an important focus of litigation around the world, underlines the rising importance of creativity as a corporate resource. (Suarez-Villa, KL 2115)

Beyond corporate control and hegemony is the notion of reproduction, which is inherently social in nature. Reproduction is inherently social because of creativity’s intangibility, because of its qualitative character, and because it depends on social contexts and social relations to develop. Many aspects of reproduction are antithetical to the corporate commodification of creativity, yet they are essential if this intangible resource is to be regenerated and deployed. (Suarez-Villa, KL 2121)

Along with this new technocapitalist utopia comes the other side of the coin, the permanence of inequalities and injustices between the haves and the have-nots becomes one of the pathological outcomes of technocapitalism, of its apparatus of corporate power, and of its new vehicles of global domination. (Suarez-Villa, KL 4066) As Suarez-Villa iterates:

The new vehicles of domination are multi-dimensional. They comprise corporate, technological, scientific, military, organizational and cultural elements. All of these elements of domination are part of the conceptual construct of fast neo-imperialism— a new systemic form of domination under the control of the “have” nations at the vanguard of technocapitalism. This new neo-imperial power is closely associated with the phenomena of fast accumulation, with the new corporatism, with its need to appropriate and commodify creativity through research, and with its quest to obtain profit and power wherever and whenever it can. (KL 4068-72)

Corporatocracy’s slow transformation and disabling of the old nation-state powers involves a redistribution of power and wealth away from the mass of the people, and most of all from the poor and working classes, toward the corporate elites and the richest segment of society. Redistribution is accompanied by a dispossession of the people from a wide spectrum of rights, individual, social, economic, political, environmental, and ecological, in order to benefit corporatism and increase its influence over society’s governance. This vast migration of wealth from the poor of all nations, and the inequalities it engenders, support the new corporatism’s urgent need for more creative talent, aggressive intellectual property rights, lower research costs, and its appropriation of a wide range of bioresources, including the genetic codes of every living organism on earth. (Suarez-Villa, KL 4840-82)

As Suarez-Villa sums it up, we are now at the crossroads of what may be a new trajectory for humanity, given technocapitalism’s use and abuse of technology and science, the overwhelming power of its corporations, its capacity to legitimize such power, and its quest to impose it on the world. The crises that we have witnessed in recent times may be a prelude to the maelstrom of crises and injustice that await us, if effective means are not enlisted to contest this new version of capitalism. (Suarez-Villa, KL 5555-60)

Is it too late? Have we waited way too long to wake up? Nick Land will opt for the harsh truth: “Nothing human makes it out of the near-future” (Land, KL 6063). James Barrat in his Our Final Invention: Artificial Intelligence and the End of the Human Era will offer little comfort, telling us that most of the scientists, engineers, thinkers, and funders behind the emerging AGI-to-AI technologies are not concerned with humanity in their well-funded bid to build artificial systems that can think a thousand times better than us. In fact they’ll use ordinary programming and black-box tools like genetic algorithms and neural networks. Add to that the sheer complexity of cognitive architectures and you get an unknowability that will not be incidental but fundamental to AGI systems. Scientists will achieve intelligent, alien systems.5 These will be systems that are totally other, inhuman to the core, without values human or otherwise, gifted only with superintelligence. And many of these scientists believe that this will come about by 2030. As Barrat tells us:

Of the AI researchers I’ve spoken with whose stated goal is to achieve AGI, all are aware of the problem of runaway AI. But none, except Omohundro, have spent concerted time addressing it. Some have even gone so far as to say they don’t know why they don’t think about it when they know they should. But it’s easy to see why not. The technology is fascinating. The advances are real. The problems seem remote. The pursuit can be profitable, and may someday be wildly so. For the most part the researchers I’ve spoken with had deep personal revelations at a young age about what they wanted to spend their lives doing, and that was to build brains, robots, or intelligent computers. As leaders in their fields they are thrilled to now have the opportunity and the funds to pursue their dreams, and at some of the most respected universities and corporations in the world. Clearly there are a number of cognitive biases at work within their extra-large brains when they consider the risks.(Barrat, 234-235)
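Barrat’s phrase “black box tools” is worth unpacking. A genetic algorithm receives nothing but a fitness score; the solutions it breeds are selected, not designed, so even a toy run yields an answer whose rationale exists nowhere in the code. A minimal standard-library sketch (the task and parameters are arbitrary illustrations):

```python
import random

random.seed(0)

TARGET = 20          # toy task: evolve a 24-bit string with 20 ones
BITS, POP, GENS = 24, 30, 60

def fitness(genome):
    # The ONLY signal the algorithm ever receives: distance of the
    # count of ones from TARGET. No gradient, no design, no explanation.
    return -abs(sum(genome) - TARGET)

def mutate(genome, rate=0.05):
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a, b):
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                      # keep the fittest half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(sum(best))  # near TARGET, but *why this genome* has no answer
```

The point of the toy is the asymmetry: the result works, yet nothing in the program can say why this particular bit string emerged rather than another, which is the opacity Barrat means, scaled down to a few lines.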

And behind most of this is the need to weaponize AI and robotics technologies. At least here in the States, DARPA is the great power and funder behind most of the stealth companies and other players like Google, IBM, and others… Not to put too fine a point on it, but the “D” is for “Defense.” It’s not the least bit controversial to anticipate that when AGI comes about, it’ll be partly or wholly due to DARPA funding. The development of information technology owes a great debt to DARPA. But that doesn’t alter the fact that DARPA has authorized its contractors to weaponize AI in battlefield robots and autonomous drones. Of course DARPA will continue to fund AI’s weaponization all the way to AGI. Absolutely nothing stands in its way. (Barrat, 235)

So here we are at the transitional moment staring into the abyss of the future wondering what beasts lurk on the other side. As Barrat surmises “I believe we’ll first have horrendous accidents, and should count ourselves fortunate if we as a species survive them, chastened and reformed. Psychologically and commercially, the stage is set for a disaster. What can we do to prevent it?” (Barrat, 236)


Only the possibility of youth remains; or, as Land tells us, we enter the derelicted warrens at the heart of darkness, where feral youth cultures splice neo-rituals with innovated weapons, dangerous drugs, and scavenged infotech. As their skins migrate to machine interfacing they become mottled and reptilian. They kill each other for artificial body-parts, explore the outer reaches of meaningless sex, tinker with their DNA, and listen to LOUD electro-sonic mayhem untouched by human feeling. (Land, KL 6218-6222)

Welcome to the posthuman Real.

1. Luis Suarez-Villa. Technocapitalism: A Critical Perspective on Technological Innovation and Corporatism (Kindle Locations 364-365). Kindle Edition. 
2. Land, Nick (2013-07-01). Fanged Noumena: Collected Writings 1987 – 2007 (Kindle Location 6049). Urbanomic/Sequence Press. Kindle Edition.
3. Floridi, Luciano (2013-10-10). The Ethics of Information (p. 6). Oxford University Press, USA. Kindle Edition.
4. Floridi, Luciano. The Onlife Manifesto. (see here)
5. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 230). St. Martin’s Press. Kindle Edition.

Romancing the Machine: Intelligence, Myth, and the Singularity

“We choose to go to the moon,” the president said. “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.”

I was sitting in front of our first Motorola color television set when President Kennedy spoke to us of going to the moon. After the Manhattan Project to build a nuclear bomb, this was the second great project through which America confronted another great power, this time in the race to land on the moon. As I listened to the video (see below) I started thinking about a new race going on in our midst: the intelligence race to build the first advanced Artificial General Intelligence (AGI). As you listen to Kennedy, think about how one of these days soon we might very well hear another President tell us that we must fund the greatest experiment in the history of humankind: the building of a superior intelligence.

Why? Because if we do not, we face certain extinction. Oh sure, such rhetoric of doom and fear has always had a great effect on humans. I can imagine him or her trumping us with all the scientific validation about climate change, asteroid impacts, food and resource depletion, etc., but in the end he may pull out the obvious trump card: the idea that a rogue state, maybe North Korea or Iran, is on the verge of building such a superior machinic intelligence, an AGI. But hold on. It gets better. For the moment an AGI is finally achieved is not the end. No. That is only the beginning, the tip of the iceberg. What comes next is AI or complete artificial intelligence: superintelligence. And no one can tell you what that truly means for the human race. Because for the first time in our planetary history we will live alongside something that is superior and alien to our own life form, something that is both unpredictable and unknown: an X Factor.


Just think about it. Let it seep down into that quiet three pounds of meat you call a brain. Let it wander around the neurons for a few moments. Then listen to Kennedy’s speech on the romance of the moon, and remember the notion of some future leader who will one day come to you saying other words, promising a great and terrible vision of surpassing intelligence and with it the likely ending of the human species as we have known it:

“We choose to build an Artificial Intelligence,” the president said. “We choose to build it in this decade, not because it is easy, but because it is for our future, our security, because that goal will serve to organize our defenses and the security of the world, because that risk is one that we are willing to accept, one we are not willing to postpone, because of the consequences of rogue states gaining such AI’s, and one which we intend to win at all costs.”

Is it really so far-fetched to believe that we will eventually uncover the principles that make intelligence work and implement them in a machine, just like we have reverse engineered our own versions of the particularly useful features of natural objects, like horses and spinnerets? News flash: the human brain is a natural object.

—Michael Anissimov, MIRI Media Director

We are all bound by certain cognitive biases. Looking them over I was struck by the conservatism bias: “The tendency to revise one’s belief insufficiently when presented with new evidence.” As we move into the 21st Century we are confronted with what many term convergence technologies: nanotechnology, biotechnology, genetic technology, and AGI. As I was looking over PewResearch’s site, which analyzes many of the belief systems we are most prone to, I spotted a report on AI, robotics, et al.:

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade. (see AI, Robotics, and the Future of Jobs)
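The conservatism bias quoted above has a simple quantitative face: on being shown evidence, people move toward the Bayesian posterior but only part of the way. A sketch contrasting the full update with a conservative one (the probabilities and the 0.4 update weight are illustrative assumptions, not empirical constants):

```python
def bayes_posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | E) for a single piece of evidence E, by Bayes' theorem."""
    num = prior * likelihood_h
    return num / (num + (1 - prior) * likelihood_not_h)

def conservative_update(prior, posterior, weight=0.4):
    # Revising "insufficiently": move only a fraction of the way
    # from prior to posterior. The weight is an illustrative assumption.
    return prior + weight * (posterior - prior)

# Hypothetical numbers: H = "robotics/AI permeate daily life by 2025",
# E = an expert canvassing that strongly anticipates it.
prior = 0.2
posterior = bayes_posterior(prior, likelihood_h=0.9, likelihood_not_h=0.3)
held_belief = conservative_update(prior, posterior)

print(round(posterior, 3))    # 0.429: what the evidence warrants
print(round(held_belief, 3))  # 0.291: what the conservative reviser believes
```

The gap between the two printed numbers is the bias: the held belief lands strictly between the prior and the warranted posterior.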

This almost universal acceptance that robotics and AI will be a part of our inevitable future permeates the mythologies of our culture at the moment. Yet, as the report shows, there is a deep divide as to what this means and how it will impact the daily lives of most citizens. Of course the vanguard pundits and AGI experts hype it up, telling us, as Benjamin Goertzel and Steve Omohundro argue, that AGI, robotics, medical apps, finance, programming, etc. will improve substantially:

…robotize the AGI— put it in a robot body— and whole worlds open up. Take dangerous jobs— mining, sea and space exploration, soldiering, law enforcement, firefighting. Add service jobs— caring for the elderly and children, valets, maids, personal assistants. Robot gardeners, chauffeurs, bodyguards, and personal trainers. Science, medicine, and technology— what human enterprise couldn’t be wildly advanced with teams of tireless and ultimately expendable human-level-intelligent agents working for them around the clock?1

As I read the above I hear no hint of the human workers that will be displaced, put out of jobs, left to their own devices, lost in a world of machines, victims of technological and economic progress. In fact such pundits are only hyping to the elite, the rich, the corporations and governments that will benefit from such things because humans are lazy, inefficient, victims of time and energy, expendable. Seems most humans at this point will be of no use to the elite globalists, so will be put to pasture in some global commons or maybe fed to the machine gods.

Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them.

—Ray Kurzweil, inventor, author, futurist

In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

—George Dyson, historian

Kurzweil and Dyson agree that whatever these new beings become, they will not have our interests as a central motif of their ongoing script. As Goertzel tells Barrat, the arrival of human-level intelligent systems would have stunning implications for the world economy. AGI makers will receive immense investment capital to complete and commercialize the technology. The range of products and services intelligent agents of human caliber could provide is mind-boggling. Take white-collar jobs of all kinds: who wouldn’t want smart-as-human teams working around the clock doing the things normal flesh-and-blood humans do, but without rest and without error? (Barrat, pp. 183-184) Oh, yes, who wouldn’t… one might put that question to all the precarious intellectual laborers who will be out on the street in soup lines with the rest of us.

As the report mentioned above relates, about half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers, with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.

Sounds more like dystopia for the masses, and just another nickelodeon day for the elite oligarchs around the globe. Yet the other 52% have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution. Sounds a little optimistic to me. Human ingenuity versus full-blown AI? Sounds more like blind man’s bluff with the deck stacked in favor of the machines. As Stowe Boyd, lead researcher at GigaOM Research, said of the year 2025, when all this might be in place: “What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?’” Indeed, one wonders… we know the Romans built the great Circus and staged gladiatorial combat, blood-bath entertainment for the bored and out-of-work minions of the Empire. What will the Globalists do?

A sort of halfway house of non-commitment came from Seth Finkelstein, a programmer, consultant, and winner of the EFF Pioneer Award, who responded, “The technodeterminist-negative view, that automation means job loss, end of story, versus the technodeterminist-positive view, that more and better jobs will result, both seem to me to make the error of confusing potential outcomes with inevitability. Thus, a technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole…. this is not a technological consequence; rather it’s a political choice.”

I love that one can cop out by throwing it back into politics, thereby washing one’s hands of the whole problem, as if magically saying: “I’m just a technologist, let the politicians worry about jobs. It’s not technology’s fault; there is no determinism on our side of the fence.” Except it is not politicians who supply jobs, it’s corporations: and, whether technology is determined or not, corporations are: determined by capital, by their stockholders, by profit margins, etc. So if they decide to replace workers with more efficient players (think AI, robots, multi-agent systems, etc.) they will, if it makes them money and profits. Politicians can hem and haw all day about it, but will be lacking in answers. So as usual the vast plebeian forces of the planet will be thrown back onto their own resources, and for the most part excluded from the enclaves and smart cities of the future. In this scenario humans will become the untouchables, the invisible, the servants or pets of machines; or, worst-case scenario, pests to be eliminated.

Yet there are others, like Vernor Vinge, who believe all the above may be true, but not for a long while; we will probably first go through a phase in which humans are augmented by intelligence devices. He believes this is one of three sure routes to an intelligence explosion in the future, when a device can be attached to your brain that imbues it with additional speed, memory, and intelligence. (Barrat, p. 189) As Barrat tells us, our intelligence is broadly enhanced by the mobilization of powerful information technology: our mobile phones, for example, many of which have roughly the computing power of personal computers circa 2000, and a billion times the power per dollar of sixties-era mainframe computers. We humans are mobile, and to be truly relevant, our intelligence enhancements must be mobile. The Internet, and other kinds of knowledge, not the least of which is navigation, gain vast new power and dimension as we are able to take them wherever we go. (Barrat, p. 192)

But even if we have all this data at our braintips it is still data that must be filtered, appraised, evaluated. Data is not information. As Luciano Floridi tells us, “we need more and better technologies and techniques to see the small-data patterns, but we need more and better epistemology to sift the valuable ones”.2 As Floridi explains, what Descartes acknowledged to be an essential sign of intelligence – the capacity to learn from different circumstances, adapt to them, and exploit them to one’s own advantage – would be a priceless feature of any appliance that sought to be more than merely smart. (Floridi, KL 2657) Floridi puts an opposite spin on the issues around AGI and AI, telling us that whatever it ultimately becomes, it will not be some singular entity or self-aware being, but will instead be our very environment, what he terms the InfoSphere: the world is becoming an infosphere increasingly well adapted to the limited capacities of ICTs (Information and Communications Technologies). In a comparable way, we are adapting the environment to our smart technologies to make sure the latter can interact with it successfully. (Floridi, KL 2661)

For Floridi the environment around us is taking on intelligence; it will be so ubiquitous, invisible, and naturalized that it will be seamless, a part of our very onlife lives. The world itself will be intelligent:

Light AI, smart agents, artificial companions, Semantic Web, or Web 2.0 applications are part of what I have described as a fourth revolution in the long process of reassessing humanity’s fundamental nature and role in the universe. The deepest philosophical issue brought about by ICTs concerns not so much how they extend or empower us, or what they enable us to do, but more profoundly how they lead us to reinterpret who we are and how we should interact with each other. When artificial agents, including artificial companions and software-based smart systems, become commodities as ordinary as cars, we shall accept this new conceptual revolution with much less reluctance. It is humbling, but also exciting. For in view of this important evolution in our self-understanding, and given the sort of ICT-mediated interactions that humans will increasingly enjoy with other agents, whether natural or synthetic, we have the unique opportunity of developing a new ecological approach to the whole of reality. (Floridi, KL 3055-62)

That our conceptions of reality, self, and environment will suddenly take on a whole new meaning is beyond doubt. Everything we’ve been taught for two thousand years in the humanistic traditions will go bye-bye; or, at least, will be treated as the ramblings of early human children fumbling in the dark. At least so goes the neo-information philosophy of thinkers such as Floridi. He tries to put a neoliberal spin on it, sponsoring an optimistic vision of economic paradises for all. As he says in his conclusion, we are constructing an artificial intelligent environment, an infosphere that will be inhabited by millennia of future generations: “We shall be in serious trouble, if we do not take seriously the fact that we are constructing the new physical and intellectual environments that will be inhabited by future generations (Floridi, KL 3954).” Because of this, he tells us, we will need to forge a new alliance between the natural and the artificial. It will require a serious reflection on the human project and a critical review of our current narratives, at the individual, social, and political levels. (Floridi, 3971)

In some ways I concur with his statement that we need to take a critical view of our current narratives. To me the key is just that. Humans live by narratives, stories, tales, fictions; always have. The modernists wanted grand narratives, while the postmodernists loved micro-narratives. What will our age need? What will help us to understand and participate in the great adventure ahead, in which the natural and the artificial suddenly form alliances in ways never before seen in human history? From the time of the great agricultural civilizations to the Industrial Age to our own strange fusion of science fiction and fact, in a world where superhuman agents might one day walk among us, what stories will we tell? What narratives do we need to help us contribute to our future, and, hopefully, to the future of our species? Will the narratives ultimately be told a thousand years from now by our inhuman alien AIs to their children, tales of a garden that once existed wherein ancient flesh-and-blood beings lived: the beings that were our creators? Or shall it be a tale of symbiotic relations in which natural and artificial kinds walk hand in hand, forging together adventures in exploration of the galaxy and beyond? What tale will it be?

Romance or annihilation? Let’s go back to the bias: “The tendency to revise one’s belief insufficiently when presented with new evidence.” If we listen to the religious wing of transhumanism and the singularitarians, we are presented with a rosy future full of augmentations, wonders, and romance. On the other side we have the dystopians, the pessimists, the curmudgeons who tell us the future of AGI leads to the apocalypse of superintelligence and the demise of the human race as a species. Is there a middle ground? Floridi seems to opt for one, in which humans and technologies neither exactly merge nor destroy each other, but instead become symbionts in an ongoing onlife project without boundaries other than those we impose by a shared vision of balance and affiliation between natural and artificial kinds. Either way we do not know for sure what the future holds, but as some propose, the future is not some blank slate or mirror: it is to be constructed. How shall we construct it? Above all: whose future is it anyway?

As James Barrat tells us, consider DARPA. Without DARPA, computer science and all we gain from it would be in a much more primitive state. AI would lag far behind, if it existed at all. But DARPA is a defense agency. Will DARPA be prepared for just how complex and inscrutable AGI will be? Will they anticipate that AGI will have its own drives, beyond the goals with which it is created? Will DARPA’s grantees weaponize advanced AI before they’ve created an ethics policy regarding its use? (Barrat, 189)

My feeling is that even if they had an ethics policy in place, would it matter? Once AGI takes off and is self-aware and able to self-improve its capabilities, software, programs, etc., it will, as some say, become in a very few iterations a full-blown superintelligence a thousand, ten thousand, or more times beyond human intelligence. Would ethics matter when confronted with an alien intelligence so far beyond our simple three-pound organic brain that it may not even care to recognize us or communicate? What then?

We might be better off studying some of the posthuman science fiction authors in future posts (from io9’s Essential Posthuman Science Fiction):

  1. Frankenstein, by Mary Shelley
  2. The Time Machine, by H.G. Wells
  3. Slan, by A.E. van Vogt
  4. Dying Earth, by Jack Vance
  5. More Than Human, by Theodore Sturgeon
  6. Slave Ship, by Frederik Pohl
  7. The Ship Who Sang, by Anne McCaffrey
  8. Dune, by Frank Herbert
  9. “The Girl Who Was Plugged In” by James Tiptree Jr.
  10. Aye, and Gomorrah, by Samuel Delany
  11. Uplift series, by David Brin
  12. Marooned in Realtime, by Vernor Vinge
  13. Beggars in Spain, by Nancy Kress
  14. Permutation City, by Greg Egan
  15. The Bohr Maker, by Linda Nagata
  16. Nanotech Quartet series, by Kathleen Ann Goonan
  17. Patternist series, by Octavia Butler
  18. Blue Light, by Walter Mosley
  19. Look to Windward, by Iain M. Banks
  20. Revelation Space series, by Alastair Reynolds
  21. Blindsight, by Peter Watts
  22. Saturn’s Children, by Charles Stross
  23. Postsingular, by Rudy Rucker
  24. The World Without Us, by Alan Weisman
  25. Natural History, by Justina Robson
  26. The Windup Girl, by Paolo Bacigalupi

1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (pp. 184-185). St. Martin’s Press. Kindle Edition.
2. Floridi, Luciano (2014-06-26). The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Kindle Locations 2422-2423). Oxford University Press. Kindle Edition.

Posthuman Economics: The Empire of Capital

Maybe what haunts posthumanism is not technology but utopian capitalism, the dark silences long repressed, excluded, disavowed, and negated within the Empire of Capital. Franco Berardi’s The Uprising grabs the history of art and capital by the horns, reading it as the slow and methodical implementation of the Idealist program. By this he means the dereferentialization of reality, or what we now term the semiotization of reality: the total annihilation of any connection between signifier and signified, word and thing, mind and world. Instead we live in a world structured by fantasy that has, over time, dematerialized reality.

In economics it was Richard Nixon (1971) who cut the link between financial capital and its referent, the gold standard, inaugurating the subtly dematerialized monetarism of the neoliberal era. This slow vanishing act of reality into its digital matrix has in our time become so naturalized that we have forgotten how much our lives are enmeshed in fictions divorced from even the illusion of reality. As Berardi puts it:

The premise of neoliberal dogmatism is the reduction of social life to the mathematical implications of financial algorithms. What is good for finance must be good for society, and if society does not accept this identification and submission, then that means that society is incompetent, and needs to be redressed by some technical authority.1

He speaks of the moment when Greek Prime Minister Papandreou actually had the audacity to question the EU’s austerity program and was summarily ousted by the new entity, the Markets, and replaced with a consultant from Goldman Sachs. He asks calmly: what is this blind god, the Markets?

Markets are the visible manifestation of the inmost mathematical interfunctionality of algorithms embedded in the techno-linguistic machine: they utter sentences that change the destiny of the living body of society, destroy resources, and swallow the energies of the collective body like a draining pump. (Berardi, 32)

In this sense we are already being run by the machinic systems of math and computation at the core of our economic system. As he tells it, the humans behind the system are not fascists, yet they allow society to be enslaved by a mathematical system of economics and financialization that is clean, smooth, perfect, and efficient. The financial orthodoxy would have you believe that all things should act efficiently. Like all orthodoxies it offers comfort and guidance, but, as orthodoxies do, it also has the power to wound those who cannot follow its dogmas or who resist its rituals of conformity. It is technological because it has primarily to do with making things work, and it is particularly apparent in the contemporary emphasis on quantifiable productivity and associated fears of waste, especially the waste of time.2

Mihaly Csikszentmihalyi once developed his theory of optimal experience based on the concept of flow: the state in which people are so involved in an activity that nothing else seems to matter; the experience itself is so enjoyable that people will do it even at great cost, for the sheer sake of doing it.3 Thinking of flow and efficiency, one discovers that the key is the concept of flow – of information or of goods, for example – and the role of efficiency in preventing disruptions. This suggests that beneath the zeal for efficiency lies the desire to control a changing world, to keep flow at an optimal, peak level at all times in society, and to combat and prevent anything that might disrupt it.

In Berardi’s mathematization of society we are no longer consumers and users; instead, as Bruce Sterling tells us in The Epic Struggle of the Internet of Things, we have become “participants under machine surveillance, whose activities are algorithmically combined within Big Data silos” (Sterling, KL 30). In this sense we are no longer embodied humans, but bits of data floating among the wired worlds of our digital economy. Yet a fascinating aspect of the Internet of things is that the giants who control its major thrust – Google, Amazon, Facebook, Apple, and Microsoft – couldn’t care less about efficiency. They don’t bother to “compete” with each other because their real strategy is to “disrupt”. Rather than “competing” – becoming more efficient at doing something specific – “disruption” involves a public proof that the rival shouldn’t even exist. (Sterling, KL 212-216)

The basic order of the economic day is coded in the language of noir dime novels. “Knifing the baby” means deliberately appropriating the work of start-ups before they can become profitable businesses. “Stealing the oxygen” means seeing to it that markets don’t even exist – that no cash changes hands, while that formerly profitable activity is carried out on a computer you control. (Sterling, KL 224)

Yet underneath all the glitter and glitz is the hard truth of reality. If the Internet of things is a neo-feudal empire of tyrant corporations disrupting the flows of efficient commerce in a bid to attain ever greater power and influence, then the world of austerity and nation states outside the wires is preparing for the barbarians. As Berardi relates, outside the cold steel wires of financial digi-tyranny we can already see the violent underbelly of the old physical body of the social raising its reactionary head: nation, race, ethnic cleansing, and religious fundamentalism are running rampant around the globe. While the digital elite pirate away the world of finance, the forgotten citizenry outside the digital fortress prepare for war in the streets: despair, suicide, and annihilation in the austerity vacuum of a bloated world of wires.

Maybe Yeats wrote his poem The Second Coming for our century:

    Turning and turning in the widening gyre
    The falcon cannot hear the falconer;
    Things fall apart; the centre cannot hold;
    Mere anarchy is loosed upon the world,
    The blood-dimmed tide is loosed, and everywhere
    The ceremony of innocence is drowned;
    The best lack all conviction, while the worst
    Are full of passionate intensity.

    Surely some revelation is at hand;
    Surely the Second Coming is at hand.
    The Second Coming! Hardly are those words out
    When a vast image out of Spiritus Mundi
    Troubles my sight: somewhere in sands of the desert
    A shape with lion body and the head of a man,
    A gaze blank and pitiless as the sun,
    Is moving its slow thighs, while all about it
    Wind shadows of the indignant desert birds.
    The darkness drops again; but now I know
    That twenty centuries of stony sleep
    Were vexed to nightmare by a rocking cradle,
    And what rough beast, its hour come round at last,
    Slouches towards Bethlehem to be born?

1. Franco “Bifo” Berardi. The Uprising. (Semiotext(e), 2012)
2. Jennifer Karns Alexander. The Mantra of Efficiency: From Waterwheel to Social Control (Kindle Locations 29-32). Kindle Edition
3. Csikszentmihalyi, Mihaly (2008-08-18). Flow (P.S.) (Kindle Locations 214-216). HarperCollins. Kindle Edition.


David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 7)

Our role as humans, at least for the time being, is to coax technology along the paths it naturally wants to go. – Kevin Kelly

In his book of that name, What Technology Wants, he elaborates, asking:

So what does technology want? Technology wants what we want— the same long list of merits we crave. When a technology has found its ideal role in the world, it becomes an active agent in increasing the options, choices, and possibilities of others. Our task is to encourage the development of each new invention toward this inherent good, to align it in the same direction that all life is headed. Our choice in the technium— and it is a real and significant choice— is to steer our creations toward those versions, those manifestations, that maximize that technology’s benefits, and to keep it from thwarting itself.1

As you read the above paragraph you notice how Kelly enlivens technology, as if it were alive, vital, had its own will and determination, its own goals. This notion that technology should be coaxed along toward its ‘inherent good’, and that it is our obligation and moral duty to steer it (think of steersman: cyber) and help it along so it doesn’t get frustrated and thwart itself, comes perilously close to treating technology like a child that needs to be educated, taught what it needs to know, helped to become the best it can be. But is technology alive? Does it have goals? Is it something with an ‘inherent good’ or moral agenda? And, most of all, is it our task and responsibility to ensure technology gets what it wants? Such a discourse shifts the game, making us feel as if technology now has the upper hand, as if its agenda were more important than ours. What’s Kelly up to, anyway?

Again I take up from my previous post David Roden’s Posthuman Life: Philosophy at the Edge of the Human. In that post Roden left us asking: What is a technology, exactly, and to what extent does technology leave us in a position to prevent, control or modify the way in which a disconnection might occur? If we listened to Kelly, we might just discover, in helping this agent of the technium – as he terms the symbiotic alliance of humans and technology in our time – that technology wants something we might not quite want for ourselves: the end of humanity. Of course that’s the notion presented in such movies as the Terminator series of films.

What Roden offers instead is a reminder that we may first want to question our role and the role of technology in our lives and futures. He reminds us that in chapter five he provided an account which argued that we have a moral interest in making or becoming posthumans, since the dated nonexistence of posthumans is the primary source of uncertainty about the value of posthuman life. Whether we agree or disagree with this is beyond our immediate concern. As he has shown over and over, this all falls within the parameters of a speculative posthumanism that is both undetermined and open to variable accountings. In this chapter he appraises such actions in the context of our existing technological society.

The first thing he questions is the work of Jacques Ellul and Martin Heidegger, both of whom support, to varying degrees, the notion that technology is deterministic. The notion that technology exerts a determining effect on society and humans is substantivist rather than instrumentalist:

Technology is not a neutral instrument but a structure of disclosure that determines how humans are related to things and to one another. If Heidegger is right, we may control individual devices, but our technological mode of being exerts a decisive grip on us: “man does not have control over unconcealment itself, in which at any given time the real shows itself or withdraws” (Heidegger 1978: 299). If this is right, the assumption that humans will determine whether our future is posthuman or not is premature. (Roden, 3476-3480)2

Ellul, on the other hand, develops a theory of technique in which the notion of “self-augmentation” is aligned with the autonomy of technology: “the individual represents this abstract tendency, he is permitted to participate in technical creation, which is increasingly independent of him and increasingly linked to its own mathematical law” (Ellul, quoted in Roden, 3494). Roden argues instead that the condition of technical self-augmentation is in fact incompatible with the determinism Ellul and Heidegger ascribe to technology:

Self-augmentation can only operate where techniques do not determine how they are used. Thus substantivists like Ellul and Heidegger are wrong to treat technology as a system that subjects humans to its strictures. (Roden, 3512)

In the rest of the chapter Roden elaborates on this statement with examples from both Ellul and Heidegger. I’ll not go into the details, which mainly bolster his basic defense of the disconnection thesis as indeterminate and open, rather than determined by technology or technique. If planetary technology is a self-augmenting system, then Ellul’s normative technological determinism lacks the resilience needed to explain the various anomalous aspects of existing technological innovation and change. In fact the chapter’s main thrust is to argue not over specific notions of technicity, but for a realist conception of technological rupture and disconnection as against the deterministic phenomenological philosophies of Heidegger, Ellul, Verbeek, and Ihde: we should embrace a realist metaphysics of technique in opposition to the phenomenologies of Verbeek and Ihde. Technologies, according to this model, are abstract, repeatable particulars realized (though never finalized) in ephemeral events (Roden, 3748).

A realist metaphysics recognizes that to control a system we also need some way of anticipating what it will do as a result of our attempts to modify it. But given the accounts … [Ellul, Heidegger, Verbeek, Ihde], it is likely that planetary technique is, as Ellul argues, a distinctive causal factor which ineluctably alters the technical fabric of our societies and lives without being controllable in its turn (Roden, 3767). This leads us to understand that even the vast data storage and knowledge-based algorithms of data mining, which could provide almost encyclopedic information about current “technical trends”, would not in themselves be sufficient to identify all future causes of technical change (Roden, 3773). There is also a porousness and fuzziness within this abstract technical space, and as SP has shown, technical change could engender posthuman life forms that are functionally autonomous and thus withdraw from any form of human control (Roden, 3779). Last but not least, any system built to track changes within the various systems would itself be part of those systems, so that any simulation of the patterns leading to a posthuman rupture would be “qualitatively different” from the one it was originally designed to simulate.

In summary, if our planetary system is a SATS (self-augmenting technical system), or an assemblage of such systems, Roden tells us there are grounds to affirm that it is uncontrollable, a decisive mediator of social actions and cultural values, but not a controlling influence (i.e., not a deterministic system of technique or control). (Roden, 3794):

On the foregoing hypothesis, the human population is now part of a complex technical system whose long-run qualitative development is out of the hands of the humans within it. This system is, of course, a significant part of W[ide]H[umans]. The fact that the global SATS is out of control doesn’t mean that it, or anything, is in control. There is no finality to the system at all because it is not the kind of thing that can have purposes. So the claim that we belong to a self-augmenting technical system (SATS) should not be confused with the normative technological determinism that we find in Heidegger and Ellul. There is nothing technology wants. (Roden, KL 3797-3802)

In tomorrow’s post we will come to a conclusion, discussing Roden’s “ethics of becoming posthuman”.

1. Kelly, Kevin (2010-10-14). What Technology Wants (Kindle Locations 3943-3944). Penguin Group US. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 3)

Continuing where I left off yesterday in my commentary on David Roden’s Posthuman Life: Philosophy at the Edge of the Human, we discover in Chapter Two a critique of Critical Posthumanism. Roden argues that critical posthumanism, like SP, understands that technological, political, social and other factors will evolve to the point that the posthuman becomes inevitable, but that critical posthumanists conflate transhumanism and SP, seeing both as outgrowths of the humanist tradition that tend toward either apocalypse or transcendence. Roden argues otherwise and provides critiques of four basic arguments: the anti-humanist argument, the technogenesis argument, the materiality argument, and the anti-essentialist argument. By doing this he hopes to bring into view the commitment of SP to a minimal, non-transcendental and nonanthropocentric humanism, and to help us put bones on its realist commitments (Roden, KL 829).1

Critical posthumanism argues that we are already posthuman, that it is our conceptions of human and posthuman that are changing, and that any futuristic scenario will be an extension of the human into its future components. SP argues, on the other hand, that the posthuman might be radically different from the human altogether, such that it would constitute a radical break with our conceptual notions. After a lengthy critique of critical posthumanism, tracing its lineage in the deconstructive techniques of Derrida and Hayles, he tells us that SP and critical posthumanism are in fact complementary, and that a “naturalistic position structurally similar to Derrida’s deconstructive account of subjectivity can be applied to transcendental constraints on posthuman weirdness” (Roden, KL 1037). The point being that a “naturalized deconstruction” of subjectivity widens the portals of posthuman possibility, whereas it complicates but does not repudiate human actuality (Roden, 1039). As he sums it up:

I conclude that the anti-humanist argument does not succeed in showing that humans lack the powers of rational agency required by ethical humanist doctrines such as cosmopolitanism. Rather, critical posthumanist accounts of subjectivity and embodiment imply a cyborg-humanism that attributes our cognitive and moral natures as much to our cultural environments (languages, technologies, social institutions) as to our biology. But cyborg humanism is compatible with the speculative posthumanist claim that our wide descendants might exhibit distinctively nonhuman moral powers. (Roden, 1045-1049)

When he adds that little leap to “nonhuman moral powers” it seems to beg the question. It seems to align with the transhumanist ideology, only fantasizing normativity for nonhumans rather than for enhanced humans. Why should these inhuman/nonhuman progeny of metal-fleshed cyborgs have any moral dimension whatsoever? Some argue that the moral dimension is tied to affective relations much more than cognitive ones, so what if these new nonhuman beings are emotionless? What if, like many sociopathic and psychopathic humans, they have no emotional or affective relations at all? What would this entail? Is this just a new metaphysical leap without foundation? Another placating gesture of Idealism, much like the Brandomian notions of ‘give and take’ normativity that Promethean philosophers such as Reza Negarestani have made recently (here, here, here):

Elaborating humanity according to the self-actualizing space of reasons establishes a discontinuity between man’s anticipation of himself (what he expects himself to become) and the image of man modified according to its functionally autonomous content. It is exactly this discontinuity that characterizes the view of human from the space of reasons as a general catastrophe set in motion by activating the content of humanity whose functional kernel is not just autonomous but also compulsive and transformative.
– Reza Negarestani, The Labor of the Inhuman, Parts One and Two

The above leads into the next argument: technogenesis. Hayles and Andy Clark argue that there has been a symbiotic relation between technology and humans from the beginning, and that so far there has been no divergence. SP will answer that this is not an argument: the fact that the game of self-augmentation is ancient does not imply that the rules cannot change (Roden, KL 1076). The technogenesis dismissal of SP invalidly infers that because technological changes have not monstered us into posthumans thus far, they will not do so in the future (Roden, KL 1087).

Hayles also mounts a materiality argument: SP and transhumanist agendas deny material embodiment in assuming that a natural system can be fully replicated by a computational system that emulates its functional architecture or simulates its dynamics. This argument, Roden tells us, actually works in favor of SP, not against it: it implies that weird morphologies can spawn weird mentalities. On the other hand, Hayles may be wrong about embodiment and substrate neutrality. Mental properties of things may, for all we know, depend on their computational properties because every other property depends on them as well. To conclude: the materiality argument suggests ways in which posthumans might be very inhuman. (Roden, 1102)

The last argument is the anti-essentialist one, directed against any attempt to locate a property of ‘humanness’ unique to humanity and not transferable to a nonhuman entity: the notion of an X factor that could never be uploaded or downloaded. SP will argue instead that we can be anti-essentialists (if we insist) while being realists for whom the world is profoundly differentiated in a way that owes nothing to the transcendental causality of abstract universals, subjectivity or language. But if anti-essentialism is consistent with the mind-independent reality of differences – including differences between forms of life – there is no reason to think that it is not compatible with the existence of a human–posthuman difference which subsists independently of our representations of them. (Roden, 1136)

Summing up, Roden tells us:

The anti-essentialist argument just considered presupposes a model of difference that is ill-adapted to the sciences that critical posthumanists cite in favour of their naturalized deconstruction of the human subject. The deconstruction of the humanist subject implied in the anti-humanist dismissal complicates rather than corrodes philosophical humanism – leaving open the possibility of a radical differentiation of the human and the posthuman. The technogenesis argument is just invalid. The materiality argument is based on metaphysical assumptions which, if true, would preclude only some scenarios for posthuman divergence while ramping up the weirdness factor for most others. (Roden, 1142-1147)

Most of this chapter has been a clearing of the ground for Roden, showing that many of the supposed arguments against SP rest on spurious and ill-reasoned confusion over just what we mean by posthumanism. Critical posthumanism in fact seems to conflate SP and transhumanist discourse into an erroneous amalgam of ill-defined concepts. The main drift of critical posthumanist deliberations tends toward the older forms of the questionable deconstructionist discourse of Derrida, which of late has come under attack from Speculative Realists among others.

In Chapter Three Roden takes up transhumanism, which seeks many of the things that SP does, but would align them to a human agenda that constrains and moralizes the codes of posthuman discourse toward human ends. Here he takes up threads from Kant, analytical philosophy, and contemporary thought and its critique. Instead of a blow-by-blow account I’ll briefly summarize the chapter. In the first two chapters he argued that the distinction between SP and transhumanism is that the former position allows that our “wide human descendants” could have minds very different from ours and thus be unamenable to broadly humanist values or politics (Roden, KL 1198), while in Chapter Three he asks whether there might be constraints on posthuman weirdness that would restrict any posthuman–human divergence of mind and value (Roden, 1201). After a detailed investigation into Kant and his progeny, Roden concludes that two of the successors to Kantian transcendental humanism – pragmatism and phenomenology – seem to provide rich and plausible theories of meaning, subjectivity and objectivity which place clear constraints on 1) agency and 2) the relationship – or rather correlation – between mind and world (Roden, 1711). As he tells us, these theories place severe anthropological bounds on posthuman weirdness, for whatever kinds of bodies or minds posthumans may have, they will have to be discursively situated agents practically engaged within a common life-world. In Chapter Four he will consider this “anthropologically bounded posthumanism” critically and argue for a genuinely posthumanist or post-anthropocentric unbinding of SP (Roden, 1713).

I’ll hold off on questions, but already I see his need to stay within notions of meaning, subjectivity and objectivity in the Western scientific tradition that seem ill-advised. I’ll wait to see what he means by unbinding SP from this “anthropologically bounded posthumanism”, and hopefully that will clarify and disperse the need for these older concepts that still seem tied to the theo-philosophical baggage of Western metaphysics.

1. Roden, David (2014). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.

Science Fiction, Technology, and Accelerationist Politics: Final Thoughts on Williams and Srnicek’s Manifesto

One of the guiding factors in my science fiction series (a quartet) is the collusion and convergence of current and future trends in NBIC (nanotech, biotech, information tech, and cognitive science) and ICT (information and communications technologies) and their personal, social, political, environmental, and moral impact over the next couple of centuries.

With notions of economic and environmental collapse central to this, I hope to cover the underlying tensions of global governance, technological risk, and the posthuman-transhuman singularity in its neoliberal, reactionary, and ultra-left varieties. With alternate forms of a philosophy of Accelerationism being promoted by both the Right and the Left, one wants to enact these differing tensions in an approach to the micro/macro-scaled transformations of society and environment across a future-history spectrum.

Science Fiction has always based itself on current trends and forecasting, providing both the hard science and the strangeness or wonder at its impact upon society and environment. The idea of giving shape to such a realm is daunting to say the least, but over the past few years I’ve been listening to our philosophers around the globe, as well as the scientists and engineers who enact the pragmatic materiality of such systems of thought through everyday practices. They all seem to agree that the utopian ideologies of the 20th Century are now defunct, passé and of little use in ongoing scenarios that incorporate such technological and economic impacts on the physical well-being and health of our global civilization and the other creatures we share its resources with. Ours is a time of both accelerating change and a moment when the future of life on this planet is being decided. Over the next hundred years or even less we have some hard choices to make in our ethical initiatives, which seem almost archaic compared to the accelerating pace of technological innovation.

In the Third World we see the manipulation and oppression of billions of humans by war, famine, genocide, economic and social oppression, religious intolerance and bigotry, racial and gender inequalities, etc. The global elite and their minion governments are doing little to obviate such things and seem instead bent on supporting national agendas that will only worsen the effects of such dire issues. Our intellectuals seem bankrupt, unable to spur the needed actions to curtail such problems. In a short-lived series of Spring revolutions and Occupy movements we’ve seen late capitalism not only survive the shocks of economic disaster but also co-opt the many initiatives of the left at their own game.

Why? Why has the left withdrawn into an academic cocoon of meetings and globe-trotting speeches that only the highbrow of academia are interested in? We seem to have no center, no rallying point around which to gather even the semblance of a message. Each faction seems to have broken off like a fractured schizophrenic nomad spouting the messages of its specific needs: colonialism, gender and racial equality, economic anarchist or communist agendas, green speak, etc. The list could go on. The point being there seems to be no umbrella banner under which all these various agendas could be brought together. Part of it is the aversion to monocultural systems with grand narratives that we’ve been taught over the past postmodern era to shy away from. The notion that one size fits all just doesn’t work anymore, yet a thousand petals storming heaven won’t work either.

What to do? Rereading Alex Williams and Nick Srnicek’s #Accelerate: Manifesto for an Accelerationist Politics, they tell us that “today’s politics is beset by an inability to generate the new ideas and modes of organization necessary to transform our societies to confront and resolve the coming annihilations” (3). The enemy for them is the neoliberal project that encompasses our globe, whether in the West (EU and Americas) or the East (Russia, China, and other nations). They realize that the housing collapse in 2007 was a mere blip in the neoliberal eye, and that it has slowly recovered and hardened its agenda: to privatize the planet and, through global governance and legal pressures, to slowly denationalize and enforce incursions against the remaining social democratic institutions and services (4).

Against the neoliberal world order Williams and Srnicek tell us that the left situated within its Kitsch Marxism is a lost world of possibilities, bankrupt and hollow, and that the only way forward is “the recovery of lost possible futures, and indeed the recovery of the future as such” (5-6). The notion of the “future” as a concept has a unique heritage in the cycle of 20th Century thought, from the Italian and Russian Futurists on through the many Utopian visions turned hellish of the different enactments of communisms, democratic socialisms, and the darker worlds of Fascism, etc. – a global history well documented in Susan Buck-Morss’s Dreamworld and Catastrophe. After the failure of May 1968 and the political struggles of that era a malaise overcame many on the left, and as Bifo Berardi in After the Future would affirm, communist politics fell into lethargy with the fall of the Berlin Wall and the rise of the new China. As he states it, in our age communisms will emerge from an exodus, both voluntary and compulsory, from a stagnating and increasingly predatory state-capital nexus. This exodus is both social, in the development of an alternative infrastructure, and personal, in the withdrawal from the hyper-stimulation of the semiotic economy. Bifo abandons hope in collective contestation at the level of the political. It’s this fatalism, this miserabilism of no futures, no possibilities, no hope that aligns such a communism with what Williams and Srnicek among others see as retrograde, feeding into the neoliberal agenda.

Instead Williams and Srnicek look at current capitalism, at the neoliberal project as it situates its global agenda in the face of no opposition – or, at least, minimal opposition. What they see is an economics of acceleration: capitalism demands economic growth, and its ideological self-presentation is one of liberating the forces of creative destruction, setting free ever-accelerating technological and social innovations (02). With the rise of these new global economies we see an increase in the need for workers across the board. One of the largest underworld trading systems is human trafficking, supplying these new initiatives with both physical and sex labor – undocumented workers who act as human slaves to the new marginal initiatives in building the smart cities of the future, etc. (see Gridlock: Labor, Migration, and Human Trafficking in Dubai by Pardis Mahdavi; Disposable People: New Slavery in the Global Economy and Ending Slavery: How We Free Today’s Slaves by Kevin Bales; the list could go on). The same goes for the global traffic in drugs, money laundering, financial austerity and intervention, etc. (i.e., Policing the Globe: Criminalization and Crime Control in International Relations by Peter Andreas; Banished: The New Social Control in Urban America by Katherine Beckett; A Game As Old As Empire: The Secret World of Economic Hit Men and the Web of Global Corruption by Steven Hiatt; Policing Dissent: Social Control and the Anti-Globalization Movement by Luis Alberto Fernandez, etc.)

Williams and Srnicek diagnose two forms of accelerationism: 1) the neoliberal form exemplified by Nick Land (Fanged Noumena, The Thirst for Annihilation, etc.), in which the neoliberal or late capitalist system rushes forward blindly in a unidirectional movement of transhumanist or posthuman bricolage, constructing itself from the fragments of former civilizations until at some point it reaches a techonomic singularity, sloughing off its human benefactors and creating the AI and machinic civilizations of the future; and 2) the left version of accelerationism, which offers an open-ended navigational process of discovery “within a universal space of possibility” (02). This last notion of a “space of possibility” is a riff on the Sellarsian-Brandomian model of a normative “space of reasons”, in which a collective consensus of experts works out through practices of “give and take” a carefully planned and coordinated effort, which Williams and Srnicek will later term The Plan (cartographic mappings) and The Network (an infosphere of global action encompassing both virtual and actual environments).

They see a conflict between speed and acceleration at the heart of these disparate visions: the neoliberal one (speed; or, the confusion of speed with acceleration) and the communist left one (accelerationist). The neoliberal version, constrained by the tactics and strategies of speed, forces progress into an economic framework of “surplus value, a reserve army of labour, and free-floating capital” in which economic growth and social innovation become “encrusted with kitsch remainders from our communal past” (02:3). Instead of an expansion in cognitive labour and its self-fulfilling innovations, they see neoliberalism shutting down human cognitive labour through automation and the machinic implementation of smart or intelligent systems that will eventually replace humans as the knowledge makers of tomorrow (02:4).

They also look to Marx himself and note that it was he, as well as Land, who realized that capitalism should not be destroyed but rather that its “gains were not to be reversed, but accelerated beyond the constraints of the capitalist value form” (02:5). They even note that Lenin himself understood that large-scale capitalist efforts, constrained only by the latest sciences, could offer the socialist regimes an economic future (02:6). As Williams and Srnicek see it, the left must embrace technological and social accelerationism if it is to have any future at all (02:7).

In their critique of the Left they see two forces at work: 1) a folk politics of localism, direct action, and relentless horizontalism; and 2) an accelerationist left “at ease with a modernity of abstraction, complexity, globality, and technology” (03:1). The former seems content with a no-future politics of withdrawal and exit, of creating non-capitalist zones that would exist outside capitalist relations altogether. The accelerationist alternative seeks to preserve the gains of late capitalism without its dire consequences of oppression and exploitation, transforming its goals toward non-oppressive and non-exploitative egalitarian purposes.

In section (03:2) they wonder at the failure of capitalist theory against its pragmatic outcome in the very notion of the reduction of labour hours. Instead of a reduction, as predicted by Keynes and other labour theorists, what has transpired is the collapse of the divide between the private and public realms of work and play, in which the worker has been incorporated into a 24/7 economy of pure work-at-play or play-at-work based on ludicrous incentives and lucrative strategies of desire. Instead of human freedom and potential, capitalism has squandered its perennial dreams of space flight and technological innovation on a consumerist nightmare of repetitive gadgetry that must be replaced the moment it is used (03:4). They tell us that accelerationists do not wish for a return to the Fordist era of the factory – that is behind us – and even the post-Fordist era of consumer iterations in a void is in decline: the worlds of colonialism, empire, and a third-world periphery in nationalist terms are coming to an end. The days of race, sex and subjugation are coming to an end too (03:4).

Instead of crushing neoliberalism they tell us we should overtake it, repurpose it toward common ends, allowing for a movement toward a post-capitalist future beyond neoliberal traditions and values (03:5). They admit that technology itself remains entrapped and enslaved by neoliberal agendas, and that even the accelerationists have little foresight as to the potentials that an unexploitative technological imperative might bring to the table (03:6). Against techno-utopians who see technology as autonomous from the socius, a kind of ultimate salvation system in its own right, the accelerationists believe that technology should be subordinated to social needs rather than granted superior rights and privileges. In this sense they would constrain technology to human needs and social practices – a return to aspects of the Enlightenment project, or a new humanism, rather than some techno-extropian vision beyond human needs and purposes (03:7).

To do this, they tell us, some form of planning will need to take place, a way of mapping this accelerationist future: “we must develop both a cognitive map of the existing system and a speculative image of the future economic system” (03:8). For this we need the existing toolsets that have made neoliberalism so successful: the very ICT (information and communications technologies) developed over the past half-century – social-network analysis, agent-based modeling, big data analytics, non-equilibrium economic models, etc. All these will be needed by the left’s intellectual base, or cognitariat, in developing a way forward (03:9). There will also be a need for a new culture of innovation, creativity, and experimentation that allows for failure and practice on all fronts, an open-ended trial-and-error model that takes into account the mistakes of the past and revises its methodologies and practices on the fly (03:10).

For all of this to happen the left will need to provide a hegemonic platform of informational (virtual/immaterial) and material (actual/substantive) infrastructural technologies and realistic social practices and institutions (03:11). Without such infrastructure, the material and immaterial platforms of production, finance, logistics, and consumption will remain in capitalist rather than post-capitalist modes, less effective and stymied by capitalist social relations rather than guided by collective goals and aspirations. To accomplish such a task is to leave behind the needless quarrels and ineffective direct-action appeals of the political left’s past; instead we need new modes of action: politics must be “treated as a set of dynamic systems, riven with conflict, adaptations and counter-adaptations, and strategic arms races” (03:12). Instead of any one strategy or tactic we must confront the events we meet on their own terms, with an arsenal of strategies and tactics, modeled trajectories and smart systems at our beck and call that can open up and allow us to act in the moment, in real time, with the best available data and cartographic strategies. Instead of centralized bureaucracies we will have decentralized systems of command and control based on immediate situational analysis and synthesis, using advanced analytic and synthetic algorithms superior to any slow institutional push-and-pull leverage. This will be a community of trust, a socius of individuals working in concert, cooperating through modes of being no longer tied to the senseless hierarchies of command and control that were never effective to begin with. We must study these past systemic failures and incorporate the lessons into our innovative algorithmic programs of emerging intelligence systems: revisable, updatable, changing systems of multiplicity and openness.

In section 03:13 I simply disagree with Williams and Srnicek, who tell us that the ‘radical Left’ is simply wrong in its fetishisation of openness, horizontality, and inclusion. Instead they want to incorporate older forms of “secrecy, verticality, and exclusion” as having a place in effective political action. But for whom? For which players? This need for secrecy sounds like a return to some notion of hierarchical command, of leaders and followers, rather than comrades all working toward egalitarian ends. Verticality: as hierarchy, top-down structures of command? Exclusion: of whom? And who would be the excluders, the judges of this exclusion? Maybe in the transition process I could see this, while the neoliberal order is still the enemy we must overcome: but after? Do they presume that in the final post-capitalist order we will still need such notions?

In 03:14 they tell us that democracy must be defined by its “collective self-mastery”. Why must this be the delimiting inscription? Why not “collective self-emancipation” rather than some organizational notion of mastery, which seems a reversion to an older slave/master conceptuality? They describe it as essential to the Enlightenment project of ruling ourselves. But the notion that we need masters to rule us is a false notion of sovereign power that needs to be overcome rather than embraced. As they tell it, we need to “posit a collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality, to avoid becoming the slaves of either a tyrannical totalitarian centralism or a capricious emergent order beyond our control” (03:14). Instead of institutions of authority and control, would we not be better served by a balance of equal powers? I am always leery of autonomous forms of power and of verticality or top-down governance and justice, which throughout history have worked blindly and usually through the failures of the humans behind the thrones of such institutions. Such institutions are prone to oligarchic influx and influence, which would leave the multitude at the mercy of barbarous mishandling and injustice in the name of authority and justice. Instead of institutions of power and justice we need a new ethical society of the good life: of partnership and a sense of egalitarian values and cooperation among equals that does not allow authoritarian institutions to develop at all.

In section 03:15 I agree that we need an “ecology of organizations, a pluralism of forces, resonating and feeding back on their competitive strengths”. Yes, I want to say. But if they affirm as much, then why the need for such top-down authoritarian power and justice to keep tyranny at bay, or even to disallow total anarchy? As they affirm, sectarianism and centralization are both death-bringers to the left, so we need instead to build more egalitarian structures that would disallow any such drift toward fracture or tyranny. Part of doing this, they affirm, is to bring the global media as close as possible back to open popular control, allowing each player to develop his or her potential. Obviously there will always be a need to protect the weaker members of society from exploitation by individuals or groups that might arise to exploit the open-ended systems. But I do not see the need for NSA-style surveillance as part of that; rather, an ethic of solidarity that polices itself through cooperation and mutual self-help mechanisms rather than through some authoritarian State of Police Justice.

Section 03:18 is more about the struggle to attain a post-capitalist hegemony, the notion of creating new categories for the solidarity of a global labor force that seems ill-defined at the moment. Yes, we will need better ways of connecting to each other across the globe, ways of producing a proletarian subjectivation. But it need not be based on identitarian politics. It needs to be revised toward newer notions of subjectivation rather than falling back into older forms of identity. I think this is at the heart of Badiou, Žižek, Johnston and many other speculative thinkers. They say: yes, yes, all this is true, but what we really need is a new “technosocial platform” and an infrastructure of institutions within which all of this can be formalized, providing an ideological, social, and economic footing (03:19).

None of this will be possible without one ingredient: capital, money, funding (03:20). Without the nexus of “governments, institutions, think tanks, unions, or individual benefactors” the whole left accelerationist movement will go the way of the dinosaurs: extinct.

Lastly, they tell us we must take up the coinage of “mastery” again and realize that for the Left mastery is not tinged by the overreach of the false Enlightenment of fascism, but is instead to be enacted in a new guise, as a new form of action: “improvisatory and capable of executing a design through a practice which works with the contingencies it discovers only in the course of its acting, in a politics of geosocial artistry and cunning rationality. A form of abductive experimentation that seeks the best means to act in a complex world.” (03:21)

In some ways this is an enactment of the original intent of all those poets, artists, and thinkers of the modernist initiatives in Europe and Russia that were cut off so quickly by WWI and death. The notions of contingency and jazz, improvisation and revisionary blends of processual synthetic systems that forecast the moments ahead: rather than probabilistic or stochastic algorithms, they choose contingent systems that analyze future trends rather than historical datamixes. We need to move out from under the Probabilistic Universe and into the Multiverse of plural contingencies, where almost anything happens and can happen. A back-to-the-future constructivist practice of shaping, out of the contingent forces of chaos, the complex relations of a real future worth having.

Ultimately they tell us we have a choice: fall back into primitivism and chaos, worlds closed into barbarous warfare, hate, and death; or move forward into our long-awaited and dreamed-for post-capitalist future of space-faring, transhumanist or posthumanist transformations, where the future “must be cracked open once again, unfastening our horizons towards the universal possibilities of the Outside” (03: 23-24).

The more I think upon their vision and the other essays I’ve worked with concerning this strange brave world ahead of us, the more I’m convinced they’re on to something positive. I do have my issues with aspects of the conceptual framework of institutions based on self-mastery and authoritarianism. Yet if what they mean by self-mastery, as shown above, is the ongoing process of self-revision and self-reflection in a heuristic ontography, mapping our geoartistic pulsations by way of a meta-ethics and meta-philosophy that is provisional and self-revisable, updated by a post-intentional scientific methodology based on the latest sciences, then yes, I too can affirm that we need to open our vision to the greater universe beyond our closed-off global trajectories. We live on a planet of finite resources that we are depleting day by day; we will need off-planet resources and strategies of survival for our species in the long term, which any viable ongoing civilization will require.

Like any manifesto it is one part bravado and two parts hope, with the squaring of the circle in one part realist terms of actual social practice. Much thought went into it, but now comes the time of enacting it, of making the words become works that act. Without action we are left in the void of inaction and self-defeat, and our enemies – the neoliberals – will have the last laugh at our expense. This we can ill afford.

Accelerationism: Ray Brassier as Promethean Philosopher

“Autonomy means that we make the worlds that we are grow.”

     – Tiqqun, The Cybernetic Hypothesis

“If contingency is to be thought absolutely, it must be thought independently of the map of possibilities.”

    – Elie Ayache, The Medium of Contingency


Our notions of voluntarism arise out of the nominalist traditions of late medieval theology, of such thinkers as John Duns Scotus (c. 1265-1308) and William of Ockham (c. 1288-1349), who inaugurated the modern secular separation of nature from the supernatural and the concomitant divorce of philosophy, physics, and ethics from theology, later reinforced by influential early modern figures such as Francisco Suarez (1548-1616).1

As Pope Benedict XVI would remark “Duns Scotus developed a point to which modernity is very sensitive. It is the topic of liberty and its relation with the will and with the intellect. Our author stresses liberty as a fundamental quality of the will, initiating an approach of a voluntaristic tendency, which developed in contrast with the so-called Augustinian and Thomistic intellectualism. For St. Thomas Aquinas, who follows St. Augustine, liberty cannot be considered an innate quality of the will, but the fruit of the collaboration of the will and of the intellect.”

William of Ockham would affirm the supremacy of the divine will over the divine intellect, and in doing so would encounter a problem: if universals are real (i.e., if natures and essences exist in things, as Aquinas, following Aristotle, said they did), then voluntarism cannot be true. Ockham’s solution was unique: he simply denied the reality of universals. Ockham adopts a conceptualist position: while the universal (or concept) exists in the mind beholding a certain particular, it does not exist in the particular itself. Because there are no universals or common natures, there can only be a collection of unrelated individuals (hence, arguably, the rise of modern individualism). With universals removed from the picture, God is free to will as he chooses.

Nominalism and voluntarism became bedfellows from that time forward. Yet they would not always remain so… therein lies the tale! With universals removed, humans, too, are free to do and make as they see fit, for only what we make can we understand. And in our age we are learning to re-engineer ourselves beyond the confines of those old theological norms that once constrained us to a false equilibrium, and are thereby free to experiment in new modes of being and rationality. Beyond that balance lies the contingent realm of creation rather than possibility, and only the new Promethean dares to enter that medium of exchange.

A Modern Prometheanism: Ray Brassier and the Critics

“Voluntarism denotes those philosophers who generally agree, not only in their revolt against excessive intellectualism, but also in their tendency to conceive the ultimate nature of reality as some form of will, hence to lay stress on activity as the main feature of experience, and to base their philosophy on the psychological fact of the immediate consciousness of volitional activity.”

      – Susan Stebbing, Pragmatism and French Voluntarism

Ray Brassier, in contradistinction to the above, tells us that a modern Prometheanism “requires the reassertion of subjectivism, but a subjectivism without selfhood, which articulates an autonomy without voluntarism” (471).2 He will take as his starting point a twentieth-century critique of metaphysical voluntarism found in Martin Heidegger, approached by way of an essay by Jean-Pierre Dupuy, ‘Some Pitfalls in the Philosophical Foundations of Nanoethics’ (download: pdf).3 In Dupuy’s essay the link between technological Prometheanism and Heidegger’s critique of subjectivism comes by way of Hannah Arendt (471). Brassier will set this religious critique of Prometheanism against the backdrop of both the neoliberal Prometheans found in transhumanist discourse and speculation, and his own account within the Marxist tradition that has been neglected by what Williams and Srnicek in their Accelerationist Manifesto derisively term the Kitsch Marxism of our day.

Brassier will ask: Why Prometheanism? Isn’t this a reversion to myth, to pre-Enlightenment modes of thought and behavior? Yes and no. The central key for Brassier is not so much what the Left makes of such notions as that the neoliberal Right is banking on them. In fact, in Dupuy’s essay we discover, as Brassier will testify, that the U.S. government as well as so-called transhumanist operatives in the private sector are forging alliances in politics and biomedicine around a human enhancement ideology centered on the converging NBIC technologies (nanotechnology, biotechnology, information technology, and cognitive science). As he states it, the political Right advocates such a technological Prometheanism because “it renders possible the technological re-engineering of human nature” (472). One can see in this an almost lateralization or flattening of the immortality complex at the heart of Christianity, or of its secularized religion in a neoliberal mode. Ever since Alan Harrington published his The Immortalist (1977), with its vaunting cry that “Death is an imposition on the human race, and no longer acceptable”, the rise of a transhumanist vision became the order of the day for certain neoliberal mindsets.

Dupuy’s religious critique of this illusionary science of transhumanism, as the systematic conflation of ontological indetermination with epistemic uncertainty, tells us: “The [advocates of transhumanism] convert what is in fact an ontological problem about the structure of reality into an epistemic problem about the limits of knowledge” (472). What the transhumanists have done, Brassier argues (using Heidegger’s metaphysical assumptions), is to collapse and flatten humanity as existence and humanity as essence, conflating the two by encouraging us to think we can modify the properties of human nature-existence using the same technics that have proved so successful with other natural entities (473). Dupuy will rely on this gnostic self or essence (Heidegger’s Dasein) as the central dictum by which he hopes to salvage the human equilibrium. His critique will take in everything from fables of Golems (think Frankenstein or Meyrink’s Golem, etc.) to other failures along the path of transhumanist mythology (i.e., the hermetic traditions of the test-tube homunculus, etc.).

Dupuy, after Arendt, posits a ‘fragile equilibrium’ between what is made and what is given in human nature and its conditionings; it is this equilibrium “between human shaping, and that which shapes the shaping – whether given by God or Nature – that Prometheanism threatens” (474), and into which the transhumanist agenda intervenes. Brassier will remind us that Heidegger radicalized Kant’s notion of the finitude of cognition. Kant held that God, being infinite, could know things as they are (i.e., things-in-themselves), but that humans, being finite, could only know things “partially and incompletely” (476).

Brassier will go to the core of the conflict that Dupuy and Arendt see in such transhumanist discourses of human enhancement: a breaking of the pact between the given and the made, the fragile equilibrium between human finitude as an ontological fact and its transcendence as Dasein. He will put it pointedly: “Prometheanism denies the ontologisation of finitude” (478). He follows Dupuy’s reasoning through his many works on early cybernetic theory and on through his religious works late in life, understanding that from Dupuy’s view it was the whole philosophical heritage of mechanistic philosophy, culminating in cybernetic theory, that would produce the notion that the more we understand ourselves as nothing more than contingently generated natural phenomena, the less able we are to define what we should be (483). Because of this, Brassier remarks, our “self-objectification deprives us of the normative resources we need to be able to say that we ought to be this way rather than that” (483).

Brassier then buys into the Viconian notion that humans can truly understand only what they have made: “Only what is humanly made is humanly knowable” (494). Giambattista Vico (1668-1744) offered this as an old theme:

Verum esse ipsum factum: the true is the made. Yet Vico would twist this in his New Science by saying that “as rational metaphysics teaches that man becomes all things by understanding them, this imaginative metaphysics shows that man becomes all things by not understanding them” (NS, 405). The verum-factum principle holds that one can know the truth in what one makes. Vico writes, “For the Latins, verum (the true) and factum (what is made) are interchangeable, or to use the customary language of the Schools, they are convertible” (Ancient Wisdom, 45). This is the idea that the true (verum) and the made (factum) are convertible: verification is fabrication, fact is fabrication; homo faber, man the forger, at his forge, forging, as Joyce would say, “the uncreated conscience of his race” (A Portrait of the Artist as a Young Man). Or in the parlance of our current breed of speculative philosophy: re-ontologizing the uncreated system that is the inhuman core of the human. Luciano Floridi will tell us that what is happening in this process is the blurring of the distinction between reality and virtuality; the blurring of the distinction between human, machine, and nature; the reversal from information scarcity to information abundance; and the shift from the primacy of stand-alone things, properties, and binary relations to the primacy of interactions, processes, and networks. (see my The Onlife Initiative: Luciano Floridi and ICT Philosophy)

Floridi sums up his own stance, saying that “as far as we can tell, the ultimate nature of reality is informational, that is, it makes sense to adopt Levels of Abstraction that commit our theories to a view of reality as mind-independent and constituted by structural objects that are neither substantial nor material (they might well be but we have no need to suppose them to be so) but cohering clusters of data, not in the alphanumeric sense of the word, but in an equally common sense of differences de re, i.e., mind-independent, concrete, relational points of lack of uniformity, what have been defined … as dedomena.”

Dedomena (‘data’ in Greek) are not to be confused with environmental data. They are pure data or proto-epistemic data, that is, data before they are epistemically interpreted. As ‘fractures in the fabric of Being’, they can only be posited as an external anchor of our information, for dedomena are never accessed or elaborated independently of a level of abstraction. They can be reconstructed as ontological requirements, like Kant’s noumena or Locke’s substance: they are not epistemically experienced, but their presence is empirically inferred from, and required by, experience. (The Ethics of Information, pp. 85-86)

Yet, as Brassier relates it, Dupuy falls into the old trap of essentialism in his religious diagnosis of Prometheanism, attributing to the human an essence that can only be construed as divine (484): an almost Platonic or Gnostic reversion to a substantial, formalist self, the abiding presence of the ghost-in-the-machine ideology that haunts secularist thought and science to this day (i.e., a philosopho-theological throwback to pre-critical thought). This leads Dupuy to the idea that even if we could create life we should not do it, that it would upset the fragile balance between the human divine essence and the natural order, etc. (i.e., a reversion to the notion of hubris, an overreaching of the limits of the human that can only bring retribution from the gods or God). As Brassier points out, Dupuy in his religious diagnosis does not tell us why the upsetting of this balance would be destructive (495).

At this point in the essay Brassier turns the tables on Dupuy and discovers in this very notion of equilibrium a hidden element that he finds objectionably theological (495). The point being that for Dupuy the world was designed, made (i.e., a creationist argument); whereas the truth of things, as Brassier will suggest, is that the world was not made: “it is simply there, uncreated, without reason or purpose” (495), which strikes at the heart of modern nihilism (see Ray Brassier, Nihil Unbound). Because of this, Brassier will see a new freedom, a release from the false equilibrium, and a way forward: a speculative reasoning for why we as humans should not fear participating in this uncreated world as creators ourselves. “Prometheanism is the attempt to participate in the creation of the world without having to defer to a divine blueprint” (495). This leads to a further conclusion: if the world is without reason and purpose, then whatever disequilibrium we might introduce is no more harmful than the disequilibrium that already exists in the universe (495).

Since the whole edifice of the metaphysics of equilibrium can no longer be justified, the separation between the made and the given no longer harbors any dire hold over us (i.e., no big bad Other, gods or God, to bring retribution down on our heads for hubris). Yet what does remain is the need for certain rules-based systems, created for and by humans themselves, to constrain the paths taken in this brave new world (i.e., certain normative navigational devices to map our way forward; see Negarestani in the accelerationist reader). For Brassier, the ways we understand the world through our interactions and productive operations on it are part of a continuous cycle of redeterminations that is interminable (i.e., no final resting place to set our truths, nothing but process till either we or the universe ends), each phase superseding the oppositions between order and disorder, recognizing in the “catastrophic overturning of intention, and the often disturbing consequences of our technological ingenuity” (486) the truth of our own future humanity.

Brassier will discover in the fiction of J.G. Ballard the truth that “all progress is savage and violent”. He sees no objection in this truth: “the fact that progress is savage and violent does not necessarily disqualify it as progress” (486). In fact he will insist that there “is indeed a savagery recapitulated in rationality” (486). We can wallow in our moral outrage and sentimental justifications for accepting the existing state of things; or else we can follow Marx’s own Promethean project and enter into its core notions fully aware that it entails nothing less than the re-engineering of humanity and the re-ontologizing of our world on a more rational basis (487).

Brassier will bring everything round to the notion of subjectivation from which he started: that a modern Prometheanism “requires the reassertion of subjectivism, but a subjectivism without selfhood, which articulates an autonomy without voluntarism” (471). He will turn to Alain Badiou’s account of the relation between event and subjectivation and find it objectionable, yet will also discover the need to reconnect his own account of subjectivation to an analysis of the biological, economic, and historical processes that condition rational subjectivation (487). Such is the great task before us, Brassier remarks, a new Prometheanism that “promises an overcoming of the opposition between reason and imagination: reason is fuelled by imagination, but it can also remake the limits of imagination” (487).

Sounds like Brassier has his life’s work set before him. I’m glad to see him rehabilitating the concept of imagination constrained by speculative reason, and realizing that the artistic impulse might be the spark that lights the minds of a generation. Unless we can reach a wider audience through more pragmatic forms such as science fiction, novels, poetry, painting, etc., it will be difficult. Philosophy is for the few who trouble to dive into the cultural abyss. Very few of the average cognitariat, much less average readers, ever get past the basic notions of philosophy. This is not some elitist crap, just part of the culture we live in at the moment. So we will need some better vehicle of transmission if we are to capture this new generation: film, art, novels, poetry, etc. will all be needed.

One of the difficulties with all of this is that the Left is behind the eight ball, so to speak: the finances needed to support such great projects are massive, and while we may have the ideas, our funds are few, so the greatest task will be how best to bring such a vision to fruition without becoming capitalists ourselves. Or maybe this is the point: that we will have to radicalize capitalism, join it and work from within its husk to create and forge this new world by sloughing off neoliberalism from within, just as it sloughed off the Left during the era from the Great Depression till now. Is this, after all, the true mission of the manifesto in its base scenario? Will we need to educate and train a new generation of capitalists to think like us? A radicalization of both capitalism and democracy will entail such a gambit. Is this not what we are seeing in the transformative aspects of China?



1. Adrian Pabst, Metaphysics: The Creation of Hierarchy (Kindle Locations 946-948). Kindle Edition.
2. #Accelerate#: The Accelerationist Reader, ed. Robin Mackay & Armen Avanessian (Urbanomic, 2014).
3. Teachers College, Columbia University, Aesthetics of Technology (aestech wiki, 2013).