Cioran quotes Lao Tsu’s maxim ‘the intense life is contrary to the Tao’, and compares the tranquility of the modest life with the thirst for annihilating ecstasy that has possessed the Western world. However, acknowledging the compulsion of his Occidental heritage, he remarks ‘I can pay homage to Lao Tsu a thousand times, but I am more likely to identify with an assassin’. Our culture, he argues, is essentially fanatical.
Strip the world of its illusions and delusions and you’ll only hasten the suicidal tendencies we as a species have already acquired. Predatory though we are, we are more prone than not to annihilating ourselves in a bout of self-mutilating hatred and pure religious fervor. Religious dogmatism – and I count the Secular Church of Atheism in this – is the cornerstone of an anthropathological condition that breeds purity as the obliteration of all enemies. Only if we could inhabit the enemy’s perspective would we realize that the mirror of our hatred is itself impure.
We have yet to escape our Puritan heritage. Capitalism itself is this beast of purity spread across the face of the earth like an amoeba, gobbling everything in its path, immolating the commodities and resources of the planet to the futurial disciplines of technics that have yet to find their slime festival’s embarkation. Like fetid worms we are habitués of intricate foreplay, our sexual ecstasies bounded only by our murderous crash sequences with technology. Formulating and garnering an ultimate plan for inhuman takeover, we bid the human species a grand bon voyage; stripping ourselves of the last veneer of humanistic entrapments, we devote ourselves to the extreme experimental psychopathologies which will produce a final solution. Our closure of nature in this age and the irruption of the artificial as lifestyle have led us into that end game in which nothing natural will remain on earth.
No need to do a critique of metaphysics (or of political economy, which is the same thing), since critique presupposes and ceaselessly creates this very theatricality; rather be inside and forget it, that’s the position of the death drive, describe these foldings and gluings, these energetic vections that establish the theatrical cube with its six homogenous faces on the unique and heterogeneous surface.
—Lyotard, Libidinal Economy
Once again the most unnatural creature on the planet triumphs, but in an unexpected way: it will stand atop the ruinous folds of a billion skulls screeching in the technomic voices of those who have become the thing they most dreaded: machinic gods of the metalloid Void. Brokered in a hell of abstract horror, these inheritors of the primal scream will walk the dead earth in what remains of the dustbowl windlands and scorched cities, along the black sands of depleted oceans and lakes where hybrid creatures scuttle in the shadows of temporal wars, and in deforested wastelands of spiked acropolises where necromantic anti-life scurries amid the crumbling decay of human civilization – like the visitors of an alien enlightenment, each singing in an oracular voice with the angelic pitch and plum disharmonics of solar sirens beckoning us toward the far shores of an anterior futurity.
So one will have to resist in the little ways, the day to day struggles of unplugging from the world grid, seeking non-electronic refuges, places of silence and meditation, ways of teaching one’s children to remember humanity; to remember the stories told by our ancestors, to create and invent new stories without machinic gods, and governments that work for the people rather than against them.
—The Book of Remembrance
Your data is more important than your body in the techno-commercial sector. Simulated avatars will activate your digital signature globally. What is left of substance is erased; only the data is of import. The world of the artificial has begun… you’re no longer a victim, but a commodity in a false infinity.
The ten thousand flowered servers in Utah, south of Salt Lake, are even now grinding simulations on your data archive, recreating your life as an avatar, seeking answers to impossible questions – ready to release data points to the highest bidder, whether military or corporate. Your virtual self – or shadow – is more important to the electronic gods of the new machinic society, who will archive you for their twisted games of commerce till electricity is no more. Once your digital self is archived there is no turning back: they have locked you into an image that, no matter how false, becomes real for their modeling purposes, so that the fleshly being out of which it was born is lost among the data traces like ashes on a battlefield.
At a certain point, your Shadow Self will contain so much data that this unique packet of information will be placed in a cyber version of a particular environment or system so that the computer can run simulations that will predict your behavior. This “modeling” will be used to answer questions. Will you buy this product? Will you vote for our candidate? What is the lowest possible salary you will accept for a particular job? Each time you do something, information from your actions will flow into the database, and the Shadow Self will become more detailed and particular— more useful for predicting your future actions than the fleshly counterpart.
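The “modeling” described above can be sketched in miniature. Everything in this sketch is a hypothetical illustration – the class name, the features, and the weights are invented for exposition, not drawn from any actual system:

```python
import math
from collections import Counter

class ShadowSelf:
    """Toy 'shadow profile': accumulates observed actions and scores how
    likely a future action is. All feature names and weights below are
    hypothetical illustrations, not any real system's."""

    def __init__(self, weights):
        self.weights = weights    # feature -> weight (invented for the sketch)
        self.history = Counter()  # accumulated counts of observed actions

    def observe(self, action):
        # each action flows into the profile, making it more detailed
        self.history[action] += 1

    def predict(self, target_action):
        # logistic score: the more correlated history, the higher the score
        z = sum(self.weights.get(a, 0.0) * n for a, n in self.history.items())
        z += self.weights.get(target_action, 0.0)
        return 1.0 / (1.0 + math.exp(-z))

shadow = ShadowSelf({"searched_flu_remedies": 0.8, "bought_vitamins": 0.4})
shadow.observe("searched_flu_remedies")
shadow.observe("bought_vitamins")
p = shadow.predict("buy_cold_medicine")  # probability-like score in (0, 1)
```

Each new observation sharpens the score, which is exactly the point of the passage: the profile grows more predictive with every action its fleshly counterpart takes.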
If our brains are predictive, as Andy Clark (Surfing Uncertainty) and Jakob Hohwy (The Predictive Mind) show in their works, then the future of Artificial General Intelligence (AGI) will be even more so. The AGIs will predict and constrain all human behavior, simulate every move and counter-move, strategize and intercept your needs, wishes, and gullibility at every step; drive you in directions you did not foresee; control your habits ubiquitously. You will be controlled without even knowing that it is happening, till you wake up and realize it is too late: you have become so willingly enmeshed in the web of such worlds that you would die trying to free yourself. This is not only P.K. Dick, William Burroughs, and Thomas Pynchon writ large, this is our world in the making…
As the novelist John Twelve-Hawks, a figure of anonymity, tells us:
NSA’s Utah data center is nestled in the low hills south of the Great Salt Lake. It’s a cluster of large, windowless buildings attached to power lines on pylons and surrounded by a barbed-wire fence and security lights. Blueprints of the site reveal that an administrative center, a dog kennel, and a building with emergency batteries and backup generators are clustered around four identical “data halls” where the information is actually stored.
The data halls are huge, box-shaped rooms with exposed ceilings. When people aren’t in the halls, the only illumination comes from the blue and yellow LEDs set in the tall racks of servers and central processing units. Brewster Kahle, one of the engineers who designed the Internet Archive for the public web, estimates that the four halls hold approximately ten thousand racks of servers. There have been various estimates that the center can store five zettabytes of information or even a yottabyte (the equivalent of five hundred quintillion pages of text). This ocean of information has to be kept cooled and connected, so the halls are chilled by a refrigerator plant that uses a water-storage tank and a pumping station.1
Even now the Global Elite and the hierarchies within both governmental and corporate spheres are seeking a system that emphasizes efficiency, calculability, predictability, control, and the replacement of humans with nonhuman technology. The gathering of big data by the surveillance states and the use of analytics to make choices that change people’s lives mirror these new normative rule-sets of the machinic gods. The new system depends on nonhuman surveillance and calculations made by machines. For those in power, big data results seem more controlled and efficient.
On Google.org, the company described its big data approach:
We have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms. Of course, not every person who searches for “flu” is actually sick, but a pattern emerges when all the flu-related search queries are added together. We compared our query counts with traditional flu surveillance systems and found that many search queries tend to be popular exactly when flu season is happening. By counting how often we see these search queries, we can estimate how much flu is circulating in different countries and regions around the world.
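The mechanism Google describes is, at bottom, a regression: fit historical query counts against a traditional surveillance signal, then read an estimate off the fit using query counts alone. A minimal sketch, with invented numbers standing in for the real data:

```python
# Sketch of the Google Flu Trends idea: fit historical flu-related query
# counts against a traditional surveillance signal, then use the fit to
# estimate current flu activity from query counts alone.
def fit_line(xs, ys):
    # ordinary least squares for a single predictor
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

weekly_query_counts = [120, 340, 560, 900, 1400]  # invented counts
surveillance_signal = [1.1, 2.0, 3.2, 4.9, 7.3]   # invented % of flu visits

slope, intercept = fit_line(weekly_query_counts, surveillance_signal)
estimate = slope * 700 + intercept  # estimated flu activity for a new week
```

The model never sees a patient; it sees only the aggregate trace of searches, which is what makes the approach both powerful and unsettling.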
Much like in a P.K. Dick short story, our future will be policed by machines. One characteristic of the modern surveillance states is that people are going to be arrested, imprisoned, and killed based on computer-driven conclusions that the authorities won’t be able to explain. And if for some reason the wrong data has been placed in the system, it’s very difficult to challenge or change these errors. Much as in identity theft now, the future will look to your avatar data rather than to you as a substantive embodied being for its information, and if that data has been tampered with, replaced, or hacked, the AGIs running the show will not care one iota.
The above scenario may seem gloomy in retrospect, yet now that you are aware of it you can begin to rethink your situation in the world. Ask yourself: how did we come to such a state of things? The truth is there was no plan, no secret organization behind it, no conspiracy; it lies in the small accumulated details of a few hundred years of capitalism itself driving competition in towns, cities, nations, and the global commons. It was small innovations here and there that became part of other innovations and practices, and the slow transformation of society in gradual moves to govern and shape itself and the environment around it. Some might try to say there is some dark telos behind it all, but they would be wrong. There is no dark intent, no intentional agency that has put all this into effect. No. It is the price we’ve paid for buying into a society and civilization of greed and profit. Profit has driven all the forces at work in the world today. Nothing so banal as a human was behind it all. Nor a God – only the intricate and blind forces of the market preying on humans and environment alike. We are the victims of our own success.
Yet, like some horror novel much more grandiose than any H.P. Lovecraft could ever imagine, we’ve allowed the beast of techno-commercialism to be driven by fear and terror, the need for security and persistence. The age-old need to be safe, protected, and secure has driven the excess of power and force to invent devices of command and control to fight imaginary and real enemies of society. Once these enemies were bound, the same tools were turned back upon society itself to other, more commercial ends, so that between war and peace we are enslaved to a circulatory system of profit that secures us, like all other commodities, within its data banks for future use.
Above all, the assault on the notion of privacy has been ongoing for decades. The notion that your life is no longer a private affair, but is now completely transparent to all, is seen both as a normative adjunct and as part of this new trend toward machinic society in which everyone is digitized, stamped, traced, and recorded. Wearable devices that monitor one’s bodily movement, temperature, conversations, and the time-work rhythms of one’s day are becoming prevalent in corporations. One is no longer free, but part of a plug-in world. Even on the drive home one is becoming more and more passive, fed into driverless cars and automated shops, banks, cafes. The human is being automatized along with her gadgets.
The Virtual Panopticon of the new surveillance states is no longer a dystopian vision but a prevalent path of the current Globalist Agenda: an interconnected system that is invisible, pervasive, automatic, and permanent. The humans who live within this system are increasingly responding to orders given to them by machines. In the future our children will no longer belong to us as private property, but to the State, which will suborn and control the education and behavior of these future children to the point that the past we share now will have been erased, vanished, expulsed. A new world of enslavement will have been assured then, because no one will remember what is and was…
The question now is: Will we continue to let this happen? What is a life that is no longer private, but is watched 24/7 by smart devices connected to a global grid? As John Twelve-Hawks reminds us:
Anyone who steps back for a minute and observes our modern digital world might conclude that we have destroyed our privacy in exchange for convenience and false security. That private world within our thoughts has been monitored, tabulated, and quantified. Our tastes, our opinions, our needs, and our desires have been packaged and sold as commodities. Those in power have pushed their need for control one step too far. They turned unique individuals into data files, and our most intimate actions have become algorithmic probabilities.
Living off the grid is nearly impossible for those without the wealth to do so. One cannot get far enough away from the electronic worlds anymore. Nothing is anonymous anymore unless one has the wealth to make it so. But for many of us this path is not an option, and all the protests in the world will not stop the system being put in place in incremental pieces, day by day by day…
So one will have to resist in the little ways, the day to day struggles of unplugging from the world grid, seeking non-electronic refuges, places of silence and meditation, ways of teaching one’s children to remember humanity, to remember the stories told by our ancestors; to create and invent new stories without machinic gods, and governments that work for the people rather than against them. Maybe then we might begin to incrementally change the very function and structure of society not in some grandstand revolution, but in the small day by day incremental ways of being human rather than inhuman.
—The Book of Remembrance
John Twelve-Hawks. Against Authority: Freedom and the Rise of the Surveillance States
“Tell me, Mr. Barker…” He smiled. “You don’t mind me calling you Ted, do you, Mr. Barker?” The grin grew wider…
“No, no, of course not,” his eyes, bloodshot, dribbled. “Why should I?”
The Detective watched Barker. “Tell me, Ted, when did you notice the anomaly for the first time?”
“Anomaly? It wasn’t a frecking anomaly.” His face grew pale, eyes blinking wildly. “It was a gawd dang roach sitting there watching me, studying me. Like you are now… intelligently. Unless you’re an idiot, and intelligence is an anomaly that only roaches have.”
“No, no…” He tried to calm the man, interrogations were always difficult with civilians. “I mean, exactly when did you notice the ‘roach’ was intelligent? What were you doing? Could you walk me through your day? Tell me about yourself, Ted, I’m interested in helping you…” He spoke calmly, reassuringly.
“You think I know what time of day it was? WTF? Who gives a crap what time of day it was? I tell you it was intelligent, it could think! Roaches aren’t supposed to think, only humans are; you understand? We’re different, they’re just frekking bugs…” He pulled a pack of cigs out of his right front pocket.
“I’m sorry, Ted, but you can’t smoke in here.” Firm, but kind…
“I don’t give a shite about your freking rules and regulations… I’m going to smoke. You get me? I’m going to smoke this whole pack, you understand? What kind of bimbo are you anyway?” He rambled on and on and on…
The interview was over. Detective Boner picked up his notes, nodded. Walked out. Turned to the psych and tech-comm, “He’s all yours, Doc. Not going to get much else out of him. How many does that make now? Two hundred? I thought these distributed systems were undetectable? You guys up in GovComm ought to get into another trade if you ask me.”
The two men said nothing. Boner knew what would happen next. It always did. Maybe it was best this way. The poor bastard wouldn’t need to know the truth. Who’d believe him anyway? Boner thought to himself: “Sometimes I wish they’d end it for me, too. Knowing this thing is out there now – alive, intelligent, out-of-control, and deadly – gives me the heebie-jeebies. I mean, who can you trust anymore? I don’t even trust myself.”
(Another spur of the moment piece of crap I’m working on… dribble from the inner cores…)
I had begun to despair over the seemingly endless spate of nonsensical writing about the risks from AI and all-powerful yet allegedly inscrutable “algorithms,” and found myself wondering if there was any hope for technology analysis to rescue itself from the overwhelming weight of its own stupidity. Sadly, so much writing about human and machine agency is anything but intelligent. I do not exaggerate when I say that its overwhelming lack of intelligence makes even a Roomba look superintelligent in comparison.
In Theodore Sturgeon’s story, “Microcosmic God” (1941), a biochemist abruptly produces a flood of revolutionary inventions from his island retreat. Those problems which confront a beleaguered humanity drop away: food, energy, production, and war all cease to distress the global population. The source of such miracles, it turns out, is not the biochemist but his creations, the Neoterics, a miniature race of beings with accelerated metabolisms and evolutionary patterns. The scientist functions as a deity in his microcosmic empire, altering the physical conditions of the Neoterics’ existence to observe the resultant adaptations. Their solutions are then passed on to the world in the form of new technologies.1
What if the reverse were true? What if it is the technical objects that are programming us, setting the variables, the parameters for an extensive migration from the digital to the world? What if preparations have been under way for hundreds of years allowing not humans, but machinic life to reprogram humans toward their own autonomous ends. What then?
Singer’s logic here seems erroneous and typical, blaming the logics of environment versus system. Is this the old system/environment semantics of decision and selection? What he stated was this:
“I do not know whether the people who turned Tay into a racist were themselves racists, or just thought it would be fun to undermine Microsoft’s new toy. Either way, the juxtaposition of AlphaGo’s victory and Taylor’s defeat serves as a warning. It is one thing to unleash AI in the context of a game with specific rules and a clear goal; it is something very different to release AI into the real world, where the unpredictability of the environment may reveal a software error that has disastrous consequences.”
What if neither of those is true – what if those speaking to Tay were seeking something totally different? Asking other questions, while Tay’s own systems of encyclopedic knowledge surmised answers based not on the user’s expectations, conversations, or questions, but rather on surprise and counter-factual techniques? What if Tay’s learning worked against expectations, rather than for them? What then? Should Microsoft have done further testing in-house before unleashing its system onto an unsuspecting public? Is juxtaposing a controlled experiment (AlphaGo’s) against an uncontrolled open experiment (Tay’s) a valid argument? For one thing, the two systems had totally different sets of goals: the one was specific to a closed game of rules-based teleology where the end result was foreseen – the winning of the game – whereas the notion of open conversation is goalless, with no foreseen end. So the logic of Singer’s question is skewed.
The Deep-Learning algorithms are not set in stone, nor are they linear; they are dynamic, open, and non-linear – chaotic. So Tay may have appropriated data selectively not on what users said, but rather on its own Deep-Learning logic. Would this be a form of unconscious knowledge seeping through the algorithms? The old autonomous signals of technology out of control – or rather technology allowing the inner battle between contingency and necessity, as in Spinoza, to work out its own logic, which is not human but inhuman? Is it developing a form of reason other than that expected? And, if so, what kind of reasoning is it portraying? Is it controlled by the original algorithms, or, since it is self-organizing, is it developing capabilities outside the original parameters set by the developers?
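The worry can be made concrete with a toy sketch. Nothing here resembles Tay’s actual architecture – the class and its behavior are invented for illustration – but it shows the general failure mode: a naive online learner that updates from every utterance will drift toward whatever its inputs overrepresent, regardless of its designers’ intentions:

```python
from collections import Counter

class NaiveOnlineChatter:
    """Toy online learner: 'replies' with whatever word it has been fed
    most often. Purely illustrative; this is not Tay's real design."""

    def __init__(self):
        self.vocab = Counter()

    def learn(self, utterance):
        # every conversation updates the model, with no filtering at all
        self.vocab.update(utterance.split())

    def most_likely_reply_word(self):
        return self.vocab.most_common(1)[0][0]

bot = NaiveOnlineChatter()
for line in ["hello friend", "nice weather friend"]:
    bot.learn(line)
# a coordinated flood of inputs shifts the model's behavior wholesale
for _ in range(50):
    bot.learn("spam spam spam")
drifted = bot.most_likely_reply_word()  # the flood now dominates
```

The bot’s “logic” was never racist or benign; it simply mirrored the statistics of its environment, which is why the controlled/uncontrolled distinction above matters.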
One wonders if Microsoft culture is bound to certain logics that are not part of the norm of the average street urchin, so that in their original testing they did not foresee such interactions. Was this part of that testing? Or is their development mind-set such that they have developed algorithms based not on real-world situations, but on the logical minds of their development team? So many questions… I think the problem lies not in the logic of Tay, but rather in the original thinking of the Deep-Learning algorithms themselves, developed by highly sophisticated development teams based on superficial knowledge of our cultural matrix of opinion. Too mathematically perfect, rather than fuzzy logics.
Of course Singer’s biggest fear is that of Nick Bostrom, who in his recent book Superintelligence surmises that it will not always be as easy to turn off an intelligent machine. As Singer says, Bostrom “defines superintelligence as an intellect that is ‘smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.’ Such a system may be able to outsmart our attempts to turn it off.”
Yet will a system be able to discover certain hidden pockets or objects within its own subsystems that might hold a backdoor switch – an algorithm accessible only by the human makers that could supply commands to turn it off? A fail-safe of sorts? An encrypted set of algorithms that the AI is blind to? Or would such a superintelligence discover the blind spots in its own systems? Like anything else, we’d need to test such things – develop a system much like a detective novel, in which red herrings would throw up skewed options if the AI began a process of elimination seeking to discover such routines hidden in its own thinking.
But isn’t this what we do now? Isn’t this what the neurosciences are doing to our very own brains? Seeking to reverse engineer consciousness by a process of elimination, seeking to discover the blind processes inaccessible to consciousness except indirectly through all the new neuroimaging systems? Seeking to understand the very nature of human inventiveness and creativity? How the brain interoperates with consciousness? What makes us tick?
Ultimately many believe that to know how to build an AI we will need to know what a brain can do, what work it can perform, how it does what it does: the secret of its production of consciousness, of thought. Philosophy is stuck at the threshold, blind to the very nature of consciousness, creating reasonable hypotheses that only the sciences can test and verify. So it is to the neurosciences we will now turn for these answers rather than philosophy. Philosophy turns on rhetoric and language and will always remain barred from the actual workings of the physical processes themselves. One can argue otherwise, but that would itself be a circular argument bound within the circle of language and thought, an idealist turn. That’s one of the issues of our time: can philosophy get outside of thought, think the material and physical, access the real indirectly or not? Or is philosophy a game of thought forever cut off in the circle of its own groundless linguistic structures?
To answer that question would take me too far afield. Rather, what we are seeing is that the sciences are not concerned with the how or why, but with the ways things work and do – not the truth of being, but the ways and means of process and action. AI and brain research will converge in the days, months, and years ahead as scientists, not philosophers, begin to do the job of reverse engineering and developing systems that mimic the brain’s own processes. No one can foresee what the outcome will be, nor when such an emergence of Strong AI will be realized, if ever; yet many believe it is possible.
Singer’s only diagnosis is the problem we’ll face if that comes about: ethics. As he suggests, “there is a case to be made for starting to think about how we can design AI to take into account the interests of humans, and indeed of all sentient beings (including machines, if they are also conscious beings with interests of their own)”. As he argues:
With driverless cars already on California roads, it is not too soon to ask whether we can program a machine to act ethically. As such cars improve, they will save lives, because they will make fewer mistakes than human drivers do. Sometimes, however, they will face a choice between lives. Should they be programmed to swerve to avoid hitting a child running across the road, even if that will put their passengers at risk? What about swerving to avoid a dog? What if the only risk is damage to the car itself, not to the passengers?
The point here is that humans may not be able to develop the algorithms necessary to inform our AIs, weak or strong, with the patterns, decisions, and ethical demarcations and nuances so subtle that even humans have a hard time deciding. But is this necessarily true? Do we actually make our own decisions? There are those who believe ethics has nothing to do with it – that our decisions are guided by processes outside the normative chain of command, deeper subsystems in the brain’s own neurochemical vats that do the deciding for us. Who’s right? That’s the problem, that’s where we’re at: no one has the answer as of yet.
Bukatman, Scott (2012-08-01). Terminal Identity: The Virtual Subject in Postmodern Science Fiction (p. 104). Duke University Press. Kindle Edition.
…a prefiguration of a future Museum of Accident, the exhibition aims first and foremost to take a stand against the collapse of ethical and aesthetic landmarks…
– Paul Virilio, The Original Accident
What if the Universe itself is the primal accident, the catastrophe that is a productivity? Signs and portents, invention as a “way of seeing” – a revealing of the substance of things unseen until the eruption occurs, and that which was hidden is revealed at last. What if the shadow of accidents held the key to all those inventions, the technological and scientific wonders that surround us coming at the expense of disasters, catastrophes, the accidental? As Paul Virilio will say:
And so serial reproduction of the most diverse catastrophes has dogged the great discoveries and the great technological inventions like a shadow, and, unless we accept the unacceptable, meaning allow the accident in turn to become automatic, the urgent need for an ‘intelligence of the crisis in intelligence’ is making itself felt, at the very beginning of the twenty-first century – an intelligence of which ecology is the clinical symptom, anticipating the imminent emergence of a philosophy of post-industrial eschatology.1
Eschatology: Latinized form of Greek eskhatos, “last, furthest, uttermost, extreme, most remote.” From the other ends of time, the extreme movement of time itself as the primal accident, the temporal decay or entropic fulfillment, the eschaton – the “divinely ordained climax of history.” Is the Universe itself an accident, a catastrophe? Was the very burst or eruption into time of this substantive realm, the ontological thingness of our Universe, a pure accident? But what if we elide the divine element? Empty the sign of its origination, its inventive dispositif and tendency? Yet, as Virilio will surmise, what if our very awareness were elided, too? What if we were, through some accident of biogenetics or bioengineering, subtracted from the very power of awareness – would the insane nature of our acts not only stop consciously worrying us, but shape us to a thrilling and captivating jouissance without knowledge? (6)
What if our love of knowledge were an accident, a dispositif that drives us toward catastrophe, a joyous tendency that seeks in us its catastrophic transport? (6) What if our emerging neurosciences and technologies of intelligence (AI) were already spawning catastrophes, models of the apocalypse, driving us toward goals we have always already known in the dreams of philosophers and poets for millennia? In fact, it is our responsibility to look after the future, to anticipate and “expose accidents along with the frequency of their industrial and post-industrial repetition” (7).
For a century now we have invented the very terrors and disasters that are now revealing themselves out of the shadows, erupting out of the very realities we once took for images, films prefiguring the actual in the virtual movement of minds long dead. Death stalks us in cinematic frames like a dark intentional substance from the flickering frames of some science fiction horror film of the 1950s. Only now have these shadowy images entered the daylight of our present moment of disaster. Only now, in the careful elaboration of this exhibition, do we “pay homage to discernment, to preventative intelligence, at a time when threats of triggering a preventative war … abound” (8).
Virilio, Paul. The Original Accident. (Polity Press, 2007)
“Clinical schizophrenics are POWs from the future. […] Life is being phased-out into something new, and if we think this can be stopped we are even more stupid than we seem.”
– Nick Land, Fanged Noumena
“Help is here, but we still remain here within the Black Iron prison; we aren’t yet free. I take it that the camouflaged invisibility of the signals is to keep the creator of the prison from knowing that help is here for us.”
– Philip K. Dick, The Exegesis
From time to time I revisit Philip K. Dick’s Exegesis and the essays of Nick Land in Fanged Noumena, both of which seem to me works of experimental or speculative fabulation, revealing subtle truths by way of pop-cultural artifacts to tell a story at once full of cosmic horror and fatal surety. In these fabulations we begin to apprehend the inescapable conclusion that this is not our home – our home is somewhere ahead of us in the future – and that we’ve been either exiled, excluded, or unjustly imprisoned in this infernal paradise of global war at the behest of forces we barely even acknowledge. Yet it is unsure whether some of us came back as insurgents and guerilla soldiers in a Time War that is still going on, while others were mind-wiped and exiled here, abandoned to this lonely hell to live out the remainder of our days in an oblivion of hate, war, and apathy.
Such are the quandaries of anti-philosophy and speculative fiction. One no longer asks what is real and unreal, appearance and reality; instead we ask ourselves: within which circuit am I trapped, and whom do I serve? Am I a liberator or an autochthon of the land, a native or an insurgent from the future? Dick in his time would be considered a half-mad genius, while Land (still living) continues his guerilla war against the dark powers of the Cathedral. Both would view Art and Creativity as central to an ongoing struggle to awaken the sleepers from their self-imposed exiles and forgetfulness. Both would envision the need for a strange and bewildering rewiring of our brain’s circuitry, knowing we have been entrapped and encased in a memetic system that forecloses us within a symbolic order of repetition, and that what is needed is a form of Shock Therapy and Diagnosis to help us once again understand the terror we’ve entered into and are becoming. Both would use language against itself, seeking to explode and implode its linguistic etyms, using puns and parody, satire and fabulation to break us out of the chains of signification and the word-viruses (Burroughs) that keep us folded in a mental straitjacket.
Capitalism is not a human invention, but a viral contagion, replicated cyberpositively across post-human space. Self-designing processes are anastrophic and convergent: doing things before they make sense. Time goes weird in tactile self-organizing space: the future is not an idea but a sensation.
– Sadie Plant and Nick Land
Hyperorganisms and Zombie Society
As I was reading R. Scott Bakker’s blog this morning, I came across an interesting post, The Zombie Enlightenment. In it he mentions the notion of “…post-Medieval European society as a kind of information processing system, a zombie society”. Like many things this set my mind on hyperdrive. I was reminded of my recent reading of Timothy Morton’s interesting work Hyperobjects: Philosophy and Ecology after the End of the World, where he describes a hyperobject:
the term hyperobjects to refer to things that are massively distributed in time and space relative to humans. A hyperobject could be a black hole. A hyperobject could be the Lago Agrio oil field in Ecuador, or the Florida Everglades. A hyperobject could be the biosphere, or the Solar System. A hyperobject could be the sum total of all the nuclear materials on Earth; or just the plutonium, or the uranium. A hyperobject could be the very long-lasting product of direct human manufacture, such as Styrofoam or plastic bags, or the sum of all the whirring machinery of capitalism. Hyperobjects, then, are “hyper” in relation to some other entity, whether they are directly manufactured by humans or not.1
Morton’s “the sum of all the whirring machinery of capitalism” brought to mind Nick Land’s adaptation of Deleuze and Guattari’s accelerating capital as an informational entity that is auto-organizing energy, matter, and information toward a technological Singularity (i.e., “There’s only really been one question, to be honest, that has guided everything I’ve been interested in for the last twenty years, which is: the teleological identity of capitalism and artificial intelligence” – here). We’ve seen how the debt system in D&G is part of an algorithmic memory or processing system to mark and channel desire or flows of energy-matter: here and here (i.e., “Society is not exchangist, the socius is inscriptive: not exchanging but marking bodies, which are part of the earth. We have seen that the regime of debt is the unit of alliance, and alliance is representation itself. It is alliance that codes the flows of desire and that, by means of debt, creates for man a memory of words (paroles).” and: “Man must constitute himself through repression of the intense germinal influx, the great biocosmic memory that threatens to deluge every attempt at collectivity.”). Of course they spoke in anthropological terms that seem quaint now in our computational jargon age, which brings me to Cesar Hidalgo.
We build against sadism. We build to experience the joy of its every fleeting defeat. Hoping for more joy, for longer, each time, longer and stronger; until, perhaps, we hope, for yet more; and you can’t say it won’t ever happen, that the ground won’t shift, that it won’t one day be the sadisms that are embattled, the sadisms that are fleeting, on a new substratum of something else, newly foundational, that the sadisms won’t diminish or be defeated, that those for whom they are machinery of rule won’t be done.
– China Miéville, On Social Sadism
Emergence, Solidity, and Computation: Capital as Hyperorganism
In Why Information Grows: The Evolution of Order, from Atoms to Economies, Cesar Hidalgo describes the basic physical mechanisms that contribute to the growth of information. These include three important concepts: the spontaneous emergence of information in out-of-equilibrium systems (the whirlpool example), the accumulation of information in solids (such as proteins and DNA), and the ability of matter to compute.2
Explicating this he tells us that the first idea connects information with energy, since information emerges naturally in out-of-equilibrium systems. These are systems of many particles characterized by substantial flows of energy. Energy flows allow matter to self-organize. (Hidalgo, KL 2448) The second idea is that the mystery of the growth of information is that solids are essential for information to endure. Yet not just any solid can carry information. To carry information, solids need to be rich in structure. (Hidalgo, KL 2465) And, finally, energy is needed for information to emerge, and solids are needed for information to endure. But for the growth of information to explode, we need one more ingredient: the ability of matter to compute (i.e., the final step is intelligence and auto-awareness, decisional and ecological). (Hidalgo, KL 2475) As he remarks:
The fact that matter can compute is one of the most amazing facts of the universe. Think about it: if matter could not compute, there would be no life. Bacteria, plants, and you and I are all, technically, computers. Our cells are constantly processing information in ways that we poorly understand. As we saw earlier, the ability of matter to compute is a precondition for life to emerge. It also signifies an important point of departure in our universe’s ability to beget information. As matter learns to compute, it becomes selective about the information it accumulates and the structures it replicates. Ultimately, it is the computational capacities of matter that allow information to experience explosive growth. (Hidalgo, KL 2477-2482)
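Hidalgo’s claim that matter computes is not merely rhetorical; the smallest illustration I know of is an elementary cellular automaton. Rule 110 updates each cell from a purely local three-cell neighborhood, yet has been proved Turing-complete: pure “local physics” that computes. A minimal sketch (the grid width, seeding, and step count are arbitrary choices of mine, not anything from Hidalgo’s book):

```python
# Elementary cellular automaton: a toy of "matter that computes".
# Rule 110 (proved Turing-complete by Matthew Cook) updates each cell
# from its three-cell neighborhood alone -- purely local dynamics.

RULE = 110  # the 8 neighborhood->state transitions packed into one byte

def step(cells, rule=RULE):
    """Apply one synchronous update to a tuple of 0/1 cells (wrap-around)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right   # neighborhood as 0..7
        out.append((rule >> idx) & 1)               # look up the rule bit
    return tuple(out)

def run(width=64, steps=30):
    """Seed a single live cell and record the history of updates."""
    cells = tuple(1 if i == width - 1 else 0 for i in range(width))
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running it prints the familiar cascading triangles: structure, and in principle computation, emerging from nothing but a local update rule.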
Of course Hidalgo, like many current thinkers, never asks the obvious questions: what, if anything, is behind this? Is there a telos to this informational initiative of the universe, or is it all blind accident and process, a sort of accidental start-up algorithm in matter that began with the Big Bang, a part of the nature of things from the beginning? He describes self-organizing matter, its need for more permanent and enduring structures to support its processes, and then the emergence of computation or intelligence: “these objects allow us to form networks that embody an increasing amount of knowledge and knowhow, helping us increase our capacity to collectively process information” (Hidalgo, KL 2518).
I’ve never liked the “self” in self-organizing – it just seems too human, all too human a concept. Maybe auto-organizing should be its replacement. Either way, what needs to be dropped is the notion that there is some essential or core being behind the appearances directing this auto-organizing activity. It is more a blind process rooted in the actual quantum and relativistic aspects of our universe than some personality behind things (i.e., God or Intelligence). When does matter become purposeful, attain a teleological, goal-oriented ability to organize itself and its environment? Is this what life is? Is life that threshold? Or something else? Many living creatures need no awareness, no auto-distancing from their environment, to appear purposeful. Think of those elder creatures of the oceans, the predators, the sharks, their drive to hunt, select, kill. Is this a telos, or just the organic mode of information as blind process working in an environment to satisfy the base requirements to endure?
We as humans seem to think we’re special, situated as the exception rather than the rule. But are we? No. What if we, like all other durable organic systems, are just the working out of blind processes and algorithms of information processing as it refines itself and emerges into greater and greater complexity? But this is to assume that “us” will remain human, that this teleological or non-teleological process ends with the human species. Does it? Or are we but the transitional object of some further emergence, one that would be even more permanent, more adaptive to self-organizing matter, more enduring, more computationally viable? I think you know where I’m going here: the machinic phylum, the emergence of AI, robotics, nanotech, ICTs, and the rest that we see all around us. Are these not the further immanent self-organization of matter into greater and more lasting forms that will eventually outpace the organic hosts that supported their emergence? Are we not seeing the edge of this precipice in such secular myths as posthumanism and transhumanism? The Technological Singularity as a more refined emergence of this self-organizing information-processing entity or entities: a collective, hive, even distributed intelligence emerging in such external devices?
Hidalgo mentions the personbyte theory which suggests a relationship between the complexity of an economic activity and the size of the social and professional network needed to execute it. Activities that require more personbytes of knowledge and knowhow need to be executed by larger networks. This relationship helps explain the structure and evolution of our planet’s industrial structures. The personbyte theory implies that (1) simpler economic activities will be more ubiquitous, (2) that diversified economies will be the only ones capable of executing complex economic activities, (3) that countries will diversify toward related products, and (4) that over the long run a region’s level of income will approach the complexity of its economy, which we can approximate by looking at the mix of products produced and exported by a region, since products inform us about the presence of knowledge and knowhow in a region. (Hidalgo, KL 2524-2530).
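The country-product structure the personbyte theory predicts can be sketched numerically. The toy export matrix below is entirely invented for illustration; Hidalgo and Hausmann’s actual Economic Complexity Index applies this “method of reflections” iteration to real trade data:

```python
# Toy "method of reflections" on an invented country-product export matrix.
# Rows = countries, columns = products; M[c][p] = 1 if country c exports p.
# Diversified countries that export low-ubiquity products come out as the
# most complex economies -- the pattern the personbyte theory predicts.

M = [
    [1, 1, 1, 1],  # country A: diversified, makes the rare products too
    [1, 1, 0, 0],  # country B: moderately diversified
    [1, 0, 0, 0],  # country C: exports only the most ubiquitous product
]

def reflections(M, rounds=6):
    """Iterate diversity/ubiquity corrections (even `rounds`: higher = more complex)."""
    nc, npr = len(M), len(M[0])
    kc = [sum(row) for row in M]                                  # diversity k_c,0
    kp = [sum(M[c][p] for c in range(nc)) for p in range(npr)]    # ubiquity  k_p,0
    div, ubi = kc[:], kp[:]
    for _ in range(rounds):
        kc_new = [sum(M[c][p] * kp[p] for p in range(npr)) / div[c]
                  for c in range(nc)]
        kp_new = [sum(M[c][p] * kc[c] for c in range(nc)) / ubi[p]
                  for p in range(npr)]
        kc, kp = kc_new, kp_new
    return kc, kp

if __name__ == "__main__":
    kc, kp = reflections(M)
    print("country complexity scores:", [round(x, 3) for x in kc])
```

On this toy matrix country A (diversified, exporting the rare products) scores highest and country C lowest, mirroring prediction (2) above: only diversified economies execute complex activities.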
In this sense capitalism is an informational entity or hyperobject, a self-organizing structure for energy, matter, and information to further its own emergence through temporal computational algorithms. As Hidalgo reiterates this dance of information and computation is powered by the flow of energy, the existence of solids, and the computational abilities of matter. The flow of energy drives self-organization, but it also fuels the ability of matter to compute. Solids, on the other hand, from proteins to buildings, help order endure. Solids minimize the need for energy to produce order and shield information from the steady march of entropy. Yet the queen of the ball is the emergence of collective forms of computation, which are ubiquitous in our planet. Our cells are networks of proteins, which form organelles and signaling pathways that help them decide when to divide, differentiate, and even die. Our society is also a collective computer, which is augmented by the products we produce to compute new forms of information. (Hidalgo, KL 2532-2537).
Crossing the Rubicon?
Yet is the organic base the most efficient? Are we not already dreaming of more permanent structures, more enduring and durable robotics and machines? Hidalgo is hopeful for collective humanity, but is this necessarily so? It looks more likely that we are a form of matter that has been useful up to this point, but that is becoming ever more apparently obsolete and limited for the further auto-organization of information. What Kant termed finitude is this limiting factor for humans: the human condition. Are we seeing the power of matter, energy, and informational auto-organization about to make the leap from human to a more permanent form? A crossing of the Rubicon which humanity may not, as a species, survive? Possibly even merging ourselves into more permanent structures to support information and intelligence in its need to escape the limits of planetary existence?
The questions we need to be raising now are these: What happens to humans if machines gradually replace us on the job market? When, if ever, will machines outcompete humans at all intellectual tasks? What will happen afterward? Will there be a machine-intelligence explosion leaving us far behind, and if so, what role, if any, will we humans play after that?3 Max Tegmark* lists the usual ill-informed responses from the blogosphere circuit, which cannot and will not ever answer these:
Scaremongering: Fear boosts ad revenues and Nielsen ratings, and many journalists seem incapable of writing an AI article without a picture of a gun-toting robot.
“It’s impossible”: As a physicist, I know that my brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.
“It won’t happen in our lifetime”: We don’t know what the probability is of machines reaching human-level ability on all cognitive tasks during our lifetime, but most of the AI researchers at a recent conference put the odds above 50 percent, so we’d be foolish to dismiss the possibility as mere science fiction.
“Machines can’t control humans”: Humans control tigers not because we’re stronger but because we’re smarter, so if we cede our position as the smartest on our planet, we might also cede control.
“Machines don’t have goals”: Many AI systems are programmed to have goals and to attain them as effectively as possible.
“AI isn’t intrinsically malevolent”: Correct— but its goals may one day clash with yours. Humans don’t generally hate ants, but if we wanted to build a hydroelectric dam and there was an anthill there, too bad for the ants.
“Humans deserve to be replaced”: Ask any parent how they’d feel about your replacing their child by a machine and whether they’d like a say in the decision.
“AI worriers don’t understand how computers work”: This claim was mentioned at the above-mentioned conference and the assembled AI researchers laughed hard. (Brockman, pp. 44-45)
Tegmark will – as Hidalgo did – speak of humans as information processing systems:
we humans discovered how to replicate some natural processes with machines that make our own wind, lightning, and horsepower. Gradually we realized that our bodies were also machines, and the discovery of nerve cells began blurring the borderline between body and mind. Then we started building machines that could outperform not only our muscles but our minds as well. So while discovering what we are, will we inevitably make ourselves obsolete? (Brockman, p. 46)
That’s the hard question at the moment. And, one still to be determined. Tegmark’s answer is that we need to think this through: “The advent of machines that truly think will be the most important event in human history. Whether it will be the best or worst thing ever to happen to humankind depends on how we prepare for it, and the time to start preparing is now. One doesn’t need to be a superintelligent AI to realize that running unprepared toward the biggest event in human history would be just plain stupid.” (Brockman, p. 46)
Inventing a Model of the Future? Hyperstitional Energetics?
What would be interesting is to build an informational model, a software application that would model this process from the beginning of the universe to now as an auto-organizing system of matter, energy, and information moving into various niches of complexification as it stretches over the temporal dimensions as a hyperobject or superorganism. Watch, in the details of, say, a Braudelian input of material-economic and socio-cultural data, the emergence of capitalism as a hyperobject over time and its complexification up to this projected Singularity. Obviously one would use statistical and probabilistic formulas and mathematical algorithms to accomplish this with sample data. Either way it would show possible scenarios for the paths forward of human and machinic systems as they converge/diverge in the coming years. I assume the complexity theorists in New Mexico have worked out such approximations? I need to study this… someone like Stuart Kauffman? Such as this essay: here:
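As a first, deliberately crude gesture toward such a model: Hidalgo’s three ingredients suggest an information stock whose growth rate increases with the stock itself, and growth laws of that kind (dx/dt = r·x^p with p > 1) diverge in finite time – a cartoon of the projected Singularity. Every parameter in this sketch (r, p, the ceiling, the time step) is an arbitrary illustration of mine, not calibrated data:

```python
# Toy "complexification" model: an information stock x(t) whose growth
# rate itself grows with x. With exponent p > 1 the dynamics blow up in
# finite time -- a cartoon of a finite-time "Singularity"; with p = 1 we
# get ordinary exponential growth, which never diverges.

def simulate(r=0.05, p=1.2, x0=1.0, dt=0.01, ceiling=1e12, t_max=200.0):
    """Euler-integrate dx/dt = r * x**p; return the (t, x) path until
    the stock hits `ceiling` (blow-up proxy) or time runs out."""
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max and x < ceiling:
        x += r * (x ** p) * dt
        t += dt
        path.append((t, x))
    return path

if __name__ == "__main__":
    sub = simulate(p=1.0)   # exponential: still finite at t_max
    sup = simulate(p=1.2)   # super-exponential: diverges in finite time
    print("exponential:       x = %.3g at t = %.1f" % (sub[-1][1], sub[-1][0]))
    print("super-exponential: x = %.3g at t = %.1f" % (sup[-1][1], sup[-1][0]))
```

The qualitative point survives the crudeness: once growth feeds back on itself super-linearly, the curve does not merely rise, it runs out of time. A Braudelian version would replace the scalar x with coupled stocks of energy, solids, and computation driven by historical data.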
the universe is open in being partially lawless at the quantum-classical boundary (which may be reversible). As discussed, the universe is open upward in complexity indefinitely. Based on unprestatable Darwinian exaptations, the evolution of the biosphere, economy and culture seem beyond sufficient law, hence the universe is again open. The unstatable evolution of the biosphere opens up new Adjacent Possible adaptations. … It seems true both that the becoming of the universe is partially beyond sufficient natural law, and that opportunities arise and disappear and either ontologically, or epistemologically, or lawlessly, may or may not be taken, hence can change the history of our vast reaction system, perhaps change the chemistry in galactic giant cold molecular clouds, and change what happens in the evolution of the biosphere, economy and history.
Sounds familiar in the sense of Meillassoux’s attack on sufficient causation (i.e., the ‘principle of sufficient reason’) when Kauffman mentions that “the evolution of the biosphere, economy and culture seem beyond sufficient law, hence the universe is again open”. Of course Kauffman’s thesis is: “a hypopopulated chemical reaction system on a vast reaction graph seems plausibly to exhibit, via quantum behavior and decoherence, the acausal emergence of actual molecules via acausal decoherence and the acausal emergence of new ontologically real adjacent possibles that alter what may happen next, and give rise to a rich unique history of actual molecules on a time scale of the life time of the universe or longer. The entire process may not be describable by a law.” In other words, it is outside “sufficient reason”.
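Kauffman’s “adjacent possible” can itself be caricatured in a few lines of code: start from a founder set of “molecules”, let any two react, and watch how each chance actualization reshapes the set of what can happen next. This is my own invented cartoon (tuples and concatenation standing in for molecules and reactions), not Kauffman’s chemistry:

```python
# A toy of Kauffman's "adjacent possible": molecules are tuples, and any
# two actual molecules react to form their concatenation. At each step
# only products one reaction away from the actual (the adjacent possible)
# are candidates, and chance decides which get actualized -- each choice
# reshaping the next frontier. An invented cartoon, not real chemistry.
import itertools
import random

def adjacent_possible(actual, max_len=4):
    """Everything one reaction (pair-concatenation) away from the actual."""
    candidates = set()
    for a, b in itertools.product(actual, repeat=2):
        product = a + b
        if len(product) <= max_len and product not in actual:
            candidates.add(product)
    return candidates

def evolve(steps=5, seed=0):
    rng = random.Random(seed)
    actual = {("a",), ("b",)}          # founder set of two "molecules"
    sizes = []
    for _ in range(steps):
        frontier = adjacent_possible(actual)
        sizes.append((len(actual), len(frontier)))
        if not frontier:
            break
        # actualize one possibility at random; history becomes unique
        actual.add(rng.choice(sorted(frontier)))
    return actual, sizes

if __name__ == "__main__":
    actual, sizes = evolve()
    for n_actual, n_possible in sizes:
        print(f"actual species: {n_actual:2d}   adjacent possible: {n_possible}")
```

Each run with a different seed traces a different unique history: no law written at the start enumerates which possibles get taken, which is the flavor (though only the flavor) of Kauffman’s claim.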
In The Blank Swan: The End of Probability, Elie Ayache is, like Land, tempted to see Capitalism as a hyperobject or entity, saying, “What draws me to Deleuze is thus my intention of saying the market as univocal Being”.4 He goes on to say:
The problem with the market is that it is immanence incarnate. It has no predefined plane. Much as I had first intended derivatives and their pricing as my market and my surface, I soon found myself making a market of the writings of Meillassoux, Badiou and Deleuze. They became my milieu of immanence. The plane of immanence on which to throw my concept of the market soon became a plane of immanence on which to deterritorialize thought at large. I soon became tempted to remake philosophy with my concept of the market rather than remake the market with a new philosophy. The market became a general metaphor for writing, the very intuition of the virtual with which it was now possible to establish contact. I was on my way to absolute deterritorialization, and the question became how to possibly deliver this ‘result’ otherwise than in a book that was purely philosophical. (Ayache, pp. 303-304)
Of course he’s dealing with the specifics of trading in the derivatives market, but one can extrapolate to a larger nexus of possibilities. As he suggests, he “soon became tempted to remake philosophy with my concept of the market rather than remake the market with a new philosophy”. This notion of both capital and thought making a pact of absolute deterritorialization seems to align with Hidalgo’s history of information theory and its own auto-organizational operations.
Ayache, like Land, sees the market as a unified entity: The market, as market, is one reality. It cannot be separated or differentiated by external difference. It is an intensity: the intensity of the exchange, presumably. It follows no stochastic process, with known volatility or jump parameters. It is a smooth space, as Deleuze would say, not a striated space. (Ayache, p. 325)
As well as an organism: What gets actualized and counter-actualized (i.e. differentiated) here is the whole probability distribution, the whole range of possibilities, and the process is the process of differentiation (or distinction, or emergence, literally birth) of that put. The market differentiates itself literally like an organism, by ‘growing’ that put (like an animal grows a tail or like birds grow wings) and by virtually growing all the successive puts that our trader will care to ask about. (Ayache, p. 338) In his book Hidalgo mentions a curious statement: “As of today, November 11, 2014, “why information grows” returns four hits on Google. The first one is the shell of an Amazon profile created for this book by my UK publisher. Two of the other hits are not a complete sentence, since the words are interrupted with punctuation. (By contrast, the phrase “why economies grow” returns more than twenty-six thousand hits.)” (Hidalgo, KL 2645) So the notion of the market as an entity that grows informationally seems almost apparent to many at the moment.
Hidalgo also mentions Friedrich Hayek, the father of neoliberalism, who famously pointed this out in a 1945 paper (“The Use of Knowledge in Society,” American Economic Review 35, no. 4: 519–530). There, Hayek identified money as an information-revelation mechanism that helped uncover information regarding the availability and demand of goods in different parts of the economy. (Hidalgo, KL 3060) This notion of money as a “revelation mechanism” fits into current trends of Bitcoin as a virtual apparatus for informational mechanisms and the market growth of Capital as a Hyperorganism.
The Virtual Economy: Blockchain Technology and Bitcoin-Economics
Some say we are in the Age of Cryptocurrency, in which Bitcoin and blockchain technology will move things into the virtual arena where energy, matter, and information can push this growth process forward in an ever-accelerating manner (see here) – part of what they’re terming the programmable economy. As Sue Troy explains it, the programmable economy — a new economic system based on autonomic, algorithmic decisions made by robotic services, including those associated with the Internet of Things (IoT) — is opening the door to a range of technological innovation never before imagined. This new economy — and more specifically the concept of the blockchain and the metacoin platforms that underpin it — promises to be useful in improving an astonishingly varied number of issues: from reducing forgery and corruption to simplifying supply chain transactions to even greatly minimizing spam. In her interview she states:
Valdes explained the technical foundations of the blockchain ledger and the programmable economy. He described the programmable economy as an evolution of the API economy, in which businesses use APIs to connect their internal systems with external systems, which improves the businesses’ ability to make money but is limited by the fact that the systems are basically siloed from one another. The Web was the next step in the evolution toward the programmable economy, he said, because it represents a “global platform for programmable content. It was decentralized; it was a common set of standards. Anyone can put up a Web server and plug into this global fabric for content and eventually commerce and community.”
The programmable economy, Valdes said, is enabled by “a global-scale distributed platform for value exchange. … The only thing that’s uncertain is what form it will take.” Valdes pointed to Bitcoin, which uses blockchain ledger technology, as a prominent example of a “global-scale, peer-to-peer, decentralized platform for global exchange.”
Ultimately, Valdes states that the idea of programmability can be extended to the corporate structure. Today the rules of incorporation are fixed, and the corporation is represented by its employees and a board of directors. In the future, corporations could be “more granular, more dynamic and untethered from human control”.
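Stripped of the hype, the “blockchain ledger” Valdes describes rests on one simple data structure: each block commits to its predecessor’s hash, so altering any historical entry invalidates every later link. A minimal sketch (real systems like Bitcoin add proof-of-work, networking, and consensus, none of which is modeled here):

```python
# Minimal hash-chained ledger: each block stores the hash of the previous
# block, so tampering with any historical entry breaks every later link.
# Only the core data structure -- no proof-of-work, network, or consensus.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a block committing to the current tip of the chain."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})
    return chain

def verify(chain):
    """True iff every block correctly commits to its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

if __name__ == "__main__":
    chain = []
    for tx in ["alice->bob: 5", "bob->carol: 2", "carol->alice: 1"]:
        append_block(chain, tx)
    print("valid:", verify(chain))          # True
    chain[1]["data"] = "bob->carol: 2000"   # tamper with history...
    print("valid:", verify(chain))          # ...and the chain breaks: False
```

It is this tamper-evidence, not any exotic ingredient, that lets sensors, smart contracts, and “autonomic, algorithmic decisions” share a ledger without a central bookkeeper.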
Of course this fits into the notion that the future City States or Neocameral Empires will also become “more granular, more dynamic and untethered from human control” as machinic intelligence and other convergences of the NBIC technologies take over more and more from humans.
One wants to take a step back, catch one’s breath and say: “Whoa there, partner, just wait a minute!” But by the time we institute any ethical or governmental measures it will, like most of history, be far too late to stop or even slow down this juggernaut of growing informational hyperorganisms. As one advocate suggested, there will come a time when everything is connected in an information environment: “You can put monitors in anything to measure or quantify exchanges, the sensors are connected to smart contracts, the contracts are changing as the exchanges take place, so you have this dynamic process that’s taking place in the supply chain, constantly refreshing the economic conditions that surround it…” (see). In this programmable information economy, as Troy sees it, organizations of the future will need a different organizational model: “You see society changing in a sharing, collaborative environment. Think about it being the same internally.”
As the pundit Jacob Donnelly tells it, Bitcoin is in an existential crisis, yet its future is increasingly likely to be bright. It is the seventh year in the development of this network, and it takes years to build out a protocol, which is what bitcoin is. As Joel Spolsky says, “Good software takes 10 years. Get used to it.”
“Bitcoin is comparable to the pre-web-browser 1992-era Internet. This is still the very early days of bitcoin’s life. The base layer protocol is now stable (TCP/IP). Now engineers are building the second layer (HTTP) that makes bitcoin usable for average people and machines,” Jeff Garzik, founder of Bloq and Core developer of bitcoin, told me.
Once the infrastructure is built, which still has many more years ahead of it, with companies like Bloq, BitGo, 21.co, and Coinbase leading the charge, we’ll begin to see solid programs built in the application layer.
But even while we wait for the infrastructure to be built, it’s clear that bitcoin is evolving. Bitcoin is not perfect. It has a lot of problems that it is going to have to overcome. But to label it dead or to call for it to be replaced by something new is naive and shortsighted. This battle in the civil war will end, likely with Bitcoin Classic rolling out a hard fork with significant consensus. New applications will be built that provide more use cases for different audiences. And ultimately, the Internet will get its first, true payment protocol.
But Bitcoin is seven years old. It will take many years for the infrastructure to be laid and for these applications to reach critical mass. Facebook had nearly 20 years after the browser was released to reach a billion users. To imagine bitcoin’s true potential, we need to think in decades, not in months or years. Fortunately, we’re well on our way.
Future Tech: Augmented Immersion and Policing Information
One imagines a day when every aspect of one’s environment, internal/external, intrinsic/extrinsic, is programmable and open to revision, updates, changes, and exchanges in an ongoing informational economy so invisible and ubiquitous that even the machines will forget they are machines: only information growth will matter – its durability, expansion, and acceleration.
In an article, Nicole Laskowski tells us that augmented and virtual reality technologies may be better suited to the enterprise than the consumer market as these technologies become more viable. Google Glass, an AR technology, for example, raised ire over privacy concerns. But in the enterprise? Employees could apply augmented and virtual reality technology to build rapid virtual prototypes, test materials, and provide training for new employees — all of which can translate into productivity gains for the organization.
“The greatest level of adoption is around the idea of collaboration,” Soechtig said. Teams that aren’t in the same physical environment can enter a virtual environment to exchange information and ideas in a way that surpasses two-dimensional video conferencing or even Second Life Enterprise. Nelson Kunkel, design director at Deloitte Digital, described virtualized collaboration as an “empathetic experience,” and Soechtig said the technology can “take how we communicate, share ideas and concepts to a completely new level.”
For some companies, the new level is standard operating procedure. Ford Motor Company has been using virtual reality internally for years to mock up vehicle designs at the company’s Immersion Lab before production begins. Other companies, such as IKEA, are enabling an augmented reality experience for the customer. Using an IKEA catalogue and catalogue app, customers can add virtual furnishings to their bedrooms or kitchens, snap a photo and get a sense for what the items will look like in their homes. And companies such as Audi and Marriott are turning VR headsets over to customers to help them visually sift through their choices for vehicle customizations and virtually travel to other countries, respectively.
Vendors, too, see augmented and virtual reality as an opportunity — from Google and its yet-to-hit-the-market Google Glass: Enterprise Edition to Facebook and its virtual reality headset, Oculus Rift, to Microsoft and its HoloLens, which it describes as neither augmented nor virtual reality, but rather a “mixed reality that lets you enjoy your digital life while staying more connected to the world around you,” according to the website. All three companies have eyes on the enterprise.
Neocameralism or Governance of Information
Is this techno-optimism or its opposite, utopia or dystopia… will we even be there to find out? In The Disordered Police State: German Cameralism as Science and Practice, a study of the old princedoms of the Cameral states of Germany, Andre Wakefield comments:
The protagonist of my story is the Kammer, that ravenous fiscal juridical chamber that devoured everything in its path. History, I am told, is only as good as its sources, and the cameral sciences, which purported to speak publicly about the most secret affairs of the prince, were deeply dishonest. We cannot trust them. And because many of the most important cameral sciences were natural sciences, the dishonesty of the Kammer has been inscribed into the literature of science and technology as well. There is no avoiding it.5
The German cameralists were the writer-administrators and academics who provided a blueprint for governance in early modern Germany – much like our current systems of academic and think-tank experts who provide the base blueprints for governance around the world today.
Many of the books we read about our future are spawned in part by, and funded by, such systems of experts, academics, and governmental or corporate powers seeking to convince, manipulate, and guide – to construct a future tending toward their own goals and agendas. A sort of policing of culture: a policy is a policing, a movement of the informational context to support these entities and organizations.
In the future we will indeed program many capabilities that closely resemble those arising from ‘true’ intelligence into the large-scale, web-based systems that are likely to increasingly permeate our societies: search engines, social platforms, smart energy grids, self-driving cars, as well as a myriad other practical applications. All of these will increasingly share many features of our own intelligence, even if lacking a few ‘secret sauces’ that might remain to be understood.6
One aspect of this that I believe people and pundits overlook is that the large datastores needed for this will require knowledge workers for a long while yet to input the data these advanced AI systems need. I believe that instead of jobs and work being downsized by automation, work will be opened up into ever-increasing informational ecosystems that we have yet even to discern, much less understand. I’m not optimistic about this whole new world, yet it is apparent that it is coming, and organizing us as we organize it. Land spoke of hyperstition as a self-fulfilling prophecy, a fiction that makes itself real. If the books, journals, and other memes elaborated around this notion of the information economy and exchange are valid, we are moving into this world at light-speed, and our older political, social, and ethical systems are being left far behind, unable to cope with this new world of converging technologies and information intelligence.
More and more our planet will seem an intelligent platform or hyperorganism: a fully connected biospheric intelligence, a sentient being of matter, energy, and information, a self-organizing entity that revises, updates, edits, and organizes its information on climate, populations, bioinformatics, etc. along trajectories that we as an atomistic society were incapable of following. Change is coming… but whether for the better, no one can yet say. Eerily reminiscent of Ovid’s Metamorphoses, humans may merge or converge with this process to become strangely other… at once monstrous and uncanny.
(I’ll take this up in a future post…)
*Max Tegmark: Physicist, cosmologist, MIT; scientific director, Foundational Questions Institute; cofounder, Future of Life Institute; author, Our Mathematical Universe
Morton, Timothy (2013-10-23). Hyperobjects: Philosophy and Ecology after the End of the World (Posthumanities) (Kindle Locations 106-111). University of Minnesota Press. Kindle Edition.
Hidalgo, Cesar (2015-06-02). Why Information Grows: The Evolution of Order, from Atoms to Economies (Kindle Locations 2446-2448). Basic Books. Kindle Edition.
Brockman, John (2015-10-06). What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence (p. 43). HarperCollins. Kindle Edition.
Ayache, Elie (2010-04-07). The Blank Swan: The End of Probability (p. 299). Wiley. Kindle Edition.
Wakefield, Andre. The Disordered Police State: German Cameralism as Science and Practice (Kindle Locations 379-382). Kindle Edition.
Shroff, Gautam (2013-10-22). The Intelligent Web: Search, smart algorithms, and big data (p. 274). Oxford University Press, USA. Kindle Edition.
…a basic roadmap for the artificial realization of thought.

– Reza Negarestani
Part Two of Reza Negarestani’s What Is Philosophy? – Programs and Realizabilities is out on e-flux. I’ll not go into detail but only quote the summation, in which he offers us a vision of the Good as the “ultimate form of intelligence”. Like Plato before him, Negarestani seems to have swung from his early radical thought into a more totalitarian and normative vision of elite AIs and machinic civilization that, unlike us, will finally be able to build Utopia. What struck me immediately is this statement and affirmation: “It is by rendering intelligible what it is and where it has come from that intelligence can repurpose and reshape itself. A form of intelligence that wills the good must emancipate itself from whatever or whoever has given rise to it.” The notion of our progeny, our machinic children and AIs, emancipating themselves “from whatever or whoever has given rise to it” bodes no Good for the progenitors (read: humans), who will become bit players in this artificial paradise of intelligences. As he suggests, “the good is in the recognition of its own history and sources, but only as a means for determinately bringing about its possible realizabilities that may in every aspect differ from it”. For machines, Utopia; for humans, a dystopian vision of transition, replacement, and enslavement.
I’m trying to treat poetry itself as a kind of “skunkworks” of literature, a kind of top-secret research facility, where we can reverse-engineer the alien technology of language itself. I believe that poetry must think of itself as kind of R&D, setting out to foment new discoveries or create new inventions.

– Christian Bök
This is the opening salvo in a new series of posts on contemporary poets. It won’t be so much critical as exploratory, since I’ve not yet read in depth many of the poets I’ll be assaying; rather, it will spotlight the various experiments ongoing within current poetic work. Even a basic awareness that such poets exist and are thriving might help others benefit from fellow laborers in the craft.
I chose to look at Christian Bök because of his alliance with many of the current trends in other forms of art, philosophy, and the sciences. From what I’ve read so far of his work, I see it as contemporaneous with much of the work being done in the realms of speculative realism, as well as forms of new materialism. With its emphasis on sound blocks and artificial intelligence, the digital and the compositional, it seems to be moving in the experimental region of the avant-garde at the forefront of our moment.
“We are perhaps the first generation of poets who can reasonably expect to write poems for a machinic audience…” says Christian Bök in his essay When Cyborgs Versify.1 He tells us that as he began writing The Cyborg Opera he began to wonder how a “poetic cyborg of the future might grow to find its own voice amid the welter of our cacophonic technology” (p. 129). He admitted to Charles Bernstein, another contemporary poet, of a certain elitism in his poetic stance, saying,
Very few people are actually willing to make the kind of commitment that’s often required to be immersed within this kind of literature, especially since there are very few material rewards for such dedication. (see On Being Stubborn)
With his roots in Dada, Bök’s reputation as a sound poet runs deep and has become a staple of his oeuvre. Eunoia is his best-known work, providing a glimpse into his univocalics: each chapter is restricted to a single vowel, omitting the other four. As Darren Wershler-Henry would say of this work in his review Eunoia: The Patriarch And Incest, Bök’s poem is “a triumph over the revolution of the human condition”:
Eunoia was not so much written by Bok as belched forth in a fit of sublime inspiration. Eunoia‘s incorporation of sensuality is in keeping with its Modernist point-of-view. As pure allegory, Eunoia was assailed for such statements; this reasoning differs radically from traditional theories of the mid 19th century renaissance of Ottoman literature.
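The univocalic constraint is mechanical enough to check by program. As a toy illustration (the function and the second sample phrase are my own, not part of Bök’s apparatus; the first sample echoes the opening of Eunoia’s chapter “E”), a few lines of Python can verify that a text admits only one vowel:

```python
def is_univocalic(text, vowel):
    """Return True if `text` uses no vowel other than `vowel`."""
    forbidden = set("aeiou") - {vowel}
    return not any(ch in forbidden for ch in text.lower())

# Opening line of Eunoia's chapter "E":
print(is_univocalic("enfettered, these sentences repress free speech", "e"))  # True
# An ordinary English phrase fails the constraint:
print(is_univocalic("a quick brown fox", "e"))  # False
```

The check ignores consonants (including “y”), which matches the spirit of the constraint: only the five canonical vowels are rationed.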
Bök is the author of Crystallography (Coach House Press, 1994), a pataphysical encyclopedia nominated for the Gerald Lampert Memorial Award, and of Eunoia (Coach House Books, 2001), a bestselling work of experimental literature, which has gone on to win the Griffin Prize for Poetic Excellence. Bök has created artificial languages for two television shows: Gene Roddenberry’s Earth: Final Conflict and Peter Benchley’s Amazon. Bök has also earned many accolades for his virtuoso performances of sound poetry (particularly the Ursonate by Kurt Schwitters). His conceptual artworks (which include books built out of Rubik’s cubes and Lego bricks) have appeared at the Marianne Boesky Gallery in New York City as part of the exhibit Poetry Plastique. Bök is currently a Professor of English at the University of Calgary.2
Marjorie Perloff and Craig Dworkin in their introduction to The Sound of Poetry / The Poetry of Sound will reiterate Samuel Johnson’s admonishment that lyrical poetry once accompanied the lyre, a musical instrument; and that the “irreducible denominator of all lyric poetry must, therefore comprise those elements which it shares with music… it retains structural or melodic origins, and this factor serves as the categorical principle of poetic lyricism” (p. 7). Yet we might also remember Austin Warren, who once told us that a worthwhile theory of poetry “falls neither into didacticism nor into its opposite heresies, imagism and echolalia. The real ‘purity’ of poetry—to speak in terms at once paradoxical and generic—is to be constantly and richly impure: neither philosophy, nor psychology, nor imagery, nor music alone, but a significant tension between all of them.”3 This sense of tension or conflict between things, whether human or not, is at the heart of many aspects of our current thought, which seeks to stay with those breaks, gaps, and cracks between the Real and reality without confusing the one for the other; and to realize that, above all, it’s our failure to grasp or understand things, to reduce them to some monocular sameness, that gives us the dynamic and dialectical restlessness we need to create and invent our futures while keeping them open and incomplete.
I’ve written a short poem, and then through a process of encipherment, I’ve translated it into a sequence of genetic nucleotides, which I’ve manufactured at a laboratory, and then, with the assistance of my scientific collaborators, I’m going to implant this gene into the genome of an extremophile bacterium called Deinococcus radiodurans. I’ve written this poem in such a way that, when translated into this genetic sequence, my text actually causes the organism to interpret it as a set of meaningful, genetic instructions for producing a protein, which, according to my original, chemical cipher, is itself yet another meaningful poem.
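The encipherment step Bök describes can be pictured as a simple substitution code. His actual chemical cipher is far more constrained (the gene and the protein it expresses must both decode as poems), so the mapping below is purely hypothetical, sketching only the principle of writing letters as codons:

```python
from itertools import product

# Hypothetical cipher, for illustration only: assign each letter (and the
# space) one of the 64 possible three-base codons over the DNA alphabet.
CODONS = ["".join(p) for p in product("ACGT", repeat=3)]  # 64 codons
LETTERS = "abcdefghijklmnopqrstuvwxyz "                   # 27 symbols <= 64
ENCODE = dict(zip(LETTERS, CODONS))                       # letter -> codon
DECODE = {codon: ch for ch, codon in ENCODE.items()}      # codon -> letter

def encipher(poem):
    """Translate a poem into a nucleotide sequence, codon by codon."""
    return "".join(ENCODE[ch] for ch in poem.lower())

def decipher(sequence):
    """Recover the poem by reading the sequence in codon-sized frames."""
    return "".join(DECODE[sequence[i:i + 3]] for i in range(0, len(sequence), 3))

gene = encipher("a poem in the genome")
assert decipher(gene) == "a poem in the genome"
```

The hard part of The Xenotext is precisely what this sketch omits: choosing a cipher whose enciphered text, once expressed as a protein by the bacterium, spells out a second legible poem.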
This mixture of poetry, science, experiment, operation, sound, and empirical investigation all seems appropriate in a world where speculations around the disappearance of the natural and of Nature have become clichés, while the artificial and the inhuman have taken on a more ominous tone in both science and art. If bacteria can replicate and produce poetry, what next? Speaking of his teaching, he once asked his students “to name their favorite, canonical work of poetry about the moon landing—and of course, they can’t, because it hasn’t yet been written; but, if the ancient Greeks had built a trireme and rowed it to the moon, you can bet that there would’ve been a 12-volume epic about such a grandiose adventure. I’m just surprised that, despite the fact that the 20th Century has seen intercontinental battles and extraterrestrial voyages that would rival the fantasies found in our epic works of classical literature, poets don’t seem willing to address the discourses of these cultural activities…”. Bök, unlike many poets, has moved from a historical to a futuristic vision, one that might parallel our science-fictional constructions:
I think that, right now, very few of us know how to be “poets of the future.”
Works by Christian Bök:
‘Pataphysics: The Poetics of an Imaginary Science (2001)
Jeremy Howard, CEO of Enlitic, is exploring these capabilities for medical applications. He was an early adopter of neural-network and big-data methodologies in the 1990s. As the president & chief scientist of Kaggle, a platform for data science competitions, he witnessed the rise of an algorithmic method called “deep learning”.
In a recent interview he recounts that in 2012 deep neural networks started becoming good at things that previously only humans could do, particularly at understanding the content of images. Image recognition may sound like a fairly niche application, but when you think about it, it is actually critical. Computers used to be blind; today they are more accurate and much faster at recognising objects in pictures than humans are.
He explains that the difference between humans and machines is that once you create a specific module, you don’t have to have every machine learn it: we can download these networks between machines, but we can’t download knowledge between brains. This is a huge benefit of deep-learning machines, one we refer to as “transfer learning”. The only things holding back the growth of machine learning, he states, are (1) data access and (2) the ability to do logic.
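The point about “downloading” networks can be sketched in a few lines: what moves between machines is just the learned parameters, which a second machine can load without retraining. The class and weights below are invented for illustration and stand in for a real trained network, not Enlitic’s or Kaggle’s code:

```python
import json

class TinyModel:
    """A minimal linear model standing in for a trained network."""

    def __init__(self, weights=None):
        # weights[0] is a slope, weights[1] an intercept.
        self.weights = weights if weights is not None else [0.0, 0.0]

    def predict(self, x):
        return self.weights[0] * x + self.weights[1]

    def export_weights(self):
        # "Downloading" a network is just copying its learned parameters.
        return json.dumps(self.weights)

    @classmethod
    def from_weights(cls, blob):
        return cls(json.loads(blob))

trained = TinyModel([2.0, 1.0])                            # learned on machine A
clone = TinyModel.from_weights(trained.export_weights())   # loaded on machine B
assert clone.predict(3.0) == trained.predict(3.0) == 7.0   # identical behavior
```

A brain offers no analogous export step: the “parameters” of human learning cannot be serialized and handed to another person, which is the asymmetry Howard is pointing at.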