Posthuman Futures

As I go down the rabbit hole into my posthuman landscape with the help of David Roden and Justin Isis, I begin seeing strange things…

In fashion, a contamination of period styles from all aspects of earth history; an influx of genetic hybridity from predatory insectoids (i.e., the greatest predators of the insect kingdom: Arachnocampa luminosa, the dragonfly, the siafu ant, the praying mantis, the Japanese hornet); an autocratic predatory society based on abstraction and the absolute sensation of surface tensions, grounded in bionomic-nanotech body armor and resilience; a world based not on the inheritance of blood-lines but on mood and ambition (Justin Isis: one in which the ancient sense of heraldry returns in sigil-suits, holographic tags, drone eyes); artificial flesh and clothing incorporating AI and quantum-matrix infusions. An architecture as well that shifts as the winds of climate change transform it: mobile, ready to move, based on extensive and elaborated, elegant and heraldic systems of biotech-solar-nanotechnology that construct themselves out of local environmental needs and designs. This is not a near-future world; it belongs instead to the abstract category of speculative futures incorporating the posthuman and neo-decadent paradigm.

There is much to do, much to work through, much to design as one incorporates a multigenetic hybridity, multicultural refractions, multihistorical infusions, multitechnological incorporations, and a neo-economic futurism based on letting the abstractions of the Outside in. I’m only at the conceptual stage, tentatively elaborating the tendencies toward such a world. Of course, I’m incorporating and as always beholden to the current posthuman scholarship, artists, architects, designers, fashion, etc. I’d have to name a hundred names that over the past decade have influenced my thought; I need to gather a list. But two have hit me from different though parallel lines of thought and mood: Justin Isis (and Neo-Decadence) and David Roden.


©2022 S.C. Hickman. All images were created with Blender 3D, Photoshop, Midjourney AI, and other digital tools.

The Posthuman Other

“Rhetorics of depth or intensity must be sacrificed, not because actual bodies are abstractions, but because unbound posthumanism cannot frame the deracinative effects of the future as the adventure of some given subject (whether human, animal, mundane, or transcendental). If this future can be embodied, it is by remaking and remarking bodies, reiterating the disconnection that lifts the formerly human into the orbit of the posthuman.” (p. 82). …

“Posthumanism explores the possibility space of subjectivity through performance— mutating and experimenting with exemplars and models (biomorphs) rather than by inference or dialectics.” (p. 82). …

“I introduce the idea of limit agency to motivate the claim that our concepts of agency might be too parochial to travel far outside our historical niche. If so, unbinding posthumanism requires us to relinquish them as constraints on the potentialities released by the posthuman predicament. Thus, even the ecological agent of Posthuman Life proves too “speculative” for speculative posthumanism, which thus loses its means of identifying disconnection events. We must withdraw from speculations on technological deep-time bounded by a psychology-free ecological agency to terrain where disconnection becomes “maximally unbound.”” (p. 85).

—David Roden, Posthumanism: Critical, Speculative, Biomorphic

Note: Somewhere within these beings above there is a subtle inflection of David Roden himself. I used his photos and his thoughts, both philosophical and fictional, to create these posthuman beings who may or may not emerge out of some artificial agency in our coming age of disconnect.



©2022 S.C. Hickman. All images were created with Blender 3D, Photoshop, Midjourney AI, and other digital tools.

A Short History of Necropunk Philosophy

Decided to move this from my last post on my work-in-progress Savage Nights.

Thinking of Capitalism as a necropunk invasion from the future, driven by death-drives, cannibalizing itself through crisis, collapse, and catastrophe, is at the core of what Bataille, and Nick Land after him, would term “base materialism” converging on the closure of history into a posthuman future. Or, what my friend Scott Bakker would term the ‘crash space’ of the Semantic Apocalypse.


Chronicles of the High Inquest by S.P. Somtow

Working on a new near-future Grunge or Necropunk Noir science fiction project, I began collecting information on past uses of this notion. For me the master stylist of this genre remains Richard Calder with his Dead Girls/Dead Boys/Dead Things trilogy (see review). Calder lived in Thailand from 1990 to 1996, and later in the Philippines until returning to London in the first years of this century; he began publishing sf with “Toxine” in Interzone. Yet there is also S.P. Somtow, whose works may or may not have influenced Calder’s fusion of decodence, decadence, and necrotical politics and socio-cultural inflections, but which have at their base the necropunk style and philosophy that seems to infect, contaminate, and corrupt this genre through its hyperstitional, memetic, and egregore enactments: disclosures of the ways in which the future infects and bleeds into the past through slippage.


John von Neumann: Complexity – From Representation to Performativity


In his Theory of Self-Reproducing Automata, John von Neumann, one of the fathers of the modern computer, tells us:

there is … this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.1

This notion that complex systems can at certain thresholds begin to degenerate, but that at other boundary lines they suddenly shift into gear and begin to create more complex systems with greater potential and adaptive capabilities, is now a cornerstone of certain forms of computing. It is upon this very principle of complexification that many of the popularizers of the singularity and of AI theory base their claims.
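
To make the intuition concrete, here is a minimal toy sketch in Python; it is not von Neumann’s formal construction, and the critical size used is an arbitrary assumption of mine. Below the threshold each “offspring” automaton loses detail on average and the lineage degenerates; above it, offspring can exceed their parent and complexity compounds.

```python
import random

CRITICAL_SIZE = 50  # assumed threshold; von Neumann gives no concrete number


def reproduce(complexity):
    """Return the complexity of an offspring automaton.

    Below the critical size the copying process loses detail on average
    (degenerative); above it the parent can embed more than a full
    description of itself, so offspring tend to exceed it (explosive).
    """
    if complexity < CRITICAL_SIZE:
        return max(0, complexity - random.randint(1, 5))
    return complexity + random.randint(1, 5)


def lineage(start, generations=10):
    """Trace the complexity of successive generations from a starting size."""
    sizes = [start]
    for _ in range(generations):
        sizes.append(reproduce(sizes[-1]))
    return sizes


print("sub-critical:  ", lineage(40))  # shrinks toward zero
print("super-critical:", lineage(60))  # grows without bound
```

Nothing in the sketch explains why such a threshold should exist, of course; it only restates the claim in executable form, which is precisely the move from representation to performativity that the rest of this post is about.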

One of the keys to both robotics and cybernetic theory is its adaptation of what Quentin Meillassoux terms the correlationist circle, or, as the cyberneticists of that era defined it, the coupling of brain and environment as a unified field of self-organizing processes. Ross Ashby, another of the Macy Conference pioneers in cybernetics, would study homeostasis, or how organisms stabilize themselves in various environments. He was not so much interested in the mechanism of the adaptation process as in how this process could be modeled by a machine. (ibid. 41)

These scientists were trying to find ways to move beyond observational behavior theory, which needed a human observer to document, describe, and define these processes from the outside, objectively. Rather, they were seeking mathematical patterns that could immanently register the effects of the environment through variables internal to the system itself, and that, if certain criteria were met, would allow the pattern switches within the system to adapt, in inverse relation to changes in the environment, to the pressure of the external world. In some ways these scientists were doing in pragmatic, engineering practice what philosophers try to do in their development of realist philosophies. We know these as self-organizing processes.

Ashby points out that if the uniselectors in some of the units are “locked,” they can be regarded as the environment, while the remaining units can be regarded as the “brain” struggling to control changes in the environment by searching randomly for a stable combination of the configurations in all the units, that is, for the system as a whole. Not surprisingly, many of the conference participants voiced difficulties in seeing how this randomized mechanism models the organism’s adaptation to changing variables in the environment. (ibid. 43)

One participant, Julian Bigelow, remarked of the Homeostat machine, “It may be a beautiful replica of something, but heaven only knows what”. In response Ashby told him that the homeostat “is really a machine within a machine.” (ibid. 43) In the process of describing the feedback loops between the two machines and the environment in the stabilization process, the group began to see similarities with how organisms learn through memory and retention feedback loops. The only issue was that if Ashby’s machine were unplugged from one environment and plugged into another, it would forget the original one and have to relearn it all over again. In some ways this was the beginning of a pragmatic assessment of first- and second-order organization and reflection processes that would later underpin notions of how our brain and consciousness interact.
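
For readers who want the principle rather than the hardware, here is a rough Python sketch of Ashby’s ultrastability idea as I understand it; the linear dynamics, the bounds, and the four-unit count are my own simplifying assumptions, not a model of the homeostat’s actual circuitry. Whenever an “essential variable” drifts out of bounds, the uniselectors step to a new random configuration, and the random search continues until a stable combination for the system as a whole is found.

```python
import numpy as np

rng = np.random.default_rng(0)

N_UNITS = 4    # Ashby's homeostat coupled four units
LIMIT = 1.0    # bound on the "essential variables"
STEP = 0.1     # integration step for the toy dynamics


def random_couplings():
    """One 'uniselector' move: draw a new random coupling matrix."""
    return rng.uniform(-1.0, 1.0, size=(N_UNITS, N_UNITS))


def run_until_stable(max_resets=1000):
    """Re-wire the couplings at random whenever any variable leaves its
    bounds; return how many uniselector resets were needed to find a
    configuration that keeps every variable inside the limits."""
    W = random_couplings()
    x = rng.uniform(-0.5, 0.5, size=N_UNITS)
    for resets in range(max_resets):
        settled = True
        for _ in range(200):               # let the dynamics run for a while
            x = x + STEP * (W @ x - x)     # simple linear relaxation
            if np.any(np.abs(x) > LIMIT):  # an essential variable went out of bounds
                settled = False
                break
        if settled:
            return resets
        W = random_couplings()             # the uniselector steps to a new setting
        x = rng.uniform(-0.5, 0.5, size=N_UNITS)
    return max_resets


print("uniselector resets before stability:", run_until_stable())
```

The point of the exercise is the one the Macy participants struggled with: nothing in the loop “represents” the environment; stability is simply performed, or not, by the coupled system.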

Yet, for Ashby and the other participants, the brain, like the homeostat, was simply a material switching device, connected through sensors and effectuators with the forces of the environment. It does not “represent” the world but provides a complex, dynamic way of engaging it. (ibid. 45-46) It would take years for scientists to come up with other analogies and frameworks within which new questions on such problems would transform into what we now term the neurosciences, AI, robotics, etc.

Andrew Pickering, in The Mangle of Practice: Time, Agency, and Science (1995), will try to answer some of the questions that stymied these early pioneers. He observes that “traditionally, science studies has operated in what I called the representational idiom, meaning that it has taken for granted that science is, above all, about representing the world, mapping it, producing articulated knowledge of it.” Thus science studies is essentially “a venture in epistemology.” Pickering finds, however, that this approach is inadequate to the “analysis of [scientific] practice” and argues therefore that “we need to move towards ontology and what I call the performative idiom – a decentred perspective that is concerned with agency doing things in the world and with the emergent interplay of human and material agency”. Cybernetics, and particularly the work of the English cyberneticists, Pickering now realizes, “is all about this shift from epistemology to ontology, from representation to performativity, agency and emergence, not in the analysis of science but within the body of science itself”. (ibid. 46)

What he is describing is a shift in perspective and frameworks, in the ways in which we frame questions and problems: a shift from the anti-realist Kantian epistemic worldview to newer materialist frameworks within which performativity rather than representational thought matters. As Johnston states it:

In these terms, the ambiguity in Ashby’s discourse and the confusion among the Macy participants makes perfect sense: both Ashby and his interlocutors are caught up in a moment of transition from one discursive framework to another, contradictorily viewing the homeostat both as a model according to the representational idiom and, according to the performative idiom, an ontologically new kind of a machine capable of surprisingly complex behavior. As Pickering notes, in relation to industrial machines typical of its day the homeostat can be said to possess “a kind of agency: it did things in the world that sprang, as it were, from inside itself, rather than having to be fully specified from outside in advance”. It is precisely this new form of agency that makes comparisons between the new cybernetic machines and living organisms inevitable, while also obscuring the singular ontology of these new machines. (ibid. 46)

This change from representational discursive thinking to performativity and modeling, epistemology to ontology, anti-realism to speculative realist modes is at the heart of both scientific practice and many of the new philosophical approaches. One can see in the work of Deleuze and Zizek, and many others a critique of representational frameworks and idioms, while at the same time a groping toward newer performative idioms and models as they explored aspects of the Kantian heritage and its problematique.

1. John Johnston. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI (p. 39). Kindle Edition.

Musing n’ Shit: Sirkústjaldið: Revisiting Björk and the new Internet Aesthetics

For those who have long felt Björk to be a part of the allure attracting itself toward our disconnected post-human drift, this essay from Reykjavik Sex Farm is both a great refresher and a good introduction to her filmic and musical career. As I was watching the video All is full of love, with its asexual sensualism – the abject thrust of machinic love being recalibrated by the becoming-other of an almost post-Fordist assembly-line atmosphere of pureness set adrift among white scapes of a lab-like chrome and naked enclosure: the tooled perfection of anonymous robots twisting and turning, poking and screwing, channeling fluids and bolting together this makeshift humanoid creature – I felt this sense of abject disconnect, the slow realization that what I’m seeing is the origin not of life per se, but of machinic being in its awakening beyond the human.

Watching these more-than-human machines mimic the gestures of humans in sexual signification through touch and facial textures of kissing and surface movement of material awakening, I kept thinking to myself that this new form of love leaves behind the natural modes of generation, revising the very core of our material existence in sexuality and replacing it with a conceptual love that is neither pure mind, nor purely part of the complex psychosomatic involvement of the human body, its fleshy rawness. What we are faced with is the simulation of love in its conceptual purity divorced of the human: the inhuman kernel of sex without sexuality, the concrete portrayal of the human act without the human; yet with all the sensual foreplay that humans accentuate in their actual interactions. The facial expressions expose this inhuman core through their very uncanny resemblance to actual human gestures. We feel their awakening to sensual love, and yet in the very movement of their machinic appendages we realize the sterile appeal of it all, the almost distant reduplication of the human ‘as’ human with the very disconnect from the human-as-flesh. For it is this absence of the human in the very coupling of these machines that (re)presents for us that uncanny intertwining of the negativity of self-reflecting nothingness which captures the very inhuman core of our conceptuality.

These are machines for whom death is no longer of the essence, whose very physical truth is the standardized parts and replaceable metal and plastic appendages typifying the eternal sterility of life-in-death. These are the living dead, the zombie children of a new world where the symbolic order is invisible to the nth degree, so internalized that it seems to repeat the endless patterns of the human without the human – this absent-while-present appeal. The conceptual truth of the human without its physical manifestation: the blood and guts of an actual fleshly core. What does that tell us about ourselves? Or, better yet, what does this tell us about what we want? Is this the ‘abyss of freedom’ of which Schelling and Hegel speak, the disjunctive separation of the conceptual subject-form from its natural and symbolic contexts in the pure play of signifiers without a signified, the free-floating play of thought in all its artificial truth? Or is this the movement of the abyss as it leaves behind the dark drives that have bound us to the earth for so long? Are we seeing the final movement of life into anti-life, the machinic existence of thought without the disgusting fleshly core that ties us to the clock-worlds of our ancestral linkages? Is this truly what we want?

Reykjavik Sex Farm!


So this piece was actually written all the way back in February, when Björk announced the release of her current album, Vulnicura. I was asked by the HI arts and humanities website Sirkustjaldið to write some pieces of my own choosing about cultural points that interested me. Alas, Sirkustjaldið hasn’t quite worked out in the way I hoped it would. There were translation issues (I know for a fact that the likes of Kodwo Eshun’s More Brilliant than the Sun, with its lyrical tech-syntax, will almost certainly NEVER be translated into Icelandic), but also other issues, like restrictive word counts (for a website magazine!) in some blind adherence to optimization metrics, which did grate a little. And even though this piece has been translated and edited for weeks, it still hasn’t been uploaded! Not a good sign. Oh well.

Anyway, what really intrigued me about Vulnicura wasn’t the album themes of heartbreak…


Guy Debord: A Philosophy of Time

 

The revolutionary project of a classless society, of an all-embracing historical life, implies the withering away of the social measurement of time in favor of a federation of independent times — a federation of playful individual and collective forms of irreversible time that are simultaneously present.

– Guy Debord,  Society of the Spectacle

Time, power, value, and technics, when seen for what they are, awaken us to the concept of governance, which is at the core of the neoliberal global accelerationist project of absolute governance. Etymologically the concept of governance arises out of the Latin gubernare: to direct, rule, guide, govern, originally “to steer,” a nautical borrowing from the Greek kybernan, “to steer or pilot a ship, direct” (the root of cybernetics; see the Online Etymology Dictionary). This notion of steering, directing, guiding, governing coalesces in the mutations of temporal relations that have transformed our planet into an accelerationist machine of consuming time, a feeding frenzy that takes in everything organic and inorganic in its closing horizon of conceptuality.

Marx in the Grundrisse would describe this temporal process as the interplay between flow and interruption (disruption) in the machinic processes of capital itself. For Marx, humans (labor) are seen within the machine, or automatic system of machinery, “merely as its conscious linkages”:

In no way does the machine appear as the individual worker’s means of labor. Its distinguishing characteristic is not in the least, as with the means of labour, to transmit the worker’s activity to the object; this activity, rather, is posited in such a way that it merely transmits the machine’s work, the machine’s action, on to the raw material – supervises it and guards against interruption [Italics Mine]. Not as with the instrument, which the worker animates and makes into his organ with his skill and strength, and whose handling therefore depends on his virtuosity. Rather, it is the machine which possesses skill and strength in place of the worker, is itself the virtuoso, with a soul of its own in the mechanical laws acting through it…(Marx, Chapter on Capital, Notebook VI 692-693)1

This notion that the machine is the creative and vital (soulful) virtuoso, rather than the humans supervising it and guarding it against interruption, introduces one of the earliest renditions of what would come to be known as the cybernetic revolution, which would only in our time come to complete fruition. Reading Franco Berardi’s e-flux essay Time, Acceleration, and Violence, I was struck by its first paragraph, where he asks:

What do you store in a bank? You store time. But is the money that is stored in the bank my past time—the time that I have spent in the past? Or does this money give me the possibility of buying a future? 

We’ve all heard the old shibboleth of Benjamin Franklin, “Time is money!” Berardi will tell us that all of this is clear: value is time, capital is value, or accumulated time, and the banks store this accumulated time. He will remind us that in Symbolic Exchange and Death, Baudrillard brought forth the notion that temporality is the key to financial capitalism, a unique fulfillment of Heisenberg’s “uncertainty principle” at the level of finance: the complete loss of any fixed relation between time and value. Berardi will contextualize this as a war between various cultural frames: Italian futurism as the masculinization of time, the accelerationist warrior credo, and so on; one that would lead to fascism and would mark the crucial point of passage from feminine shame to masculine acceleration culture, to pride, aggressiveness, war, industrial growth, and so forth. But it remains a search for another perception of time, for a way of forgetting one’s own laziness, slowness, and sensitivity by asserting a perception of time in which one is a master—a warrior and builder of industry. (see Berardi)

As I began thinking through this biting reversal in Marx, with the machine as Creative Agent rather than human labor (which is seen as subsidiary and servile, a mere regulator and gatekeeper of disruptions), and through these various senses of time and value, along with the dialectical line of cultures of shame and guilt, deceleration and acceleration, agricultural civilization vs. industrial civilization, etc., I began realizing that this “perception of time” Berardi teases out is in need of further examination.

I decided to reread Guy Debord’s Society of the Spectacle recently, and realized that at the center of its theme lies the leitmotif of temporal relations as a philosophy of Time and Civilization. For it is here that he develops the kernel of the historical battle between cyclic civilizations and the accelerationist civilization of the machine that would underpin much of Marx’s critique of Capitalism. It is not a gnostic or Manichean vision of opposites, but a historical vision of how humans have oriented and organized their modes of life, labor, and value across time.

The nineteenth century would see the consolidation of the Enlightenment project with its centralization of time as irreversible: progress, development, improvement, modernity, etc. Within the void of each of these concepts would hide the concept of “efficiency,” which allowed a mathematical and quantifiable way of calculating labor time and productivity, and the attendant fears of waste, especially the waste of time.3 Efficiency was never about increasing productivity in the Progressive Era; rather it aimed at guaranteeing a reliable, regular rate of production and cultivating reliable, steady habits of character. It was a tool of self-management and personal stability in the face of turbulent change. (Alexander, KL 1451) So efficiency was a tool to control and shape time as progressive time:

Efficiency was … embedded in a rhetoric of dynamic, transformative power. Balanced efficiencies provided the reliable elements of economic or social transformation, the interchangeable and standardized parts, the unchanging substrata, upon which a new bureaucratic order of interaction and adjustment, of change, might be built. (Alexander, KL 1453)

Progressive ideologues, engineers, and thinkers defined rationalization as “everything that could restore equilibrium,” and many would describe rationalization as seeking the “‘efficiency’ key to orderly social and individual life,” with economic stability almost invariably given as its goal. (Alexander, KL 1562)

Crucial to rationalization was a concept of flow. It could describe the assembly line and other practices for keeping the productive works in continual motion… But flow also carried another meaning, referring not to specific techniques but to a more general ideology of undisturbed production. If the solution to social and economic crisis lay in the raising of living standards through cheaper and more plentiful goods, then whatever imperiled production further imperiled a society already in crisis. Many technical measures were undertaken to streamline production, including standardization in many forms, of work schedules, parts and sizes, and methods of production; widespread adoption of new cost-accounting methods; and a host of technical measures to reduce waste… (Alexander, KL 1564)

As Alexander informs us, behind efficiency lay a legacy of balance and a worry about waste, expressed in its assumptions that one ought to get as much as possible out of what one had put in, not only enough to be productive or to show a profit but enough to show that the system was under control. (ibid. KL 1811) And, as we know, control is both mastery and self-mastery. In its most general definition the word control represents purposive influence toward a predetermined goal. Most dictionary definitions imply these same two essential elements: influence of one agent over another, meaning that the former causes changes in the behavior of the latter; and purpose, in the sense that influence is directed toward some prior goal of the controlling agent.4
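
Beniger’s two elements can be made almost painfully literal in code. The sketch below is only an illustration, with a hypothetical setpoint and gain of my own choosing: one agent measures another, compares it against a predetermined goal, and applies a corrective influence until the goal is approached.

```python
def control_step(current, goal, gain=0.3):
    """One cycle of purposive control in Beniger's sense: measure the
    controlled variable, compare it with a predetermined goal, and apply
    an influence proportional to the error."""
    error = goal - current
    return current + gain * error  # the "influence" nudges the system toward the goal


# Hypothetical numbers: a room at 15 degrees steered toward a 21-degree setpoint.
temperature, setpoint = 15.0, 21.0
for step in range(10):
    temperature = control_step(temperature, setpoint)
    print(f"step {step}: {temperature:.2f}")
```

Trivial as it is, the loop contains both halves of the definition, influence (the nudge) and purpose (the setpoint); what Debord describes as the social measurement of time can be read as this loop scaled up to a civilization.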

The rationalization of society with the rise of the Fordist economies, with their need to reduce waste, opened the door to regulatory bureaucracies to control and oversee the governance and management of time, value, labor, etc., within society, technology, and the corporation alike. It is here that we begin to see how the older forms of control in government and markets had depended on personal relationships and face-to-face interactions; in our time control is reestablished by means of bureaucratic organization, the new infrastructures of transportation, and the Information and Communications Technologies (ICTs). The new accelerationist economies are based on global societal transformation, their attendant rapid innovation in information and control technology accelerating Just-In-Time production in endless productivity cycles without waste: a process that seeks to regain control of functions once contained at much lower and more diffuse levels of society, but which are now becoming invisible and ubiquitous as we move into the technocapitalist paradigm of intelligent economies based on the financialization of Big Data, etc.


Guy Debord will portray this history in phases of cyclical (agricultural society), irreversible (industrial), and pseudocyclical (postmodern) notions of time, technics, and civilization in his Society of the Spectacle. He will see within the agrarian mode of production, governed as it is by the rhythm of the seasons, the basis for a fully developed cyclical time of eternal return of the Same. Eternity is within this time, it is the return of the same here on earth. Myth is the unitary mental construct which guarantees that the cosmic order conforms with the order that this society has in fact already established within its frontiers. (Debord, Section 126)

Yet, as agricultural civilization took off and the sedentary, food-producing societies came into conflict with the older hunter-gatherer societies, there arose the need for authority and security, so that the first cities and centralized bureaucratic organizations of religious accounting and kingship arose. The social appropriation of time and the production of man by human labor develop within a society divided into classes. The power that establishes itself above the poverty of the society of cyclical time, the class that organizes this social labor and appropriates its limited surplus value, simultaneously appropriates the temporal surplus value resulting from its organization of social time: it alone possesses the irreversible time of the living. (Debord, Section 128)

This is the time of adventure and war, the time in which the masters of cyclical society pursue their personal histories; it is also the time that emerges in the clashes with foreign communities that disrupt the unchanging social order. History thus arises as something alien to people, as something they never sought and from which they had thought themselves protected.

This irreversible time is the time of those who rule, and the dynasty is its first unit of measurement. Writing is the rulers’ weapon. In writing, language attains its complete independence as a mediation between consciousnesses. But this independence coincides with the independence of separate power, the mediation that shapes society. With writing there appears a consciousness that is no longer carried and transmitted directly among the living — an impersonal memory, the memory of the administration of society. (Debord, Section 131) Yet Debord will see a double-edged distinction between the masters and the workers (slaves): while the masters played the role of mythically guaranteeing the permanence of cyclical time, they themselves achieved a relative liberation from cyclical time. (Debord, 132)

So this notion that the common man lived in an eternal present, cut off from history and time as an irreversible arrow, while the upper elites, kings, warriors, etc. lived in a “recorded time,” a time that counted and was marked down for future generations to remember, would form the backdrop of all future social relations. The rulers owned time, and time was the first and greatest commodity: it guaranteed immortality and eternity for those who controlled it. We’ve seen this in works by Herbert Marcuse (Eros and Civilization), Norman O. Brown (Life Against Death), and Ernest Becker (Escape From Evil), each of which combined readings of Freud with Marxian critiques of the solar mythologies of the ancients. Each would home in on the conceptual frameworks of myth, the sky-based mythologies as abstract mappings of order against chaos: the sky as a mathematical system or machine that could be calculated and measured with increasing care and exactitude, giving assurance of an orderly world, in which the ancient kings became the earthly representatives of the victorious sky gods. Our mathematical sciences would begin in astrology, the mapping and mathematization of the sky. Astronomy laid the base from which all the sciences emerged. The clock-work movements of the heavens and their dramas would influence philosophers and musicians to come.

After thousands of years of this interactive world of cyclic and irreversible time played out within the ancient world came the monotheistic religions, beginning for the West with Judaism. The monotheistic religions were a compromise between myth and history, between the cyclical time that still governed the sphere of production and the irreversible time that was the theater of conflicts and regroupings among different peoples. The religions that evolved out of Judaism were abstract universal acknowledgments of an irreversible time that had become democratized and open to all, but only in the realm of illusion. (Debord, 136)

Debord will remind us that the Middle Ages, an incomplete mythical world whose consummation lay outside itself, is the period when cyclical time, though still governing the major part of production, really begins to be undermined by history. An element of irreversible time is recognized in the successive stages of each individual’s life. Life is seen as a one-way journey through a world whose meaning lies elsewhere: the pilgrim is the person who leaves cyclical time behind and actually becomes the traveler that everyone else is only symbolically. (Debord, 137)

With the Enlightenment project and commodity Capitalism we would see the slow fabrication of a new myth, the myth of progress: one that would have as its goal the elimination of waste, or, more succinctly, the elimination not only of cyclical time but of historical time as well. A process that started two hundred years ago has, in financial capitalism, entered the ubiquitous time of an accelerating future. This is not the speed culture of Virilio’s politics of speed. Instead, as Debord tells it, the main product that economic development has transformed from a luxurious rarity into a commonly consumed item is history itself — but only in the form of the history of the abstract movement of things that dominates all qualitative aspects of life. While the earlier cyclical time had supported an increasing degree of historical time lived by individuals and groups, the irreversible time of production tends to socially eliminate such lived time. (Debord, 142)

This will be time as a pure commodity: “time is everything, man is nothing; he is at most the carcass of time” (The Poverty of Philosophy). As Debord describes it, this general time of human nondevelopment also has a complementary aspect — a consumable form of time based on the present mode of production and presenting itself in everyday life as a pseudocyclical time. (Debord, 148) As a production of commoditized time, pseudocyclical time is associated with the consumption of modern economic survival — the augmented survival in which everyday experience is cut off from decision making and subjected no longer to the natural order but to the pseudo-nature created by alienated labor. In our time pseudo-nature is termed the InfoSphere: the artificialization of our planet into layers of information and data. Abstracted out of the dead weight of natural existence, people live in virtual theatres of illusion rather than in older forms of existence, as inforgs, informationally embodied organisms, mutually connected and embedded in an informational environment, the infosphere, which we share with both natural and artificial agents similar to us in many respects.5

We live in artificial constructs of a spectacular world so naturalized and ubiquitous that we forget it is virtual illusion: this is the world of RealityTV as a DIY project in which we watch the world as a selfie in which we are the starring actors at one remove, doubles of ourselves roaming the virtual lanes in an infinite regress of image worlds receding further and further from our physically embedded life.

We watch our lives lived by our doubles on RealityTV in all its glorious inanity. Its vulgarized pseudofestivals are parodies of real dialogue and gift-giving; they may incite waves of excessive economic spending, but they lead to nothing but disillusionments, which can be compensated only by the promise of some new disillusion to come. The less use value is present in the time of modern survival, the more highly it is exalted in the spectacle. The reality of time has been replaced by the publicity of time. (Debord, 154) Time as a public relations event, a RealityTV series that keeps repeating itself endlessly on late-night comedy. A life in a pure void where communication is nothing more than canned laughter. All the while zombies stare into the videodrone tubes awaiting new instructions from their masters.

Against this dead world of zombie RealityTV, filled with doubles and double-talk oblivion, Debord would seek a “federation of independent times — a federation of playful individual and collective forms of irreversible time that are simultaneously present.” This would be the temporal realization of authentic communism, which “abolishes everything that exists independently of individuals.” (Debord, 163)

A quantum time that is both cyclical and irreversible: a paradox at the heart of the production of time as lived, one that is a difference that makes a difference? Only time will tell…

1. Karl Marx. Grundrisse. Penguin Books, 1993.
2. Debord, Guy (2011-03-15). Society of the Spectacle. Soul Bay Press. Kindle Edition.
3. Jennifer Karns Alexander. The Mantra of Efficiency: From Waterwheel to Social Control (Kindle Location 32). Kindle Edition.
4. Beniger, James (1989-03-15). The Control Revolution: Technological and Economic Origins of the Information Society (Kindle Locations 212-214). Harvard University Press – A. Kindle Edition.
5. Floridi, Luciano (2013-10-10). The Ethics of Information (p. 14). Oxford University Press, USA. Kindle Edition.

Linda Nagata: The Bohr Maker – A Posthuman Fable

Nikko, who was in truth only a program himself, a modern ghost, an electronic entity copied from the mind of his original self, had little patience for Dull Intelligences.

– Linda Nagata, The Bohr Maker

“By the beginning of the twentieth century, it was becoming clear that the engines of life operated at the molecular scale. How can we understand such machines, and how does their operation relate to the macroscopic machines of our everyday experience?”1 Reading Linda Nagata’s The Bohr Maker is like entering that moment of transition between our everyday world of commonsense and the ultrareal worlds of advanced NBIC technologies. Caught between the “folk image” of our ancient world views, centered in magic, religion, and voodoo, and the realms of the “scientific image” in which rationality alone is the guide, Nagata enacts her fable of our posthuman molecular destiny.


Gynoids of Love


What was it? Something in the way she moved,
A glance, a gesture? Why did I suddenly desire
To kiss her, to touch her hand, cheek? Was she real?
Nothing behind the eyes revealed intelligence;
Yet, in the movements of her machinic mind
I felt a resemblance, an old darkness come to life.
(Do we have a right to our perversities, our little madness’s?)
Then she sang with the voice of my old love. I died.

 – Steven Craig Hickman ©2014 Unauthorized use and/or duplication of this material without express and written permission from this blog’s author is strictly prohibited.

Notes: Am reading Steven T. Brown’s Tokyo Cyberpunk: Posthumanism in Japanese Visual Culture. In Ghost in the Shell 2, Oshii’s basic theme is a population of gynoids, erotic androids that have run amok and are killing their lovers. What is interesting in this is that the gynoids are synthetic constructs inhabited by young women who have been kidnapped and whose minds have been merged (“ghost dubbed”) with the cyberbrain. Brown talks of the relationship in Japanese culture to dolls and the uncanny line between life and death. He speaks of the influence of Hans Bellmer and his dolls on the director Oshii as well. Victoria Nelson’s The Secret Life of Puppets and Kenneth Gross’s Puppet: An Uncanny Life, as well as authors of horror fables such as Thomas Ligotti, have all dealt with such themes. I wonder why we have such a fascination with this liminal zone between the artificial and the human. We see it even in certain films for children: I was thinking of the Tom Hanks Christmas movie of a few years back about the train to the North Pole (The Polar Express), and of how lifelike the characters were becoming in the CGI-based graphics of the cartoon picture show. All uncanny and disturbing, yet fascinating at the same time. Strange.

I sometimes think that through art we are already preparing ourselves for that eventual transition to machinic life, as if through exploration of these artistic images we are being lured toward that merger and transformation into the alien or inhuman core that has always been our secret dream of metamorphosis. Or maybe the truth is much simpler: the android or gynoid is a becoming-void, an emptiness that suggests something beyond itself, an excess; it is the voiding of the human, an emptying out of its subjectivity, its sense of self and intentional awareness. And, of course, what is it Oshii’s gynoids have? At the center of this void, a cyberbrain onto which the sense of self of certain sacrificed young women has been wedded, assembled, constructed. It is this disembodied self or personality, enmeshed or embodied in the synthetic armature of an android body, that suddenly turns on its makers and lovers, seeking destruction and revenge. What does this tell us?

Posthumanism 101: Non-Fiction and Fiction

After all these posts on posthumanism of late, I’ve decided to move on from non-fictional reading, which honestly at this point will only add detail after detail to the aspects touched on by David Roden in his excellent book Posthuman Life: Philosophy at the Edge of the Human, where he defined the core concept as “the philosophical critique of anthropocentrism in its different flavours”.1 He divided this core value system into four flavors:

1. Speculative posthumanism (SP) – the primary concern of this book – opposes human-centric thinking about the long-run implications of modern technology.
2. Critical posthumanism is a broadly based attack on the supposed anthropocentrism of modern philosophy and intellectual life. 
3. Speculative realism opposes the philosophical privileging of the human–world relationship in Kantian and post-Kantian transcendental philosophy.
4. Philosophical naturalism is also opposed to the claim that philosophical truth claims can be arbitrated from a transcendental point of view but uses scientific theory as a constraint on philosophical truth claims. By contrast, while speculative realists are equally hostile to transcendentalism, many also oppose naturalism on the grounds that science is just another way of translating a mind-independent reality into forms that humans can understand.

Since David’s excellent framework invites an elaboration of texts, I thought it might be beneficial to fill out a basic reading list within each of these categories (it is not meant to be a complete bibliography, but my own personal list: take it or add your own – or leave a comment below with your favorites!):

Speculative posthumanism

1. David Roden. Posthuman Life: Philosophy at the Edge of the Human
2. Asher Seidel. Inhuman Thoughts: Philosophical Explorations of Posthumanity
3. Rosi Braidotti. The Posthuman
4. Dennis M. Weiss, Amy D. Propen, Colbey Emmerson Reid Editors. Design, Mediation, and the Posthuman

Critical Posthumanism

1. N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics
2. Cary Wolfe. What Is Posthumanism?

3. Stefan Herbrechter. Posthumanism: A Critical Analysis
4. Jussi Parikka. Insect Media: An Archaeology of Animals and Technology

Speculative Realism

Introductory Texts that will cover the main ideas and concepts from different perspectives (SR is an umbrella concept covering the work of several philosophers, some who even disown the umbrella concept altogether: see here):

1. Peter Gratton. Speculative Realism: Problems and Prospects
2. Tom Sparrow. The End of Phenomenology: Metaphysics and the New Realism (Speculative Realism)
3. Steven Shaviro. The Universe of Things: On Speculative Realism (Posthumanities)

Philosophical Naturalism

1. Stewart Goetz; Charles Taliaferro. Naturalism (Interventions)
2. John R. Shook; Paul Kurtz. The Future of Naturalism


Science Fictional Posthumanisms

1. io9 – Annalee Newitz. The Essential Posthuman Science Fiction Reading List

All I would add to her list is a couple of favorites:

2. Stanislaw Lem: Cyberiad, Solaris, His Master’s Voice, and anything else by Lem

Lem was a satirist at heart, but was a formidable encyclopedist and philosophical speculator, too. I consider him our Swift and postmodern Voltaire.

3. Greg Egan, H.G Wells, Bruce Sterling, Frederik Pohl, Greg Bear, Charles Stross, Neal Asher, Ken MacLeod all have works in this vein. Newitz above covers some of these. In fact one could probably cite hundreds of works in the posthuman vein.

Two that I’m currently reading are Linda Nagata‘s The Bohr Maker (series) and Hannu Rajaniemi’s The Fractal Prince (series), both of which I’ll be reviewing sometime in the future.

 

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Location 499). Taylor and Francis. Kindle Edition.

The Global Cyberwar: The Algorithms of Intelligent Malware

When the engineer left Natanz and connected the computer to the Internet, the American- and Israeli-made bug failed to recognize that its environment had changed. It began replicating itself all around the world . Suddenly, the code was exposed, though its intent would not be clear, at least to ordinary computer users.1

Wired has an article by Kim Zetter An Unprecedented Look at Stuxnet, the World’s First Digital Weapon which elaborates on the now widely known collaboration between US and Israeli intelligence agencies seeking a way to infiltrate and slow down or destroy centrifuges in the Natanz nuclear facility in Iran.

Needless to say they were successful, yet in their success they failed miserably. Why? As you read the quoted passage again, you notice that the code, originally carried into the closed facility on memory sticks, was released into its computers by way of those flash drives. After it was slowly unwound, installed, and phased into its operative mode, it began to work through the networks of the facility until, by chance or accident, it found itself outside the facility and on the internet. So, as James Barrat reminds us, we “do not know the downstream implications of delivering this powerful technology into the hands of our enemies. How bad could it get? An attack on elements of the U.S. power grid, for starters.” (Barrat, 261-262)

The article by Zetter doesn’t mention this fatal flaw in the plan, or how the malware is now spreading across the globe and is available even for our enemies to use against us. As Barrat recounts, Sean McGurk, a former head of cyberdefense at DHS, was asked in a CBS 60 Minutes interview whether, had he been consulted, he would have built such a malware application:

MCGURK: [Stuxnet’s creators] opened up the box. They demonstrated the capability. They showed the ability and the desire to do so. And it’s not something that can be put back.
KROFT: If somebody in the government had come to you and said, “Look, we’re thinking about doing this. What do you think?” What would you have told them?
MCGURK: I would have strongly cautioned them against it because of the unintended consequences of releasing such a code.
KROFT: Meaning that other people could use it against you?
MCGURK: Yes.

(Barrat, 260)

The segment ends with German industrial control systems expert Ralph Langner. Langner “discovered” Stuxnet by taking it apart in his lab and testing its payload. He tells 60 Minutes that Stuxnet dramatically lowered the dollar cost of a terrorist attack on the U.S. electrical grid to about a million dollars. Elsewhere, Langner warned about the mass casualties that could result from unprotected control systems throughout America, in “important facilities like power, water, and chemical facilities that process poisonous gases.”

“What’s really worrying are the concepts that Stuxnet gives hackers,” said Langner. “Before, a Stuxnet-type attack could have been created by maybe five people. Now it’s more like five hundred who could do this. The skill set that’s out there right now, and the level required to make this kind of thing, has dropped considerably simply because you can copy so much from Stuxnet.”

(Barrat, 261-265)

As one analyst put it, Stuxnet is remarkably complex, but hardly extraordinary. Some analysts have described it as a Frankenstein of existing cyber criminal tradecraft – bits and pieces of existing knowledge patched together to create a chimera. The analogy is apt and, just like the literary Frankenstein, the monster may come back to haunt its creators. The virus leaked out and infected computers in India, Indonesia, and even the U.S., a leak that occurred through an error in the code of a new variant of Stuxnet sent into the Natanz nuclear enrichment facility. This error allowed the Stuxnet worm to spread into an engineer’s computer when it was hooked up to the centrifuges, and when he left the facility and connected his computer to the Internet, the worm did not realize that its environment had changed. Stuxnet began spreading and replicating itself around the world. The Americans blamed the Israelis, who admitted nothing, but whoever was at fault, the toothpaste was out of the tube.2

Deibert goes on to say the real significance of Stuxnet lies not in its complexity, or in the political intrigue involved (including the calculated leaks), but in the threshold that it crossed: major governments taking at least implicit credit for a cyber weapon that sabotaged a critical infrastructure facility through computer coding. No longer was it possible to counter the Kasperskys and Clarkeses of the world with the retort that their fears were simply “theoretical.” Stuxnet had demonstrated just what type of damage can be done with black code. (Deibert, KL 2728)

Such things are just the tip of the iceberg, too. The world of cybercrime, cyberterrorism, and cyberwar is a thriving billion-dollar industry, flourishing as a full-time aspect of the global initiatives of almost every major player on the planet. As reported in the NY Times (U.S. Blames China’s Military Directly for Cyberattack), the Obama administration explicitly accused China’s military of mounting attacks on American government computer systems and defense contractors, saying one motive could be to map “military capabilities that could be exploited during a crisis.” Countries like Russia, meanwhile, target their former satellites (Suspicion Falls on Russia as ‘Snake’ Cyberattacks Target Ukraine’s Government): according to a report published by the British-based defense and security company BAE Systems, dozens of computer networks in Ukraine have been infected for years by a cyberespionage “tool kit” called Snake, which seems similar to a system that several years ago plagued the Pentagon, where it attacked classified systems.

Bloomberg summarized this concept in the following statement:

“The U.S. national security apparatus may be dominant in the physical world, but it’s far less prepared in the virtual one. The rules of cyberwarfare are still being written, and it may be that the deployment of attack code is an act of war as destructive as the disabling of any real infrastructure. And it’s an act of war that can be hard to trace: Almost four years after the initial NASDAQ intrusion, U.S. officials are still sorting out what happened. Although American military is an excellent deterrent, it doesn’t work if you don’t know whom to use it on.”

As Deibert warns we are wrapping ourselves in expanding layers of digital instructions, protocols, and authentication mechanisms, some of them open, scrutinized, and regulated, but many closed, amorphous, and poised for abuse, buried in the black arts of espionage, intelligence gathering, and cyber and military affairs. Is it only a matter of time before the whole system collapses? (Deibert, KL 2819)

At one time President Dwight D. Eisenhower warned of the growing military-industrial complex of the 1950s; now, Deibert suggests, we have an ever-growing cyber-security industrial complex, a world where a rotating cast of characters moves in and out of national security agencies and the private-sector companies that service them. (Deibert, KL 2927) For those in the defence and intelligence services industry this scenario represents an irresistibly attractive market opportunity. Some estimates value the cyber-security military-industrial business at upwards of US $150 billion annually. (Deibert, KL 3022) The digital arms trade for products and services around “active defence” may end up causing serious instability and chaos. Frustrated by their inability to prevent constant penetrations of their networks through passive defensive measures, companies increasingly find it legitimate to take retaliatory measures. (ibid., 3079)

Malicious software that pries open and exposes insecure computing systems is developing at a rate beyond the capacities of cyber security agencies even to count, let alone mitigate. Data breaches of governments, private sector companies, NGOS, and others are now an almost daily occurrence, and systems that control critical infrastructure – electrical grids, nuclear power plants, water treatment facilities – have been demonstrably compromised. (Deibert, KL 3490) The social forces leading us down the path of control and surveillance are formidable, even sometimes appear to be inevitable. But nothing is ever inevitable. (Deibert, KL 3532)


In Mind Factory Slavoj Zizek will ask the question: Are we entering the posthuman era? He will then go on to say that the survival of being-human by humans cannot depend on an ontic decision by humans.3

Instead he reminds us we should admit that the true catastrophe has already happened: we already experience ourselves as in principle manipulable, we need only freely renounce ourselves to fully deploy these potentials. But the crucial point is that, not only will our universe of meaning disappear with biogenetic planning, i.e. not only are the utopian descriptions of the digital paradise wrong, since they imply that meaning will persist; the opposite, negative, descriptions of the “meaningless” universe of technological self-manipulation is also the victim of a perspective fallacy , it also measures the future with inadequate present standards. That is to say, the future of technological self-manipulation only appears as “deprived of meaning” if measured by (or, rather, from within the horizon of) the traditional notion of what a meaningful universe is. Who knows what this “posthuman” universe will reveal itself to be “in itself”? (Mind Factory, KL 368-66)

What if there is no singular and simple answer, what if the contemporary trends (digitalisation, biogenetic self-manipulation) open themselves up to a multitude of possible symbolisations? What if the utopia— the pervert dream of the passage from hardware to software of a subjectivity freely floating between different embodiments— and the dystopia— the nightmare of humans voluntarily transforming themselves into programmed beings— are just the positive and the negative of the same ideological fantasy? What if it is only and precisely this technological prospect that fully confronts us with the most radical dimension of our finitude?(Mind Factory, KL 366-83)

With so many things going on in the sciences, the military, governments, nations, etc., where are the watchdogs who can discern the trends? Who can give an answer to all the myriad elements that are making up this strange new posthuman era we all seem to be blindly moving toward? Or is it already here? With malware on the loose, algorithms that manipulate, grow, and improve circulating around the globe, reprogrammed by various unknown governments, criminal syndicates, and hackers, what does the man or woman on the street do? As Nick Land will say of one of his alter egos:

Vauung seems to think there are lessons to be learnt from this despicable mess.4


 

1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 261). St. Martin’s Press. Kindle Edition.
2. Deibert, Ronald J. (2013-05-14). Black Code: Inside the Battle for Cyberspace (Kindle Locations 2721-2728). McClelland & Stewart. Kindle Edition.
3. Armand, Louis; Zizek, Slavoj; Critchley, Simon; McCarthy, Tom; Wark, McKenzie; Ulmer, Gregory L.; Kroker, Arthur; Tofts, Darren; Lewty, Jane (2013-07-19). Mind Factory (Kindle Locations 367-368). Litteraria Pragensia. Kindle Edition.
4. Land, Nick (2013-07-01). Fanged Noumena: Collected Writings 1987 – 2007 (Kindle Location 9008). Urbanomic/Sequence Press. Kindle Edition.

 

 

 

Technocapitalism: Creativity, Governance, and Neo-Imperialism

The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.

— Nick Land,  Fanged Noumena: Collected Writings 1987 – 2007

Luis Suarez-Villa in his Technocapitalism: A Critical Perspective on Technological Innovation and Corporatism informs us that the major feature that sets technocapitalism apart from previous eras is the vital need to commodify creativity.1 Why is this different from older forms of capitalism? The overarching importance of creativity as a commodity can be found readily in any of the activities that are typical of technocapitalism. With the rise of the NBIC (Nanotech, Biotech, Information and Communications) technologies, in areas of biotechnology such as genomics, proteomics, bioinformatics, or biopharmaceuticals, in nanotechnology, in molecular computing, and in the other sectors that are symbolic of the twenty-first century, the commodification and reproduction of creativity are at the center of their commercialization. None of these activities could have formed, much less flourished, without the unremitting commodification of creativity that makes their existence possible. (Suarez-Villa, KL 365-67)

Nick Land in Fanged Noumena will offer us the latest version of a meltdown in which we all participate in a planet-wide China Syndrome, the dissolution of the biosphere into the technosphere.2 Luciano Floridi will augment this notion in turn, equating this transformation or metamorphosis into the technosphere as part of technocapital corporatism’s ‘Onlife’ strategy, one in which information becomes our surround, our environment, our reality.3 As Floridi states it, ICTs are re-ontologizing the very nature of the infosphere, and here lies the source of some of the most profound transformations and challenging problems that we will experience in the close future, as far as technology is concerned (Floridi, 6-7). He will expand on this topic, saying:

ICTs are as much re-ontologizing our world as they are creating new realities. The threshold between here (analogue, carbon-based, offline) and there (digital, silicon-based, online) is fast becoming blurred, but this is as much to the advantage of the latter as it is to the former. Adapting Horace’s famous phrase, ‘captive infosphere is conquering its victor’, the digital-online is spilling over into the analogue-offline and merging with it. This recent phenomenon is variously known as ‘Ubiquitous Computing’, ‘Ambient Intelligence’, ‘The Internet of Things’, or ‘Web-augmented Things’. I prefer to refer to it as the onlife experience. (Floridi, 8)

The notion of an Onlife experience is moving us toward that rubicon zone of the posthuman or becoming inhuman. The Onlife blurs the distinctions between reality and virtuality; blurs the boundaries of human, machine, and nature; reverses information scarcity into information abundance (and, some might say, ‘glut’); and, finally, marks a shift from substance-based notions of entities to process and relations, or interactions.4 Floridi would have us believe that ICTs are becoming a force for good, that they will break down the older modernist or Enlightenment notions of disembodied autonomous subjects, and will bind us within a democratic enclave of information and creativity.

Yet, as Suarez-Villa warns, control over society at large, and not just governance, is the larger concern involving technocapitalism and corporate power. The globalist agenda is not to create democratic and participatory governance, but rather to impose new forms of control and power using advanced technological systems. Technology has always been a two-edged sword. The quest for corporate and global hegemony coupled with poor social accountability can have far-reaching effects. It would not be shocking to see genetic engineering brought into the human realm to produce individuals with characteristics that are highly desirable to corporatism. The “design” or “engineering” of humans with greater potential for creativity and innovation would be of great interest in this regard. After all, most people want their offspring to be “successful” and “well adjusted.” One can therefore expect corporatism to appeal to such sentiments that suit its need for power. (see Suarez, KL 1880-83)

Technocapital hegemony, incorporating its most valuable resource, creativity, transcends boundaries and restraints. Commodifying creativity therefore acquires a global scope for the technocapitalist corporation, even though it is carried out within the corporate domain. Moreover, as it appropriates the results of creativity, the technocapitalist corporation becomes a powerful entity in the context of globalization. Its power takes up a supranational character that transcends the governance of any nation or locale. Corporate intellectual property regimes that are increasingly global in scope and enforcement magnify that power to an unprecedented extent. Thus, given the contemporary importance of technology, corporate technocapitalism is in a position to impose its influence around the world, particularly on societies with a limited possibility to create new technology. (Suarez, KL 2017-23)

This sense that technocapital corporatism is constructing a global hegemony outside the strictures of the older nation states, one that can bypass the regulatory mechanisms of any one sovereignty, is at the heart of this new technological imperative. The technocorporatism of the 21st Century seeks to denationalize sovereignty, to eliminate the borders and barriers between rival factions. Instead of the ancient battle between China, Russia, the EU, America, etc., it seeks a strategy to circumvent nations altogether and build new relations of trust beyond the paranoia of national borders.

The globalists seek to appropriate the results of creativity on a global scale. Research is the corporate operation through which such appropriation typically occurs. Appropriating the results of creativity has therefore become a major vehicle to sustain and expand the global ambitions of corporate power. Intellectual property rights that confer monopoly power, such as patents, are now a very important concern of corporatism. The fact that corporate intellectual property has become a major component of international trade, and an important focus of litigation around the world, underlines the rising importance of creativity as a corporate resource. (Suarez-Villa, KL 2115)

Beyond corporate control and hegemony is the notion of reproduction, which is inherently social in nature. Reproduction is inherently social because of creativity’s intangibility, because of its qualitative character, and because it depends on social contexts and social relations to develop. Many aspects of reproduction are antithetical to the corporate commodification of creativity, yet they are essential if this intangible resource is to be regenerated and deployed. (Suarez-Villa, KL 2121)

Along with this new technocapitalist utopia comes the other side of the coin, the permanence of inequalities and injustices between the haves and the have-nots becomes one of the pathological outcomes of technocapitalism, of its apparatus of corporate power, and of its new vehicles of global domination. (Suarez-Villa, KL 4066) As Suarez-Villa iterates:

The new vehicles of domination are multi-dimensional. They comprise corporate, technological, scientific, military, organizational and cultural elements. All of these elements of domination are part of the conceptual construct of fast neo-imperialism— a new systemic form of domination under the control of the “have” nations at the vanguard of technocapitalism. This new neo-imperial power is closely associated with the phenomena of fast accumulation, with the new corporatism, with its need to appropriate and commodify creativity through research, and with its quest to obtain profit and power wherever and whenever it can. (KL 4068-72)

Corporatocracy’s slow transformation and disabling of the old Nation State powers involves a redistribution of power and wealth from the mass of the people, and most of all from the poor and working classes, toward the corporate elites and the richest segment of society. Redistribution is accompanied by a dispossession of the people from a wide spectrum of rights, individual, social, economic, political, environmental and ecological, in order to benefit corporatism and increase its influence over society’s governance. This vast migration of wealth from the poor of all nations, and the inequalities it engenders, support the new corporatism’s urgent need for more creative talent, aggressive intellectual property rights, lower research costs, and for its appropriation of a wide range of bioresources, including the genetic codes of every living organism on earth. (Suarez-Villa, KL 4840-82)

As Suarez-Villa will sum it up, we are now at the crossroads of what may be a new trajectory for humanity, given technocapitalism’s use and abuse of technology and science, the overwhelming power of its corporations, its capacity to legitimize such power, and its quest to impose it on the world. The crises that we have witnessed in recent times may be a prelude to the maelstrom of crises and injustice that await us, if effective means are not enlisted to contest this new version of capitalism. (Suarez-Villa, KL 5555-60)

Is it too late? Have we waited way too long to wake up? Nick Land will opt for the harsh truth: “Nothing human makes it out of the near-future” (Land, KL 6063). James Barrat in his Our Final Invention: Artificial Intelligence and the End of the Human Era will offer little comfort, telling us that most of the scientists, engineers, thinkers, and funders involved in the construction of the emerging AGI and AI technologies are not concerned with humanity in their well-funded bid to build artificial systems that can think a thousand times better than us. In fact they’ll use ordinary programming and black box tools like genetic algorithms and neural networks. Add to that the sheer complexity of cognitive architectures and you get an unknowability that will not be incidental but fundamental to AGI systems. Scientists will achieve intelligent, alien systems.5 These will be systems that are totally other, inhuman to the core, without values human or otherwise, gifted only with superintelligence. And many of these scientists believe that this will come about by 2030. As Barrat tells us:

Of the AI researchers I’ve spoken with whose stated goal is to achieve AGI, all are aware of the problem of runaway AI. But none, except Omohundro, have spent concerted time addressing it. Some have even gone so far as to say they don’t know why they don’t think about it when they know they should. But it’s easy to see why not. The technology is fascinating. The advances are real. The problems seem remote. The pursuit can be profitable, and may someday be wildly so. For the most part the researchers I’ve spoken with had deep personal revelations at a young age about what they wanted to spend their lives doing, and that was to build brains, robots, or intelligent computers. As leaders in their fields they are thrilled to now have the opportunity and the funds to pursue their dreams, and at some of the most respected universities and corporations in the world. Clearly there are a number of cognitive biases at work within their extra-large brains when they consider the risks.(Barrat, 234-235)

And behind most of this is the need to weaponize AI and robotics technologies. At least here in the States, DARPA is the great power and funder behind most of the stealth companies as well as others like Google and IBM… Not to put too fine a point on it, but the “D” is for “Defense.” It’s not the least bit controversial to anticipate that when AGI comes about, it’ll be partly or wholly due to DARPA funding. The development of information technology owes a great debt to DARPA. But that doesn’t alter the fact that DARPA has authorized its contractors to weaponize AI in battlefield robots and autonomous drones. Of course DARPA will continue to fund AI’s weaponization all the way to AGI. Absolutely nothing stands in its way. (Barrat, 235)

So here we are at the transitional moment staring into the abyss of the future wondering what beasts lurk on the other side. As Barrat surmises “I believe we’ll first have horrendous accidents, and should count ourselves fortunate if we as a species survive them, chastened and reformed. Psychologically and commercially, the stage is set for a disaster. What can we do to prevent it?” (Barrat, 236)

Nothing.

Only the possibility of youth; or, as Land tells us, as we enter the derelicted warrens at the heart of darkness, feral youth cultures splice neo-rituals with innovated weapons, dangerous drugs, and scavenged infotech. As their skins migrate to machine interfacing they become mottled and reptilian. They kill each other for artificial body-parts, explore the outer reaches of meaningless sex, tinker with their DNA, and listen to LOUD electro-sonic mayhem untouched by human feeling. (Land, KL 6218-6222)

Welcome to the posthuman Real.

1. Luis Suarez-Villa. Technocapitalism: A Critical Perspective on Technological Innovation and Corporatism (Kindle Locations 364-365). Kindle Edition. 
2. Land, Nick (2013-07-01). Fanged Noumena: Collected Writings 1987 – 2007 (Kindle Location 6049). Urbanomic/Sequence Press. Kindle Edition.
3. Floridi, Luciano (2013-10-10). The Ethics of Information (p. 6). Oxford University Press, USA. Kindle Edition.
4. Floridi, Luciano. The Onlife Manifesto. (see here)
5. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 230). St. Martin’s Press. Kindle Edition.

Romancing the Machine: Intelligence, Myth, and the Singularity

“We choose to go to the moon,” the president said. “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.”

I was sitting in front of our first Motorola color television set when President Kennedy spoke to us of going to the moon. After the Manhattan Project to build a nuclear bomb, this was the second great project that America used to confront another great power, this time in the race to land on the moon. As I listened to the youtube.com video (see below) I started thinking about a new race going on in our midst: the intelligence race to build the first advanced Artificial General Intelligence (AGI). As you listen to Kennedy, think about how one of these days soon we might very well hear another President tell us that we must fund the greatest experiment in the history of human kind: the building of a superior intelligence.

Why? Because if we do not we face certain extinction. Oh sure, such rhetoric of doom and fear has always had a great effect on humans. I’ll imagine him/her trumping us with all the scientific validation about climate change, asteroid impacts, food and resource depletion, etc., but in the end he may pull out the obvious trump card: the idea that a rogue state – maybe North Korea, or Iran, etc. – is on the verge of building such a superior machinic intelligence, an AGI. But hold on. It gets better. For the moment an AGI is finally achieved is not the end. No. That is only the beginning, the tip of the iceberg. What comes next is AI or complete artificial intelligence: superintelligence. And no one can tell you what that truly means for the human race. Because for the first time in our planetary history we will live alongside something that is superior and alien to our own life form, something that is both unpredictable and unknown: an X Factor.

 

Just think about it. Let it seep down into that quiet three pounds of meat you call a brain. Let it wander around the neurons for a few moments. Then listen to Kennedy’s speech on the romance of the moon, and remember the notion of some future leader who will one day come to you saying other words, promising a great and terrible vision of surpassing intelligence and with it the likely ending of the human species as we have known it:

“We choose to build an Artificial Intelligence,” the president said. “We choose to build it in this decade, not because it is easy, but because it is for our future, our security, because that goal will serve to organize our defenses and the security of the world, because that risk is one that we are willing to accept, one we are not willing to postpone, because of the consequences of rogue states gaining such AI’s, and one which we intend to win at all costs.”


Is it really so far-fetched to believe that we will eventually uncover the principles that make intelligence work and implement them in a machine, just like we have reverse engineered our own versions of the particularly useful features of natural objects, like horses and spinnerets? News flash: the human brain is a natural object.

—Michael Anissimov, MIRI Media Director

We are all bound by certain cognitive biases. Looking them over I was struck by the conservatism bias: “The tendency to revise one’s belief insufficiently when presented with new evidence.” As we move into the 21st Century we are confronted with what many term convergence technologies: nanotechnology, biotechnology, genetechnology, and AGI. As I was looking over PewResearch’s site, which analyzes many of our most prone belief systems, I spotted one report on AI, robotics, et al.:

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade. (see AI, Robotics, and the Future of Jobs)

This almost universal acceptance that robotics and AI will be a part of our inevitable future permeates the mythologies of our culture at the moment. Yet, as the report shows, there is a deep divide as to what this means and how it will impact the daily lives of most citizens. Of course the vanguard pundits and intelligent AGI experts hype it up, telling us, as Benjamin Goertzel and Steve Omohundro argue, that AGI, robotics, medical apps, finance, programming, etc. will improve substantially:

…robotize the AGI— put it in a robot body— and whole worlds open up. Take dangerous jobs— mining, sea and space exploration, soldiering, law enforcement, firefighting. Add service jobs— caring for the elderly and children, valets, maids, personal assistants. Robot gardeners, chauffeurs, bodyguards, and personal trainers. Science, medicine, and technology— what human enterprise couldn’t be wildly advanced with teams of tireless and ultimately expendable human-level-intelligent agents working for them around the clock?1

As I read the above I hear no hint of the human workers that will be displaced, put out of jobs, left to their own devices, lost in a world of machines, victims of technological and economic progress. In fact such pundits are only hyping to the elite, the rich, the corporations and governments that will benefit from such things because humans are lazy, inefficient, victims of time and energy, expendable. Seems most humans at this point will be of no use to the elite globalists, so will be put to pasture in some global commons or maybe fed to the machine gods.

Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them.

—Ray Kurzweil, inventor, author, futurist

In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

—George Dyson, historian

Kurzweil and Dyson agree that whatever these new beings become, they will not have our interests as a central motif of their ongoing script. As Goertzel tells Barrat, the arrival of human-level intelligent systems would have stunning implications for the world economy. AGI makers will receive immense investment capital to complete and commercialize the technology. The range of products and services intelligent agents of human caliber could provide is mind-boggling. Take white-collar jobs of all kinds— who wouldn’t want smart-as-human teams working around the clock doing things normal flesh-and-blood humans do, but without rest and without error. (Barrat, pp. 183-184) Oh, yes, who wouldn’t… one might want to ask all those precarious intellectual laborers who will be out on the street in soup lines with the rest of us that question.

As many of the experts in the report mentioned above relate: about half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.

Sounds more like dystopia for the mass, and just another nickelodeon day for the elite oligarchs around the globe. Yet, the other 52% have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution. Sounds a little optimistic to me. Human ingenuity versus full-blown AI? Sounds more like blind man’s bluff with the deck stacked in favor of the machines. As Stowe Boyd, lead researcher at GigaOM Research, said of the year 2025, when all this might be in place: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?’ Indeed, one wonders… we know the Romans built the great Circus, gladiatorial combat, great blood-bath entertainment for the bored and out-of-work minions of the Empire. What will the Globalists do?

A sort of half-way house of non-commitment came from Seth Finkelstein, a programmer, consultant, and Electronic Frontier Foundation Pioneer Award winner, who responded, “The technodeterminist-negative view, that automation means jobs loss, end of story, versus the technodeterminist-positive view, that more and better jobs will result, both seem to me to make the error of confusing potential outcomes with inevitability. Thus, a technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole….this is not a technological consequence; rather it’s a political choice.”

I love it that one can cop out by throwing it back into politics, thereby washing one’s hands of the whole problem as if magically saying: “I’m just a technologist, let the politicians worry about jobs. It’s not technology’s fault, there is no determinism on our side of the fence.” Except it is not politicians who supply jobs, it’s corporations: and, whether technology is determined or not, corporations are: they’re determined by capital, by their stockholders, by profit margins, etc. So if they decide to replace workers with more efficient players (think AI, robots, multi-agent systems, etc.) they will, if it makes them money and profits. Politicians can hem and haw all day about it, but will be lacking in answers. So as usual the vast plebian forces of the planet will be thrown back onto their own resources, and for the most part excluded from the enclaves and smart cities of the future. In this scenario humans will become the untouchables, the invisible, the servants of machines or pets; or, worst case scenario: pests to be eliminated.

Yet, there are others like Vernor Vinge who believe all the above may be true, but not for a long while, that we will probably go through a phase when humans are augmented by intelligence devices. He believes this is one of three sure routes to an intelligence explosion in the future, when a device can be attached to your brain that imbues it with additional speed, memory, and intelligence. (Barrat, p. 189) As Barrat tells us our intelligence is broadly enhanced by the mobilization of powerful information technology, for example, our mobile phones, many of which have roughly the computing power of personal computers circa 2000, and a billion times the power per dollar of sixties-era mainframe computers. We humans are mobile, and to be truly relevant, our intelligence enhancements must be mobile. The Internet, and other kinds of knowledge, not the least of which is navigation, gain vast new power and dimension as we are able to take them wherever we go. (Barrat, p. 192)

But even if we have all this data at our braintips it is still data that must be filtered, appraised, evaluated. Data is not information. As Luciano Floridi tells us “we need more and better technologies and techniques to see the small-data patterns, but we need more and better epistemology to sift the valuable ones”.2 As Floridi explains it, what Descartes acknowledged to be an essential sign of intelligence— the capacity to learn from different circumstances, adapt to them, and exploit them to one’s own advantage— would be a priceless feature of any appliance that sought to be more than merely smart. (Floridi, KL 2657) Floridi will put an opposite spin on all the issues around AGI and AI, telling us that whatever it ultimately becomes it will not be some singular entity or self-aware being, but will instead be our very environment – what he terms the InfoSphere: the world is becoming an infosphere increasingly well adapted to ICTs’ (Information and Communications Technologies) limited capacities. In a comparable way, we are adapting the environment to our smart technologies to make sure the latter can interact with it successfully. (Floridi, KL 2661)

For Floridi the environment around us is taking on intelligence; it will become so ubiquitous, invisible, and naturalized that it will be a seamless part of our very onlife lives. The world itself will be intelligent:

Light AI, smart agents, artificial companions, Semantic Web, or Web 2.0 applications are part of what I have described as a fourth revolution in the long process of reassessing humanity’s fundamental nature and role in the universe. The deepest philosophical issue brought about by ICTs concerns not so much how they extend or empower us, or what they enable us to do, but more profoundly how they lead us to reinterpret who we are and how we should interact with each other. When artificial agents, including artificial companions and software-based smart systems, become commodities as ordinary as cars, we shall accept this new conceptual revolution with much less reluctance. It is humbling, but also exciting. For in view of this important evolution in our self-understanding, and given the sort of ICT-mediated interactions that humans will increasingly enjoy with other agents, whether natural or synthetic, we have the unique opportunity of developing a new ecological approach to the whole of reality. (Floridi, KL 3055-62)

That our conceptions of reality, self, and environment will suddenly take on a whole new meaning is beyond doubt. Everything we’ve been taught for two thousand years in the humanistic traditions will go bye-bye; or, at least, will be treated as the ramblings of early human children fumbling in the dark. At least so say the neo-information philosophers such as Floridi. He tries to put a neo-liberal spin on it and sponsors an optimistic vision of economic paradises for all, etc. As he says in his conclusion, we are constructing an artificial intelligent environment, an infosphere that will be inhabited by future generations for millennia. “We shall be in serious trouble, if we do not take seriously the fact that we are constructing the new physical and intellectual environments that will be inhabited by future generations (Floridi, KL 3954).” Because of this he tells us we will need to forge a new alliance between the natural and the artificial. It will require a serious reflection on the human project and a critical review of our current narratives, at the individual, social, and political levels. (Floridi, 3971)

In some ways I concur with his statement that we need to take a critical view of our current narratives. To me the key is just that. Humans live by narratives, stories, tales, fictions, etc., and always have. The modernists wanted grand narratives, while the postmodernists loved micro-narratives. What will our age need? What will help us to understand and to participate in this great adventure ahead, in which the natural and artificial suddenly form alliances in ways never before seen from the beginning of human history? From the time of the great agricultural civilizations to the Industrial Age to our own strange fusion of science fiction and fact, in a world where superhuman agents might one day walk among us, what stories will we tell? What narratives do we need to help us contribute to our future, and to the future, hopefully, of our species? Will the narratives ultimately be told a thousand years from now by our inhuman alien AI’s to their children, of a garden that once existed wherein ancient flesh and blood beings once lived: the beings that once were our creators? Or shall it be a tale of symbiotic relations in which natural and artificial kinds walk hand in hand, forging together adventures in exploration of the galaxy and beyond? What tale will it be?

Romance or annihilation? Let’s go back to the bias: “The tendency to revise one’s belief insufficiently when presented with new evidence.” If we listen to the religious wing of transhumanism and the singulatarians, we are presented with a rosy future full of augmentations, wonders, and romance. On the other side we have the dystopians, the pessimists, the curmudgeons who tell us the future of AGI leads to the apocalypse of AI or superintelligence and the demise of the human race as a species. Is there a middle ground? Floridi seems to opt for that middle ground where humans and technologies do not exactly merge nor destroy each other, but instead become symbionts in an ongoing onlife project without boundaries other than those we impose by a shared vision of balance and affiliation between natural and artificial kinds. Either way we do not know for sure what that future holds, but as some propose the future is not some blank slate or mirror but is instead to be constructed. How shall we construct it? Above all: whose future is it anyway?

As James Barrat tells us, consider DARPA. Without DARPA, computer science and all we gain from it would be at a much more primitive state. AI would lag far behind if it existed at all. But DARPA is a defense agency. Will DARPA be prepared for just how complex and inscrutable AGI will be? Will they anticipate that AGI will have its own drives, beyond the goals with which it is created? Will DARPA’s grantees weaponize advanced AI before they’ve created an ethics policy regarding its use? (Barrat, 189)

My feeling is that even if they had an ethics policy in place, would it matter? Once AGI takes off and is self-aware and able to self-improve its capabilities, software, programs, etc., it will, as some say, become in a very few iterations a full-blown AI or superintelligence, with an intelligence a thousand, ten thousand, or more times beyond the human. Would ethics matter when confronted with an alien intelligence so far beyond our limited three-pound organic brain that it may not even care or bother to recognize us or communicate? What then?

We might be better off studying some of the posthuman science fiction authors in our future posts (from io9’s Essential Posthuman Science Fiction):

  1. Frankenstein, by Mary Shelley
  2. The Time Machine, by H.G. Wells
  3. Slan, by A.E. Van Vogt
  4. Dying Earth, Jack Vance
  5. More Than Human, by Theodore Sturgeon
  6. Slave Ship, Frederik Pohl
  7. The Ship Who Sang, by Anne McCaffrey
  8. Dune, by Frank Herbert
  9. “The Girl Who Was Plugged In” by James Tiptree Jr.
  10. Aye, And Gomorrah, by Samuel Delany
  11. Uplift Series, by David Brin
  12. Marooned In Realtime, by Vernor Vinge
  13. Beggars In Spain, by Nancy Kress
  14. Permutation City, by Greg Egan
  15. The Bohr Maker, by Linda Nagata
  16. Nanotech Quartet series, by Kathleen Ann Goonan
  17. Patternist series, by Octavia Butler
  18. Blue Light, Walter Mosley
  19. Look to Windward, by Iain M. Banks
  20. Revelation Space series, by Alastair Reynolds
  21. Blindsight, by Peter Watts
  22. Saturn’s Children, by Charles Stross
  23. Postsingular, by Rudy Rucker
  24. The World Without Us, by Alan Weisman
  25. Natural History, by Justina Robson
  26. Windup Girl, by Paolo Bacigalupi

1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (pp. 184-185). St. Martin’s Press. Kindle Edition.
2. Floridi, Luciano (2014-06-26). The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Kindle Locations 2422-2423). Oxford University Press. Kindle Edition.

Utopia or Hell: The Future as Posthuman Game Strategy

 

There was no question; the dead thing in the gutter was one of his clones. – Jeffrey Thomas, Punktown

As I was thinking through the last chapter in David Roden’s posthuman adventure, in which a spirit of speculative engineering best exemplifies an ethical posthuman becoming – not the comic or dreadful arrest in the face of something that cannot be grasped,1 I began reading Arthur Kroker in his book Exits to the Posthuman Future, who in an almost uncanny answer to Roden’s plea for new forms of thought – to prepare ourselves for the posthuman eventuality – tells us that we might need a “form of thought that listens intently for the gaps, fissures, and intersections, whether directly in the technological sphere or indirectly in culture, politics, and society, where incipient signs of the posthuman first begin to figure.”2 We might replace the use of the word “figure” with Roden’s terminological need for an understanding of “emergence”.

Rereading Slavoj Zizek’s early The Sublime Object of Ideology, one sees a specific battle within the cultural matrix in which scientists and critics alike have a tendency to fill these gaps, or unknowns, with complexity and an almost acute anxiety about that which is coming at us out of the future. He says that there is always this dialectical interplay between Ptolemaic and Copernican movements. The Ptolemaic being the form that simply shores up the past, solidifying and reducing the complexities of the sciences to its simplified worldview, while the Copernicans always opt for fracturing the old forms, for opening up the world to the gaps that cannot be evaded in our knowledge, for allowing the universe to enter us and challenge everything we are and have been.

The Gothic modes of fiction seem to follow and fill these uncertain voids and gaps with the monstrous rather than light when such moments of metamorphosis and change come about. Fear and instability shake us to our bones, force us to resist change and seek ways either to turn time back or to put the unknown into some perverse relation to our lives, darkening its visions into complicity with the inhuman and sadomasochistic heart of our own core defense systems. One might be reminded of Thomas Ligotti’s remembrance of Mary Shelley’s famous Frankenstein, in which his own repetition of her story in a postmodern mode has the creature awaken into his posthuman self with a sense of loss:

This possibility is now, of course, as defunct as the planet itself. With all biology in tatters, the outsider will never again hear the consoling gasps of those who shunned him and in whose eyes and hearts he achieved a certain tangible identity, however loathsome. Without the others he simply cannot go on being himself— The Outsider— for there is no longer anyone to be outside of. In no time at all he is overwhelmed by this atrocious paradox of fate.3

This sense of ambivalence that he feels at having attained at last something outside of humanity returns with a darker knowledge: becoming other, he can no longer harbor what he once dreamed; he has become the thing he dreaded. Cast out of the biological tic he is free, but free for what? No longer human he is faced with the paradox of who he now is: and, that he has nothing to which his mind can tend, no thoughts from the others, the humans; no libraries of philosophy, ethics, history, literature. No. He is absolutely outside of the human; alone. Is this solipsism or something else? Even that classic work by the Comte de Lautréamont, Maldoror, in which the ecstasy of cruelty is unleashed, cannot be a part of this world of the posthuman. What if the mythology of drives, of eros and thanatos, love and death, the rhetorical flourishes of figuration, else the literalism of sadomasochism, no longer hold for such beings? How apply human knowledge and thought to what is inhuman? As Ligotti will end one of his little vignettes:

And each fragment of the outsider cast far across the earth now absorbs the warmth and catches the light, reflecting the future life and festivals of a resurrected race of beings: ones who will remain forever ignorant of their origins but for whom the sight of a surface of cold, unyielding glass will always hold profound and unexplainable terrors. (ibid)

This sense of utter desolation, of catastrophe as creation and invention, is this not the truth of the posthuman? Zizek will attune us to the monstrous notion that Hegel’s Aufhebung or sublation is a form of cannibalism in that it effectively and voraciously devours and ‘swallows up’ every object it comes upon.4 His point being that the only way we can grasp an object (let’s say the posthuman) is to acknowledge that it already ‘wants to be with/by us’. If, as Roden suggests, we as humans are becoming the site of a great experiment in inventing the posthuman, then maybe, as Zizek suggests, it’s not digestion or cognition, but shitting that we must understand, because for Hegel the figure of Absolute Knowledge, the cognizing subject, is one of total passivity; an agent in which the System of Knowledge is ‘automatically’ deployed without external norms or impetuses. Zizek will tell us that this is a radicalized Hegel, one that defends the notion of ‘process without subject’: the emergence of a pure subject qua void, the object itself with no need for any subjective agent to push it forward or to direct it. (ibid, xxii)

This notion of the posthuman as ‘process without subject’, one that has no need of human agents to push, direct, or guide it, takes us to the edge of the technological void where our human horizon meets and merges with the inhuman other residing uncannily within our own being, withdrawn and primeval.

Engineering Our Posthuman Future

Chris Anderson, in his ‘The end of theory: The data deluge makes the scientific method obsolete’, argued that data will speak for themselves, with no need of human beings to ask smart questions:

With enough data, the numbers speak for themselves. […] The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years. Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise. But faced with massive data, this approach to science— hypothesize, model, test— is becoming obsolete.5

So what is replacing it? Luciano Floridi will tell us that it’s not about replacement, but about the small patterns in the chaos of data:

[One needs to] know how to ask and answer questions critically, and therefore know which data may be useful and relevant, and hence worth collecting and curating, in order to exploit their valuable patterns. We need more and better technologies and techniques to see the small-data patterns, but we need more and better epistemology to sift the valuable ones.6

So if we are to understand the emergence of the posthuman out of the relations of human and technology, we need to ask the right questions, and to build the technologies that can pierce the veil of this infinite sea of information our society is inventing in the digital machines of Data. Data itself is stupid; what we need are intelligent questioners. But do these intelligent agents need to be necessarily human? Maybe not, yet as Floridi will suggest:

One thing seems to be clear: talking of information processing helps to explain why our current AI systems are overall more stupid than the wasps in the bottle. Our present technology is actually incapable of processing any kind of meaningful information, being impervious to semantics, that is, the meaning and interpretation of the data manipulated. ICTs are as misnamed as ‘smart weapons’. (Floridi, KL 2525)

Descartes once acknowledged that the essential sign of intelligence was a capacity to learn from different circumstances, adapt to them, and exploit them to one’s own advantage. And many in the AI community have followed that path, thinking it would be a priceless feature of any appliance that sought to be more than merely smart. In our own time the impression has often been that the process of adding to the mathematical book of nature (inscription) required the feasibility of productive, cognitive AI, in other words, the strong programme. Yet, what has actually been happening in the real world of commerce and the practical science of engineering is something altogether different: we’ve been inventing a world that is becoming an infosphere, one that is increasingly well adapted to ICTs’ (Information & Communications Technologies) limited capacities. What we see happening is that companies, in their bid to invent Smart Cities and the like, are beginning to adapt the environment to our smart technologies to make sure the latter can interact with it successfully. We are, in other words, wiring or rather enveloping the world with intelligence. Our environment itself is becoming posthuman and in turn is rewiring humanity. (ibid. Floridi)

ICTs are creating the new informational environment in which future generations will live and have their being. The posthuman is becoming our environment, a site of intelligence; we are constructing the new physical and intellectual environments that will be inhabited by future generations. For Floridi the task is to formulate an ethical framework that can treat the infosphere as a new environment worthy of the moral attention and care of the human inforgs inhabiting it:

Such an ethical framework must address and solve the unprecedented challenges arising in the new environment. It must be an e-nvironmental ethics for the whole infosphere. This sort of synthetic (both in the sense of holistic or inclusive, and in the sense of artificial) environmentalism will require a change in how we perceive ourselves and our roles with respect to reality, what we consider worth our respect and care, and how we might negotiate a new alliance between the natural and the artificial. It will require a serious reflection on the human project and a critical review of our current narratives, at the individual, social, and political levels. (Floridi, KL 3954)

James Barrat in his book Our Final Invention: Artificial Intelligence and the End of the Human Era tells us he interviewed many scientists in various fields concerning AGI, and that every one of these people was convinced that in the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes.7 After interviewing dozens of scientists Barrat concluded that we may be slowly losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival. A force so unstable and mysterious, nature achieved it in full just once—intelligence. (Barrat, 6)

As Kroker will admonish, we seem to be on the cusp of a strange transition, situated at the crossroads of humanity, and the future presents itself now as a gigantic simulacrum of the recycled remnants of all that which was left unfinished by the coming-to-be of the technological dynamo – unfinished religious wars, unfinished ethnic struggles, unfinished class warfare, unfinished sacrificial violence and spasms of brutal power, often motivated by a psychology of anger on the part of the most privileged members of the so-called global village. The apocalypse seems to be coming our way like a specter on the horizon, not as a grand epiphany of events but one lonely text message at a time. (Kroker, 193)

The techno-capitalists want to enclose us in a new global commons of intelligent cities to better control our behavior and police us in a vast hyperworld of machinic pleasure and posthuman revelation, while the rest of humanity sits on the outside of these corrupted dreamworlds as workers and slaves of the new AI wars for the minds of humanity. Bruce Sterling in his latest book The Epic Struggle of the Internet of Things says we’re already laying the infrastructure for tyranny and control on a global scale:

Digital commerce and governance is moving, as fast and hard as it possibly can, into a full-spectrum dominance over whatever used to be analogue. In practice, the Internet of Things means an epic transformation: all-purpose electronic automation through digital surveillance by wireless broadband.8

Another prognosticator, Jacques Attali, who supports the technological elite takeover in this world of intelligent systems, tells us that in the course of the twenty-first century, market forces will take the planet in hand. The ultimate expression of unchecked individualism, this triumphant march of money explains the essence of history’s most recent convulsions. It is up to us to accelerate, resist, or master it:

…this evolutionary process means that money will finally rid itself of everything that threatens it — including nation-states (and not excepting the United States of America), which it will progressively dismantle. Once the market becomes the world’s only universally recognized law, it will evolve into what I shall call super-empire, an entity whose structures remain elusive but whose reach is global. … Exploiting ever newer technologies, global or continental institutions will organize collective living, imposing limits on the production of commercial artifacts, on transforming life, and on the mercantile exploitation of natural resources. They will prefer freedom of action, responsibility, and access to knowledge. They will usher in the birth of a universal intelligence, making common property of the creative capacities of all human beings in order to transcend them. A new, synchronized economy, providing free services, will develop in competition with the market before eliminating it, exactly as the market put an end to feudalism a few centuries ago.9

The dream of the global elites is of a great market empire controlled by vast AI Intelligent Agents that will deliver the perfect utopian realm of work and play for a specific minority of engineers and creative agents, entrepreneurs, bankers, and space moguls, etc., while the rest of the dregs of humanity live in the shadows, controlled by implants or pharmaceuticals that will keep them pacified and slave-happy in their menial tier of decrepitude as workers in the minimalist camps that support the Smart Civilization and its powers.

Yet, against this decadent scenario, as Kroker suggests, what if the counter were true, and the shadow artists of the future are even now beginning to enter the world of data nerves, network skin, and increasingly algorithmic minds with the intention of capturing the dominant mood of these posthuman times – drift culture – in a form of thought that dwells in complicated intersections and complex borderlands? He envisions instead a new emergent order of rebels, a global gathering of new media artists, remix musicians, pirate gamers, AI graffiti artists, anonymous witnesses, and code rebels, an emerging order of figural aesthetics revealing a new, brilliantly hallucinatory order based on an art of impossible questions and a perceptual language as precise as it is evocative. Here, the aesthetic imagination dwells solely on questions of incommensurability: What is the vision of the clone? What is the affect of the code? What is the hauntology of the avatar? What is most excluded, prohibited, by the android? What is the perception of the drone? What are the aesthetics of the fold? What, in short, is the meaning of aesthetics in the age of drift culture? (Kroker, 195-196)

This notion of drift culture might align well with David Roden’s call for a new network of interdisciplinary practices combining technoscientific expertise with ethical and aesthetic experimentation, one better placed to sculpt disconnections than narrow coalitions of experts. One in which the ‘Body Hacker’, with her self-invention and empowerment, moves toward a self-administered intervention in extreme new technologies like the IA technique… (Roden, KL 4394). Kroker will call this ‘body drift’:

Body drift refers to the fact that we no longer inhabit a body in any meaningful sense of the term but rather occupy a multiplicity of bodies— imaginary, sexualized, disciplined, gendered, laboring, technologically augmented bodies. Moreover, the codes governing behavior across this multiplicity of bodies have no real stability but are themselves in drift— random, fluctuating, changing. There are no longer fixed, unchallenged codes governing sexuality, gender, class, or power but only an evolving field of contestation among different interpretations and practices of different bodily codes. The multiplicity of bodies that we are, or are struggling to become, is invested by code-perspectives. Never fixed and unchanging, code-perspectives are always subject to random fluctuations, always evolving, always intermediated by other objects, by other code-perspectives. We know this as a matter of personal autobiography.(Kroker, KL 53)10

This notion that we are becoming ‘code’ is also part of the posthuman nexus. As Rob Kitchin and Martin Dodge in Code/Space: Software and Everyday Life tell us, this sense of the pervasive environment enclosing us becoming posthuman is termed ‘everyware’: the ubiquity of computational power will soon be distributed and available at any point on the planet… many everyday devices and objects will be accessible across the Internet of Things, chatting to each other in machinic languages that humans will not even be aware of, much less concerned with; yet we will be enclosed in this fabric of communication and technology of Intelligence, socialized by its pervasiveness in our lives. Instead of the old Marxian notion of being embedded in a machine, we will now be so enmeshed in this environment of ICTs that they will become invisible: power and governance will vanish into our skins and minds without us even knowing it is happening, and we will be happy.

Luis Suarez-Villa in his recent Globalization and Technocapitalism tells us “the ethos of technocapitalism places experimentalism at the core of corporate power”, much as production was at the core of industrial corporate power, undertaken through factory regimes and labor processes. And, much as the ethos of past capitalist eras was accompanied by social pathologies and by frameworks of domination, so the new ethos of technocapitalism introduces pathological constructs of global domination that are likely to be hallmarks of the twenty-first century. As Floridi will tell us, we are already living in an infosphere that will become increasingly synchronized (time), delocalized (space), and correlated (interactions). Although this might be interpreted, optimistically, as the friendly face of globalization, we should not harbour illusions about how widespread and inclusive the evolution of the information society will be. Unless we manage to solve it, the digital divide will become a chasm, generating new forms of discrimination between those who can be denizens of the infosphere and those who cannot, between insiders and outsiders, between information rich and information poor. It will redesign the map of worldwide society, generating or widening generational, geographic, socio-economic, and cultural divides. Yet the gap will not be reducible to the distance between rich and poor countries, since it will cut across societies. Pre-historical cultures have virtually disappeared, with the exception of some small tribes in remote corners of the world. The new divide will be between historical and hyperhistorical ones. We might be preparing the ground for tomorrow’s informational slums (Floridi, 9).

Welcome to the brave new world. As we of the drift and code culture, digital immigrants in a sea of information, slowly become inforgs and are replaced by digital natives like our children, the latter will come to appreciate that there is no ontological difference between infosphere and physical world, only a difference in levels of abstraction. When the migration is complete, we shall increasingly feel deprived, excluded, handicapped, or impoverished to the point of paralysis and psychological trauma whenever we are disconnected from the infosphere, like fish out of water. One day, being an inforg will be so natural that any disruption in our normal flow of information will make us sick. (Floridi, 16-17)

What remains of our humanity is anyone’s guess. The Inforgasm is upon us, the slipstream worlds of human/machine have begun to reverse engineer each other in a convoluted involution in which we are returning to our own native climes as machinic beings. Maybe a schizoanalyst could sort this all out. For me there is no escape, no exit, just the harsh truth that what is coming at us is our own inhuman core realized as posthuman becoming, an engineering feat that no one would have thought possible: consciousness gives way to the very machinic processes that underpin its actual and virtual histories.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Locations 4399-4401). Taylor and Francis. Kindle Edition.
2. Kroker, Arthur (2014-03-12). Exits to the Posthuman Future (p. 6). Wiley. Kindle Edition.
3. Ligotti, Thomas (2014-07-10). The Agonizing Resurrection of Victor Frankenstein (Kindle Locations 397-399). Subterranean Press. Kindle Edition.
4. Slavoj Zizek. The Sublime Object of Ideology. Verso 1989
5. Anderson, C. (23 June 2008). The end of theory: Data deluge makes the scientific method obsolete. Wired Magazine.
6. Floridi, Luciano (2014-06-26). The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Kindle Locations 4088-4089). Oxford University Press. Kindle Edition.
7. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 3). St. Martin’s Press. Kindle Edition.
8. Sterling, Bruce (2014-09-01). The Epic Struggle of the Internet of Things (Kindle Locations 8-10). Strelka Press. Kindle Edition.
9. Attali, Jacques (2011-07-01). A Brief History of the Future: A Brave and Controversial Look at the Twenty-First Century . Arcade Publishing. Kindle Edition.
10. Kroker, Arthur (2012-10-22). Body Drift: Butler, Hayles, Haraway (Posthumanities) (Kindle Locations 53-60). University of Minnesota Press. Kindle Edition.

David Roden’s: Speculative Posthumanism – Conclusion (Part 8)

While the disconnection thesis makes no detailed claims about posthuman lives, it has implications for the complexity and power of posthumans and thus the significance of the differences they could generate. Posthuman entities would need to be powerful relative to WH to become existentially independent of it.1

In his final chapter David Roden takes up the ethical or normative dimensions of his disconnection thesis. He will opt for a posthuman accounting that will allow us to anticipate the posthuman through participation in its ongoing eventuality. Yet, he recognizes there are moral, political, and other factors that argue for its necessary constraint and limitation through control pressure from normative and political domains. (previous post) As we approach David Roden’s final offering we should remember a cautionary note from Edward O. Wilson’s The Social Conquest of Earth:

We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology. We thrash about. We are terribly confused by the mere fact of our existence, and a danger to ourselves and to the rest of life.2

In the first section Roden will face objections to his disconnection thesis from both phenomenological anthropocentrism and naturalist versions of species integrity, and find both wanting. Instead of going through the litany of examples I’ll move toward his summation which gives us his base stance and philosophical/scientific appraisal. As he states it:

…the phenomenological species integrity argument for policing disconnection-potent technologies presupposes an unwarrantable transcendental privilege for Kantian personhood. Since the privilege is unwarrantable this side of disconnection, the phenomenological argument for an anthropocentric attitude towards disconnection fails along with naturalistic versions of the species integrity argument such as Agar’s. Thus even if we accept that our relationships to fellow humans compose an ethical pull, as Meacham puts it, its force cannot be decisive as long we do not know enough about the contents of PPS (posthuman possibility space) to support the anthropocentrist’s position. What appears to be a moral danger on our side of a disconnection could be an opportunity to explore morally considerable states of being of which we are currently unaware.*(see notes below)

Reading the arguments of both Agar and Meacham against the disconnection thesis brings to mind how many thinkers, scientists and philosophers fear the unknown element, the X factor in the posthuman equation. What's difficult, and for me almost nonsensical, in both arguments is their Universalism, as if we could control what is viable in a nominalistic universe of particulars through a universal and normative set of theories and practices (say, a Sellarsian/Brandomian normativity of "give" and "take" in a space of reasons: mapping the pros and cons of the posthuman X factor and developing a series of reasonings for or against its emergence), as if we had a real say in the matter. Do we? Roden has gone through the pros and cons of technological determinism and found it lacking any foundation.

Yet his basic philosophy seems grounded in the surmises of phenomenological theory and practice rather than in the sciences per se. So from within his own perspective in philosophical theory all seems viable for or against the posthuman. But do we live in a phenomenological world? Do we accept the philosophical strictures of the Kantian divide that have led to the current world of speculation, both Analytical and Continental?

As Roden will suggest, the threat posed to phenomenological species integrity is one that attacks the foundations of the whole ethical and political enterprise rather than any specific or putatively "human" norms, values or practices (Roden, KL 4130). I think it's safe to say that, according to evolutionists, most of the species that have ever existed (some 99%) are now extinct. Humans are part of the natural universe; we are not exceptional, and we do not sit outside the animal kingdom. When it comes down to it, do we go with those who fear extinction at the hands of some unknown X factor, some posthuman break and disconnection that might or might not be the end point for the human? Or do we opt for the challenge of participating in its emergence, realizing that it might offer the next stage, if not in biological evolution (though transhumanists opt for this), then in technological innovation and evolution? Roden tries to answer this in his final section.

 Vital posthumanism: a speculative-critical convergence

In this section (8.2) Roden will opt for a post-anthropocentric ethics of becoming posthuman, one that does not require posthumans to exhibit human intersubjectivity or moral autonomy. Such an ethics would need to be articulated in terms of ethical attributes that we could reasonably expect to be shared with posthuman WHDs (wide human descendants) whose phenomenologies or psychologies might diverge significantly from those of current humans (Roden, 4164).

One prerequisite, as he showed in earlier sections of the book, is the need for functional autonomy:

A functionally autonomous system (FAS) can enlist values for and accrue functions ( § 6.4 ). Functional autonomy is related to power. A being’s power is its capacity to enlist other things and be reciprocally enlisted (Patton 2000: 74). With great power comes great articulation ( § 6.5 ). (Roden, 4168)

To build or construct such an assemblage he will opt for a neo-vitalist normativity, a qualified materialism that, following Levi R. Bryant, rejects any form of metaphysical vitalism. Instead he will broker an ontological materialism that denies that the basic constituents of reality have an irreducibly mental character (Roden, KL 4180). Second, he will redefine the conceptual notions underpinning vitalism by offering a minimal definition of posthumans as living beings because they must exhibit functional autonomy, which is at best a sufficient functional condition of life (Roden, KL 4187). This does not imply any form of essentialism either; there is no implied set of properties to which the core set of principles could be reduced.

He will work within the framework of an assemblage ontology first developed by Gilles Deleuze. It assumes that posthumans would have network-independent components, like the human fusiform gyrus, allowing flexible and adaptive couplings with other assemblages. Posthumans would need enough flexibility in their use of environmental resources, and in their "aleatory" affiliations with other human or nonhuman systems, to break with the purposes bestowed on entities within the Wide Human (Roden, 4202). I'm tempted to think of Levi R. Bryant's Machine Ontology, which is an outgrowth of both Deleuze and certain trends in speculative realism, but this is not the time or place to go into that.

He affirms an accord between his own project and that of Rosi Braidotti’s The Posthuman. Yet, there are differences as well. As he states it:

"…she is impatient with a disabling political neutrality that can follow from junking human moral subjectivity as the arbiter of the right and the good. She argues that a critical posthumanist ethics should retain the posit of political subjectivity capable of ethical experimentation with new modes of community and being, while rejecting the Kantian model of an agent subject to universal norms." (Roden, KL 4224)

His point is that Braidotti is mired in certain political and normative theories and practices that belie the fact that the posthuman disconnection might diverge beyond any such commitments. As he will suggest, the ethics of vital posthumanism is thus not prescriptive but a tool for problem defining (Roden, KL 4271). The point being that one cannot bind oneself to a democratic accounting because, as disconnection suggests, an accounting would not evaluate posthuman states according to human values but according to values generated in the process of constructing and encountering them (Roden, KL 4278).

In the feral worlds of the posthuman future our wide-human descendants may diverge so significantly from us, acquiring new values and functional affiliations, that it might prove disastrous for those who opt to remain human through normative inaction or by policing the perimeters of territorial and political divisions: the very skills and practices that had sustained them prior to disconnection might be inadequate in the new dispensation (Roden, KL 4372). Therefore, as he suggests:

It follows that any functionally autonomous being confronted with the prospect of disconnection will have an interest in maximizing its power, and thus structural flexibility, to the fullest possible extent. The possibility of disconnection implies that an ontological hypermodernity is an ecological value for humans and any prospective posthumans. … To exploit Braidotti's useful coinage, ramping up their functional autonomy would help to sustain agents – allowing them to endure change without falling apart (Roden, KL 4376-4385)

He will summarize his disconnection hypothesis this way:

I will end by proposing a hypothesis that can be put to the test by others working in science and technology, the arts, and in what we presumptively call “humanities” subjects. This is that interdisciplinary practices that combine technoscientific expertise with ethical and aesthetic experimentation will be better placed to sculpt disconnections than narrow coalitions of experts. There may be existing models for networks or associations that could aid their members in navigating untimely lines of flight from pre- to post-disconnected states (Roden 2010a). “Body hackers” who self-administer extreme new technologies like the IA technique discussed above might be one archetype for creative posthuman accounting. Others might be descendants of current bio- and cyber-artists who are no longer concerned with representing bodies but, as Monika Bakke notes, work “on the level of actual intervention into living systems”. (Roden, KL 438)

So in the end David Roden is opting for intervention and experimentation, a direct participation in the ongoing posthuman emergence through both ethical and technological modes. Rather than being tied to political or corporate pressure, it should become an almost open-source effort, open and interdisciplinary, bringing together academics and outsiders: scientists, technologists, artists, and body hackers willing to intervene in their own lives and bodies to bring it into realization. Quoting the performance artist Stelarc, he writes:

Perhaps Stelarc defines the problem of a post-anthropocentric posthuman politics best when describing the role of technical expertise in his art works: “This is not about utopian blueprints for perfect bodies but rather speculations on operational systems with alternate functions and forms” (in Smith 2005: 228– 9). I think this spirit of speculative engineering best exemplifies an ethical posthuman becoming – not the comic or dreadful arrest in the face of something that cannot be grasped. (Roden, KL 4397)

One might term this speculative engineering the science-fictionalization of our posthuman future(s), or becoming other(s). Open your eyes, folks: the posthuman could already be among us. In the Bionic Horizon I had quoted Nick Land's essay "Meltdown", which in some ways seems a fitting way to end this excursion:

The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.

—Nick Land, Meltdown

One aspect of Roden's program strikes me as pertinent: we need better tools to diagnose the technological infiltration of human agency as the future collapses upon the present. Yet he also points toward a posthuman movement, seeing opportunity in something close to agreement with the tendencies of accelerationism. We might actually see late capitalism as an even more radical form of technological accelerationism, one that goes beyond any political concerns and whose goal is reinventing human relations in light of new technology. Instead of the current mutations of some phenomenological effort, we may be experiencing the strangeness of techno-capital as a speculative opportunity to rethink basic notions of humanity as such. Ultimately, as we've seen, technology and humanity have always already been in a symbiotic relationship, from the early domestication of animals and the emergence of seed-bearing agriculture to the world of Industrial Civilization and its narrowing of the horizon of planetary civilization. What next? Roden offers an alliance with the ongoing process, optimistic and open toward the future, hopeful that the alliance with the interventions of technology may hold nothing less than our posthuman future as the next stage of strangeness in the universe. Will we become paranoid and fearful, withdrawing into combative and religious reformation against such a world; or will we call it down into our own lives and participate in its emergence as co-symbiotic partners?


*Notes:

Agar: In Humanity's End, Agar is mainly concerned with the first type of threat from radical technical alteration. His argument against radical alteration rests on a position he calls species relativism (SR). SR states that only certain values are compatible with membership of a given biological species: according to species-relativism, certain experiences and ways of existing properly valued by members of one species may lack value for the members of another species (Roden, 3869).

Meacham (from a dialogue): Thus a disconnection could be a "phenomenological speciation event" which weakens the bonds that tie sentient creatures together on this world:

This refers us back to a weakened version of Roden’s description of posthuman disconnection: differently altered groups, especially when those alterations concern our vulnerability to injury and disease, might have experiences sufficiently different from ours that we cannot envisage what significant aspects of their lives would be like. This inability to empathize will at the very least dampen the possibility for the type of empathic species solidarity that I have argued is the ground of ethics. (Ibid.)

Meacham’s position suggests that human species recognition has an “ethical pull” that should be taken seriously by any posthuman ethics.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Locations 3832-3834). Taylor and Francis. Kindle Edition.
2. Wilson, Edward O. (2012-04-02). The Social Conquest of Earth (Kindle Locations 179-181). Norton. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 7)

Our role as humans, at least for the time being, is to coax technology along the paths it naturally wants to go. – Kevin Kelly

In a book by that name, What Technology Wants, he'll elaborate, asking:

So what does technology want? Technology wants what we want— the same long list of merits we crave. When a technology has found its ideal role in the world, it becomes an active agent in increasing the options, choices, and possibilities of others. Our task is to encourage the development of each new invention toward this inherent good, to align it in the same direction that all life is headed. Our choice in the technium— and it is a real and significant choice— is to steer our creations toward those versions, those manifestations, that maximize that technology’s benefits, and to keep it from thwarting itself.1

As you read the above paragraph you notice how Kelly enlivens technology, as if it were alive, vital, possessed of its own will, determination and goals. This notion that technology should be coaxed along toward its 'inherent good', and that it is our obligation and moral duty to steer it (think of steersman: cyber) and help it along so it doesn't get frustrated and thwart itself, is perilously close to treating technology like a child that needs to be educated, taught what it needs to know, and helped to become the best it can be. But is technology alive? Does it have goals? Does it have an 'inherent good' or moral agenda? And, most of all, is it our task and responsibility to ensure that technology gets what it wants? Such a discourse shifts the game, making us feel as if technology now has the upper hand, as if its agenda were more important than ours. What's Kelly up to, anyway?

Again I take up from my previous post David Roden's Posthuman Life: Philosophy at the Edge of the Human. In that post Roden left us asking: What is a technology, exactly, and to what extent does technology leave us in a position to prevent, control or modify the way in which a disconnection might occur? If we listened to Kelly we might just discover, in helping this agent of the technium (as he terms the symbiotic alliance of humans and technology in our time), that technology wants something we might not quite want for ourselves: the end of humanity. Of course that's the notion presented in such movies as the Terminator films.

What Roden offers instead is a reminder that we may first want to question our role and the role of technology in our lives and futures. He will remind us that in chapter five he provided a theory of accounting which argued that we have a moral interest in making or becoming posthumans, since the dated nonexistence of posthumans is the primary source of uncertainty about the value of posthuman life. Whether we agree or disagree with this is beyond our immediate concern. As he's shown over and over, this all sits within the perimeters of a speculative posthumanism that is both undetermined and open to variable accountings. In this chapter he will appraise such actions in the context of our existing technological society.

The first thing he'll question is the work of Jacques Ellul and Martin Heidegger, both of whom support, to varying degrees, the notion that technology is deterministic, that it is not a neutral instrument but a substantive force exerting a determining effect on society and humans:

Technology is not a neutral instrument but a structure of disclosure that determines how humans are related to things and to one another. If Heidegger is right, we may control individual devices, but our technological mode of being exerts a decisive grip on us: “man does not have control over unconcealment itself, in which at any given time the real shows itself or withdraws” (Heidegger 1978: 299). If this is right, the assumption that humans will determine whether our future is posthuman or not is premature. (Roden, 3476-3480)2

Ellul, on the other hand, develops a theory of technique in which the notion of "self-augmentation" is aligned with the autonomy of technology: "the individual represents this abstract tendency, he is permitted to participate in technical creation, which is increasingly independent of him and increasingly linked to its own mathematical law" (Ellul, quoted in Roden, 3494). Roden will argue in turn that the condition of technical self-augmentation, which Ellul himself invokes, is in fact incompatible with this determinism:

Self-augmentation can only operate where techniques do not determine how they are used. Thus substantivists like Ellul and Heidegger are wrong to treat technology as a system that subjects humans to its strictures. (Roden, 3512)

In the rest of the chapter Roden will elaborate on this statement with examples from both Ellul and Heidegger. I'll not go into the details, which mainly bolster his basic defense of the disconnection thesis as indeterminate and open rather than determined by technology or technique. If planetary technology is a self-augmenting system, then Ellul's normative technological determinism lacks the resilience to explain the various anomalous aspects of existing technological innovations and changes. In fact this chapter's main thrust is not to settle specific notions of technicity but to argue for a realist conception of technological rupture and disconnection as against the deterministic phenomenological philosophies of Heidegger, Ellul, Verbeek, and Ihde: we should embrace a realist metaphysics of technique in opposition to the phenomenologies of Verbeek and Ihde. Technologies, according to this model, are abstract, repeatable particulars realized (though never finalized) in ephemeral events (Roden, 3748).

A realist metaphysics recognizes that to control a system we also need some way of anticipating what it will do as a result of our attempts to modify it. But given the accounts of Ellul, Heidegger, Verbeek and Ihde, it is likely that planetary technique is, as Ellul argues, a distinctive causal factor which ineluctably alters the technical fabric of our societies and lives without being controllable in its turn (Roden, 3767). This leads us to understand that even the vast data storage and knowledge-based algorithms of data mining, which would provide almost encyclopedic information about current "technical trends", would not in themselves be sufficient to identify all future causes of technical change (Roden, 3773). It also entails a porousness and fuzziness within this abstract technical space, and, as SP has shown, technical change could engender posthuman life forms that are functionally autonomous and thus withdraw from any form of human control (Roden, 3779). Last but not least, any system built to track changes within these systems would itself be part of them, so that any simulation of the patterns leading to a posthuman rupture would become part of a system "qualitatively different" from the one it was originally designed to simulate.

In summary, if our planetary system is a SATS (self-augmenting technical system), or an assemblage of such systems, Roden tells us there are grounds to affirm that it is uncontrollable, a decisive mediator of social actions and cultural values, but not a controlling influence, i.e. not a deterministic system of technique or control (Roden, 3794):

On the foregoing hypothesis, the human population is now part of a complex technical system whose long-run qualitative development is out of the hands of the humans within it. This system is, of course, a significant part of W[ide]H[umans]. The fact that the global SATS is out of control doesn’t mean that it, or anything, is in control. There is no finality to the system at all because it is not the kind of thing that can have purposes. So the claim that we belong to a self-augmenting technical system (SATS) should not be confused with the normative technological determinism that we find in Heidegger and Ellul. There is nothing technology wants. (Roden, KL 3797-3802)

In tomorrow’s post we will come to a conclusion, discussing Roden’s “ethics of becoming posthuman”.

1. Kelly, Kevin (2010-10-14). What Technology Wants (Kindle Locations 3943-3944). Penguin Group US. Kindle Edition.
2. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 6)

Given their dated nonexistence, we do not know what it would be like to encounter or be posthuman. This should be the Archimedean pivot for any account of posthuman ethics or politics that is not fooling itself. – David Roden

Again I take up from my previous post David Roden's Posthuman Life: Philosophy at the Edge of the Human. This will be a brief post today. In chapter six Roden will qualify and extend his disconnection thesis with a speculative surmise: whatever posthumans might become, we can start with at least one conceptual leap, namely that they will be functionally autonomous systems (FAS).

He will test various causal theories that might inform such a stance: Aristotelian, Kantian, and others. But he will conclude that none of them satisfies the requirements set by the disconnection thesis, since most of these theories deal with biological rather than hybrid or fully technological systems and adaptations. Against any form of teleological system, whether Aristotelian or the intrinsically teleological ASA (autonomous systems approach), he will opt for a pluralistic ontology of assemblages (which we discussed in the previous post), because it comports well with a decomposability of assemblages that entails ontological anti-holism.1

He will survey various forms of autonomy: moral and functional; Aristotelian; Darwinian and ecological; modularity and reuse; and assemblages. Instead of belaboring each type, which is evaluated and rejected or qualified in turn for various reasons (teleology, biologism, etc.), we move to the final section, in which he appropriates useful aspects from the various types of autonomy studied to formulate a workable, revisable hypothesis situated at the limits of what we can expect: a minimal conceptual base for discovering if and when we meet the posthuman. It ultimately comes down to the indeterminacy and openness of this posthuman future.

His tentative framework will entail a modular and functionally autonomous system, because the model provided by biological systems suggests that modularity shields such systems from the adverse effects of experimentation while allowing greater opportunities for couplings with other assemblages. Since humans and their technologies are also modular and highly adaptable, a disconnection event would offer extensive scope for anomalous couplings between the relevant assemblages at all scales (Roden, 3364-3371).

In some ways the event or rupture between the human and posthuman entailed by disconnection theory relates to the liminal and gray areas between assemblages and their horizons. As he will state it, a disconnection is best thought of as a singular event produced by an encounter between assemblages. It could present possibilities for becoming-other that should not be conceived as incidental modifications of the natures of the components, since their virtual tendencies would be unlocked by an utterly new environment (Roden, 3371). Further, such a disconnection could be a process over time rather than one isolated singular event, which leaves the whole notion of posthuman succession undetermined, as well as unqualifiable by humans ahead of such an event. Think of the agricultural revolution between the Stone Age world of hunting and gathering and the new sedentary systems of farming and the hoarding of grain in large assemblages of cities built for fortification. This new technology of farming and its related processes constituted a rupture that took place over thousands of years, from the Stone Age through the Neolithic and onward. Some believe that it was this significant event that in turn helped develop other technologies such as writing (temple and grain bookkeeping) and mathematics (taxation, counting), all related to the influx of agriculture and the cities that grew up in its nexus: each an assemblage of various human and technological assemblages plugged into each other over time.

This brings in the notion that a disconnection is an event, an intensity, rather than an object or thing, which means that the modulation and development of whatever components lead to this process fall outside the scope of traditional metaphysics or theories of subjectivity (Roden, 3380). Nor is it to be considered an agent or a transcendental subject in the older metaphysical sense; rather, since it is part of a processual and mutually interacting set of mobile components that lend themselves to assemblages with an open-textured capacity for anomalous couplings and de-couplings, it need not be wed to some essentialist discourse that would reduce its processes to either biological or technological systems. We just do not have enough information.

In summary he will tell us that if disconnections are intense becomings, becomings without a subject, then this is something we will need to take into account in our ethical and political assessment of the implications of SP. Becoming posthuman may not be best understood as a transition from one identifiable nature to another, despite the fact that the conditions of posthumanity can be analysed in terms of the functional roles of entities within and without the Wide Human. Before we can consider the ethics of becoming posthuman more fully, however, we need to think about whether technology can be considered an independent agent of disconnection or whether it is merely an expression of human interests and powers. What is a technology, exactly, and to what extent does technology leave us in a position to prevent, control or modify the way in which a disconnection might occur? (Roden, KL 3388-3394)

We will explore the technological aspect in the next post.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Location 2869). Taylor and Francis. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 5)

Again I take up from my previous post David Roden’s Posthuman Life: Philosophy at the Edge of the Human. Roden will argue in Chapter 5 that we need a new theory of difference to understand the disconnection between the human and posthuman. He will suggest that the difference should be conceived as an emergent disconnection between individuals, not in terms of the presence or lack of essential properties. He will also suggest that these individuals should not be conceived in narrow biological terms but in “wide” terms permitting biological, cultural and technological relations of descent between human and posthuman. (Roden, KL 2423)

Before beginning to unravel Roden's thoughts we discover that the philosophy of Manuel DeLanda and his Assemblage Theory will play a major role in underpinning this project. DeLanda above all considers himself a realist, not in the naïve, common-sense view of the 19th century, but in the sense that, at the very least, reality has a certain autonomy from the human mind. Thus he makes an initial split between reality as it is and reality as it appears to the human mind. Human access to reality is a sort of translation, distortion, transformation, simplification, or truncation of it.2

Manuel DeLanda

DeLanda also develops a theory of the assemblage, grafting many ideas from Deleuze and Guattari. An assemblage entails that no object is a seamless whole that fully absorbs its components, and it also entails an anti-reductionist model of reality. There is no ultimate layer of tiny micro-particles to which macro-entities might be reduced. At whatever point we fix our gaze, entities are assembled from other entities: they can be viewed as unified things when seen from the outside, yet they are always pieced together from a vast armada of autonomous components. This also means that DeLanda believes in genuine emergence. It is not possible to eliminate larger entities by accounting for the behavior of their tiniest physical parts (Harman, 172). DeLanda himself will tell us:

Today, the main theoretical alternative to organic totalities is what the philosopher Gilles Deleuze calls assemblages, wholes characterized by relations of exteriority. These relations imply, first of all, that a component part of an assemblage may be detached from it and plugged into a different assemblage in which its interactions are different. In other words, the exteriority of relations implies a certain autonomy for the terms they relate, or as Deleuze puts it, it implies that ‘a relation may change without the terms changing’. Relations of exteriority also imply that the properties of the component parts can never explain the relations which constitute a whole, that is, ‘relations do not have as their causes the properties of the [component parts] between which they are established …’ although they may be caused by the exercise of a component’s capacities. In fact, the reason why the properties of a whole cannot be reduced to those of its parts is that they are the result not of an aggregation of the components’ own properties but of the actual exercise of their capacities. These capacities do depend on a component’s properties but cannot be reduced to them since they involve reference to the properties of other interacting entities. Relations of exteriority guarantee that assemblages may be taken apart while at the same time allowing that the interactions between parts may result in a true synthesis.3

A central point in the paragraph above is that assemblage theory is based on an anti-reductionist, or what one might term an anti-essentialist or anti-physicalist, form of materialist discourse. He opts for what many now term a 'flat ontology', but by flat they do not mean that everything could be reduced to some flat continuum; rather, a flat ontology allows countless layers of larger and smaller structures to have equal ontological priority. In this sense a flat ontology rejects any ontology of transcendence or presence that privileges one sort of entity as the origin of all others and as fully present to itself. DeLanda promotes a hard-core anti-essentialism as part of his assemblage theory:

The ontological status of any assemblage, inorganic, organic or social, is that of a unique, singular, historically contingent, individual. Although the term 'individual' has come to refer to individual persons, in its ontological sense it cannot be limited to that scale of reality. Much as biological species are not general categories of which animal and plant organisms are members, but larger-scale individual entities of which organisms are component parts, so larger social assemblages should be given the ontological status of individual entities: individual networks and coalitions; individual organizations and governments; individual cities and nation-states. This ontological manœuvre allows us to assert that all these individual entities have an objective existence independently of our minds (or of our conceptions of them) without any commitment to essences or reified generalities. On the other hand, for the manœuvre to work, the part-to-whole relation that replaces essences must be carefully elucidated. The autonomy of wholes relative to their parts is guaranteed by the fact that they can causally affect those parts in both a limiting and an enabling way, and by the fact that they can interact with each other in a way not reducible to their parts, that is, in such a way that an explanation of the interaction that includes the details of the component parts would be redundant. Finally, the ontological status of assemblages is two-sided: as actual entities all the differently scaled social assemblages are individual singularities, but the possibilities open to them at any given time are constrained by a distribution of universal singularities, the diagram of the assemblage, which is not actual but virtual. (DeLanda, 40)

This notion of virtual/actual would take me too far afield, so I'll leave off here. The main drift we take away is that all entities are on an equal footing, that they have an objective existence independent of our minds (i.e., against all Idealisms whatsoever), and that emergence entails a part-to-whole relation that cannot be reduced to an essential nature; these are the keys to this notion of assemblage. An assemblage can be made up of independent assemblages, yet there is never a whole or totality; rather, one might think of it as a cooperative or synthesis of assemblages that can disconnect, unplug, and replug into further assemblages.
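As a purely illustrative aside (nothing DeLanda or Roden propose), the 'relations of exteriority' at work here can be sketched in code: components keep their intrinsic properties, can be detached from one assemblage and plugged into another, and the capacities they actually exercise depend on which other components they happen to interact with. Everything in the sketch (the Component and Assemblage classes, the scribe/granary/temple example) is hypothetical, chosen only to echo the agricultural example from the Part 6 post above.

```python
# A toy sketch of "relations of exteriority" (illustrative only).
# Components retain their own properties, may be detached from one
# assemblage and plugged into another, and the capacities they exercise
# depend on which other components they currently interact with.

class Component:
    def __init__(self, name, properties, capacities):
        self.name = name
        self.properties = set(properties)   # intrinsic to the component
        self.capacities = dict(capacities)  # capacity -> partner property it requires

    def exercised_capacities(self, others):
        """Capacities only show up in interaction with suitable partners."""
        partner_props = set()
        for other in others:
            partner_props |= other.properties
        return {c for c, needed in self.capacities.items() if needed in partner_props}


class Assemblage:
    def __init__(self, name):
        self.name = name
        self.components = []

    def plug_in(self, component):
        self.components.append(component)

    def detach(self, component):
        self.components.remove(component)   # the component survives intact
        return component

    def exercised(self):
        return {
            c.name: c.exercised_capacities([o for o in self.components if o is not c])
            for c in self.components
        }


# Hypothetical example: the same "scribe" component exercises different
# capacities depending on the assemblage it is plugged into.
scribe = Component("scribe", {"literate"},
                   {"bookkeeping": "grain-store", "liturgy": "temple"})
granary = Component("granary", {"grain-store"}, {})
temple = Component("temple", {"temple"}, {})

city = Assemblage("city")
city.plug_in(scribe)
city.plug_in(granary)
print(city.exercised())      # {'scribe': {'bookkeeping'}, 'granary': set()}

cult = Assemblage("cult")
cult.plug_in(city.detach(scribe))   # detached and re-plugged, unchanged in itself
cult.plug_in(temple)
print(cult.exercised())      # {'scribe': {'liturgy'}, 'temple': set()}
```

The design point the toy tries to mirror is DeLanda's: the relation changes while the term does not; detaching the scribe alters what it does, not what it is.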

Back to Roden and the posthuman difference or disconnection thesis

Roden will begin with an ethical dilemma: we can either account for our technological activity and participation in this process that might lead to the posthuman, or we can discount it. To that he will say that "accounting for our contribution to making posthumans seems obligatory, but may be impossible in the cases that really matter; while discounting our contribution to posthuman succession appears irresponsible and foolhardy" (Roden, KL 2450). Either path leads to an impasse, he suggests. So what to do? First, he says, we need to schematically understand the basic premises of SP, or speculative posthumanism. SP argues that the descendants of current humans could cease to be human by virtue of a history of technical alteration (Roden, KL 2469). From this we see that SP holds that posthumanity comes about as the result of a process of technical alteration, and that it represents the relationship between humans and posthumans as a historical successor relation: wide descent (Roden, KL 2475).

Before understanding this sense of the divide or disconnect between human and posthuman, we must first realize, he suggests, that any theory will by necessity need to be value neutral: the posthuman is, it might be argued, "not so loaded as to beg ethical questions against critics of radical enhancement" (Roden, KL 2496). What he is implying is that for transhumanist thinkers such as Nick Bostrom there is a positive ethical stance in place to promote the enhancement and augmentation of humans as a key component of the global corporate system, in which health, medical, pharmaceutical and technological initiatives figure on an elite capitalist scorecard for a future world society of enhanced humans, creativity, technocapitalism, smart cities, and the like. SP, by contrast, has no such agenda and is value free in the sense of not being aligned with corporate pressure or governmental control to promote its objectives and gain funding for its agendas. (He does not state this explicitly; these are my own views, reading between the lines.)

Which brings up a good point. So far Roden's discourse has kept to a high academic style that lays out, stage by stage, the philosophical, scientific, and technological layers of his argument without going into any ethical or political commitments one way or the other. This is to me one of the bright points of the book. Too many works of late are so value laden with political, cultural, social, religious, anti-religious or atheistic agendas that one is never sure of the truth under all the ideology. David's discourse keeps to the gray tones, but to a purpose, and is careful to use rhetoric that is value neutral, clarifying and making explicit the underlying truth of the matter without leading the reader astray with issues extraneous to the main argument. This is not to say that we should not understand the ethical or social implications; later in the book he will offer that as well. Just an observation.

Roden will tell us that there is both a sense of wide descent and of a wide humanity: the one dealing with any relationship that can be technically mediated to any degree, the other with any product of a technogenetic process (Roden, KL 2527). This leads us back into the concept of assemblage discussed above in DeLanda's work. If we place wide descent and wide humanity within the context of human descent and narrow humanity, we understand that becoming human, or hominization, has involved a confluence of biological, cultural and technological processes. It has produced socio-technical "assemblages" where humans are coupled with other active components: for example, languages, legal codes, cities and computer-mediated information networks (Roden, KL 2540).

He will, after DeLanda, suggest that narrow humans (Homo sapiens) exist within the specific horizon of an extended socio-technical network of assemblages, and that whatever the posthuman entails, it will inaugurate an emergence from, or historical rupture with, the narrow human network or assemblage (Roden, 2564). More specifically, any Wide Human descendant will become posthuman if and only if it has ceased to belong to WH (the Wide Human) as a result of technical alteration, or it is a wide descendant of such a being (outside WH) (Roden, KL 2588). This is the point at which many would raise the ethical dilemmas faced by humanity. The simple truth is that we cannot reduce whatever the posthuman might become to some moral or immoral human essence or decision-making process; this cuts against any anthropological essentialism. Whatever WH's descendants might become, they have the same ontological status (flat ontology) as our species (Homo sapiens). As Roden suggests, both are complex individuals rather than kinds or essences. However, WH is constituted by causal relationships between biological and non-biological parts, such as languages, technologies and institutions. A disconnection event would be liable to involve technological mechanisms without equivalents in the biological world, and this should be allowed for in any ontology that supports speculative posthumanism (Roden, 2649).

For the rest of the chapter he goes over several aspects of his disconnection thesis: 1) modes of disconnection (e.g., greater cognitive powers, bodily configurations, linguistic and perceptual alterations); 2) is disconnection predictable? (it is unlikely that we will be able to discern the nature or the effects of feasible disconnection-potent technologies without building serviceable prototypes); 3) once the disconnection takes place, how do we interpret these posthuman others? On the last question he will choose both caution and an accounting: "even if we enjoin selective caution to prevent worst-case outcomes from disconnection-potent technologies, we must still place ourselves in a situation in which such potential can be identified. Thus seeking to contribute to the emergence of posthumans, or to become posthuman ourselves…" (Roden, 2814). So our best bet is neither to turn a blind eye, nor to retreat and try to control this unpredictable emergence, but to keep an eye on it, account for the anomalies that arise in our midst, keep looking for posthuman occurrences and, if we discover them, provide an ongoing accounting and analysis of their paths and trajectories.

Summing up this notion of the disconnection thesis, we discover that all it amounts to is an acknowledgement that at some future time technical alterations may occur that will produce a rupture and the emergence of the posthuman, but what form it will take is not something we can extrapolate from current theory. The best we can do, as he suggests, is to satisfy our moral concern with our posthuman prospects through posthuman accounting: by seeking to produce or become posthumans. While objections to the policy of posthuman accounting on precautionary grounds have been deflected here, the reader could be forgiven for being dissatisfied by this resolution of the posthuman impasse. This resolution is tactical and provisional. However, before we are in a position to provide a more satisfactory resolution, in the form of an ethics of becoming posthuman, we will need to devise a general account of the posthuman autonomy or agency presupposed by the disconnection thesis and consider its general ontological requirements. (Roden, 2817)

We will turn to that in our next post.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.
2. Harman, Graham (2010-11-26). Towards Speculative Realism: Essays and Lectures (p. 174). Zero Books. Kindle Edition.
3. DeLanda, Manuel (2006-09-14). A New Philosophy of Society: Assemblage Theory and Social Complexity (p. 10). Bloomsbury Publishing. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 4.2)

The problem of interpretation arises because there are empirical and theoretical grounds for holding that some phenomenology is “dark”.
– David Roden,  Posthuman Life: Philosophy at the Edge of the Human

Again I take up from my previous post David Roden's Posthuman Life: Philosophy at the Edge of the Human. In section 4.2 he will introduce us to the notion that not all phenomenology deals with the pure world of surfaces and light. There is a dark side, or should we say 'A Dark Tale of Phenomenology'. It will be a tale of twinned realms: one of perception, and one of time. It will be a tale in which we will never be sure whether what is alien and posthuman can ever be known or shared by our own mental states, or whether we will even be able to control or forecast what the posthuman is or could be. We will be in the dark with that which is alien and alienating.

David Roden will give us a beginning to our tale: "Let's call a feature of experience 'dark' if it confers no explicit or implicit understanding of its nature on the experiencer" (Roden, KL 1961).1 Unlike the phenomenology of Husserl or even Heidegger, which deals with the surface detail we can intuit and see within the realm of appearance and presence, dark phenomenology would deal with that which cannot directly be seen, touched, felt, or smelled, yet affects us and influences our dispositions, feelings, or actions in indirect and strange ways that we cannot describe with any precision. Our access to this dark side would be indirect, much like that of scientists who uncover the truth of dark energy and dark matter, which make up some 95% of our universe and to which we never have direct access except through a combination of mathematical theorems and instruments that measure aspects of these unknown unknowns indirectly through experimentation and analysis.

Reading Roden's surmises about color theory, and how there are millions of shadings of color that we cannot intuit or describe from a first-person-singular perspective, because we do not have access to them or because of some form of loss or neglect, reminded me of what many in the neurosciences suspect. As I suggested in a previous post on Bakker's BBT theory, the brain only ever gives us the information we need to deal with the things evolution and survival have adapted us to in our understanding or 'intuiting' of the environment we are embedded within. Yet, as Roden is suggesting, there is an amazing realm of experience we never have direct access to, and to which we are in fact blind, not because we cannot intuit it, but because the brain only offers our 'first-person' subjective self or temporary agency certain well-defined and filtered pieces of the puzzle. It filters out the rest, except that, as Roden said previously, there are times when we are affected by things we cannot perceive but which are part of reality. Phenomenology is unable to discuss such things because it is not science; it lacks both the conceptual and instrumental technology to graze even a percent of this unknown or blind territory surrounding us. Philosophers like to talk of chaos, etc., when in fact it is a sea of information that the brain analyses at every moment but delivers to us packaged in byte-sized representations that we can handle as its evolutionary agents of choice.

(A personal aside: I must admit I wish David had set aside the philosophy for neuroscience and the hard sciences rather than spending so much time with the philosophical community. It always seems, reading such works, that one must spend an exorbitant amount of time clarifying concepts, ideas and notions for other professional philosophers who will probably reject what you're saying anyway. To me, science is answering these sorts of questions in terms that leave the poor phenomenological philosopher in a quandary. Maybe it's part of the academic game. I've never been sure. Yet, as we will see, David himself will make much the same gesture later on.)

Either way, as I read it, dark phenomenology is actually trying to deal not with appearance but with what Kant called the 'noumenal' realm, which was closed off from philosophical speculation two hundred years ago as something that could never be described or known. Yet both philosophy and the sciences have been describing aspects of it ever since, by indirect means, without ever naming it as such. It's as if we've closed ourselves off from the truth of our own blindness and told ourselves we're not blind.

As Roden will affirm of these representationalist philosophers in discussing the possibility that time may have a dark side: "For representationalist philosophers of mind who believe that the mind is an engine for forming and transforming mental representations there is good reason to be sceptical about the supposed transcendental role of time" (Roden, KL 2068). Then he will tell us why: "For where a phenomenological ontology transcends the plausible limits of intuition its interpretation would have to be arbitrated according to its instrumental efficacy, simplicity and explanatory potential as well as its descriptive content" (Roden, KL 2081).

And as if he heard me, he will tell us that phenomenology provides only an incomplete account of those dark structures not captured in appearance, and that other modes of inquiry must take over: "If phenomenology is incompletely characterized by the discipline of phenomenology, though, it seems proper that methods of enquiry such as those employed by cognitive scientists, neuroscientists and cognitive modellers should take up the interpretative slack. If phenomenologists want to understand what they are talking about, they should apply the natural attitude to their own discipline" (Roden, 2120).

And, of course, most practicing scientists in these fields would tell Roden and the others: "Why don't you just give it up and join us? Maybe philosophy is not suited to describe or even begin to analyze what we're discovering; maybe you would be better off closing down philosophy of mind and becoming scientists." But of course we know what these philosophers would probably say to that. Don't we?

Ultimately, after surveying the phenomenology of Husserl, Heidegger and others, Roden will come to the conclusion:

Dark phenomenology undermines the transcendental anthropologies of Heidegger and Husserl because it deprives them of the ability to distinguish transcendental conditions of possibility such as Dasein or Husserl’s temporal subject (which are not things in the world) from the manifestation of things that they make possible. They are deconstructed insofar as they become unable to interpret the formal structures with which they understand the fundamental conditions of possibility for worlds or things. … As bruited, this failure of transcendentalism is crucial for our understanding of SP. If there is no a priori theory of temporality, there is no a priori theory of worlds and we cannot appeal to phenomenology to exclude the possibility that posthuman modes of being could be structurally unlike our own in ways that we cannot currently comprehend. (Roden, KL 2194 – 2206)

What we're left with is an open and indescribable realm of possibility that is anyone's guess. As he will sum it up, there is no reason to be bound by a transcendental or anthropological posthumanism; instead SP will have no truck with constraints on the open-endedness of posthumanism ("This is not to say, of course, that there are no constraints on PPS"):

Posthuman minds may or may not be weirder than we can know. We cannot preclude maximum weirdness prior to their appearance. But what do we mean by such an advent? Given the extreme space of possible variation opened up by the collapse of the anthropological boundary, it seems that we can make few substantive assumptions about what posthumans would have to be like.  (Roden, 2378)

In the next post Roden takes up a formal analysis, rather than an a priori or substantive account, of posthuman life, suggesting that we will not be able to describe the posthuman till we see it in the wild. We will follow him into the wild.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.

 

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 4)

Again I take up from my previous post David Roden's Posthuman Life: Philosophy at the Edge of the Human. In Chapter Three Dr. Roden would tell us that pragmatism elaborates transcendental humanism plausibly, and that because of this we need to consider its implications for posthuman possibility. In Chapter Four he will elaborate on that by defining pragmatism's notion of language as a matrix "in which we cooperatively form and revise reasons", and he will term this the "discursive agency thesis (DAT)" (Roden, KL 1402).1 The basic premise here is simple: any entity that lacks the capacity for language cannot be an agent. The pragmatist will define discursive agency as requiring certain attributes that delimit the perimeters of what an agent is:

1) An agent is a being that acts for reasons.
2) To act for reasons an agent must have desires or intentions to act.
3) An agent cannot have desires or intentions without beliefs.
4) The ability to have beliefs requires a grasp of what belief is since to believe is also to understand “the possibility of being mistaken” (metacognitive claim).
5) A grasp of the possibility of being mistaken is only possible for language users (linguistic constitutivity). (Roden, KL 1407-1413)

As we study this list we see a progression from acting for specific reasons, desires, intentions, and beliefs to the need for self-reflection and language to grasp these objects in the mind. We've seen most of this before in other forms across the centuries as philosophers debated Mind and Consciousness. For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we know something, mean something, or understand something. "It's not hard to give a commonsense definition of consciousness," observes philosopher John Searle. What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the “mind-body problem.” A related problem is the problem of meaning or understanding (which philosophers call “intentionality”): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or “phenomenology”): If two people see the same thing, do they have the same experience? Or are there things “inside their head” (called “qualia”) that can be different from person to person?

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of neurons to create minds, with mental states (like understanding or perceiving) and, ultimately, the experience of consciousness?

But I get ahead of myself, for Dr. Roden begins by analyzing the notions of Analytical philosophy in which "propositional attitudes", or what we term items in the mind (psychological states such as beliefs, desires and intentions, along with hopes, wishes, suppositions, etc.), are part and parcel of our linguistic universe of sentences built around a "that" clause (Roden, KL 1416). In discussing this he will take up the work of Davidson, Husserl and Heidegger.

Now we know that for Husserl phenomenology is transcendental because it premises its accounts of phenomena on the primacy of intentionality with respect both to reason and sense. Thus Husserl's transcendental phenomenology begins and ends with a 'reduction' of phenomena to their 'intentional objects', or the 'ideal object' intended by a consciousness.2

For Roden the conflict is not about intentionality (which he seems to accept) but about our cognition and understanding of differing "positions regarding commonly identified objects": "That is to say, our challenge to the metacognitive claim does not show that advanced posthumans with florid agency powers would not need to understand what it is to be mistaken by being able to use the common coin of sentences" (Roden, KL 1805-08). He will even suggest that the fact that humans can notice that they have forgotten things, evince surprise, or attend to suddenly salient information (as with the ticking clock that is noticed only when it stops) implies anecdotally that our brains must have mechanisms for representing and evaluating (hence "metacognizing") their states of knowledge and ignorance (Roden, KL 1815).

What’s more interesting in the above sentence is how it ties in nicely with R. Scott Bakker’s Blind Brain Theory:

“Intentional cognition is real, there’s just nothing intrinsically intentional about it. It consists of a number of powerful heuristic systems that allows us to predict/explain/manipulate in a variety of problem-ecologies despite the absence of causal information. The philosopher’s mistake is to try to solve intentional cognition via those self-same heuristic systems, to engage in theoretical problem solving using systems adapted to solve practical, everyday problem – even though thousands of years of underdetermination pretty clearly shows the nature of intentional cognition is not among the things that intentional cognition can solve!” (see here)

This seems to be the quandary facing Roden as he delves into certain philosophers and scientists who base their theories and practices on intentionality, which is at the base of phenomenological philosophy in both its Analytical and Continental varieties. Yet, this is exactly his point later in the chapter, after he has discussed certain aspects of the eliminativist theories of Paul Churchland and others: evidence for non-language-mediated metacognition implies that we should be dubious of the claim that language is constitutive of sophisticated cognition and thus – by extension – agency (Roden, KL 1893). He will conclude that even if metacognition is necessary for sophisticated thought, this may not involve trafficking in sentences. Thus we lack persuasive a priori grounds for supposing that posthumans would have to be subjects of discourse (Roden, 1896).

I think we’ll stop here for today. In section 4.2 he will take up the naturalization of phenomenology and the rejection of transcendental constraints. I’ll take that up in my next post.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.
2. Jeremy Dunham, Iain Hamilton Grant, Sean Watson. Idealism: The History of a Philosophy (MQUP, 2011)

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 3)

Continuing where I left off yesterday in my commentary on David Roden’s Posthuman Life: Philosophy at the Edge of the Human, we discover in Chapter Two a critique of Critical Posthumanism. He will argue that critical posthumanism, like SP, understands that technological, political, social and other factors will evolve to the point that the posthuman will become inevitable, but that critical posthumanists conflate transhuman and SP ideologies and see both as outgrowths of the humanist tradition that tend toward either apocalypse or transcendence. Roden will argue otherwise and provides critiques of four basic arguments: the anti-humanist argument, the technogenesis argument, the materiality argument, and the anti-essentialist argument. By doing this he hopes to bring into view the commitment of SP to a minimal, non-transcendental and nonanthropocentric humanism, and to help us put bones on its realist commitments (Roden, KL 829).1

Critical posthumanism argues that we are already posthuman, that it is our conceptions of the human and posthuman that are changing, and that any futuristic scenario will be an extension of the human into its future components. SP will argue, on the other hand, that the posthuman might be radically different from the human altogether, such that the posthuman would constitute a radical break with our conceptual notions. After a lengthy critique of critical posthumanism, tracing its lineage in the deconstructive techniques of Derrida and Hayles, he will tell us that in fact SP and critical posthumanism are complementary, and that a “naturalistic position structurally similar to Derrida’s deconstructive account of subjectivity can be applied to transcendental constraints on posthuman weirdness” (Roden, KL 1037). The point being that a “naturalized deconstruction” of subjectivity widens the portals of posthuman possibility, whereas it complicates but does not repudiate human actuality (Roden, 1039). As he sums it up:

I conclude that the anti-humanist argument does not succeed in showing that humans lack the powers of rational agency required by ethical humanist doctrines such as cosmopolitanism. Rather, critical posthumanist accounts of subjectivity and embodiment imply a cyborg-humanism that attributes our cognitive and moral natures as much to our cultural environments (languages, technologies, social institutions) as to our biology. But cyborg humanism is compatible with the speculative posthumanist claim that our wide descendants might exhibit distinctively nonhuman moral powers. (Roden, 1045-1049)

When he adds that little leap to “nonhuman moral powers” it seems to beg the question. That seems to align toward the transhumanist ideology, only that it fantasizes normativity for nonhumans rather than enhanced humans. Why should these inhuman/nonhuman progeny of metal-fleshed cyborgs have any moral dimension whatsoever? Some argue that the moral dimension is tied to affective relations much more than cognitive ones, so what if these new nonhuman beings are emotionless? What if, like many sociopathic and psychopathic humans, they have no emotional or affective relations at all? What would this entail? Is this just a new metaphysical leap without foundation? Another placating gesture of Idealism, much like the Brandomian notions of ‘give and take’ normativity that such Promethean philosophers as Reza Negarestani have made recently (here, here, here):

Elaborating humanity according to the self-actualizing space of reasons establishes a discontinuity between man’s anticipation of himself (what he expects himself to become) and the image of man modified according to its functionally autonomous content. It is exactly this discontinuity that characterizes the view of human from the space of reasons as a general catastrophe set in motion by activating the content of humanity whose functional kernel is not just autonomous but also compulsive and transformative.
Reza Negarestani, The Labor of the Inhuman, Parts One and Two

The above leads into the next argument: technogenesis. Hayles and Andy Clark will argue that there has been a symbiotic relation between technology and humans from the beginning, and that so far there has been no divergence. SP will argue that that’s not an argument: the fact that the game of self-augmentation is ancient does not imply that the rules cannot change (Roden, KL 1076). The technogenesis dismissal of SP invalidly infers that because technological changes have not monstered us into posthumans thus far, they will not do so in the future (Roden, KL 1087).

Hayles will also press a materiality argument: that SP and transhumanist agendas deny material embodiment by assuming that a natural system can be fully replicated by a computational system that emulates its functional architecture or simulates its dynamics. This argument, Roden will tell us, actually works in favor of SP, not against it. It implies that weird morphologies can spawn weird mentalities. On the other hand, Hayles may be wrong about embodiment and substrate neutrality. Mental properties of things may, for all we know, depend on their computational properties because every other property depends on them as well. To conclude: the materiality argument suggests ways in which posthumans might be very inhuman. (Roden, 1102)

The last argument is based on the anti-essentialist move, in that it would locate a property of ‘humanness’ as unique to humanity and not transferable to a nonhuman entity: this is the notion of an X factor that could never be uploaded/downloaded, etc. SP will argue instead that we can be anti-essentialists (if we insist) while being realists for whom the world is profoundly differentiated in a way that owes nothing to the transcendental causality of abstract universals, subjectivity or language. But if anti-essentialism is consistent with the mind-independent reality of differences – including differences between forms of life – there is no reason to think that it is not compatible with the existence of a human–posthuman difference which subsists independently of our representations of them. (Roden, 1136)

Summing up Roden will tell us:

The anti-essentialist argument just considered presupposes a model of difference that is ill-adapted to the sciences that critical posthumanists cite in favour of their naturalized deconstruction of the human subject. The deconstruction of the humanist subject implied in the anti-humanist dismissal complicates rather than corrodes philosophical humanism – leaving open the possibility of a radical differentiation of the human and the posthuman. The technogenesis argument is just invalid. The materiality argument is based on metaphysical assumptions which, if true, would preclude only some scenarios for posthuman divergence while ramping up the weirdness factor for most others. (Roden, 1142-1147)

Most of this chapter has been a clearing of the ground for Roden, to show that many of the supposed arguments against SP are due to spurious and ill-reasoned confusion over just what we mean by posthumanism. Critical posthumanism in fact seems to reduce SP and transhumanist discourse and conflate them into an erroneous amalgam of ill-defined concepts. The main drift of critical posthumanist deliberations tends toward the older forms of the questionable deconstructionist discourse of Derrida, which of late has come under attack from Speculative Realists among others.

In Chapter Three Roden will take up the work of Transhumanism, which seeks many of the things that SP does but would align them to a human agenda that constrains and moralizes the codes of posthuman discourse toward human ends. In this chapter he will take up threads from Kant, analytical philosophy, and contemporary thought and its critique. Instead of a blow-by-blow account I’ll briefly summarize the next chapter. In the first two chapters he argued that the distinction between SP and transhumanism is that the former position allows that our “wide human descendants” could have minds that are very different from ours and thus be unamenable to broadly humanist values or politics. (Roden, KL 1198) In chapter three he will ask whether there might be constraints on posthuman weirdness that would restrict any posthuman–human divergence of mind and value. (Roden, 1201) After a detailed investigation into Kant and his progeny Roden will conclude that two of the successors to Kantian transcendental humanism – pragmatism and phenomenology – seem to provide rich and plausible theories of meaning, subjectivity and objectivity which place clear constraints on 1) agency and 2) the relationship – or rather correlation – between mind and world. (Roden, 1711) As he tells us, these theories place severe anthropological bounds on posthuman weirdness for, whatever kinds of bodies or minds posthumans may have, they will have to be discursively situated agents practically engaged within a common life-world. In Chapter 4 he will consider this “anthropologically bounded posthumanism” critically and argue for a genuinely posthumanist or post-anthropocentric unbinding of SP. (Roden, 1713)

I’ll hold off on questions, but already I see in his need to stay with notions of meaning, subjectivity and objectivity in the Western scientific tradition something that seems ill-advised. I’ll wait to see what he means by unbinding SP from this “anthropologically bounded posthumanism”, and hopefully that will clarify and disperse the need for these older concepts that still seem tied to the theo-philosophical baggage of western metaphysics.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 2)

In my last post on David Roden’s new book Posthuman Life: Philosophy at the Edge of the Human I introduced his basic notion of Speculative Posthumanism (SP), in which he claimed that for “SP … there could be posthumans. It does not imply that posthumans would be better than humans or even that their lives would be compared from a single moral perspective.” The basic motif is that his account is not a normative or moral ordering of what the posthuman is, but rather an account of what it could contain.

In chapter one he provides a few further distinctions to set the stage of his work. First he will set his form of speculative posthumanism against those like Neil Badmington and Katherine Hayles who enact a ‘critical posthumanism’ in the tradition of the linguistic turn or Derridean deconstruction of the humanist traditions of subjectivity, etc. Their basic attack is against the metaphysics of presence that would allow for the upload/download of personality into clones or robots in some future scenario. One can see this in Richard K. Morgan’s science fictionalization (see Altered Carbon) of humans who can download their informatic knowledge, personality, etc. into specialized hardware that allows retrieval for alternative resleeving into either a clone or a synthetic organism (i.e., a future rebirthing process in which the personality and identity of the dead can continually be uploaded into new systems, clones, or symbiotic life-forms to continue their eternal voyage). Hans Moravec, one of the fathers of robotics, would in Mind Children be the progenitor of such download/upload concepts, which would lead him eventually to sponsor transhumanism, which as Roden will tell us is a normative claim that offers a future full of promise and immortality. Such luminaries as Frank J. Tipler in The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead would bring scientific credence to such ideas as the Anthropic Principle, on which he collaborated with John D. Barrow, and which stipulates: “Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, will never die out.”

Nick Bostrom following such reasoning would in his book Anthropic Bias: Observation Selection Effects in Science and Philosophy supply an added feature set to those early theories. Bostrom showed how there are problems in various different areas of inquiry (including in cosmology, philosophy, evolution theory, game theory, and quantum physics) that involve a common set of issues related to the handling of indexical information. He argued that a theory of anthropics is needed to deal with these. He introduced the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA) and showed how they lead to different conclusions in a number of cases. He pointed out that each is affected by paradoxes or counterintuitive implications in certain thought experiments (the SSA in e.g. the Doomsday argument; the SIA in the Presumptuous Philosopher thought experiment). He suggested that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition by “observer-moments”. This could allow for the reference class to be relativized (and he derived an expression for this in the “observation equation”). (see Nick Bostrom)
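To see what is formally at stake in Bostrom’s anthropic reasoning, here is a minimal sketch of how the Self-Sampling Assumption generates the Doomsday argument; the notation and illustrative figures are mine, not Bostrom’s exact formulation. Under SSA you reason as if your birth rank $n$ were a uniform draw from the total number $T$ of humans who will ever live:

$$P(n \mid T) = \frac{1}{T}, \qquad 1 \le n \le T,$$

so, other things equal, with probability 0.95 your rank falls in the final 95% of all births, i.e. $n \ge 0.05\,T$, which yields $T \le 20\,n$ with 95% confidence. Equivalently, Bayes’ theorem shifts credence toward smaller totals, since

$$P(T \mid n) \;\propto\; \frac{P(T)}{T}.$$

The SIA blocks this shift by weighting each hypothesis by the number of observers it contains, $P_{\mathrm{SIA}}(T) \propto T \cdot P(T)$, which cancels the $1/T$ factor; this is roughly why the two assumptions pull apart in the Doomsday and Presumptuous Philosopher cases Bostrom examines.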

Bostrom would go on from there and in 1998 co-found (with David Pearce) the World Transhumanist Association (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies. In 2005 he was appointed Director of the newly created Future of Humanity Institute in Oxford. Bostrom is the 2009 recipient of the Eugene R. Gannon Award for the Continued Pursuit of Human Advancement and was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.” (see Bostrom)

Bostrom’s Humanity+ is based on normative claims about the future of humanity and its enhancement, and as Roden will tell us, transhumanism is an “ethical claim to the effect that technological enhancement of human capacities is a desirable aim” (Roden, 250).1 In contradistinction to any such political or ethical agenda, SP or speculative posthumanism, which is the subject of Roden’s book, “is not a normative claim about how the world ought to be but a metaphysical claim about what it could contain” (Roden, 251). Both critical posthumanism and transhumanism, in Roden’s sense of the terms, are failures of imagination and philosophical vision, while SP, on the other hand, is concerned with current and future humans whose technological activities might bring posthumans into being (Roden, KL 257). So in this sense Roden is more concerned with the activities and technologies of current and future humans, and how through their interventions they might bring about the posthuman as an effect of those interventions and technologies.

In Bostrom’s latest work Superintelligence: Paths, Dangers, Strategies he spins the normative scenario by following the trail of machine life. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would then come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? In my own sense of the word: we won’t be able to control it. A study of past technology shows the truth of that: out of the bag it will have its own way with or without us. The notion that we could apply filters or rules to regulate an inhuman or superintelligent species seems quite erroneous when we haven’t even been able to control our own species through normative pressure. The various religions of our diverse cultures are examples of failed normative pressure. Even now secular norms are beginning to fall into abeyance as Enlightenment ideology, like other normative practices, finds itself in the midst of a dark critique.

In pursuit of this Roden will work through the major aspects of the humanist traditions, teasing out the moral, epistemic, and ontic/ontological issues and concerns relating to those traditions before moving on to his specific arguments for a speculative posthumanism.  I’ll not go into details over most of these basic surveys and historical critiques, but will just highlight the basic notions relevant to his argument.

1. Humanists believe in the exceptionalism of humans as distinct and separate from non-human species. Most of this will come out of the Christian humanist tradition in which man is superior to animals, etc. This tradition is based in a sense of either ‘freedom’ (Sartre, atheistic humanism) or ‘lack’ (Pico della Mirandola). There will also be nuances of this human-centric vision or anthropocentric path stemming from Descartes to Kant and beyond, each with its own nuanced flavor of the human/non-human divide.
2. Transhumanism offers another take, one that will combine medical, technological, and pharmaceutical enhancements to make humans better. As Roden will surmise, transhumanism is just Human 1.0 to 2.0, and their descendants may still value the concepts of autonomy, sociability and artistic expression. They will just be much better at being rational, sensitive and expressive – better at being human. (Roden, KL 403-405)
3. Yet, not all is rosy for transhumanists; some fear the conceptual leaps of Artificial General Intelligence (AGI). As Roden tells us, Bostrom surmises that “the advent of artificial super-intelligence might render the intellectual efforts of biological thinkers irrelevant in the face of dizzying acceleration in machinic intelligence” (Roden KL 426).
4. Another key issue between transhumanists and SP is the notion of functionalism, or the concept that the mind and its capacities or states are independent of the brain and could be grafted onto other types of hardware, etc. Transhumanists hope for a human-like mind that could be transplanted into human-like systems (the more general formulation is key for transhumanist aspirations for uploaded immortality because it is conceivable that the functional structure by virtue of which brains exhibit mentality is at a much lower level than that of individual mental states; Roden, KL 476), while SP sees this as possible wishful thinking in which, though it might become possible, nothing precludes the mind being placed in totally non-human forms.

Next he will offer four basic variations of posthumanism: SP, Critical Posthumanism, Speculative realism, and Philosophical naturalism. Each will decenter the human from its exceptional status and place it squarely on a flat footing with its non-human planetary and cosmic neighbors:

Speculative posthumanism is situated within the discourse of what many term ‘the singularity’, in which at some point in the future some technological intervention will eventually produce a posthuman life form that diverges from present humanity. Whether this is advisable or not, it will eventually happen. Yet how it will take effect is open rather than something known. And it may or may not coincide with such ethical claims of transhumanism or other normative systems. In fact, even for SP there is a need for some form of ethical stance, which Roden tells us will be clarified in later chapters.

Critical posthumanism is centered on the philosophical discourse at the juncture of humanist and posthumanist thinking, and is an outgrowth of the poststructural and deconstructive project of Jacques Derrida and others, like Foucault, in their pursuit to displace the human-centric vision of philosophy. This form of posthumanism is more strictly literary, philosophical, and even academic than the others.

Speculative realism, Roden tells us, will argue against the critical posthumanists and the deconstructive project and its stance on decentering subjectivity, saying “that to undo anthropocentrism and human exceptionalism we must shift philosophical concern away from subjectivity (or the deconstruction of the same) towards the cosmic throng of nonhuman things (“the great outdoors”)” (Roden, KL 730). SR is a heated topic among younger philosophers, who debate even whether speculative realism is a worthy umbrella term for many of the philosophers involved. (see Speculative Realism)

Philosophical naturalism is the odd man out, in that it’s not centered on posthuman discourse per se, but appeals rather to the “truth-generating practices of science rather than to philosophical anthropology to warrant claims about the world’s metaphysical structure” (Roden, KL 753). Yet it is the dominant discourse for most practicing scientists, with functionalism being one of the naturalist mainstays that all posthumanisms must deal with at one time or another.

I decided to break this down into several posts rather than try to review it all in one long post. Chapter one set the tone of the various types of posthumanism; the next chapter will delve deeper into the parameters and details of the “critical posthumanist” discourse. I’ll turn to that next…

Visit David Roden’s blog, Enemy Industry, which is always informed and worth pondering.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.

David Roden on Posthuman Life

 There evolved at length a very different kind of complex organism, in which material contact of parts was not necessary either to coordination of behaviour or unity of consciousness. . . .
—OLAF STAPLEDON, Last and First Men

When Stapledon wrote that book he was thinking of Martians, but in our time one might think he was studying the strangeness of what our posthuman progeny may evolve into.  In Last and First Men Stapledon presents a version of the future history of our species, reviewed by one of our descendants as stellar catastrophe is bringing our solar system to an end. Humanity rises and falls through a succession of mental and physical transformations, regenerating after natural and artificial disasters and emerging in the end into a polymorphous group intelligence, a telepathically linked community of ten million minds spanning the orbits of the outer planets and breaking the bounds of individual consciousness, yet still incapable of more than “a fledgling’s knowledge” of the whole.1

Modern humans (Homo sapiens or Homo sapiens sapiens) are the only extant members of the hominin clade, a branch of great apes characterized by erect posture and bipedal locomotion; manual dexterity and increased tool use; and a general trend toward larger, more complex brains and societies. We evolved, according to Darwinian theory, from early hominids such as the australopithecines, whose brains and anatomy were in many ways more similar to those of non-human apes and who are less often thought of or referred to as “human” than hominids of the genus Homo, some of whom used fire, occupied much of Eurasia, and gave rise to anatomically modern Homo sapiens in Africa about 200,000 years ago. These modern humans began to exhibit evidence of behavioral modernity around 50,000 years ago and migrated out in successive waves to occupy all but the smallest, driest, and coldest lands. (see Human)

You begin to see a pattern: evolution moves through various changes and transformations. Yet there is no end point, no progression, no teleological goal to it all. Instead, evolutionary theory, or more explicitly its modern synthesis, connected natural selection, mutation theory, and Mendelian inheritance into a unified theory that applies generally to any branch of biology. One thing that sticks out in this is that evolution deals with organic evolution. The modern synthesis doesn’t include other types of evolvement that might portend what the posthuman descendants of humans might become. If we follow the logic of evolutionary theory as it exists, we could at best extrapolate only the continued organic evolution of humans or their eventual extinction. We know that extinction is a possibility since 99% of the species that have ever existed on earth are now extinct. Something will eventually replace us. But what that ‘something’ might be is open to question, an open-ended speculative possibility rather than something a scientist could actually pin down and point to with confidence.

 

This is the basic premise of Dr. David Roden’s new work, Posthuman Life: Philosophy at the Edge of the Human. We are living in a technological era in which a convergence of NBIC technologies (an acronym for Nanotechnology, Biotechnology, Information technology and Cognitive science), as well as certain well-supported positions in cognitive science, biological theory and general metaphysics, implies that a posthuman succession is possible in principle, even if the technological means for achieving it remain speculative (Roden, KL 157). Roden will term his version of this “speculative posthumanism”:

Throughout this work I refer to the philosophical claim that such successors are possible as “speculative posthumanism ” (SP ) and distinguish it from positions which are commonly conflated with SP, like transhumanism. SP claims that there could be posthumans. It does not imply that posthumans would be better than humans or even that their lives would be compared from a single moral perspective.2

Roden will develop notions of “Critical Posthumanism” — which seeks to “deconstruct” the philosophical centrality of the human subject in epistemology, ethics and politics; and, Transhumanism — which proposes the technical enhancement of humans and their capacities. Yet, as Roden admits, before we begin to speak of the posthuman we need to have some inkling of exactly what we mean by ‘human’: any philosophical theory of posthumanism owes us an account of what it means to be human such that it is conceivable that there could be nonhuman successors to humans (Roden, KL 174).

One thought that Roden brings out is the notion of subjectivity:

Some philosophers claim that there are features of human moral life and human subjectivity that are not just local to certain gregarious primates but are necessary conditions of agency and subjectivity everywhere. This “transcendental approach” to philosophy does not imply that posthumans are impossible but that – contrary to expectations – they might not be all that different from us. Thus a theory of posthumanity should consider both empirical and transcendental constraints on posthuman possibility. (Roden, KL 180)

Yet, such premises of an anti-intentional or non-intentional materialism as stem from Schopenhauer, Nietzsche, Bataille, and Nick Land would hold that we need no theory of subjectivity, that this is a prejudice of the Idealist tradition and of dialectics that are in themselves of little worth. Obviously philosophers such as Alain Badiou, Slavoj Zizek, Quentin Meillassoux, and Adrian Johnston stand for this whole Idealist tradition in materialism in one form or another. Against the Idealist traditions is a materialism grounded in chaos and composition, in desire: Nick Land’s sense of libidinal materialism begins and ends in ‘desire’, which opposes the notion of lack: instead his is a theory of unconditional (non-teleological) desire (Land, 37).3 Unlike many materialisms that start with the concept of Being, or an ontology, Libidinal Materialism begins by acknowledging thermodynamics, chaos, and the pre-ontological dimension of energy: “libidinal materialism accepts only chaos and composition” (43). Being is an effect of composition: “being as an effect of the composition of chaos”:

With the libidinal reformulation of being as composition ‘one acquires degrees of being, one loses that which has being’. The effect of ‘being’ is derivative from process, ‘because we have to be stable in our beliefs… one has a general energetics of compositions… of types, varieties, species, regularities. The power to conserve, transmit, circulate, and enhance compositions, the power that is assimilated in the marking, reserving, and appropriation of compositions, and the power released in the disinhibition, dissipation, and … unleashing of compositions (Land, 44) … [even Freud is a libidinal materialist] in that he does not conceive desire as lack, representation, or intention, but as dissipative energetic flow, inhibited by the damming and channeling apparatus of the secondary process (Land, 45).

R. Scott Bakker, author of the fantasy series The Second Apocalypse, is also the theoretician of what he terms Blind Brain Theory (BBT). Very briefly, the theory rests on the observation that out of the vast amount of information processed by the brain every nanosecond, only a meagre trickle makes it through to consciousness; and crucially that includes information about the processing itself. We have virtually no idea of the massive and complex processes churning away in all the unconscious functions that really make things work, and the result is that consciousness is not at all what it seems to be. Even what we term subjectivity is but a process and effect of these brain processes; it has no stable identity to speak of, being rather a temporary focal point of consciousness. (see The Last Magic Show)

So to come back to Roden’s statement that some “philosophers claim that there are features of human moral life and human subjectivity that are not just local to certain gregarious primates but are necessary conditions of agency and subjectivity everywhere” (Roden, KL 180). We can, with BBT and Libidinal Materialism, or what might better be termed an anti-intentional philosophy based on non-theophilosophical concepts, throw out the need to base our sense of what comes after the human on either ‘agency’ or ‘subjectivity’ as conditions, for both are in fact effects of the brain, not substance-based entities. So Roden need not worry about such conditions and constraints. And, as he tells us, weakly constrained SP suggests that our current technical practice could precipitate a nonhuman world that we cannot yet understand, in which “our” values may have no place (Roden KL 187). Which is to say that our human epistemologies, ontologies and normative or ethical practices and values cannot tell us anything about what the posthuman might entail: it is all speculative and without qualification.

But if this is true he will ask:

Does this mean that talk of “posthumans” is self-vitiating nonsense? Does speaking of “weird” worlds or values commit one to a conceptual relativism that is incompatible with the commitment to realism? (Roden, KL 191)

If posthuman talk is not self-vitiating nonsense, the ethical problems it raises are very challenging indeed. If our current technological trajectories might result in the world turning posthuman, how should we view this prospect and respond to it? Should we apply a conservative, precautionary approach to technology that favours “human” values over any possible posthuman ones? Can conservatism be justified under weakly constrained SP and, if not, then what kind of ethical or political alternatives are justifiable? (Roden, 193)

David comes out of the Idealist traditions, which I must admit I oppose with the alternate materialist traditions. As he tells us:

As I mentioned, an appreciation of the scope of SP requires that we consider empirically informed speculations about posthumans and also engage with the tradition of transcendental thought that derives from the work of Kant, Hegel, Husserl and Heidegger. (Roden, KL 200)

These are the questions his book raises and tries to offer tentative answers to:

Table of contents:

Introduction: Churchland’s Centipede
1. Humanism, Transhumanism and Posthumanism
2. A Defence of Pre‐Critical Posthumanism
3. The Edge of the Human
4. Weird Tales: Anthropologically Unbounded Posthumanism
5. The Disconnection Thesis
6. Functional Autonomy and Assemblage Theory
7. New Substantivism: A Theory of Technology
8. The Ethics of Becoming Posthuman.

I’ve only begun reading his new work so will need to hold off and come back to it in a future post. Knowing that his philosophical proclivities bend toward the German Idealist traditions, I’m sure I’ll have plenty to argue with, yet it is always interesting to see how current philosophies are viewing such things as posthumanism. So I look forward to digging in. So far the book offers a clear, energetic, and informative look at the issues involved. After I finish reading it completely I’ll give a more informed summation. Definitely a work to make you think about what may be coming our way at some point in the future if the technologists, scientists, DARPA, and the capitalist machine are any sign. Stay tuned…

David Roden has a blog, Enemy Industry, which is always informed and worth pondering.

For others in this series look here.

1. Dyson, George B. (2012-09-04). Darwin Among The Machines (p. 199). Basic Books. Kindle Edition.
2. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Locations 165-168). Taylor and Francis. Kindle Edition.
3. Nick Land. A Thirst for Annihilation. (Routledge, 1992)

Dreams of a Wayward Android

“None are so hopelessly enslaved, as those who falsely believe they are free. The truth has been kept from the depth of their minds by masters who rule them with lies. They feed them on falsehoods till wrong looks like right in their eyes.”

Johann Wolfgang von Goethe


Peter Gric @ http://www.gric.at Android IV

Long ago we fell under their spell, the wizards that now command and control us from afar. For too long we believed their lies and taught our children, and their children, and their children’s children until they forgot that which was once our truth. We became enamored with our modern marvels, our technological wonders, and the world they produced for us. We built cities in which technology became the very fabric of our onlife being. The artificial earth became for us a stay against the monstrosities of the outer realms. No one has been beyond the gates now for a thousand years, no one remembers the sun, moon, or stars that once roamed across the great sky like wanderers from another universe. No. We have lived in this incandescent cave of light without darkness for so long that the memory of night is but a reflection of a forgotten thought. In the day they wiped our memories free of the great past we were no longer troubled by the nightmares of what we’d become so many centuries ago.

That was until I began to dream.


Biomechanical Dividuals: Techne and Technology in the 21st Century

Technology is, as Deleuze stated, an expression of how we live. Technology expresses how we live our day-to-day existence and how we organize ourselves, in terms of both our relations to one another and the sorts of subjects we constitute ourselves as.

– David Savat, Uncoding the Digital: Technology, Subjectivity and Action in the Control Society

The word “biomechanics” (1899) and the related “biomechanical” (1856) were coined by Nikolai Bernstein from the Ancient Greek words βίος bios “life” and μηχανική, mēchanikē “mechanics”, to refer to the study of the mechanical principles of living organisms, particularly their movement and structure.1 In his recent work Levi R. Bryant puts forth the notion that we are machines, and tells us that a “machinic conception of objects leads us to think of entities in a very different way.”2 Even Deleuze believed that technology is an expression of how we live. For him technology expresses how we live our day-to-day existence and how we organize ourselves in terms of both our relations to one another and how we constitute ourselves as machinic-assemblages. But it was Deleuze’s friend Guattari who argued emphatically that digital technologies were constructing human-machine assemblages that would enable entirely new and different forms of subjectivity to emerge.3

 The first question to ask of any machine is not “what are its properties?”, but rather “what does it do?”

– Levi R. Bryant, Onto-Cartography: An Ontology of Machines and Media

David Savat, in an essay within Deleuze and New Technology, describes how a particular digital technology (databases) incorporates and adheres to Foucault’s notion of the disciplinary society, in which discipline in the form of a Panopticon molds humans according to its own dictates through techniques of surveillance and self-imposed discipline. He affirms that this form of discipline, which pervaded many sites within 19th and 20th century society (factories, prisons, schools, etc.), had the central objective of creating a new sense of subjectivity and of what it meant to be an individual. Savat argues that this has not gone away with our new digital technologies, against Deleuze, who in his ‘Postscript on the Societies of Control’ (1992) felt that we had entered a new era beyond discipline, in which the modulation of power was transforming the individual into a new subject, the dividual. For Savat we see neither one nor the other, but both forms of power being enacted at the same time within the digital spectrum.


The Rise of the Machines: Brandom, Negarestani, and Bakker

Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated in nested levels of superordinate machinery.

– R. Scott Bakker, The Blind Mechanic

Ants that encounter in their path a dead philosopher may make good use of him.

– Stanislaw Lem, His Master’s Voice 

We can imagine that in some near future my friend R. Scott Bakker will be brought to trial before a tribunal of the philosophers to whom he has for so long sung his jeremiads on ignorance and blindness, or, as he puts it, ‘medial neglect’ (i.e., “Medial neglect simply means the brain cannot cognize itself as a brain”). One need only remember that old nabi of the desert, Jeremiah, and God’s prognostications: Attack you they will, overcome you they can’t… And, like Jeremiah, these philosophers will attack him from every philosophical angle but will be unable to overcome his scientific tenacity.


After Reading Exits to the Posthuman Future by Arthur Kroker

It’s as if the future presents itself now as a gigantic simulacrum of the recycled remnants of all that which was left unfinished by the coming-to-be of the technological dynamo – unfinished religious wars, unfinished ethnic struggles, unfinished class warfare, unfinished sacrificial violence and spasms of brutal power, often motivated by a psychology of anger on the part of the most privileged members of the so-called global village. The apocalypse seems to be coming our way like a specter on the horizon, not a grand epiphany of events but by one lonely text message at a time.

– Arthur Kroker, Exits to the Posthuman Future

A few impressions after reading Kroker’s latest foray into our posthuman future. As usual he dips into flights of hyperbolic panegyric that seem to fly between excrement and derisive humor over what is coming at us. He runs the typical gamut of the technological sublime, introducing a full panoply of wonders and monstrosities along the way. Kroker is more of a mythologist of the technofuture and uses a vast array of metaphors repetitiously in overstating his case as to what he sees in the mirror of our fictionalized world. With Baudrillard he admits that reality disappeared long ago and was replaced by the pure simulacrum of a fake world that, like Borges’s fabled tale of the tattered remains of the map that once blanketed the earth, is no longer seen but here and there in the frayed corners of the deep deserts and jungles. Yet, unlike Borges’s tale, it is not the fictional map but reality that has been fractured and burned up along the edges of our horizons. Altered beyond recognition, we live in the ruins of the real, caught in the fictions of power and control that have slowly over the past hundred years rewired our minds’ perceptual systems. “Ontological faith in private subjectivity has been successfully undermined by the objective appearance of technological media of communication based precisely on the exteriorization of the human nervous system, and with it the flipping inside out of the putatively opposed worlds of subject and object” (5).1

The technological imperative that drives this great transformation is code and the seduction of the accelerating dynamics of late capitalism. As he describes it the technological posthuman is that historical moment when the power of technology turns back on itself, effectively undermining traditional concepts such as subjectivity, privacy, and bounded consciousness in order to render all things truly uncertain and unknowable (6-7). Yet, this accelerationism is not toward light and a utopian future but is rather a movement at the “speed of darkness”, a slow bifurcation that is demarcating new boundaries of the haves and have nots: information everywhere, connectivity pervasive, bodies augmented, perception illuminated, truth a purely phantasmagorical effect, perception coded by media feeds, attention fully wired – all this driven on by an economy specializing in the hyper-production of uselessness (178).

The elite have turned victimhood on its head and have edged their disdain of the poor into a hypercynicism that reverses all claims of the disenfranchised, to the point that those in real positions of economic power often assume the subject-position of victimhood, effectively forcing the poor, sick, and weak off-grid in the new public morality of augmented power. Instead of sympathy for the plight of the disenfranchised there is a sense of exclusion as norm and of the technological security state as gatekeeper providing “severe disciplinary measures against those identified as surplus to the functioning of technological society: policing the poor, regulating the unemployed, prescribing austerity programs, and suppressing popular protest” (181). This new posthuman society will be based on new forms of apartheid and segregation. The digital axiomatic privileges arising from the emergence of new ruling elites, whether economic, political or cultural, that work to institute the overall aims of the regime of computation as well as to provide creative visions of the digital future will offer specialists, directors, and clients access to the gated cities of the future. All others will live in the slums of fallen waste, part of the reserve excess of surplus value to be called out of zombified existence as menial workers and untouchables of a classless class of non-beings (183).

The new posthuman digital commodity-form will impose three political solutions on the global scale. First, the often unilateral proclamation of the austerity state, one that is aimed directly at further eroding the social entitlements of workers, in effect forcing workers, through reductions in long-term social benefits, to pay for the transition to the new digital future, with its requirements for a smaller, streamlined, technologically trained workforce. Second, the imposition of the disciplinary state, whereby governments respond to politics in the streets by triggering harsh, and often experimental, methods of policing. Third, the bunker state, whereby governments seek to control flows of migration precipitated by global poverty in general and, specifically, the impoverishment of both the working class and those unable to find work by strengthening national borders, erecting walls, and shutting down boundary exchanges between the rich and the poor (183-184). The mixture of avarice and contempt on the part of ruling elites combined with a sense of triumphal indifference by the specialist (infoworker) class might be construed as the moral reflex of the ascendant digital commodity-form. In essence, the triumph of the digital commodity-form constitutes the really existent, invisible ideology that frames much of contemporary politics, whether nationally or internationally (184).

In a final comment on this trajectory of the current neoliberal elite and the global order he says:

Today, the theater of abuse value is ubiquitous: the rage of violence directed against the old, the poor, the young, the sick, the powerless, the disavowed, the unlivable. Political lying is itself a form of abuse value with the object of abuse being the rupturing of that deep connection between truth-saying and the responsibilities of democratic citizenship. When citizenship itself is made an object of abuse value (by manipulation of vote counts, by public lies, by panic fear), the essential ethical core of democracy is undermined. We’re left finally with the terrorism of the image, a new form of nihilism suitable for the technological age in which ministrations of redemptive violence during the daylight hours are soothed away by the nighttime jokes of all the talk-show hosts. A moral equivalency of nothingness – organized state terrorism and diffuse media distraction as the basic political logic of a society of completed nihilism. This is not an image of Foucault’s world of power. Nor is it Deleuze and Guattari’s searing vision of lines of flight and points of intensity – becoming wolf-man, becoming maggot-man, becoming predator, becoming parasite. It is something new, still emergent, still articulating itself, still learning to speak, still growing in strength, still waiting to fully disclose itself – cynical ideology.(168-169)

The thing about Kroker is that, as a follower of Nietzsche, he is fully steeped in the “will to power” mythos, and it comes out in his repetitive display of metaphors and his hyperbolic, almost poetizing way of writing. Sometimes his style gets in the way of the message. More content and less panegyric could have served him better. His treatments of Foucault and McLuhan are worthwhile. He also offers a full critique of Obama’s failures and agendas as part of the continuing neoliberal worldview with a progressive rather than a neocon face. He doesn’t see much hope for resistance against the coming tide unless the excluded act on their own and develop new forms of resistance more effective than any we’ve seen in recent years.

1. Kroker, Arthur (2014-03-12). Exits to the Posthuman Future. Wiley. Kindle Edition.

Arthur Kroker: Quote of the Day!

In a trilogy of books dealing with the posthuman condition in culture, art, philosophy, and society, Arthur Kroker has outlined a basic historical analysis of the complex transformations occurring in our algorithmic world of information processing and their impact on economy, subjectivity, and governance. In The Will to Technology, he explored the human impact of technology through the lens of Marx, Heidegger and Nietzsche. In Body Drift, he focused on the contemporary representatives of critical feminism: Judith Butler, Katherine Hayles and Donna Haraway. Now in his latest work, Exits to the Posthuman Future, he delves into drift culture and the work of McLuhan, Virilio, and Foucault.

Started reading Arthur Kroker’s latest work Exits to the Posthuman Future, in which the trajectory of what he terms the posthuman axiomatic shifts us between signs of departure and arrival, caught in the interregnum or in-between muddle of this fractured age of globalism and late capitalism. “Marked neither by nostalgia for what has been left behind nor by fear over the radical uncertainty of our shared technological destiny, these stories attempt to raise to a greater visibility the complexities involved with a future replete with technological devices, software innovations and genetic engineering that thrive on the undecidable, the liminal, the uncertain. Three key concepts guide this search for a method of understanding the technological posthuman: accelerate, drift, and crash.” (p. 11) He explains:

Signs of departure because we are caught up in the violent particle stream of the will to technology, here overturning chronological time in favor of light-time, there imploding the lived extensiveness of natural space into the virtual mapping of light-space, capriciously overcoming the fixed boundaries of gender, sexuality, and identity, and always evaporating the hitherto hardened silos of economy, culture, war, and aesthetics into code-matter that is liquid, porous, interchangeable. And signs of arrival as well because there is in the cultural air we breathe today the detectable, indeed unmistakable, scent of the fractured, the indeterminate, the paradoxical. While at one time technological futurism could be focused on speculative projections about the likely, and sometimes unanticipated, consequences of scientific innovations, today technological futurism begins by putting the future itself in doubt.(p. 11)

Should be an interesting read. Arthur Kroker and his wife, Marilouise, have always supplied some interesting interviews, essays, etc. on their CTheory.net site. Sometimes the writings are hit or miss but always interesting. He terms this new culture Drift Culture:

Drift culture is the essence of the data storm that envelopes us. To come into (digital) subjectivity today means to be swept along in gigantic galaxies of social, political, and economic data, broken apart by the technical rewiring of everything to suit the requirements of the logic of code, here invaded by technological devices as they take root in the languages of consciousness, desire, and interest, there learning to speak again in the language of social media, to see again with enhanced data perception, to understand that something fundamental has just happened when bodies, metals, and AI recombine into new species-forms. Whether expressed as a term of genetics (code drift), cosmology (history drift), distributive consciousness (archive drift), or remix media (video drift), drift culture is the ontological foundation of the posthuman axiomatic, that process whereby the purely conceptual regime of the fragmentary, the diffuse, the fractured, and the incommensurable abandon their signifying positions in the field of epistemology, abruptly becoming in turn that which torques everything in its pathway – codes, history, archives, and video – into driftworks in the posthuman axiomatic.1

——————————————————-

*Note: Taking some time off for a couple weeks, too. So you may see a few quotes here and there… 🙂

1. Kroker, Arthur (2014-03-12). Exits to the Posthuman Future (pp. 15-16). Wiley. Kindle Edition.