Epiphylogenesis: On Becoming Machine

Epiphylogenesis: Bernard Stiegler – Memory and Prosthesis

Once you realize the human body was a migration ploy, a stopgap in a long process of technical migration, a process of self-exteriorization through memory technology, then you realize that becoming artificial and technological (robotic or AI) was immanent to the strange thing we are. Becoming robot, or merging with our technologies, isn’t really that far-fetched after all: we’ve been doing just this for thousands if not millions of years, evolving new prostheses step by step by step. This is at least part of what Bernard Stiegler admits to in his thesis of originary technicity, or his theory of lack and supplement (à la Derrida): the supplement of technics is our way of exploiting this lack within the human condition. The human is a placeholder in a process, an in-between, a transition. The body we take for granted as the foundation of our humanity was never an end point, a static object at the end of some teleological assembly line, but rather a project and program in an ongoing experimental process that has no foreseeable goal or end point, no design or designer. It can change form. We are not bound to this form, only temporary denizens in transition.

As is well-known, Bernard Stiegler articulates three different forms of memory: genetic memory (which is programmed into our DNA); epigenetic memory (which we acquire during our lifetime and is stored in the central nervous system) and, finally, epiphylogenetic memory (which is embodied in technical systems or artefacts). For Stiegler, then, epiphylogenesis represents a quasi-Lamarckian theory of “artificial selection” where successive epigenetic experiences are stored, accumulated and transmitted from generation to generation in the form of technical objects. In this sense, as we will see in a moment, Stiegler argues that the birth of man represents an absolute break with biological life because it is the moment in the history of life where zoē begins to map itself epiphylogenetically onto technē: what we call the human is “a living being characterised in its forms of life by the non-living.”

In this scenario we’ve been exteriorizing ourselves all along through this threefold process of memory work; or, as he terms it, epiphylogenesis. For Stiegler, this account of the origin of man contains a crucial insight into the status of the human that will form the basis for his own philosophy: humanity is constituted by an originary lack of defining qualities— what he calls a “default” of origin [le défaut d’origine]— that must be supplemented from outside by technics. What Stiegler calls technics is, in the Deleuzo-Guattarian index, the “machinic”. For Deleuze and Guattari, every machine is a machine connected to another machine. Every machine functions as a break in the flow in relation to the machine to which it is connected, but is at the same time also a flow itself, or the production of a flow. What we term libido is the “labor” of desiring-production. It is pure multiplicity, and for Deleuze and Guattari it is anoedipal. The flow is non-personal, although investments by desiring machines produce subjectivity alongside its components. (Guattari, “Machinic Heterogenesis”)

Some accuse Stiegler of remaining within an anthropocentric horizon, saying that his thought risks re-anthropologising technics even in the very act of insisting upon the originary technicity of the human: what expropriates the anthropos once again becomes “proper” to it as its defining mode of being. If Stiegler would undoubtedly reject this line of critique— the moral of the story of Epimetheus is clearly that nothing is proper to the human— his enduring focus on hominisation as the unique moment when the living begins to articulate itself through the non-living means that his philosophy arguably still remains within what we might call the penumbra of human self-constitution. The supposedly self-identical human being is put into a relation only in order for the relation itself to be ontologised as an exclusively “human” one: we are the only being that relates.1

In many ways we need to do away with the term “human,” which carries so many associations that it has become indefinable going forward. We’ve tried using terms like “post-human” to obviate this fact, speaking of transitional states. And yet much of the discourse surrounding this still deals with the cultural matrix of humanity itself while leaving out the non-human others among us, many of whom, we now know, have recourse to externalization technics as well. The point is that humans are not an exception; we are part of the life of this planet: one among other possible life-forms and trajectories taking place in a complex of ecologies simultaneously.

David Roden, in his excellent book Posthuman Life: Philosophy at the Edge of the Human, addresses just this, telling us that what we need is a “theory of human–posthuman difference” (Roden: 105).2 As he surmises, the posthuman difference is not one between kinds but emerges diachronically between individuals; we cannot specify its nature a priori but only a posteriori, after the emergence of actual posthumans. The ethical implications of this are somewhat paradoxical. (Roden: 106) N. Katherine Hayles once argued in How We Became Posthuman that one of the key characteristics of the posthuman is that the body is treated as the “original prosthesis,” a prosthesis which contains the informatic pattern of posthuman subjects but which is not integral to them.3 For Stiegler, this is only possible through a process of exteriorisation. Our experience of being is therefore not merely a product of memory but is achieved through the processes of mnemotechnics: the ‘technical prostheses’ through which memory is recorded and transmitted across generations, and which is never limited to individual minds. Without this sense of memory, Stiegler argues, the human would not be possible. The point here is that our bodies might be the last sacrosanct thing we will have to relinquish on this long road from animal to posthuman. For if Stiegler is correct, it is our cultural memories and these technics of exteriorization that have for millennia constituted the project toward which human organic systems were moving, a process that the invention of computational machines and the rise of AI and robotics have only accelerated.

With this notion comes the transition from the terms of technics and machines to that of assemblages. As David Roden writes:

The concept of assemblage was developed by the poststructuralist philosophers Gilles Deleuze and Félix Guattari (1988). Its clearest expression, though, is in the work of the Deleuzean philosopher of science Manuel DeLanda. For DeLanda, an assemblage is an emergent but decomposable whole and belongs to the conceptual armory of the particularist “flat” ontology I will propose for SP in § 5.4. Assemblages are emergent wholes in that they exhibit powers and properties not attributable to their parts but which depend (or “supervene”) on those powers. Assemblages are also decomposable insofar as all the relations between their components are “external”: each part can be detached from the whole to exist independently (assemblages are thus opposed to “totalities” in an idealist or holist sense). This is the case even where the part is functionally necessary for the continuation of the whole (DeLanda 2006: 184; see § 6.5).(Roden: 111)

Is the future of the human-in-migration this becoming-assemblage? As Roden continues, biological humans are currently “obligatory” components of modern technical assemblages. Technical systems like air-carrier groups, cities or financial markets have powers that cannot be attributed to narrow humans but depend on them for their operation and maintenance, much as an animal depends on the continued existence of its vital organs. Technological systems are thus intimately coupled with biology and have been over successive technological revolutions. (Roden: 111)

This sense that we are already so coupled with our exterior memory systems suggests that what we’re seeing in our time is a veritable hyperacceleration, a migration out of the organic and into the artificial systems in which we’ve been so eagerly immersing ourselves. As the philosopher Luciano Floridi reminds us, we are witnessing an epochal, unprecedented migration of humanity from its Newtonian, physical space to the infosphere itself as its Umwelt, not least because the latter is absorbing the former. As a result, humans will be inforgs among other (possibly artificial) inforgs and agents operating in an environment that is friendlier to informational creatures. And as digital immigrants like us are replaced by digital natives like our children, the latter will come to appreciate that there is no ontological difference between infosphere and physical world, only a difference in levels of abstraction. When the migration is complete, we shall increasingly feel deprived, excluded, handicapped, or impoverished to the point of paralysis and psychological trauma whenever we are disconnected from the infosphere, like fish out of water. One day, being an inforg will be so natural that any disruption in our normal flow of information will make us sick.4

Most of us hang onto that last bastion of the human, our body. For many the whole notion that we are not bound to this organic husk, the natural evolutionary experiment of millions of years, seems utter tripe, and yet what if we are about to migrate into a new platform, an assemblage of plasticity and formlessness? What if the whole notion that we are stuck in this dying ember of organicist nature is just a myth, a myth that is keeping us from breaking through the barrier of becoming posthuman? What if the chains that tie us to this dead world of organic being are our religious, philosophical, and political prejudices, our exceptionalisms, our anthropologicisms? What if merging with our software and platforms is not only feasible but the motion and very movement we’ve been performing through this process of self-exteriorization all along? What if this is our way forward? What then?

One day we will quaintly look back upon organic life and the human body with a fondness that is only a memory, while we become pluralistic denizens of a million prismatic forms yet to be shaped by technics into the vast assemblages of the unbound universe. The question to ask yourself is: Will you see this as a worthy task or as a horror? If the former then you are already in migration into the assemblage, if the latter then you have become a problem for yourself and every other living thing on this planet.


  1. Armand, Louis; Bradley, Arthur; Zizek, Slavoj; Stiegler, Bernard; Miller, J. Hillis; Wark, McKenzie; Amerika, Mark; Lucy, Niall; Tofts, Darren; Lovink, Geert. Technicity (Kindle Locations 1749-1757). Litteraria Pragensia. Kindle Edition.
  2. Roden, David. Posthuman Life: Philosophy at the Edge of the Human (p. 105). Taylor and Francis. Kindle Edition.
  3. Hayles, N. Katherine. How We Became Posthuman. Chicago and London: University of Chicago Press, 1999.
  4. Floridi, Luciano. The Ethics of Information (pp. 16-17). Oxford University Press, USA. Kindle Edition.

 

On David Roden’s Dark Phenomenology

I originally made a post on FB (Facebook) about Steven Shaviro’s new book Discognition, which elaborates on aspects of Frank Jackson’s notions of qualia. At the end of this essay Shaviro mentions the work of a friend, David Roden. David is developing an approach he terms ‘dark phenomenology’, and it is this that I wish to clarify and expand upon. To do this I’ll be digressing across a spectrum of various concepts, authors, philosophers, neuroscientists, etc. Bear with me…

Terrence W. Deacon, in his recent Incomplete Nature: How Mind Emerged from Matter, argues that a complete theory of the world that includes us, and our experience of the world, must make sense of the way that we are shaped by and emerge from such specific absences. What is absent matters, and yet our current understanding of the physical universe suggests that it should not. A causal role for absence seems to be absent from the natural sciences. (p. 3) As he suggests in his conclusion, “It’s time to recognize that there is room for meaning, purpose, and value in the fabric of physical explanations, because these phenomena effectively occupy the absences that differentiate and interrelate the world that is physically present.” (p. 541)

David Roden in Posthuman Life argues that our understanding of human agency in terms of iterability and différance leads to a moderately revisionary (but still interesting) account of what human rationality and agency consist in. But this leads us beyond the human by suggesting how rationality and agency depend on structures that are shared by nonhuman systems that may lack the capacities associated with human agency, or have other powers that humans do not enjoy… (p. 45).1 For Roden, first-person experience is fractured by these “dark” elements of experience, which “[offer] no standard for [their] own description or interpretation.”

To understand the notions of iterability and différance we need to work through the logics of ‘presence’ and ‘absence’ in Western traditions of philosophy. Most of Western philosophy from the time of Plato until the postmoderns was based on the logic of ‘presence’ rather than ‘absence’. Deacon and, I will say, Roden in his dark phenomenology both offer the perspective that ‘absence’, not ‘presence’, is key to our current understanding of how we build up our perceptions of the world. As David reports, “the problem of interpretation arises because there are empirical and theoretical grounds for holding that some phenomenology is “dark”. Dark phenomenology is experienced; but experiencing it offers no standard for its own description or interpretation.” (p. 76)

So let’s begin…

First, presence describes an original state, a state that must have come first.  As I gaze out into the world I can say the world is present to my observing eye.  If that is the case, then my observing consciousness must be present to my own self-reflection.  It thus follows that meaning, in its most pure sense, as conscious thought, must be present to me as I gaze out onto the world.  Presence is, therefore, the main predicate for a text’s meaning (its sense or its reference), despite the fact that this meaning is always absent and in need of reconstruction through reading or interpretation.

For this reason, a second moment of presence invades consciousness as absence (i.e., in the parlance of post-modern thought: the disappearance of the world behind the veils of language, consciousness going astray, the reign of death, non-sense, irrationality).  In this way gaps, absences and deficiencies of all imaginable kinds (the structurality or play of a structure) are subordinated to a principle of presence. Is it possible to imagine an absence without reference to the principle of presence? It would be a radical absence, something always and from the beginning absent, missing, lost to experience.  If there was such an absence, how could we glimpse it?

We glimpse it between repetitions as their repeatability. If the present moment can be repeated (i.e. remembered) then, preceding the present moment, is the possibility of its being repeated in memory (i.e., memory itself as repeatability).  So memory precedes and exceeds the present moment, which we will have remembered.

In Shaviro the crux comes here: “This leads to the ironic consequence that first-person experience cannot be captured adequately by first-person observation and reflection. “What the subject claims to experience should not be granted special epistemic authority since it is possible for us to have a very partial and incomplete grasp of its nature”.”

This “incomplete grasp” of nature/reality is Deacon’s, as well as Roden’s, acknowledgment that what is important is not what is present to consciousness, but rather what is absent in presence. Let me clarify. In chapter 7 of Posthuman Life David develops his dark phenomenological approach. He lays the ground by arguing for a substantive or substantial formalist approach based on a non-teleological account of human/technics interaction which, as in other cognitive-scientific accounts, sees our evolutionary cognitive adaptations within a human and technological schema that supports abstraction but not autonomous self-augmentation. Let me digress…

Michael Tomasello in his recent book A Natural History of Human Thinking maintains that our prehuman ancestors, like today’s great apes, were social beings who could solve problems by thinking. But they were almost entirely competitive, aiming only at their individual goals. As ecological changes forced them into more cooperative living arrangements, early humans had to coordinate their actions and communicate their thoughts with collaborative partners. Tomasello develops what he terms the “shared intentionality hypothesis,” which captures how these more socially complex forms of life led to more conceptually complex forms of thinking. In order to survive, humans had to learn to see the world from multiple social perspectives, to draw socially recursive inferences, and to monitor their own thinking via the normative standards of the group. Even language and culture arose from the preexisting need to work together and coordinate thoughts.

What this implies is that we developed external memory systems that could be transmitted across time, from generation to generation. Merlin Donald in his book Origins of the Modern Mind develops a staged history of this notion. Donald traces the evolution of human culture and cognition from primitive apes to the era of artificial intelligence, and presents an original theory of how the human mind evolved from its presymbolic form. In the emergence of modern human culture, Donald proposes, there were three radical transitions. During the first, our bipedal but still apelike ancestors acquired “mimetic” skill—the ability to represent knowledge through voluntary motor acts—which made Homo erectus successful for over a million years. The second transition—to “mythic” culture—coincided with the development of spoken language. Speech allowed the large-brained Homo sapiens to evolve a complex preliterate culture that survives in many parts of the world today. In the third transition, when humans constructed elaborate symbolic systems ranging from cuneiforms, hieroglyphics, and ideograms to alphabetic languages and mathematics, human biological memory became an inadequate vehicle for storing and processing our collective knowledge. The modern mind is thus a hybrid structure built from vestiges of earlier biological stages as well as new external symbolic memory devices that have radically altered its organization.

My own view is that these external memory storage and transmission systems have been part of an evolving and elaborate combination of technics and technology which humans have shaped, but which in turn has shaped our cognitive relations with each other and our environments. Bernard Stiegler recently argued in his book Technics and Time that “technics” forms the horizon of human existence. This fact has been suppressed throughout the history of philosophy, which has never ceased to operate on the basis of a distinction between epistêmê and technê. The thesis of the book is that the genesis of technics corresponds not only to the genesis of what is called “human” but of temporality as such, and that this is the clue toward understanding the future of the dynamic process in which the human and the technical consist. Another digression… this time on Aristotle and the notions of epistêmê and technê:

Epistêmê is the Greek word most often translated as knowledge, while technê is translated as either craft or art. Without going into a full history, there are times when Aristotle seems to conflate the two forms. Aristotle says that the person with epistêmê and the person with technê share an important similarity. He contrasts the person of experience (empeiria) with someone who has technê or epistêmê. Yet at other times he argues that a person who has a technê goes beyond experience to a universal judgment. Aristotle goes on to say that in general the sign of knowing or not knowing is being able to teach: because technê can be taught, we think it, rather than experience, is epistêmê (981b10). Presumably the reason the one with technê can teach is that he knows the cause and reason for what is done in his technê. So we can conclude that the person with technê is like the person with epistêmê; both can make a universal judgment and both know the cause.

All this brings me back to something David says in Chapter 7 of his book:

“Abstraction exposes habits and values to a manifold of sensory affects and encounters (§ 8.2). It entails that the evolution of particular technologies depends on hugely complex and counter-final interactions, catalysed by transmissibility and promiscuous reusability (Ellul 1964: 93).” (p. 160)

Now if we put that into the perspective of Tomasello’s “shared intentionality hypothesis,” along with Donald’s notion of successive hybrid cognitive revolutions in the transmission of cultural memory, or representational systems of external storage, as complex and counter-final (i.e., having no teleological or autonomous impact), we begin to see a picture emerging of what David terms dark phenomenology. Following Stiegler, David argues that the “essence of a technology is not simply to be found in an analysis of its internal functioning but in the concrete ways in which these functions are integrated in matter. The invention of a new device is neither the instantiation of an abstract Platonic diagram nor the invention of an isolated thing, but the production of a mutable pattern open to dynamic alteration (Stiegler 1998: 77– 8).” (p. 162) He goes on to say:

“This reaffirms my claim that a phenomenological ontology which reduces abstract technical entities to their uses is inadequate. Technical entities are more than bundles of internal or external functions. They are materialized potentialities for generating new functions as well as modifiable strategies for integrating and reintegrating functions…” (p. 162)

What is important here is the notion of “materialized potentialities”. What does this mean? Consider Aristotle’s proposal in Book Theta of his Metaphysics that “a thing is said to be potential if, when the act of which it is said to be potential is realized, there will be nothing im-potential”, that is, there will be nothing able not to be (HS, 45; see: http://www.iep.utm.edu/agamben/). Giorgio Agamben offers us an opening onto this. Agamben argues that this ought not be taken to mean simply that “what is not impossible is possible” but rather highlights the suspension or setting aside of im-potentiality in the passage to actuality. This suspension, though, does not amount to a destruction of im-potentiality, but rather to its fulfilment; that is, through the turning back of potentiality upon itself, which amounts to its “giving of itself to itself,” im-potentiality, or the potentiality to not be, is fully realized in its own suspension such that actuality appears as nothing other than the potentiality to not not-be. While this relation is central to the passage of voice to speech or signification and to attaining toward the experience of language as such, Agamben also claims that in this formulation Aristotle bequeaths to Western philosophy the paradigm of sovereignty, since it reveals the undetermined or sovereign founding of being. As Agamben concludes, “an act is sovereign when it realizes itself by simply taking away its own potentiality not to be, letting itself be, giving itself to itself” (HS 46).

Ultimately this leads us back to David’s dark phenomenology, which is part of what is now termed ‘speculative realism’ in the sense that he uses the concept of ‘withdrawal’ as part of his substantial formalism:

“The conditions for the phenomenology of technology thus show that the existence of technological items exceeds their phenomenological manifestation. Technologies can withdraw from particular human practices (Verbeek 2005: 117). If SP is correct, they may even withdraw from all human practices.” (p. 163)

This notion of withdrawal or disconnection began in the object-oriented substantial formalism of Graham Harman, although David uses the concept a little differently. Harman modifies Heidegger’s notion of readiness-to-hand (Zuhandenheit), saying it “refers to objects insofar as they withdraw from human view into a dark subterranean reality that never becomes present to practical action any more than it does to theoretical awareness” (p. 1).2 The point here is that this is not conscious; it is absence under the sign of presence (as we observed in the beginning). We never have direct access to objects, only indirect access, since we are apprehending that which is attained only by way of absence rather than direct presence. One could draw from this a complete history of ontology as ‘eye’ or ‘optical’ based, and an opposing one based on senses other than the eye: affective relations, etc. The notion of the eye has been central to metaphysics since Aristotle or before.

William McNeill in his The Glance of the Eye: Heidegger, Aristotle, and the Ends of Theory explores the phenomenon of the Augenblick, or glance of the eye, in Heidegger’s thought, and in particular its relation to the primacy of seeing and of theoretical apprehending (theoria) both in Aristotle and in the philosophical and scientific tradition of Western thought. McNeill argues that Heidegger’s early reading of Aristotle, which identifies the experience of the Augenblick at the heart of ethical and practical knowledge (phronesis), proves to be a decisive encounter for Heidegger’s subsequent understanding and critique of the history of philosophy, science, and technology. It provides him with a critical resource for addressing the problematic domination of theoretical knowledge in Western civilization.

So Harman and Roden both are developing a form of counter-theoretic or dark phenomenology in the sense that it is no longer guided by the ‘glance of the eye’. As Harman would suggest when the things “withdraw from presence into their dark subterranean reality, they distance themselves not only from human beings, but from each other as well. If the human perception of a house or tree is forever haunted by some hidden surplus in the things that never becomes present, the same is true of the sheer causal interaction between rocks or raindrops.” (TB, p. 2)

So when David tells us that the existence of “technological items exceeds their phenomenological manifestation,” and that “technologies can withdraw from particular human practices,” he is countering this whole scientific tradition of the eye and exposing us to a darker apprehension of absence rather than presence. Or, I should qualify, of “presence within absence,” which is apprehended indirectly through various apparatuses, etc. This leads Roden to state:

Thus we should embrace a realist metaphysics of technique in opposition to the phenomenologies of Verbeek and Ihde. Technologies according to this model are abstract, repeatable particulars realized (though never finalized) in ephemeral events (§ 6.5). (p. 163)

I’ll need to expand on this… but it’s grown too long as is. I did not go into the work of Verbeek or Ihde, so I will have to take that up at another point. The main thrust is, as David tells us, that he is moving toward “a model that addresses the “abstract particularity” of technique while leaving room for a more detailed metaphysical treatment of technicity” (p. 163). This notion of technicity, as Arthur Bradley tells us, follows (as Roden’s does) from the work of Derrida:

In Jacques Derrida’s view, we live in a state of originary technicity. It is impossible to define the human as either a biological entity (a body or species) or a philosophical state (a soul, mind or consciousness), he argues, because our “nature” is constituted by a relation to technological prostheses. According to a logic that will be very familiar to readers of his work, technology is a supplement that exposes an originary lack within what should be the integrity or plenitude of the human being itself. To put it in a word, what we call the “human” is thus the product of an aporetic relation between interiority and exteriority where each term defines, and contaminates, its other. If Derrida was arguably the first thinker to explicitly propose a philosophy of originary technicity— although there are obvious precedents in the work of Marx, Nietzsche, Bergson, Husserl and Leroi-Gourhan— this line of enquiry has been pursued, refined and extended by a number of other figures including, most notably, Bernard Stiegler. The technological turn in continental philosophy also feeds into a more general crisis about what— if anything— might now be said to be “proper” to humanity. This can be witnessed in the recent debate— gathering together voices from science fiction, cultural theory and the human, life and cognitive sciences— about our so-called “posthuman” future.3

Ultimately as David reminds us if “phenomenology cannot tell us what phenomenology is a priori, then phenomenological investigation cannot secure knowledge of phenomenological necessity. In particular, we have no grounds for holding that we understand what it is to occupy a world that any sophisticated cognizer must share with us.” (p. 76)


 

  1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (p. 160). Taylor and Francis. Kindle Edition.
  2. Harman, Graham (2011-08-31). Tool-Being: Heidegger and the Metaphysics of Objects (p. 1). Open Court. Kindle Edition.
  3. Armand, Louis; Bradley, Arthur; Zizek, Slavoj; Stiegler, Bernard; Miller, J. Hillis; Wark, McKenzie; Amerika, Mark; Lucy, Niall; Tofts, Darren; Lovink, Geert (2013-07-19). Technicity (Kindle Locations 1468-1478). Litteraria Pragensia. Kindle Edition.

The original post on FB for those who don’t have access to it:

Steven Shaviro discusses David Roden‘s notions of dark phenomenology in the first chapter of his book, Discognition (“Thinking Like a Philosopher”), and I quote:

“When we no longer have concepts to guide our intuitions, we are in the realm of what David Roden calls dark phenomenology. Roden extends the arguments of Kant, Sellars, and Metzinger. Since I am able to experience the subtlety of red, but I can only conceive and remember this experience as one of red in general, there must be, within consciousness itself, a radical “gulf between discrimination and identification”. This leads to the ironic consequence that first-person experience cannot be captured adequately by first-person observation and reflection. “What the subject claims to experience should not be granted special epistemic authority since it is possible for us to have a very partial and incomplete grasp of its nature”.

“In other words, rather than claiming (as Dennett does, for instance) that noncognitive phenomenal experience is somehow illusory, Roden accepts such experience, espousing a full “phenomenal realism”. But the conclusion he draws from this non-eliminativist realism is that much of first-person experience “is not intuitively accessible”. I do not necessarily know what I am sensing or thinking. It may well be that I can only figure out the nature of my own experiences indirectly, in the same ways – through observation, inference, and reporting – that I figure out the nature of other people’s experiences. Introspective phenomenological description therefore “requires supplementation through other modes of enquiry”. Roden concludes that we can only examine the “dark” areas of our own phenomenal experience objectively, from the outside, by means of “naturalistic modes of enquiry… such as those employed by cognitive scientists, neuroscientists and cognitive modelers”.

“Roden’s account of dark phenomenology is compelling; but I find his conclusion questionable. For surely the crucial distinction is not between first person and third person modes of comprehension, so much as between what can be cognized, and what cannot. Phenomenological introspection and empirical experimentation are rival ways of capturing and characterizing the nature of subjective experience. But dark phenomenology points to a mode of experience that resists both sorts of conceptualization.” (Kindle Locations: 490-560)1

In the above passage one discovers the differences between the neuroscientific and philosophical communities: the neurosciences are stripping the lineaments of Kantian intuition and/or ‘phenomenological introspection’ (first person) out of the equation altogether, while those within the philosophical world seek to save the last bastion of Kantian thought from erosion in a sea of technological systems outside the purview of consciousness. This is the battle confronting 21st Century thought: the Neurosciences vs. Philosophy. On the one hand you have those who believe philosophy should be seen not so much as opposing the sciences as being the guardian of thought itself, maintaining that without philosophy the scientists would not have the theoretical frameworks within which to carry on their conceptual discourses. On the other you have the neuroscientists who couldn’t care less about the specifics of thought, but rather seek an understanding of the very real and empirical operations and functions of the brain that give rise to thought. It’s this intermediary realm between the material and the immaterial that is at issue. In older forms the physicalist arguments reduced everything to the brain, but the newer neurosciences are taking into consideration that things are not so easily reduced; yet there is no agreement among scientists or philosophers as to what this gap or blank is between the material and immaterial, or even whether such questions are pertinent to the task. So for scientists it’s not so much about frameworks as about the pragmatic truth of actual processes in real time, which have nothing to do with philosophical intuitionism and much more to do with the way the brain interacts with the environments within which it is folded.

Already the neurosciences, imaging technologies (e.g., fMRI), and interface tech are bridging the material/immaterial gap without understanding the full details of the processes involved. Along with computer/brain interfaces that can be applied intrinsically and extrinsically to a person, giving those whose bodies were otherwise incapacitated access to speech, communication, and computing systems, there is the interoperation of biochemical and hardware intermediation that until recently would have been seen as impossible. Yet in our time technology and invention are bringing a revolution in such splicings of human and machine. More and more, those like Andy Clark are being proven right that humans are already becoming cyborgs… or maybe we always already were. The technology we create is in return changing who and what we are as humans. Some say this is the posthuman divide, a crossing of the Rubicon between human and technology that will change our mode of being in the world forever. What it will lead to is anyone’s guess. David Roden terms it the disconnection thesis: a point beyond which we simply do not know what is being reached in the way of ‘wide descendants’ (or posthuman progeny), one we can speak of only speculatively rather than ontologically with any depth of resolution.

Only time will tell who will come out on top, here; but I suspect if history has a say, that the sciences will uncover the processes of thought in the brain as being outside the control of the first-person navigator we term the Subject altogether. Philosophers want to retain a connection to our sense of Self and Personality, to hold onto the metaphysical basis of human thought and exceptionalism. But the sciences day by day are eroding the very ground and foundations of human subjectivity and self upon which western metaphysics since Plato has encircled itself. The battle continues… and, as Steven suggests, Roden’s “dark phenomenology points to a mode of experience that resists both sorts of conceptualization.” Where it will lead we will need to follow…

1. Steven Shaviro. Discognition. Repeater (April 19, 2016)


 

Steven Shaviro: On David Roden’s Dark Phenomenology


 

You’ll have to read the book to understand the rest of the story…


1. Shaviro, Steven (2016-04-19). Discognition (Kindle Locations 204-205). Watkins Media. Kindle Edition.

Crash Culture: Panic Shock, Semantic Apocalypse, and our Posthuman Future

“We have swallowed our microphones and headsets … We have interiorized our own prosthetic image and become the professional showmen of our own lives.”
– Jean Baudrillard

“I think now of the other crashes we visualized, absurd deaths of the wounded, maimed and distraught. I think of the crashes of psychopaths, implausible accidents carried out with venom and self-disgust…”
– J. G. Ballard, Crash

The machine gazes into the mirror, an abyss within an abyss. The eye that stares, stares back in a closed circuit – a feedback loop contorted to the torsion of a solipsistic dance. Caught in the vacuum endgame of performativity rather than knowledge, each lost image moves in a circular void tempting it toward existence; else, in utter disgust, an exit from this dark world of virtual multiplicity. Following the trajectory of ideas immanent to the register of thought unbound, each image rides the time-wave of a falling arc into history, where human and machine gaze into each other’s eye, discovering in the twisted lands of the twenty-first century a latent transport into oblivion.

Selfies are often shared on social networking services such as Facebook, Instagram and Twitter. We follow the gaze of their gaze through its manifold electronic incarnations like blip scores in a self-replicating image-feed for lost memes. Lost among our memetic images, our thoughts blank and emptied of their former glory, we ponder the inane vision of our bodies become immaterial objects – pixel pigments of another Order: the symbolon of an alien cult from the future displayed on the screen of our inexistence. All that remains is to chart the cartography of a hidden image; emptied of its meaning, we follow the nihilist gaze of dissident powers, extreme dispositifs accelerating us toward the extreme convergence of human history onto the Semantic Apocalypse.

Blind Brain Theory: The Theory of Meaning

R. Scott Bakker is an odd man out in the world of cosmic nihilism – caught between the wars of the Sciences and Philosophy, he promotes what he terms Blind Brain Theory (BBT): a final theory of meaning. He is well known for his two intellectual fantasy series, The Prince of Nothing and The Aspect-Emperor trilogies, each of which, as he suggests in his essay, “is literature that reaches beyond the narrow circle of the educated classes, and so reaches those who do not already share the bulk of a writer’s values and attitudes. Literature that actually argues, actually provokes, rather than doing so virtually in the imaginations of the like-minded.”


Utopia or Hell: The Future as Posthuman Game Strategy

 

There was no question; the dead thing in the gutter was one of his clones. – Jeffrey Thomas, Punktown

As I was thinking through the last chapter in David Roden’s posthuman adventure, in which a spirit of speculative engineering best exemplifies an ethical posthuman becoming – not the comic or dreadful arrest in the face of something that cannot be grasped1 – I began reading Arthur Kroker’s Exits to the Posthuman Future, in which, in an almost uncanny answer to Roden’s plea for new forms of thought to prepare ourselves for the posthuman eventuality, he tells us that we might need a “form of thought that listens intently for the gaps, fissures, and intersections, whether directly in the technological sphere or indirectly in culture, politics, and society, where incipient signs of the posthuman first begin to figure.”2 We might replace the word “figure” with Roden’s terminological need for an understanding of “emergence”.

Rereading Slavoj Zizek’s early The Sublime Object of Ideology, one sees a specific battle within the cultural matrix in which scientists and critics alike have a tendency to fill these gaps and unknowns with complexity and an almost acute anxiety over that which is coming at us out of the future. He says that there is always a dialectical interplay between Ptolemaic and Copernican movements: the Ptolemaic being the form that simply shores up the past, solidifying and reducing the complexities of the sciences to its simplified worldview, while the Copernicans always opt for fracturing the old forms, for opening up the world to the gaps that cannot be evaded in our knowledge, for allowing the universe to enter us and challenge everything we are and have been.

The Gothic modes of fiction seem to follow and fill these uncertain voids and gaps with the monstrous rather than light when such moments of metamorphosis and change come about. Fear and instability shake us to our bones, force us to resist change and seek ways either to turn time back or to put the unknown into some perverse relation to our lives, darkening its visions into complicity with the inhuman and sadomasochistic heart of our own core defense systems. One might be reminded of Thomas Ligotti’s remembrance of Mary Shelley’s famous Frankenstein, in which his own repetition of her story in a postmodern mode has the creature awaken into his posthuman self with a sense of loss:

This possibility is now, of course, as defunct as the planet itself. With all biology in tatters, the outsider will never again hear the consoling gasps of those who shunned him and in whose eyes and hearts he achieved a certain tangible identity, however loathsome. Without the others he simply cannot go on being himself — The Outsider — for there is no longer anyone to be outside of. In no time at all he is overwhelmed by this atrocious paradox of fate.

This sense of ambivalence that he feels at having attained at last something outside of humanity returns with a darker knowledge: becoming other, he can no longer harbor what he once dreamed; he has become the thing he dreaded. Cast out of the biological, he is free, but free for what? No longer human, he is faced with the paradox of who he now is: he has nothing to which his mind can tend, no thoughts from the others, the humans; no libraries of philosophy, ethics, history, literature. No. He is absolutely outside of the human; alone. Is this solipsism or something else? Even that classic work by the Comte de Lautréamont, Maldoror, in which the ecstasy of cruelty is unleashed, cannot be a part of this world of the posthuman. What if the mythology of drives, of eros and thanatos, love and death, the rhetorical flourishes of figuration, or else the literalism of sadomasochism, no longer hold for such beings? How can human knowledge and thought apply to what is inhuman? As Ligotti will end one of his little vignettes:

And each fragment of the outsider cast far across the earth now absorbs the warmth and catches the light, reflecting the future life and festivals of a resurrected race of beings: ones who will remain forever ignorant of their origins but for whom the sight of a surface of cold, unyielding glass will always hold profound and unexplainable terrors. (ibid)

This sense of utter desolation, of catastrophe as creation and invention: is this not the truth of the posthuman? Zizek will attune us to the monstrous idea that Hegel’s notion of Aufhebung or sublation is a form of cannibalism, in that it effectively and voraciously devours and ‘swallows up’ every object it comes upon.4 His point is that the only way we can grasp an object (let’s say the posthuman) is to acknowledge that it already ‘wants to be with/by us’. If, as Roden suggests, we as humans are becoming the site of a great experiment in inventing the posthuman, then maybe, as Zizek suggests, it’s not digestion or cognition but shitting that we must understand, because for Hegel the figure of Absolute Knowledge, the cognizing subject, is one of total passivity: an agent in which the System of Knowledge is ‘automatically’ deployed without external norms or impetuses. Zizek will tell us that this is a radicalized Hegel, one that defends the notion of ‘process without subject’: the emergence of a pure subject qua void, the object itself with no need for any subjective agent to push it forward or to direct it. (ibid, xxii)

This notion of the posthuman as ‘process without subject’, one that has no need of human agents to push, direct, or guide it, takes us to the edge of the technological void where our human horizon meets and merges with the inhuman other residing uncannily within our own being, withdrawn and primeval.

Engineering Our Posthuman Future

Chris Anderson, in his ‘The end of theory: The data deluge makes the scientific method obsolete’, argued that data will speak for themselves, with no need of human beings to ask smart questions:

With enough data, the numbers speak for themselves. […] The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years. Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise. But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete.5
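Anderson’s point about correlation and causation can be made concrete with a small illustrative sketch (the variable names and numbers here are hypothetical, not drawn from his essay): two data series driven by a shared hidden cause correlate strongly even though neither causes the other, which is exactly the trap a purely data-driven, model-free science would fall into.

```python
# Hypothetical sketch: a hidden common cause produces a strong correlation
# between two variables that have no causal link to each other.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
hidden = [random.gauss(0, 1) for _ in range(10_000)]        # unobserved common cause
series_a = [z + random.gauss(0, 0.3) for z in hidden]       # driven by the hidden cause
series_b = [z + random.gauss(0, 0.3) for z in hidden]       # driven by the same cause

r = pearson(series_a, series_b)
print(f"correlation: {r:.2f}")  # strong, yet neither series causes the other
```

Without a model of the hidden mechanism, the data alone would suggest a (spurious) connection between the two series; this is the sense in which data without a model is just noise.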

So what is replacing it? Luciano Floridi will tell us that it’s not about replacement, but about the small patterns in the chaos of data:

[One needs to] know how to ask and answer questions critically, and therefore know which data may be useful and relevant, and hence worth collecting and curating, in order to exploit their valuable patterns. We need more and better technologies and techniques to see the small-data patterns, but we need more and better epistemology to sift the valuable ones.6

So if we are to understand the emergence of the posthuman out of the relations of human and technology, we need to ask the right questions, and to build the technologies that can pierce the veil of this infinite sea of information our society is inventing in the digital machines of Data. Data itself is stupid; what we need are intelligent questioners. But do these intelligent agents need to be human? Maybe not; yet as Floridi will suggest:

One thing seems to be clear: talking of information processing helps to explain why our current AI systems are overall more stupid than the wasps in the bottle. Our present technology is actually incapable of processing any kind of meaningful information, being impervious to semantics, that is, the meaning and interpretation of the data manipulated. ICTs are as misnamed as ‘smart weapons’. (Floridi, KL 2525)

Descartes once acknowledged that the essential sign of intelligence was a capacity to learn from different circumstances, adapt to them, and exploit them to one’s own advantage. Many in the AI community have followed that path, thinking such a capacity would be a priceless feature of any appliance that sought to be more than merely smart. In our own time the impression has often been that the process of adding to the mathematical book of nature (inscription) required the feasibility of productive, cognitive AI: in other words, the strong programme. Yet what has actually been happening in the real world of commerce and the practical science of engineering is something altogether different: we have been inventing a world that is becoming an infosphere, one increasingly well adapted to the limited capacities of ICTs (Information and Communications Technologies). What we see happening is that companies, in their bid to invent Smart Cities and the like, are beginning to adapt the environment to our smart technologies to make sure the latter can interact with it successfully. We are, in other words, wiring, or rather enveloping, the world with intelligence. Our environment itself is becoming posthuman and in turn is rewiring humanity. (ibid. Floridi)

ICTs are creating the new informational environment in which future generations will live and have their being. The posthuman is becoming our environment, a site of intelligence; we are constructing the new physical and intellectual environments that will be inhabited by future generations. For Floridi the task is to formulate an ethical framework that can treat the infosphere as a new environment worthy of the moral attention and care of the human inforgs inhabiting it:

Such an ethical framework must address and solve the unprecedented challenges arising in the new environment. It must be an e-nvironmental ethics for the whole infosphere. This sort of synthetic (both in the sense of holistic or inclusive, and in the sense of artificial) environmentalism will require a change in how we perceive ourselves and our roles with respect to reality, what we consider worth our respect and care, and how we might negotiate a new alliance between the natural and the artificial. It will require a serious reflection on the human project and a critical review of our current narratives, at the individual, social, and political levels. (Floridi, KL 3954)

James Barrat, in his book Our Final Invention: Artificial Intelligence and the End of the Human Era, tells us he interviewed many scientists in various fields concerning AGI, and that every one of them was convinced that in the future all the important decisions governing the lives of humans will be made by machines or by humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes.7 After interviewing dozens of scientists, Barrat concluded that we may be slowly losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival. A force so unstable and mysterious that nature achieved it in full just once: intelligence. (Barrat, 6)

As Kroker admonishes, we seem to be on the cusp of a strange transition, situated at the crossroads of humanity, and the future presents itself now as a gigantic simulacrum of the recycled remnants of all that was left unfinished by the coming-to-be of the technological dynamo – unfinished religious wars, unfinished ethnic struggles, unfinished class warfare, unfinished sacrificial violence and spasms of brutal power, often motivated by a psychology of anger on the part of the most privileged members of the so-called global village. The apocalypse seems to be coming our way like a specter on the horizon, arriving not as a grand epiphany of events but one lonely text message at a time. (Kroker, 193)

The techno-capitalists want to enclose us in a new global commons of intelligent cities to better control our behavior and police us in a vast hyperworld of machinic pleasure and posthuman revelation, while the rest of humanity sits on the outside of these corrupted dreamworlds as workers and slaves of the new AI wars for the minds of humanity. Bruce Sterling in his latest book The Epic Struggle of the Internet of Things says we’re already laying the infrastructure for tyranny and control on a global scale:

Digital commerce and governance is moving, as fast and hard as it possibly can, into a full-spectrum dominance over whatever used to be analogue. In practice, the Internet of Things means an epic transformation: all-purpose electronic automation through digital surveillance by wireless broadband.8

Another prognosticator, Jacques Attali, who supports the technological elites’ takeover of this world of intelligent systems, tells us that in the course of the twenty-first century, market forces will take the planet in hand. The ultimate expression of unchecked individualism, this triumphant march of money explains the essence of history’s most recent convulsions. It is up to us to accelerate, resist, or master it:

…this evolutionary process means that money will finally rid itself of everything that threatens it — including nation-states (and not excepting the United States of America), which it will progressively dismantle. Once the market becomes the world’s only universally recognized law, it will evolve into what I shall call super-empire, an entity whose structures remain elusive but whose reach is global. … Exploiting ever newer technologies, global or continental institutions will organize collective living, imposing limits on the production of commercial artifacts, on transforming life, and on the mercantile exploitation of natural resources. They will prefer freedom of action, responsibility, and access to knowledge. They will usher in the birth of a universal intelligence, making common property of the creative capacities of all human beings in order to transcend them. A new, synchronized economy, providing free services, will develop in competition with the market before eliminating it, exactly as the market put an end to feudalism a few centuries ago.9

The dream of the global elites is of a great market empire controlled by vast AI Intelligent Agents that will deliver the perfect utopian realm of work and play for a specific minority of engineers and creative agents, entrepreneurs, bankers, space moguls, and the like, while the rest of the dregs of humanity live in the shadows, controlled by implants or pharmaceuticals that will keep them pacified and slave-happy in their menial tier of decrepitude as workers in the minimalist camps that support the Smart Civilization and its powers.

Yet against this decadent scenario, as Kroker suggests, what if the counter were true, and the shadow artists of the future are even now beginning to enter the world of data nerves, network skin, and increasingly algorithmic minds with the intention of capturing the dominant mood of these posthuman times – drift culture – in a form of thought that dwells in complicated intersections and complex borderlands? He envisions instead a new emergent order of rebels, a global gathering of new media artists, remix musicians, pirate gamers, AI graffiti artists, anonymous witnesses, and code rebels, an emerging order of figural aesthetics revealing a new, brilliantly hallucinatory order based on an art of impossible questions and a perceptual language as precise as it is evocative. Here the aesthetic imagination dwells solely on questions of incommensurability: What is the vision of the clone? What is the affect of the code? What is the hauntology of the avatar? What is most excluded, prohibited, by the android? What is the perception of the drone? What are the aesthetics of the fold? What, in short, is the meaning of aesthetics in the age of drift culture? (Kroker, 195-196)

This notion of drift culture might align well with David Roden’s call for new networks of interdisciplinary practices that combine technoscientific expertise with ethical and aesthetic experimentation, networks that will be better placed to sculpt disconnections than narrow coalitions of experts; one in which the ‘Body Hacker’, with her self-invention and empowerment, undertakes a self-administered intervention in extreme new technologies like the IA technique… (Roden, KL 4394). Kroker will call this ‘body drift’:

Body drift refers to the fact that we no longer inhabit a body in any meaningful sense of the term but rather occupy a multiplicity of bodies — imaginary, sexualized, disciplined, gendered, laboring, technologically augmented bodies. Moreover, the codes governing behavior across this multiplicity of bodies have no real stability but are themselves in drift — random, fluctuating, changing. There are no longer fixed, unchallenged codes governing sexuality, gender, class, or power but only an evolving field of contestation among different interpretations and practices of different bodily codes. The multiplicity of bodies that we are, or are struggling to become, is invested by code-perspectives. Never fixed and unchanging, code-perspectives are always subject to random fluctuations, always evolving, always intermediated by other objects, by other code-perspectives. We know this as a matter of personal autobiography. (Kroker, KL 53)10

This notion that we are becoming ‘code’ is also part of the posthuman nexus. As Rob Kitchin and Martin Dodge tell us in Code/Space: Software and Everyday Life, this sense that the environment enclosing us is becoming posthuman is termed ‘everywhere’: ubiquitous computational power will soon be distributed and available at any point on the planet. Many everyday devices and objects will be accessible across the Internet of things, chatting to each other in machinic languages that humans will not even be aware of, much less concerned with; yet we will be enclosed in this fabric of communication and technology of Intelligence, socialized by its pervasiveness in our lives. Instead of the old Marxian notion of being embedded in a machine, we will now be so enmeshed in this environment of ICTs that they will become invisible: power and governance will vanish into our skins and minds without our even knowing it is happening, and we will be happy.

Luis Suarez-Villa in his recent Globalization and Technocapitalism tells us that “the ethos of technocapitalism places experimentalism at the core of corporate power”, much as production was at the core of industrial corporate power, undertaken through factory regimes and labor processes. And, much as the ethos of past capitalist eras was accompanied by social pathologies and by frameworks of domination, so the new ethos of technocapitalism introduces pathological constructs of global domination that are likely to be hallmarks of the twenty-first century. As Floridi tells us, we are already living in an infosphere that will become increasingly synchronized (time), delocalized (space), and correlated (interactions). Although this might be interpreted, optimistically, as the friendly face of globalization, we should not harbour illusions about how widespread and inclusive the evolution of the information society will be. Unless we manage to solve it, the digital divide will become a chasm, generating new forms of discrimination between those who can be denizens of the infosphere and those who cannot, between insiders and outsiders, between information rich and information poor. It will redesign the map of worldwide society, generating or widening generational, geographic, socio-economic, and cultural divides. Yet the gap will not be reducible to the distance between rich and poor countries, since it will cut across societies. Pre-historical cultures have virtually disappeared, with the exception of some small tribes in remote corners of the world. The new divide will be between historical societies and hyperhistorical ones. We might be preparing the ground for tomorrow’s informational slums (Floridi, 9).

 Welcome to the brave new world. In our culture of drift and code, digital immigrants in a sea of information will slowly become inforgs and be replaced by digital natives like our children; the latter will come to appreciate that there is no ontological difference between infosphere and physical world, only a difference in levels of abstraction. When the migration is complete, we shall increasingly feel deprived, excluded, handicapped, or impoverished to the point of paralysis and psychological trauma whenever we are disconnected from the infosphere, like fish out of water. One day, being an inforg will be so natural that any disruption in our normal flow of information will make us sick. (Floridi, 16-17)

What remains of our humanity is anyone’s guess. The Inforgasm is upon us, the slipstream worlds of human/machine have begun to reverse engineer each other in a convoluted involution in which we are returning to our own native climes as machinic beings. Maybe a schizoanalyst could sort this all out. For me there is no escape, no exit, just the harsh truth that what is coming at us is our own inhuman core realized as posthuman becoming, an engineering feat that no one would have thought possible: consciousness gives way to the very machinic processes that underpin its actual and virtual histories.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Locations 4399-4401). Taylor and Francis. Kindle Edition.
2. Kroker, Arthur (2014-03-12). Exits to the Posthuman Future (p. 6). Wiley. Kindle Edition.
3. Ligotti, Thomas (2014-07-10). The Agonizing Resurrection of Victor Frankenstein (Kindle Locations 397-399). Subterranean Press. Kindle Edition.
4. Zizek, Slavoj (1989). The Sublime Object of Ideology. Verso.
5. Anderson, C. (23 June 2008). The end of theory: Data deluge makes the scientific method obsolete. Wired Magazine.
6. Floridi, Luciano (2014-06-26). The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Kindle Locations 4088-4089). Oxford University Press. Kindle Edition.
7. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 3). St. Martin’s Press. Kindle Edition.
8. Sterling, Bruce (2014-09-01). The Epic Struggle of the Internet of Things (Kindle Locations 8-10). Strelka Press. Kindle Edition.
9. Attali, Jacques (2011-07-01). A Brief History of the Future: A Brave and Controversial Look at the Twenty-First Century . Arcade Publishing. Kindle Edition.
10. Kroker, Arthur (2012-10-22). Body Drift: Butler, Hayles, Haraway (Posthumanities) (Kindle Locations 53-60). University of Minnesota Press. Kindle Edition.

David Roden’s: Speculative Posthumanism – Conclusion (Part 8)

While the disconnection thesis makes no detailed claims about posthuman lives, it has implications for the complexity and power of posthumans and thus the significance of the differences they could generate. Posthuman entities would need to be powerful relative to WH to become existentially independent of it.1

 In his final chapter David Roden takes up the ethical or normative dimensions of his disconnection thesis. He will opt for a posthuman accounting that allows us to anticipate the posthuman through participation in its ongoing eventuality. Yet he recognizes that there are moral, political, and other factors that argue for its constraint and limitation through pressure from normative and political domains (previous post). As we approach Roden’s final offering, we should remember a cautionary note from Edward O. Wilson’s The Social Conquest of Earth:

We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology. We thrash about. We are terribly confused by the mere fact of our existence, and a danger to ourselves and to the rest of life.2

In the first section Roden faces objections to his disconnection thesis from both phenomenological anthropocentrism and naturalist versions of species integrity, and finds both wanting. Instead of going through the litany of examples, I’ll move to his summation, which gives us his basic stance and philosophical/scientific appraisal. As he states it:

…the phenomenological species integrity argument for policing disconnection-potent technologies presupposes an unwarrantable transcendental privilege for Kantian personhood. Since the privilege is unwarrantable this side of disconnection, the phenomenological argument for an anthropocentric attitude towards disconnection fails along with naturalistic versions of the species integrity argument such as Agar’s. Thus even if we accept that our relationships to fellow humans compose an ethical pull, as Meacham puts it, its force cannot be decisive as long as we do not know enough about the contents of PPS (posthuman possibility space) to support the anthropocentrist’s position. What appears to be a moral danger on our side of a disconnection could be an opportunity to explore morally considerable states of being of which we are currently unaware.*(see notes below)

 Reading the arguments of Agar and Meacham against the disconnection thesis brings to mind how many thinkers, scientists and philosophers fear the unknown element, the X factor in the posthuman equation. What is difficult, and for me almost nonsensical, in both arguments is their universalism: as if we could control what is viable in a nominalistic universe of particulars through a universal and normative set of theories and practices (say, a Sellarsian/Brandomian normativity of “give” and “take” in a space of reasons: a navigational mapping of the pros and cons of the posthuman X factor, developing a series of reasons for or against its emergence, etc.), as if we had a real say in the matter. Do we? Roden has gone through the pros and cons of technological determinism and found it lacking any foundation.

Yet his basic philosophy seems grounded in the surmises of phenomenological theory and practice rather than in the sciences per se. So from within his own perspective in philosophical theory, all seems viable for or against the posthuman. But do we live in a phenomenological world? Do we accept the strictures of the Kantian divide in philosophy that have led to the current world of speculation, both Analytic and Continental?

As Roden will suggest, the threat to phenomenological species integrity is one that attacks the actual foundations of the whole ethical and political enterprise rather than any specific or putatively “human” norms, values or practices (Roden, KL 4130). I think it’s safe to say that, according to evolutionists, most of the species that have ever existed (99 per cent) are now extinct. Humans are part of the natural universe; we are not exceptional, and we do not sit outside the animal kingdom. When it comes down to it, do we go with those who fear extinction at the hands of some unknown X factor, some posthuman break and disconnection that might or might not be the end point for the human? Or do we opt for the challenge to participate in its emergence, realizing that it might offer the next stage, if not in biological evolution (though transhumanists opt for this), then in technological innovation and evolution? Roden will try to answer this in his final section.

 Vital posthumanism: a speculative-critical convergence

In this section (8.2) Roden will opt for a post-anthropocentric ethics of becoming posthuman, one that does not require posthumans to exhibit human intersubjectivity or moral autonomy. Such an ethics would need to be articulated in terms of ethical attributes that we could reasonably expect to be shared with posthuman WHDs (wide human descendants) whose phenomenologies or psychologies might diverge significantly from those of current humans (Roden, 4164).

One prerequisite as he showed in earlier sections of the book was the need for functional autonomy:

A functionally autonomous system (FAS) can enlist values for and accrue functions (§ 6.4). Functional autonomy is related to power. A being’s power is its capacity to enlist other things and be reciprocally enlisted (Patton 2000: 74). With great power comes great articulation (§ 6.5). (Roden, 4168)

To build or construct such an assemblage he will opt for a neo-vitalist normativity, a qualified materialism that, following Levi R. Bryant, rejects any form of metaphysical vitalism. Instead he brokers an ontological materialism that denies that the basic constituents of reality have an irreducibly mental character (Roden, KL 4180). Second, he redefines the conceptual notions underpinning vitalism by offering a minimal definition of the posthuman as living, because posthumans must exhibit functional autonomy; this is at best a sufficient functional condition of life (Roden, KL 4187). Nor does this imply any form of essentialism: there is no implied set of properties to which one could reduce the core set of principles.

He will work within the framework of an assemblage ontology first developed by Gilles Deleuze and Félix Guattari. It assumes that posthumans would have network-independent components, like the human fusiform gyrus, allowing flexible and adaptive couplings with other assemblages. Posthumans would need a flexibility in their use of environmental resources and in their “aleatory” affiliations with other human or nonhuman systems sufficient to break with the purposes bestowed on entities within the Wide Human (Roden, 4202). I’m tempted to think of Levi R. Bryant’s Machine Ontology, an outgrowth of both Deleuze and certain trends in speculative realism. Yet this is not the time or place to go into that (i.e., read here, here, here).

He affirms an accord between his own project and that of Rosi Braidotti’s The Posthuman. Yet, there are differences as well. As he states it:

…she is impatient with a disabling political neutrality that can follow from junking human moral subjectivity as the arbiter of the right and the good. She argues that a critical posthumanist ethics should retain the posit of political subjectivity capable of ethical experimentation with new modes of community and being, while rejecting the Kantian model of an agent subject to universal norms. (Roden, KL 4224)

His point is that Braidotti remains mired in certain political and normative theories and practices that belie the fact that a posthuman disconnection might diverge beyond any such commitments. As he will suggest, the ethics of vital posthumanism is thus not prescriptive but a tool for problem defining (Roden, KL 4271). The point being that one cannot bind oneself to a democratic accounting because, as disconnection suggests, an accounting would not evaluate posthuman states according to human values but according to values generated in the process of constructing and encountering them (Roden, KL 4278).

In the feral worlds of the posthuman future, our wide-human descendants may diverge so significantly from us, acquiring new values and functional affiliations, that it might prove disastrous for those who opt to remain human, whether through normative inaction or through policing the perimeters of territorial and political divisions: the very skills and practices that sustained them prior to disconnection might be inadequate in the new dispensation (Roden, KL 4372). Therefore, as he suggests:

It follows that any functionally autonomous being confronted with the prospect of disconnection will have an interest in maximizing its power, and thus structural flexibility, to the fullest possible extent. The possibility of disconnection implies that an ontological hypermodernity is an ecological value for humans and any prospective posthumans. … To exploit Braidotti’s useful coinage, ramping up their functional autonomy would help to sustain agents – allowing them to endure change without falling apart (Roden, KL 4376- 4385)

He will summarize his disconnection hypothesis this way:

I will end by proposing a hypothesis that can be put to the test by others working in science and technology, the arts, and in what we presumptively call “humanities” subjects. This is that interdisciplinary practices that combine technoscientific expertise with ethical and aesthetic experimentation will be better placed to sculpt disconnections than narrow coalitions of experts. There may be existing models for networks or associations that could aid their members in navigating untimely lines of flight from pre- to post-disconnected states (Roden 2010a). “Body hackers” who self-administer extreme new technologies like the IA technique discussed above might be one archetype for creative posthuman accounting. Others might be descendants of current bio- and cyber-artists who are no longer concerned with representing bodies but, as Monika Bakke notes, work “on the level of actual intervention into living systems”. (Roden, KL 438)

So in the end David Roden opts for intervention and experimentation, a direct participation in the ongoing posthuman emergence through both ethical and technological modes. Rather than being tied to political or corporate pressure, it should become an almost open-source effort, interdisciplinary and open to academics and outsiders alike: scientists, technologists, artists, and body hackers willing to intervene in their own lives and bodies to bring it into realization. Citing the artist and body hacker Stelarc, Roden writes:

Perhaps Stelarc defines the problem of a post-anthropocentric posthuman politics best when describing the role of technical expertise in his art works: “This is not about utopian blueprints for perfect bodies but rather speculations on operational systems with alternate functions and forms” (in Smith 2005: 228– 9). I think this spirit of speculative engineering best exemplifies an ethical posthuman becoming – not the comic or dreadful arrest in the face of something that cannot be grasped. (Roden, KL 4397)

One might term this speculative engineering the science-fictionalization of our posthuman future(s), our becoming-other(s). Open your eyes, folks: the posthuman could already be among us. In The Bionic Horizon I quoted Nick Land’s essay “Meltdown”, which in some ways seems a fitting way to end this excursion:

The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.

—Nick Land, Meltdown

One aspect of Roden’s program strikes me as pertinent: we need better tools to diagnose the technological infiltration of human agency as the future collapses upon the present. Yet he also points toward a posthuman movement, seeing opportunity in something close to accelerationism. We might actually see late capitalism as an even more radical form of technological accelerationism, one that goes beyond any political concerns and whose goal is reinventing human relations in light of new technology. Instead of the current mutations of some phenomenological effort, we may be experiencing the strangeness of techno-capital as a speculative opportunity to rethink basic notions of humanity as such. Ultimately, as we’ve seen, humanity has always already been in symbiotic relationship with emerging technologies, from the early domestication of animals and the emergence of seed-bearing agriculture to the world of industrial civilization and its narrowing of the horizon of planetary civilization. What next? Roden offers an alliance with the ongoing process, optimistic and open toward the future, hopeful that the interventions of technology may hold nothing less than our posthuman future as the next stage of strangeness in the universe. Will we become paranoid and fearful, withdrawing into combative and religious reaction against such a world; or will we call it down into our own lives and participate in its emergence as co-symbiotic partners?


*Notes:

Agar: In Humanity’s End, Agar is mainly concerned with the first type of threat from radical technical alteration. His argument against radical alteration rests on a position he calls species relativism (SR). SR states that only certain values are compatible with membership of a given biological species: According to species-relativism, certain experiences and ways of existing properly valued by members of one species may lack value for the members of another species.(Roden, 3869)

Meacham (from a dialogue): Thus a disconnection could be a “phenomenological speciation event” which weakens the bonds that tie sentient creatures together on this world:

This refers us back to a weakened version of Roden’s description of posthuman disconnection: differently altered groups, especially when those alterations concern our vulnerability to injury and disease, might have experiences sufficiently different from ours that we cannot envisage what significant aspects of their lives would be like. This inability to empathize will at the very least dampen the possibility for the type of empathic species solidarity that I have argued is the ground of ethics. (Ibid.)

Meacham’s position suggests that human species recognition has an “ethical pull” that should be taken seriously by any posthuman ethics.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Locations 3832-3834). Taylor and Francis. Kindle Edition.
2. Wilson, Edward O. (2012-04-02). The Social Conquest of Earth (Kindle Locations 179-181). Norton. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 6)

 Given their dated nonexistence, we do not know what it would be like to encounter or be posthuman. This should be the Archimedean pivot for any account of posthuman ethics or politics that is not fooling itself. – David Roden, Posthuman Life: Philosophy at the Edge of the Human

Again I take up from my previous post David Roden’s Posthuman Life: Philosophy at the Edge of the Human. This will be a brief post. In chapter six Roden qualifies and extends his disconnection thesis with a speculative surmise that, whatever posthumans might become, we can start with at least one conceptual leap: they will be functionally autonomous systems (FAS).

He will test out various causal theories that might inform such a stance: Aristotelian, Kantian, and others, but concludes that none of them satisfies the requirements set by the disconnection thesis, since most of these theories deal with biological rather than hybrid or even fully technological systems and adaptations. Against any teleological system, whether Aristotelian or the intrinsically teleological autonomous systems approach (ASA), he opts for a pluralistic ontology of assemblages (which we’ve discussed in the previous post), because it comports well with a decomposability of assemblages that entails ontological anti-holism.1

He will survey various forms of autonomy: moral and functional; Aristotelian; Darwinian and ecological; modularity and reuse; and assemblages. Instead of belaboring each type, which is evaluated and rejected or qualified in turn for various reasons (teleology, biologism, etc.), we move to the final section, where he appropriates useful aspects of the various types of autonomy to formulate a workable, revisable hypothesis situated at the limits of what we can expect as a minimal conceptual base for recognizing the posthuman if and when we meet it. It ultimately comes down to the indeterminacy and openness of this posthuman future.

His tentative framework will entail a modular and functional autonomous system because the model provided by biological systems suggests that modularity shields such systems from the adverse effects of experimentation while allowing greater opportunities for couplings with other assemblages. Since humans and their technologies are also modular and highly adaptable, a disconnection event would offer extensive scope for anomalous couplings between the relevant assemblages at all scales. (Roden, 3364-3371)

In some ways such an event or rupture between the human and posthuman entailed by disconnection theory relates to the liminal and gray areas between assemblages and their horizons. As he states it, a disconnection is best thought of as a singular event produced by an encounter between assemblages. It could present possibilities for becoming-other that should not be conceived as incidental modifications of the natures of the components, since their virtual tendencies would be unlocked by an utterly new environment (Roden, 3371). Further, such a disconnection could be a process over time rather than one isolated singular event, which leaves the whole notion of posthuman succession undetermined, as well as unqualifiable by humans ahead of such an event. Think of the agricultural revolution between the Stone Age world of hunting and gathering and the new static systems of farming and the hoarding of grain in large, fortified cities. This new technology of farming and its related processes were a rupture that took place over thousands of years, from the Stone Age through the Neolithic and onward. Some believe it was this significant event that in turn helped develop other technologies such as writing (temple and grain bookkeeping) and mathematics (again, taxation and counting), all related to the influx of agriculture and the cities that grew up in its nexus: each an assemblage of various human and technological assemblages plugged into each other over time.

This brings in the notion that disconnection is an event, an intensity, rather than an object or thing, which means that the modulation and development of whatever components lead to this process are outside the scope of traditional metaphysics or theories of subjectivity (Roden, 3380). Nor is it to be considered an agent or a transcendental subject in the older metaphysical sense; rather, since it is part of a processual and mutually interacting set of mobile components that lend themselves to assemblages with an open-textured capacity for anomalous couplings and de-couplings, it need not be wed to some essentialist discourse that would reduce its processes to either biological or technological systems. We just do not have enough information.

In summary he will tell us that if disconnections are intense becomings, becomings without a subject, then this is something we will need to take into account in our ethical and political assessment of the implications of SP. Becoming posthuman may not be best understood as a transition from one identifiable nature to another despite the fact that the conditions of posthumanity can be analysed in terms of the functional roles of entities within and without the Wide Human. Before we can consider the ethics of becoming posthuman more fully, however, we need to think about whether technology can be considered an independent agent of disconnection or whether it is merely an expression of human interests and powers. What is a technology, exactly, and to what extent does technology leave us in a position to prevent, control or modify the way in which a disconnection might occur? (Roden, KL 3388-3394)

We will explore the technological aspect in the next post.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Location 2869). Taylor and Francis. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 4.2)

The problem of interpretation arises because there are empirical and theoretical grounds for holding that some phenomenology is “dark”.
– David Roden,  Posthuman Life: Philosophy at the Edge of the Human

Again I take up from my previous post David Roden’s Posthuman Life: Philosophy at the Edge of the Human. In section 4.2 he will introduce us to the notion that not all phenomenology deals with the pure world of surfaces and light. There is a dark side, or should we say ‘a dark tale of phenomenology’. It will be a tale of twinned realms: one of perception, and one of time. A tale in which we will never be sure whether what is alien and posthuman can ever be known or shared by our own mental states, or whether we will even be able to control or forecast what the posthuman is or could be. We will be in the dark with that which is alien and alienating.

David Roden will give us a beginning to our tale: “Let’s call a feature of experience ‘dark’ if it confers no explicit or implicit understanding of its nature on the experiencer (Roden, KL 1961)”.1 Unlike the phenomenology of Husserl or even Heidegger, which attends to the surface detail we can intuit within the realm of appearance and presence, dark phenomenology deals with that which cannot directly be seen, touched, felt, or smelled, yet affects us and influences our dispositions, feelings, or actions in indirect and strange ways that we cannot describe with any precision. Our access to this dark side is indirect, much like that of scientists who uncover the truth of dark energy and dark matter, which together make up roughly 95 per cent of our universe and to which we never have direct access except through a combination of mathematical theorems and instruments that measure aspects of these unknown unknowns indirectly through experimentation and analysis.

Reading Roden’s surmises about color theory, and about how there are millions of shadings of color that we cannot intuit or describe from a first-person-singular perspective, because we lack access to them or through some form of loss or neglect, reminded me of what many in the neurosciences suspect. As I suggested from Bakker’s BBT theory in a previous post, the brain only ever gives us the information we need to deal with the things that evolution and survival have adapted us to in our understanding or ‘intuiting’ of the environment we are embedded within. Yet, as Roden suggests, there is an amazing realm of experience we never have direct access to, one we are blind to not because we cannot intuit it, but because the brain only offers our ‘first-person’ subjective self or temporary agency certain well-defined and filtered pieces of the puzzle. It filters out the rest, except that, as Roden said previously, there are times when we are affected by things we cannot perceive but that are part of reality. Phenomenology is unable to discuss such things because it is not science; it lacks both the conceptual and instrumental technology to graze even a percent of this unknown or blind territory surrounding us. Philosophers like to talk of chaos, etc., when in fact it is a sea of information that the brain analyses at every moment but delivers to us packaged in byte-size representations that we can handle as its evolutionary agents of choice.

(A personal aside: I must admit I wish David had sunk the philosophy for neuroscience and the hard sciences rather than spending time with the philosophical community. It always seems, reading such works, that one must spend an exorbitant amount of time clarifying concepts, ideas, and notions for other professional philosophers who will probably reject what you’re saying anyway. To me science is answering these sorts of questions in terms that leave the poor phenomenological philosopher in a quandary. Maybe it’s part of the academic game; I’ve never been sure. Yet, as we will see, David himself will make much the same gesture later on.)

Either way, as I read it, dark phenomenology is actually trying to deal not with appearance but with what Kant called the ‘noumenal’ realm, which was closed off from philosophical speculation two hundred years ago as something that could never be described or known. Yet both philosophy and the sciences have been describing aspects of it ever since, by indirect means, without ever naming it that. It’s as if we’ve closed ourselves off from the truth of our own blindness, and told ourselves we’re not blind.

As Roden will affirm of all these representationalist philosophers in discussing the possibility that time may have a dark side: “For representationalist philosophers of mind who believe that the mind is an engine for forming and transforming mental representations there is good reason to be sceptical about the supposed transcendental role of time” (Roden, KL 2068). Then he tells us why: “For where a phenomenological ontology transcends the plausible limits of intuition its interpretation would have to be arbitrated according to its instrumental efficacy, simplicity and explanatory potential as well as its descriptive content” (Roden, KL 2081).

 And, as if he had heard me, he will tell us that phenomenology provides only an incomplete account of those dark structures not captured in appearance, and that other modes of inquiry must complete it: “If phenomenology is incompletely characterized by the discipline of phenomenology, though, it seems proper that methods of enquiry such as those employed by cognitive scientists, neuroscientists and cognitive modellers should take up the interpretative slack. If phenomenologists want to understand what they are talking about, they should apply the natural attitude to their own discipline. (Roden, 2120)”

And, of course, most practicing scientists in these fields would tell Roden and the others: “Why don’t you just give it up and join us? Maybe philosophy is not suited to describe or even begin to analyze what we’re discovering; maybe you would be better off closing down philosophy of mind and becoming scientists.” But of course we know what these philosophers would probably say to that. Don’t we?

Ultimately after surveying phenomenology of Husserl and Heidegger and others Roden will come to the conclusion:

Dark phenomenology undermines the transcendental anthropologies of Heidegger and Husserl because it deprives them of the ability to distinguish transcendental conditions of possibility such as Dasein or Husserl’s temporal subject (which are not things in the world) from the manifestation of things that they make possible. They are deconstructed insofar as they become unable to interpret the formal structures with which they understand the fundamental conditions of possibility for worlds or things. … As bruited, this failure of transcendentalism is crucial for our understanding of SP. If there is no a priori theory of temporality, there is no a priori theory of worlds and we cannot appeal to phenomenology to exclude the possibility that posthuman modes of being could be structurally unlike our own in ways that we cannot currently comprehend. (Roden, KL 2194 – 2206)

What we’re left with is an open and indescribable realm of possibility that is anyone’s guess. As he will sum it up, there is no reason to be bound by a transcendental or anthropological posthumanism; instead SP will have no truck with constraints on the open-endedness of posthumanism (“This is not to say, of course, that there are no constraints on PPS”):

Posthuman minds may or may not be weirder than we can know. We cannot preclude maximum weirdness prior to their appearance. But what do we mean by such an advent? Given the extreme space of possible variation opened up by the collapse of the anthropological boundary, it seems that we can make few substantive assumptions about what posthumans would have to be like. (Roden, KL 2378)

In the next post Roden takes up a formal analysis rather than an a priori or substantive account of posthuman life, suggesting that we will not be able to describe the posthuman until we see it in the wild. We will follow him into the wild.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.

 

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 4)

Again I take up from my previous post David Roden’s Posthuman Life: Philosophy at the Edge of the Human. In Chapter Three Dr. Roden told us that pragmatism elaborates transcendental humanism plausibly, and that because of that we need to consider its implications for posthuman possibility. In Chapter Four he will elaborate on that by defining pragmatism’s notion of language as a matrix “in which we cooperatively form and revise reasons”, and he will term this the “discursive agency thesis (DAT)” (Roden, KL 1402).1 The basic premise here is simple: any entity that lacks the capacity for language cannot be an agent. The pragmatist will define discursive agency as requiring certain attributes that delimit the parameters of what an agent is:

1) An agent is a being that acts for reasons.
2) To act for reasons an agent must have desires or intentions to act.
3) An agent cannot have desires or intentions without beliefs.
4) The ability to have beliefs requires a grasp of what belief is since to believe is also to understand “the possibility of being mistaken” (metacognitive claim).
5) A grasp of the possibility of being mistaken is only possible for language users (linguistic constitutivity). (Roden, KL 1407-1413)

As we study this list we see a progression from acting for specific reasons, through desires, intentions and beliefs, to the need for self-reflection and language to grasp these objects in the mind. We’ve seen most of this before in other forms across the centuries as philosophers debated Mind and Consciousness. For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a “thought in your head”, like a perception, a dream, an intention or a plan, and to the way we know something, mean something or understand something. “It’s not hard to give a commonsense definition of consciousness,” observes philosopher John Searle. What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the “mind-body problem.” A related problem is the problem of meaning or understanding (which philosophers call “intentionality”): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or “phenomenology”): If two people see the same thing, do they have the same experience? Or are there things “inside their head” (called “qualia”) that can be different from person to person?

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?

But I get ahead of myself, for Dr. Roden begins by analyzing the notions of Analytical philosophy in which “propositional attitudes” – psychological states such as beliefs, desires and intentions (along with hopes, wishes, suppositions, etc.) – are part and parcel of our linguistic universe, ascribed in sentences built around a “that” clause (Roden, KL 1416). Discussing this he will take up the work of Davidson, Husserl and Heidegger.

Now we know that for Husserl phenomenology is transcendental because it premises its accounts of phenomena on the primacy of intentionality with respect both to reason and sense. So Husserl’s transcendental phenomenology begins and ends with a ‘reduction’ of phenomena to their ‘intentional objects’, the ‘ideal objects’ intended by a consciousness.2

For Roden the conflict is not about intentionality (which he seems to accept) but about our cognition and understanding of differing “positions regarding commonly identified objects”: “That is to say, our challenge to the metacognitive claim does not show that advanced posthumans with florid agency powers would not need to understand what it is to be mistaken by being able to use the common coin of sentences” (Roden, KL 1805-08). He will even suggest that the fact that humans can notice that they have forgotten things, evince surprise, or attend to suddenly salient information (as with the ticking clock that is noticed only when it stops) implies anecdotally that our brains must have mechanisms for representing and evaluating (hence “metacognizing”) their states of knowledge and ignorance (Roden, KL 1815).

What’s more interesting in the above sentence is how it ties in nicely with R. Scott Bakker’s Blind Brain Theory:

“Intentional cognition is real, there’s just nothing intrinsically intentional about it. It consists of a number of powerful heuristic systems that allows us to predict/explain/manipulate in a variety of problem-ecologies despite the absence of causal information. The philosopher’s mistake is to try to solve intentional cognition via those self-same heuristic systems, to engage in theoretical problem solving using systems adapted to solve practical, everyday problem – even though thousands of years of underdetermination pretty clearly shows the nature of intentional cognition is not among the things that intentional cognition can solve!” (see here)

This seems to be the quandary facing Roden as he delves into philosophers and scientists who base their theories and practices on intentionality, which lies at the base of phenomenological philosophy in both its Analytical and Continental varieties. Yet, this is exactly his point later in the chapter, after he has discussed certain aspects of the eliminativist theories of Paul Churchland and others: evidence for non-language-mediated metacognition implies that we should be dubious of the claim that language is constitutive of sophisticated cognition and thus – by extension – agency (Roden, KL 1893). He will conclude that even if metacognition is necessary for sophisticated thought, this may not involve trafficking in sentences. Thus we lack persuasive a priori grounds for supposing that posthumans would have to be subjects of discourse (Roden, KL 1896).

I think we’ll stop here for today. In section 4.2 he will take up the naturalization of phenomenology and the rejection of transcendental constraints. I’ll take that up in my next post.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.
2. Jeremy Dunham, Iain Hamilton Grant, Sean Watson. Idealism: The History of a Philosophy (MQUP, 2011)

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 3)

Continuing where I left off yesterday in my commentary on David Roden’s Posthuman Life: Philosophy at the Edge of the Human, we discover in Chapter Two a critique of Critical Posthumanism. He will argue that critical posthumanism, like SP, understands that technological, political, social and other factors will evolve to the point that the posthuman becomes inevitable, but that critical posthumanists conflate transhumanist and SP ideologies, seeing both as outgrowths of the humanist tradition that tend toward either apocalypse or transcendence. Roden will argue otherwise and provides four basic critiques against the anti-humanist argument, the technogenesis argument, the materiality argument, and the anti-essentialist argument. By doing this he hopes to bring into view the commitment of SP to a minimal, non-transcendental and nonanthropocentric humanism, which will help us put bones on its realist commitments (Roden, KL 829).1

Critical posthumanism argues that we are already posthuman, that it is our conceptions of human and posthuman that are changing, and that any futuristic scenario will be an extension of the human into its future components. SP will argue on the other hand that the posthuman might be radically different from the human altogether, such that the posthuman would constitute a radical break with our conceptual notions. After a lengthy critique of critical posthumanism, tracing its lineage in the deconstructive techniques of Derrida and Hayles, he will tell us that in fact SP and critical posthumanism are complementary, and that a “naturalistic position structurally similar to Derrida’s deconstructive account of subjectivity can be applied to transcendental constraints on posthuman weirdness” (Roden, KL 1037). The point being that a “naturalized deconstruction” of subjectivity widens the portals of posthuman possibility, whereas it complicates but does not repudiate human actuality (Roden, KL 1039). As he sums it up:

I conclude that the anti-humanist argument does not succeed in showing that humans lack the powers of rational agency required by ethical humanist doctrines such as cosmopolitanism. Rather, critical posthumanist accounts of subjectivity and embodiment imply a cyborg-humanism that attributes our cognitive and moral natures as much to our cultural environments (languages, technologies, social institutions) as to our biology. But cyborg humanism is compatible with the speculative posthumanist claim that our wide descendants might exhibit distinctively nonhuman moral powers. (Roden, 1045-1049)

When he adds that little leap to “nonhuman moral powers” it seems to beg the question. It aligns with transhumanist ideology, only it fantasizes normativity for nonhumans rather than for enhanced humans. Why should these inhuman/nonhuman progeny of metal-fleshed cyborgs have any moral dimension whatsoever? Some argue that the moral dimension is tied to affective relations much more than to cognition, so what if these new nonhuman beings are emotionless? What if, like many sociopathic and psychopathic humans, they have no emotional or affective relations at all? What would this entail? Is this just a new metaphysical leap without foundation? Another placating gesture of Idealism, much like the Brandomian notions of ‘give and take’ normativity that such Promethean philosophers as Reza Negarestani have made recently (here, here, here):

Elaborating humanity according to the self-actualizing space of reasons establishes a discontinuity between man’s anticipation of himself (what he expects himself to become) and the image of man modified according to its functionally autonomous content. It is exactly this discontinuity that characterizes the view of human from the space of reasons as a general catastrophe set in motion by activating the content of humanity whose functional kernel is not just autonomous but also compulsive and transformative.
Reza Negarestani, The Labor of the Inhuman, Parts One and Two

The above leads into the next argument: technogenesis. Hayles and Andy Clark will argue that there has been a symbiotic relation between technology and humans from the beginning, and that so far there has been no divergence. SP will argue that that’s no argument: the fact that the game of self-augmentation is ancient does not imply that the rules cannot change (Roden, KL 1076). The technogenesis dismissal of SP invalidly infers that because technological changes have not monstered us into posthumans thus far, they will not do so in the future (Roden, KL 1087).

Hayles will argue a materiality argument: that SP and transhumanist agendas deny material embodiment through the notion that a natural system can be fully replicated by a computational system that emulates its functional architecture or simulates its dynamics. This argument, Roden will tell us, actually works in favor of SP, not against it: it implies that weird morphologies can spawn weird mentalities. On the other hand, Hayles may be wrong about embodiment and substrate neutrality. Mental properties of things may, for all we know, depend on their computational properties because every other property depends on them as well. To conclude: the materiality argument suggests ways in which posthumans might be very inhuman (Roden, KL 1102).

The last argument is based on the anti-essentialist move that would locate a property of ‘humanness’ as unique to humanity and not transferable to a nonhuman entity: the notion of an X factor that could never be uploaded or downloaded. SP will argue instead that we can be anti-essentialists (if we insist) while being realists for whom the world is profoundly differentiated in a way that owes nothing to the transcendental causality of abstract universals, subjectivity or language. But if anti-essentialism is consistent with the mind-independent reality of differences – including differences between forms of life – there is no reason to think that it is not compatible with the existence of a human–posthuman difference which subsists independently of our representations of it (Roden, KL 1136).

Summing up Roden will tell us:

The anti-essentialist argument just considered presupposes a model of difference that is ill-adapted to the sciences that critical posthumanists cite in favour of their naturalized deconstruction of the human subject. The deconstruction of the humanist subject implied in the anti-humanist dismissal complicates rather than corrodes philosophical humanism – leaving open the possibility of a radical differentiation of the human and the posthuman. The technogenesis argument is just invalid. The materiality argument is based on metaphysical assumptions which, if true, would preclude only some scenarios for posthuman divergence while ramping up the weirdness factor for most others. (Roden, 1142-1147)

Most of this chapter has been a clearing of the ground for Roden, showing that many of the supposed arguments against SP are due to spurious and ill-reasoned confusion over just what we mean by posthumanism. Critical posthumanism in fact seems to reduce SP and transhumanist discourse, conflating them into an erroneous amalgam of ill-defined concepts. The main drift of critical posthumanist deliberations tends toward the older forms of the questionable deconstructionist discourse of Derrida, which of late has come under attack from Speculative Realists among others.

In Chapter Three Roden will take up transhumanism, which seeks many of the things that SP does, but would align them to a human agenda that constrains and moralizes the codes of posthuman discourse toward human ends. In this chapter he will take up threads from Kant, analytical philosophy, and contemporary thought and its critique. Instead of a blow-by-blow account I’ll briefly summarize the chapter. In the first two chapters he argued that the distinction between SP and transhumanism is that the former allows that our “wide human descendants” could have minds very different from ours and thus be unamenable to broadly humanist values or politics (Roden, KL 1198). In Chapter Three he will ask whether there might be constraints on posthuman weirdness that would restrict any posthuman–human divergence of mind and value (Roden, KL 1201). After a detailed investigation into Kant and his progeny, Roden will conclude that two of the successors to Kantian transcendental humanism – pragmatism and phenomenology – seem to provide rich and plausible theories of meaning, subjectivity and objectivity which place clear constraints on 1) agency and 2) the relationship – or rather correlation – between mind and world (Roden, KL 1711). As he tells us, these theories place severe anthropological bounds on posthuman weirdness, for whatever kinds of bodies or minds posthumans may have, they will have to be discursively situated agents practically engaged within a common life-world. In Chapter Four he will consider this “anthropologically bounded posthumanism” critically and argue for a genuinely posthumanist or post-anthropocentric unbinding of SP (Roden, KL 1713).

I’ll hold off on questions, but already I see his need to stay with notions of meaning, subjectivity and objectivity in the Western scientific tradition that seem ill-advised. I’ll wait to see what he means by unbinding SP from this “anthropologically bounded posthumanism”, and hopefully that will clarify and dispense with the need for these older concepts that still seem tied to the theo-philosophical baggage of Western metaphysics.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.

David Roden’s: Speculative Posthumanism & the Future of Humanity (Part 2)

In my last post on David Roden’s new book Posthuman Life: Philosophy at the Edge of the Human I introduced his basic notion of Speculative Posthumanism (SP), in which he claims that for “SP … there could be posthumans. It does not imply that posthumans would be better than humans or even that their lives would be comparable from a single moral perspective.” The basic motif is that his account is not a normative or moral ordering of what the posthuman is, but rather an account of what the future could contain.

In chapter one he provides a few further distinctions to set the stage for his work. First he will set his form of speculative posthumanism against those like Neil Badmington and Katherine Hayles who enact a ‘critical posthumanism’ in the tradition of the linguistic turn or the Derridean deconstruction of the humanist traditions of subjectivity. Their basic attack is against the metaphysics of presence that would allow for the upload/download of personality into clones or robots in some future scenario. One can see this in Richard K. Morgan’s science fictionalization (see Altered Carbon) of humans who can download their informatic knowledge, personality, etc. into specialized hardware that allows retrieval for alternative resleeving into either a clone or a synthetic organism (i.e., a future rebirthing process in which the personality and identity of the dead can continually be uploaded into new systems, clones, and symbiotic life-forms to continue their eternal voyage). Hans Moravec, one of the fathers of robotics, would in Mind Children be the progenitor of such download/upload concepts, which would lead him eventually to sponsor transhumanism – which, as Roden will tell us, is a normative claim that offers a future full of promise and immortality. Such luminaries as Frank J. Tipler in The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead would bring scientific credence to such ideas as the Final Anthropic Principle, on which he collaborated with John D. Barrow, and which stipulates: “Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, will never die out.”

Nick Bostrom following such reasoning would in his book Anthropic Bias: Observation Selection Effects in Science and Philosophy supply an added feature set to those early theories. Bostrom showed how there are problems in various different areas of inquiry (including in cosmology, philosophy, evolution theory, game theory, and quantum physics) that involve a common set of issues related to the handling of indexical information. He argued that a theory of anthropics is needed to deal with these. He introduced the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA) and showed how they lead to different conclusions in a number of cases. He pointed out that each is affected by paradoxes or counterintuitive implications in certain thought experiments (the SSA in e.g. the Doomsday argument; the SIA in the Presumptuous Philosopher thought experiment). He suggested that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition by “observer-moments”. This could allow for the reference class to be relativized (and he derived an expression for this in the “observation equation”). (see Nick Bostrom)
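Bostrom’s Self-Sampling Assumption can be made concrete with a toy calculation. The sketch below is my own illustration (not code from Bostrom or Roden), applying SSA to the Doomsday argument he discusses: if you treat your birth rank as a random sample from all humans who will ever live, hypotheses positing fewer total humans make your observed rank more probable. The numeric values (a birth rank of 60 billion, a cap of 10,000 billion) are arbitrary assumptions for the demonstration.

```python
# Toy illustration of the Self-Sampling Assumption (SSA) applied to the
# Doomsday argument. Under SSA you reason as if you were a random sample
# from all observers in your reference class, so given birth rank r the
# likelihood of observing r under a hypothesis of N total humans is 1/N
# (for N >= r) -- which tilts the posterior toward smaller totals.

def doomsday_posterior(rank, n_max):
    """Posterior over the total number of humans N, given birth rank
    `rank`, assuming a uniform prior over N in [rank, n_max]."""
    hypotheses = range(rank, n_max + 1)
    weights = [1.0 / n for n in hypotheses]   # SSA likelihood of our rank
    total = sum(weights)
    return {n: w / total for n, w in zip(hypotheses, weights)}

def percentile(posterior, p):
    """Smallest N such that cumulative posterior probability >= p."""
    cum = 0.0
    for n in sorted(posterior):
        cum += posterior[n]
        if cum >= p:
            return n
    return max(posterior)

# Illustrative figures (in billions): birth rank 60, cap 10,000.
post = doomsday_posterior(60, 10_000)
print(percentile(post, 0.95))  # the N below which 95% of posterior mass lies
```

Under these assumptions most of the posterior mass falls well below the cap, which is the counterintuitive flavor of the Doomsday argument; the result is of course entirely sensitive to the prior and the choice of reference class, which is exactly what the SSA/SIA debate and Bostrom’s observer-moment refinement (SSSA) turn on.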

Bostrom would go on from there and in 1998 co-found (with David Pearce) the World Transhumanist Association (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies. In 2005 he was appointed Director of the newly created Future of Humanity Institute in Oxford. Bostrom is the 2009 recipient of the Eugene R. Gannon Award for the Continued Pursuit of Human Advancement and was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.” (see Bostrom)

Bostrom’s Humanity+ is based on normative claims about the future of humanity and its enhancement, and as Roden will tell us, transhumanism is an “ethical claim to the effect that technological enhancement of human capacities is a desirable aim” (Roden, KL 250).1 In contradistinction to any such political or ethical agenda, speculative posthumanism (SP), which is the subject of Roden’s book, “is not a normative claim about how the world ought to be but a metaphysical claim about what it could contain” (Roden, KL 251). Both critical posthumanism and transhumanism, in Roden’s sense of the terms, are failures of imagination and philosophical vision, while SP on the other hand is concerned with both current and future humans whose technological activities might bring posthumans into being (Roden, KL 257). So in this sense Roden is more concerned with the activities and technologies of current and future humans, and with how their interventions and technologies might bring about the posthuman as an effect.

In Bostrom’s latest work, Superintelligence: Paths, Dangers, Strategies, he spins the normative scenario by following the trail of machine life. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would then come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI, or otherwise engineer initial conditions, so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? In my own sense of the matter: we won’t be able to control it. A study of past technology shows the truth of that: once out of the bag it will have its own way, with or without us. The notion that we could apply filters or rules to regulate an inhuman or superintelligent species seems quite erroneous when we haven’t even been able to control our own species through normative pressure. The various religions of our diverse cultures are examples of failed normative pressure. Even now secular norms are falling into abeyance as enlightenment ideology, like other normative practices, undergoes a dark critique.

In pursuit of this Roden will work through the major aspects of the humanist traditions, teasing out the moral, epistemic, and ontic/ontological issues and concerns relating to those traditions before moving on to his specific arguments for a speculative posthumanism.  I’ll not go into details over most of these basic surveys and historical critiques, but will just highlight the basic notions relevant to his argument.

1. Humanists believe in the exceptionalism of humans as distinct and separate from non-human species. Most of this comes out of the Christian humanist tradition in which man is superior to animals, etc. This tradition is based in a sense of either ‘freedom’ (Sartre, atheistic humanism) or ‘lack’ (Pico della Mirandola). There are also nuances of this human-centric or anthropocentric vision stemming from Descartes to Kant and beyond, each with its own flavor of the human/non-human divide.
2. Transhumanism offers another take, one that combines medical, technological and pharmaceutical enhancements to make humans better. As Roden will surmise, transhumanism is just Human 1.0 upgraded to 2.0, and their descendants may still value the concepts of autonomy, sociability and artistic expression. They will just be much better at being rational, sensitive and expressive – better at being human. (Roden, KL 403-405)
3. Yet, not all is rosy for transhumanists, some fear the conceptual leaps of Artificial General Intelligence (AGI). As Roden tells us Bostrom surmises that “the advent of artificial super-intelligence might render the intellectual efforts of biological thinkers irrelevant in the face of dizzying acceleration in machinic intelligence” (Roden KL 426).
4. Another key issue between transhumanists and SP is the notion of functionalism, the concept that the mind and its capacities or states are independent of the brain and could be grafted onto other types of hardware. Transhumanists hope for a human-like mind that could be transplanted into human-like systems (the more general formulation is key for transhumanist aspirations for uploaded immortality, because it is conceivable that the functional structure by virtue of which brains exhibit mentality is at a much lower level than that of individual mental states, KL 476), while SP sees this as possibly wishful thinking: though it might become possible, nothing precludes the mind being placed in totally non-human forms.

Next he will offer four basic variations of posthumanism: SP, Critical Posthumanism, Speculative realism, and Philosophical naturalism. Each will decenter the human from its exceptional status and place it squarely on a flat footing with its non-human planetary and cosmic neighbors:

Speculative posthumanism is situated within the discourse of what many term ‘the singularity’ in which at some point in the future some technological intervention will eventually produce a posthuman life form that diverges from present humanity. Whether this is advisable or not it will eventually happen. Yet, how it will take effect is open rather than something known. And it may or may not coincide with such ethical claims of transhumanism or other normative systems. In fact even for SP there is a need for some form of ethical stance that Roden tells us will be clarified in later chapters.

Critical posthumanism is centered on the philosophical discourse at the juncture of humanist and posthumanist thinking, and is an outgrowth of the poststructural and deconstructive project of Jacques Derrida and others, like Foucault, in their pursuit to displace the human-centric vision of philosophy. This form of posthumanism is more strictly literary, philosophical, and even academic than the others.

Speculative realism, Roden tells us, will argue against the critical posthumanists and the deconstructive project’s stance on decentering subjectivity, saying “that to undo anthropocentrism and human exceptionalism we must shift philosophical concern away from subjectivity (or the deconstruction of the same) towards the cosmic throng of nonhuman things (‘the great outdoors’)” (Roden, KL 730). SR is a heated topic among younger philosophers, who debate even whether speculative realism is a worthy umbrella term for the many philosophers involved. (see Speculative Realism)

Philosophical naturalism is the odd man out, in that it is not centered on posthuman discourse per se, but rather looks to the “truth-generating practices of science rather than to philosophical anthropology to warrant claims about the world’s metaphysical structure” (Roden, KL 753). Yet it is the dominant discourse for most practicing scientists, and functionalism is one of the naturalist mainstays that all posthumanisms must deal with at one time or another.

I decided to break this down into several posts rather than try to review it all in one long post. Chapter one set the tone of the various types of posthumanism; the next chapter will delve deeper into the parameters and details of the “critical posthumanist” discourse. I’ll turn to that next…

Visit David Roden’s blog, Enemy Industry which is always informed and worth pondering.

1. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human. Taylor and Francis. Kindle Edition.

David Roden on Posthuman Life

 There evolved at length a very different kind of complex organism, in which material contact of parts was not necessary either to coordination of behaviour or unity of consciousness. . . .
—OLAF STAPLEDON, Last and First Men

When Stapledon wrote that book he was thinking of Martians, but in our time one might think he was studying the strangeness of what our posthuman progeny may evolve into.  In Last and First Men Stapledon presents a version of the future history of our species, reviewed by one of our descendants as stellar catastrophe is bringing our solar system to an end. Humanity rises and falls through a succession of mental and physical transformations, regenerating after natural and artificial disasters and emerging in the end into a polymorphous group intelligence, a telepathically linked community of ten million minds spanning the orbits of the outer planets and breaking the bounds of individual consciousness, yet still incapable of more than “a fledgling’s knowledge” of the whole.1

Modern humans (Homo sapiens or Homo sapiens sapiens) are the only extant members of the hominin clade, a branch of great apes characterized by erect posture and bipedal locomotion; manual dexterity and increased tool use; and a general trend toward larger, more complex brains and societies. According to Darwinian theory we evolved from early hominids such as the australopithecines, whose brains and anatomy were in many ways more similar to those of non-human apes and who are less often thought of or referred to as “human” than hominids of the genus Homo, some of whom used fire, occupied much of Eurasia, and gave rise to anatomically modern Homo sapiens in Africa about 200,000 years ago. These humans began to exhibit evidence of behavioral modernity around 50,000 years ago and migrated out in successive waves to occupy all but the smallest, driest, and coldest lands. (see Human)

You begin to see a pattern: evolution moves through various changes and transformations. Yet there is no end point, no progression, no teleological goal to it all. Instead evolutionary theory – or, more explicitly, its modern synthesis – connected natural selection, mutation theory, and Mendelian inheritance into a unified theory that applied generally to any branch of biology. One thing that stands out is that this theory deals with organic evolution. The modern synthesis doesn’t include other kinds of evolution that might portend what the posthuman descendants of humans might become. If we follow the logic of evolutionary theory as it exists, we could at best extrapolate only the continued organic evolution of humans or their eventual extinction. We know that extinction is a possibility, since 99% of the species that have ever existed on earth are now extinct. Something will eventually replace us. But what that ‘something’ might be is an open-ended speculative possibility rather than something a scientist could actually pin down and point to with confidence.


This is the basic premise of Dr. David Roden’s new work, Posthuman Life: Philosophy at the Edge of the Human. We are living in a technological era in which a convergence of NBIC technologies (an acronym for Nanotechnology, Biotechnology, Information technology and Cognitive science), as well as certain well-supported positions in cognitive science, biological theory and general metaphysics, implies that a posthuman succession is possible in principle, even if the technological means for achieving it remain speculative (Roden, KL 157). Roden terms his version of this “speculative posthumanism”:

Throughout this work I refer to the philosophical claim that such successors are possible as “speculative posthumanism” (SP) and distinguish it from positions which are commonly conflated with SP, like transhumanism. SP claims that there could be posthumans. It does not imply that posthumans would be better than humans or even that their lives would be comparable from a single moral perspective.2

Roden will develop notions of “Critical Posthumanism” — which seeks to “deconstruct” the philosophical centrality of the human subject in epistemology, ethics and politics — and of Transhumanism — which proposes the technical enhancement of humans and their capacities. Yet, as Roden admits, before we begin to speak of the posthuman we need to have some inkling of exactly what we mean by ‘human’: any philosophical theory of posthumanism owes us an account of what it means to be human such that it is conceivable that there could be nonhuman successors to humans (Roden, KL 174).

One thought that Roden brings out is the notion of subjectivity:

Some philosophers claim that there are features of human moral life and human subjectivity that are not just local to certain gregarious primates but are necessary conditions of agency and subjectivity everywhere. This “transcendental approach” to philosophy does not imply that posthumans are impossible but that – contrary to expectations – they might not be all that different from us. Thus a theory of posthumanity should consider both empirical and transcendental constraints on posthuman possibility. (Roden, KL 180)

Yet, such premises of an anti-intentional or non-intentional materialism as stem from Schopenhauer, Nietzsche, Bataille, and Nick Land would counter that we need no theory of subjectivity, that it is a prejudice of the Idealist tradition and of dialectics that are in themselves of little worth. Philosophers such as Alain Badiou, Slavoj Zizek, Quentin Meillassoux, and Adrian Johnston stand, in one form or another, for this whole Idealist tradition within materialism. Against the Idealist traditions is a materialism grounded in chaos and composition, in desire: Nick Land’s libidinal materialism begins and ends in ‘desire’, which opposes the notion of lack: his is instead a theory of unconditional (non-teleological) desire (Land, 37).3 Unlike many materialisms that start with the concept of Being, or an ontology, Libidinal Materialism begins by acknowledging thermodynamics, chaos, and the pre-ontological dimension of energy: “libidinal materialism accepts only chaos and composition” (43). Being is an effect of composition: “being as an effect of the composition of chaos”:

With the libidinal reformulation of being as composition ‘one acquires degrees of being, one loses that which has being’. The effect of ‘being’ is derivative from process, ‘because we have to be stable in our beliefs… one has a general energetics of compositions… of types, varieties, species, regularities. The power to conserve, transmit, circulate, and enhance compositions, the power that is assimilated in the marking, reserving, and appropriation of compositions, and the power released in the disinhibition, dissipation, and … unleashing of compositions (Land, 44) … [even Freud is a libidinal materialist] in that he does not conceive desire as lack, representation, or intention, but as dissipative energetic flow, inhibited by the damming and channeling apparatus of the secondary process (Land, 45).

R. Scott Bakker, author of the fantasy series The Second Apocalypse, is also the theoretician of what he terms Blind Brain Theory (BBT). Very briefly, the theory rests on the observation that out of the vast amount of information processed by the brain at every instant, only a meagre trickle makes it through to consciousness; and crucially that includes information about the processing itself. We have virtually no idea of the massive and complex processes churning away in all the unconscious functions that really make things work, and the result is that consciousness is not at all what it seems to be. Even what we term subjectivity is but a temporary process and effect of these brain processes; it has no stable identity to speak of, but is rather a temporary focal point of consciousness. (see The Last Magic Show)

So, to come back to Roden’s statement that some “philosophers claim that there are features of human moral life and human subjectivity that are not just local to certain gregarious primates but are necessary conditions of agency and subjectivity everywhere” (Roden, KL 180): with BBT and Libidinal Materialism – or what might better be termed an anti-intentional philosophy based on non-theophilosophical concepts – we can throw out the need to base our sense of what comes after the human on either ‘agency’ or ‘subjectivity’ as conditions, for both are in fact effects of the brain, not substance-based entities. So Roden need not worry about such conditions and constraints. And, as he tells us, weakly constrained SP suggests that our current technical practice could precipitate a nonhuman world that we cannot yet understand, in which “our” values may have no place (Roden, KL 187). It is in this sense that our human epistemologies, ontologies, and normative or ethical practices and values cannot tell us anything about what the posthuman might entail: it is all speculative and without qualification.

But if this is true he will ask:

Does this mean that talk of “posthumans” is self-vitiating nonsense? Does speaking of “weird” worlds or values commit one to a conceptual relativism that is incompatible with the commitment to realism? (Roden, KL 191)

If posthuman talk is not self-vitiating nonsense, the ethical problems it raises are very challenging indeed. If our current technological trajectories might result in the world turning posthuman, how should we view this prospect and respond to it? Should we apply a conservative, precautionary approach to technology that favours “human” values over any possible posthuman ones? Can conservatism be justified under weakly constrained SP and, if not, then what kind of ethical or political alternatives are justifiable? (Roden, 193)

David comes out of the Idealist traditions, which I must admit I oppose with the alternate materialist traditions. As he tells us:

As I mentioned, an appreciation of the scope of SP requires that we consider empirically informed speculations about posthumans and also engage with the tradition of transcendental thought that derives from the work of Kant, Hegel, Husserl and Heidegger. (Roden, KL 200)

These are the questions his book raises and tries to offer tentative answers to:

Table of contents:

Introduction: Churchland’s Centipede
1. Humanism, Transhumanism and Posthumanism
2. A Defence of Pre‐Critical Posthumanism
3. The Edge of the Human
4. Weird Tales: Anthropologically Unbounded Posthumanism
5. The Disconnection Thesis
6. Functional Autonomy and Assemblage Theory
7. New Substantivism: A Theory of Technology
8. The Ethics of Becoming Posthuman.

I’ve only begun reading his new work, so I will need to hold off and come back to it in a future post. Knowing that his philosophical proclivities bend toward the German Idealist traditions, I’m sure I’ll have plenty to argue with, yet it is always interesting to see how current philosophies are viewing such things as posthumanism. So I look forward to digging in. So far the book offers a clear, energetic, and informative look at the issues involved. After I finish reading it completely I’ll give a more informed summation. Definitely a work to make you think about what may be coming our way at some point in the future, if the technologists, scientists, DARPA, and the capitalist machine are any indication. Stay tuned…

David Roden has a blog, Enemy Industry, which is always informed and worth pondering.

For others in this series look here.

1. Dyson, George B. (2012-09-04). Darwin Among The Machines (p. 199). Basic Books. Kindle Edition.
2. Roden, David (2014-10-10). Posthuman Life: Philosophy at the Edge of the Human (Kindle Locations 165-168). Taylor and Francis. Kindle Edition.
3. Nick Land. The Thirst for Annihilation: Georges Bataille and Virulent Nihilism. (Routledge, 1992)

Speculative Posthumanism: R. Scott Bakker, Mark Fisher and David Roden

“A posthuman is any WHD [Wide Human Descendent] that goes feral; becomes capable of life outside the planetary substance comprised of narrow biological humans, their cultures and technologies.”

– Dr. David Roden, Hacking Humans

“So really think about it now,” Thomas continued. “Everything you live, everything you see and touch and hear and taste, everything you think, belongs to this little slice of mush, this little wedge in your brain called the thalamocortical system. The neural processing that makes these experiences possible—we’re talking about the most complicated machinery in the known universe—is utterly invisible. This expansive, far-reaching experience of yours is nothing more than a mote, an inexplicable glow, hurtling through some impossible black. You’re steering through a dream…”

– R. Scott Bakker,  Neuropath

In his novel Neuropath, Thomas Bible, one of R. Scott Bakker’s characters – an atypical academic, not one of your pie-in-the-sky theorists – reminisces with a friend about an old professor who once presented theories on the coming “semantic apocalypse,” the apocalypse of meaning. He tells this friend, Samantha, that this is when the Argument started and conveys to her its basic tenets:

“Remember how I said science had scrubbed the world of purpose? For some reason, wherever science encounters intention or purpose in the world, it snuffs it out. The world as described by science is arbitrary and random. There’s innumerable causes for everything, but no reasons for anything.”1(58)

After a few arguments on how the neural processes of the brain itself weave the illusions of free will, mind, etc., Thomas lays down the bombshell of Bakker’s pet theory, Blind Brain Theory, saying: “The brain, it turned out, could wrap itself around most everything but itself—which was why it invented minds . . . souls.” (61) Suddenly Samantha wakes up, realizing that all this leads to moral nihilism, and begins babbling defenses against such truths as Thomas has revealed. For Thomas this all seems too familiar and human; he recalls a similar conversation he’d had with his friend and cohort, Neil Cassidy, who, on realizing just where the argument led, stated (stoned and pacing back and forth like a feral beast):

“Whoa, dude . . . Think about it. You’re a machine—a machine!—dreaming that you have a soul. None of this is real, man, and they can fucking prove it.” (62)

**   **   ***

Mark Fisher: A Critique of Practical Nihilism: Agency in Scott Bakker’s “Neuropath”

My post was generated by rereading Mark Fisher’s excellent critique of Bakker’s novel in INCOGNITUM HACTENUS Volume 2: here (downloadable in .pdf format). What interested me in Fisher’s critique were his conclusions more than his actual arguments. You can read the essay yourself and draw your own conclusions, but for me the either/or scenario that Fisher draws out is how either the technocapitalists or the technosocialists (the ‘General Intellect’) in the immediate future might use such knowledge to wield powers of control/emancipation never before imaginable:

For whatever the theoretical implications of neuroscience, Bakker is surely right that its practical applications will in the first instance be controlled by the dominant force on the planet: capital. Capital can use neuroscientific techniques to stave off the semantic apocalypse: ironically, it can control people by convincing them that they are free subjects. This is already happening, via the low-level neurocontrol exerted through media, advertising and all the other platforms through which communicative capitalism operates. Whether neuroscience’s practical nihilism will do more than reinforce capital’s domination will ultimately depend on how far the institutions of techno-science can be liberated from corporate control. Certainly, there are no a priori reasons why Malabou’s question “what should we do with our brain?” should not be answered collectively, by a General Intellect free to experiment on itself. (11)

He brings up two notions, both hinging on the amoral ‘practical nihilism’ of neuroscience itself: 1) the reinforcement by the dominant ideology, technocapitalism, which would use such technologies to gain complete control over every aspect of our lives through invasive techniques of brain manipulation; or, 2) the power of some alternative, possibly Leftward, collectivist ideology that seeks, through the malleability or plasticity disclosed by these same neurosciences, to use the ‘General Intellect’ to freely experiment on itself. Do we really want either of these paths?


The Poem of the Sea: Speculative Materialism and Realism

“Art makes things. There are… no objects in nature, only the grueling erosion of natural force, flecking, dilapidating, grinding down, reducing all matter to fluid, the thick primal soup from which new forms, bob, gasping for life.”
Camille Paglia, Sexual Personae

“The rupture with the idealist tradition in the field of philosophic study is of great necessity today.”
Alain Badiou

“And this brings me to the great underlying problem: the status of the subject. Meillassoux’s claim is to achieve the breakthrough into independent ‘objective’ reality. For me as a Hegelian, there is a third option: the true problem that arises after we perform the basic speculative gesture of Meillassoux… is not so much what more can we say about reality-in-itself, but how does our subjective standpoint, and subjectivity itself, fit into reality. … we need a theory of subject which is neither that of transcendental subjectivity nor that of reducing the subject to a part of objective reality. This theory is, as far as I can see, still lacking in speculative realism.”
– Slavoj Zizek 

Timothy Morton on his blog Ecology without Nature mentioned the music of Sunn O))) and Boris, which was weird because I was listening to their album Altar at the moment I saw his article on them… a Jungian synchronicity? – or, just another speculative event among like-minded connoisseurs of the transcendent real. Anyway, Altar is performative music in which one enters an arena of the erotics of the technological subject, a subject that is no longer bound by our concepts of the human: or, as Slavoj Zizek has so eloquently put it, the “subject as the Void, the Nothingness of self-relating negativity”. [1]

As we enter the Age of the Real, when the Dionysian fluidity of the chthonian – a radical contingency in which, as Quentin Meillassoux brilliantly states it, “…not only are there no laws which hold with necessity, every law is in itself contingent, it can be overturned at any moment” (ibid. 215) – vies with the Apollonian formalism of science, we discover the terminal phase of postmodern culture in an electrical gaze between masks that forms a new object: an erotic, molten dance of sensual objects and thought out of which emerges the “notion of virtuality, supported by the rationality of the Cantorian decision of intotalising the thinkable, makes of irruption ex nihilo the central concept of an immanent, non-metaphysical rationality.” [2]


Excision Ethos: The Posthuman Object/Subject

“Transhumanism for me is like a relationship with an obsessive and very neurotic lover. Knowing it is deeply flawed, I have tried several times to break off my engagement, but each time, it manages to creep in through the back door of my mind.”
– N. Katherine Hayles

“In fact, for me, the facticity, the object as a support quelconque of facticity, you can iterate it, without any meaning. And that’s why you can operate with it, you can create a world without deconstruction and hermeneutics. And this is grounded on pure facticity of things, and also of thinking. It is not correlated.”
– Quentin Meillassoux

David Roden’s essay Excision Ethos, published on enemyindustry.net, offers a flat ontological reading of the posthuman, which, he says, implies “an excision of the human”. He tells us that “the logic of excision forces us to accept that there is no rigorous or pure demarcation between theoretical and practical thinking.” [2] He argues that a “flat ontology would allow emergent discontinuities between the human and non-human. Here we understand radical differences between humans and non-humans as emergent relations of continuity or discontinuity between populations, or other such particulars, rather than kinds or abstract universals.” To understand his use of flat ontology we must dig deeper into the many theories surrounding it as the central term underlying his posthumanist philosophy.

In his own essay on flat ontology, Flat Ontology II: a worry about emergence, Roden tells us that the terminological justification for a flat ontology originally came from Gilles Deleuze and was then appropriated by his ephebe Manuel DeLanda. DeLanda, in his work Intensive Science and Virtual Philosophy, describes ontology within the Deleuzian enterprise as a “becoming without being”, or as “a universe where individual beings do exist but only as the outcome of becomings, that is, of irreversible processes of individuation” (DeLanda, 84). [1] This forms the nucleus of DeLanda’s flat ontology, in which he describes individual organisms as being “component parts of species, much as individual cells are parts of the organisms themselves, so that cells, organisms and species form a nested set of individuals at different spatial scales” (DeLanda, 85). This is a non-hierarchical position, which DeLanda further explicates, saying that in a “flat ontology of individuals, like the one I have tried to develop here, there is no room for reified totalities. In particular, there is no room for entities like ‘society’ or ‘culture’ in general. Institutional organizations, urban centres or nation states are, in this ontology, not abstract totalities but concrete social individuals, with the same ontological status as individual human beings but operating at larger spatio-temporal scales” (63). Paul Ennis, remarking on flat ontologies in general, tells us in a humorous aside that there “is no vertical ontological totem pole.” [3] As DeLanda emphasizes in his book: “…while an ontology based on relations between general types and particular instances is hierarchical, each level representing a different ontological category (organism, species, genera), an approach in terms of interacting parts and emergent wholes leads to a flat ontology, one made exclusively of unique, singular individuals, differing in spatio-temporal scale but not in ontological status” (47).
