Re-reading R. Scott Bakker: The Post-Intentional World

Was re-reading R. Scott Bakker’s post on Reza Negarestani, The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts. A couple of quotes:

Knowledge is no universal Redeemer, which means the ideal of Enlightenment autonomy is almost certainly mythical. What’s required isn’t an aspiration to theorize new technologies with old concepts. What’s required is a fundamental rethink of the political in radically post–intentional terms.

The big question isn’t, ‘Will Artificial Intelligence be moral?’ but rather, how will human intelligence and machine intelligence combine? Be it bloody or benevolent, the subordination of the ‘human’ is inevitable. The death of language is the death of reason is the birth of something very new, and very difficult to imagine, a global social system spontaneously boiling its ‘airy parts’ away, ratcheting until no rattle remains, a vast assemblage fixated on eliminating all dissipative (as opposed to creative) noise, gradually purging all interpretation from its interior. (my italics)

If R. Scott Bakker is even close, then after the Singularity, after the Great Purge of the human from the social, cultural, and economic structures of commerce and civilization on grounds of efficiency, after the demise of Reason at the hands of Optimized Intelligence, and after the integration of whatever human resources and systems remain into the machinic assemblages (those organizational forms of the non-symbolic and post-intentional apparatuses that replace and obsolesce our current regimes of politics), what remains of humanity is nothing less than what Nick Land once surmised, half-ironically, in Meltdown: “Nothing human makes it out of the near-future.” More to the point, none of our current intentional and predictive efforts can even fathom what form this will take… (See David Roden’s thesis on non-symbolic workspaces.)

The notion that the future holds much more than we imagine (or can even imagine) seems inevitable according to Scott. He sees an exponential leap in intelligence over the coming decades, and further into the 21st century, that precludes any thought-form our past intentional philosophical heritage could supply to help us understand the predicament we are facing. Why? Because that heritage rests on ‘medial neglect’: our very knowledge is itself built on ignorance and error. The sooner we accept this datum, he tells us, the better off we’ll be. All our efforts to decipher the human enigma, the human condition, are doomed to failure, because the very neural feedback system (i.e., our brain) we use to know and understand such things is itself a product of the hidden machinery behind the screen of neural activity that produces consciousness in the first place; and we will never have direct access to this self-referencing, unconscious system that produces and uses language and Reason to begin with. Caught in a feedback loop, not the correlational circle of the Kantian phenomenal/noumenal divide, but the circle of the limiting power of consciousness itself, which the brain was never constructed to tackle the problem of its own origins and ends, we face the impossible task of describing processes we are essentially blind to. Processes shaped by evolution and accidental environmental pressures as coping mechanisms for survival and replication, and nothing else.

If Scott is correct, then we are already being integrated into a Global Machinic Assemblage, a machinic organism that is automating and purging the less efficient elements of our intentional and human heritage, stripping out its inadequate performances as part of an ongoing optimization of intelligence coordination across the social, cultural, economic, scientific, and machinic dimensions: an algorithmic program of optimizing Intelligence. The planet itself is being integrated into a machinic organism whose self-organizing tendencies are based not on language or intention, but on the very real hierarchical and heuristic devices of a superordinate reason, arising not as some Transcendental Coordinator but as the immanent force of optimizing Intelligence itself, internal to its own alien and inhuman needs.

As part of this transition, as Scott sees it, language and Reason themselves might be eliminated from the equation and replaced by some more optimized system of communication and collective coordination. What that might entail is not known, and probably not knowable by humans with our “low dimensional” toolset (Bakker) at this point. As David Roden argues in the paper cited above:

Might a nonsymbolic workspace (NSW) mimic or exceed this representational power?
No such technology exists at present, so the only way in which to begin to evaluate this possibility is by considering how the properties of non-symbolic media might furnish this cognitive potential.

One effort cited is the work of Bruce MacLennan, who develops a theory of simulacra that, Roden tells us, allows us to envisage a representational format which is (a) non-symbolic, (b) possessed of computational resources unavailable to symbolic systems, and (c) capable of representing its own computational procedures and grammatical structures in terms of its own imagistic resources. The point is that this is an alien and post-intentional system that need not be based on our human intentional structures, nor on our symbolic modes of language and mathematics, but might well be another type and level of natural system altogether. So that, as Roden concludes,

… given linguistic constitutivity the successful displacement of public language by a powerful non-symbolic medium would remove the conditions that make propositional attitudes possible. Given that propositional attitudes are human-distinctive, in the way described, human minds would cease to exist. They would be replaced by posthuman minds with characteristic repertoire of nonpropositional attitudes exploiting non-linguaformal media for mental representation.

In other words, this alien future of the machinic might very well optimize the human into the inhuman not by way of our own intentional efforts, nor by the normative efforts of some Transcendental Reason and speculative apparatus of the “give and take of reasons” (Brandom/Negarestani), but through the very post-intentional elimination of those human elements themselves, producing both a post-intentional world and the elimination of the human from the equation. An elimination through the very optimization of non-symbolic spaces, and through the coordination, integration, and eliminative strategies of an optimized Intelligence: one based not on Reason as we know it, but on an unforeseen transformation or mutation of non-symbolic systems that have sloughed off the skin of human Reason and thereby produced post-human forms of which we remain in the dark.

1 thought on “Re-reading R. Scott Bakker: The Post-Intentional World”

  1. Last night I heard a talk at Duke entitled “AI and Morality.” The presenter was Walter Sinnott-Armstrong, a professor of philosophy at Duke — our daughter is just starting a half-time research assistantship gig in his lab, which is how I happened to attend. Among other topics he outlined a project currently underway in his research group, where they’re trying to build a moral decision-making app. As I understand it the scheme is this: Through crowdsourcing a wide array of people will report on a moral dilemma they faced, what decision they made, and why they think they made that decision. The aggregated data will be iteratively analyzed by an AI self-learning module into algorithms by extracting, combining, and weighting factors that humans purport to be using in making moral judgments. Next, people will be asked to make moral judgments about one another’s dilemmas, and to select which of the decision-making factors were most important in their decisions. This phase will serve as a validation sample for the AI’s initial factorization, as well as building preliminary datasets for computing separate algorithms simulating each individual’s moral decision-making criteria. These individual algos would form the basis of the app, in which individual users facing moral dilemmas would enter into virtual dialog with their simulated selves about how to decide. By compiling and analyzing the big dataset across all the users, the app would also let individuals compare their own factors and weightings with others.

    Note that this project isn’t trying to converge on a best morality or to make people act more morally. It’s more an attempt to enhance individuals’ self-awareness of their own moralities. Will this reflexive self-awareness make people more morally self-critical, letting them learn from their own algorithmic double about their own tacit standards and inconsistencies? Will awareness of their own tacit unconscious moral biases enable them to override those biases through conscious effort and repetition, or perhaps by letting the algo do the heavy lifting for them? Will the aggregate big-data AI serve as a circuit-breaker or informant to whatever singularity optimizer converges at the transhuman edge, preventing it from collapsing all morality into a unified, abstract, posthuman global standard? Or will feedback from the big dataset algo squeeze all the individuals into a collective groupthink morality? The lab should be able to shed some empirical light on these speculations.
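    The per-user factor-weighting scheme described above can be sketched in miniature. Everything here is a hypothetical illustration, not the lab's actual pipeline: the factor names, the toy data, and the choice of a simple logistic model fit by gradient descent are all my own assumptions about how "extracting, combining, and weighting factors" might work at its simplest.

    ```python
    import math

    # Hypothetical moral factors a respondent might cite (illustrative only).
    FACTORS = ["harm_avoided", "fairness", "loyalty"]

    def predict(weights, scores):
        """Probability the user chooses to act, via a logistic combination
        of their weighted factor scores."""
        z = sum(w * s for w, s in zip(weights, scores))
        return 1.0 / (1.0 + math.exp(-z))

    def fit_user_weights(dilemmas, decisions, lr=0.5, epochs=2000):
        """Fit one user's factor weights from their reported dilemmas and
        decisions by stochastic gradient descent on the logistic loss."""
        weights = [0.0] * len(FACTORS)
        for _ in range(epochs):
            for scores, chose_act in zip(dilemmas, decisions):
                p = predict(weights, scores)
                err = p - (1.0 if chose_act else 0.0)
                weights = [w - lr * err * s for w, s in zip(weights, scores)]
        return weights

    # Toy data: each dilemma is scored on the three factors in [-1, 1];
    # this imagined user decides almost entirely on harm avoidance.
    dilemmas = [(1.0, -0.2, 0.1), (0.8, 0.5, -0.4),
                (-0.9, 0.3, 0.2), (-0.7, -0.6, 0.5)]
    decisions = [True, True, False, False]

    w = fit_user_weights(dilemmas, decisions)
    # The fitted weights are the user's "algorithmic double": the factor
    # with the largest absolute weight is their dominant tacit criterion.
    dominant = FACTORS[max(range(len(w)), key=lambda i: abs(w[i]))]
    ```

    Comparing one user's fitted weights against the aggregate across all users is then just vector comparison, which is presumably how the app would let individuals see where their tacit standards diverge from the crowd's.
    
    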

