The Wall at the End of Things

R. Scott Bakker has another of his posts circumscribing the Semantic Apocalypse: The Dim Future of Human Brilliance. In a handful of paragraphs he exposes humans as fairly lightweight “targeted shallow information consumers” – by which he means we act only on the bare minimum of information needed for biological and social reproduction and transmission. Everything else is for him non-essential, an extravagant waste or fantasia. What this of course leads to is a total reduction of the human organism to a strict naturalist perspective of organic feedback loops of birth, growth, reproduction, old age, and death. Nothing more. For all our advances in spiritual, aesthetic, philosophical, religious, and other forms of cognitive experience the naturalist has only one answer: the phantasmagorical.

Because human cognition largely turns on cues – sensitivity to information differentially related to the systems cognized – Scott tells us that we shift between a mode of confusion, or ‘crash space’, and a mode of manipulation, or ‘cheat space’. As we try to understand the world we receive signals that we translate into the cultural “cues” we’ve grown to know, so that if these are scrambled we enter this strange and anomalous crash space, or what I’ll call a non-semantic space of méconnaissance or misrecognition, in which things, people, and events no longer tied to our Symbolic Order come unglued and become for us phantasmagorical and anomalous. Obviously we could compare this to psychological systems from Freud through Lacan, but for Scott this is dubious at best, since all such systems from his naturalist perspective miss the point that the brain and mind are one, devoid of such dualistic entrapments and hermeneutics. In fact his eliminative approach relieves us of all the burdens of metaphysics and reduces us to the lone naturalistic system and method of the sciences.

Unlike that old Radicalized Idealist Zizek, whose transcendental materialism is appalled by such reductive naturalisms (Badiou’s “democratic materialism”), who seeks Descartes’s Subject in an Unconscious irreducible to the brain’s functions; an Idealist and Anti-Realist at heart whose defense against the naturalist is the “gap,” the thin red line between thought and being. Zizek, the Dualist; or, dialectical materialist. No. Scott will have no truck with such airy climes as transcendental fields and Subjects. Scott’s all for the erasure of lines… “To naturalize meaning is to understand the soul in terms continuous with the cosmos.” Spinoza or Bust for Scott? The plane of immanence? Continuity of soul and cosmos: reduced to the substratum of material being – thought and being, One? Substance of subject and cosmos continuous… is Scott after all a metaphysician? It depends on Scott’s thoughts concerning a complete or incomplete cosmos: is the cosmos, as in Greek thought, a harmonious whole, a totality? Or is the cosmos for Scott discontinuous and discrete, incomplete, a fragmented and distorted, even conflict-ridden and asymmetrical realm of antagonistic forces at play?

In Spinoza’s Sin and Leibniz’s Mill Scott begins with how men developed the modern sciences in reaction to religious conceptions of God. I’ll not go into depth here, only follow the trail of Spinoza through his writings to get a better picture of his notion of naturalism and what he means by understanding the “soul in terms continuous with the cosmos”. In this essay he teases out one of Spinoza’s unique ways of countering certain metaphysical quandaries: “Spinoza catalogues and critiques the numerous expressions of this fundamental error in what follows, showing why the perplexities and contradictions that pertain to a personal God arise, and how these problems simply fall away if you subtract what is human from God.” This concept of “subtraction” – Spinoza’s identification of God and Substance – is the stickler that got him branded a heretic and expelled from his Jewish community. It is the radically monistic approach that typifies not only his claims with regard to scientia intuitiva but his entire conception of knowledge; or, more precisely, his entire ontology of mind and nature conceived as twin aspects or attributes of a single, indivisible substance.

Without going any further into detail: Scott at the end of this essay proclaims that Blind Brain Theory “affords the resources required to throw off the analogical yoke of the Conditioned once and for all, to subtract the human, not from God, but from the human, thus showing that–beyond the scope of a certain parochial heuristic at least–we just never were what we took ourselves to be.” So in this sense what Scott is proposing is an even more powerful argument not just against God, but more importantly against all of our beliefs in the “human”. The humanistic tradition itself comes under attack in Scott’s parlance because we have “never been human”. So if we’ve never been human, that leaves the question: what are we then, if not human?

How to Construct an Informational Organism

In his post How to Build a First Person (Using only Natural Materials) he describes consciousness as a subsystem of the brain:

The conscious subsystem of the brain is that portion of the Field biologically adapted to model and interact with the rest of the Field via information collected from the brain. All we need ask is what information is available to what cognitive resources as the conscious subsystem generates its model. In a sense, all we need do is subtract varieties and densities of information from the pot of overall information.

So in this sense we have overthrown the vocabulary of humanistic terminology for a more heuristic one, an approach based on an information theory of the brain and consciousness. No longer is there all the metaphysical baggage of Mind/World dilemmas and the supporting humanistic normative and epistemological/ontological quandaries that go with it. Rather than deal with the messiness of philosophy, Scott moves toward the sciences and heuristics: an approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Examples of this method include using a rule of thumb, an educated guess, an intuitive judgment, stereotyping, profiling, or common sense.

So that in this new heuristically defined vocabulary Scott describes the First Person not as a human Self, Subject, Ego, etc. (all humanistic attributes “subtracted”), but this way:

Consciousness as you conceive/perceive it this very moment now is the tissue of neglect, painted on the same informatic canvas with the same cognitive brushes as our environment, only blinkered and impressionistic in the extreme. Reflexivity, internal-relationality, sufficiency, and intentionality, can all be seen as hallucinatory artifacts of informatic closure and scarcity, the result of a brain forced to make the most with the least using only the resources it has at hand. This is a picture of the first person as an informatically integrated series of scraps of access, forced by structural bottlenecks to profoundly misrecognize itself as something somehow hooked upon the transcendental, self-sufficient and whole….

This new information ontology does away with the necessary conceptual traces of philosophical metaphysics in the old style, and offers an open and ongoing scientific vocabulary of the information sciences – one in which “medial neglect” plays a central role as a heuristic device for his theory of meaning.

Medial Neglect: Our Blindness to Ourselves

In another essay, Intentional Philosophy as the Neuroscientific Explananda Problem, Scott describes what he means by medial neglect, saying that a “curious consequence of the neuroscientific explananda problem is the glaring way it reveals our blindness to ourselves, our medial neglect.”

The mystery has always been one of understanding constraints, the question of what comes before we do. Plans? Divinity? Nature? Desires? Conditions of possibility? Fate? Mind? We’ve always been grasping for ourselves, I sometimes think, such was the strategic value of metacognitive capacity in linguistic social ecologies. The thing to realize is that grasping, the process of developing the capacity to report on our experience, was bootstrapped out of nothing and so comprised the sum of all there was to the ‘experience of experience’ at any given stage of our evolution. Our ancestors had to be both implicitly obvious, and explicitly impenetrable to themselves past various degrees of questioning.

This is the dimension of causes. What kickstarted things? What conditions gave rise to thinking creatures such as ourselves? Will we ever know? Merlin Donald in his Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition (1993; see a précis) once argued the australopithecines were limited to concrete/episodic minds: bipedal creatures able to benefit from pair-bonding, cooperative hunting, etc., but essentially of a seize-the-day mentality: the immediacy of the moment. The first transition away from the instant, the present, and toward a more temporal system of knowledge acquisition and transmission was to a “mimetic” culture: the era of Homo erectus, in which mankind absorbed and refashioned events to create rituals, crafts, rhythms, dance, and other pre-linguistic traditions. This was followed by the evolution to mythic cultures: the result of the acquisition of speech and the invention of symbols. The third transition carried oral speech to reading, writing, and an extended external memory-store, seen today in computers, advanced machine or artificial intelligence, and extrinsic data-memory technologies. The next stage might entail the ubiquitous and autonomous rise of external agencies – intelligent machines, or AIs that live alongside humans as partners – in some new and as yet unforeseen cultural matrix or Symbolic Order yet to be envisioned or described.

In Our Time: Deep Information and ‘Data Glut’

In our time society has become so complex that we are lost in a sea of information, what Scott terms deep information or what many term ‘data glut’. Yet at the same time our generation is beginning to lose touch with the traditions of humanistic book culture that have guided and encoded our systems of Law and Ethics for millennia.

Walter J. Ong once suggested that the difference between oral and literate cultures boils down to a few core distinctions: Oral traditions are additive rather than subordinative, aggregative rather than analytic, empathetic and participatory rather than objectively distanced, and situational rather than abstract. The written word also enables individuals to transcend the limits of subjectivity. As Ong puts it, “Abstractly sequential, classificatory, explanatory examination of phenomena or of stated truths is impossible without writing and reading.” Writes Ong, “To say writing is artificial is not to condemn it but to praise it. Like other artificial creations and indeed more than any other, it is utterly invaluable and indeed essential for the realization of fuller, interior, human potentials.” This is the paradox of literacy: It enables us to externalize our experiences and share those experiences with utter strangers, while simultaneously fostering deeper and deeper levels of introspection. “Technologies are not mere exterior aids but also interior transformations of consciousness, and never more than when they affect the word.” And while technology inevitably relies on artifice, “artificiality is natural to human beings.”1 At our core we are already artificial and machinic.

What we learn is the interesting point that humans have always already been artificial beings: to be natural is to be artificial. From the moment humans began constructing tools, technics, and technologies to aid them in survival – hunting and gathering, agriculture, war, travel, dams, irrigation, or a thousand and one other artificial aids, up to our current appreciation and use of computers – we’ve interacted with artificial systems. But for the most part we’ve shied away from treating ourselves as artificial; that is, until the sciences began seeking to objectify and reduce the human itself, like everything else in the natural world (and maybe there never was a natural world?), to a machine – from the age of Locke and Newton, when the mechanistic sciences reigned, up to our own age, when everything seems to be vanishing from the physical into the immaterial realms of quantum mechanics and our electronic highway. We’ve come a long way.

Andrew Pickering in his The Cybernetic Brain: Sketches of Another Future relates the story of Ross Ashby and the Black Box theory of ontological cybernetics. Ashby in his textbook An Introduction to Cybernetics (1956) would tell us this about black boxes: “The problem of the Black Box arose in electrical engineering. The engineer is given a sealed box that has terminals for input, to which he may bring any voltages, shocks, or other disturbances, he pleases, and terminals for output from which he may observe what he can.” The first point to note is that Ashby emphasized the ubiquity of such entities. This passage continues with a list of examples of people trying to get to grips with Black Boxes: an engineer faced with “a secret and sealed bomb-sight” that is not working properly, a clinician studying a brain-damaged patient; a psychologist studying a rat in a maze. Ashby then remarks, “I need not give further examples as they are to be found everywhere. . . . Black Box theory is, however, even wider in its application than these professional studies,” and he gives a deliberately mundane example: “The child who tries to open a door has to manipulate the handle (the input) so as to produce the desired movement at the latch (the output); and he has to learn how to control the one by the other without being able to see the internal mechanism that links them. In our daily lives we are confronted at every turn with systems whose internal mechanisms are not fully open to inspection, and which must be treated by the methods appropriate to the Black Box” (Ashby 1956, 86). On Ashby’s account, then, Black Boxes are a ubiquitous and even universal feature of the makeup of the world. We could say that his cybernetics assumed and elaborated a Black Box ontology.2

The Brain as Black Box: or, Blind Brain Theory (BBT)

Scott in his essay mentioned above Intentional Philosophy as the Neuroscientific Explananda Problem leads us into the black box theory of the brain, saying,

Generally what we want is a translation between the manipulative and the communicative. It is the circuit between these two general cognitive modes that forms the cornerstone of what we call scientific knowledge. A finding that cannot be communicated is not a finding at all. The thing is, this—knowledge itself—all functions in the dark. We are effectively black boxes to ourselves. In all math and science—all of it—the understanding communicated is a black box understanding, one lacking any natural understanding of that understanding.

We can see the Brain itself as a sort of Black Box that has inputs from the environment, and outputs through our physical and mental capabilities. Yet we are blind to its inner workings. For years it was all guesswork as to what the brain was actually doing, and no one could describe even the simplest behavior, like throwing a baseball, in terms of what the brain was doing to handle all the complexity needed to operate the body and throw that baseball. We were stumped. Of course philosophers had been making stabs at such things through the ontology of mathematics and other forms for hundreds of years. Sir Isaac Newton could describe in mathematical terms the notions of cause and effect in gravity, motion, the trajectories of stars, bullets, etc. Einstein could describe energy and mass and how the one was convertible into the other through mathematical equations, and we could translate this information into practical knowledge with atom bombs. But no one could truly describe what was going on deep down in this world of atomic relations and relativity. It was a black box, closed from the outside and indecipherable the closer we got to its inner workings. Even now in quantum physics, which deals with the smallest forces, we have yet to pierce the veil below the black box. We have to build larger and larger instruments to peer into the tinier darkness of atoms, to the point that our Large Hadron Collider covers miles and miles of underground facilities.

Next came, in our own time, the Mind itself. Philosophy from the time of Descartes, Kant, the German idealists, the century of phenomenologists, analytical philosophers, and post-structuralists could not open that box up. But now we have new instruments, imaging systems that can peer into the inner workings of the brain for the first time in human history. They can track the very moment a thought is born through to the movement of that baseball being thrown. Exciting times. Yet, like many things, we are only at the beginning stages of these new neurosciences and have years of reverse engineering the brain/mind ahead of us.

This is where Scott comes in again with his skepticism and eliminative strategies. For him we as humans are ill-equipped to study all these very complex systems because our sociocognitive systems are prone to being duped or confused by misrecognition, even as our manipulative skills let us dupe others. Kant was one of the first philosophers to take time to categorize and study the errors of other philosophers as a form of duping and being duped. Today we have lists of cognitive biases. A cognitive bias refers to a systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion. What this means is that we as humans are very prone to error in our judgments about the world and ourselves, and for the most part spend our daily lives either in ‘crash space’ – confused and muddled – or we enter ‘cheat spaces’ and manipulate others to our advantage, keeping them duped and blinded to the truth. We have terms for people who do that: sociopaths and psychopaths. Not nice people.

Ultimately for Scott we need a ‘theory of meaning’, and he thinks with his Blind Brain Theory that he has a good beginning toward such a framework for understanding our predicament. As he says,

The future of human cognition looks dim. We can say this because we know human cognition is heuristic, and that specific forms of heuristic cognition turn on specific forms of ecological stability, the very forms that our ongoing technological revolution promises to sweep away. Blind Brain Theory, in other words, offers a theory of meaning that not only explains away the hard problem, but can also leverage predictions regarding the fate of our civilization. It makes me dizzy thinking about it, and suspicious—the empty can, as they say, rattles the loudest. But this preposterous scope is precisely what we should expect from a genuinely naturalistic account of intentional phenomena. The power of mechanistic cognition lies in the way it scales with complexity, allowing us to build hierarchies of components and subcomponents. To naturalize meaning is to understand the soul in terms continuous with the cosmos.

For naturalists like Scott it’s closer to a return to the pre-critical thought of Spinoza when he uses such statements as the soul as mechanistic and “continuous with the cosmos”. This notion of a unified system of the world, in which the totality of the universe is one continuous and unified substance, is the core of Spinoza’s metaphysics, which was naturalist through and through. For Spinoza the body of the cosmos was the body of his God: substantive and unified, materialist to the core. In Spinoza’s system mind and matter – the brute material Real of the universe and the self-reflexive powers of ideal spirit – are merely differing “perspectives” on the same, unchanging substance. So naturalism has its antecedents in a metaphysical world view just like any other form of thought.

It’s this reduction of mind to the natural that Scott will call the Holy Grail: the naturalization of meaning. Yet if we take what we learned from Ong, we discover that this reduction to the natural, or naturalization of meaning, is just the opposite: it is actually the artificialization of meaning and mind. So dialectically the terms are reversed. We see this in much of the early ontologies of cybernetics as well. Along with the reversal of the substance-based materialism of atoms in physics into the present immaterial physics of fields and forces, we see this in the transformation of the various biological sciences. Even as the neurosciences rely on computational models and representations of real-time data constructed out of electrons, mathematics, and algorithms to decipher and reverse engineer the brain and behavior, we are doing much the same in the AI, Robotics, Nanotech, and Biotech industries and sciences. Everything is artificial and immaterial, rather than natural and substance-based.

Artificial Nature: Artificial Life (AL) and Artificial Intelligence (AI)

Today the sciences of Artificial Life (AL) and Artificial Intelligence (AI), unlike earlier mechanical forms, have a capacity to alter themselves and to respond dynamically to changing situations.3 A convergence between these various technologies and sciences is bringing into existence a new kind of liminal machine, one associated with life, inasmuch as it exhibits many of the behaviors that characterize living entities: homeostasis, self-directed action, adaptability, and reproduction. Neither fully alive nor at all inanimate, these liminal machines exhibit what may be called machinic life, mirroring in purposeful action the behavior associated with organic life while also suggesting an altogether different form of “life,” an “artificial” alternative, native, or parallel, not fully answerable to the ontological priority and sovereign prerogatives of the organic, biological realm. (Johnston, p. 1)

For Scott this issue is simple: the only real question is one of how radically the human will be remade (see Writing After the Death of Meaning). Tonight I’ve been reading a new book that gives a window onto just that. Frank Tipler, one of the originators of The Anthropic Cosmological Principle, tells us in an essay for John Brockman’s What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence:

The Earth is doomed. Astronomers have known for decades that the sun will one day engulf the Earth, destroying the entire biosphere— assuming that intelligent life has not left the Earth before this happens. Humans aren’t adapted to living away from the Earth; indeed, no carbon-based metazoan life-form is. But AIs are so adapted, and eventually it will be the AIs and human uploads (basically the same organism) that will colonize space.4

For Tipler and others in the book, hackers in about twenty years will more than likely have solved the AI programming problem, long before any carbon-based space colonies are established on the moon or Mars. The AIs, not humans, will colonize these places instead, or perhaps take them apart. No human, no carbon-based human, will ever traverse interstellar space. (Brockman, p. 17)

In another essay Dimitar D. Sasselov remarks that it’s just wishful thinking that we will ever develop the continuity and preservational transformation capabilities to survive our planetary existence. No living species seems to be optimal for survival beyond the natural planetary and stellar time scales. So if our future is to be long and prosperous, we need to develop artificial intelligence systems in the hope of transcending the planetary life cycles in some sort of hybrid form of biology and machine. (Brockman, p. 15)

Another theoretical physicist, Antony Garrett Lisi, relishes the new masters, as he terms the AIs arriving from the future, saying,

As machines rise to sentience— and they will— they’ll compete in Darwinian fashion for resources, survival, and propagation. This scenario seems like a nightmare to most people, with fears stoked by movies of terminator robots and computer-directed nuclear destruction, but the reality will likely be different. We already have nonhuman autonomous entities operating in our society with the legal rights of humans. These entities— corporations— act to fulfill their missions without love or care for human beings. (Brockman, p. 22)

Over and over in these essays one sees scientists, entrepreneurs, engineers, philosophers, etc. each professing a future in which humans will become less and less viable as a species as time goes on, and we will either merge with our creations or be replaced and, as a species, die out. Either way they see us coming to a Singularity of change that will introduce an intelligence surpassing ours, as ours surpassed that of our wild cousins. As Paul Davies, another physicist, states it: very soon the distinction between artificial and natural will melt away. Designed intelligence will increasingly rely on synthetic biology and organic fabrication, in which neural circuitry will be grown from genetically modified cells and spontaneously self-assemble into networks of functional modules. Initially the designers will be humans, but soon they’ll be replaced by altogether smarter DI systems, triggering a runaway process of complexification. (Brockman, p. 29)

Of course there was the occasional pessimist in the book, too – those who fell back on older humanist thought-forms, struggling with ethics and the humanist stance. But as I read their arguments it became obvious that even they could see the inevitability of current scientific endeavors, funding, and political, social, and corporate initiatives to bring this next stage in artificial existence into the Real. Kevin P. Hand of Caltech, contemplating SETI and why we have yet to discover intelligent life in the universe, said eerily:

It may be that the common fate for thinking machines is orbiting the cool, steady glow of an M-dwarf star, year-in and year-out running simulations of the world around it for the pure satisfaction of getting it right. These superintelligent creatures could be the cosmic version of the lone intellect in a cabin in the woods, satisfied innately by their own thoughts and internal exploration. (Brockman, p. 33)

Fables of the Future: The Great Filter That Gobbled Intelligence

For my own part I’m reminded of the Great Filter (Fermi paradox). Think of Scott’s Semantic Apocalypse as a Great Filter that hunts down intelligence and cannibalizes it. Something like Nick Land’s mishmash of Lovecraft and AI, the Gnon – the notion that SETI has yet to find intelligent life in the universe. Why? Could it be that civilizations based on the organic/anorganic dialectic of technology have always reached this point of convergence we term the Singularity? What if what we now understand as the silence of the galaxies is a message of ultimate ominousness? A thing there is, of incomprehensible power, that takes intelligent life for its prey. The Great Filter does not merely hunt and harm, it exterminates. It is an absolute threat. The technical civilizations which it aborts, or later slays, are not badly wounded, but eradicated, or at least crippled so fundamentally that they are never heard of again. Whatever this utter ruin is, it happens every single time. The mute scream from the stars says that nothing has ever escaped it. Its kill performance is flawless. A Tech-Civilization death sentence with probability 1.

And what if it is because organic life has reached the point we, too, are at: the moment when life-forms and intelligence crossed the barrier from the organic to the anorganic? And what if at this point the erasure begins, and the anorganic that had for so long symbiotically used the organic to reach its goal did as it has always done: stripped its parents of their memories, their intelligence, their lives? What then? A fable? A surmise? A horror ontology? A joke? But who is the joke on?


  1. Wright, Alex (2007-06-01). Glut: Mastering Information Through The Ages (Kindle Locations 4251-4261). National Academies Press. Kindle Edition.
  2. Pickering, Andrew (2010-04-15). The Cybernetic Brain: Sketches of Another Future (pp. 19-20). University of Chicago Press. Kindle Edition.
  3. John Johnston. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI (p. ix). Kindle Edition.
  4. Brockman, John (2015-10-06). What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence (p. 17). HarperCollins. Kindle Edition.

5 thoughts on “The Wall at the End of Things”

  1. My reading list doubles every time I read your blog, you know that! Very cool stuff.

    In answer to your question, I tend to think ‘cognitive pluralism’ is too often a canard in debates involving intentionality. The ‘cosmos’ for me is neither unified nor fractionate, neither complete nor incomplete, or anything else we can determine from our armchairs. I do think, however, it’s obvious that our *knowledge* of the cosmos is fractionate, heuristic, and incomplete, and that, given how incredibly small we are, it will always be so.

    But this obvious epistemological pluralism is too often used as dialectical cover for insisting on intrinsic or original intentionality, which is to say, a tradition-conserving ontological dualism—this is as true in analytic debates as continental, I think. Sometimes the bullet is bitten, but the dualism is often hidden, either in spooky emergentism or in Zizekian notions of ‘contradictory monism.’

    As a naturalist, I’m committed to methodological reductionism, which is not a metaphysical thesis so much as a bet on a powerful way—Anthropocene powerful—to reverse-engineer nature. So for me, ontological claims regarding the fundamental disposition of the universe are simply beside the point. No matter how many apriori arguments someone like Zizek throws at the science, he’s as much a hapless hostage to the empirical as the rest of us. What does his metaphysical speculation gain us? Grounds for hope, maybe. Certainly not conviction.

    For me, such positions pretty clearly fit the apologetic pattern taken by traditional apriori discourses under empirical duress. It seems safe to assume they will suffer the same fate. Our traditional hopes have a decided tendency to be misplaced when it comes to science.

    Time for something uglier, meaner, new.


    • Problem with using a term like “reductionism” is that it does have a baggage of metaphysical garbage in its wake, so confuses those who might otherwise agree with you. Just like the use of “nature” in “reverse-engineer nature” statement. What is nature in this statement? Objective? Subjective? Mathematical model? Common sense meaning? That’s the problem with terms, they always have someone who wants the particulars rather than the slippery general statement. I’ve seen you slip in and out of so many vocabularies it’s sometimes hard to nail your language down, so that a reductionist methodology based on heuristics seems to slide around on the spectrum. Probably my own fault for being a “word man”. 🙂

      At this point Zizek is a full-blown radical idealist who is working to push its limits with an epistemological transcendentalism and a Schellingian ontology of the Real, etc. Pretty much a qualified dualist, rather than a monist: since even the things of his ontology are split and antagonistic. What’s surprising is the parallels between his transcendental materialism and the OOO gang: Harman, Bryant, Bogost, Morton… the difference is that he sticks with the split Subject, while OOO sticks with the split Object. Both agree on the Void of the Real. Or, Kant’s old phenomenon/noumenon distinction dressed up in a post-whatever? flat ontology… For Zizek the real Subject is not our ego/Self/soul but the self-reflecting nothingness (Void/Unconscious), with the self as project not essence, etc. Of course for you this is all metaphysical fiction-making that the neurosciences will do away with once they map thought in its extimate set of ratios within the neuronic seas… 🙂


      • As far as ‘slippery vocabulary’ goes, I plead guilty. I wear too many hats not to confuse them. The great thing about blogging such material is that you can generally count on people to call you out for definitions, which I always find to be an invaluable exercise.

        “Of course for you this is all metaphysical fiction making that the neurosciences will do away with once they map thought in its extimate set of ratios within the neuronic seas… :)”

        Well, I’m not an identity theorist by any stretch. There are no neural correlates for things like ‘experience,’ but there is a naturalistic explanation (couched in terms of cognitive ecologies). If I talk about the brain a lot, that’s because that’s where the bulk of the complexity is, but there’s no understanding cognition without ecology on my account. I think what Zizek is doing is exactly what he says he’s doing: ontologizing the explanatory gap. I just don’t see how this explains anything, or how it resolves any problem aside from providing yet another rationale for intentionalists to rally around.

        This is why I always go on so much about the need to have a plausible theory of meaning. If you have no clue what meaning is, then you can pretty much say anything you want about it. Intentionalism *needs* us to remain ignorant, either because, like Zizek, they see that ignorance as ontologically expressive of some occult fact of the cosmos, or simply as a ‘Wall at the End of Science,’ a bulwark against the wholesale delegitimization that befalls all traditional discourses following the naturalization of their domain. Their industry requires a theoretical free-for-all.

        I actually think theorists are right to insist that the explanatory gap has an important role to play in understanding what we are–and to this extent, they are way ahead of their analytic counterparts–but my point is always the same: if I can offer an empirically plausible (defeatable) account absent any metaphysical posits over and above those used in the biological sciences of the very domain they claim to be solving, then I pretty clearly have the better theory.

        The fact that the consequences of that theory are scary as all hell, if anything, supports its veracity. We are the most complicated things we know of in the universe. Why should we think our prescientific intuitions have any hope of somehow getting things right (as opposed to being useful in certain practical contexts)?


  2. “These entities—corporations—act to fulfill their missions without love or care for human beings.”

    Will the failure to define and adhere to a basic definition of inhumanity or cruelty toward other forms of life persist in our (or their) failure to develop a coherent definition of cruelty toward non-organic life? What would the word be: inmechanity?

    Putting the pieces together, applied/accumulated knowledge (science/technology) used to facilitate, enhance, define, and redefine the desire of desiring machines requires, presumably like human reproduction, desire to do so in the first place.

    Desire toward what: toward life, toward new forms of life, toward new forms of desiring machines, toward pure knowledge? Scientific exploration as a new form of accumulation that does not feed back into the living (organic or not) is, in McGilchristian terms, a left-brained feedback loop, a detail-oriented obsession without overall meaning. It is the curse of alienated thought.

    By its artificial nature, will the thought/mind of an artificial (created/non-organic) entity be alienated, separated from its own concept of desire as it formulates its concrete existence?

    I say artificial somewhat tongue-in-cheek, and agree with you, because human desire/mind is artificial in the sense that it is historically produced and conditioned. Individually and as part of a social collective, we are as “created” as an artificial entity, the difference being the wet-ware.

    I digress.

    An alternative to the annihilation hypothesis is the lack-of-interest hypothesis. Given the distances involved, even if the physics were possible, the desire for exploration of the cosmos may not be there. The consumption of finite resources for non-accumulative exploration, or exploration for pure knowledge that does not extend into living desires, may be too much to bear for most “intelligent” societies that wish to do more than merely survive, or to send some metallic legacy into the void of space.

    An AI, left to its own devices, would have to develop this desire over the much longer time horizon of its own life AND prioritize this desire over all of its other desires. Programming the AI with this desire and priority would negate its being an autonomous AI; it would be a slave to another’s desire (human or another AI’s).

    Brockman’s concept of a thinking machine with “pure satisfaction” is a limited, univalent machine, omnipotent but only to itself, desocialized and without consequence of being. It is a recurrence of magical, metaphysical thinking, a machinic God in search of a virtual truth.

