Every one of these people was convinced that in the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes.1
Reading this new work by James Barrat got me thinking. He seems to misunderstand and fear the very scientists he is questioning about AI. Little does he understand that these scientists have, for the most part, left his folk-psychology terrors far behind, that they live the mechanist/eliminativist paradigm with a vengeance. For these scientists we never were human to begin with, and all the ancient religious and philosophical bric-a-brac of folk-psychology is just another illusory stance which our secular scientists will one day very soon replace with something else, something much like themselves: machines with brains. These new machine intelligences will not be so different from our biomechanical brains as such; they will merely be made of other materials, differing only in kind. Our biomechanical brains and their possibly quantum brains may in fact be closer in resemblance than our fears and folk-psychologies have yet fathomed.
James Barrat, like many humans, is still caught up in the older folk-psychology, wary of this maneuver of the scientists in their ever expanding dominion of knowledge and power. As he sees it, if it’s inevitable that machines will make our decisions, then when will the machines get this power, and will they get it with our compliance? How will they gain control, and how quickly? (ibid., intro) The problem with these questions is that they are couched in the language of an outmoded humanism. He automatically assumes that machines differ from us in some essentialist way. He also speaks of power and control as if these supposed inhuman alien machines will suddenly rise up in our midst like any good science fiction horror show and take over the world. The fallacy in this is obvious: we are the machines that have already done that job just fine; we don’t need to worry about our progeny doing it again. In fact, they will more than likely fulfill our direst scenarios in self-fulfilling prophecies, not in spite of but because we have invented them to do just that. The Dream of the Machine is our own secret dream; we are afraid not of the AIs but of the truth of our own nature, afraid to accept that we, too, may already be the very thing we fear most: machines.
My friend R. Scott Bakker would probably say: we have nothing to fear but fear itself. Then he would say: “Yes, this is one of those actual nightmares I’ve been in touch with for a long while now.” There is still that part of Bakker that harbors the older folk-psychology beliefs he otherwise so valiantly despises in his eliminativist naturalism. For him everything is natural all the way down, and that would include these strange alterities we label AIs. Now, for me, the jury is still out, but my guess is that, yes, the scientists, backed by the vast agglomeration of investment from governments, corporations, and the like (the great late-capitalist hive of networks supporting the practical sciences), will at some point in the near future produce something resembling a simulacrum of our present organic intelligence in some other form. What form that may take is still open to debate.
Even Vernor Vinge, who wrote the first tract on this in his now classic The Coming Technological Singularity, once stated that “if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the ‘threat’ and be in deadly fear of it, progress toward the goal would continue.”2 For Vinge the process was inevitable because we cannot prevent “the Singularity, that its coming is an inevitable consequence of the humans’ natural competitiveness and the possibilities inherent in technology. And yet … we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others.” (ibid.)
But what should we do? Should we just pretend this is all a strange far-out surmise on the part of scientists, that surely this is not a possibility for the near future, and go hide our heads in the sand? Or should we do something else? David Roden of enemyindustry has been writing about this and other aspects of the posthuman dilemma for a while now. In his essay The Disconnection Thesis he tells us that “Vinge’s idea of a technologically led intelligence explosion is philosophically important because it requires us to consider the prospect of a posthuman condition succeeding the human one.”3 For David the only way to evaluate the posthuman condition would be to witness the emergence of posthumans. With this he emphasizes that what we need is an anti-essentialist model for understanding this new descent into the posthuman matrix. This concept of descent he describes in a “wide” sense, insofar as qualifying entities might include our biological descendants or beings resulting from purely technical mediators (e.g., artificial intelligences, synthetic life-forms, or uploaded minds) (Kindle Locations 7391-7393).
Yet, reading his work, I wonder if he too is still caught up in the old outmoded folk-psychology belief that humans are distinct from machines, rather than seeing both as part of an eliminativist naturalism that harbors only a difference in kind. It’s as if these practitioners are almost afraid to leave the old box of philosophical presuppositions behind and forge ahead, inventing new tools and frameworks onto which they might latch their descriptive theories. Here is a sentence in which David stipulates the difference between human and posthuman, proposing that the “human-posthuman difference be understood as a concrete disconnection [my emphasis] between individuals rather than as an abstract relation between essences or kinds. This anti-essentialist model will allow us to specify the circumstances under which accounting would be possible” (Kindle Locations 7397-7399).
But if we have never been human in the old folk-psychological sense of that term, then isn’t all this essentialist/anti-essentialist rhetoric just begging the question? What if this dichotomy of the human/posthuman is just another false supposition? What if these terms are no longer useful? What if we were never human to begin with? What then? If the eliminativist naturalists are correct, then these questions should just vanish before the actual truth of science itself. Even Roden is moving in this direction when he tells us that in a future article he will “consider the possibility that shared ‘non-symbolic workspaces’— which support a very rich but non-linguistic form of thinking— might render human natural language unnecessary and thus eliminate the cultural preconditions for our capacity to frame mental states with contents expressible as declarative sentences” (Kindle Locations 7418-7421). What is this but an acceptance of the eliminativist program? Maybe this is just it: the audience David is trying to convince is not the scientific community, who already understand very well what is going on, but those who are still trapped within the older folk-psychology, who believe in the myth of mental states and the whole tradition of an outworn intentionality that no longer holds water for the very naturalists that James Barrat fears.
As David unveils his tale he opens a window on the past, saying, “there are grounds for holding that the process of becoming human (hominization) has been mediated by human cultural and technological activity” (Kindle Locations 7448-7449). Isn’t this a key? Maybe the truth is that culture is itself a form of technology? Culture as a machine for structuring hominids according to some natural process that we are only now barely beginning to understand? In fact Roden goes on to describe assemblages “in which humans are coupled with other active components: for example, languages, legal codes, cities, and computer mediated information networks” (Kindle Locations 7458-7461). But if R. Scott Bakker is right, then even “though we are mechanically embedded as a component of our environments, outside of certain brute interactions, information regarding this systematic causal interrelation is unavailable for cognition”.4 For Scott this whole human/posthuman dichotomy would probably be seen in terms of neglect. As he stated in a recent article, which ties in nicely with David’s sense of social assemblages as technological machines, the brain:

being the product of an environment renders cognition systematically insensitive to various dimensions of that environment. All of us accordingly suffer from what might be called medial neglect. The first-person perspectival experience that you seem to be enjoying this very moment is itself a ‘product’ of medial neglect. At no point do the causal complexities bound to any fraction of conscious experience arise as such in conscious experience. As a matter of brute empirical fact, you are a component system nested within an assemblage of superordinate systems, and yet, when you reflect ‘you’ seem to stand opposite the ‘world,’ to be a hanging relation, a living dichotomy, rather than the causal system that you are.

Medial neglect is this blindness, the metacognitive insensitivity to our matter of fact componency, the fact that the neurofunctionality of experience nowhere appears in experience. In a strange sense, it simply is the ‘transparency of experience,’ an expression of the brain’s utter inability to cognize itself the way it cognizes its natural environments.4
In an almost asymmetrical movement Dr. Roden tells us that “biological humans are currently ‘obligatory’ components of modern technical assemblages. Technical systems like air-carrier groups, cities or financial markets depend on us for their operation and maintenance much as an animal depends on the continued existence of its vital organs. Technological systems are thus intimately coupled with biology and have been over successive technological revolutions” (Kindle Locations 7461-7464). Yet, for Roden the emergence of posthumans out of this technogenetic machine of networks and assemblages will ultimately be seen as a “rupture” in that very system. But I wonder if this is true. What if instead it is just one more natural outcome of the possibilities of science as seen within the eliminativist naturalist perspective? Not an oddity, but part of a process that was started eons ago within our own evolutionary heritage?
There is a moment in David’s essay when he comes close to actually affirming the eliminativist naturalist position, saying:
The most plausible argument for abandoning anthropological essentialism is naturalistic: essential properties seem to play no role in our best scientific explanations of how the world acquired biological, technical and social structures and entities. At this level, form is not imposed on matter from “above” but emerges via generative mechanisms that depend on the amplification or inhibition of differences between particular entities (for example, natural selection among biological species or competitive learning algorithms in cortical maps). If this picture holds generally, then essentialism provides a misleading picture of reality. (Kindle Locations 7520-7524)
Not only misleading but erroneous, according to the eliminativist naturalist perspective of many cognitive scientists, for whom the displacement of folk-psychology is long overdue.
Now I’ve presented this as a neutral interlocutor, not as either an affirmer or denigrator of these views. I just don’t have enough information as of yet to truly make such a judgment call. So take the above with a grain of salt from one who is working within an eliminativist naturalist perspective that he himself still finds strangely familiar and familiarly strange.
I look forward to Dr. David Roden’s new book Posthuman Life: Philosophy at the Edge of the Human, coming out next May (on Amazon at least); it should shed further light on this subject.
1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (Kindle Locations 60-62). St. Martin’s Press. Kindle Edition.
2. Vinge, Vernor (2010-06-07). The Coming Technological Singularity – New Century Edition with DirectLink Technology (Kindle Locations 100-101). 99 Cent Books & New Century Books. Kindle Edition.
3. Eden, Amnon H.; Moor, James H.; Søraker, Johnny H.; Steinhart, Eric, eds. (2013-04-03). Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection) (Kindle Location 7307). Springer Berlin Heidelberg. Kindle Edition.
4. Bakker, R. Scott. Cognition Obscura (Reprise).