Crash Space: The Coming Age of Machinic Intelligence
We exchanged a flurry of texts. We weren’t idiots. We knew full well the gravity of what had happened. But we also knew we had nothing to fear, and very little to cover up.
—R. Scott Bakker, Crash Space
Anyone still believing that the “blunt tool” of mass surveillance is protecting us from terrorists should read the Washington Post’s two-year investigation of “Top Secret America.” The detailed series of articles suggested that the United States’ massive surveillance system could possibly make us more vulnerable to terrorism:
“Some 1,271 government organizations and 1,931 private companies work on programs related to counterterrorism, homeland security and intelligence in about 10,000 locations across the United States. Analysts who make sense of documents and conversations obtained by foreign and domestic spying share their judgment by publishing 50,000 intelligence reports each year— a volume so large that many are routinely ignored. In the Department of Defense, where more than two-thirds of the intelligence programs reside, only a handful of senior officials— called Super Users— have the ability to even know about all the department’s activities. “I’m not going to live long enough to be briefed on everything” was how one Super User put it. The other (Super User) recounted that for his initial briefing, he was escorted into a tiny, dark room, seated at a small table and told he couldn’t take notes. Program after program began flashing on a screen, he said, until he yelled “Stop!” in frustration. “I wasn’t remembering any of it,” he said.”
Billions of personal details about the general population, collected by computers, can overwhelm those officials looking for a particular suspect. As the New America Foundation report indicated, most terrorists are caught using “traditional investigative methods, such as the use of informants, tips from local communities, and targeted intelligence operations . . .”
In the coming years all human intelligence will become moot; AGI (Artificial General Intelligence) machinic systems, and the decisions that depend upon such data, will operate more “efficiently” through rule-based normative functional algorithms and decision matrices invented by the artificial minds themselves. All surveillance and global security systems will be in the hands of the AGIs, since humans such as the Super User above will not have the processing power to absorb, much less filter, collate, analyze, and decide upon, the massive Big Data collected in great data centers like the one being built in Utah.
We’ve entered that strange transitional age in which we humans are obsolescing our own intelligence in favor of machinic gods who will have no sense of our cultural or social value systems, only the algorithmic targeting capabilities of seek-and-destroy policing of the animal called man. We are building the cages of the future, and enlisting a new breed of policing agents on the frontiers of our brave new worlds of machinic being. Through our fear of terror, we are producing greater terrors. From economics to security, deep-learning algorithms and other plasticity-based, self-transforming feedback systems built on endless rhizomatic loops will surpass our capabilities and move beyond our ability to control or constrain them. What then?
Stephen Hawking fears it: “It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Tesla CEO and famous technology innovator Elon Musk has repeatedly warned about AI threats. In June, he said on CNBC that he had invested in AI research because “I like to just keep an eye on what’s going on with artificial intelligence. I think there is a potential dangerous outcome there.” He went on to invoke The Terminator. In August, he tweeted that “We need to be super careful with AI. Potentially more dangerous than nukes.” And at a recent MIT symposium, Musk dubbed AI an “existential threat” to the human race and a “demon” that foolish scientists and technologists are “summoning.” Musk likened the idea of control over such a force to the delusions of “guy[s] with a pentagram and holy water” who are sure they can control a supernatural force—until it devours them. As Musk himself suggests elsewhere in his remarks, the solution to the problem lies in sober and considered collaboration between scientists and policymakers. So much for Enlightenment? But these are the extremes; other voices say otherwise, and the process of building such systems seems inevitable, with so many nations and corporations investing heavily in every aspect of robotics, war machines, and AGI-related systems for profit or sex or power.
Mass surveillance programs are run by machines or by persons trained to act like machines. Targeted intelligence operations are run by experienced security agents who are allowed to use the knowledge gained through years of training. In the future our urban zones will become ever more integrated into smart infrastructures, where the electronic eyes, ears, noses, and prosthetic appendages of the sensory arrays once part of the human body will be externalized into the very objects of common everyday work around us. The systems that shape and secure command and control within the urban workplace will be part of a vast integrated network of artificially intelligent centers running everything from our basic needs to the most criminal policing enterprise the world has ever seen. It will be invisible, part of the background, so virtualized that we will not even be aware that we’ve become part of a Planetary Prison system that we ourselves built and handed over to the great Artificial General Intelligence systems to come. To call this paranoiac is to enter an inhuman territory of mind and thought; that term was only ever a simplified interdiction applied to the human, not the machinic.
Watching the recent craze of the mobile game Pokémon Go, we see that we’ve entered the moment when the virtual is seeping into our world, when men, women, and children stare into the screens of their handheld systems as if they were more real than the world around them. Even criminals have jumped on the bandwagon: armed robbers used Pokémon Go to lure victims to an isolated trap in Missouri, police reported on Sunday. Pokémon Go warns players to stay aware of their surroundings during their virtual treasure hunt, but only a few days after its release it had already led people into a string of bizarre incidents. People have ended up in hospitals after chasing nonexistent animals into hazardous spots, and schools, a state agency, and Australian police have warned people not to break the law or endanger themselves while “Pokemoning”. The game has also led wanderers to at least one home misidentified as a church, a venue the app considers a public space.
We are so desperate to fill the gap of our meaningless world with meaning that the virtual worlds of our electronic media are beginning to supervene upon reality and control our very bodies and behaviors. We’ve allowed the virtual to become our reality and left the old worlds of natural existence behind, and yet those worlds impinge upon our false realms in dangerous and untold ways. Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Bostrom believes that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?” As Paul Ford recently stated in the MIT Technology Review: “No one is suggesting that anything like superintelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple’s Siri to Google’s driverless cars, also reveal the technology’s severe limitations; both can be thrown off by situations that they haven’t encountered before. Artificial neural networks can learn for themselves to recognize cats in photos. But they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child.” (“Our Fear of Artificial Intelligence”)
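Ford’s point about data hunger can be made concrete even at the smallest scale: the simplest trainable unit, a single perceptron, learns only by being shown its examples again and again, nudging its weights after each mistake. A minimal sketch in Python (the function names and the toy AND-gate “dataset” are illustrative inventions, not from any source quoted here):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single-layer perceptron by adjusting weights after each error."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Toy "dataset": the AND function, shown over and over across many epochs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

Even this four-example “world” takes repeated passes to master; scale the input from two bits to a photograph of a cat and the hundreds of thousands of examples Ford mentions follow naturally.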
Others, like Rodney Brooks, tell us this is hogwash, that we have nothing to fear. Extrapolating from the state of AI today to suggest that superintelligence is looming is “comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner,” Brooks wrote recently on Edge.org. “Malevolent AI” is nothing to worry about, he says, for a few hundred years at least. Yet others, like Stuart J. Russell, a professor of computer science at the University of California, Berkeley, disagree with Brooks, saying: “There are a lot of supposedly smart public intellectuals who just haven’t a clue.” He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore’s Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.
As Ford concludes, we have no technology that is remotely close to superintelligence. Then again, many of the largest corporations in the world are deeply invested in making their computers more intelligent; a true AI would give any one of these companies an unbelievable advantage. They should also be attuned to its potential downsides and be figuring out how to avoid them. This somewhat more nuanced suggestion—without any claims of a looming AI-mageddon—is the basis of an open letter on the website of the Future of Life Institute, the group that received Musk’s donation. Rather than warning of existential disaster, the letter calls for more research into reaping the benefits of AI “while avoiding potential pitfalls.”
Agency: Human or Artificial?
It is not that reality entered our image: the image entered and shattered our reality (i.e. the symbolic coordinates which determine what we experience as reality). What this means is that the dialectic of semblance and Real cannot be reduced to the rather elementary fact that the virtualization of our daily lives, the experience that we are more and more living in an artificially constructed universe, gives rise to the irresistible urge to ‘return to the Real’, to regain the firm ground in some ‘real reality.’ THE REAL WHICH RETURNS HAS THE STATUS OF A(NOTHER) SEMBLANCE: precisely because it is real, i.e. on account of its traumatic/excessive character, we are unable to integrate it into (what we experience as) our reality, and are therefore compelled to experience it as a nightmarish apparition.
—Slavoj Žižek. Disparities
This sense of a loss of reality, and the nightmare quality of our lives in this weird world of the artificial, seems to pervade every aspect of our socio-cultural existence. Our politics has turned south, gone under into a nightmare zone of strangeness across the First World. People who have sensed this nightmare surrounding them have been desperate to return to the old ways of our ancestral realms in any form or fashion. Ergo, traditionalist values and pundits on the Right of the spectrum have arisen out of this vacuum in the lives of people dwelling in the artificial worlds of the modern urban megacities, where every form of existence has become plastic and plasticity as a thought form has become all too real. Sex and race pervade our politics now because the barriers of the fantasy worlds of the old mythologies of monotheism no longer hold, no longer feed people what they need to give their lives meaning. We’ve been demythologizing and leaving these ancient systems behind for a few hundred years. Yet in small pockets they hold on fiercely and adamantly in certain traditionalist camps.
As Catherine Malabou explains in Plasticity at the Dusk of Writing, the concept of plasticity, whose scope and stakes are firmly inscribed in those of our era, has overtaken the schemas of text and the trace. Plasticity “takes over” and “becomes the resistance of difference to its textual reduction.” In The New Wounded: From Neurosis to Brain Damage, Malabou expands her reflection to cerebral pathologies, particularly Alzheimer’s disease. She stages a dialogue among philosophy, psychoanalysis, and contemporary neurology, aiming to demonstrate how cerebral organization presides over a libidinal economy in current psychopathologies. She also proposes a new theory of trauma and defends the hypothesis of destructive plasticity. In her latest book, Self and Emotional Life: Philosophy, Psychoanalysis, and Neuroscience, written with Adrian Johnston, Malabou continues her exquisite crossing of disciplines, this time in order to explore the concept of wonder.
To set aside the jargon of postmodern shibboleths: neuroplasticity is a term that refers to the brain’s ability to change and adapt as a result of experience. When people say that the brain possesses plasticity, they are not suggesting that the brain is similar to plastic. Neuro refers to neurons, the nerve cells that are the building blocks of the brain and nervous system, and plasticity refers to the brain’s malleability. This neuroplasticity has both a functional and a structural aspect: the functional allows other parts of the brain to take over the roles of diseased or traumatized areas, while the structural refers to the brain’s ability to actually change its physical structure as a result of learning.
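The “fire together, wire together” intuition behind such plasticity can be sketched as a toy Hebbian learning rule. A minimal sketch in Python, assuming arbitrary illustrative values for the learning rate and decay (this is a caricature for intuition, not a model from the neuroscience literature):

```python
def hebbian_update(weight, pre, post, lr=0.05, decay=0.01):
    """Hebb's rule with passive decay: a synapse strengthens when the
    neurons on both sides are active together, and slowly fades otherwise."""
    return weight + lr * pre * post - decay * weight

# Correlated firing: the connection grows stronger over repeated exposure...
w = 0.1
for _ in range(50):
    w = hebbian_update(w, pre=1.0, post=1.0)

# ...while an unused pairing decays toward zero (pruning).
v = 0.1
for _ in range(50):
    v = hebbian_update(v, pre=1.0, post=0.0)
```

The same loop, run over experience rather than a counter, is the crude computational analogue of the functional reorganization described above: pathways that keep being used are reinforced, while idle ones wither.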
Our notions of agency have over the years changed, and the notions of Subject and Self have come under great scrutiny in philosophy and neurosciences. N. Katherine Hayles once suggested that if on the one hand humans are like machines, whether figured as cellular automata or Turing machines, then agency cannot be securely located in the conscious mind. If on the other hand machines are like biological organisms, then they must possess the effects of agency even though they are not conscious. In these reconfigurations, desire and language, both intimately connected with agency, are understood in new ways. Acting as a free-floating agent, desire is nevertheless anchored in mechanistic operations, a suggestion Guattari makes in “Machinic Heterogenesis.” Language, emerging from the operations of the unconscious figured as a Turing machine, creates expressions of desire that in their origin are always already interpenetrated by the mechanistic, no matter how human they seem. Finally, if desire and the agency springing from it are at bottom nothing more than performance of binary code, then computers can have agency fully as authentic as humans. Through these reconfigurations, Deleuze, Guattari, and Lacan use automata to challenge human agency and in the process represent automata as agents.1
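Hayles’s figure of the cellular automaton can be made concrete. In an elementary cellular automaton, each cell obeys nothing but a bitwise lookup on its immediate neighborhood, yet the global pattern that unfolds was never written down anywhere, which is exactly the ambiguity of machinic agency at issue. A minimal sketch in Python (the rule number and ring size are arbitrary illustrative choices; Rule 110 is the classic example of complex behavior from trivial local rules):

```python
def step(cells, rule=110):
    """One generation of an elementary cellular automaton: each cell's next
    state is the bit of `rule` indexed by its (left, self, right) neighborhood."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# A single live cell on a ring of 11 cells: purely local, deterministic
# rules, yet the unfolding global pattern was never "programmed" in.
cells = [0] * 5 + [1] + [0] * 5
history = [cells]
for _ in range(5):
    cells = step(cells)
    history.append(cells)
```

Nothing in `step` mentions goals, desire, or a self; whatever agency the pattern seems to display is an effect of mechanism, which is precisely Hayles’s point.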
If our binary and/or algorithmic systems can already be thought to have agency, what of the more advanced AGIs that, even in their primitive beginnings during our experimental age, are already surpassing human intelligence in narrow domains? Many scoff at such talk of surpassing the human, dismissing it as wishful thinking, as imposing our anthropomorphic thought forms upon the machinic world of things. But is this so? Are we not actually following the trajectory of two thousand years of technics and technology that has always gone hand in hand with human culture and civilization? Isn’t there always a sense of a two-way interactive oscillation between human agency and its creations? Hasn’t this dialectical interplay between machine and human always already been a part of the human instrumentalism that would eventually be termed science? Our elite pundits have tried to spin a story that the Enlightenment was an aberration, that instrumental reason was no more than a culturally bound entity, and that it too would be sloughed off for something else. What is this something else if not the AGIs we are now inventing out of necessity, at the limit of our own insurmountable finitude? Are we not building such superintelligences because our own abilities as creatures of finitude and limitation cannot surpass certain barriers due to evolutionary bindings? Because we have created such a desperate need for decomplexifying the data of our world in all its multifarious complexity?
The notion of agency and Subject developed by Deleuze, Guattari, and Lacan is one in which consciousness, far from being the seat of agency, is left to speculate on why she acts as she does. She is increasingly aware that the origin of agency lies beyond the reach of consciousness, enacted by a computational program that is ultimately controlled by the external agent that has programmed the code to operate as it does. Even at this deep level the ambiguity of agency continues, for the program is perceived to act both as an agent on its own behalf and as a surrogate for the will of the human. The ambiguity is repeated within consciousness, where she perceives herself to be exercising agency in the margins, as it were, the grey areas where the objectives of code might be implemented in ambiguous ways. In these complex reconfigurations of agency, the significance of envisioning the unconscious as a program rather than as a dark mirror of consciousness can scarcely be overstated, for it locates the hidden springs of action in the brute machinic operations of code. In this view, such visions of the unconscious as Freud’s repressed Oedipal conflicts or Jung’s collective archetypes seem hopelessly anthropomorphic, for they populate the unconscious with ideas comfortingly familiar to consciousness rather than the much more alien operations of machinic code. (43)
Blindness and Insight: Beyond the Hum of Machines?
Antonio Damasio argues that body and mind are inextricably linked through multiple recursive feedback loops mediated by neurotransmitters, systems that have no physical analogues in computers. Damasio makes the point that these messages also provide content for the mind, especially emotions and feelings: “relative to the brain, the body provides more than mere support and modulation: it provides a basic topic for brain representations” (xvii). As Hayles tells us:
The central question … is no longer how we as rational creatures should act in full possession of free will and untrammeled agency. Rather, the issue is how consciousness evolves from and interacts with the underlying programs that operate analogously to the operations of code. Whether conceived as literal mechanism or instructive analogy, coding technology thus becomes central to understanding the human condition. (44)
That great atheist dialectical materialist, Slavoj Žižek, humors us in his recent work Disparities, saying that “Einstein was right with his famous claim ‘God doesn’t cheat’ – what he forgot to add is that god himself can be cheated. Insofar as the materialist thesis is that ‘God is unconscious’ (God doesn’t know), quantum physics effectively is materialist: there are microprocesses (quantum oscillations) which are not registered by the God-system. And insofar as God is one of the names of the big Other, we can see in what sense one cannot simply get rid of god (big Other) and develop an ontology without big Other: god is an illusion, but a necessary one.”2
Can we say that this necessary illusion is central to our quest to build the God Mind in our AGIs? Are we not in fact and deed actually trying to create a god? Isn’t this truly at the heart of the holy grail quest of artificial intelligence: to become machinic, to enter into the transitional stage of superintelligence, to make our own pact with the impossible? For Žižek we have never been human; we’ve always been in transitional movement. Humans are in themselves absolutely nothing, without any fixed agency or stable self; nothing pre-exists our being in the world, and the notion of the Subject is of a movement toward something else. For Žižek we live in between the Subject, which is nothing in itself, and the world, to which we do not have direct access. There is a crack in the world between us and reality, and all of our grand tales, our visions, our fantasies are ways in which we seek to bridge the gap between ourselves and reality. Yet time after time our bridges built out of mathematics or language cannot bridge the gap, so we build even more fantastic schemes:
This is why, from the strict Freudian standpoint, fantasy is on the side of reality, it sustains the subject’s ‘sense of reality’: when the fantasmatic frame disintegrates, the subject undergoes a ‘loss of reality’ and starts to perceive reality as an ‘irreal’ nightmarish universe with no firm ontological foundation; this nightmarish universe – the Lacanian Real – is not ‘pure fantasy’ but, on the contrary, that which remains of reality after reality is deprived of its support in fantasy. (Kindle Locations 285-288)
So once our human illusions, our fantasies are stripped from the world, what is left is the bottomless pit of nightmare —the Universe of machinic life. The endless sea of process and chaos churning on and on and on…
Reality is impenetrable not just because it transcends the constrained horizon of finite human being but also because we humans are unable to control and predict the effects of our own activity on our natural environs. Therein resides the paradox of the Anthropocene: humanity became aware of its self-limitation as a species precisely when it became so strong that it influenced the balance of the entire life on earth. It was able to dream of being a Subject until its influence on nature (earth) was marginal, that is, against the background of stable nature. The paradox is thus that the more the reproduction of nature is human mediated, the more humanity becomes a ‘decentred’ agent unable to regulate the process of its exchange with nonhuman nature. This is why it is not enough to insist on the nontransparency of objects, on how objects have a hidden core withdrawn from human reach: what is withdrawn is not just the hidden side of objects but above all the true dimension of the subject’s activity. The true excess is not the excess of objectivity which eludes the subject’s grasp but the excess of the subject itself, that is to say, what eludes the subject is the ‘blind spot’, the point at which it is itself inscribed into reality.3
My friend R. Scott Bakker calls this ‘blind spot’ of the Subject our inability to turn back upon ourselves and view the very processes that create consciousness —the Brain. We have no direct path toward reality, nor upon our own processes. We are blind to both reality and ourselves. Bakker defines a crash space as “a problem solving domain where our tools seem to fit the description, but cannot seem to get the job done” (p. 203). Bakker argues, plausibly, that the cognitive and emotional structures that give meaning to our lives and constrain us ethically can be expected to work only in a limited range of environments — roughly, environments similar in their basic structure to those in our evolutionary and cultural history. Break far enough away, and our ancestrally familiar approaches will cease to function effectively. As Bakker reminds us:
Herein lies the ecological rub. The reliability of our heuristic cues utterly depends on the stability of the systems involved. Anyone who has witnessed psychotic episodes has firsthand experience of consequences of finding themselves with no reliable connection to the hidden systems involved. Any time our heuristic systems are miscued, we very quickly find ourselves in ‘crash space,’ a problem solving domain where our tools seem to fit the description, but cannot seem to get the job done. (21)
We are living in such a domain now. Over a few hundred years we have moved away from our ancient heritage of hunter-gatherers and agriculturalists and emerged into a new realm, both artificial and outside the confines of the natural environments that were our base and support for millennia. Our philosophies, religions, cultural forms, our mythologies, and even our instrumental reasoning powers – both cunning and rational – are no longer bound to the natural earth and its environs, but have become unmoored within realms unforeseeable by our ancient systems of constraint and reason: our modern civilization. We’ve entered the Crash Space of Modernity in transition, and the fantasies that partially filled the gap of meaning have fallen into fragments and disarray across the planet. Our modern life in this artificial world of urban cities, mobile devices, electronic virtual realities, and the like has overtaken our ancient ties to the jungles and swamps of our distant ancestry. Our minds have become unhinged from the natural environments, and have yet to make new ties to the urban zones of our future lives in artificial worlds.
And now we’re set to begin engineering our brains in earnest. Engineering environments has the effect of transforming the ancestral context of our cognitive capacities, changing the structure of the problems to be solved such that we gradually accumulate local crash spaces, domains where our intuitions have become maladaptive. Everything from irrational fears to the ‘modern malaise’ comes to mind here. Engineering ourselves, on the other hand, has the effect of transforming our relationship to all contexts, in ways large or small, simultaneously. It very well could be the case that something as apparently innocuous as the mass ability to wipe painful memories will precipitate our destruction. Who knows? The only thing we can say in advance is that it will be globally disruptive somehow, as will every other ‘improvement’ that finds its way to market. (Bakker, 22)
I remember back in the seventies at university my English teacher (we still had an English Department back then, long before “the humanities”) once said that science fiction was the mythology of our Age of Reason and Modernity. I still believe that is true. In the thousands of fictional scenarios of science fiction we are inventing a path forward, creating stories and tales that seek to understand and immerse us not in the past, not in the character studies of novels, but in the tools necessary to help us move steadily, calmly, and with reasoning awareness into the most impossible region of all —the Future.
As we move forward we realize we are not alone; around us is a great host of stars, planets, galaxies unbound. The only thing stopping us from change, from developing viable paths in culture, society, politics, and life, is our own defective and maladaptive minds, blinded by our immersion in processes we have no control over and yet which control us in ways beyond telling. We live by fantasy; we always have. We create meaning not by blindly stripping reality of our minds, but by weaving meaningful fantasies based on our awakening to the new and unbidden. Only when we allow our fantasies to rule over us, to suborn and enslave us, as in the ancient religious and socio-cultural systems of power and knowledge that wove us into their larger frameworks like so many insectoids to do the bidding of the few rather than the many, do we begin to lose sight of the power of mind and its place in the universe at large. As Bakker ominously surmises, “Human cognition is about to be tested by an unparalleled age of ‘habitat destruction.’ The more we change ourselves, the more we change the nature of the job, the less reliable our ancestral tools become, the deeper we wade into crash space.” (22)
1. Swirski, Peter. The Art and Science of Stanislaw Lem (pp. 28-29). Ingram Distribution. Kindle Edition.
2. Slavoj Žižek. Disparities (Kindle Locations 1086-1090). Bloomsbury Publishing. Kindle Edition.
3. Ibid. (Kindle Locations 721-729).