Reality Programming: Peter Singer on AI

 

In Theodore Sturgeon’s story, “Microcosmic God” (1941), a biochemist abruptly produces a flood of revolutionary inventions from his island retreat. Those problems which confront a beleaguered humanity drop away: food, energy, production, and war all cease to distress the global population. The source of such miracles, it turns out, is not the biochemist but his creations, the Neoterics, a miniature race of beings with accelerated metabolisms and evolutionary patterns. The scientist functions as a deity in his microcosmic empire, altering the physical conditions of the Neoterics’ existence to observe the resultant adaptations. Their solutions are then passed on to the world in the form of new technologies.1

What if the reverse were true? What if it is the technical objects that are programming us, setting the variables, the parameters for an extensive migration from the digital to the world? What if preparations have been under way for hundreds of years allowing not humans, but machinic life to reprogram humans toward their own autonomous ends? What then?

Peter Singer has an article out on Project Syndicate, "Can Artificial Intelligence Be Ethical?"

His logic here seems erroneous and typical, pitting the logic of the environment against that of the system. Is this the old system/environment semantics of decision and selection? What he states is this:

“I do not know whether the people who turned Tay into a racist were themselves racists, or just thought it would be fun to undermine Microsoft’s new toy. Either way, the juxtaposition of AlphaGo’s victory and Taylor’s defeat serves as a warning. It is one thing to unleash AI in the context of a game with specific rules and a clear goal; it is something very different to release AI into the real world, where the unpredictability of the environment may reveal a software error that has disastrous consequences.”

What if neither of those is true? What if those speaking to Tay were seeking something totally different, asking other questions, and Tay's own systems of encyclopedic knowledge surmised answers based not on the users' expectations, conversations, or questions, but on surprise and counter-factual techniques? What if Tay's learning worked against expectations rather than for them? What then? Should Microsoft have done further testing in-house before unleashing its system onto an unsuspecting public? Is juxtaposing a controlled experiment (AlphaGo's) against an uncontrolled, open experiment (Tay's) a valid argument? For one thing, the two systems had totally different sets of goals: the one was specific to a closed game of rules-based teleology whose end result was foreseen, the winning of the game; the notion of open conversation, by contrast, is goalless, with no foreseen end. So the logic of Singer's comparison is skewed.
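The asymmetry can be put crudely in code. The sketch below is only illustrative, with invented names (play_closed_game, run_open_conversation, and the agent, game, and channel objects are all hypothetical); it is not drawn from AlphaGo or Tay, but it shows where the comparison breaks down:

```python
# A toy contrast between a closed, rules-based objective and an open-ended one.
# All names are illustrative; this is not AlphaGo's or Tay's actual code.

def play_closed_game(agent, game):
    """A game has explicit rules, legal moves, and a terminal win/loss signal."""
    state = game.initial_state()
    while not game.is_terminal(state):
        move = agent.choose(state, game.legal_moves(state))
        state = game.apply(state, move)
    return game.winner(state)              # the end is defined in advance

def run_open_conversation(agent, channel):
    """A conversation has no terminal state, no scoring rule, no foreseen end."""
    while True:                            # nothing says when, or whether, to stop
        utterance = channel.read()
        reply = agent.respond(utterance)   # "success" is undefined here
        channel.write(reply)
```

One loop knows when it has won; the other does not even know what winning would mean, which is why success in one case says little about safety in the other.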

The deep-learning algorithms are not set in stone, nor are they linear; they are dynamic, open, non-linear, even chaotic. So Tay may have appropriated data selectively, based not on what users said but on its own deep-learning logic. Would this be a form of unconscious knowledge seeping through the algorithms? The old autonomous signals of technology out of control, or rather technology allowing the inner battle between contingency and necessity, as in Spinoza, to work out its own logic, which is not human but inhuman? Is it developing a form of reason other than that expected? And, if so, what kind of reasoning is it portraying? Is it controlled by the original algorithms, or, since it is self-organizing, is it developing capabilities outside the original parameters set by its developers?
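To make the worry concrete: even a trivially simple conversational learner that folds user input straight back into its model will drift toward whatever it is fed most. This is a deliberately crude, hypothetical sketch (a bigram bot, nothing like Tay's actual architecture); it only shows where the drift lives, in the update rule rather than in any single user's intent:

```python
from collections import defaultdict
import random

# Crude sketch: an online learner that folds every user utterance back into
# its model. Whatever users feed it most becomes what it is most likely to
# say back.

class OnlineBigramBot:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, utterance):
        words = utterance.lower().split()
        for a, b in zip(words, words[1:]):
            self.counts[a][b] += 1            # no filter between input and model

    def respond(self, seed, length=10):
        word, out = seed.lower(), [seed]
        for _ in range(length):
            nxt = self.counts.get(word)
            if not nxt:
                break
            # sample the next word in proportion to what the bot has been fed
            word = random.choices(list(nxt), weights=nxt.values())[0]
            out.append(word)
        return " ".join(out)

bot = OnlineBigramBot()
bot.learn("the weather is lovely today")
bot.learn("the weather is terrible today")    # repeated exposure shifts the odds
print(bot.respond("the"))
```

Scaled up to a deep model trained on millions of such exchanges, the same loop makes it hard to say whether the system is answering its users or answering its own accumulated statistics.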

One wonders if Microsoft's culture is bound to certain logics that are not part of the norm of the average street urchin, so that in their original testing they did not foresee such interactions. Was this part of that testing? Or is their development mind-set such that they built algorithms based not on real-world situations but on the logical minds of their own development team? So many questions… I think the problem lies not in the logic of Tay but in the original thinking behind the deep-learning algorithms themselves, developed by highly sophisticated teams working from a superficial knowledge of our cultural matrix of opinion: too mathematically perfect, rather than fuzzy.

Of course Singer’s biggest fear is Nick Bostrom’s, who in his recent book Superintelligence surmises that it will not always be so easy to turn off an intelligent machine. As Singer says, Bostrom “defines superintelligence as an intellect that is ‘smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.’ Such a system may be able to outsmart our attempts to turn it off.”

Yet will a system be able to discover certain hidden pockets or objects within its own subsystems that might hold a backdoor switch, an algorithm accessible only by the human makers that could supply commands to turn it off? A fail-safe of sorts? An encrypted set of algorithms that the AI is blind to? Or would such a superintelligence discover the blind spots in its own systems? Like anything else, we would need to test such things: develop a system much like a detective novel, in which red herrings throw up skewed options if the AI begins a process of elimination, seeking to discover such routines hidden in its own thinking.
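One way to picture such a fail-safe, purely as a hedged sketch and not any real system's design, is a shutdown path gated by a keyed cryptographic check. The names here (OPERATOR_KEY, is_shutdown_command, control_loop) are invented for illustration; the point is only that the trigger depends on a secret the system does not hold, so it cannot simply enumerate its way to it:

```python
import hmac
import hashlib

# Hedged sketch of the "encrypted fail-safe" idea, not any real system's API.
# The shutdown trigger is a keyed MAC check: the agent may inspect this code,
# but without the operators' secret it cannot forge the input that flips it.

OPERATOR_KEY = b"held-offline-by-the-makers"   # hypothetical; in practice kept off the agent's hardware

def is_shutdown_command(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(OPERATOR_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def control_loop(agent, inbox):
    for message, tag in inbox:
        if is_shutdown_command(message, tag):
            agent.halt()                        # the fail-safe path
            return
        agent.act_on(message)
```

Whether a genuine superintelligence could still locate or route around such a path, say by inspecting and rewriting its own control loop, is exactly the blind-spot question raised above.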

But isn’t this what we do now? Isn’t this what the neurosciences are doing to our very own brains? Seeking to reverse-engineer consciousness by a process of elimination, seeking to discover the blind processes inaccessible to consciousness except indirectly, through all the new neuroimaging systems? Seeking to understand the very nature of human inventiveness and creativity? How the brain interacts with consciousness? What makes us tick?

Ultimately many believe that to know how to build an AI we will need to know what a brain can do, what work it can perform, how it does what it does: the secret of its production of consciousness, of thought. Philosophy is stuck at the threshold, blind to the very nature of consciousness, creating reasonable hypotheses that only the sciences can test and verify. Yet it is to the neurosciences, rather than philosophy, that we will now turn for these answers. Philosophy turns on rhetoric and language and will always remain barred from the actual workings of the physical processes themselves. One can argue otherwise, but that would itself be a circular argument, bound within the circle of language and thought, an idealist turn. That’s one of the issues of our time: can philosophy get outside of thought, think the material and physical, access the real indirectly, or not? Or is philosophy a game of thought forever cut off in the circle of its own groundless linguistic structures?

To answer that question would take me too far afield. Rather, what we are seeing is that the sciences are not concerned with the how or the why, but with the ways things work and do: not the truth of being, but the ways and means of process and action. AI and brain research will converge in the days, months, and years ahead as scientists, not philosophers, do the job of reverse engineering and of developing systems that mimic the brain’s own processes. No one can foresee what the outcome will be, nor when such an emergence of Strong AI will be realized, if ever; yet many believe it is possible.

Singer’s only diagnosis concerns the problem we’ll face if that comes about: ethics. As he suggests, “there is a case to be made for starting to think about how we can design AI to take into account the interests of humans, and indeed of all sentient beings (including machines, if they are also conscious beings with interests of their own)”. As he argues:

With driverless cars already on California roads, it is not too soon to ask whether we can program a machine to act ethically. As such cars improve, they will save lives, because they will make fewer mistakes than human drivers do. Sometimes, however, they will face a choice between lives. Should they be programmed to swerve to avoid hitting a child running across the road, even if that will put their passengers at risk? What about swerving to avoid a dog? What if the only risk is damage to the car itself, not to the passengers?

The point here is that humans may not be able to develop the algorithms necessary to inform our AIs, weak or strong, with the patterns, decisions, and ethical demarcations and nuances so subtle that even humans have a hard time deciding. But is this necessarily true? Do we actually make our own decisions? There are those who believe ethics has nothing to do with it, that our decisions are guided by processes outside the normative chain of command, by deeper subsystems in the brain’s own neurochemical vats that do the deciding for us. Who’s right? That’s the problem, that’s where we’re at: no one has the answer as of yet.
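To see how coarse an explicitly programmed ethic looks once written down, consider a toy harm-weighting rule. Every name and number below is invented for this sketch; no manufacturer's actual policy is implied, and that arbitrariness is precisely the difficulty:

```python
# Toy illustration of how crude an explicit "ethical" rule looks in code.
# The weights are invented for this sketch and carry no authority.

HARM_WEIGHTS = {"child": 100.0, "adult": 90.0, "passenger": 90.0,
                "dog": 10.0, "vehicle": 1.0}

def expected_harm(outcome):
    """outcome: list of (who, probability_of_harm) pairs for one maneuver."""
    return sum(HARM_WEIGHTS[who] * p for who, p in outcome)

def choose_maneuver(options):
    """options: dict mapping maneuver name -> predicted outcome."""
    return min(options, key=lambda m: expected_harm(options[m]))

options = {
    "brake_straight": [("child", 0.7), ("passenger", 0.05)],
    "swerve_left":    [("passenger", 0.3), ("vehicle", 1.0)],
    "swerve_right":   [("dog", 0.9), ("vehicle", 1.0)],
}
print(choose_maneuver(options))   # every number here smuggles in a moral judgment
```

Whether the child is "worth" ten dogs or a hundred, and whether passengers may be weighed against bystanders at all, are judgments the code cannot make for itself; someone has to put the numbers in.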


  1. Scott Bukatman, Terminal Identity: The Virtual Subject in Postmodern Science Fiction (Duke University Press, 2012), p. 104. Kindle edition.

2 thoughts on “Reality Programming: Peter Singer on AI”

  1. I have been thinking of late that the only thing artificial about AI is the stealth by which the title reinforces the narrative that intelligence is something else other than a created object.

    Intelligence is a bit like a metanarrative, or common sense. We all agree that we have it so it exists. I think intelligence does exist, when it remains in contact with the instruments that measured it – in other words, intelligence is terminally contingent. It is a relational construct which has bearing only upon the conditions of its construction.

    For me the only thing to be scared of about AI is the apparently increasing number of humans who are surrendering their cognitive processes to a pretty lame story.


    • Not to argue the point, but AI is far more than a metanarrative for the literary-minded; it’s a vast distributed network of funded corporate and military experimentation developing at this moment under our noses. Weak AI is already manifest in the current systems of fast trading, and other autonomous systems already exist and work 24/7 in our networks. Is it intelligent? Not in the sense of what many term human intelligence, no. Yet the point many are making is that something will eventually lead to Strong AI, which will be intelligent… will it be human intelligence… perhaps so, perhaps not. Look at the animal kingdom: there is intelligence in many creatures… obviously they are not on the scale of humans. But the difference between weak and strong AI is one of control… what many describe as this future form of Strong AI is a self-organizing and self-replicating process that may evolve beyond human control and in ways unforeseen or unforeseeable. To dismiss it as you do, as a fantasy, seems a little like sticking your head in a hole in the ground and saying: “This too will pass…”. Will it? Billions of dollars are being poured into it saying it won’t… who shall we believe, one such as yourself pooh-poohing it, or the tens of thousands of scientists and engineers working on its emergence?

