Kevin Kelly’s Optimistic Take on AI and Cloud Computing; and, the Dark Side Fights Back


Reading Kevin Kelly is always like taking a trip down fantasy lane: a utopian future full of electronic gadgets, gewgaws, and fantastic wonders that usually forgets the mistakes of the past, a wild ride into the optimistic world of the Jetsons unhinged, a retro-futurism of the pure instant. So I was reading his essay on Watson AI and its mutation into a cloud computing environment, where for once his estimation is actually truncated and mundane:

The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. (Read: Three Breakthroughs… )

Where there is optimism, can pessimism be far behind?

Samuel Butler, in his anti-utopian work Erewhon, once said: “There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A jellyfish has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.” Hundreds if not thousands of SF novels, short stories, and essays have been written about such fantastic worlds full of helpful and harmful agents, with both utopian visions and monstrous AI dystopias.

Stanislaw Lem, that great SF writer of technological pessimism, once said, speaking of alien contact: “A contact with aliens seems to be impossible, and even if it happened, it might not be a ‘real’ one. A man is only capable of understanding something which is known to him, something that can be expressed within the framework of categories generated during the ages of cultural development of humanity.” 1 If we think of AI as a human artifact, an alien species that we ourselves are generating out of our own inhuman core, we may begin to appreciate Lem’s pessimism about our cognitive limits. For Lem the inexorable expansion of technological destructiveness is beyond the control of even the best-intentioned civilizations; one sees in his later works the recurring theme of the autonomous technoevolution of weapons systems. The purest form of technoscientific imperialism is, after all, an arms race. (ibid., p. 147)

As Rob Lemos tells us in his InfoWorld article 5 lessons from the dark side of cloud computing:

1) The cloud offers little or no legal protection.
2) It is a shared environment in which no one owns the environment (I actually see this as an incentive for a social or collective intelligence system!).
3) Strong policies and education are required (isn’t that always the case?).
4) Don’t trust the machines (we never did, did we?). Of course he’s speaking of “pre-configured instances and found authentication keys in the caches, credit-card data and the potential for malicious code to be hidden within the system”; the point here is that your data could be swiped by malefactors! (A toy sketch of such a secret-scan follows this list.)
5) Don’t believe the hype, which means make sure you know what cloud computing really can and cannot do (he terms it: “rethink your assumptions”).
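
Out of curiosity, here is a minimal sketch of what such a secret-scan might look like. This is my own illustration, not the researchers’ actual tooling: the mount point, the AWS-style key pattern, and the Luhn check on card-number candidates are all assumptions made for the sake of the example.

```python
# Hypothetical sketch: walk a mounted cloud image and flag strings that look
# like leftover access keys or credit-card numbers. Illustrative only.
import os
import re

KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")   # AWS-style access key IDs
CARD_PATTERN = re.compile(r"\b\d{13,16}\b")         # candidate card numbers

def luhn_ok(number):
    """Luhn checksum, used to prune card-number false positives."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def scan_tree(root):
    """Yield (path, kind, match) for every credential-like string found."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            for m in KEY_PATTERN.findall(text):
                yield path, "access-key", m
            for m in CARD_PATTERN.findall(text):
                if luhn_ok(m):
                    yield path, "card-number", m

if __name__ == "__main__":
    # "/mnt/image" is a hypothetical mount point for a shared machine image.
    for path, kind, match in scan_tree("/mnt/image"):
        print(f"{kind}: {match!r} in {path}")
```

Nothing fancy, and that is rather the point: if a grep-grade script can surface keys and card numbers in a shared image, so can a malefactor.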

One analyst, Martin Ford, asked in an NPR interview about the timing of AI arising in our midst, said: “It’s very hard to say exactly when. One thing we can say though is that things are moving at a faster and faster rate. Technology — and in particular, information technology — is accelerating … and it’s going to have a very big impact at some point, a disruptive impact, I think.” (see The Dark Side of Watson)

Obviously, with the Chinese stock market debacle in progress, the dark side has emerged in all its ferocious and disruptive force. See my World Markets: Spoofing – AI and the Control of the Markets and Civilization? As one analyst tells us: “Disruptive technology always surfaces socioeconomic issues that either didn’t exist before or were not obvious and imminent. Some people get worked up because they don’t quite understand how technology works. I still remember politicians trying to blame Gmail for ‘reading’ emails to show ads. I believe that Big Data is yet another such disruption that is going to cause similar issues and it is disappointing that nothing much has changed in the last two years.” (see The Discriminatory Dark Side Of Big Data)

There is also the threat of government spies and intelligence services accessing the clouds for their nefarious observations, as Chris Dougherty tells us in “NSA-Proof” Your Cloud Storage: “With recent revelations about the NSA tapping into cloud-based giants like Yahoo and Google, it is becoming increasingly important for cloud storage users to take additional steps to secure their data. You can no longer blindly trust that your service provider is keeping your data secured and locked away from prying eyes. Networks get attacked, servers compromised and information is leaked. It happens all the time.”
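
The practical upshot of Dougherty’s advice is client-side encryption: never hand the provider plaintext in the first place. Here is a minimal sketch using the Python cryptography package’s Fernet recipe; the file names are placeholders, and key management (keeping the key off the cloud entirely) is the hard part I am waving at rather than solving.

```python
# Minimal "encrypt before you upload" sketch using the `cryptography` package
# (pip install cryptography). Only the sealed file should ever leave the machine.
from cryptography.fernet import Fernet

def seal(plaintext_path, sealed_path, key):
    """Encrypt a local file; upload only the sealed copy."""
    f = Fernet(key)
    with open(plaintext_path, "rb") as src:
        token = f.encrypt(src.read())
    with open(sealed_path, "wb") as dst:
        dst.write(token)

def unseal(sealed_path, key):
    """Decrypt a downloaded file back to its original bytes."""
    with open(sealed_path, "rb") as src:
        return Fernet(key).decrypt(src.read())

if __name__ == "__main__":
    key = Fernet.generate_key()  # whoever holds this reads the data; store it offline
    seal("notes.txt", "notes.txt.enc", key)
    assert unseal("notes.txt.enc", key) == open("notes.txt", "rb").read()
```

The provider then stores only ciphertext; a breach or a subpoena on their end yields noise unless the key leaks too.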

Yet there is another dark side to the world of cloud computing, as the Guardian reports: digital waste has grown exponentially over the last decade as the storage of data (e-mails, pictures, audio and video files, etc.) has shifted to the online sphere:

According to a recent Greenpeace report, Make IT Green: Cloud Computing and its Contribution to Climate Change, the electricity consumed by cloud computing globally will increase from 632 billion kilowatt hours in 2007 to 1,963 billion kWh by 2020 and the associated CO2 equivalent emissions would reach 1,034 megatonnes.
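
A quick back-of-the-envelope check on those figures (my own arithmetic, not Greenpeace’s): going from 632 to 1,963 billion kWh between 2007 and 2020 implies roughly a 9% compound annual growth rate, a tripling of cloud electricity demand in thirteen years.

```python
# Implied compound annual growth rate of the Greenpeace projection.
start, end, years = 632.0, 1963.0, 2020 - 2007
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year over {years} years")  # ~9.1%/year
```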

Of course a few of the newspeak pessimists of AI voice their concerns as well. A number of prominent science and technology experts have expressed worry that humanity is not doing enough to prepare for the rise of artificial general intelligence, if and when it does occur. Earlier this week, Stephen Hawking issued a dire warning about the threat of AI.

“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC, in response to a question about his new voice recognition system, which uses artificial intelligence to predict intended words. (Hawking has a form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou Gehrig’s disease, and communicates using specialized speech software.)

And Hawking isn’t alone. Elon Musk told an audience at MIT that AI is humanity’s “biggest existential threat.” He also once tweeted, “We need to be super careful with AI. Potentially more dangerous than nukes.”

In March, Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher jointly invested $40 million in the company Vicarious FPC, which aims to create a working artificial brain. At the time, Musk told CNBC that he’d like to “keep an eye on what’s going on with artificial intelligence,” adding, “I think there’s potentially a dangerous outcome there.” (see Live Science)

Then, of course, the military shadow worlds are working overtime to develop these new systems. “Artificial Intelligence (AI) technology has reached a point where the deployment of [autonomous] systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” wrote a group of scientific and technological luminaries in an open letter signed by a thousand signatories and presented at the International Joint Conference on Artificial Intelligence in Buenos Aires. The researchers acknowledge that sending robots into war could produce the positive result of reducing casualties; however, they write, it lowers the threshold for going to battle in the first place. “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” they write. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.” (See more at: The Debate…)

Yet it’s weird how the propaganda-hype machine of corporate media has instilled its own stupidity back into the loop, to the point that even the propaganda is stupid. You would assume someone would have figured out that China losing 3 trillion dollars was a little more than a blip of a bubble on the progressive road to capitalism…

The more I read about the various companies now in this niche market selling AI algorithms and services, the more I realize what’s happening is a conclave of idiocy on both sides of the equation. Having been a software engineer for 36+ years, spanning the major thrust of global internet uptake, I’ve seen companies push out horrendous spaghetti code that was never intended to be complete, only efficient: code that was in many ways already broken, viral, buggy. When I think of such self-learning algorithms being set loose in the markets to learn on their own (i.e., to self-program their own code in a recursive loop based on some unique set of algorithms), the hair on my neck stands up like a horror-show fright. With so many competing AI algorithms, each unique and proprietary, used by multitudes of corporate interests, one imagines the stock market turning into mush or goo as these swarm like piranhas of the electronic rivers, feeding on the trades each day.
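
To make the worry concrete, here is a toy simulation of that feedback loop. It is entirely my own illustration, not any firm’s algorithm: a crowd of momentum-chasing agents whose collective reaction to the last price move feeds into the next one, so a single 2% shock either dissipates or snowballs depending on how crowded the trade is.

```python
# Toy model: the swarm's net order flow is `gain` times the last return.
# With gain > 1 the agents amplify their own wake and a small shock snowballs.

def simulate(gain, steps=60, shock_at=20, shock=-0.02):
    """Return a price path under a simple momentum-feedback rule."""
    prices = [100.0, 100.0]
    for t in range(steps):
        last_return = prices[-1] / prices[-2] - 1
        ret = gain * last_return      # the swarm chasing its own wake
        if t == shock_at:
            ret += shock              # one exogenous piece of bad news
        prices.append(max(prices[-1] * (1 + ret), 0.01))  # floor keeps price positive
    return prices

calm = simulate(gain=0.0)   # no momentum chasers: the shock is a one-off 2% dip
swarm = simulate(gain=1.3)  # crowded, similar algorithms: the dip compounds
print(f"without feedback the price ends at {calm[-1]:.2f}")  # 98.00
print(f"with feedback it collapses to {swarm[-1]:.2f}")      # 0.01, the floor
```

One knob, `gain`, stands in for how many similar algorithms are feeding on the same signal; past the critical value of 1 the market stops averaging their errors and starts compounding them.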

The Chinese market is a testing bed for this new form of viral predation, a system out of control, seeking to annihilate and absorb the capital of nations quickly, efficiently, machinically. Just think of it this way: the Chinese discovered what they believe were a few dozen malefactor hedge funds and other players doing this (24?); it is probably far more than this that goes undetected across the world’s markets. With algorithms being given names like “Stealth” (developed by Deutsche Bank), “Iceberg”, “Dagger”, “Guerrilla”, “Sniper”, “BASOR” and “Sniffer”, what does the future portend? (see: Financial Times, March 19, 2007 pdf) How do you discover a free-floating algorithmic agent that mimics actual human traders, a smart machine investment broker wandering freely among other competing agents, when it was programmed to act in every way just like said broker? What of the larger underworld organizations: the Mexican narco-cartels, the Russian mafia, Sicily? What if the great drug lords of the world, or the investment-bank partners who launder their money and support their shadow worlds, gain full access to these systems for manipulation? And governments seeking to bring another nation’s economy down? The list goes on and on… sheer paranoia?
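
As for the detection question, here is a minimal sketch of the kind of heuristic an exchange’s surveillance desk might start from; the order log and thresholds are invented for illustration, not any regulator’s actual rule. Spoofers place many orders they never intend to fill, so an extreme cancel-to-placement ratio is the crudest possible tell.

```python
# Hypothetical spoofing heuristic: flag traders who place many orders and
# cancel almost all of them. The log below is made up for illustration.
from collections import defaultdict

order_log = [
    ("fund_a", "place"), ("fund_a", "cancel"), ("fund_a", "place"),
    ("fund_a", "cancel"), ("fund_a", "place"), ("fund_a", "cancel"),
    ("fund_b", "place"), ("fund_b", "fill"),
]

def flag_spoofers(log, min_orders=3, cancel_ratio=0.9):
    """Return traders whose cancellation rate meets or exceeds `cancel_ratio`."""
    placed = defaultdict(int)
    cancelled = defaultdict(int)
    for trader, action in log:
        if action == "place":
            placed[trader] += 1
        elif action == "cancel":
            cancelled[trader] += 1
    return [t for t in placed
            if placed[t] >= min_orders and cancelled[t] / placed[t] >= cancel_ratio]

print(flag_spoofers(order_log))  # -> ['fund_a']
```

And of course the arms race is built in: any agent smart enough to mimic a human broker is smart enough to keep its cancel ratio just under whatever threshold the watchers publish.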

For all intents and purposes these new AI systems now pass the Turing test for investment brokerage avatars… and yet each is both blind and intelligent: it does not know it is an agent. It is, as my friend R. Scott Bakker has been suggesting for a long while, a non-intentional mathematical miracle, a ghost in the machine without a soul, intelligent without consciousness: a pure profit-making machine. Bakker has been working on the assumption that human intentionalism will soon go the way of the dinosaurs. As he puts it in Writing After the Death of Meaning: “‘Meaning,’ on my account, will die two deaths, one theoretical or philosophical, the other practical or functional. Where the first death amounts to a profound cultural upheaval on a par with, say, Darwin’s theory of evolution, the second death amounts to a profound biological upheaval, a transformation of cognitive habitat more profound than any humanity has ever experienced.”

Hundreds of non-intentional profit-making machines, all competing for the same hedged bets, work the markets daily, AIs mimicking human agents, traders… no wonder the Chinese market went down so fast. As Kevin Warwick, a visiting professor at the University of Reading, which organized the Turing test event at the Royal Society in London, stated: “We are proud to declare that Alan Turing’s Test was passed for the first time on Saturday.” And: “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human.” So have machines actually become more intelligent than humans? Or is the truth only that they can mimic humanity better than we can cognitively filter out the illusions we perceive in our limited range of consciousness? If Bakker is right, the truth is we’re just too stupid to perceive the difference.

The stupidity factor must always be thrown into the mix as well. In most ways we are letting loose chaos algorithms on the world of commerce, a world of competing AI seek-and-search machines, profit-machines given predator names, each devolving into the cannibalistic world of Darwinian-Spencerian capitalism, slowly unraveling the advanced systems at the edge of things rather than bringing any sense of order. Over the years as a consultant I have seen companies push out code that was already buggy, where each new iteration led to greater and greater errors, to the point that in most instances the system would sooner or later have to be scrapped for a new trial run. Our so-called advanced algorithms will tend toward disruption as more and more companies let loose such bug-infested systems on the market. Stay tuned, folks: the fun has only just begun.

Welcome to the End Game!

1. Swirski, Peter. The Art and Science of Stanislaw Lem. Ingram Distribution, 2006. Kindle edition, p. 162.


Maybe Pink Floyd had it right all along in their Dark Side of the Moon album – it’s all leading to utter madness and chaos – enjoy the ride:

9 thoughts on “Kevin Kelly’s Optimistic Take on AI and Cloud Computing; and, the Dark Side Fights Back”

    • Thanks, Scott!

Yea, after reading his essay it reminded me of the old humanist critics of ‘New Criticism’ and their defense of the humanities and liberal education against postmodern deconstructionism, etc. It’s as if he’s saying: oh, by the way, you’re going to lose your memory and mind if you depend too much on machines. The problem with his argument is that it’s not that we’re losing our brain power; we’re off-loading aspects of its storage systems so we can use that same power for other, more important things. In other words, what he sees as detrimental could also be seen as an improvement from another perspective.

I mean, I already see the improvement from the angle of knowledge acquisition: under the old humanistic regime you had to spend years acquiring knowledge; in the Renaissance individuals could still attain encyclopedic knowledge of all the various subjects of the liberal arts, etc. But by the time we reach the 18th century you had the realization, among those like Diderot, that we needed an external data storage system to organize knowledge and simplify its acquisition, while making it available to others in a form that allowed transfer of that knowledge in the shortest amount of time and at the greatest density. So right there in the encyclopedists we had the first notion of the computer: the encyclopedia is our first external data storage and retrieval system, based on the new mathematical sciences of calculus.

In our time these same processes of simplification, filtering, retrieval, etc. have been added to all our ubiquitous devices, allowing us instant access to the information we need, while also simplifying the search and appropriation of knowledge that we would, on our own, have a much harder time discovering (think of trying to master the Dewey Decimal Classification and search through the Library of Congress or other massive systems of paper-based data storage: years of study, filtering through indexical archives, etc., to locate the knowledge needed for specific mental tasks).

What Dennett fails to address is our dependence on faulty algorithms; instead he focuses on the loss of memory and the simplification offered by external agents. For me it is a no-brainer that in an intelligent world we have always depended on forms of external memory storage, narratives, mnemonic devices to transmit culture and knowledge. In my own studies of anthropology across a wide spectrum I’ve always believed that what we term the older primitive systems of shamanism (mental) and vodoun (physical) were data storage systems that involved cultural transmission through elaborate techniques of memory and mapping (i.e., shamans as the repositories of the tribe’s history, myths, stories, mind maps: the tree they climbed up and down to retrieve medical information, etc.). Then came the more elaborate systems of memory like the great ziggurat cities of the ancient Middle East, external storage devices within which the city itself became a map of the cultural mind, etc.

Then, in Rome and Greece, and later in feudal China, Japan, Europe, etc., came the development of great libraries to store information and organize it as the repository of the best knowledge.

To me all these ancient systems of external storage and retrieval have been computers of a kind, attempts to simplify the task of memory and retrieval while offloading our need to develop such fantastic memory systems internally. We see in the culture of the Catholic Church the approach a static system will take: it separated out and developed a hierarchy of need, of secrecy and illiteracy, for the majority of its citizens, while within its elite caste of religious monks it developed extreme forms of memory retention (see the art of memory: https://en.wikipedia.org/wiki/Art_of_memory).

To me we have always been developing these computer-based systems, from the beginning: systems to transmit knowledge and make the cycle of learning easier and easier. So people like Dennett seem to me both retrograde and absolutely wrong. I wonder if it is because, as a philosopher-scientist, he just doesn’t have the historical and literary knowledge to see that broad view? Sometimes I am amazed at the ignorance of certain men and women in their approach to problems that were already solved ages ago; is it because we lack a larger framework, a meta-narrative of technique and knowledge acquisition, a history of the infrastructure of knowledge, if you will, that we cannot see these solutions? Most studies of ancient times deal with politics and religion rather than knowledge and its appropriation and transmission. Culture has always been artificial and based on theories of artificial intelligence systems. Over and over I see this.

I remember studying the aborigines of Australia, who over 60,000 years developed memory systems to guide their nomadic travels across the outback using external storage systems they termed the Dreamtime: devices in which physical objects would be impressed with tribal meta-narratives, dream songs that allowed them to recall hunting grounds, water sources, tribal boundaries, stars, etc. And all the elaborate ceremonials, the painful transitions of youth, maturity, marriage, old age, etc., in which difficult and pain-based memory systems were imposed on the people to keep the knowledge of environmental survival transmitted from generation to generation.

There is a whole history of computational mentation and memory awaiting some scholar: Kittler, Ong, Bloom (influence theories), etc.

What I do agree with is his basic assumption that these machines have too much authority, and that our dependence on them, as living things so ubiquitous, leaves us with no other avenues of knowledge storage as backup plans. As he tells it: what if the electricity fails, or some viral agent ruins the storage systems, the big-data systems, the internet, and shuts it all down? Will we have this knowledge stored on some other device as backup? Our dependency is on the storage and retrieval mechanisms, not on knowledge per se, or even on our ability to think for ourselves. As long as we always have some kind of permanent external storage device from which, even under the most dire circumstances, we can continue to retrieve survival information (our scientific and cultural knowledge), we will continue. But if we have no access to more permanent systems of external storage, and are dependent on electronics-based systems whose electrical inputs could be stopped, we will have major civilizational issues.

So what we need is to develop a more permanent, non-electronic storage system as a backup that can be safely protected, much as we do with toxic nuclear waste stored in ancient salt mines deep below the earth. As long as we can retrieve information manually we can continue civilization; if not, we are probably doomed from the beginning and will lose our knowledge to noise…


      • Dennett’s never sat down to work out an understanding of cognitive ecology, so I think what he says suffers for want of clarity. But this is what he’s angling at, and to that extent I’m inclined to agree with him. His whole position (like mine) turns on evolution sculpting pre-established harmonies between biological systems. Now we’re in the process of demolishing those harmonies. His point is systematic, even if it doesn’t come across that way.


• So for you this destruction is a bad thing? Should we reverse course? Should we become more conservative in our technological innovations? In this destruction of the harmonies between natural systems do you see some final cut between the artificial and the natural, with the artificial colonizing the natural and eliminating it? Obviously this seems to be the agenda of Kurzweil and his transhumanist cohorts: a technological imperative whose goal is transcending the natural into the artificial, etc. So for you this is both dangerous and erroneous, and we need to curtail and eliminate such fantasies?

        I guess you’ll need to clarify what you mean by: “pre-established harmonies between biological systems”. Do you mean hard-wired (pre-established)? Harmonies? And why just “between biological systems”; and, what do you mean by “biological systems”?

I must say I hate the word “harmonies”, when in most natural systems we see more conflict and disharmony, a sort of struggle among competing systems using each other to further their own survival mechanisms, etc. Look at us: we’re chock-full of parasites, bugs, etc., a bric-a-brac of evolutionary kludginess that all seems to work most of the time but is dynamic and changing every second, a system of competitive creatures co-habiting the membrane of the human animal. The same goes for almost all biological systems… from a distance one might see harmony, but dig below the surface and one sees conflict and a continuous war of attrition. I’m very skeptical when someone sees harmonies… a very Platonic concept: Pythagorean, geometric, etc.

And what is a “cognitive ecology”? A loose use of ecology, a sort of topographical map of the mental terrain? And, we? Who? Scientists? Culture? Information theorists? Who’s destroying this? Is it due to a certain theoretic, a way of posing questions, a mental orientation? How are we destroying this harmony? And what harmony are we referring to?


      • Habitat destruction is a useful analogue.

        A ‘cognitive ecology’ consists of those environmental information structures prone to cue various heuristic systems, systems adapted (via evolution/learning) to solve on the cheap. Their economy derives from their selectivity, the fact they need only be sensitive to certain information, and can neglect everything else. They can neglect everything else, take it for granted, simply because, ancestrally at least, it always remained both sequestered and fixed.

        This is why neuroscience and AI pose the myriad conundrums they do. And this is why a neuroscientifically rationalized world filled with AI will very likely scramble our ancestral heuristic regimes. All the lacunae and the continuities we evolved to take for granted can no longer be taken for granted.

