Reading Kevin Kelly is always like taking a trip down fantasy lane: a utopian future full of electronic gadgets, gewgaws, and fantastic wonders that usually forgets the mistakes of the past, a wild ride into an optimistic world of the Jetsons unhinged, a retro-futurism of the pure instant. I was reading his essay on Watson AI and its mutation into a cloud-computing environment, where for once his estimation is actually truncated and mundane:
The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. (Read: Three Breakthroughs… )
Where there is optimism can pessimism be far behind?
Samuel Butler, in his anti-utopian or dystopian work Erewhon, once said: “There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A jellyfish has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.” Hundreds if not thousands of SF novels, short stories, and essays have been written about such fantastic worlds full of helpful and harmful agents, as well as both Utopian and monstrous visions of AI dystopias.
Stanislaw Lem, that great SF writer of the pessimism of technological worlds, once said this of alien contact: “A contact with aliens seems to be impossible, and even if it happened, it might not be a ‘real’ one. A man is only capable of understanding something which is known to him, something that can be expressed within the framework of categories generated during the ages of cultural development of humanity.” 1 If we think of AI as a human artifact, an alien species that we ourselves are generating out of our own inhuman core, we may begin to appreciate Lem’s pessimism about our cognitive limits. For Lem the inexorable expansion of technological destructiveness is out of the control of even the best-intentioned civilizations. One sees in his later works the recurring theme of the autonomous technoevolution of weapons systems. The purest form of technoscientific imperialism is, after all, an arms race. (ibid. p. 147)
As Rob Lemos tells us in his InfoWorld article on AI and clouds, 5 lessons from the dark side of cloud computing: 1) The cloud offers little or no legal protection. 2) It’s a shared environment in which no one owns the environment (I actually see this as an incentive for a social or collective intelligence system!). 3) Strong policies and education are required (aren’t they always?). 4) Don’t trust the machines (we never did, did we?) – of course he’s speaking of “pre-configured instances and found authentication keys in the caches, credit-card data and the potential for malicious code to be hidden within the system” (the point here is that your data could be swiped by malefactors!). 5) Don’t believe the hype, which means make sure you know what cloud computing really can and cannot do (he terms it: “rethink your assumptions”).
One analyst, Martin Ford, in an NPR interview said this about the timing of AI arising in our midst: “It’s very hard to say exactly when,” Ford said. “One thing we can say though is that things are moving at a faster and faster rate. Technology — and in particular, information technology — is accelerating … and it’s going to have a very big impact at some point, a disruptive impact, I think.” (see The Dark Side of Watson)
Obviously, with the Chinese stock market debacle in progress, the dark side has emerged with all its ferocious and disruptive force. See my World Markets: Spoofing – AI and the Control of the Markets and Civilization? As one analyst tells us: Disruptive technology always surfaces socioeconomic issues that either didn’t exist before or were not obvious and imminent. Some people get worked up because they don’t quite understand how technology works. I still remember politicians trying to blame GMail for “reading” emails to show ads. I believe that Big Data is yet another such disruption that is going to cause similar issues, and it is disappointing that nothing much has changed in the last two years. (see The Discriminatory Dark Side Of Big Data)
There is also the threat of governmental spies and intelligence services accessing the clouds for their nefarious observations, as Chris Doughtery tells us in “NSA-Proof Your Cloud Storage”: “With recent revelations about the NSA tapping into cloud-based giants like Yahoo and Google, it is becoming increasingly important for cloud storage users to take additional steps to secure their data. You can no longer blindly trust that your service provider is keeping your data secured and locked away from prying eyes. Networks get attacked, servers compromised and information is leaked. It happens all the time.”
Yet there is another dark side to the world of cloud computing too, as the Guardian reports: digital waste has grown exponentially over the last decade as storage of data — such as e-mails, pictures, audio and video files, etc. — has shifted to the online sphere:
According to a recent Greenpeace report, Make IT Green: Cloud Computing and its Contribution to Climate Change, the electricity consumed by cloud computing globally will increase from 632 billion kilowatt hours in 2007 to 1,963 billion kWh by 2020 and the associated CO2 equivalent emissions would reach 1,034 megatonnes.
Of course a few of the newspeak pessimists of AI voice their concerns as well. A number of prominent science and technology experts have expressed worry that humanity is not doing enough to prepare for the rise of artificial general intelligence, if and when it does occur. Earlier this week, Hawking issued a dire warning about the threat of AI.
“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC, in response to a question about his new voice recognition system, which uses artificial intelligence to predict intended words. (Hawking has a form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou Gehrig’s disease, and communicates using specialized speech software.)
And Hawking isn’t alone. Musk told an audience at MIT that AI is humanity’s “biggest existential threat.” He also once tweeted, “We need to be super careful with AI. Potentially more dangerous than nukes.”
In March, Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher jointly invested $40 million in the company Vicarious FPC, which aims to create a working artificial brain. At the time, Musk told CNBC that he’d like to “keep an eye on what’s going on with artificial intelligence,” adding, “I think there’s potentially a dangerous outcome there.” (see Live Science)
Then of course the military shadow worlds are working overtime to develop these new systems. “Artificial Intelligence (AI) technology has reached a point where the deployment of [autonomous] systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” so wrote technology and scientific geniuses in an open letter, signed by 1,000 signatories, which was presented at the International Joint Conference on Artificial Intelligence in Buenos Aires. The researchers acknowledge that sending robots into war could produce the positive result of reducing casualties. However, they write, it lowers the threshold for going to battle in the first place. “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” they write. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.” – See more at: The Debate…
Yet, it’s weird how the propaganda-hype machine of corporate media has instilled its own stupidity back into the loop to the point that even the propaganda is stupid. You would assume someone would have figured out that China losing 3 trillion dollars was a little more than a bubble on the blip of the progressive road to capitalism…
The more I read about the various companies now in this niche market selling AI algorithms and services, the more I realize that what’s happening is a conclave of idiocy on both sides of the equation. Having been a software engineer for 36+ years, spanning the major thrust of the global internet uptake, I’ve seen companies push out horrendous spaghetti code that was never intended to be complete, only efficient: code that was in many ways already broken, viral, buggy. When I think of such self-learning algorithms being set loose in the markets to learn on their own (i.e., to self-program their own code in a recursive loop based on some unique set of algorithms), the hair stands up on my neck like a horror-show fright. And with so many competing AI algorithms, each unique and proprietary, used by multitudes of corporate interests, one imagines the stock market turning to mush or goo as these swarm like piranhas of the electronic rivers, feeding on the trades each day.
The Chinese market is a testing bed for this new form of viral predation, a system out of control, seeking to annihilate and absorb the capital of nations quickly, efficiently, machinically. Just think of it this way: the Chinese discovered what they think were a few dozen malefactor corporate Hedge Funds and other types doing this (24?)… it’s probably much more than this that goes undetected across the world’s markets. With algorithms being given names such as “Stealth” (developed by Deutsche Bank), “Iceberg”, “Dagger”, “Guerrilla”, “Sniper”, “BASOR” and “Sniffer”, what does the future portend? (see: Financial Times, March 19, 2007 pdf) How do you discover a free-floating algorithmic agent that mimics actual human traders, a smart machine investment broker wandering freely among other competing agents, when it was programmed to act in every way just like said broker? What of the larger underworld organizations: the Narco-Mexican cartels, the Russian mafia, Sicily? What if either the large drug-lords of the world or the investment-bank partners who launder money and support their shadow worlds gain full access to these systems for manipulation? And governments seeking to bring another nation’s economy down? The list goes on and on… sheer paranoia?
For all intents and purposes these new AI systems now pass the Turing test for investment brokerage avatars… and yet each is both blind and intelligent: it does not know it is an agent. It is, as my friend R. Scott Bakker has been suggesting for a long while, a non-intentional mathematical miracle of the ghost in the machine that has no soul, intelligent without consciousness, a pure profit-making machine. Bakker has been working on the assumption that human intentionalism will soon go the way of the dinosaurs. As he puts it in Writing After the Death of Meaning: “‘Meaning,’ on my account, will die two deaths, one theoretical or philosophical, the other practical or functional. Where the first death amounts to a profound cultural upheaval on a par with, say, Darwin’s theory of evolution, the second death amounts to a profound biological upheaval, a transformation of cognitive habitat more profound than any humanity has ever experienced.”
Hundreds of non-intentional profit-making machines, all competing for the same hedge bets, work the markets daily. AIs mimicking human agents, traders… no wonder the Chinese market went down so fast. “We are proud to declare that Alan Turing’s Test was passed for the first time on Saturday,” said Kevin Warwick, a visiting professor at the University of Reading, which organized the event at the Royal Society in London. “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human.” So have machines actually become more intelligent than humans? Or is the truth only that they can mimic humanity better than we can cognitively filter out the illusions we perceive within our limited range of consciousness? If Bakker is right, the truth is we’re just too stupid to perceive the difference.
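The herding dynamic being described can be caricatured in a few lines of code. What follows is a purely illustrative toy sketch, not any real trading system: the `simulate` function and every parameter in it are invented for this post. It pits a crowd of momentum-chasing agents, each blindly buying after an up-tick and selling after a down-tick, against a crowd of independent coin-flip traders:

```python
import random

def simulate(n_agents=100, steps=200, herd=True, seed=42):
    """Toy market: do agents chase the last tick, or trade independently?

    Returns the price history. Purely a cartoon of the feedback loop
    described above; nothing here resembles a real market mechanism.
    """
    rng = random.Random(seed)
    price, last = 100.0, 100.0
    history = [price]
    for _ in range(steps):
        if herd:
            # every agent piles onto the previous tick's direction
            direction = 1 if price > last else -1
            demand = direction * n_agents
        else:
            # independent coin-flip traders: demand roughly cancels out
            demand = sum(rng.choice((1, -1)) for _ in range(n_agents))
        # price moves with net demand, floored so it can't go negative
        last, price = price, max(1.0, price + 0.01 * demand)
        history.append(price)
    return history

herd_prices = simulate(herd=True)
noise_prices = simulate(herd=False)
```

With the herd switched on, every agent piles onto the first down-tick and rides the price all the way to the floor; with independent traders the price merely jitters around its starting value. A cartoon, but it gestures at why a market saturated with similar, non-intentional algorithms can move so far, so fast, in one direction.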
The stupidity factor must always be thrown into the mix as well. In most ways we are letting loose chaos algorithms on a world of commerce, a world of competing AIs: seek-and-search machines, profit-machines given such predator names, each devolving into cannibalistic worlds of Darwinian-Spencerian capitalism that is slowly unraveling the advanced systems on the edge of things rather than bringing any sense of order. Over the years as a consultant I’ve seen companies push out code that was already buggy, and each new iteration led to greater and greater errors, to the point that in most instances the system would sooner or later have to be scrapped for a new trial run. Our so-to-speak advanced algorithms will tend toward disruption as more and more companies let such bug-infested systems loose on the market. Stay tuned, folks, the fun has only just begun.
Welcome to the End Game!
1. Swirski, Peter (2006-07-27). The Art and Science of Stanislaw Lem (p. 162). Ingram Distribution. Kindle Edition.
Maybe Pink Floyd had it right all along in their Dark Side of the Moon album – it’s all leading to utter madness and chaos – enjoy the ride: