The Global Cyberwar: The Algorithms of Intelligent Malware

When the engineer left Natanz and connected the computer to the Internet, the American- and Israeli-made bug failed to recognize that its environment had changed. It began replicating itself all around the world. Suddenly, the code was exposed, though its intent would not be clear, at least to ordinary computer users.1

Wired has an article by Kim Zetter, “An Unprecedented Look at Stuxnet, the World’s First Digital Weapon,” which elaborates on the now widely known collaboration between US and Israeli intelligence agencies seeking a way to infiltrate and slow down or destroy the centrifuges in Iran’s Natanz nuclear facility.

Needless to say they were successful, yet in their success they failed miserably. Why? Read the quoted passage again: the code was originally carried into the closed facility on flash drives and released into its computers from there. After it was slowly unwound, installed, and phased into its operative mode, it began to work through the networks of the facility until, by chance or accident, it found itself outside the facility and on the Internet. So that, as James Barrat reminds us, we “do not know the downstream implications of delivering this powerful technology into the hands of our enemies. How bad could it get? An attack on elements of the U.S. power grid, for starters.” (Barrat, 261-262)
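What failed here, in other words, was the environment check itself: the worm was meant to fingerprint its host and stay quiet anywhere that did not look like the intended target, and public accounts hold that an error in that logic in a later variant let it keep spreading once it left Natanz. The following minimal Python sketch is purely illustrative; none of the names (HostProfile, should_stay_dormant, and so on) come from Stuxnet’s actual code, and the only historical claim it leans on is the widely reported fact that Stuxnet checked for specific Siemens industrial-control software before acting.

    from dataclasses import dataclass

    @dataclass
    class HostProfile:
        runs_siemens_step7: bool   # target engineering software present on this host?
        plc_config_matches: bool   # attached controller configuration matches the intended target?
        internet_reachable: bool   # host sits on an open, Internet-connected network?

    def should_stay_dormant(host: HostProfile) -> bool:
        """Return True unless every fingerprint of the intended target matches.

        A worm whose version of this gate is buggy (as reportedly happened with
        a later Stuxnet variant) keeps copying itself onto hosts far outside
        the facility it was built for.
        """
        on_target = host.runs_siemens_step7 and host.plc_config_matches
        return (not on_target) or host.internet_reachable

    # The engineer's laptop from the opening anecdote: once it reached the open
    # Internet, a working gate would have told the code to go quiet.
    laptop = HostProfile(runs_siemens_step7=True, plc_config_matches=False,
                         internet_reachable=True)
    print(should_stay_dormant(laptop))   # True

The sketch contains no propagation or payload logic at all; the point is only that this one boolean is the entire safety margin, which is why a single coding error in it was enough to turn a targeted weapon into a worldwide infection.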

Zetter’s article doesn’t mention this fatal flaw in the plan, or that the malware is now spreading across the globe and is available even for our enemies to use against us. As Barrat recounts, Sean McGurk, a former head of cyberdefense at DHS, was asked in a CBS 60 Minutes interview whether, had he been consulted, he would have built such a piece of malware:

MCGURK: [Stuxnet’s creators] opened up the box. They demonstrated the capability. They showed the ability and the desire to do so. And it’s not something that can be put back.
KROFT: If somebody in the government had come to you and said, “Look, we’re thinking about doing this. What do you think?” What would you have told them?
MCGURK: I would have strongly cautioned them against it because of the unintended consequences of releasing such a code.
KROFT: Meaning that other people could use it against you?
MCGURK: Yes.

(Barrat, 260)

The segment ends with German industrial control systems expert Ralph Langner. Langner “discovered” Stuxnet by taking it apart in his lab and testing its payload. He tells 60 Minutes that Stuxnet dramatically lowered the dollar cost of a terrorist attack on the U.S. electrical grid to about a million dollars. Elsewhere, Langner warned about the mass casualties that could result from unprotected control systems throughout America, in “important facilities like power, water, and chemical facilities that process poisonous gases.”

“What’s really worrying are the concepts that Stuxnet gives hackers,” said Langner. “Before, a Stuxnet-type attack could have been created by maybe five people. Now it’s more like five hundred who could do this. The skill set that’s out there right now, and the level required to make this kind of thing, has dropped considerably simply because you can copy so much from Stuxnet.”

(Barrat, 261-265)

As one analyst put it, Stuxnet is remarkably complex, but is hardly extraordinary. Some analysts have described it as a Frankenstein of existing cyber criminal tradecraft – bits and pieces of existing knowledge patched together to create a chimera. The analogy is apt and, just like the literary Frankenstein, the monster may come back to haunt its creators. The virus leaked out and infected computers in India, Indonesia, and even the U.S., a leak that occurred through an error in the code of a new variant of Stuxnet sent into the Natanz nuclear enrichment facility. This error allowed the Stuxnet worm to spread into an engineer’s computer when it was hooked up to the centrifuges, and when he left the facility and connected his computer to the Internet the worm did not realize that its environment had changed. Stuxnet began spreading and replicating itself around the world. The Americans blamed the Israelis, who admitted nothing, but whoever was at fault, the toothpaste was out of the tube.2

Deibert goes on to say the real significance of Stuxnet lies not in its complexity, or in the political intrigue involved (including the calculated leaks), but in the threshold that it crossed: major governments taking at least implicit credit for a cyber weapon that sabotaged a critical infrastructure facility through computer coding. No longer was it possible to counter the Kasperskys and Clarkeses of the world with the retort that their fears were simply “theoretical.” Stuxnet had demonstrated just what type of damage can be done with black code. (Deibert, KL 2728)

Such things are just the tip of the iceberg, too. The world of cybercrime, cyberterrorism, and cyberwar is a thriving billion-dollar industry, flourishing as a full-time aspect of the global initiatives of almost every major player on the planet. As reported in the NY Times article “U.S. Blames China’s Military Directly for Cyberattack,” the Obama administration explicitly accused China’s military of mounting attacks on American government computer systems and defense contractors, saying one motive could be to map “military capabilities that could be exploited during a crisis.” Meanwhile, countries like Russia target their former satellites. As “Suspicion Falls on Russia as ‘Snake’ Cyberattacks Target Ukraine’s Government” reports, according to a report published by the British-based defense and security company BAE Systems, dozens of computer networks in Ukraine have been infected for years by a cyberespionage “tool kit” called Snake, which seems similar to a system that several years ago plagued the Pentagon, where it attacked classified systems.

Bloomberg summarized this concept in the following statement:

“The U.S. national security apparatus may be dominant in the physical world, but it’s far less prepared in the virtual one. The rules of cyberwarfare are still being written, and it may be that the deployment of attack code is an act of war as destructive as the disabling of any real infrastructure. And it’s an act of war that can be hard to trace: Almost four years after the initial NASDAQ intrusion, U.S. officials are still sorting out what happened. Although the American military is an excellent deterrent, it doesn’t work if you don’t know whom to use it on.”

As Deibert warns, we are wrapping ourselves in expanding layers of digital instructions, protocols, and authentication mechanisms, some of them open, scrutinized, and regulated, but many closed, amorphous, and poised for abuse, buried in the black arts of espionage, intelligence gathering, and cyber and military affairs. Is it only a matter of time before the whole system collapses? (Deibert, KL 2819)

President Dwight D. Eisenhower once warned of the growing military-industrial complex of the 1950s; now Deibert suggests an ever-growing cyber-security industrial complex, a world where a rotating cast of characters moves in and out of national security agencies and the private sector companies that service them. (Deibert, KL 2927) For those in the defence and intelligence services industry this scenario represents an irresistibly attractive market opportunity. Some estimates value the cyber-security military-industrial business at upwards of US $150 billion annually. (Deibert, KL 3022) The digital arms trade for products and services around “active defence” may end up causing serious instability and chaos. Frustrated by their inability to prevent constant penetrations of their networks through passive defensive measures, it is becoming increasingly legitimate for companies to take retaliatory measures. (ibid., 3079)

Malicious software that pries open and exposes insecure computing systems is developing at a rate beyond the capacities of cyber security agencies even to count, let alone mitigate. Data breaches of governments, private sector companies, NGOs, and others are now an almost daily occurrence, and systems that control critical infrastructure – electrical grids, nuclear power plants, water treatment facilities – have been demonstrably compromised. (Deibert, KL 3490) The social forces leading us down the path of control and surveillance are formidable, and sometimes even appear to be inevitable. But nothing is ever inevitable. (Deibert, KL 3532)


In Mind Factory Slavoj Zizek asks the question: Are we entering the posthuman era? He then goes on to say that the survival of being-human by humans cannot depend on an ontic decision by humans.3

Instead, he reminds us, we should admit that the true catastrophe has already happened: we already experience ourselves as in principle manipulable, we need only freely renounce ourselves to fully deploy these potentials. But the crucial point is that, not only will our universe of meaning disappear with biogenetic planning, i.e. not only are the utopian descriptions of the digital paradise wrong, since they imply that meaning will persist; the opposite, negative, descriptions of the “meaningless” universe of technological self-manipulation is also the victim of a perspective fallacy, it also measures the future with inadequate present standards. That is to say, the future of technological self-manipulation only appears as “deprived of meaning” if measured by (or, rather, from within the horizon of) the traditional notion of what a meaningful universe is. Who knows what this “posthuman” universe will reveal itself to be “in itself”? (Mind Factory, KL 368-66)

What if there is no singular and simple answer, what if the contemporary trends (digitalisation, biogenetic self-manipulation) open themselves up to a multitude of possible symbolisations? What if the utopia— the pervert dream of the passage from hardware to software of a subjectivity freely floating between different embodiments— and the dystopia— the nightmare of humans voluntarily transforming themselves into programmed beings— are just the positive and the negative of the same ideological fantasy? What if it is only and precisely this technological prospect that fully confronts us with the most radical dimension of our finitude? (Mind Factory, KL 366-83)

With so many things going on in the sciences, the military, governments, nations, etc., where are the watchdogs who can discern the trends? Who can give an answer to all the myriad elements that are making up this strange new posthuman era we all seem to be blindly moving toward? Or is it already here? With malware on the loose, with algorithms that manipulate, grow, and improve loose around the globe, and being reprogrammed by various unknown governments, criminal syndicates, and hackers, what does the man or woman on the street do? As Nick Land says of one of his alter egos:

Vauung seems to think there are lessons to be learnt from this despicable mess.4


 

1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 261). St. Martin’s Press. Kindle Edition.
2. Deibert, Ronald J. (2013-05-14). Black Code: Inside the Battle for Cyberspace (Kindle Locations 2721-2728). McClelland & Stewart. Kindle Edition.
3. Armand, Louis; Zizek, Slavoj; Critchley, Simon; McCarthy, Tom; Wark, McKenzie; Ulmer, Gregory L.; Kroker, Arthur; Tofts, Darren; Lewty, Jane (2013-07-19). Mind Factory (Kindle Locations 367-368). Litteraria Pragensia. Kindle Edition.
4. Land, Nick (2013-07-01). Fanged Noumena: Collected Writings 1987 – 2007 (Kindle Location 9008). Urbanomic/Sequence Press. Kindle Edition.

 

 

 

4 thoughts on “The Global Cyberwar: The Algorithms of Intelligent Malware”

    • exactly… it’s the blind leading the blind for the blind… the thing is that it’s all part of the recombinant remix of ideas embedded in the junk of previous code. Invention is never new: it’s just a remix of things found in older systems in new ways. New assemblages.

      • One of the main things that Barrat suggested in his book about AGI is that most of the developers are not worried whether it is based on human thinking or abilities: and, as he says, they will more than likely stumble upon it as an alien process and algorithm after it has already escaped their clutches. The notion that they could make it AI Friendly or even add emotion, normativity, etc. to it is not even on the plate of most AGI experts, except that Yudkowsky has written a book-length online treatise about these questions entitled Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures.

        Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 55). St. Martin’s Press. Kindle Edition.
