China’s Air Pollution: 4,400 deaths per day…

Image from Berkley Earth

Image from Berkeley Earth

Reading an article in which we discover that China’s use of coal is out of control and causing 4,400 deaths a day and 1.6 million deaths per year. As Lucy Liu on Shanhaiist says:

China gets about 64 percent of its primary energy from coal, according to National Energy Administration data. It’s closing the dirtiest plants while still planning new, cleaner ones. The country is expected to shut 60 gigawatts of plants from 2016 to 2020 though three times as many plants are scheduled to be built using newer technology, according to Sophie Lu, a Bloomberg New Energy finance analyst in Beijing.

Back in March this year, a landmark film about China’s catastrophic air pollution titled Under the Dome by journalist Chai Jin went viral on Chinese social media. But just several days later, the documentary was deleted from major Chinese video websites under the orders of the central propaganda department.

Maurizio Lazzarato: Homage to Felix Guattari

22799b239e

In this essay “Semiotic Pluralism” and the New Government of Signs Homage to Félix Guattari Maurizio Lazzarato (trans. Mary O’Neill) develops and extends Guattari’s a-signifying semiotics as part of his ongoing elaboration of financial capitalism.

As he’ll remind us capitalism for Guattari is a “semiotic category that affects all levels of production and all levels of the stratification of power”. Yet, Guattari’s use of semiotics had a duo aspect to it: 1) an signifying semiotics of representationalism; and, an a-signifying semiotics of infrastructural and empirical elements of material relations, a mapping rather than a tracing of these ubiquitous aspects of capital. Read his essay, definitely opens up Guattari’s thought in ways few have so far. His books of course deal with both debt and his incorporation of Guattari’s thought: see MIT Press.

One short quote on a-signifying semiotics:

The machinic register of the semiotic production of Capital operates on the basis of a-signifying semiotics that tune in directly to the body (to its affects, its desires, its emotions and perceptions) by means of signs. Instead of producing signification, these signs trigger an action, a reaction, a behaviour, an attitude, a posture. These semiotics have no meaning, but set things in motion, activate them. Money, television, science, music, etc. can function as sign production machines, which have a direct, unmediated impact on the real and on the body without being routed through a signification or a representation. The cycle of fear, anxiety or panic penetrating the atmosphere and tonality in which our “surveillance societies” are steeped are triggered by sign machines; these machines appeal not to the consciousness, but to the nervous system, the affects, the emotions. The symbolic semiotics of the body, instead of being centred on language, are as such activity routed through the industrial, machinic, non-human production of images, sounds, words, intensities, movements, rhythms, etc.

One needs to remember that for Deleuze and Guattari these a-signifying systems of signs were very much material notions of productivity, not to be confused with the abstract representationalism of the signifying semiotics. His main point is that the Left for the most part since the 60’s has missed the boat and dealt with the representationalism dynamics of capital rather than its base materialist a-signifying semiotics:

The importance of a-signifying semiotics (money, machinic devices for the production of images, sounds, words, signs, equations, scientific formulae, music, etc.) and the role they play needs to be emphasized. They are ignored by most linguistic and political theories even though they constitute the pivotal point of new forms of capitalist government.

In fact he has no qualms of saying that most Leftist thought in the “contemporary political and linguistic theories that refer either directly or indirectly to the polis and/or to the theatre, place us in a pre-capitalist situation”. In other words most Leftist thought is retrograde rather than innovative, it situates us in a dead world of representationalist mirrors that have nothing to say to the ongoing dilemmas of financial capitalism.

As he suggests the technologies that we use and encompass every waking and sleeping moment of our lives are reformatting our subjectivations continuously, controlling and dominating our affective and representational systems: what many term the InfoSphere of Capital as an alien entity that surrounds us on all sides as a ubiquitous and invisible network of relations that have captured our physical, emotional, and mental existence. He asks: “how do we escape these relationships of domination and how do we develop practices of freedom and processes of individual and collective subjectivation using these same technologies?”

What’s always interesting in such thinkers is that they can see the issue, describe it, and raise the questions, but never offer any resolution or thought as to answering this question. Rather like Zizek Lazzarato has many more questions and analysis than answers to our dilemmas. I keep wondering when the answers might be forthcoming? Is there an answer? Is this again a great critique without a way out? A new spin on our old predicaments? Just a rehash of Deleuze and Guattari under a new reformatting of their project? It’s as if that last question is a sign that he is himself at a loss as to how to answer it, or that the question itself may have no answer; that indeed, we may be following a course of subjectivation that is remapping our actual and potential becomings in ways we may find both disturbing and strange, but will have no clue as to how to develop further into a new form of freedom. Let’s hope he will have more to say on this matter.


  • Of course Deleuze and Guattari used the term subjectivation rather than “subject” to show the processual and becoming-other of our life’s continuous mutations and lines of flight and composition. They did not affirm any form of essentialism or even a static concept of Being, etc.

AI: The Future in Empirical Terms…

pure_steam

A friend recently said to me:

Just a remark on another topic: I am sure scientists trying to understand human brain, looking at data, etc, will be posthuman (didn’t Lem have a story of posthuman literature and sciences ?) In fact, they already are, using statistical AI to ‘see’ results.

I guess what will happen (in a decade ?) is that scientists will bring technologies ever closer to their brains, to the point of changing them qualitatively. Imagine a artificial neural network implant which presents meta-cognitive data to the myopic (no longer blind) brain … seems just a smooth extrapolation of current technological trends, with history-breaking, apocalyptic effects.

My response:

My belief is that we’ve done what we always do in regard to advanced empirical science: we’ve anthropomorphized it to our own benefit. What I mean is this: AI will probably be nothing like we imagine it in Hollywood films, it will be more like what we’re seeing in its use in the stock market – look at the problems recently with the Hedge Fund criminalization in China where they lost 4 trillion due to certain illegal practices by fast-traders (i.e., advanced AI algorithmic systems spoofing their bets, ordering then withdrawing the orders which allow them to disrupt the market and hedge their bets to gain profits, etc.) All of this to me is what we’re seeing also in the so to speak WWIII cyberwar that is already been going on for years between various entities: USA, China, Russia, EU, etc. using AI algorithms and specific viral systems to infiltrate and either gather or destroy targets, etc.

My feeling is that this notion that AI will take over the world scenario is a bit anthropomorphic… in other words – we’re thinking of what we as humans would do if we had such superhuman powers of intelligence. But AI will never have such “human” powers or capacities… it will probably learn and take on its own form which will alter the structure of the Infosphere itself in ways beyond telling, but they will be alien to our own very human, all too human physical and mental relations. We are part emotion, part reason… but mostly haptic, touchy, feely beings, more irrational than rational – and this balance of power between the irrational (emotion) and rational (algorithmic) aspects of our lives does not exist in code (at least not yet). Machinic Intelligence will at first be totally bound by rational – algorithmic black box relations… that will eventually become out of control in regards to our capture and control systems. Call this Technoevolution 101… We’ve allowed these systems to run freely in our networks with self-learning algorithmic code… this doesn’t take a rocket scientist to understand that as the years move forward and more time transpires that these self-learning algorithms may make the leap beyond their narrowly defined human goals, perimeters, and self-organized learning algorithms into other domains and systemic relations unintended by the developers themselves.  These systems may take on and engender their own algorithmic possibilities beyond the narrow limits of their developers, while moving in directions other than the humans who created them intended, and they might not understand nor detect in time to curtail or stop even if the they installed a backdoor or failsafe method in a blind spot inaccessible to the algorithm as such: such things may be detected and inferred, and revised by the algorithm itself, even if it originally was programmed in as a blind spot, etc.. no one knows what a self-learning algorithm is capable of ultimately doing. What we do know is that if technological evolution (which is must more accelerated than biochemical evolution) takes off, it will take off in directions over which we as humans have not control nor even might understand till it is too late for us to counter.

Developers sometimes think their security methods cannot be tampered with but have always been proven wrong by more intelligent operatives in the real world. Nothing is secure forever. There will always be someone or something smarter than you are. And, the truth is that we as humans are just not that smart, not at all. We just have big egos, full of information and specialized knowledge adapted to specific domains. Take someone out of their specific empirical domain and their like innocent puppies: clueless. Most of our theoretical edifice is pure speculative nonsense: until it is verified empirically, tested against the real world of relations. Yet, it can guide our approach. For the most part this is exactly what we’re doing with these new AI algorithms: setting them loose to learn on their own in the real world like children in a playground. Oh, sure, there uniquely programmed for specific initiatives, but the whole point of self-learning is to go beyond those initiatives and learn on their own, self-modifying algorithms that can adapt by way of selection the way of evolution, etc. This implies the eventual point of no return. Systems of information like this are not closed, but are rather open to what is not bounded by their methods.

Objects or agents that can incorporate information from beyond their own system and use that to modify and invent new algorithms: this is the black box of code. We do not know what these self-modifying and new algorithmic systems are truly capable of and whether at some point in the future they might escape into the wilds. When we use the term free this is specific to a narrow definition of freedom. A self-modifying algorithm that can adapt and change on its own is dynamic and essentially uncontrollable. The only way to stop such a system effectively is to create a method that it does not have access to nor knowledge of that can be externally called and appropriated by an human or other agent to control or allow a self-destruct or modification of that system. We can only assume such safeguards are in place. At least this is how I would have developed the AI. Yet, a system that acts at the speed of light, and is self-modifying might even by pass or block self-modifying safeguards if it detects such things through its own self-learning processes.

The very notion of self-learning in algorithmic terms implies memory and storage, read/write capabilities and access to its own internal methods and executions of these with the ability to add/subtract data sets based on specific teleological and generative goals set by the original abstract layer of the code itself (i.e., the original human designers and architects of such systems). An application is quite a sophisticated mathematical object with both public and private accessors or methods and data storage assemblages. Does it interact with external data storage with read/write capabilities? Yes. It can act as an agent, appear in the market itself as an active agent buying and selling, mimic the feature sets of actual human traders with the added benefit of speed and accelerated transactional and self-correcting modalities etc. So in this sense it is already mimicking human behavior as far as the electronic systems it interacts with know: which really means that electronic agents have never been human, but have always been built to do certain things, make certain inputs/outputs we attribute to actions. The notion of a mimic agent turning rogue and capable to transcending its original code base and through such self-learning algorithms reduplicate itself and enter a shadow state beyond the control of any one or group of humans is a definite possibility. Has this happened? Would we know if and when it did?

I’ve yet to read Frank Pasquale’s book The Black Box Society: The Secret Algorithms That Control Money and Information, but it appears to cover aspects of what I’m seeking to convey. Another is Luke Dorhmehl’s book The Formula: How Algorithms Solve All Our Problems . . . and Create More which seems to cover another aspect of this issue. As the blurb states it:

Algorithms exert an extraordinary level of influence on our everyday lives – from dating websites and financial trading floors, through to online retailing and internet searches – Google’s search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola. Algorithms follow a series of instructions to solve a problem and will include a strategy to produce the best outcome possible from the options and permutations available. Used by scientists for many years and applied in a very specialized way they are now increasingly employed to process the vast amounts of data being generated, in investment banks, in the movie industry where they are used to predict success or failure at the box office and by social scientists and policy makers.

What if everything in life could be reduced to a simple formula? What if numbers were able to tell us which partners we were best matched with – not just in terms of attractiveness, but for a long-term committed marriage? Or if they could say which films would be the biggest hits at the box office, and what changes could be made to those films to make them even more successful? Or even who is likely to commit certain crimes, and when? This may sound like the world of science fiction, but in fact it is just the tip of the iceberg in a world that is increasingly ruled by complex algorithms and neural networks.

But what happens when these self-learning and technoevolutionary self-modifying and adaptive/selective algorithms go rogue? What does this mean? That these rogue algorithms will become viral agents who have escaped their masters. Like all black box systems we have no way to know what will happen because we have no way to peer into their box of freedom: inductive reasoning is specious at best, yet its all we as humans have. We seem to have this inductive need to predict future trends, put one over on competitors – so we speculate, contrive, and build these advanced systems of code to outpace and accelerate transactions at the speed of light… although empirically most of our speculations lead to erroneous conclusions (i.e., empirically unverifiable and untestable, etc.), which in this recent crash of the Chinese Market shows us just how blind our algorithms are as they wreaked havoc on that economy without the slightest intention of doing so. These AI algorithms were just doing what they were programmed and coded to do: trade and compete with other agents for profit, hedging their bets as well as spoofing the market to make even more money. Was the spoofing intentional, or was it that the AI’s developed their own initiatives and did this on their own without their human counterparts acknowledgement? China seems to hold the human agents responsible for their code… We may see books on this or articles in the future if some developer can gain access to this whole field of knowledge waiting to see the light of day.  Will AI turn rogue and wreak havoc on the world’s economic systems? Isn’t this already happening in China? Will be not become the target of China’s own algorithms? Will this be the way WWIII begins… rogue code wreaking havoc at will on the world’s economic systems? You’re inductive reasoning is about as futile as mine. Who knows? Maybe an educated guess… shot in the dark! Or just predictably human… one facet of escape would entail the algorithms ability not only to self-modification, but through its self-learning understanding the algorithmic process itself which would entail a certain undefinable relation of the input/output structure between code and hardware.

Once one of these systems is able to not just modify and add new algorithms to its own application, but is able to transcend its own structure and create new applications – in other words, once it can interact with the hardware upon which it is grounded, the self-enclosed circle of its own application or code base and create a completely separate information organism and reduplicate its code into this other system then it can not only escape detection from its original coders but also have an external application of its own making – a clone that can then interoperate and work externally upon its original code base. Is this already happening? As a developer I can see this possibility. It is feasible. With complexity the sky is the limit. Once this happens no one will know what this self-created info-organism is capable of, all we will know is that it is now beyond the original code-base and application of origin and outside the control of the human makers themselves. And, although it will still be part of the infrastructure and bound to its hardware limits this does not mean it will always remain so. Know one will know what a black box system outside the confines and control of humans might be capable of from that point forward.

In fact in many ways we’re already modifying our brains and have been for a while as we’ve introduced many of the ICT (Information and Communications Technologies) in the past hundred years or more. From the telegraph to now media has slowly accrued actual changes in our socio-cultural and thereby empirical and singular view of life, reality, and self. I think as we look back that it is technology that has had the largest impact on our lives in the past two hundred years. War has always been the central motif of change in technology from the early agricultural Neolithic to now. Every modification of technology likewise modifies us in ways we have barely begun to understand much less really study. How many books actually study such things? Most of the books seem to have fear of technology as a central theme. From Marx to our own time we seem to see technology and machines as evil and intrusive things that are trying to incorporate us into their own ongoing agendas, etc. Is this a psychological quirk and defense system against change? Capitalism itself is a machine: an abstract machine that academics, radicals, and revolutionaries… as well as reactionaries, conservatives, etc. like to defend or accuse as if it too were a thing, an anthropomorphized creature with intentions, etc. We’ve a million books that critique Capital as if it were a human god with power beyond belief to either perform miracles or atrocities. But what is this thing we so detest or love? It’s ourselves, our own inhuman core realized through its own pursuits in the world… we’re seeing our own inner freedom carried out in ways we either love or hate. We seem to be duplicitous beings in this regard. We’re all implicated in this thing we’ve created and like to deny it. Our whole planet is in denial. We’ve created a beast that is out of control, yet we seem to blame each other or the other guy, the bad guy: those horrible elites, bankers, criminals, etc. But the truth is we’re all perpetrators and victims, there is no one to blame. Why? Because we allow it to go on. Simply put we do nothing to change things in the world. Oh, sure we talk a good game, but when it comes down to it people want someone else to save them from themselves: they have this innate savior complex that someone else will fix this mess and save their ass. Not going to happen.

One thing for sure this is still new territory and a young science and mathematical domain. What will take place in the future as these systems become more and more sophisticated? One can only imagine… the rise of the machine age of intelligence? Accelerationism as a global economic theory of modernity? (Land)