Samuel Beckett – Apr 13, 1906 – Dec 22, 1989 (age 83)

This little story by Samuel Beckett “Stirrings Still” always reminds me of those convoluted Latinate poems of e.e. cummings, the ones you must not only memorize, but perform internally and externally, fuse and merge beyond the memory of their actual enactment, a forgetting in the moment of saying… no matter how glum I get Beckett can make me smile inside… there has always been and will always be a kindness, a gentleness in his nihilism that goes beyond the banal truths… his ability to put the void in language, to distill out of the abyss a non-meaning that has something like an after spark rather than a taste of death in it… quintessential Beckett… this sense of absence in motion… a pressing of the void that is this absence against the horizon… But that’s it, isn’t it? In Beckett nothing ever truly ends: it’s this endless void dancing in motion, a waiting without end – two clowns in operatic speechlessness speaking past each other to the absence that never was there… yet, has always remained in its very nothingness. Samuel Beckett: the black mote in a comic’s eye, the voice both in and out of the endgame. The best compliment you could pay in homage to Beckett: Even his shadow is longer than time…

*  *  *

“Waiting to see if he would or would not. Leave him or not alone again waiting for nothing again.”

           – Samuel Beckett, The Complete Short Prose of Samuel Beckett, 1929-1989

The Ideology of Progress and its Aftermath

Why is the concept of “Progress” in our liquid, flexible, destructured and deterritorialized socio-economic world obsolete? When we go back and look at the original Progressive era and begin to understand its central issues in various cultural and scientific initiatives, we realize that the notion of Reform is the central dictum of that project. Think about that: reform is based on the notion that there is something stable and unified to which one can apply change without destroying what one is changing beyond all recognition. The State was the central metaphor to which this notion was applied. The other issue is the notion of knowledge: the notion that one could store what has been learned in a coherent, integrated, and widely shared way, providing an accumulated and unified dataset of foundational knowledge upon which to base the decisional processes of society. The society, the sciences, the whole gamut of methodologies to which these notions once pointed are now gone. We no longer live in a progressive universe in which such notions make even an inkling of rational sense. The notion that we can construct stable institutions, both governmental and scientific, to which stable caches of information can be attached and expanded indefinitely and passed on, intact, from one generation to the next is no longer viable in our flexworld of networks and informational processes that are continuously modified, revised, and updated in non-cumulative, desynchronous, even a-linear ways; in such a world the notion of progress in government and the sciences is not only erroneous but detrimental.

Since the Progressive Age we’ve seen the opposite of progress as that era defined it: we have witnessed the emergence of an almost “geo-engineered” and continuous milieu that triggers an experience of delocalization/detemporalization. In fact, in our age of ubiquitous wireless communication, with its vast Information and Communications Technologies, peer-to-peer exchanges, social networks, and algorithmic or mediated environments of all kinds, we’ve seen the amplification of the notion of an “ambient deterritorialization” that is always around, always “on”. Jonathan Crary, in his work 24/7: Late Capitalism and the Ends of Sleep, documents this flexworld of global non-stop hypercapitalism that never sleeps. Around the planet we’ve been watching the ineffectuality of national systems struggling to redefine themselves using these outmoded concepts and scientific notions of Big Data, which is nothing more than progressive accumulation and neoliberal fantasy. The wastage on these massive projects of accumulating data not only runs against current scientific and philosophical thinking; it creates a platform that will lead not to increased intelligence and information but to their opposite.

Even in my own field of software development, as an architect, we’ve understood for a long time that knowledge is not accumulative; in fact, knowledge is usually dated before the software that is the engine for its use ever reaches market. Forecasting and anticipation, rather than accumulation and progress, are better indicators of the value of knowledge in our world today. Knowledge is not some massive foundational piece of data to be stored, but a set of practices and informational algorithms to be put to work. We’ve seen the failure of bureaucracy in most democratic nations to predict, understand, or circumvent economic, political, and social crises, locked as it is into outmoded forms of thought based on progressive ideologies, on both the Left and the Right, during the so-called “neoliberal” era of free-market economics.
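
To make that concrete in my own trade’s terms, here is a minimal sketch (Python, purely illustrative, not any particular production system): a forecaster that keeps only a small, continuously revised state rather than an ever-growing archive of observations – knowledge as a practice put to work rather than data put in storage.

```python
# A toy sketch, not any real system: "knowledge" held as a running
# practice rather than an accumulated store. Instead of archiving every
# observation, the forecaster retains one continuously revised estimate.

class Forecaster:
    """Exponential moving average: the forecast is revised, never accumulated."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha      # how quickly older "knowledge" is discounted
        self.estimate = None    # the only state we keep

    def update(self, observation: float) -> float:
        if self.estimate is None:
            self.estimate = observation
        else:
            # blend the fresh signal with the prior estimate; raw history is discarded
            self.estimate = self.alpha * observation + (1 - self.alpha) * self.estimate
        return self.estimate


if __name__ == "__main__":
    f = Forecaster(alpha=0.5)
    for demand in [100, 120, 90, 130]:   # hypothetical incoming signals
        print(round(f.update(demand), 1))
```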

In our networked world the never-ending day now exists across continents, between global cities and time-zones, where nothing begins or stops but flows at accelerating speeds of power and data feeds that are no longer localized nor bound to the legal systems of national regulatory agencies and their contextualizations. We live, increasingly, in artificial zones, or “geo-engineered” environments, that Luciano Floridi terms the InfoSphere: data-generated systems of both internal and external infrastructures based on ubiquitous sensoriums, in which we are mutating and merging with our intelligent environments, environments that will provide the temporal flows and activities in which both human and artificial systems will cohabit in networked societies based on a new form of Onlife.

This notion of Onlife is based on the centrality of the ICTs not as mere tools but rather as social forces that are increasingly affecting our self-conception, our mutual interactions, our conception of reality, and our interactions with reality. Because of this we need to develop new ethical, legal, and political strategies for our global world that will adapt to and transform a complex set of relations among both First World and Third World participants as our social, political, religious, and cultural systems begin to interact within these new environments.

What’s more important in problem solving: the accumulation of data, or the elimination of redundant and spurious data? Companies like Google depend on providing their users access to the information they truly need in an accurate and timely manner. But is the internet the most useful site for such access? Has the process of weeding out redundant and spurious information (i.e., noise, disruptive information) become an insurmountable and deleterious exercise in futility? As companies add more powerful processing chips as well as deeper mathematical algorithms based on categorical and synthetic analysis, will we begin to see more cost-effective systems than even a Google? On Amazon.com alone one can find hundreds of books that deal with the notion of “organizing knowledge,” not even counting the specialized journals and papers in think-tanks, academia, etc. The question is a utilitarian one: what is useful knowledge, and for whom is it useful? The notion of filtering out must be guided by the actors or networks for which said knowledge will be used, as well as by possible unknowns.
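
A minimal sketch of what such actor-guided filtering might look like (the interest profile here is invented purely for illustration; any real network of users would be far messier):

```python
# Hedged illustration: redundant entries are dropped outright, while
# "spurious" is judged only relative to a particular actor's declared
# interests. The interests below are hypothetical.

from typing import Iterable

def filter_for_actor(records: Iterable[str], interests: set[str]) -> list[str]:
    seen: set[str] = set()
    kept: list[str] = []
    for record in records:
        normalized = " ".join(record.lower().split())
        if normalized in seen:                    # redundant: an exact duplicate
            continue
        seen.add(normalized)
        if set(normalized.split()) & interests:   # relevant to this actor
            kept.append(record)
        # everything else is treated as spurious *for this actor*,
        # which is precisely where the unknowns slip through
    return kept


if __name__ == "__main__":
    feed = [
        "Sensor drift detected in module A",
        "sensor  drift detected in module A",   # duplicate noise
        "Cafeteria menu updated for Friday",
    ]
    print(filter_for_actor(feed, interests={"sensor", "drift"}))
```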

When one thinks about it, the notion of organizing knowledge is like pinning the tail on the donkey at a birthday party: all the participants are blind as to the target information and must stumble blindly over an uncharted territory to discover the specific location for the placement of the tail. All well and good for such an analogy, but what of access to information for which one has no tail? The other problem is that knowledge is neither static nor fully dynamic, but a shifting territory much like the Sahara desert sands. Bureaucracies such as those of government (for example the NSA, the National Security Agency) accumulate and store even noise or useless data that to most utilitarian approaches would be anathema, but that to such entities might hold encrypted or coded messages camouflaged so as to pass by the typical algorithms that would otherwise eliminate them. The other problem is the notion of organization itself: it presupposes a set or synthesis of data that is accumulated, or that is in the process of being either permanently or temporarily stored, and can be replayed, retrieved, manipulated, bifurcated, spliced, etc. into categories of useful ‘quanta’ or ‘qualia’ for transposition or composition/decomposition. The more one delves into the organization of knowledge that is useful for a multiplicity of agents, the more the complexity of the issue accelerates beyond human comprehension.
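
To put the NSA point in plainer terms, a hedged toy illustration (the threshold and sample data are invented): a utilitarian filter discards whatever looks like random noise, yet encrypted or camouflaged traffic looks exactly like random noise, which is why an archival policy keeps everything.

```python
# Toy sketch: Shannon entropy as a crude "is this just noise?" test.
# A utilitarian filter drops high-entropy (random-looking) messages;
# an archival policy retains them, since encrypted signal is camouflaged
# as noise. Threshold and examples are illustrative assumptions.

import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; values near 8.0 mean the data looks random."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def utilitarian_filter(messages: list[bytes], threshold: float = 6.0) -> list[bytes]:
    # drop anything that reads as meaningless noise
    return [m for m in messages if shannon_entropy(m) < threshold]

def archival_policy(messages: list[bytes]) -> list[bytes]:
    # retain everything: today's noise may be tomorrow's decoded signal
    return list(messages)


if __name__ == "__main__":
    plain = b"meeting moved to three pm, bring the quarterly report " * 20
    camouflaged = os.urandom(4096)   # stands in for encrypted traffic
    print(len(utilitarian_filter([plain, camouflaged])))   # 1: the "noise" is gone
    print(len(archival_policy([plain, camouflaged])))      # 2: everything kept
```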

What if, instead of the massive impossibility of organizing knowledge, one takes another approach: learning how to learn? Instead of assuming we can continue to surmount the limits of knowledge management through mathematical algorithms, we instead invent an alternative practice based on systems that teach themselves techniques of learning how to learn? Adaptive systems capable of processing the very structures of information not as data but as a self-revisable educational system with its own built-in modes, capacities, and powers of acquisition and memory algorithms? Rather than vast data storage facilities that we assume total control over, we would have smart systems that share in this task and take on more and more of the intelligent and self-reflecting filtering of data on the fly that we would typically assign to static approaches of either linear or a-linear dynamics. Typically the organization of knowledge is by way of knowledge ontologies, taxonomies, and other schemas. This approach has served well and has used both the dictionary and the encyclopedia as relational matrices, both closed and open. Most of these are based on certain notions of semantics, which, according to those constructing the systems, can vary widely.
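
Here is a toy sketch of that alternative (a crude perceptron-style update, offered only as a stand-in for the self-revisable systems imagined above, and assuming nothing about any real platform): a filter that revises its own relevance weights from feedback rather than accumulating the raw data it sees.

```python
# Illustrative sketch: instead of a fixed taxonomy deciding what matters,
# the filter rewrites its own weights whenever its judgment is corrected.
# The examples and learning rate are invented for the sketch.

from collections import defaultdict

class AdaptiveFilter:
    def __init__(self, learning_rate: float = 0.5):
        self.weights = defaultdict(float)   # per-word relevance, revised on the fly
        self.lr = learning_rate

    def score(self, text: str) -> float:
        return sum(self.weights[w] for w in text.lower().split())

    def relevant(self, text: str) -> bool:
        return self.score(text) > 0.0

    def feedback(self, text: str, was_useful: bool) -> None:
        """Revise the filter itself instead of storing the raw record."""
        if self.relevant(text) != was_useful:
            direction = 1.0 if was_useful else -1.0
            for w in text.lower().split():
                self.weights[w] += self.lr * direction


if __name__ == "__main__":
    f = AdaptiveFilter()
    f.feedback("grid sensor anomaly report", was_useful=True)
    f.feedback("weekly cafeteria newsletter", was_useful=False)
    print(f.relevant("sensor anomaly detected"))     # True after feedback
    print(f.relevant("cafeteria newsletter issue"))  # False
```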

This is actually where both the neurosciences and philosophy begin to impinge on the information sciences. In our quest to understand how the brain itself works and relates to what it ‘means’ to be human in our behavior and our mentation, both our thinking and our affective relations, the new imaging technologies and the real-time apprehension of brain processes have opened up new paths for knowledge as well as presented many new problems in both the sciences and philosophy. As one studies DARPA, Google, and other entities, one realizes all these various sciences are already being brought to bear upon these issues in both commercial and governmental systems: robotics, AI, Big Data, etc. Underpinning most of these projects, both commercial and governmental, are certain fundamental philosophical notions that for the most part go unregistered in most public discourse on the topics, except as cursory concepts that here and there float to the folk-psychology blip-screen. Without a basic understanding of even the bare minimum of notions, concepts, and ideas that are driving Google, DARPA, and other initiatives in the areas of robotics, artificial intelligence, biotechnology, etc., we remain ignorant as to the actual scope and intent of these very enterprises and their relation to our accelerating futures.

I use futures in the plural because it’s not some monolithic project, but a network of competing energies and forces immanently working themselves out through these very systems. The whole notion purported by several neoliberal sites in connection with accelerating progress is false, and, as Nick Land proposed to me, the very notion of “progress” itself should be expunged from our vocabulary. Stephen Jay Gould once wrote a complete series of essays marking the idiocy of this cultural, economic, and political term and its abuse. Yet you still have events like “Citizen Scientists Accelerate Bio Progress” promoting erroneous notions that do more harm than good, confusing science with pseudo-science in their need to politicize and control the sciences.

Think about it: What did the Progressive Era in politics during the early decades of the 20th Century bring us? Propaganda, the Federal Reserve, and the Eugenics Movement. Under Woodrow Wilson’s watch the first propaganda and public relations systems (Edward Bernays) designed to coerce the vast populace through media were born, and the Federal Reserve Bank was accepted too. The Eugenics Movement sadly grew out of early reform movements on health, mental illness, population control, etc. All born out of the East Coast Brahmanism of elite bankers, corporate moguls, and paid-off government lackeys.