Optimizing Intelligence: Time, Technology, and the Human Condition

“It is not possible to step twice into the same river.”

—Heraclitus

We always get back to this definition: the machinic phylum is materiality, natural or artificial, and both simultaneously; it is matter in movement, in flux, in variation, matter as a conveyor of singularities and traits of expression.

—Deleuze and Guattari, A Thousand Plateaus

For Heraclitus the point of the river fragment is not that all things are changing so that we cannot encounter them twice, but something much more subtle and profound. It is that some things stay the same only by changing. There is something in the process of change that stays with us, that moves even as we move yet is stable through all the multifarious motions of time. In this sense Heraclitus believes in flux, but not as destructive of constancy; rather it is, paradoxically, a necessary condition of constancy.1 Shall we call it intelligence?

Deleuze was not a vitalist, as some would have you believe; rather, as he and Guattari would have it, “there is no vital matter specific to the organic stratum, matter is the same on all the strata. But the organic stratum does have a specific unity of composition, a single abstract Animal, a single machine embedded in the stratum, and presents everywhere the same molecular materials, the same elements or anatomical components of organs, the same formal connections.”2 In this sense matter is a form of intelligence rather than a vital vibrancy. It moves, it connects, it thinks… but not in the human sense.

Stanislaw Lem once spoke of the need for a cybernetic sociology of intelligence, a field that would concern itself with examining historical systems and offer a theory of how to construct optimal models of social intelligence — “optimal” with regard to freely chosen parameters.3 He argued that because the number of key factors is enormous, it is impossible to create a mathematically driven “ultimate formula for a society.” We can only approach this problem by getting closer and closer to it, via studying ever more complex models and simulations. For Lem the use of artificial intelligence would present humans with a black box theory of decision making:

…not as future “electronic omni-rulers,” or even superhuman sages, passing verdicts about the fate of humanity, but just as an experimental training ground for the scientists, a tool for finding answers to questions that are so complex that man will not resolve them without such help. Yet the decision itself, as well as the plan of action, should always remain in human hands. (ST: KL 2392)

That last sentence was key, for in Lem’s view if humanity ever handed the power of decision over to the machines, we would not remain human anymore.

Something else Lem would strongly attest to was the incompleteness of our knowledge and of our ability to know or understand the universe, ourselves, or the future. As he’d remind us, for hundreds of years philosophers have been trying to prove logically the validity of induction: a form of reasoning that anticipates the future on the basis of past experience. None of them have succeeded. “They could not have succeeded because induction— whose beginnings lie in the conditioned response of an amoeba— is an attempt to transform incomplete into complete information. It thus constitutes a transgression against the law of information theory, which says that in an isolated system, information can decrease or maintain its current value, but it cannot increase.” (ST: KL 2392-2399) Yet induction— be it in the form of a conditioned response of a dog (a dog “believes” that it will be fed after the bell has rung because it has always been like this up until now and conveys this “faith” by salivating) or in the form of a scientific hypothesis— is practiced by all living beings, including man. Acting on the basis of incomplete information, which is then completed through “guesswork” or “speculation,” is a biological necessity.
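If one wanted to make Lem’s point concrete, the dog’s “faith” can be caricatured in a few lines of code. A minimal sketch, assuming nothing more than a running frequency count (the names and structure here are illustrative, not any algorithm of Lem’s): the inductor expects whatever has most often followed a cue, completing incomplete information by sheer projection of the past.

```python
# A toy "inductor": it completes incomplete information by projecting
# past frequencies into the future, the amoeba-to-dog reflex Lem
# describes. Names and structure are illustrative only.
from collections import Counter

class Inductor:
    """Expects whatever outcome has most often followed a cue so far."""

    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def observe(self, cue: str, outcome: str) -> None:
        """Record one more instance of 'outcome' following 'cue'."""
        self.counts[(cue, outcome)] += 1

    def expect(self, cue: str):
        """Guess the future from the past; None if no past exists."""
        followers = {o: n for (c, o), n in self.counts.items() if c == cue}
        return max(followers, key=followers.get) if followers else None

dog = Inductor()
for _ in range(10):
    dog.observe("bell", "food")   # it has always been like this, up until now
print(dog.expect("bell"))          # -> "food": a guess, not a guarantee
```

The point of the toy is Lem’s: the expectation is never a deduction, only a wager that the next observation can always betray.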

So speculative science is the way we proceed in understanding the universe and the human condition. Our information will always be incomplete, and yet we must act on this incomplete information or suffer the consequences of our ignorance. Every action starts from a position of knowledge that contains gaps. In the light of such uncertainty, one can either refrain from action or undertake actions that involve risk. Refraining from action would mean stopping life processes. “Belief” stands for expecting that what we hope will happen is going to happen; that the world is the way we think it is; that a mental image is equivalent to an external situation. “Belief” can only be manifested by complex homeostats because they are systems that actively react to a change in their environment— unlike nonliving objects. Such objects do not “expect” or anticipate anything; in Nature’s homeostatic systems, such anticipation precedes thought by far. Biological evolution would not have been possible if it had not been for that pinch of “belief” in the effectiveness of reactions, aimed at future states, that has been built into every particle of the living substance. (ST: KL 2402)

This notion that we act on incomplete knowledge, guesswork, and speculation is at the heart of science, and the key to its use of probabilistic and statistical methodologies and mathematics to simulate and model incomplete systems. The influence of the information entered into the homeostat depends not so much on whether this information is objectively false or true but rather, on one hand, on how predisposed the homeostat is to consider it true and, on the other, on whether the regulatory characteristics of the homeostat allow it to react in response to the information entered. To make it work, both postulates need to be fulfilled. Belief can heal me, yet it will not make me fly. It is because the first activity lies within the regulatory realm of my organism (although not always within the realm of my conscious will), the second one outside it. (ST: KL 2504)
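Lem’s two postulates can be expressed as a toy model. A minimal sketch, with entirely hypothetical names and thresholds: information entered into the homeostat changes its state only if it is both believed and within the regulatory realm, and the objective truth of the claim never figures in the computation.

```python
# A toy homeostat implementing Lem's two postulates: entered information
# acts only if (1) the homeostat is predisposed to take it as true and
# (2) the claimed state lies within its regulatory realm. All names and
# thresholds are hypothetical.
class Homeostat:
    def __init__(self, belief_threshold: float, regulatory_realm: set):
        self.belief_threshold = belief_threshold  # predisposition to believe
        self.regulatory_realm = regulatory_realm  # what regulation can reach
        self.state = "baseline"

    def receive(self, claim: str, felt_plausibility: float) -> bool:
        """React to information; its objective truth never enters here."""
        believed = felt_plausibility >= self.belief_threshold
        actionable = claim in self.regulatory_realm
        if believed and actionable:
            self.state = claim
        return believed and actionable

body = Homeostat(belief_threshold=0.5, regulatory_realm={"healing"})
print(body.receive("healing", 0.9))  # True: belief can heal me
print(body.receive("flight", 0.9))   # False: it will not make me fly
```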

The brain is the first simulator, the first organic machine to model reality. Without it we would have long ago departed from this planet. Yet the models it built were based on the necessity of maintaining the equilibrium of the organic system within which it was housed: our body. Nothing else mattered to the brain but this homeostatic regulation of the body and its propagation and survival. All extraneous information that fell outside that task was relegated to the periphery as marginal data to be excluded rather than focused on. Our attention, our awareness, was trained only on the information the brain allowed us to view as part of this simulated world we live in.

In that sense we already live in the Matrix, only it is the brain, not some dubious and enterprising machinic consciousness, that is creating the world around us on the fly like so many pixelated dots on a virtual screen. What we perceive and the memories we can recall are all part of the brain’s invention, a reality studio that projects the world we live in. It’s not a fantasy world but a real one; just one that excludes more than it includes: it works through “medial neglect,” a process of filtering out all but what is needed to survive and reproduce ourselves. In that sense the brain is a deceiver, a magician who constructs our reality not as it is, but as it is for the tasks at hand of surviving and propagating our species. What else there is in reality comes to us by way of accident, as surprise. Our knowledge of the world is based on a minimal set of information and data; everything else is filtered by the brain and never given over to the attentional awareness we term consciousness. We neglect more than we perceive. It is this neglected aspect of reality that has confounded both scientists and philosophers for generations. All our knowledge of the world and ourselves is based on incomplete data, on neglect rather than optimal memory and perception. In that sense we are blind to the very processes that shape our world and ourselves. This has led philosophers to believe we are living in an illusory world of images and false projections, while for scientists our only access to the real world is through technical apparatuses and specific, testable methods and hypotheses. Nothing is guaranteed.

I could spend hours backing up this argument with both speculative philosophy and hard scientific and experimental data, but as many know this is a blog post, not a time-consuming philosophical or scientific tract. The brain in many ways is a homeostat that realizes its knowledge can only be approximate and incomplete, yet it has neglected to tell us that this is so. By “us” I mean the sub-system that is aware of these processes of interaction between brain, body, and environment. A natural drive to gain full and comprehensive knowledge has led the brain to construct a “metaphysical model” that allows it to believe it “already knows everything.” Yet, because such empirical knowledge is impossible to gain, the homeostatic brain shifts the possibility of achieving such knowledge beyond the limits of its own material existence. In other words, it has become convinced that it possesses a “soul,” one that it deems immortal. This sense of presence, of the first-person singular, of having a self, a soul, is part of the simulated reality system the brain maintains.

Mortimer Taube, in his 1961 book Computers and Common Sense: The Myth of Thinking Machines, brought up once again the classical dilemma of “whether a machine can think” by looking at it from two different angles: that of semantic actions and that of intuitive actions. It seems that there are indeed limitations to proceeding in a formal manner, which result from Gödel’s proof of the incompleteness of deductive systems, and that it is impossible to translate successfully from one natural language into another by means of purely algorithmic methods, because there is no relation of mutually unequivocal correspondence between the two. (ST: KL 2925) Because of this, some believe that different cultures, bound to their different structures of language, perceive the world differently.

As an anti-philosopher, for me the whole tradition of philosophy begins and ends on the question not of Being, but of Process and Becoming – Change and Movement. Against the essentialism of substantive formalism, the static and fixed relations of objectified Being, there is force, flux, and fluctuation – the chaos of process and happening. As collective processes ourselves we tap or plug into external systems of writing (i.e., Stiegler’s sense of “grammatization,” etc.) that fold us into the collective systems of cultural praxis and memory. The boundaries of one’s life are circumscribed by language(s). Those who inhabit multiple languages know and see the world differently than those of a monocultural system. Language is the first contract we as humans enter into on becoming part of the culture we inhabit. Language forms our perceptions of what is possible, locks us into a metaphysical layer of thought that is usually never questioned. We grow up immersed in this virtual prison house of language, never realizing that other cultures (indigenous or otherwise) may perceive what we term reality differently than we do. I admire the work of the iconoclastic Brazilian anthropologist and theoretician Eduardo Viveiros de Castro, and yet his “ontological turn” is still part of the tradition of metaphysics as Being; even though he offers a vision of anthropology as “the practice of the permanent decolonization of thought,” it is based on concepts and propositions that lead him into certain errors.

It’s fairly well known that Deleuze brought forward the concept of multiplicity as a meta-concept that defines a new type of entity, and the well-known (by name at least) “rhizome” is its concrete image. The sources of the Deleuzian idea of multiplicity lie in Riemann’s geometry and Bergson’s philosophy (Deleuze 1966: ch. 2), and its creation aims at dethroning the classical metaphysical notions of essence and type (DeLanda 2002). It is the main tool of a “prodigious effort” to imagine thought as an activity other than that of identification (recognition) and classification (categorization), and to determine what there is to think as intensive singularity rather than as substance or subject. The politico-philosophical intentions of this decision are clear: the transformation of multiplicity into a concept and the concept into a multiplicity is aimed at severing the primordial link between the concept and power, i.e., between philosophy and the state. Which is the meaning of Deleuze’s celebrated call “to invert Platonism” (D. 1990: 253).4

This severance between concept and power is best illustrated in Eduardo Viveiros de Castro’s use of the concept of Deleuzian perspectivism (modified from Nietzsche’s concept):

Perspectivism is a multinaturalism, since a perspective is not a representation. A perspective is not a representation because representations are properties of mind, whereas a point of view is in the body. The capacity to occupy a point of view is doubtlessly a power of the soul, and nonhumans are subjects to the extent to which they have (or are) a mind; but the difference between points of view— and a point of view is nothing but a difference— is not in the soul. The latter, being formally identical across all species, perceive the same thing everywhere. The difference, then, must lie in the specificity of the body. (CM: KL 1104)

I was recently reading Peter Watts’s SF novel Blindsight, in which a crew of misfits is sent to the edge of the solar system to make contact with an alien species. Once there they discover that the entity speaks human English, that it is able to communicate with them and maintain a conversation that seems authentic. And yet the linguistic expert who is set the task of determining whether the alien truly understands what it is saying concludes that it has no knowledge of the meaning of the words it is communicating:

“It doesn’t have a clue what I’m saying.” “What?” “It doesn’t even have a clue what it’s saying back,” she added. “Wait a minute. You said—Susan said they weren’t parrots. They knew the rules.” And there Susan was, melting to the fore: “I did, and they do. But pattern-matching doesn’t equal comprehension.” Bates shook her head. “You’re saying whatever we’re talking to—it’s not even intelligent?” “Oh, it could be intelligent, certainly. But we’re not talking to it in any meaningful sense.” “So what is it? Voicemail?”

“Actually,” Szpindel said slowly, “I think they call it a Chinese Room…” About bloody time, I thought.

“You ever hear of the Chinese Room?” I asked.

She shook her head. “Only vaguely. Really old, right?” “Hundred years at least. It’s a fallacy really, it’s an argument that supposedly puts the lie to Turing tests. You stick some guy in a closed room. Sheets with strange squiggles come in through a slot in the wall. He’s got access to this huge database of squiggles just like it, and a bunch of rules to tell him how to put those squiggles together.” “Grammar,” Chelsea said. “Syntax.” I nodded. “The point is, though, he doesn’t have any idea what the squiggles are, or what information they might contain. He only knows that when he encounters squiggle delta, say, he’s supposed to extract the fifth and sixth squiggles from file theta and put them together with another squiggle from gamma. So he builds this response string, puts it on the sheet, slides it back out the slot and takes a nap until the next iteration. Repeat until the remains of the horse are well and thoroughly beaten.” “So he’s carrying on a conversation,” Chelsea said. “In Chinese, I assume, or they would have called it the Spanish Inquisition.” “Exactly. Point being you can use basic pattern-matching algorithms to participate in a conversation without having any idea what you’re saying. Depending on how good your rules are, you can pass a Turing test. You can be a wit and raconteur in a language you don’t even speak.” “That’s synthesis?” “Only the part that involves downscaling semiotic protocols. And only in principle. And I’m actually getting my input in Cantonese and replying in German, because I’m more of a conduit than a conversant. But you get the idea.”5

The key is this ability to “use basic pattern-matching algorithms to participate in a conversation without having any idea what you’re saying. Depending on how good your rules are, you can pass a Turing test.” The Turing test, proposed by Alan Turing in 1950, is of course a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human: a human evaluator judges natural language conversations between a human and a machine designed to generate human-like responses. But as Watts and Lem demonstrate in their various ways, it is possible to construct algorithms and pattern-matching programs that can do just that without understanding a thing that is being said. In this sense the alien and human contact is perspectival and multinatural in Viveiros de Castro’s sense.
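For the curious, the Chinese Room that Watts dramatizes reduces to something embarrassingly small in code. A minimal sketch, with an invented rule table standing in for the novel’s “huge database of squiggles”: the responder participates in a conversation by pure lookup, and comprehension appears nowhere in the program.

```python
# A toy Chinese Room: conversation by pure rule lookup. The rule table
# below is invented for illustration; the point is that no model of
# meaning appears anywhere in the program.
RULES = {
    "hello": "Hello. How are you?",
    "how are you?": "I am well. And you?",
    "what are you?": "Now that is an interesting question.",
}

def chinese_room(message: str) -> str:
    """Match the incoming squiggle-string against the rule file and emit
    the prescribed response; 'understanding' is nowhere in the loop."""
    return RULES.get(message.strip().lower(), "Tell me more.")

for utterance in ["Hello", "How are you?", "What are you?", "Why?"]:
    print(f"{utterance!r} -> {chinese_room(utterance)!r}")
```

Scale the rule table up far enough and, as the quote has it, you can be a wit and raconteur in a language you don’t even speak.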

Many experimental AI deep learning initiatives and products have been and are being developed for home and business use that have this ability to mimic human language and processes without knowing what it is they are doing, providing information and decisions based on advanced algorithms and self-organizing processes of positive feedback loops. In many ways we are already progressing faster than many skeptics believed possible only a few years ago. Yet the drawback is that, at the moment, it takes massive amounts of input data for such learning algorithms to acquire even the smallest forms of knowledge.

Deep learning is a subset of machine learning (ML). It uses ML techniques to solve real-world problems by tapping into neural networks that simulate human decision-making. Deep learning can be expensive, and it requires massive datasets to train on. That’s because there is a huge number of parameters that need to be fitted by the learning algorithm, which can initially produce a lot of false positives. For instance, a deep learning algorithm could be instructed to “learn” what a cat looks like. It would take a massive data set of images for it to grasp the minor details that distinguish a cat from, say, a cheetah or a panther or a fox.
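To give a sense of scale, here is a minimal sketch in PyTorch (the architecture and sizes below are illustrative, not drawn from any particular system): even a toy cat-versus-not-cat network carries tens of thousands of free parameters, each of which must be pinned down by labeled examples; hence the appetite for data.

```python
# A toy cat-vs-not-cat network (PyTorch; architecture and sizes are
# illustrative). Counting its parameters makes the data appetite
# concrete: every one of these weights must be constrained by examples.
import torch
import torch.nn as nn

class TinyCatNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # cat / not-cat

    def forward(self, x):                         # x: (batch, 3, 64, 64)
        return self.classifier(self.features(x).flatten(1))

model = TinyCatNet()
print(sum(p.numel() for p in model.parameters()))  # ~21,000 parameters
# One 64x64 image pins down almost nothing; telling a cat from a
# cheetah or a fox takes a massive labeled dataset.
```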

Lem imagined a future in which these artificial thinking systems might hold more knowledge and memory than the collective social intelligence of humanity, and at that point they may begin governing and regulating our lives in ways we have already begun to see. He terms these AIs “electronic minders”:

“Nonhuman” regulators, that is, those that are not human, will probably be capable of managing their tasks better than humans will— thus the improvement effect brought about by technological development will be significant also in this area. Yet the situation will change completely on a psychological level because there is a difference between knowing that the relations humans must enter into with one another generate statistical and dynamic regularities that can sometimes adversely affect the interests of individuals, groups, or whole classes and knowing that we are losing control over our fate and that it is being passed on to “electronic minders.” We are then faced with a unique situation, which, on a biological level, would correspond to the situation of someone who knows that all his life processes are controlled not by him, not by his brain, not by all the internal systemic laws, but by some center outside him that prescribes the most optimal behavior for all the cells, enzymes, nerve fibers, and all the molecules of his body. Although such regulation could be even more successful than that carried out by the “somatic wisdom of the body,” although it could potentially provide strength, health, and longevity, everyone is probably going to agree that we would experience it as something “unnatural”— in a sense that refers to our human nature. (ST: KL 3182-86)

Even now, as we become more and more dependent on mobile devices, tablets, and the hundreds of apps, news feeds, office programs, etc. that crowd our screens and vie for our attention, we will begin off-loading many of our daily tasks to personal assistants (AI-enhanced agents) who will take care of appointments, remind us of birthdays, anniversaries, doctor’s appointments, our child’s recital; as well as keeping up with, answering, and working through our email, messages, notes, and work for us, making the decisions that we for the most part just do not have time to make. It is this time factor that is becoming the issue as we enter the 24/7 world of capitalism. We as organic creatures need sleep, downtime, play, entertainment, rejuvenation, insurance, and any number of distractions to keep our sanity and health. Our machinic intelligences need none of those things, and can do everything we do without all the extraneous problems of the organic and affective life of the human condition. Because of this, corporations as profit machines are beginning to depend on AI technology to displace the knowledge workers of today.

As Lem would admonish us,

Those systems will not be trying to “dominate over humanity” in any anthropomorphic sense because, not being human, they will not manifest any signs of egoism or a desire for power— which obviously can only be meaningfully ascribed to “persons.” Yet humans could personify those machines by ascribing to them intentions and sensations that are not in them, on the basis of a new mythology of an intellectronic age. I am not trying to demonize those impersonal regulators; I am only presenting a surprising situation when, like in the cave of Polyphemus, no one is making a move on us— but this time for our own good. Final decisions can remain in human hands forever, yet any attempts to exercise this freedom will show us that alternative decisions made by the machine (had they indeed been alternative) would have been more beneficial because they would have been taken from a more comprehensive perspective. After several painful lessons, humanity could turn into a well-behaved child, always ready to listen to (No One’s) good advice. In this version, the Regulator is much weaker than in the Ruler version because it never imposes anything; it only provides advice— yet does its weakness become our strength? (ST: KL 3193-3201)


  1. Graham, Daniel W., “Heraclitus”, The Stanford Encyclopedia of Philosophy (Fall 2015 Edition).
  2. Deleuze, Gilles, and Félix Guattari. A Thousand Plateaus (Kindle Locations 1125-1127). A&C Black. Kindle Edition.
  3. Lem, Stanislaw. Summa Technologiae (Electronic Mediations) (Kindle Locations 2385-2391). University of Minnesota Press. Kindle Edition.
  4. Viveiros de Castro, Eduardo. Cannibal Metaphysics (Univocal) (Kindle Locations 1744-1752). University of Minnesota Press. Kindle Edition.
  5. Watts, Peter. Blindsight + Echopraxia (Firefall #1 + #2) (Kindle Locations 1498-1506). Head of Zeus. Kindle Edition.
