The ultimate task of humanity should be to make something better than itself, for what is better than us cultivates itself through our pursuit for the better. Liberate that which liberates itself from you, for anything else is the perpetuation of slavery.
– Reza Negarestani, Intelligence and Spirit
In a recent e-flux journal essay, Reza Negarestani asks What Is Philosophy? He appears to have concluded that it is a piece of technology, an application, a program – "a collection of action-principles and practices-or-operations which involve realizabilities". He tells us early on that the questions of philosophy can only be addressed as a "deeper cognitive enterprise", as if this new technological system were, like the systems imagined by AI's devotees, just a matter of adjusting the code, the algorithms. As he puts it, the "primary focus of this cognitive program is to methodically urge thought to identify and bring about realizabilities afforded by its properties…, to explore what can possibly come out of thinking and what thought can become".
This is the philosophy of the new philosopher as Promethean Engineer, producing the cutting-edge algorithms of a new culture of artificial life-forms adapted and adapting to the philosophical programs and axioms of a future existence, replete with the normative tasks of a thought ruled by the inner necessity and dictates of science, math, and artificiality. The philosopher-as-engineer or developer encodes and decodes the algorithms of the (artificial?) brain as if it were a "cognitive program" run on the Mind, producing certain operations on demand, like objects in a standard object-oriented application with hidden properties and events/methods just waiting to be called into service. He offers three philosophical programs for his cognitive application:
- the ascetic program;
- the program whose primary axioms are those that pertain to the possibility of thought;
- the program as artificial and normative enterprise that rigorously inquires into its operational and constructive possibilities.
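To make the object-oriented metaphor above concrete, here is a minimal Python sketch – entirely my own hypothetical illustration, not anything Negarestani writes – of what a "philosophical program" would look like if taken literally as a software object: axioms as hidden (private) properties, "realizabilities" as methods waiting to be called.

```python
# A deliberately crude illustration of the "philosophy as program" metaphor.
# All names here are hypothetical; Negarestani offers no actual code.

class PhilosophicalProgram:
    def __init__(self, axioms):
        # Axioms stored as a "hidden property", like a private field in C++.
        self._axioms = list(axioms)

    def realize(self):
        # A method "waiting to be called into service": it extracts
        # "operational content" from each stored axiom.
        return [f"operational content of: {a}" for a in self._axioms]

# The "ascetic program" instantiated like any other object.
ascetic = PhilosophicalProgram(["reflect on the possibility of thought"])
print(ascetic.realize())
```

The point of the sketch is only to show how flat the metaphor is: once thought is modeled this way, "realization" is nothing more than a method call on pre-loaded state.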
The notion of treating philosophy as an engineering project seems a bit of a stretch, suturing our latest information-theoretic paradigm onto philosophy as if cognition were the latest in a series of platforms on which we could install specific thought modules to run our code – treating the Mind like a passive recipient of a bit of code, say a C++ application compiled and ready to go. As he defines it, the ascetic program involves the "exercise of a multistage, disciplined, and open-ended reflection on the condition of the possibility of itself as a form of thought that turns thinking into a program". One could see this as part of an artificial intelligence application in which our various applications are so many modules awaiting their robotic systems. Does this not treat human cognition as a passive system, a piece of meat awaiting its engineer to plug in a new program? Is philosophy reduced to a series of programs to be tested in a repeatable environment, a scientific philosophy of the future programming humans like so many robots awaiting their new instructions for the work week?
In fact, as one reads his essay one gets an introduction to the new science of cognitive engineering. Like any good engineering teacher he tells us that "programs are constructions that extract operational content from their axioms and develop different possibilities of realization… from this operational content". The human mind and its cognitive capacities are here reduced to a set of axioms and instructions in which thought is no longer creative but is rather "operational content" to be activated or generated from the module, with all its hidden methods concealing the algorithmic code that operates this new machine. In fact he states just this explicitly, saying the "choice of axioms does not confine the program to the explicit terms of axioms. Rather, it commits the program to their underlying properties and operations specific to their class of complexity". One could find such statements in any first-year course on C++ or Java programming.
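Read literally, the claim that choosing axioms "commits the program to their underlying properties and operations" does resemble a first-year lesson on inheritance: a class commits itself to everything its parent carries, not just the lines it explicitly writes. A toy Python sketch of my own, offered only to show how familiar the idea is to any programmer:

```python
# Hypothetical illustration: choosing an "axiom" (a base class) commits the
# program to operations it never explicitly wrote down.

class Axiom:
    def underlying_operation(self):
        return "operation inherited from the axiom's class of complexity"

class Program(Axiom):
    # The Program states nothing beyond its choice of Axiom, yet it is
    # committed to the whole interface that choice carries with it.
    pass

p = Program()
print(p.underlying_operation())
```

What passes in the essay for a deep thesis about thought is, in code, the most elementary fact about subclassing.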
The effect of this kind of engineering of philosophy is to treat the Mind and cognition as impersonal, passive systems onto which applications can be installed. The Mind as a Machine to be run by carefully engineered master-code crafted by a new academy of Normative Engineers. A Planned Society of passive recipients of applications, who will activate and be controlled by operational systems of normativity administered by an elite of advanced and specialized systems engineers at the behest of some Master Plan? Of course I am making a farce of his work, yet he opens himself up to such a reading by using the notion of programming. After years as an architect, coder, developer, engineer, systems analyst, etc., I've seen from the inside how such visions of development are enacted. To transfer the metaphors of coding onto philosophy offers a hideous scenario at best. Engineering is a tightly controlled, self-replicating, and self-reinforcing practice, bound to a hierarchy of development and testing and to rigid, almost mechanical sets of criteria that follow from various forms of strict metrics and analysis.
Is Negarestani likening cognitive development and philosophy to an interactive program? Yes. As he tells us, in the "programmatic framework, axioms are no longer sacrosanct elements of the system eternally anchored in some absolute foundation, but acting processes that can be updated, repaired, terminated, or composed into composite acts through interaction". Ultimately he treats philosophy "as a special kind of a program whose meaning is dependent upon what it does and how it does it, its operational destinies and possible realizabilities".
One will need to read his essay for oneself. The basic theme is clear:
In the first part of this text (Axioms and Programs), what will be discussed is the overall scope of philosophy as a program that is deeply entangled with the functional architecture of what we call thinking. In the second part (Programs and Realizabilities), the realizabilities of this program will be elaborated in terms of the construction of a form of intelligence that represents the ultimate vocation of thought.
At one point he remarks that the principal normative task of philosophy is that "thought is programmable" and therefore "thought ought to be programmed." The notion that the being of thought is its programmability, and that we should conclude from this the normative thesis and axiom that thought ought to be programmed, seems to enforce a political and/or (non)religious moral "ought" that is not warranted. He then no longer treats thought as the programmable program he has described so far, but asserts that the normative task of this "ought" of thought is that thought has its own rights, that it "explicitly posits its own ends and augments the prospects for what it can do". I am not sure how he can move from thought as a programmable application to thought possessing some essential normative freedom, independent of its operations to the extent that it can posit anything at all, not to mention "ends and means". Which is it? Is thought something passive that can be programmed? Or does it have free will, some inner space of freedom and necessity that allows it to think and program itself? He talks as if this programmable thought were autonomous, had a mind of its own, as if it had a "drive for self-determination and realization". A "drive"? Isn't such a willing (drive) neither self-determined nor realizable, but rather a force of compulsion and necessity?
Somehow he goes from treating thought as an engineering project to one in which thought is autonomous: "thought that makes its autonomy explicit by identifying and constructing its possible realizabilities". Is thought a part? An independent module, or application, a thing that thinks itself independently of the platform and algorithms that inform it? As if it were an autopoietic system that transcends its programming? As he states it:
It does not matter whether such realizers are part of the biological evolution or sociocultural constitution of thought. As long as they exert heteronomous influences on the current realized state and functions of thought, or restrict the future prospects of thought’s autonomy (the scope of its possible realizabilities), they are potential targets of an extensive reprogramming.
But what would an autonomous thought independent of its programmability be? Is thought a person, that it can make its own decisions? Is Reza implying an allegory of thought that can rise above its programmability or even its revisability (reprogramming)? This seems to align with his statement that what is "initiated by philosophy's seemingly innocent axiom is now a program that directs thought to theoretically and practically inquire into its futures—understood as prospects of realizability that are asymmetric to its past and present". Yet if philosophy uses these axioms like modular algorithms that recode thought, programming and directing it to think the future – where is the autonomy of thought in this? Isn't this a one-way, directed engineering project that gives us the appearance of thought thinking its own future, when in fact it is forced into a specific groove, bounded by certain algorithms, to do just that? There is no freedom and autonomy here, only the necessity of the program bounded by the axiom that centers its search initiative.
In fact he uses the psychological terminology of compulsion and force to explain just what he is doing here: "this transformative program is exactly the distillation of the perennial questions of philosophy—what to think and what to do—propelled forward by an as yet largely unapprehended force called philosophy's chronic compulsion to think". The terms "propelled forward", "unapprehended force", and "chronic compulsion" suggest there is no autonomy here but rather some internal/external drivenness and necessity, a compulsion rather than a freedom, binding thought to its current form.
In the last section, which extends the artificialization of thought, he explains that at its core a "thought amplified by philosophy to systematically inquire into the ramifications of its possibility—to explore its realizabilities and purposes—is thought that in the most fundamental sense is a rigorous artificializing program". Here thought is controlled to perform a specific duty, to "systematically inquire": reflection on its own possibilities is itself the outcome not of autonomy but of a process of artificializing thought through a philosophical programming of its inherent functions. A controlled and very well-planned engineering project that unmoors thought from its Enlightenment roots and puts it under the auspices of an impersonal regime of advanced mathematics and engineers. Thus begins the machination of thought as a programmable system regulated by the inner necessity of its implanted axioms, its operations nothing more than the explicit running of implicit code and algorithms.
Looking at the Cynic, Stoic, and Confucian traditions in philosophy, he extracts the "common thesis underlying these programmatic philosophical practices is that in treating thought as the artifact of its own ends, one becomes the artifact of thought's artificial realizabilities". Thought as artifact, and one's bodily life in the world as the outcome of this artifact's artificial programs and axioms. With this, humans are no longer human but machines reduplicating algorithms encoded and shaped by the philosophical engineers of a social project based on some as yet undisclosed normative program.
In fact it is at this point that Reza speaks in awe of this strange new programmable world of non-humans: "This is one of the most potent achievements of philosophy: by formulating the concept of a good life in terms of a practical possibility afforded by the artificial manipulability of thinking as a constructible and repurposable activity, it draws a link between the possibility of realizing thought in the artifact and the pursuit of the good." In other words, we now have the power to program humans for the Good Life of our choosing through the artificial manipulation of thinking itself. As if the utopian impulses of Hitler and Stalin had not already led to such erroneous bargains, we are led to one more utopian gesture at the Good Life – only now a life programmable by some as yet undetermined set of axioms and programs, to be decided for our own good by some master elite?
In fact one wonders if he is thinking of humans at all, or rather of those future artificial beings that might replace us: "The craft of an intelligent life-form that has at the very least all the capacities of the present thinking subject is an extension of the craft of a good life as a life suiting the subject of a thought that has expanded its inquiry into the intelligibility of the sources and consequences of its realization." The notion of a craft of intelligent life-forms? A utopia of robotic life-forms where the Good Life is one without humans, a perfectly programmed world of robots and environment where the only good is autonomous thought, revisable and autonomous – autopoietic and allopoietic?
Right on cue he admits that, as physics, neuroscience, mathematics, logic, and linguistics converge, the world of "computer science has begun to bridge the gap between the semantic complexity of cognition and the computational complexity of dynamic systems, linguistic interaction, and physical interaction". So after all, Negarestani is not looking at a philosophy of programming humans but at forgetting the human and entering the post-human machinic age of artificial life-forms that will supersede us, perfecting and bridging that gap between the semantic complexity of cognition and the computational complexity of dynamic systems, linguistic interaction, and physical interaction. Here thought, freed of its physical limitations in the naked ape, will realize itself as an autonomous artifact, an autopoietic, self-realizing, and adapting program based on an axiomatized and programmable system transcending human finitude. One more religious ideology of our age, spawned out of machines and computer algorithms? Or a Utopian Nightmare world of advanced AIs and robots ruling what is left of the human animal?
I’m not quite sure if humans will go so willingly into this night of the machines. Of course I’ve been wrong before…