I decided to revisit Reza Negarestani’s two-part essay on e-flux, What Is Philosophy? – here and here. His project implies a form of Left Prometheanism which, along with Ray Brassier’s Promethean and Posthuman Freedom (analysed succinctly by David Roden on Enemy Industry), can be associated with the earlier Accelerationist Manifesto, the Accelerationist Reader, etc. In this post I will revisit both Negarestani’s and Brassier’s Prometheanism, which implies a critique of all those philosophies based on forms of Will and Voluntarism.
Voluntarism: A Short History and its Critics
Our notions of voluntarism arise out of the nominalist traditions of late medieval theology, from such thinkers as John Duns Scotus (c. 1265-1308) and William of Ockham (c. 1288-1349), who inaugurated the modern secular separation of nature from the supernatural and the concomitant divorce of philosophy, physics, and ethics from theology – a divorce reinforced by influential early modern figures such as Francisco Suarez (1548-1616).1
St. Thomas Aquinas was a defender of Intellect as a guide to the Good, over the voluntarist notions of Will and the arbitrary interventions of God into human affairs by way of his absolute power. As Pope Benedict XVI would remark “Duns Scotus developed a point to which modernity is very sensitive. It is the topic of liberty and its relation with the will and with the intellect. Our author stresses liberty as a fundamental quality of the will, initiating an approach of a voluntaristic tendency, which developed in contrast with the so-called Augustinian and Thomistic intellectualism. For St. Thomas Aquinas, who follows St. Augustine, liberty cannot be considered an innate quality of the will, but the fruit of the collaboration of the will and of the intellect.”
William of Ockham would affirm the supremacy of the divine will over the divine intellect, and in doing so would encounter a problem: if universals are real (i.e. if natures and essences exist in things, as Aquinas said they did, following Aristotle), then voluntarism cannot be true. Ockham’s solution was unique: he simply denied the reality of universals. Ockham adopts a conceptualist position: while the universal (or concept) exists in the mind beholding a certain particular, it does not exist in the particular itself. Because there are no universals or common natures, there can only be a collection of unrelated individuals (hence, arguably, the rise of modern individualism). With universals removed from the picture, God is free to will as he chooses.
“Voluntarism denotes those philosophers who generally agree, not only in their revolt against excessive intellectualism, but also in their tendency to conceive the ultimate nature of reality as some form of will, hence to lay stress on activity as the main feature of experience, and to base their philosophy on the psychological fact of the immediate consciousness of volitional activity.”
– Susan Stebbing, Pragmatism and French Voluntarism
Nominalism and Voluntarism became eternal bedfellows from that time forward. Yet they would not always be so… therein lies the tale! With universals removed, humans, too, are free to do and make as they see fit. For only what we make can we understand. And in our age we are learning to re-engineer ourselves beyond the confines of those old theological norms that once constrained us to a false equilibrium, and are thereby free to experiment in new modes of being and rationality. Beyond the balance lies the contingent realm of creation rather than possibilities; only the new Promethean dares to enter that medium of exchange.
Ray Brassier and the Promethean Project: Intellect and the Good
As Roden summarizes it, Ray Brassier’s “Unfree Improvisation/Compulsive Freedom” (written for the 2013 collaboration with Basque noise artist Mattin at Glasgow’s Tramway) is a polemic against the voluntarist conception of freedom. The voluntarist understands free action as the uncaused expression of a “sovereign self”. Brassier rejects this supernaturalist understanding of freedom, arguing that we should view freedom not as the determination of an act from outside the causal order, but as self-determination by action within the causal order.2
Ray Brassier, in contradistinction to the above, tells us that a modern Prometheanism “requires the reassertion of subjectivism, but a subjectivism without selfhood, which articulates an autonomy without voluntarism” (471).2 He discovers in Martin Heidegger a twentieth-century critique of metaphysical voluntarism as his starting point, arrived at by way of an essay by Jean-Pierre Dupuy, ‘Some Pitfalls in the Philosophical Foundations of Nanoethics’ (download: pdf).3 In Dupuy’s essay the link between technological Prometheanism and Heidegger’s critique of subjectivism comes by way of Hannah Arendt (471). Brassier sets this religious critique of Prometheanism against the backdrop of both the neoliberal Prometheans found in transhumanist discourse and speculation, and his own account within the Marxist tradition – a tradition neglected by what Williams and Srnicek in their Accelerationist Manifesto derisively term the Kitsch Marxism of our day.
Brassier goes to the core of the conflict that Dupuy and Arendt see in such transhumanist discourses of human enhancement: a breaking of the pact between the given and the made, the fragile equilibrium between human finitude as an ontological fact and its transcendence as Dasein. He puts it pointedly: “Prometheanism denies the ontologisation of finitude” (478). He follows Dupuy’s reasoning from his many works on early cybernetic theory through to the religious works of his late career, understanding that on Dupuy’s view it was the whole philosophical heritage of mechanistic philosophy, culminating in cybernetic theory, that produced the notion that the more we understand ourselves as nothing but contingently generated natural phenomena, the less able we are to define what we should be (483). Because of this, Brassier remarks, our “self-objectification deprives us of the normative resources we need to be able to say that we ought to be this way rather than that” (483).
Yet Brassier turns the tables on Dupuy, discovering in this very notion of equilibrium a hidden element that he finds objectionably theological (495). The point being that for Dupuy the world was designed, made (i.e., a creationist argument); whereas the truth of things, Brassier suggests, is that the world was not made: “it is simply there, uncreated, without reason or purpose” (495) – a claim that strikes at the heart of modern nihilism (see Ray Brassier, Nihil Unbound). Because of this, Brassier sees a new freedom, a release from the false equilibrium, and a way forward: a speculative reason for why we as humans should not fear participating in this uncreated world as creators ourselves. “Prometheanism is the attempt to participate in the creation of the world without having to defer to a divine blueprint” (495). This leads to a further conclusion: if the world is without reason and purpose, then whatever disequilibrium we might introduce is no more harmful than the disequilibrium that already exists in the universe (495).
Brassier brings everything round to the notion of subjectivation from which he started: a modern Prometheanism “requires the reassertion of subjectivism, but a subjectivism without selfhood, which articulates an autonomy without voluntarism” (471). He turns to Alain Badiou’s account of the relation between event and subjectivation and finds it objectionable, yet also discovers the need to reconnect his own account of subjectivation to an analysis of the biological, economic, and historical processes that condition rational subjectivation (487). Such is the great task before us, Brassier remarks: a new Prometheanism that “promises an overcoming of the opposition between reason and imagination: reason is fuelled by imagination, but it can also remake the limits of imagination” (487).
Reza Negarestani: Inhumanism and the Promethean Labors of Intelligence
Inhumanism … finds the consequentiality of commitment to humanity in its practical elaboration and in the navigation of its ramifications. For the true consequentiality of a commitment is a matter of its power to generate other commitments, to update itself in accordance with its ramifications, to open up spaces of possibility, and to navigate the revisionary and constructive imports such possibilities may contain.
– Reza Negarestani, The Labor of the Inhuman, Part II: The Inhuman
I’ve already covered two earlier essays from e-flux by Negarestani on his Inhumanism: here and here (plus links to his original essays: here and here). In these essays he elaborates a program for philosophy as a sort of deprogramming initiative, one espousing the notion that “freedom is not liberation from slavery. It is the continuous unlearning of slavery.” (ibid.) His diagnosis is that the Liberal Humanist Subject is unfree and obsolete: “Liberal freedom, be it a social enterprise or an intuitive idea of being free from normative constraints (i.e. freedom without purpose and designed action), is a freedom that does not translate into intelligence, and for this reason, it is retroactively obsolete.” (ibid.) The inhuman project of freedom that he proposes is that we adopt an autonomous conception of reason, one that will require the “updating of commitments according to the progressive self-actualization of reason”, a struggle that coincides with the “revisionary and constructive project of freedom”.
In these essays his project furthers the Promethean agenda with an up-to-date influx of normative clarity from the Chicago school of neo-Hegelianism, and especially Robert Brandom’s new pragmatics. In this upgrade of Kant’s insight into judgments and actions, one must make a normative distinction between knowing and claiming, because the “things we do with language” (pragmatics) are in this model prior to semantics. Why this should be so is never fully justified, yet it becomes part of Brandom’s inclusion of Wilfrid Sellars’s framework, in which normative appraisals must be placed within a “space of reasons”; within this space one discovers the inferential patterns by which human “entitlements and commitments” are made explicit. Instead of tying thought exhaustively to real-world referents, the new pragmatists hope to work through systematically all the claims and actions to which our practices commit and entitle us, thereby bringing together, after Hegel, the two practices of understanding and reason.
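Brandom’s picture of commitments generating further commitments can be loosely illustrated in code. The sketch below is my own toy model, not anything drawn from Brandom or Negarestani: the class name, the inference rules, and the example claims are all invented for illustration. The one idea it tries to capture is that undertaking a single claim inferentially commits a speaker to a whole cascade of downstream claims within the “space of reasons”.

```python
# Illustrative sketch only: a toy model of Brandom-style "commitments and
# entitlements". All names and rules here are hypothetical inventions.

class ScoreKeeper:
    """Tracks what a speaker is committed to within a 'space of reasons'."""

    def __init__(self, inference_rules):
        # inference_rules: dict mapping a claim to the claims it entails
        self.rules = inference_rules
        self.commitments = set()

    def assert_claim(self, claim):
        """Undertaking a claim commits the speaker to all its ramifications."""
        frontier = [claim]
        while frontier:
            c = frontier.pop()
            if c not in self.commitments:
                self.commitments.add(c)
                # Each new commitment generates further commitments.
                frontier.extend(self.rules.get(c, []))

rules = {
    "this is red": ["this is coloured"],
    "this is coloured": ["this is extended"],
}
speaker = ScoreKeeper(rules)
speaker.assert_claim("this is red")
print(sorted(speaker.commitments))
# ['this is coloured', 'this is extended', 'this is red']
```

One small claim drags two further commitments along with it; what the model leaves out, of course, is everything interesting – entitlement, challenge, and the social scorekeeping between speakers.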
Karl Marx: Fragments on Machines
… it is the machine which possesses skill and strength in place of the worker, is itself the virtuoso, with a soul of its own in the mechanical laws acting through it; and it consumes coal, oil etc., just as the worker consumes food, to keep up its perpetual motion. The worker’s activity, reduced to a mere abstraction of activity, is determined and regulated on all sides by the movement of the machinery, and not the opposite. The science which compels the inanimate limbs of the machinery, by their construction, to act purposefully, as an automaton, does not exist in the worker’s consciousness, but rather acts upon him through the machine as an alien power, as the power of the machine itself. (Fragments on Machines, 1858)
Marx saw what was happening even then: through a perverse inversion and displacement by new economic and technological imperatives, the human was being made obsolete. The Age of Machines had begun. At first these were but the crude labors of a shift toward total abstraction, a world ruled by intelligence rather than based on voluntarist notions of will and freedom. Notions of efficiency, organization, auto-determination, second-order elaboration, and algorithmic determination would graft themselves onto both philosophical and scientific projects fit for an age of capitalist accumulation, crisis, and revision. As Marx would reiterate:
The production process has ceased to be a labour process in the sense of a process dominated by labour as its governing unity. Labour appears, rather, merely as a conscious organ, scattered among the individual living workers at numerous points of the mechanical system; subsumed under the total process of the machinery itself, as itself only a link of the system, whose unity exists not in the living workers, but rather in the living (active) machinery, which confronts his individual, insignificant doings as a mighty organism. (ibid.)
Already we see intelligence as a program distributed not within the “living workers,” but rather in the “living (active) machinery”. The Inhuman was from then on determining its own agenda, one that would make humans as a species obsolete. Let us be clear: all these new Promethean projects, in one form or another, whether on the Right (Land) or the Left (Brassier/Negarestani), seek to empower the inhuman at the expense of the human agenda. Many of the so-called turns toward the non-human in philosophy and the arts are playing into the hands of such programs, whether consciously or not.
The Promethean Agenda: The Extinction of the Human, and Rise of the Machine
As David Roden relates, for Brassier, an avowed naturalist, it is important that this capacity for agency be non-miraculous, and that a mere assemblage of pattern-governed mechanisms can be “gripped by concepts” (Brassier 2011). As he continues:
The act …. remains faceless. But it can only be triggered under very specific circumstances. Acknowledgement of the rule generates the condition for deviating from or failing to act in accordance with the rule that constitutes subjectivity. This acknowledgement is triggered by the relevant recognitional mechanism; it requires no appeal to the awareness of a conscious self…. (Brassier 2013a)
Slowly but surely, all the philosophies of the last hundred years bound to consciousness and the human project are being deprogrammed, obsolesced in favor of impersonalism, inhumanism, and the Promethean Agenda. As Marx would have it:
The development of the means of labour into machinery is not an accidental moment of capital, but is rather the historical reshaping of the traditional, inherited means of labour into a form adequate to capital. The accumulation of knowledge and skills, of the general productive forces of the social brain, is thus absorbed into capital, as opposed to labour, and hence appears as an attribute of capital… (ibid.)
This notion that Capital is the power behind the transmigration and inversion of the human into the inhuman, the organic into inorganic or machinic civilization, is no accident. Nick Land puts it succinctly: “Capital has always sought to distance itself in reality – i.e. geographically – from this brutal political infrastructure. After all, the ideal of bourgeois politics is the absence of politics, since capital is nothing other than the consistent displacement of social decision-making into the marketplace.”3 We’ve seen this in the EU, where politics and economics are divorced, where nation-states have become little more than waystations for taxation and slavery, and where citizens of Left or Right have little authority or power over change, since their dethroned sovereignty is bound to impersonal rules and norms imposed by the economic tribunals out of Belgium. The modern European lives in a totalitarianism without a Leader: an economic inversion of the old fascist state turned corporate, instilling the blank, impersonal, machinic systems of markets, banks, and bureaucratic anonymity that can be neither reasoned with nor challenged by politics.
Marx adeptly remarks on the workers’ struggle against machinery: “What was the living worker’s activity becomes the activity of the machine. Thus the appropriation of labour by capital confronts the worker in a coarsely sensuous form; capital absorbs labour into itself—‘as though its body were by love possessed’”. (ibid.)
Nick Land: All That Is Over Now
“All that is over. We are all foreigners now, no longer alienated but alien, merely duped into crumbling allegiance with entropic traditions.”4 Already in the ’90s Land was diagnosing our predicament: “The capitalist metropolis is mutating beyond all nostalgia. If the schizoid children of modernity are alienated, it is not as survivors from a pastoral past, but as explorers of an impending post-humanity.” For Land capitalism is the driving force behind this emergent inversion of the human into the inhuman:
Capitalism is not a human invention, but a viral contagion, replicated cyberpositively across post-human space. Self-designing processes are anastrophic and convergent: doing things before they make sense. Time goes weird in tactile self-organizing space: the future is not an idea but a sensation. (ibid.)
Tiziana Terranova moves from pure metaphorics to the language of software and algorithms, examining the relationship between ‘algorithms’ and ‘capital’ – that is, “the increasing centrality of algorithms to organizational practices arising out of the centrality of information and communication technologies stretching all the way from production to circulation, from industrial logistics to financial speculation, from urban planning and design to social communication.” (see here)
After describing the typical nature of algorithms (i.e., what they do, the work they perform, how they are situated within certain material and immaterial assemblages, etc.), she remarks that as far as capital is concerned “algorithms are just fixed capital, means of production finalized to achieve an economic return”, just like any other commodity (385). In this sense algorithms have replaced living labor – the worker herself – as the site where the temporal aspects of labor time, disposable time, etc. play themselves out. Instead of the alienated presence of the human in the machine, a mere appendage driving and guiding the machine through its everyday processes, the human has been stripped out of the process altogether as non-essential or disposable, and the algorithm, as an abstract machine, now occupies that site.
Yet, against this worker’s ethic Land tells us a “consummate libidinal materialism is distinguished by its complete indifference to the category of work. Wherever there is labour or struggle there is a repression of the raw creativity which is the atheological sense of matter and which – because of its anegoic effortlessness – seems identical with dying.” (Fanged Noumena, KL 3871-3873). Of course he was taking his model from Deleuze/Guattari of Anti-Oedipus fame:
The body without organs is the model of death. As the authors of horror stories have understood so well, it is not death that serves as the model for catatonia, it is catatonic schizophrenia that gives its model to death, zero intensity. The death model appears when the body without organs repels the organs and lays them aside: no mouth, no tongue, no teeth – to the point of self-mutilation, to the point of suicide. (FN, KL 3664)
Land allegorizes capitalization as the embodiment of Intelligence, as the Thing that coincides with teleoplexy as techonomic naturalism:
or (self-reinforcing) cybernetic intensification, describes the wave-length of machines, escaping in the direction of extreme ultra-violet, among the cosmic rays. It correlates with complexity, connectivity, machinic compression, extropy, free energy dissipation, efficiency, intelligence, and operational capability, defining a gradient of absolute but obscure improvement that orients socio-economic selection by market mechanisms, as expressed through measures of productivity, competitiveness, and capital asset value. (#Accelerate: p. 514)
This formalization of capital proceeds under the guise of allegoresis, with Capital as daemonic agent of Intelligence, with its attendant predication of future ontologies that record and predict the virtual objects already moving at an accelerating pace toward our ontic horizon. Land offers an Accelerationist Research Program that seeks out the traces of this advanced teleoplexic life-form in the virtual/actual developments of current and predictive capitalization. It is only a matter of time before this entity, this hyperteleoplexic object, becomes self-aware, thereby breaching the boundaries of the Techonomic Singularity. Land is the obverse or reactionary contributor to this elaborate manifestation; Negarestani and Brassier are its Left Accelerationist harbingers: the Right (Land) seeks exit and escape, an intelligence explosion in which this chameleon intelligence determines its own fate; while the Left (Negarestani/Brassier) seeks to govern and regulate this agent within the “space of reasons”, where under normative rules and regulatory algorithms it can be induced to work for the Good.
Reza Negarestani: Philosophy is a Program
…philosophy is, at its deepest level, a program—a collection of action-principles and practices-or-operations which involve realizabilities…
– Reza Negarestani, What Is Philosophy? Part One: Axioms and Programs
From the time of Kant till now, philosophy has slowly and methodically replaced the old humanist conceptions of life, self, and culture with an inhumanist and machinic culture based, as Deleuze and Guattari and others would admit, on madness, schizophrenic acceleration, and capitalism. At each step in the process the older medieval humanist religious world-view was inverted, perverted, and replaced by a conception of the cosmos as impersonal and indifferent to human wishes and needs. Against voluntarist notions of God as an arbitrary agent of intervention – whether in Occasionalist theories of causation or in parodies of the debates between nominalists and realists – we have emerged into an age in which machinic intelligence, by way of algorithmic culture, programming, advanced AI, robotics, nanotechnology, and the biogenetic sciences, is remaking the very parameters of life and matter as we have come to know them, from the early Greek metaphysicians to our latest fads of speculative realism and libidinal or dialectical materialisms. We stand on the cusp of strangeness. Negarestani’s vision incorporates much of this heritage in a way that seeks to revise these depleted frameworks and put them on a new footing, under the guiding hand not of some theological God but of the self-determining machinic processes of our latest algorithmic progeny: the machinic intelligences.
As Luciana Parisi states, “If mechanical automation—the automaton of the assembly line, for instance—was a manifestation of the functionalist form that shaped matter, the increasing acceleration of automation led by the development of interactive algorithms (including human-machine and machine-machine interactions) instead reveals the dominance of a practical functionalism whereby form is induced by the movement of matter.” (#Accelerate, “Automated Architecture: Speculative Reason in the Age of the Algorithm”) It is this same functionalism that guides Negarestani to ask new questions of philosophy: “what sort of program is philosophy, how does it function, what are its operational effects, realizabilities specific to which forms does it elaborate, and finally, as a program, what kinds of experimentation does it involve?” (ibid.)
Philosophy becomes an engineering project and handmaid of the sciences, creating conceptual tools that allow a pragmatic evaluation and “exercise of a multistage, disciplined, and open-ended reflection on the condition of the possibility of itself as a form of thought that turns thinking into a program.” (ibid.) In his algorithmic conceptuality he puts it this way: these programs are based on the “selection of a set of axioms, and the elaboration of what follows from this choice if the axioms were treated not as immutable postulates but as abstract modules that can act upon one another” (ibid.). Anyone conversant with today’s object-oriented programming, in Java or C++ or any number of other variants, will recognize this language of algorithmic culture and philosophy. Based on abstraction, impersonalism, mathematics, logic, and axiomatic operational closure, this framework “commits the program to their underlying properties and operations specific to their class of complexity. To put it differently, a program constructs possible realizabilities for the underlying properties of its axioms, it is not essentially restricted to their terms” (ibid.). That being said, Negarestani would not reduce or suture philosophy to software-development models; for him it does not matter at what socio-cultural or functional scale the system is applied. His scheme allows the Program to be used in any number of functional sets that can repeat the basic axioms while operating over a multiplicity of data and behavioral models.
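Negarestani’s analogy between axioms and software modules can be sketched, very loosely, in a few lines of code. To be clear about what is mine and what is his: the functions, names, and composition scheme below are invented for the sake of the analogy and appear nowhere in his essays. The one point the sketch tries to make is his: treating axioms “not as immutable postulates but as abstract modules that can act upon one another” means that the program elaborates different consequences depending on which modules are selected and how they are composed.

```python
# Loose illustration of axioms as composable modules rather than fixed
# postulates. The two "axioms" and the elaboration scheme are hypothetical.

def axiom_double(x):
    return x * 2

def axiom_increment(x):
    return x + 1

def elaborate(axioms, seed):
    """A 'program': elaborate what follows from a chosen set of axioms
    by letting each module act on the output of the previous ones."""
    result = seed
    trace = [seed]
    for ax in axioms:
        result = ax(result)
        trace.append(result)
    return trace

# A different selection or ordering of axioms yields a different
# elaboration ("realizability") from the same starting point.
print(elaborate([axiom_double, axiom_increment], 3))   # [3, 6, 7]
print(elaborate([axiom_increment, axiom_double], 3))   # [3, 4, 8]
```

The point of the toy is only that the “same” axioms, treated as modules acting on one another, generate divergent outcomes depending on their composition; nothing here should be read as a formalization of Negarestani’s actual program.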
Putting it back into philosophical terminology he’ll state it this way:
This is precisely how philosophy is approached here. Rather than by starting from corollaries (the import of its discourse as a specialized discipline, what it discusses, and so on), philosophy is approached as a special kind of a program whose meaning is dependent upon what it does and how it does it, its operational destinies and possible realizabilities. (ibid.)
The two essays work hand in hand. The first elaborates Axioms and Programs: philosophy as a program deeply entangled with the functional architecture of what we call thinking. In the second part, Programs and Realizabilities, the realizabilities of the philosophical program are elaborated in terms of the construction of a form of intelligence that represents the ultimate vocation of thought.
This is a high-level, abstract overview of a philosophical framework with which he seeks to lure engineers, scientists, and the economic powers into an alliance, one that entails nothing less than the common thesis underlying these programmatic philosophical practices: “in treating thought as the artifact of its own ends, one becomes the artifact of thought’s artificial realizabilities” (ibid.). He sees the various traditions of philosophy transformed by this new framework: the place where the idealist-rationalist and materialist-empiricist trajectories have been converging in the most radical way has been computer science, where physics, neuroscience, mathematics, logic, and linguistics come together. This has been particularly the case in the wake of recent advances in fundamental theories of computation, especially theories of computational dualities and their application to multiagent systems as optimal environments for designing advanced artificial intelligence. (ibid.)
Ultimately this movement toward a new framework, within which the artificial realization of general intelligence becomes an expression of thought’s autonomy (in the sense of a wide-ranging program that integrates materials, intelligibilities, and instrumentalities in the construction of its realizabilities), should not, he tells us, be confused with the “fetishization of natural intelligence in the guise of self-organizing material processes, or a teleological faith in the deep time of the technological singularity—an unwarranted projection of the current technological climate into the future through the over-extrapolation of cultural myths surrounding technology or through hasty statistical inductions based on actual yet disconnected technological achievements.” (ibid.)
Against any notion that he is a technological imperialist of technology for technology’s sake, he admonishes that we should affirm rather an autonomous impersonalism of thought itself: a “mandate from the autonomy of thought’s ends and demands” (ibid.). Ultimately this is a philosophy of the artificial in which the “vocation of thought is not to abide by and perpetuate its evolutionary heritage but to break away from it. Positing the essential role of biology in the evolutionary contingent history of thought as an essentialist nature for thought dogmatically limits how we can imagine and bring about the future subjects of thought. But the departure from the evolutionary heritage of thought is not tantamount to a withdrawal from its natural history.” (ibid.)
Even though he tries to persuade us that he is not a technologist at heart, his program is aligned with the machine culture of algorithms and the artificial rather than with the older humanist traditions. In fact, as he suggests, this form of intelligence can only develop a conception of itself as a self-cultivating project if it engages in something that plays the role of what we call philosophy, not as a discipline but as a program of combined theoretical and practical wisdoms running in the background of all its activities (ibid.). And his stated goal for this new framework is the “good life”:
…the most defining feature of this intelligence is that its life is not simply an intelligent protraction of its existence but the crafting of a good or satisfying life. And what is a satisfying life for such a species of intelligence if not a life that is itself the crafting of intelligence as a complex multifaceted program comprising self-knowledge, practical truths, and unified striving? (ibid.)
Even more explicitly he admits that for an “intelligence whose criterion of self-interest is truly itself—i.e., the autonomy of intelligence—the ultimate objective ends are the maintenance and development of that autonomy, and the liberation of intelligence through the exploration of what it means to satisfy the life of thought.” (ibid.)
The liberation of intelligence from its biological heritage, the development of autonomous systems based on algorithmic, axiomatic operations and programs, the pursuit of realizability beyond current human wants and needs, requiring only that the “life of thought” be satisfied: this is Negarestani’s utopian paradise of the artificial worlds of our future, one which seems to portend the demise of the human and the rise of the Machine as our successor species. As he says:
…the ultimate form of intelligence is the artificer of a good life—that is to say, a form of intelligence whose ultimate end is the objective realization of a good life through an inquiry into its origins and consequences in order to examine and realize what would count as satisfying for it, all things considered. It is through the crafting of a good life that intelligence can explore and construct its realizabilities by expanding the horizons of what it is and what can qualify as a satisfying life for it.
One day our artificial children may attain the ethics of the Real that we could only attempt: one based not on some external authority or institution, but arising as part of their own self-determining initiative, from the algorithms of a self-programmed and self-revising culture of machinic intelligence. Whether one agrees or disagrees with Negarestani, he has a clear and precise framework within which he places the hopes and dreams of Reason toward goals other than the human. Yes, there are many strands in our current milieu, certain tendencies that seem to be pushing the artificialization of culture and civilization, along with all the philosophical, scientific, and aesthetic aspects of this long trek toward a new species. Will it happen? Will we be replaced, overcome by a machinic intelligence; or will we ourselves gain a foothold in higher forms of intelligence and collective memory and knowledge, working as we always have hand in hand with our technologies? Will there be a new framework for the sciences and philosophy in our time? Are we not reaching after new linguistic and material designations for this transformation in our midst? Transitional beings that we are, we do not have the answers to these questions, only the truth that such things might come to pass with or without our intervention. Remember that the heritage of Greece and Rome was once lost for a thousand years amid religious, military, and social chaos and blind dictatorial ignorance. We, too, could fall prey to our own worst instincts and become immersed in global turmoil that might spell the collapse of all such dreams of Reason. Nothing is written in stone. Not even intelligence.
Sometimes when I read all these various arguments I return to Nietzsche’s perspectivism: a sense that, as the Analyticals would have it, we’re seeking to hone the concepts down to a descriptive or propositional pragmatics or axiomatic system, but that reality is more like Zizek’s ideological screen or fantasy split between the Real and the Symbolic (big Other), which like the ancient Proteus of Greek myth is ever-changing and metamorphic, and cannot be locked down to any one specific semantic context or meaning. That seems to be the central problem of our time of transition: the various philosophical and theoretical/naturalist discourses and frameworks are stuck in the older metaphysical universe, seeking to slay the dragon of Instrumental (Enlightenment) Reason and to find a way into a new scientific/philosophic framework more in alignment with the anomalous data of our empirical world and scientific apparatuses, along with an inversion of the past two hundred years of anti-Platonic thought. We’ve uncovered data that cannot be reconciled with or reduced to the older metaphysical conceptuality, and seem forever stuck in a repetition of transitional forms which echo but do not step outside the older circle of our Kantian despair.
The post-phenomenological framework – whatever it might entail – will begin with a new conceptuality, one that has worked through the old circles of Kant’s errors and transformed its problems into something utterly new and wild. Of course philosophy never did lead the sciences; as Badiou rightly says, the sciences are but one of the conditions of philosophy. It goes both ways. What’s funny is that after two centuries of anti-Platonism, we’re seeing Plato return in the guise of a hypermaterialist in Badiou, Zizek, and Johnston… how strange bedfellows cast such altered forms. Materialism in both its dialectical (Badiou, Zizek, Johnston) and vitalist (Deleuze, Braidotti, DeLanda, SR, etc.) forms seems to have gone immaterialist in our time, siding more and more with a transformation of the older Platonic debates over Substance/Subject and Form/Idea into an immaterial materialism, whose discourse on Substance is no longer substantialist or Aristotelian, and whose causality is breaking free of its reasons, determinations, and necessity (Meillassoux: After Finitude, et al.); becoming part of a purely contingent universe that is no longer determined by Ananke (Necessity), but totally determined by contingency (Whim/Wildness).
What we’re moving through is Scott Bakker’s Semantic Apocalypse*: the older concepts are breaking apart as in James Joyce’s puns, the etym(ological)-smasher opening onto a universe of dream ontologies based on Hegel/Schelling’s sense of the ‘night of the world’. Interpretation is dead: no sense of semantics will help us. Even your work is a sign of this wavering In-Between, a sense that we cannot speak or say (Sophist/Wittgenstein) what is coming, yet we must invent new forms of reasoning (beyond the instrumental) and conceptuality during this phase of transitional discourse – forms not reducible to either the older naturalisms (physicalism) or the idealist/materialist debates of substance/object dualities, or even dual-aspect monism (Kant) or anomalous monism (Davidson).
*Semantic Apocalypse: the thought being that the more we come to know about how the brain really works, the more it will seem as though meaning and intent(ionality) are a sort of illusion (folk psychology) — something the brain generates in order to organize information — and in no way corresponding to what’s really going on. Since the brain is adapted to modeling what is going on in the external environment (including the social environment), it doesn’t need to be good at modeling itself. So the categories we use to describe “mental phenomena”, such as “intentionality”, are just cognitive reductions (metaphor: tropes) we rely on to compensate for the brain’s lack of transparency to itself. We are essentially blind to our own lack of knowledge, and assume we know what we in fact do not know. (see here)
In Lacan’s terms what we are blind to is the Real – the paradox of the Mobius Strip upon which we live, never able to directly access this realm of being or know it (epistemically), for the simple reason that it cannot be reduced to the Symbolic (big Other: Culture, Language, Discourse, etc.). Yet we interpellate its effects indirectly on our subjectivation as signs which waver between the Imaginary (fantasy of reality) and its ideological screen of materiality (Zizek).
One of the commenters added a link to a book that explores another of Reza’s notions: computational functionalism: http://meson.press/wp-content/uploads/2015/11/978-3-95796-066-5-Alleys_of_Your_Mind.pdf
I think his use of axiomatic systems is meant to differentiate his own notions, which are based on epistemic relations (psychology), from ontologically based axiomatic systems such as Badiou’s matheme = ontology, etc.
I’ll have to read this paper, but it may be that while computational and algorithmic descriptive terms differ, the truth under the hood is that programs are algorithmic all the way down and up. Even the functional models are based on algorithmic designations. (But I’ll hold off, since I haven’t read the paper yet, to see what he offers as the ‘difference’ between the two.)
As I’m reading it, he makes a distinction between intrinsic and algorithmic computation. Intrinsic computation is concerned more with the governance and regulations that constrain functional structures and carry them from one state to another, while algorithmic computation is more concerned with the actual state and behavior itself: the inputs and outputs of the execution of single or multiple programs.
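A toy sketch may make the contrast concrete. This is my own analogy, not the paper’s formalism: the same little machine can be viewed algorithmically (as the input/output behavior of a run) or intrinsically (as the constraint that governs which states it is allowed to occupy as it moves from one to another).

```python
def algorithmic_view(inputs):
    """Algorithmic: just the input/output behavior of an execution."""
    state = 0
    outputs = []
    for x in inputs:
        state = (state + x) % 10   # one execution step
        outputs.append(state)
    return outputs

def intrinsic_constraint(state):
    """Intrinsic: the regulative constraint every reachable state must
    satisfy, independent of any particular run."""
    return 0 <= state < 10

run = algorithmic_view([3, 4, 8, 6])
# Every state the execution passes through is maintained within the
# bounds the intrinsic constraint imposes.
assert all(intrinsic_constraint(s) for s in run)
```

The algorithmic view is exhausted by the trace of a run; the intrinsic view says nothing about any particular run, only about what the machine’s organization permits.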
His main argument comes here:
“In reality, neither functionalism nor computationalism entails one another. But if they are taken as implicitly or explicitly related, that is, if the functional organization (with functions having causal or logical roles) is regarded as computational either intrinsically or algorithmically, then the result is computational functionalism” (Page 140).
I think a great many real-world applications fit just such a scenario as he suggests. More and more companies deal in rule-based systems built on bus architectures that allow for interventions and communications among a multiplicity of agents, both human and machinic. Such systems rely on a combination of governance/regulation intervention (i.e., intervention at the level of the give and take of functions), while also maintaining the usual elaboration of intrinsic algorithms that adapt to the specifics of these interventions, executing internal (hidden) code/matrices based either on auto-modification or on advanced replication algorithms of data (state) analysis. (One can think of the current use of such computational functionalism in advanced AI high-frequency stock market trading systems; think of Michael Lewis’s Flash Boys, or Scott Patterson’s Dark Pools.)
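The shape of such a system can be sketched in a few lines. All names here are hypothetical, not drawn from any vendor’s product: agents publish messages onto a shared bus, and a governance layer of rules may intervene on each message (blocking or rewriting it) before the subscribing agents ever execute on it — roughly the two layers distinguished above.

```python
class Bus:
    """A minimal rule-based message bus with a governance layer."""

    def __init__(self):
        self.subscribers = []
        self.rules = []            # governance/regulation interventions

    def add_rule(self, rule):
        # rule: message -> message (possibly rewritten), or None (blocked)
        self.rules.append(rule)

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        for rule in self.rules:            # intervention layer
            message = rule(message)
            if message is None:
                return                      # message blocked by governance
        for handler in self.subscribers:    # execution layer
            handler(message)

bus = Bus()
log = []
bus.subscribe(log.append)
# Hypothetical governance rule: block any trade above a risk threshold.
bus.add_rule(lambda m: None if m["qty"] > 1000 else m)
bus.publish({"agent": "human", "qty": 100})
bus.publish({"agent": "machine", "qty": 5000})   # intercepted, never executed
```

The point of the sketch is only the division of labor: the rules intervene at the level of the give and take between agents, while the handlers carry out the actual state-and-behavior work.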
In some ways, as these advanced computational functional systems evolve they will display state and behavior that can only be construed as intelligent. Which then opens the metaphysical question, What is intelligence?, or the non-metaphysical one, What are the conditions that give rise to intelligence? I think it’s the latter question that Reza is concerned with, rather than the metaphysical question of what intelligence is. He seems to accept that general intelligence is the task of philosophy at this time, so he is concerned with how to bring about the conditions favorable to setting intelligence free of its restrictive organic modes of being – opening up modes other than the organic through a computational functionalism that sets aside the debates over intentionalism, affect, and human relations altogether.
In fact after elaborating several functionalist theories Reza will state flatly:
A philosopher should endorse at least one type of functionalism insofar as thinking is an activity and the basic task of the philosopher is to elaborate the ramifications of engaging in this activity in the broadest sense and examine conditions required for its realization. (Page 141).
He seems to be already seeking to know such conditions in this statement:
Now insofar as this analytical investigation identifies and maps conditions required for the realization of mind-specific activities, it is also a program for the functional realization and construction of cognitive abilities. (Page 142).
He’ll attack classical AI theory at its core: its refusal of the “irreducible and fundamental interactive-social dimension of the core components of cognition such as concept-use, semantic complexity and material inferences that the classical program of artificial intelligence in its objective to construct complex cognitive abilities has failed to address and investigate” (Page 145).
In his conclusion he’ll tell us his project is one in which humanity elaborates in practice a question already raised in the physical sciences: “To what extent does the manifest image of the man-in-the-world survive?” (Sellars 2007, 386). Arguing against those who fear a human/machine divide over the problem of thinking, he’ll tell us that from “a functionalist perspective, what makes a thing a thing is not what a thing is but what a thing does. In other words, the functional item is not independent of its activity” (Page 148). This pragmatic turn defines his project’s computational functionalism.
Reza sees in the several decenterings or revolutions of the Copernican, Darwinian, Newtonian, and Einsteinian turns a methodical displacement of the human, but with Turing we see not a decentering of the human but rather the elimination of the human from the equation of intelligence (Page 149). After this, the functional conceptuality of the mind is no longer bound to the human equation, and is set free to enable a post-intentional and post-human philosophy and science. He’ll describe it this way:
Whatever arrives back from the future—which is in this case, both the mind implemented in a machine and a machine equipped with the mind—will be discontinuous to our historical anticipations regarding what the mind is and what the machine looks like. (Page 149).
This aligns well with Roden’s “disconnection thesis”. Reza will apply a Sellarsian image updated with the notion of a “computational image”, saying, “As the human imprints and proliferates its image in machines, the machine reinvents the image of its creator, re-implements, and in the process revises it” (Page 150). Which brings him to the natural/artificial divide that many fear, but which seems in our moment to be breached in various new philosophies and sciences:
Realizing the mind through the artificial by swapping its natural constitution or biological organization with other material or even social organizations is a central aspect of the mind. Being artificial, or more precisely, expressing itself via the artifactual is the very meaning of the mind as that which has a history rather than an essential nature. Here the artificial expresses the practical elaboration of what it means to adapt to new purposes and ends without implying a violation of natural laws. To have a history is to have the possibility of being artificial—that is to say, expressing yourself not by way of what is naturally given to you but by way of what you yourself can make and organize (Page 151).
He seems to read Turing’s project as a new humanism, an update to the existing enlightenment traditions:
The significance of the human lies not in its uniqueness or in a special ontological status but in its functional decomposability and computational constructability through which the abilities of the human can be upgraded, its form transformed, its definition updated and even become susceptible to deletion. (Page 153). … Turing’s computational project contributes to the project of enlightened humanism by dethroning the human and ejecting it from the center while acknowledging the significance of the human in functionalist terms. (Page 154).
Yet, why worry over the human? If all these various revolutions have been step by step dethroning the human from the center of thought, why not just elide it altogether? If you elide the ‘human’ from the equation, what is left is the primacy of Intelligence without the baggage of past philosophical problems. One need no longer discuss affect, will, emotion, mental intent, etc. One is left with a mode of being based solely on the conditions of intelligence alone. I may be wrong, but this seems to be where his current essays are tending.
- J.B. Schneewind. The Invention of Autonomy. (Cambridge University Press, 1998)
- David Roden. “Promethean and Posthuman Freedom: Brassier on Improvisation and Time.” Enemy Industry: http://enemyindustry.net/blog/?p=5895
- Nick Land. Fanged Noumena: Collected Writings 1987–2007. (Urbanomic/Sequence Press, 2013). Kindle Edition, Locations 808–810.
- #Accelerate: The Accelerationist Reader. Robin Mackay and Armen Avanessian (eds.). (Urbanomic, 2014)