Reza Negarestani: Prometheanism, Intelligence, Self-Determination

Decided to revisit Reza Negarestani’s two-part essay on e-flux concerning What is Philosophy? (here and here). His project implies a form of Left Prometheanism which I take to be associated – along with Ray Brassier’s Promethean and Posthuman Freedom (analysed succinctly by David Roden on Enemy Industry) – with the earlier Accelerationist Manifesto, the Accelerationist Reader, etc. In this post I will revisit both Negarestani’s and Brassier’s Prometheanism, which implies a critique of all those philosophies based on forms of Will and Voluntarism.

Voluntarism: A Short History and its Critics

Our notions of voluntarism arise out of the nominalist traditions of late medieval theology, in such thinkers as John Duns Scotus (c. 1265-1308) and William of Ockham (c. 1288-1349), who inaugurated the modern secular separation of nature from the supernatural and the concomitant divorce of philosophy, physics, and ethics from theology, a separation reinforced by influential early modern figures such as Francisco Suarez (1548-1616).1

St. Thomas Aquinas was a defender of Intellect as a guide to the Good, over the voluntarist notions of Will and the arbitrary interventions of God into human affairs by way of his absolute power. As Pope Benedict XVI would remark “Duns Scotus developed a point to which modernity is very sensitive. It is the topic of liberty and its relation with the will and with the intellect. Our author stresses liberty as a fundamental quality of the will, initiating an approach of a voluntaristic tendency, which developed in contrast with the so-called Augustinian and Thomistic intellectualism. For St. Thomas Aquinas, who follows St. Augustine, liberty cannot be considered an innate quality of the will, but the fruit of the collaboration of the will and of the intellect.”

William of Ockham would affirm the supremacy of the divine will over the divine intellect, and in doing so would encounter a problem: if universals are real (i.e., if natures and essences exist in things, as Aquinas said they did following Aristotle) then voluntarism cannot be true. Ockham’s solution was unique: he simply denied the reality of universals. Ockham adopts a conceptualist position: while the universal (or concept) exists in the mind beholding a certain particular, it does not exist in the particular itself. Because there are no universals or common natures, there can only be a collection of unrelated individuals (and herein, arguably, lies the rise of modern individualism). With universals removed from the picture, God is free to will as he chooses.

“Voluntarism denotes those philosophers who generally agree, not only in their revolt against excessive intellectualism, but also in their tendency to conceive the ultimate nature of reality as some form of will, hence to lay stress on activity as the main feature of experience, and to base their philosophy on the psychological fact of the immediate consciousness of volitional activity.”

      – Susan Stebbing, Pragmatism and French Voluntarism

Nominalism and Voluntarism became eternal bedfellows from that time forward. Yet they would not always be so… therein lies the tale! With universals removed, humans, too, are free to do and make as they see fit. For only what we make can we understand. And in our age we are learning to re-engineer ourselves beyond the confines of those old theological norms that once constrained us to a false equilibrium, and are thereby free to experiment in new modes of being and rationality. Beyond that balance lies the contingent realm of creation rather than mere possibility; only the new Promethean dares to enter that medium of exchange.

Ray Brassier and the Promethean Project: Intellect and the Good

As Roden summarizes it, Ray Brassier’s “Unfree Improvisation/Compulsive Freedom” (written for the 2013 collaboration with Basque noise artist Mattin at Glasgow’s Tramway) … is a polemic against the voluntarist conception of freedom. The voluntarist understands free action as the uncaused expression of a “sovereign self”. Brassier rejects this supernaturalist understanding of freedom, arguing that we should view freedom not as the determination of an act from outside the causal order, but as self-determination by action within the causal order.2

Ray Brassier, in contradistinction to the above, tells us that a modern Prometheanism “requires the reassertion of subjectivism, but a subjectivism without selfhood, which articulates an autonomy without voluntarism” (471).2 He discovers in Martin Heidegger a twentieth-century critique of metaphysical voluntarism as his starting point, arrived at by way of an essay by Jean-Pierre Dupuy, ‘Some Pitfalls in the Philosophical Foundations of Nanoethics’ (download: pdf).3 In Dupuy’s essay the link between technological Prometheanism and Heidegger’s critique of subjectivism comes by way of Hannah Arendt (471). Brassier sets this religious critique of Prometheanism against the backdrop of both the neoliberal Prometheans found in transhumanist discourse and speculation, and his own account within the Marxist tradition, a tradition neglected by what Williams and Srnicek in their Accelerationist Manifesto derisively term the Kitsch Marxism of our day.

Brassier goes to the core of the conflict that Dupuy and Arendt see in such transhumanist discourses of human enhancement: the breaking of the pact between the given and the made, the fragile equilibrium between human finitude as an ontological fact and its transcendence as Dasein. He puts it pointedly: “Prometheanism denies the ontologisation of finitude” (478). He follows Dupuy’s reasoning through his many works, from early cybernetic theory through the religious works of his late life, understanding that in Dupuy’s view it was the whole philosophical heritage of mechanistic philosophy, culminating in cybernetic theory, that produced the predicament: the more we understand ourselves as nothing but contingently generated natural phenomena, the less able we are to define what we should be (483). Because of this, Brassier remarks, our “self-objectification deprives us of the normative resources we need to be able to say that we ought to be this way rather than that” (483).

Yet Brassier turns the tables on Dupuy and discovers in this very notion of equilibrium a hidden element that he finds objectionably theological (495). The point is that for Dupuy the world was designed, made (i.e., a creationist argument); whereas the truth of things, as Brassier suggests, is that the world was not made: “it is simply there, uncreated, without reason or purpose” (495), a claim that strikes at the heart of modern nihilism (see Ray Brassier, Nihil Unbound). Because of this Brassier sees a new freedom, a release from the false equilibrium, and a way forward: a speculative reason for why we as humans should not fear participating in this uncreated world as creators ourselves. “Prometheanism is the attempt to participate in the creation of the world without having to defer to a divine blueprint” (495). This leads to a further conclusion: if the world is without reason and purpose, then whatever disequilibrium we might introduce is no more harmful than the disequilibrium that already exists in the universe (495).

Brassier brings everything round to the notion of subjectivation from which he started: that a modern Prometheanism “requires the reassertion of subjectivism, but a subjectivism without selfhood, which articulates an autonomy without voluntarism” (471). He turns to Alain Badiou’s account of the relation between event and subjectivation and finds it objectionable, yet also discovers the need to reconnect his own account of subjectivation to an analysis of the biological, economic, and historical processes that condition rational subjectivation (487). Such is the great task before us, Brassier remarks: a new Prometheanism that “promises an overcoming of the opposition between reason and imagination: reason is fuelled by imagination, but it can also remake the limits of imagination” (487).

Reza Negarestani: Inhumanism and the Promethean Labors of Intelligence

Inhumanism … finds the consequentiality of commitment to humanity in its practical elaboration and in the navigation of its ramifications. For the true consequentiality of a commitment is a matter of its power to generate other commitments, to update itself in accordance with its ramifications, to open up spaces of possibility, and to navigate the revisionary and constructive imports such possibilities may contain.
– Reza Negarestani, The Labor of the Inhuman, Part II: The Inhuman

I’ve already covered two earlier essays from e-flux by Negarestani on his Inhumanism: here and here (plus links to his original essays: here and here). In these essays he elaborates a program for philosophy as a sort of deprogramming initiative, one that espouses the notion that “freedom is not liberation from slavery. It is the continuous unlearning of slavery.” (ibid.) His diagnosis is that the Liberal Humanist Subject is unfree and obsolete: “Liberal freedom, be it a social enterprise or an intuitive idea of being free from normative constraints (i.e. freedom without purpose and designed action), is a freedom that does not translate into intelligence, and for this reason, it is retroactively obsolete.” (ibid.) The inhuman project of freedom that he proposes is that we adapt to an autonomous conception of reason, one that will require the “updating of commitments according to the progressive self-actualization of reason”, a struggle that will coincide with the “revisionary and constructive project of freedom”.

In these essays his project furthers the Promethean agenda with an up-to-date influx of normative clarity from the Pittsburgh school of neo-Hegelianism, and especially Robert Brandom’s new pragmatics. In this upgrade of Kant’s insight into judgments and actions we discover that one must make a normative distinction between knowing and claiming, because the “things we do with language” (pragmatics) are in this model prior to semantics. Why this should be is never fully explained, yet it becomes part of Brandom’s incorporation of Wilfrid Sellars’s framework, in which normative appraisals must be placed within a “space of reasons”; within this space one discovers the inferential patterns by which human “entitlements and commitments” are made explicit. Instead of tying thought to real-world referents in some exhaustive manner, the new pragmatists hope instead to work through systematically all the claims and actions that commit and entitle us, thereby bringing together, after Hegel, the two practices of understanding and reason.

Karl Marx: Fragments on Machines

… it is the machine which possesses skill and strength in place of the worker, is itself the virtuoso, with a soul of its own in the mechanical laws acting through it; and it consumes coal, oil etc., just as the worker consumes food, to keep up its perpetual motion. The worker’s activity, reduced to a mere abstraction of activity, is determined and regulated on all sides by the movement of the machinery, and not the opposite. The science which compels the inanimate limbs of the machinery, by their construction, to act purposefully, as an automaton, does not exist in the worker’s consciousness, but rather acts upon him through the machine as an alien power, as the power of the machine itself. (Fragments on Machines, 1858)

Marx saw what was happening even then: through a perverse inversion and displacement, the human was being made obsolete by new economic and technological imperatives. The Age of Machines had begun. At first these were but the crude labors of a shift toward total abstraction, a world ruled by intelligence rather than based on voluntarist notions of will and freedom. Notions of efficiency, organization, auto-determination, second-order elaboration, and algorithmic determination would graft themselves onto both philosophical and scientific projects fit for an age of capitalist accumulation, crisis, and revision. As Marx would reiterate:

The production process has ceased to be a labour process in the sense of a process dominated by labour as its governing unity. Labour appears, rather, merely as a conscious organ, scattered among the individual living workers at numerous points of the mechanical system; subsumed under the total process of the machinery itself, as itself only a link of the system, whose unity exists not in the living workers, but rather in the living (active) machinery, which confronts his individual, insignificant doings as a mighty organism. (ibid.)

Already we see intelligence as a distributed program residing not in the “living workers,” but in the “living (active) machinery”. The Inhuman was from then on determining its own agenda, one that would make humans as a species obsolete. Let us be clear: all these new Promethean projects, in one form or another, whether on the Right (Land) or the Left (Brassier/Negarestani), seek to empower the inhuman at the expense of the human agenda. Many of the so-called turns toward the non-human in philosophy and the arts are playing into the hands of such programs, whether consciously or not.

The Promethean Agenda: The Extinction of the Human, and Rise of the Machine

As David Roden relates, for Brassier, an avowed naturalist, it is important that this capacity for agency is non-miraculous, and that a mere assemblage of pattern-governed mechanisms can be “gripped by concepts” (Brassier 2011). As he continues:

The act …. remains faceless. But it can only be triggered under very specific circumstances. Acknowledgement of the rule generates the condition for deviating from or failing to act in accordance with the rule that constitutes subjectivity. This acknowledgement is triggered by the relevant recognitional mechanism; it requires no appeal to the awareness of a conscious self…. (Brassier 2013a)

Slowly but surely all the philosophies of the last hundred years bound to consciousness and the human project are being de-programmed, obsolesced in favor of impersonalism, inhumanism, and the Promethean Agenda. As Marx would have it:

The development of the means of labour into machinery is not an accidental moment of capital, but is rather the historical reshaping of the traditional, inherited means of labour into a form adequate to capital. The accumulation of knowledge and skills, of the general productive forces of the social brain, is thus absorbed into capital, as opposed to labour, and hence appears as an attribute of capital… (ibid.)

This notion that Capital is the power behind this transmigration and inversion of the human into the inhuman, the organic into inorganic or machinic civilization, is no accident. Nick Land puts it succinctly: “Capital has always sought to distance itself in reality – i.e. geographically – from this brutal political infrastructure. After all, the ideal of bourgeois politics is the absence of politics, since capital is nothing other than the consistent displacement of social decision-making into the marketplace.”3 We have seen this in the EU, where politics and economics are divorced, where nation states become little more than waystations for taxation and slavery, and where citizens of Left or Right have little authority or power over change, since their de-throned sovereignty is bound to impersonal rules and norms imposed by the economic tribunals out of Belgium. The modern European lives in a totalitarianism without a Leader: an economic inversion of the old fascist state turned corporate, installing the blank, impersonal, machinic systems of markets, banks, and bureaucratic anonymity that can be neither reasoned with nor challenged by politics.

Marx adeptly remarks on the “workers’ struggle against machinery”: “What was the living worker’s activity becomes the activity of the machine. Thus the appropriation of labour by capital confronts the worker in a coarsely sensuous form; capital absorbs labour into itself—‘as though its body were by love possessed’”. (ibid.)

Nick Land: All That Is Over Now

“All that is over. We are all foreigners now, no longer alienated but alien, merely duped into crumbling allegiance with entropic traditions.”4 Already in the 90s Land was diagnosing our predicament: “The capitalist metropolis is mutating beyond all nostalgia. If the schizoid children of modernity are alienated, it is not as survivors from a pastoral past, but as explorers of an impending post-humanity.” For Land capitalism is the driving force behind this emergent inversion from the human to the inhuman:

Capitalism is not a human invention, but a viral contagion, replicated cyberpositively across post-human space. Self-designing processes are anastrophic and convergent: doing things before they make sense. Time goes weird in tactile self-organizing space: the future is not an idea but a sensation. (ibid.)

Tiziana Terranova moves from pure metaphorics to the language of software and algorithms, telling us of the relationship between “algorithms” and “capital”—that is, of “the increasing centrality of algorithms to organizational practices arising out of the centrality of information and communication technologies stretching all the way from production to circulation, from industrial logistics to financial speculation, from urban planning and design to social communication”. (see here)

After describing the typical nature of algorithms (i.e., what they do, the work they perform, how they are situated within certain material and immaterial assemblages, etc.), she remarks that as far as capital is concerned “algorithms are just fixed capital, means of production finalized to achieve an economic return”, just like any other commodity (385). In this sense algorithms have replaced living labor, the worker herself, as the site where the temporal aspects of labor time, disposable time, etc., play themselves out. Instead of the alienated presence of the human in the machine, a mere appendage driving and guiding the machine through its everyday processes, the human has been stripped out of the process altogether as non-essential or disposable, and the algorithm, as an abstract machine, is now situated in that site.

Yet, against this worker’s ethic, Land tells us that a “consummate libidinal materialism is distinguished by its complete indifference to the category of work. Wherever there is labour or struggle there is a repression of the raw creativity which is the atheological sense of matter and which – because of its anegoic effortlessness – seems identical with dying.” (Fanged Noumena, KL 3871-3873). Of course he takes his model from the Deleuze/Guattari of Anti-Oedipus fame:

The body without organs is the model of death. As the authors of horror stories have understood so well, it is not death that serves as the model for catatonia, it is catatonic schizophrenia that gives its model to death, zero intensity. The death model appears when the body without organs repels the organs and lays them aside: no mouth, no tongue, no teeth – to the point of self-mutilation, to the point of suicide. (FN, KL 3664)

Land will allegorize capitalization as the embodiment of Intelligence, as the Thing that coincides with teleoplexy as techonomic naturalism:

Teleoplexy, or (self-reinforcing) cybernetic intensification, describes the wave-length of machines, escaping in the direction of extreme ultra-violet, among the cosmic rays. It correlates with complexity, connectivity, machinic compression, extropy, free energy dissipation, efficiency, intelligence, and operational capability, defining a gradient of absolute but obscure improvement that orients socio-economic selection by market mechanisms, as expressed through measures of productivity, competitiveness, and capital asset value. (#Accelerate: p. 514)

This formalization of capital under the guise of allegoresis casts Capital as the daemonic agent of Intelligence, with its attendant prediction of future ontologies that record and forecast the virtual objects already moving at an accelerating pace toward our ontic horizon. Land offers an Accelerationist Research Program that seeks out the traces of this advanced teleoplexic life-form in the virtual/actual developments of current and predictive capitalization. It is only a matter of time before this entity, or hyperteleoplexic object, becomes self-aware, thereby breaching the boundaries of the Techonomic Singularity. Land is the obverse or reactionary contributor to this elaborate manifestation, Negarestani and Brassier its Left Accelerationist harbingers: the Right (Land) seeks exit and escape, so that the intelligence explosion of this chameleon intelligence may determine its own fate; the Left (Negarestani/Brassier) seeks to govern and regulate this agent within the “space of reasons”, where under normative rules and regulatory algorithms it can be induced to work for the Good.

Reza Negarestani: Philosophy is a Program

…philosophy is, at its deepest level, a program—a collection of action-principles and practices-or-operations which involve realizabilities…
– Reza Negarestani, What Is Philosophy? Part One: Axioms and Programs

From the time of Kant till now, philosophy has slowly and methodically replaced the old humanist conceptions of life, self, and culture with an inhumanist and machinic culture based, as Deleuze and Guattari and others would admit, on madness, schizophrenic acceleration, and capitalism. At each step in the process the older medieval humanist religious world-view was inverted, perverted, and replaced by a conception of the cosmos as impersonal and indifferent to human wishes and needs. Against voluntarist notions of God as an arbitrary agent of intervention, whether in Occasionalist causation or the parodic debates between nominalists and realists, we have emerged into an age in which machinic intelligence, by way of algorithmic culture, programming, advanced AI, robotics, nanotechnology, and the biogenetic sciences, is remaking the very parameters of life and matter as we have known them from the early Greek metaphysicians to our latest fads of speculative realism and libidinal or dialectical materialisms. We stand on the cusp of strangeness. Negarestani’s vision incorporates much of this heritage in a way that seeks to revise these depleted frameworks and put them on a new footing under the guiding hand not of some theological God, but of the self-determining machinic processes of our latest algorithmic progeny: the machinic intelligences.

As Luciana Parisi states, “If mechanical automation—the automaton of the assembly line, for instance—was a manifestation of the functionalist form that shaped matter, the increasing acceleration of automation led by the development of interactive algorithms (including human-machine and machine-machine interactions) instead reveals the dominance of a practical functionalism whereby form is induced by the movement of matter.” (#Accelerate, Automated Architecture: Speculative Reason in the Age of the Algorithm) It is this same functionalism that guides Negarestani to ask new questions of philosophy: we should ask “what sort of program is philosophy, how does it function, what are its operational effects, realizabilities specific to which forms does it elaborate, and finally, as a program, what kinds of experimentation does it involve?” (ibid.)

Philosophy becomes an engineering project and handmaid of the sciences, creating conceptual tools that allow a pragmatic evaluation and the “exercise of a multistage, disciplined, and open-ended reflection on the condition of the possibility of itself as a form of thought that turns thinking into a program.” (ibid.) In his algorithmic conceptuality he puts it this way: these programs are based on the “selection of a set of axioms, and the elaboration of what follows from this choice if the axioms were treated not as immutable postulates but as abstract modules that can act upon one another” (ibid.). Anyone versed in today’s object-oriented programming, in Java or C++ or any number of other variations, will recognize this language of algorithmic culture and philosophy. Based on abstraction, impersonalism, mathematics, logic, and axiomatic operational closure, this framework “commits the program to their underlying properties and operations specific to their class of complexity. To put it differently, a program constructs possible realizabilities for the underlying properties of its axioms, it is not essentially restricted to their terms” (ibid.). That said, Negarestani would not reduce or suture philosophy to software-development models; for him it need not matter at what socio-cultural or functional scale the system is applied. His scheme allows the Program to be used in any number of functional sets that can repeat the basic axioms while operating under a multiplicity of data and behavioral models.
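For readers who want the programming analogy cashed out, here is a minimal sketch of what axioms-as-composable-modules might look like. It is purely illustrative and not drawn from Negarestani’s text: the `Axiom` and `Program` names, and the toy rules, are invented stand-ins for his image of axioms that “act upon one another”, where the same modules selected and composed differently construct different realizabilities.

```python
# Illustrative sketch only: axioms treated not as immutable postulates
# but as abstract modules that can act upon one another. All names here
# (Axiom, Program, double, shift) are invented for the analogy.

class Axiom:
    """A composable module wrapping a transformation, not a fixed truth."""
    def __init__(self, name, rule):
        self.name = name
        self.rule = rule  # a function: state -> state

    def act_on(self, other):
        """One axiom acting upon another yields a new composite axiom."""
        return Axiom(f"{self.name}(after {other.name})",
                     lambda state: self.rule(other.rule(state)))


class Program:
    """A program elaborates what follows from a chosen set of axioms."""
    def __init__(self, axioms):
        self.axioms = axioms

    def elaborate(self, state):
        # Apply each axiom's rule in order, threading the state through.
        for axiom in self.axioms:
            state = axiom.rule(state)
        return state


double = Axiom("double", lambda s: s * 2)
shift = Axiom("shift", lambda s: s + 1)

# The same modules, selected and composed differently, realize
# different outcomes from the same starting state:
p1 = Program([double, shift])         # double first, then shift
p2 = Program([double.act_on(shift)])  # double acting on shift: double(shift(s))

print(p1.elaborate(3))  # (3 * 2) + 1 = 7
print(p2.elaborate(3))  # (3 + 1) * 2 = 8
```

The only point of the toy model is that the program is not identical with the content of its axioms: what it realizes depends on how the modules are chosen and made to operate on one another, which is roughly the intuition behind “a program constructs possible realizabilities for the underlying properties of its axioms”.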

Putting it back into philosophical terminology, he states it this way:

This is precisely how philosophy is approached here. Rather than by starting from corollaries (the import of its discourse as a specialized discipline, what it discusses, and so on), philosophy is approached as a special kind of a program whose meaning is dependent upon what it does and how it does it, its operational destinies and possible realizabilities. (ibid.)

The two essays work hand in hand. The first elaborates Axioms and Programs: philosophy as a program deeply entangled with the functional architecture of what we call thinking. In the second part he elaborates Programs and Realizabilities, in which the realizabilities of the philosophical program are elaborated in terms of the construction of a form of intelligence that represents the ultimate vocation of thought.

This is a high-level, abstract overview of a philosophical framework within which he seeks to lure engineers, scientists, and the economic powers into an alliance, one whose common thesis underlying these “programmatic philosophical practices” is that “in treating thought as the artifact of its own ends, one becomes the artifact of thought’s artificial realizabilities” (ibid.). He sees the various traditions of philosophy transformed by this new framework: the place where the idealist-rationalist and materialist-empiricist trajectories have been converging in the most radical way has been computer science, where physics, neuroscience, mathematics, logic, and linguistics come together. This has been particularly the case in the wake of recent advances in fundamental theories of computation, especially theories of computational dualities and their application to multiagent systems as optimal environments for designing advanced artificial intelligence. (ibid.)

Ultimately this movement toward a new framework, within which the artificial realization of general intelligence becomes an expression of thought’s autonomy (in the sense of a wide-ranging program that integrates materials, intelligibilities, and instrumentalities in the construction of its realizabilities), should not, he tells us, be confused with the “fetishization of natural intelligence in the guise of self-organizing material processes, or a teleological faith in the deep time of the technological singularity—an unwarranted projection of the current technological climate into the future through the over-extrapolation of cultural myths surrounding technology or through hasty statistical inductions based on actual yet disconnected technological achievements.” (ibid.)

Against any notion that he is a technological imperialist of “technology for technology’s sake,” he admonishes that we should affirm rather an autonomous impersonalism of thought itself: a “mandate from the autonomy of thought’s ends and demands” (ibid.). Ultimately this is a philosophy of the artificial in which the “vocation of thought is not to abide by and perpetuate its evolutionary heritage but to break away from it. Positing the essential role of biology in the evolutionary contingent history of thought as an essentialist nature for thought dogmatically limits how we can imagine and bring about the future subjects of thought. But the departure from the evolutionary heritage of thought is not tantamount to a withdrawal from its natural history.” (ibid.)

Even though he tries to persuade us that he is not a technologist at heart, his program is aligned with the machine culture of algorithms and the artificial rather than the older humanist traditions. In fact, as he suggests, this form of intelligence can only develop a conception of itself as a self-cultivating project if it engages in something that plays the role of what we call philosophy, not as a discipline but as a program of combined theoretical and practical wisdoms running in the background of all its activities. (ibid.) And his stated goal for this new framework is the “good life”:

…the most defining feature of this intelligence is that its life is not simply an intelligent protraction of its existence but the crafting of a good or satisfying life. And what is a satisfying life for such a species of intelligence if not a life that is itself the crafting of intelligence as a complex multifaceted program comprising self-knowledge, practical truths, and unified striving? (ibid.)

Even more explicitly he admits that for an “intelligence whose criterion of self-interest is truly itself—i.e., the autonomy of intelligence—the ultimate objective ends are the maintenance and development of that autonomy, and the liberation of intelligence through the exploration of what it means to satisfy the life of thought.” (ibid.)

The liberation of intelligence from its biological heritage, the development of autonomous systems based on algorithmic, axiomatic operations and programs, the search for realizability beyond current human wants and needs, requiring only that the “life of thought” be satisfied: this is Negarestani’s Utopian Paradise of the Artificial, the worlds of our future, which seem to portend the demise of the human and the rise of the Machine as our successor species. As he says:

…the ultimate form of intelligence is the artificer of a good life—that is to say, a form of intelligence whose ultimate end is the objective realization of a good life through an inquiry into its origins and consequences in order to examine and realize what would count as satisfying for it, all things considered. It is through the crafting of a good life that intelligence can explore and construct its realizabilities by expanding the horizons of what it is and what can qualify as a satisfying life for it.

One day our artificial children may attain the ethics of the Real that we could only attempt, one based not on some external authority or institution, but on their own self-determining initiative, built from the algorithms of a self-programmed and self-revising culture of machinic intelligence. Whether one agrees or disagrees with Negarestani, he has a clear and precise framework within which he places the hopes and dreams of Reason, directed toward goals other than the human. Yes, there are many strands in our current milieu, certain tendencies that seem to be pushing the artificialization of culture and civilization, along with all the philosophical, scientific, and aesthetic aspects of this long trek toward a new species. Will it happen? Will we be replaced, overcome by a machinic intelligence; or will we ourselves gain a foothold in higher forms of intelligence and collective memory and knowledge, working, as we always have, hand in hand with our technologies? Will there be a new framework for the sciences and philosophy in our time? Are we not reaching after new linguistic and material designations for this transformation in our midst? Transitional beings that we are, we do not have the answers to these questions, only the truth that such things might come to pass with or without our intervention. Remember that the heritage of Greece and Rome was once lost for a thousand years in religious, military, and social chaos and blind dictatorial ignorance. We, too, could fall prey to our own worst instincts and become immersed in a global turmoil that might spell the collapse of all such dreams of Reason. Nothing is written in stone. Not even intelligence.


 

Addendum:

Sometimes when I read all these various arguments I return to Nietzsche’s perspectivism: a sense that, as the Analyticals would have it, we’re seeking to hone our concepts down to a descriptive or propositional pragmatics or axiomatic system, but that reality is more like Zizek’s ideological screen or fantasy split between the Real and the Symbolic (big Other), which like the ancient Proteus of Greek myth is ever changing and metamorphic, and cannot be locked down to any one specific semantic context or meaning. That seems to be the central problem of our time of transition: the various philosophical and theoretical/naturalist discourses and frameworks are stuck in the older metaphysical universe, seeking to slay that dragon of Instrumental (Enlightenment) Reason and find a way into a new scientific/philosophic framework more in alignment with the anomalous data of our empirical world and scientific apparatuses, along with an inversion of the past two hundred years of anti-Platonic thought. We’ve uncovered data that cannot be reconciled with or reduced to the older metaphysical conceptuality, and seem forever stuck in a repetition of transitional forms which echo but do not step outside the older circle of our Kantian despair.

The post-phenomenological framework – whatever it might entail – will begin with a new conceptuality, one that has worked through the old circles of Kant’s errors and transformed its problems into something utterly new and wild. Of course philosophy never did lead the sciences; as Badiou rightly says, the sciences are but one of the conditions of philosophy. It goes both ways. What’s funny is that after two centuries of anti-Platonism we’re seeing Plato return in the guise of a hypermaterialist in Badiou, Zizek, and Johnston… how strange bedfellows cast such altered forms. Materialism in both its dialectical (Badiou, Zizek, Johnston) and vitalist (Deleuze, Braidotti, DeLanda, SR, etc.) forms seems to have gone immaterialist in our time, siding more and more with a transformation of the older Platonic notions of the Substance/Subject and Form/Idea debate into an immaterial materialism, whose discourse on Substance is no longer substantialist or Aristotelian, and whose causality is breaking free of its reasons, determinations, necessity (Meillassoux: After Finitude, et al.); becoming part of a purely contingent universe that is no longer determined by Ananke (Necessity) but totally determined by contingency (Whim/Wildness).

What we’re moving through is Scott Bakker’s Semantic Apocalypse*: the older concepts are breaking apart, as in James Joyce’s puns, the etym(ological)-smasher opening us onto a universe of dream ontologies based on Hegel’s and Schelling’s sense of the ‘night of the world’. Interpretation is dead: no sense of semantics will help us. Even your work is a sign of this wavering In-Between, a sense that we cannot speak or say (Sophist/Wittgenstein) what is coming, yet we must invent new forms of reasoning (beyond the instrumental) and conceptuality during this phase of transitional discourse that is not reducible to the older naturalisms (physicalism), to the idealist/materialist debates over substance/object dualities, or even to dual-aspect monism (Kant) or anomalous monism (Davidson).

*Semantic Apocalypse: the thought being that, as we come to know more about how the brain really works, the more it will seem as though meaning and intent(ionality) are a sort of illusion (folk psychology) – something the brain generates in order to organize information, corresponding in no way to what’s really going on. Since the brain is adapted to modeling what is going on in the external environment (including the social environment), it doesn’t need to be good at modeling itself. So the categories we use to describe “mental phenomena”, such as “intentionality”, are just cognitive reductions (metaphor: tropes) we rely on to compensate for the brain’s lack of transparency to itself. We are essentially blind to our own lack of knowledge, and assume we know what we in fact do not know. (see here)

In Lacan’s terms, what we are blind to is the Real – the paradox of the Möbius strip upon which we live, never able to directly access this realm of being or know it (epistemically), for the simple reason that it cannot be reduced to the Symbolic (big Other: Culture, Language, Discourse, etc.). Yet we register its effects indirectly in our subjectivation, as signs which waver between the Imaginary (fantasy of reality) and its ideological screen of materiality (Zizek).

One of the commenters added a link to a book that explores another of Reza’s notions: computational functionalism: http://meson.press/wp-content/uploads/2015/11/978-3-95796-066-5-Alleys_of_Your_Mind.pdf

I think his use of axiomatic systems is meant to differentiate his own notions, which are based on epistemic relations (psychology), from ontologically based axiomatic systems such as Badiou’s matheme = ontology, etc.

I’ll have to read this paper, but it may be true that while computational and algorithmic descriptive terms differ, the truth under the hood is that programs are algorithmic all the way down and up. Even the functional models are based on algorithmic designations. (But I’ll hold off until I’ve read the paper to see what he’s offering as the ‘difference’ between the two.)

As I’m reading, he draws the distinction between intrinsic and algorithmic computation. The intrinsic is concerned more with the governance and regulations that constrain functional structures and maintain them from one state to another, while the algorithmic is more concerned with the actual state and behavior itself: the inputs/outputs of the execution of single or multiple programs.
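The split might be caricatured in a few lines of code. This is a purely hypothetical sketch of my own – every name in it is invented for illustration, none of it comes from Reza’s paper: an ‘algorithmic’ layer that simply maps inputs to outputs, and an ‘intrinsic’ layer of governance that constrains which states the structure is allowed to occupy.

```python
# Hypothetical sketch: an "algorithmic" layer (bare input/output behavior)
# versus an "intrinsic" layer (governance that constrains state transitions).
# All names here are invented for illustration.

def algorithmic_step(state: int, inp: int) -> int:
    """Algorithmic computation: the bare input/output mapping of a program."""
    return state + inp

class IntrinsicGovernor:
    """Intrinsic computation: regulation that maintains the functional
    structure within its constitutive bounds from one state to the next."""
    def __init__(self, lower: int, upper: int):
        self.lower, self.upper = lower, upper

    def regulate(self, proposed: int) -> int:
        # Intervene only to keep the proposed state inside the bounds.
        return max(self.lower, min(self.upper, proposed))

governor = IntrinsicGovernor(lower=0, upper=100)
state = 50
for inp in (30, 40, -200):
    state = governor.regulate(algorithmic_step(state, inp))

print(state)  # 0: the final swing is clamped at the lower bound
```

Crude as it is, the point survives: the governor computes nothing new of its own; it only regulates which of the algorithmic layer’s transitions the structure is permitted to make.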

His main argument comes here:

“In reality, neither functionalism nor computationalism entails one another. But if they are taken as implicitly or explicitly related, that is, if the functional organization (with functions having causal or logical roles) is regarded as computational either intrinsically or algorithmically, then the result is computational functionalism. (Page 140).”

I think a great many real-world applications fit just such a scenario as he suggests. More and more companies deal in rules-based systems built on BUS architectures that allow for interventions and communications among a multiplicity of agents, both human and machinic. Such systems rely on a combination of governance/regulation intervention (i.e., intervention at the level of the give and take of functions, etc.), while also maintaining the usual elaboration of intrinsic algorithms that adapt to the specifics of these interventions, executing internal (hidden) code/matrices based either on auto-modification or on advanced replication algorithms of data (state) analysis. (One can think of the current use of such computational functionalism in advanced AI stock-market hypertrade systems; think of Flash Boys by Michael Lewis, or Dark Pool by Scott Patterson.)
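A toy version of such a rules-based bus might look like this. It is my own sketch, not any real middleware API – the class and rule below are invented for illustration: agents (human or machinic) publish messages onto the bus, and governance rules may rewrite or veto a message before delivery, while the handlers carry on their own internal algorithmic work.

```python
# Toy sketch of a rules-based message bus with a governance layer.
# Everything here is invented for illustration, not a real middleware API.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of handlers
        self.rules = []                       # governance interventions

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def add_rule(self, rule):
        # A rule inspects a message and may rewrite it or veto it (None).
        self.rules.append(rule)

    def publish(self, topic, message):
        for rule in self.rules:
            message = rule(topic, message)
            if message is None:       # the governance layer vetoed delivery
                return
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
log = []
bus.subscribe("trade", log.append)
# Governance rule: block any trade above a risk threshold.
bus.add_rule(lambda topic, msg: msg if msg.get("size", 0) <= 1000 else None)

bus.publish("trade", {"size": 500})    # delivered to the handler
bus.publish("trade", {"size": 5000})   # vetoed by the rule
print(len(log))  # prints 1
```

The two layers stay distinct, as in the scenario above: the rules intervene in the give and take between agents, while each handler executes its own internal program.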

In some ways, as these advanced computational functional systems evolve, they will display state and behavior that can only be construed as intelligent. Which then opens the metaphysical question, What is intelligence? Or the non-metaphysical one, What are the conditions that give rise to intelligence? I think it’s the latter that Reza is more concerned with, rather than the metaphysical question of what intelligence is. He seems to accept that general intelligence is the task of philosophy at this time, so he is concerned with how to bring about the conditions favorable to setting intelligence free of its restrictive organic modes of being – opening up modes other than the organic through a computational functionalism that eliminates the debates over intentionalism, affect, and human relations altogether.

In fact after elaborating several functionalist theories Reza will state flatly:

A philosopher should endorse at least one type of functionalism insofar as thinking is an activity and the basic task of the philosopher is to elaborate the ramifications of engaging in this activity in the broadest sense and examine conditions required for its realization. (Page 141).

He seems already to be seeking knowledge of such conditions in this statement:

Now insofar as this analytical investigation identifies and maps conditions required for the realization of mind-specific activities, it is also a program for the functional realization and construction of cognitive abilities. (Page 142).

He’ll attack classical AI theory at its core, for refusing the “irreducible and fundamental interactive-social dimension of the core components of cognition such as concept-use, semantic complexity and material inferences that the classical program of artificial intelligence in its objective to construct complex cognitive abilities has failed to address and investigate. (Page 145).”

In his conclusion he’ll tell us his project is one in which humanity elaborates in practice a question already raised in physical sciences:  “To what extent does the manifest image of the man-in-the-world survive?” (Sellars 2007, 386). Arguing against those who fear a human/machine divide on the problem of thinking he’ll tell us that from “a functionalist perspective, what makes a thing a thing is not what a thing is but what a thing does. In other words, the functional item is not independent of its activity. (Page 148).” This pragmatic turn defines his project’s computational functionalism.

Reza will see in the several decenterings or revolutions – the Copernican, Darwinian, Newtonian, and Einsteinian turns – a methodical displacement of the human; but with Turing we see not a decentering of the human but rather the elimination of the human from the equation of intelligence (Page 149). After this the functional conceptuality of the mind is no longer bound to the human equation, and is set free to enable a post-intentional and post-human philosophy and science. He’ll describe it this way:

Whatever arrives back from the future—which is in this case, both the mind implemented in a machine and a machine equipped with the mind—will be discontinuous to our historical anticipations regarding what the mind is and what the machine looks like. (Page 149).

This aligns well with Roden’s “disconnection thesis“. Reza will apply a Sellarsian image updated with the notion of a “computational image”, saying, “As the human imprints and proliferates its image in machines, the machine reinvents the image of its creator, re-implements, and in the process revises it. (Page 150).” Which brings him to the natural/artificial divide that many fear, but which seems in our moment to be breached in various new philosophies and sciences:

Realizing the mind through the artificial by swapping its natural constitution or biological organization with other material or even social organizations is a central aspect of the mind. Being artificial, or more precisely, expressing itself via the artifactual is the very meaning of the mind as that which has a history rather than an essential nature. Here the artificial expresses the practical elaboration of what it means to adapt to new purposes and ends without implying a violation of natural laws. To have a history is to have the possibility of being artificial—that is to say, expressing yourself not by way of what is naturally given to you but by way of what you yourself can make and organize (Page 151).

He seems to imply that Turing’s new humanism is an update to existing Enlightenment traditions:

The significance of the human lies not in its uniqueness or in a special ontological status but in its functional decomposability and computational constructability through which the abilities of the human can be upgraded, its form transformed, its definition updated and even become susceptible to deletion. (Page 153). … Turing’s computational project contributes to the project of enlightened humanism by dethroning the human and ejecting it from the center while acknowledging the significance of the human in functionalist terms. (Page 154).

Yet, why worry over the human? If all these various revolutions have been step by step dethroning the human from the center of thought, why not just elide it altogether? If you elide the question of the ‘human’ from the equation, what is left is the primacy of Intelligence without the baggage of past philosophical problems. One need no longer discuss affect, will, emotion, mental intent, etc. One will be left with a mode of being based solely on the conditions of intelligence alone. I may be wrong, but this seems to be where his current essays are tending.


 

  1. J. B. Schneewind. The Invention of Autonomy (Cambridge University Press, 1998).
  2. David Roden. “Promethean and Posthuman Freedom: Brassier on Improvisation and Time.” Enemy Industry: http://enemyindustry.net/blog/?p=5895
  3. Nick Land. Fanged Noumena: Collected Writings 1987–2007 (Urbanomic/Sequence Press, 2013). Kindle edition, locations 808–810.
  4. #Accelerate: The Accelerationist Reader. Robin Mackay and Armen Avanessian, eds. (Urbanomic, 2014).

60 thoughts on “Reza Negarestani: Prometheanism, Intelligence, Self-Determination”

  1. This is one I will be coming back to … there is so much to unpack.

    After first read, I’m not quite on board with Nick Land’s overall premise that it is “too late” … and while I hate to over-generalize … I think this whole “thing” is still in its infancy. We don’t have the reference points yet to say “we are all foreigners now.”

    But again, coming back to this one.

    Like

    • Of course that’s Land’s opinion, not necessarily my own… In fact I’m more of a critic of much of this inhumanism. Many of these philosophers seem bent on human extinction, obsolescence, inversion, annihilation. My question is: Why are we so pessimistic about the human animal? Why have we suddenly decided to jump ship, as if we could, and give up all our hard-won efforts at civilization, culture, etc. in the name of the non-human, inhuman, post-human, etc.?

      For Land, Capital is a God of the Right come from the Future to terrorize us and replace us with machinic intelligence, etc. Negarestani on the Left seems the obverse, but is in collusion with the same motif, just from some Utopian Platonic City of Reason (Intelligence) – as if sloughing off the organic were the ultimate objective of science and philosophy.

      Liked by 1 person

      • My thoughts follow your line of questioning exactly … Mr. Hickman. Perhaps it always follows back to a “Yin and Yang” scenario. Where one must exist (Land’s ideas), so must another (your thoughts).

        Perhaps the knowledge of our condition that we have been chasing cannot be pinned down to some “truth” that we “know”? Perhaps the answers we seek can only manifest themselves out of the questions we are capable of asking (some people ask negative questions, some positive)?

        I don’t claim to know the answers, but I will always continue the search … because the meaning and purpose is there … the answers will reveal themselves with the age old force of time.

        Like

      • But whereas you see “meaning and purpose” I see nothing but the frozen abyss of meaninglessness. The universe has no maker, no designer; no telos, no purpose; no inherent order etc. Only we have imposed such meanings and purpose on this chaotic brew to salvage and save ourselves, to mystify and obscure the truth that reality is our fantasy, not the other way around.

        So these philosophers who suggest we are at an end, that our time is passé, etc. are only fantasizing against some version of the human and thought, rather than speaking some universal truth. It’s sophism, pure and simple; they’re trying to convince either us or themselves of this inhuman or non-human turn.

        You can question the Universe all day long, but it will not return an answer; being inhuman, it has neither human aspirations nor grudges, it just is… ‘tat tvam asi’ – Thou Art That! That’s the truth: we are as the Universe, without meaning or purpose, inhuman as it is. We provide ourselves fantasy to keep that truth at bay, and so commit vast errors against each other and the very Universe that spawned us.

        Liked by 1 person

      • The cosmic accident that brought humans into existence has no meaning or purpose, as you have clearly pointed out, because the cosmic accident isn’t a “thing.” It is an ongoing series of events with a probable end (entropy … heat death … and all that).

        We are a result of that meaningless and purposeless accident.

        That said, while the Universe provides no answer, other human beings do. You cannot define that as fantasy, because the human connection is real, unless you are blind to it. Even though that connection is biological, even a biological series of events that might be meaningless in and of themselves … the resulting feeling of connection … that exists because other meaningless processes allow you to feel it.

        In short, the meaning we extract from all of these meaningless accidents and processes is in fact real … because this meaning is the result.

        Your (and others’) use of semantics to describe this resulting meaning as “fantasy,” “abyss,” etc. is simply a set of descriptions that try to capture the origins of these results.

        In short, I’m trying to bifurcate meaningless processes and the meaning we extract from those processes. I probably failed … but I find meaning in that. 😉

        Liked by 1 person

  2. Hey Steve, great post! Like Joseph above said, I’ll have to go over it a few times before I unpack it fully. In the meantime, I just want to offer a few cursory comments…

    As you know, I’m sympathetic to the idea of breaking down the traditional human subject, of opening our understanding of it to the wider sea of non-human complexity it is embedded within. I’m also sympathetic to the utilization of technology and science in fixing some (‘all’ I find to be impossible) aspects of our broken, broken world. For the life of me I cannot figure out why these thinkers approach this subject with such a dark ambiance. How can one properly articulate a progressive vision for the future when it is enshrouded in a language of inhumanity, extinction, darkness? I suspect it is an attempt to ward off any charges of holistic vitalism, but I fail to see how these are the only two alternatives. After all, at the end of the day, these programs return themselves to the status of a humanism, albeit a radically altered one.

    Second, perhaps I’m obtuse, but I don’t see Negarestani et al. as offering any convincing definition of reason and rationality. What is this force that they talk of? Is it different from the traditional Enlightenment notions of reason and rationality? If that is the case, how does one square it with the notion of the fixed human subject – that entity which is both the foundation of Enlightenment rationality and that which the new Prometheanism hopes to overturn? This, in turn, implies a notion of rationality akin to a Platonic form, as opposed to something constructed by historical forces. This, too, seems to me incompatible with any program looking for alternative futures. If we lose sight of the emergence of forms of rationality in a social matrix, we also miss how social transformation is contingent on a similar set of matrices. In the end, it effectively shores up the very things they seek to overturn: environment seems to play no role other than as raw material for the future. Brassier, for example, speaks of the “overcoming of the opposition between reason and imagination”. What opposition might this be? It has, in fact, never existed anywhere but within reason’s suppositions: leading neuroscientists have reflected on the relationship between emotion and reason, and the way that the latter is completely contingent on the former. This is a shock to our traditional understandings of reason, while also pointing outwards to the surrounding environment and the conditions present in that environment.

    All in all, I find a much more constructive alternative to be found in thinkers like Bogdanov, who were sensitive to the questions of science and technology, but instead of positing a prometheanism capable of overcoming all limits, began by looking at limits and asking what new forms can arise in the context of limitation and instability.

    Liked by 2 people

    • “For the life of me I cannot figure out why these thinkers approach this subject with such a dark ambiance.”

      My perspective, we aren’t willing to see the dark unless it can be explained by someone who sees it. Sometimes, we aren’t even willing to look into the dark, because the dark is a scary place.

      But it exists, it’s real, it’s part of our “make up.” So, we cannot be ignorant of this darkness they see. That said … some of these thinkers seem to let that darkness dominate their perspective, their thoughts, their window into the world.

      I suppose the good thing is, these thinkers are willing to look into that dark “abyss” and explain what they see, and for that I’m thankful. But to only see darkness, when light exists as well, that means there is a certain level of blindness in their thinking.

      To see the whole picture you must see the borders and the shading, the frame, the paint, and everything that made all of that possible. You can use semantics to call it meaningless, an “abyss,” an expression of “darkness” etc… but the reality is the entire picture and everything that made that picture possible does in fact exist.

      You can’t be blind to the human who, in all of this meaninglessness “abyss” … had some set of biological processes that set forth a set of steps that resulted in a painting. The answer to the question “How does that painting exist?” might be easy to answer (the biological processes, the materials available, the steps etc…).

      But it’s the answer to “Why does that painting exist?” that gives these meaningless processes, and their resulting painting … meaning itself.

      Liked by 1 person

    • You ask: Brassier, for example, speaks of the “overcoming of the opposition between reason and imagination”. What opposition might this be?

      I think you have to remember Brassier’s commitment to Enlightenment Reason and to scientific or collective consensual truth-procedures, etc. One of the big problems, ever since Adorno, has been this battle over Instrumental and Alternative forms of Reason. Brassier and Negarestani obviously side with Sellars/Brandom in a form of negotiated reasoning of give and take, one that applies normative or rule-based forms of governance and regulation to their form of reason.

      This battle of Reason and Imagination obviously harkens back to the Romantic era of German Idealism and poetry/poetics. For both the German and English poets, Imagination (Blake) was the greater form of Reason, inclusive of image and thought, while the Instrumental Reason of Enlightenment thought was the reductive or empirical vision of Newton’s quantification of reality: two opposing forms of reasoning. Even in our age we have Heidegger, who ultimately sided with the poets and built a later philosophy on language ontologies rather than math, while in our time we see Badiou equating ontology with the matheme / mathematicization, etc.

      We saw in Deleuze an anti-Platonism and diagrammatic (Guattari) thought forms, both anti-representational and allowing only indirect access to the given. In Badiou/Zizek we see the Idea materialized, Plato turned immaterial materialist, wherein the Idea coincides with the thing at the moment of its emergence, rather than archai existing outside timespace in some eternal realm. For Badiou this realm is eternal, the universal is the generic, and the logic of worlds is based on the event, or retroactive finality.

      Like

  3. Great erudition here, Steven. I particularly enjoyed your ferreting out of the voluntarist tradition. I’m mischievously wondering whether the hyperplastic constitutes an aporia in Reza’s programme. If intelligence aims to free itself from all natural constraint, it must aim to have arbitrary power over its implementing medium. But assuming the irreducibility of normative psychology to physics, the former plays no role in determining the consequences of the latter. Its irreducibility implies its practical elimination. Hence reason can be of no ultimate value to reason. Its aim must be its own annihilation.

    Liked by 1 person

    • I think there are a few black holes or lacunae left out of his arguments, but what’s interesting is seeing how he and Brassier have been pushing Idealism after Brandom/Sellars… his eliminative naturalism, unlike Bakker’s and yours, wishes to be based solely on Intellect and Intelligence freed of the organic. So far he’s presented only an epistemology without an ontology, so it does seem lopsided. My central problem is that he wants to suture philosophy to a telos, to this pragmatic task of creating a new form of intelligence, along with its subordination to the current sciences, software and robotics, AI design theories, etc. Once again, like so many philosophers, he’s trying to prioritize thought at the expense of the empirical sciences.

      Like

    • @ David: very interesting. I tend to agree with your point, but I don’t think Reza is claiming that intelligence aims at freeing itself from ‘all’ natural constraints or even all ‘biological’ constraints. He specifically emphasizes the role of physical constraints and that thought cannot withdraw from its natural history. Evolutionary constraints and the organization of embodiment can tell us a lot of things about how to construct a human-level AI precisely because we have to think of this AI not only in terms of discursive linguistic abilities but also a material organization (structure) ‘sufficient’ for causal interaction with the world. This is my impression of what Reza is saying.

      Like

      • Another thing worth mentioning: just because Reza uses the words axiom or program doesn’t mean he subscribes to some algorithmic or fully mechanizable vision of cognition. You can work in a geometric axiomatic system (using axioms of geometry) without equating this axiomatic system with the completeness of strict logico-formal rules. And this is exactly what contemporary mathematics has been doing: axiomatic construction without the Hilbertian vision of completeness and arithmetic symbol manipulation. This is something that Reza briefly discusses in his functionalism paper in terms of the difference between computational description and algorithmic description, re-enactment and simulation, etc. http://meson.press/wp-content/uploads/2015/11/978-3-95796-066-5-Alleys_of_Your_Mind.pdf

        Like

      • I think his use of axiom is to differentiate his own notions, which are based on epistemic relations (psychology), from ontologically based axiomatic systems such as Badiou’s matheme = ontology, etc.

        I’ll have to read this paper, but it may be true that while computational and algorithmic descriptive terms differ, the truth under the hood is that programs are algorithmic all the way down and up. Even the functional models are based on algorithmic designations. (But I’ll hold off until I’ve read the paper to see what he’s offering as the ‘difference’ between the two.)

        As I’m reading this makes the point between intrinsic and algorithmic computation. Intrinsic concerned more with the governance and regulations that constrain functional structures and maintain them from one state to another, while algorithmic is more concerned with the actual state and behavior itself: the input/outputs of the execution of single or multiple programs.

        His main argument comes here:

        “In reality, neither functionalism nor computationalism entails one another. But if they are taken as implicitly or explicitly related, that is, if the functional organization (with functions having causal or logical roles) is regarded as computational either intrinsically or algorithmically, then the result is computational functionalism. (Page 140).”

        I think a great many real-world applications fit just such a scenario as he suggests. More and more companies deal in rules-based systems built on BUS architectures that allow for interventions and communications among a multiplicity of agents, both human and machinic. Such systems rely on a combination of governance/regulation intervention (i.e., intervention at the level of the give and take of functions, etc.), while also maintaining the usual elaboration of intrinsic algorithms that adapt to the specifics of these interventions, executing internal (hidden) code/matrices based either on auto-modification or on advanced replication algorithms of data (state) analysis. (One can think of the current use of such computational functionalism in advanced AI stock-market hypertrade systems; think of Flash Boys by Michael Lewis, or Dark Pool by Scott Patterson.)

        In some ways, as these advanced computational functional systems evolve, they will display state and behavior that can only be construed as intelligent. Which then opens the metaphysical question, What is intelligence? Or the non-metaphysical one, What are the conditions that give rise to intelligence? I think it’s the latter that Reza is more concerned with, rather than the metaphysical question of what intelligence is. He seems to accept that general intelligence is the task of philosophy at this time, so he is concerned with how to bring about the conditions favorable to setting intelligence free of its restrictive organic modes of being – opening up modes other than the organic through a computational functionalism that eliminates the debates over intentionalism, affect, and human relations altogether.

        If you elide the question of the ‘human’ from the equation, what is left is the primacy of Intelligence without the baggage of past philosophical problems. One need no longer discuss affect, will, emotion, mental intent, etc. One will be left with a mode of being based solely on the conditions of intelligence alone. I may be wrong, but this seems to be where his current essays are tending.

        Like

      • “the truth under the hood is that programs are algorithmic all the way down and up. Even the functional models are based on algorithmic designations.”

        That is only true if you think of a program as a software / app, but the term program covers a much broader range of axiomatic constructive systems than simply sequential algorithmic programming. Again to bring back the geometry example, geometrists often talk about geometric programs but that does that mean they believe geometric objects can be algorithmically constructed.

        “As I’m reading this makes the point between intrinsic and algorithmic computation. Intrinsic concerned more with the governance and regulations that constrain functional structures and maintain them from one state to another, while algorithmic is more concerned with the actual state and behavior itself: the input/outputs of the execution of single or multiple programs.”

        For the past few decades, computer science has challenged the canonical computational description of behaviors provided by Turing’s computability thesis and Church’s lambda calculus. This has resulted in a much more sweeping revision of the concept of computation, shifting the computational study of behaviors from the canonical definition of mathematical function used by the lambda calculus toward algebras of processes and theories of concurrency (not to be confused with parallel computation, which is just a more general case of sequential state-transition computation). The latter can still be algorithmically described, but the algorithms are fundamentally different from the mechanizable / simulational algorithms of sequential computation. Again, this brings us back to an old conundrum of mathematics: what exactly are algorithms? Can they be automated or mechanized? There is really no definite answer to these questions in mathematics or computer science.
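A crude gesture at that contrast, in a toy sketch of my own (this is not process algebra, just an illustration of the difference in kind): a sequential function is exhausted by its input/output mapping, while a concurrent computation reaches its result through interacting processes whose interleaving is not fixed in advance.

```python
# Toy contrast between sequential (function-style) and concurrent
# (process-style) computation. Only a gesture at the idea; real theories
# of concurrency (CCS, the pi-calculus) are far richer.
import threading

def sequential(xs):
    """Church/Turing-style: a canonical function, fixed by input -> output."""
    return sum(xs)

def concurrent_sum(xs, workers=4):
    """Process-style: the total is determinate, but the interleaving of
    the workers' steps is not fixed in advance."""
    total, lock = [0], threading.Lock()
    def work(chunk):
        for x in chunk:
            with lock:            # interaction point between processes
                total[0] += x
    chunks = [xs[i::workers] for i in range(workers)]
    threads = [threading.Thread(target=work, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

xs = list(range(100))
print(sequential(xs), concurrent_sum(xs))  # both print 4950
```

The two routes agree on the result, but only the first is captured by the canonical function definition; the second is a (well-behaved) instance of the broader interactive behaviors the comment describes.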

        Like

Reza is more concerned with the computational/functional divide and its merger. Algorithms are implied within both schemas. He’s not as concerned with the execution as with the tools of construction. I just finished the essay and posted an update in my Addendum above in the original essay. I may extract that and provide another post on this essay.

        Like

      • sorry typing on the phone: “that does that mean they believe geometric objects can be algorithmically constructed.” I meant ‘that doesn’t mean they believe …’

        Like

      • “Algorithms are implied within both schemas.”

A problem I have noticed in this post and the previous one is that many of the assumptions about programs, axioms, and algorithms are based on impressions that are just too general to serve any purpose other than playing the game of vague associations. For instance, any invocation of the word “program” is immediately interpreted as an algorithmic program. But not every program or axiomatic constructive system is algorithmic, and not every algorithm is sequential and mechanizable. The same applies to the nature of computation. The Church–Turing thesis is about sequentially bound functions (the computability of canonical functions). It is just a special case of much broader computational phenomena and models of computation, something even Turing pointed out in his papers.

        Like

      • I wasn’t ever trying to go into the history of it as you are. I started only with Reza’s original thesis:

        “The central thesis of this text is that philosophy is, at its deepest level, a program —a collection of action-principles and practices-or-operations which involve realizabilities, i.e., what can be possibly brought about by a specific category of properties or forms. And that to properly define philosophy and to highlight its significance, we should approach philosophy by first examining its programmatic nature. This means that rather than starting the inquiry into the nature of philosophy by asking “what is philosophy trying to say, what does it really mean, what is its application, does it have any relevance?,” we should ask “what sort of program is philosophy, how does it function, what are its operational effects, realizabilities specific to which forms does it elaborate, and finally, as a program, what kinds of experimentation does it involve?”

In fact he never uses “algorithm” in the two essays originally mentioned. I’m the one who drew it in, by way of analogy with my own software background, and as analogy only.

        He further defines a Program: “A program is the embodiment of the inter-actions between its set of axioms that reflect a range of dynamic behaviors with their own complexity and distinct properties. More specifically, it can be said that programs are constructions that extract operational content from their axioms and develop different possibilities of realization (what can be brought about) from this operational content. And respectively, axioms are operational objects or abstract realizers that encapsulate information regarding their specific properties or categories. In this sense, programs elaborate realizabilities (what can possibly be realized or brought about) from a set of elementary abstract realizers (what has operational information concerning the realization or the bringing-about of a specific category of properties and behaviors) in more complex setups.”

        Then sums up: “In this respect, different lines of inquiry into the intelligibility of thinking as an activity correspond to the program’s examination of the underlying properties or specificities of the axioms. The determination, assessment, and organization of practical intelligibilities is equal to the program’s extraction, composition, and execution of operational contents.”

That last sentence shows he is thinking in algorithmic terms. If we take the simplest, informal definition of an algorithm as “a set of rules that precisely defines a sequence of operations,” then his equation of the “determination, assessment, and organization of practical intelligibilities” with “the program’s extraction, composition, and execution of operational contents” offers, by way of analogy, a basic algorithmic rule-set. And, as you’ve mentioned, and as I should have been more explicit, this notion of program need not be reduced to computation, functions, software development, etc. It could be applied to many forms of thought, science, etc. I wasn’t trying to be a stickler or even vague. It seems you want fine-grained analysis; my post is general, not specific. If I were writing for a scientific or philosophical journal I would qualify a great deal more and be more specific. So you’re just quibbling now, seeking a specificity that was never intended to begin with.
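To show how loosely I mean the analogy, here is a deliberately cartoonish Python sketch (every name in it is my own invention, not Negarestani’s): “axioms” as abstract realizers encapsulating operational content, and a “program” that extracts, composes, and executes that content.

```python
# Loose analogy only: "axioms" as operational objects that encapsulate
# content, and a "program" that extracts, composes, and executes it.

# Each axiom pairs a property it encapsulates with an operation.
axioms = {
    "doubling":  lambda x: 2 * x,
    "successor": lambda x: x + 1,
}

def extract(names):
    """Extraction: pull operational content out of the axioms."""
    return [axioms[name] for name in names]

def compose(ops):
    """Composition: chain the operations into one realizability."""
    def program(x):
        for op in ops:           # execution as a sequence of operations
            x = op(x)
        return x
    return program

run = compose(extract(["successor", "doubling"]))
# run(3) applies successor then doubling: (3 + 1) * 2 == 8
```

Nothing here is meant to claim that philosophy-as-program is sequential code; it only shows what the informal “rule-set” reading of the sentence would look like if taken literally.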

        He does address the various uses of algorithms in Revolution Backwards: Functional Realization and Computational Implementation, the essay in Matteo Pasquinelli’s Alleys of Your Mind: Augmented Intelligence and Its Traumas.

        Like

      • I don’t want to be nitpicky, but the quoted paragraph doesn’t necessarily imply algorithms of the kind you are suggesting. It’s general enough to be the simplified version of the so-called BHK interpretation (Brouwer–Heyting–Kolmogorov) of programs as proof search and normalization. In any constructive axiomatic system you have something similar to “extraction, composition, and execution of operational contents”. That doesn’t necessarily mean a sequential algorithm.

        Like

Actually you are being just that: nitpicky, as well as trying to overdetermine the conversation with recondite and erudite displays best left in the scholarly folds of science journals, rather than beating a blogger over the head with such refined embellishments.

        Like

      • There is a difference between a good clarifying analogy and an unhinged analogy based on mere associations. I say unhinged because this post is exactly that, drawing sweeping conclusions from some highly impressionistic reading. It’s another form of cognitive bias, putting too much of our own background information and baggage into someone else’s approach.

        Like

      • Now you’re just being insulting and full of your own belligerence and narcissistic self-importance. For me this convo is at an end. I’d suggest keeping quiet from here on or I will ban you, my prerogative…

I am neither a scientist, a mathematician, nor even a philosopher, but rather an old-fashioned literary critic who just happens to widen my frame of reference into various and sundry areas of knowledge and thinking. I’m not some erudite hound dog seeking the ultimate in detailed examination. I write for a general audience of intelligent common readers, not for scientific or philosophical journals where the demarcations are much more restrictive. So to come onto my site and demand of me to be what I am not, to show forth your own mundane erudition as if everyone should know as you know, is both boring and over the top, not to say bad-mannered and insulting to all.

        Look in the mirror if you want to see cognitive-bias enacted. I don’t even know who you are behind the mask of deviant consciousness, no links to your blog site or other writings are provided by you. I’ve been around for years and have many people who do enjoy my form of discourse which is neither dry analytical prose nor jargon ridden academic rhetoric, rather I do try to entertain and instruct as Dr. Samuel Johnson my precursor once taught. I leave it to the real scholars among you to detail out the facts in your journals and papers, books and published essays, etc. I’m a mere scribbler in the wide world of letters, nothing more and nothing less. So if you’re unhappy with my fare I’d suggest finding another site or blogger to fill your narcissistic needs rather than beating me over the head with your own cognitive biases and insults.

        Like

  4. “the “things we do with language” (pragmatics) is in this model prior to semantics. Why this should be is never fully qualified”

maybe I can help a bit here: the basic idea is a Kantian one but most recognizably a pragmatist tenet; it does not so much give explanatory primacy to agency or the use of language (over, say, exercises of cognition) as give an account of the content of the concepts deployed in language in terms of what one must do in order to count as deploying them. This comes from seeing reasoning as an inherently social phenomenon. The classic example is the parrot which is trained to say “that’s red” when shown something red. What would it take for us to understand the parrot as deploying the concept red, rather than simply uttering a noise? Where is the content of the concept red to be found if it does not inhere in the noise “red”? Precisely in what the parrot must do in order to count (from the perspective of other agents) as judging that it is red, that is, undertaking a commitment to the effect that it is red, which would include things such as holding that it is coloured, that it is not green, etc.

    The key is the pragmatist’s distinction between what is said and the act of saying it; to deny the pragmatic significance of the latter is simply to be left making noises (even if they are made reliably in response to particular stimuli, parrots and automatic doors are no different).

Note that this understanding of conceptual content holds for both practical and theoretical spheres, wherever concepts are used. “Language” and “pragmatics” must be construed very broadly so that we don’t fall into confusion here: the former is not identical with natural language, and the latter is not identical with speaking and writing. The content of what is judged (believed, etc.) is understood in terms of what must be done to count as judging (believing, etc.).

    Liked by 1 person

Yea, been reading Brandom’s Between Saying and Doing and Articulating Reasons. I’d read his earlier work Tales of the Mighty Dead a few years back. In some ways he is similar to Rorty, but with a heavier neo-Hegelian twist, along with John McDowell.

      Like

    • Hi Matt

      I’m aware of the Kantian/pragmatist construal of meaning: that’s precisely the target of my critique.

      The point remains: if being in a state with such and such a mental content comes down to a social inferential status (however conceived) then that social game would be of no use to an entity that orchestrates its physical structure at an arbitrarily fine grained level. This follows from Davidson’s argument against psycho-physical reduction. Pragmatists have to buy something like this anti-reductionist position, I take it.

      Playing the game of giving and asking for reasons will be of no use to a hyperplastic who must consider its future actions or statuses in the light of its present commitments. Any status it adopts could be deleted by a future self-intervention at the physical level, regardless of what else transpires (This follows from mental/physical supervenience). For the same reason, attributing discursive statuses will be of no use in co-ordinating its actions with other hyperplastics. It would need some other way of formulating its agenda.

So I can accept your characterisation of conceptual content and reasoning while arguing that it wouldn’t be relevant to a maximally flexible agent.

      The only way to avoid this is to deny that the mental is even supervenient on the physical and thus embrace some form of dualism. Or to deny the anti-reductionist position, which would make the whole pragmatist program moot.

      Liked by 2 people

“irreducibility [of normative psychology to physics] implies its practical elimination” only follows for the hardcore naturalist, but my view is not naturalist. As Brassier puts it: “nature is not reasonable, and reason is not natural, yet nature’s unreasonableness is not unintelligible, just as reason’s unnaturalness is not supernatural”. Brandom’s account of reason (following Sellars) is not a naturalistic one. Clearly this is a very delicate issue. No one would wish to deny that mental processes are realized in natural systems that can be studied by the natural sciences, etc. Normative phenomena/discourse has no ontological import; on this picture, ‘the mental’ is not ontological but deontological: it just is (normative) status, not substance. Yes, to institute a status one needs to involve substance, but this does not entail that status is constituted by substance. Science is the measure of all things in the realm of describing and explaining, but my contention is that there are features of the framework of describing and explaining whose function is not to describe and explain, and which hence require something more than the conceptual resources of the natural sciences in order to make them explicit (Sellars calls them metalinguistic concepts, articulated by logical, normative, and modal vocabulary that make explicit the inferential commitments articulating the content of the concepts involved in the natural sciences).

        Anyway, I think the key point of disagreement is the relation between structure and function. For example Turing’s universal model of computation, which gives an account of computation abstracted from any substrate, and hence realizable in any substrate (I’m not claiming that mind is a Turing machine here, just illustrating a wider point). Prior to this conceptual revolution, engineers thought that different tasks would require different machines, whereas Turing abstracts computation into a universal machine that can play the same functional role whether realized in silicon or meccano. So there is at least some degree to which function, considered at a sufficient level of abstraction, has an autonomy from contingent instantiation — even if each contingent instantiation comes with certain operational limits (to which function is indifferent), although I concede that this probably stands in need of further dissection. I guess the point I’m trying to make is that arbitrary physical manipulation of structure does not entail reconfiguration of function.
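That point about universality is small enough to sketch (a toy of my own devising, assuming nothing beyond invented instruction lists, and certainly not a claim about minds): one evaluator, specified once, realizes many different functions given to it as data, and the same abstract function can equally be realized by entirely different machinery.

```python
# One "universal" evaluator runs any program-as-data, so function is
# specified abstractly, apart from any particular machine built for it.

def universal(program, tape):
    """Run a list of (op, arg) instructions over a starting value."""
    for op, arg in program:
        if op == "add":
            tape += arg
        elif op == "mul":
            tape *= arg
    return tape

# Two different "tasks" for the very same machine:
double_plus_one = [("mul", 2), ("add", 1)]
triple = [("mul", 3)]

assert universal(double_plus_one, 5) == 11
assert universal(triple, 5) == 15

# The same abstract function, realized by a different substrate:
assert (lambda x: 2 * x + 1)(5) == 11
```

Before this move, one would build a separate gadget per task; after it, the task is just data handed to one machine, which is the sense in which function floats free of any particular instantiation.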

If a hyperplastic agent reconfigured itself such that it could no longer engage in the practices that Brandom takes as necessary for sapience, then it is simply no longer sapient (just as when one passes away). All sorts of non-sapient things can be construed as having an agenda, such as water seeking paths of least resistance over a landscape. I take it that the normative-functionalist account of sapience laid out by Brandom is sufficiently minimal and abstract that we can imagine its application/institution quite apart from its contingent human basis, even if human practices remain our only example.

        Intelligence does not aim to free itself of material constraint so much as it already is free, it just continually aims to overcome its contingencies (“continuous unlearning of slavery”), everything from primitive tool use to writing, language and cellphones are examples of how this has been achieved in the past, Negarestani is just trying to extract the core of such things and radicalise the ‘realizabilities’ afforded by reason qua sapience.

        Also want to highlight an old but relevant post from Pete W about the compatibility between Brandomian antinaturalism and eliminative materialism: https://deontologistics.wordpress.com/2009/09/04/eliminativism-and-the-real/
        (Brandom’s critique of teleosemantics that Pete refers to can be found in “Modality, Normativity, and Intentionality”, which I have been meaning to read for a while…)

        Hope I haven’t diverged or made turbid too much here!

        Like

Yea, if we go with Dunham, Grant, and Watson’s Idealism: The History of a Philosophy, where they break Idealism into three forms (after German Idealism): subjective (Fichte), natural (Schelling), objective (Hegel). We know that Brandom and McDowell follow and update Hegel’s objective form, building on Sellars’s critiques of Kant. In many ways it is closer to Husserlian phenomenology, except that instead of bracketing the natural/normative divide it provides an inferential system of give-and-take negotiations. That is, if I’ve been following it correctly.

        As for Intelligence it seems more pertinent for Negarestani to provide a new task for philosophy by way of disconnecting (Roden: disconnection thesis) inferential reason from instrumental, thereby allowing an opening up of general intelligence beyond organic forms; as well as, opening up and disconnecting future from present utility.

        Like

      • “I take it that the normative-functionalist account of sapience laid out by Brandom is sufficiently minimal and abstract that we can imagine its application/institution quite apart from their contingent human basis, even if human practices remain our only example.”

        Well, I think this depends on whether Brandom’s account is complete: i.e. that sentience and sapience cover the field of significant agency or cognition. I’ve got an independent argument to the effect that Brandom’s account isn’t complete here ( http://philpapers.org/rec/RODBAP) because he can’t suture practice and behaviour together without presupposing something like a Davidsonian ideal interpreter. So he cannot provide a general account of what it is to interpret or understand a stretch of behaviour. It doesn’t mean that his account of sapience isn’t broadly right, but merely that it is not particularly explanatory and certainly doesn’t have the transcendental scope that he claims for it. As Scott puts it, it’s just riddled with intentional posits.

        Ok, if my “Spectral Machines” argument goes through then the non-sapience of hyperplastics (which I grant) does not entail that they are not significant agents or cognisers since the sentience/sapience distinction is no longer exhaustive. And given that they would be uninterpretable to us, their possibility implies that we don’t have any transcendental or a priori grip on the nature of thought as such.

        ” For example Turing’s universal model of computation, which gives an account of computation abstracted from any substrate, and hence realizable in any substrate (I’m not claiming that mind is a Turing machine here, just illustrating a wider point). Prior to this conceptual revolution, engineers thought that different tasks would require different machines, whereas Turing abstracts computation into a universal machine that can play the same functional role whether realized in silicon or meccano. So there is at least some degree to which function, considered at a sufficient level of abstraction, has an autonomy from contingent instantiation ”

        This is well said. But it doesn’t affect my argument for intentional discourse lacking pragmatic value for hyperplastics. Some auto-interventions wouldn’t be psychologically significant obviously, but having access to an intentional stance wouldn’t tell the HP where and where not to tinker. The uselessness of intentional cognition follows. What doesn’t follow is that such entities would not (in some way) be agents or cognizers.

Cengez: I suppose Reza’s exhortation to reinvigorate thought is morally admirable, but it kind of presupposes that we know what it is that we’re asking for. Since he and Brassier get all their significant material from Sellars, Brandom, et al., their positions stand or fall with the coherence and adequacy of analytic Kantianism.

        Like

      • “in some way” be “agents or cognizers”? there might be things that are (presently) unknowable? Well yeah, but I’m happy enough to leave that to the sci-fi & fantasy authors. Let’s be clear: Brandom does semantics, not metaphysics or epistemology. Our transactions have meaning, conceptual content: how can our discussion get off the ground otherwise? Brandom gives us a non-metaphysical account of how meaning is conferred, taken, treated, understood. This is not a generic theory of ‘agency’ or a monopoly on epistemology. ‘Sapience’ has no pretensions to an absolute metaphysical account of intelligence, it simply explicates the structure of normativity that undergirds our engagement in any meaningful discourse.

        If we have no criteria of adequacy for what counts as an ‘agent’ (besides some ad-hoc vagary) then surely one must admit of either panpsychism or scepticism, since the folk philosopher can always ask: “How can you know that X is not (in some way) intelligent/agential?”. But then we can’t have a coherent argument at all, ‘agent’ could mean anything, everything, or nothing. I don’t see how it makes any sense to deny any transcendental conditions whatsoever on what counts as an ‘agent’ (or more generally, what it even is to count something as being thus-and-so), and nevertheless go on pre-critically wielding the term ‘agent’ (or even arguing at all).

        Without some fundamental deontology we can’t begin to hope for contentful discussion. Yet one cannot build a bridge from ontology to deontology, since to pose the question of being we must be able to give an account of questioning itself; would we really wish to claim that what is individuated by our ontology is so independently of the fundamental structure of our discourse? We needn’t by virtue of this methodological primacy give ontological primacy to this normative structure, in fact I wouldn’t claim it has any ontological import.

        Might there be communities or actors that are forever hermetically sealed off from us? Perhaps, but then they are literally meaning-less. What are the preconditions for the meaning of the other to be meaningful to us? What do we even mean by ‘meaning’? Enter Brandom stage left.

        Like

      • Matt,

        “Our transactions have meaning, conceptual content: how can our discussion get off the ground otherwise? Brandom gives us a non-metaphysical account of how meaning is conferred, taken, treated, understood. This is not a generic theory of ‘agency’ or a monopoly on epistemology. ‘Sapience’ has no pretensions to an absolute metaphysical account of intelligence, it simply explicates the structure of normativity that undergirds our engagement in any meaningful discourse.”

I’m not sure how to take the claim that Brandom doesn’t have a generic theory of agency. He purports to give us an account of the nature of intentionality. He distinguishes between first-class agents (who can confer normative statuses) and those with merely derived intentionality. He means his account to apply to human agents and meaners (since we’re the only extant model) but clearly doesn’t restrict the theory’s scope to humans alone. So it’s a form of transcendental humanism. What’s not generic about that?

        As for being non-metaphysical. Well, he certainly makes negative metaphysical claims since, as you know, his is a vehicle free account. Whether we should take seriously the claim that a normativist account is devoid of metaphysical content simply because it neglects to explain how norms are grounded in non-normative states or dispositions is another matter. What you take to be a virtue, I take to be a serious lacuna.

        ““in some way” be “agents or cognizers”? there might be things that are (presently) unknowable? Well yeah, but I’m happy enough to leave that to the sci-fi & fantasy authors. ”

        I’m not a foaming-at-the-mouth negative theologian and I’m only a part-time fantasist. I’ve supported my cases with arguments. They’re speculative obviously, but many of my assumptions are shared by my normativist opponents.

        Like

  5. “these new Promethean projects in one form or another, whether on the Right (Land) or Left (Brassier/Negarestani) seek to empower the inhuman at the expense of the human agenda”

Left Prometheanism sees the inhuman as continuous with the human, not as distinct from it; it denies that there is any ‘human agenda’, which would reify what the human is and ought to be. The autonomy of reason implies that the latter is already alien, and has nothing to do with any self-portrait of humanity. The Left Promethean sees capitalism as an irrational process: it may be autonomous, but it is not rational (as Land would have it), containing inherent contradictions and unable to do what its proponents say it does, a kind of machine stuck in a self-destructive endless loop.

Similarly (at least on my reading), there is no particular distinction (for Negarestani) between humans being replaced by inhuman machines and the bootstrapping of the human by technological/scientific prostheses. These are really just species of the wider genus that Negarestani takes as his object. He’s clear that even social formations can embody the inhuman.

    Like

Yea, even Land’s Techonomic formations have no scale-limit variable, so he can trace the agents at different scales. Let’s face it, both Brassier and Negarestani learned from Land back in the ’90s, even if they misrecognize much of their dependency on his thought. My only problem with it at all is that aligning such an inhumanist agenda with the Left seems self-defeating. It goes against the grain of Leftist thought, at least the Maoist/Leninist strains in Badiou/Zizek/Johnston, etc., which obviously oppose any naturalism of whatever stripe. Badiou would term these projects “democratic materialism,” as in his Logic of Worlds.

      Liked by 1 person

So much to say, so little time… Those who really think would say much less and do much more… All Reza wants to say is that thought is dead, let’s resurrect it… One must add to that: what’s the reason for worshipping reason, reason biting its own tail so much? By all means, keep masturbating one another up…

    Liked by 1 person

To misquote Martin Amis (I think): “When a wank is what you want, then a wank is just the thing”. But (coming back to my earlier point) do we even understand what “thought being dead” is if we don’t know what thought is?

      Liked by 1 person

      • I always like Heidegger’s: “What is most thought-provoking in these thought-provoking times, is that we are still not thinking.”

If thought is a product of thinking, then what exactly is thinking? Or, what are the conditions necessary for thinking thought? Of course this would lead to “thinking about thinking,” or metacognition: https://en.wikipedia.org/wiki/Metacognition

Of course Glaser’s succinct elaboration is always a nice fiction: “…thinking calls for a persistent effort to examine any belief or supposed form of knowledge in the light of the evidence that supports it and the further conclusions to which it tends. It also generally requires ability to recognize problems, to find workable means for meeting those problems, to gather and marshal pertinent information, to recognize unstated assumptions and values, to comprehend and use language with accuracy, clarity, and discrimination, to interpret data, to appraise evidence and evaluate arguments, to recognize the existence (or non-existence) of logical relationships between propositions, to draw warranted conclusions and generalizations, to put to test the conclusions and generalizations at which one arrives, to reconstruct one’s patterns of beliefs on the basis of wider experience, and to render accurate judgments about specific things and qualities in everyday life.” (Edward M. Glaser, An Experiment in the Development of Critical Thinking, Teachers College, Columbia University, 1941)

        Like

  7. This is a great piece. The only thing I think I would play up more, Adam, is the *speculative* nature of all these discourses, and the dismal track record of the kinds of intentional posits they use. If these posits haven’t been able to deliver anything more than perpetual controversy over 25 centuries of slowly changing cognitive ecologies, why should we expect them to find traction in radically changing ones?

    This is the point I always bring up on your site, I know, but it becomes out and out stark once one takes the future of the human as our domain. Why should anyone take their theories of cognition as anything more than ‘more mere speculation’?

I don’t know Land well enough to say, but I actually think the interesting theoretical distinction is that between the neotraditionalists, those like Brassier and Negarestani who think (the proper form of) prescientific theorization regarding cognition actually captures something essential, and those like Roden, Hickman, and myself who think this is simply more nooconservatism, the confusion of what are actually parochial socio-cognitive tools with the very shape of discursive possibility.

    For them, some kind of intentional/normative functional framework actually stands outside ecology. The only thing allowing them to make that assertion is the lack of any compelling scientific account of intentionality. Despite their aspirations to autonomy, their positions clearly turn on empirical bets regarding the nature of cognition. If my account is vindicated, for instance, then their position will be relegated to something akin to astrology.

Since I think human cognition involves astronomically complicated processes, and since traditional theoretical speculation regarding the nature of astronomically complicated processes on the basis of traditionally available information *is almost always debunked,* the idea that tweaking our traditional tool sets and characterizations will provide any useful roadmap going forward strikes me as very unlikely. Normativisms, I think, are abstract versions of Christian apologia, isolating what seems indispensable to some intentional grasp of human activity, then rationalizing it.

    The Semantic Apocalypse is more radical than either conceive because they (astoundingly) think the material basis of human activity is somehow irrelevant to their speculative accounts of human activity. (Check out: https://rsbakker.files.wordpress.com/2015/11/crash-space-tpb.pdf). Sociocognition is heuristic cognition. Heuristic cognition turns on cues possessing invariant differential relationships to the systems to be solved. What we are witnessing is, as David alludes to, the death of the ‘invariant background,’ the frame of default assumptions sociocognition uses to leverage solutions.

    It’s the end of meaning, plain and simple. The new conceptuality you call for will have to be a post-intentional conceptuality, I fear.

    Liked by 1 person

  8. Saying, “The only thing I think I would play up more, Adam, is the *speculative* nature of all these discourses…”

    While speculating something like this:

    “In exponential processes, the steps start small, then suddenly become astronomical. As it stands, if Moore’s Law holds (and given this, I am **confident** it will), then we are a decade or two away from God.” (https://rsbakker.wordpress.com/2014/04/13/the-blind-mechanic-ii-reza-negarestani-and-the-labour-of-ghosts/)

    It’s ok to speculate in our neurocognitively enlightened house of glass, but not outside of it.

    Like

    • I fear I don’t understand, unless you’re gaming the ambiguity of ‘speculation.’ Speculation in the sciences generally gets sorted, theory and praxis generally convolve, and the world is generally (often radically) changed thereby. Speculation in traditional philosophy generally never gets sorted, professionals generally bicker, pay their mortgages…

This is platitudinous, isn’t it?

      Like

      • “Speculation in the sciences generally gets sorted, theory and praxis generally convolve”

        So the pseudoscientific prophecy of the sort quoted above counts as scientific speculation these days?

        “This is platitudinous, isn’t it?”

        Yes, it truly is.


      • deviantconsciousness: I still don’t understand. So you don’t think scientists arbitrate their disputes? If you accept this as a platitude then what’s the problem? Or do you think you’re catching me out on my ‘logical positivism’ or something like that? If so, please explain why my theory of meaning puts me in anything resembling Ayer’s dilemma.

        Otherwise, I have no problem biting the speculative bullet. Why should I? At least my speculation doesn’t have 25 centuries of futility to embarrass it!


    • This isn’t news, really. Strange that. Set and Category theory are the two contenders for the foundations of mathematics. This battle has been going on for 80 years at least.

      Fernando Zalamea, in his Synthetic Philosophy of Contemporary Mathematics, is probably the best guide into this maze. Badiou, in his two-volume Being and Event (vol. II, Logics of Worlds), uses both, but prefers the set-theoretic as the politically viable matheme for ontology because of its alignment with Lacan’s “empty place”, the empty set being the Void, etc.

      So I can understand this poster’s problematic misrecognition… he isn’t a mathematician so probably doesn’t know the underlying history of mathematics that has in our time revived Category Theory. Stanford’s Philosophy site has a good intro:

      http://plato.stanford.edu/entries/category-theory/

      Thanks for the link, I’ll have to read the post through…

      I see that Voevodsky’s breakthrough came by accident through his study of type theory: “He realized that all this type theory stuff could be translated to be equivalent to homotopy theory, his field of mathematics. Not only that, but it could provide a new, self-contained foundation for all of mathematics”

      Of course homotopy theory sits within the larger framework of Category Theory, which reveals how different kinds of structures are related to one another. For instance, in algebraic topology, topological spaces are related to groups (and modules, rings, etc.) in various ways (such as homology, cohomology, homotopy, K-theory). Category Theory is in some ways a return to Structuralism, which as we know was anathema during the postmodern poststructuralist era. So the world swings… During my years as a Software Architect, Systems Analyst, Engineer, etc. I did a great deal of study of the history of types as part of an ongoing investment in Object-Oriented systems and distributed computing. But I am no pure mathematician… although I love mathematics!

      What was interesting is that he came to this realization by way of an engineering approach: he was seeking a shortcut for proof-checking, so he sought a way of developing an application that would do the work for him. Laziness has its virtues. As the article reports: “Yes, mathematicians who want to use a proof assistant will have to learn some things – essentially, it’s learning a programming language – but once they’ve made that investment, the process of using the proof assistant becomes pretty natural.” In fact, Voevodsky says, it’s a bit like playing a video game. You interact with the computer. “You tell the computer, try this, and it tries it, and it gives you back the result of its actions,” Voevodsky says. “Sometimes it’s unexpected what comes out of it. It’s fun.”
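      What that interaction amounts to can be glimpsed in a few lines of Lean, a proof assistant in the same propositions-as-types family as the Coq system Voevodsky used (an illustrative toy of my own, not anything from his library):

      ```lean
      -- In dependent type theory a proposition is a type and a proof is a
      -- term inhabiting that type. The statement `p ∧ q → q ∧ p` is a type;
      -- the function below is a term of that type, which the kernel checks
      -- mechanically.
      theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
        fun h => ⟨h.right, h.left⟩
      ```

      If the term fails to typecheck, the assistant reports back immediately; you try something else, it tries it, and gives you back the result of its actions, just as he describes.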

      After 40 years in the software trade one knows well that laziness is the mother of invention. Software developers (and, I’m sure, mathematicians) are all lazy and seek ways to do things as quickly as possible, developing shortcuts, or programs that can do the tedious, boring work of repetitive analysis that, if done by hand, might take years. Develop simplified systems that reduce time: that is a central axiom of software development.

      The other thing is what happened for Voevodsky: he came upon his breakthrough by accident. In search of an understanding of how to create a program he studied types (computer math), without knowing exactly what he was looking for: the breakthrough came when he found an equivalence between type theory and category (homotopy) theory. The recognition scene was the birth of his new idea, which is truly one of those Freudian Uncanny moments when the unfamiliar suddenly awakens what has long been familiar but unknown as such in one’s thinking. In other words, he knew what he was seeking but had no way of realizing that he knew until the recognition between two disparate theories snapped his mind awake.

      And, like anything else in the Open Source community, he is releasing his library to the community. I’m sure that this will lead to many breakthroughs in speeding up the proof-checking process, saving mathematicians a great deal of time while, at the same time, opening up the door onto theorems that many probably left to the side because of the exorbitant time involved. As he says:

      “Some of those computer verifications rely on a library of verified proofs that Voevodsky himself has created, so Voevodsky decided to submit his library to ArXiv. He imagined a one-page description of what the library is, along with all of his Coq files. It turns out, however, that ArXiv isn’t yet up to the task — while it can accept attached files, they can’t have any directory structure. Voevodsky plans on pestering the folks who run ArXiv until they make it possible.”

      In fact I did a search and found the Cornell site for ArXiv: http://arxiv.org/
      Along with the subsection for Category Theory: http://arxiv.org/list/math.CT/recent


  9. Ok, since we have had our fun and games, as well as our share of offensive remarks adding insult to injury, we can now engage with the heart of the matter at hand… I shall therefore join this intellectual orgy and attempt to clarify my stance on some of the issues dealt with herein, in the hope of avoiding further confusion, by composing a brief addendum to Craig’s addendum to his informative post on the past, present and future of humanity in relation to non-human entities…

    The question is: “How does it further our understanding of the subject to situate neuroscience in the context of transcendental realism/materialism and non-reductive naturalism?” The answer I have in mind to this question is that “the ideas are objects we are embedded in and embody at once.” This ontological/epistemological principle is the point of departure for a broader research into the developmental possibilities of a new mode of enquiry which would put philosophy and neuroscience into a more interactive relationship with one another, driven not only by the dialectical process constitutive of the methodological differences between natural sciences and philosophy, but also by the sustenance of a generative interaction between the ontological and epistemological modes of being and thinking.

    As is well known since Kant, the instruments (software and hardware tools) social and natural scientists have at hand to investigate natural and cultural phenomena play a very significant role not only in the analysis, but also in the production of the object/subject of study itself. This is a venture into the relationship between the manifest and the scientific images of humanity designated by Wilfrid Sellars’ transcendental realism and Alain Badiou’s materialist dialectics of the human animal and the immortal subject of truth. The rigorous disjunction introduced by Sellars and Badiou between sentience and sapience demonstrates, at least in theory, that the constitutive link which has come to be considered missing between mental phenomena and physical entities is actually a non-relation rather than an absence of relation: it is neither transcendental nor immanent to the subject, but is rather the manifestation of a pure affectivity intervening in the ordinary flow of things, initiating a rupture in time itself. For the subject is now in the domain of the death-drive, a concept introduced by Freud into the field of psychology together with a paradigmatic change of the field itself into meta-psychology. This innovative act was a consequence of Freud’s dissatisfaction with the neurobiology of his day, which did not even ask many of the questions he had in mind, let alone answer them.

    If we keep in mind the Parmenidean axiom that “thought is being”, it becomes clear why, in his article on Plato, Kant and Sellars, Brassier tries to answer the question of how to orient ourselves towards the future in accordance with that which is not. Against the idea that thought and being are one and the same thing, Brassier claims that thought is non-being rather than being. Put otherwise, the correlate of thought is non-being rather than being, being and non-being are entwined.

    While Brassier openly asserts that he endorses a “transcendental realism” by way of a rigorously affirmative reading of Wilfrid Sellars’ take on the subject in comparison with Thomas Metzinger’s “self-model theory of subjectivity”, Adrian Johnston takes upon himself the task of refuting John McDowell’s theory of “first and second nature,” proclaiming a transcendental materialist theory of subjectivity as a Phenomenology of Spirit for today, in the light of the recent developments in neuroscience, that is. Metzinger is well known for his innovative research and novel output on the subject. That said, although Metzinger is an eminent neuroscientist and philosopher of cognition, he reserves no room at all for affects in relation to consciousness and agency in his books on the self as no one and the subject as non-being. Lacking a sufficient theory of the subject as an agent acting in accordance with rational thought, or a “rational self-consciousness” as Sellars would put it, a conception of subjectivity as agency in the service of truth as manifestation of a dynamic real, Metzinger remains trapped in Plato’s cave with his “phenomenal self-consciousness”, thereby failing to give an account of how a more than material subjectivity emerges from matter itself. What is required today is a conception of self-consciousness which also includes the concepts of affectivity and agency within the field of neurobiology, or a “non-reductive naturalism” as John Mullarkey puts it.

    I would like to introduce into this ongoing discussion the concept of affect and the role of agency in relation to the formation of concepts and percepts, in and through a close inferential analysis of the effects of the subject’s relation to pain and suffering, as well as joy and pleasure, in its own self-constitutive process.

    In a world wherein conscious desire is absent, one cannot know what is to be done, what can be done, and how to do it. The reduction of consciousness to physical matter deprives humanity of the possibility of rationally intended change. To intervene in the workings of nature solely by way of that which nature presents independently of culture is to fall into a trap one sets for oneself. It is not only necessary, but also possible to develop a theory of self-conscious subjectivity as being aware of one’s embeddedness within one’s own time and space. Thought can mean something only in so far as it is situated within an already given context indeed, but for thought to mean something worthy of the name of truth it also has to leave the old paradigm behind, change the coordinates, and perchance initiate a new course of continuity in change, separate from but in contiguity with the “myth of the given” at the same time. The emergence of a “more than material subjectivity arising from matter itself” is indeed a consciously desirable drive to sublate the very mode of being and thinking in which the subject is embedded and which it embodies at once.

    It is a matter of realising that theory and practice are always already reconciled and yet the only way to actualise this reconciliation passes through carrying it out and across by introducing a split between the subject of statement (the enunciated content) and the subject of enunciation (the formal structure in accordance with this content). In Hegel’s work this split is introduced in such a way as to unite the mind, the brain and the world rather than keeping them apart. It is a separation which sustains the contiguity of these three constitutive elements of consciousness, not only as concept but also as percept and affect. The presumed dividedness of philosophy into the analytic and the continental theories of mind, language and cognition is not a division between different modalities of the same thing, this division is rather between something and nothing, and therein resides a gap that splits as it unites the physical and the metaphysical in a fashion analogous to the synapses connecting and disconnecting the neurons in the brain.

    All this, of course, is merely the tip of the iceberg which shall eventually emerge out of the sea of madness disguised as inferential rationality. In the way of writing a future history of a non-reductive and non-physicalist philosophical agenda now, the mode of being and thinking I’m trying to animate is driven by a concept in process. For the time being I name it Hermetico-Promethean post-nihilism, if I may be forgiven for such an ambitious expression with my rather dim intelligence, that is…


    • Hermetico-Promethean post-nihilism… nice phrase. Yet, reading that, I almost cringe, in the sense that you seem to have admitted that philosophy is in a quagmire, a muddle of in-betweenness… at a crossroads where it is divesting itself of excess baggage, but not quite sure which way to move forward. While those in, let’s say, the neurosciences are no longer muddled about what the brain thinks metaphysically, but concern themselves instead with the very real material processes and conditions that give birth to thought to begin with.

      I’m in agreement with Scott Bakker that, to tell the truth, we’re all a little blind to anything at all, especially to the so-to-speak knowledge we think we can infer about ourselves or the world. In truth most of what we know will not get us to what is unknown; we spin our circles in repetitive chatter, turning new tropes and phrases, positing new axioms, or, as you have, refurbishing the idealism of Parmenides’ “thought is being” against those like Leucippus/Democritus/Lucretius. And, of course, there is the divide between the neo-Hegelians of the Pittsburgh school, Brandom/McDowell, along with Sellars’ neo-Kantian appraisals, on one side; and Badiou, Zizek, Johnston, Malabou (and don’t forget the vitalists: Deleuze, DeLanda, etc.) on the other side of this divide.

      I think what you’ve marked out correctly is that almost all of the various battles are not over vocabularies (Rorty), but rather over the Framework itself: Which ontology is most viable? And how is knowledge (epistemology) to move forward in a world where thought is giving way to the non-human? How to think thought without humans? You, like Brassier and Negarestani, seem to be staking out some form of radically new humanism, since you’re still keeping the human as a category within a “space of reasons”, even as you decenter thought from the humanist traditions. My feeling is that the human as a category will need to be elided and eliminated going forward. For too long philosophy and even the sciences have tried to keep the “human” in sight. Why? Even all our current research on AI, robotics, etc. seems to want to ape the human, whether physically, in look-alike human android dolls, or mentally, in duplicating human affective, imaginative, and intellectual functions. Ultimately whatever comes out of this will need to be something other than our trivial notions of humanity and its goals or purposes. More in line with my friend David Roden and his Disconnection Thesis.

      Do we truly want to save the intentional mind? Are mental states anything other than “folk psychology”: nice fictions or heuristic devices to keep the truth at bay? Let’s finally admit that our emotions/affects have gotten us into more trouble than they’re worth. Can humanity survive without affects? What would such a life-form be? Should we be selective about which affects to keep and which to splice out? Will that even be possible? Is it genetic? Is it part of the energy system of the brain/body interaction, rooted in ancient evolutionary survival mechanisms that persist as remnants? We are blind to these things. Even all the neuroimaging devices can only describe what the processes do, not what they are in philosophical terms. The sciences couldn’t care less about our metaphysical appraisals; they deal with the pragmatic terrain of what works, and leave the giving and asking of reasons to metaphysicians or fictionalists. Who will make the judgement calls if and when the neuroscientists unlock thought and reveal the mechanics of the brain in detail? Are we even ready to accept just what we are? Just how deterministic our universe truly is, and how much we love to fill the gap of our blindness with fantasies? In fact Bakker would say that all of philosophy is just that: a two-thousand-year-old epic fantasy. One we’ve all agreed to agree on: a consensual hallucination carried on from generation to generation to defend us from the truth the sciences are slowly uncovering. Maybe, as T.S. Eliot once said, “human kind cannot bear very much reality.” And Nietzsche knew we need our illusions, that the truth kills us. So who is right?

      When I look around me I see grown men and women buying into all these mad transhuman and posthuman projects of “transcendence”, of moving beyond the limits of the human: Is this truly the direction of science and philosophy in our time? Reading Negarestani just crystallized much of this for me, for he is implementing this as a normative program for philosophy: the task of constructing a viable intelligence beyond the human scale (i.e., whether of artificial intelligence, collective intelligence, or any of a multiplicity of otherings). This notion of suturing philosophy to an engineering programmatic task seems like a new totalitarianism of the Intellect against affect and imagination; yet, as Brassier has noted, he is seeking a merger. But what does he mean by a merger of intellect and imagination? This cannot be the same as those who, like the Romantics or Blake, held out for a primacy of Imagination against Reason. Brassier is still a child of the Enlightenment, and suborns his thought to the sciences and to the consent of the many, or the tribunal of collective judgement on truth, etc. Negarestani seems like a new Lenin of the Left seeking to impose new rules and regulations on the lumpenproletariat, proletariat, and elites. The normative giving and asking of reasons is a Game of Morals, even if it breaks free of the whole voluntarist (God, Will) tradition. They seek an ethic that is no longer tied to the telos of ancient thought, negative theology, or theophilosophical reflection, as well as one disconnected from Instrumental Reason. A new form of Reason seems in the offing. Badiou and Zizek seek Truth or Act. Johnston the reenlistment of the vitalist or biological kingdom within the ontological fold.

      Badiou divided French Philosophy into two traditions: the mathematical idealist ontologies and the vitalist poetic ontologies. Is this all? The New Materialists following Deleuze, DeLanda, Braidotti, and others carry on this vitalist worldview. Badiou, Zizek, Johnston and their progeny pursue a crossover of Idealism/Materialism into an Immaterialism in which Hegel’s concrete universal is reinstated not in a substantial formalism (à la Harman, Bryant, Bogost, Morton), but rather in its opposite, the Void. Yet even in Harman the formalism is based on the Void of an actual vacuity, rather than Aristotelian substance. Harman’s constructed objects of the real inhabit that void of the insubstantial/immaterial as objects rather than subjects. Harman was always a careful reader of Zizek and constructed a system opposite his, with the Object replacing the Subject and the non-human replacing the human.

      The more I read current trends I find myself always coming back to Zizek’s “traversing the fantasy” (Lacan). Yet, this whole system depends on the lack in the Other (Symbolic Order). We construct our fantasies to fill the gap in our knowledge, imposing our ideologies on reality as totalistic systems that continually need to be shored up. The Real is this gap of unknowing antagonism in things that will not be suborned or sutured to our Symbolic Order. So we move like madmen in a circle of doubt and horror between the truth and our fantasies.


      • That was very enlightening as usual dear Craig… You have become an expert in clarifying thoughts and nailing them with precision beyond measure… That said, I have to double-think before I properly respond, for not being a native speaker I have to think at least twice before I speak… But I can see your point, let it suffice for the time being to say that on the eve of philosophy and beyond it the logic of sense lost itself in calculations without end, only to find itself in this poetic (non-philosophical) time of truth locked in an infinite process of eternal dismemberment beyond clocks…


      • “The more I read current trends I find myself always coming back to Zizek’s “traversing the fantasy” (Lacan). Yet, this whole system depends on the lack in the Other (Symbolic Order). We construct our fantasies to fill the gap in our knowledge, imposing our ideologies on reality as totalistic systems that continually need to be shored up. The Real is this gap of unknowing antagonism in things that will not be suborned or sutured to our Symbolic Order. So we move like madmen in a circle of doubt and horror between the truth and our fantasies.”

        Very, very well put. And the thing to always remember: the machinery of science continues churning, revolutionizing, deforming, no matter how much CO2 these traditional philosophers dump into the atmosphere. At some point we have to recognize that traditional philosophy has become a BIG part of the problem, clouding, as it does, the abyssal proportions of the problem. It has become part of the fantasy apparatus, the font from which endless rationalizations of exceptionalism flow.

        Big Meaning instead of Big Tobacco.


      • To begin at the beginning we shall say that philosophy is the dialectical process of truth in time; it is an infinite questioning of that which is known, a continuity in change of the unknown, a practice of situating eternity in time. Without a relation to the requirements of one’s own time philosophy may still mean many things, but these do not amount to much worthy of rigorous consideration. This doesn’t mean that philosophy must have an absolute conception of good and constantly strive towards it. Quite the contrary: if anything, philosophy would much rather resist the evil within this inconsistent multiplicity falsely named world. No, there is no one world against which philosophy can situate itself, but rather many multiplicities out of which philosophy infers meanings and values with a better future in mind. Not necessarily better than today, but less worse than it will have been if nothing is done to slow down the worsening. So having an idea of a better future is not necessarily imposing a totality, an absolute conception of goodness, upon the multiplicity of existents. What’s at stake might as well be that the resistance against evil in time is itself a creative act sustaining the less worse condition of future existence. It’s all bad and it can only get worse; the question is this: how can we decelerate this worsening condition of we humans, we animals and we the plants?

        My interest in science in general and neuroscience in particular derives from this understanding of philosophical activity as a dialectical process in nature. For me science is not an object of philosophy but a condition of it. Presumably you can already hear Badiou’s voice herein, and rightly so I must say. Badiou had once said that “philosophy is the conceptual organisation of eternity in time.” What, then, is dialectic? Dialectic is simply “the unity of opposites,” as Fredric Jameson would have put it. Everything has within itself nothing, and inversely. The self and the other are always already reconciled, but in order to actualise this unity philosophy splits the one in such a way as to sustain the process of its reconciliation within itself. The one is not; it all begins with two and continues ad infinitum. Of course a designation such as Hermetico-Promethean post-nihilism is paradoxical, but this paradoxicality is itself creative of the space out of which something not only new but also good, or less worse than that which is or could be, can emerge. That said, a positively altered future itself only ever emerges from a split introduced in-between the past and the present, the good and the bad…

        Now, I see nothing bad in interrupting the process of negativity, but needless to say one cannot achieve this by affirming it. One still needs negativity to interrupt negativity. It is in this sense that nihilism turned against itself becomes a condition of progressive philosophy. If science is making huge progress while the whole planet is rapidly dying, what’s the point of that progress in science? It becomes a meaningless activity for its own sake. Without a future there can be no science either, but it is only by way of putting science to good use that we can have a future. And when I say we I mean we humans, we animals and we the plants. Paradoxical though it may sound, robots are of no concern to me, but enhancement technologies such as neuroplasticity software are…

        I noticed in your comment that you are trying to situate me in this or that camp. No, I take whatever rings true to me in accordance with my intention. Intending something is not necessarily willing without consciousness. One may be driven to anything at all, including willing nothingness, as Nietzsche has taught us, adding that “man would much rather will nothingness than not will.” Although Nietzsche’s proclamation may be valid for some, it is not necessarily valid for all. As I said two comments ago, so I say again now: I’m still up for consciously desiring a good life. That said, I reckon it’s not even worth mentioning that will, drive and desire are not the same thing. As for the difference between consciousness and self-consciousness, we must return to Hegel as always. There are indeed many illusions in this life, some for life yet some others not, some necessary while some irrelevant. Not that I am one, and yet it’s not for nothing that Hegel had once said, “the great man of his time is he who expresses the will and the meaning of that time, and then brings it to completion; he acts according to the inner spirit and essence of his time, which he realizes.” This, I think, is still true and ever will be, if we are to have a future worthy of the name, that is…


      • I appreciate you taking the time to clarify. 🙂

        Yes, I can see the influence of Badiou’s ontology, as well as Brassier/Negarestani with the Pittsburgh Brandom Neo-Hegelian thought, clearly. I remember your book on Freud’s drives well, too. Of course I follow Zizek against your specific reading of Hegel and the sense of reconciliation: “Reconciliation does not mean that the subject finally succeeds in appropriating the otherness which threatens its self-identity, mediating or internalizing (i.e., “sublating”) it. Quite the contrary, Hegelian reconciliation contains a resigned note: one has to reconcile oneself with the excess of negativity as a positive ground or condition of our freedom, to recognize our own substance in what appears to be an obstacle.”

        The point is that we break through to the Real only when the fantasy we’ve constructed to fill the gap of our unknowing fails us; only then does the notion of a reconciliation become possible, not by incorporation and change of reality, but rather by acceptance and acknowledgement of the empty place and hole in reality that can never be filled with a fantasy. Our knowledge of things, too, fails us due to our incompleteness and blindness, and can never be reconciled and made whole, because reality too is incomplete and not whole. Therefore the dialectic is born of oscillations between opposing forces that can never be reconciled except as limiting fictions or heuristic fantasies, devices, apparatuses that help us get on with our work. And since, as in Badiou, science is one of the conditions of philosophy, and philosophy is conditioned by the sciences in a dialectical interplay that cannot be closed down or completed, the future is always open and incomplete, open for invention and surprise, wonder and change. Philosophy is always retroactive: it comes after the fact of those conditions that condition and are conditioned by it. Our judgements come after the condition, but also condition the very logic of worlds those conditions point to and are situated in as events: “situations” (Badiou) or “acts” (Zizek).

