Question from a comment on Plato and the Horizon of Meaning

Jan Cavel asked:

On http://veraqivas.wordpress.com/2014/12/08/plato-is-not-platonism/ you comment „Platonism is the fact that one is always bound by his horizon of meaning”. Could you please expand this? Of course, the whole article is about this small excerpt, but I would really appreciate if you could find the time to take one or more hits at this „binding of one with his own meaning-horizon”. Thank you… 

The point being that we are bound to Plato’s horizon of meaning even if we oppose it. He set the terms of the debate, and no single philosopher – not Descartes, not Kant, not Heidegger – has yet escaped this circle of meaning or produced something new outside its horizon. Can we think the other? Can we move outside, or from within the labyrinth navigate the multiplicities and produce something else: another ‘horizon of meaning’? Perhaps not… or yes?

Long ago my university philosophy mentor used an example I still remember: he’d draw a circle on the blackboard and place us in it, then draw another circle just beyond it and place certain key thinkers in it. What these thinkers do, he would suggest, is revise and remap the truths of the former circle retroactively, giving them a larger stamp for the mind that allows us to think new ideas – thoughts that have shifted due to our technologies, accidents of that intersection between mind and its creations. It is in this strange anomaly at the intersection of technology and thought that new Ideas emerge in time and expand our original horizon of meaning. That notion stuck with me, and I’ve been studying the dialectical interactions of humans and technology in philosophers and other thinkers ever since. For me what matters is this dialectical interaction not of ideas in our mind, but of those processes we shape that in turn reshape us and open up possibilities for further exploration and creation.

Ideas are not the immortal engines of creation but rather accidents of time: they arise at the intersection of humans and technology, in a dialectical relationship that over time has become so ubiquitous we no longer see the process for what it is. Technology is not the artifact of eternal Ideas, and neither is it some objectified Idea in the mind. Technology is this dialectical process in praxis, an ongoing temporal interaction and negotiation of reality rather than a trace run of our finitude. Technology is the way we navigate the world, a vehicle for exploring the farthest reaches of our own horizons of meaning. As we invent new forms of technology they open up our horizons of meaning, and those circles revise the maps of the mind and offer greater possibilities.

Language itself is the most ubiquitous technology we’ve invented so far, and in turn it has shaped our cultures and civilizations beyond the base set of relations we needed to survive on this planet. It did not come full-blown, but was a slowly modulated process of give and take as we used it to forge relations with reality and each other. Language is a technology. It was developed over time, and, as many linguists agree, it doesn’t last (i.e., all languages change and become obsolete or are transformed through temporal processes). Words are tools for negotiating reality. As our understanding changes so do the tools, and new words are grafted onto the structure of language to shape new ideas till they too die and are once again replaced by better tools. But this is only an example, not a reduction to the linguistic signs of the Linguistic Turn.

I simplified, obviously. I mean that one is always a proponent of, neutral toward, or an enemy of Plato’s realism of Ideas: whether they exist eternally beyond, within, or in nature – the central core of Idealism – or whether something else might explain all this.

Take for instance Slavoj Zizek’s use of this tradition out of the German Transcendental movement. What he terms ‘dialectical materialism’ does not oppose this notion of Ideas per se, but rather stipulates the obverse: instead of Immortal Ideas as efficient causal engines of reality, he tells us they are accidents of time, that they are mortal; they are not sources of efficient causation, but rather the endpoint of a process of immanent production (not Schelling’s productivity: Ideas are not essences; rather, ideas emerge from pre-ontological forces, the flux between two vacuums, etc.). They emerge in time, are succeeded by other ideas, die off, and are replaced (the main drift being that Ideas exist, but only in time, not outside it in some eternal sphere of immortal splendor). Yet even Zizek is bound by the horizon of meaning that Plato set over two thousand years ago, and works against this tradition of the meaning of Ideas. Zizek takes his notions from the Den of Democritus and aspects of modern String Theory and quantum flux.

Was Plato a Platonist: The Theory of Forms

My friend Virgilio A. Rivas over at Kafka’s Ruminations thinks I have reduced Plato to the tradition of Platonism, accusing Plato of being an Idealist. I was not the first to do so, nor will I be the last. It all hinges on Plato’s Theory of Forms. As Virgilio describes it:

The chief problem of reducing Plato to an idealist is the assumption rarely interrogated that Plato is Platonism. History should be our guide. Platonism is not Plato.

If anyone began the whole tradition of Platonism as Idealism it would have to be Plato’s prime pupil, Aristotle, who described Plato in the first book of the Metaphysics (Metaph. A6, 987a32–b10):

In his youth he [Plato] had become familiar first of all with Cratylus and with Heraclitean views to the effect that all perceptible things are always in flux, and there is no knowledge that relates to them. This is a position he later subscribed to in these terms. Socrates, on the other hand, engaged in discussion of ethics, and had nothing to say about the general system of nature. But he was intent on finding out what was universal in this field, and was the first to fix his thinking on definitions. Plato followed him in this, and subscribed to the position that definition relates to something else, and not to the perceptibles—on the kind of grounds indicated: he thought it impossible for there to be a common definition of any of the perceptibles, since they were always changing. Plato, then, called these kinds of realities “ideas,” and claimed that the perceptibles were something in addition to them, and were all spoken of in terms of them—what he said was that by virtue of participation, the many shared their names with the forms.1

This notion of imperceptible Universals (“ideas”, “Forms”: from Greek εἶδος (eidos) and ἰδέα (idea)) as the organizing force of perceptibles is the central tenet of both forms of Idealism: the two-world theory based on abstract Universals, and the one-world or immanent theory based on Hegel’s “concrete universals”, etc. This notion that perceptibles (objects of the senses) were supplements to the “ideas” or properties and appendages of the ideas themselves is central to Aristotle’s conception of Plato’s theory of forms. This intermingling of form and property begins the whole battle of what I’ve termed substantial formalism and its traditions in Platonism.

But before we tease out the history of Platonism we need first to understand what Plato himself taught us in his own dialogues. I’ll admit that not being a scholar of ancient Greek is, for me, a handicap, in that I usually depend heavily on both etymological understanding and the history of translations and transliterations of terms. To speak of Plato or Aristotle is to have invested in an understanding of the terms they used; otherwise one is truly handicapped, unable to tease out the nuances of the linguistic signs that harbor specific flavors and colors (i.e., tropes of rhetoric, figures of thought or speech, etc.).

As we find even on Wiki, the notion of form has a pre-history in its linguistic use (here):

The Greek concept of form precedes the attested language and is represented by a number of words mainly having to do with vision: the sight or appearance of a thing. The main words, εἶδος (eidos) and ἰδέα (idea) come from the Indo-European root *weid-, “see”. Eidos (though not idea) is already attested in texts of the Homeric era, the earliest Greek literature. Equally ancient is μορφή (morphē), “shape”, from an obscure root. The φαινόμενα (phainomena), “appearances”, from φαίνω (phainō), “shine”, Indo-European *bhā-, was a synonym.

The point to be made here is that even for Plato there was a ready-made concept floating in the language, one he was able to appropriate and turn toward his theory of Universals (i.e., the notion of Forms has a history, and is not a neologism). Plato’s most explicit statement of the Theory of Forms (one finds it in many dialogues on Beauty, Goodness, Justice, etc., but implicit rather than explicit) comes late in his Republic, where he describes the Allegory of the Cave.

In the allegory, Plato likens people untutored in the Theory of Forms to prisoners chained in a cave, unable to turn their heads. All they can see is the wall of the cave. Behind them burns a fire. Between the fire and the prisoners there is a parapet, along which puppeteers can walk. The puppeteers, who are behind the prisoners, hold up puppets that cast shadows on the wall of the cave. The prisoners are unable to see these puppets, the real objects, that pass behind them. What the prisoners see and hear are shadows and echoes cast by objects that they do not see. 

What Plato hints at is that these prisoners, because of their place in the cave, unknowing of the real world behind and above them, will mistake appearance (φαινόμενα (phainomena), shadows) for reality. They will take the shadows on the wall of the cave for the real, never knowing that it is the ideas casting their shadows on the wall. All of this comes to Plato’s point that when we speak of things we are wrong: when I point to a dog, the dog I point to is a shadow of the real dog lodged somewhere behind and above me in the real world of Ideas or Forms. The concrete dog in front of me is, according to Plato, an illusion of the senses.

If the prisoners are released, Plato tells us, they can turn their heads and see the real objects. Then they realize their error. What can we do that is analogous to turning our heads and seeing the causes of the shadows? We can come to grasp the Forms with our minds, he tells us. For Plato every appearance we perceive through the senses participates in these eternal Forms: what we see is a reflection of the Forms rather than their reality. Yet we can never gain access to this eternal realm of ideas by way of the senses, but only through Reason and the arduous path of philosophy.

At the end of the Phaedo, when Socrates confronts his friend Crito with the stark fact of his physical death, he reminds Crito that his corpse is not Socrates – that Socrates will continue on because his true Form is deathless:

I do not convince Crito that I am this Socrates talking to you here and ordering all I say, but he thinks that I am the thing which he will soon be looking at as a corpse, and so he asks how he shall bury me. I have been saying for some time and at some length that after I have drunk the poison I shall no longer be with you but will leave you to go and enjoy some good fortunes of the blessed, but it seems that I have said all this to him in vain in an attempt to reassure you and myself too. Give a pledge to Crito on my behalf, he said, the opposite pledge to that he gave the jury. He pledged that I would stay; you must pledge that I will not stay after I die, but that I shall go away, so that Crito will bear it more easily when he sees my body being burned or buried and will not be angry on my behalf, as if I were suffering terribly, and so that he should not say at the funeral that he is laying out, or carrying out, or burying Socrates.2

The point Plato makes here is that the real Socrates, the Idea that is concrete (here and now), is not the physical appearance of Socrates; rather, the idea that immanently organizes and orders his speech and thoughts is the real Socrates, not the dead corpse (physis) that Crito will bury or burn later on. It is this very soul, the essence, the very real eidos and substance of Socrates, that will soon be sitting at the banquet table of the gods making merry, etc.

One could provide example after example to illustrate the point of the Forms, but now I need to turn to its reception and use within what my friend Virgilio calls “Platonism”. For Platonism is this very reception of Plato’s terms and their use or abuse in the long shadow Plato casts across the centuries among his followers and detractors.

I’ll take this up in another post… I need a break and a moment to walk my old bones, being a “lover of the body” rather than a “lover of learning” I like to wander among the shadows. 🙂

1. Fine, Gail (2008-07-16). The Oxford Handbook of Plato (Oxford Handbooks) (p. 50). Oxford University Press. Kindle Edition.
2. Plato; Cooper, John M.; Hutchinson, D. S. (2011-08-25). Complete Works (Kindle Locations 3129-3136). Hackett Publishing. Kindle Edition.

Slavoj Zizek: On Hegel’s Identity of Opposites

The same goes for crime and the law, for the passage from crime as the distortion (negation) of the law to crime as sustaining the law itself, that is, to the idea of the law itself as universalized crime. One should note that, in this notion of the negation of negation, the encompassing unity of the two opposed terms is the “lowest,” “transgressive,” one: it is not crime which is a moment of law’s self-mediation (or theft which is a moment of property’s self-mediation); the opposition of crime and law is inherent to crime, law is a subspecies of crime, crime’s self-relating negation (in the same way that property is theft’s self-relating negation).

A Habermasian “normative” approach imposes itself here immediately: how can we talk about crime if we do not have a prior notion of a legal order violated by the criminal transgression? In other words, is not the notion of law as universalized/self-negated crime ultimately self-destructive? But this is precisely what a properly dialectical approach rejects: what is before transgression is just a neutral state of things, neither good nor bad (neither property nor theft, neither law nor crime); the balance of this state is then violated, and the positive norm (law, property) arises as a secondary move, an attempt to counteract and contain the transgression. In Martin Cruz Smith’s novel Havana Bay, set in Cuba, a visiting American gets caught up in a high nomenklatura plot against Fidel Castro, but then discovers that the plot was organized by Castro himself.30 Castro is well aware of the growing discontent with his rule even in the top circle of functionaries around him, so every couple of years his most trusted agent starts to organize a plot to overthrow him in order to entrap the discontented functionaries; just before the plot is supposed to be enacted, they are all arrested and liquidated. Why does Castro do this? He knows that the discontent will eventually culminate in a plot to depose him, so he organizes the plot himself to flush out potential plotters and eliminate them. What if we imagine God doing something similar? In order to prevent a rebellion against His rule by His creatures, He Himself—masked as the Devil—sets a rebellion in motion so that He can control it and crush it. But is this mode of the “coincidence of the opposites” radical enough? No, for a very precise reason: because Castro-God functions as the unity of himself (his regime) and his opposite (his political opponents), basically playing a game with himself.
One has to imagine the same process under the domination of the opposite pole, as in the kind of paranoiac scenario often used in popular literature and films. For example: when the internet becomes infected by a series of dangerous viruses, a big digital company saves the day by creating the ultimate anti-virus program. The twist, however, is that this same company had manufactured the dangerous viruses in the first place—and the program designed to fight them is itself the virus that enables the company to control the entire network. Here we have a more accurate narrative version of the Hegelian identity of opposites.

V for Vendetta deploys a political version of this same identity. The film takes place in the near future when Britain is ruled by a totalitarian party called Norsefire; the film’s main protagonists are a masked vigilante known as “V” and Adam Sutler, the country’s leader. Although V for Vendetta was praised (by none other than Toni Negri, among others) and, even more so, criticized for its “radical”—pro-terrorist, even—stance, it does not have the courage of its convictions: in particular, it shrinks from drawing the consequences of the parallels between V and Sutler.31 The Norsefire party, we learn, is the instigator of the terrorism it is fighting against—but what about the further identity of Sutler and V? We never see either of their faces in the flesh (except the scared Sutler at the very end, when he is about to die): we see Sutler only on TV screens, and V is a specialist in manipulating the screen. Furthermore, V’s dead body is placed on a train with explosives, in a kind of Viking funeral strangely evoking the name of the ruling party: Norsefire. So when Evey—the young girl (played by Natalie Portman) who joins V—is imprisoned and tortured by V in order to learn to overcome her fear and be free, does this not parallel what Sutler does to the entire British population, terrorizing them so that they rebel? Since the model for V is Guy Fawkes (he wears a Guy mask), it is all the more strange that the film refuses to draw the obvious Chestertonian lesson of its own plot: that of the ultimate identity of V and Sutler. (There is a brief hint in this direction in the middle of the film, but it remains unexploited.) In other words, the missing scene in the film is the one in which, when Evey removes the mask from the dying V, we see Sutler’s face. How would we have to read this identity?
Not in the sense of a totalitarian power manipulating its own opposition, playing a game with itself by creating its enemy and then destroying it, but in the opposite sense: in the unity of Sutler and V, V is the universal encompassing moment that contains both itself and Sutler as its two moments. Applying this logic to God himself, we are compelled to endorse the most radical reading of the Book of Job proposed in the 1930s by the Norwegian theologian Peter Wessel Zapffe, who accentuated Job’s “boundless perplexity” when God himself finally appears to him.

Expecting a sacred and pure God whose intellect is infinitely superior to ours, Job finds himself confronted with a world ruler of grotesque primitiveness, a cosmic cave-dweller, a braggart and blusterer, almost agreeable in his total ignorance of spiritual culture …

What is new for Job is not God’s greatness in quantifiable terms; that he knew fully in advance … what is new is the qualitative baseness. In other words, God—the God of the Real—is like the Lady in courtly love, He is das Ding, a capricious cruel master who simply has no sense of universal justice. God-the-Father thus quite literally does not know what He is doing, and Christ is the one who does know, but is reduced to an impotent compassionate observer, addressing his father with “Father, can’t you see I’m burning?”—burning together with all the victims of the father’s rage. Only by falling into His own creation and wandering around in it as an impassive observer can God perceive the horror of His creation and the fact that He, the highest Law-giver, is Himself the supreme Criminal. Since God-the-Demiurge is not so much evil as a stupid brute lacking all moral sensitivity, we should forgive Him because He does not know what He is doing. In the standard onto-theological vision, only the demiurge elevated above reality sees the entire picture, while the particular agents caught up in their struggles have only partial misleading insights. At the core of Christianity, we find a different vision—the demiurge is a brute, unaware of the horror he has created, and only when he enters his own creation and experiences it from within, as its inhabitant, can he see the nightmare he has fathered.

Slavoj Zizek (2014-10-07). Absolute Recoil: Towards A New Foundation Of Dialectical Materialism (pp. 269-271).

Accelerationism: The New Prometheans – Part Two: Section Two

“The utopian currents of socialism, though they are historically grounded in criticism of the existing social system, can rightly be called utopian insofar as they ignore history …, but not because they reject science.”

– Guy Debord, Society of the Spectacle

“…the best Utopias are those that fail the most comprehensively.”

– Fredric Jameson, Archaeologies of the Future

But what of the history of the future? Has anyone written of that territory beyond the moment: of its struggles or its failures; and what of its successes? Who will mention a nostalgia for the future? Jameson would ask the question of culture: whether culture could be political – that is, whether it could be both critical and subversive – or whether it is necessarily reappropriated and coopted by the very social system it seeks to escape.1 Daniel Rosenberg and Susan Harding remark on just such a sense of loss in their Histories of the Future, saying, “our sense of the future is conditioned by a knowledge of, and even a nostalgia for, futures that we have already lost.”2

One remembers the Japanese film Battle Royale (2000), where civilization is in a state of chaos and violence by rebellious teenagers in schools is completely out of control. The government hits back with a new law: every year a school class picked at random will be cast away on a desert island to fight it out among themselves. The rules are simple: it lasts three days, everyone gets water, food and a weapon, and only one may survive.

Ghost in the Shell (1995): Set in the year 2029, following World Wars III and IV, a Japanese-led Asian bloc dominates world affairs. The alliance maintains its international supremacy through its elite security force, whose cybernetically enhanced operatives tackle an array of hi-tech terrorists and other threats to international security. Major Motoko Kusanagi, a cybernetically augmented female agent, has been tracking a virtual entity known as the Puppet Master with her crack squad of security agents.

The Giver (2014): One of the big components of the 1993 novel was that, due to the Sameness of society, there was no war, no hunger – but also no color. The receptors had been blocked, as it were, and everyone saw the world in plain black and white. A place where euthanasia became the remedy for almost all infractions.

More and more the future becomes a site where we can dump civilization’s dirty little secrets rather than a place to test the waters of change. While we are taught to believe in the emptiness of the future, or even that no future exists, or that the future is a dead end going nowhere – or, even, a catastrophe zone best left in the abyss of its own death knell – we all now live as if the future were already here: saturated by a future-consciousness that permeates the spectacle around us, like so many electronic toys we busy ourselves with, moment by moment, not knowing that we are not only using them but that they are using us back in ways beyond telling. As Rosenberg and Harding relate, the “Future” is a placeholder, a placebo, a no-place; but it is also a commonplace that we need to investigate in all its cultural and historical density (9).

Cataclysms – The Future has been Cancelled

Yes, cataclysms: climate change; terminal resource depletion – water and energy shortages; mass starvation; famine; economic collapse; hot and cold wars; austerity and governmental control (Fascism); privatization of welfare and prisons; automation of even the cognitariat itself. All this is the opening gambit of Williams and Srnicek’s #Accelerate: Manifesto for an Accelerationist Politics. A politics of fear? Or of concern? Let us listen: “While crisis gathers force and speed, politics withers and retreats. In this paralysis of the political imaginary, the future has been cancelled.” Kaput! Finito! Done! The future is no more – or is it?

Antonio Negri – scholar of Spinoza, collaborator with Michael Hardt on a trilogy of works against the neoliberal order: Empire, Multitude: War and Democracy in the Age of Empire, and Commonwealth – tells us not to be worried about the cataclysmic events coming our way: “There is nothing politico-theological here. Anyone attracted by that should not read this manifesto.” Simple. Effective. To the point. If you’re looking for the Apocalypse of John, be our guest and find a preacher in your local parish, for there will be no one here preaching salvation by God or any other big Other. Instead Negri homes in on the core truth of the manifesto, revealed as ‘the increasing automation in production processes, including the automation of “intellectual labor”’, which would explain the secular crisis of capitalism (365).3 As Negri explicates it, the neoliberal global order is afraid: to continue, it had to “block the political potentiality of post-Fordist labor” (i.e., the inforgs, the cognitariat, intellectual workers). Neither the left nor the right escapes Williams and Srnicek’s derision; both have become part of the neoliberal machine, because both have put an end to any opening toward the future: canceled by the “imposition of a complete paralysis of the political imaginary” (366). Negri states it simply: the manifesto offers us nothing less than potentiality against power – “biopolitics against biopower” (366). It is because of this new potentiality that the future has opened up again, says Negri: “the possibility of an emancipatory future radically opposed to the present capitalist dominion” (366).

For Negri the manifesto hinges on the “capacity to liberate the productive forces of cognitive labor” (366), cognitive labor being the new class or precariat within this post-capitalist project. The Fordist era of labor has shifted; there will be no return. For better or worse we are in the midst of an immaterial, informational economy in which the cognitariat are workers of knowledge rather than producers of hard commodities – intellectual laborers in a game of tech patents, both medical-pharmaceutical and science-tech. Negri tells us that to move forward will take decisive planning and organization: “planning the struggle comes before planning the production” (369). It’s about unleashing this power of cognitive labor as well as tearing it from its latency (its delays) through education and learning. Next comes – as Negri states it, the most important passage in the manifesto – the notion of the reappropriation of “fixed capital” under its many guises: “productive quantification, economic modeling, big data analysis, and the most abstract cognitive models are all appropriated by worker-subjects…” (370). As for a new Leftist hegemony or techno-social body, he tells us: “we have to mature the whole complex of productive potentialities of cognitive labor in order to advance a new hegemony” (371).

Negri commends them for reinvigorating the Enlightenment project, for their humanist and Promethean proclivities, and even sees a tendency in their work opening out toward posthuman utopian thought; most of all he approves their movement toward reconstructing the future – one in which we “have the possibility of bringing the Outside in, to breathe a powerful life into the Inside” (372). Yet I wonder if Negri reads them aright: are they humanists in the old sense? And what of the Enlightenment – which Enlightenment is he referring to, there being multiple or plural enlightenments? I assume, Negri being a Spinoza scholar, that he’d be more in tune with the “radical enlightenment”. As Jonathan I. Israel tells us, “the Radical Enlightenment arose and matured in under a century, culminating in the materialistic and atheistic books of La Mettrie and Diderot in the 1740s. These men, dubbed by Diderot the ‘Nouveaux Spinosistes’, wrote works which are in the main a summing up of the philosophical, scientific, and political radicalism of the previous three generations” (6-7).4 Yet by the time of Kant a more moderate Enlightenment would oust the radicals from their place in the sun, and a compromise with the traditionalists or conservatives would be the ruination of the French Revolution in the end: “Insofar as anything did, the coup of Brumaire of the Year VIII (November 1799), and the new Constitution of 13 December 1799, ended the Revolution. …The 1799 Constitution, in short, effectively suspended the Rights of Man, press freedom, and individual liberty, as well as democracy and the primacy of the legislature, wholly transferring power to initiate legislation from the legislature to the executive, that is, the consulate, making Bonaparte not just the central but the all-powerful figure in the government. The Declaration of the Rights of Man was removed from its preambule” (Israel, 694).

After his initial praise of the manifesto, Negri discovers a flaw: “there is too much determinism in this project, both political and technological” (373). He sees a difficulty in their project, a tendency toward teleological openness which might lead to perverse effects, producing a “bad infinity” if not corrected (373). To correct this tendency he suggests they need to specify in detail what the “common” is in any technological assemblage, while at the same time providing an anthropology of production (375). Having been subsumed within a global information economy – one in which production is now defined by the socialization of cognitive work and social knowledge – we must also understand, Negri tells us, that with informatization the most valuable form of fixed capital and automation the cement of capital, we are all slowly being enfolded by “informatics and the information society back into itself” (375). He remarks that this is a weakness of the manifesto: the cooperative dimension of production (and particularly the production of subjectivities) is underestimated in relation to technological criteria (375).

He argues that in the future the battles will be over the “currency of the common” (i.e., money as a type: gold, bitcoin, dollar, etc.). As he tells it the “communist program for a postcapitalist future should be carried out on this terrain, not only by advancing the proletarian reappropriation of wealth, but by building a hegemonic power – thus working on the ‘common’ that is at the basis of both the highest extraction/abstraction of value from labor and its universal translation into money” (377).

Finally, Negri reminds us that we should remember what the slogan ‘Refusal of labor’ meant: a reduction in automation and in labor time “disciplined or controlled by machines”, and an increase in real salaries. Last is the nod toward a favorite theme of Negri’s: the production of subjectivities, the “agonistic use of passions, and the historical dialectics that opens against capitalist and sovereign command” (378). All in all, a favorable review by Negri. I do like that he wants to see in the manifesto more detail concerning its mapping of a transformative anthropology of the workers’ bodies (373), one that centers the relation between subject and object as a relation between the “technical composition and the political composition of the proletariat”. As Negri states it, in this way the “drift of pluralism into a ‘bad infinity’ can be avoided” (374).

In the end though Negri reminds us that we need a new ‘currency of the common’: the authors of the manifesto are well aware that money functions as an abstract machine (Deleuze and Guattari) – that it acts as the real measure of value extracted from society through the real subsumption of current society by capital (377). Yet this same process used by capital also points to new forms of resistance and subversion: “the communist program for a postcapitalist future should be carried out on this terrain, not only by advancing the proletarian reappropriation of wealth, but by building a hegemonic power – thus working on ‘the common’ that is at the basis of both the highest extraction/abstraction of value from labor and its universal translation into money” (377).

In a brief Cyberlude we’ll revisit Nick Land’s ‘Circuitries’ essay in the reader before moving on to Tiziana Terranova and Luciana Parisi, who both deal with the new algorithmic worlds of culture and technology and their impact on an accelerationist politics.

Previous post: Accelerationism: The New Prometheans – Part Two: Section One

Next post: Accelerationism: The New Prometheans – Cyberlude

1. Fredric Jameson, Archaeologies of the Future (Verso, 2005)
2. Histories of the Future. Editors Daniel Rosenberg and Susan Harding. (Duke University Press, 2005)
3. #Accelerate#: The Accelerationist Reader. Editors Robin Mackay & Armen Avanessian (Urbanomic, 2014)
4. Israel, Jonathan I. (2001-02-08). Radical Enlightenment: Philosophy and the Making of Modernity 1650-1750 (pp. 6-7). Oxford University Press. Kindle Edition.

Reza Negarestani: Navigating the Game of Truths

By entering the game of truths – that is, making sense of what is true and making it true – and approaching it as a rule-based game of navigation, philosophy opens up a new evolutionary vista for the transformation of the mind. 

– Reza Negarestani, Navigate With Extreme Prejudice 

Reza Negarestani, an Iranian philosopher who has contributed extensively to journals and anthologies and lectured at numerous international universities and institutes, has begun a new philosophical project focused on rationalist universalism, beginning with the evolution of the modern systems of knowledge and advancing toward contemporary philosophies of rationalism, their procedures, and their investment in new forms of normativity and epistemological control mechanisms. He recently hooked up with Guerino Mazzola, a Swiss mathematician, musicologist, jazz pianist, author and philosopher. Mazzola qualified as a professor of mathematics (1980) and of computational science (2003) at the University of Zürich.

On the Urbanomic blog I noticed a new entry: Deracinating Effect – Close Encounters of the Fourth Kind with Reason (see here). It appears that Reza and Guerino took part in a recent event in March named The Glass Bead Game, after the novel of that name by Hermann Hesse. It was organized by Glass Bead (Fabien Giraud, Jeremy Lecomte, Vincent Normand, Ida Soulard, Inigo Wilkins) and Composing Differences (curated by Virginie Bobin). Reza and Guerino both presented talks on philosophy, mathematics, games and the paradigm of navigation.

I’ve been interested of late in Reza’s shift in tone and effect; his philosophical framework seems in the past few years to have undergone a mind-shift toward what he terms the ‘Paradigm of Navigation’. Doing a little research for this post I came upon his recent paper for the Speculations on Anonymous Materials symposium, transcribed by Radman Vrbanek Arhitekti from the youtube.com video. In this essay he aligns himself with the history of systems theory, which grew out of a very rigid approach to engineering in the 19th century but has over the past 30 years unfolded into a new and completely different epistemology of matter and its intelligibility.

What he finds different in these newer systems theories is that, against an architectural or engineering approach based on inputs and outputs, these theorists have moved from an essentialist view of system dynamics toward a functionalist approach: the notion that it is the behavior, and the functional integration underlying that behavior – what these theorists term the ‘functional organization’ of the system – that matters. He tells us why this is important:

This becomes important because the systematic or technical understanding of function is that functions are abstractly realizable entities, meaning that they can be abstracted from the content of their constitution. So a functional organization can emerge, it can be manipulated, it can get automated, and it can actually gain a form of autonomy that developed not because of the constitution in which it was embedded but in spite of it. Hence, functions allow for an understanding of the system that is no longer tethered or chained to an idea of constitution.

At the heart of this new form of systems theory is the use of heuristics. It entails a move away from analytics and toward synthetics: heuristics are not analytical devices, but synthetic operators. As he states it:

They treat material as a problem. But they don’t break this problem into pieces. They transform this problem into a new problem. And this is what the preservation of invariance is. Once you transform a problem by way of heuristics into a new problem, you basically eliminate so much of the fog around this problem that initially didn’t allow us to solve it.

In this sense one sees an almost Deleuzean turn in systems theory, for it was Deleuze who believed philosophy was about problems to be solved. In their What is Philosophy? Deleuze and Guattari explain that only science is concerned with the value of claims and propositions; philosophy searches for solutions to problems, rather than the truth. In this sense they were returning to Nietzsche, who told us he was waiting for those who would come, those philosophical physicians no longer concerned with truth but rather with something else:

I am still waiting for a philosophical physician in the exceptional sense of the term – someone who has set himself the task of pursuing the problem of the total health of a people, time, race or of humanity – to summon the courage at last to push my suspicion to its limit and risk the proposition: what was at stake in all philosophizing hitherto was not at all ‘truth’ but rather something else – let us say health, future, growth, power, life. . .(6)

– Friedrich Nietzsche, The Gay Science

But is this what Reza is seeking? We’ll return to that later. What Reza tells us in this essay is that heuristics, as a new tool, an apparatus, allow us to remove both the lower and upper boundaries of materiality: at the lower boundary, where the understanding of constitution and of fundamental assumptions or axiomatic conceptual behaviors resides; and at the upper boundary, where materiality is turned into a living hypothesis whose behavior can be expanded. Its evolution, i.e. its constructability, becomes part of the project of its self-realization. As he states it:

Hence, the understanding that the system is nothing but its behavior, and behavior is a register of constructability – the same thing about materiality and how engineers approach materiality by way of heuristics – which is rooted in this new understanding of systematicity in the sense of functions and behaviors.

In his essay The Glass Bead Game he throws down the gauntlet, telling us that by “simulating the truth of the mind as a navigational horizon, philosophy sets out the conditions for the emancipation of the mind from its contingently posited settings and limits of constructability”. Continuing, he says: “Philosophy’s ancient program for exploring the mind becomes inseparable from the exploration of possibilities for reconstructing and realizing the mind by different realizers and for different purposes.”

Of course, being the creature I am, I want to ask: I see talk of the Mind as if it were some autonomous entity in its own right, disconnected from both the body and its command system, the brain. So I ask: where is the brain in all this discussion of emancipation and the limits of constructability? As Bakker on his blog keeps pounding away: “Reasoning is parochial through and through. The intuitions of universalism and autonomy that have convinced so many otherwise are the product of metacognitive illusions, artifacts of confusing the inability to intuit more dimensions of information, with sufficient entities and relations lacking those dimensions.”1 Reza’s notion of simulating the truth of the mind would entail information that we, as of yet, just do not have access to; in fact, because of medial neglect and the inability of second-order reflection ever to catch its own tail, we will never have access to it through intentional awareness. Instead we will have to rely not on philosophy but on the sciences (and especially the neurosciences) to provide both the understanding and the testable hypotheses before such experimental constructions and reconstructions could even begin to become feasible as more than sheer fantasy.

We see just how much fantasy is involved in his next passage:

In liberating itself from its illusions of ineffability and irreproducible uniqueness, and by apprehending itself as an upgradable armamentarium of practices or abilities, the mind realizes itself as an expanding constructible edifice that effectuates a mind-only system. But this is a system that is no longer comprehensible within the traditional ambit of idealism, for it involves ‘mind’ not as a theoretical object but as a practical project of socio-historical wisdom or augmented general intelligence.

How is such a liberation from illusions of ineffability and irreproducible uniqueness to come about? And how can this apprehension come about? (Which can only mean second-order self-reflexivity that, if Bakker in his Blind Brain Theory is correct, is founded on medial neglect – i.e., the way structural complicity, astronomical complexity, and evolutionary youth effectively render the brain unwittingly blind to itself.)

Be that as it may, what Reza is trying to do is remap a cognitive territory that has for more than a century been overlaid with certain scientistic mythologies. As he sees it the mind is a “diversifiable set of abilities or practices whose deployment counts as what the mind is and what it does”. It is through this ontological and pragmatic mixture of abstraction and decomposition that “philosophy is able to envision itself as a veritable environment for an augmented nous precisely in the sense of a systematic experiment in mind simulation”. This turn toward a pragmatic-functionalist perspective, and the development of a philosophy of action and gestures rather than of contemplation and theory, is at the heart of a new movement toward synthetic category theory in mathematics. Several philosophers are at the center of this theory of the gesture: Guerino Mazzola, Fernando Zalamea, and Gilles Châtelet. Along with Alain Badiou these philosophers of mathematics have changed the game and invented new paths forward for philosophy.

It’s as if this network of scientists, mathematicians, information specialists, geophilosophers, etc. were planning on reengineering society top-down and bottom-up. Of course the metaphor of the Glass Bead Game is almost opposite to the purpose of such an effort. The Glass Bead Game of Hermann Hesse’s Das Glasperlenspiel was a secularization of the communal systems of the medieval monasteries and their vast libraries. In the novel the hero practices a contemplative game of the Mind in which knowledge is grafted onto a strategy game of 3D projections in yearly contests among participants. These contemplative knowledge-bearers are excused from the menial life of work and allowed to pursue at their own discretion strange pursuits in knowledge. The whole thing runs against what Reza and his cohorts seek in their action-oriented pragmatic philosophy. It was Arendt who spoke of this division in philosophy between the ‘vita contemplativa’ and the ‘vita activa’ as a continuing battle along the course of the past two millennia of philosophy. Reza tips his hat toward the active stance.

It reminds me in some ways of the EU Onlife Initiative, which takes a look at ICTs. The deployment of information and communication technologies (ICTs) and their uptake by society radically affect the human condition, insofar as they modify our relationships to ourselves, to others and to the world. These new social technologies are blurring the distinction between reality and virtuality, and between human, machine and nature; they are bringing about a reversal from information scarcity to information abundance, and a shift from the primacy of entities to the primacy of interactions. As the Initiative sees it, the world is grasped by human minds through concepts: perception is necessarily mediated by concepts, as if they were the interfaces through which reality is experienced and interpreted. Concepts provide an understanding of surrounding realities and a means by which to apprehend them. However, the current conceptual toolbox is not fitted to address new ICT-related challenges and leads to negative projections about the future: we fear and reject what we fail to make sense of and give meaning to. In order to acknowledge this inadequacy and explore alternative conceptualisations, a group of scholars in anthropology, cognitive science, computer science, engineering, law, neuroscience, philosophy, political science, psychology and sociology instigated the Onlife Initiative, a collective thought exercise to explore the policy-relevant consequences of these changes. This concept-reengineering exercise seeks to inspire reflection on what happens to us and to re-envisage the future with greater confidence.

This new informational philosophy seems to align well with Reza’s sense of philosophy establishing a “link between intelligence and modes of collectivization, in a way that liberation, organization and complexification of the latter implies new odysseys for the former, which is to say, intelligence and the evolution of the nous”. Ultimately Reza’s project hopes to break us out of the apathetic circle of critique and the theoretical spin bureaus of polarized idiocy that have entrapped us in useless debates, and to provide a new path forward: by “concurrently treating the mind as a vector of extreme abstraction and abstracting the mind into a set of social practices and conducts, philosophy gesticulates toward a particular and not yet fully comprehended event in the modern epoch – as opposed to traditional forms – of intelligence: The self-realization of intelligence coincides and is implicitly linked with the self-realization of social collectivity. The single most significant historical objective is then postulated as the activation and elaboration of this link between the two aforementioned dimensions of self-realization as ultimately one unified project”.

Next he tells us that the first task of philosophy is to locate an access point, a space of entry, to the universal landscape of logoi. I think this draws on Sellars’s notion of the “space of reasons”, which describes the conceptual and behavioral web of language that humans use to get intelligently around their world, and denotes the fact that talk of reasons, epistemic justification, and intention is not the same as, and cannot necessarily be mapped onto, talk of causes and effects in the sense that physical science speaks of them. In this sense, as Reza tells it, the “landscape of logoi is captured as a revisable and expandable map of cascading inferential links and discursive pathways between topoi that make sense of truth through navigation”.

At the core of this new philosophical project is the ‘self-realization of intelligence’: (1) by pointing in and out of different epochs and activating the navigational links implicit in history; (2) by grasping intelligence as a collective enterprise and hence, drawing a complex continuity between collective self-realization and the self-realization of intelligence as such, in a fashion not dissimilar to the ethical program of an ‘all-encompassing self-construction bent on abolishing slavery’ articulated by the likes of Confucius, Socrates and Seneca.(ibid.)

The explicit hope of this philosophy is according to Reza the notion of keeping pace with intelligence, which implies that philosophy always reconstitutes what it was supposed to be.

I wonder if this sort of endeavor is doomed from the start. When one considers machine intelligence as it moves into the quantum era of ubiquitous computing, how will philosophy ever keep pace with the vast processing power that will become available to these future AI entities?

Next he tells us that localization is the constitutive gesture of conception and the first move in navigating spaces of reason. ‘To localize’ means ‘to conceive’: to organize homogeneous and quantitative information into qualitatively well-organized information-spaces endowed with different modalities of access. Obviously we must conceive of advanced computer-simulation systems that allow almost rhizomatic access from anywhere in the world, with multiple entry points and departures. When we think about the new Zettabyte Era and the impact of dataglut, one realizes that even a team of philosophers would be hard pressed to sift through the datamix:

In 2003, researchers at Berkeley’s School of Information Management and Systems estimated that humanity had accumulated approximately 12 exabytes of data (1 exabyte corresponds to 10^18 bytes, or a 50,000-year-long video of DVD quality) in the course of its entire history until the commodification of computers. A zettabyte equals 1,000 exabytes.2
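Floridi’s figures lend themselves to a quick back-of-the-envelope check. The sketch below is my own illustration, not Floridi’s calculation, and the ~2 GB-per-hour rate for DVD-quality video is an assumption:

```python
# Back-of-the-envelope check on the data-scale figures quoted above.
EXABYTE = 10**18              # 1 EB in bytes
ZETTABYTE = 1_000 * EXABYTE   # 1 ZB = 1,000 EB

# Assumed bitrate for "DVD-quality" video: roughly 2 GB per hour
# (an illustrative assumption, not Floridi's own figure).
BYTES_PER_HOUR = 2 * 10**9
HOURS_PER_YEAR = 24 * 365

def years_of_video(num_bytes: int) -> float:
    """Years of continuous DVD-quality video that fit in num_bytes."""
    return num_bytes / BYTES_PER_HOUR / HOURS_PER_YEAR

print(f"1 EB  ~ {years_of_video(EXABYTE):,.0f} years of video")
print(f"12 EB ~ {years_of_video(12 * EXABYTE):,.0f} years of video")
```

At roughly 2 GB per hour, one exabyte works out to somewhere in the tens of thousands of years of continuous video, the same order of magnitude as the 50,000-year figure in the quotation.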

Tools will need to be developed, as well as new algorithms that can churn through such massive data, combining advanced simulations or automatons to filter out the noise and make smart choices or decisions on that data before passing it on to their human counterparts. Much like the trillions of operations that go on in the human brain of which the average person is hardly aware, and the decisional processes that run below the threshold of consciousness before we ever see an idea or notion arise, we are caught in the trap of believing we have enough information to make coherent and intelligent decisions from the minimal data received at the end of that brain-processing pipeline. We do not. We are deluded into thinking we know what in fact we do not know. We make conscious decisions after the fact, and are usually motivated by dispositions and powers we do not even have access to.

Yet Reza would have us believe that there is a navigable “link between the rational agency and logoi through spaces of reason that marks the horizon of knowledge” (ibid.). When he speaks of ‘rational agency’, is this the human, the AI, the collective subjectivation? His notion of universality, which presents knowledge and by extension philosophy as platforms for breaking free from the supposedly necessary determinations of local horizons in which the rational or advanced agency appears to be firmly anchored, seems to portend more issues and problems than it resolves. How does one break free of these local determinations? What would such a universal knowledge assume on a global scale? As he puts it, “without this unmooring effect, philosophy is incapable of examining any commitment beyond its local implications or envisaging the trajectory of reason outside of immediate resources of a local site”. So against all those microhistories and labors of the postmodern-era poststructuralists we are to return to the beginning of the Enlightenment project, but with a twist: we shall have the new technologies of simulation at hand to empower this age of informational and rational governance and agency. As he calls it: “Philosophy proposes analytico-synthetic methods of wayfinding in what Robert Brandom describes as the rational system of commitments”.

But what of all those dark corners of the irrational that Freud, Lacan, Deleuze, and so many others discovered in the mind? What of that irrational core? We know that the neoliberal think tanks that gave us Rational Choice Theory and the economics of the free market have led us into destruction; how much better will another rational system fare – even one from the Left?

He seems to understand the issues, saying:

Philosophy sees the action in the present in terms of destiny and ramifications, which is to say, based on the reality of time. It constructively adapts to an incoming and reverse arrow of time along which the current cognitive or practical commitment evolves in the shape of multiple future destinations re-entering the horizon of what has already taken place. Correspondingly, philosophy operates as a virtual machine for forecasting future commitments and presenting a blueprint for a necessary course of action or adaptation in accordance with a trajectory or trajectories extending in reverse from the future. It discursively sees into the future. In short, philosophy is a nomenclature for a universal simulation engine.

In fact it is inside this simulation engine that the self-actualization of reason is anticipated, the escape plan from localist myopias is hatched and the self-portrait of man drawn in sand is exposed to relentless waves of revision. In setting up the game of truths by way of giving functions of reason their own autonomy – in effect envisioning and practicing their automation – philosophy establishes itself as the paradigm of the Next (computational) Machine, back from the future.(ibid.)

But why philosophy? Why not the neurosciences, which actually deal with the inner workings not of the Mind but of the brain? Will philosophy ever acknowledge that the sciences must play a great part in the coming information age? Or will it continue blindly down its own intentional path, pursuing its own blind goals without a true knowledge of things as they are? With the advent of NBIC (Nanotech, Biotech, InfoTech, CognitiveTech) and the Information and Communications Technologies (ICTs) we have already entered a terrain that goes beyond recourse to much of what philosophy can say. Many, like Luciano Floridi and his team, have already entered this information age, leaving behind much of the intentional drift of phenomenology, idealism, and materialism as they derive certain informational structural realisms and ontologies for a path forward. Only time will tell if Reza and his cohorts do the same… I have much to catch up on and probably need more data on Reza and his cohorts’ efforts to make a truly definitive judgment, so I’ll refrain from such problematique statements.

This is a commendable project, one we should continue to look into and keep an eye on over the coming months and years. I would only ask that Reza and these mathematicians begin extending their borders into the sciences of the brain, as well as into many of the new developments transpiring on the Continent in the information philosophy fields. I still have questions about his reliance on Brandomian normativity, since it is a fallback to retrograde intentionalism rather than a move toward a post-intentional world view. My hope is that he will look long and hard at other alternatives and begin to question the very notions of ‘intentionality’ and ‘directedness’ as outmoded tools of a phenomenological perspective that needs recasting in the light of new sciences and philosophies.

—————

*appending the youtube.com video by Guerino Mazzola Melting the Glass Beads – The Multiverse Game of Strings and Gestures:

1. R. Scott Bakker. (see The Blind Mechanic)
2. Floridi, Luciano (2010-02-25). Information: A Very Short Introduction (Very Short Introductions) (p. 6). Oxford University Press. Kindle Edition.

The Rise of the Machines: Brandom, Negarestani, and Bakker

Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated in nested levels of superordinate machinery.

– R. Scott Bakker, The Blind Mechanic

Ants that encounter in their path a dead philosopher may make good use of him.

– Stanislaw Lem, His Master’s Voice 

We can imagine that in some near future my friend R. Scott Bakker will be brought to trial before a tribunal of the philosophers to whom he has for so long sung his jeremiads on ignorance and blindness; or, as he puts it, ‘medial neglect’ (i.e., “Medial neglect simply means the brain cannot cognize itself as a brain”). One need only remember that old nabi of the desert, Jeremiah, and God’s prognostications: attack you they will, overcome you they can’t… And, as with Jeremiah, these philosophers will attack him from every philosophical angle but will be unable to overcome his scientific tenacity.

Continue reading

Reza Negarestani: On Inhumanism

Inhumanism, as will be argued in the next installment of this essay, is both the extended elaboration of the ramifications of making a commitment to humanity, and the practical elaboration of the content of human as provided by reason and the sapient’s capacity to functionally distinguish itself and engage in discursive social practices.

– Reza Negarestani, The Labor of the Inhuman, Part I: Human

On e-flux journal Reza enjoins us to move beyond both humanism and anti-humanism, as well as all forms of a current sub-set of Marxist theory he terms “the fashionable stance of kitsch Marxism today”. He takes up both the Sellarsian notion of the “space of reasons” and the inferential and normative challenges offered by Robert Brandom. Brandom developed a new linguistic model, or “pragmatics”, in which the “things we do” with language are prior to semantics, for the reason that claiming and knowing are actings, the production of a form of spontaneity that Brandom assimilates to the normative “space of reasons” (Articulating Reasons, 2000).1

Reza starts with the premise that inhumanism is a progressive shift situated within the “enlightened humanism” project. As a revisionary project it seeks to erase the former traces within this semiotic field of discursive practices and replace them with something else – not something distinctly oppositional, but rather a revision of the universal node that this field of forces is. It will be a positive project, one based on notions of “constructivism”: “to define what it means to be human by treating human as a constructible hypothesis, a space of navigation and intervention.” I’m always a little wary of such notions as models, construction, constructible hypotheses, as if we could simulate the possible movement of the real within some information-processing model of mathematical or hyperlinguistic, algorithmic programming. We need to understand just what Reza is attempting with such positive notions of constructions or models, otherwise we may follow blindly down the path that led through structuralism, post-structuralism, and deconstruction: all those anti-realist projects situated in varying forms of social constructivism and its modifications (i.e., certain Idealist modeling techniques based as they were on the Linguistic Turn).

Right off the bat he qualifies his stance against all those philosophies of finitude, and even against the current trend in speculative realism toward the Great Outdoors (Meillassoux, Brassier, Iain Hamilton Grant, Graham Harman). Against any sense of an essence of the human as pre-determined, or of theological jurisdictions; against even the anti-humanist tendencies of both an inflationary and a deflationary notion of the human, which he perceives even in microhistorical claims that tend toward atomism, he offers a return to the universalist ambitions of the original Enlightenment project, voided of its hypostasis in glorified Reason. Against such anti-humanist moves he seeks a way forward, one that involves a collaborative project that redefines the Enlightenment tradition and its progeny and achieves the “common task for breaking out of the current planetary morass”.

Continue reading

The Neoliberal Vision: The Great Escape Artist

“As for living, our servants will do that for us.”

– Auguste Villiers de l’Isle-Adam, Axël

In the posthuman context one wants to rephrase that to say: “As for dying, our proxies will do that for us.” In the age of neoliberal fragmentation the self is no longer confined to some unified sphere of consistency that can be tracked, identified, and commoditized according to external market pressure, but is, as Deleuze once described it, a ‘dividual’, a datatized agent of the simulated virtual economy. The neoliberal self is dispersed in the free-floating bits of flotsam and jetsam of that vast assemblage of phantasmatic networks, to be exploited by machinic algorithms in a posthuman economy of endless transactions and brokered financialization. Self as Proxy: a self-constructed kit of affective relations built not by some internal mechanism but by the neoliberal market forces and their minions in the Grand Cathedral of the Neoliberal Thought Collective (so well documented by Philip Mirowski). In today’s neoliberal hypercapitalist state the self is immersed in the flows of data, unhinged from its physical status within the water bag we call the body. It is seen as a flexible and liquid commodity, neither manufactured nor fabricated, but more of a neurogram: a programmable commodity of accelerating human capital, moving toward greater and greater energy flows within the digital marketplace, guided by neither rational choice nor the neoclassical sense of self-identity, but a performative player in a vast game script structured by a mathematical information economy modulated second by second in a global system run amok on the shores of a desperate elite that no longer believes in its own mystical religion of money.

Why worry about job loss in such a world? Think of it as an opportunity for a major overhaul, an upgrade, a self-modified algorithm one can install as part of an everyday maintenance program. Designer drugs to modify not one’s brain but one’s desensitized body laid on ice awaiting the expected post-singularity, when humans and machines merge in immortalist visions of economic heaven. We are told over and over that the self is illusion, that the brain’s plasticity allows for multiple roles to be cast in flexible functions and mechanisms, just another graft of the fractured rhythms of an accelerating world. Accountability? The legal definitions are evolving too, at least that’s the latest bit of wisdom from neoliberal ignorance. Slowly but surely the neoliberal self is dissolving into the very fabric of the market, where rules are just another set of algorithms pumping the fluid of wealth from the poor to the rich. As Philip Mirowski describes it satirically:

This is the true terminus of the neoliberal self: to supplant your own mother and father; to shrug off the surly bond ratings of earth; to transform yourself at the drop of a hat or the swallow of a pill; to be beholden to no other body but only to the incorporeal market. It doesn’t matter if the procedure actually lies within the bounds of contemporary scientific possibility, because it is the apocalypse and the Rapture of the neoliberal scriptures.1

Continue reading

Pete Mandik: On Neurophilosophy

An introduction to reductionism and eliminativism in the philosophy of mind, by Professor Pete Mandik of William Paterson University. Three youtube.com vids give a basic intro to Paul and Patricia Churchland’s notions, following W.V. Quine, that science and philosophy should inform each other, and to the establishment within the philosophy of mind of what is termed neurophilosophy. You might skip the first five minutes of the first vid, which is mainly Mandik speaking to his class. (In fact you could probably skip the first vid entirely, which basically introduces the aforementioned philosopher/scientists, and move right into the second vid, which immediately speaks directly to the topics.) Otherwise a good basic intro for those who want to know the difference between the reductionist and eliminativist approaches.

Continue reading

Convergence Technologies: NBIC and the Future

If they will not understand that we are bringing them a mathematically faultless happiness, our duty will be to force them to be happy.

– Yevgeny Zamyatin, WE

Dr. Mihail C. Roco, Senior Advisor for Nanotechnology at the National Science Foundation, tells us that the convergence of knowledge and technology for the benefit of society is the core opportunity for progress in the 21st century, based on five principles:

  1. the interdependence of all components of nature and society,
  2. decision analysis for research and development based on system-logic deduction,
  3. enhancement of discovery, invention and innovation through evolutionary processes of convergence that combine existing principles and competencies, and divergence that generates new ones,
  4. higher-level cross-domain languages to generate new solutions and support transfer of new knowledge, and
  5. vision-inspired basic research embodied in grand challenges. It allows society to answer questions and resolve problems that isolated capabilities cannot, as well as to create new competencies, knowledge and technologies on this basis.

A book that will support this new progressive agenda tells us that convergence in knowledge, technology, and society is the accelerating, transformative interaction among seemingly distinct scientific disciplines, technologies, and communities to achieve mutual compatibility, synergism, and integration, and through this process to create added value for societal benefit. It is a movement recognized by scientists and thought leaders around the world as having the potential to provide far-reaching solutions to many of today’s complex knowledge, technology, and human development challenges. Four essential and interdependent convergence platforms of human activity are defined in the first part of the report: nanotechnology-biotechnology-information technology and cognitive science (“NBIC”) foundational tools; Earth-scale environmental systems; human-scale activities; and convergence methods for societal-scale activities. The report then presents the main implications of convergence for human physical potential, cognition and communication, productivity and societal outcomes, education and physical infrastructure, sustainability, and innovative and responsible governance. As a whole, the report presents a new model for convergence, and to take effective advantage of this potential a proactive governance approach is suggested. The study identifies an international opportunity to develop and apply convergence for technological, economic, environmental, and societal benefits.

Continue reading

R. Scott Bakker: Why not simply yet another affection, this one dispositionally prone to yelp, ‘Me-me-me!’

My friend R. Scott Bakker makes a point about my recent posts here and here on Hume and his views of the Self as interpreted by Gilles Deleuze in his Empiricism and Subjectivity, and the conclusions I draw from my reading, saying:

I said: “This reflexive movement of synthesis is an intervention or cut in time and its extension in historical reflection upon that cut or splice in time. It is this gap between two intervals, the time of intervention and the time of reflection between affection marked and affection reflected that produces the sense or synthesis of self. The self is this process of a double reflection. Neither form nor substance the self is the gap or cut between two modalities that is resolved not at the level of understanding but within the moral and political domain of culture. Neither intentional nor directed the self becomes a synthetic unity brought into play by the mind’s own innate processes, and yet these very processes cannot be reduced to the physical manifestations of the brain itself which is both origin and qualifier of the mind’s reflexive nature.”

Scott asked: What ‘gap’? I just don’t see what motivates the distinction into two modalities here. If ‘reflection’ is affection (and what else would it be?), then what makes it different than any other kind of affection? Why should affection working the trace of previous affections give rise to anything so exotic as ‘cuts’ and ‘gaps’ and ‘irreducible entities’? Why not simply yet another affection, this one dispositionally prone to yelp, ‘Me-me-me!’

As soon as that particular affection subsides, the self subsides with it, as it does in sleep.

Ok, let’s take the standard definition of the term “affection”: attraction, infatuation, or fondness – a “disposition or rare state of mind or body” – and of a disposition as a habit, a preparation, a state of readiness, or a tendency to act in a specified way.

Continue reading

Rene Descartes: The Diversity of the Sciences as Human Wisdom

Distinguishing the sciences by the differences in their objects, they think that each science should be studied separately, without regard to any of the others. But here they are surely mistaken. For the sciences as a whole are nothing other than human wisdom, which always remains one and the same, however different the subjects to which it is applied, it being no more altered by them than sunlight is by the variety of the things it shines on. Hence there is no need to impose any restrictions on our mental powers; for the knowledge of one truth does not, like skill in one art, hinder us from discovering another; on the contrary it helps us.

– René Descartes,  The Philosophical Writings of Descartes

This notion that the common thread uniting all the diverse sciences is the acquisition of human wisdom must be tempered by that further statement about freeing the mind from any intemperate restriction or regulation that would force it down the path of specialization and expertise. What I mean is that Descartes, like many in that era, was discovering the sciences in all their diversity at a time when a tendency toward almost guild-like enclosure and secrecy was taking effect, rather than an open, interdependent, pluralistic investigation; in this way the sciences were becoming more and more isolated and closed off from one another, so that the truths of one field of study no longer crossed the demarcated lines as knowledge in a universal sense of shared wisdom. Instead, learning in one field of the sciences was becoming restrictive, segmented, and closed off from other fields, in such a way that knowledge as a source of wisdom was becoming divided as well as divisive.

Continue reading

The Mind-Body Debates: Reductive or Anti-Reductive Theories?

More and more over the past few years I have come to see that the debates in scientific circles seem to hinge on two competing approaches to the world and its phenomena: the reductive and the anti-reductive frameworks. To really understand this debate one needs a thorough understanding of the history of science itself. Obviously in this short post I’m not going to give you a complete history of science up to our time. What I want to do is tease out the debates themselves rather than provide a history, and doing that entails turning to philosophy and history rather than to the specific sciences. For better or worse it is in the realm of the history of concepts that one begins to see the drift between these two tendencies played out over time. As if on some universal pendulum, one or the other conceptual matrix rises and falls as different scientists and philosophers debate what it is they are discovering in either the world or the mind. Why? Why this swing from reductive to anti-reductive and back again in approaches to life, reality, and the mind-brain debates?

Philosophers have puzzled over this question from the time of the Pre-Socratics, through Democritus, Plato, and Aristotle, onwards… take the subject of truth: in his book Truth, Protagoras made vivid use of two provocative but imperfectly spelled out ideas: first, that we are all ‘measures’ of the truth and that we are each already capable of determining how things are for ourselves, since the senses are our best and most credible guides to the truth; second, that given that things appear differently to different people, there is no basis on which to decide that one appearance is true rather than another. Plato developed these ideas into a more fully worked-out theory, which he then subjected to refutation in the Theaetetus. In his Metaphysics Aristotle argued that Protagoras’ ideas led to scepticism. And finally Democritus incorporated modified Protagorean ideas and arguments into his theory of knowledge and perception.

Continue reading

Thomas Nagel: Idealism and the Theological Turn in the Sciences

The view that rational intelligibility is at the root of the natural order makes me, in a broad sense, an idealist— not a subjective idealist, since it doesn’t amount to the claim that all reality is ultimately appearance— but an objective idealist in the tradition of Plato and perhaps also of certain post-Kantians, such as Schelling and Hegel, who are usually called absolute idealists. I suspect that there must be a strain of this kind of idealism in every theoretical scientist: pure empiricism is not enough.

– Thomas Nagel, Mind and Cosmos

Now we know the truth of it, and why Thomas Nagel has such an apparent agenda to ridicule and topple the materialist world view, which he seems to see as the main enemy of his own brand of neutral monism: a realist of the Idea, whether one calls it mind or matter – it’s neutral. What’s sad is that his attack on scientific naturalism and its traditions even reaches the point where he concludes that religion upholds a more appropriate view of reality than the naturalist:

A theistic account has the advantage over a reductive naturalistic one that it admits the reality of more of what is so evidently the case, and tries to explain it all. But even if theism is filled out with the doctrines of a particular religion (which will not be accessible to evidence and reason alone), it offers a very partial explanation of our place in the world.(25)

Continue reading

The Mind-Body Debates: Beginnings and Endings

Jaegwon Kim tells us it all started with two papers published a year apart in the late fifties: “The ‘Mental’ and the ‘Physical'” by Herbert Feigl in 1958 and “Sensations and Brain Processes” by J. J. C. Smart the following year. Both men brought about a qualitative change in our approach to the study of the mind and its interactions with its physical substrate. Each proposed, in independent studies, an approach to the nature of mind that has come to be called the mind-body identity theory, central-state materialism, the brain state theory, or type physicalism. Though the identity theory itself would lose traction and other theories would come to the fore, the underlying structure of the debates would continue to be set by the framework they originally put in place. As Kim suggests:

What I have in mind is the fact that the brain state theory helped set the basic parameters and constraints for the debates that were to come – a set of broadly physicalist assumptions and aspirations that still guide and constrain our thinking today.1

This extreme form of reductionist physicalism was questioned by the multiple realizability argument of Hilary Putnam and the anomalous monism argument of Donald Davidson. At the heart of Putnam’s argument was the notion of functionalism: that mental kinds and properties are functional kinds at a higher level of abstraction than physicochemical or biological kinds. Davidson, on the other hand, offered the notion of anomalous monism: that the mental domain, on account of its essential anomalousness and normativity, cannot be the object of serious scientific investigation, placing the mental on a wholly different plane from the physical. At first it seemed to many scientists of the era that these two approaches, each in its own distinctive way, made it possible for “us to shed the restrictive constraints of monolithic reductionism without losing our credentials as physicalists” (4). Yet, as it turned out, this too did not last.

Continue reading

Thomas Nagel: Constitutive Accounts – Reductionism and Emergentism

Thomas Nagel in his Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False starts from the premise that psychophysical reductionism, a position in the philosophy of mind that is largely motivated by the hope of showing how the physical sciences could in principle provide a theory of everything, has failed to prove its case. As he states the case:

This is just the opinion of a layman who reads widely in the literature that explains contemporary science to the nonspecialist. Perhaps that literature presents the situation with a simplicity and confidence that does not reflect the most sophisticated scientific thought in these areas. But it seems to me that, as it is usually presented, the current orthodoxy about the cosmic order is the product of governing assumptions that are unsupported, and that it flies in the face of common sense.1

You notice the sleight of hand in the move from “unsupported” to “flies in the face of common sense”. Over and over in his book he falls back on this common-sense doxa approach when he’s unable to come up with legitimate arguments, admitting his amateur status as a “nonspecialist” as if this were an excuse, then qualifying his own approach against the perceived “sophisticated scientific literature” as a way of disarming it in preference to his own simplified and colloquial amateurism. The sciences of physics, chemistry, and biology are the key sciences he wishes to use to prove his case. Behind it lies the philosophy of “neutral monism” he seems to favor: he tells us he “favors some form of neutral monism over the traditional alternatives of materialism, idealism, and dualism” (KL 71-72). As he tells it: “It is prima facie highly implausible that life as we know it is the result of a sequence of physical accidents together with the mechanism of natural selection. We are expected to abandon this naïve response, not in favor of a fully worked out physical/chemical explanation but in favor of an alternative that is really a schema for explanation, supported by some examples” (KL 85-88). To support his book’s overall theme he asks two major questions of the scientific community of reductionists:

First, given what is known about the chemical basis of biology and genetics, what is the likelihood that self-reproducing life forms should have come into existence spontaneously on the early earth, solely through the operation of the laws of physics and chemistry? The second question is about the sources of variation in the evolutionary process that was set in motion once life began: In the available geological time since the first life forms appeared on earth, what is the likelihood that, as a result of physical accident, a sequence of viable genetic mutations should have occurred that was sufficient to permit natural selection to produce the organisms that actually exist?(KL 89-93)

Continue reading

Georges Canguilhem: A Short History of Milieu: 1800 to the 1960s

The notion of milieu is becoming a universal and obligatory mode of apprehending the experience and existence of living beings…

– Georges Canguilhem, Knowledge of Life

Reading these essays by Georges Canguilhem I can understand why he had such an impact on the likes of Michel Foucault and Gilbert Simondon, to name only two French intellectuals of that era. He brings not only an in-depth understanding of the historical dimensions of concepts, but conveys it in such a way that one makes the connections among their various mutations and uses with such gusto and even-handed brilliance that one forgets one is reading what might otherwise be a purely abstract theatre of concepts in their milieu. Even if I might disagree with his conclusions, I think he had such a wide influence on those younger philosophers that it behooves us to study his works. In “The Living and Its Milieu” he gives a short history of this concept as it is used by scientists, artists and philosophers. The notion of milieu came into biology by way of mechanics as defined by Newton and explicated in the entry on milieu in the Encyclopédie Methodique of Diderot and d’Alembert, attributed to Johann (Jean) Bernoulli. From there it was incorporated, in both plural and singular forms, by biologists and philosophers of the 19th century. Among them was Lamarck, inspired by Buffon, who used it in the plural form, a usage established by Henri de Blainville; in the singular form it was Auguste Comte and Étienne Geoffroy Saint-Hilaire who clarified its use. Yet for most people of the 19th century it was through the work of Honoré de Balzac (in the preface to his La Comédie humaine) and of Hippolyte Taine that the term spread; Taine used it as one of the three analytic explanatory concepts guiding his historical vision, the other two being race and moment. After 1870 the neo-Lamarckian biologists would inherit the term from Taine (such biologists as Alfred Giard, Félix Le Dantec, Frédéric Houssay, Johann Costantin, Gaston Bonnier, and Louis Roule).

The eighteenth-century mechanists used the term milieu to denote what Newton referred to as “fluid”. As Canguilhem relates, the problem that Newton and others of his time faced was the central problem in mechanics: the action of distinct physical bodies at a distance (99).1 For Descartes this was not an issue, since for him there was only one mode of action – collision – and only one possible physical situation – contact (99). Yet when early experimental or empirical scientists tried to use Descartes’ theory they discovered a flaw: bodies blend together. Newton, in solving this issue, discovered that what was needed instead was a medium within which these operations could take place, and so he developed the notion of ‘ether‘. The luminiferous ether in Newton’s theory became an intermediary between two bodies; it is their milieu, and insofar as the fluid penetrates all these bodies, they are situated in the middle of it [au milieu de lui]. In Newton’s theory of forces one could thus speak of the milieu as the environment (milieu) in which there was a center of force.

Continue reading

Canguilhem, Simondon, Deleuze

Tracing certain concepts back into the murky pool of influence can be both interesting and troubling. The more I study Deleuze the more perplexed I become. Was he a vitalist, as some suggest? Or was he against such notions in his conception of life? Trying to understand just where the truth is to be found has taken me into the work of two other French thinkers: a philosopher of the sciences, Georges Canguilhem, and a philosopher of technology, Gilbert Simondon.

On Canguilhem

We learn from Wikipedia (here) that Canguilhem’s principal work in philosophy of science is presented in two books, Le Normal et le pathologique, first published in 1943 and then expanded in 1968, and La Connaissance de la vie (1952). Le Normal et le pathologique is an extended exploration into the nature and meaning of normality in medicine and biology, and into the production and institutionalization of medical knowledge. It is still a seminal work in medical anthropology and the history of ideas, and is widely influential in part thanks to Canguilhem’s influence on Michel Foucault [and, thereby, indirectly on the work of Gilles Deleuze]. La Connaissance de la vie is an extended study of the specificity of biology as a science, the historical and conceptual significance of vitalism, and the possibility of conceiving organisms not on the basis of mechanical and technical models that would reduce the organism to a machine, but rather on the basis of the organism’s relation to the milieu in which it lives, its successful survival in this milieu, and its status as something greater than “the sum of its parts.” Canguilhem argued strongly for these positions, criticising 18th and 19th century vitalism (and its politics) but also cautioning against the reduction of biology to a “physical science.” He believed such a reduction deprived biology of a proper field of study, ideologically transforming living beings into mechanical structures serving a chemical/physical equilibrium that cannot account for the particularity of organisms or for the complexity of life. He furthered and altered these critiques in a later book, Ideology and Rationality in the History of the Life Sciences.

Continue reading

Thoughts on Philosophy and the Sciences

The deficiencies of each of these alternatives, in each of their variations, have been well demonstrated time and again, but this failure of philosophers to find a satisfactory resting spot for the pendulum had few if any implications outside philosophy until recent years, when the developments in science, especially in biology and psychology, brought the philosophical question closer to scientific questions – or, more precisely, brought scientists closer to needing answers to the questions that had heretofore been the isolated and exclusive province of philosophy.

Daniel C. Dennett,  Content and Consciousness

Rereading Dennett’s book Content and Consciousness makes me see how little has changed in philosophy between 1969 and now. The point of his statement above is to show how over time (history) the questions of philosophy are replaced by the questions of scientists. Why? Is there something about philosophy that keeps it at one remove from reality? Are we forever barred from actually confronting the truth of reality? Is it something about our tools, our languages, our particular methodologies? What is it that the sciences have or do that makes them so much better equipped to probe the truth about reality? What Dennett is describing above is the movement among differing views of reality that philosophers seem to flow through from generation to generation, shifting terms from nominalism/realism to idealism/materialism, etc., down the ages, always battling over approaches to reality that seem to be moving in opposing ways – while the sciences, slowly and with patient effort, actually do the work of physically exploring and testing reality with probes, instruments, and apparatuses that do tell us what is going on.

Levi R. Bryant has a couple of thoughtful posts on his blog Larval Subjects (here) and (here) dealing with the twinned subjects of philosophy’s work and reality-probing. In the first post he surmises:

Here I think it’s important to understand that philosophy is not so much a discipline as a style of thought or an activity.  We are fortunate to have a discipline that houses those who engage in this sort of conceptual reflection, that provides a site for this reflection, and that preserves the thought of those who have reflected on basic concepts.  However, I can imagine someone objecting that certainly the scientist can (and does!) ask questions like “what is causality?”  To be sure.  However, I would argue that when she does this she’s not doing science but rather philosophy.  Philosophy doesn’t have to happen in a department to be philosophy, nor does it have to be in a particular section of the bookstore.  One need not have a degree in philosophy to engage in this sort of reflective activity; though it certainly helps.  It can take place anywhere and at any time.

The notion that scientists ask questions that are philosophical is true, and that in that process they are doing philosophy is also true; yet I think this overlooks the fact that scientists not only ask philosophical questions, they also answer these questions scientifically rather than philosophically, and that seems to make all the difference between the two domains of knowledge. Science is not only, as Levi points out of philosophy, a “sort of reflective activity”. The sciences utilize a set of methodologies that allow them to probe reality not only with conceptual tools, as in philosophy, but also with very real scientific instruments, apparatuses, etc. Obviously Levi would not disagree with this, and I’m sure he knows very well that this was not the question he was pursuing. This is not an argument with Levi about philosophy; in fact I have no problem with, and agree with, the points he was making. The point of his post was more about what philosophy is – the ontological question – not about the differing goals of philosophy and science and what they do. Yet my point is just that: would the typical scientist stop with the question “What is causality?” – would he, like the philosopher, be satisfied with reflecting on what is, staying with the metaphysical and speculative ontological question? No. The typical scientist wouldn’t stop there; he would ask the same question as the philosopher, but instead of trying to solve the nature of causality as an ontological problem his emphasis would fall not on the is but on the activity of causality itself (i.e., what is it that causality is doing?). The difference is subtle: for the philosopher this reflection on the nature of causality is about what causality is, while the same question for the scientist is about what causality does – under what conditions could I test the mechanisms of causality? That is the rub, the splice, the cut or suture between the two disciplines, or styles, or approaches toward the nature of causality.

There is a subtle connection between philosophy and science as well. You can ask of science how it pictures the world – study its laws, its theories, its models, and its claims; you can know and listen to what it says or describes about the world: the is of the world. But you can also consider not just what it says about the world but what is done: the experimental sciences not only reflect the is, they also come to understand the actual workings of causality through experimental methods that, under controlled or highly contrived circumstances, allow them to peek into the nature of causality – what it does, not just what it is.

Nancy Cartwright: Nomological Machines

What is a nomological machine?

– Nancy Cartwright, The Dappled World – A Study of the Boundaries of Science

In a simple, concise, and pithy definition Nancy Cartwright answers her own question, saying: “It is a fixed (enough) arrangement of components, or factors, with stable (enough) capacities that in the right sort of stable (enough) environment will, with repeated operation, give rise to the kind of regular behavior that we represent in our scientific laws”.1

Reading this sentence again, one gets the feeling that Cartwright is not quite sure of everything her statement requires in order to enact the production of the laws it supports. With the use of “enough” in the sense of “moderately, fairly, tolerably” (good enough) in several places, we get the feeling of an unwritten, complicit admission that this is all based on a notion of heuristics: experience-based techniques for problem solving, learning, and discovery that give a solution which is not guaranteed to be optimal. Where exhaustive search is impractical, heuristic methods speed up the process of finding a satisfactory solution, using mental shortcuts to ease the cognitive load of making a decision.
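As an aside, the trade-off gestured at here can be made concrete with a toy sketch – my own illustration, not drawn from Cartwright – comparing an exhaustive search over a small packing problem, which guarantees the optimal answer at exponential cost, with a greedy heuristic that settles quickly for a “good enough” one:

```python
from itertools import combinations

# Toy problem: pick items under a weight limit to maximize value.
items = [(60, 10), (100, 20), (120, 30)]  # (value, weight) pairs
limit = 50

def exhaustive(items, limit):
    """Examine every subset -- optimal, but O(2^n)."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= limit:
                best = max(best, sum(v for v, _ in combo))
    return best

def greedy(items, limit):
    """Take items in order of value-per-weight ratio -- fast,
    'good enough', but not guaranteed optimal."""
    total_value, total_weight = 0, 0
    for v, w in sorted(items, key=lambda x: x[0] / x[1], reverse=True):
        if total_weight + w <= limit:
            total_value += v
            total_weight += w
    return total_value

print(exhaustive(items, limit))  # 220 -- the optimal pick (items 2 and 3)
print(greedy(items, limit))      # 160 -- satisfactory, but suboptimal
```

The heuristic is "fixed (enough)" in Cartwright's sense: under most arrangements it behaves regularly and well, but nothing guarantees the regularity holds in every environment.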

Continue reading

The Impersonal Self: Autonomy, Ownership, and Eliminative Subject

Most of the time in our everyday lives we talk of this or that person as someone who has psychological states and who does things, performs actions: as someone who owns their psychological states and actions. The notion of self-ownership has a long history (of which more later). We might call this the Sovereign Self or Autonomous Subject theory of the person. The invention of autonomy as a concept was the creation of a unique philosopher, Immanuel Kant. As J.B. Schneewind tells us in his epochal history of this concept, Kant used the notion of invention for the term autonomy in the same way as Leibniz, an earlier philosopher for whom Kant had great respect but with whom he often disagreed:

“Leibniz thought up a simple substance which had nothing but obscure representations, and called it a slumbering monad. This monad he had not explained, but merely invented; for the concept of it was not given to him but was rather created by him” – from Critique of Pure Reason by Immanuel Kant

Kant saw autonomy as the moral center of our sense of identity and subjectivity. For Kant autonomy required what is termed ‘contracausal freedom’, or free will: he believed that in the unique experience of the moral ought we are “given” a “fact of reason” (Schneewind, 3). For him free will was part of a mechanism of law, the imposition of certain codified rules and regulatory processes that internalized our need to obey. In his writings he alludes to the sense of persons as agents who are self-governed, and in this way they were considered autonomous agents with free will. I’ll not go into the full arguments Kant presents for this view; to do so would entail an explication of his mature moral philosophy.

Continue reading

Anthony Elliott: Rise of the Anti-Self

…territories of the self – both positive and negative – are being powerfully reshaped by our world of intensive globalization, and indeed it is my view that processes affecting the globalization of self are likely to intensify.

– Anthony Elliott,  Concepts of the Self

The notion that the Self, the Subject, subjectivity have a history may be a commonplace in our post-whatever age of transformation, but the notion of an Anti-Self suddenly displacing the older sense of individualism, freedom, and the moral-ethical version of the Kantian autonomous individual moral Subject is another thing entirely. As Anthony Elliott states it, contrary to received opinion the task of a reflective social theory of the self, broadly speaking, is to take apart the received wisdom that globalization creates a flattening or diminution of lived experience, and to probe the complex, contradictory global forces that shape our current ways of life and trajectories of self. In this sense recent social theory has had much of interest to contribute to debates on selfhood, since various social analysts have detected signs that contemporary identities are moving in a more cosmopolitan, post-traditional or global direction.1

With the rise of such strange modernities as Zygmunt Bauman’s now classic Liquid Modernity we have come to know the self not as some well-defined cognitive bastion of the western imagination, but as something different: more liquid and open, changing, metamorphosing into the otherwise unknown perimeters of a fragile, fragmented self with no solid identity. Instead we are entering the age of the anti-self, which, as Elliott suggests, referring to such theoreticians as John Law (see Complexities: Social Studies of Knowledge Practices):

To acknowledge the chaos of the world is, according to this viewpoint, to recognize the centrality of heterogeneity and dissemination of social fabrics, and to give the slip, once and for all, to our culture’s narcissistic over-estimation of self, identity and agency. Our present social order, it is argued, is based on connections, attributions and distributions of the non-human as well as the human – and this precisely is what is overlooked in many social theories of the self. On this view, what is now needed is the replacement of the self as privileged actor by, instead, the conceptual recognition that the self is just one actor (or, another actor) in a network of actors – human, non-human, technical and semiotic. Only through recognizing that the self is not pre-given, but is, rather, something that emanates from an external web of entities, connections and distributions, can we grasp what John Law (an acolyte of such anti-self theory) has dubbed ‘heterogeneous orderings in networks of the social’.


Hans-Jörg Rheinberger: A Short History of Epistemology

Hans-Jörg Rheinberger's main research focus, as we learn from the blurb on his Max Planck Institute site, lies in the history and epistemology of experimentation in the life sciences. By bridging the gap between the study of history and contemporary cutting-edge sciences, such as molecular biology, his work represents an example of transdisciplinarity as it emerges in the present knowledge-based society.

In his short book On Historicizing Epistemology: An Essay he tells us that the classical view of epistemology was a synonym for a theory of knowledge that inquires into what it is that makes knowledge scientific, while for many of the contemporary practitioners of this art, following the French practice, it has become a form of reflecting on the historical conditions under which, and the means with which, things are made into objects of knowledge.1

This subtle difference between the classical and the contemporary epistemology hinges on a specific set of historical transformations in philosophy and the sciences during the twentieth century, and it is to this that his book directs its inquiry. From the nineteenth century of Emil Du Bois-Reymond and Ernst Mach, on through the works of the Polish immunologist Ludwik Fleck and the French epistemologist Gaston Bachelard, to Karl Popper, Edmund Husserl, Martin Heidegger, Ernst Cassirer, Alexandre Koyré, Thomas Kuhn, Stephen Toulmin, Paul Feyerabend, Georges Canguilhem, Louis Althusser, Michel Foucault, and Jacques Derrida, and on up to contemporary practitioners such as Ian Hacking for the English-speaking world and Bruno Latour for France, we follow the course of a slow process of historicizing and internal transformation of philosophy, the sciences, and epistemology as they interacted with each other.

As he shows in this short work, even the problematique, the very problems that epistemology set out to answer, changed en route from the early thinkers to the later:

Not by chance, an epistemology and history of experimentation crystallized conjointly. The question now was no longer how knowing subjects might attain an undisguised view of their objects; rather, the question was what conditions had to be created for objects to be made into objects of empirical knowledge under historically variable conditions. (Kindle Locations 44-45)

For anyone needing a basic overview of this fascinating history of the conjunctions and disjunctions of science and philosophy, this is a great little introduction, and not too costly.

1. Hans-Jörg Rheinberger. On Historicizing Epistemology: An Essay (Cultural Memory in the Present) (Kindle Locations 38-39). Kindle Edition.