Visions of the Future: Apocalypse or Paradise?

Continuing the frontal assault on our conceptions of the future, in both their negative and positive modes, I’d like to pick up the path from my previous notes on John Michael Greer’s assessment of America’s and the world’s prospects (here). He ended his book telling us that Americans need a new vision, a new Dream, one “that doesn’t require promises of limitless material abundance, one that doesn’t depend on the profits of empire or the temporary rush of affluence we got by stripping a continent of its irreplaceable natural resources in a few short centuries”. Yet he also warned us that “…nothing guarantees that America will find the new vision it needs, just because it happens to need one, and it’s already very late in the day. Those of us who see the potential, and hope to help fill it, have to get a move on”.1

Michio Kaku, in his book Physics of the Future, offers what he terms an “insider’s view” of the future. I find it ironic that he pulls the old insider/outsider trick, opposing scientific authority to the folk-wisdom of the tribe and assuming that scientific knowledge has some greater privilege and access to the future than that of historians, sociologists, science fiction writers, and “futurologists” – all of whom he gently removes from authority and truth, saying in his preface that they are “outsiders” – “predicting the world without any firsthand knowledge of science itself”. As if this placed them in a world of non-knowledge or folk-wisdom that could be left behind; as if they were mere children in a grown-ups’ world of pure scientific mystery that only the great and powerful “insider” – the scientist as inventor, investigator, explorer of the great mysteries of the universe – could reveal.

Yet, in the very next paragraph, after dismissing the folk-wisdom of the tribal mind and bolstering the power of science and scientists, he ironically admits that “it is impossible to predict the future with complete accuracy”, and that the best we can do is to “tap into the minds of scientists on the cutting edge of research, who are doing the yeoman’s work of inventing the future”.2 One notices that science is now equated with the “invention” of the future, as if the future were a product or commodity being built in the factories of knowledge, both material and immaterial, that will – as he terms it – “revolutionize civilization”. Of course, etymologically, invention is “a finding or discovery”, a noun of action from the past participle stem of invenire: to “devise, discover, find”. And since he uses the words “yeoman’s work” for scientists as inventors of the future, we can assume the old sense of yeoman as a “commoner who cultivates his land” or an “attendant in a noble household”, so that these new scientists are seen as laborers of the sciences producing for their masters, the new nobility of the elite Wall Street and corporate globalist machine.

(I will come back to the notion of the future as Invention in another essay in this series. What is the future? How do we understand this term? Is the future an invention, a discovery, a finding; or is it rather an acceleration of something already immanent in our past, a machinic power unfolding, or a power invading us from the future and manipulating our minds to deliver and shape us to its will? Time. What is this temporality? What is causality? Do we shape it or does it shape us?)

So in Kaku we are offered a vision of the future in alignment with the globalist vision of a corporatized future, one in which scientists are mere yeomen doing the bidding of their masters, inventing a future paid for by the great profit-making machine of capitalism. It’s not that his use of differing metaphors and displacements, his derision of the outsider, ill-informed, or folk-wisdom practices of historians, sociologists, science-fiction writers, and futurologists is in itself a mere ploy; no, it’s that, whether consciously or unknowingly, he is setting the stage – which on the surface appears so positive, so amiable, so enlightening and informative – for a corporate vision of the future that is, by virtue of the dismissal of its critics, already a done deal, a mere matter of unlocking through the power of “devices, inventions, and therapies”. Kaku is above all an affirmer of technology’s dream, of science as the all-powerful Apollonian sun-god of enlightened human destiny that will revolutionize civilization.

I doubt this is the dream that John Michael Greer had in mind when he said we need a new American Dream. Or is it? For Greer there is only the ultimate demise of the last two hundred years of Fordism, or the Industrial Age:

Between the tectonic shifts in geopolitics that will inevitably follow the fall of America’s empire, and the far greater transformations already being set in motion by the imminent end of the industrial age, many of the world’s nations will have to deal with a similar work of revisioning. (Greer, 276)

Yet this is where Greer leaves it: at a stage of revisioning to come, of dreams to be enacted. He offers no dream himself, only a negative critique of the existing dreams, the Fordist-era utopias that have failed humanity and are slowly bringing about disaster rather than transformation.

Kaku, on the other hand – a man whose works sell profitably, who has the ear of the common reader as well as the corporate profiteers – seeks his own version (or theirs?) of the American Dream. Unlike his previous book Visions, which offered a vision of the coming decades, this new one offers a hundred-year view of technology and of the other tensions in our global world that, as he tells it ominously, “will ultimately determine the fate of humanity”.

I’ll leave it there for this post, and will take up his first book, Visions: How Science Will Revolutionize the 21st Century in my next post, then his Physics of the Future in the third installment. 

1. Greer, John Michael. Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America. (New Society Publishers, 2014). Kindle Edition.
2. Michio Kaku. Physics of the Future. (Doubleday, 2012)

The Mad Hatter’s Tool-Box: How the Fixit Man can Move Us Into an Uncertain Future

We should attend only to those objects of which our minds seem capable of having certain and indubitable cognition.

– René Descartes, The Philosophical Writings of Descartes

Descartes would develop twenty-one rules for the direction of the mind, as if these could carry us toward that ultimate goal of certainty. In our age mathematicians have relied on probabilistic theorems to narrow down the field of uncertainty in such things as physics, economics, physiology, evolutionary biology, sociology, psychology, etc. Ludwig Wittgenstein, in his book On Certainty, would develop the notion that claims to certainty are largely epistemological, and that there are some things which must be exempt from doubt in order for human practices to be possible (Ali Reda has a good background if you need it: here).

For the rationalist Descartes, “someone who has doubts about many things is no wiser than one who has never given them a thought; indeed, he appears less wise if he has formed a false opinion about any of them. Hence it is better never to study at all than to occupy ourselves with objects which are so difficult that we are unable to distinguish what is true from what is false, and are forced to take the doubtful as certain; for in such matters the risk of diminishing our knowledge is greater than our hope of increasing it”.1 Of course things change, and in the 19th Century engineers would need a way to narrow down the range of uncertainty in practical problems; so the Probabilistic Revolution arose.
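To make that “narrowing” concrete, here is a minimal sketch (my own toy illustration in Python, not drawn from any of the works cited) of how repeated noisy measurements shrink the range of uncertainty around an estimate – the everyday, practical face of the probabilistic turn:

```python
import math
import random
import statistics

random.seed(0)

TRUE_LOAD = 100.0   # the quantity the engineer wants to know (arbitrary units)
NOISE = 15.0        # standard deviation of the measurement error

def measure():
    """One noisy reading of the true value."""
    return random.gauss(TRUE_LOAD, NOISE)

for n in (5, 50, 500, 5000):
    readings = [measure() for _ in range(n)]
    estimate = statistics.mean(readings)
    # Half-width of an approximate 95% interval around the estimate:
    half_width = 1.96 * statistics.stdev(readings) / math.sqrt(n)
    print(f"n={n:5d}  estimate={estimate:7.2f}  +/- {half_width:5.2f}")
```

The point is not the particular numbers but the shape of the move: certainty in Descartes’s sense never arrives, yet the interval within which one must act keeps getting narrower as the measurements pile up.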

Thomas Kuhn, in his now famous essay “What Are Scientific Revolutions?”, would argue that what characterizes revolutions is a change in several of the taxonomic categories prerequisite to scientific descriptions and generalizations. He would qualify this by saying that such change involves an adjustment not only of the criteria relevant to categorization, but also of the way in which given objects and situations are distributed among preexisting categories.2

Bernard Cohen, in the same work, admitted that in the twentieth century a real revolution in the physical sciences did come about with the incorporation of probability and statistical mathematics, which replaced the older Newtonian picture of simple causality, of assigned cause and effect. The same happened with biology in its genetic and evolutionary forms, and, from its birth in the 19th Century, with probability theory in the social sciences. Yet for him it was not a revolution in theory so much as a revolution in the application of theory. (PR, 40)

Ian Hacking would dispute both Kuhn and Cohen, telling us that what was revolutionary was neither a theoretical revolution in the structure of the sciences nor a revolution in their application, but rather the “taming of chance and the erosion of determinism”, which constitute one of the “most revolutionary changes in the history of the mind.” (PR, 54)

Whether we like to think about it or not, mathematics has informed philosophy, and vice versa, since the advent of the sciences. Many of the terms used in philosophy come directly out of their use in scientific theory and practice. In our day, with the advent of analytical philosophy, one would be hard put to remain a philosopher without some formal education in mathematics and the various forms of logic. Yet on the Continent this influx of mathematics and the sciences has, with the rise of phenomenology, for the most part been put on the back burner and even denied a central role. Oh sure, there have been several philosophers for whom it was central, but for the most part philosophy in the twentieth century grew out of the language of finitude and the ‘Linguistic Turn’ in phenomenology, existentialism, structuralism, deconstruction, and post-structuralist lines of thought. Yet at the end of the century one could see mathematics beginning to reemerge within philosophy in the works of Deleuze, Badiou, Jean-Luc Nancy, and many others. In our contemporary setting we are seeing a move away from both phenomenology and its concurrent Linguistic Turn, as well as from the analytical philosophies, toward a new and vibrant surge of synthetic philosophies of mathematics.

With the rise of both the NBIC fields (NanoTech, BioTech, InfoTech, and Cognitive Sciences) and ICTs (Information and Communications Technologies) we are seeing the need for a synthetic philosophy. Herbert Spencer was probably the first to use the term Synthetic Philosophy, in a project that tried to demonstrate that there were no exceptions to our being able to discover scientific explanations, in the form of natural laws, of all the phenomena of the universe. Spencer’s volumes on biology, psychology, and sociology were all intended to demonstrate the existence of natural laws in these specific disciplines. The 21st Century use of the term is quite different and far less positivistic.

Of late – at the behest of my friend Andreas Burkhardt – I’ve been reading Fernando Zalamea’s Synthetic Philosophy of Contemporary Mathematics. In this work he offers four specific theses: first, that contemporary mathematics needs both our utmost attention and our careful perusal, and that it cannot be reduced either to set theory and mathematical logic or to elementary mathematics; second, that to understand and even perceive what is at stake in current mathematics we need to discover the new problematics that remain undetected by ‘normal’ and ‘traditional’ philosophy of mathematics as now practiced; third, that a turn toward a synthetic understanding of mathematics – one based on the mathematical theory of categories – allows us to observe important dialectical tensions in mathematical activity which tend to be obscured, and sometimes altogether erased, by the usual analytical understanding; and, finally, that we must reestablish a vital pendular weaving between mathematical creativity and critical reflection – something that was indispensable for Plato, Leibniz, Pascal and Peirce – since, on the one hand, many present-day mathematical constructions afford useful and original perspectives on certain philosophical problematics of the past while, on the other hand, certain fundamental philosophical insolubilia fuel great creative forces in mathematics. (Zalamea, 4-5)

For a few years now we’ve seen a slow move from analytical to post-analytical philosophy, and the concurrent move from the phenomenological to the post-phenomenological, both on the Continent and in the Americas. One wonders if this philosophical transformation, along with the changes in and revolutions around certain technological and scientific theories and practices over the past 30 years, is bringing with it a sense of what Kuhn spoke of as the shift in “taxonomic categories prerequisite to scientific descriptions and generalizations”. Are the linguistic and mathematical frameworks that have guided thought for a hundred years changing? And, if so, what are the new terms?

We’ve seen in the work of such philosophers as William C. Wimsatt, in his Re-Engineering Philosophy for Limited Beings, a turn away from the rationalism, strategy, game theory, and puzzles that were at their height in the 1990’s toward a new empiricism, a shift both methodological and conceptual toward complexity and the senses.3 As he puts it, for any naturalized account:

We need a philosophy of science that can be pursued by real people in real situations in real time with the kinds of tools that we actually have – now or in a realistically possible future. … Thus I oppose not only various eliminativisms, but also overly idealized intentional and rationalistic accounts. (Wimsatt, 5)

Wimsatt turns toward what he terms a “species of realism”, a philosophy based top to bottom on heuristic principles of reasoning and practice, but one that also seeks a full accounting of the other things in our richly populated universe – including the formal approaches we have sought in the past. (Wimsatt, 6) He tells us that, pace Quine, his is an ontology of the rainforest, piecemeal and limited to the local view rather than some rationalizing global view or God’s-eye view of things in general. At the center of this realist agenda are heuristics that help us explore, describe, model, and analyze complex systems – and “de-bug” their results. (Wimsatt, 8) His is a handyman approach to philosophy and the sciences: the need for a tool-box of tools that can be used when needed and discarded when something better comes along. Instead of armchair mentation he would send us back into the streets, where the universe is up front and close. Yet remember to bring that toolbox – all those toys and computers, net connections, databanks, whatever it takes to get on with your work. Be up and doing: a pragmatic approach to science and philosophy that breaks down the barriers of stabilized truth-bearing authorities that hoard the gold of scientific knowledge like some hidden treasure. We need a new breed of active participants, go-getters, and pragmatists to do the dirty work of understanding what reality is up to.

What is interesting to me at this moment, in both the sciences and philosophy, is this sense of stock-taking, of sizing up the past couple hundred years, wading through the muck, weighing things in the balance and deciding what’s next, where we’re going with our lives and our work. There seems to be a great deal of thought-provoking movement in the air, as if we’re all coming to the same realization that yes, we need to change: our governments, our sciences, our philosophies have for the most part failed us, giving us neither the answers nor the changes we need to build a good life on this planet. In the men and women I’m reading in both philosophy and the sciences – in areas of feminism, racism, species relations, posthumanism, postnaturalism, postmodernism… etc. blah blah – we seem ready to ditch all these posts and move on to the defining metaphor of our age. There’s an energy running through the web, a good energy, as if people are tired of the bullshit, tired of the negative crap, tired of authorities that keep promising change and never delivering… even in the sciences we see things transforming so fast it’s hard to keep up. With Virilio, speed… with Noys and Land, acceleration… this fast pace of life wants somewhere to go, but we seem to be on a spinning ginny ready to drop its barker floor below us as we plunge into the abyss. And yet, as we can see from the philosophers and scientists above, there is also a sense of urgency – a sense that we need to be moving, that we need to get off our arse and be about our work… like the Mad Hatter: there’s no time left, “I must be on my way!”

1. Descartes, René (1985-05-20). The Philosophical Writings of Descartes: 1 (Kindle Locations 375-378). Cambridge University Press. Kindle Edition.
2. The Probabilistic Revolution. Ed. Lorenz Krüger, Lorraine J. Daston, and Michael Heidelberger (MIT Press, 1987)
3. William C. Wimsatt. Re-Engineering Philosophy for Limited Beings. (Harvard, 2007)

Lucretius and The Making of Modernity

Karl Marx would relate in his essay on French Materialism that the “overthrow of the metaphysics of the seventeenth century could be explained from the materialistic theory of the eighteenth century only in so far as this theoretical movement was itself explicable by the practical shape of the French life of that time. This life was directed to the immediate present, to worldly enjoyment and worldly interests, to the secular world. It was inevitable that anti-theological, anti-metaphysical, materialistic theories should correspond to its anti-theological, anti-metaphysical, its materialistic practice. In practice metaphysics had lost all credit.”

In our time we’ve seen a resurgence in the other direction, which seems to me a dangerous reversion to pre-critical thinking and practice. What was it that brought us to the materialist vision of reality and life to begin with? What seemed so attractive to those of the past few centuries that materialism came to the fore rather than the continued dogmatic imposition of theology, metaphysics, and the humanist traditions? In such works as Words of Life: New Theological Turns in French Phenomenology (Perspectives in Continental Philosophy) by Bruce Ellis Benson we see philosophers such as Jean-Luc Marion, Jean-Yves Lacoste, Kevin Hart, Anthony J. Steinbock, Jeffrey Bloechl, Jeffrey L. Kosky, Clayton Crockett, Brian Treanor, Christina Gschwandtner, Dominique Janicaud, Jean-Francois Courtine, Jean-Louis Chrétien, Michel Henry, and Paul Ricoeur all enquiring into and revitalizing theological notions, concepts, and frameworks in their own theories and practices. And that’s just in the world of French philosophy, and phenomenology in particular. I could name philosopher after philosopher from the Continental and even American analytical streams who seem to be flirting with this supposed theological turn in philosophy.

Continue reading

Building the greatest artificial intelligence lab on Earth

Just read this on Mind Hacks… looks like Google is becoming an AI company; and with Ray Kurzweil and other AI and transhumanist theoreticians at the helm, what should we expect in the future from Google? Just looking at the $3.2 billion investment in Nest Labs alone, not to speak of all the other companies it has bought up lately, one wonders just what “deep learning” and the future of data mining hold for our freedom. One of the investors in DeepMind told reporters at the technology publication Re/code two weeks ago that Google is starting up the next great “Manhattan project of AI”. As the investor continued: “If artificial intelligence was really possible, and if anybody could do it, this will be the team. The future, in ways we can’t even begin to imagine, will be Google’s.”

Kurzweil says that his main mission is to build an AI system based on natural language: “my project is ultimately to base search on really understanding what the language means. When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.” Continuing, he says, “Google will know the answer to your question before you have asked it. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.” Who needs Big Brother when you have Google in your head? And with Google in collusion with DARPA initiatives, who is to say what military and securitization issues will arise from such systems of intelligence? (see Google dominates Darpa robotics…) Will WorldMind 1.0 be the military’s secret initiative to take over control not only of all information on the web, but of those hooked into its virtual playpen of false delights? Instead of “dropping out” like my fellow hippies did in the sixties, maybe we should soon think about unplugging, disconnecting, and cutting the neurocircuits that are being rewired by the global brain? Or is it already too late?

Orwell wrote of Newspeak… which in our time is becoming “GoogleSpeak”, your friendly Avatar of the information highway. What next? A little smiley-faced icon on your car’s Google visor, iPhone, or thinkpad, an avatar that follows you everywhere 24/7, chattering away about this or that… all the while smiling as it relays your deepest medical, social, private or intimate informatic messages to the NSA or any of multiple other cyberagencies for data crystallization and surveillance recon. Oh, the wonders of the control society… blah, blah, blah… the naturalization of security in our age: GoogleSpeak is your friend, download her now! Or, better yet, let GoogleMind(tm) back up your brainwaves today; don’t lose another mindless minute of your action-filled life: let the GoogleMeisters upload your brain patterns to the Cloud…

As John Foreman at GigaOm remarks on data privacy and machine learning:

“If an AI model can determine your emotional makeup (Facebook’s posts on love certainly betray this intent), then a company can select from a pool of possible ad copy to appeal to whatever version of yourself they like. They can target your worst self — the one who’s addicted to in-app payments in Candy Crush Saga. Or they can appeal to your aspirational best self, selling you that CrossFit membership at just the right moment.

In the hands of machine learning models, we become nothing more than a ball of probabilistic mechanisms to be manipulated with carefully designed inputs that lead to anticipated outputs.” And, quoting Viktor Frankl – “A human being is a deciding being” – he continues: “But if our decisions can be hacked by model-assisted corporations, then we have to admit that perhaps we cease to be human as we’ve known it. Instead of being unique or special, we all become predictable and expected, nothing but products of previous measured actions.” In this sense what Deleuze once described as the “dividual” – “a physically embodied human subject that is endlessly divisible and reducible to data representations via the modern technologies of control” – is becoming naturalized in this new world of GoogleSpeak. Just another happy netizen of the slaveworlds of modern globalism, where even the best and brightest minds become grist for the noosphere mill of the praxeological GoogleMind(tm).
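A toy sketch of what that targeting amounts to computationally (my own illustration in Python; the ad names and scores are entirely made up, not from Foreman’s piece): the “carefully designed input” is simply whichever option a model scores highest for you.

```python
# Hypothetical per-user scores a trained model might emit: the predicted
# probability that this particular user acts on each piece of ad copy.
predicted_response = {
    "candy_crush_in_app_offer": 0.31,   # appeals to the 'worst self'
    "crossfit_membership":      0.22,   # appeals to the 'aspirational self'
    "generic_brand_banner":     0.05,
}

# The 'manipulation' reduces to an argmax over anticipated outputs:
chosen_ad = max(predicted_response, key=predicted_response.get)
print(chosen_ad)  # -> candy_crush_in_app_offer
```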

Mind Hacks

The Guardian has an article on technologist Ray Kurzweil’s move to Google that also serves to review how the search company is building an artificial intelligence super-lab.

Google has gone on an unprecedented shopping spree and is in the throes of assembling what looks like the greatest artificial intelligence laboratory on Earth; a laboratory designed to feast upon a resource of a kind that the world has never seen before: truly massive data. Our data. From the minutiae of our lives.

Google has bought almost every machine-learning and robotics company it can find, or at least, rates. It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an “undisclosed” but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for…


Lee Smolin: Time, Physics and Climate Change

The most radical suggestion arising from this direction of thought is the insistence on the reality of the present moment and, beyond that, the principle that all that is real is so in a present moment. To the extent that this is a fruitful idea, physics can no longer be understood as the search for a precisely identical mathematical double of the universe. That dream must be seen now as a metaphysical fantasy that may have inspired generations of theorists but is now blocking the path to further progress. Mathematics will continue to be a handmaiden to science, but she can no longer be the Queen.

– Lee Smolin, Time Reborn: From the Crisis in Physics to the Future of the Universe

What if everything we’ve been taught about time, space, and the universe is not just wrongheaded, but couched in a mathematics of conceptual statements (theorems) that presumed it could map the totality of reality in a one-to-one ratio of identity? This is the notion that mathematics can ultimately describe reality, that there is a one-to-one identity between the conceptual framework of mathematics and the universe. The Cartesian physicist – you may know him under the epithet of string theorist – will maintain that those statements about the accretion of the universe which can be mathematically formulated designate actual properties of the event in question (such as its date, its duration, its extension), even when there is no observer present to experience it directly. In doing so, our physicist is defending a Cartesian thesis about matter, but not, it is important to note, a Pythagorean one: the claim is not that the being of accretion is inherently mathematical, that the numbers or equations deployed in these statements exist in themselves. What if all those scientists, philosophers and mathematicians who have pursued this path had in fact taken a wrong turn along the way? This is the notion that Lee Smolin – an American theoretical physicist, a faculty member at the Perimeter Institute for Theoretical Physics, an adjunct professor of physics at the University of Waterloo, and a member of the graduate faculty of the philosophy department at the University of Toronto – puts forward in his new book Time Reborn: From the Crisis in Physics to the Future of the Universe.

Continue reading

Deleuze: Control and Becoming

New cerebral pathways, new ways of thinking, aren’t explicable in terms of microsurgery; it’s for science, rather, to try and discover what might have happened in the brain for one to start thinking this way or that. I think subjectification, events, and brains are more or less the same thing.

– Gilles Deleuze, Control and Becoming

The new information and communications technologies form the core infrastructure of what many have termed our Global Information Society, and of what Deleuze once described under the more critical epithet “societies of control”. As Harold Innis stated in his classic work Empire and Communications: “Concentration on a medium of communication implies a bias in the cultural development of the civilization concerned either towards an emphasis on space and political organizations or towards an emphasis on time and religious organization.”1 With the spread of information culture and technologies, the older forms of newspaper, radio, television, and cinema form the core nexus of propaganda machines for both government and corporate discipline and control within national systems, while – at least in the free world – information technologies remain borderless and open systems. Yet even this is being called into question in our time. Under both governmental and international-agency pressure, protocols for invasive control over the communications of the internet are becoming the order of the day.

Continue reading

Stephen Jay Gould: On the Reduction/Anti-Reduction Debate

At this point in the chain of statements, the classical error of reductionism often makes its entrance, via the following argument: If our brain’s unique capacities arise from its material substrate, and if that substrate originated through ordinary evolutionary processes, then those unique capacities must be explainable by (reducible to) “biology” (or some other chosen category expressing standard scientific principles and procedures).

The primary fallacy of this argument has been recognized from the inception of this hoary debate. “Arising from” does not mean “reducible to,” for all the reasons embodied in the old cliche that a whole can be more than the sum of its parts. To employ the technical parlance of two fields, philosophy describes this principle by the concept of “emergence*,” while science speaks of “nonlinear” or “nonadditive” interaction. In terms of building materials, a new entity may contain nothing beyond its constituent parts, each one of fully known composition and operation. But if, in forming the new entity, these constituent parts interact in a “nonlinear” fashion—that is, if the combined action of any two parts in the new entity yields something other than the sum of the effect of part one acting alone plus the effect of part two acting alone—then the new entity exhibits “emergent” properties that cannot be explained by the simple summation of the parts in question. Any new entity that has emergent properties—and I can’t imagine anything very complex without such features—cannot, in principle, be explained by (reduced to) the structure and function of its building blocks.

— Stephen Jay Gould, In Gratuitous Battle

—————————————————-

* A note in which he qualifies his use of “emergence”:

Please note that this definition of “emergence” includes no statement about the mystical, the ineffable, the unknowable, the spiritual, or the like—although the confusion of such a humdrum concept as nonlinearity with this familiar hit parade has long acted as the chief impediment to scientific understanding and acceptance of such a straightforward and commonsensical phenomenon. When I argue that the behavior of a particular mammal can’t be explained by its genes, or even as the simple sum of its genes plus its environment of upbringing, I am not saying that behavior can’t be approached or understood scientifically. I am merely pointing out that any full understanding must consider the organism at its own level, as a product of massively nonlinear interaction among its genes and environments. (When you grasp this principle, you will immediately understand why such pseudosophisticated statements as the following are not even wrong, but merely nonsensical: “I’m not a naive biological determinist. I know that intelligence represents an interaction of genes and environment—and I hear that the relative weights are about 40 percent genes and 60 percent environment.”)
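The nonlinearity Gould describes above can be made concrete with a toy calculation (my own sketch in Python, not Gould’s): a system whose parts interact through a multiplicative term responds to both parts together with something other than the sum of their separate effects.

```python
def response(a, b):
    """Toy system whose parts interact: the a*b term is the 'nonlinearity'."""
    return a + b + 4 * a * b

baseline     = response(0, 0)             # system with neither part active
part_a_alone = response(1, 0) - baseline  # effect of part A acting alone: 1
part_b_alone = response(0, 1) - baseline  # effect of part B acting alone: 1
together     = response(1, 1) - baseline  # effect of both parts together: 6

print(part_a_alone + part_b_alone)  # 2 <- the 'sum of the parts'
print(together)                     # 6 <- the whole, thanks to the interaction
```

Nothing mystical is hiding in the gap between 2 and 6; it is just the interaction term, which is Gould’s point about emergence being a humdrum, commonsensical affair.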

Rene Descartes: The Diversity of the Sciences as Human Wisdom

Distinguishing the sciences by the differences in their objects, they think that each science should be studied separately, without regard to any of the others. But here they are surely mistaken. For the sciences as a whole are nothing other than human wisdom, which always remains one and the same, however different the subjects to which it is applied, it being no more altered by them than sunlight is by the variety of the things it shines on. Hence there is no need to impose any restrictions on our mental powers; for the knowledge of one truth does not, like skill in one art, hinder us from discovering another; on the contrary it helps us.

– René Descartes, The Philosophical Writings of Descartes

This notion that the common thread uniting all the diverse sciences is the acquisition of human wisdom must be tempered by that further statement about freeing the mind from any intemperate restriction or regulation that would force it down the path of specialization and expertise. What I mean is that Descartes, like many in that era, was discovering the sciences in all their diversity at a time when the tendency was toward an almost guild-like enclosure and secrecy rather than an open, interdependent, pluralistic investigation; in that way the sciences were becoming more and more isolated and closed off from one another, so that the truths of one field of study no longer crossed the demarcated lines as knowledge in a universal sense of shared wisdom. Instead, learning in one field of the sciences was becoming restrictive, segmented, and closed off from other fields, in such a way that knowledge as a source of wisdom was becoming divided as well as divisive.

Continue reading

The Mind-Body Debates: Reductive or Anti-Reductive Theories?

More and more over the past few years I have come to see that the debates in scientific circles seem to hinge on two competing approaches to the world and its phenomena: the reductive and anti-reductive frameworks. To really understand this debate one needs a thorough understanding of the history of science itself. Obviously in this short post I’m not going to give you a complete history of science up to our time. What I want to do is tease out the debates themselves rather than provide a history, and doing that entails turning to philosophy and history rather than to the specific sciences. For better or worse, it is in the realm of the history of concepts that one begins to see the drift between these two tendencies played out over time. Like some universal pendulum, we see the rise and fall of one or the other conceptual matrix as different scientists and philosophers debate what it is they are discovering in either the world or the mind. Why? Why this swing from reductive to anti-reductive and back again in approaches to life, reality, and the mind-brain debates?

Philosophers have puzzled over this question from the time of the Pre-Socratics, Democritus, Plato, and Aristotle onwards. Take the subject of truth: in his book Truth, Protagoras made vivid use of two provocative but imperfectly spelled out ideas: first, that we are all ‘measures’ of the truth and that we are each already capable of determining how things are for ourselves, since the senses are our best and most credible guides to the truth; second, that given that things appear differently to different people, there is no basis on which to decide that one appearance is true rather than the other. Plato developed these ideas into a more fully worked-out theory, which he then subjected to refutation in the Theaetetus. In his Metaphysics Aristotle argued that Protagoras’ ideas led to scepticism. And finally Democritus incorporated modified Protagorean ideas and arguments into his theory of knowledge and perception.

Continue reading

The Mind-Body Debates: Beginnings and Endings

Jaegwon Kim tells us it all started with two papers published a year apart in the late fifties: “The ‘Mental’ and the ‘Physical’” by Herbert Feigl in 1958 and “Sensations and Brain Processes” by J. J. C. Smart the following year. Both of these men brought about a qualitative change in our approach to the study of the mind and its relation to its physical substrate. Each proposed, in independent studies, an approach to the nature of mind that has come to be called the mind-body identity theory, central-state materialism, the brain state theory, or type physicalism. Though the identity theory itself would lose traction and other theories would come to the fore, the underlying structure of the debates would continue to be set by the framework they originally put in place. As Kim suggests:

What I have in mind is the fact that the brain state theory helped set the basic parameters and constraints for the debates that were to come – a set of broadly physicalist assumptions and aspirations that still guide and constrain our thinking today.1

This extreme form of reductionist physicalism was questioned by the multiple realizability argument of Hilary Putnam and the anomalous monism argument of Donald Davidson. At the heart of Putnam’s argument was the notion of functionalism: that mental kinds and properties are functional kinds at a higher level of abstraction than physicochemical or biological kinds. Davidson, on the other hand, offered the notion of anomalous monism: that the mental domain, on account of its essential anomalousness and normativity, cannot be the object of serious scientific investigation, placing the mental on a wholly different plane from the physical. At first it seemed to many of the scientists of the era that these two approaches, each in its own distinctive way, made it possible for “us to shed the restrictive constraints of monolithic reductionism without losing our credentials as physicalists” (4). Yet, as it turned out, this too did not last.

Continue reading

Thomas Nagel: Constitutive Accounts – Reductionism and Emergentism

Thomas Nagel, in his Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, starts from the premise that psychophysical reductionism – a position in the philosophy of mind that is largely motivated by the hope of showing how the physical sciences could in principle provide a theory of everything – has failed to prove its case. As he states it:

This is just the opinion of a layman who reads widely in the literature that explains contemporary science to the nonspecialist. Perhaps that literature presents the situation with a simplicity and confidence that does not reflect the most sophisticated scientific thought in these areas. But it seems to me that, as it is usually presented, the current orthodoxy about the cosmic order is the product of governing assumptions that are unsupported, and that it flies in the face of common sense.1

You notice the sleight of hand in the move from “unsupported” to “flies in the face of common sense”. Over and over in the book he falls back on this common-sense doxa approach when he is unable to come up with legitimate arguments, admitting his amateur status as a “nonspecialist” as if this were an excuse, and then qualifying his own approach against the perceived “sophisticated scientific literature” as a way of disarming it in preference to his own simplified and colloquial amateurism. The sciences of physics, chemistry, and biology are the key sciences he wishes to use to prove his case. Behind it all is a philosophy of “neutral monism” that he seems to favor: he tells us he “favors some form of neutral monism over the traditional alternatives of materialism, idealism, and dualism” (KL 71-72). As he tells it: “It is prima facie highly implausible that life as we know it is the result of a sequence of physical accidents together with the mechanism of natural selection. We are expected to abandon this naïve response, not in favor of a fully worked out physical/chemical explanation but in favor of an alternative that is really a schema for explanation, supported by some examples.” (KL 85-88) To support his book’s overall theme he asks two major questions of the scientific community of reductionists:

First, given what is known about the chemical basis of biology and genetics, what is the likelihood that self-reproducing life forms should have come into existence spontaneously on the early earth, solely through the operation of the laws of physics and chemistry? The second question is about the sources of variation in the evolutionary process that was set in motion once life began: In the available geological time since the first life forms appeared on earth, what is the likelihood that, as a result of physical accident, a sequence of viable genetic mutations should have occurred that was sufficient to permit natural selection to produce the organisms that actually exist? (KL 89-93)

Continue reading

The Rise of Science and the Mathematization of Reality: Competing Views

It [Mathematics] did not, as they supposed, correspond to an objective structure of reality; it was a method and not a body of truths; with its help we could plot regularities—the occurrence of phenomena in the external world—but not discover why they occurred as they did, or to what end.

– Isaiah Berlin, from an entry in Dictionary of the History of Ideas – The Counter-Enlightenment

Isaiah Berlin, in his entry on what he termed the “Counter-Enlightenment”, tells us that opposition “…to the central ideas of the French Enlightenment, and of its allies and disciples in other European countries, is as old as the movement itself”.1 The common elements these reactionary writers opposed in the Enlightenment project were its notions of the autonomy of the individual, its empiricism and scientific methodology, its rejection of authority, tradition, and religion, and its dismissal of any transcendent notion of knowledge based on faith rather than Reason. Berlin places Giambattista Vico (1668-1744) and his Scienza nuova (1725; radically altered 1731) as playing a “decisive role in this counter-movement”. He specifically uses the term “counter-movement” rather than the appellation “counter-Enlightenment”.

I’ve been following the blog Persistent Enlightenment, and one of the interesting threads, or series of posts, on that site deals with the concept of the “Counter-Enlightenment”, a term coined by none other than Isaiah Berlin in the early 50’s (see its latest summation: here). I believe its author is correct in tracing this concept and its history and use in scholarship. Yet, for myself, beyond tracing this notion through many different scholars, I’ve begun rethinking some of the actual history of this period and of the different reactions to the Enlightenment project itself, as well as to the whole tradition of the sciences. One really needs to realize that the Enlightenment itself is the culmination of a process that started centuries before with the emergence of the sciences.

Stephen Gaukroger’s encyclopedic assessment of the sciences and their impact on the shaping of modernity has been key to much of my own thinking about the history and emergence of the sciences, as well as to understanding the underpinnings of the mechanistic world view that informs them in this early period. One of the threads of that work is the struggle of those traditionalist scholars of what we now term the “humanities” to protect human learning – the study of ancient literature along with philosophy, history, poetry, oratory, etc. – as, Gaukroger says, “an intrinsic part of any form of knowledge of the world and our place in it” (1).1 He mentions Gibbon’s remark that during his time the study of physics and mathematics had overtaken the study of belles lettres as the “pre-eminent form of learning” (1). In our own time this notion that philosophy and the humanities are non-essential to the needs of modern liberal democracies has taken on a certain edge as well.

Continue reading

Georges Canguilhem: A Short History of Milieu: 1800 to the 1960’s

The notion of milieu is becoming a universal and obligatory mode of apprehending the experience and existence of living beings…

– Georges Canguilhem, Knowledge of Life

Reading these essays by Georges Canguilhem, I can understand why he had such an impact on figures like Michel Foucault and Gilbert Simondon, to name only two French intellectuals of that era. He brings not only an in-depth understanding of the historical dimensions of concepts, but conveys it in such a way that one makes the connections among their various mutations and uses with such gusto and even-handed brilliance that one forgets one is reading what might otherwise be a purely abstract theatre of concepts in their milieu. Even where I might disagree with his conclusions, he had such a wide influence on those younger philosophers that it behooves us to study his works. In “The Living and Its Milieu” he gives a short history of the concept as it has been used by scientists, artists and philosophers. The notion of milieu came into biology by way of mechanics as defined by Newton and explicated in the entry on milieu in the Encyclopédie of Diderot and d’Alembert, attributed to Johann (Jean) Bernoulli. From there it was incorporated, in both plural and singular forms, by other biologists and philosophers in the 19th Century: among them Lamarck, inspired by Buffon, in its plural form, a usage established by Henri de Blainville; while in the singular form it was Auguste Comte and Etienne Geoffroy Saint-Hilaire who clarified its use. Yet for most people of the 19th Century it is through the work of Honoré de Balzac (in his preface to La Comédie humaine) that the term is known, as well as through the work of Hippolyte Taine, who used it as one of the three analytic explanatory concepts guiding his historical vision, the other two being race and moment. After 1870 the neo-Lamarckian biologists would inherit the term from Taine (such biologists as Alfred Giard, Félix Le Dantec, Frédéric Houssay, Johann Costantin, Gaston Bonnier, and Louis Roule).

The eighteenth-century mechanists used the term milieu to denote what Newton referred to as “fluid”. As Canguilhem relates, the problem Newton and others of his time faced was the central problem in mechanics of the action of distinct physical bodies at a distance (99).1 For Descartes this was not an issue, since for him there was only one mode of action – collision – and only one possible physical situation – contact (99). Yet when early experimental or empirical scientists tried to use Descartes’s theory they discovered a flaw: bodies blend together. Newton, in working through this issue, saw that what was needed instead was a medium within which these operations could take place, and so developed the notion of the ‘ether’. The luminiferous ether in Newton’s theory became an intermediary between two bodies; it is their milieu, and insofar as the fluid penetrates all these bodies, they are situated in the middle of it [au milieu de lui]. In Newton’s theory of forces one could thus speak of the milieu as the environment (milieu) in which there is a center of force.

Continue reading

Canguilhem, Simondon, Deleuze

Tracing certain concepts back into the murky pool of influence can be both interesting and, at the same time, troubling. The more I study Deleuze the more perplexed I become. Was he a vitalist, as some suggest? Or was he against such notions in his conception of life? Trying to understand just where the truth is to be found has taken me into the work of two other French thinkers: one a philosopher of the sciences, Georges Canguilhem; the other a philosopher of technology, Gilbert Simondon.

On Canguilhem

We learn from Wikipedia (here) that Canguilhem’s principal work in the philosophy of science is presented in two books, Le Normal et le pathologique, first published in 1943 and then expanded in 1968, and La Connaissance de la vie (1952). Le Normal et le pathologique is an extended exploration into the nature and meaning of normality in medicine and biology, and of the production and institutionalization of medical knowledge. It is still a seminal work in medical anthropology and the history of ideas, and is widely influential in part thanks to Canguilhem’s influence on Michel Foucault [and, thereby, indirectly on the work of Gilles Deleuze]. La Connaissance de la vie is an extended study of the specificity of biology as a science, the historical and conceptual significance of vitalism, and the possibility of conceiving organisms not on the basis of mechanical and technical models that would reduce the organism to a machine, but rather on the basis of the organism’s relation to the milieu in which it lives, its successful survival in this milieu, and its status as something greater than “the sum of its parts.” Canguilhem argued strongly for these positions, criticising 18th and 19th century vitalism (and its politics) but also cautioning against the reduction of biology to a “physical science.” He believed such a reduction deprived biology of a proper field of study, ideologically transforming living beings into mechanical structures serving a chemical/physical equilibrium that cannot account for the particularity of organisms or for the complexity of life. He furthered and altered these critiques in a later book, Ideology and Rationality in the History of the Life Sciences.

Continue reading

A New Individuation: Deleuze’s Simondon Connection

Looks like Andrew Iliadis, of the Philosophy of Information & Communication blog, has a new paper out showing the connections and influence of Gilbert Simondon’s work on Gilles Deleuze. He mentions Alberto Toscano’s The Theatre of Production: Philosophy and Individuation between Kant and Deleuze and its tracing of the lines of flight of the concept of individuation through several philosophers – an excellent read in itself. What is at stake in both Simondon and Deleuze, Iliadis says, following Toscano, “is a critique of the Aristotelian notion of hylomorphism”. What interests Iliadis in Simondon is that his resuscitation of the conceptual framework of the philosophy of individuation allows for a contribution to what is “really a new type of philosophy of information that found similarities with but remained opposed to the mathematical theory of communication”. It also “made our understanding of information more dynamic and in so doing also our understanding of ourselves as individuals… and the world around us from an epistemic-ontological point of view”. Finally, he sees Simondon’s legacy as offering “us a political perspective from which to engage the neoliberal world around us”. I’ll leave it to the reader to investigate the rest of Iliadis’s excellent investigation into Simondon’s concepts. It centers on Simondon’s critique of Aristotle’s hylomorphism, as well as the continuing relevance of three key concepts that Simondon introduced and Deleuze made the bedrock of his own philosophy: information, individuation, and disparation.

Gilbert Simondon: The Conditions of Technical Evolution

What are the reasons for the convergence manifest in the evolution of technical structures?

– Gilbert Simondon, On the Mode of Existence of Technical Objects

In my last post on Simondon’s early dissertation we saw the impetus in his thought toward defining an evolutionary sequence for technics, the technical object, and technical culture. One was tempted to see his critique in both a negative and a positive light. On the one hand he saw a certain manifestation of regulatory processes guiding both the genesis and the telos of the technological object and its culture; on the other he saw a tendency toward negentropy and resistance to these very processes within the evolutionary sequences that brought about the genesis and evolution of this very technics: “the machine is something which fights against the death of the universe; it slows down, as life does, the degradation of energy, and becomes a stabilizer of the world”.

In describing the process of standardization and the replacement of parts within the mode of existence of a technical object, Simondon tells us it is not the extrinsic causes (although they, too, apply pressure), but the necessary conditions of the intrinsic nature of the technical object itself that produce the very concretion of what is in fact contingent: “its being based on an analytical organization which always leaves the way clear for new possibilities, possibilities which are the exterior manifestation of an interior contingency”. 1

Continue reading

Gilbert Simondon: On the Mode of Existence of Technical Objects

On reading Gilbert Simondon’s early dissertation On the Mode of Existence of Technical Objects (Accursed Share, pdf) – Scott Bakker also pointed me to academia.edu for another translation (see here: pdf) – I see many aspects of the Society of Control emerging from his specific forms of the incorporation of technics, regulatory processes, and the technical ensemble (object) into alliance with our current socio-cultural problematique. He thinks that somewhere along the way human culture divorced itself from its technologies, and that in alienating ourselves from these technical objects (machines) we became dehumanized. So in effect he goes against the grain of many early theorists of technology, philosophers who said we need to keep a distance between our humanity and the rational, regulatory processes of the Industrial Age of machines. Instead he believes we need to reincorporate the machines (technics, technical objects) into our lives rather than keeping them at bay.

Because of our fears of the machine we have portrayed these technical objects as alien robotic presences with their own threatening intentions toward us. This projection of intentionality onto the technical object has in turn humanized the machine and dehumanized us: an inversion or reversal that has brought about our own cultural fragmentation. Instead we should open ourselves to the machines, even as they are themselves open systems in their own right. He sees humans in the light of orchestral conductors who do not so much dictate as interpret the codes and algorithms for the ensemble of machines, as part of an intra-dialectical process or feedback loop of inventiveness and creativity. Our responsibility is to act as arbiters and cultural encoders of certain limiting and indeterminate constraints in this open relationship with technical objects. The new axiomatic knowledge and enculturation of this phase shift is to incorporate these technical objects back into our socio-cultural dynamic. And part of that process should include a revamping of our educational institutions, making this new dynamic a part of every child’s education.

Continue reading

Thoughts on Philosophy and the Sciences

The deficiencies of each of these alternatives, in each of their variations, have been well demonstrated time and again, but this failure of philosophers to find a satisfactory resting spot for the pendulum had few if any implications outside philosophy until recent years, when the developments in science, especially in biology and psychology, brought the philosophical question closer to scientific questions – or, more precisely, brought scientists closer to needing answers to the questions that had heretofore been the isolated and exclusive province of philosophy.

– Daniel C. Dennett, Content and Consciousness

Rereading Dennett’s book Content and Consciousness makes me see how little has changed in philosophy between 1969 and now. The point of his statement above is to show how over time (history) the questions of philosophy are replaced by the questions of scientists. Why? Is there something about philosophy that keeps it at one remove from reality? Are we forever barred from actually confronting the truth of reality? Is it something about our tools, our languages, our particular methodologies? What is it that the sciences have, or do, that makes them so much better equipped to probe the truth about reality? What Dennett is describing above is the movement between differing views of reality that philosophers seem to flow through from generation to generation, shifting terms – nominalism/realism, idealism/materialism, etc. – down the ages, always battling over approaches to reality that seem to be moving in opposing ways, while the sciences slowly and with patient effort do the work of physically exploring and testing reality with probes, instruments, and apparatuses that actually do tell us what is going on.

Levi R. Bryant has a couple of thoughtful posts on his blog Larval Subjects (here) and (here) dealing with the twinned subjects of philosophy’s work and reality-probing. In the first post he surmises:

Here I think it’s important to understand that philosophy is not so much a discipline as a style of thought or an activity.  We are fortunate to have a discipline that houses those who engage in this sort of conceptual reflection, that provides a site for this reflection, and that preserves the thought of those who have reflected on basic concepts.  However, I can imagine someone objecting that certainly the scientist can (and does!) ask questions like “what is causality?”  To be sure.  However, I would argue that when she does this she’s not doing science but rather philosophy.  Philosophy doesn’t have to happen in a department to be philosophy, nor does it have to be in a particular section of the bookstore.  One need not have a degree in philosophy to engage in this sort of reflective activity; though it certainly helps.  It can take place anywhere and at any time.

The notion that scientists ask philosophical questions is true, and that in doing so they are doing philosophy is also true; yet I think this overlooks the fact that scientists not only ask philosophical questions, they also answer those questions scientifically rather than philosophically, and that seems to make all the difference between the two domains of knowledge. Science is not only, as Levi says of philosophy, a “sort of reflective activity”. The sciences utilize a set of methodologies that allow them to probe reality not only with conceptual tools, as in philosophy, but also with very real scientific instruments, apparatuses, etc. Obviously Levi would not disagree with this, and I’m sure he knows very well that this was not the question he was pursuing. This is not an argument with Levi about philosophy; in fact I have no problem with, and agree with, the points he was making. The point of his post was more about what philosophy is – the ontological question – not about the differing goals of philosophy and science and what they do. Yet my point is just that: would the typical scientist stop with the question “What is causality?” – would he, like the philosopher, be satisfied with reflecting on what is, staying with the metaphysical and speculative ontological question? No. The typical scientist wouldn’t stop there; he would ask the same question as the philosopher, but instead of trying to solve the nature of causality as an ontological problem his emphasis would fall not on the is but on the activity of causality itself (i.e., what is it that causality is doing?). The difference is subtle: for the philosopher this reflection on the nature of causality is about what causality is, while the same question for the scientist is about what causality does – under what conditions could I test the mechanisms of causality? That is the rub, the splice, the cut or suture between the two disciplines, or styles, or approaches toward the nature of causality.

There is a subtle connection between philosophy and science as well. You can ask of science how it pictures the world – study its laws, its theories, its models, and its claims – you can know and listen to what it says or describes about the world: the is of the world. But you can also consider not just what it says about the world but what is done: the experimental sciences not only reflect the is, they also come to understand the actual workings of causality by experimental methods that, under controlled or highly contrived circumstances, allow them to peek into the nature of causality and what it does, not just what it is.

Decoding the Void

dmf, in one of his usual cryptic messages to me, dropped a link to a site, Radiolab, where there is a recording of Patrick Purdon and his colleague Emery Brown, Professor of Anesthesia at Harvard Medical School, who try to answer this question: “What happens in that invisible moment when the patient goes under anesthesia?  And why is it that some patients remain conscious, even when they appear to be knocked out?”  In an experiment that takes the induction of anesthesia and slows it down to a crawl while analyzing the brain’s electrical activity they discover something interesting about how the brain works as the connections that once gave it consciousness are suddenly severed, cut off.

Listening to the recording is like listening to an old-time radio program with comic relief, strange sounds, and quirky effects interspersed with endless conversation: something like the brain itself, maybe. Definitely worth listening to this excellent broadcast. The gist of the message was simple. After the brain was slowed down to a particular point in the application of anesthesia it was noticed that something strange happened: it was as if someone had turned off a light switch. There was no transition from a waking to an unconscious state, nothing but a sudden pop, as if someone had just struck a gong or bell. Silence. Unconsciousness.


Hans-Jörg Rheinberger: A Short History of Epistemology

Hans-Jörg Rheinberger’s main research focus, as we learn from the blurb on his Max Planck Institute site, lies in the history and epistemology of experimentation in the life sciences. By bridging the gap between the study of history and contemporary cutting-edge sciences, such as molecular biology, his work represents an example of the transdisciplinarity emerging in the present knowledge-based society.

In his short book On Historicizing Epistemology: An Essay he tells us that the classical view of epistemology was a synonym for a theory of knowledge that inquires into what it is that makes knowledge scientific, while for many contemporary practitioners of this art, following the French practice, it has become a form of reflecting on the historical conditions under which, and the means with which, things are made into objects of knowledge.1

This subtle difference between classical and contemporary epistemology hinges on a specific set of historical transformations in philosophy and the sciences during the twentieth century, and it is to this that his book directs its inquiry. From the nineteenth century of Emil Du Bois-Reymond and Ernst Mach, on through the works of the Polish immunologist Ludwik Fleck and the French epistemologist Gaston Bachelard, to Karl Popper, Edmund Husserl, Martin Heidegger, Ernst Cassirer, Alexandre Koyré, Thomas Kuhn, Stephen Toulmin, Paul Feyerabend, Georges Canguilhem, Louis Althusser, and Michel Foucault, as well as Jacques Derrida, and on up to contemporary practitioners such as Ian Hacking for the English-speaking world and Bruno Latour for France, we follow the course of a slow process of historicizing and internal transformation of philosophy, the sciences, and epistemology as they interacted with each other.

As he shows in this short work, even the problematique, the very problems that epistemology set out to answer, changed en route from the earlier thinkers to the later ones:

Not by chance, an epistemology and history of experimentation crystallized conjointly. The question now was no longer how knowing subjects might attain an undisguised view of their objects, rather the question was what conditions had to be created for objects to be made into objects of empirical knowledge under historically variable conditions.(Kindle Locations 44-45).

For anyone needing a basic overview of this fascinating history of the conjunctions and disjunctions of science and philosophy, this is a great little introduction, and not too costly.

1. Hans-Jorg Rheinberger. On Historicizing Epistemology: An Essay (Cultural Memory in the Present) (Kindle Locations 38-39). Kindle Edition.

Epistemic Things: The Science of the Concrete

…“epistemic things” are what one does not yet know, things contained within the arrangements of technical conditions in the experimental system. Experimental systems are thus the material, functional units of knowledge production; they co-generate experimental phenomena and the corresponding concepts embodied in those phenomena. In this sense, experimental systems are techno-epistemic processes that bring conceptual and phenomenal entities— epistemic things— into being. Epistemic things themselves are situated at the interface, as it were, between the material and conceptual aspects of science.

– Hans-Jörg Rheinberger, An Epistemology of the Concrete: Twentieth-Century Histories of Life

The notion that what one does not yet know is of more import than what one does know is counterintuitive, up to a point. The idea that the experimental system, and the technical conditions within which it is framed, produce and co-generate these finite concrete conceptual and phenomenal entities – “epistemic things” – through the “techno-epistemic” processes of the experiment itself is amazing if true. This notion that these epistemic things are situated at the boundary zones, at the gateway and interface or medium of the material and conceptual frontiers of scientific experimentation, is both intriguing and questionable. The notion behind this is that once discovered, these epistemic entities become materialized interpretations that form the components of scientific models. Counter to Idealist notions this would be a very real material entity with a history and a finite lifespan. A conceptual unit, a quantum, a force if you will. As Rheinberger explains it:

The scientific object is gradually configured from the juxtaposition, displacement, and layering of these traces. The experimental systems molecular biologists design are “future generating machines,” configurations of experimental apparatus, techniques, layers of tacit knowledge, and inscription devices for creating semi-stable environments— little pockets of controlled chaos— just sufficient to engender unprecedented, surprising events. When an experimental system is working, it operates as a difference- generating system governed by an oscillatory movement of stabilization- destabilization-re-stabilization— what the molecular biologist François Jacob, echoing a similar statement of Derrida, called the “jeu des possibles.”1


Posthumanism, Neuroscience, and the Philosophy of Information, Science and Technology

As you gaze at the flickering signifiers scrolling down the computer screens, no matter what identifications you assign to the embodied entities that you cannot see, you have already become posthuman.

N. Katherine Hayles,  How We Became Posthuman

Do you use a digital phone, receive text messages? Have an iPad or other comparable device that allows you to interact with others visually, seeing and talking to them as if they were virtually present in the room? How do you know that these messages and images are truly from your friends and loved ones? What makes you assume that these signs on the digital blackboard represent the actual person who is in fact absent while present? Is there something about the message that reflects the essential features of this person hiding behind the screen of digital light and sound? Is it that you trust images, pictures, moving representations on the digital light fields of this technological wonder to be truthful, to show forth the actuality of the embodied figure of your friend or loved one on the other side of the screen? What if someone had faked the messages, spliced together a video program of your friend that was so real that you actually believed this was in fact the person themselves rather than the fabricated images of a very adept machinic intelligence imitating the patterns of your friend’s behavior?

What if these digital objects we now take for granted in our everyday lives are no longer mere tools but have become a part of our person? And, I may add, we should not narrow this to just these digital tools, but extend it to every tool that we use day by day. What if all these objects that we take for granted as useful things that help us do our work have remade us in their image, transformed our very identities as humans? What if, as Katherine Hayles suggests, we are through our daily interactions with these tools merging with our technologies and have already become posthuman?

As I type these words, sitting at my desk, listening to iTunes from some distributed network that might be situated in any city of the U.S., I begin to realize that I and the machine in front of me have become a new thing, a new object. That I’m no longer just me, no longer this singular person whose body is devoid of connection to other things, cut off in its own isolated chamber of integrity. No. Instead I’ve merged with this thing, this object in front of me, and become something else, a new thing or object with a distinctly different set of capabilities than if I were not connected to it. What does my use of a computer make me? I use a keypad, a terminal screen, which is in turn connected to a hard drive, which is connected to various devices: sound, networks, storage, etc., all of which have for the most part become almost invisible in the sense that I no longer see these tools in their own right, but as part of a cognitive environmental complex that consists of me, the computer, and the thousands of physically distant terminals across our planet through this interface that defines my machinic relations.


Franz Brentano: Catholicism, Idealism and Immortality

My psychological standpoint is empirical; experience alone is my teacher. Yet I share with other thinkers the conviction that this is entirely compatible with a certain ideal point of view.

The laws of gravitation, of sound, of light and electricity disappear along with the phenomena  for which experience has established them. Mental laws, on the other hand, hold true for our life to come as they do in our present life, insofar as this life is immortal.

Franz Brentano,   Psychology from an Empirical Standpoint

Even in this 1874 preface to his now famous work we get the hint of an Idealist framework working its way insidiously into the very fabric of this otherwise naturalist and empirical perspective. If Iain Hamilton Grant and his fellow commentators in Idealism: The History of a Philosophy are correct, and Idealism is a Realism of the Idea – a one-world idealism that takes nature seriously, that sees the Idea as a causal agent in terms of organization, and that is neither a pure formalism nor abstract in the separable sense but rather concretely relates part to whole “as the whole” – then we must ask how this transcendental realism entered the sciences of our day by way of none other than the early practitioners of the higher sciences in the twentieth century: such as Albert Einstein, whose mathematical-theoretical cosmology displaced the earlier mechanistic materialist perspective of Newton. But that is a longer tale than this particular post is set to problematize. Much of what we take to be scientific realism, and modern science itself, is based on many of the unmanifest suppositions of Idealism, according to Grant and his fellow commentators.

One aspect of Brentano we should not overlook is his life’s tale. Franz Brentano studied philosophy at the universities of Munich, Würzburg, Berlin (with Trendelenburg) and Münster. He had a special interest in Aristotle and scholastic philosophy. He wrote his dissertation, On the manifold sense of Being in Aristotle, in Tübingen.

Subsequently he began to study theology and entered the seminary in Munich and then Würzburg, preparing to become a Roman Catholic priest (ordained August 6, 1864). In 1865–1866 he wrote and defended his habilitation essay and theses and began to lecture at the University of Würzburg. His students in this period included, among others, Carl Stumpf and Anton Marty.


Wilfrid Sellars: Roots of Eliminativism and Sensory Consciousness

To reject the Myth  of the Given is to reject the idea that the categorial structure of the world — if it has a categorial structure  — imposes itself on the mind as a seal imposes an image on melted wax.

– Wilfrid Sellars, Foundations for a Metaphysics of Pure Process

To think is to connect and disconnect concepts according to proprieties of inference. Meanings are rule-governed functions supervening on the pattern-conforming behaviour of language-using animals.

– Ray Brassier, interview with the After Nature blog

As I’ve been slowly tracing the ancestry of the eliminativist naturalism I’ve seen in R. Scott Bakker recently, I’ve begun rereading Sellars, among others: W.V. Quine, Paul Feyerabend, and Richard Rorty.  It is in the essay quoted above, in the second section of article 95, that Wilfrid Sellars offers what we have now come to know as the eliminative thesis: “It is rather to say that the one framework is, with appropriate adjustments in the larger context, replaceable by the other — eliminable in favor of the other. The replacement would be justified by the greater explanatory power of the new framework.” 1 This is the central insight of the eliminativist argument, one that has in many ways taken on its own subtractive life and been put to other uses by a myriad of philosophers, scientists, etc.

As we begin to move toward a post-intentional framework within philosophy we still need to be reminded of the roots within which it was first cast. Sellars was still very much a part of the Kantian tradition, and locked into his own form of intentional philosophical perspective. I will not go into that in this short post; it is readily available to anyone willing to work through his books and essays. Instead I am here specifically trying to understand the eliminativist argument itself.


William James: Empiricist and Naturalist

‘Thoughts’ and ‘things’ are names for two sorts of object, which common sense will always find contrasted and will always practically oppose to each other. Philosophy, reflecting on the contrast, has varied in the past in her explanations of it, and may be expected to vary in the future. At first, ‘spirit and matter,’ ‘soul and body,’ stood for a pair of equipollent substances quite on a par in weight and interest. But one day Kant undermined the soul and brought in the transcendental ego, and ever since then the bipolar relation has been very much off its balance. The transcendental ego seems nowadays in rationalist quarters to stand for everything, in empiricist quarters for almost nothing.

– William James,  Essays in Radical Empiricism

I love reading William James. His ability to cut through the hogwash and strip the hornets’ nest of any metaphysical argument to its baseline still astounds me. Sometimes this supposed father – if I might say so – of Pragmatism (Peirce being the other, alternative ancestral thinker) is stripped of his actual inheritance in both empiricism and naturalism, and even, dare we add, skepticism. Let’s remember that it is here in his work that we hear the battle cry of eliminativists everywhere: “I believe that ‘consciousness,’ when once it has evaporated to this estate of pure diaphaneity, is on the point of disappearing altogether. It is the name of a nonentity, and has no right to a place among first principles. Those who still cling to it are clinging to a mere echo, the faint rumor left behind by the disappearing ‘soul’ upon the air of philosophy” (Kindle Locations 134-136). Now some will see this as James’s reversion to the age-old nominalism that denies the existence of universal entities or objects, but accepts that particular objects or entities exist. I’ll not take us down the road of the issue of ‘universals’; that’s another tale.


Technogenesis: The Emergence of Machinic Sapiens or Homo Cyborgensis?

Nature … is by the art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal. For seeing life is but a motion of Limbs, the begining whereof is in some principall part within; why may we not say, that all Automata (Engines that move themselves by springs and wheeles as doth a watch) have an artificiall life?

– Thomas Hobbes, Leviathan

Our role as humans, at least for the time being, is to coax technology along the paths it naturally wants to go.

–  Kevin Kelly, What Technology Wants

Humans have been fascinated with these strange puppet-like beings that move of their own accord since at least ancient Greece.  Aristotle once said that the “movements of animals may be compared with those of automatic puppets, which are set going on the occasion of a tiny movement; the levers are released, and strike the twisted strings against one another” (Aristotle, On the Motion of Animals, 350 BC.). Yet, up until our own time most of this has been nothing more than parlor-trick entertainment, an illusion for the masses that played off the strangeness of our own inherent affective hopes and fears.

In our Age of Naturalism the underbelly of our psychic life has for the better part of two hundred years played itself out in the strange genres of fantastic literature. In this literature you will find all the old gods, demons, angels, monsters, vampires, werewolves, etc. to your heart’s content. In our own time a branch of this fantastic literature became what we term Science Fiction (SciFi). Yet there was a new twist within that genre, one that combined extrapolation from known facts with the imaginative leap of forecasting the trends of technology in its socio-cultural and moral-ethical movements.

In our own century we’ve had many practitioners of and transformations within the old-style fantastic. A few of those masters: Jorge Luis Borges, Italo Calvino, J.G. Ballard, and Stanislaw Lem – and, personally, I would add Philip K. Dick as a crossover between both. The point of this present post is that a great deal of the philosophy, science, etc. that gets written in top-rate journals and books for professional scientists and philosophers never truly seeps down into the public at large directly, but only indirectly through the imaginative works of those strange fantastic masters. For whatever reason the marginal and breakthrough ideas, concepts, and notions always seem to be pushed toward an extremity within these fantastic stories. It allows us to take a break from our serious professional mindset and suddenly dip down into that realm of marvels where the progeny of our imagination can take on a secret life of their own and play out the logic of their futures without harm or trepidation to that fleshy cast of characters in our present Real.

What is uncanny in the fiction of the above-mentioned authors is that each in his own way delved into that intersection of technology and bios: the place where humans and their mechanical cousins, in one form or another, step onto the stage as a rival species, sharing the light of the sun for the first time since the Neanderthals went extinct. We seem to fill the vacuum of that strangeness with all manner of affective relations – our fears and hopes, our search for answers and our need for knowledge. Even that old goat Nietzsche once formulated a fable:

In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the haughtiest and most mendacious minute of “world history” – yet only a minute. After nature had drawn a few breaths the star grew cold, and the clever animals had to die. (‘On Truth and Lie in an Extra-Moral Sense’)


Posthuman Anxiety: Were we ever human?

Every one of these people was convinced that in the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes.1

Reading this new work by James Barrat got me to thinking. He seems to misunderstand and fear the very scientists that he is questioning about AI. Little does he understand that these scientists have for the most part left his folk-psychology terrors far behind, that they live the mechanist/eliminativist paradigm with a vengeance. For these scientists we never were human to begin with, and all the ancient religious and philosophical bric-a-brac of folk-psychology is just another illusory stance, one which our secular scientists will very soon replace with something else, something much like themselves: machines with brains. The only difference will be one of invariance: these new machine intelligences will not be so different from our biomechanical brains as such, but will be made of other materials that differ only in kind. Our biomechanical brains and their possibly quantum brains may in fact be closer in resemblance than our fears and folk-psychologies would have us believe.

James Barrat, like many humans, is still caught up in the older folk-psychology, displaying a wariness of this maneuver of the scientists in their ever-expanding dominion of knowledge and power. As he sees it, if it’s inevitable that machines will make our decisions, then when will the machines get this power, and will they get it with our compliance? How will they gain control, and how quickly? (ibid. intro) The problem with these questions is that they are couched in the language of an outmoded humanism. He automatically assumes that machines are different and differing from us in some essentialist way. He also speaks of power and control as if these supposed inhuman alien machines will suddenly rise up in our midst like any good science-fiction horror show and take over the world. The fallacy in this is obvious: we are the machines that have already done that job just fine; we don’t need to worry about our progeny doing it again. In fact, they will more than likely just fulfill our direst scenarios in our self-fulfilling prophecies, not in spite of but because we have invented them to do just that. The Dream of the Machine is our own secret dream; we are afraid not of the AIs but of the truth of our own nature, afraid to accept that we, too, may already be the very thing we fear most: machines.

My friend R. Scott Bakker would probably say we have nothing to fear but fear itself, and then he would say: “Yes, this is one of those actual nightmares I’ve been in touch with for a long while now.” There is still that part of Bakker that harbors the older folk-psychology beliefs that he otherwise so valiantly despises in his eliminativist naturalism. For him everything is natural all the way down, and that would include these strange alterities we label AIs. Now, for me, the jury is still out, but my guess is that yes, the scientists – backed by the vast agglomeration of investment from governments, corporations, etc. known as the great late-capitalist hive of networks supporting the practical sciences – will at some point in the near future produce something resembling a simulacrum of our present organic intelligence in some other form. What form that may take is still open to debate.

Even Vernor Vinge, who wrote the first tract on this in his now classic The Coming Technological Singularity, once stated that “if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue.”2 For Vinge the process was inevitable because the Singularity’s “coming is an inevitable consequence of the humans’ natural competitiveness and the possibilities inherent in technology. And yet … we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others.”(ibid.)

But what should we do? Should we just pretend this is all a strange far-out surmise on the part of scientists, that surely this is not a possibility for the near future, and go hide our heads in the sand? Or should we do something else? David Roden of enemyindustry has been writing about this and other aspects of the posthuman dilemma for a while now. In his essay The Disconnection Thesis he tells us that “Vinge’s idea of a technologically led intelligence explosion is philosophically important because it requires us to consider the prospect of a posthuman condition succeeding the human one.”3 For David the only way to evaluate the posthuman condition would be to witness the emergence of posthumans. With this he emphasizes that what we need is an anti-essentialist model for understanding this new descent into the posthuman matrix. This concept of descent he describes in a “wide” sense insofar as qualifying entities might include our biological descendants or beings resulting from purely technical mediators (e.g., artificial intelligences, synthetic life-forms, or uploaded minds) (Kindle Locations 7391-7393).

Yet, reading his work I wonder if he too is still caught up in the old outmoded folk-psychology belief that humans are distinct from machines, rather than seeing them within an eliminativist naturalism that harbors only a difference in kind. It’s as if these practitioners are almost afraid to leave the old box of philosophical presuppositions behind and forge ahead, inventing new tools and frameworks onto which they might latch their descriptive theories. Here is a sentence in which David stipulates the difference between human and posthuman, asking that the “human-posthuman difference be understood as a concrete disconnection [my emphasis] between individuals rather than as an abstract relation between essences or kinds. This anti-essentialist model will allow us to specify the circumstances under which accounting would be possible” (Kindle Locations 7397-7399).

But if we have never been human in the old folk-psychological sense of that term, then isn’t all this essentialist/anti-essentialist rhetoric just begging the question? What if this dichotomy of the human/posthuman is just another false supposition? What if these terms are no longer useful? What if we were never human to begin with? What then? If the eliminativist naturalists are correct then these questions should just vanish before the actual truth of science itself. Even Roden is moving in this direction when he tells us that in a future article he will “consider the possibility that shared “non-symbolic workspaces”— which support a very rich but non-linguistic form of thinking— might render human natural language unnecessary and thus eliminate the cultural preconditions for our capacity to frame mental states with contents expressible as declarative sentences” (Kindle Locations 7418-7421). What is this but an acceptance of the eliminativist program? Maybe this is just it: the audience that David is trying to convince is not those in the scientific community, who already understand very well what is going on, but those who are still trapped within the older folk-psychology, who believe in the myth of mental states and the whole tradition of an outworn intentionality that no longer holds water for those very naturalists that James Barrat above fears.

As David unveils his tale he opens a window on the past, saying, “there are grounds for holding that the process of becoming human (hominization) has been mediated by human cultural and technological activity” (Kindle Locations 7448-7449). Isn’t this a key? Maybe the truth is that culture is itself a form of technology – culture as a machine for structuring hominids according to some natural process that we are only now barely understanding? In fact Roden goes on to speak of assemblages “in which humans are coupled with other active components: for example, languages, legal codes, cities, and computer mediated information networks” (Kindle Locations 7458-7461). But if R. Scott Bakker is right then even “though we are mechanically embedded as a component of our environments, outside of certain brute interactions, information regarding this systematic causal interrelation is unavailable for cognition”.4 For Scott this whole human/posthuman dichotomy would probably be seen in terms of neglect. As he stated in a recent article, which ties in nicely with David’s sense of social assemblages as technological machines, the brain “being the product of an environment renders cognition systematically insensitive to various dimensions of that environment. All of us accordingly suffer from what might be called medial neglect. The first-person perspectival experience that you seem to be enjoying this very moment is itself a ‘product’ of medial neglect. At no point do the causal complexities bound to any fraction of conscious experience arise as such in conscious experience. As a matter of brute empirical fact, you are a component system nested within an assemblage of superordinate systems, and yet, when you reflect ‘you’ seem to stand opposite the ‘world,’ to be a hanging relation, a living dichotomy, rather than the causal system that you are. Medial neglect is this blindness, the metacognitive insensitivity to our matter of fact componency, the fact that the neurofunctionality of experience nowhere appears in experience. In a strange sense, it simply is the ‘transparency of experience,’ an expression of the brain’s utter inability to cognize itself the way it cognizes its natural environments.”4

In an almost asymmetrical movement Dr. Roden tells us that “biological humans are currently “obligatory” components of modern technical assemblages. Technical systems like air-carrier groups, cities or financial markets depend on us for their operation and maintenance much as an animal depends on the continued existence of its vital organs. Technological systems are thus intimately coupled with biology and have been over successive technological revolutions” (Kindle Locations 7461-7464). Yet, for Roden the emergence of posthumans out of this technogenesis machine of networks and assemblages will ultimately be seen as a “rupture” in that very system. Yet, I wonder if this is true. What if instead it is just one more natural outcome of the possibilities of science as seen within the eliminativist naturalist perspective – not an oddity, but part of a process that started eons ago within our own evolutionary heritage?

There comes a moment in David’s essay when he comes close to actually affirming the eliminativist naturalist position, saying:

The most plausible argument for abandoning anthropological essentialism is naturalistic: essential properties seem to play no role in our best scientific explanations of how the world acquired biological, technical and social structures and entities. At this level, form is not imposed on matter from “above” but emerges via generative mechanisms that depend on the amplification or inhibition of differences between particular entities (for example, natural selection among biological species or competitive learning algorithms in cortical maps). If this picture holds generally, then essentialism provides a misleading picture of reality.(Kindle Locations 7520-7524).

Not only misleading but erroneous, according to the eliminativist naturalist perspective of many cognitive scientists, for whom this displacement of folk-psychology is long overdue.

Now I’ve presented this as a neutral interlocutor, not as either an affirmer or denigrator of these views. I just don’t have enough information as of yet to truly make such a judgment call. So take the above with a grain of salt from one who is working within an eliminativist naturalist perspective that he himself still finds strangely familiar and familiarly strange.

I look forward to Dr. David Roden’s new book Posthuman Life: Philosophy at the Edge of the Human, coming out next May (on Amazon at least), which should shed further light on this subject.

1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (Kindle Locations 60-62). St. Martin’s Press. Kindle Edition.
2. Vinge, Vernor (2010-06-07). The Coming Technological Singularity – New Century Edition with DirectLink Technology (Kindle Locations 100-101). 99 Cent Books & New Century Books. Kindle Edition.
3. Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection) (2013-04-03) (Kindle Location 7307). Springer Berlin Heidelberg. Kindle Edition.
4. Cognition Obscura (Reprise)

R. Scott Bakker: The Question of Eliminativism?

How times have changed. The walls of the brain have been overrun. The intentional bastions of the soul are falling. Taken together, the sciences of the mind and brain are developing a picture that in many cases out-and-out contradicts many of the folk-psychological intuitions that underwrite so much speculation within the humanities. Unless one believes the humanities magically constitute a ‘special case,’ there is no reason to think that its voluminous, armchair speculations will have a place in the ‘post-scientific’ humanities to come.

– R. Scott Bakker, blog post

There are those who have over the years, like Wilfrid Sellars, separated out what might be termed the “folk psychology/manifest image” from the supposed grand edifice of the “scientific image” of knowledge and reference. The idea is that if folk psychology is like a theory, then, like any theory, it could be superseded and replaced by a better theory as scientific psychology and neuroscience progress.  Sellars himself, however, was unmoved by this idea, because the concepts of folk psychology (of the manifest image) are not focused solely (or maybe even principally) on the description and explanation of phenomena.  In the course of science, better descriptions of what is going on in our heads when we think and sense will be developed, but such descriptions are only a part of the function of mentalistic language.  (see Wilfrid Sellars)

The roots of eliminativism go back to the writings of Wilfrid Sellars, W.V. Quine, Paul Feyerabend, and Richard Rorty. In our own time eliminativists have for the most part held to a clearly expressed view that mental phenomena simply do not exist and will eventually be eliminated from people’s thinking about the brain in the same way that demons have been eliminated from people’s thinking about mental illness and psychopathology. The best known today are Paul and Patricia Churchland, who deny the existence of propositional attitudes (a subclass of intentional states), and Daniel Dennett, who is generally considered to be an eliminativist about qualia and the phenomenal aspects of consciousness. (see Understanding Eliminativism)

It took me a while to work my way through much of Bakker’s backlog of posts, but in the course of doing so I discovered where he was coming from and exactly what it was he thought he’d found in his pet theory, Blind Brain Theory. He starts with the notion that consciousness is like a pin drop in a vast sea of information of which it is almost totally unaware (i.e., we are all “informatically encapsulated”, blind to our own brain processes). Because of this, not only are we ‘in the dark’ with reference to ourselves, “we are, in a very real sense, congenitally and catastrophically misinformed” (see Spinoza’s Sin and Leibniz’s Mill).

He tells us that BBT seeks to demote ‘traditional epistemology’, treating it as a signature example of the way informatic neglect leads us to universalize heuristics, informatic processes that selectively ignore information to better solve specific problem sets. It pretty much asks the simple question: What if we were never what we thought we were? What if what we’ve considered human was in itself both misinformed and flatly untrue? What if the truth were as simple as subtracting what we’ve so dearly held as being human from our actual humanity? What would be left after we subtracted all the illusory notions, concepts, folk-psychology?

For Bakker the whole gamut of philosophy, based as it is on the hidden assumptions of ‘intentionality’, is a dupe, a broken vessel from the age of folk-psychology that will sooner or later be replaced by those stone-cold engineers of our future sciences:

Intentionality is a theoretical construct, the way it looks whenever we ‘descriptively encounter’ or theoretically metacognize our linguistic activity—when we take a particular, information starved perspective on ourselves. As intentionally understood, norms, reasons, symbols, and so on are the descriptions of blind anosognosiacs, individuals convinced they can see for the simple lack of any intuition otherwise.

To say cognition is heuristic and fractionate is to say that cognition cannot be understood independent of environments, no more than a screw-driver can be understood independent of screws. It’s also worth noting how this simply follows from mechanistic paradigm of the natural sciences. (here)

Mechanistic and eliminativist, Bakker reduces what was once termed intentional consciousness to a small heuristic device, a machine that works to solve only a minor set of problems, never knowing the vast sea of information surrounding it, of which it is totally blind and unaware. “The human brain necessarily suffers what might be called proximal or medial neglect. It constitutes its own blind spot, insofar as it cannot cognize its own functions in the same manner that it cognizes environmental functions.”(ibid.) This leads to “a thoroughgoing natural enactive view—which is to say, a mechanical view—brains can be seen as devices that transform environmental risk into onboard mechanical complexity, a complexity that, given medial neglect, metacognition flattens into heuristics such as aboutness.”

On the Blind Brain Theory, or as I’ve been calling it here, Just Plain Crazy Enactive Cognition, we are natural all the way down. On this account, intentionality is simply what mechanism looks like from a particular, radically blinkered angle. There is no original intentionality, and neither is there any derived intentionality. If our brains do not ‘take as meaningful,’ then neither do we. If environmental speech cues the application of various, radically heuristic cognitive systems in our brain, then this is what we are actually doing whenever we understand any speaker.

The intuition, almost universal in philosophy, that ‘rule following’ or ‘playing the game of giving and asking for reasons’ is what we implicitly do is simply a cognitive conceit. On the contrary, what we implicitly do is mechanically participate in our environments as a component of our environments.

I’ll grant that Scott may or may not be right about the replacement of “folk-psychology” at some point in the future. Fair enough. But his conclusions, and his reliance on the mechanistic ontology and its supporting framework, are another matter altogether. He thinks it has explanatory power and begs for someone to come along and poke holes in his arguments. But it is not as easy as that. One first needs to understand the premises upon which his anti-philosophical framework of scientistic eliminativism rests before one can tackle the ontological questions on which his conclusions depend. For a long while I’ve been baited, hooked, trying to understand where he was coming from and where he was going. For a while I almost gave up the chase, thinking that he had backed himself into a corner behind a blanket of well-trod eliminativist arguments that seemed almost unassailable, and that I might never find any chinks in his anti-philosophical armor. I wonder even now if I have.

Reading through the hundreds of blog posts that have brought him to this point of refinement in his self-proclaimed Blind Brain Theory, with its counter-intuitive labyrinth of ever-changing arguments, is a difficult task for anyone who might assail this mercurial intellect. Where to begin? Well, first we need to understand just why these scientists, these eliminativists, believe what they believe. To do this I’ll need to understand some of the thinkers that led up to this mechanistic/eliminativist naturalism in science to begin with.

So maybe a little break from Continental thought and a dip into those precursors of eliminativism will be in order: Wilfrid Sellars, W.V. Quine, Paul Feyerabend, and Richard Rorty. It’s been a while since I read any of their works, yet one must keep an open mind. If Scott is to convince me of the truth of his claims then it seems one has to have a thorough understanding of just what his brand of eliminativism enacts, what variation it purports to support, its conceptual tools and framed premises.

So this post is more for me: a challenge to understand the roots of the eliminativist world-view and why thinkers such as Scott Bakker have chosen to defend it so doggedly.

Of late I’ve been reading a work by a philosopher who was once a member of the eliminativist tribe (see his now classic From Folk Psychology to Cognitive Science: The Case Against Belief (1985)), a believer in its basic set of arguments and program, but who has since left the camp, become an apostate, and written a work to understand just how this all came about: Stephen P. Stich’s Deconstructing the Mind:

For some years now, deconstructionism has been a pretentious and obfuscatory blight on the intellectual landscape. But buried in the heaps of badly written blather produced by people who call themselves “deconstructionists,” there is at least one idea-not original with them-that is worth noting. This is the thesis that in many domains both intellectual activity and everyday practice presuppose a significant body of largely tacit theory. Since the tacit theories are typically all but invisible, it is easy to proceed without examining them critically. Yet once these tacit theories are subject to scrutiny, they are often seen to be very tenuous indeed; there is nothing obvious or inevitable about them. And when the weaknesses of the underlying theories have been exposed, the doctrines and practices that rely on them can be seen to be equally tenuous. If, as I would suggest, this process of uncovering and criticizing tacit assumptions is at the core of deconstructionism, then eliminativism is pursuing a paradigmatically deconstructionist program. However, if I am right, the eliminativist deconstruction of commonsense psychological discourse has itself tacitly assumed a dubious package of presuppositions about the ways language and ontology are related. 1 (intro)

If Stich is on to something then it is this set of dubious “presuppositions about the ways language and ontology are related” that must first be diagnosed before we can begin to assail the bastion of R. Scott Bakker’s BBT. More on that in the future….

1. Stephen P. Stich. Deconstructing the Mind (Philosophy of Mind).

Plasma Research at the University of Missouri

Ever wonder how stupid our government is? I do all the time. Take plasma fusion for instance. The science underpinning much of fusion energy research is plasma physics. Plasmas—the fourth state of matter—are hot gases, hot enough that electrons have been knocked free of atomic nuclei, forming an ensemble of ions and electrons that can conduct electrical currents and can respond to electric and magnetic fields. The science of plasmas is elegant, far-reaching, and impactful. Comprising over 99% of the visible universe, plasmas are also pervasive. Plasma is the state of matter of the sun’s center, corona, and solar flares. Plasma dynamics are at the heart of the extraordinary formation of galactic jets and the accretion of stellar material around black holes. On earth it is the stuff of lightning and flames. Plasma physics describes the processes giving rise to the aurora that gently illuminates the far northern and southern nighttime skies. Practical applications of plasmas are found in various forms of lighting and semiconductor manufacturing, and of course plasma televisions.

University of Missouri engineer Randy Curry and his team have developed a method of creating and controlling plasma that could revolutionize American energy generation and storage. Fire and lightning are familiar forms of plasma, and life on Earth depends on the energy emitted by plasma produced during fusion reactions within the sun. However, Curry warns that without federal funding of basic research, America will lose the race to develop new plasma energy technologies. The basic research program was originally funded by the Office of Naval Research, but continued research has been funded by MU.

The difference between these multibillion-dollar programs and the one offered by the University of Missouri is that physicists usually rely on electromagnetic fields to harness the power of plasma in fusion power experiments. But University of Missouri researchers have managed to create rings of plasma that can hold their shape without the use of outside electromagnetic fields—possibly paving the way for a new age of practical fusion power and leading to the creation of new energy storage devices.

Traditional efforts to achieve nuclear fusion have relied upon multi-billion-dollar fusion reactors, called tokamaks, which harness powerful electromagnetic fields to contain the super-heated plasmas in which the fusion reactions take place. The ability to create plasma with self-confining electromagnetic fields in the open air could eliminate the need for external electromagnetic fields in future fusion experiments, and with it, much of the expense.

The researchers created plasma rings about 15 centimeters in diameter that flew through the air across distances of up to 60 centimeters. The rings lasted just 10 milliseconds, but reached temperatures hotter than the surface of the sun, at around 6600 to 7700 degrees K (6327 to 7427 degrees C). Plasma physicists suspect that magnetic fields are still involved—but that the plasma rings create their own.
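Just to keep the numbers straight, here is a minimal Python sketch – my own back-of-the-envelope check, not anything from the MU team or the original reports – that converts the quoted kelvin range to Celsius and works out the rings’ implied average speed from the reported distance and lifetime:

```python
# Back-of-the-envelope check of the figures quoted above.
# All values are taken from the paragraph, not from the original paper.

def kelvin_to_celsius(temp_k: float) -> float:
    """Convert a temperature from kelvin to degrees Celsius."""
    return temp_k - 273.15

ring_temps_k = (6600.0, 7700.0)   # reported ring temperature range, in kelvin
for t_k in ring_temps_k:
    print(f"{t_k:.0f} K = {kelvin_to_celsius(t_k):.0f} C")
# -> 6600 K = 6327 C and 7700 K = 7427 C, matching the quoted range

distance_m = 0.60    # rings reportedly travel up to 60 cm
lifetime_s = 0.010   # and last about 10 ms
print(f"implied average speed: {distance_m / lifetime_s:.0f} m/s")
# -> roughly 60 m/s, assuming a ring covers the full distance in its lifetime
```

Nothing deep here; it simply confirms that the Celsius figures in the article are the kelvin values minus 273.15, and that the quoted distance and lifetime imply rings moving on the order of tens of meters per second.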

“This plasma has a self-confining magnetic field,” said Randy Curry, an engineer and physicist at the University of Missouri in Columbia. “If one can generate and contain it without large magnets involved, of course fusion energy would be an application.” But the researchers’ success in creating self-contained plasma rings came as a surprise. “We did not expect that,” Curry says.

The plasma device at MU could be enlarged to handle much larger amounts of energy, according to Curry. With sufficient funding, the team could develop a system within three to five years that would also be considerably smaller. He noted that they used old technologies to build the current prototype of the plasma-generating machine. Using newer, miniaturized parts, he suggests they could shrink the device to the size of a bread box.

According to Science, President Barack Obama last week submitted a $3.8 trillion budget request to Congress for 2014 that, if enacted, would boost the research budgets of nearly every federal agency. His continued support for science stands out in an otherwise flat budget that aims to shrink the federal deficit by clamping down on entitlement programs and raising money by revising the tax code. The president’s spending blueprint should lift the spirits of a community that, along with all other sectors of the economy, has endured a bumpy political ride for the past year. The president’s $143 billion request for research and development more than erases a nearly $10 billion dip from 2012 to 2013 caused by sequestration—the $85 billion, across-the-board cut in discretionary spending that went into effect in March.

Against such breakthroughs as the University of Missouri team is working on, one sees just the opposite in European and United States projects using large fusion reactors based on magnetic coils, which run over budget and continue to cost taxpayers and governments more money than projected. At the Joint European Torus (JET), Euratom is investigating the possibility of recasting JET as an international facility after 2018, asking the other six ITER partners—China, India, Japan, Russia, South Korea, and the United States—to contribute to the cost of keeping it running. But with ITER already expected to cost several times the original estimate, the partners may not be keen to shoulder the extra burden. (here)

So if Randy Curry of the University of Missouri and his team can produce self-confining plasma that does not need magnetic coils, why should we continue funding such large Manhattan-style projects as JET and our own Max Planck reactors? “This plasma has a self-confining magnetic field,” said Curry: if this is true (see video here), then the cost of maintaining such large fusion reactors would be a thing of the past. What’s interesting is that if the international community could get behind such projects we could truly have clean and safe energy for the world, because unlike the older forms, which left waste products, this form does not, and it is self-renewing. Let’s hope within the next ten years they can make solid headway toward this goal.

Paul Feyerabend: The Practical Philosopher

The knowledge we claim to possess, the very general knowledge provided by modern physical theory included, is an intricate web of theoretical principles and practical, almost bodily abilities and it cannot be understood by looking at theories exclusively.

– Paul Feyerabend, The Tyranny of Science

After this little quip Feyerabend lays it low and true: most popular accounts of science and many philosophical analyses are therefore chimeras, pure and simple (108).1 Being a software engineer/architect I’ve had occasion to relate to Feyerabend’s words where he mentions, on a slightly different note, the work of engineers: to evaluate a project an engineer needs both theoretical and on-site experience, and this means that he should have theoretical as well as practical schooling (108). He continues, saying:

A variety of disasters has convinced some administrators that the top-down (theoretical) approach is defective and that engineering practice is an important part of the education of even an engineering theoretician. (108)

As I read posts by young or old philosophers I am almost tempted to have them go back to school and major in some practical area, say mechanical engineering, architecture, biology, etc., where they can actually spend summers interning and gaining practical insight into their subject from a bottom-up perspective. I often think of political theorists of the past few hundred years and wonder just how they thought their strange theoretical generalizations would ever support practical application. Too bad those young revolutionaries of former eras didn’t have some school of Revolution 101 to show them that in actual application their ideas might just take on a life of their own and connect back to human emotion and anger, follow the death drives right down into the cesspool of some slime bucket of catastrophe.


Heroes of Science: Arthur Galston

In his early research the biologist Arthur Galston experimented with a plant growth regulator, triiodobenzoic acid, and found that it could induce soybeans to flower and grow more rapidly. However, he also noted that if applied in excess, the compound would cause the plant to shed its leaves.

The Military-Industrial Complex of the era used Galston’s findings in the development of the powerful defoliant Agent Orange, named for the orange stripe painted around steel drums that contained it. The chemical is now known to have contained dioxins, which have proven to be associated with cancers, birth defects and learning disabilities. From 1962 to 1970, American troops released an estimated 20 million gallons of the chemical defoliant to destroy crops and expose Viet Cong positions and routes of movement during the Vietnam War.

As an activist he wrote letters and academic papers and gave broadcasts and seminars that described the environmental damage wrought by Agent Orange, noting that the spraying on riverbank mangroves in Vietnam was eliminating “one of the most important ecological niches for the completion of the life cycle of certain shellfish and migratory fish.” Galston traveled to Vietnam to monitor the impact of the chemical. In 1970, with Matthew S. Meselson of Harvard University and other scientists, Galston charged that Agent Orange also presented a potential risk to humans. The scientists lobbied the Department of Defense to conduct toxicological studies, which found that compounds in Agent Orange could be linked to birth defects in laboratory rats. The revelation led President Richard M. Nixon to order a halt to the spraying of Agent Orange.


Heroes of Science: Pierre Gassendi

Pierre Gassendi was one of the prodigies of the early seventeenth century. He was born in 1592 in Provence, went to college at Digne, and by the age of sixteen was lecturing there. After studying theology at Aix-en-Provence, he taught theology at Digne in 1612. When he received his doctorate in theology, he became a lecturer in philosophy at Aix, and then canon of Grenoble. Quite early in life, Gassendi began his extensive scientific researches, assisted and encouraged by some of the leading intellectuals of Aix, like Peiresc. The philosophy course that he taught led Gassendi to compile his extended critique of Aristotelianism, the first part of which appeared as his earliest publication in 1624, the Exercitationes Paradoxicae adversus Aristoteleos. This was followed by several scientific and philosophical works, which gained Gassendi great renown in the intellectual world and brought him into contact with the man who was to be his lifelong friend, Father Marin Mersenne. In 1633, Gassendi was appointed Provost of the Cathedral of Digne, and in 1645, professor of mathematics at the Collège Royal in Paris. Gassendi retired in 1648 and died in 1655.

In spite of his tremendous role in the formation of “the new science” and “the new philosophy,” Gassendi’s fame has survived mainly for his criticisms of Descartes’ Meditations and not for his own theories, which throughout the seventeenth century had rivaled those of his opponent. He is also remembered for the part he played in reviving the atomic theory of Epicurus. But, by and large, until quite recently, Gassendi’s status as an independent thinker has been most neglected. Perhaps this is due in part to Descartes’ judgment of him, and in part to the fact that he usually presented his ideas in extremely lengthy Latin tomes, which are only now being translated into French.

But Gassendi, in his lifetime, had an extremely important intellectual career, whose development, perhaps more than that of René Descartes, indicates and illustrates what J. H. Randall called “the making of the modern mind.” Gassendi started out his philosophical journey as a sceptic, apparently heavily influenced by his reading of the edition of Sextus brought out in 1621, as well as by the works of Montaigne and Charron. This phase of “scientific Pyrrhonism” served as the basis for Gassendi’s attacks on Aristotle as well as on the contemporary pseudoscientists and made Gassendi one of the leaders of the Tétrade. However, he found the negative and defeatist attitude of humanistic scepticism unsatisfactory, especially in terms of his knowledge of, and interest in, the “new science.” He announced then that he was seeking a via media between Pyrrhonism and Dogmatism. He found this in his tentative, hypothetical formulation of Epicurean atomism, a formulation that, in many respects, comes close to the empiricism of modern British philosophy.

– Richard H. Popkin, The History of Scepticism

Note: adding a new category that will offer historical and critical biographical details on the history of science and key players within that history.