Heinz von Foerster: Self-Organization as the Reduction of Entropy in a System


Even though it was Ross Ashby who first conceived the notion of self-organization in systems, he did not understand its relation to entropy. It would be Heinz von Foerster, in his essay “On Self-Organizing Systems and Their Environments,” who would deploy the formula that informs self-organization today:

For von Foerster, a self-organizing system is one whose “internal order” increases over time. This immediately raises the double problem of how this order is to be measured and how the boundary between the system and the environment is to be defined and located. The second problem is provisionally solved by defining the boundary “at any instant of time as being the envelope of that region in space which shows the desired increase in order”. For measuring order, von Foerster finds that Claude Shannon’s definition of “redundancy” in a communication system is “tailor-made.” In Shannon’s formula,

$$R = 1 - \frac{H}{H_m}$$

where R is the measure of redundancy and H/H_m the ratio of the entropy H of an information source to its maximum value H_m. Accordingly, if the system is in a state of maximum disorder (i.e., H is equal to H_m), then R is equal to zero: there is no redundancy and therefore no order. If, however, the elements in the system are arranged in such a way that, “given one element, the position of all other elements are determined”, then the system’s entropy H (which is really the degree of uncertainty about these elements) vanishes to zero. R then becomes unity, indicating perfect order. Summarily, “Since the entropy of the system is dependent upon the probability distribution of the elements to be found in certain distinguishable states, it is clear that [in a self-organizing system] this probability distribution must change such that H is reduced”.1

The formula thus leads to a simple criterion: if the system is self-organizing, then the rate of change of R should be positive (i.e., ∂R/∂t > 0). To apply the formula, however, R must be computed for both the system and the environment, since their respective entropies are coupled. Since there are several different ways for the system’s entropy to decrease in relation to the entropy of the environment, von Foerster refers to the agent responsible for the changes in the former as the “internal demon” and the agent responsible for changes in the latter as the “external demon.” These two demons work interdependently, in terms of both their efforts and results. (ibid. 56)
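To make the criterion concrete, here is a minimal sketch in Python (my own illustration, not von Foerster’s or Johnston’s): it computes the entropy H, the maximum entropy H_m, and the redundancy R for a four-state system, then lets a toy “internal demon” repeatedly sharpen the probability distribution, so that H falls, R rises, and ∂R/∂t > 0 holds. Note that von Foerster’s full criterion also tracks the environment’s entropy; the sketch covers only the system side.

```python
import math

def entropy(p):
    """Shannon entropy H of a discrete distribution, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def redundancy(p):
    """Von Foerster's order measure R = 1 - H/H_m, with H_m = log2(n)."""
    return 1 - entropy(p) / math.log2(len(p))

print(redundancy([0.25, 0.25, 0.25, 0.25]))  # maximum disorder: R = 0.0
print(redundancy([1.0, 0.0, 0.0, 0.0]))      # perfect order:    R = 1.0

# Self-organization criterion: a toy "internal demon" that repeatedly
# sharpens the distribution (rich-get-richer) drives H down and R up.
p = [0.4, 0.3, 0.2, 0.1]
for t in range(5):
    p = [pi ** 2 for pi in p]        # amplify the more probable states
    s = sum(p)
    p = [pi / s for pi in p]         # renormalize to a distribution
    print(f"t={t}  R={redundancy(p):.4f}")   # R increases monotonically
```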

Given the fruitfulness of the idea that a complex order can emerge from a system’s exposure to “noise” or other disturbances, von Foerster’s illustration can only seem disappointing. Or rather, viewed in the light of the sea changes that the concept of self-organization would undergo over the next twenty years, von Foerster’s proposal (and some would say the same about Ashby’s theorizing) can have at most an anticipatory value. These sea changes followed from the interaction and relay of two series of developments. On the one hand, Ilya Prigogine (chemistry), Hermann Haken (physics), and Manfred Eigen (biology) made groundbreaking empirical discoveries of self-organizing systems, in which instabilities resulting from the amplification of positive feedback loops spontaneously create more complex forms of organization. On the other hand, Stephen Smale, René Thom, Benoît Mandelbrot, and Mitchell Feigenbaum, to name a few, made discoveries in topology and nonlinear mathematics that led to a complete revamping of dynamical systems theory. This story, which involves the discovery of how nonlinear factors can produce deterministically chaotic systems, is now fairly familiar. One simple but telling difference this sea change has made in dynamical systems theory is that the concept of the attractor has replaced notions like stability and equilibrium. As a system moves toward increasing instability it may reach a point where two sets of values for its variables, and hence two different states, are equally viable. (In its phase portrait as a dynamical system this is referred to as a bifurcation.) But which state will it “choose”? There is no way of knowing, since the system’s behavior has become indeterminate at this point. The very presence of a bifurcation means that the system is falling under the influence of a different attractor and thus undergoing a dynamical change as a whole. (ibid. 57-58)
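The period-doubling route to chaos that Feigenbaum studied can be seen in miniature in the logistic map, the standard textbook example (a sketch of mine, not Johnston’s): as the parameter r grows, the single stable attractor splits into a period-2 attractor, then period 4, until the orbit becomes chaotic and indeterminate in just the sense described above.

```python
def attractor(r, transient=2000, samples=64):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and return the distinct values the orbit settles onto."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(samples):
        x = r * x * (1 - x)
        seen.add(round(x, 4))
    return sorted(seen)

# Each bifurcation doubles the number of states the system can "choose".
for r in (2.8, 3.2, 3.5, 3.9):
    pts = attractor(r)
    kind = f"period {len(pts)}" if len(pts) <= 8 else "chaotic band"
    print(f"r = {r}: {kind} {pts[:4]}{' ...' if len(pts) > 4 else ''}")
```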

There is an important lesson in seeing how these early pioneers shifted the ground of both the sciences and philosophical speculation. It was pragmatic, practical engineering thinking and empirical approaches, rather than theoretical knowledge, that carried the work forward. Yet because these pioneers lacked a sufficiently broad theoretical base, they were unable to make the leap necessary to extend their practical, empirical knowledge into other domains.

Either way, the notion that instabilities resulting from the amplification of positive feedback loops spontaneously create more complex forms of organization would come back into play decades later. One should also look back toward thermodynamics. When one imagines how an artificial organism might suddenly bifurcate and make that leap into superintelligence, one can look back and see just what path it took to get to that indeterminate point.

1. John Johnston, The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI (p. 55). Kindle Edition.

Jules Verne: Acceleration, Science, and the Future


In 1863, the great novelist Jules Verne undertook perhaps his most ambitious project. He wrote a prophetic novel, called Paris in the Twentieth Century, in which he applied the full power of his enormous talents to forecast the coming century. His biographers have noted that, although Verne was not a scientist himself, he constantly sought out scientists, peppering them with questions about their visions of the future. He amassed a vast archive summarizing the great scientific discoveries of his time. Verne, more than others, realized that science was the engine shaking the foundations of civilization, propelling it into a new century with unexpected marvels and miracles. The key to Verne’s vision and profound insights was his grasp of the power of science to revolutionize society.1

Science as the engine of progress and development, of modernity as it has come down to us, is central to the underlying myths of speed and accelerationism. Jules Verne could be considered the father of accelerationism. Frank Borman, an astronaut on Apollo 8, would comment: “In a very real sense, Jules Verne is one of the pioneers of the space age”. Books like Twenty Thousand Leagues Under the Sea, From the Earth to the Moon, The Adventures of Captain Hatteras, An Antarctic Mystery, Mathias Sandorf, and Journey to the Center of the Earth would inspire scientists and explorers: the pioneering submarine designer Simon Lake and other maritime figures such as William Beebe, Sir Ernest Shackleton, Robert Ballard, and Jacques Cousteau; the rocketry innovators Konstantin Tsiolkovsky, Robert Goddard, and Hermann Oberth; the explorer Richard E. Byrd, after a flight to the South Pole; Edwin Hubble, the American astronomer; the preeminent speleologist Édouard-Alfred Martel; and others like Fridtjof Nansen, Wernher von Braun, Guglielmo Marconi, and Yuri Gagarin.

Even Marx himself would understand that science is the engine of production and progress:

“…the entire production process appears not subsumed under the direct skillfulness of the worker, but rather as the technological application of science. [It is] hence, the tendency of capital to give production a scientific character; direct labour is reduced to a mere moment in this process. As with the transformation of value into capital, so does it appear in the further development of capital that it presupposes a certain given historical development of productive forces on one side – science too is among these productive forces – and, on the other, drives and forces them further onwards.”2

This notion that the production process is driven by applied science as a productive force, one that propels it forward in a continuous, progressive process, is key to Marx’s understanding. Instead of capital as the driver of production, as many assume, Marx would describe a combination of social labour and the “technological application of natural sciences, on the one side, and to the general productive force arising from social combination of total production on the other side” (ibid). These two forces would ultimately lead capital to “its own dissolution as the form of dominating production” (ibid).

Marx, as he begins to diagnose the power of science and machines, tells us that at first the power of machines to take over human labour was marshaled not by the machines but by mechanizing the worker; as he says, the rise of machines in industry arose by “dissection – through the division of labour, which gradually transforms the workers’ operations into more and more mechanical ones, so that at a certain point a mechanism can step into their places. Thus, the specific mode of working here appears directly as becoming transferred from the worker to capital in the form of the machine, and his own labour capacity devalued thereby” (ibid).

Even now we hear many workers in the labour force worried that robots and intelligent systems will make them obsolete. In From Watson to Siri we discover that, as in the machine takeovers of the early Fordist era, we are facing it again:

“…in the infancy of the 21st century, a new revolution is reshaping the American economy, what we might call the “A.I. revolution.” … machines employing natural language processors, voice recognition software and other tools of artificial intelligence are proliferating, just as textile mills and, later, assembly lines proliferated and fundamentally altered the American economy in the 19th and early 20th centuries. Then, American workers won the race against machines by using advances in technology to usher in a new era of consumerism and mass production. This time … we must learn to co-exist with machines, rather than race against them.” (PBS/Need to Know)

It is also interesting, continuing with Marx’s essay, that real wealth creation depends less “on labour time and on the amount of labour employed than on the power of agencies set in motion during labour time, whose ‘powerful effectiveness’ is itself in turn out of all proportion to the direct labour time spent on their production, but depends rather on the general state of science and on the progress of technology or the application of this science to production” (ibid). Again, science and applied knowledge are the engine of wealth creation, in which the human worker becomes more of a “watcher and regulator” of a production process carried out for the most part by machines. What Marx is ultimately driving at is the human as scientist and knowledge worker: his “understanding of nature and his mastery over it by virtue of his presence in the social body – it is, in a word, the development of the social individual which appears as the great foundation-stone of production and wealth” (ibid, 62).

The point that Marx is making, in contradistinction to many labour theorists, is that wealth is produced by promoting less labour time and more free time for social individuals, who thereby become artistic, scientific, and educated in the time set free:

“The surplus labour of the mass has ceased to be the condition for the development of general wealth, just as the non-labour of the few, for the development of the general powers of the human head. With that, production based on exchange value breaks down, and the direct, material production process is stripped of its penury and antithesis. The free development of individualities, and hence not the reduction of necessary labour time so as to posit surplus labour, but rather the general reduction of necessary labour of society to a minimum, which then corresponds to the artistic, scientific etc. development of the individuals in the time set free, and with the means created.” (ibid, 63)

Surplus labour, of course, is the labour performed in excess of the labour necessary to produce the means of livelihood of the worker (“necessary labour”). The exploitation of surplus labour, making individuals work more than is needed for their basic needs, should be put to an end, and the wealth so produced distributed to the mass of workers to further their education, so that through their artistic and scientific creativity and inventions industry would benefit greatly. As Marx pointedly tells us, the capitalists have no clue: instead of opening up free time for the workers and giving them an opportunity to further their artistic and scientific education, they force them to work longer hours than is necessary to survive:

“Capital itself is the moving contradiction, in that it presses to reduce labour time to a minimum, while it posits labour time, on the other side, as sole measure and source of wealth. Hence it diminishes labour time in the necessary form so as to increase it in the superfluous form; hence it posits the superfluous in growing measure as a condition – question of life or death – for the necessary.” (ibid, 63)

But remember, Marx had previously told us that the development of the social individual is the “great foundation-stone of production and wealth”, not surplus labour nor labour time as the source of wealth. The contradiction comes into play in that the capitalists use the powers of science and the resources of nature as the engine of wealth creation, in collusion with social combination and social intercourse, independent of the labour time employed (ibid, 63). Yet, on the other hand, they play the blind man’s card and have us believe that labour time is the measuring rod for the giant social forces thereby created, confining them within the limits required to maintain already created value as value (ibid, 63).

Yet Marx, almost surprised by his own analyses, reminds us that it is the human brain, freed up to produce knowledge for society, that is the lynchpin of wealth, and that the “creation of a large quantity of disposable time apart from necessary labour time for society generally” is what leads to people being able to pursue artistic and scientific education. Yet the capitalists, in contradiction to their own practices, invert this logic and take hold of the surplus labour to force workers not into free time for education but to produce excessive material products for the market; and as Marx suggests, if it succeeds too well capital “suffers from surplus production, and then necessary labour is interrupted, because no surplus labour can be realized by capital” (ibid, 64). The point here is that the capitalist is his own worst enemy, and the cycles of bust and depression, inflation, etc. are brought about by the fantasia of the capitalists.

Ultimately Marx’s diagnosis would be that, as the contradiction continues to produce these same cycles of boom and bust over and over again, it is up to the workers, not the entrepreneurs and bankers (Capital), to appropriate their own surplus labour (free time): “Once they have done so – and disposable time thereby ceases to have an antithetical existence – then, on one side, necessary labour time will be measured by the needs of the social individual, and, on the other, the development of the power of social production will grow so rapidly that, even though production is now calculated for the wealth of all, disposable time will grow for all” (ibid, 65). This leads Marx to his final point: that real wealth is the combined or total productive power of all workers, and that the measure of wealth is not labour time but “disposable time”. Under the capitalist, who bases wealth on labour time, on the exploitation of the worker beyond the necessary time he needs to support himself and his family, the worker is forced to “work longer than the savage does, or than he himself did with the simplest, crudest tools” (ibid. 65).

Instead, as Marx tells us, what should occur is the saving of labour time, turning it into free time, into education and productive time for family and life, thereby allowing workers to accumulate knowledge for society: “this process is then both a discipline, as regards the human being in the process of becoming, and, at the same time, practice, experimental science, materially creative and objectifying science, as regards the human being who has become, in whose head exists the accumulation of knowledge of society” (ibid, 66).

As we move into an era of artificial intelligence, smart cities, and technocapitalism, the need for creativity, higher performance, and inventiveness has come more and more into play, and as Luis Suarez-Villa tells us, this is becoming an era in which creativity itself is becoming the greatest commodity: “The commodification of this most intangible and elusive human quality has characteristics separating it from the commodification of other resources in previous stages of capitalism.”3

In my next post I’ll introduce some of where Luis Suarez-Villa sees our brave new world of technocapitalism taking us. All of this serves as a lead-in to Kaku and others on the direction of capital, acceleration, and science as they merge and form the new worlds ahead.

1. Michio Kaku. Physics of the Future. (Random House, 2012)
2. Karl Marx, “Fragment on Machines,” in #Accelerate: The Accelerationist Reader, eds. Robin Mackay and Armen Avanessian (Urbanomic, 2014)
3. Luis Suarez-Villa, Technocapitalism: A Critical Perspective on Technological Innovation and Corporatism (Kindle Locations 357-359). Kindle Edition.

Visions of the Future: Apocalypse or Paradise?

Continuing with a frontal assault on our conceptions of the future in both their negative and positive modes, I’d like to continue down the path from previous notes on John Michael Greer’s assessment of America’s and the world’s prospects (here). He ended his book telling us that Americans need a new vision, a new Dream, one “that doesn’t require promises of limitless material abundance, one that doesn’t depend on the profits of empire or the temporary rush of affluence we got by stripping a continent of its irreplaceable natural resources in a few short centuries”. Yet, he also warned us that “…nothing guarantees that America will find the new vision it needs, just because it happens to need one, and it’s already very late in the day. Those of us who see the potential, and hope to help fill it, have to get a move on”.1

Michio Kaku in his book Physics of the Future offers what he terms an “insider’s view” of the future. I thought it ironic that he would pull the old insider/outsider trick that opposes scientific authority to the folk wisdom of the tribe, assuming that scientific knowledge has some greater privilege and access to the future than historians, sociologists, science fiction writers, and “futurologists”, whom he gently removes from authority and truth, saying in his preface that they are all “outsiders”, “predicting the world without any firsthand knowledge of science itself”, as if this placed them in a world of non-knowledge or folk wisdom that could be left behind, as if they were mere children in a grown-up’s world of pure scientific mystery that only the great and powerful “insider”, the scientist as inventor, investigator, and explorer of the great mysteries of the universe, could reveal.

Yet, in the very next paragraph, after dismissing the folk wisdom of the tribal mind and bolstering the power of science and scientists, he will ironically admit that “it is impossible to predict the future with complete accuracy”, that the best we can do is to “tap into the minds of scientists on the cutting edge of research, who are doing the yeoman’s work of inventing the future”.2 One notices that science is now equated with the “invention” of the future, as if the future were a product or commodity being built in the factories of knowledge, both material and immaterial, that will, as he terms it, “revolutionize civilization”. Of course, etymologically invention is “a finding or discovery,” a noun of action from the past participle stem of invenire, to “devise, discover, find”. And as he uses the words “yeoman’s work” for scientists as inventors of the future, we may assume the old sense of yeoman as a “commoner who cultivates his land”, or an “attendant in a noble household,” so that these new scientists are seen as laborers of the sciences producing for their masters, the new nobility of the elite Wall Street and corporate globalist machine.

(I will come back to the notion of the future as Invention in another essay in this series. What is the future? How do we understand this term? Is the future an invention, a discovery, a finding; or is it rather an acceleration of the future as immanent in our past, a machinic power unfolding, or a power invading us from the future and manipulating our minds to deliver and shape us to its will? Time. What is this temporality? What is causality? Do we shape it or does it shape us?)

So in Kaku we are offered a vision of the future in alignment with the globalist vision of a corporatized future, in which scientists are mere yeomen doing the bidding of their masters, inventing a future paid for by the great profit-making machine of capitalism. It’s not that his use of differing metaphors and displacements, his derision of the outsider and the ill-informed or folk-wisdom practices of historians, sociologists, science fiction writers, and futurologists, was in itself a mere ploy; no, it’s that, whether consciously or unknowingly, he is setting the stage, which on the surface appears so positive, so amiable, so enlightening and informing, for a corporate vision of the future that is already, by virtue of its dismissal of its critics, a done deal, a mere effort of unlocking through the power of “devices, inventions, and therapies”. Kaku is above all an affirmer of technology’s dream, of science as the all-powerful Apollonian sun-god of enlightened human destiny that will revolutionize civilization.

I doubt this is the dream that John Michael Greer had in mind when he mentioned that we need a new American Dream. Or is it? For Greer there is only the ultimate demise of the last two hundred years of Fordism, of the Industrial Age:

Between the tectonic shifts in geopolitics that will inevitably follow the fall of America’s empire, and the far greater transformations already being set in motion by the imminent end of the industrial age, many of the world’s nations will have to deal with a similar work of revisioning. (Greer, 276)

Yet, this is where Greer leaves it, at a stage of revisioning to come, of dreams to be enacted. He offers no dream himself, only a negative critique of the existing dreams, the Fordist-era utopias that have failed humanity and are slowly bringing about disaster rather than transformation.

Kaku, on the other hand, whose works sell profitably, a man who has the ear of the common reader as well as of the corporate profiteers, seeks his own version (or theirs?) of the American Dream. Unlike his previous book Visions, which offered his vision of the coming decades, this new one offers a hundred-year view of technology and other tensions in our global world that, as he ominously tells it, “will ultimately determine the fate of humanity”.

I’ll leave it there for this post, and will take up his first book, Visions: How Science Will Revolutionize the 21st Century, in my next post, then his Physics of the Future in the third installment.

1. John Michael Greer, Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America (New Society Publishers, 2014). Kindle Edition.
2. Michio Kaku. Physics of the Future. (Doubleday, 2012)

The Mad Hatter’s Tool-Box: How the Fixit Man Can Move Us Into an Uncertain Future

We should attend only to those objects of which our minds seem capable of having certain and indubitable cognition.

– René Descartes, The Philosophical Writings of Descartes

Descartes would develop twenty-one rules for the direction of the mind, as if these would carry us toward that ultimate goal of certainty. In our age mathematicians have relied on probabilistic theorems to narrow down the field of uncertainty in such things as physics, economics, physiology, evolutionary biology, sociology, psychology, etc. Ludwig Wittgenstein in his book On Certainty would develop the notion that claims to certainty are largely epistemological, and that there are some things which must be exempt from doubt in order for human practices to be possible (Ali Reda has a good background if you need one: here).

For the rationalist Descartes, “someone who has doubts about many things is no wiser than one who has never given them a thought; indeed, he appears less wise if he has formed a false opinion about any of them. Hence it is better never to study at all than to occupy ourselves with objects which are so difficult that we are unable to distinguish what is true from what is false, and are forced to take the doubtful as certain; for in such matters the risk of diminishing our knowledge is greater than our hope of increasing it”.1 Of course things change, and in the 19th century engineers would need a way to narrow down the range of uncertainty in practical problems; so the Probabilistic Revolution arose.

Thomas Kuhn in his now famous essay “What Are Scientific Revolutions?” would argue that what characterizes revolutions is change in several of the taxonomic categories prerequisite to scientific descriptions and generalizations. He would qualify this statement, adding that the change is an adjustment not only of the criteria relevant to categorization, but also of the way in which given objects and situations are distributed among preexisting categories.2

Bernard Cohen in the same work admitted that in the twentieth century a real revolution in the physical sciences did come about with the incorporation of probability and statistical mathematics, which replaced the older Newtonian rules of simple causality, of assigned cause and effect; the same happened in biology, in its genetic and evolutionary forms, and, from its birth in the 19th century, with probability theory in the social sciences. Yet for him it was a revolution not so much in theory as in the application of theory. (PR, 40)

Ian Hacking would dispute both Kuhn and Cohen and tell us that what was revolutionary was neither a theoretical revolution in the structure of the sciences nor one in the application of those sciences, but rather the “taming of chance and the erosion of determinism” that constitute one of the “most revolutionary changes in the history of the mind.” (PR, 54)

Whether we like to think about it or not, mathematics has informed philosophy, and vice versa, since the advent of the sciences. Many of the terms used in philosophy come directly out of their use in scientific theory and practice. In our day, with the advent of analytical philosophy, one would be hard put to remain a philosopher without some formal education in mathematics and the various forms of logic. Yet on the Continent this influx of math and the sciences has, with the rise of phenomenology, for the most part been put on the back burner and even denied a central role. Oh sure, there have been several philosophers to whom it was central, but for the most part philosophy in the twentieth century grew out of the language of finitude and the ‘Linguistic Turn’ in phenomenology, existentialism, structuralism, deconstruction, and post-structuralist lines of thought. Yet at the end of the century one could see math beginning to reemerge within philosophy in the works of Deleuze, Badiou, Jean-Luc Nancy, and many others. In our contemporary setting we are seeing a move away from both phenomenology and its concurrent Linguistic Turn, as well as from the analytical philosophies, into a new and vibrant surge toward synthetic philosophies of mathematics.

With the rise of both the NBIC fields (NanoTech, BioTech, InfoTech, and Cognitive Sciences) and the ICTs (Information and Communications Technologies), we are seeing the need for a synthetic philosophy. Herbert Spencer was probably the first to use the term Synthetic Philosophy, for a system which tried to demonstrate that there were no exceptions to being able to discover scientific explanations, in the form of natural laws, of all the phenomena of the universe. Spencer’s volumes on biology, psychology, and sociology were all intended to demonstrate the existence of natural laws in these specific disciplines. The 21st-century use of the term is quite different and less positivistic.

Of late, at the behest of my friend Andreas Burkhardt, I’ve been reading Fernando Zalamea’s Synthetic Philosophy of Contemporary Mathematics. In this work he offers four specific theses: first, that contemporary mathematics needs both our utmost attention and our careful perusal, and that it cannot be reduced either to set theory and mathematical logic or to elementary mathematics; second, that to understand and even perceive what is at stake in current mathematics we need to discover the new problematics that remain undetected by ‘normal’ and ‘traditional’ philosophy of mathematics as now practiced; third, that we need a turn toward a synthetic understanding of mathematics, one based on the mathematical theory of categories, that allows us to observe important dialectical tensions in mathematical activity which tend to be obscured, and sometimes altogether erased, by the usual analytical understanding; and, finally, that we must reestablish a vital pendular weaving between mathematical creativity and critical reflection, something that was indispensable for Plato, Leibniz, Pascal, and Peirce, so that, on the one hand, many present-day mathematical constructions afford useful and original perspectives on certain philosophical problematics of the past while, on the other hand, certain fundamental philosophical insolubilia fuel great creative forces in mathematics. (Zalamea, 4-5)

Over the past few years we’ve seen a slow move from analytical to post-analytical philosophy, and a concurrent move from the phenomenological to the post-phenomenological, on both the Continent and in the Americas. One wonders if this philosophical transformation, as well as the changes in and revolutions around certain technological and scientific theories and practices over the past 30 years, is bringing with it the shift Kuhn spoke of in the “taxonomic categories prerequisite to scientific descriptions and generalizations”. Are the linguistic and mathematical frameworks that have guided us for a hundred years changing? And, if so, what are the new terms?

We’ve seen in the work of such philosophers as William C. Wimsatt, in his Re-Engineering Philosophy for Limited Beings, a new turn away from the rationalism, strategy, game theory, and puzzles that were at their height in the 1990s toward a new empiricism, a shift both methodological and conceptual towards complexity and the senses.3 As he puts it, for any naturalized account:

We need a philosophy of science that can be pursued by real people in real situations in real time with the kinds of tools that we actually have – now or in a realistically possible future. … Thus I oppose not only various eliminativisms, but also overly idealized intentional and rationalistic accounts. (Wimsatt, 5)

Wimsatt turns toward what he terms a “species of realism”, a philosophy based top to bottom on heuristic principles of reasoning and practice, but one that also seeks a full accounting of other things in our richly populated universe, including the formal approaches we have sought in the past. (Wimsatt, 6) He tells us that, pace Quine, his is an ontology of the rainforest, piecemeal and limited to the local view rather than some rationalizing global view or God’s-eye view of things in general. At the center of this realist agenda are heuristics that help us explore, describe, model, and analyze complex systems, and “de-bug” their results. (Wimsatt, 8) His is a handyman approach to philosophy and the sciences: the need for a tool-box of tools that can be used when needed and discarded when something better comes along. Instead of armchair mentation he would send us back into the streets where the universe is up front and close. Yet remember to bring that toolbox, all those toys and computers, net connections, databanks… whatever it takes to get on with your work. Be up and doing… a pragmatic approach to science and philosophy that breaks down the barriers of stabilized truth-bearing authorities that hoard the gold of scientific knowledge like some hidden treasure. We need a new breed of active participants, go-getters, and pragmatists to do the dirty work of understanding what reality is up to.

What is interesting to me at this moment in time, in both the sciences and philosophy, is this sense of stock-taking, of sizing up the past couple hundred years, wading through the muck, weighing things in the balance and deciding what’s next, where we’re going with our lives and our work. There seems to be a great deal of thought-provoking movement in the air, as if we’re all coming to the same realization that yes, we need to change… our governments, our sciences, our philosophies have for the most part failed us, not given us either the answers or the changes we need to build a good life on this planet. In the men and women I’m reading in both philosophy and the sciences, in areas of feminism, racism, species relations, posthumanism, postnaturalism, postmodernism… etc. blah blah… we seem ready to ditch all these posts and move on to the defining metaphor of our age. There’s an energy running through the web, a good energy, as if people are tired of the bullshit, tired of the negative crap, tired of authorities that keep promising change and never delivering… even in the sciences we see the transformation of things happening so fast it’s hard to keep up. With Virilio, speed… with Noys and Land, acceleration… this fast pace of life wants somewhere to go, but we seem to be on a spinning ginny ready to drop its barker floor below us as we plunge into the abyss. But as we can see from the philosophers and scientists above, there is also a sense of urgency, a sense that we need to be moving, that we need to get off our arse and be about our work… like the Mad Hatter, there’s no time left: “I must be on my way!”

1. René Descartes, The Philosophical Writings of Descartes, vol. 1 (Cambridge University Press, 1985), Kindle Locations 375-378. Kindle Edition.
2. The Probabilistic Revolution, eds. Lorenz Krüger, Lorraine J. Daston, and Michael Heidelberger (MIT Press, 1987)
3. William C. Wimsatt. Re-Engineering Philosophy for Limited Beings. (Harvard, 2007)

We Are Our Brains

Everything we think, do, and refrain from doing is determined by the brain. The construction of this fantastic machine determines our potential, our limitations, and our characters; we are our brains. Brain research is no longer confined to looking for the cause of brain disorders; it also seeks to establish why we are as we are. It is a quest to find ourselves.

— D.F. Swaab, We Are Our Brains

One could almost say that the brain is a biochemical factory, with neurons and glia as both bureaucracy and workers. Yet even such a literary reduction wouldn’t really get at the truth of the matter. Jacob Moleschott (1822–1893) was one of the first to observe that what this factory, with all its billions of neurons and trillions of glia, produces is what we aptly term the ‘mind’. This process of production, from life to death, entails electrical activity, the release of chemical messengers, changes in cell contacts, and alterations in the activity of nerve cells.1

Many of the new technologies, such as imaging, electromagnetic, and biochemical ones, are being used both to study and to heal certain long-standing malfunctions and neurological disorders in the brain, along with invasive electrical and magnetic therapies applied to patients suffering from diseases like Alzheimer’s, schizophrenia, Parkinson’s, multiple sclerosis, and depression. (Yet, I interject, these technologies present us a double-edged sword: while on the one hand they can be used to heal, they can also be used by nefarious governments to manipulate and harm both external enemies and internal citizenry.)


Google, DARPA and the Future of Control

Former DARPA director and Google exec Regina E. Dugan smiles as she tells us about the new invasive biotechnologies, tattoos and biomedical pharmaceuticals, that will allow Google or other agencies to implant sensors/tracking devices to monitor citizens 24/7 for securitization. She is wearing one of the devices, and then produces a pill that she describes in detail as having pulsating electronics that can be picked up by GPS satellite, etc. What else is Google planning down the pipe? She even hints that one of the marketing ploys is to target teenagers and young people, pitching the tattoos as if they were an act of rebellion against their parents. Such technologies will allow a big Other (Authority) to monitor every step taken in a 24/7 timeframe, as well as upload other types of data to a centralized datamining facility to be manipulated, massaged, and transformed for use by marketers, law enforcement, academia, etc. Is this the future of our technocontrol society? Will corporations enforce our daily pill for access to information? Instead of a token slid into one’s computer, one wears it on one’s person as a tattoo, or ingests a pill that provides secure 24/7 access to any and all information in the GoogleMind. Google seems to be at the forefront of our Brave New World of surveillance and control society. Aldous Huxley in a later set of essays, Brave New World Revisited, remarked:

In my fable of Brave New World, the dictators had added science to the list and thus were able to enforce their authority by manipulating the bodies of embryos, the reflexes of infants and the minds of children and adults. And, instead of merely talking about miracles and hinting symbolically at mysteries, they were able, by means of drugs, to give their subjects the direct experience of mysteries and miracles—to transform mere faith into ecstatic knowledge. The older dictators fell because they could never supply their subjects with enough bread, enough circuses, enough miracles and mysteries. Nor did they possess a really effective system of mind-manipulation. In the past freethinkers and revolutionaries were often the products of the most piously orthodox education. This is not surprising. The methods employed by orthodox educators were and still are extremely inefficient. Under a scientific dictator education will really work—with the result that most men and women will grow up to love their servitude and will never dream of revolution. There seems to be no good reason why a thoroughly scientific dictatorship should ever be overthrown.1

The next time your boss offers you a pill with a smile, or your child comes home from school with a whimsical tattoo on her wrist, think about Regina E. Dugan of Google and politely say “No thanks, control is not an option!”

A follow-up on the Proteus Digital Pill: http://money.cnn.com/2012/08/03/technology/startups/ingestible-sensor-proteus/index.htm and http://proteusdigitalhealth.com/

More details on the EES Chip tattoo: http://www.the-scientist.com/?articles.view/articleNo/31046/title/Next-Generation–Electronic-Skin/

For those that want the longer version of the above that also goes into the darker Transhumanist agenda behind the Google world-view go here: http://www.youtube.com/watch?v=H4Q7sT2Kk88

1. Huxley, Aldous (2014-01-09). Brave New World Revisited (Kindle Locations 1485-1492). RosettaBooks. Kindle Edition.

Lee Smolin: Time, Physics and Climate Change

The most radical suggestion arising from this direction of thought is the insistence on the reality of the present moment and, beyond that, the principle that all that is real is so in a present moment . To the extent that this is a fruitful idea, physics can no longer be understood as the search for a precisely identical mathematical double of the universe. That dream must be seen now as a metaphysical fantasy that may have inspired generations of theorists but is now blocking the path to further progress. Mathematics will continue to be a handmaiden to science, but she can no longer be the Queen.

– Lee Smolin,  Time Reborn: From the Crisis in Physics to the Future of the Universe

What if everything we’ve been taught about time, space, and the universe is not just wrongheaded, but couched in a mathematics of conceptual statements (theorems) that presumed it could map the totality of reality in a one-to-one ratio of identity? This is the notion that mathematics can ultimately describe reality, that there is a one-to-one identity between the conceptual framework of mathematics and the universe. The Cartesian physicist (you may know him under the epithet of string theorist) will maintain that those statements about the accretion of the universe which can be mathematically formulated designate actual properties of the event in question (such as its date, its duration, its extension), even when there is no observer present to experience it directly. In doing so, our physicist is defending a Cartesian thesis about matter, but not, it is important to note, a Pythagorean one: the claim is not that the being of accretion is inherently mathematical, that the numbers or equations deployed in the statements (mathematical theorems) exist in themselves. What if all those scientists, philosophers, and mathematicians who pursued this path had in fact taken a wrong turn along the way? This is the notion that Lee Smolin, an American theoretical physicist, a faculty member at the Perimeter Institute for Theoretical Physics, an adjunct professor of physics at the University of Waterloo, and a member of the graduate faculty of the philosophy department at the University of Toronto, puts forward in his new book Time Reborn: From the Crisis in Physics to the Future of the Universe.


Steven Shaviro: New Materialism and Whitehead

Whitehead’s ontological and cosmological concerns put him in connection with the speculative realists; but pragmatically, he is closer to those contemporary thinkers who have been called new materialists. Jane Bennett’s “vital materialism” and Karen Barad’s “agential realism” both seem to me to have resonances with Whitehead’s thought, even though neither of them mentions Whitehead directly (as far as I know). Donna Haraway, on the other hand, has spoken specifically about the importance of Whitehead for her ideas about companion species. None of the new materialisms are based on Whitehead’s system or his technical terms, but they share his project of reconciling phenomenal experience with natural science, without rejecting either.

– Steven Shaviro, Interview on Figure/Ground

R. Scott Bakker: Post-Intentional Philosophy; or, How the Brain is Blind

Perhaps it is time, once again, to acknowledge that we are smaller than we supposed, know less than we hoped, and are more frightened than we care to admit. “Nature,” as Newton famously wrote, “is pleased with simplicity” even if we are horrified.

– R. Scott Bakker

A great post on Three Pound Brain, the blog of R. Scott Bakker, takes up two creative events in his life: first, the creation of The Unholy Consult, the latest installment in his epic sci-fi cycle of the Aspect-Emperor; and second, his attempt to sketch out what he terms ‘post-intentional philosophy.’

In How to Build a First Person (Using Only Natural Materials) we discover that the self is not what it seems, that it is quite different than it at first appears, that our conceptions have gone awry, and that we will never be the same again. This will not be an easy journey; it’s Scott’s labyrinth we’re entering, and if you go the distance you will not come out the same on the other end. In fact, you may never find the center of it, may discover that the fate of the Minotaur is that he is blind, and that the only answer to his dilemma is the Blind Brain Theory (BBT); yet if you are persistent in your journey you will discover something else: a central truth dangling down from the scientific world, where the ‘scientific image’ and the ‘human (metacognitive) image’ begin to totter toward each other at an exponential rate. Will they fuse, or will they clash like those fated particles in CERN’s sixteen-mile Hadron Collider? Will this new theory be the Higgs boson of the philosophy of mind? Or will it turn out to be just the fruit of that long curve of scientific endeavor to understand human consciousness that started with the early Greek philosophers over two thousand years ago?

So what is this strange beast coming our way? Is W.B. Yeats correct in his appraisal: “And what rough beast, its hour come round at last, slouches towards…” us weary mortals? No, it’s much simpler than that dark beast; it’s just science doing what it does best, exposing the myths of our long and dubious journey as humans to the light of scientific method and reasoning. Or maybe not; maybe we are the Beast, blinded by our own Brain: a tale of blindness and insight. As Bakker tells us:

It’s all about the worst case scenario. Philosophy, to paraphrase [Ray] Brassier, is no sop to desire. If science stands poised to break us, then thought must submit to this breaking in advance. The world never wants for apologists: there will always be an army of Rosenthals and Badious. Someone needs to think these things, no matter how dehumanizing or alienating they seem to be.  Besides, only those who dare thinking the post-intentional need fear ‘losing’ anything. If meaning and morality are the genuine emergent realities that the vast bulk of thinkers, analytic or continental, assume them to be, they should be able to withstand any sustained attempt to explain them away.


Emergence of Scientific Culture

Just started reading Stephen Gaukroger’s The Emergence of a Scientific Culture: Science and the Shaping of Modernity 1210-1685. This is the first in a projected series of works that will trace the historical emergence and consolidation of scientific culture in the West during the modern era. The second volume, The Collapse of Mechanism and the Rise of Sensibility: Science and the Shaping of Modernity, 1680-1760, was recently published as well. Another volume, The Naturalisation of the Human and the Humanisation of Nature: Science and the Shaping of Modernity, 1750-1825, should follow soon. He says in the foreword that future volumes will bring us right up to current debates over the unification of science and scientific naturalism. What I do like is that he doesn’t seem to have any axe to grind. He includes both the religious and non-theistic philosophical and cultural perspectives that underpin that history.

At the center of this work is the theme of natural philosophy as it emerged out of scholastic Aristotelianism during the thirteenth century. It was this enterprise that underpinned the systematic theology of that era, as well as “giving natural philosophy a cognitive priority that was to become one of the key features of early-modern scientific culture” (17). It was during this era that “natural philosophy was transformed from a wholly marginal enterprise into the unique model of cognitive inquiry generally” (17).

I’ve barely begun to skim the surface of this promising work, but am already fascinated by the richness and depth of detail that I see in its 500 or so pages. For those interested, he has a short introductory essay on his second volume at Berfrois:

Natural Philosophy and a New World Picture

Liquid History and the Ages of Water

“…anyone who has ever read any Latin texts on mathematics, and more specifically on differential calculus will recognize here two canonic definitions of the potential infinitely small and the actual infinitely small. This is not an anachronism; the relationship of atomism to the first attempts at infinitesimal calculus is well known. From the outset, Democritus seems to have simultaneously produced a mathematical method of exhaustion and the physical hypothesis of indivisibles. We can see here one of the earliest formulations of what will be called a differential. The clinamen is thus a differential, and properly, a fluxion.”

– Michel Serres, The Birth of Physics

“…though we may think that things are solid, here are signs that their atoms are widely spaced: in caves and caverns water trickles through clear-flowing; tear-like drops hang everywhere.”

– Lucretius, On the Nature of Things

Michel Serres in The Birth of Physics offers a reading of physics in light of the Lucretian swerve or “clinamen”. He tells us that modern science is born, or has its renaissance, in the works of Torricelli, Benedetti, and Da Vinci, and those of the Accademia del Cimento, which concern fluids as much if not more than solids. A century before Lucretius, the works of Archimedes had raised hydrostatics to a state of perfection equal if not superior to that of ordinary statics. And both in his own time and before him the works and achievements of the Greek hydraulic engineers were remarkable. For Lucretius the subjects of physics are mass, fluids, and heat. And since for him everything flows, nothing is truly of an invincible solidity, except for atoms.

In the universe, plasma is the most common state of matter for ordinary matter, most of which is in the rarefied intergalactic plasma and in stars. Much of the understanding of plasmas has come from the pursuit of controlled nuclear fusion and fusion power, for which plasma physics provides the scientific basis. To completely describe the state of a plasma, we would need to write down all the particle locations and velocities and describe the electromagnetic field in the plasma region.
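To give a rough feel for what such a complete description would involve (a naive sketch of my own, not anything from Serres or an actual plasma code), one can write the state down as per-particle phase-space coordinates plus field arrays and watch the bookkeeping explode; this is why statistical, kinetic, and fluid models take over in practice.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PlasmaState:
    """Naive snapshot of a plasma: every particle's phase-space
    coordinates plus the electromagnetic field sampled on a grid."""
    positions:  np.ndarray  # (n, 3) particle locations, metres
    velocities: np.ndarray  # (n, 3) particle velocities, m/s
    E: np.ndarray           # (g, g, g, 3) electric field on a grid
    B: np.ndarray           # (g, g, g, 3) magnetic field on a grid

n, g = 10_000, 32
state = PlasmaState(
    positions=np.random.rand(n, 3),
    velocities=np.random.randn(n, 3),
    E=np.zeros((g, g, g, 3)),
    B=np.zeros((g, g, g, 3)),
)

# Six numbers per particle plus two 3-component fields per grid cell;
# a real plasma holds on the order of 10^20 particles per cubic metre,
# far beyond direct bookkeeping.
dof = (state.positions.size + state.velocities.size
       + state.E.size + state.B.size)
print(f"degrees of freedom tracked: {dof:,}")
```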


Extinction

The word extinct comes from the Latin stinguere (to quench), which is the verb of choice for killing a flame. Because we live on a planet hospitable to fires, which consume but also heat, we are obsessed with the notion of fires within our own bodies. This is not just a metaphor that came in with the Industrial Age’s dynamos and furnaces; the ancients also wrote of fire in the flesh. When we say something is extinct, we mean literally that the flame in each and every cell has been doused. Yet we use extinct not as a verb but as an adjective attached to the verbs become and go. Even in our use of the word, we are confused about whether extinction happens to a species or is caused by that species. Subconsciously, we think of it as a supreme failure. We do not realize that extinction is normal. There have been huge die-offs in the past, when many species disappeared, discarded by evolution in a doodling with life-forms that may seem heartless, mindless, merciless, but is also unmalicious, intentionless, random. The high extinction rate at the moment is unique within our span of recorded time, so it surprises us; but mass extinctions are not extraordinary. What should unnerve us is that, in the past, large waves of extinction have always wiped out the culprits: when organisms were too abundant, dominating the earth and ruining the environment, they went extinct, with countless other animals. Then a new form of ooze or mouse started evolution all over again. So it’s not that large numbers of animals haven’t gone extinct before, or that nature cannot take care of itself. It’s that when nature does, things start off from scratch in a new line of evolution, and that line may not include beings like us. Humans could be among the fossils other life-forms speculate about one day (if they speculate), puzzling over our tragedy as we puzzle over the dinosaurs’.

– Diane Ackerman,  The Rarest of the Rare: Vanishing Animals, Timeless Worlds

Machinic Life: The Replicants are (among) Us

“Organisms are resilient patterns in a turbulent flow— patterns in an energy flow.”

– Carl Woese, Crafoord Prize winner

“I believe that I have somewhere said (but cannot find the passage) that the principle of continuity renders it probable that the principle of life will hereafter be shown to be part, or consequence, of some general law…”

– Charles Darwin in a Letter to George Wallich

“Pan-mechanism is not simply the claim that being is composed entirely of machines, but that all interactions are machinic interactions.”

– Levi R. Bryant (MOO)

For a long while there was a thin red line that divided inanimate matter from animate life forms, chemistry from biology, but in the last few years many scientists working within biophysics and molecular biology have been blurring such distinctions and discovering new and surprising things about matter and its operational life. Take the ribosome, for instance:

The ribosome is a tiny organelle present in all living cells in thousands of copies that manufactures the protein molecules on which all life is based. It effectively operates as a highly organized and intricate miniature factory, churning out those proteins—long chain-like molecules—by stitching together a hundred or more amino acid molecules in just the right order, and all within a few seconds. And this exquisitely efficient entity is contained within a complex chemical structure that is just some 20–30 nanometres in diameter—that’s just 2–3 millionths of a centimetre! Think about that—an entire factory, with all the elements you’d expect to find in any regular factory, but within a structure so tiny it is completely invisible to the naked eye.1

Another scientist, Peter M. Hoffmann, tells us that in his work in molecular biology, using the touch-based rather than sight-based atomic force microscope (AFM), he “discovered the fascinating science of molecular machines. I realized that life is the result of noise and chaos, filtered through the structures of highly sophisticated molecular machines that have evolved over billions of years. I realized, then, there can be no more fascinating goal than to understand how these machines work— how they turn chaos into life.”2

Attacks against reductionist or methodological naturalism have become a staple of the new turn toward religion in science. Religious philosophers like Alvin Plantinga, in Where the Conflict Really Lies: Science, Religion, and Naturalism (2011), would have us believe that there is a deep and serious conflict between naturalism and science:

“Taking naturalism to include materialism with respect to human beings, I argue that it is improbable, given naturalism and evolution, that our cognitive faculties are reliable. It is improbable that they provide us with a suitable preponderance of true belief over false. But then a naturalist who accepts current evolutionary theory has a defeater for the proposition that our faculties are reliable. Furthermore, if she has a defeater for the proposition that her cognitive faculties are reliable, she has a defeater for any belief she takes to be produced by her faculties. But of course all of her beliefs have been produced by her faculties—including, naturally enough, her belief in naturalism and evolution. That belief, therefore—the conjunction of naturalism and evolution—is one that she can’t rationally accept. Hence naturalism and evolution are in serious conflict: one can’t rationally accept them both.” (p. xiv)

Yet if we return to the beginning of this naturalist tradition in the seventeenth century: with the invention of the first microscopes, scientists searched for the secret of life at ever smaller scales. Biological cells were first described in Robert Hooke’s Micrographia in 1665. It took until 1902 for chromosomes to be identified as carriers of inheritance. The structure of DNA was deciphered in 1953, and the first atomic-scale protein structure was obtained in 1959. Yet even as scientists dissected life into smaller and smaller pieces, the mystery of life remained elusive.


+1 Standard Model: Experiments Deliver a Death Blow to Supersymmetry?

Cambridge scientists at the Large Hadron Collider (LHC) at CERN, near Geneva, have spotted one of the rarest particle decays ever seen in nature.

The result is very damaging to new theories like the extremely popular Supersymmetry.

Current knowledge about the most fundamental matter particles (quarks and leptons, such as an electron) and the forces between them is embedded in the so-called Standard Model. The particle masses are a consequence of their interactions with the Higgs field. Exciting the Higgs field in particle collisions at the LHC recently resulted in the discovery of the Higgs boson. (Science Daily, Nov. 13, 2012)

In particle physics, supersymmetry (often abbreviated SUSY) is a symmetry that relates elementary particles of one spin to other particles that differ by half a unit of spin, known as superpartners. In a theory with unbroken supersymmetry, for every type of boson there exists a corresponding type of fermion with the same mass and internal quantum numbers (other than spin), and vice versa.

There is no direct evidence for the existence of supersymmetry. It is motivated by possible solutions to several theoretical problems. Since the superpartners of the Standard Model particles have not been observed, supersymmetry must be a broken symmetry if it is a true symmetry of nature. This would allow the superparticles to be heavier than the corresponding Standard Model particles.
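
The bookkeeping behind "differ by half a unit of spin" is simple enough to spell out. A minimal sketch, using the conventional superpartner names (selectron, squark, photino, gluino):

```python
# A minimal illustration of the spin bookkeeping described above. Spins are
# in units of h-bar; the superpartner names are the conventional ones.
STANDARD_MODEL = {"electron": 0.5, "quark": 0.5,   # fermions
                  "photon": 1.0, "gluon": 1.0}     # bosons

SUPERPARTNERS = {"electron": ("selectron", 0.0),
                 "quark":    ("squark",    0.0),
                 "photon":   ("photino",   0.5),
                 "gluon":    ("gluino",    0.5)}

for name, spin in STANDARD_MODEL.items():
    partner, pspin = SUPERPARTNERS[name]
    assert abs(spin - pspin) == 0.5  # partners differ by half a unit of spin
    print(f"{name} (spin {spin}) <-> {partner} (spin {pspin})")
```

Since no such partners have been observed at the masses of their Standard Model counterparts, the symmetry, if real, must be broken, which is exactly the point the paragraph above makes.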

"Standard Model Lagrangian" mug from CERN.


The Standard Model of particle physics is a theory concerning the electromagnetic, weak, and strong nuclear interactions, which mediate the dynamics of the known subatomic particles. Developed throughout the mid-to-late 20th century, the Standard Model is truly “a tapestry woven by many hands”, sometimes driven forward by new experimental discoveries, sometimes by theoretical advances. It was a collaborative effort in the largest sense, spanning continents and decades. The current formulation was finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, discoveries of the bottom quark (1977), the top quark (1995), and the tau neutrino (2000) have given further credence to the Standard Model. More recently (2011–2012), the apparent detection of the Higgs boson completed the set of predicted particles. Because of its success in explaining a wide variety of experimental results, the Standard Model is sometimes regarded as a “theory of almost everything”.


Nick Land: On Scientific Pomposity; or a Beach-Comber’s Paradise

“One consequence of the Occidental obsession with transcendence… is a physics that is forever pompously asserting that it is on the verge of completion. The contempt for reality manifested by such pronouncements is unfathomable. What kind of libidinal catastrophe must have occurred in order for a physicist to smile when he says that nature’s secrets are almost exhausted? If these comments were not such obvious examples of megalomaniac derangement, and thus themselves laughable, it would be impossible to imagine a more gruesome vision than that of the cosmos stretched out beneath the impertinently probing fingers of grinning apes. Yet if one looks for superficiality with sufficient brutal passion, when one is prepared to pay enough to systematically isolate it, it is scarcely surprising that one will find a little. This is certainly an achievement of sorts; one has found a region of stupidity, one has manipulated it, but this is all. Unfortunately, the delicacy to acknowledge this – as Newton so eloquently did when he famously compared science to beach-combing on the shore of an immeasurable ocean – requires a certain minimum of taste, of noblesse.”

– Nick Land, A Thirst for Annihilation (34)

_______________________

That most scientists are not philosophers is to the detriment of philosophy. Yet we must not forget the success of science, which philosophers seem to gloss over (except within the confines of the philosophy of science). As Land tells it, the damage has been done; philosophy has even reached the stage of obsolescence where “it has lost all confidence in its power to know … For at least a century, and perhaps for two, the major effort of the philosophers has simply been to keep the scientists out. How much defensiveness, pathetic mimicry, crude self-deception, crypto-theological obscurantism, and intellectual poverty is marked by the name of their recent and morbid offspring…” (35).

________________________


Stephen Hawking: Science vs. Philosophy?

“The strolls of a sceptic through the debris of culture—rubble and dust as far as the eye can see. The wanderer has found everything already in ruins, furrowed down and across by the plough of unremitting human thought. The wanderer puts forth his walking stick with caution; then he comes to a halt, leaning on it, and smiles.”


– Bruno Schulz, The Wanderings of a Sceptic


Stephen Hawking in his new book, The Grand Design, throws down a challenge to all those philosophers who pretend to deal with the great questions:

Why is there something rather than nothing?
Why do we exist?
Why this particular set of laws and not some other? 

He goes on to say that at one time these questions were for philosophy, but now, he tells us, “philosophy is dead”. [1] He attacks philosophy, saying that it “has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge” (GD: Loc 42). The arrogance with which he states this position is almost that of an old-time dogmatist in its scathing belittlement of philosophy and philosophers.

Just for the fun of it, let’s take him at his word and see just what he’s up to in this game of science taking the helm of traditional metaphysical thought from philosophy, and discover what answers he provides to the questions above.


Epistemic Naturalism: Quine, Goldman, Kuhn, and Brassier

“Philosophy of science is philosophy enough.”
– W.V. Quine

Broadly speaking, the Analytical tradition in philosophy can be characterized by an emphasis on clarity, formal logic, and the analysis of language, and by a profound dependence on and respect for the natural sciences. Some of the main precursors of this movement in philosophy are Bertrand Russell, Ludwig Wittgenstein, G.E. Moore, Gottlob Frege, and the logical positivists who derive from them.

W.V. Quine was one of the first to propound an influential naturalized epistemology. He ultimately wanted to replace traditional epistemology with the natural sciences (i.e., psychology). He distinguished two projects: the psychological study of how people produce theoretical “output” from sensory “input,” and the logical reconstruction of our theoretical vocabulary in sensory terms. In Quine’s view, the second approach cannot succeed, and so we are left with psychology. The basis of this view is a theory of knowledge that limits its scope and methods to those of the natural sciences and their conclusions. Within this domain there are three main forms of naturalized epistemic theory: replacement, cooperative, and substantive naturalism. Replacement naturalism would have us abandon traditional forms of epistemology in favor of naturalist science and its methods. Cooperative naturalism tells us that traditional epistemology would benefit from the cognitive sciences. Substantive naturalism centers on the factual assertions of ‘facts of knowledge’ and ‘natural facts’.

Alvin Goldman, on the other hand, provided what he termed causal reliabilism. This is a theory of knowledge that states that a justified true belief counts as knowledge only if it is caused in a suitably reliable way. What Goldman tells us is that it is also necessary to construct a theory of what epistemic justification really is, as opposed to how common sense takes it to be. That theory will be grounded in our psychological understanding of how beliefs are formed, and it will include assessments of those processes in terms of reliability.
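
The shape of the definition is worth setting out schematically. The rendering below is mine, not Goldman's own formalism, and the 0.9 threshold is an arbitrary placeholder; the point is only that truth and reliable causation are separate conjuncts.

```python
# A schematic rendering of causal reliabilism (my sketch, not Goldman's
# formalism): knowledge = true belief caused by a sufficiently reliable
# process. The 0.9 threshold is an arbitrary placeholder.
from dataclasses import dataclass

@dataclass
class Belief:
    content: str
    is_true: bool
    process: str        # the process that caused the belief
    reliability: float  # that process's long-run truth ratio

def counts_as_knowledge(b, threshold=0.9):
    return b.is_true and b.reliability >= threshold

print(counts_as_knowledge(Belief("the sun is up", True, "vision", 0.95)))        # True
print(counts_as_knowledge(Belief("I will win", True, "wishful thinking", 0.05))) # False: true, but unreliably caused
```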

Thomas Kuhn applied a naturalistic approach to epistemological questions by way of the social sciences. Kuhn-inspired naturalism is not incompatible with the naturalism that draws on psychology and the natural sciences. Naturalistic epistemologists such as Alvin Goldman and Philip Kitcher have fruitfully applied insights from both the natural and the social sciences in the attempt to understand knowledge as a simultaneously cognitive and social phenomenon.

Naturalistic epistemologists seek an understanding of knowledge that is scientifically informed and integrated with the rest of our understanding of the world. Their methods and commitments differ, because they have varying views about the precise relationship between science and epistemology and even about which sciences are most important to understanding knowledge.

Epistemic naturalists usually try two sorts of approaches: (1) they try to show that an issue is empirical and then apply scientific data, results, methods, and theories to it directly; or (2) they try to undermine a problem’s motivation by showing that it arises only on certain false, non-naturalistic assumptions.

Yet, despite its efforts, naturalistic epistemology does face serious challenges from the problems of circularity and normativity. Its proponents are seeking nothing more nor less than the unification of science and philosophy. Others, such as Ray Brassier, seek instead a revisionary naturalism within this same tradition.

Brassier, in his work Nihil Unbound, pushed the limits of nihilism to their final extent. He linked epistemological naturalism in Anglo-American philosophy (Sellars) with anti-phenomenological realism in French philosophy. Against certain post-analytical streams of thought that have tried to bring together Heidegger and Wittgenstein against scientism and scepticism, he offers a version of eliminative materialism loosely coupled with speculative forms of philosophy.

It is this non-dialectical turn in materialism that I’ve found congenial to my own thought. As Brassier tells us, “The junction of metaphysics and epistemology is marked by the intersection of two threads: the epistemological thread that divides sapience from sentience and the metaphysical thread that distinguishes the reality of the concept from the reality of the object. … For just as epistemology without metaphysics is empty, metaphysics without epistemology is blind” (T 279).1

It is this fine line or balancing act between the two disciplines that marks the distinction needed to obviate many of the difficulties we face within both the Analytical and Continental traditions. Against grand theories and final narratives that try to fit science into a ‘Theory of Everything’, Brassier wants to do something different: “Science does not need to deny the significance of our evident psychological need for narrative; it just demotes it from its previously foundational metaphysical status to that of an epistemically derivative ‘useful fiction’.” (interview)

As he recently related, he is a “nihilist precisely because I still believe in truth, unlike those whose triumph over nihilism is won at the cost of sacrificing truth. I think that it is possible to understand the meaninglessness of existence, and that this capacity to understand meaning as a regional or bounded phenomenon marks a fundamental progress in cognition.” (Ibid.) The notion of a regional or bounded conception of phenomena is key to this form of epistemic naturalism that some have called a revisionary naturalism. His thought is aligned with Wilfrid Sellars’s work in that, as he said in correspondence on Being’s Poem, “Sellars is concerned with developing a metaphysical vision in which not only are secondary qualities integrated and their relationship to primary qualities explained, but the articulation between the sensation of the former and the conception of the latter is also accounted for.” It is just here that epistemology and metaphysics touch base with each other without one or the other having some central priority over the other.

1. Elliott, Jane; Attridge, Derek (2012-03-12). Theory After ‘Theory’ (p. 279). Taylor & Francis. Kindle Edition.

Stephen Jay Gould – The Political Side of Science

“This truth involves both a menace and a promise. It shows that the evils arising from the unjust and unequal distribution of wealth, which are becoming more and more apparent as modern civilization goes on, are not incidents of progress, but tendencies which must bring progress to a halt; that they will not cure themselves, but, on the contrary, must, unless their cause is removed, grow greater and greater, until they sweep us back into barbarism by the road every previous civilization has trod.”

– Henry George, Progress and Poverty
 

Stephen Jay Gould loved to insist that there was no progress in evolution. As he once said: “The fact of evolutionary change through time doesn’t represent progress as we know it. Progress isn’t inevitable. Much of evolution is downward in terms of morphological complexity, rather than upward. We’re not marching toward some greater thing.”

Even though he was an anti-progressivist, Gould was an avid advocate of leftist politics and a supporter of Science for the People, a “magazine for Working Scientists active in the Anti Capitalist Movement”. Gould was born and raised in the borough of Queens in New York City. His father Leonard was a court stenographer, and his mother Eleanor was an artist. Raised in a secular Jewish home, Gould did not formally practice organized religion and preferred to be called an agnostic. Politically, though he “had been brought up by a Marxist father,” he stated that his father’s politics were “very different” from his own. According to Gould, the most influential political book he read was C. Wright Mills’s The Power Elite, along with the political writings of Noam Chomsky. Gould continued to be exposed to progressive viewpoints on the politicized campus of Antioch College in the early 1960s. In the 1970s he joined the left-wing academic organization Science for the People. Throughout his career and writings he spoke out against cultural oppression in all its forms, especially what he saw as pseudoscience in the service of racism and sexism.

In an essay, “Towards a Science for the People,” Bill Zimmerman, Len Radinsky, Mel Rothenberg, and Bart Meyers argue from a socialist perspective for a new politicization of science, saying that “science is inevitably political, and in the context of contemporary American corporate capitalism, that it contributes greatly to the exploitation and oppression of most of the people both in this country and abroad”. They understand that the difficulty for a scientist resides in the economic funding of the sciences: “Some scientists have recognized this situation and are now participating in nationally coordinated attempts to solve pressing social problems within the existing political-economic system. However, because their work is usually funded and ultimately controlled by the same forces that control basic research, it is questionable what they can accomplish. For example, sociologists hoping to alleviate some of the oppression of ghetto life have worked with urban renewal programs only to find the ultimate priorities of such programs are controlled by the city political machines and local real estate and business interests rather than by the needs of the people directly affected by such programs.”

These radical scientists see little hope in changing the system through effective reform: “Traditional attempts to reform scientific activity, to disentangle it from its more malevolent and vicious applications, have failed. Actions designed to preserve the moral integrity of individuals without addressing themselves to the political and economic system which is at the root of the problem have been ineffective. The ruling class can always replace a Leo Szilard with an Edward Teller. What is needed now is not liberal reform or withdrawal, but a radical attack, a strategy of opposition. Scientific workers must develop ways to put their skills at the service of the people and against the oppressors.”

Gould was a tireless worker against the troubling view of creationism: see McLean v. Arkansas. Yet one critic, Robert Wright, maintains that Gould played unwittingly into the hands of the creationists because of his “thinking on the fundamental issue of ‘directionality,’ or ‘progressivism’—that is, how inclined evolution is (if at all) to build more complex and intelligent animals over time”. In his article “The Accidental Creationist,” Wright tells us, “Gould is not helping the evolutionists against the creationists, and the sooner the evolutionists realize that the better. For, as Maynard Smith has noted, Gould ‘is giving nonbiologists a largely false picture of the state of evolutionary theory.’” Gould was a long-time promoter of “punctuated equilibria” as the main engine of evolution, rather than the orthodox Darwinian stance on “natural selection”. Most Darwinists see Gould as a popularizer who holds a great deal of authority in the eyes of the reading public but is out of touch with mainstream views within his own scientific community. As Daniel C. Dennett, a defender of the orthodox Darwinian stance, states it:

“What Darwin discovered, I claim, is that evolution is ultimately an algorithmic process — a blind but amazingly effective sorting process that gradually produces all the wonders of nature. This view is reductionist only in the sense that it says there are no miracles. No skyhooks. All the lifting done by evolution over the eons has been done by nonmiraculous, local lifting devices — cranes. Steve (Gould) still hankers after skyhooks. He’s always on the lookout for a skyhook — a phenomenon that’s inexplicable from the standpoint of what he calls ultra-Darwinism or hyper-Darwinism. Over the years, the two themes he has most often mentioned are “gradualism” and “pervasive adaptation.” He sees these as tied to the idea of progress — the idea that evolution is a process that inexorably makes the world of nature globally and locally better, by some uniform measure.” 
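
Dennett's "algorithmic process" is not merely a metaphor; the blind sorting he describes can be written down in a few lines. The toy selection loop below is my own illustration, not Dennett's: variation is random, culling is blind, and nothing in the code encodes a goal, yet the population reliably tracks its little niche.

```python
# A toy instance (my illustration, not Dennett's) of evolution as a blind
# sorting algorithm: random variation plus selection on a local score, with
# no goal or foresight encoded anywhere in the loop.
import random

random.seed(0)
NICHE = 20  # fitness rewards genomes whose length matches a niche

def fitness(genome):
    return -abs(len(genome) - NICHE)  # purely local scoring, no "progress"

def mutate(genome):
    if genome and random.random() < 0.5:
        return genome[:-1]                  # deletion
    return genome + random.choice("acgt")   # insertion

population = ["a"] * 10
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]              # blind culling
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

print(sorted(len(g) for g in population))   # lengths cluster around 20, undesigned
```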

But Gould argued against those like Daniel Dennett who suggest that evolutionary development is driven by a purpose – that there is a guiding hand, as it were, in evolutionary development – an inevitable progress up a ‘ladder’ from lower to higher life forms and, finally, to Homo sapiens. Natural selection itself does not imply a progression from lower to higher life forms, argues Gould: “Life is a ramifying bush with millions of branches, not a ladder. Darwinism is a theory of local adaptation to changing environments, not a tale of inevitable progress. ‘After long reflection’, Darwin wrote, ‘I cannot avoid the conviction that no innate tendency to progressive development exists’.” (An Urchin in the Storm, p. 211)

One of Gould’s recurrent themes was life’s ‘contingency’. He does not deny that natural selection leads to a greater complexity of life forms. But the developing complexity of life, Gould maintains, is only a by-product ‘incidental’ to evolution and not necessary or inevitable. And complex creatures represent only a tiny proportion of the whole.

Whether we agree with Gould’s science or not, we can all agree that he tried to fight the good fight, to give people hope, and to create a body of work that would defend us against ourselves. As one pundit, David Prindle, argues, “Stephen Jay Gould may teach us that the best political theory is not political theory per se but, rather, science expanded to its philosophical potential. A grand theory of life may be a better starting point for addressing legitimacy, justice, and equality than is any set of explicitly political assumptions.” (“Stephen J. Gould as Political Theorist”)

 

Is the Sun an Autopoietic System?

The Sun was formed about 4.57 billion years ago from the collapse of part of a giant molecular cloud that consisted mostly of hydrogen and helium and which probably gave birth to many other stars. This age is estimated using computer models of stellar evolution and through nucleocosmochronology.  The result is consistent with the radiometric date of the oldest Solar System material, at 4.567 billion years ago.  Studies of ancient meteorites reveal traces of stable daughter nuclei of short-lived isotopes, such as iron-60, that form only in exploding, short-lived stars. This indicates that one or more supernovae must have occurred near the location where the Sun formed. A shock wave from a nearby supernova would have triggered the formation of the Sun by compressing the gases within the molecular cloud, and causing certain regions to collapse under their own gravity.  As one fragment of the cloud collapsed it also began to rotate due to conservation of angular momentum and heat up with the increasing pressure. Much of the mass became concentrated in the center, while the rest flattened out into a disk which would become the planets and other solar system bodies. Gravity and pressure within the core of the cloud generated a lot of heat as it accreted more gas from the surrounding disk, eventually triggering nuclear fusion. Thus, our Sun was born.

– from Sun – Wikipedia
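
The spin-up the excerpt mentions follows directly from conservation of angular momentum: for a collapsing sphere of fixed mass, L = (2/5)Mr²ω is constant, so ω scales as 1/r². A back-of-the-envelope sketch, with figures of my own chosen only for scale:

```python
# A back-of-the-envelope check of the spin-up (my figures, chosen only for
# scale): for a uniform sphere of fixed mass, L = (2/5) * M * r**2 * w is
# conserved, so the rotation rate w scales as 1/r**2.
def spin_up(omega, r_initial, r_final):
    """Angular velocity after collapse, by conservation of angular momentum."""
    return omega * (r_initial / r_final) ** 2

# A fragment ~10,000 present solar radii across, rotating once per million
# years, collapsing to solar size:
omega_final = spin_up(1.0, 1e4, 1.0)  # rotations per million years
print(omega_final)                    # 1e8 per Myr, i.e. ~100 rotations a year
```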


Autopoiesis means self-production, and an autopoietic system is a system that produces itself. The concept of “autopoiesis” was originally proposed by the biologists Humberto Maturana and Francisco Varela, and the term is formed from the Greek words auto, for self-, and poiesis, for creation or production (Maturana & Varela 1972; Varela et al. 1974; Maturana & Varela 1980, 1987).

“An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components that produces the components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in the space in which they (the components) exist by specifying the topological domain of its realization as such a network. It follows that an autopoietic machine continuously generates and specifies its own organization through its operation as a system of production of its own components, and does this in an endless turnover of components under conditions of continuous perturbations and compensation of perturbations.” (Maturana & Varela 1980, p. 79)

In short, an autopoietic system is a unity whose organization is defined by a particular network of production processes of elements, not by the components themselves or their static relations. Summarizing the concept of autopoiesis, the system has three fundamental features: (1) elements as momentary events; (2) reproduction of the system’s boundary; (3) constitution of elements by the system itself.

The crucial point of autopoiesis in systems theory is the shift in the view of elements from substances to momentary events. The elements of a system are conventionally considered to persist: for example, cells in a living system or actors in a social system. In autopoietic systems theory, however, the elements are momentary events that have no duration, which means they disappear as soon as they are realized. Consequently, the system must produce its elements in order to keep itself in existence. The boundary of the system is thus determined circularly by the production of elements, and this is why it is called an autopoietic system.
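
That last point, that the system must continually out-produce the vanishing of its own elements, can be put in concrete terms. The sketch below is my construction, not Maturana and Varela's formalism: every element disappears each tick, and the unity persists only if the vanishing elements produce their successors.

```python
# A minimal sketch (my construction, not Maturana and Varela's formalism)
# of "elements as momentary events": every element vanishes each tick, and
# the unity persists only if the vanishing elements produce successors.
import random

random.seed(2)

def step(elements, rate):
    """Each momentary element disappears, producing new elements at the
    given expected rate before it does."""
    produced = 0
    for _ in range(elements):
        produced += int(rate)                   # whole productions
        if random.random() < rate - int(rate):  # fractional remainder
            produced += 1
    return produced

for rate in (0.9, 1.02):
    n = 100                   # initial elements
    for _ in range(60):
        n = step(n, rate)     # the boundary is re-produced (or not) each tick
    print(rate, n)            # 0.9 -> dissolution (0); 1.02 -> persistence
```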

In this sense, an autopoietic system does not emerge “bottom-up”, because the concept of bottom-up emergence assumes elements that are given before the system emerges. Autopoietic systems intrinsically imply a circular relation between the system and its elements. As Niklas Luhmann once related:

“Whether the unity of an element should be explained as emergence ‘from below’ or as constitution ‘from above’ seems to be a matter of theoretical dispute. We opt decisively for the latter. Elements are elements only for the system that employs them as units and they are such only through this system. This is formulated in the concept of autopoiesis.” (Luhmann 1984, p. 22)

In this sense I believe that the Sun and all stars are indeed autopoietic systems.