Self-Deception, Delusion, and the Denial of Reality

Ajit Vark in a book on Denial entertains from a evolutionary geneticists perspective the old notion of Norman O. Brown and Ernst Becker of the Denial of Death thesis, but also adds another:

[the] contrarian view [that we overcame the barrier to human uniqueness by the mastery of self-deception] could help modify and reinvigorate ongoing debates about the origins of human uniqueness and inter-subjectivity. It could also steer discussions of other uniquely human “universals,” such as the ability to hold false beliefs, existential angst, theories of after-life, religiosity, severity of grieving, importance of death rituals, risk-taking behaviour, panic attacks, suicide and martyrdom. If this logic is correct, many warm-blooded species may have previously achieved complete self-awareness and inter-subjectivity, but then failed to survive because of the extremely negative immediate consequences. Perhaps we should be looking for the mechanisms (or loss of mechanisms) that allow us to delude ourselves and others about reality, even while realizing that both we and others are capable of such delusions and false beliefs.

In many ways this supports Robert Trivers another evolutionary biologist’s notions in The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. Trivers unflinchingly argues that self-deception evolved in the service of deceit—the better to fool others. We do it for biological reasons—in order to help us survive and procreate. And, yet, Trivers also takes this notion and then applies it to our overreach, the very sociopathic society full of manipulators and deceivers which is the baseline of our Capitalist societies promotes such self-deception to the point that what once helped us as a species to survive and propagate has now become its greatest enemy and is leading us into a species dead end as we deny too much reality for profit and gain at our own peril. Climate denial etc. are at the center of both this success and it’s overreach… a denialism that could cost us our survival and our future along with all those other non-human species of plants and animals and insects.


  1. Ajit Varki. Denial: Self-Deception, False Beliefs, and the Origins of the Human Mind. Twelve (June 4, 2013)
  2.  Robert Trivers. The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. Basic Books; 1 edition (October 25, 2011)

The Neurocognitive Revolution: Triumph or Agony?

Philosophy, in its longing to rationalize, formalize, define, delimit, to terminate enigma and uncertainty, to co-operate wholeheartedly with the police, is nihilistic in the ultimate sense that it strives for the immobile perfection of death.
…….– Nick Land, Fanged Noumena: Collected Writings 1987 – 2007

Donald Merlin in his Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition (1993: see a Precise) once argued the australopithecines were limited to concrete/episodic minds: bipedal creatures able to benefit from pair-bonding, cooperative hunting, etc., but essentially of a seize-the-day mentality: the immediacy of the moment. The first transition away from the instant, the present, and toward a more temporal system of knowledge acquisition and transmission was to a “mimetic” culture: the era of Homo erectus in which mankind absorbed and refashioned events to create rituals, crafts, rhythms, dance, and other pre-linguistic traditions. This was followed by the evolution to mythic cultures: the result of the acquisition of speech and the invention of symbols. The third transition carried oral speech to reading, writing, and an extended external memory-store seen today in computer and advanced machine or artificial Intelligence and extrinsic data-memory technologies. The next stage might entail the ubiquitous and autonomous rise of external agencies, intelligent machines, or AI’s that live alongside humans as partners in some new as yet unforeseen cultural matrix or Symbolic Order yet to be envisioned or described.

At the same time that our external systems of culture and transmission were transforming themselves we gained new heuristic systems, adapting to local invariant conditions. Our sciences came to the forefront as external environmental, exploratory and experimental methods of analysis and data-gathering techniques. More and more humans off-loaded memory, intelligence, and analytical capacity and powers to these externalized systems through several transitions over the past few thousand years as both abstract mathematical and sensuous empirical forms of knowledge acquisition were reorganized into a transition from natural to artificial forms. In fact consciousness itself can be seen as the first anti-natural and artificial system within nature.

We still do not know what the conditions were that allowed the forms of consciousness humans attained to arise, whether it was a gradual form of evolution over hundreds of thousands of years; or whether there was some disjunctive great leap, or punctuated equilibria ( a theory developed by Eldredge and Gould’s (1972) own research on trilobites and snails, a macroevolutionary theory, which lead to a greater appreciation of the hierarchical structure of nature and its implications for understanding evolutionary patterns and processes). Today there are three approaches to the emergence of consciousness: evolutionary psychology, human behavioral ecology, duel-inheritance theory.

The neurosciences take a more interdisciplinary approach to science of the brain/consciousness and its evolution that collaborates with other fields such as chemistry, cognitive science, computer science, engineering, linguistics, mathematics, medicine (including neurology), genetics, and allied disciplines including philosophy, physics, and psychology. Of late there have been heated debates between Computational neuroscience (the study of brain function in terms of the information processing properties of the structures that make up the nervous system), and a Modular functional approach. Much of computational neuroscience focuses on properties of single neurons and small circuits. However, computational approaches to cognitive neuroscience (e.g., the interaction of perception, action and language) must deal with diverse functions distributed across multiple brain regions. It is argued that a modular approach to modeling is needed to build cognitive models and to compare them as the basis for further model development.

Also new Non-invasive brain function measurement technologies, such as functional nuclear magnetic resonance imaging, positron emission tomography, near-infrared spectroscopy, electroencephalograph and magnetoencephalograph (MEG), which are used in neuroscience, have been contributing to the development of medical care and neuroscience. At the same time they are making brain function measurement safer and accelerating decipherment of the brain and mind, sense and behavior, and mental activities.

As Slavoj Žižek speaking of the neurosciences and the brain, this three-pound gray mass, we get a sense of this all-devouring, all-consuming force when we look inside the body and specifically the skull—“the realization that, when we look behind the face into the skull, we find nothing; ‘there’s no one at home’ there, just piles of grey matter—it is difficult to tarry with this gap between meaning and the pure Real.” This raw flow of biochemical and electrical energy is so “terrifying” for two reasons. First, it is faceless, personless—it has absolutely nothing to do with either the orbit of phenomenal experience or the human universe of meaning. There is no indication of any genuine human quality: we are only confronted with anonymous, dull palpitations, which resemble the industrial buzzing of automatic machinery, a machinery that may amaze us with its complexity and dynamism (the plasticity of the neuronal network) but that nevertheless exists as a matrix of closed circuitry locked within its own self enclosed, self sustaining movement, a movement that is not only greater than us but also thereby appears to “threaten” our very existence as free subjects at every step. Second, the passage from the pure, senseless Real of nature in its mechanism to the absolute spontaneity of the I—the rupturing advent of a dialectical leap—is stricto sensu inexplicable, for given our inability to locate the full-fledged human subject in nature, there is always a moment of arbitrariness and fiat.5 The latter is the hard question of consciousness which is two-fold: 1) what were the conditions needed to give rise to consciousness to begin with; and 2) is the problem of explaining how and why we have qualia or phenomenal experiences—how sensations acquire characteristics, such as colors and tastes.

In Facing Up to the Problem of Consciousness, Chalmers wrote:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.6

For Žižek time and time again it comes down to this, “there are two options here: either subjectivity is an illusion, or reality itself (not only epistemologically) is not-All (incomplete and open).” In fact for him the question is how a parallax gap could emerge from within the self-regulated biochemical and electrical activity inside the skull, how “the ‘mental’ itself explodes within the neuronal through a kind of ‘ontological explosion.’” Of course, like Chalmers, and other neuroscientists, Žižek has more questions than answers concerning this ‘ontological explosion’ of the ‘mental’ out of the biochemical mass of the brain.

Beyond the question of the emergence of consciousness is the more pragmatic and worldly concern of corporate and governmental funding and utilitarian projects for the neurosciences in war, medicine, economics, ethics, governance and any one of a number of other initiatives from the Brain Mapping initiatives in the EU and America that like the Manhattan project, and the Gene Mapping projects before them have spawned great sums of pressure both economic and political, as well as the large funds necessary for such tasks and undertakings.

Delgado dreamed of using his electrodes to tap directly into human thoughts: to read them, edit them, improve them. “The human race is at an evolutionary turning point. We’re very close to having the power to construct our own mental functions,” he told The New York Times in 1970, after trying out his implants on mentally ill human subjects. “The question is, what sort of humans would we like, ideally, to construct?”
……The Neurologist Who Hacked his Brain

Neuroscience is Big Business

The convergence of knowledge and technology for the benefit or enslavement of society (CKTS) is the core aspect of 21st century science initiatives across the global system, which is based on five principles: (1) the interdependence of all components of nature and society (the so called network society, etc.), (2) enhancement of creativity and innovation through evolutionary processes of convergence that combine existing principles, and divergence that generates new ones (control of creativity and innovation by corporate power), (3) decision analysis for research and development based on system-logic deduction (data-analysis, machine learning, AI, etc.), (4) higher-level cross-domain languages to generate new solutions and support transfer of new knowledge (new forms of non-representational systems and mappings, topological, etc.). As civilization and societal challenges become more and more dependent on external and internalized artificial mechanisms and technological systems we are faced with the convergence of “NBIC” technological reorganization of corporate and socio-cultural fields of business, inquiry, and research into: nanotechnology, biotechnology, information technology, and cognitive and neruosciences. But it is the neuroscientific breakthroughs and initiatives that will underpin the forms of global governance: political and economic systems of rules, negotiations, and navigation systems of impersonal and indifferent regulatory and reason-based imperialism of the future capitalist regimes as they begin to marshal every aspect of life into a data-centric vision of command and control.

Larger initiatives like the Human Brain Project (EU) and the U.S. led Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The European Union’s €1-billion (US$1.3-billion) Human Brain Project (HBP) and the United States’ $1-billion Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative are collaborating in their investigation and mapping of the brain’s functions.

These hard sciences have given birth to a plethora of new interdisciplinary business fields with neuroprefix such as neuroeconomics, neuromarketing, neuroaccounting, neurogovernance, neuroethics, and neuroleadership. Such an exotic union of science and the arts may provide better understanding of human nature and behaviour change. Yet, they are already providing us a future where massive surveillance, data-analysis, manipulation, and exploitation of the Human Security Regime under both governmental and private corporate consumerist societies will be enslaved by their desires.   Imaging technologies such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) reveal unseen neural connections in the living human brain along with brain wave analysis technologies such as quantitative electroencephalography (QEEG). All these various systems will be used in peace and war, and this is only the beginning. Welcome to the NeuroEmpire 101:

Neuroeconomics as an emerging discipline combines neuroscience, economics, and psychology; and uses research methods from cognitive neuroscience and experimental economics. It is “the application of neuroscientific methods to analyse and understand economically relevant behaviour” . such as evaluating decisions, categorising risks and rewards, and interactions among economic agents. Neuroeconomics research draws on the convergence of three major trends. First, using fMRI we can measure brain activity associated with discrete cognitive events and study higher cognitive processes like decision making and reward evaluation. Second, by incorporating economic variables into electrophysiological experiments, we can encode motivationally relevant information through novel recognition of neurons at multiple levels of processing pathways. Third, neuroeconomics draws on behavioural economics to consider psychological variables into economic and decision-making models.

Neuroaccounting is a new way to scientifically view accounting and the brain’s central role in building economic institutions. The measure of brain activity during economic decision-making using neuroscientific methods can prove useful for evaluating the desirability of implementing new policies that run contrary to long-established accounting principles. Dickhaut et al. reviewed neuroscientific evidence that suggest the emergence of modern accounting principles based on the mapping of brain function to the principles of modern accounting.

Neuromarketing is the application of neuroscientific methods to analyse and understand human behaviour in relation to markets and marketing exchanges. Applying neuroscience to marketing may form a basis for understanding how human beings create, store, recall, and relate to information such as brands in everyday life. Neuromarketers now use cognitive neuroscience in marketing research that bears implications for understanding organisational behaviour in a social context , for example whether certain aspects of advertisements and marketing activities trigger negative effects such as overconsumption. Going beyond focus groups in traditional advertising methods, we can now use EEG to detect putative “branding moments” within TV commercials and apply brain imaging to discover the “buy button”. In notable research emerging from Stanford University, Carnegie Mellon University and the Massachusetts Institute of Technology, scientists are using fMRI to identify parts of the brain that influence buying decisions.

Neuroethics is the investigation of altruism in neuroeconomic research, which suggests that cooperation is linked to activation of reward areas. Investigations into such problems could in fact be among the most compelling within neuromarketing. As a new field, it has triggered heated debate and questioned the ethics behind neuromarketing in a 2004 editorial of Nature Neuroscience. Now that we have identified certain key regions of the brain that would be implicated in consumer preferences, it may be possible for marketers to “manipulate” their advertisements and target the brain areas that mediate reward processing. Think of how recently the trends in political persuasion have used polling indicators more and more to sway public opinion and make or break a candidate through neuromarketing positivation – is this an outgrowth of neuroethical strategy gaming techniques adapting the dynamics of politics to marketing through neural feedback-loops based on new notions of ideology and propaganda applied to a pseudo-neuroscientific use of the sciences?

Neurogoverance is the promise of the modification of neuro-mnemonic practices on a population scale, a global neurogovernance, becomes imaginable on the population level as world actors, increasingly fearing “traumatized societies” and the intergenerational transmission of trauma, push toward pre-emptive measures. In this form of governance, the experience of the trigger becomes a threat which must be pre-emptively eradicated. Temporally, it is positioned as a fragmented repetition that keeps societies in the past. Happiness is moving forward; traumatized societies are thus “backwards” (Ahmed, 2010)4. In following the pre-emptive logic of inoculating the population against future trauma, now imaginable as brains’ neural networks become understood as interconnected in a kind of vast, global web, the trigger, a painful experience that is fragmented, becomes positioned as a block slowing down time and threatening the future life of the population. This new paradigm, now spreading to non-Western countries like an export, serves to re-interpret the trigger as a true break with the continuity of time. (see Should We Be Triggered? NeuroGovernance in the Future, Kim Cunningham)

Could we see trauma cultures arise in which our memories are erased, wiped out by neural pre-emptive logics after the dark and nefarious imposition of some world-wide civil-war in which billions suffer loss and death; or, after some climatological, ecological, viral, or cosmological catastrophe occurs; else as part of some global campaign and initiative as part of some neurogoverance task force’s imposition? Would we be absorbed into a reorganization and realignment to a false Symbolic Order all in the name of some false or real emergency: plague, civil-war, climate catastrophe etc.? One almost thinks this is all paranoid conspiracy if one had not all realized that indeed large corporate and government institutions world-wide are heavily invested in backing and funding such problematique investigations in the neurosciences and the other NBIC technologies. As Cunningham suggests among its unexpected gifts, the trigger traumas is that it offers an affective, embodied critique of the normative uses of objects, and a critique of Euclidean notions of space and time as chronological. The trigger’s connective ontology also makes it a creative force for critiquing the social. In my own view we might displace this as a retroactive enactment, a test-run of modeling process, a future forecasting in philo-fictional or hyperstitional scenarios that allow us to push the extremes of these various scenarios to their (not-so) logical conclusions and see what takes effect? What we need to accept or reject in such extrapolations, to counter the nefarious uses of such future situations in a pre-emptive strike against their misuse and abuse against the greater multiplicity.

“Your brain will be infinitely more powerful than the brains we have now,” Kennedy continues, as his brain pulsates onscreen. “We’re going to extract our brains and connect them to small computers that will do everything for us, and the brains will live on.”

“You’re excited for that to happen?” I ask.

“Pshaw, yeah, oh my God,” he says. “This is how we’re evolving.”

……
The Neurologist Who Hacked his Brain – Conversations with Phil Kennedy

Onlife 24/7 and the Intelligent World of the Future?

As we begin to interface with our machinic cousins on a more permanent based (i.e., already the iPod, iPad, tablet universal connectivity culture of Onlife 24/7 is apparent), will we begin to accept the slow inclusive absorption into the neurosphere of information, marketing, and consumerist info-glut as just part of the Order of things? In the  onlife-world, artefacts have ceased to be mere machines simply operating according to human instructions. They can change states in autonomous ways and can do so by digging into the exponentially growing wealth of data, made increasingly available, accessible and processable by fast-developing and ever more pervasive ICTs. Data are recorded, stored, computed and fed back in all forms of machines, applications, and devices in novel ways, creating endless opportunities for adaptive and personalised environments. A sort of solipsistic or infernal paradise for full-blown psychopaths and nerds, where filters of many kinds continue to erode the illusion of an objective, unbiased perception of reality, while at the same time they open new spaces for human interactions and new knowledge practices. (Floridi)

Will such a world become so ubiquitous and naturalized to our children and their children that the age without machines, the age when men and women thought on their own without external systems of artificial intelligence and memory will be a thing of history? An age in which we will have already crossed the Rubicon of the artificial divide and entered the post-human world without even an acknowledgement or nod or recognition? Will the moment of dreaded Singularity that so many fear become at that moment in the future just one more ubiquitous and invisible, even transparent underpinning of the new Symbolic Order of Governance and business as usual?  Will we already be so cyborgized that this unique and ubiquitous AI in our midst will become an acceptable risk, allowing it to take over more and more of our decisions, our governmental and corporate, political and economic tasks without ever questioning why this must be? Will we be phased out over time, allowing our more intelligent heirs and machinic children to inherit our place in the Sun? Will we at some point face the situation of the last human, Oligarchs include being excluded from the inevitable world of machinic civilization, existing in zoos or some form of commons enclosure, farmed out into the ancient and tributary worlds of non-machinic life as quaint but organic artifacts of pre-machinic life?

As J.G. Ballard reminded us: “We are all living in fictions at the moment, one need not write about it; instead the task of the writer, or any astute inquirer is to uncover what is left of reality.” Like insomniacs in some nightmare land out of joint we wander the new world of ubiquitous computing, AI, informational organisms as if in a science fiction novel that has taken over reality. Insomnia corresponds to the necessity of vigilance, to a refusal to overlook the horror and injustice that pervades the world. It is the disquiet of the effort to avoid inattention to the torment of the other. But its disquiet is also the frustrating inefficacy of an ethic of watchfulness; the act of witnessing and its monotony can become a mere enduring of the night, of the disaster . As John Crary tells us history has shown, war-related innovations are inevitably assimilated into a broader social sphere, and the sleepless soldier would be the forerunner of the sleepless worker or consumer. Non-sleep products, when aggressively promoted by pharmaceutical companies, would become first a lifestyle option, and eventually, for many, a necessity. …24/ 7 markets and a global infrastructure for continuous work and consumption have been in place for some time, but now a human subject is in the making to coincide with these more intensively.2

Yet, once the organic worker can no longer keep up the pace, and is outstripped by his machinic cousins, will he not be made obsolete? Already the functions and computational relations of the brain are being mapped to external systems, wired and wireless devices used in experimental laboratories to interface with laptops. Tomorrow the human will become a mere appendage to the 24/7 universe of data, a cog in the wireless mill of electronic heaven. As nanotech and pharmatech strive to revise the human, extending life, health, pleasure the other convergence technologies will incorporate humans into the new machinic environments of the future. A mutation and transformation is well underway, and while many are skeptical of such a transition the corporate, governmental, and global initiatives are hedging their bets and funding the nerds who will offer this posthuman future on a performative platter.

“All the groups working on BCIs are working toward wireless solutions. They are very superior,” said Frank Guenther. Using a neurological model constructed by Guenther, Ramsey’s brain activity is mapped to corresponding mouth and jaw movements. Another program decodes the signals, and synthesizes them in the sound of a tinny, but human-like voice.
……..On Brain to Computer Interfaces

Neurohyperstitional Enactment: Performance Art Invents the Future

Welcome to the neocameral global world vision where humans are machines in a sleepless universe of illuminated unending work. Maybe the parody corporation of ByoLogc is the template for all future syntech dominators: “The modern world expects more from the people who claim to take care of them, and at ByoLogyc, we think they deserve it.” And, although this is a parody, dreamed up by a Toronto art ensemble (Zed.To) to portray the dangers of this future world one can imagine that this is the future that will be portrayed to us down the pipe: beauty, health, happiness, immortality… the dream of perfectibility. ZED.TO was an 8-month narrative told in real-time through an integrated combination of interactive theatrical events and online content. It told the story of the beginning of the end of the world, from a viral pandemic created by ByoLogyc, a fictional Toronto-based biotech company. As one commentator reported: “ByoLogyc’s CEO Chet Getram is a ruthless and manipulative fictional character — a living experiment designed to explore how the language of human-centred design, sustainable business, and social innovation could be used to obscure a nefarious and short-sighted vision of profit as generated by a new biological economy.” 3 This is a parody… but eerily hyperstitional and strangely uncanny of trends we see emerging all around us.

Like a model for the future this parody took on a life of its own, a hyperstitional enactment:

Rather than engaging stakeholders through written scenarios, inaccessible white papers, and policy recommendation PowerPoints, ByoLogyc’s rise and fall was designed as a warning that would surface across media platforms, and come to life all around the people engaged in it. By the time the BRX Pandemic hit full stride in November of 2012, more than 3,500 members of the public, the academic community, and the private sector had engaged with the ByoLogyc story through live-action experiences, with another 40,000 engaging online through the consumption and active creation of content that brought the dystopian scenario to life. Many of these audiences paid premium ticket prices for their participation in the beginning of the end of the world, birthing a new business model for this kind of futures work, though we also managed to make half of our live performances and most online content available free of charge. … The Mission Business is tackling a new audience altogether — the visionaries, entrepreneurs, and engineers who spend their days (and often nights) working at the intersection of scientific and technological advancement and social change. We believe that by creating living and breathing scenarios that spill across media and seem like another element of the real world, we can provide a tremendously useful backdrop for rehearsing crisis response, encouraging out-of-the-box thinking, and understanding the social impacts of the singularity at a personal and communal level. When we share an experience of the future that is believable enough to be real, we internalize and remember what happened in powerful new ways. Things get powerful when teams can refer to a memory of a scenario’s implications, rather than just a memo.

Culture of Death or The Death of Culture?

As we can see from the above the economic and political powers of techno-capitalism are investing heavily in these new sciences seeking to further enable command and control techniques and technologies for purposes of economic, political, social, media, medical, military, and governance knowledge and power. In our time we are witnessing an epochal, unprecedented migration of humanity from its Newtonian, physical space to the neurosphere within the neurosciences are subordinated to economic and political, corporate goals and initiatives of socio-cultural command and control governance. As a result, humans will become more and more dependent on pharmaceutical, transhumanist, and posthuman technologies as inforgs – informational organisms tied to both neural-tech and drug induced therapies and political forms of coercion and heuristics.

Among other (possibly artificial) neuro-inforgs and agents operating in an external/virtual environment that is neither friendly nor built specifically for human beings, but rather more and more for the artificial informational creatures which will begin to supplant humans for the non-human civilizations of the future. As digital immigrants like us are replaced by digital natives like our children, the latter will come to appreciate that there is no ontological difference between Neurosphere and physical world, only a difference in levels of abstraction (Floridi). When the migration is complete, we shall increasingly feel deprived, excluded, handicapped, or impoverished to the point of paralysis and psychological trauma whenever we are disconnected from the Neurosphere, like fish out of water. One day, being an neuro-inforg will be so natural that any disruption in our normal flow of information, communication, and intrinsic/extrinsic messaging and flows will make us sick.1

As we banally battle over outmoded forms of speculative philosophy, Left or Right politics and its depleted traditions of meditainment and dramatized sovereignty collapse of national and economic entities a new world is rising in our midst, out of the ruins of a two-thousand year old farce: Western Civilization and its Judeo-Christian worldview. Without even a whimper our lives are about to change forever and we sit idly by as if all these modern marvels of science were being developed for our benefit, when in fact they are as always being developed for the smaller initiatives of petty Bankers, Oligarchs, and the elite minions that form both private and governmental authority, ideology, and utopian/fantasy.

Caught up within the daily grind of mere survival, the masses of the uneducated, the excluded, the neglected workers of the world who are forced into menial labor or no jobs at all situate their lives within the prison house of a circular madhouse of street drugs, alcohol, or mindless mediatainment systems of escape and fantasy, without ever thinking past their daily non-lives. Too tired to belabor the point, these beings are trapped within a system that, if they understood even a glimmer of its power over their lives, would make them die of sheer horror or enter the asylums of psychotic and schizophrenic inmates.

Am I being a little hyperbolic? Sure I am. This is no laughing matter. But how else to approach the madhouse of civilization? Laughter or tears? Maybe Zizek is right: we need our jokes to awaken us out of our stupor, our mundane numb indifference. Dark humor, or the older forms of violent farce and comic nihilism, was meant to shock and awaken rather than to put you to sleep like the canned laughter of your TV. We seem to relish our oblivion, our decadent body-games of mental masturbation, hiding in video games of violence and disaster as if this collective fantasy of catastrophe might keep at bay the real one ticking like a bomb in our environment. While those on the Right deny climatological apocalypse as a Left-wing conspiracy, and the Left belabors the Right-wing conspiracies of religious and homegrown terrorist warriors and gunmongers, the real world just beyond both ideological constructs moves forward with its own impervious and impersonal death drives. We move in a circle of pleasure and pain, driven by those biological forces of violence and death that have for millennia served our competitive and conflict-ridden need for mastery over the natural order. But what served our kind well for hundreds of thousands of years in our emergence from the slime is now turning on us, imploding and bringing the house down around us in one fell swoop of self-lacerating judgment, as if we were in this generation moving in two directions at once: Janus-faced, we wander from ourselves in amazement, not knowing what we are doing or what we are seeking. Mindless, we grasp for external authority and ethical footholds when every last one of the old religious and ethical myths has fallen into silence and disrepute. Now we stand, alone or together, like fish out of water, expecting some grand savior to return and redeem us before it is too late. It is too late. We are responsible for the mess, children afraid to grow up and face the truth of our ignorance and our failures to adapt.

Is there a silver lining in there somewhere? Hope? A second chance? Yes, for the very death drive that keeps us restlessly churning for the systems of death is also the very force of creativity and inventiveness we need to get ourselves out of this mess – if we would just act, take a stand, face the truth of things as they are, not as we would like them to be. The Real is the great horror vacui of our age, the antagonistic calamity that forced us into the crack of consciousness to begin with. The wound opened up by the poison of existence can only be healed by the instrument of that poison: a conscious decision. Decisions have repercussions; they need commitment and education, pain and memory, an ethical stance grounded not in some external god-infested power but in the very real truth of our semantic depletion and our knowledge of our limitations and ignorance, our finitude. Philosophers think we can move past such outmoded notions of limit and finitude, when in deed and in fact we have remained, and will remain, within the circle of consciousness, unknowing of the very ground of real physical powers that intervene and create the very freedom and determinations of our being-in-the-world.

Caught between external networks of knowledge and power, and internal drives of biological evolution, we act as “vanishing mediators” (Zizek) between these intrinsic/extrinsic forces. The sciences are neither good nor evil (non-ethical), but are suborned at the moment to the economic and political pressures of a global system that seeks to serve the dictates of larger corporate and governmental institutions (as are academia and think-tanks, Trusts, Funds, NGO’s, R&D labs, Shadow Corps., etc.). We live in a time when these forces of global and corporate governance seek to suborn the great knowledge and power of technology and the sciences to their own agendas. It is men, not their knowledge systems, that remain as always bound to the determinate forces of good or ill. As we learned from WWII, knowledge is power (an old cliché): the discovery of atom smashing initiated processes that could lead either to new energy sources or to warmongering systems of annihilation. We know what happened then. Will we repeat the same mistakes with the new NBIC technologies and sciences?

It is only the courage of our acts, our decisions, that sets us apart from the impersonal and indifferent forces of the natural without and within us. It is the unnatural in us, the artificial gap of consciousness irreducible to internal or external natural forces of determinism, that is both our glory and our continuing sorrow; this crack between environment and brain, our conscious mind, is the only apotropaic charm we have against being absorbed back into the pre-critical Spinozist continuum of the Absolute energetic Real. We must forever desuture thought from being, allow this oscillation between the internal/external powers of the natural to play out in the gap of our subjectivity and subjectivation; otherwise we will be reorganized into the impersonal and indifferent universe of power out of which our ancestors, by some unforeseen leap, entered the freedom of conscious awareness millennia ago. The future remains open and incomplete, as do the Real and reality; what we do is up to our acts and decisions, our commitments and collective determinations. Will we remain passive victims of indecisiveness and apathy, letting our false leaders in government and corporate enclaves dictate their own economic and political agendas, or will we come together in solidarity and cooperation across the globe and say NO to these minions of command and control? It’s truly up to us to act, no one else will do it for us; in fact, the powers that be are betting on it.


What is an act in the strict Lacanian sense of the term?

“In a way, everything is here: the decision is purely formal, ultimately a decision to decide, without a clear awareness of WHAT the subject decides about; it is a non-psychological act, unemotional, with no motives, desires or fears; it is incalculable, not the outcome of strategic argumentation; it is a totally free act, although one couldn’t do it otherwise. It is only AFTERWARDS that this pure act is “subjectivized,” translated into a (rather unpleasant) psychological experience. …[T]he subject reaches the level of a true ethical stance only when he moves beyond this duality of the public rules (Laws, Religion, Ethical authority externalized, etc.) as well as their superego shadow (big Other, Police, all authority figures of governmental and corporate power)… First, we get the straight morality (the set of explicit rules we choose to obey…); then, we experience its obscene underside (the literal and figurative intermedia enactments of crime, revolution, terror, etc.); finally, when, based on this experience, we acknowledge the necessity to BREAK the explicit moral rules of the accepted Culture and Civilization (our Culture of Death), we begin to reach the level of ethics proper.” (from The Act and its Vicissitudes)

Ultimately we enter the no-man’s land of the excluded, the outcast, the non-human realms beyond the current “culture of death”: creating interzones between-the-times, outlaw cultures of the traumatized community and its secret rules, where we begin to subtract ourselves, at first through de-education and re-organized cognitive and ethical forms, and then toward a re-organization of the very Symbolic Order itself. Inventing the possibility of the future out of the impossible ruins of global capitalist degradation and collapse. A future open and incomplete, worthy of hope and life, a place where human and non-human alike can begin to cooperate in a world based not on some malformed fantasy of peace, but on the hard-nosed truth of our incomplete and open universe of real and catastrophic existence. Accepting that conflict and antagonism will not go away, that there can be no ultimate closure between thought and being, no total enclosure of imagination and reason in some Utopian Civilization, but rather that the tensions between intrinsic and extrinsic forces, capacities, and powers of the natural Real will remain in excess of all our conceptual and heuristic tools. Born in time, we are partners in the labors of temporal change, not its victims. Act like it. Yet this too is a hyperstitional fantasy, part science, part imaginative science fiction: one that seeks a way out of the overdetermined global fantasy regimes of techno-capitalist command and control. Is this possible? Which path forward: triumph or agony?


 

  1. Floridi, Luciano. The Ethics of Information (Oxford University Press, 2013), pp. 16–17. Kindle edition.
  2. Crary, Jonathan. 24/7: Late Capitalism and the Ends of Sleep (Verso Books, 2013), pp. 1–2. Kindle edition.
  3. Haldenby, Trevor. April Fools: The Truth about ByoLogyc (Singularity).
  4. Ahmed, Sara. The Promise of Happiness (Duke University Press, 2010).
  5. Carew, Joseph. Ontological Catastrophe: Zizek and the Paradoxical Metaphysics of German Idealism (New Metaphysics) (Michigan Publishing, University of Michigan Library, 2014).
  6. Chalmers, David. “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies 2.3 (1995): 200–219.

R. Scott Bakker: Medial Neglect and Black Boxes

In some ways Scott Bakker’s short post Intentional Philosophy as the Neuroscientific Explananda Problem succinctly shows us the central problem of our time: medial neglect. But what is medial neglect? The simplest explanation is that we are blind to the very internal processes that condition our awareness of ourselves, our conscious mind. Scott’s point is that no one – not philosophers, not neuroscientists, no one – can agree on why this should be. No one can explain what consciousness is, much less how it emerges from the physical substratum of our brain. Philosophers of Mind have battled over the extremes of pure reductive physicalism (Davidson, etc.) and its opposite, the irreductive world of the mind/body dualism of a Descartes. Yet, for all our advances in neuroscience and the technological breakthroughs in brain-scan imaging, we still cannot explain this indefinite terrain between brain and consciousness. Not that many have not tried. Opening my library or my e-book reader, I have hundreds of books, journals, and publications devoted to just this one subject alone. (Yes, I’m a bibliomaniac, an endless, restless reader of anything and everything… madness? perhaps…)

Of course, over time some explanations have – through sheer numbers, probabilistic accuracy, or clarity – gained a greater ability to narrow our margins on this difficult problem. For some time the contemporary intellectual scene has offered two options for understanding the relationship of consciousness and world: their dynamic interconnectivity and unity in phenomenological accounts of the lived body, or the outright rejection of the importance of lived first-person experience as a mere epiphenomenal effect of the mechanical movement of nature or the structures guiding discourse – both of which comprise a disavowal of the primordial self-reflexive ipseity of the subject. The notion of the Cartesian cogito (Subject) opened up a basic metaphysical truth of subjectivity by presenting a world where “the mind and body are, so to speak, negatively related—oppositional discord is, obviously, a form of relation.” (Zizek) But the sciences look for hard evidence to support such metaphysical claims, and when it cannot be found, everything sits in limbo – where it has remained for a few hundred years.


Neuroscience of Memory Erasure: The Ethics of Mindwipe


One of the main themes I’ve been working with in my near-future novel is the ethical dilemma of memory erasure, or what I’ll term “mindwipe”. As a science fiction motif this has been around for ages. But we’re starting to see the edges of it entering actual scientific regions of knowledge and testability. Not only that, but also the notions of neural implants, transplanted or false memories, etc. All these technologies come with a price, as they always have. They can be used for good or ill. War or peace. That’s the dilemma.


Neural Love: The Quantum Dynamics of Life, Love, and Neuroscience


How to entangle, trammel up and snare
Your soul in mine, and labyrinth you there
Like the hid scent in an unbudded rose?
Aye, a sweet kiss — you see your mighty woes.
……– John Keats, “Lamia”

“Enzymes are the workhorses of life.”
……– Life on the Edge: The Coming of Age of Quantum Biology

In Keats’s poem the notion of entanglement is related to the allurement and capture of a sexual object that becomes so inextricably tangled in the web of desire and sensual (scent) snares that it must surrender to the power of this force that has, like a spider, trapped it in a skein of dark delights. And only in the very moment of her haptic rapture at the touch, the kiss, does she at last become conscious, her reasoning powers of mind and thought coming back to her, awakening her from this dark affective region of immersive and blind passion, allowing her – too late – to understand her engulfment and surrender to the allurements of Love.

So is there a physical basis for such processes? Keats, being a Romantic poet, was not concerned with the deeper neural or physical processes underlying this dark and erotic power of love – this power to ensnare and blind the other in the meshes of what were, for him, mysterious interior forces, such that the lover would realize too late she’d surrendered her body, mind, and erotic being without ever thinking through this strange engulfment with reason or consciousness.

One could recite a long history of the erotic powers of the body and its representations in poetry, literature, philosophy, and the strange mixture of science and philosophy that would become psychoanalysis. Yet in our time we’ve come full circle and begun separating out this intermixture of things, realizing that speculation may offer interesting leaps of the figural imagination, but when it comes down to it we have no actual access to the underlying causes of such processes. As Einstein put it, “Gravity cannot be held responsible for people falling in love.” The point being that the science of gravity is still a mystery, but unlike gravity, which can be quantified and measured, love is beyond the observable measurement of scientific knowledge.

But is it?

“How on earth are you ever going to explain in terms of chemistry and physics so important a biological phenomenon as first love?” Einstein asked.

Part One: The Biochemical Connection

As David DiSalvo suggests, thinking about one’s beloved—particularly in new relationships—triggers activity in the ventral tegmental area (VTA) of the brain, which releases a flood of the neurotransmitter dopamine (the so-called “pleasure chemical”) into the brain’s reward (or pleasure) centers, the caudate nucleus and nucleus accumbens. This gives the lover a high not unlike the effect of narcotics, and it’s mighty addictive. At the same time, the brain in love experiences an increase in the stress hormone norepinephrine, which increases heart rate and blood pressure, effects similar to those experienced by people using potent addictive stimulants like methamphetamine.

He mentions the work of Helen Fisher, a biological anthropologist who is a Research Professor in the Department of Anthropology at Rutgers University. She has written five books on the evolution and future of human sexuality: monogamy, adultery and divorce, gender differences in the brain, the chemistry of romantic love, and most recently, human personality types and why we fall in love with one person rather than another. As she puts it:

“Love can start off with any of these three feelings,” Fisher maintains. “Some people have sex first and then fall in love. Some fall head over heels in love, then climb into bed. Some feel deeply attached to someone they have known for months or years; then circumstances change, they fall madly in love and have sex. But the sex drive evolved to encourage you to seek a range of partners; romantic love evolved to enable you to focus your mating energy on just one at a time; and attachment evolved to enable you to feel deep union to this person long enough to rear your infants as a team.”

But these brain systems can be tricky. Having sex, Fisher says, can drive up dopamine in the brain and push you over the threshold toward falling in love. And with orgasm, you experience a flood of oxytocin and vasopressin – giving you feelings of attachment. “Casual sex isn’t always casual,” Fisher reports; “it can trigger a host of powerful feelings.” In fact, Fisher believes that men and women often engage in “hooking up” to unconsciously trigger these feelings of romance and attachment.

What happens when you fall in love? Fisher says it begins when someone takes on “special meaning.” “The world has a new center,” Fisher says, “then you focus on him or her. Your beloved’s car is different from every other car in the parking lot, for example. People can list what they don’t like about their sweetheart, but they sweep these things aside and focus on what they adore. Intense energy, elation, mood swings, emotional dependence, separation anxiety, possessiveness, a pounding heart and craving are all central to this madness. But most important is obsessive thinking.” As Fisher says, “Someone is camping in your head.”

Fisher and her colleagues have put 49 people into a brain scanner (fMRI) to study the brain circuitry of romantic love: 17 had just fallen madly in love; 15 had just been dumped; 17 reported they were still in love after an average of 21 years of marriage. One of her central ideas is that romantic love is a drive stronger than the sex drive. As she says, “After all, if you casually ask someone to go to bed with you and they refuse, you don’t slip into a depression, or commit suicide or homicide; but around the world people suffer terribly from rejection in love.”

Fisher also maintains that taking serotonin-enhancing antidepressants (SSRIs) can potentially dampen feelings of romantic love and attachment, as well as the sex drive.

Fisher has looked at marriage and divorce in 58 societies, adultery in 42 cultures, patterns of monogamy and desertion in birds and mammals, and gender differences in the brain and behavior. In her newest work, she reports on four biologically-based personality types, and using data on 28,000 people collected on the dating site Chemistry.com, she explores who you are and why you are chemically drawn to some types more than others.

Yale News Reports

What can the neurosciences tell us about the mystery of these dark desires that up till now we could only tie to poetic or philosophical speculation? As Bill Hathaway reports in YaleNews, meditation helps pinpoint neurological differences between two types of love. Yale School of Medicine researchers studying meditators discovered, using fMRI scans, that a more selfless variety of love — a deep and genuine wish for the happiness of others without expectation of reward — actually turns off the same reward areas that light up when lovers see each other.

As Judson Brewer explains: “When we truly, selflessly wish for the well-being of others, we’re not getting that same rush of excitement that comes with, say, a tweet from our romantic love interest, because it’s not about us at all.” The reward centers of the brain that are strongly activated by a lover’s face (or a picture of cocaine) are almost completely turned off when a meditator is instructed to silently repeat sayings such as “May all beings be happy.”

The Huffington Post, describing aspects of this, recounts how Helen Fisher, in a TED talk about the brain in love (Why We Love), said: “Romantic love is an obsession, it possesses you. You can’t stop thinking about another human being. Somebody is camping in your head…. Romantic love is one of the most addictive substances on Earth.” She went on to describe the sting of being rejected by one’s lover, too:

“When you’re dumped, the one thing you want is to just forget this human being and move ahead with your life, but no, you just love them harder. The reward system for wanting, for motivation, for craving, for focus becomes more active when you can’t get what you want.”

Like a drug, we become addicted and can’t get enough, so that many end up killing themselves, or else turn aggressive, commit crimes of passion, and inflict other sordid and dark horrors upon themselves or others. Carolyn Gregoire also relates information about a 2011 study published in the journal Social Cognitive and Affective Neuroscience, which looked at which brain regions are activated in individuals in long-term romantic partnerships (who had been married an average of 21 years), as compared to individuals who had recently fallen in love. The results, surprisingly, revealed similar brain activity in both groups.

As Adoree Durayappah in Psychology Today reports the “key to understanding how to sustain long-term romantic love is to understand it a bit scientifically. Our brains view long-term passionate love as a goal-directed behavior to attain rewards. Rewards can include the reduction of anxiety and stress, feelings of security, a state of calmness, and a union with another.”

As Gregoire tells us, another study conducted by Fisher and her colleagues found that most women who had recently fallen in love showed more brain activity in regions associated with reward, emotion and attention, whereas men tended to show the most activity in visual processing areas, including the area associated with sexual arousal. But that doesn’t mean that men are wired to look for sexual gratification rather than more enduring romantic connections.

Part Two: What about Quantum Biology?

In the journal Nature we’re introduced to the new science of quantum biology. Learning from nature is an idea as old as mythology — but until now, no one has imagined that the natural world has anything to teach us about the quantum world. As the article describes it, discoveries in recent years suggest that nature knows a few tricks that physicists don’t: coherent quantum processes may well be ubiquitous in the natural world. Known or suspected examples range from the ability of birds to navigate using Earth’s magnetic field to the inner workings of photosynthesis — the process by which plants and bacteria turn sunlight, carbon dioxide and water into organic matter, and arguably the most important biochemical reaction on Earth. (Physics of life: The dawn of quantum biology)

Quantum biology refers to applications of quantum mechanics and theoretical chemistry to biological objects and problems. Many biological processes involve the conversion of energy into forms that are usable for chemical transformations and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes such as photosynthesis and cellular respiration. Quantum biology uses computation to model biological interactions in light of quantum mechanical effects.
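To get a feel for why only the lightest particles – electrons and protons – can pull off these quantum tricks, a back-of-the-envelope calculation helps. The sketch below is my own toy illustration, not drawn from any of the sources above: it uses the textbook WKB approximation for tunnelling through a rectangular barrier, with a made-up but molecularly plausible barrier of 0.5 eV height and one ångström width.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # one electron-volt in joules

def tunnel_probability(mass_kg, barrier_ev, width_m):
    """Crude WKB estimate of the probability that a particle of the
    given mass tunnels through a rectangular barrier: T ~ exp(-2*kappa*d),
    with kappa = sqrt(2*m*V)/hbar for a barrier of height V and width d."""
    kappa = math.sqrt(2.0 * mass_kg * barrier_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

m_electron = 9.1093837015e-31  # kg
m_proton = 1.67262192369e-27   # kg

# illustrative numbers only: a 0.5 eV barrier about one angstrom wide
t_e = tunnel_probability(m_electron, 0.5, 1e-10)
t_p = tunnel_probability(m_proton, 0.5, 1e-10)
print(f"electron: {t_e:.3f}   proton: {t_p:.1e}")
```

Because the mass sits inside an exponential, the electron tunnels with roughly even odds while the proton’s probability collapses by some thirteen orders of magnitude – which is why enzymes can shuttle electrons and protons quantum mechanically while whole atoms, let alone molecules, stay stubbornly classical.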

Physicist Roger Penrose, of the University of Oxford, and anesthesiologist Stuart Hameroff, of the University of Arizona, were the first to propose that the brain acts as a quantum computer — a computational machine that makes use of quantum mechanical phenomena (like the ability of particles to be in two places at once) to perform complex calculations. In the brain, fibers inside neurons could form the basic units of quantum computation. Yet there has been little evidence to support their Orch OR model. Penrose, in The Emperor’s New Mind, went on to propose: “[T]he evolution of conscious life on this planet is due to appropriate mutations having taken place at various times. These, presumably, are quantum events, so they would exist only in linearly superposed form until they finally led to the evolution of a conscious being—whose very existence depends on all the right mutations having ‘actually’ taken place!”

Orchestrated objective reduction (Orch-OR) is a model of consciousness theorized by theoretical physicist Sir Roger Penrose and anesthesiologist Stuart Hameroff, which claims that consciousness derives from deeper level, finer scale quantum activities inside the cells, most prevalent in the brain’s neurons. It combines approaches from the radically different angles of molecular biology, neuroscience, quantum physics, pharmacology, philosophy, quantum information theory, and aspects of quantum gravity. (Wiki)

In response to the criticisms of the Orch OR model cited in an article by Tanya Lewis on this new theory, Stuart Hameroff offers several pieces of evidence. In reply to the objection that the brain is too warm for quantum computations, Hameroff cites a 2013 study led by Anirban Bandyopadhyay at the National Institute of Material Sciences (NIMS) in Tsukuba, Japan, which found that “microtubules become essentially quantum conductive when stimulated at specific resonant frequencies,” Hameroff said.

In reply to the criticism that microtubules are found in (unconscious) plant cells too, Hameroff said that plants have only a small number of microtubules, likely too few to reach the threshold needed for consciousness. But he also noted that Gregory Engel of the University of Chicago and colleagues have observed quantum effects in plant photosynthesis. “If a tomato or rutabaga can utilize quantum coherence at warm temperature, why can’t our brains?” Hameroff said.

In an article in the Guardian, the authors of Life on the Edge: The Coming of Age of Quantum Biology, Jim Al-Khalili and Johnjoe McFadden, describe the quantum effects that underlie the biochemical processes of life and brain. An excerpt:

Seventy years ago, the Austrian Nobel prize-winning physicist and quantum pioneer Erwin Schrödinger suggested in his famous book, What is Life?, that, deep down, some aspects of biology must be based on the rules and orderly world of quantum mechanics.

But what about life? Schrödinger pointed out that many of life’s properties, such as heredity, depend on molecules made of comparatively few particles – certainly too few to benefit from the order-from-disorder rules of thermodynamics. But life was clearly orderly. Where did this orderliness come from? Schrödinger suggested that life was based on a novel physical principle whereby its macroscopic order is a reflection of quantum-level order, rather than the molecular disorder that characterizes the inanimate world. He called this new principle “order from order”.

Up until a decade or so ago, most biologists would have said no. But as 21st-century biology probes the dynamics of ever-smaller systems – even individual atoms and molecules inside living cells – the signs of quantum mechanical behavior in the building blocks of life are becoming increasingly apparent. Recent research indicates that some of life’s most fundamental processes do indeed depend on weirdness welling up from the quantum undercurrent of reality. Here are a few of the most exciting examples.

Enzymes are the workhorses of life. They speed up chemical reactions so that processes that would otherwise take thousands of years proceed in seconds inside living cells. Life would be impossible without them. But how they accelerate chemical reactions by such enormous factors, often more than a trillion-fold, has been an enigma. Experiments over the past few decades, however, have shown that enzymes make use of a remarkable trick called quantum tunneling to accelerate biochemical reactions. Essentially, the enzyme encourages electrons and protons to vanish from one position in a biomolecule and instantly rematerialize in another, without passing through the gap in between – a kind of quantum teleportation.

And before you throw your hands up in incredulity, it should be stressed that quantum tunneling is a very familiar process in the subatomic world and is responsible for such processes as radioactive decay of atoms and even the reason the sun shines (by turning hydrogen into helium through the process of nuclear fusion). Enzymes have made every single biomolecule in your cells and every cell of every living creature on the planet, so they are essential ingredients of life. And they dip into the quantum world to help keep us alive.

Another vital process in biology is of course photosynthesis. Indeed, many would argue that it is the most important biochemical reaction on the planet, responsible for turning light, air, water and a few minerals into grass, trees, grain, apples, forests and, ultimately, the rest of us who eat either the plants or the plant-eaters.

The initiating event is the capture of light energy by a chlorophyll molecule and its conversion into chemical energy that is harnessed to fix carbon dioxide and turn it into plant matter. The process whereby this light energy is transported through the cell has long been a puzzle because it can be so efficient – close to 100% and higher than any artificial energy transport process.

The first step in photosynthesis is the capture of a tiny packet of energy from sunlight that then has to hop through a forest of chlorophyll molecules to make its way to a structure called the reaction center where its energy is stored. The problem is understanding how the packet of energy appears to so unerringly find the quickest route through the forest. An ingenious experiment, first carried out in 2007 in Berkeley, California, probed what was going on by firing short bursts of laser light at photosynthetic complexes. The research revealed that the energy packet was not hopping haphazardly about, but performing a neat quantum trick. Instead of behaving like a localized particle travelling along a single route, it behaves quantum mechanically, like a spread-out wave, and samples all possible routes at once to find the quickest way.

A third example of quantum trickery in biology – the one we introduced in our opening paragraph – is the mechanism by which birds and other animals make use of the Earth’s magnetic field for navigation. Studies of the European robin suggest that it has an internal chemical compass that utilises an astonishing quantum concept called entanglement, which Einstein dismissed as “spooky action at a distance”. This phenomenon describes how two separated particles can remain instantaneously connected via a weird quantum link. The current best guess is that this takes place inside a protein in the bird’s eye, where quantum entanglement makes a pair of electrons highly sensitive to the angle of orientation of the Earth’s magnetic field, allowing the bird to “see” which way it needs to fly.

All these quantum effects have come as a big surprise to most scientists who believed that the quantum laws only applied in the microscopic world. All delicate quantum behaviour was thought to be washed away very quickly in bigger objects, such as living cells, containing the turbulent motion of trillions of randomly moving particles. So how does life manage its quantum trickery? Recent research suggests that rather than avoiding molecular storms, life embraces them, rather like the captain of a ship who harnesses turbulent gusts and squalls to maintain his ship upright and on course.

Just as Schrödinger predicted, life seems to be balanced on the boundary between the sensible everyday world of the large and the weird and wonderful quantum world, a discovery that is opening up an exciting new field of 21st-century science.

Read the full article: You’re powered by quantum mechanics. No, really… and their new book: Life on the Edge: The Coming of Age of Quantum Biology.

Conclusion

So the next time you feel compelled to kiss your loved one, stop and think about all those minuscule neurons and biochemical factories churning away below that bony skull of yours, waking up and cooking a little sexual desire in their brain kitchen of Love; and while you’re at it, go on and embrace the turbulent quantum storms pulsing below the threshold, the motions of trillions of quantum effects that are drawing you ever so close to that strange world of quantum biology and the effects of light and eros. Embrace your quants!

An interesting reading list:

Michael Graziano: Consciousness and the Social Brain

Began reading Michael Graziano’s Consciousness and the Social Brain as part of my continuing education in the various aspects of the neurosciences. Not being a scientist I have to rely on these exploratory mission impossibles: that is, I have to rely on the scientists themselves who are actually doing the science in question. As always, some are better than others at using the old-style ‘folk language’ of our common lingua franca. Graziano starts with a basic premise that the “human brain contains about one hundred billion interacting neurons. Neuroscientists know, at least in general, how that network of neurons can compute information. But how does a brain become aware of information?”1 One thing already obvious is his metaphor of the brain as a computer: a computational device that processes data, organizes information, and carries on the inputs/outputs of the survival mechanics of the human body in its interactions with itself and the world. This notion of the brain as computational is not a known fact, but a theory like other theories. Yet the way Graziano words his statement, it’s as if this were a known fact among facts rather than another theoretical insight into how the brain in fact works. But is the brain computational? Are there other theories of the brain that neuroscientists support?

We can see already that we’re in trouble. Before I even begin to read Graziano on consciousness I have to know why he thinks the brain can be like a digital device such as a computer; or why he accepts the Computational Theory of Mind (CTM). Of course such a notion has only arisen within the past thirty years, according to the Stanford Encyclopedia of Philosophy; as one scholar tells us:

 This view—which will be called the “Computational Theory of Mind” (CTM)—is thus to be distinguished from other and broader attempts to connect the mind with computation, including (a) various enterprises at modeling features of the mind using computational modeling techniques, and (b) employing some feature or features of production-model computers (such as the stored program concept, or the distinction between hardware and software) merely as a guiding metaphor for understanding some feature of the mind. This entry is therefore concerned solely with the Computational Theory of Mind (CTM) proposed by Hilary Putnam [1961] and developed most notably for philosophers by Jerry Fodor [1975, 1980, 1987, 1993]. The senses of ‘computer’ and ‘computation’ employed here are technical; the main tasks of this entry will therefore be to elucidate: (a) the technical sense of ‘computation’ that is at issue, (b) the ways in which it is claimed to be applicable to the mind, (c) the philosophical problems this understanding of the mind is claimed to solve, and (d) the major criticisms that have accrued to this view.2

Of course they’re speaking of Mind, not Brain, so there are differences in the technical use of such a theoretic. Philosophers love to speak of the Mind as if it were disconnected from the physical Brain that produces it. And, of course, as usual this leads into another series of questions as to how the Mind and Brain connect; or the question: Is there really such a thing as Mind? Or is the Mind, like other concepts, just an object of philosophical speculation and positing? But then again scientists posit the physicalness of the Brain, too. As you can see, we’ve suddenly found ourselves falling into all kinds of difficult terrain with no end in sight.

I originally wanted to tell you what Graziano is up to, but have found myself in that strange zone of thought where language becomes a stumbling block to the very quest of describing these facts. Of course I could just silently pass over all these little details of the computational, modular, functional, etc. aspects and theories surrounding how the brain actually operates; but if we don’t weed out the truth of this basic foundational view of the physical three-pound lump in my skull, how shall we ever begin to describe what awareness is? I think I’ll have to just throw up my hands and begin again from the beginning. Why can’t this stuff be a little easier on my brain? Can you tell me that! 🙂

1. Graziano, Michael S. A. (2013-08-01). Consciousness and the Social Brain (pp. 3-4). Oxford University Press, USA. Kindle Edition.
2. Horst, Steven, “The Computational Theory of Mind“, The Stanford Encyclopedia of Philosophy (Spring 2011 Edition), Edward N. Zalta (ed.)

Romancing the Machine: Intelligence, Myth, and the Singularity

“We choose to go to the moon,” the president said. “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.”

I was sitting in front of our first Motorola color television set when President Kennedy spoke to us of going to the moon. After the Manhattan Project to build a nuclear bomb, this was the second great project that America used to confront another great power: the race to land on the moon. As I listened to the youtube.com video (see below) I started thinking about a new race going on in our midst: the intelligence race to build the first advanced Artificial General Intelligence (AGI). As you listen to Kennedy, think about how one of these days soon we might very well hear another President tell us that we must fund the greatest experiment in the history of humankind: the building of a superior intelligence.

Why? Because if we do not, we face certain extinction. Oh sure, such rhetoric of doom and fear has always had a great effect on humans. I imagine him or her trumping us with all the scientific validation about climate change, asteroid impacts, food and resource depletion, etc., but in the end he may pull out the obvious trump card: the idea that a rogue state, maybe North Korea, or Iran, is on the verge of building such a superior machinic intelligence, an AGI. But hold on. It gets better. For the moment an AGI is finally achieved is not the end. No. That is only the beginning, the tip of the iceberg. What comes next is AI or complete artificial intelligence: superintelligence. And no one can tell you what that truly means for the human race. Because for the first time in our planetary history we will live alongside something that is superior and alien to our own life form, something that is both unpredictable and unknown: an X Factor.

 

Just think about it. Let it seep down into that quiet three pounds of meat you call a brain. Let it wander around the neurons for a few moments. Then listen to Kennedy’s speech on the romance of the moon, and remember the notion of some future leader who will one day come to you saying other words, promising a great and terrible vision of surpassing intelligence and with it the likely ending of the human species as we have known it:

“We choose to build an Artificial Intelligence,” the president said. “We choose to build it in this decade, not because it is easy, but because it is for our future, our security, because that goal will serve to organize our defenses and the security of the world, because that risk is one that we are willing to accept, one we are not willing to postpone, because of the consequences of rogue states gaining such AI’s, and one which we intend to win at all costs.”


Is it really so far-fetched to believe that we will eventually uncover the principles that make intelligence work and implement them in a machine, just like we have reverse engineered our own versions of the particularly useful features of natural objects, like horses and spinnerets? News flash: the human brain is a natural object.

—Michael Anissimov, MIRI Media Director

We are all bound by certain cognitive biases. Looking them over I was struck by the conservatism bias: “The tendency to revise one’s belief insufficiently when presented with new evidence.” As we move into the 21st Century we are confronted with what many term convergence technologies: nanotechnology, biotechnology, genetic technology, and AGI. As I was looking over PewResearch’s site, which analyzes many of our most widely held belief systems, I spotted a report on AI, robotics, et al.:

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade. (see AI, Robotics, and the Future of Jobs)

This almost universal acceptance that robotics and AI will be a part of our inevitable future permeates the mythologies of our culture at the moment. Yet, as the report shows, there is a deep divide as to what this means and how it will impact the daily lives of most citizens. Of course the vanguard pundits and AGI experts hype it up, telling us, as Benjamin Goertzel and Steve Omohundro argue, that AGI, robotics, medical apps, finance, programming, etc. will improve substantially:

…robotize the AGI— put it in a robot body— and whole worlds open up. Take dangerous jobs— mining, sea and space exploration, soldiering, law enforcement, firefighting. Add service jobs— caring for the elderly and children, valets, maids, personal assistants. Robot gardeners, chauffeurs, bodyguards, and personal trainers. Science, medicine, and technology— what human enterprise couldn’t be wildly advanced with teams of tireless and ultimately expendable human-level-intelligent agents working for them around the clock?1

As I read the above I hear no hint of the human workers that will be displaced, put out of jobs, left to their own devices, lost in a world of machines, victims of technological and economic progress. In fact such pundits are pitching only to the elite, the rich, the corporations and governments that will benefit from such things, because humans are lazy, inefficient, victims of time and energy, expendable. It seems most humans at this point will be of no use to the elite globalists, and so will be put to pasture in some global commons or maybe fed to the machine gods.

Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them.

—Ray Kurzweil, inventor, author, futurist

In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

—George Dyson, historian

Kurzweil and Dyson agree that whatever these new beings become, they will not have our interests as a central motif of their ongoing script. As Goertzel tells Barrat, the arrival of human-level intelligent systems would have stunning implications for the world economy: AGI makers will receive immense investment capital to complete and commercialize the technology, and the range of products and services intelligent agents of human caliber could provide is mind-boggling. Take white-collar jobs of all kinds: who wouldn’t want smart-as-human teams working around the clock, doing the things normal flesh-and-blood humans do, but without rest and without error? (Barrat, pp. 183-184) Oh, yes, who wouldn’t… one might put that question to all those precarious intellectual laborers who will be standing in soup lines with the rest of us.

As many of the experts in the report mentioned above relate: about half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.

Sounds more like dystopia for the masses, and just another nickelodeon day for the elite oligarchs around the globe. Yet the other 52% have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution. Sounds a little optimistic to me. Human ingenuity versus full-blown AI? Sounds more like blind man’s bluff with the deck stacked in favor of the machines. As Stowe Boyd, lead researcher at GigaOM Research, said of the year 2025, when all this might be in place: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?’ Indeed, one wonders… we know the Romans built the great Circus, gladiatorial combat, great blood-bath entertainment for the bored and out-of-work minions of the Empire. What will the Globalists do?

A sort of half-way house of non-commitment came from Seth Finkelstein, a programmer, consultant, and winner of the Electronic Frontier Foundation’s Pioneer Award, who responded, “The technodeterminist-negative view, that automation means jobs loss, end of story, versus the technodeterminist-positive view, that more and better jobs will result, both seem to me to make the error of confusing potential outcomes with inevitability. Thus, a technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole….this is not a technological consequence; rather it’s a political choice.”

I love it that one can cop out by throwing it back into politics, thereby washing one’s hands of the whole problem, as if magically saying: “I’m just a technologist, let the politicians worry about jobs. It’s not technology’s fault; there is no determinism on our side of the fence.” Except it is not politicians who supply jobs, it’s corporations; and, whether technology is determined or not, corporations are: they’re determined by capital, by their stockholders, by profit margins, etc. So if they decide to replace workers with more efficient players (think AI, robots, multi-agent systems, etc.) they will, if it makes them money and profits. Politicians can hem and haw all day about it, but will be lacking in answers. So as usual the vast plebeian forces of the planet will be thrown back onto their own resources, and for the most part excluded from the enclaves and smart cities of the future. In this scenario humans will become the untouchables, the invisible, the servants of machines or pets; or, worst case scenario: pests to be eliminated.

Yet there are others, like Vernor Vinge, who believe all the above may be true, but not for a long while; that we will probably first go through a phase in which humans are augmented by intelligence devices. He believes this is one of three sure routes to an intelligence explosion in the future: a device that can be attached to your brain, imbuing it with additional speed, memory, and intelligence. (Barrat, p. 189) As Barrat tells us, our intelligence is broadly enhanced by the mobilization of powerful information technology, for example, our mobile phones, many of which have roughly the computing power of personal computers circa 2000, and a billion times the power per dollar of sixties-era mainframe computers. We humans are mobile, and to be truly relevant, our intelligence enhancements must be mobile. The Internet, and other kinds of knowledge, not the least of which is navigation, gain vast new power and dimension as we are able to take them wherever we go. (Barrat, p. 192)

But even if we have all this data at our braintips, it is still data that must be filtered, appraised, and evaluated. Data is not information. As Luciano Floridi tells us, “we need more and better technologies and techniques to see the small-data patterns, but we need more and better epistemology to sift the valuable ones”.2 As Floridi explains, what Descartes acknowledged to be an essential sign of intelligence, the capacity to learn from different circumstances, adapt to them, and exploit them to one’s own advantage, would be a priceless feature of any appliance that sought to be more than merely smart. (Floridi, KL 2657) Floridi puts an opposite spin on the issues around AGI and AI, telling us that whatever it ultimately becomes, it will not be some singular entity or self-aware being, but will instead be our very environment, what he terms the InfoSphere: the world is becoming an infosphere increasingly well adapted to the limited capacities of ICTs (Information and Communications Technologies). In a comparable way, we are adapting the environment to our smart technologies to make sure the latter can interact with it successfully. (Floridi, KL 2661)

For Floridi the environment around us is taking on intelligence; it will be so ubiquitous, invisible, and naturalized that it will become seamless, a part of our very onlife lives. The world itself will be intelligent:

Light AI, smart agents, artificial companions, Semantic Web, or Web 2.0 applications are part of what I have described as a fourth revolution in the long process of reassessing humanity’s fundamental nature and role in the universe. The deepest philosophical issue brought about by ICTs concerns not so much how they extend or empower us, or what they enable us to do, but more profoundly how they lead us to reinterpret who we are and how we should interact with each other. When artificial agents, including artificial companions and software-based smart systems, become commodities as ordinary as cars, we shall accept this new conceptual revolution with much less reluctance. It is humbling, but also exciting. For in view of this important evolution in our self-understanding, and given the sort of ICT-mediated interactions that humans will increasingly enjoy with other agents, whether natural or synthetic, we have the unique opportunity of developing a new ecological approach to the whole of reality. (Floridi, KL 3055-62)

That our conceptions of reality, self, and environment will take on a whole new meaning is beyond doubt. Everything we’ve been taught for two thousand years in the humanistic traditions will go bye-bye; or at least will be treated as the ramblings of early human children fumbling in the dark. At least so say the neo-information philosophers such as Floridi. He puts a neo-liberal spin on it and sponsors an optimistic vision of economic paradises for all. As he says in his conclusion, we are constructing an artificial intelligent environment, an infosphere that will be inhabited by millennia of future generations: “We shall be in serious trouble, if we do not take seriously the fact that we are constructing the new physical and intellectual environments that will be inhabited by future generations (Floridi, KL 3954).” Because of this, he tells us, we will need to forge a new alliance between the natural and the artificial. It will require a serious reflection on the human project and a critical review of our current narratives, at the individual, social, and political levels. (Floridi, 3971)

In some ways I concur with his statement that we need to take a critical view of our current narratives. To me the key is just that: humans live by narratives, stories, tales, fictions; they always have. The modernists wanted grand narratives, while the postmodernists loved micro-narratives. What will our age need? What will help us understand and participate in this great adventure ahead, in which the natural and artificial suddenly form alliances in ways never before seen in human history? From the time of the great agricultural civilizations to the Industrial Age to our own strange fusion of science fiction and fact, in a world where superhuman agents might one day walk among us, what stories will we tell? What narratives do we need to help us contribute to our future, and, hopefully, to the future of our species? Will the narratives ultimately be told a thousand years from now by our inhuman alien AIs to their children, of a garden that once existed wherein ancient flesh-and-blood beings lived: the beings that were once our creators? Or shall it be a tale of symbiotic relations in which natural and artificial kinds walk hand in hand, forging together adventures in the exploration of the galaxy and beyond? What tale will it be?

Romance or annihilation? Let’s go back to the bias: “The tendency to revise one’s belief insufficiently when presented with new evidence.” If we listen to the religious wing of transhumanism and the singularitarians, we are presented with a rosy future full of augmentations, wonders, and romance. On the other side we have the dystopians, the pessimists, the curmudgeons who tell us the future of AGI leads to the apocalypse of AI or superintelligence and the demise of the human race as a species. Is there a middle ground? Floridi seems to opt for one, where humans and technologies do not exactly merge nor destroy each other, but instead become symbionts in an ongoing onlife project without boundaries other than those we impose by a shared vision of balance and affiliation between natural and artificial kinds. Either way we do not know for sure what the future holds; but, as some propose, the future is not some blank slate or mirror: it is to be constructed. How shall we construct it? Above all: whose future is it anyway?
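What the conservatism bias means in practice is easiest to see against the Bayesian benchmark. A minimal sketch with hypothetical urn numbers, of the kind used in classic belief-revision experiments, where subjects typically updated far more timidly than the arithmetic warrants:

```python
from fractions import Fraction

# Hypothetical setup: urn A holds 70% red chips, urn B holds 30% red.
# One urn is chosen at random (prior odds 1:1) and chips are drawn
# with replacement. By Bayes' rule, each red draw multiplies the
# odds in favour of A by the likelihood ratio 0.7 / 0.3 = 7/3.
def posterior_A(n_red):
    """Probability that we are drawing from urn A after n_red red draws."""
    odds = Fraction(1, 1) * Fraction(7, 3) ** n_red
    return odds / (1 + odds)

for n in range(5):
    print(n, float(posterior_A(n)))
# The posterior climbs from 0.5 past 0.96 in just four draws;
# a 'conservative' believer revises far more slowly than this.
```

Four pieces of consistent evidence should carry a rational agent from 50% to roughly 97% confidence; the conservatism bias is the gap between that number and the much weaker revision people actually report.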

As James Barrat tells us, consider DARPA. Without DARPA, computer science and all we gain from it would be at a much more primitive state. AI would lag far behind, if it existed at all. But DARPA is a defense agency. Will DARPA be prepared for just how complex and inscrutable AGI will be? Will they anticipate that AGI will have its own drives, beyond the goals with which it is created? Will DARPA’s grantees weaponize advanced AI before they’ve created an ethics policy regarding its use? (Barrat, 189)

My feeling is that even if they had an ethics policy in place, would it matter? Once AGI takes off and is self-aware and able to improve its own capabilities, software, and programs, it will, as some say, become within a very few iterations a full-blown AI or superintelligence, a thousand, ten thousand, or more times beyond human intelligence. Would ethics matter when confronted with an alien intelligence so far beyond our simple three-pound organic brain that it may not even care, or bother to recognize or communicate with us? What then?

We might be better off studying some of the posthuman science fiction authors in our future posts (from io9’s Essential Posthuman Science Fiction list):

  1. Frankenstein, by Mary Shelley
  2. The Time Machine, by H.G. Wells
  3. Slan, by A.E. Van Vogt
  4. Dying Earth, Jack Vance
  5. More Than Human, by Theodore Sturgeon
  6. Slave Ship, Frederik Pohl
  7. The Ship Who Sang, by Anne McCaffrey
  8. Dune, by Frank Herbert
  9. “The Girl Who Was Plugged In” by James Tiptree Jr.
  10. Aye, And Gomorrah, by Samuel Delany
  11. Uplift Series, by David Brin
  12. Marooned In Realtime, by Vernor Vinge
  13. Beggars In Spain, by Nancy Kress
  14. Permutation City, by Greg Egan
  15. The Bohr Maker, by Linda Nagata
  16. Nanotech Quartet series, by Kathleen Ann Goonan
  17. Patternist series, by Octavia Butler
  18. Blue Light, Walter Mosley
  19. Look to Windward, by Iain M. Banks
  20. Revelation Space series, by Alastair Reynolds
  21. Blindsight, by Peter Watts
  22. Saturn’s Children, by Charles Stross
  23. Postsingular, by Rudy Rucker
  24. The World Without Us, by Alan Weisman
  25. Natural History, by Justina Robson
  26. Windup Girl, by Paolo Bacigalupi

1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (pp. 184-185). St. Martin’s Press. Kindle Edition.
2. Floridi, Luciano (2014-06-26). The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Kindle Locations 2422-2423). Oxford University Press. Kindle Edition.

Emerson, Neuroscience, & The Book of Nature – On Fate & Freedom

 

The book of Nature is the book of Fate. She turns the gigantic pages, — leaf after leaf, — never returning one.  … The element running through entire nature, which we popularly call Fate, is known to us as limitation. Whatever limits us, we call Fate. … Why should we fear to be crushed by savage elements, we who are made up of the same elements? – Ralph Waldo Emerson: Fate

As one reads and rereads Emerson’s essays, and especially the ones in The Conduct of Life, one gains a deeper appreciation of this man’s dark temperament, and of his tenacity in the face of those who would tyrannize us with superfluous notions of just what necessity and fate truly are.  For Emerson the notion of fate was but one of the forces, not the ruling force of life in this universe. The opposing force for him was freedom. If there are limits, if there are environmental factors that shape and bind us to certain limits and limitations of physical and mental constitution, there is also the opposing notion of mind and intelligence to counter the harsh necessities of life’s circumstances. Yet, the mind is not some separate entity, above it all; this would be illusion, too. No, the mind is very much enmeshed within the web of elements we call the universe, and it is within this very context and rootedness of mind in the processes of the universe that we must approach fate and freedom.

In his poem Fate  (see below) Emerson tells us that “There is a melody born of melody, which melts the world into a sea.” The notion that there are processes born of processes, which fold the world internally into the processes of the brain is at the heart of this. One could say that the production of production, system of system, or feedback loop within feedback loop all work their magic in this sea within:

That you are fair or wise is vain,
Or strong, or rich, or generous;
You must have also the untaught strain
That sheds beauty on the rose.
There is a melody born of melody,
Which melts the world into a sea.
Toil could never compass it,
Art its height could never hit,
It came never out of wit,
But a music music-born
Well may Jove and Juno scorn.
Thy beauty, if it lack the fire
Which drives me mad with sweet desire,
What boots it? what the soldier’s mail,
Unless he conquer and prevail?
What all the goods thy pride which lift,
If thou pine for another’s gift?
Alas! that one is born in blight,
Victim of perpetual slight;—
When thou lookest in his face,
Thy heart saith, Brother! go thy ways!
None shall ask thee what thou doest,
Or care a rush for what thou knowest,
Or listen when thou repliest,
Or remember where thou liest,
Or how thy supper is sodden,—
And another is born
To make the sun forgotten.
Surely he carries a talisman
Under his tongue;
Broad are his shoulders, and strong,
And his eye is scornful,
Threatening, and young.
I hold it of little matter,
Whether your jewel be of pure water,
A rose diamond or a white,—
But whether it dazzle me with light.
I care not how you are drest,
In the coarsest, or in the best,
Nor whether your name is base or brave,
Nor for the fashion of your behavior,—
But whether you charm me,
Bid my bread feed, and my fire warm me,
And dress up nature in your favor.
One thing is forever good,
That one thing is success,—
Dear to the Eumenides,
And to all the heavenly brood.
Who bides at home, nor looks abroad,
Carries the eagles, and masters the sword.


Blind Brain Theory and Enactivism: In Dialogue With R. Scott Bakker

Adam from Knowledge Ecology and R. Scott Bakker of Three Pound Brain got together last week for a talk, and Adam presents it for us. What was interesting, reading through the conversation, is how closely their ideas converged. Obviously there are slight disagreements and nuances of metaphor and framework, but all in all a very good and amiable conversation that was indeed enlightening. Adam would say:

“As I am working my way through the meta-theoretical baggage, though, I keep finding less and less to disagree with you on at the level of the constraints you argue for, which I am coming to see I agree with you on to some extent, though these are constraints that I think put you firmly in the skeptical / transcendental tradition you’re at pains to break free from! Anyway, on my end getting through the meta-theoretical layer is important since only then can I learn how to disagree with you better (and rest assured I have a feeling the disagreement will be longstanding!)”.

I also found it interesting how Bakker hones in, as usual carefully registering what he truly means by his stance on ‘intentionality’:

“Intentional cognition is real, there’s just nothing intrinsically intentional about it. It consists of a number of powerful heuristic systems that allows us to predict/explain/manipulate in a variety of problem-ecologies despite the absence of causal information. The philosopher’s mistake is to try to solve intentional cognition via those self-same heuristic systems, to engage in theoretical problem solving using systems adapted to solve practical, everyday problem – even though thousands of years of underdetermination pretty clearly shows the nature of intentional cognition is not among the things that intentional cognition can solve!”

Knowledge Ecology

[Image: Hannah Imlach]

Last week I posted a short essay on the question of meaning, style, and aesthetics in the ecological theories of Alva Noë and Jacob von Uexküll. The post resulted in a long and in-depth discussion with science fiction novelist and central architect of the Blind Brain Theory (BBT) of cognition, R. Scott Bakker. Our conversation waded through multiple topics including phenomenology, the limits of transcendental arguments, enactivism, eliminativism, meaning, aesthetics, pluralism, intentionality, first-person experience, and more. So impressed was I with Bakker’s adept ability to wade through the issues — across disciplines, perspectives, and controversies — despite my protests that I felt it worth excerpting our dialogue as a record of the exchange and as a resource for others interested in these debates. Whatever your views on the philosophy of mind, Bakker’s unique position is one you should familiarize yourself with — if only, like me, so that you…


Ray Brassier: The Manifest and Scientific Images of Wilfrid Sellars

Senselessness and purposelessness are not merely privative; they represent a gain in intelligibility.

– Ray Brassier, Nihil Unbound – Enlightenment and Extinction

Ray Brassier’s philosophical work Nihil Unbound – Enlightenment and Extinction has for the past few years been a sort of touchstone text for me, a repository of specific problems and issues to be resolved, looked at, returned to, thought about, explored, digested, and finally adapted to my ongoing philosophical project. Those who have not read his work are missing out on one of the great minds of our time. His clarity of thought, his ability to hone in on the specifics of a particular notion, idea, or concept, is without peer at the scale of his undertakings.

In this specific work he takes on both the Analytic and Continental traditions, beginning with the work of Wilfrid Sellars, whose ‘Myth of Jones’ would crystallize and articulate for several generations the key to the “rational infrastructure of human thought” that binds us all as humans in a “community of rational agents”.1 These notions of rational agents and rational infrastructure are connected to Sellars’s two “images” of man, the manifest and scientific images. One must not see these as opposing images so much as recognize the need to align them, as Brassier suggests, stereoscopically. Both images represent specific breakthroughs for humans in the long course of their evolutionary heritage, sophisticated theoretical achievements without which we would not be the types of beings we are now. The distinctive feature of the manifest image is that it was the first breakthrough, the originary framework in which humans first encountered their newfound conceptual abilities. When did we acquire these conceptual tools?

 

Darwin himself would suggest that the evolution of intelligence in man was due to several factors. Early humans would develop the ability to adapt their habits to new conditions of life. As he stated it:

He invents weapons, tools and various stratagems, by which he procures food and defends himself. When he migrates into a colder climate he uses clothes, builds sheds, and makes fires; and, by the aid of fire, cooks food otherwise indigestible. He aids his fellow-men in many ways, and anticipates future events . . . from the remotest times successful tribes have supplanted other tribes.2

 

Others have suggested that, besides those factors mentioned by Darwin, there was also a combination of selection pressures – climatic, ecological (e.g., hunting), and social – that influenced the evolution of the human brain and mind, and the evolution of what is now called general fluid intelligence (ibid., KL 695). The same source tells us that, with the help of new neural imaging technologies, “more than 100 years of empirical research on general intelligence has isolated those features of self-centered mental models – the conscious-psychological and cognitive components of the motivation to control – that are not strongly influenced by content and that enable explicit representations of symbolic information in working memory and an attention-dependent ability to manipulate this information in the service of strategic problem solving.” (KL 1268-1272) This ability to anticipate – to mentally time-travel, so to speak, simulating past, present, and future events – allowed better coordination of activities in social life, hunting, gathering, and the like. Strategy and anticipation: both keys to problem solving. And it was these specific environmental pressures that challenged these early humans to adapt and survive, to surmount problems in the climatic, ecological, and social realms that other animals did not need to encounter in the same way.

As Brassier states it, for Sellars the thing humans acquired according to the myth of Jones was intentionality, the ability to focus and direct the mind toward specific goals and purposes: what he would term the “propositional attitude of ascription” (Brassier, 5). He remarks that the primary component of the manifest image “is the notion of persons as loci of intentional agency” (Brassier, 6). Of course there is another school of thought that questions whether intentional states do indeed exist – whether such things as powers and dispositions are real entities (ontological) or just temporary functional processes (epistemological functions) within the ongoing decision-making layers of the brain itself, as connected to its interactions with various forms of memory. Below you can see a chart of the aspects of human memory:

Types of Human Memory: Diagram by Luke Mastin

As we know the brain is a hugely complex organ, with an estimated 100 billion neurons passing signals to each other via as many as 1,000 trillion synaptic connections. It continuously receives and analyzes sensory information, responding by controlling all bodily actions and functions. It is also the centre of higher-order thinking, learning and memory, and gives us the power to think, plan, speak, imagine, dream, reason and experience emotions.3

It was because of humans’ development of differing forms of memory, and their adaptation of both strategic and problem-solving capabilities, that the manifest image came about. This is why for Sellars the manifest image itself should be considered a type of ‘scientific image’ – one that, as Brassier remarks, is “correlational” as compared to the “postulational” character of our current framework of the scientific image. Ultimately what Sellars hoped to accomplish was not doing away with the manifest image, but rather a “properly stereoscopic integration of the manifest and scientific images such that the language of rational intention would come to enrich scientific theory so as to allow the latter to be directly wedded to human purposes” (Brassier, 6).

The big problem here is whether we truly have intentions at all. As Bruce Hood states it in his recent The Self Illusion: How the Social Brain Creates Identity:

My biases, my memories, my perceptions, and my thoughts are the interacting patterns of excitation and inhibition in my brain, and when the checks and balances are finally done, the resulting sums of all of these complex interactions are the decisions and the choices that I make. We are not aware of these influences because they are unconscious and so we feel that the decision has been arrived at independently—a problem that was recognized by the philosopher Spinoza when he wrote, “Men are mistaken in thinking themselves free; their opinion is made up of consciousness of their own actions, and ignorance of the causes by which they are determined.”4

What many neuroscientists are discovering is that most of us think we have intentions (“beliefs”, “emotions”, “desires”, etc.) because we are unaware of, and blind to, the actual layers of the brain that make and apply all these various decisions. Because of our blindness we invent subtle fictions and ascribe to these fictions an internal mapping, as if they actually existed as real entities, either ontologically or epistemologically. But as you will see below, this is an illusion of our supposed self-reflexive first-person perspective rather than a truth.

In the 1980s, Californian physiologist Benjamin Libet was working on the neural impulses that generate movements and motor acts. Prior to most voluntary motor acts, such as pushing a button with a finger, a spike of neural activity occurs in the brain’s motor cortex region that is responsible for producing the eventual movement of the finger. This is known as the readiness potential, and it is the forerunner to the cascade of brain activation that actually makes the finger move. Of course, in making a decision, we also experience a conscious intention or free will to initiate the act of pushing the button about a fifth of a second before we actually begin to press the button. But here’s the spooky thing. Libet demonstrated that there was a mismatch between when the readiness potential began and the point when the individual experienced the conscious intention to push the button.(Hood, pp. 127-128)

What they discovered was that our conscious intentions come after the fact: the deeper layers of the brain that actually make all these decisions and carry out these processes are folded behind an invisible curtain to which we, with our self-reflexive first-person-singular fiction of self, will never have direct access. When certain philosophers speak of the notion of a post-intentional philosophy, this is where they’re starting from. The idea that we have intentions, or that we make our own decisions, is a lot more complex than anything philosophy has had to deal with up to now, and some say it should not even try; that it is time to leave off from philosophy and let science do what it does best. I’ll not argue that point. For we still have the issues of the everyday use of the manifest image, and even Sellars knew that such a stereoscopic integration of manifest and scientific was hypothetical, not yet realized. But one thing is for sure: we will need to be attentive to what is happening in the sciences and be more open to integrating their findings into our contemporary forms of philosophy; otherwise we’ll be spinning tales for the babbling crowd rather than for the serious student of philosophy or science.


In my next post I’ll cover section 1.2 of Ray’s work, ‘The instrumentalization of the scientific image’.

1. Ray Brassier. Nihil Unbound – Enlightenment and Extinction. (Palgrave Macmillan, 2007)
2.  (2012-03-22). Foundations in Evolutionary Cognitive Neuroscience (Kindle Locations 690-693). Cambridge University Press. Kindle Edition.
3. see Memory and Brain: http://www.human-memory.net/brain.html. Also Memory [Stanford Encyclopedia of Philosophy] http://plato.stanford.edu/entries/memory/#MemCogSci
4. Hood, Bruce (2012-04-25). The Self Illusion: How the Social Brain Creates Identity (p. 122). Oxford University Press. Kindle Edition. [also – you might be interested in R. Scott Bakker’s conceptions of BBT of Blind Brain Theory: here]

Reza Negarestani: Navigating the Game of Truths

By entering the game of truths – that is, making sense of what is true and making it true – and approaching it as a rule-based game of navigation, philosophy opens up a new evolutionary vista for the transformation of the mind. 

– Reza Negarestani, Navigate With Extreme Prejudice 

Reza Negarestani, an Iranian philosopher who has contributed extensively to journals and anthologies and lectured at numerous international universities and institutes, has begun a new philosophical project focused on rationalist universalism, beginning with the evolution of the modern systems of knowledge and advancing toward contemporary philosophies of rationalism, their procedures, as well as their investment in new forms of normativity and epistemological control mechanisms. He recently teamed up with Guerino Mazzola, a Swiss mathematician, musicologist, and jazz pianist, as well as author and philosopher. Mazzola qualified as a professor of mathematics (1980) and of computational science (2003) at the University of Zürich.

On the Urbanomic blog I noticed a new entry: Deracinating Effect – Close Encounters of the Fourth Kind with Reason (see here). It appears that Reza and Guerino took part in a recent event in March named The Glass Bead Game, after the novel of that name by Hermann Hesse. It was organized by Glass Bead (Fabien Giraud, Jeremy Lecomte, Vincent Normand, Ida Soulard, Inigo Wilkins) and Composing Differences (curated by Virginie Bobin). Reza and Guerino both presented talks on philosophy, mathematics, games, and the paradigm of navigation.

I’ve been interested of late in Reza’s shift in tone and effect; his philosophical framework seems in the past few years to have undergone a mind-shift toward what he terms the ‘Paradigm of Navigation’. Doing a little research for this post I came upon his recent paper for the Speculations on Anonymous Materials Symposium, transcribed by Radman Vrbanek Arhitekti from the youtube.com video. In this essay he aligns himself with the history of systems theory, which grew out of a very rigid approach to engineering in the 19th century but has over the past 30 years unfolded into a new and completely different epistemology of matter and its intelligibility.

What he found to be different in these newer systems theories is that, against an architectural or engineering approach based on inputs/outputs, the new systems theorists had moved from an essentialist view of system dynamics toward a functionalist approach: the notion that it is the behavior, and the functional integration underlying that behavior – what these theorists termed the ‘functional organization’ of the system – that matters. He tells us this is important, saying:

This becomes important because functions… systematic or technical understanding of function is that functions are abstractly a realizable entities meaning that they can be abstracted from the content of their constitution. So a functional organization can emerge, it can be manipulated, it can get automated and it can actually gain a form of autonomy that developed not because of the constitution in which it was embedded but in spite of it. Hence, functions allows for an understanding of the system that is no longer tethered or chained to an idea of constitution.

At the heart of this new form of systems theory is the use of heuristics. It entails a move away from analytics and toward synthetics. The sense is that heuristics are not analytical devices, but rather are synthetic operators. As he states it:

They treat material as a problem. But they don’t break this problem into pieces. They transform this problem into new problem. And this is what the preservation of invariance is. Once you transform a problem by way of heuristics to a new problem, you basically eliminate so much of the fog around this problem that initially didn’t allow us to solve it.

In this sense one sees an almost Deleuzean turn in systems theory, for it was Deleuze who believed philosophy was about problems to be solved. In What is Philosophy? Deleuze and Guattari explain that only science is concerned with the value of claims and propositions; philosophy searches for solutions to problems rather than for truth. In this they were returning to Nietzsche, who told us he was waiting for those who would come, those philosophical physicians no longer concerned with truth but rather with something else:

I am still waiting for a philosophical physician in the exceptional sense of the term – someone who has set himself the task of pursuing the problem of the total health of a people, time, race or of humanity – to summon the courage at last to push my suspicion to its limit and risk the proposition: what was at stake in all philosophizing hitherto was not at all ‘truth’ but rather something else – let us say health, future, growth, power, life. . .(6)

–  Friedrich Nietzsche,  The Gay Science

But is this what Reza is seeking? We’ll return to this later. What Reza tells us in this essay is that heuristics, as a new tool, an apparatus, allow us to remove both the lower and upper boundaries of materiality: at the lower boundary, where the understanding of constitution and of fundamental assumptions or axiomatic conceptual behaviors exists; and at the upper boundary, where it basically turns materiality into a living hypothesis whose behavior can be expanded. Its evolution, i.e. its constructability, becomes part of the project of its self-realization. As he states it:

Hence, the understanding that the system is nothing but its behavior and behavior is a register of constructability – the same thing about materiality and how engineers approach materiality by way of heuristics – which is rooted in this new understanding of systematicity by way of understanding in in the sense of functions and behaviors.

In his essay The Glass Bead Game he throws down the gauntlet, telling us that by “simulating the truth of the mind as a navigational horizon, philosophy sets out the conditions for the emancipation of the mind from its contingently posited settings and limits of constructability”. Continuing, he says: “Philosophy’s ancient program for exploring the mind becomes inseparable from the exploration of possibilities for reconstructing and realizing the mind by different realizers and for different purposes.”

Of course, being the creature I am, I want to ask: I see talk of the Mind as if it were some autonomous entity in its own right, disconnected from both the body and its command system, the brain. So I ask: Where is the brain in all this discussion of emancipation and the limits of constructability? As Bakker on his blog keeps pounding away: “Reasoning is parochial through and through. The intuitions of universalism and autonomy that have convinced so many otherwise are the product of metacognitive illusions, artifacts of confusing the inability to intuit more dimensions of information, with sufficient entities and relations lacking those dimensions.”1 Reza’s notion of simulating the truth of the mind would entail information that we, as of yet, just do not have access to; in fact, because of medial neglect and the inability of second-order reflection ever to catch its own tail, we will never have access to it through intentional awareness. Instead we will have to rely not on philosophy but on the sciences (and especially the neurosciences) to provide both the understanding and the testable hypotheses before such experimental constructions and reconstructions could begin to become feasible as more than sheer fantasy.

We see just how much fantasy is involved in his next passage:

In liberating itself from its illusions of ineffability and irreproducible uniqueness, and by apprehending itself as an upgradable armamentarium of practices or abilities, the mind realizes itself as an expanding constructible edifice that effectuates a mind-only system. But this is a system that is no longer comprehensible within the traditional ambit of idealism, for it involves ‘mind’ not as a theoretical object but as a practical project of socio-historical wisdom or augmented general intelligence.

How is such a liberation from illusions of ineffability and irreproducible uniqueness to come about? And how can this apprehension come about? (Which can only mean a second-order self-reflexivity that, if Bakker in his Blind Brain Theory is correct, is subject to medial neglect – i.e., the way structural complicity, astronomical complexity, and evolutionary youth effectively render the brain unwittingly blind to itself.)

Be that as it may, what Reza is trying to do is remap a cognitive territory that has for more than a century been overlaid with certain scientistic mythologies. As he sees it, the mind is a “diversifiable set of abilities or practices whose deployment counts as what the mind is and what it does”. It is this ontological and pragmatic mixture of abstraction and decomposition that allows “philosophy … to envision itself as a veritable environment for an augmented nous precisely in the sense of a systematic experiment in mind simulation”. This turn toward a pragmatic-functionalist perspective, and the development of a philosophy of action and gestures rather than of contemplation and theory, is at the heart of a new movement toward Synthetic Category Theory in mathematics. Several philosophers seem to be at the center of this theory of the gesture: Guerino Mazzola, Fernando Zalamea, and Gilles Châtelet. Along with Alain Badiou these philosophers of math have changed the game and invented new paths forward for philosophy.

It’s as if this network of scientists, mathematicians, information specialists, geophilosophers, etc. were planning on reengineering society top-down and bottom-up. Of course the metaphor of the Glass Bead Game is almost opposite to the purpose of such an effort. The Glass Bead Game of Hermann Hesse’s Das Glasperlenspiel was a secularization of the communal systems of the medieval monasteries and their vast libraries. In the novel the hero practices a contemplative game of the Mind in which knowledge is grafted onto a strategy game of 3D projections in yearly contests among participants. These contemplative knowledge-bearers are excused from the menial life of work and allowed to pursue at their own discretion strange pursuits in knowledge. The whole thing goes against what Reza and his cohorts seek in their action-oriented pragmatic philosophy. It was Arendt herself who spoke of this division in philosophy between the ‘vita contemplativa’ and the ‘vita activa’ as a continuing battle running through the past two millennia of philosophy. Reza tips his hat toward the active stance.

It reminds me in some ways of the EU Onlife Initiative, which takes a look at ICTs. The deployment of information and communication technologies (ICTs) and their uptake by society radically affect the human condition, insofar as they modify our relationships to ourselves, to others, and to the world. These new social technologies are blurring the distinction between reality and virtuality; blurring the distinctions between human, machine, and nature; bringing about a reversal from information scarcity to information abundance; and shifting the primacy of entities to the primacy of interactions. As the Initiative sees it, the world is grasped by human minds through concepts: perception is necessarily mediated by concepts, as if they were the interfaces through which reality is experienced and interpreted. Concepts provide an understanding of surrounding realities and a means by which to apprehend them. However, the current conceptual toolbox is not fitted to address new ICT-related challenges, and this leads to negative projections about the future: we fear and reject what we fail to make sense of and give meaning to. In order to acknowledge this inadequacy and explore alternative conceptualisations, a group of scholars in anthropology, cognitive science, computer science, engineering, law, neuroscience, philosophy, political science, psychology and sociology instigated the Onlife Initiative, a collective thought exercise to explore the policy-relevant consequences of those changes. This concept-reengineering exercise seeks to inspire reflection on what happens to us and to re-envisage the future with greater confidence.

This new informational-philosophy approach seems to align well with Reza’s sense of philosophy establishing a “link between intelligence and modes of collectivization, in a way that liberation, organization and complexification of the latter implies new odysseys for the former, which is to say, intelligence and the evolution of the nous”. Ultimately Reza’s project hopes to break us out of the apathetic circle of critique and the theoretical spin bureaus of polarized idiocy that have entrapped us in useless debates, and to provide a new path forward: by “concurrently treating the mind as a vector of extreme abstraction and abstracting the mind into a set of social practices and conducts, philosophy gesticulates toward a particular and not yet fully comprehended event in the modern epoch – as opposed to traditional forms – of intelligence: The self-realization of intelligence coincides and is implicitly linked with the self-realization of social collectivity. The single most significant historical objective is then postulated as the activation and elaboration of this link between the two aforementioned dimensions of self-realization as ultimately one unified project”.

Next he tells us that the first task of philosophy is to locate an access point, a space of entry, to the universal landscape of logoi. I think this attends Sellars’s notion of the “space of reasons,” which describes the conceptual and behavioral web of language that humans use to get intelligently around their world, and denotes the fact that talk of reasons, epistemic justification, and intention is not the same as, and cannot necessarily be mapped onto, talk of causes and effects in the sense that physical science speaks of them. In this sense, as Reza tells it, the “landscape of logoi is captured as a revisable and expandable map of cascading inferential links and discursive pathways between topoi that make sense of truth through navigation”.

At the core of this new philosophical project is the ‘self-realization of intelligence’: (1) by pointing in and out of different epochs and activating the navigational links implicit in history; (2) by grasping intelligence as a collective enterprise and hence, drawing a complex continuity between collective self-realization and the self-realization of intelligence as such, in a fashion not dissimilar to the ethical program of an ‘all-encompassing self-construction bent on abolishing slavery’ articulated by the likes of Confucius, Socrates and Seneca.(ibid.)

The explicit hope of this philosophy is, according to Reza, that of keeping pace with intelligence, which implies that philosophy always reconstitutes what it was supposed to be.

I wonder if this sort of endeavor is doomed from the beginning. When one thinks of machine intelligence moving into the quantum era of ubiquitous computing, how will philosophy ever be able to keep pace with the vast amounts of processing power that will become available to these future AI entities?

Next he tells us that localization is the constitutive gesture of conception and the first move in navigating spaces of reason: ‘to localize’ means ‘to conceive’ – to organize homogeneous, quantitative information into qualitatively well-organized information-spaces endowed with different modalities of access. Obviously we must conceive of advanced computer simulation systems that allow almost rhizomatic access from anywhere in the world, with multiple entry points and departures. When we think about the new Zettabyte Era and the impact of dataglut, one realizes that even a team of philosophers would be hard pressed to sift through the datamix:

In 2003, researchers at Berkeley’s School of Information Management and Systems estimated that humanity had accumulated approximately 12 exabytes of data (1 exabyte corresponds to 10^18 bytes, or a 50,000-year-long video of DVD quality) in the course of its entire history until the commodification of computers.2 A zettabyte equals 1,000 exabytes.
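As a quick sanity check on the scales involved, the arithmetic above can be sketched in a few lines of Python (assuming SI decimal units, where 1 exabyte = 10^18 bytes and 1 zettabyte = 1,000 exabytes):

```python
# Data-scale arithmetic from the Berkeley estimate above (SI decimal units).
EXABYTE = 10**18            # bytes in one exabyte
ZETTABYTE = 1000 * EXABYTE  # bytes in one zettabyte

# Berkeley's 2003 estimate: ~12 exabytes accumulated over all of human history.
history_bytes = 12 * EXABYTE

# How many entire "recorded histories" fit into a single zettabyte?
print(ZETTABYTE // history_bytes)  # -> 83
```

In other words, a single zettabyte would hold humanity’s entire pre-computer record roughly eighty times over, which gives some sense of why no team of philosophers could hope to sift the datamix unaided.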

Tools will need to be developed, as well as new algorithms that can churn through such massive data, combining advanced simulations or automatons to filter out the noise and make smart choices or decisions on that data before passing it on to their human counterparts. Much as with the trillions of operations that go on in the human brain of which the average person is hardly aware – the decisional processes that run below the threshold of consciousness before we ever see an idea or notion arise – we are caught in the trap of believing we have enough information to make coherent and intelligent decisions based on the minimal data received at the end of that processing chain. We’re not. We are deluded into thinking we know what in fact we do not know. We make conscious decisions after the fact, and are usually motivated by dispositions and powers we do not even have access to.

Yet Reza would have us believe that there is a navigable “link between the rational agency and logoi through spaces of reason that marks the horizon of knowledge” (ibid.). When he speaks of ‘rational agency’, is this the human, the AI, the collective subjectivation? His notion of universality, which presents knowledge and by extension philosophy as platforms for breaking free from the supposedly necessary determinations of the local horizons in which the rational or advanced agency appears to be firmly anchored, seems to portend more issues and problems than it resolves. How does one break free of these local determinations? What would such a universal knowledge assume on a global scale? As he puts it, “without this unmooring effect, philosophy is incapable of examining any commitment beyond its local implications or envisaging the trajectory of reason outside of immediate resources of a local site”. So against all those microhistories and labors of the postmodern-era poststructuralists, we are to return to the beginning of the Enlightenment project, but with a twist: we shall have the new technologies of simulation at hand to empower this age of informational and rational governance and agency. As he calls it: “Philosophy proposes analytico-synthetic methods of wayfinding in what Robert Brandom describes as the rational system of commitments”.

But what of all those dark corners of the irrational that Freud, Lacan, Deleuze, and so many others discovered in the mind? What of that irrational core? We know that the neoliberal think-tanks that gave us Rational Choice Theory and the economics of the free market have led us into destruction; how much better will another rational system fare – even one from the Left?

He seems to understand the issues, saying:

Philosophy sees the action in the present in terms of destiny and ramifications, which is to say, based on the reality of time. It constructively adapts to an incoming and reverse arrow of time along which the current cognitive or practical commitment evolves in the shape of multiple future destinations re-entering the horizon of what has already taken place. Correspondingly, philosophy operates as a virtual machine for forecasting future commitments and presenting a blueprint for a necessary course of action or adaptation in accordance with a trajectory or trajectories extending in reverse from the future. It discursively sees into the future. In short, philosophy is a nomenclature for a universal simulation engine.

In fact it is inside this simulation engine that the self-actualization of reason is anticipated, the escape plan from localist myopias is hatched and the self-portrait of man drawn in sand is exposed to relentless waves of revision. In setting up the game of truths by way of giving functions of reason their own autonomy – in effect envisioning and practicing their automation – philosophy establishes itself as the paradigm of the Next (computational) Machine, back from the future.(ibid.)

But why philosophy? Why not the neurosciences, which actually deal with the inner workings not of the Mind but of the brain? Will philosophy ever acknowledge that the sciences must play a great part in the coming information age? Or will it continue to go blindly down its own intentional path, directing its own blind goals without a true knowledge of things as they are? With the advent of the NBIC technologies (Nanotech, Biotech, InfoTech, CognitiveTech) and the Information and Communications Technologies (ICTs), we have already moved beyond recourse to much of what philosophy can say. Many, like Luciano Floridi and his team, have already entered this information age, leaving behind much of the intentional drift of phenomenology, idealism, and materialism as they derive certain informational structural realisms and ontologies for a path forward. Only time will tell if Reza and his cohorts do the same… I have much to catch up on, and probably need more data on Reza and his cohorts’ efforts to truly make a definitive judgment, so I’ll refrain from such problematique statements.

This is a commendable project and one that we should continue to look into and keep an eye on over the coming months and years. I would only ask that Reza and these mathematicians begin extending their borders into the sciences of the brain, as well as into many of the new developments transpiring on the Continent in the information-philosophy fields. I still have questions about his reliance on Brandomian normativity, since it is a fallback to retrograde intentionalism rather than a move toward a post-intentional worldview. My hope is that he will look long and hard at other alternatives and begin to question the very notions of ‘intentionality’ and ‘directedness’ as outmoded tools of a phenomenological perspective that needs recasting in the light of the new sciences and philosophies.

—————

*appending the youtube.com video by Guerino Mazzola Melting the Glass Beads – The Multiverse Game of Strings and Gestures:

1. R. Scott Bakker. (see The Blind Mechanic)
2. Floridi, Luciano (2010-02-25). Information: A Very Short Introduction (Very Short Introductions) (p. 6). Oxford University Press. Kindle Edition.

Technocapitalism: War Machines and the Immortalist Imperative

   Having recognized religious doctrines to be illusions, we are at once confronted with the further question: may not other cultural possessions, which we esteem highly and by which we let our life be ruled, be of a similar nature?

– Sigmund Freud, The Future of an Illusion

Psychical researchers, supported by some of the leading figures of the day, believed immortality might be a demonstrable fact. The seances that were so popular at this time were not just Victorian parlour games invented to while away dreary evenings. They were part of an anxious, at times desperate, search for meaning in life…

– from John Gray, The Immortalization Commission: Science and the Strange Quest to Cheat Death

The longer one peers into the depths of human history the more one wants to escape it. From the outside history seems one long saga of tyranny, violence, and coercion, while from the inside it seems – as Macbeth suggested – a 'tale told by an idiot, full of sound and fury, signifying nothing'. Yet, we go on. Why? Why do we continue? Why persist? Out of habit? Because we think there must be something better? That if I just try a little harder I can make a go of it, create a happy, healthy life for myself and my children in the midst of all this death and decay? But at whose expense? Isn't my chance in the sun at the expense of some other poor soul (me being a white Caucasian male living in a first world country with a basic living wage, etc.)? And, by poor soul I mean all those millions of beings, other humans living not only in poverty but in degradation, decay, and ultimately uninhabitable spaces of death. Why should my life be supposedly blessed with such amenities? Why do I exist in circumstances that give me opportunities to even think about such things?


The Rise of the Machines: Brandom, Negarestani, and Bakker

Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated in nested levels of superordinate machinery.

– R. Scott Bakker, The Blind Mechanic

Ants that encounter in their path a dead philosopher may make good use of him.

– Stanislaw Lem, His Master’s Voice 

We can imagine that in some near future my friend R. Scott Bakker will be brought to trial before a tribunal of the very philosophers against whom he has for so long sung his jeremiads of ignorance and blindness; or, as he puts it, 'medial neglect' (i.e., "Medial neglect simply means the brain cannot cognize itself as a brain"). One need only remember that old nabi of the desert, Jeremiah, and God's prognostications: Attack you they will, overcome you they can't… And, like Jeremiah's attackers, these philosophers will come at him from every philosophical angle but will be unable to overcome his scientific tenacity.


Stanislas Dehaene: Global Neuronal Workspace Hypothesis

We have discovered signatures of conscious processing, but what do they mean? Why do they occur? We have reached the point where we need a theory to explain how subjective introspection relates to objective measurements.

–  Stanislas Dehaene,  Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts

When did reality leave off and fiction begin? I never did get the memo on that. Maybe that's the problem with us all, if it is a problem – no one ever told us it had happened, or if they had, it was somehow lost in translation long ago. So it goes. T.S. Eliot once stipulated that people couldn't bear "too much reality", but he never told us that there might be a further problem, the one I'm facing now: it's not reality, but too much fiction that has become the issue. I mean, we keep getting messages from the media moguls about Reality TV. Sure, but whose reality? I keep thinking that reality must be in there somewhere: but where is where? Is this a problem of space or time, or maybe – spacetime? I never could get those figured out either.

What to do? There are those who tell us we need to inquire into the nature of being – as if that were somehow the magic key to reality, as if we could finally discover the truth about life, the universe, and everything if we could only grasp existence as it is (i.e., in the parlance of metaphysics: being qua being – being insofar as it is being, or beings insofar as they exist). But then I wondered: What does it mean for a thing to exist? That's when I stopped thinking about things and existence and realized we'd never have access to such knowledge, for the simple reason that language is incapable of reaching beyond itself, much less of describing things or existence, whether that language is natural or, as latter-day philosophers and scientists presume, mathematical. All one is doing is manipulating signs that point to things and existence, rather than giving us those things as they are in themselves. But that was the point, right? There are those who say things do not exist until we construct them, that reality is a model the mind constructs out of its own manipulation of those very symbols of natural and mathematical language. These philosophers tell us that there is a midpoint between things and mind where reality becomes reality-for-us in a new object, or concept. For these philosophers it is the concept that ties mind and reality together in a communicative act of solidarity, so that if we create effective concepts we can all share in the truth of this reality-for-us. Reality, then, is a shared realm of meaning between certain minds as they negotiate the unknown realm of being.

Now I'm no philosopher, but I am a creature who has read a lot of philosophy and has come, like Socrates before me, to the realization that what I know is that I don't know much of anything. But what is the knowing and unknowing that I don't know? When we speak of knowing something, what do we mean? What is knowing? I thought for this exercise I'd begin with a Wiki:

Knowledge is a familiarity, awareness or understanding of someone or something, such as facts, information, descriptions, or skills, which is acquired through experience or education by perceiving, discovering, or learning. Knowledge can refer to a theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject); it can be more or less formal or systematic. In philosophy, the study of knowledge is called epistemology; the philosopher Plato famously defined knowledge as “justified true belief“. However, no single agreed upon definition of knowledge exists, though there are numerous theories to explain it.

Knowledge acquisition involves complex cognitive processes: perception, communication, association and reasoning; while knowledge is also said to be related to the capacity of acknowledgment in human beings. (Knowledge)

Well, that's a lot of information on knowledge, and it is ultimately frustrating in that we discover that no one really knows what it is – or at least there is no agreement among those who should know as to what knowledge is. Yet, as we see above, it gives us some hints. And the biggest hint is that it seems to be connected to the Mind or, as that one sentence stipulates, the cognitive processes. This leads us to that three-pound lump of neurons and biochemical mass in our skull we call the Brain. But, we ask: Can the brain speak for itself? How can we inquire into the nature of the brain and its processes when the very tool we use to inquire with is itself the cognitive process of consciousness? Can consciousness grab its own tail? Can it see itself in the act of seeing? Isn't consciousness by its very nature always directed toward something, intentional by its very nature? If it can only ever process that which is outside itself – its environment – then how can it ever understand or know itself? Consciousness is no ouroboros, even if we speak about self-reflexivity to doomsday.

In my epigraph Stanislas Dehaene comes to a point in his brain book where our need to peer into the mysteries of the brain demands that we bring together our self-reflexive subjective notions and our objective, quantified scientific knowledge. But isn't that the crux of the problem? Can we ever bring those disparate worlds together? Of course Dehaene thinks we can, and he has spent fifteen years inventing, through trial and error, a set of protocols to do just that. As he tells us, he proposes the "global neuronal workspace" hypothesis, "my laboratory's fifteen-year effort to make sense of consciousness."1 Now just what exactly is a "global neuronal workspace"? In my mind I picture a Rube Goldberg contraption of strange electrodes, miles of cable, and computers emulating the brain's processes, all in some fantastic Frankenstein laboratory with a brain in a vat connected to an electromagnetic imaging processor projected upon a cinematic 3-D screen. Of course this is all fantasy, and the truth is more concrete:

The human brain has developed efficient long-distance networks, particularly in the prefrontal cortex, to select relevant information and disseminate it throughout the brain. Consciousness is an evolved device that allows us to attend to a piece of information and keep it active within this broadcasting system. Once the information is conscious, it can be flexibly routed to other areas according to our current goals. Thus we can name it, evaluate it, memorize it, or use it to plan the future. Computer simulations of neural networks show that the global neuronal workspace hypothesis generates precisely the signatures that we see in experimental brain recordings. It can also explain why vast amounts of knowledge remain inaccessible to our consciousness. (Dehaene, KL 2711-2716)

Ah, there we go: so that's the reason we are, as Socrates told us, blind as bats, unknowing of the little we know, or even think that we know. Why? Because our brains function differently than that, and knowledge is not one of their strong points – at least for that part of the brain we call self-reflexive consciousness. We do not have access to "vast amounts of knowledge", not because the knowledge does not exist, but because our consciousness was configured by the brain to do other things, like attending to specific temporary bits of information and acting as a regulatory device within a larger broadcasting system. One needs also to recognize that there is a subtle difference between knowledge per se and information. Consciousness has access to bits of information fed to it by other processes within the brain. Now Dehaene tries to bring in intentionality with a notion of "current goals" and our ability to "name it, evaluate it, memorize it, or use it to plan the future". But is this true? Do we truly have goals? Or do the goals have us? What I mean is: is consciousness the one that has intent, or a telos – a sense of directional or goal-oriented finality? Does consciousness actually have the ability to name, evaluate, and memorize for future recall and use? Or is this, too, an illusion of consciousness?
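As a caricature only – not Dehaene's actual model, and with every name and score invented for illustration – the selection-and-broadcast logic he describes can be sketched in a few lines: specialized processors compete unconsciously, the candidate most relevant to the current goal wins access, and the winner is broadcast back to every processor as the "conscious" content.

```python
import random

class Processor:
    """A hypothetical specialized module (vision, memory, etc.)."""
    def __init__(self, name):
        self.name = name
        self.received = []  # broadcasts this processor has been sent

    def propose(self, goal):
        # Salience score: random noise plus a bonus for goal relevance.
        # (The bonus is an invented stand-in for "relevance to our present goals".)
        return (self.name, random.random() + (1.0 if self.name == goal else 0.0))

    def receive(self, content):
        self.received.append(content)

def workspace_cycle(processors, goal):
    # Unconscious competition: every processor proposes a representation.
    candidates = [p.propose(goal) for p in processors]
    # One winner gains global access...
    winner = max(candidates, key=lambda c: c[1])
    # ...and is broadcast to all processors: the "globally available" content.
    for p in processors:
        p.receive(winner[0])
    return winner[0]

procs = [Processor(n) for n in ("vision", "audition", "memory", "language")]
conscious = workspace_cycle(procs, goal="language")
print(conscious)  # the goal-relevant processor wins, since its bonus outweighs the noise
```

The toy makes the critique below concrete: nothing in the code is an agent; "selection" is just a `max` over scores, and the `goal` is handed in from outside rather than intended by anything.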

We've come a long way over the past fifteen years or so toward an understanding of the brain, but have we truly been able to bridge the gap between our knowledge of the brain's processes and our understanding of just why those processes create consciousness to begin with? As Dehaene comments: "Although neuroscience has identified many empirical correspondences between brain activity and mental life, the conceptual chasm between brain and mind seems as broad as it ever was." The first thing I notice in his statement is this dichotomy between brain activity and mental life, as if brain and mind were two distinct things. But is this true? Is there some dualism between brain and mind? Does the mind in itself exist in some transcendent sphere beyond the brain? How are the two connected? Does the mind even exist? Is this notion of a separate mental activity an illusion of our self-reflexive consciousness? What if consciousness is continuous with the brain's activities? What if it were just a specialized function of the brain itself, not some special entity in its own right? What if we are still bound to older theological notions of Self, Identity, Consciousness, Mind, Soul, etc. that no longer hold water, no longer answer the questions posed by these physical processes? What if the physical processes of the brain were all continuous with each other, and consciousness just one function within a myriad of other ongoing processes that are neither permanent nor stable, but rather continuously rise and fall, fluctuate and disperse as needed in the flow of the brain's own ongoing activities? Why this need for a dualism of Brain and Mind?

Dehaene himself sees the problem but seems to continue its discussion as if he too were blind to its illusion:

In the absence of an explicit theory, the contemporary search for the neural correlates of consciousness may seem as vain as Descartes’s ancient proposal that the pineal gland is the seat of the soul. This hypothesis seems deficient because it upholds the very division that a theory of consciousness is supposed to resolve: the intuitive idea that the neural and the mental belong to entirely different realms. The mere observation of a systematic relationship between these two domains cannot suffice. What is required is an overarching theoretical framework, a set of bridging laws that thoroughly explain how mental events relate to brain activity patterns.

Neural correlates tips the hand. With that one statement we fall back into a dualistic, Cartesian approach. But as he realizes, this approach to consciousness constructs a division between the two realms of brain and consciousness, as if the neural processes and mental processes were of a different order of being. Yet he proposes a framework, a set of bridging laws, to "explain how mental events relate to brain activity patterns". Hmm… isn't this still to fall into that same trap? All he's done is rearrange the words: from neural and mental to events and patterns. But why do we need such a framework to begin with? Is there really some difference between a pattern and its event? Are not the two one and the same, continuous? Is there a reason to see a separation where none may in fact exist? He goes on – mistakenly, I think:

No experiment will ever show how the hundred billion neurons in the human brain fire at the moment of conscious perception. Only mathematical theory can explain how the mental reduces to the neural. Neuroscience needs a series of bridging laws, analogous to the Maxwell-Boltzmann theory of gases, that connect one domain with the other. … In spite of these difficulties, in the past fifteen years, my colleagues Jean-Pierre Changeux, Lionel Naccache, and I have started to bridge the gap. We have sketched a specific theory of consciousness, the "global neuronal workspace," that is the condensed synthesis of sixty years of psychological modeling. (Kindle Locations 2743-2745)

I think his approach, personally, is all wrongheaded. I do not think any computer model or mathematical model will ever bridge the gap between the one domain and the other, for the simple reason that there is no separate domain to bridge. I'll have to come back to that at a future time. My reasoning has to do with all the new techniques already available that are being used to study the brain's activities with much effect: Electroconvulsive Therapy (ECT), Transcranial Magnetic Stimulation (TMS), Electronic Brain Stimulation (ESB), brain implants, Deep Brain Stimulation (DBS), Vagus Nerve Stimulation (VNS), Transcutaneous Electrical Nerve Stimulation (TENS), Transcranial Direct Current Stimulation (tDCS), Magnetic Seizure Therapy (MST), and psychotherapy, pharmaceutical, and biopower medical applications, etc. Through these we can map the brain's activity precisely, right down to the decision-making processes. So why do we need some grand theoretical framework to describe some mapping of brain to mind? Is this a regression to outmoded forms of philosophical prejudice and the intentionality that has for so long held us in its clutches? Isn't it time to release ourselves from the intentional universe of philosophical speculation, from tying the mind to consciousness in some elaborate mapping, as if that would describe anything at all rather than just being an exercise in complexification?

I mean, listen to how complicated it gets when Dehaene begins trying to philosophize about this new framework:

When we say that we are aware of a certain piece of information, what we mean is just this: the information has entered into a specific storage area that makes it available to the rest of the brain. Among the millions of mental representations that constantly crisscross our brains in an unconscious manner, one is selected because of its relevance to our present goals. Consciousness makes it globally available to all our high-level decision systems. We possess a mental router, an evolved architecture for extracting relevant information and dispatching it. The psychologist Bernard Baars calls it a “global workspace”: an internal system, detached from the outside world, that allows us to freely entertain our private mental images and to spread them across the mind’s vast array of specialized processors. (Kindle Locations 2749-2755).

If we carefully follow the logic of the above we see an underlying intentionality written into its less than adequate descriptions. First is the notion that we can "mean" something, as if we can explain information bound to a specific storage area in the brain that can then be retrieved. None of this is actually visible or explanatory of the actual processes; it is rather a human description or construction after the fact of those processes, for our delectation. Obviously we have no choice but to use natural language to try to explain things, but for us to say this is what information means? He then tells us that this stored information is part of what we term mental representations, and that consciousness is never aware of all these bits of knowledge and information, but only of those selected due to their "relevance to our present goals". But one asks: who intends the selection and the goals? Who presides over what he, following Bernard Baars, terms the "global workspace"? The conscious system seems to be this internal router spreading images across the "mind's vast array of specialized processors". This sentence spells out the whole intentional fallacy: as if consciousness were a free intentional entity in its own right that could actively and intentionally make its own decisions between the brain and the outside environment, work with its own internalized set of mental images, then send them down into the brain for processing.

Again, I ask: is this true? It sounds like he is trying to slip the notions of Self and Subjectivity back into the equation without naming them as the active agent, instead reducing self and subjectivity to Consciousness as the Agent between Brain and Environment. Either way, I think there is something too complex in this move, and that whatever consciousness is, it is not some active agent in its own right, but rather a bit player in a temporary stage play of the brain's ongoing productions. Consciousness, rather than being like some unruly Hamlet strutting across the stage, is more like his friend Horatio, whom no one ever sees but who rather sees all anonymously, without intent, fully impersonal and disinterested. Consciousness comes and goes at the behest of the brain's own physical needs and processes, and when not needed it is sent to sleep, or withdraws until called out to effect the brain's decisions.

 

1. Dehaene, Stanislas (2014-01-30). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts (Kindle Location 2710). Penguin Group US. Kindle Edition.

Posthumanism and Transhumanism: The Myth of Perfectibility – Divergent Worlds?

History is a nightmare from which I am trying to awake.

– James Joyce

Enhancement. Why shouldn’t we make ourselves better than we are now? We’re incomplete. Why leave something as fabulous as life up to chance?

– Richard Powers,  Generosity: An Enhancement

In Thomas Pynchon's Gravity's Rainbow a point is reached in the text at which the inexorable power of an accelerating capitalism is shown spinning out of control, mutating into something else, something not quite human:

The War needs electricity. It's a lively game, Electric Monopoly, among the power companies, the Central Electricity Board, and other War agencies, to keep Grid Time synchronized with Greenwich Mean Time. In the night, the deepest concrete wells of night, dynamos whose locations are classified spin faster, and so, responding, the clock-hands next to all the old, sleepless eyes—gathering in their minutes whining, pitching higher toward the vertigo of a siren. It is the Night's Mad Carnival. There is merriment under the shadows of the minute-hands. Hysteria in the pale faces between the numerals. The power companies speak of loads, war-drains so vast the clocks will slow again unless this nighttime march is stolen, but the loads expected daily do not occur, and the Grid runs inching ever faster, and the old faces turn to the clock faces, thinking plot, and the numbers go whirling toward the Nativity, a violence, a nova of heart that will turn us all, change us forever to the very forgotten roots of who we are.1

This notion of a violent nativity, of giving birth to something that is both new and as old as the very "forgotten roots of who we are", seems appropriate to our time of accelerating impossibilities. We who are atheists seem to visualize some secular apocalypse of the semantic: a breaking of the bonds of the Anthropocene era, a bridging of the gap, a great crossing of some inevitable Rubicon of the inhuman within us into something post-human, something strange and almost unthinkable. Yet, as we study our religious forebears we notice a paradox, a sort of literalization of the Christian mythos of the perfectibility of Man, the veritable myth of a New Adam in the making. But whereas the church-going population saw this as a release from embodiment, a shift into transcendence of spirit, our new atheistic or secular priests of posthumanism and/or transhumanism see it as an immanent change within the very condition of the human animal itself.

The idea of the perfectibility of man emerges in the 18th century, with the relaxation of the theological barriers protecting the property for God alone. In Enlightenment writers such as Condorcet and Godwin, perfectibility becomes a tendency actually capable of being realized in human history. Before Kant, both Rousseau and the Scottish thinker Lord Monboddo (1714–99) envisaged perfectibility as the power of self-rule and moral progress. The 19th century represented the high-water mark of belief in perfectibility, under the influence first of Saint-Simon, then Kant, Hegel, Comte and Marx. With the arrival of the theory of evolution it was possible to see successive economic and cultural history as a progress of increasing fitness, from primitive and undeveloped states to a potential ideal associated with freedom and self-fulfilment. This optimism, frequently allied with unlimited confidence in the bettering of the human condition through the advance of science, has taken on a new twist in the pseudo-science of Transhumanism.2

Abraham Maslow, the central figure in "third force" psychology, was one of the first to use the term "transhuman" to describe a new form of secular religion of peak experiences. Maslow described peak experiences as very like orgasms: "the peak experience is temporary, essentially delightful, potentially creative, and imbued with profound metaphysical possibilities." One cannot live on such peaks but, he insisted, a life without them is unhealthy, nihilistic and potentially violent. The peak experience sat at the summit of a pyramid built on a hierarchy of psychological and physiological needs. At the base of the pyramid was food, shelter, sleep; above that came sexuality, safety and security; above that, love, belonging, self-esteem; and finally, at the peak itself, self-actualization. This last state was regarded as spiritual but in no way religious. One of the achievements of a peak experience, Maslow thought, was that people became more democratic, more generous, more open, less closed and selfish, achieving what he called a "transpersonal" or "transhuman" realm of consciousness. He had the idea of a "non-institutionalized personal religion" that "would obliterate the distinction between the sacred and the profane"—rather like the meditation exercises of Zen monks, whom he compared to humanistic psychologists. Maslow's idols in this were William James and Walt Whitman.3

George Bernard Shaw, a Fabian socialist, along with H.G. Wells, affirmed a view of the perfectibility of human nature. Shaw once stated that the "end of human existence is not to be 'good' and be rewarded in heaven, but to create Heaven on earth." As he wrote to Lady Gregory: "My doctrine is that God proceeds by the method of 'trial and error.' . . . To me the sole hope of human salvation lies in teaching Man to regard himself as an experiment in the realization of God." (Watson, KL 1959) Shaw, much like Quentin Meillassoux in our own time, also espoused the notion of an inexistent God – the god that does not yet exist but might. Shaw wrote to Tolstoy in 1910: "To me God does not yet exist. . . . The current theory that God already exists in perfection involves the belief that God deliberately created something lower than Himself. . . . To my mind, unless we conceive God as engaged in a continual struggle to surpass himself . . . we are conceiving nothing better than an omnipotent snob." (Watson, KL 1930) Notions of perfectibility, good, and progress were all fused in Shaw into the idea of never-ending improvement, in which the "good" is a process of endless improvement "that need never stop and is never complete."

For Wells, on the other hand, improvement, good, and progress were conceived within the tradition of "perfectibility" not in a theological way, but as a three-pronged process: perfectibility of the individual, but within the greater structures of the state and of the race. As he stated it:

The continuation of the species, and the acceptance of the duties that go with it, must rank as the highest of all goals; and if they are not so ranked, it is the fault of others in the state who downgraded them for their own purposes. . . . We live in the world as it is and not as it should be. . . . The normal modern married woman has to make the best of a bad position, to do her best under the old conditions, to live as though [as if] she were under the new conditions, to make good citizens, to give her spare energies as far as she can to bringing about a better state of affairs. Like the private property owner and the official in a privately conducted business, her best method of conduct is to consider herself [as if she were] an unrecognized public official, irregularly commanded and improperly paid. There is no good in flagrant rebellion. She has to study her particular circumstances and make what good she can out of them, keeping her face towards the coming time. . . . We have to be wise as well as loyal; discretion itself is loyalty to the coming state. . . . We live for experience and the race; the individual interludes are just helps to that; the warm inn in which we lovers met and refreshed was but a halt on the journey. When we have loved to the intensest point we have done our best with each other. To keep to that image of the inn, we must not sit overlong at our wine beside the fire. We must go on to new experiences and new adventures. (Watson, KL 2566)

John Passmore in his classic study The Perfectibility of Man begins by distinguishing between "technical perfection" and the perfectibility of a human being. As Harold Coward points out, following Passmore, technical perfection occurs when a person is deemed to be excellent or perfect at performing a particular task or role. In this sense we may talk about a perfect secretary, lawyer, or accountant, suggesting that such persons achieve the highest possible standards in their professional work. But this does not imply that they are perfect in their performance of the other tasks and roles of life. Passmore points out that Plato in his Republic allows for technical perfection by allocating to each person that task to perform in which the person's talents and skills will enable a perfect performance of the task. But that same person might be a failure as a parent; and so, in Plato's Republic he or she would not be allowed to be a parent. The parent role would be reserved for someone else whose talents enabled him or her to perfectly perform the task of raising children. But Plato distinguishes such technical perfection from the perfection of human nature evidenced by the special class of persons who are rulers of the Republic. These "philosopher-kings," as he calls them, are not perfect because they rule perfectly; they are perfect because they have seen "the form of the good" and rule in accordance with it. Passmore comments, "in the end, the whole structure of Plato's republic rests on there being a variety of perfection over and above technical perfection – a perfection which consists in, or arises out of, man's relationship to the ideal." Passmore goes on to point out that other Western thinkers including Luther, Calvin, and Duns Scotus follow Plato in talking about technical perfection in terms of one's vocation or calling.
But the perfecting of oneself in the performance of the role in life to which one is called is not sufficient by itself to ensure one’s perfection as a human being.4

Plato, by introducing the idea of a metaphysical good as the ideal to be achieved, also evoked the idea of evil, or the lack of good, and the tension between the two. These are related to the terms "perfect" or "perfection" in the sense of an end or goal that is completed (the Greek telos [end], and the Latin perficere [to complete]). Thus, human nature attempts to perfect itself by actualizing the end (the "good," in Plato's thought) that is inherent in it. In so doing it "completes" itself. (Coward, KL 124) Peter Watson in his The Age of Atheists wonders at such notions of good, perfection, progress, telos, etc., asking: "Is the very idea of completion, wholeness, perfectibility, oneness, misleading or even diverting? Does the longing for completion imply a completion that isn't in fact available? Is this our predicament?" (Watson, 545)

Vernor Vinge in his now classic The Coming Technological Singularity gave his own answer to this question, saying:

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.5

Vinge brought to fruition many of the ideas of the good from Plato to David Pearce. Illah R. Nourbakhsh, commenting on David Pearce's The Biointelligence Explosion, tells us that Pearce sets up an antihero to the artificial superintelligence scenario, proposing that our wetware will shortly become so well understood, and so completely modifiable, that personal bio-hacking will collapse the very act of procreation into a dizzying tribute to the ego. Instead of producing children as our legacy, we will modify our own selves, leaving natural selection in the dust by changing our personal genetic makeup in the most extremely personal form of creative hacking imaginable. But just like the AI singularitarians, Pearce dreams of a future in which the new and its ancestor are unrecognizably different. Regular humans have depression, poor tolerance for drugs, and, let's face it, mediocre social, emotional and technical intelligence. Full-Spectrum Superintelligences will have perfect limbic mood control, infinite self-inflicted hijacking of chemical pathways, and so much intelligence as to achieve omniscience bordering on Godliness.6

In this same work, Singularity Hypotheses: A Scientific and Philosophical Assessment, Dr. David Roden of the Enemy Industry blog stipulates his Disconnection Thesis. Part of his wider Speculative Posthumanist stance, this thesis provides the basic tenets that the “descendants of current humans could cease to be human by virtue of a history of technical alteration”; and the notion of a “relationship between humans and posthumans as a historical successor relation: descent.” (Singularity, KL 7390) At the heart of this thesis is the notion “that human-posthuman difference be understood as a concrete disconnection between individuals rather than as an abstract relation between essences or kinds. This anti-essentialist model will allow us to specify the circumstances under which accounting would be possible.” (Singularity, KL 7397) In acknowledgment of Vinge, Roden states that “Vinge considers the possibility that disconnection between posthumans and humans may occur as a result of differences in the cognitive powers of budding posthumans rendering them incomprehensible and uninterpretable for baseline humans” (Singularity, 7554).

There seems to be a fine line between certain posthumanist theorists and transhumanist theorists. Where they converge is in the notion of progress, improvement, and perfectibility of human nature. On the one hand we see the enactment of a total divergence, or transcension: a disconnect between current embodied natural humans (i.e., you and I) and those that will become our descendants – our posthuman descendants – the yet to be. Yet the line of difference is more one of nuance than of substance. Posthumanists seem to seek a transformation to another order of being, a surpassing of the human into the inhuman/posthuman order of being; while the transhumanists seek a new inclusion of existing humanity in an enhanced order of being in which immortality, rather than the perfectibility of the human condition, is the central telos. Transhumanists find little point in living forever in old bodies, however, even in bodies that remain healthy. So in addition to being immortal, they want humans to engineer themselves to be forever young. Ray Kurzweil, for example, is counting on cloning and stem cells to do the trick, the same technologies that John Harris wants to employ to eliminate the diseases of old age. Our bodies will be rejuvenated, says Kurzweil, “by transforming your skin cells into youthful versions of every other cell type.”7

Secularist dreams of immortality seem more like those of religionists without a religion, a sort of philosophical humbug trip for disgruntled atheists to wonderland without the need to pay the ticket to Charon. Behind the whole drama of transhuman science lie the century-old notions of eugenics. The eugenic goals, which had informed the design of the molecular biology program and had been attenuated by the lessons of the Holocaust, were revived by the late 1950s. Dredged from the linguistic quagmire of social control, a new eugenics, empowered by representations of life supplied by the new biology, came to rest in safety on the high ground of medical discourse and the latter-day rhetoric of population control.8 But the shadow of eugenics has for the most part been erased from our memories. One must be reminded that the original holocaust was part of the progressive movement in medicine within the United States, not Germany:

The goal was to immediately sterilize fourteen million people in the United States and millions more worldwide-the “lower tenth”-and then continuously eradicate the remaining lowest tenth until only a pure Nordic super race remained. Ultimately, some 60,000 Americans were coercively sterilized and the total is probably much higher. No one knows how many marriages were thwarted by state felony statutes. Although much of the persecution was simply racism, ethnic hatred and academic elitism, eugenics wore the mantle of respectable science to mask its true character.9

Many might think this is a thing of the past, but they would be wrong. Eugenics no longer hides in plain sight under the rubric of some moral or progressive creed of eliminating a particular germ line from the human stock. It now hides itself in other guises. One need only seek out such new worlds as the Personal Genome Project (http://www.personalgenomes.org/), dedicated to what on the surface appears to be a perfectly great notion of health: “Sharing data is critical to scientific progress, but has been hampered by traditional research practices—our approach is to invite willing participants to publicly share their personal data for the greater good.” But such notions were already in place a century ago in the credo of one of the leaders of the eugenics movement, Charles Davenport:

  • “I believe in striving to raise the human race to the highest plane of social organization, of cooperative work and of effective endeavor.”
  • “I believe that I am the trustee of the germ plasm that I carry; that this has been passed on to me through thousands of generations before me; and that I betray the trust if (that germ plasm being good) I so act as to jeopardize it, with its excellent possibilities, or, from motives of personal convenience, to unduly limit offspring.”
  • “I believe that, having made our choice in marriage carefully, we, the married pair, should seek to have 4 to 6 children in order that our carefully selected germ plasm shall be reproduced in adequate degree and that this preferred stock shall not be swamped by that less carefully selected.”
  • “I believe in such a selection of immigrants as shall not tend to adulterate our national germ plasm with socially unfit traits.”
  • “I believe in repressing my instincts when to follow them would injure the next generation.”10

From the older form of sharing one’s “germ plasm” to the new terms of sharing one’s “personal genome,” we’ve seen a complete transformation of the eugenics movement as the sciences moved from early Mendelian genetics to mid-twentieth-century molecular genetics to our current multi-billion-dollar Human Genome Project. But the base science of germ-line genetics remains the same, and the whole complex of hereditarianism along with it. The reason for the new book Davenport’s Dream, which includes a facsimile of Davenport’s original educational textbook Heredity in Relation to Eugenics, is stated by the Cold Spring Harbor editors as:

…the most compelling reason for bringing Davenport’s book once again to public attention is our observation that although the eugenics plan of action advocated by Davenport and many of his contemporaries has long been rejected, the problems that they sought to ameliorate and the moral and ethical choices highlighted by the eugenics movement remain a source of public interest and a cautious scientific inquiry, fueled in recent years by the sequencing of the human genome and the consequent revitalization of human genetics.

When Mendel’s laws reappeared in 1900, Davenport believed he had finally been touched by the elusive but simple biological truth governing the flocks, fields and the family of man. He once preached abrasively, “I may say that the principles of heredity are the same in man and hogs and sun-flowers.” Enforcing Mendelian laws along racial lines, allowing the superior to thrive and the unfit to disappear, would create a new superior race. A colleague of Davenport’s remembered him passionately shaking as he chanted a mantra in favor of better genetic material: “Protoplasm. We want more protoplasm!” (Black, KL 1053) Redirecting human evolution had been a personal mission of Davenport’s for years, long before he heard of Mendel’s laws. He first advocated a human heredity project in 1897 when he addressed a group of naturalists, proposing a large farm for preliminary animal breeding experiments. Davenport called such a project “immensely important.” (Black, 1068)

In our own time this notion of redirecting evolution is termed “transhumanism”. In section eight of the Transhumanist Declaration one will find: “We favor morphological freedom – the right to modify and enhance one’s body, cognition, and emotions. This freedom includes the right to use or not to use techniques and technologies to extend life, preserve the self through cryonics, uploading, and other means, and to choose further modifications and enhancements.”11 This freedom would also include the use of the latest biogenetic and neuroscientific technologies to transform or enhance humanity. As one proponent of this new morphological freedom put it:

Given current social and technological trends issues relating to morphological freedom will become increasingly relevant over the next decades. In order to gain the most from new technology and guide it in beneficial directions we need a strong commitment to morphological freedom. Morphological freedom implies a subject that is also the object of its own change. Humans are ends in themselves, but that does not rule out the use of oneself as a tool to achieve oneself. In fact, one of the best ways of preventing humans from being used as means rather than ends is to give them the freedom to change and grow. The inherent subjecthood of humans is expressed among other ways through self-transformation. Some bioethicists such as Leon Kass (Kass 2001) has argued that the new biomedical possibilities threaten to eliminate humanity, replacing current humans with designed, sanitized clones from Huxley’s Brave New World. I completely disagree. From my perspective morphological freedom is not going to eliminate humanity, but to express what is truly human even further.(Transhumanist Reader, 63)

That last sentence holds the key to the difference between most posthumanists and transhumanists: posthumanists support, in Roden’s terms, some form of the disconnection thesis – a divergent descent from humans to something else through technological transformation – while most transhumanists want to carry the older humanistic notions into a morphological freedom in which humans become enhanced by technologies in ever greater empowerment.

As one outspoken spokesman tells us “genomic technologies can actually allow us to raise the dead. Back in 1996, when the sheep Dolly was the first mammal cloned into existence, she was not cloned from the cells of a live animal. Instead, she was produced from the frozen udder cell of a six-year-old ewe that had died some three years prior to Dolly’s birth. Dolly was a product of nuclear transfer cloning, a process in which a cell nucleus of the animal to be cloned is physically transferred into an egg cell whose nucleus had previously been removed. The new egg cell is then implanted into the uterus of an animal of the same species, where it gestates and develops into the fully formed, live clone.”12 This same author even prophesies that new NBIC technologies will help us in reengineering humanity in directions that natural selection never dreamed of:

Using nanobiotechnology , we stand at the door of manipulating genomes in a way that reflects the progress of evolutionary history: starting with the simplest organisms and ending, most portentously, by being able to alter our own genetic makeup. Synthetic genomics has the potential to recapitulate the course of natural genomic evolution, with the difference that the course of synthetic genomics will be under our own conscious deliberation and control instead of being directed by the blind and opportunistic processes of natural selection. …We are already remaking ourselves and our world, retracing the steps of the original synthesis— redesigning, recoding, and reinventing nature itself in the process. (Regenesis, KL 345)

As Nick Bostrom and Julian Savulescu suggest, human enhancement has moved from the realm of science fiction to that of practical ethics. There are now effective physical, cognitive, mood, cosmetic, and sexual enhancers—drugs and interventions that can enhance at least some aspects of some capacities in at least some individuals some of the time. The rapid advances currently taking place in the biomedical sciences and related technological areas make it clear that a lot more will become possible over the coming years and decades. The question has shifted from “Is this science fiction?” to “Should we do it?”13 They go on to state:

It seems likely that this century will herald unprecedented advances in nanotechnology, biotechnology, information technology, cognitive science, and other related areas. These advances will provide the opportunity fundamentally to change the human condition. This presents both great risks and enormous potential benefits. Our fate is, to a greater degree than ever before in human history, in our own hands. (Human Enhancement, 20-21)

Yet, as Daniel J. Kevles, the great historian of the eugenics movement, admonished when speaking of Francis Galton, one of the progenitors of the eugenics heritage:

Galton, obsessed with original sin, had expected that the ability to manipulate human heredity would ultimately emancipate human beings from their atavistic inclinations and permit their behavior to conform to their standards of moral conduct. But in fact, the more masterful the genetic sciences have become, the more they have corroded the authority of moral custom in medical and reproductive behavior. The melodies of deicide have not enabled contemporary men and women to remake their imperfect selves. Rather, they have piped them to a more difficult task: that of establishing an ethics of use for their swiftly accumulating genetic knowledge and biotechnical power.14

Ethics, Law, Politics have yet to catch up with these strange twists of the eugenic heritage as it is brought to fruition by the great Corporate Funds, Think Tanks, Academies, and Scientific laboratories all part of the vast complex of systems that are moving us closer and closer to some form of Singularity. What should we do? Ultimately I wonder if we have a choice in the matter at all. That is my nightmare.

The novelist’s argument is clear enough: genetic enhancement represents the end of human nature. Take control of fate, and you destroy everything that joins us to one another and dignifies life. A story with no end or impediment is no story at all. Replace limits with unbounded appetite, and everything meaningful turns into nightmare.

– Richard Powers, Generosity: An Enhancement

1. Pynchon, Thomas (2012-06-13). Gravity’s Rainbow (pp. 133-134). Kindle Edition.
2. See more at: http://www.philosophycs.com/perfectibility.htm
3. Watson, Peter (2014-02-18). The Age of Atheists: How We Have Sought to Live Since the Death of God (Kindle Locations 7511-7519). Simon & Schuster. Kindle Edition.
4. Harold Coward. The Perfectibility of Human Nature in Eastern and Western Thought (S U N Y Series in Religious Studies) (Kindle Locations 89-100). Kindle Edition.
5. Vinge, Vernor (2010-06-07). The Coming Technological Singularity – New Century Edition with DirectLink Technology (Kindle Locations 16-18). 99 Cent Books & New Century Books. Kindle Edition.
6. Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection) (Kindle Locations 6222-6229). Springer Berlin Heidelberg. Kindle Edition.
7. Mehlman, Maxwell J. (2012-08-10). Transhumanist Dreams and Dystopian Nightmares: The Promise and Peril of Genetic Engineering (p. 23). Johns Hopkins University Press. Kindle Edition.
8. Lily E. Kay. The Molecular Vision of Life: Caltech, the Rockefeller Foundation, and the Rise of the New Biology (Monographs on the History & Philosophy of Biology) (Kindle Locations 4511-4513). Kindle Edition.
9. Black, Edwin (2012-11-30). War Against the Weak: Eugenics and America’s Campaign to Create a Master Race, Expanded Edition (Kindle Locations 182-186). Dialog Press. Kindle Edition.
10. Davenport’s Dream: 21st Century Reflections on Heredity and Eugenics (Cold Spring Harbor Laboratory Press, 2008)
11. The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future (2013-03-05) (p. 55). Wiley. Kindle Edition.
12. Regis, Ed; Church, George M. (2012-10-02). Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves (Kindle Locations 269-274). Perseus Books Group. Kindle Edition.
13. Savulescu, Julian; Bostrom, Nick (2009-01-22). Human Enhancement (Page 18). Oxford University Press. Kindle Edition.
14. Kevles, Daniel J. (2013-05-08). In the Name of Eugenics: Genetics and the Uses of Human Heredity (Kindle Locations 6624-6629). Knopf Doubleday Publishing Group. Kindle Edition.

 

Reza Negarestani: Back to the Future

Sufficiently elaborated, humanism—it shall be argued—is the initial condition of inhumanism as a force that travels back from the future to alter, if not to completely discontinue, the command of its origin.

– Reza Negarestani, The Labor of the Inhuman, Part II: The Inhuman

In my first post I elaborated the specific elements of Negarestani’s return to the Enlightenment humanist project (see post). He reiterates in this essay the basic thematic of his program: the notion that inhumanism is what precedes humanity; that humanity is a model, a construct – yet not a static model but an ongoing processual development of collective production in a continuous process of revision; and that this project is shaped by a normative commitment, a commitment within a “space of reasons” that enforces the stringent task of social constructivism: a commitment to humanity must fully elaborate how the abilities of reason functionally convert sentience to sapience. As he remarks: “Humanism is by definition a project to amplify the space of reason through elaborating what the autonomy of reason entails and what demands it makes upon us.”

He tells us that this project has a commitment to the autonomy of reason (via the project of humanism), and that it is a commitment to the autonomy of reason’s revisionary program over which human has no hold. One wants to rephrase that last italic as over which the human has no control. That this binding act returns us to the rational world of the Enlightenment almost seems like a parody at first take – as if Reza were traveling back to revise the whole history of the Enlightenment project from within and show that it was correct all along. That yes, we have always been inhuman, never human, and that now “we” the collective will begin constructing the new humanity according to an autonomous plan based on that greatest of autonomous agents, autonomous reason.

Yet, this erasure of the human by way of the inhuman is not a return of the Same, but something else: “Once you commit to human, you effectively start erasing its canonical portrait backward from the future. It is, as Foucault suggests, the unyielding wager on the fact that the self-portrait of man will be erased, like a face drawn in sand at the edge of the sea.” It’s as if he were saying: yes, we great ones are going to rewrite history, erase all the bad effects of the past two hundred years, and replace this image of the human with our own thereby inhabiting a time-machine that will conquer two hundred years of mistakes, of war, famine, genocide, ethnocide, etc.

Continue reading

Reza Negarestani: On Inhumanism

Inhumanism, as will be argued in the next installment of this essay, is both the extended elaboration of the ramifications of making a commitment to humanity, and the practical elaboration of the content of human as provided by reason and the sapient’s capacity to functionally distinguish itself and engage in discursive social practices.

– Reza Negarestani, The Labor of the Inhuman, Part I: Human

On e-flux journal Reza enjoins us to move beyond both humanism and anti-humanism, as well as all forms of a current sub-set of Marxist theoretic he terms “the fashionable stance of kitsch Marxism today.” He takes up both Sellarsian notions of the “space of reasons” and the inferential and normative challenges offered by Robert Brandom. Brandom developed a new linguistic model, or “pragmatics,” in which the “things we do” with language are prior to semantics, for the reason that claiming and knowing are actings, productions of a form of spontaneity that Brandom assimilates to the normative “space of reasons” (Articulating Reasons 2000).1

Reza starts with the premise that inhumanism is a progressive shift situated within the “enlightened humanism” project. As a revisionary project it seeks to erase the former traces within this semiotic field of discursive practices and replace them with something else – not something distinctly oppositional, but rather a revision of the universal node that this field of forces is. It will be a positive project, one based on notions of “constructivism”: “to define what it means to be human by treating human as a constructible hypothesis, a space of navigation and intervention.” I’m always a little wary of such notions as models, construction, constructible hypotheses – as if we could simulate the possible movement of the real within some information-processing model of mathematical or hyperlinguistic, algorithmic programming. We need to understand just what Reza is attempting with such positive notions of constructions or models; otherwise we may follow blindly down the path that led through structuralism, post-structuralism, and deconstruction: all those anti-realist projects situated in varying forms of social constructivism and its modifications (i.e., certain Idealist modeling techniques based as they were on the Linguistic Turn).

Right off the bat he qualifies his stance against all those philosophies of finitude and even the current trend in speculative realism of the Great Outdoors (Meillassoux, Brassier, Iain Hamilton Grant, Graham Harman). He is against any sense of an essence of the human as pre-determined by theological jurisdictions; against, even, the anti-humanist tendencies of both inflationary and deflationary notions of the human that he perceives in microhistorical claims tending toward atomism. Instead he offers a return to the universalist ambitions of the original Enlightenment project, voided of its hypostasis in glorified Reason. Against such anti-humanist moves he seeks a way forward, a way that involves a collaborative project that redefines the Enlightenment tradition and its progeny and achieves the “common task for breaking out of the current planetary morass.”

Continue reading

Neuropath: Further thoughts on R. Scott Bakker

“I’m the world’s first neuronaut, Goodbook. And you’re about to join me.”

– R. Scott Bakker, Neuropath

By now we all know that Scott has an obsession with the brain. One might say he’s a man with a mission. In this novel he allows the drift of his research into the theoretical worlds of neuroscience and philosophy to merge into a neurofiction. I read this work about a year ago but have since become better acquainted with the sciences that underpin its unique message.

I won’t spend time rehearsing the plot of Scott’s book, which is really a fictionalization of his pet project, the Blind Brain Theory. What he does in this novel is embody the dark portent of his current theories as they might actually play out under certain conditions. The main protagonist and villain of the work are Thomas Bible and Neil Cassidy, bosom buddies from college who have over the years played a dual role in each other’s lives: a sort of brain-to-brain network, a socialization of the brain’s search for its own tail – or the old mythos of the serpent biting its own tail.

There comes a point in the novel when Thomas Bible finally falls prey to his old friend’s machinations. He is caught in the meshes of a design without a purpose, a method without an outcome other than an exercise in deprogramming – a gnostic vita negativa in which the mind finally discerns its uselessness at ever resolving the darkest quest of its short life: it lacks the very functions that would help it uncover the sources of its own blindness.

Continue reading

We Are Our Brains

Everything we think, do, and refrain from doing is determined by the brain. The construction of this fantastic machine determines our potential, our limitations, and our characters; we are our brains. Brain research is no longer confined to looking for the cause of brain disorders; it also seeks to establish why we are as we are. It is a quest to find ourselves.

— D.F. Swaab, We Are Our Brains

One could almost say that the brain is a biochemical factory, with neurons and glia as both bureaucracy and workers. Yet, even such a literary reduction wouldn’t really get at the truth of the matter. Jacob Moleschott (1822– 1893) was one of the first to observe that what this factory with all its billions of neurons and trillions of glia produces is what we aptly term the ‘mind’. This process of production from life to death entails: electrical activity, the release of chemical messengers, changes in cell contacts, and alterations in the activity of nerve cells.1

Many of the new technologies – imaging, electromagnetic, and biochemical – are being used both to study and to heal certain long-standing malfunctions and neurological disorders of the brain, including invasive electrical and magnetic therapies applied to patients suffering from diseases like Alzheimer’s, schizophrenia, Parkinson’s, multiple sclerosis, and depression. (Yet, I interject, these technologies present us a double-edged sword: while on the one hand they can be used to heal, they can also be used by nefarious governments to manipulate and harm both external enemies and internal citizenry.)

Continue reading

Deleuze: Control and Becoming

New cerebral pathways, new ways of thinking, aren’t explicable in terms of microsurgery; it’s for science, rather, to try and discover what might have happened in the brain for one to start thinking this way or that. I think subjectification, events, and brains are more or less the same thing.

– Gilles Deleuze, Control and Becoming

The new information and communications technologies form the core infrastructure of what many have termed our Global Information Society and what Deleuze once termed under the more critical epithet “societies of control.” As Harold Innis once stated in his classic work Empire and Communications: “Concentration on a medium of communication implies a bias in the cultural development of the civilization concerned either towards an emphasis on space and political organizations or towards an emphasis on time and religious organization.”1 With the spread of information culture and technologies, the older forms of newspaper, radio, television, and cinema form the core nexus of propaganda machines for both government and corporate discipline and control within national systems, while – at least in the free world – information technologies remain borderless and open systems. Yet even this is being called into question in our time. Under both governmental and international agency pressure, protocols for invasive control over the communications of the internet are becoming the order of the day.

Continue reading

Pete Mandik: On Neurophilosophy

An introduction to reductionism and eliminativism in the philosophy of mind, by Professor Pete Mandik of William Paterson University: three youtube.com vids that give a basic intro to Paul and Patricia Churchland’s notions, following W.V. Quine, that science and philosophy should inform each other, and to the establishment within the philosophy of mind of what is termed neurophilosophy. You might skip the first five minutes of the first vid, which is mainly him speaking to his class. (In fact you could probably skip the first vid entirely, since it basically introduces the aforementioned philosopher/scientists, and move right into the second, which speaks directly to the topics.) Otherwise a good basic intro for those who want to know the difference between the reductionist and eliminativist approaches.

Continue reading

Operational Neuroscience: The Militarization of the Brain

“Why design a machine to read thoughts when all you have to do is shut down a few circuits and have your subject read them out for you?”

– R. Scott Bakker,  Neuropath

————————

In a presentation to the intelligence community five years ago, program manager Amy Kruse from the Defense Advanced Research Projects Agency (DARPA) identified operational neuroscience as DARPA’s latest significant accomplishment, preceded by milestone projects that included the Stealth Fighter, ARPANET, the GPS, and the Predator drone. National security interests in operational neuroscience encompass non-invasive, non-contact approaches for interacting with a person’s central and peripheral nervous systems; the use of sophisticated narratives to influence the neural mechanisms responsible for generating and maintaining collective action; applications of biotechnology to degrade enemy performance and artificially overwhelm cognitive capabilities; remote control of brain activity using ultrasound; indicators of individual differences in adaptability and resilience in extreme environments; the effects of sleep deprivation on performance and circadian rhythms; and neurophysiologic methods for measuring stress during military survival training.

Anthropologist Hugh Gusterson, bioethicist Jonathan Moreno, and other outspoken scholars have offered strong warnings about potential perils associated with the “militarization of neuroscience” and the proliferation of “neuroweapons.” Comparing the circumstances facing neuroscientists today with those faced by nuclear scientists during World War II, Gusterson has written, “We’ve seen this story before: The Pentagon takes an interest in a rapidly changing area of scientific knowledge, and the world is forever changed. And not for the better.” Neuroscientist Curtis Bell has called for colleagues to pledge that they will refrain from any research that applies neuroscience in ways that violate international law or human rights; he cites aggressive war and coercive interrogation methods as two examples.

Read Article: Neuroscience, Special Forces and Yale by Roy Eidelson.

Stephen Jay Gould: On the Reduction/Anti-Reduction Debate

At this point in the chain of statements, the classical error of reductionism often makes its entrance, via the following argument: If our brain’s unique capacities arise from its material substrate, and if that substrate originated through ordinary evolutionary processes, then those unique capacities must be explainable by (reducible to) “biology” (or some other chosen category expressing standard scientific principles and procedures).

The primary fallacy of this argument has been recognized from the inception of this hoary debate. “Arising from” does not mean “reducible to,” for all the reasons embodied in the old cliche that a whole can be more than the sum of its parts. To employ the technical parlance of two fields, philosophy describes this principle by the concept of “emergence*,” while science speaks of “nonlinear” or “nonadditive” interaction. In terms of building materials, a new entity may contain nothing beyond its constituent parts, each one of fully known composition and operation. But if, in forming the new entity, these constituent parts interact in a “nonlinear” fashion—that is, if the combined action of any two parts in the new entity yields something other than the sum of the effect of part one acting alone plus the effect of part two acting alone—then the new entity exhibits “emergent” properties that cannot be explained by the simple summation of the parts in question. Any new entity that has emergent properties—and I can’t imagine anything very complex without such features—cannot, in principle, be explained by (reduced to) the structure and function of its building blocks.

— Stephen Jay Gould, In Gratuitous Battle

—————————————————-

* A note in which he qualifies his use of “emergence”:

Please note that this definition of “emergence” includes no statement about the mystical, the ineffable, the unknowable, the spiritual, or the like—although the confusion of such a humdrum concept as nonlinearity with this familiar hit parade has long acted as the chief impediment to scientific understanding and acceptance of such a straightforward and commonsensical phenomenon. When I argue that the behavior of a particular mammal can’t be explained by its genes, or even as the simple sum of its genes plus its environment of upbringing, I am not saying that behavior can’t be approached or understood scientifically. I am merely pointing out that any full understanding must consider the organism at its own level, as a product of massively nonlinear interaction among its genes and environments. (When you grasp this principle, you will immediately understand why such pseudosophisticated statements as the following are not even wrong, but merely nonsensical: “I’m not a naive biological determinist. I know that intelligence represents an interaction of genes and environment—and I hear that the relative weights are about 40 percent genes and 60 percent environment.”)

The American Cyborg: Neuroscience, DARPA, and BRAIN

Proverbs for Paranoids: You may never get to touch the Master, but you can tickle his creatures.

– Thomas Pynchon, Gravity’s Rainbow

What if the Master has a steel face and looks something like the DARPA Atlas in the image above? When we discover the Master is a mask for the economic masters, one need not worry about tickling any creatures whatsoever; more than likely they will be tickling you soon enough. That’s what I thought the first time I saw the White House BRAIN initiative. Yes, yes… the new Manhattan Project of the decade, or of the millennium, is to unlock the secrets in your skull – that three-pound loaf of grey matter that swims behind your eyes, recreating moment by moment the words you are reading in the blips and bits of electronic light from your screen at this very moment. In the bold print we hear about the wonders that will be accomplished through such research: “…a bold new research effort to revolutionize our understanding of the human mind and uncover new ways to treat, prevent, and cure brain disorders like Alzheimer’s, schizophrenia, autism, epilepsy, and traumatic brain injury.” All good, of course; nothing wrong with solving the terrible problems of the brain that have brought so much devastation and suffering to millions. But then one looks down the page, notices where the major portion of the funding is going, and realizes… hmm… military (DARPA) expenditure: $50 million for understanding the dynamic functions of the brain and demonstrating breakthrough applications based on these insights.

The Defense Advanced Research Projects Agency (DARPA) is the central research and development organization for the Department of Defense (DoD). It manages and directs selected basic and applied research and development projects for the U.S. Department of Defense, and pursues research and technology where risk and payoff are both very high and where success may provide dramatic advances for traditional military roles and missions. DARPA sponsors such things as robotic challenges (here). Their mission statement tells it all:

The Defense Advanced Research Projects Agency (DARPA) was established in 1958 to prevent strategic surprise from negatively impacting U.S. national security and create strategic surprise for U.S. adversaries by maintaining the technological superiority of the U.S. military.

To fulfill its mission, the Agency relies on diverse performers to apply multi-disciplinary approaches to both advance knowledge through basic research and create innovative technologies that address current practical problems through applied research. DARPA’s scientific investigations span the gamut from laboratory efforts to the creation of full-scale technology demonstrations in the fields of biology, medicine, computer science, chemistry, physics, engineering, mathematics, material sciences, social sciences, neurosciences and more. As the DoD’s primary innovation engine, DARPA undertakes projects that are finite in duration but that create lasting revolutionary change.

The Mind-Body Debates: Reductive or Anti-Reductive Theories?

More and more over the past few years I have come to see that the debates in scientific circles seem to hinge on two competing approaches to the world and its phenomena: the reductive and the anti-reductive frameworks. To really understand this debate one needs a thorough understanding of the history of science itself. Obviously in this short post I’m not going to give you a complete history of science up to our time. What I want to do is tease out the debates themselves rather than provide a history, and to do that means turning to philosophy and history rather than to the specific sciences. For better or worse, it is in the realm of the history of concepts that one begins to see the drift between these two tendencies played out over time. Like some universal pendulum, one or the other conceptual matrix rises and falls as different scientists and philosophers debate what it is they are discovering in either the world or the mind. Why? Why this swing from reductive to anti-reductive and back again in approaches to life, reality, and the mind-brain debates?

Philosophers have puzzled over this question from the Pre-Socratics onward – Democritus, Plato, Aristotle. Take the subject of truth: in his book Truth, Protagoras made vivid use of two provocative but imperfectly spelled out ideas: first, that we are all ‘measures’ of the truth and that we are each already capable of determining how things are for ourselves, since the senses are our best and most credible guides to the truth; second, that given that things appear differently to different people, there is no basis on which to decide that one appearance is true rather than another. Plato developed these ideas into a more fully worked-out theory, which he then subjected to refutation in the Theaetetus. In his Metaphysics Aristotle argued that Protagoras’ ideas led to scepticism. And finally Democritus incorporated modified Protagorean ideas and arguments into his theory of knowledge and perception.

Continue reading

Thomas Nagel: Idealism and the Theological Turn in the Sciences

The view that rational intelligibility is at the root of the natural order makes me, in a broad sense, an idealist— not a subjective idealist, since it doesn’t amount to the claim that all reality is ultimately appearance— but an objective idealist in the tradition of Plato and perhaps also of certain post-Kantians, such as Schelling and Hegel, who are usually called absolute idealists. I suspect that there must be a strain of this kind of idealism in every theoretical scientist: pure empiricism is not enough.

– Thomas Nagel, Mind and Cosmos

Now we know the truth of it, and why Thomas Nagel has such an apparent agenda to ridicule and topple the materialist worldview, which he seems to see as the main enemy of his own brand of neutral monism: he is a realist of the Idea – whether one calls it mind or matter, it’s neutral. What’s sad is that his attack on scientific naturalism and its traditions even reaches the point where he concludes that religion upholds a more appropriate view of reality than the naturalist does:

A theistic account has the advantage over a reductive naturalistic one that it admits the reality of more of what is so evidently the case, and tries to explain it all. But even if theism is filled out with the doctrines of a particular religion (which will not be accessible to evidence and reason alone), it offers a very partial explanation of our place in the world.(25)

Continue reading

The Mind-Body Debates: Beginnings and Endings

Jaegwon Kim tells us it all started with two papers published a year apart in the late fifties: “The ‘Mental’ and the ‘Physical’” by Herbert Feigl in 1958 and “Sensations and Brain Processes” by J. J. C. Smart the following year. Both of these men brought about a qualitative change in our approach to the study of the mind and its relation to the physical substrate. Each of them proposed, in independent studies, an approach to the nature of mind that has come to be called the mind-body identity theory, central-state materialism, the brain state theory, or type physicalism. Though the identity theory itself would lose traction and other theories would come to the fore, the underlying structure of the debates would continue to be set by the framework they originally put in place. As Kim suggests:

What I have in mind is the fact that the brain state theory helped set the basic parameters and constraints for the debates that were to come – a set of broadly physicalist assumptions and aspirations that still guide and constrain our thinking today.1

This extreme form of reductionist physicalism was questioned by the multiple realizability argument of Hilary Putnam and the anomalism argument of Donald Davidson. At the heart of Putnam’s argument was the notion of functionalism: that mental kinds and properties are functional kinds, at a higher level of abstraction than physicochemical or biological kinds. Davidson, on the other hand, offered the notion of anomalous monism: that the mental domain, on account of its essential anomalousness and normativity, cannot be the object of serious scientific investigation, placing the mental on a wholly different plane from the physical. At first it seemed to many scientists of the era that these two approaches, each in its own distinctive way, made it possible for “us to shed the restrictive constraints of monolithic reductionism without losing our credentials as physicalists” (4). Yet, as it turned out, this too did not last.

Continue reading

Thomas Nagel: Constitutive Accounts – Reductionism and Emergentism

Thomas Nagel in his Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False starts from the premise that psychophysical reductionism, a position in the philosophy of mind that is largely motivated by the hope of showing how the physical sciences could in principle provide a theory of everything, has failed to prove its case. As he states the case:

This is just the opinion of a layman who reads widely in the literature that explains contemporary science to the nonspecialist. Perhaps that literature presents the situation with a simplicity and confidence that does not reflect the most sophisticated scientific thought in these areas. But it seems to me that, as it is usually presented, the current orthodoxy about the cosmic order is the product of governing assumptions that are unsupported, and that it flies in the face of common sense.1

Notice the sleight of hand: the move from “unsupported” to “flies in the face of common sense.” Over and over in his book he falls back on this common-sense doxa approach when he’s unable to come up with legitimate arguments, admitting his amateur status as a “nonspecialist” as if this were an excuse, and then qualifying his own approach against the perceived “sophisticated scientific literature” as a way of disarming it in preference to his own simplified and colloquial amateurism. The sciences of physics, chemistry, and biology are the key sciences he wishes to use to prove his case. Behind it all is the philosophy of “neutral monism” that he seems to favor: he tells us he “favors some form of neutral monism over the traditional alternatives of materialism, idealism, and dualism” (KL 71-72). As he tells it: “It is prima facie highly implausible that life as we know it is the result of a sequence of physical accidents together with the mechanism of natural selection. We are expected to abandon this naïve response, not in favor of a fully worked out physical/chemical explanation but in favor of an alternative that is really a schema for explanation, supported by some examples” (KL 85-88). To support his book’s overall theme he asks two major questions of the scientific community of reductionists:

First, given what is known about the chemical basis of biology and genetics, what is the likelihood that self-reproducing life forms should have come into existence spontaneously on the early earth, solely through the operation of the laws of physics and chemistry? The second question is about the sources of variation in the evolutionary process that was set in motion once life began: In the available geological time since the first life forms appeared on earth, what is the likelihood that, as a result of physical accident, a sequence of viable genetic mutations should have occurred that was sufficient to permit natural selection to produce the organisms that actually exist?(KL 89-93)

Continue reading

The Rise of Science and the Mathematization of Reality: Competing Views

It [Mathematics] did not, as they supposed, correspond to an objective structure of reality; it was a method and not a body of truths; with its help we could plot regularities—the occurrence of phenomena in the external world—but not discover why they occurred as they did, or to what end.

– Isaiah Berlin, from an entry in Dictionary of the History of Ideas – The Counter-Enlightenment

Isaiah Berlin in his entry on what he termed the “counter-Enlightenment” tells us that opposition “…to the central ideas of the French Enlightenment, and of its allies and disciples in other European countries, is as old as the movement itself”. 1 The common elements that these reactionary writers opposed in the Enlightenment project were notions of autonomy of the individual, empiricism and scientific methodology, its rejection of authority and tradition, religion, and any transcendent notions of knowledge based on faith rather than Reason. Berlin himself places Giambattista Vico (1668-1744) and his Scienza nuova (1725; radically altered 1731) as playing a “decisive role in this counter-movement”. He specifically uses the term “counter-movement” rather than the appellation “counter-Enlightenment”.

I’ve been following the blog Persistent Enlightenment, and one of the interesting threads or series of posts on that site deals with the concept of “Counter-Enlightenment,” a term coined by none other than Isaiah Berlin in the early 1950s (see the latest summation: here). I believe he is correct in his tracing of this concept and its history and use in scholarship. Yet, for myself, beyond tracing this notion through many different scholars, I’ve begun rethinking some of the actual history of this period and of the different reactions to the Enlightenment project itself, as well as the whole tradition of the sciences. One really needs to realize that the Enlightenment itself was the culmination of a process that started centuries before, with the emergence of the sciences.

Stephen Gaukroger’s encyclopedic assessment of the sciences and their impact on the shaping of modernity has been key to much of my own thinking concerning the history and emergence of the sciences, as well as to my understanding of the underpinnings of the mechanistic worldview that informed this early period. One of the threads in that work is the battle waged by those traditionalist scholars of what we now term the “humanities” to protect human learning – the study of ancient literature along with philosophy, history, poetry, oratory, etc. – as Gaukroger says, “as an intrinsic part of any form of knowledge of the world and our place in it” (1).1 He mentions Gibbon’s remark that in his time the study of physics and mathematics had overtaken the study of belles lettres as the “pre-eminent form of learning” (1). In our own time the notion that philosophy and the humanities are non-essential to the needs of modern liberal democracies has taken on a sharper edge as well.

Continue reading

Neuroethics: The Dilemmas of Brain Research

Henry T. Greely, in a recent article, Neuroethics: The Neuroscience Revolution, Ethics, and the Law, paints a gloomy picture of our posthuman future. In this paper he breaks the revolution in neuroethics down into four domains: prediction, litigation, confidentiality and privacy, and patents. He tells us it is the responsibility of any ethicist weighing the ethical, legal, and social consequences of new technologies to look disproportionately for troublesome consequences. Neuroethics is the branch of ethics specific to the new neurosciences.

Ethical problems revolving around neuroscientific research have induced the emergence of a new discipline termed neuroethics, which discusses issues such as prediction of disease, psychopharmacological enhancement of attention, memory or mood, and technologies such as psychosurgery, deep-brain stimulation or brain implants. Such techniques are capable of affecting the individual’s sense of privacy, autonomy and identity. Moreover, reductionist interpretations of neuroscientific results challenge notions of free will, responsibility, personhood and the self which are essential for western culture and society. They may also gradually change psychiatric concepts of mental health and illness. These tendencies call for thorough, philosophically informed analyses of research findings and critical evaluation of their underlying conceptions of humans. The stakes are high, for it entails nothing less and nothing more than the core values that have guided us since the Enlightenment.

Continue reading

Neuromilitary: The Dark Side of Government Spending

Michael N. Tennison and Jonathan D. Moreno report that national security organizations in the United States, including the armed services and the intelligence community, have developed a close relationship with the neuroscientific community. The latest technology often fuels warfighting and counter-intelligence capacities, providing the tactical advantages thought necessary to maintain geopolitical dominance and national security. Neuroscience has emerged as a prominent focus within this milieu, annually receiving hundreds of millions of Department of Defense dollars. Its role in national security operations raises ethical issues that need to be addressed to ensure the pragmatic synthesis of ethical accountability and national security. (abstract)

They make the obvious observation that the military establishment’s interest in understanding, developing, and exploiting neuroscience generates a tension in its relationship with science: the goals of national security and the goals of science may conflict. An understatement, or is this the wave of the future? The sciences have not been neutral for a long while now. As John Brockman once said, “Where the money flows the science flows.” This should be no surprise.

Continue reading

Neuroenhancement: The Shadow Worlds of Science or Economy?

“Free will is an illusion,” Sam said in a strange tone.

– R. Scott Bakker, Neuropath

I happened on an article over at Mindhacks which is actually old hat, but it made me think about both the uses and abuses of our new cognitive neurosciences. He discusses an essay in the British Journal of Psychiatry in which the so-called new cognitive enhancement medicines are already said to be set to improve ethical behaviour, and we are told to be prepared for a revolution in ‘moral pharmacology’. In one article the experts conclude that as cognitive neuroscience and related technologies become more pervasive, using technology for nefarious purposes becomes easier.1 As the study suggests, “the intelligence community (IC) faces the challenging task of analyzing extremely large amounts of information on cognitive neuroscience and neurotechnology, deciding which of that information has national security implications, and then assigning priorities for decision makers”. You can bet that if they’re asking questions about the moral implications, they’re already thinking about how they can use these new sciences for war. With all the new imaging technologies, along with new pharmacological neuromedicines, a whole new world of human experimentation is taking place under our noses. One could ask: What types of experiments are being done? How are the experiments being controlled and monitored, and why were they chosen? How would human experimentation be conducted outside accepted informed-consent limits?

And where money and corporations are involved, there are patents. New U.S. and international market incentives are driving this research into neuropsychopharmacology. But as he suggests, even though many currently marketed drugs are or have been major sources of profit, the ethical concerns have gone largely unnoticed. Whole new underground or shadow market economies will arise. The neurotechnology enhancement market is analogous to the athletic performance enhancement market: people will make the choice to take illegal and off-label prescription neuropharmaceuticals even if they do not know the side effects, or will believe that the side effects are worth the potential enhancement. This controversial market will grow dramatically if evidence becomes available that a specific drug is consistently effective in improving performance. (ibid.)

Continue reading