Paul Churchland: The Framework of the Mind

“Orolo said that the more he knew of the complexity of the mind, and the cosmos with which it was inextricably and mysteriously bound up, the more inclined he was to see it as a kind of miracle— not in quite the same sense that our Deolaters use the term, for he considered it altogether natural. He meant rather that the evolution of our minds from bits of inanimate matter was more beautiful and more extraordinary than any of the miracles cataloged down through the ages by the religions of our world. And so he had an instinctive skepticism of any system of thought, religious or theorical, that pretended to encompass that miracle, and in so doing sought to draw limits around it.”

– Neal Stephenson, Anathem

We can all agree that our inherited notions of matter and the material world, as seen within classical materialism, have not been able to sustain the revolutionary developments of twentieth-century physics and biology. For centuries Isaac Newton’s idea of matter as consisting of ‘solid, massy, hard, impenetrable, and movable particles’ reigned in combination with a strong view of laws of nature that were supposed to prescribe exactly, on the basis of the present physical situation, what was going to happen in the future. This complex of scientific materialism and mechanism was easily amalgamated with the common-sense assumption of solid matter as the bedrock of all reality. In the world view of classical materialism, all physical systems were claimed to be nothing but collections of inert particles slavishly complying with deterministic laws. Complex systems such as living organisms, societies, and human persons could, according to this reductionist world view, ultimately be explained in terms of their material components and chemical interactions.

We’ve all heard before that one of the specific tasks of a philosophy of science is to investigate the limits of even the best-developed and most successful forms of contemporary scientific knowledge. It may be frustrating to acknowledge, but we are simply at the point in the history of human thought at which we find ourselves, and our successors will make discoveries and develop forms of understanding that will more than likely surpass our own. Humans are addicted to the hope for a final reckoning, but intellectual humility requires that we resist the temptation to assume that tools of the kind we now have are in principle sufficient to understand the universe as a whole. Scientists are well aware of how much they don’t know, but this is a different kind of problem: not just acknowledging the limits of what is actually understood, but trying to recognize what can and cannot in principle be understood by existing methods.

Most will understand that classical materialism situated itself within the philosophically reductionist position that mental properties are identical to, and in that sense are nothing but, physical properties. (Idealism was the competing reductionist answer to the Mind-Body Problem, reducing physical properties to mental properties.) Throughout most of the history of philosophy, materialism took the form of what today we call the Identity Theory, according to which mental properties are identical to internal bodily properties, whether those be the properties associated with Democritean atoms, Hobbesian motions in the body or, in our period, electrochemical interactions at the neurological level.

In the first half of the twentieth century another form of materialist reductionism emerged, namely Behaviorism, according to which mental properties are identical to behavioral properties (dispositions of the body to behave in certain ways in certain circumstances). In the 1960s and ’70s a third form of reductionism gained prominence, namely functionalism, according to which our standard mental properties and relations (being conscious, thinking, etc.) are identical to, and hence reducible to, second-order properties: specifically, mental properties are held to be definable in terms of the characteristic interactions of their first-order ‘realizer’ properties with one another and with the external environment, where in the actual world, and perhaps all possible worlds, these first-order realizer properties are physical properties. On a strong version of this view (‘Functionalism’), the realizers of mental properties are necessarily first-order physical properties, from which it follows that mental properties are necessarily second-order physical properties and therefore belong to the general ontological category of physical property. Like the Identity Theory and Behaviorism, Functionalism qualifies as a form of Reductive Materialism.[1]

A more radical descendant of this approach is what we now term Eliminative Materialism, according to which there simply are no mental properties (or, at least, no instantiated mental properties). It turns out, however, that there are extremely few full-blown Eliminative Materialists. Most philosophers who identify themselves as eliminative materialists do so simply because they reject some central subcategory of mental property. For example, Paul and Patricia Churchland reject propositional-attitude properties, but they nevertheless accept that there are experiential properties (regarding which they adopt a certain form of reductionism). Moreover, although they deny that there is a propositional attitude of knowing, they hold that there is knowledge. Another radical view is that there is no consciousness whatsoever (and so, in particular, no conscious experiential properties); but among the few philosophers of mind who have held this view, most have accepted that there are at least nonconscious propositional-attitude properties. The fact is that it is difficult to think of any major philosopher today who is a thoroughgoing eliminativist, holding that there are absolutely no (instantiated) mental properties: no knowing, no experiencing, no consciousness.[2]

The de-centering of the human, the de-anthropomorphizing tendency that is slowly beginning to subvert the humanistic traditions, is another aspect of this slow decay of the older forms of classical materialism. As Paul Churchland observes in his new book Plato’s Camera, up till now most of our representational theories have been centered on language as the only “systematic representational system available to human experience” (PC, 5).[3] He goes on to say:

An account of cognition that locates us on a continuum with all of our evolutionary brothers and sisters is thus a prime desideratum of any responsible epistemological theory. And the price we have to pay to meet it is to give up the linguaformal ‘judgment’ or ‘proposition’ as the presumed unit of knowledge or representation. But we need no longer make this sacrifice in the dark. Given the conceptual resources of modern neurobiology and cognitive neuromodeling, we are finally in a position to pursue an alternative account of cognition, one that embraces some highly specific and very different units of representation. (PC, 5)

When we think about the conceptual frameworks that have dominated philosophy during its history, we find two sorts of notions concerning concepts. The writings of Plato, Descartes, and Fodor illustrate the first great option: since one has no idea how to explain the origin of our concepts, one simply pronounces them innate, and credits either a prior life, almighty God, or fifty million years of biological evolution for the actual lexicon of concepts we find in ourselves. The works of Aristotle, Locke, and Hume illustrate the second great option: point to a palette of what are taken to be sensory ‘simples,’ such as the various tastes, smells, colors, shapes, sounds, and so forth, and then explain our base population of simple concepts as being faint ‘copies’ of those simple sensory originals, copies acquired in a one-shot encounter with such originals. Nonsimple or ‘complex’ concepts are then explained as the recursively achieved concatenations and/or modulations of the simple ‘copies’ (PC, 15).

Churchland argues that neither of these two conceptual traditions provides a viable foundation for a modern understanding of our cognitive capacities. Neither the blanket nativism of the first framework nor the concatenative empiricism of the second offers us what we seek. The first option confronts the difficulty of how to code for the individual connection-places and connection-strengths of fully 10¹⁴ synapses, so as to sculpt the target conceptual framework, using the resources of an evolved genome that contains only 20,000 genes, 99 percent of which we share with mice, with whom we parted evolutionary company some fifty million years ago. As he explains:

The real difficulty is the empirical fact that each person’s matured synaptic configuration is radically different from anyone else’s. It is utterly unique to that individual. That synaptic configuration is thus a hopeless candidate for being recursively specifiable as the same in all of us, as it must be if the numbers gap just noted is to be recursively bridged, and if the same conceptual framework is thereby to be genetically recreated in every normal human individual (PC, 15).
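To get a feel for the scale of this ‘numbers gap’, here is a rough back-of-the-envelope comparison. The per-gene capacity figure is a deliberately generous assumption of mine, not a number from the book:

```python
# Rough arithmetic behind Churchland's "numbers gap" (illustrative only).
synapses = 1e14   # approximate synaptic connections in a human brain
genes = 2e4       # approximate protein-coding genes in the human genome

# Generous, purely hypothetical assumption: each gene could independently
# fix one million distinct synaptic settings.
specifiable = genes * 1e6

print(f"synapses to specify:    {synapses:.0e}")
print(f"generously specifiable: {specifiable:.0e}")
print(f"shortfall factor:       {synapses / specifiable:,.0f}x")
```

Even on that implausibly generous assumption, the genome falls thousands of times short of specifying the synaptic detail of a single matured brain.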

The second option fares no better. Empirical research on the neuronal coding strategies deployed in our several sensory systems reveals that, even in response to the presumptively ‘simplest’ of sensory stimuli, the sensory messages sent to the brain are typically quite complex, and their synapse-transformed offspring (that is, the downstream conceptualized representations into which they get coded) are more complex still, typically much more complex. As he explains it:

The direct-inner-copy theory of what concepts are, and of how we acquire them, is a joke on its face, a fact reflected in the months and years it takes any human infant to acquire the full range of our discriminatory capacities for most of the so-called ‘simple’ sensory properties. Additionally, and as anyone who ever pursued the matter was doomed to discover, the recursive-definitions story suggested for ‘complex’ concepts was a crashing failure in its own right. Try to construct an explicit definition of “electron,” or “democracy”— or even “cat” or “pencil,” for that matter— in terms of concepts that plausibly represent sensory simples (PC, 15-16).

Instead of either of these philosophical frameworks, Churchland tells us, what we need is a workable story of how a neuronal activation space can be slowly sculpted, by experience, into a coherent and hierarchical family of prototype regions. This story also accounts for the subsequent context-appropriate activation of those concepts, activations made in response to sensory instances of the categories they represent. The same neurostructural and neurofunctional framework sustains penetrating explanations of a wide variety of perceptual and conceptual phenomena, including the profile of many of our cognitive failures, and it forms the essential foundation of the larger epistemological theory (PC, 16).
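To make the picture concrete, here is a minimal sketch of what a ‘prototype region’ in an activation space might look like. Everything in it (the fixed random weights, the two stimulus categories, the nearest-prototype rule) is my own toy illustration, not Churchland’s model:

```python
# A toy model: stimuli are projected through a layer of "neurons", and
# each learned category becomes a prototype point (the mean activation
# vector) in the resulting activation space.
import numpy as np

rng = np.random.default_rng(0)
n_input, n_neurons = 8, 32
W = rng.normal(size=(n_neurons, n_input))   # fixed synaptic weights

def activate(stimulus):
    """Map a raw stimulus to a point in activation space."""
    return np.tanh(W @ stimulus)

# Two hypothetical stimulus categories: noisy clouds around templates.
template_a = rng.normal(size=n_input)
template_b = rng.normal(size=n_input)
cloud_a = [activate(template_a + 0.3 * rng.normal(size=n_input)) for _ in range(50)]
cloud_b = [activate(template_b + 0.3 * rng.normal(size=n_input)) for _ in range(50)]

# "Sculpting" here is just averaging: each category's prototype is the
# center of its region in activation space.
proto_a = np.mean(cloud_a, axis=0)
proto_b = np.mean(cloud_b, axis=0)

# A novel stimulus is conceptualized by the nearest prototype region.
novel = activate(template_a + 0.3 * rng.normal(size=n_input))
label = "A" if np.linalg.norm(novel - proto_a) < np.linalg.norm(novel - proto_b) else "B"
print("novel stimulus falls in prototype region:", label)
```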

He terms this new framework the activation-vector-space framework, appropriate to the brain’s basic modes of operation (PC, 18). Collectively, the many subspaces of such an activation space specify a set of ‘nomically possible’ worlds: worlds that instantiate the same categories and share the enduring causal structure of our own world, but differ in their initial conditions and ensuing singular details. He concludes:

Accordingly, those spaces hold the key to a novel account of both the semantics and the epistemology of modal statements, and of counterfactual and subjunctive conditionals. The account envisioned here does not require us to be swallow-a-camel ‘realists’ about the objective existence of possible worlds, nor does it traffic in list-like state-descriptions. In fact, the relevant representations are entirely non-discursive, and they occupy positions in a space that has a robust and built-in probability metric against which to measure the likelihood, or unlikelihood, of the objective feature represented by that position’s ever being instantiated (PC, 18).

At the center of his theoretical framework are three levels of learning that guide both human and non-human cognitive processes. In the first level we find a process of structural change, primarily in the microconfiguration of the brain’s 10¹⁴ synaptic connections. The product of this process is the metrical deformation and reformation of the space of possible activation patterns across each receiving neuronal population. This product is a configuration of attractor regions, a family of prototype representations, a hierarchy of categories: in short, a conceptual framework. In biological creatures, the time-scale of this unfolding process is no less than weeks, and more likely months, years, and even decades. It is slow, even molasses-like. It never ceases entirely, but by adulthood the process is largely over and one is more or less stuck with the framework it has created (PC, 33).
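A toy sketch of this first, structural level: repeated experience slowly adjusts synaptic weights (here with a simple Hebbian rule of my choosing, not anything from the book), and the map from stimuli to activation patterns deforms accordingly:

```python
# First-level (structural) learning as slow weight change: the same
# stimulus, encountered many times, gradually reshapes the response.
import numpy as np

rng = np.random.default_rng(1)
n_input, n_neurons = 8, 16
W = rng.normal(scale=0.1, size=(n_neurons, n_input))  # initial synapses

def activate(W, stimulus):
    return np.tanh(W @ stimulus)

stimulus = rng.normal(size=n_input)   # a recurring feature of experience
before = activate(W, stimulus)

# Slow Hebbian-style structural change over many encounters:
# connections between co-active units are strengthened.
lr = 0.01
for _ in range(1000):
    post = activate(W, stimulus)
    W += lr * np.outer(post, stimulus)   # "fire together, wire together"
    W *= 0.999                           # mild decay keeps weights bounded

after = activate(W, stimulus)
print("response magnitude before/after:",
      round(float(np.linalg.norm(before)), 3),
      round(float(np.linalg.norm(after)), 3))
```

The point of the sketch is only the time-scale and the mechanism: the change lives in the weights themselves, accumulates slowly, and thereafter constrains everything the network does.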

In the second level we find a process of dynamical change in one’s typical or habitual modes of neuronal activation, change that is driven not by any underlying synaptic changes, but by the recent activational history and current activational state of the brain’s all-up neuronal population. Bluntly, the brain’s neural activities are self-modulating in real time, thanks to the recurrent or feed-backward architecture of so many of its axonal projections. The product of this process is the systematic redeployment, into novel domains of experience, of concepts originally learned in a quite different domain of experience. It involves the new use of old resources. In each case, this product amounts to the explanatory reinterpretation of a problematic domain of phenomena, a reinterpretation that is subject to subsequent evaluation, articulation, and possible rejection in favor of competing reinterpretations. The time-scale of such redeployments is much shorter than that of structural or basic-level learning— typically in the region of milliseconds to hours. This is the process that comprehends sudden gestalt shifts, ‘eureka’ effects, and presumptive conceptual revolutions in an individual’s cognitive take on some domain of phenomena (PC, 33-34).
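Again as a toy illustration only, assuming a classic Hopfield-style network in place of Churchland’s recurrent architecture: once the synapses are fixed, purely dynamical feedback can still carry a novel, degraded input into a previously learned attractor, which is to say, old conceptual resources redeployed without any structural change:

```python
# Second-level (dynamical) learning: recurrent updates with frozen
# synapses settle a corrupted input into a stored attractor.
import numpy as np

rng = np.random.default_rng(2)
n = 64
patterns = np.sign(rng.normal(size=(2, n)))   # two stored "concepts"

# One-shot Hebbian storage; after this the weights never change.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

# A novel input: a corrupted version of the first concept.
state = patterns[0].copy()
flip = rng.choice(n, size=12, replace=False)
state[flip] *= -1

# Purely dynamical change: recurrent feedback, no synaptic updates.
for _ in range(20):
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = [float(state @ p) / n for p in patterns]
print("overlap with stored concepts:", np.round(overlap, 2))
```

Here the fast settling (a handful of iterations) against the slow, frozen weights mirrors the contrast Churchland draws between the millisecond-to-hours dynamics of redeployment and the years-long time-scale of structural learning.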

And, finally, in the third level we find a process of cultural change, change in such things as the language and vocabulary of the community involved, its modes of education and cultural transmission generally, its institutions of research and technology, and its techniques of individual and collective evaluation of the conceptual novelties produced at the first two levels of learning. This is the process that most decisively distinguishes human cognition from that of all other species, for the accumulation of knowledge at this level has a time-scale of decades to many thousands, even hundreds of thousands, of years. Its principal function is the ongoing regulation of the individual and collective cognitive activity of human brains at the first two levels of learning (PC, 34).

He adds a cautionary note saying that the tripartite division of these learning levels is a deliberate idealization whose function is to provide a stable framework for more detailed exposition at each level, and for an exploration of the major interactions between the three levels (PC, 34).

1. Koons, Robert C., and George Bealer, eds. The Waning of Materialism. Oxford University Press, 2010.
2. Churchland, Paul M., and Patricia Smith Churchland. On the Contrary: Critical Essays, 1987–1997. MIT Press (A Bradford Book), 1999.
3. Churchland, Paul M. Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals. MIT Press, 2012. Kindle edition.
