R. Scott Bakker: The Last Magic Show

There always seems to be a fine line in commentary between teasing thoughts out of the mind of another and the outright obliteration of those very thoughts through the insidious misappropriation and distortion that takes place in any philosophical commentary. Over the years – and I’ve literally read thousands of commentaries on specific authors, books, etc. – I’ve come to the realization that most of us will probably never agree on the meaning of reality, that we all tend toward differing conceptions due to culture, natural disposition, and the inexplicable and as yet undefined modes of our specific existences. We are a mystery that will never be wholly explained. Even the idea that we are ‘critical thinkers’ has recently been called into question with the assertion that the ‘rationalizing brain’ is a thinking machine far too complex to be reduced to the older forms of subjectivity and intentionality. As my friend over at Three Pound Brain, R. Scott Bakker, reiterates over and over: we’re all blind to our own brains, and all the cultural and philosophical baggage coined under the term ‘intentional awareness’ is a sham through and through. Scott even reminds us in his provisional manifesto (here) that our contemporary literati and philosophical radicals (so called) are actually quite conservative – still believing in the old terms, the old mythology of the Self as Subject, even under the auspices of overthrowing such conceptual bric-a-brac:

Where the Old Theory discusses ‘fragmented subjectivities,’ cognitive science has moved on to fragmented intentionalities more generally, questioning the stability and reality of things–context, affect, normativity, perception, and so on–that the Old Theory still takes for granted. The Old Theory, in other words, continues to anthropomorphize its discursive domain, positing intentionalities that the sciences are now calling into serious question. Ignorant of the truly radical alternatives, it continues to service the same folk-psychological intuitions that underwrite the cultural status quo.

Science treats us as machines, and fragmented machines at best: broken mis-measurers of reality who, blind to their own partial knowledge or lack of it, assume a metacognitive grasp of the real where none is to be had. “How many puzzles whisper and cajole and actively seduce their would-be solvers? How many problems own the intellect that would overcome them?” So begins Bakker’s The Last Magic Show: A Blind Brain Theory of the Appearance of Consciousness (here). Bakker readily admits to his outsider status within the scientific domain on which he has chosen to stake his theoretical claims (here), but sees this as par for the course for any viable future theory, which for him will embrace “the crank, the amateur, understanding that unprecedented answers tend to come from institutionally unconstrained sources–from the weeds outside our academic gardens.”

As I continue to read Scott’s blog I have slowly acclimated myself to his terminology, which seems to float through many disciplines in search of a key to tap others of like mind. He doesn’t mind the crankiness and quirkiness of his work, or even the castigation it receives. For him this is all par for the course for any new theory: the test is that people cannot remain neutral to its impact; they can only love it or hate it – never sit on the fence with its conceptions. Scott is an avid reader of the current literature on ‘intentionality’ and the sciences and philosophy of mind and consciousness. Over time he has honed his arsenal of tools and his approach to his own ignorance along a Socratic path. I admire his tenacity and forthrightness. He seems like the proverbial dog of Diogenes, always barking bitter truths at the masters, realizing that what he sees both exasperates and astounds him. Sometimes he wants to be kicked, hoping someone will disprove his hunches; yet, time and time again, the veritable panoply of oncomers fail to convince and fall by the wayside as he continues his search for the definitive marshaling of his theory.

In The Last Magic Show he alludes to the discrepancy between the appearance of consciousness and its scientific description, and to the need for a supplementary theory to tease out that appearance. But before tackling such a theory one wants Scott to first explain what he means by appearance and consciousness. Should we assume these terms mean something specific for him, or that they should be qualified by the history of their use in science or philosophy, or even by the accepted dictionary definitions (i.e., the OED)? Do we just assume a complicity between the writer and his audience, that we all share the same understanding of these terms and their heuristic use in the text? Why should I even raise this as an issue? Shouldn’t the text itself, in the movement of its words, bring out the meaning of these two important terms as Scott continues his discourse?

Since he does not make explicit what he means by such terms up front, we must continue our reading and see what he is up to. In the next paragraph he unloads a bomb: “The central assumption of the present paper is that any final theory of consciousness will involve some account of multimodal neural information integration.” He actually places a footnote here (and of course we will assume, for better or worse, that this is a published paper for a specific audience, not intended for the general reader who may or may not be knowledgeable of such terminology). And in the footnote he informs us that the underpinnings of much of his theory are idealizations of other theoretical work in the sciences: “Tononi’s Information Integration Theory of Consciousness (2012) and Edelman’s Dynamic Core Hypothesis (2005). The RS as proposed here is an idealization meant to draw out structural consequences perhaps belonging to any such system.”

Tononi starts with phenomenology, which ties him to the whole history of a specific set of philosophical presuppositions that I will not belabor. The point is that for Tononi consciousness is ‘integrated information’: a physical and quantifiable effect of the brain, not some substantive entity either immanent in or transcendent of the brain. Our consciousness is generated out of neural processes for specific evolutionary reasons. One can see the full lecture:

For Edelman and Tononi on the Dynamic Core Hypothesis one can read their Consciousness and Complexity paper here. I’ll leave this to the reader to pursue. A future blog post could delve into both of these in depth, but for the moment I’m dealing with R. Scott Bakker’s proposal. Yet, since these two men’s work seems to underpin his essay, it might be good to know just what they are proposing:

We propose that a large cluster of neuronal groups that together constitute, on a time scale of hundreds of milliseconds, a unified neural process of high complexity be termed the “dynamic core,” in order to emphasize both its integration and its constantly changing activity patterns. The dynamic core is a functional cluster – its participating neuronal groups are much more strongly interactive among themselves than with the rest of the brain. The dynamic core must also have high complexity – its global activity patterns must be selected within less than a second out of a very large repertoire.

The point being that consciousness is the effect of a specific set of interacting neurons termed the ‘dynamic core’ and its communicative processes in integrating messages or chemical transformations from the global brain as part of a specific functional dynamism of complex processes (feedback loops, energy transfer, chemical reactions, etc.). The crux of their goal is a theory that supports the “belief that a scientific explanation of consciousness is becoming increasingly feasible”. The point for them is to have a scientifically valid theory that relates the phenomenology of consciousness to a “distributed neural process that is both highly integrated and highly differentiated”.

Now Bakker in his reading sees consciousness as the product of a “Recursive System (RS) of some kind, an evolutionary twist that allows the human brain to factor its own operations into its environmental estimations and interventions” (here). The term ‘recursive system’ derives from the technical use made of it by Dynamic Core theory: “The dynamic core consists of a momentary subset of the thalamocortical system defined by active synapses. Positive feedback/reentrant signals circulate in the network of the dynamic core. The active synapses comprising the dynamic core continually change as the dynamic core updates recursively on the basis of about 100 ms.” (here) For Bakker the subjective personal identity of the first person is an illusion, a confusion of our experience of consciousness, which is actually a machine of neuronal activity blind to its own emergent processes – processes which become conscious only after they have emerged from the functioning of the Dynamic Core.
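Purely as an illustration of what recursive integration on a ~100 ms cycle might mean mechanically – and emphatically not Bakker’s, Tononi’s, or Edelman’s actual model; every name and number below is my own invention for the sketch – one can imagine a toy system that, on each tick, folds a small selection of its vast inputs together with its own previous output, while the selection machinery itself never appears in the result:

```python
# Toy sketch (my own, hypothetical): a "recursive system" that each
# ~100 ms tick integrates a few of its many inputs with its own prior
# output. The integrated summary carries no trace of the selection
# machinery -- the system is, in that limited sense, blind to itself.
import random

def tick(inputs, previous_summary, k=3):
    """Integrate k of the available inputs with the prior summary."""
    selected = random.sample(inputs, k)        # selection happens "in the dark"
    summary = sum(selected) / k + 0.5 * previous_summary
    return summary                             # only this reaches "experience"

random.seed(0)
summary = 0.0
for step in range(10):                         # ten ~100 ms cycles
    inputs = [random.gauss(0, 1) for _ in range(1000)]  # vast unconscious input
    summary = tick(inputs, summary)

print(round(summary, 3))
```

The only value the loop ever exposes is `summary`; the thousand-dimensional input and the sampling that whittles it down remain, so to speak, in the dark – a crude analogue of a process whose results are available while the processing itself is not.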

Yet, I wonder, is our awareness of being aware an illusion of this process as well? Or is it part of the actual dynamic process in its ongoing neuronal activity, being only one phase of this process and not the whole gamut? Why are we aware of our awareness to begin with? Is it because these recursive feedback loops interact at such high rates and complexity that we confuse the process for something else: a center of self and subjectivity? Knowing the facts of this brain activity does not take away the awareness of our awareness, so how do we explain this awareness of our consciousness to begin with? This so-called science tries to describe the process, not the outcome; but what interests us is not the material processes that, through some quirk in our natural history, brought about this blind brain over the evolutionary span. What we are interested in is an explanation of why we are aware at all. Why do we need consciousness to begin with? Why this confusion of self and world, this seeming sense of a self in the first place? If we accept that this is a lie, an illusion created by the process itself, then is it something useful, a happy accident of evolution? Explaining it in scientific terms doesn’t really get at the heart of the confusion so far as I can see. Knowing that we are just the fabrication of a blind brain immersed in sub-neural and neuronal processes explains only the bare minimum of the brain itself, but it doesn’t really get at consciousness at all. Instead it just complicates the matter with more questions.

Why did evolution bring about consciousness in just this specific form in humans and not in other creatures? Why are other creatures not aware of their awareness? Why humans? What brought about this strange and complicated separation between the brain and its awareness, and its ability to recursively process its own awareness? Why are there thinking minds to begin with? What in the evolutionary process brought about the need for thinking? And why just one specific species – if that is even true?

As Bakker informs us over and over, we “generally don’t possess the information we think we do!” Consciousness is just the tip of a great iceberg, an abyss of which we are completely unaware. OK, I’ll bite, and grant that we filter out almost 99% of the data below our conscious mind (of course we have no quantifiable measuring stick for this, scientific or otherwise). We seem to thrive quite nicely on our ignorance and let the physical brain do the rest in unconscious bliss. But it doesn’t take a rocket scientist to tell us that if we had all that information at our disposal in one moment we’d be unable to see the forest for the trees; we’d be lost in a maze of information. So what we discover is that consciousness is a filter, a selective center of a specific set of processes that integrates the information processed below the stream in the brain and brings to awareness only the specific information needed to get on with the physical process of life itself. Is this so hard to accept? Surely not! We all understand that we need only what will help us get on with our work. The crux is not in this; we only become aware of it as a problem when we are unable to retrieve the information needed, when the brain for medical or other reasons does not work, and in fact breaks down and is no longer able to integrate the information: then we call for the medical or psychological teams to investigate.

Of course Bakker is not unaware of this quagmire:

…at some point in our recent evolutionary past, perhaps coeval with the development of language, the human brain became more and more recursive, which is to say, more and more able to factor its own processes into its environmental interventions. Many different evolutionary fables may be told here, but the important thing (to stipulate at the very least) is that some twist of recursive information integration, by degrees or by leaps, led to human consciousness. Somehow, the brain developed the capacity to ‘see itself,’ more or less.

This is where my own questions start. Why? What event or strange evolutionary process brought this about? Why us and not other animals as well? If recursivity is the game, then why did evolution reserve it for just one species? There needs to be something more concrete than a ‘fable’ to explain this. Bakker again has a guess waiting in the wings: “the RS is an assemblage of ‘kluges,’ the slapdash result of haphazard mutations that produced some kind of reproductive benefit (Marcus, 2008).” But this is more surmise than actual answer – another scientific fable that confuses more than it enlightens us about the fabric of consciousness and its specific form in the human animal.

Yet Bakker concedes my own point, saying: “We have good reason to suppose that the information that makes it to consciousness is every bit as strategic as it is fragmental. We may only ‘see’ an absurd fraction of what is going on, but we can nevertheless assume that it’s the fraction that matters most …” Exactly! For whatever reason, the information we get is what we need to get on with our work, whatever that might be; and yet sometimes we need more, we need to invent other avenues of information that the brain lacks. What then? If the brain does not give us what we need, what then? Could this lead us to ask other questions as to why we formed the specific type of consciousness that we did? Is brain science the last answer, the be-all and end-all of a physical apprehension of these processes?

Sometimes I get the feeling that Bakker sees consciousness as a bit player, a passive pony in a parade that is for the most part hidden in the recesses of recursive processes totally beyond its control or sway. But is this true? Is consciousness just a passive receptacle, a sort of central void where all these recursive processes finally integrate and divulge their long labors in the unconscious brain?

The problem lies in the dual, ‘open-closed’ structure of the RS. As a natural processor, the RS is an informatic crossroads, continuously accessing information from and feeding information to its greater neural environment. As a consciousness generator, however, the RS is an informatic island: only the information that is integrated finds its way to conscious experience. This means that the actual functions subserved by the RS within the greater brain – the way it finds itself ‘plugged in’ – are no more accessible to consciousness than are the functions of the greater brain. And this suggests that consciousness likely suffers any number of profound and systematic misapprehensions.

He uses the metaphor ‘plugged in’ as if this dynamic core were a machine plugged into the greater databank of the brain, with consciousness totally blank and devoid of knowledge of the specific engine it is connected to. I sometimes feel like we are reading a new Lovecraft novel written by a scientist rather than a literary fantasist. And of course Bakker is that as well (no pun intended).

So ultimately we come to the crux of Bakker’s theory, BBT, or Blind Brain Theory: “Blind Brain Theory of the Appearance of Consciousness simply represents an attempt to think through this question of information and access in a principled way: to speculate on what our ‘conscious brain’ can and cannot see.”

So his actual theory is quite specific, more toned down than its portrayal in post after post on his blog: a speculative theory of the brain’s blindness to, and insight into, its own recursive processes. Short and sweet, yet infinitely complex in its actual goals. What I like about Bakker’s work so far is that he moves us beyond the quagmires of the present philosophical literature. Current philosophy, in its anti-representationalist and representationalist strains, Analytic or Continental, deals with the extremes of Subject or Object. With Badiou and Zizek we start from the ‘Subject’; with others – such as the SR or OOO gang – from ‘Objects’; and a multitude of those in between the two extremes measure the world in processes. I simplify, of course. But my drift is that those such as Zizek deal with the void of self, the abyss within, around which consciousness revolves like a satellite in recursive formation; while others like Graham Harman consider objects as withdrawn and unknowable, as recursive dynamic systems to which consciousness is totally blind. Bakker, on the other hand, coming out of a naturalistic scientific philosophical background, seeks in the terminology of the newer brain sciences a way to move us beyond the use of Subject and Object altogether.

The next question that arises is ‘Time’, and specifically the now of our conscious mind, the first-person singular illusion he speaks of. As he says, “Any theory that fails to account for it fails to explain a central structural feature of consciousness as it is experienced. It certainly speaks to the difficulty of consciousness that it takes one of the most vexing problems in the history of philosophy as a component!” For RS theory, time is nothing more than the integration point where the brain becomes conscious: this is the moment we experience as ‘now’. As Bakker would have it: “Our experience of time is an achievement. Our experience of nowness, on the other hand, is a structural side-effect. The same way our visual field is boundless and yet enclosed by an inability to see, our temporal field – this very moment now – is boundless and yet enclosed by an inability to time. This is what makes the now so perplexing, so difficult to grasp: it is what might be called an ‘occluded structural property of experience.’”

One could spend an essay or even a book on just what Time is and its relation to consciousness; it is, after all, one of the cornerstones of many philosophical debates. In the older Newtonian universe the spatio-temporal dimensions were extensive and contained in a passive receptacle. More recently Whitehead offered a more dynamic, cross-sectional theory. As most scientists know, experiments that might serve as bases for the construction of a physical theory, or as tests for its confirmation, are subject to the demand that standard conditions prevail, or that suitable correction factors be introduced, to ensure the consistency and comparability of the experimental results. Otherwise the experimental results would be one-time reports with no significance beyond the isolated experiments, certainly not beyond the domain of the peculiar conditions that prevail in them. Were there no assumption of standard conditions, theories would be constructed and confirmed with reference only to the peculiar conditions prevailing in the particular areas where the experimentation takes place.

I’m not a Whitehead expert, but I feel there is an important part of his work still to be investigated. In Process and Reality we discover that for him the physical and geometrical order of nature is described in terms of “a hierarchy of societies” (PR 147-50, 506-08). Basically, a “society” is a grouping of events which manifest a common characteristic, the presence of that characteristic being guaranteed by the relations which the events sustain. The physical and geometrical order of nature is constituted by at least three societies: “the society of pure extension,” “the geometric society,” and “the electromagnetic society.” The point to be noted is the relationship of the geometrical society and the electromagnetic society. The latter is embedded, so to speak, in the former, so that a determination of the variable physical quantities which characterize the electromagnetic society is obtained against a background of relationships which comprise a uniform metric structure:

The whole theory of the physical field is the interweaving of the individual peculiarities of actual occasions upon the background of systematic geometry. (PR 507)

[T]hese diversities and identities are correlated according to a systematic law expressible in terms of the systematic measurements derived from the geometric nexus. (PR 150)

When I think of the recursive embedding of these differing hierarchies of societies, I’m reminded of how consciousness too is embedded in a recursive nexus of processes of which it is unaware, but which can be measured through a determination of certain variable physical quanta against an analogous background of relationships that comprise the uniform metric structure of the global brain itself. The now is then nothing more than one of those ‘actual occasions’ upon which the background is woven. If one applied the exactitude of such geometrical precision to brain science, one might actually be able to systematically measure the peculiarities of consciousness itself in a scientific way. A testable theory!

Without going into every detail of Bakker’s essay, which I could not begin to do full justice to in one blog post, I will instead leave you with his parting words:

I sometimes fear that what we call ‘consciousness’ does not exist at all, that we ‘just are’ an integrative informatic process of a certain kind, possessing none of the characteristics we intuitively attribute to ourselves. Imagine all of your life amounting to nothing more than a series of distortions and illusions attending a recursive twist in some organism’s brain. For more than ten years I have been mulling ‘brain blindness,’ dreading it – even hating it. Threads of it appear in every novel I have written. And I still can’t quite bring myself to believe it.

This idea that we are machines, ‘integrative informatic processing’ machines at that, who have for so long assumed grandiose drivel about our personal worth and identity, seems to be Bakker’s worst nightmare come true. What it seems to me is that he has discovered what is coming toward us: the future belongs to something else… something not quite human, yet born of our own strange informatic processes. The cyborgs and artificial intelligences that we may one day give birth to may look back quaintly at this troubled angel of flesh and blood and wonder just what all the fuss was about anyway. Maybe the last magic show is not for us but for our electronic children. Wouldn’t that be a recursive twist for the comic book heroes of an age to come… or is that age upon us? Nightmares indeed…

48 thoughts on “R. Scott Bakker: The Last Magic Show”

  1. just passed some of RSB’s recent writing on Dennett along to Dan and if he responds (I don’t imagine he will comment on the blog tho that would be nice) I’ll let you know.


  2. Pingback: Circulating Flesh, Fractal Flesh & Phantom Flesh -Stelarc | synthetic_zero

  3. Awesome stuff. “The Last Magic Show” is a pretty enthymemetic portal into BBT, and I’m sure you’ve been too kind to the presentation if not the argument! There’s too much groping (both the dark and the reader) for my taste anymore. I feel like I have a far better handle on this strange space, now anyway. I’m glad that you foregrounded two of BBT’s cardinal virtues: a naturalistic theoretical means of at last putting the subject/object dichotomy to bed, and the way it redefines the landscape of the posthuman.

    Because this allows you to stress what is far and away the most important thing: the fact that the post-intentional blackboard is *blank.* My pessimistic prognostications are purely speculative. BBT is what allows you to diagnose and go through traditional philosophy. The rest is undiscovered country.


  4. Pingback: Soul-Making vs the Blind Brain Theory | Footnotes 2 Plato

  5. Pingback: The Philosopher’s Trilemma (is just the tip of the iceberg) | Three Pound Brain

  6. You’ve expanded this post substantially in the middle since it originally went up, I believe, or else I just went too fast through it the first time. That middle section nicely summarizes the dynamic core idea. Brain studies provide empirical support for a multimodal activation that varies dynamically through rapid iterative cycles, as you describe. There also seems to be a region of the cortex devoted to the continual integration of the multimodal inputs, though this function is decidedly not the homunculus sitting at the control panel of earlier models.

    You observe that, when human consciousness works, it attends only to what it needs and not to the vast quantities of information available to it. Yet humans do process a lot of information at an unconscious level, and this processing is distributed throughout the nervous system: if memory serves, the optic nerve can pass to the visual cortex only something like 5% of the information it receives from the rods and cones in the eyes. There’s a lot of pattern-matching and elimination of redundancy going on in the background all the time. Both conscious and unconscious cognition are resource-intensive activities, streamlined to maximize what economists might term cost-efficiency. To grind through all the informational details would require even bigger brains (that wouldn’t pass through the birth canal), longer processing times (that might often prove too slow in a rapidly-changing environment), and much more caloric intake.

    Why recursive reflexivity? Well, if you throw out the possibility of intentions a priori, then maybe self-awareness is epiphenomenal. But suppose when you get out of bed in the morning you want to check the weather outside. You need to monitor the environment — is this the right route to get from bedroom to front door? — as well as your bodily state — am I walking without falling down or veering off course? But you also need to keep your intention refreshed — why am I walking toward the front door? Most of this trimodal monitoring goes on unconsciously, but if you get confused — you’re in someone else’s house, say, so you’re not sure where the front door is — then the dynamic core can get activated, focusing on those few essential variables that will allow you to accomplish the intent. This would support your proposition that recursive self-awareness is fully integrated with iterative environmental monitoring.

    You note that, in order to understand how a system works, it can be helpful to contemplate the system when it’s not working properly. Last week my dad moved in with us. His cognitive capacities are severely compromised by aggressive Alzheimer’s. I’m going outside, he’ll announce. He can barely put one foot in front of another any more, not due to physical debilitation but because he can’t remember too well how walking is done. He gets lost between his bed and the front door, a voyage of maybe a hundred feet unblocked by any obstacles. And by the time he reaches the door he’s forgotten why he was going there in the first place.


    • I sympathize… yes, the malfunctions of the brain reveal the problems and challenges of self and identity, etc. While the brain worked, while the neurons that were once connected and channeling directives still fired correctly, no one stopped to question the brain and its role in the process of awareness. But like anything else, when something breaks we suddenly become fully attentive to that object and begin questioning its functions and operations.

      Sad to say, but it seems that, like anything else, our ‘knowledge’ of something (brain science, etc.) will always fall far short of truly understanding the thing in itself. We become aware of something (i.e., conscious of it) when it breaks. When it is working we never, for the most part, question its use or operation.

      The brain, like anything else on this planet, is a product of evolutionary processes that took place over eons of experimentation. That it took this form within humanity is not something we can lightly reduce to a theory such as BBT. BBT is really nothing more than the observation that the self does not exist in the way we have been taught over the past couple of millennia, but to me this is not something new, as R. Scott Bakker would have us believe. He is stuck for the most part in the illusion that science has uncovered something new. But if you have studied philosophy and religion, both East and West, there are many resemblances that harken back to this concept of the blind brain theory.

      In the East, through my practice of martial arts, there is a concept of No-Mind. In Eastern traditions the concept of no-mind (or no-self) means prior to thought, prior to desire, prior to any conceptualization whatsoever. It is discovered by stripping away all sensation, desire, concepts, intellection, volition, and awareness of “I.” Buddhism calls this mind the Buddha Nature, and much of Buddhist practice is aimed at its realization.

      Many people think this is mystical mumbo-jumbo, but in truth it’s the pragmatic awareness of the empty mind, of the self as illusion, of awareness as the empty vessel of awareness. Attachment to the objects of awareness shifts us into the illusion that we are this or that, when in truth we are nothing, nothing at all…

      Even such a current philosopher as Zizek harbors such a view and brings Buddhism to its defense:

      “This is why, for Buddhism, the point is not to discover one’s “true Self,” but to accept that there is no such thing, that the “Self ” as such is an illusion, an imposture. In more psychoanalytic terms: not only should one analyze resistances, but, ultimately, “there is really nothing but resistance to be analyzed; there is no true self waiting in the wings to be released.” The self is a disruptive, false, and, as such, unnecessary metaphor for the process of awareness and knowing: when we awaken to knowing, we realize that all that goes on in us is a flow of “thoughts without a thinker.””

      see: http://bigthink.com/postcards-from-zizek/slavoj-zizek-on-buddhism-and-the-self


      • “but to me this is not something new as R. Scott Bakker would have us believe.”

        I’ve heard this complaint (I call it the ‘No Big Whup’ criticism) a number of times over the years, and like I always say, the ‘self’ is just the easiest target: Buddhist thought is intentional through and through, as are all the Western ‘philosophies of suspicion.’ The ‘self’ is such a small corner of the problem! This is the same ‘intentional opportunism’ you find throughout all contemporary theory, Anglo-American as well. People want to be able to eliminate this or that without facing up to the prospect of global elimination. My challenge is always the same: show me a theoretical outlook critical of intentionality that doesn’t play this pick and choose (or worse, bait and switch) game?

        No takers so far.

        The charge of scientism is far and away the most common criticism, but the problem here is Theoretical Incompetence (What theoretical cognition can we trust outside the sciences?) and the related problem of the Big Fat Pessimistic Induction (Why should we think the prescientific discourses of the human will not be revolutionized in a manner similar to all previous prescientific discourses whose domain has been scientifically colonized?). Science provides actionable, high-dimensional information which in turn maximizes efficiencies and returns: with the reverse-engineering of bios, all becomes techne, and techne is mechanical. It becomes very hard to see how traditional, intentional forms of thinking can be anything more than a sop in such an environment – apologia for ever more efficiently managed masses. If theory is to have any hope of being ‘critical,’ it seems to me that it has to follow something like the route I’m taking. Pick and choose philosophies like Zizek’s, etc. are simply reservoirs for the dissipation of criticality, if you ask me.

        Once again, I’m open to counter-arguments.

      • @rsbakker

        Don’t think my comment was really a criticism of your stance, just that there are other worlds than pure science that have hit upon aspects of this conceptual framework before. How it has been implemented by differing schools from history to now is of course debatable.

        As for Zizek I keep wondering if you’ve actually read any of his work or if it is just hearsay on your part. Obviously you read philosophy in your younger years and came to a conclusion for yourself that it wasn’t for you in that sense. You seem to lump a lot of present philosophy in with the pomo crowd’s ‘philosophies of suspicion’… I don’t think I would call Zizek a bait-and-switch philosopher in your sense… He’s pretty straightforward about what he is up to, and most of it harkens back to Socrates’ stance that it is the questions we ask that are important, not the answers: if we can get the questions right the answers will come of their own accord.

        Your whole investment in mechanistic science – “Science provides actionable, high-dimensional information which in turn maximizes efficiencies and returns: with the reverse-engineering of bios, all becomes techne, and techne is mechanical.” – seems dubious at best. Obviously, on your account, humans will be obsolete in a few decades and the singularity will alleviate us of our humanity and there will be nothing but pure techne: the androids, cyborgs, and machinic beings of the future. Some would agree that this transhuman future is inevitable. All I can ask is: will those machines enjoy music: a concert on the Boston Common, a rock-and-roll jam session in Berkeley, a country-western dance in Tennessee, a Saturday night all-nighter in my Aunt’s kitchen in Louisiana? I mean, what will art of any type mean to these machines, or even to these stripped-down humans who seem more sociopathic and emotionless? Will they know joy or sadness, will they remember pain? Wasn’t it Nietzsche who said it was our pain, the cruelty of life, that gave us our humanity? What will happen to us when there is no pain? What of the brain when we give birth to machinic beings that no longer need fleshly brains? What kind of consciousness will this be? Have you become the prophet of the posthuman?

  7. “there are other worlds than pure science that have hit upon aspects of this conceptual framework before…” The Charvaka… maybe? The problem is that the ability to systematically distinguish between intentional and nonintentional concepts took quite some time to develop (another sign of metacognitive incapacity). Again, this is one of the issues I’m VERY interested in being wrong about. I understand that various thinkers have fastened on various intentional problematics as the ‘Problematic Ontological Assumption,’ but globally? You have to look to Anglo-American tradition to get close (and with Rorty, very close). What examples are you thinking of?

    “As for Zizek…” Not much, just bits and pieces, and a fairly deep reading of The Parallax View. Check out: http://rsbakker.wordpress.com/2013/01/23/zizek-hollywood-and-the-disenchantment-of-continental-philosophy/

    “All I can ask is will those machines enjoy music…” Who can say? But because machines already do enjoy music, if they don’t it will have nothing to do with the fact that they are machines. This is the horrific upshot.

  8. Pingback: Jonathan Edwards, Calvinist Neuroscientist | Ktismatics

  9. There are times when I feel ‘big whup’ about all this and times when it is exhilarating to read, either in its horrific dimension or in its resonance with my more pessimistic core. For me it always comes down to the pathological. Even if Scott is completely correct, the question for me is the pragmatic one of what to do with that truth. ktismatics’s father’s dementia is certainly one affecting example of what happens when the recursive systems of informatic integration break down, but no less are depression and anxiety.

    Anxiety is my preferred example in these discussions, following ideas around the existence of a neurocognitive behavioural inhibition system that functions via pragmatic heuristic decision-making around what information (in the form of either external or internal bodily stimuli) is necessary to the functioning of the body. When the criteria for informatic salience are disturbed or non-functional there can be an electrifying overstimulation of all the physiological systems responsible for the perceptual-behavioural circuitry (the unconsciously reflexive motor-arc of Merleau-Pontian phenomenology). This leads to too much information being tracked, an excess that cripples the body’s ability to respond adequately, triggering mis-attributions of threat or danger levels (this links to a “looming threat” model of panic). The upshot is that behavioural inhibition goes out the window: we over-respond, over-react. All of this occurs via a physiological-perceptual-environmental feedback that is mediated by informatic translations, that is, via the necessary compression of a direct experience that would be far too rich for the brain or any other organ to process. Anxiety is thus not so much existential as it is fully physiological – but nonetheless, consciousness in the form of cognition can be acted on, worked with, and tampered with so as to engender a return from the anxious series (I say series rather than “state” to emphasize its dynamism).

    This physiological circuit is fully machinic in the Deleuzian sense insofar as it is composed as an assemblage of various units of various scales, molar and molecular. Yet as ever, none of this actually obliterates the existential dimension of anxiety per se. It might say: “well look, that experience is an epiphenomenon generated by these processes…” but it doesn’t off-centre my experience of subjectivation. Whether or not my experience corresponds to a naturalistic scientific explanation is indifferent to that experience.

    I realise this is just to repeat an oft-stated criticism of Scott’s work, but the important emphasis for me rests on the pragmatic efficacy of the experience of subjectivation and the capacity to ignore the (increasingly likely to be) epiphenomenality of intentionality. Whether we see through all this, whether we are able to produce more prostheses (techne is nothing new in itself) such as deep basal stimulation in the treatment of depression, or transcranial stimulation in addictions, seems largely irrelevant to experience. One way of defining depression is precisely as the failure of the pragmatic capacity for self-delusion. How can I know I am a machine and hold onto my “self”? Because I know “I” am nothing but a convenient fiction – but the fictivity of myself recedes from my virtual working spaces because… it just isn’t needed.

    I also wonder how much of this – I realise it accords with certain strands of neuroscience – is just displacing self onto brain. The brain is the integrative organ for informatic flow, but as is pointed out above, a lot of this kind of integration is carried out throughout the distributed network of the nervous system. When we take into account that organs themselves do some of the work that contributes to integration (the neuronal tissue located in the gut?), and that much of our biochemical existence is literally outside us, being ‘transmitted’ or ‘radiated’ to other endocrine systems, can we really place so much emphasis on the centrality of the brain?

    The more interesting angle for me with all this is the way it points to bodies rather than brains, and even then to the way that bodies fail to be contained by themselves. I’d be interested in reconstructing a physiological philosophical tradition… and perhaps I still miss some essential point, but so much of the above seems consistent with both Nietzsche and his darker heir, E. M. Cioran. And there again, a lot of this still seems to sit pretty closely to folk like Evan Thompson, qua Steven’s comments on Buddhism.

    As a final comment, this seems to spell less the end of intentionality itself and more the end of a transcendental conception of intentionality as belonging to this “self-as-subject”. Of course, there is a vast philosophical territory that takes this identification as axiomatically wrong (Marx, but also Nietzsche and his later followers). It seems to me that rather than intentionality, we’re now left with a multiplication of intentionalities that compete, struggle with one another, hijack each other, cooperate together – and that these exist within and across organisms. For these intentionalities very different affective attachments or evaluations are made (so addicts can crave their addictive substance/state whilst also wanting to be clean, whilst also wanting to party, etc.).

    The question is then – and in a way that doesn’t specifically reference Vagel – whether there remains a post-intentional phenomenology?

    • My biggest beef with Bakker is that he wants to do away with all illusions as he sees them through the eyes of cutting-edge Science. Even Nietzsche admitted we need our illusions: they help us get on with the work of life. Do away with all illusions and you would live in a world of pure fear and hate, bound to the tyrannical dictates of machinic life. But what are illusions, anyway? Fictions, delusions, inventive tools to cope with the unknowns, strange devices to negotiate our way through life and windows onto reality. If, as Nietzsche and even the poet T.S. Eliot remind us, “humans cannot bear too much reality,” then where would Scott’s scientific worldview leave us? Heck, even Freud wanted to strip away the illusions, but was himself caught in the trap of his own mistaken scientism. Blake, the poet, said we are all bound to systems of thought and feeling, but will we be bound to those of our own making or to another’s?

      If, as Lacan, Zizek, Badiou, Meillassoux, and Johnston all agree, we are unhitched from the old mythologies of the Big Other, then do we want to replace it with a new Science as our Utopian Mythology? Are we giving ourselves over from one grand illusion to another?

      I tend to agree, with one caveat, with your statement “we’re now left with a multiplication of intentionalities that compete, struggle with one another, hijack each other, cooperate together- and that these exist within and across organisms.” Instead of the word ‘intentionalities’ being multiplied, I tend to see what Rorty once called ‘vocabularies’ as the definitive battleground of the near future. Science and Philosophy have long been both accomplices and enemies in many respects. But both seem to be seeking a new framework, a new vocabulary that will move us forward and give us a solid stance from which to investigate reality. This return to some form of ‘Realism’ in our time – after the long battles of the ‘Linguistic Turn’ in the last century – is not likely to be settled so quickly. We will probably see many vying vocabularies fighting it out in the marketplace of notions.

      All we can say for sure is that Bakker is placing his bets on Science rather than other institutions for his vocabularies, and specifically on the narrowed world of the brain sciences.

      • Anxiety will be obliterated. As little as we know, we already spend billions researching pharmacological hammers to dampen and deaden – what happens when we have the ability to excise it outright? We will excise it outright, or turn it into a pet so as to savour this or that element of our ‘ancestral phenodrome.’ The ‘vocabulary’ (I think this concept is too slim) of science is the one that enslaves environments to our existing evolutionary imperatives, so *of course* it’s going to carry the field – the way it always has. By the time those imperatives themselves have been dismantled, discussions like these will be little more than crayon scribbles.

        In other words, the ‘pragmatic’ (efficacious) dimension you both mention is actually a key element of my position. All my anti-messianic posturing is a consequence of the Big Fat Pessimistic Induction. Nobody talks about the problem of ‘scientism’ in biology or geology or astrophysics simply because in these domains of theoretical cognition science is the only game in town. The question I keep asking is what makes the domain of the ‘human’ anything special in this regard? If you agree we are nature all the way through…

        As a one-time Heideggerean I understand full well how ‘narrow’ the approach I’m taking sounds. But the question I keep asking is not so different than Adorno’s: What other socially efficacious (pragmatic) approach is there? Lemme know! I’ll sign up straightaway (given some shred of cognitive reliability)! I appreciate that all these ‘illusions’ (gear-grindings) are themselves efficacious, but my question is: How, aside from keeping professional philosophers employed (and legions of undergrads stupefied and New Age gurus well-sexed and so on and so on)?

        Science is the vehicle that delivered us to these straits, so it seems safe to assume that science is what will drive us through… or over, as the case might be. Once again, what else is there?

        You guys both know what neuroscientific researchers themselves sound like, how alien their discourse is to traditional means of understanding ourselves – and *their fields are just getting off the ground*! What will they sound like in ten years? Twenty years? The pragmatic question is the one possessing the most fundamental horrors, I think.

        “I also wonder how much of this- I realise it accords with certain strands of neuroscience- is just displacing self onto brain.”

        BBT’s focus on the brain (as opposed to body or environment) is a matter of complexity, not ontology. It’s the black box, is all. The ongoing challenge is one of how the brain plugs into the rest of nature, and us along with it. Once that challenge is overcome, the question becomes one of how to read the human across different scales of nature. The ‘human’ can be something that begins with the Big Bang, or the flush of dopamine during sex. Everything is physical, embodied, enmeshed.

        “It seems to me that rather than intentionality, we’re now left with a multiplication of intentionalities that compete, struggle with one another, hijack each other, cooperate together- and that these exist within and across organisms.”

        There are no intentionalities on BBT, just systems that can be more or less reliably cognized via different social heuristics (to talk of intentionalities as substantive is to succumb to the illusion of theoretical metacognitive access). What we will have are a plethora of systems that ‘componentialize’ the traditional ground of the human, criss-crossing it with ulterior functions, rendering individuals components of far greater systems.

  10. Let’s assume, Scott, that you’re right: that intentions are not efficacious, that they are illusory, that they are merely the self-deceptive euphemism humans apply to the causal chains driving their affects, thoughts, and behaviors. Now let’s suppose that people wise up, that they come to an awareness of their prior brain-blind self-deception. What then would be the expected effect of wielding Occam’s Razor, of exposing the intentionality illusion? I would presume that there would be no effect whatever. The only consequence would be that people attain a less-blind understanding of the way their brains have always functioned and continue to function. I.e., if intentionality doesn’t make anything happen, then awareness of this inefficacy would make no difference. Even giving up the habit of forming intentions wouldn’t make any difference, because intentions don’t make any difference anyhow. What am I missing here?

    • I guess I’m not entirely clear on the question. Intentional heuristics will be used so long as we stand on this side of posthumanity. We evolved these mechanisms for ‘good reason.’ What would really be transformed would be our theoretical second-order understanding of ourselves – and profoundly so. The folk will continue believing in their cherished conceptual fetishes as they always do. The industrial elites will abandon the last shreds of humanistic pretence to the extent that explicitly treating consumers as machines amounts to competitive advantages. The scientific elites will interrogate the ‘human’ unencumbered by traditional intentional occultisms like ‘content,’ ‘representation,’ and so on. The intellectual elites will war across these very lines, with those unable to abandon their metacognitive intuitions providing apologetic, humanistic rationalizations, and with those who can speaking a language that becomes ever more incomprehensible to traditional ears.

    • If we evolved intentionality for good reason, what would you say that reason is? Is it to keep us from becoming aware of our being pawns of the deterministic cause-effect chains, such that we preserve our sense of plenitude as decisive agents? Or did humans evolve intentions because they actually work? Presumably you believe that the self-deception of faux-intentionality is the adaptive mutation, keeping us plugging away as if we actually exert control over our own destinies. You’re acknowledging that even if this self-deception is exposed, most humans still won’t stop performing their intentionality routines. Agreed.

      You say that, by acknowledging that intentionality is a pervasive human self-deception, the industrial elites will change the way they deal with consumers and the scientific elites will change the way they conduct human research. These changes wouldn’t be intentionally planned, I presume. Marketing campaigns and scientific research projects respond passively to the causal effects of increased awareness that intentionality is illusory. Presumably this awareness would not be conscious, since in your view consciousness too is illusory. So an unconscious awareness of the illusion of conscious intentionality will cause marketers’ and scientists’ projects to be changed? Hmm.

      • “If we evolved intentionality for good reason, what would you say that reason is?”

        Since I suspect you’re referring to the ‘intentionality’ you think you intuit, I’ll rephrase the question to one of why we evolved the actual heuristics responsible: the inability of the brain to track neural complexities (self or other) the way it tracks environmental complexities. (I say this ad nauseam).

        Lots of straw in your second paragraph, I fear. My guess is that you can’t see how to think around what strikes you as the self-evident experience of your ‘efficacious intentionality.’ Why are you so inclined to trust these intuitions, especially given all the counter-intuitive things discovered so far?

      • Your rephrasing is fine. Are you saying that the self-deceiving intuition of intentionality confers survival value on a species that lacks accurate self-awareness? I.e., is faux intentionality a compensatory mechanism, not optimal but good enough to have passed the test of natural selection such that pretty much all well-functioning humans have this capacity? Or are you saying that the false intuition of intentionality is maladaptive?

        The second paragraph is just the logical consequence of your position. I’m mulling it over, not dismissing it. I don’t doubt that human decision-making can be influenced by factors outside of awareness, and that these factors can be manipulated. But if there’s no intentionality then even the elite scientists and manipulators can’t override this inability. Certainly the movements of science and economics are affected by forces outside of conscious awareness and control.

        Humans can offload all sorts of tasks for which they aren’t well-equipped naturally. Some devices compensate for physical limitations; others, for mental limitations. Is it possible for brain-blind humans to offload self-awareness of mental processes onto a device? Arguably some humans are already doing that, via scientific understanding of how human thinking works. Similarly, can intentionality-challenged humans offload intentionality onto artifacts that do a better job of weighing the relative pros and cons influencing specific decisions? I’d say that some humans are doing that already too.

        So maybe the so-called posthumans are the ones who have acknowledged their mental and volitional limitations and have learned to compensate for them prosthetically. Not unconscious zombies but the evolution, through artificial selection, of true human agency. Maybe these are the elite.

  11. The puzzle of intentionality seems to reside much farther upstream than the specifically human. Even the simplest organisms act in order to survive, thrive, and reproduce. Through natural selection, mutations that enhance survival and reproduction tend to be perpetuated. In short, the emergence and perpetuation of all life forms seem predicated on a low-level form of intentionality, even when those intentions aren’t the result of any sort of conscious or unconscious decision-making. In his 2012 book Incomplete Nature, Terrence Deacon calls this property of all living organisms “ententionality.”

    Other sorts of self-organizing systems — stream beds in ponds, convection currents in boiling water — emerge as a means of maximizing entropy. Once the energy gradient vis-a-vis the environment plateaus, the system self-disorganizes. Not so with life forms, which rely on cell walls and so on to preserve negentropy. If, in short, all life forms are ententional not just in self-organization but in self-perpetuation and reproduction, then organisms that through mutations can enhance their ententionality would have increased survival value. Human intentionality would thus be merely an augmentation of capabilities that all organisms possess rather than a marker of special-snowflake transcendence.

    • I’d agree with the idea that ‘Even the simplest organisms act in order to survive, thrive, and reproduce’, following Hans Jonas in the insistence that organisms have intentionality insofar as they are reaching beyond their own boundaries, evaluating, in order to maintain themselves, and doing so as an evaluation that their being ought to be maintained. Whether this is blind complex systems at work or intentionality seems almost a side question at this level. There is intentionality in experience, and the systems that are responsible for generating that experience. Might we also talk about eliminating sight by eliminating the experience of sight? But then what’s left?

      Still, if intentionality is something that is “added”, then the point stands: when humans experience themselves as lacking intentionality we tend to diagnose them with depression. The answer to this seems to be that once applied neuroscience has reached maturity we’d simply eradicate the neural coordinates of depression. So, in a sense, we’d introduce it to cure it?

      I’m a bit perturbed by the hope Scott is placing in this applied neuroscience. It reminds me of the eternal hopes placed in psychiatry, a discipline that is still seen as young, or at least not yet mature, by many of its advocates. The “pharmacological hammers” (very nice turn of phrase) that we currently employ don’t really work, or they work by producing all kinds of adverse effects that are integral to the functioning of the drugs but get labelled as “side-effects”, as if they were somehow incidental. Many of the drugs we’re using are still pretty old, and much of the research suggests the improvements made with newer-generation formulations have been massively overstated, or in a few cases more or less fabricated. I’m wary of thinking a fully loaded neuropsychiatry would necessarily be any better than this: a more sophisticated, endocolonisational form of ECT/chemical restraint.

      Perhaps, what with Steven’s speculations of a neurototalitarianism a while back, many of our responses might just amount to a fear of a more sophisticated version of THX 1138 happening.

      • I’m not sure I see the ‘hope,’ arran! But my guess is that what you mean is my assumption that neuroscience will successfully reverse engineer the human brain. I agree with all the problems you mention with psychiatry, but insofar as these problems primarily turn on mechanical intervention in the absence of mechanical knowledge, I’m unclear as to why this dynamic is doomed to be repeated once we have that mechanical knowledge. It’s like discrediting chemistry by appealing to the pseudoscientific follies of alchemy. It’s a whole different ballgame across the whole spectrum of brain-related pathology.

        Eric Schwitzgebel is one prominent philosopher of cognitive science who thinks the brain might be too complex to be ‘reverse engineered’ in the sense of stolen military technology, and he may be right. I’m just not sure it matters so long as the brain can be reverse engineered to the extent that reliable, specialized technological intervention (which is arguably already being done!) can be achieved. Psychiatry shows that we don’t have to know what we’re doing to try it.

        “There is intentionality in experience, and the systems that are responsible for generating that experience.” This is the very claim I’m denying, and therefore the very claim I’m challenging ANYONE to make stick.

        On the one hand, we have no *first-order* ‘experience of’ intentionality – ever. We have first-order experiences of X, which we subsequently characterize as ‘intentionality,’ given our metacognitive shortcomings. This is what makes BBT so devilishly difficult to counter: Either we just magically know X = intentionality via some kind of ‘metacognition without metacognition,’ or we metacognitively know X = intentionality, in which case you need to explain how any system in the brain possesses the capacity to accurately cognize any other system or process in the brain; meanwhile BBT can explain why it seems X = intentionality given the shortcomings of metacognition – the very ones we should expect!

        And on the other hand, to ontologize intentionality by claiming that certain brain systems *are* intentional just commits you to the morass of arguing original intentionality. BBT, meanwhile, requires only mechanics. It redeems none of our intuitions or hopes, and explains all that needs to be explained.

    • This is my nightmare, by the way: that life forms evolved as the most efficient entropy devices on our planet, accelerating the inevitable heat death of the universe. All organisms leach potential energy from their environments. Humans are incredibly adept at expending energy — maybe that’s what qualifies them as an advanced species.

    • The trick is to see how these powerfully intuitive ways to characterize living systems are a product of being a living system characterizing living systems, rather than those systems themselves. When we can’t track a mechanism mechanically, determine the progression of its states from the ‘bottom up,’ our brains automatically pattern its behaviour acausally using powerful heuristics adapted to do just this. Our activity and the activity of the system tracked are just as mechanical, but since we find ourselves stranded just with the comprehension, and no metacognitive inkling whatsoever of the complexities underwriting it, we make the mistake of ontologizing this comprehension. Intentionality becomes an ‘intentional thing’ requiring its own special metaphysics, and the resulting mysteries and conundrums form the marching band called philosophy of mind.

      So this is the sense in which intentionality is entirely real (as a family of heuristic strategies) and yet not ‘intentional’ at all. So approaches like Deacon’s (whom I’ve never read) which begin by trying to ontologize some spin on intentionality generally strike me as doomed: the heuristics involved have evolved to problem solve absent causal information (this is why intentionality seems impossible to naturalize), which is to say, to solve problem-ecologies quite different from the question of life, autopoiesis, and so on.

  12. I think that if you understand Rorty’s vocabularies as varieties of speech-acts/behaviors then you end up with a kind of ‘radical’ behaviorism (or enactivism) along the lines of BBT, but perhaps with more of an appreciation of aesthetic pleasures/possibilities. Rorty’s critique (Not All That Strange) of Dreyfus/Spinosa’s attempt to make something more of such things isn’t available open-access as far as I can tell, but here is some Dreyfus/Kelly vs Dennett to add to the mix:

    Click to access Heterophenomenology.pdf

  13. I’m with arranjames and Deacon on the intrinsic low-level intentionality inherent in all life forms. It’s not clear how or why it happened, but the drive to survive and to replicate does seem to characterize all known organisms. The neo-Darwinian paradigm of mutation and natural selection is predicated on this drive, and there’s plenty of empirical evidence backing it up. Discounting low-level ententionality would entail revolutionizing not just brain science but all of biology. In that broader life context it’s reasonable to explore the possibility that human intentionality is a naturally-selected survival skill assembled piece by piece on the more basic and universal ententional platform, in ways similar to other amazing human tricks like language and problem-solving. This isn’t to imply that intentionality = free will that transcends cause-effect, any more than language use entails participation in the eternal Logos.

    As to whether intentionality is merely an imaginary figment of blind metacognition, there’s plenty of observational and experimental evidence to support the intuition that people intend to do things. If asked to introspect about my skills I can claim to hop on one foot, or to solve algebra equations, or to cook a pretty good lamb curry. These self-reported insights can be put to the test: have me hop, solve, cook. So too with intentions: I can say this afternoon that I intend to cook lamb curry tonight for supper, and later you can come around my house to see whether it happened or not. You could, in fact, follow me around with a clipboard and write down, based on my verbalizations and behaviors, what you infer to have been my intentions during the course of the day. I suspect that there would be pretty high correlations between my self-reports and your observations.

    Dennett takes a cautiously pragmatic approach with his intentional stance: it’s adaptive to infer another’s intent if this inference helps predict the other’s actions. Inferring intent is also helpful in learning behaviors from others that can help me get what I want. E.g., I watch someone go to the fridge, open the door, look inside, come out with food, eat it. Through a sort of reverse engineering I infer that the person went to the fridge in order to get something to eat. Next time I want a snack I can imitate the fridge-raiding behavior I observed. Humans are far better at “aping” behaviors — learning goal-directed action sequences from others — than are members of other species. Most likely it’s because one can infer another’s intentions for performing that behavior and can replicate the behavior when at some later time one forms that same intent.

    Symbolic language use relies on shared intentionality. Experimental results show that the ability to infer another’s intentions develops at around 9-12 months of age. It’s this nascent ability to infer another’s intention to communicate that makes it possible for kids to start learning to use symbolic language at about the same age. You point at a tree and most other creatures, even mature ones, look at your finger. A 9-month-old human will look in the direction you point, toward the tree: the child infers your intention to draw its attention toward something else. Language acquisition develops through the child occupying a joint intentional-attentional space with an already-adept language user.

    And then there’s the tried-and-true method of observing humans whose ability to form intentions and to act on them has been disrupted by lesion or disease, and comparing how their verbal self-reports and behaviors differ from those of cognitively intact humans. Arranjames observes that one symptom of depression is an impaired ability to formulate intentions. As I mentioned earlier in the thread, I can make observations on my demented father all day long. Like last night, when he tried to pull his undershorts over his head like a shirt. My inference: he intended to get dressed, but couldn’t quite remember how it’s done.

      • But the ‘as if’ dimension is itself an intentional way of understanding the application of intentional heuristics. The whole of Dennett’s pragmatic house balances upon that one card, the assumption that it’s stances all the way down, not specialized cognitive devices every which way as I claim. Since we *know* the latter is the case, why bother with speculative mysteries of the former when talking about what’s really going on?

      • not sure that my “as-if” is different from yer “heuristic” and if so isn’t more than that, our tools are as good as the results they afford us. I just don’t think that we can ‘engineer’ social affairs like say our justice systems to keep up with the neuroscience, just as we can’t manage the politics related to climate science…

      • “not sure that my “as-if” is different from yer “heuristic” and if so isn’t more than that, our tools are as good as the results they afford us. I just don’t think that we can ‘engineer’ social affairs like say our justice systems to keep up with the neuroscience, just as we can’t manage the politics related to climate science”

        Then what is your ‘as-if’? I just don’t know how they could be more different. ‘Taking as’ elides all reference to the causal-mechanical, whereas heuristics are mechanical processes through and through. Short of BBT (which diagnoses it as a metacognitive artifact), no one has a clue as to how to cash ‘takings as’ in natural terms. ‘Takings as’ a la Dennett strands us with all the old mysteries only in a manner that makes them seem downright useful. What the hell is a ‘stance stance’? No one knows, and Dennett explicitly claims that he *doesn’t want to know*: http://rsbakker.wordpress.com/2013/11/29/skyhook-theory-intentional-systems-versus-blind-brains/

        I don’t think we’re going to be able to keep up either! Which is why the exploitation of legal lacunae, etc., will likely be rampant.

  14. Here’s a research protocol. A 9-month-old is sitting in a room with its parent. An adult walks into the room, places something on the table, and leaves. A second adult enters, picks up the object, moves it onto a shelf on the other side of the room, and leaves. The first adult re-enters the room, goes to the table where she placed the object, looks surprised, looks under the table. The adult looks toward the 9-month-old. The child points to the object, now located on the shelf where the second adult placed it. It seems plausible to interpret the child’s action as an inference that the first adult’s intention was to find the object she had placed on the table.

    • And this demonstrates the ontological reality of ‘intentions’ as opposed to the efficacy of intentional heuristics how?

      My hope at this point would be that the big question is why my questions are proving so difficult to answer, because they really shouldn’t be, especially if what you think is the case is anywhere near as obvious as you assume it to be.

      • Can you explain to me your distinction between an “intention” and an “intentional heuristic”? A heuristic is a technique for accomplishing something, so an intentional heuristic would be a technique for formulating or achieving an intention. So sure, the infant’s pointing behavior is a heuristic, I suppose. That it’s a heuristic related to intentions is inferred, not directly observed. But neither is gravity directly observed, is it?

        The 9-month-olds are prelinguistic, so they can’t formulate any sort of verbal representation of what they’re doing or why. The experiment uses environmental manipulations and observations of behavior to infer cognitive processes that precede, and that in all likelihood anticipate, language acquisition. Do you have an alternative hypothesis as to why the observing infant points out to the perplexed adult the object that has been moved from the table to the shelf? Do you have a suggested research protocol that would enable you to compare your hypothesis with the intentionality hypothesis?

        All hard questions are difficult to answer: that’s why so many scientists spend so much effort working on them. You seem to regard the research being done on intentionality in evolutionary biology and cognitive developmental psychology, a very small portion of which I’ve thumbnailed here, as irrelevant to your questions. You objected that intentionality is merely intuitive; I’ve offered some evidence supporting observational and intersubjective confirmation. If you regard this evidence as irrelevant to your questions then I can’t be of much further assistance.

  15. I’m posing another explanation of what’s happening that doesn’t require anything spooky. It’s really up to you to justify your more extravagant thesis that something ‘more than merely mechanical’ is going on, not me.

    On my account, patterns approximating adults placing/adults searching trigger those mechanistic systems adapted to ‘fast and frugal’ social problem solving. The infant represents nothing (though it likely recapitulates – copies without content – certain environmental information), least of all entities like ‘intentions.’ It simply possesses short cuts that enable it to form social-mechanical relations, to literally transform itself into a component of a superordinate biological system.

  16. “It’s really up to you to justify your more extravagant thesis that something ‘more than merely mechanical’ is going on, not me.”

    Who are you quoting with ‘more than merely mechanical’? Not me. I’m as interested in figuring out the cause-effect cascades for how intentions work as the next guy.

    I’m trying to make sense of your account. The question of the kid “representing” intentions has nothing to do with the task as posed. And “copies without content”? The kid made an inference from the given environmental information — that’s the whole point of the experimental task. Sure the kid forms social relations with the adult. What is the basis for the relation? Participation in the adult’s thwarted intent to find the object placed on the table. “Literally transform itself into a component of a superordinate biological system,” you say. You mean the participation in the adult’s intentional dilemma? Okay fine. Sure the kid is doing social problem solving. What problem is being solved? The kid has to infer why the adult is puzzled, and what the kid might be able to do to help the adult resolve that puzzlement. But hey, if the adult doesn’t want to look on the shelf where the kid is pointing, to give some tentative credence to the kid’s extravagant claim that the sought-for object is right over there on the shelf, it’s not really up to the kid to force the adult to look.

  17. “I’m as interested in figuring out the cause-effect cascades for how intentions work as the next guy.”

    I’m curious: What do you think the puzzle of intentional explanation versus causal explanation consists in, ktismatics? What’s the big whup about intentionality?

  18. I see what you’re doing there, Scott — you’re asking me to introspect. In light of your blind-brain skepticism I doubt you’d lend any credence to what I would report here. Besides, no doubt I’ve already overtaxed Noir’s charity in horning in so extensively on his engagement with you. Speaking of Noir, where have you gone Joe DiMaggio?

    Briefly though (won’t I ever shut up?), it seems clear that affective valences and environmental stimuli are what set the intentionality apparatus into motion. Your and Noir’s back-and-forth has served that function, drawing me into the exchange despite my prior intent not to do so. So in effect I’m here discussing will against my own will. Thank you for — and damn you for — engaging my attention.

    • On my blog I’m always happy to see others tango, so I’m guessing Stephen is the same!

      Historically, the problem is that upon reflection ‘intentional properties’ (such as ‘aboutness’ and ‘evaluability’) seem to so obviously belong to human cognition, and yet our every attempt to account for these properties in naturalistic (causal) terms generates conundrums. On BBT this is simply because these ‘properties’ pertain to heuristic systems adapted to solving social problems absent any causal information. So when metacognition ontologizes them as properties, they suddenly seem to belong to some kind of special noncausal, or at least not straightforwardly causal, order of reality.

      I actually don’t think there’s anything wrong with operationalizing intentional concepts in experimental contexts, so long as the metaphysical assumptions are checked at the door. The problem with insisting intentionality *as metacognized* is an actual property of brain-states or systems is that it sends researchers on wild goose chases, searching for some special emergent (in some mysterious sense) phenomena that are really just a figment of reflection.

      • “I actually don’t think there’s anything wrong with operationalizing intentional concepts in experimental contexts, so long as the metaphysical assumptions are checked at the door. The problem with insisting intentionality *as metacognized* is an actual property of brain-states or systems is that it sends researchers on wild goose chases, searching for some special emergent (in some mysterious sense) phenomena that are really just a figment of reflection”.
        see I think you and I are in general agreement here but I’ll keep tuned to see how the specifics play out.

  19. Conversations like these reinforce the conviction that I’ll be long dead before the academy can wrap its head around BBT. But just note, from a naturalistic standpoint, how *peculiar* their conversation is. In a brute sense, we’re talking about systems entering systematic relationships with other systems, about things bodily *doing* things to other things in ways that form greater systematic wholes. It’s a big, squishy mess – and whatever concepts are, they are functionally and constitutively embroiled in that mess. Now consider the arid terms employed by both Machery and Prinz (two of the more brilliant PoMs out there, I think): categorizations, references, representations as enabling constituents of ‘thoughts.’ There’s no squishy mess anywhere, because they are discussing MIND.

    So where has the squishy mess gone? Why has it vanished? How much of that squishy mess is crucial to activities we think involve concept mongering?

    Think about how basic these questions are, and think about how rarely they’re asked. Think about how both of them, including Machery (who’s a concept *skeptic*, in psychology at least), simply ASSUME the adequacy of the basic metacognitive palette that reflection and tradition have provided them. Think of how they assume the sufficiency of that palette, and that the function of empirical evidence is to sort between the alternatives it offers.

    The great minds have been cracking their skulls against their intuitions for thousands of years now. At the same time, the unreliability of metacognitive intuition is becoming more and more evident the more we discover.

    I’m not a betting man, but…

    • ha, for once I’m the glass half-full person as I was encouraged by Machery’s pluralism but I long ago gave up trying to convince academics that there are no such things as concepts tho I still fall into it on the intertubes sometimes…

  20. A nerdy joke came to mind on my morning walk, prompted in all likelihood by this thread. Two guys walk into a bar. “There are two kinds of people in the world,” the first guy says: “eliminativists and emergentists.” I’ll spare you the details, but by the end even the two guys and the bar have been eliminated.

    • Au contraire… a tell-tale sign was left and found by Detective Coal. A scrap of paper sitting carefully on the tip of a strange cube that said: “emerging, eliminating… e ” One will never know for sure what followed that last ‘e’.
