The Brain Electric: Merger of Mind and Machine

“We may actually for the first time be able to interact with the world in a nonmuscular manner,” Leuthardt said. “I’ve always needed muscle to communicate with you by moving my vocal cords or giving a hand expression or writing a note or painting a painting— anything. But that may not be the case anymore. So how does that change us? You unlock the mind and make it accessible to science and technology, and suddenly all this other stuff becomes possible. Everything changes. It’s a whole new palette for the human imagination.”
– Eric Leuthardt, in conversation with Malcolm Gay

Reading The Brain Electric: The Dramatic High-Tech Race to Merge Minds and Machines tonight, which explores the current research and experimental convergence of human and machine through BCIs (brain-computer interfaces). A quick quote as Eric Leuthardt, a scientist, studies a patient with Alzheimer’s disease at the Barnes-Jewish Hospital complex in St. Louis:

He was after the electric current of thought itself: the millions of electrical impulses, known as action potentials, that continuously volley between the brain’s estimated 100 billion neurons. Those neurons are connected by an estimated 100 trillion synapses, the slender electrochemical bridges that enable the cranium’s minute universe of cells to communicate with one another. Like an exponentially complicated form of Morse code, the cells of the brain exchange millions of action potentials at any moment, an electric language that physically underlies our every movement, thought, and sensation. These are not sentient thoughts, per se, but in sum this mysterious and crackling neural language is what makes consciousness possible— a sort of quantum programming code that remains all but unrecognizable to the consciousness it creates.

Leuthardt’s hope was to understand that language. Using electrodes to ferry Brookman’s neural signals into a nearby computer, he would forge what’s known as a brain-computer interface— a wildly intricate union of synapses and silicon that would grant his patient mental control over computers and machines. As this pulsing language streamed from Brookman’s brain, the machine’s algorithms would work to find repeated patterns of cellular activity. Each time Brookman would think, say, of lifting his left index finger, the neurons associated with that action would crackle to life in a consistent configuration. Working in real time, the computer would analyze those patterns, correlating them with specific commands— anything from re-creating the lifted finger in a robot hand to moving a cursor across a monitor or playing a video game. The end command hardly mattered: once Leuthardt’s computers had adequately decoded Brookman’s neural patterns— his thoughts— Leuthardt could conceivably link them to countless digital environments, granting Brookman mental control over everything from robotic appendages to Internet browsers.

It’s a union whose potential beggars the imagination: an unprecedented evolutionary step— effectively digitizing the body’s nervous system— that conjures images of not only mental access to everyday objects like computer networks, appliances, or the so-called Internet of things but also telekinetic communication between people and cyborg networks connected by the fundamental language of neural code.

Just as the body’s nervous system comprises both sensory and motor neurons, the wired brain offers an analogous two-way means of communication. Brookman’s brain-computer interface may give him control over computers, but it would also grant Leuthardt’s computers access to Brookman’s brain— a powerful research tool to study the behavior of individual neurons as well as deliver new forms of sensory information.1
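
The decoding Gay describes here is, at bottom, pattern classification: learn the repeated configuration of activity that accompanies each imagined action, then match new activity against those learned patterns. A minimal, purely illustrative sketch of that idea in Python follows; the channel count, command labels, and nearest-template rule are my own assumptions for the sake of the example, not anything taken from the book or from Leuthardt’s actual system.

```python
# Toy illustration of the pattern-matching idea behind a BCI decoder:
# learn a template of neural activity for each imagined command, then
# map new activity to whichever template it most resembles.
# All numbers and labels here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 32                                   # electrodes being listened to
COMMANDS = ["lift_finger", "move_cursor_left", "rest"]

def fake_trials(command_idx, n_trials=50):
    """Simulate firing-rate feature vectors recorded while the subject
    repeatedly imagines one action (a distinct pattern plus noise)."""
    base = np.zeros(N_CHANNELS)
    base[command_idx * 10:command_idx * 10 + 10] = 2.0
    return base + rng.normal(0.0, 1.0, size=(n_trials, N_CHANNELS))

training = {cmd: fake_trials(i) for i, cmd in enumerate(COMMANDS)}

# "Learning" the pattern = averaging each command's trials into a template.
templates = {cmd: trials.mean(axis=0) for cmd, trials in training.items()}

def decode(feature_vector):
    """Map a new window of neural features to the closest learned template."""
    distances = {cmd: np.linalg.norm(feature_vector - tpl)
                 for cmd, tpl in templates.items()}
    return min(distances, key=distances.get)

new_window = fake_trials(0, n_trials=1)[0]        # activity resembling "lift_finger"
print(decode(new_window))                         # usually prints: lift_finger
```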

Reading the book, one discovers just how primitive our research and discovery still are. It is one of those reporter-journalist books, a report on the nitty-gritty, in-your-face butcher shop of open brain surgery: the installation of experimental electrodes, the computer graphs and read-backs punching out the two-way communication of listening in on the buzz of hundreds of millions of neurons, the search for patterns in the noise of the brain’s own vat. Yet it is a beginning. It shows the competitive spirit among the many researchers: Andrew Schwartz of the University of Pittsburgh, Miguel Nicolelis of Duke University, and others. They are all in cutthroat competition for the next big DARPA grant and, of course, for the brass ring: the Nobel Prize that is almost certain to be awarded to the best of the lot.

There are a few success stories, but most of it is just the brick-and-mortar work of pioneers: a paraplegic woman thinks a robot arm into feeding her; a monkey whose arms and hands are restrained plays a video game; the brains of two rats are linked so that the actions of one affect the actions of the other. We discover just how little we yet know. The conditions and tools are almost worthless: the brain has some 100 billion neurons, but even the most sophisticated implants can monitor only a few hundred. Weeks or months after installation the immune system invariably attacks the implanted electrodes, rendering many of them useless, and the brain changes so rapidly that connections often have to be recalibrated daily to keep them working properly.

What’s exciting is to see that it’s being done, that science is doing what it does: seeking ways to overcome problems, generate data, discover new empirical functions, and explore the boundary zone between the mind and its possible interfaces with external systems, as well as the two-way manipulation and communication of those external systems with the brain. All of this bypasses the old mainstay, consciousness, working directly on the brain through electrodes and indirectly through the feedback loops, algorithms, coding, and decoding that build, on the computer screen, representations of the live transaction between brain and computer.
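
Put concretely, that feedback loop can be pictured as a simple cycle: acquire a window of signal, decode it into a command, act on the command, show the result back to the subject on the screen, and recalibrate when the decoding drifts (the daily recalibration mentioned above). The skeleton below is hypothetical; the function names are stubs standing in for real acquisition and decoding, not a description of any actual lab’s pipeline.

```python
# Hypothetical skeleton of a closed-loop BCI session. Every function here
# is a stub invented for illustration; a real system would read from
# implanted electrodes and drive a cursor or robotic limb.
import random

def acquire_window():
    """Stub: pretend to read a short window of multi-channel neural signal."""
    return [random.gauss(0.0, 1.0) for _ in range(32)]

def decode(window):
    """Stub: map the window to a command (here, a trivial sign test)."""
    return "move_left" if sum(window) < 0 else "move_right"

def act(command):
    """Stub: drive the effector and display the result to the subject."""
    print("executing:", command)

def needs_recalibration(step):
    """Stub: real systems retrain the decoder when accuracy drops,
    since electrode signals and the brain itself change day to day."""
    return step > 0 and step % 10 == 0

def recalibrate():
    print("recalibrating decoder against fresh calibration trials...")

for step in range(25):                # a short mock session
    window = acquire_window()         # brain -> computer
    command = decode(window)          # pattern -> command
    act(command)                      # computer -> world, feedback on screen
    if needs_recalibration(step):
        recalibrate()
```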

What we see is, like many other things in the history of the sciences, the first step toward something strange, bizarre, and potentially useful for future medical – or, more ominously, military – forms of BCI. Like many other R&D projects under way, this is just one piece in an ongoing puzzle to transform human/machine interaction in our future. The questions surrounding it turn on the ethical dimension such systems present going forward. Is this the first step toward a migration of the organic into the machinic? The expansion of mind into machine, the development of distributed or collective mindscapes shared by individuals across the ICT frontiers, the governance and military uses of such knowledge: all of it opens us to an endless set of problems and questions. The sciences are a two-edged sword that can be used for good or ill. Hopefully the new technologies emerging from this work will be used for the benefit of humans, but as we know from history, such technologies can also be turned to more militaristic and nefarious purposes. There’s always a fine line…


 

  1. Gay, Malcolm. The Brain Electric: The Dramatic High-Tech Race to Merge Minds and Machines. Farrar, Straus and Giroux, 2015. Kindle edition, locations 38-58.

7 thoughts on “The Brain Electric: Merger of Mind and Machine”

    • Yep, day by day, year by year, we learn more and more through empirical research about the actual processes and phenomenal underpinnings of our being… quite amazing stuff!

      I remember, when I was a teenager visiting the Library of Congress with my Dad, realizing that I’d never understand everything, that the tens of thousands, now millions, of books published each year would never again find a Renaissance Individual who could master such knowledge. Yet, of course, the whole notion of mastery has come under fire in our time, so that all this data accumulated in books, journals, academic publications, think-tanks, etc. just piles up like capital… but I wonder, with Bataille, who will spend this Capital? Why such excess without expenditure? Of course that is the point of all those librarians, scholars, critics, teachers, etc., isn’t it? They are the shamanic specialists of our secular culture, the mediators of this vast treasure trove, or at least that is what we’ve been taught was the old form of “humanistic” learning and education. But this is all dead now, if we are to believe our current philosophers: humanistic learning dead and gone… But I wonder, is it? We still see these academics wandering the halls of libraries, studying the etymologies of the old ways, pulling tidbits out of the old texts… I think sometimes the so-called non-human turn, etc. is just a way to construct a new space for academics to have a job… 🙂 The old humanistic learning goes on in secret, pragmatically and under the hood.


      • It really is. A few years ago I was working in Kansas on PTSD treatment regimes for the VA and was invited to check out the lab of a DARPA MD/PhD neurologist doing some very basic research into the physiology of neuroreceptors; his team included a mathematician, a tooling engineer, a chemist, and two part-time coders. Slowly but steadily they are piecing it all together.


      • When you think about it, this reverse engineering of brain into code and back again into representations on a screen is philosophy made concrete and pragmatic. I think it’s this, more than anything, that Bakker is getting at when he says the sciences are making all the chatter obsolete. We may never have a complete explanation of what consciousness “is,” but we don’t need an explanation of the “is,” only the “does”: how it works, not how it is… So the descriptions will be of the process, of what works, and of how to use that knowledge for good (health) or ill (military).


      • Indeed, they are still interpreting their work/results as they go, so there is no real way out of hermeneutics, but as you say a kind of pragmatism (we don’t need an explanation of the “is,” only the “does”: how it works, not how it is) could go a long way, and really the engineering has a life of its own.

        http://www.cbc.ca/radio/spark/308-designing-for-focus-understanding-sarcasm-and-more-1.3425667/can-an-algorithm-detect-sarcasm-better-than-you-1.3426064


      • Yep, it’s all in the patterns… and those can be turned into math, into algorithms, into logic, into code/de-code, representations, diagrams, maps, filters, tools… In some ways the feedback loops of consciousness and self-reflexivity have now been externalized into the feedback loops of empirical/mathematical ontologies: pragmatic empirical praxis, software-mathematical translation and re-presentation into live brain feeds, analytical data, etc. In this sense we are doing the impossible: enabling a third-person view of consciousness from the Outside in, a view of the very material processes themselves through the feedback loops of actual empirical and mathematical translation into computational and functional systems. So what we could not attain by way of philosophical, internal self-reflexivity, we are attaining by way of the scientific method of experimentation and pattern-matching algorithms. As Bakker implies, we are internally blind to our physical processes, but externally, through empirical means, we are attaining just that: a view onto these very processes through direct empirical intervention and implants, and the transformation and translation of this into mathematical, software-based digitization and re-presentation, the whole process of the feedback loop seen in real time and then stored as memory for breakdown and de-coding into analytics and synthetic forms of interpretive or hermeneutical knowledge.

        Yea, for me personally there has never been a problem of the sciences vs. philosophy. Why? Because the two are after different things. The sciences are, as you say, pragmatic and empirical, seeking what works and does; while philosophy is theoretical and seeks some abstract metaphysical explanation of the “is,” “thisness,” etc., the ontological or epistemological conundrums of conceptuality. That’s why the C.P. Snow two-culture divide will probably always remain, except that in the real world, the business world, the sciences produce something that is economically viable and profitable, while philosophy is just part of our humanistic heritage, nice to have around but hardly something to build a corporation around. Obviously people read these books to get inspiration, but then they seek concrete ways to put it to work in real-world empirical ways. That’s just the way things are…

        All my years as a Software Engineer and then Architect were not paid for by my philosophical outlook or knowledge, even if those did inform my conceptual framework for such pragmatic work; it was the actual empirical and mathematical knowledge and coding, the translation of business processes and analytical-systems knowledge, that paid my way. Pragmatism will win every time in the real world of work. Probably why pragmatism became the American philosophy of capitalism… signs, semiosis, etc. were turned to use and work rather than to abstraction and philosophical contemplation.

        Even Object-Oriented Programming is pragmatic, though it is studied logically within a framework of set theory, the Object being the empty placeholder or base class from which all the others arise. So math and philosophy inform such systems thinking, but the work itself is empirical and analytical in the details – more diagrammatical and anti-representational than interpretable. One doesn’t interpret code; one constructs it or de-codes it… that’s why Guattari sometimes makes sense, against the Idealism of Lacan.
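
        A quick concrete version of that point in Python, where every class implicitly descends from the built-in `object` base class; the domain classes below are invented purely for illustration:

        ```python
        # In Python every class ultimately derives from the built-in `object`,
        # the near-empty base "placeholder" from which the rest of a hierarchy
        # is constructed. The classes below are invented for illustration.
        class Electrode:                     # implicitly: class Electrode(object)
            def __init__(self, channel):
                self.channel = channel

        class CorticalElectrode(Electrode):  # everything else arises from the base
            def region(self):
                return "motor cortex"

        e = CorticalElectrode(channel=7)
        print(isinstance(e, Electrode), isinstance(e, object))  # True True
        print(CorticalElectrode.__mro__)     # the chain back down to `object`
        ```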

