Grand Unified Theory of the Brain?

Neuroscientist Karl Friston and his colleagues have proposed a mathematical law that some are claiming is the nearest thing yet to a grand unified theory of the brain.
– Stanislas Dehaene

Carrying on a conversation with my friend Scott Bakker of Three-Pound Brain led him to mention Stanislas Dehaene, who a couple of years back had pointed to the work of Karl Friston – perhaps closer than anyone else to providing a solid framework for the neurosciences going forward. In his paper on the free energy principle (here: A free energy principle for the brain: pdf) Friston lays out the basics of the concept:

By formulating Helmholtz’s ideas about perception, in terms of modern-day theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts: using constructs from statistical physics, the problems of inferring the causes of sensory input and learning the causal structure of their generation can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organisation and responses.

As Gregory T. Huang reports in Is this a unified theory of the brain? (or here: complete article): What we still don’t have, though, is a way to bring all these pieces together to create an overarching theory of how the brain works. Despite decades of research, neuroscientists have never been able to produce their own equivalent of Schrödinger’s equation in quantum mechanics or Einstein’s E=mc² – a powerful, concise mathematical law that encapsulates how the brain works. Nor do they have a plausible road map towards a “theory of everything”, like string theory in physics. Surely if we can get so close to explaining the universe, the human brain can’t be that hard to crack? Continuing, he states:

Until now none of their ideas has been general or testable enough to arouse much excitement in straight neuroscience. But a group from University College London (UCL) may have broken the deadlock. Neuroscientist Karl Friston and his colleagues have proposed a mathematical law that some are claiming is the nearest thing yet to a grand unified theory of the brain. From this single law, Friston’s group claims to be able to explain almost everything about our grey matter.

It’s a controversial claim, but one that’s starting to make people sit up and take notice. Friston’s work has made Stanislas Dehaene, a noted neuroscientist and psychologist at the College of France in Paris, change his mind about whether a Schrödinger equation for the brain might exist. Like most neuroscientists, Dehaene had been pessimistic – but not any more. “It is the first time that we have had a theory of this strength, breadth and depth in cognitive neuroscience,” he says.

Friston’s ideas build on an existing theory known as the “Bayesian brain”, which conceptualises the brain as a probability machine that constantly makes predictions about the world and then updates them based on what it senses.

The idea was born in 1983, when Geoffrey Hinton of the University of Toronto in Canada and Terry Sejnowski, then at Johns Hopkins University in Baltimore, Maryland, suggested that the brain could be seen as a machine that makes decisions based on the uncertainties of the outside world. In the 1990s, other researchers proposed that the brain represents knowledge of the world in terms of probabilities. Instead of estimating the distance to an object as a number, for instance, the brain would treat it as a range of possible values, some more likely than others.
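That probabilistic estimate can be given a minimal sketch, assuming a Gaussian prior belief about an object’s distance and a noisy Gaussian measurement (my illustration, not code from any of the papers discussed):

```python
# Conjugate Gaussian update: combine a prior belief about a distance
# with a noisy sensory measurement via Bayes' rule.

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Posterior precision is the sum of the prior and observation
    precisions; the posterior mean is their precision-weighted average."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Prior belief: the object is about 10 m away, but uncertain (variance 4).
# A sharper sensory reading of 12 m (variance 1) pulls the belief toward 12.
mean, var = bayes_update(10.0, 4.0, 12.0, 1.0)
print(mean, var)  # the posterior mean lies between prior and observation
```

The sharper measurement dominates the vague prior, which is exactly the “range of possible values, some more likely than others” picture: the brain never commits to a single number, only to an updated distribution.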

This notion of using values as predictive parameters and variables in a probabilistic system of inference made me think of the recent AI deep-learning system that beat the European Go champion. As the AlphaGo paper (Silver et al., Nature, 2016) puts it:

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
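The interplay between policy and value networks described above can be caricatured with a toy version of the PUCT move-selection rule used in such Monte Carlo tree searches (the moves, priors, values, and visit counts below are invented for illustration; this is in no way AlphaGo’s actual code):

```python
import math

def puct_score(q_value, prior, parent_visits, visits, c_puct=1.5):
    """Score a candidate move: its value estimate plus an exploration
    bonus weighted by the policy prior and discounted by visit count."""
    return q_value + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

# Hypothetical candidate moves: (value estimate, policy prior, visits so far)
moves = {
    "D4":  (0.52, 0.40, 10),
    "Q16": (0.55, 0.35, 12),
    "K10": (0.48, 0.25, 3),
}
total_visits = sum(v for _, _, v in moves.values())
best = max(moves, key=lambda m: puct_score(moves[m][0], moves[m][1],
                                           total_visits, moves[m][2]))
print(best)  # the rarely visited move earns an exploration bonus
```

The policy network’s prior steers the search toward plausible moves, the value network’s estimate grades positions without playing them out, and the visit-count discount keeps the search from fixating on one line of play.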

Two works that go in depth on the notion of the predictive mind are Jacob Hohwy’s The Predictive Mind and Andy Clark’s Surfing Uncertainty: Prediction, Action, and the Embodied Mind. As Hohwy explains it, this new theory of the brain as a probabilistic machine – a predictive and anticipatory system – offers the neurosciences a larger framework that ties the many singular strands of experimental and experiential knowledge of the brain onto a new footing:

A new theory is taking hold in neuroscience. The theory is increasingly being used to interpret and drive experimental and theoretical studies, and it is finding its way into many other domains of research on the mind. It is the theory that the brain is a sophisticated hypothesis-testing mechanism, which is constantly involved in minimizing the error of its predictions of the sensory input it receives from the world. This mechanism is meant to explain perception and action and everything mental in between. It is an attractive theory because powerful theoretical arguments support it. It is also attractive because more and more empirical evidence is beginning to point in its favour. It has enormous unifying power and yet it can explain in detail too.1

Andy Clark will describe this minimization of error this way: “one of the brain’s key tricks, it now seems, is to implement dumb processes that correct a certain kind of error: error in the multi-layered prediction of input. In mammalian brains, such errors look to be corrected within a cascade of cortical processing events in which higher-level systems attempt to predict the inputs to lower level ones on the basis of their own emerging models of the causal structure of the world (i.e. the signal source). Errors in predicting lower level inputs cause the higher-level models to adapt so as to reduce the discrepancy. Operating over a plethora of linked higher-level models, the upshot is a brain that encodes a rich body of information about the source of the signals that regularly perturb it.”2 He’ll go on to describe Friston’s top-down system:
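The error-correcting cascade Clark describes can be caricatured in a few lines: a “higher level” holds one model parameter, predicts the incoming signal, and nudges itself to shrink the residual (a toy sketch of prediction-error minimization, not Friston’s actual scheme):

```python
def minimize_prediction_error(inputs, guess=0.0, lr=0.1, sweeps=200):
    """Repeatedly predict each input and adapt the model so the
    discrepancy (prediction error) shrinks."""
    for _ in range(sweeps):
        for x in inputs:
            error = x - guess    # only the error flows "upward"
            guess += lr * error  # the top-down model adapts to reduce it
    return guess

# A stream of noisy sensory samples around a true hidden cause of ~5.0
stream = [4.8, 5.2, 5.1, 4.9, 5.0]
estimate = minimize_prediction_error(stream)
print(round(estimate, 2))
```

The model converges on the hidden cause of the signal without that cause ever being handed to it directly – the “dumb process” of error correction is doing all the work.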

the generative model providing the ‘top-down’ predictions is here doing much of the more traditionally ‘perceptual’ work, with the bottom-up driving signals really providing a kind of ongoing feedback on their activity (by fitting or failing to fit, the cascade of downward-flowing predictions). This procedure combines ‘top-down’ and ‘bottom–up’ influences in an especially delicate and potent fashion, and leads to the development of neurons that exhibit a “selectivity that is not intrinsic to the area but depends on interactions across levels of a processing hierarchy” (Friston (2003) p.1349). Hierarchical predictive coding delivers, that is to say, a processing regime in which context-sensitivity is fundamental and pervasive. (ibid., p. 26)

As Hohwy comments, “one important and, probably, unfashionable thing that this theory tells us about the mind is that perception is indirect…[…]…what we perceive is the brain’s best hypothesis, as embodied in a high-level generative model, about the causes in the outer world.” (Hohwy, p. 322) This notion of “indirect” perceptual access to the world aligns with many of the new speculative realist and materialist notions of our ontological status as well. (I’ll not go into details here.) Clark, commenting on this, states:

There is something right about this. The bulk of our daily perceptual contact with the world, if these models are on the mark, is determined as much by our expectations concerning the sensed scene as by the driving signals themselves. Even more strikingly, the forward flow of sensory information consists only in the propagation of error signals, while richly contentful predictions flow downwards, interacting in complex non-linear fashions via the web of reciprocal connections. (Clark, p. 56)

Quoting Bubic, Clark remarks that “an expected event does not need to be explicitly represented or communicated to higher cortical areas which have processed all of its relevant features prior to its occurrence” (ibid., p. 56), going on to say:

If this is indeed the case, then the role of perceptual contact with the world is only to check and when necessary correct the brain’s best guessing as to what is out there. This is a challenging vision, since it suggests that our expectations are in some important sense the primary source of all the contents of our perceptions, even though such contents are constantly being checked, nuanced, and selected by the prediction error signals consequent upon the driving sensory input. (ibid., p. 56)

Rather than representing reality, we infer it through a continuous feedback of forecasting, a Bayesian perceptual grid: “the percept – even in the case of various effects and illusions – is an accurate estimation of the most likely real-world source or property given noisy sensory evidence and the statistical distribution, within some relevant sample, of real-world causes.” (ibid., p. 57)

Yet, as Clark affirms, the upshot of this theory is that “a full account of human cognition cannot hope to ‘jump’ directly from the basic organizing principles of action-oriented predictive processing to an account of the full (and in some ways idiosyncratic) shape of human thought and reason.” (ibid., p. 62) He concludes, saying,

What emerges instead is a kind of natural alliance. The basic organizing principles highlighted by action-oriented predictive processing make us superbly sensitive to the structure and statistics of the training environment. But our human training environments are now so thoroughly artificial, and our explicit forms of reasoning so deeply infected by various forms of external symbolic scaffolding, that understanding distinctively human cognition demands a multiply hybrid approach. Such an approach would combine the deep computational insights coming from probabilistic generative approaches (among which figure action-oriented predictive processing) with solid neuroscientific conjecture and with a full appreciation of the way our many self-structured environments alter and transform the problem spaces of human reason. (ibid., p. 62)

We’ll have to keep our eyes on this… interesting times in the neurosciences.

  1. Hohwy, Jacob. The Predictive Mind. Oxford: Oxford University Press, 2013.
  2. Clark, Andy. “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.” Behavioral and Brain Sciences 36, no. 3 (2013).

(Note: this was a scheduled post! I’ll be back end of week!)
