How many times has a new technological invention changed the course of history, creating new social, political, philosophical – and, yes, even religious – views about ourselves and the universe? One could recite a litany of inventions that have had both a material and an immaterial impact upon our world and the way we perceive it.
Think of it this way. Before the launch of the Hubble Space Telescope (HST) in the 1990s, we thought we had a fairly accurate picture of the formation and eventual heat-death of the universe. But with the launch of this new technological wonder, scientists were able for the first time to study aspects of the universe that had up to that moment been closed off in speculation and theory.
Before the launch of this telescope, one thing seemed fairly certain about the expansion of the Universe: it might have enough energy density to stop its expansion and recollapse, or it might have so little energy density that it would never stop expanding, but gravity was certain to slow the expansion as time went on. Granted, the slowing had not been observed, but, theoretically, the Universe had to slow. The key here is that it was all theory. No one had actually been able to observe what was going on. Instead we developed elaborate mathematical models to describe what we did know rather than what we didn’t know.
But with the launch of this telescope, scientists, instead of being bound to an armchair philosophy of math and theory, were able to get a front-row seat and open a window onto the great outdoors of being. What they discovered in their observations of very distant supernovae is that, a long time ago, the Universe was actually expanding more slowly than it is today. So the expansion of the Universe has not been slowing due to gravity, as everyone thought; it has been accelerating. No one expected this; no one knew how to explain it. But something was causing it.
But what was this mysterious X? No one had any idea. Yet, as they began readjusting their theories to meet the truth of what they were observing, they discovered even more paradoxical truths: the major part of our universe is made up of something other than matter. Yes, you heard me. What these scientists realized is that matter – our phenomenal world of rocks and dust, stars and galaxies – made up only 5% of the known universe. But if their mathematical calculations were correct, what was the unknown stuff that made up the other 95% of the universe?
What these scientists discovered, as it turns out, is that roughly 68% of the Universe is dark energy, and another 27% is dark matter. The rest – everything on Earth, everything ever observed with all of our instruments, all normal matter – adds up to less than 5% of the Universe. One can find all kinds of information on this on the web; I particularly liked the National Geographic breakdown. Of course these names were given because what they mask is not really something we know anything about at all. Nothing. All we know is that the math is correct – that there is a quantified certainty that something exists behind these unknown knowns. But exactly what this something that is less than nothing might be is not known. Oh, sure, there are several theories, but no proof for any of them… again, everything is speculation based on theoretical mathematics rather than empirical verification. Many countries are spending millions of dollars on detecting this mysterious unknown. China is entering the race to detect dark matter in a big way, with a huge facility in Sichuan province set to begin collecting data in the coming weeks. (see Space)
The point I originally wanted to make is not the astounding truth of these two new aspects of the universe, but how technology impacts the way we view the universe itself. Up to this time neither scientists nor philosophers could give a detailed explanation of our universe. All we had were educated speculations based on a limited set of known facts. It was from these that we built up our pictures and representations of the universe.
This same thing is happening now with the advent of neuroimaging technologies since the 1970s. After centuries of brain inquiry and research, these new technologies gave neuropsychologists and neuroscientists images of living, functioning brains. In other words, we no longer needed to speculate about what was happening internally in our minds, perceptions, etc. We had indirect access to the living processes themselves through these neuroimaging systems.
The two main types of neuroimaging technologies are structural and functional imaging. Structural imaging provides images of the brain’s anatomical structure; this type of imaging helps in the diagnosis of brain injury and of certain diseases. Functional imaging provides images of the brain as patients complete tasks, such as solving math problems, reading, or responding to stimuli such as sounds or flashing lights. The area or areas of the brain involved in completing or responding to these tasks “light up,” giving researchers a visual 3-D view of the parts of the brain engaged by each type of task.
So many of the speculations concerning the mind that had been the bread and butter of philosophers of Mind for centuries are now part of the technological toolset of scientists and doctors. Yet we have barely scratched the surface of the social, political, religious, and ethical impact of these technologies and how they are changing our view of the human. Both scientists and philosophers are scrambling to revise their empirical and systematic understanding of the human under the impact of these technologies.
One of the issues is description itself. How do we frame the relevant data being exposed by these neuroimaging technologies? As Bickle and Mandik tell us:
Given that philosophy of neuroscience, as other branches of philosophy of science, has both descriptive and normative aims, it is critical to develop methods for accurate estimation of current norms and practices in neuroscience. Appeals to intuition will not suffice, nor will single paradigm case studies do the job because those case studies may fail to be representative.1
On Amazon alone I found a few hundred books on various aspects of this new technological world of the neurosciences and the impact of neuroimaging systems. Yet, in the process of uncovering the best of these works, I discovered the usual mix of pop-cultural reference blended with expertise, along with shoddy conceptuality. It always seems that people love to cushion the effects of technology’s impact rather than giving us the straight story.
I know my friend R. Scott Bakker loves to remind me that the neurosciences will give us what philosophers only dreamed of: the truth about the Mind/Brain. But with every new book I read by a reputable scientist I become more and more disillusioned, not by the scientific findings, but by the fact that scientists with the best intentions (ah! that word, intention) try to convey the conceptual truth of what they are discovering, yet invariably fall back into descriptions that use worn-out metaphysical jargon, tropes, and metaphors that confuse and abuse the issue rather than clarifying the actual facts of their findings. Then one turns to other commentators to get the clarification that was not forthcoming in the original rendition of the finding.
So to whom do we go for the narrative facts of the issue? The scientist, the philosopher, or some middle-party science journalist who can fuse the two? Is there an answer? Since not all of us have the scientific credentials or background to study the actual first-hand data ourselves, shall we be bound to some second-hand appraisal of this data, filtered through the lens of some scientist’s or philosopher’s framework? Or can we develop a shared framework that the educated public can use to know what is of value? Isn’t this an age-old problem?
I know that in ages past – at least for literature and culture – we had this educated creature called the literary critic, who was able to filter the public validity of a work and present us with the best and brightest of the lot. So instead of reading 500 books that repeat each other’s findings in various modes of expertise, we could discover the best “authority” and most equitable purveyor of this knowledge. Of course nowadays people frown on such thinking as anti-democratic and elitist, so instead we have anyone and everyone as their own DIY expert. What to do?
Maybe I should wait for some technological cyber-mind, some AI of the neo-knowledge set, to rise up out of the dead world of the Smithsonian library, able to sift through the remains of human knowledge in the blink of an eye: one who will then speak to me in some alien register of the stupidity of all our learning. Then give me the monstrous truth.
1. Bickle, John, Mandik, Peter, and Landreth, Anthony, “The Philosophy of Neuroscience”, The Stanford Encyclopedia of Philosophy (Summer 2012 Edition), Edward N. Zalta (ed.).