How Technology Shapes Us


How many times has a new technological invention changed the course of history, creating new social, political, and philosophical – and, yes, even religious – views of ourselves and the universe? One could recite a litany of inventions that have had both a material and an immaterial impact on our world and the way we perceive it.

Think of it this way. Before the launch of the Hubble Space Telescope (HST) in 1990, we thought we had a fairly accurate picture of the formation and eventual heat-death of the universe. But with the launch of this new technological wonder, scientists were able for the first time to study aspects of the universe that had until then been closed off in speculation and theory.

Before the launch of this telescope, one thing seemed fairly certain about the expansion of the Universe. It might have enough energy density to stop its expansion and recollapse, or it might have so little energy density that it would never stop expanding, but gravity was certain to slow the expansion as time went on. Granted, the slowing had not been observed, but, theoretically, the expansion had to slow. The key here is that it was all theory. No one had actually been able to observe what was going on. Instead, we developed elaborate mathematical models to describe what we did know rather than what we didn’t.

But with the launch of this telescope, scientists, instead of being bound to an armchair philosophy of math and theory, were able to get a front-row seat and open a window onto the great outdoors of being. What they discovered in their observations of very distant supernovae is that, a long time ago, the Universe was actually expanding more slowly than it is today. So the expansion of the Universe has not been slowing due to gravity, as everyone thought; it has been accelerating. No one expected this; no one knew how to explain it. But something was causing it.

But what was this mysterious X that was causing it? No one had any idea. Yet, as they began readjusting their theories to fit what they were observing, they discovered even more paradoxical truths: the major part of our universe is made up of something other than matter. Yes, you heard me. What these scientists realized is that matter, our phenomenal world of rocks and dust, stars and galaxies, makes up only 5% of the known universe. But if their mathematical calculations were correct, then what is the unknown stuff that makes up the other 95%?

What these scientists discovered, as it turns out, is that roughly 68% of the Universe is dark energy and another 27% is dark matter. The rest – everything on Earth, everything ever observed with all of our instruments, all normal matter – adds up to less than 5% of the Universe. One can find all kinds of information on this on the web; I particularly liked the National Geographic breakdown. Of course, these names were given because what they mask is not really something we know anything about at all. Nothing. All we know is that the math is correct: there is a quantified certainty that something exists behind these unknown knowns. But exactly what this something that is less than nothing might be, no one knows. Oh sure, they have several theories, but no proof for them… again, everything is speculation based on theoretical mathematics rather than empirical verification. Many countries are spending millions of dollars on detecting this mysterious unknown. China is entering the race to detect dark matter in a big way, with a huge facility in Sichuan province set to begin collecting data in the coming weeks. (see Space)

The point I originally wanted to make is not the astounding truth of these two new aspects of the universe, but how technology shapes the way we view the universe itself. Up to this time, neither scientists nor philosophers could give a detailed explanation of our universe. All we had were educated speculations based on a limited set of known facts. It was from these that we built up our pictures and representations of the universe.

The same thing has been happening since the advent of neuroimaging technologies in the 1970s. After centuries of brain inquiry and research, these new technologies gave neuropsychologists and neuroscientists images of living, functioning brains. In other words, we no longer needed to speculate about what was happening internally in our minds and perceptions. We had indirect access to the living processes themselves through these neuroimaging systems.

The two main types of neuroimaging are structural and functional imaging. Structural imaging provides images of the brain’s anatomical structure, which helps in the diagnosis of brain injury and of certain diseases. Functional imaging provides images of the brain as patients complete tasks, such as solving math problems, reading, or responding to stimuli such as sounds or flashing lights. The area or areas of the brain involved in completing or responding to these tasks “light up,” giving researchers a visual 3-D view of the parts of the brain engaged by each type of task.

So many of the speculations concerning the mind that had been the bread and butter of philosophers of Mind for centuries are now part of the technological toolset of scientists and doctors. Yet we have barely scratched the surface of the social, political, religious, and ethical impact of these technologies and how they are changing our view of the human. Both scientists and philosophers are scrambling to revise their empirical and systematic understanding of the human under the impact of these technologies.

One of the issues is description itself. How should we frame the relevant data being exposed by these neuroimaging technologies? As Bickle and Mandik tell us:

Given that philosophy of neuroscience, as other branches of philosophy of science, has both descriptive and normative aims, it is critical to develop methods for accurate estimation of current norms and practices in neuroscience. Appeals to intuition will not suffice, nor will single paradigm case studies do the job because those case studies may fail to be representative.1

On Amazon alone I found a few hundred books on various aspects of this new technological world of the neurosciences and the impact of neuroimaging systems. Yet, in the process of sifting for the best of these works, I discovered the usual mix of pop-cultural reference and expertise, along with shoddy conceptuality. It always seems that people would rather cushion the effects of technology’s impact than give it to us straight.

I know my friend R. Scott Bakker loves to remind me that the neurosciences will give us what philosophers only dreamed of: the truth about the Mind/Brain. But with every new book I read by a reputable scientist I become more and more disillusioned, not by the scientific findings, but by the fact that scientists with the best intentions (ah! that word, intention) try to convey the conceptual truth of what they are discovering, yet invariably fall back into descriptions built from worn-out metaphysical jargon, tropes, and metaphors that confuse and abuse the issue rather than clarifying the actual facts of their findings. Then one turns to other commentators to get the clarification that was not forthcoming in the original rendition of the finding.

So who do we go to for the narrative facts of the issue? The scientist, the philosopher, or some middle-party science journalist who can fuse the two? Is there an answer? Since not all of us have the scientific credentials or background to study the actual first-hand data ourselves, shall we be bound to some second-hand appraisal of this data, whether through the lens of a scientist’s or a philosopher’s framework? Or can we develop a shared framework that the educated public can use to know what is of value? Isn’t this an age-old problem?

I know that in ages past – at least for literature and culture – we had this educated creature called the literary critic, who was able to filter the public validity of a work and present us with the best and brightest of the lot. So instead of reading 500 books that repeat each other’s findings in various modes of expertise, we could discover the best “authority” and most equitable purveyor of this knowledge. Of course, nowadays people frown on such thinking as anti-democratic and elitist, so instead we have anyone and everyone as their own DIY expert. What to do?

Maybe I should wait for some technological cyber-mind, some AI of the neo-knowledge set, to rise up out of the dead world of the Smithsonian library, one able to sift through the remains of human knowledge in the blink of an eye, who will then speak to me in some alien register of the stupidity of all our learning. Then give me the monstrous truth.

1. Bickle, John, Mandik, Peter, and Landreth, Anthony, “The Philosophy of Neuroscience”, The Stanford Encyclopedia of Philosophy (Summer 2012 Edition), Edward N. Zalta (ed.).

Are Robo Workers taking your job?

“It’s getting harder to find people to work on farms in the US – robo-farmers are shifting plants and could soon be picking strawberries in their place…”
Harvey, the robot farmer fixing the US labour shortage (New Scientist)

With cow-milking robots taking over conglomerate farms like Borden’s, where even the cows enjoy the new automated systems and seem happier and more contented, one wonders why it took so long. I mean, we don’t need humans for this manual labor anymore, do we? All those people can find other jobs now, can’t they?

Derek Thompson tells us that “machines and technology have been replacing our jobs for about as long as the concept of a “job” has existed. In the early 1800s, British textile workers called the Luddites launched a series of massive protests against fancy new spinning machines and looms. They had a point. These machines worked better than people worked alone. They did steal jobs. But eventually, these dreaded machines and the rest of the industrial revolution made the vast majority of workers much richer by making us all more productive.” And now, he says: “But since machines are starting to take over not just farm jobs and factory jobs, but also white-collar professions, there’s a spookier question. What happens if machines can do so many jobs that we just run out of work? What if software eats the legal industry? What if robots start doing the work of doctors? What if they start cooking and serving all the food in restaurants? And driving all of our cars? And stocking all of our warehouses? And manning all of our retail floors? Today we can comfort ourselves with the knowledge that robots are really good at repetitive tasks and we’re really good at managing them. But what if artificial intelligence rises to the point that robots are better at managing robots?”

A recent survey on nbcnews.com lists nine jobs in which robots will slowly replace humans in the near future: pharmacists, lawyers and paralegals, drivers, astronauts, store clerks, soldiers, babysitters, rescuers, and sportswriters and other reporters. Quite a list, don’t you think? Well, we might finally get some neutral news at last, huh? And all those money-grubbing legal fees from bumpkin lawyers will now go to feeding the bot. But what about that friendly sixteen-year-old needing extra cash for school lunches and dates: are we going to let a metal can take their place? Not I, said the cracker.

Andrew McAfee of MIT, co-author with Erik Brynjolfsson of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, tells us (see here) that the near future holds three basic scenarios for such a takeover. In scenario one, automation hits the economy, and it might take a while to work itself out, but in the end we reach a happy equilibrium. In scenario two, we see successive waves: artificial intelligence, automated driving that will impact people who drive for a living, robotics that will impact manufacturing; here the problem is a bit worse, because it will be difficult for the economy to keep adjusting and for workers to keep retraining. In scenario three, we finally transition into a science-fiction economy, where you just don’t need a lot of labor.

That last scenario sounds a lot like Marx’s original option: “Labor equals exploitation: This is the logical prerequisite and historical result of capitalist civilization. From here there is no point of return. Workers have no time for the dignity of labor.” (see Struggle Against Labor) But one wonders: will there come a day when our progeny, the robots, demand the same?