Jobs and robots
One recent report suggests that machine learning and new AI personal assistants will cost the U.S. market alone a net 7 per cent of jobs over the next ten years. According to Forrester, 16 per cent of US jobs will be lost over the next decade as a result of the rise of artificial intelligence and technology, although it also believes that 13.6 million jobs will be created during that time due to the same trend, for a net loss of about 7 per cent.
“The Future of Jobs, 2025: Working Side by Side with Robots” was written by J.P. Gownder, vice president and principal analyst serving Infrastructure & Operations Professionals at Forrester, who attempts to dispel the often-quoted study from Oxford academics Frey and Osborne, which found that almost half (47 per cent) of US jobs would be exposed to losses from computerisation.
Gownder wrote in a blog about the study: “Cultural anxieties about robots (as seen in the novel Robopocalypse, or the Battlestar Galactica reboot) create an atmosphere in which people readily believe the worst case scenario. But the scariest numbers have the least specific timeframes and outcomes associated with them; even Frey and Osborne write of their estimate that at-risk jobs are merely ‘potentially automatable’ (emphasis mine) and that their timeframe is ‘over some unspecified number of years, perhaps a decade or two.’ And aggregate economic productivity numbers don’t suggest that automation is moving the needle toward human redundancy.”
Predictive pattern recognition and fashion markets
In a report on Digits, Amir Mizroch notes that Amazon is once again expanding its Big Data initiative. This time it is rolling out its sophisticated big-data-crunching platform for developers in Europe, and it is also hiring scientists for its research teams in New York and Berlin who specialize in getting machines to do things like make sales predictions and detect fraud.
“Machine-learning software predicts what a customer is likely to do in the next five seconds or in the next five weeks. It’s pattern recognition at scale,” said Ralf Herbrich, European Union director of machine learning at Amazon and managing director of the Amazon Development Center in Germany. Herbrich previously worked at Microsoft Research and Facebook, showing how AI experts are in high demand at global tech firms.
What’s happening is the infiltration of AI platforms for commercial use into a global corporate market that Amazon is planning to monopolize if it can. As Amazon describes it, its new platform of machine learning algorithms offers a “set of visualization tools and wizards that allow nontechnical users to create predictions based on their businesses’ historical data.”
Aliyun, the cloud computing unit of Alibaba Group, is launching an artificial intelligence service that it claims is the first in China. Called DT PAI, the platform combines algorithms used by Alibaba with machine and deep learning techniques and presents them in a simple drag-and-drop interface. Aliyun says developers can use DT PAI to predict user behavior without having to write new code.
The platform’s applications, however, go beyond e-commerce. For example, the genomic research institute BGI used it in 2013 to sequence genes more quickly. ODPS, the Alibaba data-processing service underlying DT PAI, has also been used to track weather patterns and pharmaceutical drugs sold in China.
“In the past, the field of artificial intelligence was only open to a very small number of qualified developers and required the use of specialized tools. Such an approach was prone to error and redundancy,” said Aliyun senior product expert Xiao Wei in a press release. “However, DT PAI allows developers with little or no experience in the field to construct a data application from scratch in a much shorter period of time. What used to take days can be completed within minutes.”
The moment that nontechnical users, such as mid-level managers, stockbrokers, sales executives, specialized scientists, schools, civic governments, and military and NSA analysts, can put such platforms to work, we are going to see a great influx of revenue driving further exponential growth in this niche market. Whether we like it or not, what drives technology, as it always has, is economics. If AI emerges out of this, it will be due to this influx from the market-driven economy, not the academic and scientific treadmill of disinterested knowledge. Today technology and its innovation come out of the private, competitive systems of corporate capitalism: companies like Amazon, or collaborators on DARPA systems. The government is aligned with big corporate systems for its war machine and other technological initiatives.
As Mizroch reports, Amazon already has several machine-learning research groups, in cities such as Bangalore, Seattle, Palo Alto, Calif., and Berlin. It also has a speech-recognition team in Aachen, Germany. Amazon’s move brings it in line with other tech giants like Microsoft and Facebook, which also have AI research labs in New York.
As if they had read William Gibson’s novel Pattern Recognition and were following it into the fashion industry, we discover that scientists at the New York unit will focus on demand forecasting: predicting how likely it is that a customer will select a product shown in a search query, and how likely that customer is then to click on a related link or purchase the product, Herbrich said.
The forecasting is expected to focus at first on fashion—apparel and shoes—a highly seasonal market segment that is hard to predict based on historical data. Styles of clothes may differ widely from season to season and year to year, and fashion shows in one year largely determine what retailers will sell in the next year’s seasons.
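Demand forecasting of this kind usually comes down to estimating a purchase or click probability from behavioral features. As a rough sketch of the idea only, not Amazon's actual system (the feature names and weights below are invented for illustration), a minimal logistic-style scorer might look like this:

```python
import math

# Toy purchase-likelihood scorer. The features and weights are invented
# for illustration; a real system would learn them from historical data.
WEIGHTS = {
    "times_viewed": 0.8,             # how often the customer saw the item
    "in_season": 1.5,                # 1 if the item matches the current season
    "past_purchases_in_category": 0.6,
    "bias": -3.0,
}

def purchase_probability(features):
    """Logistic model: p = 1 / (1 + exp(-(bias + sum(w_i * x_i))))."""
    score = WEIGHTS["bias"]
    for name, value in features.items():
        score += WEIGHTS.get(name, 0.0) * value
    return 1.0 / (1.0 + math.exp(-score))

# An engaged, in-season customer scores a high probability; an empty
# feature vector falls back to the low base rate set by the bias term.
customer = {"times_viewed": 3, "in_season": 1, "past_purchases_in_category": 2}
p = purchase_probability(customer)
```

A real forecaster would fit the weights to historical data, which is precisely why seasonal markets like fashion are hard: last year's data is a poor guide to this year's styles.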
Will these smart machines become the new prognosticators and forecasters of trends in our near future? Will the clothes you wear come from such predictions? Will the next step be machines that not only predict what we will wear but actually take over the whole fashion industry from inception to final product, creating, manufacturing, marketing, and even – oh, no… – becoming buyers themselves of such products? Will there even be a need for us in such a machine-owned and machine-driven market economy? A sort of closed system of feedback loops that integrates every last segment and niche market into its self-reflecting and self-organizing systems?
We learn from Jillian Ward that Facebook Inc. is testing a personal digital assistant to run inside its Messenger mobile application, an early step toward a challenge to artificial intelligence-based services like Apple Inc.’s Siri, Google Inc.’s Google Now and Microsoft Corp.’s Cortana.
As she tells us the service, called M, can carry out tasks on a user’s behalf, such as making purchases, booking appointments and travel, or sending gifts, according to a posting on the social network by David Marcus, Facebook’s vice president of messaging products. It’s being tested with a few hundred people in the San Francisco Bay Area to start, said Facebook spokesman Ari Entin.
It seems as if the new assistants will do everything for us, leaving us in a realm of pure boredom, free to be happy and mindless. But of course all this happiness of digital assistants taking over the things we are too lazy to do for ourselves will come with a price. The less we do for ourselves, the more we will forget the human element of personal choice and decision. Our machines will begin to make more and more of the decisions in our lives. But what does this really mean for us?
In the Mobile Robot Laboratory, Thomas Collins collaborates with researchers in the College of Computing to create machines that can make complex decisions. They are exploring two new applications in a study funded by the Defense Advanced Research Projects Agency (DARPA). Researchers are teaching the robots how to search through rooms for biological hazards, and perhaps to find, intercept and destroy a moving enemy tank on the battlefield. The robots perform the tasks on their own. No one uses a joystick to guide them.
Some university robot labs focus on low-level performance, such as movement guidance systems. Others work to achieve higher-level reasoning in machines. But researchers in Georgia Tech’s robot program are pioneering efforts to integrate those separate levels of functioning to design behavior-based robotics for both military and private-sector applications.
“Our goal is to create intelligence by combining reflexive behaviors with cognitive functioning,” explains Ronald Arkin, a Regents’ professor of computer science and director of the lab. “This involves the issue of understanding intelligence itself. Is it complex? Or just an illusion of complexity?”
The task of building knowledge and awareness for machines is huge. Consider the different kinds of behavior humans use when driving their cars. People can motor along without being conscious of actively driving (reflexive behavior), but that changes if they get lost. Then they think about how to navigate (cognitive reasoning). “We are figuring out how to make robot architecture both act and ‘think,’ using learned and acquired skills,” adds Arkin, who specializes in the development of high-level, behavior-based robotic software. He builds it using abstract behaviors that capture both sensing and acting, but can be reasoned about as separate pieces of intelligence. Arkin’s approach is influenced by psychology and neuroscience.
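The reflexive-versus-cognitive split Arkin describes can be sketched as a two-layer controller. This is a schematic illustration only, not Georgia Tech's actual architecture; the sensor keys and action names here are invented:

```python
# Schematic two-layer behavior-based controller: a fast reflexive layer
# handles routine situations, and a slower "cognitive" layer is engaged
# only when the reflexes no longer suffice (e.g. the robot is lost).

def reflexive_layer(sensors):
    """Cheap stimulus-response rules, always available."""
    if sensors.get("obstacle_ahead"):
        return "turn_left"
    return "move_forward"

def cognitive_layer(sensors, goal):
    """Deliberative planning, invoked only when needed."""
    # A real planner would search a map; here we just name the intent.
    return f"plan_route_to_{goal}"

def select_action(sensors, goal):
    # The robot is "lost" when it can no longer localize itself,
    # so routine reflexes give way to deliberation.
    if sensors.get("lost"):
        return cognitive_layer(sensors, goal)
    return reflexive_layer(sensors)
```

The design choice mirrors the driving example: the cheap reflexive layer runs by default, and the expensive deliberative layer takes over only when routine behavior breaks down.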
In related work, Collins is collaborating with Assistant Professors Tucker Balch and Frank Dellaert and research scientist Daniel Walker, all from the College of Computing. The researchers are developing a colony of 100 small robots to simulate a large-scale system that may include humans, robots and other machines.
“It is inevitable that our machines will become smarter at anticipating our needs and performing their functions without frequent human intervention,” Collins predicts. Sophisticated robots “are still a long way off, not just because of the intelligence issues, but also because of problems with power storage, locomotion capabilities, perception and other limitations. But basic androids will probably happen eventually – quite possibly within the lifetimes of today’s children.”
One might look at this like the automobile industry. Go back to the early 1900s and watch a documentary of the emergence of automobiles out of Ford’s early assembly lines with their Model Ts, and then through all the various stages, improvements, testings, and side industries of racing enthusiasts, until we reach our own age of electric cars. Think of large robotic collegiums where humans and robots learn, interact and study together in cooperatives. Or consider the notion that, as with horse racing, dog racing and automobile racing, we will see machines, whether enhanced cyborg mergers or pure robotics, in challenges for the masses. Will the mindless masses of the future be sitting in global arenas cheering on their favorite machines as they duel to the death? The gladiatorial arenas of a new dark enlightenment?
Is it possible for an autonomous machine to make moral judgments that are in line with human judgment? As Bill Christiansen reports, machines are coming of age in ethical decision-making too. The paper Modelling Morality with Prospective Logic was written by Luís Moniz Pereira of the Universidade Nova de Lisboa in Portugal and Ari Saptawijaya of the Universitas Indonesia. The authors declare that morality is no longer the exclusive realm of human philosophers.
In his article Christiansen uses the trolley problem to discuss such ethical decisions and the complexity that machine learning will entail in reaching such judgments. The general form of the problem is this: There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:
(1) Do nothing, and the trolley kills the five people on the main track.
(2) Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the correct choice? The authors of the paper claim that they have been successful in modeling these difficult moral problems in computer logic. They accomplished this feat by resolving the hidden rules that people use in making moral judgments and then modeling them for the computer using prospective logic programs.
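Pereira and Saptawijaya's actual system is a Prolog-based prospective logic program; as a crude stand-in for the consequence-comparison step only (the action names and outcome numbers below are my own), the utilitarian reading of the trolley case can be sketched as:

```python
# Crude utilitarian stand-in for the paper's prospective logic model:
# enumerate the available actions, project their consequences, and pick
# the one that minimizes deaths. This captures only the consequence-
# comparison step, not the logic programming machinery.

ACTIONS = {
    "do_nothing": {"deaths": 5, "agent_intervenes": False},
    "pull_lever": {"deaths": 1, "agent_intervenes": True},
}

def choose_action(actions):
    """Select the action whose projected outcome kills the fewest people."""
    return min(actions, key=lambda name: actions[name]["deaths"])
```

A fuller model would also encode the “hidden rules” the authors resolved, such as the doctrine of double effect, which distinguishes harm caused as a side effect from harm used as a means.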
What Christiansen suggests is that we begin a careful analysis of the pulp fiction of SciFi, which is littered with thousands of scenarios of this type, and in which men and women devoted to testing such futuristic scenarios have already accumulated, at least in the mindscapes of imagination, the questions raised by such dilemmas if not their solutions.
To me, two factors will need to be resolved in this scenario: 1) the semiotic factor, the actual linguistic aspect of machine learning, symbol manipulation, interpretation, and so on; and 2) the external behavioral factor, the sense-based systems that robotics will need to develop to internalize the vast array of data at the experiential level of reality. Given the battle in philosophy and the sciences over just what reality is in itself, robotics will likely mimic our current “common sense” view and then incorporate the, so to speak, “scientific image,” the concept-based systematic view, in an ongoing algorithmic update. This suggests building a dual system that allows for both our everyday sense of reality and the more prosthetic capabilities that machinic intelligences will have access to, such as vision spanning the full spectrum of light and visual cues. Unlike humans, machines will not be bound to our limited range of sound, vision, touch, scent, or physical capability. These machines will have enhanced capacities we as humans have only dreamed about but will never have, except insofar as we, too, become more immersed in our own machines as cyborgs.
Of course that brings back the ethical dilemma. Will we enter a stage where some humans become enhanced through pharmaceuticals, biogenetic transformation, or machinic merger, while others decide to remain old-style humans of limited range and vision? What kind of social systems will arise out of such bifurcations? Will the old-style humans ultimately become members of a dying species as more and more people see the benefits of enhancement or cyborg immersion? Will the humans who take on these new posthuman forms become, as the immortalists predict, part of a new elite? Will such enhancements be offered only to the most rich and powerful, while their cost remains out of reach for those left behind? Will corporations dangle such gifts of enhancement or cyborgization before employees as benefits, if they are willing to bind their lives and minds to the dictates of their new masters?
R. Scott Bakker, in his excellent Artificial Intelligence as Socio-Cognitive Pollution, raises both the legal and the moral dimension of the debate. In a paragraph that brings together Eric Schwitzgebel’s and Ryan Calo’s respective stances in the legal and moral debates surrounding AI, he relates:
…where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo, of course, is interested in how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedence underwrites all legal decision making. So for Calo, the problem is bound to be more one of adapting existing legal tools than constituting new ones (though he certainly recognizes the dimension). How do we accommodate AIs within our existing set of legal tools? Eric, of course, is more interested in the question how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question.
I’ll let the reader read the article for herself; here I’ll only relate Scott’s estimation of this debate:
The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.
Not only that, we are information-processing machines that have learned, through evolutionary processes of selection and adaptation, to incorporate and internalize, through short- and long-term memory acquisition, a knowledge of the environment we inhabit. Yet a few centuries ago we began a project of turning that information processor onto the inner workings of our own mind and brain, and we are realizing that the tools we invented to do so are inadequate to the task and were never built for such investigations. Yet, as Scott so often likes to point out, the sciences, rather than our philosophical forebears, are doing just that through the hard-won methods of scientific investigation. What they are discovering is quite amazing and is informing both our economic and practical lives day by day.
Reiterating his basic theme of our blindness to our own cognition, Scott says:
“The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, as we have since Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’”
Someday we may study philosophy as a branch of literature, a fantastic exploration of strange and bewildering worlds of concept building and mind hacks. So many questions, so little thought about such things. Maybe it is time to go back and see what our science fiction pioneers thought about our futures. In the archives of time we might just find our futures.