Doing a little research on the robotics challenge that DARPA has put on for the past few years has been interesting, to say the least. In the article The Dawn of Killer Robots, by Connelly LaMar and Brian Anderson, we discover the current work being done on a few projects within the U.S.A.
The notion of building humanoid robots became more and more apparent after the nuclear disaster in Fukushima, Japan (see the documentary below). In that incident, Japan and other countries were tasked with implementing a robotics system that could explore, record, and use tools to repair the nuclear facility and coordinate efforts to stabilize it. All parties involved realized that the optimal design for such an effort would need to be humanoid, so several countries have set up challenges offering companies incentives to come up with a solution.
A documentary featuring some of the top humanoid robots:
DARPA Robotics Challenge
Gill Pratt, program manager of the official DARPA Robotics Challenge, spoke with Defense One recently. He describes the number one priority of the initiative for building emergency-response systems: “The No. 1 issue in emergencies is communication and coordination. Communications, Command, and Control is the hard part in an emergency where you’ve got hundreds of people trying to help. In the future, if the emergency is very bad and the environment is dangerous for human beings, some of the assets we bring to bear won’t be flesh and blood, but machines. How do we integrate them into the general response even if communications is degraded? That’s the question we were trying to get at in the DRC. It’s great to have bots that can do the work of humans in a dangerous place, but if the comms are bad, how do you get them to work as a team?”
In another, more military-related exchange, Pratt was taken to task over DARPA’s insistence that neither he nor the Defense Department is trying to build armed ground robots through this competition. The interviewer noted that other nations don’t share those reservations: Russia has deployed armed robots that can fire autonomously to guard missile sites. If the Russian government, hypothetically, were to approach Pratt for advice on how to make such an autonomous armed ground robot system work, what would he tell them?
Pratt’s answer: for a number of years to come, in situations on human time scales, the decision of whether or not to fire is one that a human will be best at. The chance of a machine autonomously making a mistake and making a military situation much worse outweighs the benefit.
There are some cases where you have no choice. For instance, the Phalanx [Close-In Weapon System] gun that’s on ships that shoots at incoming missiles has been around for decades and it’s an armed, robotic system. It has to work beyond human speeds. The 2012 DOD directive on autonomy talks about that. The key is quality control, making sure that that machine can’t make an error. The Phalanx looks at the size and speed of the object coming toward it and if it determines that the object coming is so fast and so small it couldn’t possibly have a person in it, it shoots.
In those systems where we do have to arm an autonomous system — we have them now and I’m sure we will have them in the future — we must ensure good quality control over the target-detection software to make sure the system never makes an error. The U.S. will likely do that best. I worry more about the nation that doesn’t care as much as the U.S. does if errors are made.
We should also keep in mind that human beings make a lot of mistakes in war when they fear for their lives. There’s reason to think these systems can make things better. They can make things better by deciding when NOT to shoot.
Notice that he is not worried about the military use of targeted systems, only about “quality control,” and about other nations whose quality control will be less careful. The point being that DARPA, like any other military-related institution, knows very well that the systems initiatives it backs for supposed emergency and peacetime use will ultimately feed the military-industrial complex and war efforts. A future driven by autonomous machines merged with AI algorithms, much like the out-of-control hedge-fund systems on the world’s markets, will tend toward a future we can no longer control. This notion of command and control is itself laughable when we think about current deep learning and AI initiatives. Most experts tell us that somewhere in the next thirty to forty years we can expect a battlefield and a police force of purely autonomous agents that can outperform, outmaneuver, and more than likely outthink humans in war and peace.
DARPA, Boston Dynamics, and Google are all at the forefront of weaponized, autonomous, AI-centric robotics development. Google has supposedly tried to cut its ties with DARPA after buying up both Boston Dynamics and Schaft (a Japanese start-up that won the 2014 DARPA challenge). BBC ran a report last year and discovered how slowly these technologies are developing, and that autonomous targeting robots are probably years away. Yet they are coming.
As Chris Caroll reported in Stars and Stripes back in 2012, when intelligent robots debut on battlefields in the next few years, they’re going to look and act less like the Terminator or other robots of science fiction and more like the quotidian load-carrying machines the Marines were testing last week at Fort Pickett. That’s not a slam on GUSS, the Ground Unmanned Support Surrogate being developed by TORC Robotics of Blacksburg, Va., or on other similar systems designed to do tiring but basic jobs. What sets this new breed apart is that, unlike the thousands of other “robots” now in service with the U.S. military, GUSS and its brethren have the intelligence to make some of their own decisions — known as “autonomy” in the robotics field.
Caroll learned from James Giordano, director of the Center for Neurotechnology Studies at the Potomac Institute for Policy Studies, an Arlington, Va., think tank focused on science and defense issues, that within the next five or so years autonomy-enabled systems will be able to take over specific duties from humans, freeing them of certain dangerous tasks. “You want it to be the proverbial ‘point-and-shoot’ system,” Giordano said. “I don’t want to be tethered to my machine, walking it through every process.” Another Marine, Mills, said jokingly: “If it gets hit that’s a shame, but you haven’t lost any Marines. This is just a first step, and a good one.”
Even in Russia, Putin’s military is busy building autonomous tanks with mounted machine guns and unmanned, amphibious, Jeep-sized vehicles. The “battlefield robot” project, which could involve other enterprises, will be implemented as part of a state-private partnership and carries serious development risks, Rogozin said.
Human rights and robotic ethics
As Human Rights Watch said of this: “Fully autonomous weapons do not exist yet, but they are being developed by several countries and precursors to fully autonomous weapons have already been deployed by high-tech militaries,” HRW said in a statement on its website. “Some experts predict that fully autonomous weapons could be operational in 20 to 30 years.”
“These weapons would be incapable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity. The weapons would not be constrained by the capacity for compassion, which can provide a key check on the killing of civilians,” the human rights watchdog said. “Fully autonomous weapons also raise serious questions of accountability because it is unclear who should be held responsible for any unlawful actions they commit.”
Back in 2013, Christof Heyns told the Human Rights Council in Geneva, “A decision to allow machines to be deployed to kill human beings worldwide, whatever weapons they use, deserves a collective pause.” Jody Williams, the Nobel Peace Prize winner whose efforts helped the “banning and clearing of anti-personnel mines” in war-torn countries around the world, has recently taken up the cause of stopping killer robots. The Campaign to Stop Killer Robots, which she and her cohorts founded, calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons, also known as lethal autonomous robots. This would be achieved through new international law (a treaty), as well as through national laws and other measures. The campaign is concerned about weapons that operate on their own without human supervision, and seeks to prohibit taking the human ‘out of the loop’ with respect to targeting and attack decisions on the battlefield. It was established to provide a coordinated civil-society response to the multiple challenges that fully autonomous weapons pose to humanity.
Robots today serve in many roles, from entertainer to educator to executioner. As robotics technology advances, ethical concerns become more pressing: Should robots be programmed to follow a code of ethics, if this is even possible? Are there risks in forming emotional bonds with robots? How might society—and ethics—change with robotics? These are the questions posed by the IEEE Robotics site on ethical issues. Ronald C. Arkin, a Georgia Tech professor who has hypothesized lethal weapons systems that would be ethically superior to human soldiers on the battlefield, says: “I am not a proponent of lethal autonomous systems. I am a proponent of when they arrive into the battle space, which I feel they inevitably will, that they arrive in a controlled and guided manner. Someone has to take responsibility for making sure that these systems … work properly. I am not like my critics, who throw up their arms and cry, ‘Frankenstein! Frankenstein!'” Nothing would make him happier than for weapons development to be rendered obsolete, says Mr. Arkin: “Unfortunately, I can’t see how we can stop this innate tendency of humanity to keep killing each other on the battlefield.”
One of the better documentaries, mainly dealing with Japan’s efforts to implement ethical and pragmatic systems for use in peacetime: