Robotic Futures: Military and Peacetime


Doing a little research on the robotics challenge that DARPA has put on for the past few years has been interesting, to say the least. In the article “The Dawn of Killer Robots” by Connelly LaMar and Brian Anderson, we get a look at the current work being done on a few projects within the U.S.

The notion of building humanoid robots became more and more compelling after the nuclear disaster in Fukushima, Japan (see the documentary below). In that incident, Japan and other countries were tasked with fielding a robotic system that could explore, record, and use tools to fix and coordinate efforts to stabilize the nuclear facility. All parties involved realized that the optimal design for such an effort would need to be humanoid, so several countries have set up challenges that offer companies incentives to come up with a solution.

Some of the top humanoid robots in a documentary:

DARPA Robotics Challenge

Gill Pratt, program manager of the official DARPA Robotics Challenge, spoke with Defense One recently. He described the number-one priority of the initiative for building emergency-response systems: “The No. 1 issue in emergencies is communication and coordination. Communications, Command, and Control is the hard part in an emergency where you’ve got hundreds of people trying to help. In the future, if the emergency is very bad and the environment is dangerous for human beings, some of the assets we bring to bear won’t be flesh and blood, but machines. How do we integrate them into the general response even if communications is degraded? That’s the question we were trying to get at in the DRC. It’s great to have bots that can do the work of humans in a dangerous place, but if the comms are bad, how do you get them to work as a team?”

In another, military-related question, Pratt was pressed on DARPA’s insistence that neither he nor the Defense Department is trying to build armed ground robots through this competition. The interviewer pointed out that other nations don’t share those reservations: Russia has deployed armed robots that can fire autonomously to guard missile sites. If the Russian government, hypothetically, were to approach him for advice on how to make such an autonomous armed ground robot system work, what would he tell them?

His reply:

For a number of years to come, for situations of human time scales, the decision of whether to fire or not is one that a human will be best at. The chance of making a military situation much worse by having a machine autonomously make a mistake outweighs the benefit.

There are some cases where you have no choice. For instance, the Phalanx [Close-In Weapon System] gun that’s on ships that shoots at incoming missiles has been around for decades and it’s an armed, robotic system. It has to work beyond human speeds. The 2012 DOD directive on autonomy talks about that. The key is quality control, making sure that that machine can’t make an error. The Phalanx looks at the size and speed of the object coming toward it and if it determines that the object coming is so fast and so small it couldn’t possibly have a person in it, it shoots.

In those systems where we do have to arm an autonomous system — we have them now and I’m sure we will have them in the future — we must ensure good quality control over the target-detection software to make sure the system never makes an error. The U.S. will likely do that best. I worry more about the nation that doesn’t care as much as the U.S. does if errors are made.

We should also keep in mind that human beings make a lot of mistakes in war when they fear for their lives. There’s reason to think these systems can make things better. They can make things better by deciding when NOT to shoot.
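Pratt’s description of the Phalanx is essentially a threshold rule: look at the size and speed of an incoming track, and engage only if the object is so small and so fast that it could not possibly carry a person. The sketch below illustrates that kind of decision logic in Python; the field names and threshold values are purely hypothetical, not actual parameters of the Phalanx or any real weapons system.

```python
# Illustrative sketch of a threshold-based engage decision of the kind Pratt
# describes for the Phalanx CIWS. All names and numbers here are assumptions
# made up for the example, not real system parameters.

from dataclasses import dataclass


@dataclass
class Track:
    size_m: float     # estimated cross-section of the incoming object, meters
    speed_mps: float  # closing speed, meters per second


# Hypothetical thresholds: nothing crewed is both this small and this fast.
MAX_CREWED_SIZE_M = 2.0
MIN_UNCREWED_SPEED_MPS = 400.0


def should_engage(track: Track) -> bool:
    """Engage only when the object could not possibly have a person in it."""
    return (track.size_m < MAX_CREWED_SIZE_M
            and track.speed_mps > MIN_UNCREWED_SPEED_MPS)


print(should_engage(Track(size_m=0.3, speed_mps=700.0)))   # missile-like track: True
print(should_engage(Track(size_m=15.0, speed_mps=250.0)))  # aircraft-sized track: False
```

Note that the rule is deliberately conservative in one direction only: anything that might be crewed falls through to “do not engage,” which is exactly the quality-control property Pratt emphasizes.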

Notice that he is not worried about the military use of targeted systems, only about “quality control,” and about other nations whose quality control will be less careful. The point is that DARPA, like any other military-related institution, knows very well that the systems it backs for supposed emergency and peacetime use will ultimately flow toward the military-industrial complex and war efforts. A future driven by autonomous machines merged with AI algorithms, much like the out-of-control hedge-fund systems on the world’s markets, will tend toward a future we can no longer control. The very notion of command and control becomes laughable when we consider current deep learning and AI initiatives. Most experts tell us that somewhere in the next thirty to forty years we can expect a battlefield and a police force of purely autonomous agents that can outperform, outmaneuver, and more than likely outthink humans in war and peace.

DARPA, Boston Dynamics, and Google are all at the forefront of weaponized, autonomous, AI-centric robotics development. Google has supposedly tried to cut its ties with DARPA after buying up both Boston Dynamics and Schaft (a Japanese start-up that won the 2014 DARPA challenge). BBC ran a report last year showing how slowly these technologies are developing, and that autonomous targeting robots are probably years away. Yet they are coming.


As Stars and Stripes reported back in 2012, Chris Caroll states that when intelligent robots debut on battlefields in the next few years, they’re going to look and act less like the Terminator or other robots of science fiction and more like the quotidian load-carrying machines the Marines were testing last week at Fort Pickett. That’s not a slam on GUSS, or Ground Unmanned Support Surrogate, being developed by TORC Robotics of Blacksburg, Va., or on other similar systems designed to do tiring but basic jobs. What sets this new breed apart is that unlike the thousands of other “robots” now in service with the U.S. military, GUSS and its brethren have the intelligence to make some of their own decisions — known as “autonomy” in the robotics field.

Caroll learned from James Giordano, director of the Center for Neurotechnology Studies at the Potomac Institute for Policy Studies, an Arlington, Va., think tank focused on science and defense issues, that within the next five or so years autonomy-enabled systems will be able to take over specific human duties, freeing people from certain dangerous tasks. “You want it to be the proverbial ‘point-and-shoot’ system,” Giordano said. “I don’t want to be tethered to my machine, walking it through every process.” As another Marine, Mills, said jokingly: “If it gets hit that’s a shame, but you haven’t lost any Marines. This is just a first step, and a good one.”

Even in Russia, Putin’s military is busy building autonomous tanks with mounted machine guns and unmanned, amphibious Jeep-size vehicles. The “battlefield robot” project, which could involve other enterprises, will be implemented as part of a state-private partnership and carries serious development risks, Rogozin said.

Human rights and robotic ethics

Human Rights Watch has weighed in on this directly. “Fully autonomous weapons do not exist yet, but they are being developed by several countries and precursors to fully autonomous weapons have already been deployed by high-tech militaries,” HRW said in a statement on its website. “Some experts predict that fully autonomous weapons could be operational in 20 to 30 years.”

“These weapons would be incapable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity. The weapons would not be constrained by the capacity for compassion, which can provide a key check on the killing of civilians,” the human rights watchdog said. “Fully autonomous weapons also raise serious questions of accountability because it is unclear who should be held responsible for any unlawful actions they commit.”

Back in 2013, Christof Heyns told the Human Rights Council in Geneva, “A decision to allow machines to be deployed to kill human beings worldwide, whatever weapons they use, deserves a collective pause.” Jody Williams, the Nobel Peace Prize winner whose efforts helped the “banning and clearing of anti-personnel mines” in war-torn countries around the world, has recently taken up the cause of stopping killer robots. She and her colleagues founded the Campaign to Stop Killer Robots, which calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons, also known as lethal autonomous robots. This ban, the campaign argues, should be achieved through new international law (a treaty) as well as through national laws and other measures. Concerned about weapons that operate on their own without human supervision, the campaign seeks to prohibit taking the human ‘out-of-the-loop’ with respect to targeting and attack decisions on the battlefield, and was established to provide a coordinated civil-society response to the multiple challenges that fully autonomous weapons pose to humanity.

Robots today serve in many roles, from entertainer to educator to executioner. As robotics technology advances, ethical concerns become more pressing: Should robots be programmed to follow a code of ethics, if this is even possible? Are there risks in forming emotional bonds with robots? How might society, and ethics, change with robotics? (See the IEEE Robotics site on ethical issues.) As Ronald C. Arkin, a Georgia Tech professor who has hypothesized lethal weapons systems that are ethically superior to human soldiers on the battlefield, says: “I am not a proponent of lethal autonomous systems. I am a proponent of when they arrive into the battle space, which I feel they inevitably will, that they arrive in a controlled and guided manner. Someone has to take responsibility for making sure that these systems … work properly. I am not like my critics, who throw up their arms and cry, ‘Frankenstein! Frankenstein!'” Nothing would make him happier than for weapons development to be rendered obsolete, says Mr. Arkin. “Unfortunately, I can’t see how we can stop this innate tendency of humanity to keep killing each other on the battlefield.”

One of the better documentaries mainly dealing with Japan’s efforts implementing ethical and pragmatic systems for use in peacetime:

4 thoughts on “Robotic Futures: Military and Peacetime”

    • Yeah, it shouldn’t, but I think we can both assume that the military couldn’t care less what we think. That has never stopped it so far, and it will continue to turn a blind eye to our opposition and ethics. They’ll use the excuse that Russia and China are both doing it, so we must do it to secure our national interests and integrity. That’s the oldest con in the arsenal of deception.

      In fact, as he finished the speech speaking of transparency, he admitted that certain countries that are not democratic will go right on with their plans for autonomous systems, but that we as democracies should not. My question would be simple: in a world where your enemies will send autonomous systems and killing machines against you, why should you bind your arms and hands, cripple your own capabilities, limit your own ability to survive, defend, and protect your citizenry? I think you can see where this is going: once the rabbit is out of the magic hat there is no putting it back in. All this talk of bans, etc. is already too late, moot, and beside the point. His mention of recentralization and a new feudalism is already here. We no longer live in democracies except as mediatainment hype. Our governments in the EU and USA are essentially driven by profit, economics, corruption, and oligarchic systems of elites, financiers, and military-industrial complexes. We have been slowly eroding our democracies for decades. Our education, our work, our lives are framed in fictions that keep us deluded and believing we are still in lands of freedom, when in fact the majority of the populations of most first-world nations are enslaved in service jobs, taxed to death, and bound to the dictates of governments and economics we have no control over.

      I see no change unless people are finally shocked out of their complacency, which means what? Climate change, economic collapse, war, disease… take your pick… but one thing for sure is that political change is dead, a fiction we keep lying to ourselves still exists. Listening to Suarez was interesting. He still believes in the liberal world, and has yet to realize it’s gone, dead, done… look at the Obama administration, which was supposed to advance transparency… lol. He belongs to the oligarchs just as much as the Republicans. Both sides have been corrupted and there is no democracy left; all we have is cronyism and the dictatorship of consumerism. Buy, buy, buy… keep on buying… we love our slaves. And, someday, we’ll replace you with our shiny new line of robots, humanoids to work right along with you and replace you. Won’t it be a lovely world then… lol


      • S.C.: I agree with you that it will take some kind of disaster to shock people out of their complacency. Unfortunate for the human species… When will Musk have those rockets ready to get us off this planet?


      • lol … like never… he’s pipe-dreaming… and where would we go if he did? The universe is a very inhuman place to wander. Mars… lol, it would take decades to ship supplies and equipment to get a colony off the ground that even begins to support an actual social system. The logistics alone would entail so much investment that the mind boggles at it. No… space is a pipe-dream for humans, always has been… our posthuman progeny, our robots with AI, will inherit that dream, not humans… oh, sure, maybe we will explore our solar system eventually and build colonies… but without some reengineered physical substratum, some new improved body, we’re fairly well doomed to our earth… so before we wildly dream of escape we should consider our options of collaboration here on earth….

