Skynet Rising: IBM and Deep Learning for Nuclear Arsenal

On ZDNet, Danny Palmer describes the "brain-inspired" IBM supercomputer that will help watch over the US nuclear arsenal:

Lawrence Livermore National Laboratory will use the new system to “explore new computing capabilities” surrounding the National Nuclear Security Administration’s (NNSA) missions in cybersecurity, control of US nuclear weapons, and, in theory, management of agreements to reduce the number of nuclear missiles in the world.

The Lawrence Livermore National Laboratory (LLNL), a federal government research facility in California tasked with ensuring the safety, security, and reliability of the United States' nuclear deterrent, is working alongside IBM on what's been described as a "first of a kind" brain-inspired supercomputing platform for deep learning.

The neural network will be based on IBM's neurosynaptic TrueNorth computer chips. These processors are designed to aid computers in performing cognitive tasks, such as pattern recognition and sensory processing, more efficiently than conventional computer chips.

That efficiency is made possible because a single TrueNorth processor consists of 5.4 billion transistors wired together in such a fashion that it creates an array of one million digital neurons, which can communicate with each other via 256 million electrical synapses.

In total, the platform will consist of 16 TrueNorth chips and will process the equivalent of 16 million neurons and four billion synapses, while only consuming the energy of a tablet — just 2.5 watts of power.
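For a rough sense of how those figures fit together, here is a back-of-the-envelope sketch using the publicly described TrueNorth layout of 4,096 neurosynaptic cores per chip, 256 neurons per core, and a 256-by-256 synaptic crossbar per core; treat the constants as approximations rather than an official spec sheet.

    # Back-of-the-envelope TrueNorth arithmetic (figures from IBM's public
    # descriptions; approximate, not an official spec sheet).

    CORES_PER_CHIP = 4096          # neurosynaptic cores per TrueNorth chip
    NEURONS_PER_CORE = 256         # digital neurons per core
    SYNAPSES_PER_CORE = 256 * 256  # crossbar connecting 256 axons x 256 neurons

    neurons_per_chip = CORES_PER_CHIP * NEURONS_PER_CORE      # ~1 million
    synapses_per_chip = CORES_PER_CHIP * SYNAPSES_PER_CORE    # ~256 million

    CHIPS = 16                     # the LLNL platform described above
    print(f"per chip : {neurons_per_chip:,} neurons, {synapses_per_chip:,} synapses")
    print(f"16 chips : {CHIPS * neurons_per_chip:,} neurons, "
          f"{CHIPS * synapses_per_chip:,} synapses")
    # per chip : 1,048,576 neurons, 268,435,456 synapses
    # 16 chips : 16,777,216 neurons, 4,294,967,296 synapses (~16M and ~4B, at ~2.5 W)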

On the IBM site, Dharmendra S. Modha tells us that six years ago IBM and its university partners embarked on a quest that at the time appeared impossible: to build a brain-inspired machine. Today, in an article published in Science, they deliver on the DARPA SyNAPSE metric of a one-million-neuron brain-inspired processor. The chip consumes merely 70 milliwatts and is capable of 46 billion synaptic operations per second per watt, literally a synaptic supercomputer in your palm. He continues:

Let’s be clear: we have not built the brain, or any brain. We have built a computer that is inspired by the brain. The inputs to and outputs of this computer are spikes. Functionally, it transforms a spatio-temporal stream of input spikes into a spatio-temporal stream of output spikes.

If one were to measure activities of 1 million neurons in TrueNorth, one would see something akin to a night cityscape with blinking lights. Given this unconventional computing paradigm, compiling C++ to TrueNorth is like using a hammer for a screw. As a result, to harness TrueNorth, we have designed an end-to-end ecosystem complete with a new simulator, a new programming language, an integrated programming environment, new libraries, new (and old) algorithms as well as applications, and a new teaching curriculum (affectionately called, “SyNAPSE University”). The goal of the ecosystem is to dramatically increase programmer productivity. Metaphorically, if TrueNorth is “ENIAC”, then our ecosystem is the corresponding “FORTRAN.”
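IBM's actual Corelet language and SyNAPSE toolchain are not shown in the post, but the compositional idea behind the ecosystem can be sketched in ordinary Python: small crossbar "cores" with fixed connectivity, wired output-to-input into a larger network. Every name below is a hypothetical illustration, not the real programming environment.

    import numpy as np

    # Hypothetical sketch of core composition: each core is a fixed 256x256
    # crossbar plus simple threshold neurons, and cores are chained
    # spike-out to spike-in. Not IBM's Corelet language; just the general idea.

    class ToyCore:
        def __init__(self, rng, n_axons=256, n_neurons=256, threshold=1.0):
            self.weights = rng.normal(0.0, 0.2, size=(n_axons, n_neurons))
            self.threshold = threshold
            self.potential = np.zeros(n_neurons)

        def step(self, in_spikes):
            self.potential += in_spikes.astype(float) @ self.weights
            out = self.potential >= self.threshold
            self.potential[out] = 0.0   # reset neurons that fired
            return out                  # output spikes feed the next core's axons

    rng = np.random.default_rng(1)
    pipeline = [ToyCore(rng), ToyCore(rng)]   # two cores wired in series

    spikes = rng.random(256) < 0.1            # one timestep of input spikes
    for core in pipeline:
        spikes = core.step(spikes)
    print("spikes emitted by final core:", int(spikes.sum()))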

We are working, at a feverish pace, to make the ecosystem available—as widely as possible—to IBMers, universities, business partners, start-ups, and customers. In collaboration with the international academic community, by leveraging the ecosystem, we foresee being able to map the existing body of neural network algorithms to the architecture in an efficient manner, as well as being able to imagine and invent entirely new algorithms.

To support these algorithms at ever increasing scale, TrueNorth chips can be seamlessly tiled to create vast, scalable neuromorphic systems. In fact, we have already built systems with 16 million neurons and 4 billion synapses. Our sights are now set high on the ambitious goal of integrating 4,096 chips in a single rack with 4 billion neurons and 1 trillion synapses while consuming ~4kW of power.
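The rack-level figures follow from simple multiplication of the per-chip numbers; a rough check is below, noting that the ~4 kW figure is the quoted system power rather than 4,096 times the 70-milliwatt chip power.

    # Rack-scale extrapolation for 4,096 tiled chips (illustrative arithmetic only).

    NEURONS_PER_CHIP = 1_048_576
    SYNAPSES_PER_CHIP = 268_435_456
    CHIPS_PER_RACK = 4096

    print(f"{CHIPS_PER_RACK * NEURONS_PER_CHIP / 1e9:.2f} billion neurons")      # ~4.29
    print(f"{CHIPS_PER_RACK * SYNAPSES_PER_CHIP / 1e12:.2f} trillion synapses")  # ~1.10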

The architecture can solve a wide class of problems from vision, audition, and multi-sensory fusion, and has the potential to revolutionize the computer industry by integrating brain-like capability into devices where computation is constrained by power and speed. These systems can efficiently process high-dimensional, noisy sensory data in real time, while consuming orders of magnitude less power than conventional computer architectures.

On one hand, with portable devices: think smartphones, sensor networks, self-driving automobiles, robots, public safety, medical imaging, real-time video analysis, signal processing, olfactory detection, and digital pathology. On the other hand, with synaptic supercomputers: think multimedia processing on the cloud. In addition, our chip can be used in combination with other cognitive computing technologies to create systems that learn, reason, and help humans make better decisions. Over time, our hope is that SyNAPSE will become an integral component of IBM Watson group offerings.

We have been working with iniLabs Ltd., creators of a retinal camera—the DVS—that directly produces spikes, which are the natural inputs for TrueNorth. Integrating the two, we have begun investigating extremely low-power end-to-end vision systems.
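A DVS-style camera emits asynchronous events (pixel location, timestamp, polarity) rather than frames. Here is a minimal sketch of how such events could be binned into the per-timestep spike vectors a spiking network expects; the sensor resolution, the 1 ms window, and the random events are assumptions for illustration, not iniLabs' or IBM's actual pipeline.

    import numpy as np

    # Hypothetical binning of DVS-style address-event data into spike "frames".
    # Each event is (x, y, t_microseconds, polarity); events are bucketed into
    # 1 ms windows and the corresponding pixel is marked as having spiked.

    WIDTH, HEIGHT = 128, 128      # DVS128-class sensor resolution (assumed)
    WINDOW_US = 1000              # 1 ms time bins (assumed)

    rng = np.random.default_rng(2)
    n_events = 5000
    events = np.stack([
        rng.integers(0, WIDTH, n_events),             # x
        rng.integers(0, HEIGHT, n_events),            # y
        np.sort(rng.integers(0, 50_000, n_events)),   # timestamp in microseconds
        rng.integers(0, 2, n_events),                 # polarity (ignored here)
    ], axis=1)

    n_bins = int(events[:, 2].max() // WINDOW_US) + 1
    spike_frames = np.zeros((n_bins, HEIGHT, WIDTH), dtype=bool)
    for x, y, t, _pol in events:
        spike_frames[t // WINDOW_US, y, x] = True

    print("spike frames:", spike_frames.shape,
          "active pixels in frame 0:", int(spike_frames[0].sum()))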

If we think of today’s von Neumann computers as akin to the “left-brain”—fast, symbolic, number-crunching calculators, then TrueNorth can be likened to the “right-brain”—slow, sensory, pattern recognizing machines.

We envision augmenting our neurosynaptic cores with synaptic plasticity to create a new generation of field-adaptable neurosynaptic computers capable of online learning.
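Online learning in a spiking substrate is commonly framed as some form of spike-timing-dependent plasticity. As a loose illustration of the idea (not a description of how IBM plans to implement plasticity on TrueNorth), a pair-based STDP update might look like this:

    import numpy as np

    # Toy pair-based STDP: strengthen a synapse when the presynaptic spike
    # precedes the postsynaptic spike, weaken it when the order is reversed.
    # Illustrative only; not IBM's planned on-chip learning rule.

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Return the updated weight for one pre/post spike-time pair (ms)."""
        dt = t_post - t_pre
        if dt > 0:                          # pre before post: potentiate
            w += a_plus * np.exp(-dt / tau)
        else:                               # post before (or with) pre: depress
            w -= a_minus * np.exp(dt / tau)
        return float(np.clip(w, 0.0, 1.0))

    w = 0.5
    w = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pairing: weight grows
    w = stdp_update(w, t_pre=30.0, t_post=22.0)   # anti-causal pairing: weight shrinks
    print(f"weight after two pairings: {w:.4f}")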
