Sunday, November 13, 2016

"Cognitive Transistors" for AI, DL, ML

Interesting podcast interview with a broad view of the "big data" and AI field.

Why businesses should pay attention to deep learning - O'Reilly Media

"In this episode of the O’Reilly Data ShowBen Lorica spoke with Christopher Nguyen, CEO and co-founder of Arimo. Nguyen and Arimo were among the first adopters and proponents of Apache Spark, Alluxio, and other open source technologies. Most recently, Arimo’s suite of analytic products has relied on deep learning to address a range of business problems."

Near the end of the interview, Nguyen suggested that general-purpose processors are inefficient (they spend more energy) when simulating neural networks, and that hardware improvements will lead to more specialized hardware for ML and AI, eventually producing a specialized alternative to the classic transistor that behaves more like a neuron and can represent not only the states 0 and 1 but also values in between. He wasn't referring to quantum computing, where values are non-deterministic, just to a more efficient basic hardware unit.

CPUs (Central Processing Units) are general-purpose, but slow at emulating specialized processing like that used for AI.
xPUs (e.g., the GPU, Graphics Processing Unit) are much more efficient at parallel operations, so they are now widely used for AI processing.
FPGAs (Field Programmable Gate Arrays) are customizable for specific tasks; Microsoft uses them on Azure to speed up specialized operations such as database and web workloads.
ASICs are custom-designed for maximum performance on specialized tasks, and are not programmable.
IA = "Intelligence Augmentation"
AI = "Artificial Intelligence"

"Scientists at IBM Research have created by far the most advanced neuromorphic (brain-like) computer chip to date. The chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses across 4096 individual neurosynaptic cores. Built on Samsung’s 28nm process and with a monstrous transistor count of 5.4 billion, this is one of the largest and most advanced computer chips ever made. Perhaps most importantly, though, TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches. Yes, IBM is 
now a big step closer to building a brain on a chip."
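A quick back-of-the-envelope check using only the figures quoted above (72 milliwatts at max load, ~400 billion synaptic operations per second per watt) gives the chip's absolute throughput:

```python
# Sanity-check the TrueNorth numbers quoted in the excerpt.
power_w = 0.072        # 72 milliwatts, stated max load
sops_per_watt = 400e9  # ~400 billion synaptic ops per second per watt

total_sops = power_w * sops_per_watt
print(f"{total_sops:.2e} synaptic ops/s")  # ≈ 2.88e+10, i.e. tens of billions per second
```

So even at well under a tenth of a watt, the quoted figures imply on the order of tens of billions of synaptic operations per second.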

Diagram explaining the various aspects of IBM's TrueNorth chip

"The animal brain (which includes the human brain, of course), as you may have heard before, is by far the most efficient computer in the known universe. As you can see in the graph below, the human brain has a “clock speed” (neuron firing speed) measured in tens of hertz, and a total power consumption of around 20 watts. A modern silicon chip, despite having features that are almost on the same tiny scale as biological neurons and synapses, can consume thousands or millions times more energy to perform the same task as a human brain."