Why businesses should pay attention to deep learning - O'Reilly Media
"In this episode of the O’Reilly Data Show, Ben Lorica spoke with Christopher Nguyen, CEO and co-founder of Arimo. Nguyen and Arimo were among the first adopters and proponents of Apache Spark, Alluxio, and other open source technologies. Most recently, Arimo’s suite of analytic products has relied on deep learning to address a range of business problems."
Near the end of the interview, Nguyen suggested that general-purpose processors are inefficient at simulating neural networks, spending far more energy than necessary, and that hardware will keep evolving toward more specialized designs for ML and AI. Eventually this could produce a specialized alternative to the classic transistor that behaves more like a neuron, representing not only the states 0 and 1 but also values in between. He wasn't referring to quantum computing, where values are non-deterministic, but simply to a more efficient basic hardware unit.
- CPUs (Central Processing Units) are general purpose, but slow at emulating specialized processing such as that used for AI.
- xPUs (such as the GPU, Graphics Processing Unit) are far more efficient at parallel operations, which is why they are now used for AI processing (illustrated in the sketch after this list).
- FPGAs (Field-Programmable Gate Arrays) can be customized per task; Microsoft uses them on Azure to speed up specialized operations such as database and web workloads.
- ASICs (Application-Specific Integrated Circuits) are custom-designed for maximum performance on specialized tasks and are not reprogrammable.
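To make the CPU-vs-GPU contrast concrete, here is a minimal sketch (assuming PyTorch and an available CUDA device; the matrix size and timings are illustrative, and results will vary by hardware) that runs the same dense matrix multiply, the core operation of neural-network workloads, on both:

```python
# Time the same dense matrix multiply on CPU and GPU.
# Dense linear algebra like this dominates neural-network training
# and inference, which is why GPUs are the default AI hardware.
import time
import torch

def time_matmul(device: str, n: int = 2048) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before timing starts
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels launch asynchronously; wait for completion
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.4f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s")
```

On typical hardware the GPU version is one to two orders of magnitude faster, and the gap widens as the matrices grow, since the workload is embarrassingly parallel.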
- CPU vs FPGA vs ASIC - YouTube
- Google Cloud Platform Blog: Google supercharges machine learning tasks with TPU custom chip (TPU: Tensor Processing Unit, a custom ASIC by Google)
- What is the difference among CPU, GPU, APU, FPGA, DSP, and Intel MIC? - Quora
- Intel unveils new Xeon chip with integrated FPGA, touts 20x performance boost - ExtremeTech
AI = "Artificial Intelligence"
- Innovation from China: what it means for machine intelligence and AI
- The future of machine intelligence - O'Reilly Media
- Deep Learning: A practitioner’s approach
- The Deep Learning video collection: 2016
- Hands-on machine learning with scikit-learn and TensorFlow
- The Ex-Refugee Bringing Google-Like Data Tech to Everyone | WIRED
- Algorithms of the Mind - Arimo
"Scientists at IBM Research have created by far the most advanced neuromorphic (brain-like) computer chip to date. The chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses across 4096 individual neurosynaptic cores. Built on Samsung’s 28nm process and with a monstrous transistor count of 5.4 billion, this is one of the largest and most advanced computer chips ever made. Perhaps most importantly, though, TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches. Yes, IBM is now a big step closer to building a brain on a chip."