Hardware holds key to deep-learning success.
Infrastructure, from the point of view of Optimal+, is the global data highway that connects the semiconductor supply chain: fabless companies, foundries, and OSATs, for example. Also important are the hardware platforms on which the algorithms run. "Accuracy is not enough; energy efficiency and speed are important as well," said Vivienne Sze, an associate professor in MIT's Department of Electrical Engineering and Computer Science, in a recent phone interview. "We are trying to address speed and energy by looking across the whole stack, from algorithm to hardware."
Sze's research interests include energy-aware signal-processing algorithms and low-power circuit and system design for deep learning, computer vision, autonomous navigation, and image/video processing. Work during 2015 led to the publication of a paper she coauthored titled "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks." At the International Solid-State Circuits Conference in February 2016, MIT researchers described the chip as achieving 10 times the efficiency of a mobile GPU, as reported in MIT News.
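The efficiency gains an accelerator like Eyeriss targets stem largely from the multiply-accumulate (MAC) operations that dominate convolutional layers. As a rough illustration only (the layer shape below is hypothetical, not taken from the Eyeriss paper), the MAC count of a single convolutional layer can be estimated as:

```python
def conv_mac_count(out_h, out_w, out_ch, in_ch, k_h, k_w):
    """Estimate multiply-accumulate (MAC) operations for one conv layer.

    Each of the out_h * out_w * out_ch output elements requires
    in_ch * k_h * k_w multiply-accumulates.
    """
    return out_h * out_w * out_ch * in_ch * k_h * k_w

# Hypothetical mid-network layer: 56x56 output, 64 -> 128 channels, 3x3 kernel
macs = conv_mac_count(56, 56, 128, 64, 3, 3)
print(macs)  # 231211008 -- over 231 million MACs for this one layer
```

Multiplying counts like this by the energy cost per MAC (and per memory access, which often dominates) is one simple way to see why hardware-level efficiency matters at the scale of a full network.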
Power consumption can be addressed at both the algorithm and platform levels. "You can do a lot from a hardware perspective, but some aspects are limited to the algorithm itself," Sze said. She cited the High Efficiency Video Coding (HEVC) standard as an example, noting that she was involved in the algorithm changes that improved its efficiency. She shared an Engineering Emmy Award last fall for her work on HEVC.
"We looked at both the algorithms and the hardware side in developing the Eyeriss chip," she told me. "The goal was to address the needs of a smartphone, wearable, or other embedded device."
Extensive work continues on developing fast and efficient deep-learning hardware platforms. Sze cited a recent article in The New York Times noting that at least 45 companies are working on chips for deep-learning applications, at least five of which have each raised more than $100 million from investors.
The design of efficient hardware systems to support deep learning is the focus of an MIT Professional Education course titled "Designing Efficient Deep Learning Systems" that Sze will teach March 28-29 at the Samsung Research America campus in Mountain View, CA. The course will be repeated this summer on MIT's campus at a date to be determined. The course will cover hardware platforms, how algorithms run on them, and optimization techniques.
Sze said the course will provide a broad perspective on the deep-learning landscape, with a focus on speed and power and on the interplay of algorithms and hardware. Algorithm developers in attendance can learn how platforms vary and how to adapt their code to run efficiently. Hardware developers can learn what types of neural networks are out there and how to support them. Finally, investors can gain insight into what questions to ask and what metrics to apply when evaluating startups seeking funding.
Sze's course is part of a portfolio of courses that make up MIT Professional Education's new Professional Certificate Program in Machine Learning and Artificial Intelligence. The full portfolio of courses hasn't been announced yet, but--in addition to Sze's course--initial offerings include "Modeling and Optimization for Machine Learning and Applications," "Machine Learning for Big Data and Text Processing," and "Machine Learning for Healthcare."
Date: Mar 1, 2018