The past few decades have seen unprecedented growth in the information processing capabilities of electronic systems such as desktops, laptops, and mobile phones. The emergence of advanced data processing systems has revolutionized several industries and led to the availability of vast amounts of data. Recent advances in machine learning and big data explore ways of deriving useful conclusions from the available data, but at a significant cost in silicon. Hence, it has become crucial to ask: "What is the best way to build information processing systems of the future?" This session invites researchers working to address various aspects of this question, including but not limited to: advances in state-of-the-art digital and analog CMOS-based designs; advances in state-of-the-art computer architectures and compilers; ways of addressing challenges such as high device variability and leakage power; alternative computing paradigms such as bio/neuro-inspired computing or computing using beyond-CMOS devices; alternative storage paradigms such as in-memory computing; and novel memories such as RRAM or MRAM.
Keynote Speaker: Song Han – MIT
Talk Title – Coming Soon
Time and place – Friday Feb. 8, 9am-10am / TBA
Summary – Coming Soon
Song Han is an assistant professor in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology. Dr. Han received his Ph.D. in Electrical Engineering from Stanford University, advised by Prof. Bill Dally. His research focuses on energy-efficient deep learning, at the intersection of machine learning and computer architecture. His work on deep compression and hardware acceleration received the Best Paper Award at ICLR'16 and the Best Paper Award at FPGA'17. Building on these technologies, Dr. Han co-founded DeePhi Tech, which was acquired by Xilinx.
MIT HAN Lab’s research focuses on:
H: High performance, High energy efficiency Hardware
A: Architectures and Accelerators for Artificial Intelligence
N: Novel Algorithms for Neural Networks