In-memory binary neural network accelerators have been garnering interest as an energy- and area-efficient platform for lightweight applications. Most previous in-memory neural network accelerators, however, could not fully exploit the benefits of in-memory computing due to the high overhead of peripheral circuits such as analog-to-digital converters (ADCs). The problem becomes worse when a large-scale neural network is mapped onto multiple memory arrays. In this talk, I will discuss the algorithm/hardware co-design techniques that my group has been developing to address these issues. I will also introduce our SRAM-based in-memory Binary Neural Network (BNN) chip.
Jae-Joon Kim is currently a professor at Pohang University of Science and Technology (POSTECH), Pohang, Korea. He received the B.S. and M.S. degrees in Electronics Engineering from Seoul National University and the Ph.D. degree from the School of Electrical and Computer Engineering at Purdue University. Before joining POSTECH, he was with the IBM T. J. Watson Research Center as a Research Staff Member from May 2004 to Jan. 2013. His current research interests include the design of deep learning hardware accelerators, neural network compression, hardware security circuits, and circuits for exploratory devices.
Copyright ⓒ 2015 KAIST Electrical Engineering. All rights reserved. Made by PRESSCAT