Deep neural networks (DNNs) demand significant amounts of computation and memory access as task complexity and data volumes continue to grow. To ease this burden, resistive memory crossbar array (RCA)-based neural network hardware accelerators are gaining attention due to their parallel vector-matrix multiplication capability and in-memory computing characteristics. However, many challenges remain in realizing the potential of RCA-based accelerators for large-scale neural networks. In this talk, I will discuss some of these issues and introduce the solutions that my group has been working on. Based on our experience, we believe that co-design of the neural network and the hardware is one of the keys to implementing scalable RCA-based neural network hardware accelerators.
Jae-Joon Kim is currently an associate professor at Pohang University of Science and Technology (POSTECH), Pohang, Korea. He received the B.S. and M.S. degrees in Electronics Engineering from Seoul National University, Seoul, Korea, and the Ph.D. degree from the School of Electrical and Computer Engineering of Purdue University, West Lafayette, IN, USA, in 1994, 1998, and 2004, respectively. Before joining POSTECH, he was with the IBM T. J. Watson Research Center as a Research Staff Member from May 2004 to Jan. 2013. His current research interests include the design of neuromorphic/neural processors, low-power circuits, and circuits for exploratory devices.