Deep learning has achieved great success in recent years. In many application domains, such as computer vision, biomedical analysis, and natural language processing, deep learning can even surpass human-level performance. Behind this superior performance, however, lies the expensive hardware cost required to implement deep learning operations, which are both computation intensive and memory intensive. Many works in the literature have focused on improving the efficiency of deep learning operations. This talk focuses on improving deep learning computation, presenting several efficient arithmetic unit architectures proposed and optimized for deep learning workloads.
Seokbum Ko is currently a Professor in the Department of Electrical and Computer Engineering and the Division of Biomedical Engineering, University of Saskatchewan, Canada. He received his PhD degree from the University of Rhode Island, USA, in 2002. His research interests include computer architecture/arithmetic, efficient hardware implementation of compute-intensive applications, deep learning processor architecture, and biomedical engineering. He is a senior member of the IEEE Circuits and Systems Society and an associate editor of IEEE TCAS-I and IEEE Access.