Seminar

(June 23) Architectures for Deep Neural Networks

Subject: Architectures for Deep Neural Networks

Date: 2016/06/23 (Thursday) 14:00

Speaker: Stephen W. Keckler, Vice President of Architecture Research, NVIDIA (Adjunct Professor, UT Austin)

Place: E3-1 #1501

Overview:

Deep Neural Networks (DNNs) have emerged as a key algorithm for a wide range of difficult applications, including image recognition, speech processing, and computer virus detection. Today’s DNNs are often trained on farms of GPUs and then deployed in a wide range of systems, from mobile to server. Current trends in DNN architectures are toward deeper and more complex networks, placing more stress on both training and inference. This talk will discuss the challenges associated with emerging DNNs and describe recent work that (1) enables larger and more complex networks to be trained on a single GPU with limited memory capacity and (2) reduces the memory and computation footprints of DNNs at inference time, enabling them to run with vastly improved energy efficiency. The talk will draw on recent results by researchers at NVIDIA, MIT, and Stanford.

Profile:

Steve Keckler is the Vice President of Architecture Research at NVIDIA and an Adjunct Professor of Computer Science at the University of Texas at Austin, where he served on the faculty from 1998 to 2012. His research interests include parallel computer architectures, high-performance computing, energy-efficient architectures, and embedded computing. Dr. Keckler is a Fellow of the ACM, a Fellow of the IEEE, an Alfred P. Sloan Research Fellow, and a recipient of the NSF CAREER award, the ACM Grace Murray Hopper award, the President’s Associates Teaching Excellence Award at UT-Austin, and the Edith and Peter O’Donnell award for Engineering. He earned a B.S. in Electrical Engineering from Stanford University and an M.S. and a Ph.D. in Computer Science from the Massachusetts Institute of Technology.