Seminars

(June 23) Architectures for Deep Neural Network

Title

Architectures for Deep Neural Network

Date

June 23, 2016 (Thursday), 14:00

Speaker

Stephen W. Keckler (VP of NVIDIA Research, UT Austin Adjunct Professor)

Venue

Room 1501, Common Lecture Room 1, 1st floor, KAIST School of Computing (Building E3-1)

Abstract:

Deep Neural Networks (DNNs) have emerged as a key algorithm for a wide range of difficult applications including image recognition, speech processing, and computer virus detection. Today’s DNNs are often trained on farms of GPUs and then deployed in a wide range of systems from mobile to server. Current trends in DNN architectures are toward deeper and more complex networks, placing more stress on both training and inference. This talk will discuss the challenges associated with emerging DNNs and describe recent work that (1) enables larger and more complex networks to be trained on a single GPU with limited memory capacity, and (2) reduces the memory and computation footprints of DNNs at inference time, enabling them to run with vastly improved energy efficiency. This talk will draw on recent results by researchers at NVIDIA, MIT, and Stanford.
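
As a rough illustration of the second theme, reducing inference-time footprints, the sketch below applies generic magnitude-based weight pruning followed by 8-bit linear quantization to a random weight matrix using NumPy. The matrix size, sparsity level, and quantization scheme are arbitrary choices made for this example and are not the specific techniques presented in the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

    # 1) Magnitude pruning: drop the smallest 90% of weights by absolute value
    #    (the 90% sparsity target is an arbitrary illustrative choice).
    sparsity = 0.9
    threshold = np.quantile(np.abs(weights), sparsity)
    pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

    # 2) 8-bit linear quantization of the surviving weights.
    scale = float(np.max(np.abs(pruned))) / 127.0
    quantized = np.round(pruned / scale).astype(np.int8)
    dequantized = quantized.astype(np.float32) * scale  # stands in for the fp32 weights

    print("nonzero weights kept:", int(np.count_nonzero(quantized)), "of", weights.size)
    print("fp32 weight bytes:", weights.nbytes, "-> int8 value bytes:", quantized.nbytes)
    print("max dequantization error:", float(np.max(np.abs(pruned - dequantized))))

In practice the memory and energy savings come from storing the sparse low-precision tensor and executing low-precision (and possibly sparse) arithmetic at inference time; the dequantization step here only serves to check the approximation error.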
 

Speaker Bio:

Steve Keckler is the Vice President of Architecture Research at NVIDIA and an Adjunct Professor of Computer Science at the University of Texas at Austin, where he served on the faculty from 1998 to 2012. His research interests include parallel computer architectures, high-performance computing, energy-efficient architectures, and embedded computing. Dr. Keckler is a Fellow of the ACM, a Fellow of the IEEE, an Alfred P. Sloan Research Fellow, and a recipient of the NSF CAREER Award, the ACM Grace Murray Hopper Award, the President’s Associates Teaching Excellence Award at UT-Austin, and the Edith and Peter O’Donnell Award for Engineering. He earned a B.S. in Electrical Engineering from Stanford University and an M.S. and a Ph.D. in Computer Science from the Massachusetts Institute of Technology.