Research Highlights

KAIST EE Presented Cutting-edge AI Research Results at ICML 2019

ICML and NeurIPS are the world’s most prestigious machine learning conferences, and the quality and impact of an institution’s machine learning research are often measured by the number of papers it publishes at these venues. KAIST EE has been very prolific in this regard: at ICML 2019 alone, KAIST EE researchers published 9 papers, making the department one of the most productive institutions in the world in machine learning research. These papers are listed below:

TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning

Sung Whan Yoon, Jun Seo, and Jaekyun Moon 


Dimension-Wise Importance Sampling Weight Clipping for Sample-Efficient Reinforcement Learning

Seungyul Han and Youngchul Sung


Weak Detection of Signal in the Spiked Wigner Model

Hye Won Chung and Ji Oon Lee


QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning

Kyunghwan Son, Daewoo Kim, Wan Ju Kang, David Earl Hostallero, and Yung Yi


Learning What and Where to Transfer

Yunhun Jang, Hankook Lee, Sung Ju Hwang, and Jinwoo Shin


Training CNNs with Selective Allocation of Channels

Jongheon Jeong and Jinwoo Shin


Robust Inference via Generative Classifiers for Handling Noisy Labels

Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, and Jinwoo Shin


Using Pre-Training Can Improve Model Robustness and Uncertainty

Dan Hendrycks, Kimin Lee, and Mantas Mazeika


Spectral Approximate Inference

Sejun Park, Eunho Yang, Se-Young Yun, and Jinwoo Shin