Professor Eui-Jong Whang joined a panel discussion at the ‘AI with Google 2018’ conference

Professor Eui-Jong Whang of our department gave a presentation and joined a panel discussion with industry experts at the ‘AI with Google 2018’ conference, held under the theme ‘AI for All’, sharing ideas on AI innovation and its challenges.

At the event, Dr. Jeff Dean, Head of Google AI, gave the keynote lecture, and Professor Whang, the only participant from academia, presented on ‘AI Research and Talent Development at KAIST’. In the panel discussion that followed, the participants shared their perspectives on AI innovation and the challenges ahead.

Professor Whang previously worked as a researcher at Google for five years, where he collaborated on building the data infrastructure of the TensorFlow Extended (TFX) machine learning platform and published the work as an ACM SIGKDD paper and an ACM SIGMOD tutorial.

 

<Link>

http://www.bloter.net/archives/313731


Ph.D. student Hyunho Yeo's paper (Advisor: Dongsu Han) accepted at USENIX OSDI 2018

The paper “Neural Adaptive Content-aware Internet Video Delivery” (authors: Hyunho Yeo, Youngmok Jung, Jaehong Kim, Jinwoo Shin, and Dongsu Han) by Ph.D. student Hyunho Yeo (Advisor: Dongsu Han) has been accepted to the USENIX Symposium on Operating Systems Design and Implementation (OSDI), to be held in October 2018.

USENIX OSDI is a flagship conference in the field of operating systems, held every two years, with about 30-40 papers accepted each time. It is known for presenting landmark systems and innovative technologies such as Google’s MapReduce (2004, about 25,000 citations), BigTable (2006, about 5,800 citations), and TensorFlow (2016, about 5,300 citations). Notably, this is the first paper from KAIST in OSDI’s 26-year history, and the first from a Korean institution in 10 years.

This study builds on the position paper “How Will Deep Learning Change Internet Video Delivery?” (ACM HotNets 2017, first author Hyunho Yeo) and was carried out in collaboration with deep learning expert Professor Jinwoo Shin. It presents a new design that combines adaptive streaming, the core technology of Internet video delivery, with deep learning.

The paper proposes an HTTP adaptive streaming system based on super-resolution DNNs (deep neural networks) together with a re-implemented DASH player, improving quality of experience by more than 50% compared to existing systems. To this end, the client is designed to run DNN inference in real time, and a DNN model is delivered for each individual video in conjunction with standard MPEG-DASH.
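As a rough illustration of this client-side design, the minimal sketch below shows a DASH-style playback loop that downloads a per-video super-resolution model once and then upscales each decoded chunk in real time. Everything here is a stand-in: the nearest-neighbour upscaler replaces the paper's DNN, chunk downloads are simulated, and all names and sizes are hypothetical rather than taken from the authors' implementation.

```python
import numpy as np

class StandInSRModel:
    """Placeholder for the per-video super-resolution DNN."""
    def __init__(self, scale: int = 2):
        self.scale = scale

    def upscale(self, frame: np.ndarray) -> np.ndarray:
        # Nearest-neighbour upscaling stands in for real DNN inference.
        return frame.repeat(self.scale, axis=0).repeat(self.scale, axis=1)

def fetch_chunk(index: int) -> np.ndarray:
    # Simulate one decoded low-resolution grayscale frame (180x320)
    # arriving in a DASH video chunk.
    rng = np.random.default_rng(index)
    return rng.random((180, 320))

def play(num_chunks: int = 4) -> None:
    # The per-video model is downloaded once, alongside the video,
    # then applied to every chunk as it is played.
    model = StandInSRModel(scale=2)
    for i in range(num_chunks):
        low = fetch_chunk(i)          # ordinary DASH chunk download
        high = model.upscale(low)     # client-side "inference" in real time
        print(f"chunk {i}: {low.shape} -> {high.shape}")

if __name__ == "__main__":
    play()
```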

This is the first study to apply a DNN to the video content itself, opening a new direction for research on video delivery and content distribution networks (CDNs). The work proposes content-aware DNNs, implements the system in practice, and demonstrates its advantages in a real-world environment.
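To make the content-aware idea concrete, here is a minimal sketch of deliberately specializing a small model to a single video by fitting it on that video's own low-resolution/high-resolution pairs. The linear model, synthetic data, and training loop are illustrative stand-ins for the paper's per-video CNN training, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "video": low-res patches and matching high-res targets that
# share one hidden structure, standing in for frames of a single title.
true_map = rng.random((4, 16))        # content-specific mapping (unknown)
lo_patches = rng.random((200, 4))     # 200 low-res 2x2 patches, flattened
hi_patches = lo_patches @ true_map    # matching high-res 4x4 patches

# Fit a per-video model by plain gradient descent on mean squared error;
# overfitting to this one video is the point of content-awareness.
W = np.zeros((4, 16))
lr = 0.1
for step in range(500):
    pred = lo_patches @ W
    grad = lo_patches.T @ (pred - hi_patches) / len(lo_patches)
    W -= lr * grad

mse = float(np.mean((lo_patches @ W - hi_patches) ** 2))
print(f"per-video model MSE after training: {mse:.6f}")
```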


Professors Jinwoo Shin & Dongsu Han’s research group develops super-resolution technology using deep learning

When we watch a video on the Internet, its resolution sometimes degrades depending on the state of the network connection. This happens because of adaptive streaming, a method that adjusts the resolution of the streamed video in real time to follow the ever-changing Internet bandwidth. To address this problem, the research group of Professors Jinwoo Shin and Dongsu Han has developed a technology that improves video quality regardless of network conditions, built by combining adaptive streaming with deep-learning-based super-resolution.
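For intuition, the following minimal sketch shows the core decision an adaptive streaming player makes for every chunk: pick the highest bitrate that recently measured throughput can sustain. The bitrate ladder, safety margin, and bandwidth trace are made-up illustration values, not part of the group's system.

```python
# Hypothetical bitrate ladder (kbps); a real player reads this from the manifest.
BITRATES_KBPS = [400, 1000, 2500, 5000]

def pick_bitrate(throughput_kbps: float, safety: float = 0.8) -> int:
    """Pick the highest bitrate below a safety fraction of measured throughput."""
    usable = throughput_kbps * safety
    candidates = [b for b in BITRATES_KBPS if b <= usable]
    return max(candidates) if candidates else BITRATES_KBPS[0]

if __name__ == "__main__":
    # Simulated per-chunk throughput measurements (kbps), illustration only.
    trace = [4800, 3100, 900, 1500, 6000]
    for i, tput in enumerate(trace):
        print(f"chunk {i}: throughput {tput} kbps -> request {pick_bitrate(tput)} kbps")
```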

The research group trained a CNN (convolutional neural network), a kind of deep learning model, by repeatedly feeding it low-resolution and high-resolution data. On top of the CNN, it applied super-resolution, a technique that finely upscales the horizontal and vertical dimensions of an image for display. In particular, the super-resolution CNN can be downloaded together with the streamed video, enabling the proposed method to improve resolution in real time regardless of network conditions. Downloading the CNN file along with the video is lightweight, since the file is at most about 2 MB in size. Furthermore, because the CNN file is designed to be downloaded in ordered pieces, super-resolution can be applied to the video even when only some of the pieces have arrived.
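The sketch below illustrates this progressive-download idea under loose assumptions: the model file is split into ordered pieces, and the player can run a usable super-resolution pass with however many pieces have arrived so far. Each "piece" here is a simple smoothing pass standing in for one block of the real CNN; the shapes and piece counts are hypothetical.

```python
import numpy as np

def upscale(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    # Nearest-neighbour upscaling as a cheap baseline enlargement.
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def refine(frame: np.ndarray) -> np.ndarray:
    # One downloaded model "piece" = one extra refinement pass
    # (a 3x3 mean filter here, standing in for a convolutional block).
    padded = np.pad(frame, 1, mode="edge")
    return sum(padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def super_resolve(frame: np.ndarray, pieces_downloaded: int) -> np.ndarray:
    # Output quality scales with how many model pieces have arrived.
    high = upscale(frame)
    for _ in range(pieces_downloaded):
        high = refine(high)
    return high

if __name__ == "__main__":
    low = np.random.default_rng(0).random((90, 160))
    for pieces in (1, 2, 4):
        high = super_resolve(low, pieces)
        print(f"{pieces} piece(s) downloaded: output {high.shape}")
```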

Through this system, the research team showed that the quality of state-of-the-art adaptive streaming can be achieved with 26.9% less Internet bandwidth, and that about 40% higher quality can be delivered at the same bandwidth. Professor Dongsu Han said, “This technology can be applied to video delivery systems such as YouTube and Netflix, and its practicality is what makes it significant. It is currently implemented only on desktop, but we will continue to develop it so that it runs on mobile devices as well.”

 

<Link>

IT Chosun

http://it.chosun.com/m/svc/article.html?contid=2018103002324&Dep0=m.search.naver.com&utm_source=m.search.naver.com&utm_medium=unknown&utm_campaign=itchosun

Joongang Ilbo

https://m.news.naver.com/read.nhn?mode=LSD&mid=sec&sid1=101&oid=025&aid=0002860051