Professor Dongsu Han’s research team has developed a video super-resolution technology based on deep learning on commercial mobile devices.
The research team significantly improved processing speed and drastically reduced power consumption by exploiting inter-frame dependencies in the video. Unlike conventional approaches that apply super-resolution to every single frame of a video, this study applies super-resolution only to a small subset of frames and reuses the results for the remaining frames. The team maximized the image-quality improvement per unit of computing resource by carefully selecting the frames to which super-resolution is applied, and implemented the result-reuse process to operate in real time by exploiting the inter-frame dependency information embedded in the compressed video.
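The anchor-and-reuse idea described above can be illustrated with a minimal sketch. This is not the team's actual implementation; the function names (`apply_sr`, `reuse`, `anchors`) are hypothetical, and the real system selects anchors carefully and reuses results via dependency information from the compressed bitstream rather than a simple cache copy.

```python
def upscale_video(frames, anchors, apply_sr, reuse):
    """Run expensive super-resolution only on anchor frames;
    cheaply reuse the most recent SR result for the rest."""
    outputs = []
    cached = None
    for i, frame in enumerate(frames):
        if i in anchors or cached is None:
            cached = apply_sr(frame)       # costly DNN inference, few frames only
        else:
            cached = reuse(cached, frame)  # cheap reuse (real system: codec dependency info)
        outputs.append(cached)
    return outputs

# Toy demo: frames are integers, "super-resolution" doubles a frame,
# and "reuse" simply repeats the cached result.
frames = [1, 2, 3, 4, 5]
result = upscale_video(frames, anchors={0, 3},
                       apply_sr=lambda f: f * 2,
                       reuse=lambda cache, f: cache)
print(result)  # → [2, 2, 2, 8, 8]
```

Because `apply_sr` runs on only two of the five frames here, the sketch shows why compute per frame drops sharply while every frame still receives an upscaled output.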
The research team reported that this technology can significantly improve mobile users' satisfaction with video streaming, and it is expected to be used in various fields related to video transmission and storage. The results of this research were presented at ACM MobiCom (Annual International Conference on Mobile Computing and Networking), one of the premier conferences in the field of mobile computing.
Detailed research information can be found at the link below.
Congratulations on Professor Dongsu Han’s remarkable achievement!
Project website: http://ina.kaist.ac.kr/~nemo
Conference presentation video: