Professor Yoo Hoi-Jun's lab in our department has developed a low-power GAN (Generative Adversarial Network) AI semiconductor chip.
The developed AI semiconductor chip can process multiple deep neural networks at low power, making it suitable for mobile platforms. Using the chip, the team succeeded in running generative AI technologies such as image synthesis, style transfer, and restoration of damaged images on a mobile device.
First author and Ph.D. candidate Sanghoon Kang presented the work on February 17 at the ISSCC conference in San Francisco, which gathered approximately 3,000 researchers from around the world (paper title: "GANPU: A 135TFLOPS/W Multi-DNN Training Processor for GANs with Speculative Dual-Sparsity Exploitation"). Recent research has focused on accelerating artificial intelligence on mobile platforms, but previous chips were limited to a single deep neural network and to the inference stage only, which is not adequate for generative AI on mobile platforms. The team has shown that not only single but also multiple deep neural networks, such as generative adversarial networks, can be run on mobile devices via the GANPU (Generative Adversarial Networks Processing Unit) AI semiconductor chip.
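The "dual-sparsity exploitation" named in the paper title refers, broadly, to skipping computation when operands are zero, since both activations and gradients are frequently zero during DNN training. The hardware mechanism is far more sophisticated, but the underlying idea can be sketched in a few lines (this is purely illustrative, not the GANPU design; the function name and data are made up for the example):

```python
# Illustrative sketch of zero-skipping ("dual-sparsity") in a dot product.
# A hardware accelerator saves energy by not performing a multiply-accumulate
# when EITHER operand is zero; here we just count how many are skipped.
def sparse_dot(a, b):
    total = 0.0
    skipped = 0
    for x, w in zip(a, b):
        if x == 0.0 or w == 0.0:  # either side zero: result contributes nothing
            skipped += 1
            continue
        total += x * w
    return total, skipped

# With sparse inputs, most multiplications can be skipped:
print(sparse_dot([0, 2, 0, 3], [5, 0, 1, 2]))  # (6.0, 3)
```

In a chip, each skipped multiply-accumulate is energy not spent, which is one reason sparsity-aware designs can reach high TFLOPS/W figures.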
Because the chip can train the GAN on the processor itself, without sending data to an external server, it provides enhanced privacy for personal data. It also achieved 4.8-fold higher energy efficiency compared to previous multi-DNN training processors.
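Training a GAN on-device means alternately updating two networks, a generator and a discriminator, which is why a multi-DNN training processor is needed rather than an inference-only one. The following toy sketch shows that alternating structure on a 1-D problem with hand-derived gradients; it is only a minimal illustration of GAN training in general, not the networks or algorithms run on the GANPU (all parameter values and names here are invented for the example):

```python
import math, random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Toy 1-D GAN (illustrative only). Real data lies near 3.0; the generator
# G(z) = a*z + b should learn to emit samples near 3.0 by fooling the
# discriminator D(x) = sigmoid(w*x + c). Each step updates BOTH networks,
# which is the "multi-DNN training" workload a chip like GANPU targets.
def train_toy_gan(steps=1000, lr=0.05, seed=0):
    rng = random.Random(seed)
    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        x_real = 3.0 + 0.1 * rng.gauss(0, 1)
        z = rng.gauss(0, 1)
        x_fake = a * z + b
        # Discriminator step: raise D(x_real), lower D(x_fake).
        s_r = sigmoid(w * x_real + c)
        s_f = sigmoid(w * x_fake + c)
        w += lr * ((1 - s_r) * x_real - s_f * x_fake)
        c += lr * ((1 - s_r) - s_f)
        # Generator step: raise D(G(z)) (non-saturating GAN loss).
        x_fake = a * z + b
        s_f = sigmoid(w * x_fake + c)
        a += lr * (1 - s_f) * w * z
        b += lr * (1 - s_f) * w
    return a, b, w, c

a, b, w, c = train_toy_gan()
# After training, the generator's output mean (roughly b) has drifted
# toward the real data near 3.0.
```

Every step of this loop touches two distinct networks in both forward and backward passes; performing that workload entirely on a mobile processor is what keeps private training data from ever leaving the device.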
The team demonstrated an automatic face-editing application on a mobile tablet PC. The application lets the user designate points among 17 facial features to add, transform, or delete, and the GANPU then applies the corrections automatically.