Title: Quantization-Error-Robust Deep Neural Network for Embedded Accelerators
Authors: Youngbeom Jung, Hyeonuk Kim, Yeongjae Choi, Lee-Sup Kim
Abstract: Quantization with low precision has become an essential technique for deploying deep neural networks on energy- and memory-constrained devices. However, how far precision can be reduced is limited by the inevitable loss of accuracy caused by quantization error. To overcome this obstacle, we propose methods for reforming and quantizing a network so that it achieves high accuracy even at low precision, without any runtime overhead in embedded accelerators. Our proposal consists of two analytical approaches: 1) network optimization to find the most error-resilient equivalent network under the precision constraint and 2) quantization exploiting adaptive rounding offset control. Experimental results show accuracies of up to 98.31% and 99.96% of the floating-point results with 6-bit and 8-bit quantized networks, respectively. In addition, our methods enable lower-precision accelerator designs, reducing energy consumption by 8.5%.
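The abstract's second approach, quantization with a controllable rounding offset, can be illustrated with a minimal sketch. This is not the paper's actual algorithm (which is not detailed in the abstract); it is a generic symmetric uniform quantizer in which the rounding offset is exposed as a tunable parameter, the knob an adaptive scheme could adjust per layer to reduce quantization error. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def quantize(x, bits=8, offset=0.5):
    """Symmetric uniform quantizer with a tunable rounding offset.

    offset=0.5 reproduces ordinary round-to-nearest; other values in
    [0, 1) shift the rounding boundary, which is the degree of freedom
    an adaptive rounding-offset scheme could control per layer.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax          # map the largest weight to qmax
    q = np.floor(x / scale + offset)          # offset-controlled rounding
    q = np.clip(q, -qmax - 1, qmax)           # stay in the signed integer range
    return q * scale                          # dequantize to measure error

# Hypothetical example: compare quantization error at two offsets.
w = np.array([0.12, -0.50, 0.33, 0.90])
err_a = np.mean((w - quantize(w, bits=6, offset=0.5)) ** 2)
err_b = np.mean((w - quantize(w, bits=6, offset=0.4)) ** 2)
```

In this sketch, a search over `offset` values per layer (picking whichever minimizes a layer-wise error metric) would approximate the kind of adaptive offset control the abstract refers to.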