AI in EE

AI IN DIVISIONS

AI in Circuit Division

ECIM: Exponent Computing in Memory for an Energy-Efficient Heterogeneous Floating-Point Processor (Prof. Hoi-Jun Yoo's Lab)

The authors propose a heterogeneous floating-point (FP) computing architecture that maximizes energy efficiency by optimizing exponent processing and mantissa processing separately. The proposed exponent-computing-in-memory (ECIM) architecture and mantissa-free-exponent-computing (MFEC) algorithm reduce the power consumption of both the memory and the FP MAC units while resolving the limitations of previous FP computing-in-memory processors. A bfloat16 DNN training processor with the proposed features and support for sparsity exploitation was implemented and fabricated in 28 nm CMOS technology; it achieves 13.7 TFLOPS/W energy efficiency while supporting FP operations with a CIM architecture.
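To make the exponent/mantissa split concrete, the short Python sketch below (written for this summary; it is not the authors' implementation and omits zeros, subnormals, and rounding) decomposes bfloat16 operands and multiplies them with the two fields handled on separate paths. The exponent path reduces to a narrow integer addition of biased exponents, the kind of low-cost operation an exponent-computing-in-memory design can fold into the memory array, while the wider mantissa multiplication stays in a conventional arithmetic unit.

import struct


def bf16_fields(x: float):
    """Split a float into bfloat16-style (sign, biased exponent, 7-bit mantissa).

    bfloat16 keeps float32's 8-bit exponent and truncates the fraction to 7 bits.
    """
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    bits16 = bits32 >> 16                  # top 16 bits = bfloat16 (truncation, no rounding)
    sign = (bits16 >> 15) & 0x1
    exponent = (bits16 >> 7) & 0xFF        # 8-bit biased exponent (bias = 127)
    mantissa = bits16 & 0x7F               # 7-bit fraction, implicit leading 1
    return sign, exponent, mantissa


def bf16_multiply_split(a: float, b: float) -> float:
    """Multiply two values with exponents and mantissas handled on separate paths.

    The exponent path is only an 8-bit integer addition plus bias correction;
    the mantissa path is the wider significand multiplication.
    Zeros and subnormals are treated as zero for brevity.
    """
    sa, ea, ma = bf16_fields(a)
    sb, eb, mb = bf16_fields(b)
    if ea == 0 or eb == 0:
        return 0.0

    # Exponent path: narrow integer add, removing the duplicated bias.
    exp_true = (ea - 127) + (eb - 127)

    # Mantissa path: multiply significands with the implicit leading 1.
    sig = (1.0 + ma / 128.0) * (1.0 + mb / 128.0)
    if sig >= 2.0:                         # normalize and bump the exponent
        sig /= 2.0
        exp_true += 1

    sign = -1.0 if (sa ^ sb) else 1.0
    return sign * sig * (2.0 ** exp_true)


if __name__ == "__main__":
    # The split-path product matches a * b up to bfloat16 truncation error.
    print(bf16_multiply_split(3.14, -2.5), 3.14 * -2.5)

Because the exponent path never needs the mantissa bits, it can be optimized (and, in ECIM, moved into memory) independently of the mantissa datapath, which is the intuition behind the heterogeneous split described above.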

Related papers:

  1. Lee et al., “ECIM: Exponent Computing in Memory for an Energy-Efficient Heterogeneous Floating-Point DNN Training Processor,” IEEE Micro, Jan. 2022.
  2. Lee et al., “An Energy-efficient Floating-Point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory,” 2021 IEEE Hot Chips 33 Symposium (HCS), 2021.
  3. Lee et al., “A 13.7 TFLOPS/W Floating-point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory,” 2021 Symposium on VLSI Circuits, 2021.

 
