An energy-efficient memory-based high-throughput VLSI architecture for convolutional networks

In this paper, an energy-efficient, memory-based, and high-throughput VLSI architecture is proposed for convolutional networks (C-Net) by employing compute memory (CM) [1], where computation is deeply embedded into the memory (SRAM). Behavioral models incorporating CM's circuit non-idealities and energy models in 45 nm SOI CMOS are presented.

System-level simulations using these models demonstrate that a handwritten digit recognition accuracy Pr > 0.99 can be achieved on the MNIST database [2], along with a 24.5× reduction in energy-delay product, a 5.0× reduction in energy, and a 4.9× higher throughput compared to a conventional system.
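As a quick consistency check (not part of the paper itself), the reported 24.5× energy-delay-product (EDP) reduction follows directly from the other two figures, since EDP = energy × delay and delay scales inversely with throughput:

```python
# Sanity check on the reported gains vs. the conventional system.
# EDP = energy * delay, and delay is inversely proportional to throughput,
# so the EDP reduction factor is the product of the two reported factors.
energy_reduction = 5.0   # reported 5.0x lower energy
throughput_gain = 4.9    # reported 4.9x higher throughput (i.e., 4.9x lower delay)

edp_reduction = energy_reduction * throughput_gain
print(f"EDP reduction: {edp_reduction:.1f}x")  # prints "EDP reduction: 24.5x"
```

This confirms the three reported figures are mutually consistent: 5.0 × 4.9 = 24.5.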