Design of Area and Power Efficient MAC Architecture Using CNN for DSP Applications
Keywords:
ASIC, CNN, HDL, MAC

Abstract
This work develops a highly efficient MAC architecture for CNN-based digital signal processing, implemented in Verilog HDL and taken through functional verification, synthesis, and physical design with Cadence Genus and Innovus in the ASIC design flow. The architecture is intended to improve processor speed by executing the rapid multiplication and addition operations characteristic of a MAC unit. With the rapid evolution of technology, digital signal processors have become increasingly powerful and versatile, and the cornerstone of this MAC architecture is its use of CNN computation to keep these operations fast. Constructing the MAC architecture requires integrating several digital blocks within the design. Relative to the existing MAC unit, the proposed design reduces both area and power by 80.13%, yielding a substantially smaller and more power-efficient architecture.
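The article page does not include any RTL, but since the abstract names Verilog HDL as the implementation language, a minimal multiply-accumulate sketch may help illustrate the core operation the architecture optimizes. The module name, port list, and bit widths below are illustrative assumptions, not the authors' optimized design.

    // Minimal illustrative MAC unit (assumed 8-bit operands, 20-bit accumulator).
    module mac_unit #(
        parameter DATA_W = 8,
        parameter ACC_W  = 20
    ) (
        input  wire              clk,
        input  wire              rst_n,  // active-low synchronous reset
        input  wire              en,     // accumulate-enable strobe
        input  wire [DATA_W-1:0] a,      // multiplicand (e.g. activation)
        input  wire [DATA_W-1:0] b,      // multiplier (e.g. filter weight)
        output reg  [ACC_W-1:0]  acc     // running sum of products
    );
        always @(posedge clk) begin
            if (!rst_n)
                acc <= {ACC_W{1'b0}};
            else if (en)
                acc <= acc + a * b;      // multiply-accumulate step
        end
    endmodule

In a CNN-oriented datapath, many such units would operate in parallel, with one input fed by activations and the other by weights; the specific area and power optimizations of the proposed architecture are not reproduced in this sketch.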