Optimization and Research of a ShuffleNetV2 Accelerator Based on FPGA
First published: 2024-06-28
Abstract: As artificial intelligence advances, the demands of neural network training and inference continue to grow. Yet many application scenarios, particularly edge devices and embedded systems used for face recognition, speech recognition, and autonomous driving, are constrained in power, memory, and computational capacity. To address these hardware resource constraints, this paper designs an FPGA-based neural network accelerator targeting ShuffleNetV2, a mainstream lightweight convolutional neural network. The model is quantized to 16 bits, and the ShuffleNetV2 building blocks and channel shuffle operation are redesigned for hardware friendliness. The optimized model was deployed on a Zynq-7020 board. Experiments show that, with an accuracy loss within 1%, the optimized ShuffleNetV2 model runs 0.4 ms faster than the GPU baseline and 7.3 times faster than the CPU baseline, reaching 4.05 GOPS at a power consumption of only 2.663 W, for an energy efficiency of 1.54 GOPS/W. The optimized model's latency is 43 ms, 42% of the original model's.
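The abstract names two key techniques, 16-bit model quantization and a redesigned channel shuffle, without giving their details. As a rough illustration only (not the paper's actual hardware-friendly design, which is not specified here), a NumPy sketch of the standard ShuffleNetV2 channel shuffle and a generic symmetric 16-bit quantizer might look like:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Standard ShuffleNetV2 channel shuffle on an (N, C, H, W) tensor:
    split C into `groups` groups, transpose, and flatten to interleave."""
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

def quantize_int16(x):
    """Generic symmetric 16-bit quantization: map floats to int16 with a
    per-tensor scale so that max |x| lands on 32767."""
    peak = np.max(np.abs(x))
    scale = peak / 32767.0 if peak > 0 else 1.0
    q = np.clip(np.round(x / scale), -32768, 32767).astype(np.int16)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int16 codes and their scale."""
    return q.astype(np.float32) * scale
```

For example, shuffling channels `[0..7]` with `groups=2` interleaves the two halves into `[0, 4, 1, 5, 2, 6, 3, 7]`, which is what lets information flow between branch groups in ShuffleNetV2; the quantizer bounds the round-trip error of each value by half a quantization step.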