Dynamic Token Pruning for Efficient Compressed Video Action Recognition Transformer
First published: 2025-03-14
Abstract: The rapid growth of online video has driven demand for video understanding tasks. Compressed video, with its sparsely stored RGB frames and compressed motion cues (e.g., motion vectors and residuals), shows great potential for reducing computational overhead and storage costs, making it an effective substitute for raw video in action recognition. However, existing Transformer-based methods for compressed video action recognition struggle to balance accuracy and efficiency, and their high computational cost limits deployment in real-world scenarios. To address this, we propose DTP-ECVT (Dynamic Token Pruning for Efficient Compressed Video Transformer). Specifically, the framework first builds a dual-stream architecture that processes the RGB and motion modalities of compressed video separately, and then enhances recognition through cross-modal interaction and global information fusion. In addition, to reduce model complexity, a lightweight dynamic spatial token pruning (DSTP) module is integrated into the dual-stream architecture; it dynamically prunes redundant tokens according to modality-specific features, thereby lowering training and inference costs. Experiments on the HMDB-51, UCF-101, and Kinetics-400 benchmarks show that DTP-ECVT matches or surpasses the recognition accuracy of state-of-the-art methods while significantly reducing computational overhead and inference latency.
Keywords:
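To make the token-pruning idea concrete, the sketch below shows one minimal way a dynamic spatial token pruning step could be implemented in PyTorch. The class name DynamicSpatialTokenPruning, the linear importance scorer, and the keep_ratio parameter are illustrative assumptions for this sketch; the paper's actual DSTP module conditions pruning on modality-specific features within the dual-stream architecture and may differ substantially.

```python
# Hypothetical, minimal sketch of dynamic spatial token pruning (not the
# authors' implementation): a lightweight scorer ranks the spatial tokens of
# each sample and only the top-scoring ones are passed to later blocks.
import torch
import torch.nn as nn


class DynamicSpatialTokenPruning(nn.Module):
    """Keeps the `keep_ratio` highest-scoring spatial tokens per sample."""

    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio
        # Lightweight per-token importance scorer (an assumed design choice).
        self.scorer = nn.Linear(dim, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim); any class token is assumed to be
        # handled outside this module.
        batch, num_tokens, dim = tokens.shape
        scores = self.scorer(tokens).squeeze(-1)           # (batch, num_tokens)
        num_keep = max(1, int(num_tokens * self.keep_ratio))
        keep_idx = scores.topk(num_keep, dim=1).indices    # (batch, num_keep)
        keep_idx = keep_idx.sort(dim=1).values             # preserve spatial order
        gather_idx = keep_idx.unsqueeze(-1).expand(-1, -1, dim)
        return tokens.gather(1, gather_idx)                # (batch, num_keep, dim)


if __name__ == "__main__":
    x = torch.randn(2, 196, 768)   # e.g. 14x14 patch tokens from a ViT stage
    pruned = DynamicSpatialTokenPruning(dim=768, keep_ratio=0.5)(x)
    print(pruned.shape)            # torch.Size([2, 98, 768])
```

Hard top-k selection as sketched here is non-differentiable with respect to the kept/dropped decision, so pruning methods of this kind commonly apply it at inference time or pair it with a differentiable relaxation during training; how DTP-ECVT handles this is not specified in the abstract.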