Neural Dynamics Self-Attention for Spiking Transformers

📄 Summary

Integrating Spiking Neural Networks (SNNs) with Transformer architectures is a promising route to balancing energy efficiency and performance, especially for edge vision applications. However, existing Spiking Transformers face two key challenges: a considerable performance gap relative to their Artificial Neural Network (ANN) counterparts, and high memory overhead during inference. Theoretical analysis attributes both limitations to the Spiking Self-Attention (SSA) mechanism, which lacks a locality bias and must store large attention matrices. Inspired by the localized receptive fields (LRF) and membrane-potential dynamics of biological visual neurons, LRF-Dyn is proposed, employing spiking neurons with localized receptive fields to improve performance while reducing memory requirements.
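To make the memory argument concrete, the sketch below shows leaky integrate-and-fire (LIF) neurons whose input current is gathered from a small local window (a localized receptive field) rather than a full N×N attention matrix, so per-step memory scales with the window size instead of the sequence length squared. This is an illustrative toy, not the paper's actual LRF-Dyn formulation; the function name, window size, decay, and threshold are all assumptions.

```python
import numpy as np

def lrf_spiking_layer(inputs, window=3, decay=0.5, threshold=1.0):
    """Toy sketch of spiking neurons with localized receptive fields (LRF).

    inputs: array of shape (T, N) -- input drive over T timesteps for N
            positions. Each position integrates only a `window`-sized
            neighborhood (a hard-coded locality bias), so nothing of size
            N x N is ever materialized.
    Returns binary spike trains of shape (T, N).
    NOTE: parameter values and the uniform window weights are illustrative.
    """
    T, N = inputs.shape
    half = window // 2
    v = np.zeros(N)                         # membrane potentials
    spikes = np.zeros((T, N))
    w = np.ones(window) / window            # uniform local mixing weights
    for t in range(T):
        padded = np.pad(inputs[t], half, mode="constant")
        # local input current: weighted sum over each position's window
        current = np.array([padded[i:i + window] @ w for i in range(N)])
        v = decay * v + current             # leaky integration of current
        spikes[t] = (v >= threshold).astype(float)
        v = np.where(spikes[t] > 0, 0.0, v)  # hard reset after a spike
    return spikes

# Usage: constant drive; border neurons see a smaller effective field,
# so they integrate longer before crossing threshold.
out = lrf_spiking_layer(np.ones((4, 4)))
```

Because each neuron only reads a fixed-size window, peak activation memory here is O(N·window) per timestep, versus O(N²) for the attention matrix in a standard SSA block.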
