📄 English Summary
Sharpness Aware Surrogate Training for Spiking Neural Networks
Sharpness Aware Surrogate Training (SAST) is proposed for training Spiking Neural Networks (SNNs) via backpropagation, addressing a coupling problem in conventional hard-forward/surrogate-backward training: the nonsmooth forward model is paired with a biased gradient estimator. SAST instead defines the optimization target as an ordinary smooth empirical risk, so the training gradient is exact for the auxiliary model actually being optimized. Under explicit boundedness and contraction assumptions, compact state-stability and input-Lipschitz bounds are derived, smoothness of the surrogate objective is established, a first-order SAM approximation bound is provided, and related properties are proven. These results offer a new theoretical foundation and methodological support for training spiking neural networks.
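The two ingredients the summary names can be illustrated together: a smooth surrogate risk (a sigmoid "soft spike" replacing the hard Heaviside nonlinearity, so its gradient is exact for the auxiliary model) and a first-order SAM update that evaluates the gradient at an adversarially perturbed point. The sketch below is an assumed minimal setup, not the paper's algorithm or code; all names (`surrogate_loss`, `sam_step`, the toy data) are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def surrogate_loss(w, x, y):
    # Smooth empirical risk: sigmoid soft spike instead of Heaviside,
    # so gradients below are exact for this auxiliary (surrogate) model.
    p = sigmoid(x @ w)
    return np.mean((p - y) ** 2)

def grad(w, x, y):
    # Exact gradient of the smooth surrogate risk above.
    p = sigmoid(x @ w)
    return x.T @ ((p - y) * p * (1 - p)) * (2.0 / len(y))

def sam_step(w, x, y, lr=0.5, rho=0.05):
    # First-order SAM: perturb weights by rho * g / ||g||, take the
    # gradient at the perturbed point, then update the original weights.
    g = grad(w, x, y)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return w - lr * grad(w + eps, x, y)

# Toy problem: hard (spike-like) binary labels from a linear threshold.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = (x @ w_true > 0).astype(float)

w = np.zeros(3)
loss0 = surrogate_loss(w, x, y)
for _ in range(200):
    w = sam_step(w, x, y)
loss_final = surrogate_loss(w, x, y)
print(loss0, loss_final)
```

Because the forward model used for training is itself the smooth sigmoid network, the descent direction is an unbiased gradient of the objective being minimized; the SAM perturbation only biases the update toward flat minima of that same smooth risk.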
Powered by Cloudflare Workers + Payload CMS + Claude 3.5
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, etc.