📄 English Summary
Bullet Trains: Parallelizing Training of Temporally Precise Spiking Neural Networks
Continuous-time, event-driven spiking neural networks (SNNs) operate strictly on spike events, treating spike timing and ordering as representations rather than artifacts of time discretization. This perspective aligns with biological computation and the native resolution of event sensors and neuromorphic processors, and it lets compute and memory usage scale with the number of events. However, two challenges hinder practical end-to-end training of event-based SNN systems: 1) exact charge-fire-reset dynamics make the processing of input spikes inherently sequential, and 2) precise spike times must be resolved without time bins. To address both issues, parallel associative scans are employed to consume multiple input spikes simultaneously, improving processing efficiency.
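To make the associative-scan idea concrete: between firing events, leaky integrate-and-fire membrane dynamics follow a linear recurrence of the form V[t] = a·V[t-1] + b[t], and each step can be viewed as an affine map (a, b) whose composition is associative, so the whole chain can be evaluated with a parallel scan instead of a sequential loop. The sketch below is illustrative only, under these simplifying assumptions (it models the scan structure serially and omits the fire-and-reset nonlinearity that the paper's full method must handle); the function names and parameter values are not from the paper.

```python
# Hedged sketch: parallelizing a linear membrane recurrence V[t] = a*V[t-1] + b[t]
# via an associative scan over affine maps. Assumption: we only model the
# subthreshold (charge) dynamics; fire/reset handling is out of scope here.

def compose(f, g):
    """Compose affine maps: apply f = (a1, b1) first, then g = (a2, b2).
    (a, b) represents the map v -> a*v + b."""
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a2 * b1 + b2)

def associative_scan(ops):
    """Inclusive scan: ops[i] becomes the composition ops[0] ∘ ... ∘ ops[i].
    Recursive pairwise combination gives O(log n) depth on parallel hardware;
    this reference version runs serially but has the same structure."""
    n = len(ops)
    if n == 1:
        return list(ops)
    # Up-sweep: combine adjacent pairs, then scan the half-length sequence.
    pairs = [compose(ops[2 * i], ops[2 * i + 1]) for i in range(n // 2)]
    scanned_pairs = associative_scan(pairs)
    # Down-sweep: fill in results for every original position.
    result = []
    for i in range(n):
        if i == 0:
            result.append(ops[0])
        elif i % 2 == 1:
            result.append(scanned_pairs[i // 2])
        else:
            result.append(compose(scanned_pairs[i // 2 - 1], ops[i]))
    return result

# Usage: five "input spike" steps with leak factor 0.9 (illustrative values).
a_vals = [0.9] * 5
b_vals = [1.0, 0.5, 0.0, 2.0, 0.3]
ops = list(zip(a_vals, b_vals))

v0 = 0.0
# Parallel-scan result: apply each prefix composition to the initial state.
par = [a * v0 + b for (a, b) in associative_scan(ops)]

# Sequential reference: the ordinary charge recurrence.
seq, v = [], v0
for a, b in ops:
    v = a * v + b
    seq.append(v)

assert all(abs(p - s) < 1e-9 for p, s in zip(par, seq))
```

The key property is that composing two affine maps yields another affine map, so prefix compositions can be combined in any grouping; this is what allows multiple input spikes to be consumed simultaneously rather than one at a time.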