📄 English Summary
A Synthesizable RTL Implementation of Predictive Coding Networks
Backpropagation is a cornerstone of modern deep learning; however, global error propagation, separate forward and backward phases, and a heavy reliance on centralized memory make it difficult to realize as an online, fully distributed hardware learning system. Predictive coding offers an alternative in which inference and learning emerge from local prediction-error dynamics between adjacent layers. This work introduces a digital architecture that implements discrete-time predictive coding updates directly in hardware. Each neural core independently maintains its own activity, prediction error, and synaptic weights, communicating solely with adjacent layers through hardwired connections. Both supervised learning and inference are supported through a uniform per-neuron clamping primitive.
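To make the summary concrete, the following is a minimal software sketch of the discrete-time predictive coding updates the architecture implements in RTL. It is a hypothetical reference model, not the paper's hardware: layer sizes, learning rates, and the `tanh` nonlinearity are illustrative assumptions. Each layer keeps its own activity `x[l]`, error `e[l]`, and weights `W[l]`, and every update reads only adjacent layers, mirroring the hardwired neighbor-to-neighbor communication described above. Clamping is the uniform per-neuron primitive: the input is always pinned, and the output is pinned only when a supervision target is supplied.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 3]  # input, hidden, output widths (illustrative)
# W[l] predicts layer l+1 from layer l
W = [rng.normal(0, 0.1, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]

def f(x):  return np.tanh(x)            # assumed activation
def df(x): return 1.0 - np.tanh(x) ** 2

def errors(x):
    # Local prediction errors between adjacent layers only:
    #   e[l] = x[l+1] - W[l] f(x[l])
    return [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]

def run(x_in, target=None, steps=50, gamma=0.05):
    """Relax activities by gradient descent on F = sum_l ||e[l]||^2 / 2."""
    x = [x_in.copy()] + [np.zeros(s) for s in sizes[1:]]
    if target is not None:
        x[-1] = target.copy()           # clamp output (supervised mode)
    for _ in range(steps):
        e = errors(x)
        # Hidden layers: combine bottom-up error with top-down feedback,
        # using only the errors of the two neighboring connections.
        for l in range(1, len(sizes) - 1):
            x[l] -= gamma * (e[l - 1] - df(x[l]) * (W[l].T @ e[l]))
        if target is None:              # output runs free during inference
            x[-1] -= gamma * e[-1]
    return x, errors(x)

# One relaxation with the output clamped to a target (supervised mode)
x, e = run(np.full(4, 0.5), target=np.array([1.0, 0.0, 0.0]))
```

In this sketch, a Hebbian-style weight update such as `W[l] += alpha * np.outer(e[l], f(x[l]))` after relaxation would complete the learning step; everything it needs is local to one connection, which is what makes the scheme attractive for per-core hardware.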