📄 English Summary

GPUs in Every PoP: Inside Cato Neural Edge and the Shift to GPU-Accelerated Cloud Security

Cato Networks has made a bold architectural move in the SASE market by deploying NVIDIA GPUs directly within each of its 85+ global Points of Presence. The new Neural Edge platform bridges the gap between traffic inspection and AI-driven analysis by enabling both processes to occur simultaneously in the same location and in a single pass. This innovation raises a fundamental architectural question for anyone building or operating cloud security infrastructure: where does your AI actually run? Most cloud-delivered security platforms currently inspect traffic at their PoPs using CPU-based engines, which works well for traditional workloads.
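The "single pass" idea mentioned above can be sketched in miniature. This is a hypothetical illustration, not Cato's actual implementation or API: every name here (`ParsedFlow`, `firewall_engine`, `ai_engine`, `single_pass`) is invented for the example. The point is that the flow is parsed once, and both a rule-based engine and an AI scorer consume the same in-memory record during that one traversal, instead of each engine re-reading and re-parsing the wire data.

```python
# Hypothetical sketch of single-pass inspection. All names are
# illustrative assumptions, not a real product API.
from dataclasses import dataclass, field


@dataclass
class ParsedFlow:
    """A flow parsed exactly once; engines annotate it in place."""
    src: str
    dst: str
    payload_len: int
    verdicts: dict = field(default_factory=dict)


def firewall_engine(flow: ParsedFlow) -> None:
    # Traditional CPU-style rule check (toy rule for the example).
    flow.verdicts["firewall"] = "block" if flow.dst == "10.0.0.66" else "allow"


def ai_engine(flow: ParsedFlow) -> None:
    # Stand-in for a GPU-resident model; here just a size heuristic.
    flow.verdicts["ai_anomaly"] = flow.payload_len > 9000


def single_pass(flow: ParsedFlow, engines) -> ParsedFlow:
    # One traversal: every engine runs on the same parsed record,
    # and a combined verdict is available at the end of the pass.
    for engine in engines:
        engine(flow)
    return flow


flow = single_pass(ParsedFlow("10.0.0.1", "10.0.0.66", 1200),
                   [firewall_engine, ai_engine])
print(flow.verdicts)  # {'firewall': 'block', 'ai_anomaly': False}
```

The contrast with a multi-pass design is that each additional engine (or a remote AI service) would otherwise require its own parse, queue hop, or network round trip; co-locating the engines in one pass keeps added latency roughly bounded by the cost of the extra per-record function calls.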

Powered by Cloudflare Workers + Payload CMS + Claude 3.5

Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others