Pentagon Blacklists Anthropic, Anoints OpenAI—Same Safety Deal, Different Rules

📄 Chinese Summary (translated)

On February 27, 2026, the Trump administration designated Anthropic a "Supply-Chain Risk to National Security" and ordered all federal agencies to stop using the company's models immediately. The same day, Sam Altman announced that OpenAI had reached an agreement with the Department of Defense to deploy its models on classified networks. Although both companies had negotiated the same safety principles with the Pentagon, insisting on a prohibition against domestic mass surveillance and on human accountability for autonomous weapons systems, the DoD accepted OpenAI's terms while rejecting Anthropic's identical demands. The decision was political, not technical.

📄 English Summary

Pentagon Blacklists Anthropic, Anoints OpenAI—Same Safety Deal, Different Rules

On February 27, 2026, the Trump administration designated Anthropic a "Supply-Chain Risk to National Security" and ordered all federal agencies to cease using the company's models immediately. On the same day, Sam Altman announced that OpenAI had reached an agreement with the Department of Defense to deploy its models on classified networks. Both companies had negotiated identical safety principles with the Pentagon, insisting on a prohibition against domestic mass surveillance and on human accountability for autonomous weapons systems. The DoD accepted OpenAI's terms but rejected Anthropic's identical demands. The decision was not made on technical grounds but on political considerations.


Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others