LLM Deanonymization Is Exposing Real Identities Online

📄 Summary

In February 2026, a study by Simon Lermen demonstrated that large language models (LLMs) with internet access can deanonymize online users with over 85% accuracy by cross-referencing writing style, behavioral patterns, and publicly available data. This capability shift is not merely a theoretical vulnerability; it fundamentally changes what "anonymity" means online. LLM deanonymization has become a production-scale threat rather than a research curiosity: any deployment with internet access and sufficient contextual information is exposed to this risk.
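The "cross-referencing writing styles" step the summary describes is essentially authorship attribution: compare an anonymous text against writing samples from known candidate accounts and pick the closest stylistic match. The sketch below is purely illustrative, not the study's method; it uses a classic, much simpler stylometric signal (character trigram counts compared by cosine similarity), and all function names and the toy texts are invented for this example.

```python
from collections import Counter
import math

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams, a classic stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse n-gram count vectors."""
    dot = sum(count * b[gram] for gram, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_match(anonymous_text: str, candidates: dict[str, str]) -> str:
    """Return the candidate author whose known writing is stylistically closest."""
    profile = char_ngrams(anonymous_text)
    return max(
        candidates,
        key=lambda name: cosine_similarity(profile, char_ngrams(candidates[name])),
    )

if __name__ == "__main__":
    # Hypothetical writing samples scraped from two public accounts.
    known = {
        "alice": "i really love hiking in the mountains every weekend",
        "bob": "the quarterly revenue figures exceed our projections",
    }
    print(best_match("i love hiking and the mountains on weekends", known))
```

An LLM with internet access goes far beyond this: it can fetch candidate profiles itself and weigh behavioral cues (posting times, topics, idiosyncratic phrasing) alongside style, which is what pushes accuracy to the levels the study reports.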


Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others