
📄 English Summary

The Cognitive Divergence: AI Context Windows, Human Attention Decline, and the Delegation Feedback Loop

The study documents and theorizes a self-reinforcing dynamic between two measurable trends: the exponential expansion of large language model (LLM) context windows and the secular contraction of sustained human attention capacity. The resulting asymmetry is termed Cognitive Divergence. AI context windows have grown from 512 tokens in 2017 to 2,000,000 tokens by 2026 (a factor of approximately 3,906; fitted λ = 0.59/year; doubling time of roughly 14 months). Concurrently, the human Effective Context Span (ECS), a token-equivalent measure derived from a validated reading-rate meta-analysis (Brysbaert, 2019) and an empirically motivated Comprehension Scaling Factor, has declined from approximately 16,000 tokens (2004 baseline) to an estimated 1,800 tokens by 2026.
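The headline numbers above can be checked with a few lines of arithmetic. The sketch below plugs in the summary's figures; it does not re-fit the model (λ = 0.59/year is taken as given, and the exponential form for both series is an assumption of this sketch, not something the summary spells out):

```python
import math

# Context-window growth, modeled as W(t) = W0 * exp(lam * t).
W_2017 = 512          # tokens (2017, per the summary)
W_2026 = 2_000_000    # tokens (2026, per the summary)
lam = 0.59            # fitted growth rate per year, quoted in the summary

# Endpoint growth factor: 2,000,000 / 512 ≈ 3,906, matching the text.
growth_factor = W_2026 / W_2017

# Doubling time implied by lam: ln(2) / lam years ≈ 14.1 months,
# matching the quoted "about 14 months". Note lam is a fit to the
# full time series, so it need not equal the endpoint-implied rate
# ln(growth_factor) / 9 years.
doubling_months = math.log(2) / lam * 12

# ECS decline, 16,000 tokens (2004) to 1,800 tokens (2026).
# Assuming exponential decay (again, this sketch's assumption),
# the implied rate is ln(1800/16000) / 22 ≈ -0.099 per year.
ecs_rate = math.log(1_800 / 16_000) / (2026 - 2004)

print(round(growth_factor), round(doubling_months, 1), round(ecs_rate, 3))
```

The contrast between the two rates (+0.59/year for windows vs. roughly -0.10/year for ECS) is the divergence the title refers to.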


Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others