1 Million Token Context Windows Are a Trap. Here's Why.
📄 Summary
Claude Opus 4.6 and Gemini 2.5 Pro support context windows of up to 1 million tokens, while GPT-5 offers 256K. Marketing implies that a bigger context window means better results, but large context windows are widely misunderstood. Research shows that models handle information at the beginning and end of the context best, while information in the middle tends to be overlooked, a phenomenon known as the "lost in the middle" effect. As a result, filling a large context window does not necessarily produce better outcomes; it often means higher costs and worse performance.
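The "lost in the middle" claim is usually measured with a position sweep: hide one known fact (a "needle") at different depths of a long filler context and check whether the model can retrieve it. Below is a minimal sketch of that measurement, assuming the OpenAI Python SDK; the model name, filler text, and needle are placeholders, not part of the original article.

```python
# Minimal "needle in a haystack" position sweep, assuming an OpenAI-compatible
# chat API. Model name, filler, and needle are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

NEEDLE = "The secret launch code is 7421."
QUESTION = "What is the secret launch code? Answer with the number only."
FILLER_SENTENCE = "The weather report for an unremarkable day follows. "

def build_context(total_sentences: int, needle_depth: float) -> str:
    """Bury the needle at a relative depth (0.0 = start, 1.0 = end)."""
    sentences = [FILLER_SENTENCE] * total_sentences
    sentences.insert(int(needle_depth * total_sentences), NEEDLE)
    return "".join(sentences)

def probe(needle_depth: float) -> bool:
    """Return True if the model retrieves the needle at this depth."""
    context = build_context(total_sentences=2000, needle_depth=needle_depth)
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder model name
        messages=[{"role": "user", "content": f"{context}\n\n{QUESTION}"}],
    )
    return "7421" in (response.choices[0].message.content or "")

if __name__ == "__main__":
    # Retrieval accuracy typically stays high near depth 0.0 and 1.0 and dips
    # in the middle -- the "lost in the middle" pattern described above.
    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"depth={depth:.2f} retrieved={probe(depth)}")
```

Running the sweep across several context lengths also makes the cost point concrete: every probe pays for the full prompt, so the same question asked against a 1M-token context costs orders of magnitude more than against a focused 10K-token one.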
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, etc.