📄 English Summary
Why LLMs Alone Are Not Agents
Large language models (LLMs) are powerful, but referring to them as 'agents' on their own is a category mistake. This confusion frequently arises in real projects, especially when users expect a single prompt to function like a system capable of reasoning, acting, and adapting. Those who have built anything beyond a demo have likely encountered this barrier. At their core, LLMs perform one task: predicting the next token given a sequence of tokens. Other behaviors such as reasoning, planning, and explanation emerge from this process. Understanding these constraints is crucial for building agentic systems.
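To make the "one task" claim concrete, here is a minimal sketch of autoregressive next-token prediction. A toy bigram logit table stands in for an LLM's learned distribution (the vocabulary, scores, and `generate` helper are illustrative assumptions, not any real model's API); the loop simply turns logits into probabilities and appends the most likely token, which is all a bare model does:

```python
import math

# Toy bigram logits: score[prev][next]. These numbers are made up;
# a real LLM produces logits over ~100k tokens via a neural network.
BIGRAM_LOGITS = {
    "<s>": {"the": 2.0, "model": 0.1},
    "the": {"model": 2.5, "next": 1.0},
    "model": {"predicts": 3.0},
    "predicts": {"next": 2.0, "the": 1.5},
    "next": {"token": 3.0},
    "token": {"</s>": 2.0},
}

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def generate(max_tokens=10):
    """Greedy decoding: repeatedly emit the most probable next token."""
    seq = ["<s>"]
    for _ in range(max_tokens):
        logits = BIGRAM_LOGITS.get(seq[-1], {})
        if not logits:
            break
        probs = softmax(logits)
        nxt = max(probs, key=probs.get)  # argmax over the distribution
        seq.append(nxt)
        if nxt == "</s>":
            break
    return seq

print(" ".join(generate()))
```

Everything beyond this loop in a production system, such as tool calls, memory, and control flow, is scaffolding built around the model, which is the point of the distinction above.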
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others