Per-User Docker Containers: How I Give Each User Their Own AI Agent
📄 English Summary
When building Adola, an AI companion on Telegram, the initial architecture was conventional: a single server, one model, one database to track users, and careful prompt engineering to keep conversations separate. That setup was abandoned after two weeks. Serving many users from a shared AI instance caused several problems: context contamination, where the model occasionally leaked information between users despite user-ID prefixes; memory management, since vector databases work well for retrieval but not for the curated, evolving memory an AI companion needs; and other limitations. To address these issues, the author moved to a separate Docker container per user, gaining stronger isolation and a more personalized experience.
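The per-user isolation described above could be sketched as a small launcher that starts one container per user, each with its own named volume for conversational memory. This is a minimal illustration, not the author's actual setup: the image name `adola-agent:latest`, the environment variable, the resource limits, and the volume layout are all assumptions.

```python
# Hypothetical sketch of a per-user container launcher.
# Image name, env var, limits, and volume layout are illustrative assumptions.
import shlex

IMAGE = "adola-agent:latest"  # hypothetical agent image

def docker_run_args(user_id: str) -> list[str]:
    """Build the argv for launching one isolated agent container per user."""
    name = f"adola-user-{user_id}"
    return [
        "docker", "run", "-d",
        "--name", name,                       # one container per user
        "--memory", "512m",                   # cap per-user resource usage
        "--restart", "unless-stopped",
        # A private named volume per user, so one agent's memory store
        # never shares storage with another user's agent.
        "-v", f"{name}-memory:/data/memory",
        "-e", f"ADOLA_USER_ID={user_id}",
        IMAGE,
    ]

if __name__ == "__main__":
    # Print the command instead of executing it, so the sketch runs anywhere.
    print(shlex.join(docker_run_args("12345")))
```

Because every user's agent reads and writes only its own volume, context contamination between users becomes impossible at the storage level rather than something prompt engineering must prevent.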
Powered by Cloudflare Workers + Payload CMS + Claude 3.5
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, etc.