Why I built a privacy-first LLM proxy

Source: Why I built a privacy-first LLM proxy

Published: March 24, 2026


📄 English Summary

Why I built a privacy-first LLM proxy

Many of the evaluated LLM gateways share a common issue: they log user prompts. When team requests were routed through a proxy, complete request and response bodies were found stored in a database, sometimes on someone else's infrastructure. This may be acceptable for many use cases, but when handling customer data, internal documents, or any sensitive information, storing everything by default is not a feature but a liability. A simple solution was needed: route LLM requests through a single endpoint, manage API keys so developers never share raw provider keys, track usage and cost, and never touch the actual prompt content.
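The requirements above can be sketched as a proxy handler that records only metadata. This is a minimal illustration, not the author's actual implementation: the names (`forwardToProvider`, `UsageRecord`), the per-token rate, and the token estimate are all assumptions for demonstration; a real proxy on Cloudflare Workers would `fetch()` the upstream provider.

```typescript
// Illustrative sketch: the usage record deliberately has no field
// for prompt or response text, so content can never be stored.
interface UsageRecord {
  keyId: string;          // the proxy's internal key, not the raw provider key
  model: string;
  promptTokens: number;
  completionTokens: number;
  costUsd: number;
}

const usageLog: UsageRecord[] = [];

// Stand-in for the upstream LLM call; a real proxy would forward the
// request body to the provider and return its response.
async function forwardToProvider(
  model: string,
  body: string
): Promise<{ text: string; promptTokens: number; completionTokens: number }> {
  return {
    text: "ok",
    promptTokens: Math.ceil(body.length / 4), // rough token estimate (assumption)
    completionTokens: 1,
  };
}

async function handleRequest(
  keyId: string,
  model: string,
  body: string
): Promise<string> {
  const res = await forwardToProvider(model, body);
  // Record only metadata: token counts and an estimated cost.
  // The request and response bodies are never written anywhere.
  usageLog.push({
    keyId,
    model,
    promptTokens: res.promptTokens,
    completionTokens: res.completionTokens,
    costUsd: (res.promptTokens + res.completionTokens) * 0.000003, // illustrative rate
  });
  return res.text;
}
```

The key design choice is structural: because the log schema has no content field, privacy does not depend on a configuration flag that could be left on by accident.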

Powered by Cloudflare Workers + Payload CMS + Claude 3.5

Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, etc.