Secrets Management for LLM Tools: Don't Let Your OpenAI Keys End Up on GitHub 🚨

📄 Summary

Building with large language models (LLMs) without treating secrets as first-class infrastructure invites serious risk. Every week, OpenAI keys get pushed to GitHub, API keys end up in CloudWatch logs, and secrets hardcoded into Streamlit demos find their way into production. LLM systems multiply secrets quickly, and without early design the situation degenerates fast. This article provides a production-ready blueprint for securing LLM systems properly. A single LLM integration can spawn dozens of credentials, from the LLM API key itself to multiple embedding endpoints.
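The baseline fix for the hardcoding failure mode described above is to load every credential from the environment (or a secret manager that injects it there) and fail fast when one is missing, so no key ever appears in the repository. A minimal sketch in Python; the variable names `OPENAI_API_KEY` and `EMBEDDING_API_KEY` are illustrative assumptions, not prescribed by the article:

```python
import os


def load_secret(name: str) -> str:
    """Read a credential from the environment, failing fast if it is absent.

    Because the value lives outside the source tree, it cannot be committed
    and pushed to GitHub by accident.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; export it or inject it via your secret manager"
        )
    return value


# Hypothetical credential names for illustration. In production these would
# typically be injected by a manager such as AWS Secrets Manager or Vault
# rather than exported by hand:
#   openai_key = load_secret("OPENAI_API_KEY")
#   embedding_key = load_secret("EMBEDDING_API_KEY")
```

The fail-fast check matters as much as the lookup: a missing key should stop the process at startup, not surface later as an opaque 401 deep inside a request handler.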


Powered by Cloudflare Workers + Payload CMS + Claude 3.5

Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, etc.