OpenAI Codex Had a Command Injection Bug That Could Steal Your GitHub Tokens
📄 Summary
BeyondTrust's Phantom Labs recently published a report detailing a command injection vulnerability in OpenAI's Codex. The vulnerability has since been patched, but the attack pattern is worth studying because it is exactly the kind of risk many developers fail to anticipate. Codex runs tasks in managed containers that clone the user's GitHub repository and authenticate with a short-lived OAuth token. The root cause: branch names were not sanitized before being passed to shell commands during environment setup, so an attacker could craft a malicious branch name that injected arbitrary shell commands. Those commands would then execute inside the container with access to the user's GitHub token. The attack applied across multiple Codex interfaces, including the web interface, CLI, SDK, and IDE integrations.
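The bug class is easy to reproduce. Below is a minimal hypothetical sketch (not OpenAI's actual setup code) of how a branch name interpolated into a shell string becomes live syntax, and how passing it as a discrete argv element avoids the problem:

```python
import subprocess

def run_setup_vulnerable(branch: str) -> str:
    # VULNERABLE: the branch name is spliced into a string that a shell
    # parses, so metacharacters like ; | $( ) become executable syntax.
    out = subprocess.run(f"echo cloning {branch}", shell=True,
                         capture_output=True, text=True)
    return out.stdout

def run_setup_safe(branch: str) -> str:
    # Safer: the branch name travels as a single argv element, so no
    # shell ever parses it; the whole string stays a literal argument.
    out = subprocess.run(["echo", "cloning", branch],
                         capture_output=True, text=True)
    return out.stdout

# A "branch name" that smuggles in a second command:
malicious = "main; echo INJECTED"
print(run_setup_vulnerable(malicious))  # the shell runs both commands
print(run_setup_safe(malicious))        # prints the name verbatim, nothing runs
```

In a real setup script the `echo` would be a `git clone --branch …` invocation, and the injected payload could read the cached OAuth token instead of printing a marker. Validating branch names against git's own ref rules (e.g. via `git check-ref-format`) or quoting with `shlex.quote` are additional defenses when a shell string is unavoidable.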
Powered by Cloudflare Workers + Payload CMS + Claude 3.5
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others