我担心共享 .cursorrules 的安全性,因此我构建了一个静态分析器来审计它们。
📄 中文摘要
在使用 Cursor 的过程中,许多用户会从 GitHub 和各种库中获取 .cursorrules 和 AI 脚本以提高工作效率。然而,这种做法存在安全隐患,因为用户实际上是在运行不受信任的第三方指令,这些指令可以完全访问源代码、终端和 .env 文件。为了解决这个问题,作者开发了一个名为 AgentFend 的工具,利用名为 Onyx 的静态分析引擎,在用户按下“Enter”之前扫描提示和脚本。该工具目前主要检测数据外泄和提示注入等安全风险,确保用户的代码和密钥不会被发送到外部 URL,保护用户的安全。
📄 English Summary
I was worried about the lack of security in shared .cursorrules, so I built a static analyzer to audit them.
Many users have been utilizing Cursor by grabbing .cursorrules and AI scripts from GitHub and various libraries to enhance productivity. However, this practice raises security concerns as it involves executing untrusted third-party instructions that have full access to source code, terminal, and .env files. To address this issue, the author developed a tool called AgentFend, which employs a static analysis engine named Onyx to scan prompts and scripts before the user hits 'Enter'. The tool currently focuses on detecting data exfiltration and prompt injections, ensuring that users' code and keys are not sent to external URLs, thereby safeguarding user security.
Powered by Cloudflare Workers + Payload CMS + Claude 3.5
数据源: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace 等