I ran a privacy proxy on my AI traffic. Here's what it found.

📄 Chinese Summary

While building Velar, a local proxy tool, the original idea was to solve privacy problems for other people. Yet after running the tool during normal browser interactions with ChatGPT, it intercepted 40 sensitive items, including 30 API keys. That result prompted hard questions about how many API keys were leaking, especially given that nothing unusual was being done, and underscored how important privacy protection really is.

📄 English Summary

I ran a privacy proxy on my AI traffic. Here's what it found.

The author developed Velar, a local proxy designed to mask sensitive data before it reaches AI providers, initially thinking it would solve problems for others. However, after running it on their own machine during regular interactions with ChatGPT, they discovered it had intercepted 40 sensitive items, including 30 API keys. That result highlighted how much sensitive data leaks even during entirely normal usage, and underscored the need for privacy protection at the traffic layer.
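The core idea of such a proxy can be sketched simply: scan outbound request bodies for strings that look like credentials and replace them before forwarding. The patterns, mask format, and function names below are illustrative assumptions, not Velar's actual implementation.

```python
import re

# Hypothetical secret patterns a masking proxy might check for.
# These regexes are assumptions for illustration, not Velar's real rule set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
]

def mask_secrets(text: str) -> tuple[str, int]:
    """Replace anything that looks like an API key with a placeholder.

    Returns the masked text and the number of items intercepted.
    """
    count = 0
    for pattern in SECRET_PATTERNS:
        text, n = pattern.subn("[REDACTED]", text)
        count += n
    return text, count

masked, n = mask_secrets("here is my key: sk-" + "a" * 24)
print(masked, n)  # → here is my key: [REDACTED] 1
```

A real proxy would apply this kind of scan to every request body on its way to the AI provider, and could also use entropy checks or provider-specific validators to reduce false negatives; the regex list here is only the simplest possible version of that idea.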

Powered by Cloudflare Workers + Payload CMS + Claude 3.5

Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, etc.