📄 Chinese Summary (translated)
Every prompt entered into ChatGPT is stored on OpenAI's servers, and by default this data may be used to train future models. According to a Stanford study published in October 2025, the privacy policies of six major AI providers show that user inputs are routinely fed back into model training unless users explicitly opt out, which most do not. ChatGPT processes over one billion queries daily from 700 million active users, and 34.8% of those inputs contain sensitive data, up sharply from 11% in 2023. The risks for enterprises are even more severe: a LayerX Security report notes that employees paste client names, financial data, internal strategies, and proprietary code into a system that remembers everything.
📄 English Summary
15 Private ChatGPT Alternatives That Don't Train on Your Data
Every prompt entered into ChatGPT is stored on OpenAI's servers and can be used to train future models by default. A Stanford study published in October 2025 examined the privacy policies of six major AI providers, revealing that user inputs are routinely fed back into model training unless users explicitly opt out, which most do not. The scale is staggering, with ChatGPT processing over one billion queries daily from 700 million weekly active users. Research from Q4 2025 indicates that 34.8% of those inputs contain sensitive data, up from just 11% in 2023. The risks for enterprises are even sharper, as a LayerX Security report highlights that employees are pasting client names, financial figures, internal strategies, and proprietary code into a system that remembers everything.
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others