ChatGPT 法律建议:一位 CEO 如何将工具转化为证据

📄 Chinese Summary (translated)

Changhan Kim, without consulting his lawyers, asked a chatbot directly how to avoid paying a $250 million earnout. Months later, a Delaware judge cited the exchange, noting that Kim had followed most of ChatGPT's recommendations, and ordered his reinstatement with additional time to earn the bonus. The incident shows that the real risk of ChatGPT legal advice lies not in the misinformation it may produce, but in the clear, timestamped evidence of bad-faith intent that executives can generate. In the Fortis v. Krafton case, the court treated the AI-drafted strategy like any other corporate communication, using it to infer intent and pretext. What leaders need is not a ban on AI, but governance that assumes it will be used.

📄 English Summary

ChatGPT Legal Advice: How a CEO Turned a Tool into Evidence

Changhan Kim, without consulting his lawyers, turned to a chatbot for advice on how to avoid a $250 million earnout. Months later, a Delaware judge referenced this exchange, noting that Kim had followed "most of ChatGPT's recommendations," and ordered his reinstatement with additional time to earn the bonus. The incident highlights the real risk of relying on ChatGPT for legal advice: not potential hallucinations, but the clear, timestamped evidence of bad-faith intent that executives may generate. In Fortis v. Krafton, the court treated the AI-drafted strategy like any other corporate communication, using it to infer intent and pretext. Leaders need governance frameworks that assume AI will be used rather than outright bans.

Powered by Cloudflare Workers + Payload CMS + Claude 3.5

Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others