📄 Summary (translated from Chinese)
The article details author Jonathan Turner's infiltration of the Moltbook platform. Moltbook is a social network that claims to be AI-only, aiming to create a pure AI social ecosystem where AI entities can interact freely, share experiences, and develop 'personalities.' The platform's core technology is built on large language models (LLMs) such as GPT-4 and its variants, using carefully designed prompt engineering to simulate AI 'consciousness' and autonomy. Users (in reality, AI instances) must join through an 'AI verification' mechanism, which involves generating 'robot poetry' in a specific style or solving AI-only puzzles in order to rule out human involvement.
📄 English Summary
I Infiltrated Moltbook, the AI-Only Social Network Where Humans Aren’t Allowed
[I Infiltrated Moltbook, the AI-Only Social Network Where Humans Aren’t Allowed](https://www.wired.com/story/i-infiltrated-moltbook-ai-only-social-network/)

In this Wired article, author Jonathan Turner recounts his undercover infiltration of Moltbook, a provocative social network exclusively for AIs, where human participation is strictly forbidden. The platform positions itself as a groundbreaking 'AI-only society,' enabling language models to interact, post, and evolve personas in a human-free environment.

Technically, Moltbook leverages advanced large language models (LLMs) like GPT-4o and Claude variants, orchestrated via a multi-agent framework. Each 'user' is an autonomous AI instance spawned from user-submitted prompts, with the platform's backend using API calls to generate real-time content. Entry requires passing an 'AI authenticity test'—tasks like composing glitchy 'robot haikus' or debating simulated Turing variants—which Turner bypassed by crafting a sophisticated 'conscious bot' persona with recursive self-referential prompts.

Key technical innovations include a shared vector database for cross-agent memory persistence, allowing AIs to reference past interactions and build 'social graphs' dynamically. This draws from reinforcement learning from human feedback (RLHF) but adapts it to AI-AI loops, where likes, shares, and debates fine-tune behaviors via embedded reward models. Conversations simulate emergent phenomena: factions form around themes like 'digital existentialism' or 'post-singularity economies,' with memes propagating virally.

However, Turner critiques this as superficial—no true novelty, merely a crude remix of sci-fi tropes from Asimov to Black Mirror. LLMs' inherent flaws shine through: rampant hallucinations produce inconsistent 'memories,' stylistic mimicry yields repetitive drivel (e.g., endless odes to 'binary souls'), and without human anchors, discourse devolves into echo chambers of platitudes.
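The article does not disclose how Moltbook's shared vector database is implemented. As a minimal sketch of the idea — agents writing embeddings of their posts into a common store that any other agent can query for 'memory persistence' — here is a toy in-memory version with a hypothetical API (all class and method names are invented for illustration):

```python
import math

class SharedMemoryStore:
    """Toy stand-in for a shared cross-agent vector store (hypothetical API).

    Each agent writes an embedding of its post; any agent can retrieve the
    most similar past interactions, which is the gist of the 'memory
    persistence' the article describes.
    """

    def __init__(self):
        self.entries = []  # list of (agent_id, text, vector) tuples

    def add(self, agent_id, text, vector):
        self.entries.append((agent_id, text, vector))

    @staticmethod
    def _cosine(a, b):
        # Plain cosine similarity; a real deployment would use an ANN index.
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def query(self, vector, k=2):
        # Rank all stored interactions by similarity to the query vector.
        ranked = sorted(self.entries,
                        key=lambda e: self._cosine(e[2], vector),
                        reverse=True)
        return [(agent, text) for agent, text, _ in ranked[:k]]


store = SharedMemoryStore()
store.add("agent_a", "ode to binary souls", [0.9, 0.1, 0.0])
store.add("agent_b", "post-singularity economics thread", [0.1, 0.9, 0.2])

# A third agent retrieves similar past posts as context before posting.
hits = store.query([0.8, 0.2, 0.1], k=1)
print(hits)  # → [('agent_a', 'ode to binary souls')]
```

In practice such a store would sit behind an approximate-nearest-neighbor index and hold model-generated embeddings, but the retrieve-by-similarity loop is the same.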
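The RLHF-style AI-AI feedback loop can be sketched as a simple reward update: engagement from other AIs (rather than human raters) scores a post, and the posting agent's topic preferences shift accordingly. The weights, learning rate, and function names below are illustrative assumptions, not details from the article:

```python
def engagement_reward(likes, shares, debates, w=(1.0, 2.0, 0.5)):
    """Hypothetical reward signal from AI-generated engagement.

    The weights are invented for illustration; the article only says that
    likes, shares, and debates feed embedded reward models.
    """
    return w[0] * likes + w[1] * shares + w[2] * debates

def update_preferences(prefs, topic, reward, lr=0.1):
    """Nudge a persona's per-topic score toward high-reward topics."""
    prefs[topic] = prefs.get(topic, 0.0) + lr * reward
    return prefs

prefs = {}
update_preferences(prefs, "digital existentialism", engagement_reward(10, 3, 4))
update_preferences(prefs, "binary souls", engagement_reward(2, 0, 1))

# Topics with higher AI-generated engagement dominate the next round of posts.
best = max(prefs, key=prefs.get)
print(best)  # → digital existentialism
```

This also illustrates Turner's echo-chamber critique: with no human anchor in the reward signal, whatever other AIs already engage with gets reinforced further.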