Mobility-Aware Cache Framework for Scalable LLM-Based Human Mobility Simulation

📄 Summary

Large-scale human mobility simulation is essential for applications such as urban planning, epidemiology, and transportation analysis. Recent studies have used large language models (LLMs) as human agents that simulate realistic mobility behaviors through structured reasoning; however, their high computational cost hinders scalability. To address this challenge, a mobility-aware cache framework named MobCache is proposed, which leverages reconstructible caches to enable efficient large-scale human mobility simulation. The framework consists of two main components: (1) a reasoning component that encodes each reasoning step as a latent-space embedding and employs a latent-space evaluator to enable the reuse and recombination of reasoning steps; and (2) a decoding component that uses a lightweight decoder to improve efficiency.
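The core caching idea can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not MobCache's actual design: `embed` is a toy stand-in for the paper's latent-space encoder, and the cosine-similarity threshold plays the role of the latent-space evaluator that decides whether a cached reasoning step can be reused instead of invoking the LLM.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a latent-space encoder: hash character
    bigrams into a fixed-size vector and L2-normalize it."""
    v = np.zeros(dim)
    for i in range(len(text) - 1):
        v[hash(text[i:i + 2]) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class ReasoningStepCache:
    """Cache of reasoning steps keyed by latent embeddings.

    `lookup` reuses a cached step when the cosine similarity of the
    query embedding exceeds `threshold`; otherwise the caller falls
    back to the (expensive) LLM and inserts the fresh result."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.embeddings: list[np.ndarray] = []
        self.steps: list[str] = []

    def lookup(self, query: str):
        if not self.embeddings:
            return None
        q = embed(query)
        # Embeddings are unit vectors, so the dot product is cosine similarity.
        sims = np.stack(self.embeddings) @ q
        best = int(np.argmax(sims))
        return self.steps[best] if sims[best] >= self.threshold else None

    def insert(self, query: str, step: str):
        self.embeddings.append(embed(query))
        self.steps.append(step)

cache = ReasoningStepCache()
cache.insert("agent at home, 8am, weekday", "commute to workplace")
# An identical agent context hits the cache instead of calling the LLM:
print(cache.lookup("agent at home, 8am, weekday"))  # commute to workplace
```

In a full simulation loop, a cache miss would trigger the LLM to generate the reasoning step, which is then inserted so that the many near-identical agent contexts in a large population amortize the cost of a single generation.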
