MERIT: Memory-Enhanced Retrieval for Interpretable Knowledge Tracing

📄 Summary

Knowledge tracing (KT) models a student's evolving knowledge state to predict future performance, making it a foundation of personalized education. Traditional deep learning KT models achieve high accuracy but often lack interpretability. Large language models (LLMs) offer strong reasoning capabilities, yet they struggle with limited context windows and hallucination. Moreover, existing LLM-based KT methods typically require expensive fine-tuning, which limits their scalability and their adaptability to new data. MERIT (Memory-Enhanced Retrieval for Interpretable Knowledge Tracing) is proposed as a training-free framework that combines a frozen LLM's reasoning with a structured pedagogical memory: instead of updating model parameters, MERIT transforms raw student interaction logs into that memory, yielding interpretable knowledge tracing.
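The training-free loop described above can be illustrated with a minimal sketch: interaction logs are condensed into a pedagogical memory, relevant entries are retrieved for the next question, and a prompt is assembled for a frozen LLM (the LLM call itself is omitted). All class names, the summary format, and the retrieval scheme here are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    concept: str
    summary: str = ""     # natural-language summary of past attempts
    correct: int = 0
    attempts: int = 0

@dataclass
class PedagogicalMemory:
    entries: dict = field(default_factory=dict)

    def update(self, concept: str, correct: bool) -> None:
        # Transform a raw interaction into a structured memory entry
        # instead of updating any model parameters.
        e = self.entries.setdefault(concept, MemoryEntry(concept))
        e.attempts += 1
        e.correct += int(correct)
        e.summary = f"{e.correct}/{e.attempts} correct on '{e.concept}'"

    def retrieve(self, concepts, k=2):
        # Fetch the entries relevant to the next question's concepts,
        # keeping the prompt small for a limited LLM context window.
        hits = [self.entries[c] for c in concepts if c in self.entries]
        return hits[:k]

def build_prompt(memory: PedagogicalMemory, next_concepts) -> str:
    """Condense retrieved memory into a prompt for a frozen LLM."""
    lines = [e.summary for e in memory.retrieve(next_concepts)]
    return ("Student history:\n" + "\n".join(lines) +
            "\nPredict performance on: " + ", ".join(next_concepts))

mem = PedagogicalMemory()
mem.update("fractions", True)
mem.update("fractions", False)
mem.update("decimals", True)
print(build_prompt(mem, ["fractions"]))
```

Because the memory is plain text, the retrieved summaries double as an interpretable explanation of the prediction, which is the property the framework targets.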

