📄 English Summary
Verbalizing LLM's Higher-order Uncertainty via Imprecise Probabilities
The increasing demand for extracting uncertainty from large language models (LLMs) has highlighted that traditional elicitation techniques based on classical probabilistic frameworks often fail to adequately capture LLM behavior. This mismatch produces systematic failure modes, particularly in settings involving ambiguous question-answering, in-context learning, and self-reflection. To address these challenges, novel prompt-based uncertainty elicitation techniques are proposed, grounded in imprecise probabilities, a principled framework for representing and eliciting higher-order uncertainty. In this framework, first-order uncertainty captures uncertainty over the possible responses to a prompt, while second-order uncertainty reflects the degree of confidence in that first-order uncertainty.
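One common way to make this two-level structure concrete is with probability intervals: a point probability per answer expresses first-order uncertainty, while the width of an interval around it expresses second-order uncertainty (a wider interval means less confidence in the first-order estimate). The sketch below is illustrative only, not the paper's method; the answer names and numbers are made up, but the coherence condition it checks (lower bounds summing to at most 1 and upper bounds to at least 1, known as "avoiding sure loss" for probability intervals) is standard in the imprecise-probability literature.

```python
from dataclasses import dataclass

@dataclass
class IntervalProb:
    """A probability interval [lower, upper] for one candidate answer."""
    lower: float
    upper: float

def is_coherent(intervals: dict[str, IntervalProb]) -> bool:
    """Check that elicited probability intervals avoid sure loss.

    Requires 0 <= lower <= upper <= 1 for each answer, and that some
    precise distribution fits inside all intervals, which for intervals
    means: sum of lowers <= 1 <= sum of uppers.
    """
    if any(iv.lower < 0 or iv.upper > 1 or iv.lower > iv.upper
           for iv in intervals.values()):
        return False
    total_lower = sum(iv.lower for iv in intervals.values())
    total_upper = sum(iv.upper for iv in intervals.values())
    return total_lower <= 1.0 <= total_upper

# Hypothetical elicited intervals for an ambiguous two-answer question.
# The wide interval on "Paris" encodes low second-order confidence.
elicited = {
    "Paris": IntervalProb(0.6, 0.9),
    "Lyon":  IntervalProb(0.1, 0.4),
}
print(is_coherent(elicited))  # True: 0.7 <= 1 <= 1.3
```

A precise (classical) elicitation is the special case where every interval collapses to a point; asking the model for intervals instead gives it a way to verbalize "I am unsure about my own probabilities."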