Concerning Uncertainty -- A Systematic Survey of Uncertainty-Aware Explainable AI

📄 Abstract

Uncertainty-aware explainable AI (UAXAI) studies how uncertainty can be integrated into explanatory pipelines and how such methods should be evaluated. Three common approaches to uncertainty quantification appear in the literature: Bayesian methods, Monte Carlo methods, and conformal methods. Several strategies for integrating uncertainty into explanations have also been proposed, including assessing trustworthiness, constraining models or explanations, and explicitly communicating uncertainty. Current evaluation practice remains fragmented and model-centered, with limited attention to users and inconsistent reporting of reliability properties such as calibration, coverage, and explanation stability. Recent work trends toward improvements in calibration and uncertainty propagation.

📄 English Summary

Concerning Uncertainty -- A Systematic Survey of Uncertainty-Aware XAI

This survey focuses on uncertainty-aware explainable artificial intelligence (UAXAI), examining how uncertainty is integrated into explanatory pipelines and how these methods are evaluated. Three main approaches to uncertainty quantification are identified in the literature: Bayesian methods, Monte Carlo methods, and conformal methods. Distinct strategies for incorporating uncertainty into explanations include assessing trustworthiness, constraining models or explanations, and explicitly communicating uncertainty. Evaluation practices are found to be fragmented and largely model-centered, with limited attention to user perspectives and inconsistent reporting of reliability properties such as calibration, coverage, and explanation stability. Recent work shows a trend toward improving calibration and uncertainty communication.
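To make one of the three quantification approaches named above concrete, here is a minimal sketch of split conformal prediction for a regression task. The toy model (a calibration-set mean predictor) and the synthetic data are illustrative assumptions, not taken from the survey; the point is the mechanism that yields the coverage guarantee the summary mentions.

```python
# Minimal sketch of split conformal prediction (one of the three UQ
# approaches named in the survey). Model and data are toy assumptions.
import random
import math

random.seed(0)

def fit_mean_model(xs, ys):
    """Toy 'model': predicts the training-set mean regardless of x."""
    mean_y = sum(ys) / len(ys)
    return lambda x: mean_y

# Synthetic data: y = x + Gaussian noise, split into train/calibration/test.
data = [(x, x + random.gauss(0, 1))
        for x in (random.uniform(0, 10) for _ in range(400))]
train, calib, test = data[:200], data[200:300], data[300:]

model = fit_mean_model(*zip(*train))

# Nonconformity scores on the held-out calibration set: |y - model(x)|.
scores = sorted(abs(y - model(x)) for x, y in calib)

# Conformal quantile for target marginal coverage 1 - alpha.
alpha = 0.1
n = len(scores)
k = math.ceil((n + 1) * (1 - alpha)) - 1  # 0-based quantile index
q = scores[min(k, n - 1)]

# Prediction interval [model(x) - q, model(x) + q]: under exchangeability
# its marginal coverage is at least 1 - alpha, regardless of model quality.
covered = sum(1 for x, y in test if model(x) - q <= y <= model(x) + q)
coverage = covered / len(test)
```

Note that the guarantee is distribution-free and holds even for this deliberately bad model; a better model simply yields narrower intervals, which is why conformal methods pair naturally with the explanation-reliability concerns the survey raises.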
