📄 Chinese Abstract (translated)
With the widespread adoption of machine learning in sensitive domains such as healthcare, finance, and public policy, the transparency of automated decisions has become a concern. Explainable AI (XAI) aims to address this by clarifying how models generate predictions, but most methods require technical expertise, limiting their value to novices. This gap is especially pronounced in no-code machine learning platforms, which aim to democratize AI yet rarely include explainability features. The study presents a human-centered XAI module integrated into DashAI, an open-source no-code machine learning platform. The module incorporates three complementary techniques, Partial Dependence Plots (PDP), Permutation Feature Importance (PFI), and KernelSHAP, into DashAI's tabular classification workflow.
📄 English Summary
Explaining AI Without Code: A User Study on Explainable AI
The increasing adoption of Machine Learning (ML) in sensitive areas such as healthcare, finance, and public policy has raised significant concerns regarding the transparency of automated decisions. Explainable AI (XAI) aims to address these concerns by clarifying how models generate predictions. However, most existing methods require technical expertise, limiting their accessibility for novices. This gap is particularly pronounced in no-code ML platforms, which aim to democratize AI but often lack explainability features. A human-centered XAI module has been developed and integrated into DashAI, an open-source no-code ML platform. The module integrates three complementary techniques into DashAI's tabular classification workflow: Partial Dependence Plots (PDP), Permutation Feature Importance (PFI), and KernelSHAP.