Explainable AI in Production: A Neuro-Symbolic Model for Real-Time Fraud Detection

📄 Abstract

This study proposes a neuro-symbolic model for real-time fraud detection that produces deterministic, human-readable explanations in 0.9 milliseconds as a by-product of the forward pass. That is roughly a 33× speedup over the conventional SHAP approach, which takes about 30 milliseconds to explain a fraud prediction, produces stochastic explanations only after the decision is made, and requires a background dataset to be maintained at inference time. On the Kaggle Credit Card Fraud dataset, the model matches the fraud recall of the SHAP-explained baseline, demonstrating its effectiveness and efficiency in production settings.

📄 English Summary

Explainable AI in Production: A Neuro-Symbolic Model for Real-Time Fraud Detection

This study presents a neuro-symbolic model for real-time fraud detection that generates deterministic, human-readable explanations in 0.9 milliseconds as a by-product of the forward pass. This is a 33× speedup over the conventional SHAP method, which needs roughly 30 milliseconds to explain a fraud prediction, produces stochastic explanations only after the decision is made, and depends on a background dataset maintained at inference time. The model achieves fraud recall identical to the SHAP-explained baseline on the Kaggle Credit Card Fraud dataset, demonstrating its effectiveness and efficiency in production environments.
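The key contrast is that a post-hoc explainer such as SHAP perturbs the input against a background dataset after the model has already decided, whereas a neuro-symbolic model can emit its explanation during the same forward pass that produces the score. The summary does not describe the paper's actual architecture, so the following is only a minimal illustrative sketch of the second idea: symbolic rules (with hypothetical names, features, thresholds, and weights invented for this example) fire on a transaction, their weighted sum yields the fraud score, and the list of fired rules is the explanation — deterministic, and with no background data needed at inference time.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str        # human-readable description, reused verbatim as the explanation
    feature: str     # transaction field the rule inspects
    threshold: float # rule fires when the feature exceeds this value
    weight: float    # contribution to the fraud logit (learned, in a real system)

# Hypothetical rules and weights for illustration only.
RULES = [
    Rule("amount far above card average", "amount_zscore", 3.0, 1.8),
    Rule("merchant never seen for this card", "merchant_novelty", 0.9, 1.2),
    Rule("transaction at unusual hour", "hour_anomaly", 0.8, 0.7),
]
BIAS = -2.5

def predict_with_explanation(txn: dict):
    """One forward pass: the fired rules ARE the explanation."""
    fired = [r for r in RULES if txn.get(r.feature, 0.0) > r.threshold]
    logit = BIAS + sum(r.weight for r in fired)
    score = 1.0 / (1.0 + math.exp(-logit))       # sigmoid over the rule logit
    explanation = [r.name for r in fired]        # deterministic, no background dataset
    return score, explanation

score, why = predict_with_explanation(
    {"amount_zscore": 4.2, "merchant_novelty": 0.95, "hour_anomaly": 0.1})
```

Because the explanation is just the set of rules that fired, running the same transaction twice always yields the same explanation — unlike sampling-based SHAP values — and the added latency is a handful of comparisons rather than a separate perturbation loop.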

Powered by Cloudflare Workers + Payload CMS + Claude 3.5

Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others