U-CAN: Utility-Aware Contrastive Attenuation for Efficient Unlearning in Generative Recommendation
📄 Abstract
Generative Recommendation (GenRec) typically leverages Large Language Models (LLMs) to recast personalization as an instruction-driven sequence generation task. However, fine-tuning on user logs inadvertently encodes sensitive attributes into model parameters, raising serious privacy concerns. Existing Machine Unlearning (MU) techniques face a Polysemy Dilemma: because neurons superimpose sensitive data onto general reasoning patterns, traditional gradient-based or pruning methods incur catastrophic utility loss. To address this, a precision unlearning framework called Utility-aware Contrastive Attenuation (U-CAN) is proposed. Operating on low-rank adapters, U-CAN quantifies risk by contrasting activations, achieving effective unlearning of sensitive data while minimizing the impact on model utility.
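The mechanism named in the abstract — score risk by contrasting activations on forget versus retain data, then attenuate high-risk directions of a low-rank adapter — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the risk formula, the thresholds `tau` and `alpha`, and the toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LoRA-style adapter: delta_W = B @ A, with rank r bottleneck
d_in, d_out, r = 8, 6, 4
A = rng.normal(size=(r, d_in))   # down-projection
B = rng.normal(size=(d_out, r))  # up-projection

# Hypothetical inputs standing in for user-log embeddings.
# The forget set is biased along rank direction 0 so that one
# adapter direction fires mostly on sensitive (forget) data.
X_forget = rng.normal(size=(32, d_in)) + 3.0 * A[0]
X_retain = rng.normal(size=(32, d_in))

def rank_activations(A, X):
    # Adapter bottleneck activations: one row per sample,
    # one column per rank direction
    return X @ A.T

def contrastive_risk(A, X_forget, X_retain, eps=1e-8):
    # Risk in (0, 1): near 1 means a rank direction activates
    # far more on forget data than on retain data
    mf = np.abs(rank_activations(A, X_forget)).mean(axis=0)
    mr = np.abs(rank_activations(A, X_retain)).mean(axis=0)
    return mf / (mf + mr + eps)

def attenuate(B, risk, tau=0.55, alpha=0.1):
    # Shrink rank directions whose contrastive risk exceeds tau;
    # low-risk (utility-bearing) directions are left untouched
    scale = np.where(risk > tau, alpha, 1.0)
    return B * scale  # broadcasts over B's columns (rank dims)

risk = contrastive_risk(A, X_forget, X_retain)
B_new = attenuate(B, risk)
```

Scoring the rank directions of the adapter (rather than individual neurons of the base model) mirrors the abstract's point that precise, utility-aware unlearning is easier in a low-rank subspace than in polysemantic full-model weights.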