Machine Unlearning and Generative Recommendation: U-CAN to the Rescue
Generative Recommendation (GenRec) systems built on Large Language Models (LLMs) are redefining personalization. However, fine-tuning on user logs can inadvertently encode sensitive attributes into model parameters, raising privacy concerns. Existing Machine Unlearning (MU) techniques struggle to remove this information without degrading the model, owing to the "Polysemy Dilemma": neurons superimpose sensitive data with general reasoning patterns, so traditional gradient-based or pruning methods cause catastrophic utility loss.
U-CAN: A Precision Unlearning Framework
To address this challenge, Utility-aware Contrastive AttenuatioN (U-CAN) has been proposed: a precision unlearning framework that operates on low-rank adapters. U-CAN quantifies risk by contrasting activations, focusing on neurons with asymmetric responses: highly sensitive to the forgetting set but suppressed on the retention set. To safeguard performance, U-CAN introduces a utility-aware calibration mechanism that combines weight magnitudes with retention-set activation norms, assigning higher utility scores to dimensions that contribute strongly to retention performance.
Unlike binary pruning, which often fragments network structure, U-CAN develops adaptive soft attenuation with a differentiable decay function that selectively down-scales high-risk parameters in the LoRA adapters, suppressing sensitive retrieval pathways while preserving the topological connectivity of reasoning circuits. Experiments on two public datasets across seven metrics demonstrate that U-CAN achieves strong privacy forgetting, utility retention, and computational efficiency.
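The soft-attenuation idea can be sketched as below. The exponential decay, the `alpha`/`tau` hyperparameters, and the utility discount are illustrative assumptions; the point is that the scaling is smooth and differentiable, so no neuron is hard-pruned to zero.

```python
import numpy as np

def soft_attenuate(lora_B, risk, utility, alpha=4.0, tau=1.0):
    """Differentiable soft attenuation on a LoRA adapter.

    lora_B : (d_out, r) low-rank factor; one row per output neuron.
    risk   : (d_out,) contrastive risk scores.
    utility: (d_out,) utility-aware calibration scores.

    High-risk rows are smoothly down-scaled, but high utility
    discounts the risk so retention-critical rows are spared.
    """
    eff = risk / (1.0 + utility)                      # utility-discounted risk
    scale = np.exp(-alpha * np.maximum(eff - tau, 0.0))  # smooth decay in (0, 1]
    return lora_B * scale[:, None]                    # per-row down-scaling
```

Rows whose discounted risk stays below the threshold `tau` keep a scale of exactly 1, so low-risk pathways, and hence the adapter's overall connectivity, are untouched; only rows well above the threshold shrink toward (but never to) zero.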