Towards Unsupervised Causal Representation Learning via Latent Additive Noise Model Causal Autoencoders
## Introduction
Unsupervised causal representation learning is a fundamental task in data science, but standard methods relying on statistical independence often fail to capture causal relationships. A central challenge is identifiability: disentangling causal variables from observational data without supervision, auxiliary signals, or strong inductive biases is impossible.
In this work, we propose the Latent Additive Noise Model Causal Autoencoder (LANCA) as a strong inductive bias for unsupervised causal discovery. Theoretically, we prove that while the additive noise model (ANM) constraint does not guarantee unique identifiability in the general mixing case, it resolves component-wise indeterminacy by restricting the admissible transformations from arbitrary diffeomorphisms to the affine class.
Methodologically, we argue that the stochastic encoding inherent to VAEs obscures the structural residuals required for latent causal discovery; LANCA therefore employs a deterministic Wasserstein Auto-Encoder (WAE) coupled with a differentiable ANM Layer.
This architecture turns residual independence from a passive assumption into an explicit optimization objective. Empirically, LANCA outperforms state-of-the-art baselines on synthetic physics benchmarks (Pendulum, Flow) and on the photorealistic CANDLE environment, where it demonstrates superior robustness to spurious correlations arising from complex background scenes.
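To make the residual-independence objective concrete, the following is a minimal NumPy sketch, not the paper's implementation: a toy latent ANM is generated, a simple regression stands in for the learned ANM function, and a correlation-based statistic stands in for the independence penalty (a learned model would typically use a kernel measure such as HSIC).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent ANM: z2 = f(z1) + n2, with noise n2 independent of the cause z1.
z1 = rng.uniform(-2, 2, size=2000)
n2 = 0.3 * rng.standard_normal(2000)
z2 = np.tanh(z1) + n2

# Illustrative "ANM layer": regress z2 on features of z1 and keep the residual.
# A small polynomial basis stands in for a learned nonlinear function.
phi = np.stack([z1, z1**2, z1**3], axis=1)
coef, *_ = np.linalg.lstsq(phi, z2 - z2.mean(), rcond=None)
residual = z2 - z2.mean() - phi @ coef

# Independence proxy: correlation between the cause and the squared residual,
# which should be near zero when the ANM fit is correct.
dep = abs(np.corrcoef(z1, residual**2)[0, 1])
print(f"residual std: {residual.std():.3f}, dependence proxy: {dep:.3f}")
```

In a trained model this dependence measure would be added to the reconstruction and WAE regularization terms, so that minimizing the loss actively pushes the latent residuals toward independence rather than merely assuming it.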