Denoising Diffusion Networks for Normative Modeling in Neuroimaging

A recent study published on arXiv proposes the use of denoising diffusion probabilistic models (DDPMs) for normative modeling in neuroimaging. This approach aims to estimate reference distributions of biological measures, conditional on covariates, enabling the derivation of centiles and clinically interpretable deviation scores.
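In this setting, a deviation score maps an individual's observed measure to its position under the model's conditional reference distribution. As a minimal illustration (not the paper's code), assuming the model can draw samples at fixed covariates, one can compute an empirical centile and probit-transform it into a z-like score:

```python
import numpy as np
from scipy.stats import norm

def deviation_score(reference_samples, observed):
    """Empirical centile of `observed` under sampled reference values,
    plus a z-like deviation score via the probit transform.
    Illustrative helper; names and details are assumptions."""
    # Mid-rank count handles ties; the +0.5 / (n + 1) plotting position
    # keeps the centile strictly inside (0, 1) so the probit is finite.
    rank = np.sum(reference_samples < observed) \
        + 0.5 * np.sum(reference_samples == observed)
    centile = (rank + 0.5) / (len(reference_samples) + 1)
    return centile, norm.ppf(centile)

rng = np.random.default_rng(0)
ref = rng.normal(size=10_000)   # stand-in for model samples at fixed covariates
c, z = deviation_score(ref, 0.0)
```

For a typical observation the centile lands near 0.5 and the deviation score near 0; extreme observations map to centiles near 0 or 1 and large-magnitude scores.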

Architectures and Evaluation

The study explores two denoiser backbones: a multilayer perceptron (MLP) conditioned via feature-wise linear modulation (FiLM), and a tabular transformer with feature self-attention and intersample attention (SAINT). Covariates enter both models through learned embeddings. The models were evaluated on a synthetic benchmark and on UK Biobank FreeSurfer phenotypes, with the output dimension scaled from 2 to 200.
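The FiLM mechanism modulates each hidden layer with a per-unit scale and shift predicted from the conditioning embedding. A minimal numpy forward-pass sketch, with random weights standing in for learned parameters (all names here are illustrative, not the paper's architecture details):

```python
import numpy as np

class FiLMDenoiser:
    """Sketch of a FiLM-conditioned MLP denoiser: hidden activations h
    are transformed as gamma * h + beta, where (gamma, beta) are linear
    functions of a covariate-plus-timestep embedding."""

    def __init__(self, x_dim, emb_dim, hidden=64, seed=0):
        g = np.random.default_rng(seed)
        self.W1 = g.normal(scale=0.1, size=(x_dim, hidden))
        self.W2 = g.normal(scale=0.1, size=(hidden, x_dim))
        # FiLM generator: embedding -> per-unit scale and shift
        self.Wg = g.normal(scale=0.1, size=(emb_dim, hidden))
        self.Wb = g.normal(scale=0.1, size=(emb_dim, hidden))

    def __call__(self, x_t, emb):
        h = np.maximum(x_t @ self.W1, 0.0)   # ReLU hidden layer
        gamma = 1.0 + emb @ self.Wg          # scale, centred on identity
        beta = emb @ self.Wb                 # shift
        h = gamma * h + beta                 # FiLM modulation
        return h @ self.W2                   # noise prediction, same shape as x_t

rng = np.random.default_rng(1)
net = FiLMDenoiser(x_dim=10, emb_dim=4)
eps_hat = net(rng.normal(size=(8, 10)), rng.normal(size=(8, 4)))
```

Centring the scale on 1 means a zero embedding leaves the activations untouched, a common initialisation choice for FiLM layers.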

The evaluation suite includes centile calibration, distributional fidelity (Kolmogorov-Smirnov tests), multivariate dependence diagnostics, and nearest-neighbour memorisation analysis. At low dimensionality, the diffusion models deliver well-calibrated outputs comparable to traditional baselines while jointly modeling a realistic dependence structure. At higher dimensionality, the transformer remains substantially better calibrated than the MLP and better preserves higher-order dependence, supporting scalable joint normative models.
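The calibration idea can be checked with a one-sample Kolmogorov-Smirnov test: if the model is well calibrated, the centiles it assigns to held-out observations should be uniform on [0, 1]. A small sketch under that assumption (the centiles here are simulated, not real model output):

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
# Stand-in for per-subject centiles produced by a calibrated model
centiles = rng.uniform(size=500)

# One-sample KS test against the uniform distribution on [0, 1]:
# a large statistic / small p-value would flag miscalibration.
stat, p = kstest(centiles, "uniform")
```

The same test applied per output dimension also serves as the distributional-fidelity check, comparing model samples against held-out data.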