Long-Term Physics Simulations with Latent Generative Solvers

A new study introduces Latent Generative Solvers (LGS), an approach for the long-term simulation of heterogeneous partial differential equation (PDE) systems. The LGS framework has two main stages:

  1. Mapping to Latent Space: Using a pre-trained Variational Autoencoder (VAE) to map different PDE states into a shared latent space.
  2. Learning Latent Dynamics: Employing a Transformer to learn probabilistic latent dynamics, trained via flow matching.
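The two stages above can be sketched in a toy form. The code below is a minimal illustration, not the paper's architecture: the VAE encoder is replaced by a fixed random projection, and the Transformer by a small linear velocity model; only the conditional flow-matching recipe (interpolate between noise and data, regress toward the straight-line velocity) follows the standard formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in): the paper uses a pre-trained VAE encoder; here a fixed
# random linear projection plays that role, mapping PDE states to latents.
STATE_DIM, LATENT_DIM = 64, 8
W_enc = rng.normal(size=(LATENT_DIM, STATE_DIM)) / np.sqrt(STATE_DIM)

def encode(state):
    return W_enc @ state

# Stage 2: conditional flow matching in latent space. Sample noise z0 and a
# latent data point z1, interpolate z_t = (1 - t) z0 + t z1, and regress the
# model's velocity toward the straight-line target z1 - z0.
def features(z_t, t):
    return np.concatenate([z_t, t * z_t, [1.0]])  # toy time-dependent features

V = np.zeros((LATENT_DIM, 2 * LATENT_DIM + 1))    # toy linear velocity model
lr, losses = 0.02, []
for step in range(3000):
    z1 = encode(rng.normal(size=STATE_DIM))       # latent "data" sample
    z0 = rng.normal(size=LATENT_DIM)              # Gaussian noise sample
    t = rng.uniform()
    z_t = (1 - t) * z0 + t * z1                   # point on the linear path
    err = V @ features(z_t, t) - (z1 - z0)        # prediction minus FM target
    V -= lr * np.outer(err, features(z_t, t))     # SGD step on 0.5 * ||err||^2
    losses.append(np.mean(err ** 2))
```

At inference time, integrating the learned velocity field from a noise sample yields a sample of the next latent state, which the VAE decoder would map back to a physical field.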

A key mechanism is an uncertainty knob that perturbs latent inputs during training and inference, allowing the solver to correct drift and stabilize autoregressive prediction. Flow forcing complements this by updating a system descriptor (the context) from model-generated trajectories, aligning training and inference conditions and improving long-term stability.
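A rough sketch of these two mechanisms follows. The dynamics, the noise level, and the context-update rule are all illustrative assumptions; only the overall pattern (perturb latent inputs, roll out autoregressively, update the context from generated states) mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

def step_model(z, context, sigma):
    """Hypothetical one-step latent solver: a simple contraction toward the
    context stands in for the learned dynamics. The input is perturbed with
    Gaussian noise of scale sigma -- the "uncertainty knob"."""
    z_noisy = z + sigma * rng.normal(size=z.shape)
    return 0.9 * z_noisy + 0.1 * context

def rollout(z0, context, sigma, n_steps):
    """Autoregressive rollout. In the spirit of flow forcing, the context is
    updated from the model's own generated states rather than ground truth
    (the exponential-moving-average rule here is an assumption)."""
    z, traj = z0, []
    for _ in range(n_steps):
        z = step_model(z, context, sigma)
        context = 0.95 * context + 0.05 * z
        traj.append(z.copy())
    return np.stack(traj)

traj = rollout(np.ones(8), np.zeros(8), sigma=0.01, n_steps=100)
```

Because the model is trained to tolerate perturbed inputs, small rollout errors at test time look like the noise it already knows how to absorb, which is the intuition behind the drift correction.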

The model was pre-trained on a corpus of approximately 2.5 million trajectories at 128^2 resolution, spanning 12 PDE families. On short horizons, LGS matches strong deterministic neural operator baselines, while on long horizons it significantly reduces drift. Learning in latent space, combined with efficient architectural choices, yields up to a 70x reduction in FLOPs over non-generative baselines, enabling scalable pre-training. The authors also demonstrate efficient adaptation to an out-of-distribution 256^2 Kolmogorov flow dataset under limited fine-tuning budgets.
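Back-of-envelope arithmetic shows why latent-space learning saves compute. The dimensions below are assumptions for illustration, not the paper's configuration (whose reported 70x also reflects architectural choices beyond compression): self-attention cost grows with token count, so running the Transformer over a small latent grid instead of the full 128^2 state is far cheaper per step.

```python
# Rough per-layer self-attention cost: QKV/output projections plus the
# attention matmuls (constants simplified; illustrative only).
def attention_flops(n_tokens, dim):
    return 4 * n_tokens * dim**2 + 2 * n_tokens**2 * dim

pixel_tokens = (128 // 8) ** 2   # assumed 8x8 patches on the 128^2 grid -> 256 tokens
latent_tokens = (16 // 4) ** 2   # assumed 4x4 patches on a 16^2 latent grid -> 16 tokens
dim = 256                        # assumed hidden width

ratio = attention_flops(pixel_tokens, dim) / attention_flops(latent_tokens, dim)
print(round(ratio, 1))  # roughly 23x under these toy assumptions
```

Even this simplified count gives an order-of-magnitude saving per step; fewer or cheaper layers in the latent model widen the gap further.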

LGS represents a step towards generalizable, uncertainty-aware neural PDE solvers that are more reliable for long-term forecasting and downstream scientific workflows.