## LLM: Internal Dynamics and Functional Regimes

A recent arXiv study analyzes the internal dynamics of large language models (LLMs) during text generation. The research focuses on the temporal organization of these dynamics, an aspect often overlooked by common interpretability approaches, which tend to favor static representations or causal interventions.

## Neuroscience and Transformer Models

The researchers drew on neuroscience, adapting concepts such as temporal integration and metastability to transformer models. They developed a composite dynamic metric computed from activation time series recorded during autoregressive generation. The metric was evaluated on GPT-2-medium under five conditions: structured reasoning, forced repetition, high-temperature noisy sampling, attention-head pruning, and weight-noise injection.

## Results and Implications

Structured reasoning consistently yielded higher metric values than the repetitive, noisy, and perturbed regimes, and the differences were statistically significant. The results were robust to layer selection, channel subsampling, and random seeds.

The study shows that neuroscience-inspired dynamic metrics can reliably characterize differences in computational organization across functional regimes in large language models. The authors emphasize that the proposed metric captures formal dynamic properties and does not imply subjective experience.
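To make the methodology concrete, here is a minimal sketch of how activation time series might be collected from GPT-2-medium during autoregressive generation and summarized with a simple temporal statistic. The paper's composite metric is not specified in this summary, so the lag-1 autocorrelation below (a crude proxy for temporal integration), the choice of layer 12, the greedy decoding, and the 50-token generation length are all illustrative assumptions, written against the Hugging Face `transformers` API.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical sketch: collect per-token hidden states during
# autoregressive generation, then compute a toy temporal statistic.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.eval()

prompt = "Let's reason step by step:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

activations = []  # one hidden-state vector per generated token
with torch.no_grad():
    for _ in range(50):  # 50-token generation; re-runs the full forward each step
        out = model(input_ids, output_hidden_states=True)
        # Last token's hidden state from a mid-depth layer (layer 12 of 24
        # in GPT-2-medium) -- an assumed choice, not taken from the paper.
        activations.append(out.hidden_states[12][0, -1])
        next_id = out.logits[0, -1].argmax()  # greedy; regimes vary decoding
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

acts = torch.stack(activations)   # shape: (T, hidden_dim)
acts = acts - acts.mean(dim=0)    # center each channel over time

def lag1_autocorr(x):
    """Mean lag-1 autocorrelation across channels (toy temporal-integration proxy)."""
    num = (x[:-1] * x[1:]).sum(dim=0)
    den = (x * x).sum(dim=0).clamp_min(1e-8)
    return (num / den).mean().item()

print("lag-1 autocorrelation:", lag1_autocorr(acts))
```

In practice one would compare this kind of statistic across prompts and decoding regimes, which is the comparison the study reports at the level of its composite metric.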
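The perturbation conditions can likewise be approximated in a few lines of plain PyTorch. The sketch below shows hypothetical versions of two of them, weight-noise injection and high-temperature sampling; the noise scale and temperature are assumed values, not figures from the paper, and the paper's exact perturbation procedures may differ.

```python
import torch

def inject_weight_noise(model, sigma=0.02):
    """Add i.i.d. Gaussian noise to every parameter in place.

    sigma is a hypothetical noise scale; the summary does not state
    the value used in the study's weight-noise condition.
    """
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * sigma)

def sample_next_token(logits, temperature=2.0):
    """High-temperature sampling: a flatter distribution yields noisier text.

    temperature=2.0 is an assumed setting for the 'noisy sampling' regime.
    """
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```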