## Introduction

Transformer-based language models have demonstrated the ability to learn rich geometric structure in their embedding spaces. But what happens when we try to capture higher-level cognitive organization within these representations? A new study finds that transformer embedding spaces exhibit a hierarchical geometric organization aligned with human-defined cognitive attributes.

## Methodology

The study uses a dataset of 480 natural-language sentences annotated with continuous ordinal energy scores and discrete tier labels spanning seven ordered cognitive categories. The researchers embedded each sentence with transformer models and evaluated how well these annotations could be recovered from the embeddings using linear and shallow nonlinear probes (an illustrative sketch of such a probing setup appears after the Conclusion).

## Results

Both the continuous energy scores and the discrete tier labels are reliably decodable, with shallow nonlinear probes providing consistent gains over linear probes. Lexical TF-IDF baselines perform substantially worse, indicating that the observed structure is not attributable to surface word statistics alone. Nonparametric permutation tests further confirm that probe performance exceeds chance under label-randomization nulls.

## Discussion

Qualitative analyses using UMAP visualizations and confusion matrices reveal smooth low-to-high gradients along the energy scores and predominantly adjacent-tier confusions in embedding space. Taken together, these results provide evidence that transformer embedding spaces exhibit a hierarchical geometric organization aligned with human-defined cognitive attributes, while remaining agnostic about claims of internal awareness or phenomenology.

## Conclusion

In summary, transformer embedding spaces appear to exhibit a hierarchical geometric organization aligned with human-defined cognitive attributes. This result opens new avenues for analyzing cognition and for understanding the representations learned by transformer-based language models.
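## Appendix: Illustrative Probing Sketch

As a concrete, non-authoritative illustration of the probing setup described in the Methodology section, the sketch below fits a linear probe and a shallow nonlinear probe on frozen sentence embeddings. It is not the authors' code: the embedding dimension, the seven-way tier labels, and the synthetic arrays standing in for the real data are assumptions made purely for illustration.

```python
# Minimal probing sketch (illustrative only, not the study's code).
# X stands in for precomputed transformer sentence embeddings and y for the
# seven ordered tier labels; both are synthetic placeholders here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(480, 384))    # 480 sentences, assumed 384-dim embeddings
y = rng.integers(0, 7, size=480)   # stand-in for the seven tier labels

# Linear probe: multinomial logistic regression on the frozen embeddings.
linear_probe = LogisticRegression(max_iter=2000)
linear_acc = cross_val_score(linear_probe, X, y, cv=5).mean()

# Shallow nonlinear probe: a single-hidden-layer MLP.
mlp_probe = MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000, random_state=0)
mlp_acc = cross_val_score(mlp_probe, X, y, cv=5).mean()

print(f"linear probe accuracy:      {linear_acc:.3f}")
print(f"shallow MLP probe accuracy: {mlp_acc:.3f}")
```

On real embeddings, the gap between the two scores is what motivates the comparison of linear and shallow nonlinear probes; on the synthetic placeholders above, both should hover around chance.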
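The permutation tests mentioned in the Results section can be sketched in the same spirit: shuffle the labels to break the embedding-label link and re-score the probe to build a chance distribution. Again, this is a generic sketch of a label-randomization null, not the study's exact procedure; the probe choice and permutation count are assumptions.

```python
# Label-permutation null sketch (illustrative). Reuses X and y from the
# probing sketch above, or any (embeddings, labels) pair of matching length.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def permutation_pvalue(X, y, n_permutations=200, cv=5, seed=0):
    """Return the observed CV score and a one-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    probe = LogisticRegression(max_iter=2000)
    observed = cross_val_score(probe, X, y, cv=cv).mean()
    null_scores = []
    for _ in range(n_permutations):
        y_perm = rng.permutation(y)  # destroy any embedding-label relationship
        null_scores.append(cross_val_score(probe, X, y_perm, cv=cv).mean())
    # Add-one correction keeps the p-value away from an exact zero.
    p = (1 + sum(s >= observed for s in null_scores)) / (1 + n_permutations)
    return observed, p
```

scikit-learn also provides `sklearn.model_selection.permutation_test_score`, which wraps the same idea; the explicit loop is shown here only to make the null construction visible.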