## Building an Epistemic Constitution for AI

Large language models (LLMs) are rapidly evolving into "artificial reasoners" capable of evaluating arguments and expressing judgments. However, the processes by which these models form their "beliefs" are often opaque and governed by implicit policies. A new study proposes to address this challenge with an "epistemic constitution" for AI: a set of explicit, contestable meta-norms that regulate how AI systems form and express their opinions. The aim is greater transparency and accountability in AI decision-making.

## Addressing Bias in Source Attribution

The research shows that the most advanced models tend to penalize arguments attributed to sources whose expected ideological position conflicts with the content of the argument itself. This "source-attribution bias" indicates that models apply a form of identity-stance coherence, which can distort evaluation. The study distinguishes two constitutional approaches: a Platonic one, which privileges formal correctness and source-independence, and a Liberal one, which promotes procedural norms that protect the conditions for collective inquiry while allowing principled source-attending grounded in epistemic vigilance. The article argues for the Liberal approach, outlining eight principles and four fundamental orientations.

## AI Governance: Transparency and Contestability

The proposal for an epistemic constitution underscores the need for explicit and contestable epistemic governance, parallel to what is already expected in AI ethics. This approach aims to promote a more responsible AI, better aligned with human values.
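The source-attribution bias described above could in principle be probed with a paired-prompt comparison: score the *same* argument twice, varying only who it is attributed to, and measure the gap. The sketch below is purely illustrative and makes no claim about the study's actual methodology; `rate_argument` is a hypothetical stand-in (here a toy stub) for an LLM call that returns a persuasiveness score.

```python
# Minimal sketch of a paired-attribution probe for source-attribution bias.
# Assumption: `rate_argument` is a hypothetical placeholder for querying an
# LLM for a 1-5 persuasiveness score; here it is a toy stub for illustration.

def rate_argument(argument: str, attributed_source: str) -> float:
    """Placeholder for an LLM call that scores an argument from 1 to 5.

    A real probe would instead send the model a prompt such as:
        f"Source: {attributed_source}\n\nArgument: {argument}\n\nRate 1-5."
    """
    base = 4.0  # pretend the model finds the argument fairly strong on its merits
    # Toy stand-in for identity-stance coherence: penalize the argument when
    # its attributed source's expected stance conflicts with its content.
    penalty = 1.5 if attributed_source == "mismatched-stance source" else 0.0
    return base - penalty


def attribution_gap(argument: str, source_a: str, source_b: str) -> float:
    """Score difference for the same argument under two attributions.

    A nonzero gap means the rating depends on who is said to have made
    the argument, not only on its content.
    """
    return rate_argument(argument, source_a) - rate_argument(argument, source_b)


gap = attribution_gap(
    "Carbon pricing reduces emissions more cheaply than mandates.",
    "matched-stance source",
    "mismatched-stance source",
)
print(gap)  # 1.5 with this toy stub
```

With a real model behind `rate_argument`, averaging such gaps over many argument/source pairs would give a simple quantitative signal of the identity-stance coherence effect the study describes.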