# LLMs: How Do They Assess Trustworthiness of Online Information?
## LLMs and Online Trust: A Study
Perceived trustworthiness is a key element in how users navigate online information. A recent study analyzed how large language models (LLMs), which are increasingly integrated into search, recommendation, and conversational systems, represent this concept.
Researchers analyzed how instruction-tuned LLMs (Llama 3.1 8B, Qwen 2.5 7B, Mistral 7B) encode perceived trustworthiness in web narratives, using the PEACE-Reviews dataset, annotated for cognitive appraisals, emotions, and behavioral intentions. The results show that the models systematically distinguish between high- and low-trust texts, revealing that trust cues are implicitly encoded during pre-training.
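As a rough illustration of how such probing can work, the sketch below fits a linear probe on pooled hidden states to separate high-trust from low-trust texts. This is a minimal sketch under stated assumptions, not the paper's method: the checkpoint, layer choice, mean pooling, logistic-regression probe, and the two toy texts standing in for PEACE-Reviews items are all illustrative.

```python
# Hedged sketch: train a linear probe on an LLM's hidden states to test
# whether high- vs low-trust texts are linearly separable.
# Assumptions (not from the paper): the layer, mean pooling, probe type,
# and the toy texts standing in for PEACE-Reviews items.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.3"  # one of the studied families

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, torch_dtype=torch.float16,
                                  device_map="auto")
model.eval()

def embed(texts, layer=-1):
    """Mean-pool hidden states from one layer as a text representation."""
    feats = []
    for t in texts:
        inputs = tok(t, return_tensors="pt", truncation=True).to(model.device)
        with torch.no_grad():
            hs = model(**inputs, output_hidden_states=True).hidden_states[layer]
        feats.append(hs.mean(dim=1).squeeze(0).float().cpu())
    return torch.stack(feats).numpy()

# Toy stand-ins for annotated narratives (label 1 = high trust).
texts = ["The refund arrived exactly as promised, with a clear receipt.",
         "They kept changing the price and never answered my questions."]
labels = [1, 0]

X = embed(texts)
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe train accuracy:", probe.score(X, labels))
```

In a real experiment the probe would of course be evaluated on held-out texts; above-chance held-out accuracy is what would indicate that trust cues are encoded in the representations.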
## Implications for AI
The analysis highlighted significant associations with appraisals of fairness, certainty, and accountability, dimensions central to how humans form trust online. These findings demonstrate that modern LLMs internalize psychologically grounded trust signals without explicit supervision, offering a representational foundation for designing credible, transparent, and trustworthy AI systems in the web ecosystem. Code and appendix are available on GitHub.
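To see what such an association analysis might look like in practice, here is a toy sketch that correlates probe trust scores with per-dimension appraisal ratings. The scores, the ratings, and the choice of Spearman correlation are all hypothetical placeholders, not the paper's data or procedure.

```python
# Hedged sketch: relate probe trust scores to appraisal annotations
# (fairness, certainty, accountability). All numbers below are
# hypothetical placeholders, not the paper's data.
import numpy as np
from scipy.stats import spearmanr

trust_scores = np.array([0.91, 0.15, 0.78, 0.30])  # probe outputs per text
appraisals = {
    "fairness":       np.array([5, 1, 4, 2]),      # 1-5 annotator ratings
    "certainty":      np.array([4, 2, 5, 1]),
    "accountability": np.array([5, 1, 5, 2]),
}

for name, ratings in appraisals.items():
    rho, p = spearmanr(trust_scores, ratings)
    print(f"{name}: rho={rho:.2f}, p={p:.3f}")
```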