Contextual Analysis with LLMs: A Study on GPT-5
A recent study published on arXiv investigates the ability of large language models (LLMs) to support interpretative citation context analysis (CCA). Rather than expanding typological label sets, the research focuses on an in-depth analysis of a single complex case.
Prompt Sensitivity and Interpretations
The study treats prompt sensitivity as a methodological issue, varying prompt structure and framing in a balanced 2×3 design. Using footnote 6 in Chubin and Moitra (1975) and Gilbert's (1977) reconstruction as a probe, a two-stage GPT-5 pipeline was implemented: a surface classification of the citation text together with an expectation pass, followed by cross-document interpretative reconstruction using the citing and cited full texts.
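The two-stage pipeline can be sketched roughly as follows. This is a minimal illustrative reconstruction, not the paper's code: all function names, prompt wording, and the response format are assumptions, and the model call is abstracted behind a plain callable so the stages are inspectable in isolation.

```python
# Illustrative sketch of a two-stage citation-analysis pipeline.
# All names and prompts are hypothetical; `llm` stands in for any
# text-in/text-out model call (e.g. a GPT-5 API wrapper).
from dataclasses import dataclass, field

@dataclass
class Reconstruction:
    surface_label: str                      # Stage 1 output
    hypotheses: list = field(default_factory=list)  # Stage 2 output

def classify_surface(citation_text: str, llm) -> str:
    # Stage 1: classify the citation from its surface text alone.
    return llm(f"Classify this citation: {citation_text}")

def reconstruct(citing_text: str, cited_text: str, llm) -> list:
    # Stage 2: cross-document interpretative reconstruction using
    # the full texts of the citing and cited documents.
    raw = llm(
        "List plausible interpretative readings, separated by ';'.\n"
        f"CITING: {citing_text}\nCITED: {cited_text}"
    )
    return [h.strip() for h in raw.split(";") if h.strip()]

def run_pipeline(citation_text, citing_text, cited_text, llm):
    label = classify_surface(citation_text, llm)
    hypotheses = reconstruct(citing_text, cited_text, llm)
    return Reconstruction(label, hypotheses)

# Stubbed model, so the pipeline shape can be exercised offline.
def stub_llm(prompt: str) -> str:
    if prompt.startswith("Classify"):
        return "supplementary"
    return "methodological critique; negative case; boundary example"

result = run_pipeline("footnote 6", "citing full text",
                      "cited full text", stub_llm)
print(result.surface_label)    # supplementary
print(len(result.hypotheses))  # 3
```

Keeping the two stages as separate functions mirrors the study's design: the surface label is produced without access to the full texts, so stability of Stage 1 can be checked independently of the prompt variations applied in Stage 2.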
Results and Implications
Across 90 reconstructions, the model produced 450 distinct hypotheses, and the analysis identified 21 recurring interpretations. GPT-5's surface classification proved stable, consistently labeling the citation as "supplementary." In reconstruction, the model generated a structured space of plausible alternatives, but prompt structure and examples redistributed attention and vocabulary, sometimes towards strained readings. The study demonstrates that prompt structure and framing systematically influence which plausible readings and vocabularies the model foregrounds, highlighting both opportunities and risks in using LLMs as guided co-analysts for inspectable and contestable CCA.