Attribution of Causality to AI: An Analysis of Human Perception
A recent study examined how people attribute causal responsibility when artificial intelligence (AI) is involved in harmful events. The research focused on "cause selection", that is, identifying which elements led to a negative outcome, a crucial step in establishing liability.
The study revealed several key findings:
- AI Autonomy: When the AI has moderate autonomy (the human sets the goal, the AI chooses the means) or high autonomy (the AI sets both the goal and the means), participants attribute greater causal responsibility to the AI. With low autonomy, responsibility falls mainly on the human.
- Human vs. AI Role: Even when a human and an AI perform the same action, participants tend to judge the human's action as more causal.
- Developer Responsibility: The developer, despite being distant from the chain of events, is considered highly responsible; this attribution reduces the blame assigned to the human user, but not the blame assigned to the AI.
- AI Components: When the AI is decomposed into a large language model (LLM) and an agentic component, the agentic component is perceived as more causal.
These results offer insight into how people perceive the causal contribution of AI in misuse and misalignment scenarios, and into how those assessments interact with the roles of users and developers. The research can inform liability frameworks for AI-caused harms and help explain how intuitive judgments shape social and political debates about real-world AI-related incidents.