AI and Accessibility: A Complex Picture

A recent study reported by Fortune found that AI-powered search engines give incorrect answers more than 60% of the time. A similar pattern of unreliability shows up in the quality of captions generated automatically by AI tools.

These captions, often marked by disjointed sentences, transcription errors, and dialogue compressed into an incomprehensible stream of text, pose a significant barrier to accessibility.
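Transcription errors like these are commonly quantified with Word Error Rate (WER), the proportion of substituted, deleted, or inserted words relative to a reference transcript. As a minimal sketch (a standard Levenshtein-distance formulation, not tied to any specific captioning tool):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Levenshtein distance over words, computed by dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

A caption with one dropped word out of six scores roughly 0.17; captions with the kind of compounding errors described above can easily exceed 0.5, at which point the text stops being usable as an accessibility aid.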

The Challenges of Reliability

The unreliability of AI-generated output raises important questions about how useful these tools actually are for improving accessibility. While AI promises to automate processes and make information more usable, the presence of errors seriously undermines that goal.

For those evaluating on-premise deployments, there are trade-offs among control, cost, and model accuracy. AI-RADAR offers analytical frameworks at /llm-onpremise for weighing these aspects.