The Challenge of AI Disinformation
The spread of AI-manipulated content poses a growing challenge to the reliability of online information. From interactive deepfakes to widely accessible hyperrealistic models, distinguishing between reality and fiction is increasingly complex.
Microsoft's Proposal
Microsoft has presented a blueprint for establishing the authenticity of online content. The plan involves the combined use of several techniques:
- Provenance: Detailed tracking of the origin and history of the content.
- Watermarks: Digital marks invisible to the human eye but detectable by machines.
- Fingerprints: Hashing the content to generate a unique mathematical signature that identifies it.
The company evaluated 60 combinations of these techniques to determine which ones offer reliable results and which ones could generate confusion.
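The article does not describe how these fingerprints are computed. As a minimal sketch of the idea, a cryptographic hash can stand in for a content fingerprint: identical bytes always produce the same signature, and any change breaks the match. (This is an illustration only; real media fingerprinting typically uses perceptual hashes that survive re-encoding, which a plain cryptographic hash does not.)

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest that uniquely identifies these exact bytes."""
    return hashlib.sha256(content).hexdigest()

original = b"example news photo bytes"
edited = b"example news photo bytes (retouched)"

# Identical content always yields the same fingerprint...
print(fingerprint(original) == fingerprint(original))  # True
# ...while any modification, however small, breaks the match.
print(fingerprint(original) == fingerprint(edited))    # False
```

The trade-off each combination of techniques must navigate is exactly this brittleness: an exact signature proves integrity but fails after harmless transformations such as compression.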
Implementation and Limitations
Eric Horvitz, chief scientific officer of Microsoft, emphasized that these tools are not designed to assess the truthfulness of content, only to detect whether it has been manipulated. Adoption of these standards by platforms and AI companies is crucial, but could be hampered by economic interests.
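Horvitz's distinction can be made concrete with a hypothetical provenance check (all names and keys here are illustrative; real provenance standards such as C2PA use public-key signatures rather than the shared key shown): a valid signature proves the origin record was not altered, but says nothing about whether the content itself is true.

```python
import hashlib
import hmac
import json

# Hypothetical key held by the publisher -- for illustration only.
SIGNING_KEY = b"publisher-demo-key"

def sign_provenance(metadata: dict) -> str:
    """Sign an origin record so later alterations become detectable."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def is_untampered(metadata: dict, signature: str) -> bool:
    """True if the record still matches its original signature."""
    return hmac.compare_digest(sign_provenance(metadata), signature)

record = {"source": "newsroom-camera-01", "captured": "2024-05-01T10:00:00Z"}
sig = sign_provenance(record)

print(is_untampered(record, sig))   # True: the record is intact
record["source"] = "unknown"
print(is_untampered(record, sig))   # False: the provenance was altered
```

Note that a fabricated scene photographed by a real camera would still pass this check, which is precisely why such tools verify manipulation rather than truth.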
Regulation and Future Prospects
The European Union's AI Act and other regulations under discussion globally could impose an obligation to indicate when content has been generated with AI. However, inadequate implementation of these tools could undermine user trust. California's AI Transparency Act will be a first test in the US.