The manipulation of reality in the age of AI

A recent article reported that the US Department of Homeland Security (DHS) has been using AI video generators from Google and Adobe to create content shared with the public. This news, together with other recent examples of image manipulation, raises serious questions about the crisis of truth in the age of artificial intelligence.

The failure of verification tools

In 2024, there was great anticipation around initiatives such as the Content Authenticity Initiative (CAI), co-founded by Adobe, whose provenance labels were meant to indicate a piece of content's origin and any AI involvement. In practice, however, these labels are often optional, and platforms can strip them when content is uploaded or shared.

The persistent influence of falsehoods

A study published in the journal Communications Psychology found that even when people know a piece of content is fake, they remain emotionally influenced by it. This suggests that transparency alone is not enough to counter disinformation.

Towards a new strategy

AI tools for generating and editing content are becoming more capable and more accessible. A new strategy for addressing the manipulation of reality is therefore needed, one that accounts for the fact that exposing falsehoods does not always restore trust.