AI-generated images blur reality as identity fraud risks grow

ROME, Jan. 1 — Artificial intelligence tools that once amused the internet with images of the Pope in a white puffer jacket are now raising deeper concerns about identity theft and truth itself, as experts warn the same technology can fabricate entire digital lives that are difficult to disprove.

What began as a novelty has evolved into a threat to personal reputation, financial security and public trust as generative AI grows more convincing and more accessible.

In early 2023, a photorealistic image of Pope Francis wearing a stylish winter coat spread rapidly across social media, fooling millions before it was revealed to be fake. The image became a cultural touchstone, illustrating how easily AI-generated visuals can pass as authentic.

Since then, the technology has advanced beyond single images. Researchers and digital forensics experts say AI systems can now create synthetic identities complete with profile photos, voices, social media histories and even fabricated documents. In some cases, victims must prove a negative — that they are not the person an algorithm claims they are.

“Once an AI-generated identity enters databases or credit systems, correcting the record can be incredibly difficult,” said Hany Farid, a digital forensics professor at the University of California, Berkeley, who studies manipulated media. “Our institutions are built on the assumption that records are grounded in reality.”

The risks extend beyond embarrassment. Banks and employers increasingly rely on automated identity checks, while online platforms struggle to detect sophisticated fakes. Advocacy groups warn that marginalized communities and private citizens without legal resources are most vulnerable.

Governments are beginning to respond. The European Union’s AI Act, approved in 2024, requires labeling of synthetic media and imposes penalties for deceptive uses. In the United States, lawmakers have introduced bills targeting deepfakes used for fraud or impersonation, though comprehensive regulation remains unsettled.

Technology companies say they are developing safeguards, including watermarking AI-generated content and improving detection tools. Critics argue that the measures lag behind the pace of innovation.

As AI-generated content becomes cheaper and more realistic, experts say the puffer-jacket Pope may be remembered as a harmless warning — a moment when society laughed, before realizing the stakes were far higher.