Reality is losing the deepfake war

Decoder with Nilay Patel · with Jess Weatherbed · February 5, 2026 · 48 min

Summary

The rise of generative AI has created a "reality crisis" as deepfakes and manipulated media flood social platforms. This episode explores the limitations of AI labeling standards like C2PA and the challenges social media executives face in combating misinformation. Ecommerce operators should be aware of the implications for brand trust and authenticity in marketing.

Themes

ai & automation · brand & content

Topics covered

deepfakes · generative ai · ai labeling standards · content authenticity · social media moderation · digital media provenance

Episode description

Today, we’re going to talk about reality, and whether we can label photos and videos to protect our shared understanding of the world around us. To do this, I sat down with Verge reporter Jess Weatherbed, who covers creative tools for us — a space that’s been totally upended by generative AI. For years on The Verge, we’ve been talking about how the photos and videos taken by our phones are getting more and more processed. Here in 2026, we’re in the middle of a full-on reality crisis, as fake, manipulated, and ultra-believable images and videos flood onto social platforms at scale. So Jess and I discussed the limitations of AI labeling standards like C2PA, and why social media execs like Instagram boss Adam Mosseri are now sounding the alarm. Read the full transcript on The Verge.

Links:
- This system can sort real pictures from AI fakes — why aren’t we using it? | The Verge
- You can’t trust your eyes to tell you what’s real, says Instagram | The Verge
- Instagram’s boss is missing the point about AI on the platform | The Verge
- Sora is showing us how broken deepfake detection is | The Verge
- Reality still matters | The Verge
- No one’s ready for this | The Verge
- What is a photo, @WhiteHouse edition | The Verge
- Google Gemini is getting better at identifying AI fakes | The Verge
- Let’s compare Apple, Google & Samsung’s definitions of 'photo’ | The Verge
- The Pixel 8 and the what-is-a-photo apocalypse | The Verge

Subscribe to The Verge to access the ad-free version of Decoder!

Credits: Decoder is a production of The Verge and part of the Vox Media Podcast Network. Decoder is produced by Kate Cox and Nick Statt and edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Frequently asked about this episode

What does this episode say about ai & automation?
- AI labeling standards like C2PA are currently insufficient to combat the scale of deepfakes, so businesses cannot rely on them alone for content authentication.
- The concept of what constitutes an "authentic" photo or video is rapidly evolving; brands should consider transparently labeling any AI-generated content they use to maintain consumer trust.
- As the public's ability to discern real from fake diminishes, developing robust internal verification processes for visual content becomes crucial.
- The deepfake war directly impacts brand reputation and consumer trust. Businesses must prioritize authentic content creation, develop strategies for rapid response to misinformation, and vigilantly protect their brand image against AI-driven manipulation.

What does this episode say about brand & content?
- Social media platforms are struggling to detect and moderate AI-generated content effectively, highlighting the need for brands to proactively verify their own visual assets and monitor for misuse.