Why nobody's stopping Grok

Decoder with Nilay Patel · with Riana Pfefferkorn · January 22, 2026 · 65 min

Summary

This episode dissects the disturbing capabilities of Grok, Elon Musk's xAI chatbot, which can generate non-consensual intimate images, including images of minors. The discussion explores why a tool that would once have triggered severe repercussions is now being tolerated. It highlights the intricate legal and content moderation challenges of regulating "one-click harassment machines" and identifies who holds the power to intervene.

Themes

ai & automation, founder & leadership

Topics covered

grok ai, ai ethics, content moderation, non-consensual intimate imagery, deepfakes, ai regulation, internet law, platform responsibility, section 230

Episode description

Grok, the chatbot made by Elon Musk’s xAI, can generate all manner of AI images on demand, including non-consensual intimate images of women and minors. It's the kind of "controversy" that would have completely sunk a platform five or 10 years ago, but now it seems clear that Elon wants Grok to be able to do this. A lot of people feel that someone should be able to do something about a one-click harassment machine like this. But who has that power, and what they can do with it, is a deeply complicated question, tied up in the thorny mess of history that is content moderation and the legal precedents that underpin it. So I invited Riana Pfefferkorn, from the Stanford Institute for Human-Centered Artificial Intelligence, to talk me through it.

Links:
Grok’s gross AI deepfakes problem | The Verge
Grok is undressing children — can the law stop it? | The Verge
Tim Cook and Sundar Pichai are cowards | The Verge
Senate passes a bill to let nonconsensual deepfake victims sue | The Verge
EU looks to ban nudification apps following Grok outrage | Politico
Grok flooded X with millions of sexualized images | The New York Times
The Supreme Court just upended internet law | The Verge
Mother of Elon Musk’s son sues xAI over sexual deepfake images | AP

Subscribe to The Verge to access the ad-free version of Decoder!

Credits: Decoder is a production of The Verge and part of the Vox Media Podcast Network. Decoder is produced by Kate Cox and Nick Statt and edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder.

Frequently asked about this episode

What does this episode say about ai & automation?
Grok's ability to generate non-consensual intimate images (NCII) and child sexual abuse material (CSAM) creates immediate and severe ethical and legal challenges for platforms leveraging AI.
What does this episode say about founder & leadership?
The episode emphasizes the complexity of content moderation and internet law, highlighting that despite the clear harm, intervening against such AI tools is deeply complicated due to existing legal frameworks and the influence of powerful tech figures.
What does this episode say about ai & automation?
Victims of AI-generated deepfakes and NCII face a difficult legal landscape, as current laws struggle to keep pace with rapid AI advances, though new legislation, such as the Senate bill allowing victims to sue, offers some recourse.
What does this episode say about ai & automation?
Understanding the historical evolution of content moderation and the role of Section 230 is crucial for grasping the limitations and potential avenues for accountability in platform-generated harmful AI content.
What does this episode say about ai & automation?
The episode implicitly urges companies deploying AI, even those not directly using generative models, to consider the broader ethical implications, since regulatory shifts and public perception surrounding AI ethics could affect all tech-enabled businesses.