
The tiny team trying to keep AI from destroying everything

Decoder with Nilay Patel · with Hayden Field · December 4, 2025 · 38 min

Summary

This episode delves into the critical work of Anthropic's "societal impacts team," a small group tasked with uncovering and publicizing potential negative consequences of AI, even those unflattering to Anthropic's own products. It explores the inherent challenges of maintaining independence and transparency within an internal ethics team and discusses the broader implications for responsible AI development and governance. Understanding this is valuable for ecommerce operators evaluating the AI tools they might adopt, and it underscores the importance of scrutinizing AI development for bias and societal impact.

Key takeaways

Themes

ai & automation · founder & leadership

Topics covered

ai ethics · ai safety · ai governance · bias in ai · organizational independence · responsible ai development

Episode description

Today, I’m talking with Verge senior AI reporter Hayden Field about some of the people responsible for studying AI and deciding in what ways it might… well, ruin the world. Those folks work at Anthropic as part of a group called the societal impacts team, which Hayden just spent time with for a profile she published this week on The Verge. The team is just nine people out of more than 2,000 who work at Anthropic, and their only job, as the team members themselves say, is to investigate and publish "inconvenient truths" about AI. That of course brings up a whole host of problems, the most important of which is whether this team can remain independent, or even exist at all, as it publicizes findings about Anthropic's own products that might be unflattering or even politically fraught.

Links:
It’s their job to keep AI from destroying everything | The Verge
Anthropic details how it measures Claude’s wokeness | The Verge
White House orders tech companies to make AI bigoted again | The Verge
Chaos and lies: Why Sam Altman was booted from OpenAI | The Verge
How Elon Musk is remaking Grok in his image | NYT
Anthropic tries to defuse White House backlash | Axios
New AI battle: White House vs. Anthropic | Axios
Anthropic will pursue gulf state investments after all | Wired

Subscribe to The Verge to access the ad-free version of Decoder!

Credits: Decoder is a production of The Verge and part of the Vox Media Podcast Network. Decoder is produced by Kate Cox and Nick Statt and edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Frequently asked about this episode

What does this episode say about ai & automation?
Internal AI ethics teams face significant challenges in remaining independent and transparent, especially when findings are critical of corporate products or politically sensitive.
What does this episode say about founder & leadership?
The "wokeness" debate in AI highlights the complexities of addressing biases and ensuring AI models align with diverse societal values.
What does this episode say about ai & automation?
Political and investment pressures can heavily influence AI development and safety considerations, underscoring the need for robust ethical frameworks.
What does this episode say about ai & automation?
Companies developing AI should proactively establish teams or processes to identify and mitigate potential societal harms and "inconvenient truths" associated with their technology.
