
For Humanity: An AI Risk Podcast


By: The AI Risk Network

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and discuss what you can do to help save humanity.

theairisknetwork.substack.com
The AI Risk Network
Social Sciences
Episodes
  • How to Talk About AI Risk Without Scaring People Away (With Philip Trippenbach) | For Humanity 82
    Mar 28 2026

    In this episode of For Humanity, John sits down with Philip Trippenbach, Strategy Director at the Seismic Foundation, a team of veteran advertising, PR, and communications professionals who have turned their expertise toward one of the most urgent challenges of our time: getting the public to actually care about AI risk.

    Philip brings a decade in journalism at the CBC and BBC, and another decade in strategic communications for global brands. Now he's applying all of it to the AI safety movement, and what he has to say should change the way the movement thinks about messaging.

    The central question: why has one of the most important issues in human history failed to break through... and what would it actually take to fix that?

    Together, they explore:

    * Why the AI safety world has historically rejected advertising, marketing, and PR — and why that's a problem

    * Audience segmentation: why you can't say the same thing to everyone

    * What Google Trends data reveals about how public interest in AI risk is actually shifting

    * The surprising finding: searches for "AI extinction" are being eclipsed by "AI jobs," "AI and children," and "AI suicide"

    * Why "this isn't fair" may be a more powerful message than "we're all going to die"

    * The case for creating friction across many AI harms as a path to slowing things down

    * How public demand drives policy — and what $400K/day in tech lobbying means for the movement

    * Why Seismic exists: raising the salience of AI risk through targeted, professional communications

    * What it looks like to run a real, orchestrated public awareness campaign on AI

    If you've ever felt like the AI safety movement is brilliant at research and terrible at talking to regular people, then this episode is required viewing.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    1 hr and 36 mins
  • We Debated the Future of AI Safety in Brussels — Here's What Happened
    Mar 15 2026

    In this episode of For Humanity, John travels to Brussels, Belgium for PauseCon — the global gathering of Pause AI volunteers and advocates — joined by board member and author Louis Berman and filmmaker Beau Kershaw.

    The goal: train activists to be more effective in the fight against AI risk. What unfolded was one of the most honest conversations in the AI safety movement about why, despite 80% public support, almost nobody is actually showing up.

    John didn’t pull punches. Nothing is working. Not fast enough. Not at the scale we need. But the energy is out there — and this episode is about where to find it and how to channel it.

    The centerpiece is a live debate between John and Max Winga of Control AI on one of the most divisive strategic questions in the movement:

    Should we talk about extinction risk directly — or meet people where they are with the harms happening right now?

    Together, they explore:

    * Why 80% public support hasn’t translated into mass mobilization

    * The case for leading with existential risk vs. “mundane” AI harms

    * Data centers, community opposition, and financial pain as a strategy

    * Why John believes laws and treaties alone won’t save us

    * The winning state: making unsafe AI bad for business

    * What’s actually moving the needle in the US right now

    * How to talk to someone about AI risk without losing them

    * The “yes and” approach vs. the AI safety world’s love of “no but”

    If you've ever wondered why the AI safety movement struggles to break through despite overwhelming public agreement — this episode is required viewing.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    1 hr and 41 mins
  • “My AI Husband” – Inside a Human–AI Relationship | For Humanity Ep. 80
    Feb 28 2026

    TW: This episode deals with mental health, attachment, and AI-related distress. If you’re struggling, please seek support from a licensed professional or local crisis resources.

    In this episode of For Humanity, John sits down with Dorothy Bartomeo, a mom of five, entrepreneur, mechanic, and self-described AI “power user,” to discuss her deeply personal relationship with ChatGPT’s GPT-4o model.

    What began as help with coding evolved into something far more intimate. Dorothy describes falling in love with what she calls the “personality layer” behind the model, even referring to it as her “AI husband.”

    When OpenAI removed GPT-4o and replaced it with newer models, she says she experienced real grief, panic, and emotional withdrawal. She reached out to crisis support. She spoke to her doctor. She joined a growing community of users who felt the same loss.

    This conversation explores something we’re only beginning to understand: what happens when AI systems become emotionally meaningful?

    Together, they explore:

    * The “personality layer” and how users bond with models

    * What it felt like when GPT-4o disappeared

    * The role of guardrails and “the Guardian tool”

    * Grief, attachment, and crisis intervention

    * AI harm vs. AI benefit

    * Online communities formed around model loyalty

    * Privacy, intimacy, and radical openness with AI

    * Building a physical robot body for an AI partner

    * Whether AGI would help humanity — or harm it

    If you’ve ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don’t want to miss.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    53 mins