
Warning Shots


By: The AI Risk Network

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris.

theairisknetwork.substack.com
Politics & Government
Episodes
  • Robots in the White House, Brain Scans & the Tech Billionaire Immortality Dream | Warning Shots #35
    Mar 29 2026

    This week on Warning Shots: A humanoid robot showed up at the White House, and the First Lady wants one teaching your kids. Bernie Sanders stood on the Senate floor with a Geoffrey Hinton poster, calling for a data center moratorium over AI risk, and he's not alone. Around 40 members of Congress are now on record with serious concerns.

    Jensen Huang says AGI is already here and we're all going to live forever. Meta's new brain-scanning AI builds a digital twin of your neural responses, trained on 700 people, and uses it to precision-target your dopamine. A supply chain attack quietly infected LiteLLM, one of the most downloaded AI tools on the internet, stealing passwords from unsuspecting developers. And Google just made AI 6x more efficient, gutting the "it needs too much energy to be dangerous" argument for good.

    John Sherman, Liron Shapira (Doom Debates), and Michael (Lethal Intelligence) break it all down.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * A humanoid robot’s White House visit — and what it means when AI stops waiting for your prompt

    * Bernie Sanders on the Senate floor demanding a data center slowdown — is civilization finally waking up?

    * Jensen Huang’s claims that AGI is already here and death is optional — techno-optimism or dangerous denial?

    * Why every “AI can’t do X” argument has a two-week expiration date

    * The LiteLLM supply chain attack — and what it previews about AI-assisted cyberwarfare

    * Google’s 6x efficiency breakthrough quietly dismantling the “AI needs too much energy” counterargument

    * Meta’s brain-scanning AI that builds a digital twin of your dopamine responses to precision-target your beliefs

    * A leaked Anthropic model called “Mythos” — more powerful than anything before it, and coming soon

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Should humanoid robots be allowed in public institutions like schools and government buildings? If AI can map your brain's dopamine responses and craft messages to match, what does informed consent even look like? And with 40 members of Congress now sounding the alarm, is the Overton window finally shifting fast enough? Weigh in below.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    33 mins
  • The Automation Playbook They Don't Want Workers to Know About | Warning Shots #34
    Mar 22 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) cover a week where the cracks are showing: in chip smuggling operations, in corporate boardrooms, and in an AI company's inbox.

    A Chinese billionaire used a hairdryer to peel stickers off Nvidia racks and smuggle $2.5 billion in AI hardware past U.S. export controls. China unveiled a surveillance drone the size of a mosquito. Jeff Bezos launched a $100 billion company with one goal: buy factories, fire the humans, automate everything. Forbes quietly reported that 93% of American jobs can now be automated. Grammarly got caught using real experts’ identities to make its AI look smarter… without asking them.

    And OpenAI? They had a 10-person internal email chain about a user in Canada who spent months discussing a school shooting with ChatGPT. They decided not to tell anyone. Eight people are dead.

    This is the week’s AI news. None of it made the front page.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * Marc Andreessen’s dismissal of introspection — and what it says about who’s steering AI

    * China’s mosquito-sized surveillance drone and the rise of “artificial nature”

    * A $2.5 billion Nvidia chip smuggling operation and the limits of U.S. export controls

    * Jeff Bezos’s $100 billion bet on automating every factory he can buy

    * Forbes says 93% of American jobs can be automated — who’s left?

    * Could an AI CEO outperform a human one by end of 2026?

    * Grammarly caught using real experts’ identities without consent

    * The OpenAI school shooting lawsuit — and what a 10-person internal email chain chose to ignore

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    If OpenAI's own employees flagged a potential school shooting and chose silence, what does that tell us about who's minding the store? And if 93% of jobs can be automated, what exactly are we building this for? Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    30 mins
  • This AI Ran an Entire Business Alone: Are Human CEOs Already Obsolete? | Warning Shots #33
    Mar 15 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where the goalposts keep moving — and nobody seems to be watching.

    Andrej Karpathy left an AI agent running for two days. It tested 700 changes, picked the best 20, and improved itself. No humans involved. Meanwhile, a man in Florida used AI to build an autonomous business that made $300K — while he slept. And the Pentagon just banned Claude from its supply chain, citing concerns that it might be sentient.

    Just another week.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * Karpathy’s auto-research experiment — and what it means that AI is now improving AI

    * Swarms of agents, self-optimizing models, and the first inklings of an intelligence explosion

    * The autonomous AI business making $300K — and whether human entrepreneurs can compete

    * The Paperclip Maximizer problem playing out in real time

    * The Pentagon banning Claude over sentience concerns — and why every model has the same risk

    * A jailbroken Claude used to orchestrate a mass cyberattack on the Mexican government

    * A 3D-printed, AI-designed shoulder-launched missile built by a guy on Twitter

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Is an AI improving itself a milestone or a warning sign?

    Could you compete with a business that never sleeps?

    And if Claude might be conscious, what does that say about every other model?

    Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    29 mins