Women talkin' 'bout AI

By: Kimberly Becker & Jessica Parker
We’re Jessica and Kimberly – two non-computer scientists who are just as curious (and skeptical) about generative AI as you are. Each episode, we chat with people from different backgrounds to hear how they’re making sense of AI. We keep it real, skip the jargon, and explore it with the curiosity of researchers and the openness of learners.

Subscribe to our channel if you’re also interested in understanding AI behind the headlines.

© 2026 Women talkin' 'bout AI
Categories: Economics, Leadership, Management & Leadership, Politics & Government
Episodes
  • Depth is the Human Edge
    Apr 8 2026

    Jessica and Kimberly just had a paper accepted for publication in Frontiers in Education. So today, they're sharing what they've learned.

    The big idea is that AI is not a neutral tool. It's a cultural intermediary. Just like a human translator doesn't swap words one for one, AI mediates the way we understand the world. It shapes what we write, what we trust, and what we treat as true. And most of us have no idea that's happening.

    They walk through the research behind their framework, talk about what AI actually does well (fluency and accuracy), and where it falls short (depth, nuance, relational intelligence). And they share real examples from their work that show what it looks like when we hand over too much of our thinking to a machine.

    Topics Covered

    • What it means to treat AI as a cultural intermediary and why that framing changes everything
    • The difference between accuracy, fluency, and depth in writing, and why AI can only get you so far
    • How the same consulting firm that charged thousands of dollars produced a report that ChatGPT could replicate in minutes
    • What a capability map for AI literacy looks like, from emerging to proficient
    • Why relational intelligence is the human edge that AI cannot replicate
    • How AI is widening the distance between people and what we lose when we stop talking to each other
    • The social media influencer as a double intermediary, and what that means for kids whose brains aren't fully developed yet
    • Why publishing in an AI-focused field is its own kind of pit

    Referenced in This Episode

    • The "Attention Is All You Need" paper and the transformer architecture
    • Timnit Gebru and the Stochastic Parrots paper
    • Taylor & Francis and the $75 million content licensing deal with AI companies

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








58 mins
  • Living in Aporia: How to Lead When You Can't Know What's Real | Rebecca Bultsma
    Apr 1 2026

Jessica and Kimberly sit down with Rebecca Bultsma, an AI ethics researcher completing her dissertation in Data and AI Ethics at the University of Edinburgh, a keynote speaker, and a Chief Innovation Officer with a background in communication strategy and leadership consulting.

    They invited Rebecca to dig into one of the most unsettling questions of this moment: how do we make decisions when we can never be certain what is real? From deepfake videos circulating in school districts to voice cloning in courtrooms, Rebecca's research follows leaders into the places where the old rules no longer apply and asks what they are actually drawing on when the evidence itself cannot be trusted. She shares the concept of aporia, that frustrated, in-between state of not knowing, and makes the case that sitting with uncertainty is not a weakness. It is where real learning begins.

    Topics Covered

    • What aporia is and why it might be the most honest description of how we all feel about AI right now
    • How K-12 leaders are making high-stakes decisions when video evidence can no longer be verified
    • Why AI detection tools are failing students, teachers, and the humans tasked with enforcing academic integrity
    • The gap between how fast deepfake technology is advancing and how far detection lags behind
    • What watermarking can and cannot do, and how easy it is to work around
    • Why Rebecca thinks we are heading back toward a more oral society
    • Prompt baiting, AI burnout, and the research emerging around cognitive overload
    • Using AI as an accountability partner rather than a ghostwriter
    • What kids are seeing on social media that adults are missing

    Referenced in This Episode

    • rebeccabultsma.com
    • Forbes: "AI Ethicist Explains How to Humanize AI in the Care Economy" (March 2026)
    • The Brookings Institution report on AI and student expectations
    • Dr. Rachel Wood on AI and human relationships

    1 hr and 10 mins
  • Data Annotation: The Human Labor Behind AI with Heather Mellquist Lehto, PhD
    Mar 24 2026

    Jessica and Kimberly sit down with Heather Mellquist Lehto, PhD.

    Heather is a mathematician, anthropologist, former Harvard faculty member, Vatican AI advisor, and founder of Guilded AI. She joins the show to pull back the curtain on data annotation: the human labor that makes AI possible and one of the least visible, least understood, and most exploited parts of the entire industry. From pennies-per-task gig work to expert PhDs clicking through unpaid tests, they dig into who is actually building these models, what they are being paid, and why the workers creating billions in value are locked out of the wealth they generate. Heather shares why she got fed up with the recruiting playbook, what she is building differently at Guilded AI, and why treating workers well is not just an ethical argument but a data quality one.

    Topics Covered

    • What data annotation is and why it still requires human expertise at every level of AI development
    • The difference between data annotation and reinforcement learning from human feedback
    • How workers go from labeling apples to annotating molecular structures and advanced mathematics
    • Why the effective hourly rate for data annotators is much lower than advertised
    • Scale AI, the $29 billion valuation, and the Department of Labor investigation
    • How Guilded AI is structuring equity so annotators share in the upside
    • Garbage in, garbage out: why worker treatment is a data quality issue
    • AI chatbot vibe checks as expert vetting, and why that fails everyone
    • The Gilded Age, guilds, and what banding together could look like
    • Why the perfect cannot be the enemy of the good

    Referenced in This Episode

    • Empire of AI by Karen Hao
    • The Worlds I See by Fei-Fei Li
    • The Age of Surveillance Capitalism by Shoshana Zuboff
    • Rerum Novarum by Pope Leo XIII
    • Guilded AI
    • Scale AI and the Meta investment

    1 hr and 19 mins