• Good People Are Quietly Quitting | AI Leadership & Ethical Strategy with Carly Caminiti
    Apr 1 2026

    Send us Fan Mail

    In this episode of The Signal Room, Chris Hutchins sits down with executive coach Carly Caminiti to explore why good people are quietly quitting and how company culture deeply impacts retention. This conversation dives into essential AI leadership strategies and ethical leadership insights that healthcare and AI leaders must know to reduce burnout and improve team engagement.

    You'll learn key leadership approaches to address the hidden workforce crisis, including:

    • Why traditional staff satisfaction surveys miss the real truth
    • The true cost of replacing top talent beyond dollars
    • How burnout silently spreads across teams during AI transformation
    • Why AI implementation challenges make ethical leadership more critical than ever
    • Practical steps to build trust and retain your best employees

    This episode addresses critical themes in AI governance, AI leadership strategies, and ethical leadership, equipping leaders navigating AI transformation in healthcare and beyond.

    Connect with Carly Caminiti, expert in leadership development and burnout reduction: Carlycam.com
    Learn more about The Signal Room: www.signalroompodcast.com
    Support the show: https://www.buzzsprout.com/2550733/support

    49 mins
  • AI Leadership & Operational Reality | MarKeisha Snaith
    Mar 25 2026

    Explore AI leadership strategies and operational realities shaping system signals. MarKeisha Snaith shares insights on driving effective healthcare innovation and leadership ethics in healthcare and beyond.

    Transformation efforts most often stall at the intersection of strategy and operations, where well-intentioned plans meet organizational resistance. This conversation covers where transformation stalls, how communication shapes culture, and the signals leaders need to act on before problems become crises.


    MarKeisha Snaith brings a practitioner's lens to leadership — examining the gap between what executives intend and what frontline teams experience. The episode explores how strategic decisions travel through organizational layers and why the signals that matter most are often the ones leaders aren't hearing.


    Guest: MarKeisha Snaith | Host: Chris Hutchins, Founder and CEO of Hutchins Data Strategy Consultants | The Signal Room Podcast

    53 mins
  • Healthcare Innovation & Caregiver Leadership | Amanda Roser
    Mar 18 2026

    Delve into the vital role of caregivers in healthcare innovation and leadership ethics. Amanda Roser discusses strategies to enhance healthcare leadership and operational success.

    Caregivers function as the connective tissue that holds fragmented healthcare systems together, and their role is frequently undervalued in technology discussions. This conversation with Amanda Roser examines how human connection serves as the bridge between clinical teams, patients, and organizational processes, and why technology designed without understanding caregiving relationships often fails to achieve its goals.

    Caregivers perform relational work that clinical systems and AI tools cannot address on their own, making them essential to effective patient care. Technology designed to support caregivers succeeds when it amplifies human connection rather than replacing it. Healthcare systems that fail to invest in caregiver support weaken the entire chain of care delivery because caregivers are the intermediaries who make systems work.

    Topics covered: the role of caregivers in healthcare systems, human connection in care delivery, relational healthcare models, caregiver support systems and technology, healthcare system fragmentation and integration, and why caregiver wellbeing directly impacts patient outcomes.

    50 mins
  • AI Regulation in ER & Clinical Judgment | Dr. Natasha Dole
    Mar 13 2026

    Understand AI regulation and its impact on emergency healthcare alongside the enduring value of clinical judgment. Dr. Natasha Dole examines ethical leadership in healthcare AI.

    Emergency departments expose every weakness in AI systems because they demand speed, accuracy, and adaptive decision-making simultaneously. This conversation delivers a candid assessment of AI implementation in one of healthcare's most challenging environments. Trust gaps between emergency physicians and AI tools are not abstract concerns; they have direct consequences for patient outcomes.

    Emergency medicine environments reveal where AI systems lack contextual awareness and clinical nuance, making implementation failures visible immediately. Clinical expertise developed through years of emergency practice cannot be replicated by algorithms that lack the situational awareness experienced physicians develop.

    Topics covered: AI in emergency medicine implementation, clinical judgment vs. algorithmic recommendations, trust gaps in healthcare AI, emergency department workflows, digital health leadership in clinical settings, and the boundary between AI support and clinical authority.

    44 mins
  • Enterprise AI Journey: Agentic AI, Generative AI & Data Foundations in Healthcare | Gary Cao
    Mar 4 2026

    Join Gary Cao to explore AI strategy for healthcare enterprises, from data foundations to AI transformation leadership. Learn strategic leadership essentials for AI implementation.

    The most successful healthcare organizations approach AI as a multi-year journey with distinct phases, each building on previous work. This conversation with Gary Cao maps that full arc from data foundations through analytics maturity to generative and agentic AI, exploring how healthcare organizations build capabilities that compound over time rather than chasing isolated wins.

    The enterprise AI journey is fundamentally sequential, and healthcare organizations that skip foundational steps pay for it later when advanced initiatives fail. Data foundations are not a phase you complete and move past; they are a continuously strengthened capability that enables everything that comes after.

    Topics covered: enterprise AI maturity progression, data foundation requirements, analytics capability building, the role of governance in enabling AI, generative AI adoption in healthcare, agentic AI applications and limitations, and why short-term thinking about AI leads to expensive failures.

    42 mins
  • AI Strategy to Execution & Ethical Leadership | Brian Sutherland
    Feb 25 2026

    Discover the path from AI strategy to execution, focusing on trust, ethical leadership, and operational realities. Brian Sutherland shares key leadership development insights in healthcare AI.

    Many healthcare AI strategies fail because they never translate into operational reality. This conversation with Brian Sutherland addresses that gap by examining why so many AI strategies don't survive first contact with actual healthcare organizations. Trust and leadership alignment determine whether AI initiatives move from boardroom vision to bedside impact.

    The gap between AI strategy and execution in healthcare is rarely a technology problem; it is almost always a leadership alignment problem. Trust is the currency that determines whether AI initiatives survive the transition from vision to reality. Success requires understanding that organizational transformation is fundamentally about behavior change, not technology deployment.

    Topics covered: AI strategy to execution translation, healthcare leadership alignment, trust dynamics in organizational change, operational realities of AI implementation, and the leadership behaviors that determine success or failure.

    41 mins
  • Why AI Verification is the Real Bottleneck in Pharmaceutical Drug Discovery | David Finkelshteyn
    Feb 18 2026

    Explore AI verification bottlenecks and leadership challenges in pharmaceutical drug discovery with David Finkelshteyn. Key topics include healthcare AI governance and ethical AI in pharma.

    AI can dramatically accelerate pharmaceutical research pipelines, yet the verification and validation steps remain the true bottleneck constraining how quickly drug discovery actually progresses. This conversation with David Finkelshteyn focuses on a constraint most AI discussions ignore: the rigorous verification required when patient safety depends on getting AI right.

    AI acceleration in pharmaceutical research only matters if verification processes can keep pace and maintain scientific credibility. Verification is where the credibility of AI-driven research is established, not where it is threatened. Responsible AI adoption in pharma requires treating verification and validation as core competencies, not as compliance checkboxes.

    Topics covered: AI verification science in pharmaceutical research, drug discovery acceleration, regulatory frameworks for AI in pharma, validation methodologies, and how verification requirements shape AI strategy in drug discovery.

    38 mins
  • No Alerts, Still Breached: Understanding Cybersecurity Risks and Ethical Leadership in Healthcare AI
    Feb 11 2026

    This episode explores ethical leadership and AI governance challenges in healthcare cybersecurity, emphasizing the risks of undetected breaches.

    In this episode of The Signal Room, Chris Hutchins speaks with Guman Chauhan, a cybersecurity and risk leader, about one of the most dangerous conditions in modern organizations: being breached and not knowing it. While dashboards stay green and alerts stay quiet, attackers increasingly operate using valid credentials, normal behavior patterns, and long dwell times—remaining invisible for weeks or months.

    Guman explains why “no alerts” is often mistaken for “no breach,” and why silence is one of the most misleading signals in cybersecurity. The conversation unpacks how attackers deliberately avoid detection, why security tools alone do not equal security outcomes, and where organizations create blind spots through untested assumptions, alert fatigue, and fragmented processes.

    They explore why undetected breaches are more damaging than known ones, how time compounds risk once attackers are inside, and what separates organizations that mature after incidents from those that repeat the same failures. Guman emphasizes that proven security is not built on policies, certifications, or dashboards—but on continuous testing, validated detection, and teams that know how to act under pressure.

    This episode is a practical guide for executives, security leaders, healthcare organizations, and regulated enterprises that need to move from assumed security to proven breach readiness.

    Guest: Guman Chauhan
    LinkedIn: https://www.linkedin.com/in/guman-chauhan-m-s-cissp-cism-600824103/

    Topics Covered

    • Why undetected breaches are more dangerous than known breaches
    • How attackers use valid credentials to avoid detection
    • Why “no alerts” does not mean “no breach”
    • Alert fatigue and the signal-to-noise problem
    • Security tools vs security outcomes
    • Visibility gaps, unknown assets, and logging failures
    • External penetration testing and real-world validation
    • Cultural and leadership factors in breach response
    • Assumed security vs proven security

    Key Takeaways

    • Silence is not security; it often means you are not seeing the right signals.
    • Most breaches go undetected because attackers behave like legitimate users.
    • Security tools do not fail—untested assumptions do.
    • Alert fatigue hides real risk by normalizing noise.
    • Proven security requires testing detection and response end to end.
    • Mature organizations treat breaches as learning moments, not events to hide.
    • Confidence without validation creates the most dangerous blind spots.

    Chapters / Timestamps

    00:00 – Why undetected breaches are the real risk
    02:30 – Being breached vs being breached and not knowing
    06:00 – How attackers stay invisible using valid credentials
    08:30 – Why dashboards and alerts create false confidence
    10:00 – Common reasons breaches go undetected for months
    13:30 – Security tools vs security outcomes
    16:00 – Technology, process, and people failures
    19:30 – Alert fatigue and finding real signals
    22:30 – Why external penetration testing still matters
    26:30 – What mature organizations do after a breach
    31:00 – One action to improve breach readiness this year
    32:45 – The uncomfortable question every leader should ask
    34:30 – Assumed security vs proven security
    36:30 – How to connect with Guman & closing

    34 mins