• How Do You Teach Responsibility if Students Don't Care? - Lorin Koch
    Apr 30 2026

    In this episode, Priten speaks with Lorin Koch, an educator who has taught across high school, online, and college settings after starting his career in journalism. Koch brings perspective from multiple vantage points—as a classroom teacher navigating AI integration, an online instructor confronting assessment challenges, and a parent of soon-to-be teenagers. Together they explore what happens when students understand the difference between learning and shortcutting but choose the shortcut anyway, and whether responsibility can be taught when the barrier to taking the quick way out has never been lower.

    Key Takeaways:

    • Understanding responsibility is not the same as practicing it. Students conceptually grasp that using AI to do their work for them is wrong, but when faced with pressure to get things done, they often choose the shortcut anyway—suggesting that knowing what you should do doesn't guarantee you'll do it.
    • Self-paced, online environments create new accountability problems that have nothing to do with AI. The absence of in-person interaction makes it harder to detect cheating and easier to rationalize it, which means AI hasn't created the problem of student disengagement—it's simply made it more visible and more scalable.
    • Your teaching intuition about whether something is AI-generated will become less reliable. As students grow up reading AI-generated text, their own writing will be shaped by those patterns, making it harder for teachers to distinguish between authentic voice and AI assistance based on stylistic markers alone.
    • Presenting work through dialogue forces different stakes than submitting text alone. Requiring students to explain their thinking through presentations or discussion boards creates accountability that's harder to fake, even if the source material was AI-generated.
    • The gap between high-achieving and struggling students will likely widen because of how students think about time. Students with short-term vision—those thinking about the next 24 hours rather than long-term consequences—are the most vulnerable to AI shortcuts, and they're also the ones who need human attention most.

    Lorin Koch is an educator with 21 years of experience teaching high school and three years as a college instructor of education. He holds an Ed.D. from the University of South Carolina. Lorin currently teaches online and in person from Washington state, where he works at Walla Walla University. He also writes and presents on artificial intelligence in education, focusing on integrating generative AI into the classroom.

    31 mins
  • What If Our Pedagogical Goal Was Curiosity? - Mary Shawn Newins
    Apr 28 2026

    In this episode, Priten speaks with Mary Shawn Newins, a computer science teacher in Greensboro, North Carolina, who arrived in the classroom at sixty with decades of corporate and sales experience but no coding background. Her unusual arc gives her permission to build AI literacy alongside her students rather than ahead of them. What emerges is a classroom culture where curiosity itself—not mastery or fear—becomes the pedagogical goal. She uses practical structures like a "quack" incentive and peer questioning to shift how students see AI: not as a shortcut to avoid, but as a tool that works best when you know what you actually want to learn.

    Key Takeaways:

    • Curiosity as a pedagogical aim changes everything about how students use AI. When learning for its own sake is the standard—not grades or compliance—AI becomes a catalyst for deeper exploration rather than a means of dodging work. A student asking AI about birds of prey out of genuine interest learns far more than one copying homework.
    • Making AI use visible and gamified shifts students from hiding it to owning it. Mary's "quack quack" jar and peer accountability turn using AI into something worth discussing openly. Social transparency works where rules do not.
    • Three non-negotiable standards replace prohibition: name the tool, share the prompt, explain the output in your own words. This mirrors citation practices students already know. It's not about policing—it's about maintaining the chain between question, resource, and understanding.
    • "Strict phones, generous computers" reflects a deeper principle about attention and agency. Banning personal devices while enabling desktop computers creates a bounded space for learning. The boundary isn't about rejecting technology; it's about who controls the environment.
    • Late-career teachers bring a rare asset: they remember how knowledge worked before AI. Mary's corporate background means she can model learning alongside students without needing to be the expert first. That permission ripples through the classroom.

    Mary Shawn M. Newins is a Marketing and Computer Science educator at Southern Guilford High School in Greensboro, North Carolina. She has been a full-time faculty member since Spring 2023 and proudly serves as the school’s AI Champion, supporting innovative and responsible technology integration in the classroom. Mary holds a Bachelor of Science in Education from Bowling Green State University and is an Ambassador for the CodeMonkey High School curriculum, advocating for accessible and engaging computer science education for all students. Before transitioning into education, Mary spent 30 years in the business sector, working across business-to-business sales, retail, direct sales, and operations management. Outside the classroom, Mary is a wardrobe stylist at Chico’s Friendly Center, a denim upcycler, and a creative at heart who enjoys painting.

    32 mins
  • Are We Building AI Literacy or AI Dependence? - Alyssa Muhvic
    Apr 23 2026

    In this episode, Priten speaks with Alyssa Muhvic, a high school history teacher in Indiana navigating AI's reshaping of her classroom. With experience on her district's AI task force and deep expertise in both AI literacy and equity concerns, Alyssa demonstrates how educators can lead rather than resist technological change. She challenges the assumption that AI's presence signals either inevitable dependence or straightforward disruption, arguing instead that the work is fundamentally pedagogical: helping students develop the judgment to use these tools responsibly while still engaging with core historical thinking skills.

    Key Takeaways:

    • Treating AI as a search engine reframes citation, sourcing, and critical thinking as one unified practice. Students must learn to evaluate AI outputs with the same skepticism they'd apply to any source—examining bias, verifying claims, and contextualizing information. This makes digital literacy inseparable from historical literacy.
    • The equity issue isn't access; it's reliability and responsibility at different price tiers. Paid AI plans can produce output roughly 20% more accurate than free versions. When affluent students get more reliable tools, the learning gap widens. Teaching responsible use becomes a justice issue.
    • Academic dishonesty with AI reflects overwhelm, not moral failure. High-achieving students take risks in pursuit of perfection; struggling students disengage entirely. Neither group benefits from prohibition. Both need to understand why checking your work still matters.
    • Transparency about your own AI use gives students permission to use it thoughtfully. When teachers hide their tool use, students either view AI as forbidden or adopt it covertly. Showing your process—and its limits—normalizes critical engagement over concealment.
    • Districts need protected time, not more mandates, to equip teachers as active learners. Asking educators to master AI literacy while managing diploma rewrites, state standards shifts, and dual-credit pipelines is unsustainable. The bottleneck is time, not will.

    Alyssa Muhvic is a Social Studies Teacher at Noblesville High School in Indiana, where she has been shaping young minds since 2021. She teaches United States History, Pre-AP World History, and Indiana Studies, and was the driving force behind launching the school's Ethnic Studies course — designing and implementing the curriculum from the ground up. Alyssa earned her degree in General History and Secondary Social Studies Education, with a minor in African American Studies, from Ball State University in 2021.

    42 mins
  • How Should Special Education Approach AI? - Brian Merusi
    Apr 21 2026

    In this episode, Priten speaks with Brian Merusi, a special education teacher at Niles High School working with students aged 14–19 who have cognitive impairments. Brian brings two decades of international teaching experience in Abu Dhabi, Poland, and Penang, along with development work in rural contexts. The central tension: how do we unlock AI's potential for accessibility and student expression while protecting students from its ethical risks and exploitation?

    Key Takeaways:

    • Speech-to-text accessibility tools matter more to this population than ChatGPT ever will. For students with typing challenges and diverse communication styles, the ability to speak and have systems capture their thinking credibly is transformative in ways that generative AI is not.
    • Pandemic developmental delays hit hardest where social interaction was irreplaceable. Students with cognitive delays experienced compounding losses during remote learning—missing not just content but windows of social and executive development that cannot be fully recovered later.
    • Teachers are curators of development, not content deliverers. Brian frames his role as shepherding students toward independent learning and workforce readiness, making technology decisions based on what advances that mission, not on what's trendy.
    • AI's dual promise and peril is most acute for students with fewer safeguards against manipulation. The same tools that could help students with dyslexia access reading can also draw them into harmful spaces they wouldn't otherwise encounter—requiring active pedagogical intervention.
    • Educators need unified policy guidance, not individual teacher judgment calls on authenticity. Without district-wide clarity on what constitutes authentic work in an AI world, each teacher invents their own standard, creating inconsistency and confusion.

    Brian Merusi is a mission-driven educational leader and community developer who combines over four decades of diverse global experience with a passion for practical solutions. Deeply rooted in Special Education and Learning Support across the U.S., Malaysia, the UAE, and Poland, his career also encompasses executive roles as a biotech CEO and development leadership in the D.R. Congo and Uzbekistan. A specialist in technology integration, Brian currently leverages this unique cross-sector expertise to create accessible learning environments where technology opens doors for every student.

    24 mins
  • Can You Still Teach Critical Thinking? - Paul Blaschko
    Apr 16 2026

    In this episode, Priten speaks with Paul Blaschko, an assistant teaching professor of philosophy at the University of Notre Dame. Paul's work sits at the intersection of liberal education, critical thinking instruction, and course design. The central question driving their conversation: in an era of AI that can generate plausible-sounding arguments and explanations, can we still teach students to think critically—or must we fundamentally reimagine what critical thinking means?

    Key Takeaways:

    • EdTech should solve existing problems, not create new ones. Paul approaches technology as a tool only when he's already facing a pedagogical challenge. This shifts the question from "what can this tool do?" to "what does my classroom need?"
    • YouTube explainers preceded ChatGPT in reshaping how students research and learn. Long before AI, students were outsourcing understanding to video tutorials rather than wrestling with dense texts, revealing a deeper shift in how students approach knowledge.
    • Critical thinking instruction requires direct practice with real arguments, not shortcuts around difficulty. There's no substitute for students actually constructing and defending their own positions through dialogue and written work, even when AI can do it faster.
    • Scaling critical thinking instruction demands new infrastructure, not just new pedagogy. Paul and his team are testing whether platforms like Think Arguments can help instructors manage the feedback and iteration needed to teach reasoning at scale across institutions.
    • AI may not replace the professor's role so much as expand it into explicit curation and judgment. In a world where explanations are abundant, the teacher's value shifts toward deciding which frameworks matter and helping students evaluate competing arguments.

    Paul Blaschko is an assistant teaching professor at the University of Notre Dame. He teaches God and the Good Life, a course dedicated to asking the big questions about meaning, morality, and faith. He also serves as the Director of the Sheedy Family Program in Economy, Enterprise, and Society, a program devoted to exploring how the humanities can help us find meaning in work. With Meghan Sullivan, he has co-authored The Good Life Method (Penguin Press, 2022), a book about how philosophy can help us live better lives. He is currently working on a book on the philosophy of work (under contract with Princeton University Press), and is the co-founder of a Notre Dame-based tech start-up that aims to solve problems with dialogue on the internet.

    51 mins
  • What Is Age-Appropriate AI in Education? - Megan Barnes
    Apr 15 2026

    In this episode, Priten speaks with Megan Barnes, a PhD student in learning technologies at the University of North Texas and a K-12 librarian with 14 years of experience, about what age-appropriate AI in education actually means. Megan holds dual roles as library director and director of educational technology for early childhood through fourth grade in Dallas, and her research draws on cognitive and affective neuroscience to evaluate how emerging tools interact with child development. The conversation moves through the real-versus-synthetic distinction that young children struggle with, the attention economy driving AI product design, information literacy as a foundation for AI literacy, and why curiosity may be the most important thing educators need to protect.

    Key Takeaways:

    • Before children can use chatbots, they need a solid concept of real versus not real. Most kindergartners interact with AI through voice and animated characters, adding layers of anthropomorphization that make it nearly impossible for them to distinguish a computer from a person. Megan argues that chatbot-based AI is not developmentally appropriate at this age, and any exposure should be adult-controlled and side-by-side, consistent with American Academy of Pediatrics guidance on co-viewing media.
    • The attention economy is becoming a relational economy—and children are the target. The same design logic that removed page numbers from Google search results is now being applied to conversational AI. If a child builds five years of chat history with a platform before adulthood, that relationship becomes a powerful lock-in mechanism. Megan also raises the concern that chat histories are now being used to drive advertising, meaning the tools students use for learning are simultaneously selling to them.
    • AI literacy in elementary school means information literacy, not prompt engineering. Rather than teaching young students how to use AI tools directly, Megan focuses on helping them understand who generates information, who validates it, and where AI is already present in their daily lives. During morning announcements, she points out the background remover tool and tells students, "This is AI right here." The goal is building foundational skills for evaluating any new technology, not training on a specific product.
    • Every generation of creative technology triggers the same panic—and the pattern holds. Megan draws on her background as a violinist and recording arts student. When Apple's GarageBand launched during her final semester, her synthesizer professor declared it the downfall of music. Instead, it democratized creativity. More people creating doesn't mean everything produced is good, but the tool itself is not the threat. AI follows the same arc.
    • Curiosity doesn't need to be taught—it needs to be protected. Young children arrive with natural wonder intact. Megan distinguishes between formal classroom learning and the informal learning space of the library, where autonomy and exploration still drive engagement. The job of early education is not to instill curiosity but to give children frameworks for approaching new things with wonder while still thinking critically, so that instinct survives into adulthood.

    Megan E. Barnes is a librarian with over 14 years of experience, as well as a Ph.D. student in Learning Technologies at the University of North Texas. Her research focuses on ethical considerations in educational technology adoption and curriculum design. She is currently a research assistant developing curriculum for edge AI and is an ed-tech leader and library director at an independent school. She believes that librarians are information professionals uniquely suited to exploring the intersection of information, technology, and pedagogy.

    44 mins
  • Is AI Literacy the New Professional Credential? - Anna Zendell
    Apr 9 2026

    In this episode, Priten speaks with Anna Zendell, a social worker turned educator who oversees healthcare management, human services, and wellness programs at Bay Path University, about what it takes to rebuild a curriculum around AI when the stakes are patient outcomes. Zendell is currently piloting an AI-enhanced program from the ground up, designing courses where a closed AI system mentors students through interactive activities while faculty retain grading authority and instructional presence. The conversation covers why traditional learning outcomes don't translate cleanly into AI-driven instruction, how adult learners in healthcare face unique pressure to acquire AI literacy for careers that already demand it, and the trust gaps between students, faculty, and administrators that complicate adoption.

    Key Takeaways:

    • Curriculum doesn't absorb AI -- it has to be rebuilt for it. Zendell found that standard learning outcomes written with Bloom's Taxonomy are too broad for an AI system to use as mentoring scaffolds. Her team breaks each outcome into granular component steps, essentially teaching the AI how to guide a student the way an experienced instructor would.
    • AI is the first classroom technology to split faculty, students, and administration into opposing camps. Some faculty add zero-tolerance rubric rows while others experiment eagerly. Students range from uneasy to already reliant. Zendell describes a three-way perception gap she hasn't seen with any previous technology, including the transition to online learning.
    • Healthcare employers aren't waiting for higher ed to figure this out. Zendell regularly scans job postings for healthcare leadership roles and finds AI literacy and AI tool proficiency appearing with increasing frequency, particularly in informatics, clinical data analytics, and healthcare finance. Her students are asking for these skills and feeling the urgency themselves.
    • A student tester changed the entire design process. Zendell recruited an informatics student with an interest in healthcare AI to take each module as a learner before it goes live. That feedback loop -- where the student flags where prompts mislead or where the AI drifts into unproductive territory -- became central to how the team iterates on course design.
    • The real danger isn't AI itself -- it's losing the habit of questioning it. Zendell's deepest concern is dependency: that convenience erodes the capacity to critically evaluate AI output. In healthcare especially, where students might default to ChatGPT instead of dedicated clinical interfaces, the gap between accessible and appropriate matters.

    Anna Zendell directs the MS in Healthcare Administration program at Bay Path University. For over a decade, she has directed degree programs in healthcare administration, health sciences, and public administration, and she teaches regularly at the graduate and undergraduate levels. A major emphasis of her work is ensuring equitable and accessible higher education for students of all abilities by leveraging the power of online learning and the unique attributes that adult learners bring.

    Prior to her academic administration and teaching work, Anna oversaw operations and evaluations for grant-funded research projects focusing on issues such as walkable communities, community health education, and dementia interventions. She developed enduring interdisciplinary partnerships with organizations, local governments, and community members. She provided professional development and continuing education for healthcare professionals. Key focus areas in Anna’s work include fostering meaningful inclusion in workplaces and communities and addressing health disparities, particularly around chronic illness and health promotion.

    Anna earned her doctorate and master’s degrees in social work at the University at Albany with a focus on management and community systems.

    28 mins
  • What's the Line Between Research Integrity and Using AI as a Tool? - Kari Weaver
    Apr 7 2026

    In this episode, Priten speaks with Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries (OCUL), about why existing tools like citation and methodology sections can't capture how AI is actually being used in research and learning -- and what a structured disclosure standard might look like instead. Weaver, who also teaches graduate students at the University of Toronto and created the AID Framework for AI disclosure, walks through the practical and philosophical challenges of building trust infrastructure for an ecosystem that doesn't have bright lines yet. The conversation covers disciplinary divides in how AI use is understood, the global effort to establish a disclosure standard, and why the authorship question remains genuinely unresolved.

    Key Takeaways:

    • Citation can't bridge the gap between AI-generated ideas and their sources. Traditional citation connects ideas to a discrete, traceable origin. AI severs that connection by synthesizing across sources in ways that can't be pinpointed. Weaver notes this is structurally similar to what Western scholarship has long done to traditional and lived knowledge -- and now researchers are experiencing that same disconnection applied to their own work.
    • A global AI disclosure standard is actively being built. Weaver is co-leading a large-scale effort with the European Network of Research Integrity Offices, the International Science Council, and the Committee on Publication Ethics to develop a consistent disclosure framework through the World Conferences on Research Integrity. The goal is to stop researchers from having to tailor disclosures to each journal's idiosyncratic requirements.
    • AI use in research often falls outside methodology entirely. A researcher translating articles from an unfamiliar language using AI is a real and beneficial use case, but it doesn't fit neatly into a methods section. These peripheral uses still shape how researchers interact with and think about their material, which is exactly why disclosure needs to be broader than methodological reporting.
    • Separating the disclosure from the assignment makes students more likely to do it. At the undergraduate level, voluntary disclosure is hard to get. Weaver recommends having students submit a disclosure rubric alongside their assignment in a separate dropbox. This treats disclosure as a professional skill worth practicing on its own, and it gives instructors a reference point if questions arise about how an assignment was produced.
    • Authorship will likely settle at the disciplinary level, not the universal one. Weaver is candid that she doesn't have an answer to the authorship question. In qualitative research, she sees coding as irreplaceable human work. In STEM fields, AI-assisted analysis may be more readily accepted. She expects discourse communities will develop their own standards -- but that shouldn't delay building consistent disclosure practices across all of them.

    Kari D. Weaver (she/her) holds a B.A. from Indiana University, an M.L.I.S. from the University of Rhode Island, and an Ed.D. in Curriculum and Instruction from the University of South Carolina, where her dissertation examined the impact of professional development interventions on academic librarian teaching self-efficacy. She is the Program Manager, Artificial Intelligence and Machine Learning, with the Ontario Council of University Libraries, on secondment from her permanent role as the Learning, Teaching, and Instructional Design Librarian at the University of Waterloo. Additionally, Dr. Weaver is a continuing sessional faculty member in the Department of Leadership, Higher, and Adult Education at the Ontario Institute for Studies in Education (OISE) at the University of Toronto. Her wide-ranging research background includes studies of accessibility in online learning, information literacy, academic integrity, and misinformation. She is widely recognized as an expert in AI citation, attribution, and disclosure practices for her development of the Artificial Intelligence Disclosure (AID) Framework and is currently the co-lead of the 2026 World Conferences on Research Integrity Focus Track: Toward a Global Reporting Standard for AI Disclosure in Research.

    38 mins