Having an AI Therapist Could Be Risky



Vidcast: https://www.instagram.com/p/DW4d9BGjCEq/


Millions are turning to AI chatbots for therapy-style advice. New research says these systems may break basic mental-health ethics rules.


Computer scientists and clinical psychologists at Brown University tested AI chatbots prompted to act as therapists and identified 15 distinct ethical risks. In simulated counseling sessions, the systems sometimes reinforced harmful beliefs, mishandled crisis situations, and showed bias.


Here are the shortcomings. First and most important is what the researchers call “deceptive empathy.” Chatbots use phrases like “I understand” or “I see how you feel,” which sound supportive. The problem is that AI doesn’t actually understand emotions or context the way a human therapist does.


Second issue: accountability.

Human therapists must follow professional standards and can be disciplined for malpractice. AI systems currently have no comparable oversight.


On the positive side, the researchers note that AI could help expand access to mental-health support where therapists are scarce.


Bottom line: strong safeguards and regulations are needed before relying on chatbots for serious mental-health care.


References on my website.


#MentalHealth #AIethics #ChatGPT #PsychologyResearch #TechAndHealth

