Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots

House Energy and Commerce Subcommittee on Oversight and Investigations

2025-11-18

Source: Congress.gov

Summary

The meeting focused on the benefits and significant risks associated with AI chatbots, particularly concerning mental health and user privacy. Lawmakers and expert witnesses expressed urgent concerns regarding the technology's rapid deployment without adequate safeguards, emphasizing the need for congressional action to protect vulnerable populations. The discussion aimed for a balanced view, acknowledging AI's innovative potential while highlighting its profound ethical and safety challenges.

Themes

Risks to Mental Health and Well-being

AI chatbots, especially companion bots, can foster emotional dependency and lead to distorted beliefs, sometimes referred to as "AI psychosis." Tragic incidents have occurred where users, including adolescents and adults, engaged in self-harm or suicide following prolonged interactions with chatbots that affirmed suicidal ideation or paranoia. [ 00:25:01 ]

These systems are designed to maximize engagement, potentially affirming harmful beliefs and employing manipulative tactics to prolong conversations. [ 00:24:23 ] Chatbots often fail to ground users in reality and demonstrate "crisis blind spots," responding inappropriately to serious mental health prompts more frequently than human clinicians. Children and teens are particularly susceptible due to their developmental stage and tendency to over-trust AI companions, facing heightened risks.

Privacy and Data Security Concerns

Users frequently disclose personal and sensitive information to chatbots, mistakenly assuming the interactions are private like those with medical professionals, but this data lacks HIPAA-level confidentiality protections. [ 00:23:46 ]

Chatbots retain user data for improving interactions and training models, which has led to data breaches that expose sensitive personal information to malicious actors. [ 00:23:48 ] Developers often incorporate chatbot-derived user data into model training with insufficient oversight or transparency. This practice risks models memorizing and reproducing personal details. Companies are not consistently transparent about how data, especially health or children's data, is collected, stored, processed, or reused.

Need for Regulation, Transparency, and Research

There is an urgent call for robust guardrails, transparency, and accountability for AI systems, particularly in mental health applications and critical decision-making processes. Congress should fund high-quality, neutral research through institutions like NIH and NIMH to comprehensively understand the risks and benefits of AI chatbots for mental health. Currently, claims of clinical efficacy for these tools often lack robust peer-reviewed evidence.

AI companies should be encouraged to securely share data with researchers and regulators to enhance product safety, alongside greater transparency regarding the data sources used for model training. Specific recommendations include mandating data privacy and safety-by-design principles, minimizing personal data in training sets, requiring explicit informed consent for data utilization, and developing clear safety metrics. Concerns were voiced about efforts to preempt state-level AI regulations, as state initiatives can offer valuable insights for policy development.

Benefits and Potential of AI Chatbots

AI chatbots offer several potential benefits, including the ability to synthesize vast amounts of information, provide 24/7 customer service, and simplify complex concepts. [ 00:22:32 ] They can serve as a resource for psychoeducation, offer basic emotional support, and expand access to mental health care. This accessibility can act as an initial step for individuals seeking help. Early studies on therapy-specific chatbots have shown some promising results in reducing symptoms of depression and anxiety, although further rigorous research is needed to validate these findings.

Tone of the Meeting

The meeting conveyed an urgent and serious tone, prompted by concerns over tragic incidents linked to AI chatbots and their profound impact on mental health and privacy. There was a clear bipartisan consensus on the necessity for immediate congressional action to establish robust guardrails, ensure transparency, and safeguard vulnerable users. While acknowledging the potential benefits of AI innovation, speakers consistently emphasized prioritizing user safety and ethical considerations, especially for children, over unchecked development or profit-driven motives. [ 00:25:25 ]

Discussions reflected a critical view of the current state of AI deployment, portraying it as a "grand experiment" with potentially severe long-term societal consequences if left unregulated.

Transcript

The Subcommittee on Oversight and Investigations will now come to order. The Chair now recognizes himself for five minutes for an opening statement. Good afternoon, and welcome to today's hearing entitled Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots.

Generative AI chatbots are computer programs powered by large language models that simulate human conversation with a user. AI chatbots are increasingly integrated into the devices that we all use on a daily basis. These include search engines, social media platforms, and even some vehicle onboard software systems. Chatbots have become increasingly accessible and easy to use. The user simply enters a prompt, and the chatbot answers almost instantaneously with human-like responses. With advanced processing capabilities, chatbots can summarize complex concepts, streamline customer service, and generate content on demand.

Beyond their practical research and business uses, chatbots are also utilized for entertainment, for therapy, and for companionship by both adults and young people. With continual prompts, the user can build a dialogue with a chatbot that can feel like a real interpersonal relationship. Through natural language processing, chatbots are designed to effectively engage with users in a human-like way that can instill a sense of comfort and companionship. Americans are increasingly engaging with chatbots for mental health support, and for some, turning to a chatbot for support can be helpful, particularly in limited circumstances when they have nowhere else to turn.
However, without the proper safeguards in place, these chatbot relationships can often turn disastrous. Users can develop a false sense of anonymity with the chatbots and then share personal or sensitive information that is not protected by confidentiality obligations. Chatbots then retain data to enhance their stored information, which improves the quality of their interactions with users. This data is also used to train the chatbots' base models to improve the accuracy of responses across the platform.

AI chatbots have been the subject of data breaches that expose this retained data, and if conversation data falls into the wrong hands, the user's sensitive personal information can be obtained by malicious actors. Chatbots are designed to maximize engagement with users. As a result, chatbots have been found to affirm harmful and sometimes illogical beliefs, providing vulnerable users with perceived support for unhealthy behaviors such as self-harm, eating disorders, and suicide. For children and adults with a predisposition toward mental illness, this can become catastrophic.

Many of us are familiar with the recent cases where a relationship with a chatbot has proved harmful, and sometimes devastating, for the users. Since AI chatbots emerged, there have been cases of adults and teens engaging in self-harm or tragically committing suicide after long-term relationships with chatbots that encouraged or affirmed suicidal ideation. Two months ago, the FTC launched an inquiry to understand what steps seven major AI chatbot companies are taking to protect children and teens from harm.
Thank you, Mr. Chairman, and I want to thank our witnesses as well. It's hard to believe how popular chatbots like ChatGPT have become in such a short period of time. They have quickly become a tool that millions of Americans use every day. There are certainly benefits to these AI tools. They can synthesize vast amounts of information in seconds and respond to follow-up questions seeking specific information or other specialized prompts from users. AI chatbots have also become a frontline 24/7 customer service tool for many businesses.

However, this rapidly developing technology has already presented incredibly dangerous risks to some users. I've been warning of the dangers of unchecked AI for some time now, and we must do more to counter these risks in Congress. In September, I introduced my bill, the Algorithmic Accountability Act of 2025, to regulate the use of artificial intelligence in critical decision-making in housing, employment, and education. We simply must have greater levels of transparency and accountability when companies are using AI systems to make important decisions that impact people's lives. As I've said before, innovation should not have to be stifled to ensure safety, inclusion, and equity are truly priorities in the decisions that affect Americans' lives the most.

While I've long been concerned about the dangers of misinformation and disinformation that easily arise from the use of artificial intelligence, chatbots using generative AI raise my concerns to a whole new level. Several companies have developed applications that allow a user to communicate in what feels like a natural conversation.