Artificial Intelligence and Criminal Exploitation: A New Era of Risk

Crime

2025-07-16

Source: Congress.gov

Transcript

The chair is authorized to declare recess at any time. We welcome everyone to today's hearing on artificial intelligence and criminal exploitation. I now recognize the gentlewoman from Florida, Ms. Lee, to lead us in the Pledge of Allegiance. Thank you. I now recognize myself for an opening statement. I appreciate everyone being here today, our witnesses and those in the audience. This is an important hearing, which focuses on artificial intelligence and how it is being exploited by criminals. The conceptual roots of AI can be traced to British mathematician Alan Turing, who in the 1930s theorized about machines being capable of performing any computable task. Today, AI is best understood as a branch of computer science that leverages large-scale data processing, algorithmic modeling, and modern hardware to enable machines to perform tasks typically requiring human cognition. Unfortunately, as with most technological innovations, the criminal element has begun to use AI to enhance its illicit activities. AI-enabled threats continue to evolve as bad actors use AI technology in a wide spectrum of criminal enterprises. From deepfake scams and synthetic identity fraud to financial crimes and child sexual abuse material, the landscape continues to evolve at a rapid pace as AI provides users with enhanced capabilities. AI-based or AI-driven threats and schemes can cost businesses millions of dollars a year, both in preventing them and in falling prey to them. In one case, fraudsters used AI to clone a CEO's voice and authorize a wire transfer. Among corporations that experienced a rise in deepfake incidents, 75% of deepfakes impersonated a CEO or another C-suite executive.
Generative AI enables the criminal exploitation of victims' emotional vulnerabilities through tactics such as sextortion, pig butchering scams, phishing, and elder fraud. Senior citizens are increasingly targeted through voice phishing scams in which an AI-generated replica of a grandchild or military officer claims to need urgent funds. In one case, a Colorado mother received a call from what sounded like her daughter pleading for help. The voice was AI-generated, cloned from a short online clip, and used to demand ransom. The voice was indiscernible from her daughter's to the mother, who wired $1,000 to scammers in Mexico. AI is also fueling a rise in sextortion and synthetic CSAM. New AI tools can generate highly realistic but entirely fabricated explicit images, often used to extort minors or damage reputations. Some sextortion scams exploit the trust associated with platforms like Apple's iMessage by impersonating classmates or romantic interests via the recognizable blue-bubble interface. Criminals now deploy apps like MUA to fabricate child abuse images at scale. Stanford University researchers have uncovered evidence that generative models were trained on real exploitative content. Terrorist groups now utilize AI to target, recruit, and indoctrinate vulnerable individuals. Generative AI provides a degree of separation, allowing actual terrorists to maintain anonymity in their public-facing recruiting practices. Generative AI also allows terrorists to produce propaganda, fake news stories, and emotionally resonant messages tailored to specific psychological profiles. Reports of generative-AI-enabled scams rose by 456% between May 2024 and May 2025. Exploitative generative AI can produce human-like text, code, images, and videos, allowing criminals to use the technology for further criminal activity, such as creating more realistic phishing lures or generating deepfakes for extortion.
On average, phishing attacks cost $4.9 million per breach. On the other hand, AI is increasingly integrated into police investigations, offering new tools and capabilities for law enforcement agencies amid the backdrop of rapidly expanding digital data sources and increasing demands on those agencies. AI provides a more adaptable and comprehensive approach to solving crimes compared to traditional methods, leveraging data analytics, machine learning, and pattern recognition to enhance investigations and assist with administrative tasks. This also presents a potential problem, as we seek to balance curbing AI with protecting our civil rights. AI can help process large volumes of data, identify patterns, and generate actionable insights. In turn, these applications can improve efficiency, accuracy, and resource allocation in investigative processes. However, to fully benefit from AI applications, law enforcement entities need reliable data and human oversight, while also tackling issues related to privacy, bias, and ethical considerations. Addressing the continued misuse of AI will require a varied approach, along with raising public awareness about the risks associated with AI-generated content. Law enforcement agencies must engage openly with community stakeholders, legal experts, and the public to communicate the intended uses, benefits, and limitations of AI technologies. A collaborative effort to prevent the misuse of AI while encouraging its lawful application is required to effectively navigate this evolving landscape. I am excited for today's hearing. I think this is the first of its kind, and I believe it will be only the first of many as we consider AI and its continued expansion of influence in our lives. And I'm looking forward to the very substantive discussion that I anticipate we're going to have today. And with that, I yield back and recognize now our ranking member, Ms.