Security to Model: Securing Artificial Intelligence to Strengthen Cybersecurity

Cybersecurity and Infrastructure Protection

2025-06-12


Source: Congress.gov

Summary

This meeting of the Committee on Homeland Security's Subcommittee on Cybersecurity and Infrastructure Protection focused on the intricate relationship between artificial intelligence (AI) and cybersecurity, exploring strategies to secure AI systems, mitigate risks posed by adversaries' use of AI tools, and harness AI's potential for cyber defense [ 00:19:21-00:19:42 ]. The discussion highlighted both the transformative opportunities AI presents for strengthening security and the significant challenges arising from its rapid evolution and weaponization by malicious actors [ 00:19:57-00:19:59 ].

Themes

AI's Dual Impact: A Powerful Tool and a Potent Threat

AI is a critical tool for enhancing cybersecurity, enabling teams to manage vulnerabilities, detect and analyze threats, track compliance, and automate incident responses, thereby alleviating the shortage of skilled professionals. It offers a "decisive advantage" in the cyber domain, with AI-led investigations demonstrating significant cost reductions and efficiency gains. AI can scale defenses, personalize responses to attacks, and improve security operations by automating tasks that humans find tedious, such as investigating incidents or looking for patterns. However, adversaries are weaponizing AI to scale and accelerate attacks, including social engineering, ransomware development, autonomous bots, deepfakes, and a nearly 1,200 percent increase in phishing since late 2022. AI models themselves are vulnerable to manipulation, which degrades their accuracy and performance, and their attack surface grows with increased capability, making robust security essential.

The Imperative of "Secure by Design" and Robust Governance

Building security into AI systems from the outset is crucial for national and economic security. This "Secure by Design" approach requires maintaining robust defenses and continuous oversight throughout an AI system's lifecycle. It also involves educating AI builders, providing incentives for secure practices, and disseminating secure AI development guidelines across the entire supply chain. While the "Secure by Design" framework is not new, its mandatory adoption is critical, especially with the surge in AI-generated code, which can introduce new vulnerabilities. Vulnerability disclosure for AI systems, mirroring initiatives like CISA's pledge, helps justify security investments and improve processes. Developers must avoid poor access controls and conduct adversarial testing to protect against data security incidents.

Addressing Workforce and Talent Gaps

A significant shortage of skilled cybersecurity professionals exists, exacerbated by a "brain drain" as foreign entities recruit top talent [ 00:20:04-00:20:11 ]. This lack of expertise in procuring and securing AI hinders its full potential [ 00:20:27-00:20:31 ]. AI products can help bridge this skills gap by acting as an "always-on teacher" and assisting with complex tasks, thereby bolstering cybersecurity education and workforce development. Continued investment in research and development and in universities is essential to attract and retain the best global talent, as there is currently a "huge gap" in AI engineers and security professionals. Introducing AI into education, focusing on critical thinking and problem-solving rather than just homework assistance, is also key for future generations.

The Crucial Role of Government and Policy

There is an urgent need for legislation and AI guardrails, as attackers are outpacing regulators in exploiting inconsistencies and gaps. A federal baseline with state partnerships, similar to PCI or HIPAA, is necessary to establish minimum standards while allowing for regional adaptation. Congress should consider policy recommendations such as facilitating workforce readiness, developing cyber deterrence strategies, and reforming IT acquisition models to meet the demands of the AI era. Government support can also involve providing clear guidelines and making technical processes for monitoring autonomous AI agents less manual. Access to public datasets for cyber defense research is vital for benchmarking, validation, and fostering the development of more efficient AI models [ 01:03:58-01:05:04 ] [ 01:05:23-01:05:45 ].

The Emergence and Challenges of Agentic AI

Agentic AI, a new type of AI that enables agents to make decisions, raises important questions about control and security. These autonomous AI agents introduce complex security risks, as compromised agents could conduct cyber operations at machine speed. Layered, in-depth, and proactive defenses are essential to counter this threat. Promising use cases for agentic AI include enhancing security operations by automating investigative steps and significantly improving capability [ 01:11:52-01:12:25 ]. However, the "surface area" of AI systems, particularly agentic ones, is often far more extensive and intricate than developers realize, necessitating AI-native solutions for comprehensive monitoring and security [ 01:24:25-01:25:05 ].

Tone of the Meeting

The tone of the meeting was serious and urgent. Speakers consistently emphasized the rapid evolution of AI, the growing sophistication of cyber threats, and the critical need for immediate action and clear guardrails to address the emerging challenges. Despite the gravity of the risks, there was an underlying optimistic yet concerned perspective. The participants expressed strong belief in AI's potential to bolster cyber defense and foster innovation, while remaining clear-eyed about the dangers of weaponization and existing vulnerabilities. The discussions also conveyed a collaborative and forward-looking spirit, highlighting the importance of partnerships between government, academia, and industry to proactively anticipate and prepare for future threats.


Transcript

The Committee on Homeland Security's Subcommittee on Cybersecurity and Infrastructure Protection will come to order. Without objection, the chair may declare the committee in recess at any point. The purpose of this hearing is to examine the nexus between artificial intelligence, or AI, and cybersecurity. We'll examine how AI systems can be secured, the risk to our cybersecurity posed by adversaries who use AI tools, and the promise AI holds for our cyber defense. And I recognize myself for an opening statement.

In late 2022, generative artificial intelligence entered the public sphere and became a viral sensation. Today, generative AI has evolved into a useful tool for cyber criminals, nation-state cyber actors, and cyber defense teams across the globe. In just a few short years, AI has evolved so quickly that we are now discussing a new type of AI: agentic AI. The use of agents raises important questions about how much decision-making control AI should have and how these tools can be secured.

Innovative American cybersecurity companies have developed cutting-edge tools to integrate AI into cyber defense across the public and private sectors. AI is upskilling cybersecurity teams' abilities to manage vulnerabilities, detect and analyze threats, track regulatory compliance, and automate responses to security incidents. These new capabilities can help reduce cybersecurity teams' workload and produce better cybersecurity outcomes.

This is especially crucial considering our nation's significant shortage of skilled cybersecurity professionals. However, we must be clear-eyed. While AI bolsters our productivity and security, our adversaries also hope to use technology for their own gain. Our nation's adversaries increasingly weaponize AI to scale and more quickly develop attacks against American citizens, businesses, and government entities. Additionally, phishing attacks have increased nearly 1,200 percent since the rise of generative AI in late 2022.
Beyond cyberattacks, malicious actors exploit AI models for nefarious purposes, such as conducting AI-assisted social engineering attacks, developing ransomware, and creating autonomous attack bots.
The gentleman yields back. Other members of the committee are reminded that opening statements may be submitted for the record. I'm pleased to have a distinguished panel of witnesses before us today. I ask that our witnesses stand and please raise their right hand.

Do you solemnly swear that the testimony you will give before the Committee on Homeland Security of the United States House of Representatives will be the truth, the whole truth, and nothing but the truth, so help you God? Let the record reflect that the witnesses have answered in the affirmative. Thank you, please be seated. I would now like to formally introduce our witnesses.

Mr. Kiran Chinnagongan Nagari, sorry about that, is co-founder and chief product and technology officer at Securin. He oversees the development of software-as-a-service-based cybersecurity products that leverage AI and machine learning to increase vulnerability intelligence, attack surface management, and threat prioritization. He previously served as chief technology officer and assistant director for the state of Arizona. Steve Fail serves as the U.S. government leader for Microsoft, where he is responsible for driving cybersecurity collaboration with federal government agencies to accelerate security modernization and increase national resilience. He has over 20 years of cybersecurity experience in roles across the public sector. Mr. Gareth McLaughlin is the Chief Product Officer for Trellix, where he is responsible for product development, innovation, and intelligence. Prior to his role at Trellix, he was responsible for product management and go-to-market for Mandiant Managed Defense, Consulting, and Threat Intelligence. Mr. Jonathan Danbrot,
