HOUSE FINANCIAL SERVICES COMMITTEE
Hearing on AI in Financial Services
For questions on the note below, please contact the Delta Strategy Group team.
On December 10, the House Financial Services Committee held a hearing entitled “From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services.” Witnesses in the hearing were:
- Jeanette Manfra, Vice President and Global Head of Risk & Compliance, Google Cloud
- Tal Cohen, President, Nasdaq
- Nicholas Stevens, Vice President of Product, Artificial Intelligence, Zillow
- Wendi Whitmore, Chief Security Intelligence Officer, Palo Alto Networks
- Joshua Branch, Big Tech Accountability Advocate, Public Citizen
Below is a summary of the hearing prepared by Delta Strategy Group. It includes several high-level takeaways from opening statements and discussion.
Key Takeaways
- Chairman Hill (R-AR) commended the work of the bipartisan Congressional AI Task Force and the Committee’s AI Working Group in examining how financial services firms and regulators are approaching AI and in analyzing its benefits and risks in a highly regulated environment. He discussed the role of agencies in providing a clear regulatory environment, assessing where existing laws may fall short or require modernization, and exploring ways to foster innovation.
- Chairman Hill referenced the adaptive approach taken in the 1990s by former Securities and Exchange Commission (SEC) Chairman Cox in response to the Internet’s commercialization, as well as bipartisan support for regulatory sandboxes for new technologies, such as blockchain. He called for applying past lessons in creating a framework for the current technological landscape. He outlined the need to identify gaps and obstacles in existing regulations so that AI innovation can flourish without unnecessary barriers, while ensuring robust consumer protections under a risk-based, technology-neutral regulatory approach.
- Chairman Hill questioned how the AI Action Plan aligns with industry best practices and recent advancements in AI, asking what can be learned from market participants to close the AI adoption gap and promote responsible use. Manfra commended the administration’s approach to using AI and other technologies to improve service while ensuring responsible use with appropriate safeguards. She noted the increased adoption of AI in the financial services sector, with one prominent use being to assist key compliance and risk functions in detecting financial crimes, including fraud, money laundering, and trade manipulation. She stated that as the use of AI models grows, so do questions about managing the risks associated with those models, highlighting existing model risk management (MRM) guidance. She discussed how regulators, financial institutions, and technology service providers have been examining whether existing MRM guidance, as the traditional regulatory regime for managing model risk in the financial services industry, remains relevant for AI models and, if so, how it should be interpreted and applied to AI.
- Cohen outlined four core principles around AI: 1) Leverage existing regulatory frameworks, citing how the financial sector is already heavily supervised and how many AI-related risks can be addressed with existing rules and mandates; 2) Focus on use cases and outcomes, stating that different applications carry varying risk profiles and therefore require different regulatory treatment; 3) Keep frameworks flexible and innovation-friendly, noting that overly prescriptive rules age poorly and risk driving innovation overseas; and 4) Strive for harmonization, with federal coordination as the best way to protect investors while preserving U.S. competitiveness.
- Chairman Hill referenced how Nasdaq was one of the first exchanges to release an AI-powered order type, asking about the efficiency gains achieved and what lessons from that transition could inform other AI adoption efforts. He noted that the order type was designed to allow large institutions to execute in markets with confidence and achieve the execution quality needed to serve retail customers. Cohen explained that Nasdaq’s approach to AI focuses on enhancing liquidity, ensuring transparency, and protecting market integrity within its efforts to manage risk, protect markets, and comply with regulations. He emphasized that AI-specific regulation should be consistent and harmonized, avoiding gaps, overlaps, or inconsistencies among regulators, jurisdictions, and sectors, and should promote coordination and cooperation among regulators, the industry, and the international community. He called on Congress to consider appropriate actions to avoid a patchwork of state laws governing AI, which would risk stifling innovation, increasing expense, reducing the availability of AI tools, and harming U.S. competitiveness.
- Representative Barr (R-KY) asked how AI sandboxes would enable regulators and market participants to responsibly experiment with AI and learn best practices. Cohen stated that sandboxes must be controlled, targeted, and time-boxed, and that they cannot be used to circumvent existing requirements. He added that, when also kept transparent, sandboxes have yielded positive results and been critical to innovation.
- Representative Wagner (R-MO) questioned whether existing technology-neutral regulations provide sufficient guardrails or whether new AI-specific updates and clarifications must be implemented without stifling innovation. Cohen responded that existing securities laws and industry practices, such as National Market System (NMS) requirements and Regulation Systems Compliance and Integrity (SCI), provide a strong foundation. He stated that information sharing and public-private partnerships are critical as AI advances in order to identify potential gaps.
- Chairman Hill cited cyber risks as a significant challenge for both the private and public sectors, questioning how such risks should be countered and how to address bad actors using AI. Whitmore discussed how AI-driven cybersecurity is essential to protecting privacy, strengthening national security, and safeguarding digital operations. She warned that the riskiest outcome would be failing to meaningfully leverage AI, citing how continuous discovery and analysis enables preemptive threat detection. She added that disrupting the cybersecurity industry’s status quo will be critical to combating current threats as well as emerging risks, such as encryption-breaking quantum computing.
- In response to Representative Lucas (R-OK), who serves as Chairman of the Task Force on Monetary Policy and Treasury Market Structure, Whitmore outlined two main areas of the AI-related threat landscape. She discussed how attackers are using AI to fuel traditional cybercrime at greater speed and scale, but also how attackers are targeting AI itself, creating the ability to turn agents inside organizational environments into rogue insiders that cannot be trusted. In response to those factors, she stated that security must be closely coupled with AI innovation to protect the build, run, and access layers. She highlighted that, particularly at runtime, protections must ensure that agents cannot go rogue, providing the safeguards organizations need to innovate successfully without that innovation being hijacked.
- Representative Lynch (D-MA), who serves as Ranking Member of the Subcommittee on Digital Assets, Financial Technology, and Artificial Intelligence, voiced concerns that several of the regulatory proposals fail to include adequate guardrails and invite financial services firms adopting AI to pick and choose among, or avoid, consumer protection, investor protection, and safety and soundness regulations. He called for developing bipartisan AI legislation that promotes innovation while ensuring robust consumer and financial protections.
- Representative Casten (D-IL) raised concerns about liability shields and state preemption for digital tools that, while not designed to commit fraud, may still facilitate fraudulent activity. He questioned how market operators can address tools that become optimized for fraud and what regulatory reforms would be needed, given that securities laws require intent to defraud. Cohen responded that federal preemption is intended to reduce regulatory complexity rather than avoid accountability. He emphasized that minimizing the patchwork of inconsistent state requirements is key to allowing exchanges, their members, and the broader industry to operate effectively under a clear regulatory framework.
