HOUSE FINANCIAL SERVICES COMMITTEE
Subcommittee Hearing
For questions on the note below, please contact the Delta Strategy Group team.
On September 18, the House Financial Services Committee Subcommittee on Digital Assets, Financial Technology, and Artificial Intelligence held a hearing entitled “Unlocking the Next Generation of AI in the U.S. Financial System for Consumers, Businesses, and Competitiveness.” Witnesses in the hearing were:
- Dr. David Cox, Vice President, AI Models, IBM; Director, MIT-IBM Watson AI Lab
- Dr. Christian Lau, Co-Founder and Chief Product Officer, Dynamo AI
- Matthew Reisman, Director, Privacy and Data Policy, Center for Information Policy Leadership
- Daniel Gorfine, Founder & CEO, Gattaca Horizons; Former Chief Innovation Officer & Director of LabCFTC
- Dr. Nicol Turner Lee, Senior Fellow and Director, Center for Technology Innovation, Brookings Institution
Below is a summary of the hearing prepared by Delta Strategy Group, which includes several high-level takeaways from opening statements and discussion.
Key Takeaways
- Committee Chairman Hill (R-AR) outlined how artificial intelligence (AI) has the potential to boost efficiency, cut costs, and strengthen tools, from fraud detection to anti-money laundering, but as with any innovation, there are risks. He emphasized that AI systems have to be trustworthy, fair, and secure as Congress begins examining how AI, particularly the emerging capabilities of generative AI, can reshape the financial system.
- Subcommittee Chairman Steil (R-WI) highlighted how financial institutions have been at the forefront of developing and deploying AI technology, and that the rise of generative AI could bring new efficiencies and opportunities alongside potential risks to financial markets, as he outlined efforts to assess whether the current regulatory framework is prepared to keep pace. He highlighted how the U.S. has long been a hub for financial innovation and that Congress must ensure policies support responsible AI adoption, not stifle it.
- Subcommittee Chairman Steil reiterated that it is essential for regulations to strike the right balance between fostering innovation and ensuring investor protection and market integrity. He discussed how financial regulators must be cognizant of AI technology and its uses, and how Congress must provide clarity to encourage responsible development in the U.S. He called for ensuring markets are not left behind in the global race for AI leadership by shaping policies that encourage responsible innovation in financial markets while cementing U.S. AI leadership.
- Subcommittee Chairman Steil questioned how the AI action plan improves on the previous administration’s approach to AI. Gorfine responded by describing current efforts to identify the regulations already in place and an approach of allowing the technology to develop in order to identify risks and determine whether more is needed. He highlighted how financial services is a heavily regulated industry, with existing laws, regulations, and guidance that have been able to adapt to and incorporate emerging technologies. He contrasted approaches that act preemptively and prescriptively with an approach that recognizes the existing scaffolding and principles that can guide adoption of new technologies in a responsible way.
- Representative Huizenga (R-MI) questioned the impact of U.S. regulation and how it may threaten the use of AI in the financial services space, citing the Securities and Exchange Commission’s (SEC) predictive data analytics proposal under Chair Gensler. Gorfine noted that the predictive data analytics rule was rescinded and not technology neutral, emphasizing being technology neutral as a guiding principle. Gorfine also noted that how regulators communicate to the marketplace through regulatory messaging can create a challenging environment for compliance and can serve as a deterrent to adoption.
- Representative Timmons (R-SC) asked Lau about how AI can automate and streamline compliance processes, as well as alleviate burdens of compliance on operations and more effectively manage risk. Lau outlined how AI can automate more workflows with low staffing and open opportunities for greater compliance, such as continuous monitoring and 24-hour audits. He also noted that introducing AI to regulated workflows requires guardrails and controls to ensure AI follows the right policies and procedures to carry out tasks effectively and in compliance.
- Representative Davidson (R-OH) cautioned against allowing AI to become a tool for unchecked surveillance or data exploitation and highlighted the need for Congress to be proactive, not reactive, in setting clear boundaries, as he questioned how financial regulators are protecting sensitive financial data from AI exploitation. Gorfine agreed that data is the one area necessitating federal legislation to establish a proper baseline for securing data that will be consumed by AI models. He pointed to the role of privacy-enhancing technologies and data encryption, and to the need for financial regulators to recognize and utilize such developing privacy-enhancing tools.
- Representative Rose (R-TN) asked how private companies and industry stakeholders can collaborate proactively to anticipate and combat the evolving threat of AI-assisted fraud. Reisman emphasized the need to ensure financial institutions are empowered to have the same tools to detect fraud that the fraudsters are using to advance it. He raised the need to ensure there are spaces, whether it is through sandboxes or other mechanisms set up by Congress and regulatory agencies, to constantly be sharing information about advances in technologies and fraud capabilities to collectively work together on solutions.
- Representative Downing (R-MT) questioned the role states should play in regulating AI, with Representative Timmons (R-SC) asking what the most pressing risks of a state-by-state regulatory patchwork are for both consumers and innovators. Reisman stated the first question should be whether there is a need for more regulation of AI at all. He raised how difficult compliance can be when there is a kaleidoscope of different state regulations, emphasizing the value of interoperable federal standards, while posing whether there may be particular elements that states should address as a complement to, but not a substitute for, federal rules. Gorfine highlighted the importance of recognizing the existing federal and state laws, regulations, and guidance that financial institutions adhere to, where a patchwork of state laws can interfere, conflict, or create ambiguity for firms operating on a national level. He encouraged a proper federal framework that allows firms to operate on a national scale and compete on a global scale.
- Lau highlighted that to truly manage new risks and unleash AI innovation, financial institutions and regulators must embrace technology solutions that can help risk and compliance teams scale oversight, including controls like AI guardrails, red team evaluations, and auditability over AI usage. He emphasized that regulators and policymakers need to adopt governing frameworks that keep up with the pace of innovation rather than falling behind on new opportunities and risks. He called for regulators to expand the use of AI sandboxes, as called for in the administration’s AI action plan and in the Unleashing AI Innovation in Financial Services Act, to not only enable experimentation with AI on high-impact use cases but also bring leading evaluation and red team technology to rigorously test experimental AI against real-world risks. He outlined that by drawing on global examples, such as Singapore’s successful sandbox programs, the U.S. can strengthen competitiveness while promoting secure and compliant AI across financial institutions and government.
- Reisman outlined that to secure the advantages of AI within the financial system, the U.S. must pursue a risk-based approach to regulation that focuses on the outcomes to be achieved, avoid overly prescriptive measures, build upon existing foundations of regulations, guidance, and standards, and, when necessary, clarify or adapt their application to emerging technologies. He called for consideration of risks and benefits in equal measure, to incentivize organizations to adopt accountable practices, and to build trust through meaningful transparency. He highlighted the need for engagement in cooperative dialogue among regulators, technologists, and industry, with regulatory sandboxes offering an invaluable avenue for such dialogue. He cited how numerous jurisdictions have established regulatory sandboxes over the past decade, as he supported promoting sandboxes, under America’s AI action plan, as well as the proposed bipartisan bicameral Unleashing AI Innovation in Financial Services Act.
- Gorfine raised how a guiding principle should be assessing AI improvements relative to the existing imperfect status quo, rather than focusing solely on risk, with the financial sector already operating under a robust, technology-neutral regulatory framework. He asserted that to maintain U.S. leadership, regulators must support, not block, responsible AI adoption, providing clear, consistent, and informed compliance expectations and equipping examiners with AI understanding. He called for Congress to enact a federal data privacy framework and ensure open access to permissioned financial data, as well as prevent conflicting state laws from undermining federal regulatory consistency. He highlighted that regulators should clarify risk frameworks and endorse well-crafted industry standards through recognition and safe harbors, emphasizing that AI use alone does not elevate risk.
- Cox warned that in approaching AI, the greatest risk may be hesitation, as consumers and the U.S. economy may miss out on the early benefits of adoption. He emphasized that responsible governance is not a brake on innovation, but a mechanism that ensures innovation can be deployed securely and sustainably. He urged support for open ecosystems, to regulate applications rather than technologies in the abstract, and to promote transparency requirements that allow enterprises and regulators alike to test models, evaluate accuracy, and understand safeguards to foster an environment where innovation can thrive responsibly. He highlighted that by embracing openness, insisting on transparency, and embedding security, the U.S. can ensure AI strengthens the financial system and the U.S.’s global competitiveness.
- Lee discussed how Congress must ensure safeguards are in place to reinforce algorithmic accountability, safety, and transparency, particularly in fraud and consumer protection. She recommended enforcing compliance with existing statutes to uphold algorithmic accountability, mandating transparency and full disclosure to consumers when AI is used in decision-making, encouraging ethical development of financial models that balance innovation and regulation, preparing for the adoption of agentic AI while ensuring robust oversight and consumer protections, and investing in AI financial literacy programs to help the public understand the evolving sector.
- Committee Ranking Member Waters (D-CA) stated that the regulatory sandbox proposal appears to be deregulation framed as innovation, lacking requirements for public disclosure and harm mitigation and carrying virtually unlimited scope. She detailed concerns that regulatory sandboxes may remove safeguards from a rapidly developing AI market already lacking meaningful federal regulation and oversight, questioning the risks regulatory sandboxes pose and the standards that responsible sandboxes under consideration should have to meet. Lee stated that without the right variables to ensure they are safe, ethical, fair, and inclusive, sandboxes will be an exceptional exploitation of consumers. She also stated that because the AI action plan contemplates modifications and waivers of guardrails, sandboxes must be very transparent, with clear goals and clear questions about what they are trying to solve, alongside protections for consumers participating in sandboxes to ensure redress for any harm.
- Subcommittee Ranking Member Lynch (D-MA) outlined how, alongside the deployment of AI to maximize operational efficiency and achieve cost reductions, the rapid development of AI-based technologies has introduced serious risks into the financial services space. He cited how the Treasury Department, Federal Reserve, and Consumer Financial Protection Bureau (CFPB) have all expressed concerns, with other financial regulators repeatedly cautioning that the development of robust and trustworthy artificial intelligence is dependent on the U.S.’s ability to encourage innovation that maximizes oversight, consumer protection, data privacy, workforce protection, and marketplace fairness.
- Subcommittee Ranking Member Lynch cited how Singapore included explainability requirements within its sandbox model to ensure that AI systems can provide understandable justifications and to assess algorithms for potential discrimination, asking whether the U.S. should take a similar approach. Lee noted her support for sandboxes and how regulatory sandboxes are a great way to experiment, but because of the velocity and speed with which AI gathers personal information, it is important to include many of those variables from the Singapore model. She outlined how that model allows for accountability, continuous monitoring, transparency, and including consumers in the process, as opposed to the current approach of providing waivers and exceptions to companies to experiment.
- Representative Liccardo (D-CA) asked about combining the approaches of a sandbox and a task force in order to establish best practices and a negligence standard for compliance. Lee responded that the previous administration’s AI blueprint was a guide and path for protections and collaboration, supporting the creation of a task force focused on conversation, collaboration, and disclosure among entities. Cox pointed to ISO 42001, a voluntary standard with audits covering the creation process of an LLM, which serves as a mark of proper hygiene by regulating processes, controls, documentation, and auditing, not the algorithm itself. He emphasized use- and risk-based regulation as the normal approach.
