Avoiding Penalties for AI Misuse in 2025

June 16, 2025
8 Min Read

Executive Summary

Throughout 2022-23, the SEC and CFTC levied roughly $2 billion in fines against dozens of banks and investment managers, in a process one insider described as “shooting fish in a barrel.”

These fines were for unsanctioned use of WhatsApp messenger and came just sixteen months after initial regulatory notices from August 2020 about unsanctioned messenger usage. In 2025, unsanctioned AI usage presents a strikingly similar risk profile.

Starting as early as 2023, regulatory bodies including the OCC, CFPB, and NCUA have issued multiple notices to banks and credit unions, reminding them of their recordkeeping and governance obligations related to generative AI tools. In April 2025, the Trump administration also issued OMB memo M-25-21, which requires immediate AI governance action from all federal agencies, making it clear that AI mandates are a top priority in Washington.

There is widespread concern that a crackdown on unsanctioned AI usage is coming and that regulators will dole out heavy fines to the banks and credit unions that have not taken action. This guide aims to provide an overview of the regulatory landscape for AI in banking, key AI-related risks, and practical compliance strategies to help banks and credit unions safely adopt AI while avoiding costly penalties and reputational damage.

The Regulatory Landscape: AI Oversight in Banking

Over the past 12 months, all relevant FFIEC members – the Federal Reserve, OCC, FDIC, CFPB, and NCUA – have made it clear that AI enforcement is directly within their purview.

The OCC specifically flagged generative AI as an "emerging risk" across several categories in its Fall 2023 Semiannual Risk Perspective, warning that banks must manage AI "in a safe, sound, and fair manner."

“It is important for banks to identify, measure, monitor, and control risks arising from AI use as they would for the use of any other technology. Advances in technology do not render existing safety and soundness standards and compliance requirements inapplicable.” — OCC, Fall 2023 Semiannual Risk Perspective

The CFPB has issued dozens of notices about the fair use of AI in underwriting and lending, the risks AI poses for security and compliance, the increased risk of data breaches by state-sponsored overseas actors, and its intention to levy fines for unfair, deceptive, or abusive acts and practices (UDAAP) related to AI misuse. Regarding AI use in lending, CFPB Director Rohit Chopra stated:

“Creditors must be able to specifically explain their reasons for denial. There is no special exemption for artificial intelligence.” — Rohit Chopra, Director of CFPB

Credit unions face similar expectations through the NCUA, which will issue cease-and-desist orders or civil penalties when AI usage breaches member privacy, results in discriminatory lending, or otherwise violates laws. In May 2025, the GAO recommended that Congress expand the NCUA’s enforcement authority to include subpoena power over credit union technology providers, specifically aimed at increasing AI oversight.

AI Compliance Risks in Banking

Banks and credit unions must navigate organizational use of AI in the face of several specific regulatory risks:

  • Leaking Sensitive Data via AI Prompts: When employees prompt AI tools with customer account details or personal identifiers through external AI platforms, that information leaves the bank's secure environment, potentially violating privacy laws.
  • Unlogged or Unmonitored AI Interactions: The leading consumer AI assistants do not log model prompts and responses in a form that satisfies financial recordkeeping rules, leaving institutions without the audit trail that regulators expect. For example, when a loan officer uses AI to generate an explanation for a credit decision without recording it, the bank is in violation of recordkeeping rules. (A minimal logging sketch appears after this list.)
  • Decisions Made by AI without Human Supervision: Complex "black box" AI models can make decisions without explanation or justification. Under fair lending laws, a bank must explain why it denied a loan. Regulators have indicated that using a "black box" is not a defense—firms are expected to maintain accountability for all decisions.
  • Bias and Discrimination in AI Outputs: AI systems can inadvertently learn biases from training data, potentially producing outputs that unfairly impact protected classes. Fair lending laws prohibit discriminatory outcomes, whether produced by a human or a machine, and even unintentional bias can result in enforcement under the Equal Credit Opportunity Act or Fair Housing Act. (A simple first-pass bias screen also appears after this list.)
  • Use of Unapproved or "Shadow" AI Tools: All of the above problems are exacerbated when employees use unsanctioned third-party tools. This shadow usage bypasses security controls, vendor oversight, and legal review. Unapproved tools likely do not comply with data retention policies and often retain or even expose sensitive data. When errors occur, financial institutions remain liable.
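
To make the recordkeeping gap concrete, here is a minimal Python sketch of an audit-trail wrapper for AI calls. The `call_model` callable, log path, and record fields are illustrative assumptions, not any real vendor API; a production system would write to tamper-evident, append-only (WORM) storage under the institution's retention policy.

```python
# Minimal audit-trail sketch: wrap every AI call so the prompt and
# response are persisted before the response is used.
# Assumptions: `call_model(prompt) -> str` is supplied by your approved
# AI platform; the JSONL file stands in for WORM storage.
import hashlib
import json
from datetime import datetime, timezone
from typing import Callable

AUDIT_LOG = "ai_audit_log.jsonl"  # illustrative path

def logged_ai_call(call_model: Callable[[str], str], prompt: str,
                   user_id: str, purpose: str) -> str:
    """Call the model and persist a tamper-evident record of the exchange."""
    response = call_model(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,    # who prompted the model
        "purpose": purpose,    # e.g. "draft adverse-action notice"
        "prompt": prompt,
        "response": response,
    }
    # Hash the record so tampering is detectable during an examination.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

The key design point is that logging happens inside the call path, so no employee workflow can obtain a model response without leaving a record.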
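
As a first-pass bias screen, the following sketch applies the four-fifths (adverse impact ratio) rule to approval decisions grouped by a protected attribute. This is a common initial screen, not a substitute for a full fair lending analysis, and the sample data is fabricated purely for illustration.

```python
# Four-fifths rule sketch: flag any group whose approval rate is below
# 80% of the highest-approving group's rate.
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    benchmark = max(rates.values())  # highest-approval group's rate
    return {g: rate / benchmark for g, rate in rates.items()}

# Illustrative data: group A approved 80/100, group B approved 55/100.
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 55 + [("B", False)] * 45
for group, ratio in adverse_impact_ratios(sample).items():
    if ratio < 0.8:
        print(f"Group {group}: ratio {ratio:.2f} is below the 0.8 threshold, review for disparate impact")
```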

Preempting the Next Crackdown: How to Safely Harness AI

Financial institutions should leverage the benefits of AI while also maintaining proper controls. Rather than waiting for enforcement, proactive banks and credit unions are taking these steps now:

  • Conduct an AI Usage Audit: Immediately assess where AI is already being used in your organization. Assume that 30% of your employees regularly use AI tools, regardless of your policies. Anonymously survey your employees about their use of AI assistants like ChatGPT, Claude, Grok, Gemini, Copilot, and others, and look for shadow AI usage by monitoring network traffic (a simple log-scan sketch appears after this list). Understanding your current exposure is the first step toward controlling it.
  • Implement a "Safe Harbor" Approach: Banning all AI tools often drives usage underground. Instead, create an approved channel for AI usage with proper controls. Consider investing in a financial-grade AI platform with proper logging and security controls. By giving employees legitimate access to AI capabilities, you greatly reduce their likelihood of using unapproved tools.
  • Develop a Phased Implementation Plan: Start with low-risk use cases where AI outputs can be reviewed before affecting customers or decisions. Create a timeline to expand from initial use cases to more sensitive applications only after controls are proven effective. Document this plan for regulators to demonstrate that you are taking a thoughtful approach.
  • Form a Cross-Functional AI Governance Committee: Bring together stakeholders from compliance, legal, IT, risk management, and business lines to oversee AI adoption. This committee should review and approve AI applications, monitor compliance, and keep leadership informed. Having a formal governance structure signals to regulators that you are treating AI risk seriously.
  • Engage with Regulators Early: Proactively discuss your AI governance approach with regulators during examinations rather than waiting for them to ask. By demonstrating awareness and commitment to responsible AI use, you position your institution as a thoughtful adopter rather than a potential enforcement target.
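
As referenced in the audit step above, one way to surface shadow AI usage is to scan exported proxy or DNS logs for requests to known AI endpoints. The sketch below assumes a simple hostname-per-line log export; the file name and domain list are illustrative and should be adapted to your environment and tooling.

```python
# Shadow-AI scan sketch over an exported proxy/DNS log.
# Assumption: the log contains one requested hostname per line.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "grok.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests to known AI endpoints, including subdomains."""
    hits = Counter()
    with open(path) as f:
        for line in f:
            host = line.strip().lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in scan_proxy_log("proxy_hosts.log").most_common():
        print(f"{host}: {count} requests")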

Conclusion: Avoiding the Next Wave of Enforcement

AI and generative tools hold enormous promise for banks and credit unions, but that promise comes with compliance challenges. Financial institutions cannot afford to adopt AI without the same level of oversight they apply to other critical activities. Regulators from the OCC to the CFPB have been clear: existing rules apply fully to AI and will be enforced.

As the $2 billion in fines for unsanctioned use of messaging apps in 2022-23 demonstrated, financial regulators treat the use of new technology without proper controls as a serious compliance failure.

Banks and credit unions should act now, before major AI enforcement actions begin.

If you are operating a financial institution, ask yourself:

  • Do we offer a sanctioned, compliant AI platform?
  • Are we preserving and monitoring AI interactions?
  • Have we tested our AI outputs for bias and regulatory compliance?
  • Can we demonstrate a robust control framework if regulators inquire?

By following the strategies outlined in this guide, financial institutions can greatly reduce the risk of AI-related enforcement action while capturing AI's benefits. Institutions that innovate responsibly will thrive with added AI capabilities.

Guardrails for AI Usage

Banks and credit unions partner with Userfront to deliver branded AI portals that meet compliance and reporting requirements while internally hosting and connecting to the latest models like ChatGPT, Microsoft Copilot, and others. Userfront also offers a comprehensive library of generative AI prompts designed to help financial workers adopt AI and become more productive across all functions.

Contact Userfront to learn more about your options.

