This article was originally published by BankNews on October 15, 2025, and is republished here with permission as it originally appeared.

As artificial intelligence continues to transform the financial services industry, banks, especially smaller institutions with limited resources, need to consider how to responsibly and effectively integrate these tools into their operations. AI offers numerous opportunities to add value to existing processes, including enhanced efficiency, stronger risk management and improved customer service. However, these new AI technologies also bring complex regulatory, ethical and operational challenges that banks must be prepared to address.

Banking leadership and oversight

Before moving forward with AI implementation, senior leadership, including the board of directors, should develop a comprehensive AI strategy that aligns with overall business objectives. This includes identifying specific goals, which can guide the selection and deployment of AI technology. By setting clear objectives, banks can prioritize their AI investments and measure how effectively AI tools and solutions achieve those goals, while also ensuring alignment with the bank’s legal obligations. Successful AI implementation requires enterprise-wide commitment, from the board down, to foster a culture of accountability and shared responsibility. It may be prudent for banks to consider forming a dedicated AI committee with a formal charter, cross-functional members, defined responsibilities and decision-making authority to navigate this process.

Governance and data management

To ensure successful implementation of AI, banks should also establish a solid governance structure, which could leverage existing industry resources such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework or ISO/IEC 42001 (AI management systems). Such a governance structure should include comprehensive policies and procedures, investment in staff training, and processes for testing, monitoring and auditing AI technology in areas such as the quality of input data, discriminatory or biased outputs, and the privacy and security of data used in AI solutions. Banks should also be prepared to take swift corrective action, when appropriate, to address issues that arise when using AI, including remediating bias and discrimination in any decisions made using AI. This is essential to maintaining customer trust and compliance with relevant laws.
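
To make the monitoring point concrete, the following minimal sketch screens a batch of AI-driven credit decisions against the familiar four-fifths (80%) disparate impact rule of thumb. It is an illustration only; the group labels and records are hypothetical, and a production monitoring program would track many more metrics across many protected classes.

# Minimal sketch: screening AI credit decisions for disparate impact.
# The four-fifths (80%) rule of thumb flags any group whose approval
# rate falls below 80% of the most-favored group's rate.
# All group labels and records below are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, is_approved in decisions:
        total[group] += 1
        approved[group] += int(is_approved)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the
    highest group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical batch of model decisions: (demographic group, approved?)
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(approval_rates(batch))          # {'A': 0.667, 'B': 0.333} (approx.)
print(disparate_impact_flags(batch))  # {'B': 0.5}, below the 0.8 screen

A flagged ratio like this is a screen, not a legal conclusion; in practice it would trigger the kind of corrective-action review described above.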

Transparency and explainability

Transparency and explainability are especially critical for consumer-facing AI applications. Banks must ensure clear disclosures, offer opt-out rights and maintain the ability to explain AI-driven decisions to customers. Documenting AI decisions and preserving human oversight are key steps toward improving the effectiveness of AI models and demonstrating compliance with regulatory standards. Such documentation should also comply with newly enacted state AI laws and regulations, such as those in Colorado, California, Utah and Texas.
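
One way to operationalize that documentation is a structured decision log. The sketch below defines a hypothetical record capturing the elements a customer or examiner is most likely to ask about: the model version, the inputs the model saw, the outcome, the principal reasons and any human reviewer. The field names are illustrative assumptions, not a regulatory checklist.

# Minimal sketch of a log record documenting an AI-driven decision so
# it can be explained later. All field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIDecisionRecord:
    application_id: str
    model_name: str
    model_version: str           # ties the decision to an auditable model build
    inputs: dict                 # features exactly as the model saw them
    decision: str                # e.g., "approve", "deny", "refer"
    reason_codes: list           # top factors behind the decision
    human_reviewer: str | None   # who reviewed or overrode the model, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AIDecisionRecord(
    application_id="APP-1042",
    model_name="credit_risk_scorer",
    model_version="2025.10.1",
    inputs={"dti": 0.42, "utilization": 0.85},
    decision="deny",
    reason_codes=["high revolving utilization", "high debt-to-income"],
    human_reviewer="analyst_17",
)
print(record)

Keeping records like this immutable and versioned alongside the model itself makes it far easier to reconstruct, and defend, a decision months later.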

Regulatory compliance and ethical considerations

Banks should be aware that complying with new AI laws and regulations is not, by itself, sufficient. Because they operate in a highly regulated space, banks must ensure AI technology is used in compliance with existing financial laws and regulations, consumer protection laws and ethical standards. For example, under the Equal Credit Opportunity Act and the Fair Credit Reporting Act, a bank must be able to provide specific and accurate reasons for an adverse action taken on a consumer application, even when an AI system made or informed the decision. AI is also widely used in anti-money laundering and know-your-customer compliance under the Bank Secrecy Act, and those systems must be accurate and capable of explaining why activity was flagged as fraudulent or suspicious. AI technology can also pose risks under federal and state laws prohibiting unfair, deceptive, or abusive acts or practices, such as when chatbots inadvertently provide inaccurate information.
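
To illustrate the adverse-action point, the sketch below derives reason codes from a simple linear scorecard by ranking features by how many points they cost an applicant relative to the best attainable value, a common "points below max" approach. The weights, feature names and values are hypothetical, and real scorecards and reason-code mappings are considerably more elaborate.

# Minimal sketch: adverse-action reasons from a linear scorecard,
# ranking features by points lost versus the best attainable value
# ("points below max"). All weights and values are hypothetical.

WEIGHTS = {"utilization": -30.0, "dti": -25.0, "years_on_file": 4.0}
BEST_VALUES = {"utilization": 0.0, "dti": 0.0, "years_on_file": 20.0}

def adverse_action_reasons(applicant, top_n=2):
    """Return the features that cost the applicant the most points."""
    lost = {f: WEIGHTS[f] * (BEST_VALUES[f] - applicant[f]) for f in WEIGHTS}
    ranked = sorted(lost.items(), key=lambda kv: kv[1], reverse=True)
    return [f for f, pts in ranked[:top_n] if pts > 0]

applicant = {"utilization": 0.85, "dti": 0.45, "years_on_file": 2.0}
print(adverse_action_reasons(applicant))
# -> ['years_on_file', 'utilization'], which would then be mapped to the
#    specific reason statements on the adverse action notice

Whatever method is used, the disclosed reasons must reflect the factors the model actually relied on, which is why explainability has to be designed in rather than bolted on.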

AI technology that handles large volumes of nonpublic personal information must be designed with data security and privacy management controls that comply with the federal Gramm-Leach-Bliley Act. States are also highly active with regard to AI in financial services. For example, the New York State Department of Financial Services has issued guidance warning of the increased cybersecurity risks that arise from the use of AI, including AI-enabled phishing attacks and overreliance on vendors that may introduce vulnerabilities and supply-chain risks.
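
As a small illustration of the kind of control this implies, the sketch below masks recognizable nonpublic personal information before text is sent to an external AI service. The patterns are illustrative assumptions; a real control would pair vetted data-loss-prevention tooling with encryption and contractual safeguards rather than rely on regular expressions alone.

# Minimal sketch: masking nonpublic personal information (NPI) before
# text leaves the bank for an external AI service. These patterns are
# illustrative, not a production-grade DLP control.
import re

NPI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def mask_npi(text: str) -> str:
    """Replace recognizable NPI patterns with typed placeholders."""
    for label, pattern in NPI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer 123-45-6789 asked about account 4111111111111111."
print(mask_npi(prompt))
# -> Customer [SSN REDACTED] asked about account [ACCOUNT REDACTED].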

Third-party risk and vendor management

Smaller banks face distinct challenges here, as limited resources make them more likely to rely on third-party AI technology. If a bank lacks the staff or internal expertise to conduct thorough due diligence and negotiate appropriate contractual terms when onboarding AI vendors, it should engage external consultants and experts as necessary, or decline to deploy AI. Banks are expected to have strong vendor management processes, and those processes should be applied, without exception, to the latest wave of AI technologies.

Conclusion

As AI adoption accelerates, banks, especially smaller institutions, must balance innovation with caution, ensuring their practices meet evolving regulatory standards and ethical expectations. With careful planning and oversight, these financial institutions can harness AI’s benefits while safeguarding their customers’ data and their own reputations.