Jesse Silverman, a partner in Troutman Pepper Locke’s Consumer Financial Services Practice Group, was quoted in the January 28, 2026, American Banker article, “Bankers at Bankwell Meetup Share Hopes for, Advice on AI.”

  • “What we’re really talking about here is whether or not AI is going to eat the whole universe and put us all out of a job in the next 12 months or so,” said Jesse Silverman, counsel at Troutman Pepper. “I have no opinion. AI right now – not in the future, not in 12 months, not in 10 years – is absolutely perfect. It works.”
  • Silverman said AI could be used almost anywhere. “Look at every single aspect of your business,” he said. “Where do you spend the most human time? And I guarantee right now there’s an AI solution that will make it more efficient.” Examples include underwriting and call reviews – bank compliance officers typically review a sampling of sales calls to see what clients are being told.

    “Now, with the same staff, you can choose to review 100% of calls leveraging AI, though you still need that human in the loop,” Silverman said. “So if you’ve got fantasies of AI just replacing everything, please get rid of that.”

  • About 11 years ago, Silverman tested a machine learning system at a bank; the system produced adverse action notices for people being turned down for loans.

    “At the time, this was brand new,” he said. “We had no idea what was going on inside that black box, and it started producing some wild, although in hindsight understandable, adverse action notices.”

  • “The system had plainly decided that for this individual applicant it would have been objectively better for that person to have declared personal bankruptcy,” Silverman said. “There are lots of those kinds of challenges. You’re going to have to test and test, but they’re everywhere.”
  • Silverman agreed banks need to “socialize” their AI plans with regulators. “It’s one thing that it works, but it’s another aspect that everyone knows and believes that it works,” he said. “We’re going to be in that cycle for a while, where you’re going to be having conversations with the regulators and saying, ‘This is what we do, and this is why we believe it works, or in many cases, why we believe it works better.’”

  • Banks also need to have AI usage policies, Silverman cautioned.

    “If you don’t have AI policies at your bank, you probably have employees who are taking your bank work product and putting it into ChatGPT” or Google Gemini or Microsoft Copilot, he said. “So one of the most important things is getting control of that data and making sure that you’ve got some agreements in place that you know where that data is going.”

  • “It’s got to be explainable and reproducible, meaning you’ve got to have a document that shows what you did,” Silverman said. “The bottom line is, all of those policies and procedures that you have” need to be applied to any new AI tool.

    Silverman warned that any use case that involves personally identifiable information may trigger complicated state laws.

    “In some states, you have to disclose affirmatively that you’re using AI for underwriting purposes, but you don’t have to disclose that if you’re using the AI to monitor calls,” he said. The many conflicting state rules provide “wonderful lifetime employment for me as a lawyer,” he joked.