AI in fintech: An adoption roadmap
The widespread use of AI in fintech is inevitable, but legal, educational and technological issues must first be addressed. Even as those are resolved, several factors will drive adoption in the interim.
The exploding volume of data society generates poses unique challenges for financial companies, Shield VP of Data Science Shlomit Labin said. Shield helps banks, trading organizations and other firms monitor for risks such as market abuse, employee conduct and other compliance concerns.

The growing pressure on compliance personnel

Labin said financial services firms need technological assistance because their communications volume far exceeds what humans can assess. Recent regulatory shifts exacerbate the problem: random sampling sufficed in the past, but it is no longer enough.
“We have to have something in place, which brings additional challenges,” Labin said. “That something needs to be good enough because, let’s say, I have to pick up one percent, or one-tenth of one percent, of the communications. I want to ensure that these are the good ones… the real high-risk ones, for any compliance team to review.”
“We see firsthand and hear from our clients about the challenges of managing and dealing with these exploding volumes of data,” said Eric Robinson, VP of Global Advisory Services and Strategic Client Solutions at KLDiscovery. “Leveraging traditional linear data management models is no longer practical or feasible. So leveraging AI in whatever form in these processes has become less of a luxury and more of a necessity.
“Given the idiosyncrasies of language and the sheer volumes of data, trying to do this linearly with manual document and data evaluation processes is no longer feasible.”
Robinson, a lawyer by trade, pointed to recent legal developments in which judges castigated lawyers over the use of AI in core litigation and e-discovery. Yet not using it borders on malfeasance, as organizations risk fines for inadequate supervision, surveillance, or inappropriate protocols and systems.

AI can address evolving fraud patterns

As technology evolves, so do efforts to avoid detection, Robinson and Labin cautioned. Suppose a firm needs to monitor trader communications. Standard rules might bar communication on certain social media platforms, and monitors keep lists of taboo words and terms to watch for.
Unscrupulous traders can adopt code words and hidden phrasing to thwart surveillance staff. Combine that with higher data volumes and aging technology, and the result is compliance-team alert fatigue.
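As a loose illustration of that limitation, the sketch below shows how a static watch list catches explicit terms but misses the same intent hidden in code words; the terms and messages are hypothetical.

```python
# Hypothetical illustration: a static keyword list flags explicit terms
# but misses the same intent expressed in agreed-upon code words.

TABOO_TERMS = {"front-run", "pump and dump", "insider tip"}  # hypothetical watch list

def flag_message(message: str) -> bool:
    """Return True if the message contains a watched term."""
    text = message.lower()
    return any(term in text for term in TABOO_TERMS)

messages = [
    "Got an insider tip on the merger, buy before Friday",   # caught by the list
    "The gardener says the roses bloom early on Thursday",   # coded language slips through
]

for msg in messages:
    print(flag_message(msg), "-", msg)
```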
However, that realization hasn’t left the door wide open for technology. AI-based compliance tools are new, and more than just judges are skeptical. Skeptics point to news reports of judicial caution and AI-fabricated case law.

Patience required as AI technologies evolve

Labin and Robinson said that, like all technologies, AI-based compliance tools continuously evolve, as do societal attitudes. Result quality is improving, AI is being used across more industries, and people are growing more accustomed to it.
“AI technology is becoming much more robust,” Labin said. “I keep telling people, you don’t like the AI, but you look at your phone 100 times a day, and you expect it to open automatically, with advanced AI technologies being used today.”
“The environment for acceptance of technology is very different today than it was 10 or 15 years ago,” Robinson added. “Artificial intelligence like predictive coding, latent semantic analysis, logistic regression, SVM, all these other elements that laid the foundation for many things that the legal industry has used… early in compliance.
“The adoption rate is very different because we’ve seen a rapid advancement in what’s available. Three or four years ago, we started to see the emergence of things like natural language processing, which enhances those technologies because it allows you to leverage the context.”

Regulation brings good and bad to AI

Regulatory pressures have been both a curse and a blessing. Organizations, lawyers and technologists have been forced to develop solutions.
The situation is evolving, but Robinson said old-school tech doesn’t cut it. Regulators expect more, and that has smoothed the path for AI. Younger generations are also more comfortable with it; as they move into positions of authority, adoption will accelerate.
But many issues remain to be resolved as AI is applied to everything from contract lifecycle management to discovery and big data analytics. Confidentiality, bias and avoiding hallucinations (i.e., fictitious legal cases) are three issues Robinson cited.
“I think compliance is a critical element here,” Robinson said. “Some courts ask how they can rely on what they’re being told when they have evidence that these AI tools are inaccurate. I think that becomes a core conversation as generative AI becomes more ingrained in these processes.”

How AI works best

Labin believes we can no longer live without AI. It has created huge breakthroughs and is getting better in such areas as natural language understanding.
But it works best in concert with other technologies and the human element. Humans can focus on the most suspect cases, and AI-based findings from one provider can be double- and triple-checked with other solutions.
“To make your AI safer, you have to make sure that you use it in multiple ways,” Labin explained. “And with multiple layers, if you ask a question, you are not supplied with one methodology to get the answer. You validate it against multiple models and multiple systems and multiple breaks in place to ensure, first, that you cover everything and, second, that you do not get garbage.”
“One of the keys is that there’s no one technology,” Robinson added. “The effective solution is a combination of tools that allow us to do the analysis, the identification, and the validation elements. It is a question of how we fit these things together to create a defensible, effective and efficient solution.”
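The kind of layering Labin and Robinson describe can be sketched loosely as follows; the detector functions are hypothetical placeholders, not any vendor's tooling, and an alert is escalated to a human reviewer only when independent methods agree.

```python
# Hypothetical sketch: escalate an alert for human review only when
# independent detection methods agree, reducing reliance on a single model.

from typing import Callable, List

Detector = Callable[[str], float]  # each detector returns a risk score in [0, 1]

def keyword_detector(message: str) -> float:
    """Placeholder rule-based score."""
    return 1.0 if "guaranteed return" in message.lower() else 0.0

def language_model_detector(message: str) -> float:
    """Placeholder for an ML- or LLM-based risk score."""
    return 0.9 if "keep this between us" in message.lower() else 0.1

def cross_validated_alert(message: str, detectors: List[Detector],
                          threshold: float = 0.5, min_agreement: int = 2) -> bool:
    """Escalate only if enough independent detectors rate the risk as high."""
    votes = sum(1 for detect in detectors if detect(message) >= threshold)
    return votes >= min_agreement

msg = "Keep this between us: a guaranteed return on the side trade"
if cross_validated_alert(msg, [keyword_detector, language_model_detector]):
    print("Escalate to compliance reviewer:", msg)
```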
“The way to cope with it is to monitor the model post-facto because the model is already too large and too complicated and too sophisticated for me to make sure that it did not learn any kind of bias,” Labin offered.

Removing bias from AI models

Labin said a top challenge is ridding systems of bias, both intentional and inadvertent, against people with low incomes and minority groups. Given the clear evidence of historical bias against these groups, one cannot simply feed in raw data from past decisions; the result would only be a more efficient discriminatory system.
Firms must be diligent about removing information that can readily identify vulnerable groups. Technology is already capable of inferring who applicants are from addresses and other details.
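As a minimal sketch of that first step, assuming a hypothetical applicant record layout, direct identifiers and obvious proxies such as address can be stripped before historical decisions feed a training set; real bias mitigation goes well beyond dropping columns.

```python
# Hypothetical sketch: drop fields that directly or indirectly identify
# vulnerable groups before past decisions are used as training data.
# Removing proxies such as address is only a first step, not a full fix.

PROXY_FIELDS = {"name", "address", "zip_code", "birthplace"}  # assumed schema

def strip_proxies(applicant: dict) -> dict:
    """Return a copy of the record without identifying or proxy fields."""
    return {key: value for key, value in applicant.items() if key not in PROXY_FIELDS}

record = {
    "name": "J. Doe",
    "address": "12 Example St",
    "zip_code": "00000",
    "income": 42000,
    "credit_history_years": 7,
    "prior_decision": "denied",
}
print(strip_proxies(record))
# {'income': 42000, 'credit_history_years': 7, 'prior_decision': 'denied'}
```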
Is the solution an in-house model built specifically for one institution? Highly unlikely. Such models cost millions of dollars to develop and need large amounts of data to be effective.
“If you don’t have a large enough data set, then by design, you’re creating an inherent bias in the outcome because there’s not enough information there,” Labin said.

Helping compliance

Because AI-based systems base their decisions on complex patterns in the data, they can prevent compliance officers from understanding how assessments and decisions are made. That opens up legal and compliance issues, especially given the shaky regulatory trust in the technology.
Labin said generative AI models can provide a process called “chain of thought,” in which the model is asked to break its decision down into explainable steps. Ask small questions and derive the reasoning pattern from the responses.
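That prompting pattern can be sketched roughly as follows, using a placeholder ask_model() function rather than any specific provider's API; the prompt wording and the stubbed response are illustrative only.

```python
# Hypothetical sketch: ask a generative model to decompose a compliance
# decision into small, explainable steps instead of returning one verdict.

def ask_model(prompt: str) -> str:
    """Stub standing in for a call to a generative AI model."""
    return ("1. Literal meaning: ...\n2. Possible coded language: ...\n"
            "3. Policies touched: ...\n4. Risk: medium, because ...")

def explain_decision(message: str) -> str:
    """Build a step-by-step prompt so the answer arrives as reviewable steps."""
    prompt = (
        "Review the following trader message for compliance risk.\n"
        "Answer step by step:\n"
        "1. What is the message literally saying?\n"
        "2. Could any phrase be coded or evasive language?\n"
        "3. Which policy or regulation might it touch?\n"
        "4. Give a risk rating (low/medium/high) and explain why.\n\n"
        f"Message: {message}"
    )
    return ask_model(prompt)

print(explain_decision("The roses bloom early on Thursday"))
```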
“The core challenge is validation and explainability,” Robinson said. “Once those get solved, you’ll see a significantly enhanced adoption. Several Am Law 100 firms have jumped in with both feet on generative AI. They’re not using it yet but are jumping in to develop solutions.
“A law firm has significant concerns around confidentiality, data protection, and privilege in the context of data and client information. Until those things get solved in a way that can be qualified and quantified… Once we have a solution for the understanding, qualification and quantification elements, I think we’ll see adoption take off. And it will blow up many things that we’ve done traditionally.”

 

Link: https://www.fintechnexus.com/ai-in-fintech-an-adoption-radmap/

Source: https://www.fintechnexus.com
