Responsible AI in Financial Services – Better Results Sooner

N.C., United States | 7 February 2023

As a long-time data nerd in financial services, I have heard AI come up in water cooler conversation for the past couple of decades, but the chatter has ratcheted up in the past few months.

As credit risk experts, we felt AI could never replace an experienced credit risk modeler. (I think horses at troughs everywhere said this in the early 1900s about an innovative technology of the time.)

Now, as an AI nerd, I realize that while we were working out how AI applied to us, we had a myopic focus on the technical details. Technology evolves quickly, and we should have spent that time brainstorming how to bring AI into the loop.

Early on, AI seemed overwhelmingly complicated and something that didn’t concern us in the credit space because it was difficult to explain. With all the talk of generative AI and ChatGPT, explainability becomes an even more important consideration.

If you set aside the mathematical and programming detail for a minute, the use of AI is reasonably straightforward. AI and machine learning techniques enable an individual to act on insights drawn from otherwise incomprehensible volumes of information through scaled analysis and automation.

While there is so much promise for the future of generative AI, there are tools available today to drive tangible benefits for bankers and the businesses they serve.  The RDC platform brings together regulatory best practice and modern data science techniques to drive these benefits. Before we unpack that, let’s start with a basic definition. 

What is a model?

When we think about modeling, it’s important to remember that even rules-based determination falls under regulatory model risk management requirements. 

Global regulators continue to broaden the scope of the definition. The Fed’s SR 11-7 guidance and the PRA’s SS1/23 both include language extending the definition to cover expert judgment and qualitative inputs.

“The definition of model also covers quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature.” (SR 11-7, p. 3)

“The definition of a model includes input data that are quantitative and/or qualitative in nature or expert judgement-based, and output that are quantitative or qualitative.” (SS1/23, p. 9)

There is similar language (“judgmental assumptions”) in OSFI’s E-23 for Canadian institutions, and while it is not written explicitly, APRA commentary indicates they share the view.

Considering this global trend, a straightforward definition of a model becomes: a model is a simplified representation of the real world, used to understand past behavior and/or make future predictions and projections.

The simplified representation is not necessarily mathematical; in a lending context, models are often a combination of quantitative models, policy rules, and decision steps.
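As a hypothetical illustration of that broader definition, the sketch below combines a toy quantitative score, a policy rule, and a decision step into one lending "model." All function names, fields, and thresholds are invented for this example; none come from an actual lending policy.

```python
# Invented sketch of a lending "model" in the broader regulatory sense:
# a quantitative component plus policy rules plus a decision step.

def credit_score(income: float, debt: float) -> float:
    """Toy quantitative component: a simple debt-to-income based score."""
    dti = debt / income if income > 0 else float("inf")
    return max(0.0, 1.0 - dti)

def decide(income: float, debt: float, years_trading: float) -> str:
    # Policy rule (a qualitative / expert-judgment style input)
    if years_trading < 2:
        return "refer"  # policy requires expert review for young businesses
    # Decision step driven by the quantitative score
    score = credit_score(income, debt)
    if score >= 0.6:
        return "approve"
    return "decline" if score < 0.3 else "refer"

print(decide(income=120_000, debt=30_000, years_trading=5))  # -> approve
```

Note that under the regulators' definition, the policy rule and the decision thresholds are part of the model, not just the score itself, so they fall under the same model risk management obligations.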

Returning to our plain-English definition: understanding past behavior produces insights, and combining those insights with future predictions and projections produces sound, strategic decision-making.

When we bring AI into the loop in financial services, there are opportunities to automate this strategic decision-making throughout the credit lifecycle. However, we need to consider the model risk management implications: the automated decision becomes a model, and we have an ethical and regulatory obligation to deploy Responsible AI.


Explainable + Contestable = Responsible AI 

At Rich Data Co (RDC), our Explainable AI Decisioning platform streamlines the model deployment and monitoring process.  When we layer in the RDC platform’s Self-Describing Decision, we produce an Explainable AI solution that is also contestable. 

Consider the three components of a model: information input, processing, and reporting. While it’s reasonably straightforward to automate the documentation for these components, that doesn’t mean there is a system in place for “effective challenge.” (SR 11-7, p. 4; E-23, p. 4; SS1/23, p. 7)

From an ethical perspective, effective challenge should apply equally to all three lines of defense in the risk management structure, and perhaps most importantly, to the customers impacted by the decisions.

Think about that pivotal point in a tennis match. The line umpire makes the best call they can from their viewpoint, but there are times when the player needs to call a challenge. If the chair umpire accepts the player’s challenge, they lean on Hawk-Eye technology.

Hawk-Eye is a system of cameras that tracks the ball through space and time, creating a shadow mark exactly where the ball hits the court during the point.

The documentation gives confidence that the umpires and players understand how the Hawk-Eye system records the ball’s movement, but it’s the accompanying video recordings that validate the system’s determination.

The RDC platform’s stateful, Self-Describing Decision functions in a similar way. The Self-Describing Decision records the input information and data processing steps, and then produces all the necessary reporting, monitoring, and insights to ensure bankers are using AI responsibly.  
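To make the idea concrete, here is a minimal, invented sketch of what a "self-describing" decision record could look like: it captures the three model components (information input, processing steps, and reporting) so a decision can be replayed and challenged later. This is an illustration of the concept only, not RDC's actual implementation.

```python
# Invented sketch: a decision record that captures inputs, a timestamped
# processing trail, and the final outcome, and can render a report.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    inputs: dict                               # information input, as received
    steps: list = field(default_factory=list)  # processing trail
    outcome: str = ""                          # final determination

    def record_step(self, name: str, detail: str) -> None:
        ts = datetime.now(timezone.utc).isoformat()
        self.steps.append((ts, name, detail))

    def report(self) -> str:
        lines = [f"Inputs: {self.inputs}"]
        lines += [f"{ts} {name}: {detail}" for ts, name, detail in self.steps]
        lines.append(f"Outcome: {self.outcome}")
        return "\n".join(lines)

rec = DecisionRecord(inputs={"dti": 0.25, "years_trading": 5})
rec.record_step("policy_check", "years_trading >= 2, passed")
rec.record_step("score", "score 0.75 met approval threshold")
rec.outcome = "approve"
print(rec.report())
```

Because the record carries its own inputs and processing trail, a reviewer (or a customer contesting the outcome) can inspect exactly what drove the decision, much like Hawk-Eye's video evidence backs up its determination.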

Furthermore, the native model monitoring capabilities simplify the validation process, delivering “always on” validation points for continuous model monitoring. Every time there is new information or a new decision, the model performance outputs update.

The RDC decisioning system delivers a Responsible AI solution that is both explainable and contestable, and the out-of-the-box solution includes the documentation and validation to prove it.


AI In the Loop – Defensible Automation 

Automation is awesome, but it needs guardrails. More importantly, it needs to amplify existing human capability. We live in a human-centered world, and it is ultimately humans who are impacted by many of the automated decisions. To me, that implies a human loop from the start, and I prefer to talk about bringing AI into the loop.

Given the human impact and regulatory oversight in financial services, there is a high burden of proof when automating decisions. The first step is to start with a recommendation that is easily reviewed and validated.   

Using business lending compliance reviews as an example, we know that most of the time there is a positive or no change to the underlying credit position of the business.  This indicates a likely area for automation because it is a repetitive process with consistent outcomes.  

We deploy a suite of AI/ML models, along with policy rules and overlays, to create a decision strategy that indicates whether there is a change in the business’ credit profile that requires deeper investigation.   
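As an invented illustration of such a strategy (field names and thresholds are mine, not RDC's), the sketch below compares the prior and current credit profiles, applies a policy overlay, and only escalates to human review when the position has changed materially:

```python
# Invented sketch of a review-automation strategy: positive or no change
# yields an attestation recommendation; deterioration triggers review.

def review_recommendation(prior: dict, current: dict) -> str:
    # Policy overlay: any arrears always triggers a full review
    if current.get("days_in_arrears", 0) > 0:
        return "full review"
    # Model-style check: a material score drop warrants a closer look
    if current["score"] < prior["score"] - 0.1:
        return "partial review"
    # Positive or no change: recommend attestation by the banker
    return "attest healthy"

print(review_recommendation({"score": 0.70},
                            {"score": 0.72, "days_in_arrears": 0}))
# -> attest healthy
```

The common path (positive or no change) resolves to a lightweight attestation, which is exactly what makes this repetitive, consistent-outcome process a good candidate for automation.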

Then a system of action, such as nCino, serves the recommendation to a banker. The banker can attest based on the Self-Describing Decision and other evidence that the automated review determination is correct, and then the banker takes the appropriate action.  

Commonly, the banker attests that the business is healthy, freeing up time for when they need to conduct a full or partial review of a business that needs additional support.

On the flip side, this also frees up time for more advisory conversations that help healthy businesses grow. Bringing AI into the loop amplifies bankers’ human skills and creates space for bankers to show business customers that their bank knows them, trusts them, and helps them grow.

With Responsible AI in the loop, we have human interaction at scale, and the result is exponential value creation for both banks and the businesses they serve. 


Realizing the Future of Credit Today 

Hidden in the haze of conflicting information about the use of AI and its impacts on society, there is a clear path to creating value with human-centered, Responsible AI.  When we think of this in terms of business credit, forward-thinking institutions can lead the charge as we move towards more inclusive, equitable and sustainable access to credit for small and medium size businesses.  

These are the businesses that employ 90% of our population, so the value truly is exponential.

The team at RDC spent a lot of time building the ultimate enterprise solution for business credit decisioning strategies, and we love to share our learnings.   

Even if you just want to have a chat to grab some ideas you can work into your existing process, reach out.   

We love to talk about data and AI, especially when it relates to the future of credit.


AUTHOR: John Zugelder, CFA, Head of Client Advisory (North America) at RDC


Stay updated via LinkedIn.

Contribute to the discussion on the future of AI and lending