Financial regulation in the age of AI: Why better algorithms aren’t always the solution

By SMU City Perspectives team

Published 19 July, 2022


POINT OF VIEW

Even in the age of AI, the key to achieving fair credit scoring practices and financial inclusion lies in effective regulation, and not only in the creation of better algorithms or finding unbiased data.

Nydia Remolina Leon

Assistant Professor of Law


In brief

  • The use of complex AI models and Big Data has changed the money lending landscape and opened up possibilities for increasing overall access to credit. However, these changes come with their own set of challenges and often fail to address existing ones.
  • Unfair lending persists, and new problems, such as the inability to contest outcomes, a lack of data, noisy data, and the exploitation of alternative data, now require attention.
  • Regulators play a critical role in addressing these problems, and a key step is to create mandatory and sector-specific guidelines that protect customers while still giving companies the flexibility to innovate.

As the use of artificial intelligence (AI) gains traction across industries today, it is often forgotten that statistical models and algorithms have been around for decades. Credit bureaus, in particular, have long used these techniques to generate the credit scores that banks rely on to assess an individual’s creditworthiness. The real game-changer, therefore, is the arrival of smartphones, social media, geolocation models and other interconnected devices that make new types of data available to these models. Together, these developments form what is now called ‘algorithmic credit scoring’, which has led to remarkable changes in the lending market landscape.

New competitors have emerged and are attempting to fill the gaps that traditional financial institutions have left behind. Using complex AI models and alternative data mined from online and offline activities, these new products allow a wider pool of customers, including previously excluded groups, to access finance. Meanwhile, Big Tech firms now offer similar services to the small businesses and enterprises already on their platforms, using both traditional and non-traditional data to assess their creditworthiness. While the jury is still out on how accurate these predictions really are, the market seems to be responding well to these new competitors and the possibilities they present.

Are we moving towards fairer and more inclusive credit scoring practices? 

This raises the question: are these technological advances the key to achieving fairer credit scoring practices and greater financial inclusion? Assistant Professor of Law Nydia Remolina Leon from SMU’s Centre for Artificial Intelligence and Data Governance (CAIDG) argues that even in the age of AI, the key to achieving these goals lies in effective regulation, not only in the creation of better algorithms or the discovery of unbiased data. In a recent research paper, she identifies a set of problems that persist, or in some cases have been exacerbated, because of the use of AI models and Big Data in today’s lending markets. Current regulatory practice is not adequately addressing these issues, so she offers her own solutions.

1. Solving ineffective anti-discrimination techniques

One might think that the use of AI models (i.e. removing human bias from the equation) would lead to fairer creditworthiness assessments, but Asst Prof Remolina shows that this is not necessarily the case. To promote fair lending, regulatory bodies in some jurisdictions require financial institutions to exclude specific personal information (e.g. gender or race) from the inputs of their AI models. Moreover, in most jurisdictions the AI fairness principle is not prescriptive. In her study of 17,000 observations containing traditional and alternative data on loan applicants, controlling the inputs of the algorithm proved ineffective in preventing unfair discrimination. Instead, the ‘problematic’ variable was reflected in other variables, a phenomenon known as omitted variable bias that is consistent with previous studies in this field.
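To see why ‘blinding’ a model can fail, consider a minimal sketch in Python. Everything in it is invented for illustration: a protected attribute is withheld from the model, yet a correlated ‘neutral’ variable (here, a hypothetical postcode-level income index) carries it straight back into the scores.

```python
# A minimal, hypothetical sketch of the proxy problem: dropping a protected
# attribute from a model's inputs does not stop the model's outputs from
# differing across groups when another input correlates with that attribute.
# The data, features and coefficients below are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: never shown to the model.
group = rng.integers(0, 2, n)

# A 'neutral' variable, e.g. a postcode-level income index, that is
# strongly correlated with group membership.
postcode_income = 0.8 * group + rng.normal(0, 0.3, n)

# Repayment outcome driven partly by the same correlated variable.
repaid = (postcode_income + rng.normal(0, 0.5, n) > 0.4).astype(int)

# Train on the 'blind' feature set: group itself is excluded.
X = postcode_income.reshape(-1, 1)
model = LogisticRegression().fit(X, repaid)
scores = model.predict_proba(X)[:, 1]

# The protected attribute still shows up in the outputs.
print(f"mean score, group 0: {scores[group == 0].mean():.2f}")
print(f"mean score, group 1: {scores[group == 1].mean():.2f}")
```

Because the model only ever sees the income index, removing the protected attribute from the inputs changes nothing about the disparity in the scores it produces.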

Her suggestion is to make ex-post testing mandatory for all lenders. By having lenders periodically test their models for discriminatory outcomes and then take the appropriate corrective measures, she believes that fair lending can ultimately be achieved.
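As a hedged sketch of what such periodic ex-post testing could look like in practice, the snippet below applies the ‘four-fifths’ rule of thumb familiar from US fair-lending practice to logged lending decisions. The threshold and the toy data are illustrative assumptions, not prescriptions from the paper.

```python
# Illustrative ex-post outcome test: compare approval rates across groups
# on decisions already made by a deployed model, and flag the model for
# review if the ratio falls below a chosen threshold (here, 0.8, the
# common 'four-fifths' rule of thumb).
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = [approved[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical decisions logged from a live model.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(approved, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for review: possible discriminatory outcome")
```

The point of testing outcomes rather than inputs is exactly the lesson of the proxy problem above: disparities surface in what the model decides, not in what it is allowed to see.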

2. Implementing a ‘right to know’  

The only thing more frustrating than having a loan application rejected is, perhaps, being unable to obtain the information needed to contest the decision. This is a problem many loan-seekers face, since current data protection laws in many jurisdictions do not clearly mandate financial institutions to disclose the outcomes or inferences that algorithms feed into the decision-making process. Furthermore, regulators currently take a ‘light-touch approach’ to transparency as an AI governance principle: lenders are encouraged, but not obliged, to abide by principle-based guidelines in their use of AI. More often than not, these guidelines are difficult to translate into action, which prevents them from being an adequate solution to the problem we see today.

Like other academics and technologists, Asst Prof Remolina advocates a ‘right to know’ the inferences or outcomes of algorithmic credit scores whenever they form part of a creditworthiness assessment. She also hopes to see more regulators translate these principles into practice with sector-specific use cases, so that companies can take more decisive steps towards a more effective financial consumer protection regime.
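Purely as an illustration of how a lender might operationalise such a right, the sketch below assumes a simple linear scoring model (the feature names and weights are hypothetical, not from the paper) and surfaces the factors that most lowered an applicant’s score, which could be disclosed as ‘reasons’ alongside the decision.

```python
# Hypothetical 'reason code' disclosure for a linear scoring model:
# each feature's contribution is its weight times the applicant's value,
# and the most negative contributions become the disclosed adverse factors.
import numpy as np

feature_names = ["income", "debt_ratio", "account_age", "recent_defaults"]
weights = np.array([0.9, -1.4, 0.5, -2.0])   # hypothetical model coefficients
applicant = np.array([0.3, 0.8, 0.2, 1.0])   # standardised applicant inputs

contributions = weights * applicant
score = contributions.sum()

# Rank features by how much they lowered the score: these become the
# 'reasons' disclosed to the applicant alongside the decision.
order = np.argsort(contributions)
print(f"score: {score:.2f}")
for i in order[:2]:
    print(f"adverse factor: {feature_names[i]} (impact {contributions[i]:.2f})")
```

With more complex models the same idea requires dedicated explanation techniques, which is part of why translating the transparency principle into practice is harder than it sounds.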

3. Creating accurate datasets for minority groups

Credit history, as it currently stands, depends on the volume and type of data linked to an individual. This means that minority groups with ‘thin’ credit histories are automatically left with lower credit scores, and thus poorer access to finance. These groups also face the problem of flawed data, since a single blemish in a thin record disproportionately affects the credit score, unlike for individuals with substantial credit files. Until this problem is solved, the opportunities Big Data presents in lending markets cannot be consistently extended to these groups.
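A toy calculation, not drawn from the paper, makes that fragility concrete: treating the delinquency rate as a crude score input, a single missed payment moves a five-record history ten times as much as a fifty-record one.

```python
# Toy illustration of 'thin file' fragility: one missed payment shifts
# the delinquency rate of a short history far more than a long one.
def delinquency_rate(missed: int, total: int) -> float:
    return missed / total

thin_records, thick_records = 5, 50
print(f"thin file:  {delinquency_rate(1, thin_records):.0%} delinquent after one miss")
print(f"thick file: {delinquency_rate(1, thick_records):.0%} delinquent after one miss")
```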

Asst Prof Remolina’s suggestion is to use a sandbox approach to generate accurate data for these groups: encouraging financial companies to run experiments, in a regulated setting, in which they approve loans to these groups. The goal is to create richer and more accurate datasets that can then be used to tackle financial exclusion and the misallocation of credit.

4. Using alternative data responsibly

While alternative data can open new doors to financial inclusion, it can equally be used to exploit consumers’ vulnerabilities and to engage in less desirable forms of price discrimination. Since some of these new lenders are not subject to the same regulation as financial institutions, and data protection laws in many jurisdictions are not adequately equipped, consumers are exposed to predatory behaviour. Financially excluded and less tech-savvy groups are especially susceptible to cycles of dependency, since these loans are easy to obtain even if they are difficult to repay.

Asst Prof Remolina says that it is necessary to level the playing field and eliminate regulatory arbitrage in lending markets, since the protection of financial consumers and the stability of the financial sector are paramount. Furthermore, few consumers realise that their social media posts and other sources of alternative data could count against them later when it comes to credit scoring. More steps need to be taken to protect consumers through responsible lending frameworks, so that they have greater control over their data and how it is used.

Stepping away from ‘Grey areas’ 

Regulators face the difficult task of achieving two seemingly conflicting objectives: protecting the consumer while also allowing companies the flexibility to innovate. All this, while playing catch-up with the extraordinary technological advances happening every day. Asst Prof Remolina stresses that the challenges each jurisdiction faces are unique, and that more empirical studies are therefore needed to understand the context and the specific steps required in each industry.

The Veritas initiative by the Monetary Authority of Singapore (MAS), for example, is moving towards the practical application of the principles of fairness, ethics, accountability and transparency (FEAT) by providing use case examples from the banking sector, including credit scoring. While the initiative is still a work in progress and does not address all the issues discussed above, stakeholders are encouraged to provide feedback so that the framework can be refined over time. Asst Prof Remolina believes it is a significant step towards the assessment of fairness in AI credit scoring.

Digital Self-Determination  

Lastly, she hopes to see greater awareness of digital self-determination, a concept that she and the team at SMU’s CAIDG are committed to exploring through their research, events and public engagement. With the goal of empowering consumers with the autonomy to decide what data gets used and who they are in the digital space, she believes that action must be taken now so that humans are always kept in the loop. By coupling digital self-determination with effective data governance, we stand a better chance of ensuring that technology is used to optimise the human experience, while also correcting societal imbalances.