The Future of Credit: AI or Human Judgment?


Joseph Boyer

Imagine a world where algorithms, not humans, determine your financial future. This is the reality of algorithmic lending in some countries, a powerful tool that promises efficiency but carries significant concerns regarding fairness and discrimination.

While algorithmic lending has the potential to improve access to credit and streamline the lending process, it’s crucial to address the underlying risks. Biased algorithms can perpetuate existing societal inequalities, leading to unfair treatment and limited opportunities for marginalized groups. Studies have shown that algorithms trained on historical data can disproportionately reject loan applications from individuals belonging to certain racial or gender groups.

This blog post is adapted from Joseph Boyer’s master’s dissertation, “Algorithmic Creditworthiness: A Legal Dissection”, which explores in depth how effectively legal frameworks address algorithmic bias in AI credit lending.

Dangers of Biased Algorithms

Algorithmic lending, powered by machine learning, is revolutionizing the financial landscape. Yet the algorithms behind it, trained on vast datasets, can inherit the biases embedded in that data and reproduce them at scale.

Bias takes many forms. Historical bias is rooted in past discriminatory practices and can be embedded in the data used to train algorithms. Selection bias occurs when data collection methods overlook certain groups, such as those with limited credit histories. Measurement bias arises from using inaccurate or incomplete data, while technical bias can stem from issues like overfitting or underfitting the model.
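How such bias surfaces can be made concrete with a simple audit of historical approval data. The sketch below uses an invented toy dataset and hypothetical group labels to compute approval rates per group and a disparate impact ratio; the 0.8 cut-off echoes the “four-fifths rule” from US employment discrimination analysis and is used here purely as an illustration, not as a legal standard for credit.

```python
# A minimal bias check on a hypothetical set of historical lending
# decisions: compare approval rates across two applicant groups.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    subset = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in subset) / len(subset)

# Disparate impact ratio: by analogy with the "four-fifths rule" from US
# employment discrimination analysis, a ratio below 0.8 is a warning sign.
ratio = approval_rate("B") / approval_rate("A")
print(f"approval rate A: {approval_rate('A'):.0%}, B: {approval_rate('B'):.0%}")
print(f"disparate impact ratio: {ratio:.2f}"
      + (" (potential adverse impact)" if ratio < 0.8 else ""))
```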

These biases can have real-world consequences. Individuals from marginalized groups may be denied the credit they deserve, hindering their ability to start businesses, buy homes, or achieve their financial goals.

The Apple Card Controversy

The Apple Card, launched in 2019, aimed to disrupt the credit card industry with its focus on user experience and transparency. However, it quickly became embroiled in controversy over alleged gender bias in how it set credit limits. This case study highlights the potential pitfalls of algorithmic lending.

The Apple Card, issued in partnership with Goldman Sachs, utilized a credit-scoring model that combined traditional factors like credit history with additional data points. While promoting efficiency, the model’s opacity raised concerns about fairness.

Accusations arose when couples with shared finances received vastly different credit limits: most prominently, tech entrepreneur David Heinemeier Hansson reported being offered a credit limit twenty times that of his wife, fueling suspicion of gender bias in the algorithm.

Despite Apple and Goldman Sachs’ claims of fairness, the lack of transparency hindered public understanding of credit decisions. This case underscores the need for regulations that promote transparency and accountability in developing and using these algorithms.

While the New York State Department of Financial Services ultimately found no violation of fair lending laws, the Apple Card case serves as a cautionary tale. It emphasizes the need to balance innovation with consumer protection. Algorithmic lending holds promise, but ensuring fair and equitable access to credit for all requires robust safeguards and a commitment to responsible development.

European vs. Swiss Perspective

Switzerland, despite its close relationship with the European Union (EU) and the central role of banking in its economy, has adopted a different approach to regulating algorithmic lending. While both frameworks aim to protect data and consumer rights, they differ in scope and enforcement.

In the EU, the General Data Protection Regulation (GDPR) provides a comprehensive framework for data protection, including provisions on automated decision-making (ADM), notably Article 22’s restrictions on decisions based solely on automated processing. The EU AI Act specifically addresses high-risk AI applications such as credit scoring, mandating human oversight and explainability. Additionally, Directive (EU) 2023/2225 on credit agreements for consumers establishes transparency obligations and consumer rights around algorithmic credit decisions.

Switzerland, on the other hand, relies primarily on the Federal Act on Data Protection (FADP). While the FADP offers a general framework for data protection, it contains no rules specific to algorithmic lending. The Swiss AI Guidelines offer recommendations for responsible AI use, but they are non-binding and carry no enforcement mechanism.

The EU’s regulatory framework is generally more robust, offering stronger enforcement and a more comprehensive approach to ADM. However, it can be rigid and may not fully address systemic biases. Switzerland’s approach prioritizes flexibility and innovation but may compromise consumer protection.

Building a Fairer Future

To ensure fair and unbiased algorithmic lending, a comprehensive regulatory framework is needed. This framework should prioritize data governance, algorithmic transparency, consumer protection, and industry accountability.

Strict data quality standards and bans on discriminatory data use are essential. Algorithmic transparency and explainability must be ensured through clear explanations, independent audits, and interpretable models. Consumer protection requires empowering consumers with the right to dispute decisions, access information, and file complaints. Industry accountability can be achieved through strict liability for biased outcomes, bias testing, and ethical guidelines.
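One concrete path to the interpretable models mentioned above is to prefer model classes whose decisions decompose feature by feature. What follows is a hedged sketch on synthetic data with hypothetical feature names, not any lender’s actual model: a logistic regression in which each coefficient-times-feature product is that feature’s contribution to the log-odds of approval, something that can be reported back to an applicant in plain terms.

```python
# A hedged sketch, not a production credit model: a logistic regression on
# synthetic data whose decision can be decomposed feature by feature.
# Feature names, weights, and data are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_of_credit_history"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                # standardized synthetic features
true_weights = np.array([1.0, -1.5, 0.8])    # invented ground truth
y = (X @ true_weights + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value is that feature's
# contribution to the log-odds of approval: a human-readable explanation.
applicant = X[0]
for name, contribution in zip(feature_names, model.coef_[0] * applicant):
    print(f"{name}: {contribution:+.2f}")
print("decision:",
      "approved" if model.predict(applicant.reshape(1, -1))[0] else "declined")
```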

Technical and ethical solutions are also crucial. Explainable AI and fair machine learning algorithms can help mitigate bias. Ethical development can be fostered through diverse teams and awareness training. Financial institutions should seek guidance from data privacy, cybersecurity, and AI specialists.
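As an illustration of what a fair machine learning technique can look like, the sketch below applies demographic parity post-processing to synthetic scores: per-group thresholds are chosen so that both groups are approved at the same rate. This is one fairness notion among several, and the use of protected attributes in decisioning is itself legally restricted in many jurisdictions, so it should be read as a conceptual sketch rather than a recommendation.

```python
# A hedged sketch of one fair-ML technique: post-processing for demographic
# parity. Per-group thresholds are chosen so both groups are approved at the
# same rate. All scores here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.normal(0.60, 0.15, 1000)   # synthetic credit scores, group A
scores_b = rng.normal(0.50, 0.15, 1000)   # synthetic credit scores, group B

target_rate = 0.40                        # approve the top 40% of each group
thr_a = np.quantile(scores_a, 1 - target_rate)
thr_b = np.quantile(scores_b, 1 - target_rate)

for label, scores, thr in [("A", scores_a, thr_a), ("B", scores_b, thr_b)]:
    print(f"group {label}: threshold {thr:.3f}, "
          f"approval rate {(scores >= thr).mean():.2%}")
```

Equalizing approval rates is only one definition of fairness; alternatives such as equalized odds or calibration may be more appropriate depending on context, which is precisely why regulatory guidance on acceptable metrics matters.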

Policymakers must adopt comprehensive regulation encompassing data governance, transparency, accountability, and consumer protection. Balancing innovation with effective controls, securing sufficient resources for enforcement, and achieving global consensus on standards will all be challenging.

Because AI develops so quickly, regulatory frameworks will require continuous monitoring and adaptation. Future research should focus on metrics for assessing algorithmic fairness, the long-term impacts of AI credit scoring, alternative credit assessment models, and ethical considerations in AI finance.

Bridging the Gap

This research explored the effectiveness of legal frameworks in addressing algorithmic bias within AI credit lending. While progress has been made, particularly in the EU, significant gaps remain.

The GDPR, Directive (EU) 2023/2225, and the EU AI Act offer a more comprehensive approach than Switzerland’s FADP, Federal Law on Consumer Credit (FLCC), and AI Guidelines. However, even the EU framework has limitations in addressing systemic bias and ensuring effective enforcement.

To improve the regulatory landscape, we need stronger data governance, algorithmic transparency, consumer empowerment, and industry accountability. Future research should focus on developing bias detection methods, examining long-term impacts, and exploring alternative credit assessment models. Let us strive for a future where technology serves humanity, not the other way around.

Joseph Boyer, a Mexican-French lawyer and graduate of the first EMILDAI cohort, is passionate about cybersecurity, data privacy, and the intersection of law and AI. He earned his Bachelor of Laws, with a minor in International Business, from the Monterrey Institute of Technology and Higher Education and has worked with top law firms like Chevez, Ruiz, Zamarripa y Cía. At the Health Service Executive in Dublin, he worked on data protection policy and legal compliance. As a former editor of the EMILDAI blog, he continues to explore how emerging technologies are reshaping legal frameworks, especially in data protection and AI.
