Social Implications of Algorithmic Decision-Making in Housing Finance: Examining the broader social impacts of deploying machine learning in lending decisions, including potential disparities and community effects
DOI: https://doi.org/10.60087/jklst.v4.n1.009

Abstract
The integration of algorithmic decision-making in housing finance, particularly through the use of machine learning (ML) technologies, has revolutionized lending practices by enhancing efficiency, accuracy, and scalability. However, this shift also raises critical social implications that demand thorough examination. This article explores the broader societal impacts of deploying machine learning in housing finance, with a focus on potential disparities and effects on communities.
First, the paper highlights the transformative potential of algorithm-driven systems to automate risk assessments, credit evaluations, and loan approvals, reducing reliance on traditional manual processes. It also emphasizes, however, how biases embedded in historical data and algorithmic designs can perpetuate systemic inequalities, disproportionately affecting marginalized groups. The analysis then examines key fairness concerns, including sample bias, proxy discrimination, and algorithmic opacity, each of which can produce discriminatory outcomes.
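As an illustrative sketch only (not drawn from the article), one way the disparities named above can be surfaced is a demographic-parity check: comparing approval rates across groups and flagging when the lowest rate falls below a fraction of the highest. The synthetic data, group labels, and the 0.8 threshold (the common "four-fifths" rule of thumb) are all assumptions for the example.

```python
# Illustrative sketch: a demographic-parity check over lending decisions.
# Data is synthetic; the 0.8 threshold is an assumed convention.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Ratio of lowest to highest group approval rate (1.0 = perfect parity)."""
    return min(rates.values()) / max(rates.values())

# Synthetic decisions: group A approved 80/100, group B approved 50/100.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

rates = approval_rates(decisions)   # {'A': 0.8, 'B': 0.5}
ratio = parity_ratio(rates)         # 0.5 / 0.8 = 0.625
print(rates, ratio, ratio >= 0.8)   # 0.625 < 0.8: fails the parity threshold
```

A check like this catches only one narrow notion of fairness; proxy discrimination (e.g., zip code standing in for a protected attribute) can persist even when group-level approval rates look balanced.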
The study further examines the community-level effects, such as the reinforcement of socioeconomic divides and the exacerbation of housing inequalities, which may arise from biased lending decisions. It underscores the tension between the promise of inclusive financial systems and the risk of deepening disparities if ethical considerations and regulatory oversight are inadequate.
In addition to identifying these challenges, the paper proposes actionable strategies for promoting fairness, accountability, and transparency in ML-driven lending. By advocating for robust frameworks, stakeholder collaboration, and continuous monitoring, the article outlines a pathway toward leveraging algorithmic decision-making to achieve equitable outcomes in housing finance.
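The continuous monitoring the paper advocates can be sketched, again purely as an illustration under assumed names and thresholds, as a loop that recomputes group-level approval-rate parity for each batch of decisions and flags batches that breach the threshold for human review:

```python
# Illustrative sketch: continuous fairness monitoring of a lending model.
# Decisions arrive in batches; each batch's group approval rates are
# checked against an assumed parity threshold of 0.8.

def batch_parity(batch, threshold=0.8):
    """batch: list of (group, approved) pairs. Returns (ratio, passes)."""
    totals, approved = {}, {}
    for group, ok in batch:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= threshold

def monitor(batches, threshold=0.8):
    """Yield (batch_index, ratio) for every batch that breaches parity."""
    for i, batch in enumerate(batches):
        ratio, passes = batch_parity(batch, threshold)
        if not passes:
            yield i, ratio
```

In practice a monitor like this would feed an audit log or alerting system rather than a generator, but the core idea, periodic recomputation of a fairness metric with an explicit escalation path, is the same.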
Ultimately, this article calls for a balanced approach that integrates technological innovation with ethical foresight, ensuring that the adoption of machine learning aligns with broader societal goals of fairness, inclusivity, and sustainability.
License
Copyright (c) 2024 Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online)

This work is licensed under a Creative Commons Attribution 4.0 International License.
©2024 All rights reserved by the respective authors and JKLST.