Cognitive Biases:

Understanding and Designing Fair AI Systems for Software Development

Authors

  • Sheriff Adefolarin Adepoju Department of Computer Science, College of Engineering, Prairie View A&M University, Texas, United States Author https://orcid.org/0009-0006-6741-8518
  • Mildred Aiwanno-Ose Adepoju Department of Computer Information Systems, College of Engineering, Prairie View A&M University, Texas, United States Author

DOI:

https://doi.org/10.60087/jklst.v4.n2.004

Keywords:

Cognitive Biases, Fair AI Systems, Algorithmic Bias, Software Development, Bias Mitigation, Fairness in Software Development, Bias Mitigation in AI Systems

Abstract

Artificial Intelligence (AI) systems, while advancing software development, are often susceptible to cognitive biases that lead to unfair outcomes. This study explores the roles of confirmation bias, anchoring bias, and automation bias in influencing AI decision-making. These biases commonly emerge from unrepresentative datasets, algorithmic design flaws, and subjective human decisions. Through a qualitative methodology involving literature review and case analysis, the research identifies the origins and manifestations of cognitive bias in AI, particularly within domains like criminal justice, healthcare, and recruitment. The study proposes several mitigation strategies: incorporating diverse and representative data, adopting fairness-aware algorithm designs, and conducting routine bias audits. Evaluation criteria include each strategy’s effectiveness, feasibility, transparency, and scalability. Findings indicate that while these techniques significantly improve fairness in AI outputs, they also present practical challenges such as reduced model precision and resource constraints. The study emphasizes that eliminating cognitive bias requires not only technical adjustments but also interdisciplinary collaboration and ethical considerations. The findings serve as a guide for developers, stakeholders, and policymakers aiming to design responsible AI systems that uphold transparency, accountability, and social equity across software development environments.
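One of the mitigation strategies the abstract proposes, routine bias audits, can be sketched in a few lines of Python. The example below is illustrative only (the hiring data, group labels, and the four-fifths threshold are assumptions, not taken from the study); it compares positive-outcome rates across demographic groups via the disparate-impact ratio, a common audit metric:

```python
from collections import defaultdict

def disparate_impact(decisions, groups, privileged):
    """Ratio of the unprivileged group's positive-outcome rate to the
    privileged group's rate. A value of 1.0 indicates parity."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    unprivileged_rate = min(rates[g] for g in rates if g != privileged)
    return unprivileged_rate / rates[privileged]

# Hypothetical hiring decisions (1 = offer extended) for two applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, privileged="A")
# A ratio below the commonly cited four-fifths (0.8) threshold would
# flag the model for closer review in a routine audit.
print(f"disparate impact = {ratio:.2f}")
```

In practice such a check would run on held-out evaluation data as part of a recurring audit pipeline, alongside the transparency and scalability criteria the study uses to evaluate each strategy.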


References

Bernault, C., Juan, S., Delmas, A., Andre, J. M., Rodier, M., & Chraibi Kaadoud, I. (2023, July). Assessing the impact of cognitive biases in AI project development. In International Conference on Human-Computer Interaction (pp. 401–420). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-35891-3_24

Scatiggio, V. (2020). Tackling the issue of bias in artificial intelligence to design AI-driven fair and inclusive service systems: How human biases are breaching into AI algorithms, with severe impacts on individuals and societies, and what designers can do to face this phenomenon and change for the better. https://hdl.handle.net/10589/186118

Vakali, A., & Tantalaki, N. (2024). Rolling in the deep of cognitive and AI biases. arXiv preprint arXiv:2407.21202. https://doi.org/10.48550/arXiv.2407.21202

Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence (NIST Special Publication 1270). Gaithersburg, MD: US Department of Commerce, National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270

Varona, D., & Suárez, J. L. (2022). Discrimination, bias, fairness, and trustworthy AI. Applied Sciences, 12(12), 5826. https://doi.org/10.3390/app12125826

Chen, Y., Clayton, E. W., Novak, L. L., Anders, S., & Malin, B. (2023). Human-centered design to address biases in artificial intelligence. Journal of Medical Internet Research, 25, e43251. https://doi.org/10.2196/43251

AlMakinah, R., Goodarzi, M., Tok, B., & Canbaz, M. A. (2024). Mapping artificial intelligence bias: A network-based framework for analysis and mitigation. AI and Ethics, 1–20.

Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., ... & Zhang, Y. (2019). AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 63(4/5), 4:1–4:15. https://doi.org/10.1147/JRD.2019.2942287

Fuad, N. R. (n.d.). Cognitive bias and AI technology: The human element in automated recruitment and selection.

Tejani, A. S., Ng, Y. S., Xi, Y., & Rayan, J. C. (2024). Understanding and mitigating bias in imaging artificial intelligence. Radiographics, 44(5), e230067. https://doi.org/10.1148/rg.230067

Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388–409. https://doi.org/10.1080/0960085X.2021.1927212

Mensah, G. B. (2023). Artificial intelligence and ethics: A comprehensive review of bias mitigation, transparency, and accountability in AI systems. Preprint.

Militello, L. G., Diiulio, J., Wilson, D. L., Nguyen, K. A., Harle, C. A., Gellad, W., & Lo-Ciganic, W. H. (2025). Using human factors methods to mitigate bias in artificial intelligence-based clinical decision support. Journal of the American Medical Informatics Association, 32(2), 398–403. https://doi.org/10.1093/jamia/ocae291

Oguntibeju, O. O. (2024). Mitigating artificial intelligence bias in financial systems: A comparative analysis of debiasing techniques. Asian Journal of Research in Computer Science, 17(12), 165–178. https://doi.org/10.9734/ajrcos/2024/v17i12536

Pant, A., Hoda, R., Tantithamthavorn, C., & Turhan, B. (2024). Navigating fairness: Practitioners' understanding, challenges, and strategies in AI/ML development. arXiv preprint arXiv:2403.15481. https://doi.org/10.48550/arXiv.2403.15481

Balayn, A., Lofi, C., & Houben, G. J. (2021). Managing bias and unfairness in data for decision support: A survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems. The VLDB Journal, 30(5), 739–768.

Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist—it's time to make it fair. Nature, 559, 324–326. https://doi.org/10.1038/d41586-018-05707-8

Hall, P., & Ellis, D. (2023). A systematic review of socio-technical gender bias in AI algorithms. Online Information Review, 47(7), 1264–1279. https://doi.org/10.1108/OIR-08-2021-0452

Cirillo, D., & Rementeria, M. J. (2022). Bias and fairness in machine learning and artificial intelligence. In Sex and gender bias in technology and artificial intelligence (pp. 57–75). Academic Press. https://doi.org/10.1016/B978-0-12-821392-6.00006-6

Adepoju, S. A. (2025). How machine learning can revolutionize building comfort: Accessing the impact of occupancy prediction models on HVAC control system. World Journal of Advanced Research and Reviews, 25(1), 2315–2327. https://doi.org/10.30574/wjarr.2025.25.1.0161

Soleimani, M. (2022). Developing unbiased artificial intelligence in recruitment and selection: a processual framework: a dissertation presented in partial fulfillment of the requirements for the degree of doctor of philosophy in management at Massey University, Albany, Auckland, New Zealand (Doctoral dissertation, Massey University). http://hdl.handle.net/10179/17686

Devillers, L., Fogelman-Soulié, F., & Baeza-Yates, R. (2021). AI & human values: Inequalities, biases, fairness, nudge, and feedback loops. In Reflections on artificial intelligence for humanity (pp. 76–89).

Cary Jr, M. P., Bessias, S., McCall, J., Pencina, M. J., Grady, S. D., Lytle, K., & Economou‐Zavlanos, N. J. (2025). Empowering nurses to champion health equity & BE FAIR: Bias elimination for fair and responsible AI in healthcare. Journal of Nursing Scholarship, 57(1), 130–139. https://doi.org/10.1111/jnu.13007

Desouza, K. C., Dawson, G. S., & Chenok, D. (2020). Designing, developing, and deploying artificial intelligence systems: Lessons from and for the public sector. Business Horizons, 63(2), 205–213. https://doi.org/10.1016/j.bushor.2019.11.004

Smith, C. J. (2019). Designing trustworthy AI: A human-machine teaming framework to guide development. arXiv preprint arXiv:1910.03515. https://doi.org/10.48550/arXiv.1910.03515


Published

15-05-2025

How to Cite

Adepoju, S., & Adepoju, M. (2025). Cognitive Biases: Understanding and Designing Fair AI Systems for Software Development. Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online), 4(2), 44–54. https://doi.org/10.60087/jklst.v4.n2.004