Recent Advances in Computer Science and Communications

ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Review Article

Assessing and Mitigating Bias in Artificial Intelligence: A Review

Author(s): Akruti Sinha, Devika Sapra, Deepak Sinwar*, Vijander Singh and Ghanshyam Raghuwanshi

Volume 17, Issue 1, 2024

Published on: 24 July, 2023

Article ID: e230523217242 | Pages: 10

DOI: 10.2174/2666255816666230523114425

Abstract

There has been an exponential increase in discussion of bias in Artificial Intelligence (AI) systems. Bias in AI is typically defined as a divergence from standard statistical patterns in a model's output, which may be caused by a biased dataset or biased assumptions. While bias in trained models is largely attributable to bias in the human-supplied dataset, there is still room for advancement in bias mitigation for AI models. The failure to detect bias in datasets or models stems from the "black box" problem, i.e., a lack of insight into how an algorithm reaches its outcomes. This paper provides a comprehensive review of the approaches proposed by researchers and scholars to mitigate AI bias, and investigates several methods of employing a responsible AI model in decision-making processes. We clarify what bias means to different people and provide a working definition of bias in AI systems. In addition, the paper discusses the causes of bias in AI systems, permitting researchers to focus their efforts on minimising those causes and mitigating bias. Finally, we recommend directions for future research toward the most accurate methods for reducing bias in algorithms. We hope that this study will help researchers to think from different perspectives while developing unbiased systems.
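The statistical notion of bias described above (a divergence in model outputs across groups) can be made concrete with a toy fairness metric. The sketch below computes the demographic parity difference, one common measure from the fairness literature; it is an illustrative example only, not a method taken from the article, and the function names and sample data are hypothetical.

```python
# Minimal sketch of one common bias measure: the demographic parity
# difference, i.e. the gap in positive-outcome rates between two groups.
# All names and data here are illustrative, not from the article.

def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_a, outcomes_b):
    """Absolute gap in positive rates between group A and group B.
    0.0 means parity; larger values indicate more disparate outcomes."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # positive rate = 5/8 = 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # positive rate = 2/8 = 0.25

gap = demographic_parity_difference(group_a, group_b)
print(f"demographic parity difference: {gap:.3f}")  # prints 0.375
```

A gap of zero indicates parity under this metric; auditing toolkits such as AI Fairness 360 (reference [10] in the article) implement this and many related measures at scale.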

Graphical Abstract

© 2024 Bentham Science Publishers