
Recent Advances in Computer Science and Communications


ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Review Article

A Survey of Explainable Artificial Intelligence in Bio-signals Analysis

Author(s): Sow Chen Wei, Yun-Huoy Choo*, Azah Kamilah Muda and Lee Chien Sing

Volume 16, Issue 3, 2023

Published on: 23 August, 2022

Article ID: e160522204851 Pages: 10

DOI: 10.2174/2666255815666220516141153


Abstract

Background: Despite the high level of business interest in Artificial Intelligence (AI), actual AI adoption remains much lower. A lack of consumer trust has been found to adversely influence consumers’ evaluations of information provided by AI, hence the need to explain model results.

Methods: The need is especially acute in clinical practice and juridical enforcement, where improvements in both prediction and interpretation are crucial. Bio-signals analysis, such as EEG diagnosis, usually involves complex learning models that are difficult to explain, so an explanatory module is imperative before results are released to the general public. This research presents a systematic review of explainable artificial intelligence (XAI) advances in the research community, focusing on recent XAI efforts in bio-signals analysis. Due to the popularity of deep learning models in many use cases, post-hoc explanatory models are favored over the intrinsically interpretable model approach.
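To make the post-hoc explanatory approach concrete, the following minimal Python sketch attaches a model-agnostic permutation-importance explainer to a black-box bio-signal classifier. It is an illustrative assumption, not a method from the surveyed works: the toy classifier, data, and feature layout (one band-power value per EEG channel) are hypothetical.

import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """predict: black-box function mapping an (n_samples, n_features) array to
    predicted labels; X, y: held-out evaluation data; returns, for each feature,
    the mean drop in accuracy when that feature alone is randomly permuted."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)        # accuracy with intact features
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])              # destroy feature j only
            drops[j] += baseline - np.mean(predict(Xp) == y)
    return drops / n_repeats

# Toy stand-in for a trained EEG classifier: the label depends only on feature 0
# (e.g., alpha-band power of one channel), so feature 0 should dominate.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)
print(permutation_importance(lambda A: (A[:, 0] > 0).astype(int), X, y))

Because the explainer only needs a predict function, the same wrapper could in principle sit on top of any deep learning bio-signal model without modifying the model itself, which is the practical appeal of the post-hoc approach described above.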

Results: Verification and validation of explanatory models appear to be one of the crucial gaps in XAI bio-signals research. Currently, human expert evaluation is the easiest validation approach. Although the bio-signals community places high trust in this human-directed approach, it suffers from personal and social bias.

Conclusion: Hence, future research should investigate more objective evaluation measures to achieve inclusiveness, reliability, transparency, and consistency in the XAI framework.
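As one illustration of what a more objective evaluation measure could look like, the Python sketch below implements a deletion-style faithfulness test: if an explanation's top-ranked features are genuinely influential, zeroing them out should degrade the classifier more than zeroing randomly chosen features. This is a hedged example under assumed inputs (a predict function, evaluation data, and a feature ranking), not an evaluation protocol proposed in this survey.

import numpy as np

def deletion_faithfulness(predict, X, y, ranking, k, trials=20, seed=0):
    """Compare accuracy after deleting the k features named by `ranking` against
    deleting k random features; a larger positive gap supports the explanation."""
    rng = np.random.default_rng(seed)

    def acc_after_deleting(cols):
        Xd = X.copy()
        Xd[:, np.asarray(cols)] = 0.0          # "delete" features by zeroing them
        return np.mean(predict(Xd) == y)

    acc_top = acc_after_deleting(ranking[:k])
    acc_rand = np.mean([
        acc_after_deleting(rng.choice(X.shape[1], size=k, replace=False))
        for _ in range(trials)
    ])
    return acc_rand - acc_top

# Usage (hypothetical): gap = deletion_faithfulness(model_predict, X_val, y_val,
#                                                   importance_ranking, k=3)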

Keywords: Explainable artificial intelligence, interpretability, explanatory, black-box explainer, bio-signals analysis, artificial intelligence productization.


Rights & Permissions Print Cite
© 2024 Bentham Science Publishers | Privacy Policy