Abstract
Background: In contrast to the high level of interest in Artificial Intelligence (AI) for business, actual AI adoption remains much lower. A lack of consumer trust has been found to adversely influence consumers’ evaluations of information provided by AI, hence the need for explanations of model results.
Methods: This is especially the case in clinical practice and judicial enforcement, where improvements in both prediction and interpretation are crucial. Bio-signals analysis, such as EEG diagnosis, usually involves complex learning models that are difficult to explain. An explanatory module is therefore imperative if the results are to be released to the general public. This research presents a systematic review of explainable artificial intelligence (XAI) advancements in the research community. Recent XAI efforts on bio-signals analysis were reviewed. Owing to the popularity of deep learning models in many use cases, the explanatory models reviewed favor the interpretable-model approach.
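For illustration only, a minimal sketch of the post-hoc, model-agnostic style of explanation discussed above is shown below; it is not drawn from any of the reviewed studies, and the synthetic data, feature names, and choice of permutation feature importance are assumptions made purely for demonstration.

```python
# Minimal sketch (illustrative assumption, not the reviewed authors' method):
# attach a post-hoc, model-agnostic explanation to a "black-box" classifier
# trained on synthetic, EEG-like band-power features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical band-power features per EEG channel (names are made up).
feature_names = ["delta_Fz", "theta_Fz", "alpha_Oz", "beta_Cz", "gamma_Cz"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic label driven mostly by alpha_Oz and theta_Fz, plus noise.
y = (1.2 * X[:, 2] - 0.8 * X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Black-box" model standing in for a complex bio-signal classifier.
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Post-hoc explanation: permutation feature importance on held-out data.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

In this toy setting, the explanation should rank alpha_Oz and theta_Fz highest, mirroring how such scores are used to check whether a model relies on physiologically plausible signal components.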
Results: The verification and validation of explanatory models appears to be one of the crucial gaps in XAI bio-signals research. Currently, human expert evaluation is the most straightforward validation approach. Although the bio-signals community places high trust in this human-directed approach, it suffers from personal and social bias issues.
Conclusion: Hence, future research should investigate more objective evaluation measures to achieve the characteristics of inclusiveness, reliability, transparency, and consistency in the XAI framework.
Keywords: Explainable artificial intelligence, interpretability, explanatory, black-box explainer, bio-signals analysis, artificial intelligence productization.
Graphical Abstract