Recent Patents on Engineering

ISSN (Print): 1872-2121
ISSN (Online): 2212-4047

Review Article

Explainable Artificial Intelligence (XAI) Approaches in Predictive Maintenance: A Review

Author(s): Jeetesh Sharma*, Murari Lal Mittal, Gunjan Soni and Arvind Keprate

Volume 18, Issue 5, 2024

Published on: 06 June, 2023

Article ID: e170423215860 Pages: 9

DOI: 10.2174/1872212118666230417084231

Abstract

Predictive maintenance (PdM) is a technique that monitors the condition and performance of equipment during normal operation to reduce the likelihood of failures. Accurate anomaly detection, fault diagnosis, and fault prognosis form the basis of a PdM procedure. This paper aims to explore and discuss research addressing PdM using machine learning, together with the complications that explainable artificial intelligence (XAI) techniques can help resolve. While machine learning and artificial intelligence techniques have attracted great interest in recent years, the absence of model interpretability or explainability in several machine learning models, owing to their black-box nature, requires further research. XAI investigates the explainability of machine learning models. This article overviews the maintenance strategies, post-hoc explanations, model-specific explanations, and model-agnostic explanations currently in use. Even though machine learning-based PdM has gained considerable attention, less emphasis has been placed on XAI approaches in PdM. Based on our findings, XAI techniques can bring new insights and opportunities for addressing critical maintenance issues, resulting in more informed decisions. The analysis of the results suggests a viable path for future studies.
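
To make the idea of a post-hoc, model-agnostic explanation concrete, the following is a minimal, illustrative sketch (not taken from the reviewed article): permutation feature importance from scikit-learn is applied to a hypothetical PdM fault classifier trained on synthetic condition-monitoring data. The feature names, the data, and the model choice are assumptions made purely for demonstration.

```python
# Illustrative sketch only: a post-hoc, model-agnostic explanation of a
# hypothetical PdM fault classifier. Feature names and synthetic data are
# assumptions for demonstration, not from the reviewed paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["vibration_rms", "bearing_temp", "oil_pressure", "motor_current"]

# Synthetic sensor data: failure label driven mainly by vibration and temperature.
X = rng.normal(size=(1000, 4))
y = ((1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * rng.normal(size=1000)) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc, model-agnostic explanation: how much does shuffling each sensor
# feature degrade the classifier's score on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name:>15s}: {mean_imp:.3f}")
```

Because permutation importance only queries the trained model's predictions, the same procedure works for any classifier (tree ensemble, SVM, or neural network), which is what distinguishes model-agnostic explanations from model-specific ones.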

