Abstract
Introduction: Image processing technology is widely used for crack detection: a data acquisition system is built and computer vision techniques are applied to analyze the images. Because such methods are simple to process, relatively easy to deploy, and low in cost, many image-processing-based detection methods have been proposed.
Methods: Non-uniform external lighting usually distorts the appearance of each target in the image, which can cause the detection to fail. In this case, the image must be processed with a gamma transform. Based on an analysis of the characteristics of mine car baffle images, this paper improves the gamma transform and uses it to enhance the images.
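The abstract does not specify the exact form of the improved transform; as a minimal sketch, the standard gamma (power-law) transform it builds on can be written as follows (the `gamma` parameter value is illustrative, not taken from the paper):

```python
import numpy as np

def gamma_transform(img, gamma=0.5):
    """Apply a gamma (power-law) transform to an 8-bit grayscale image.

    Pixel values are normalized to [0, 1], raised to the power `gamma`,
    and rescaled to [0, 255]. gamma < 1 brightens dark regions (useful
    under weak lighting); gamma > 1 darkens over-exposed ones.
    """
    normalized = img.astype(np.float64) / 255.0
    corrected = np.power(normalized, gamma)
    return np.clip(corrected * 255.0, 0.0, 255.0).astype(np.uint8)
```

In practice, an "improved" gamma transform typically chooses `gamma` adaptively from image statistics (e.g. the mean brightness) rather than fixing it by hand, so that images captured under varying mine lighting are normalized consistently.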
Results: The results show that the proposed algorithm can accurately detect crack areas with an actual width greater than 1.2 mm, and that the error between the detected crack length and the actual length is within ±2 mm. In practice, this error is acceptable.
Discussion: To compare the performance of the proposed crack detection method with existing methods, the two most well-known traditional methods, Canny and Sobel edge detection, are selected as baselines. Although Sobel edge detection provides some crack information, the surface texture of the mine car baffle interferes strongly with crack identification.
Conclusion: If cracks appearing on a mine car baffle are not found in time, they often cause accidents, so effective crack detection must be performed. Manual inspection is labor-intensive and prone to missed detections. To reduce the labor of mine car crack detection and improve its accuracy, this paper, based on the constructed detection platform, performs pre-processing, image enhancement, and convolution operations on the collected crack images of the mine car baffle.
Keywords: Car baffle, crack detection, image analysis, gamma transform, gradient filtering, computer vision technology.