Abstract
Background: ViBE (Visual Background Extractor) is a pixel-level background modeling algorithm with several advantages for moving object detection in video. However, because it uses a fixed threshold to separate the background from the foreground, it handles drastically changing scenes poorly, adapts slowly to sudden illumination changes, and may easily lose the object.
Methods: In this paper, an improved ViBE algorithm is proposed, in which an adaptive dynamic threshold method is introduced to classify the foreground and the background in changing scenes. When a drastic illumination change requires the model to be reconstructed, the Otsu algorithm is used to determine the threshold and to select an appropriate frame for the reconstruction, so that the model adapts quickly to the new lighting.
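To illustrate the classification step, the following is a minimal sketch of a ViBE-style per-pixel test with an adaptively scaled matching radius. The sample count, minimum-match count, and base radius follow the commonly used ViBE defaults; the `change_factor` scaling is only an illustration of the adaptive-threshold idea, not the paper's exact update rule.

```python
import numpy as np

def classify_pixel(pixel, samples, base_radius=20.0, min_matches=2,
                   change_factor=1.0):
    """ViBE-style classification: a pixel is background if at least
    `min_matches` of its stored model samples lie within a matching
    radius. Scaling the radius by `change_factor` (an assumed,
    illustrative rule) widens the tolerance in dynamic scenes."""
    radius = base_radius * change_factor
    matches = np.sum(np.abs(samples.astype(np.float64) - pixel) < radius)
    return bool(matches >= min_matches)  # True => background

# Example: two of the four samples lie within the radius of 100,
# so 100 is classified as background, while 200 is foreground.
model = np.array([100, 101, 150, 160])
is_bg_100 = classify_pixel(100, model)
is_bg_200 = classify_pixel(200, model)
```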
Results: Experimental results show that the proposed algorithm achieves higher recall, better precision, and a higher F value than the original algorithm. The improved algorithm also attains the highest classification accuracy among similar algorithms, significantly improving the detection results.
Conclusion: After analyzing the principle of the ViBE algorithm, this paper proposed improvements addressing its deficiencies from two aspects. To account for the dynamics of different environments, a change factor was introduced to measure the degree of background dynamism. Based on the value of this factor, adaptive clustering was performed and the clustering threshold was updated, improving the algorithm's adaptability to dynamic environments. In the case of an abrupt illumination change, the improved ViBE algorithm finds an appropriate frame from which to reconstruct the model, allowing it to adapt quickly to the new lighting and detect objects more accurately.
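As a rough sketch of the reconstruction trigger described above: the classic Otsu method segments the inter-frame difference image, and if the "changed" region covers most of the frame, a global illumination change is assumed and model reconstruction is triggered. The 0.5 area ratio is an assumed, tunable bound, not a value taken from the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Classic Otsu: pick the gray level maximizing the
    between-class variance of the image histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def illumination_changed(prev, curr, area_ratio=0.5):
    """Flag a sudden global illumination change: if the pixels above
    the Otsu threshold of the frame difference cover more than
    `area_ratio` of the image, whole-model reconstruction is needed."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)
    t = otsu_threshold(diff)
    return bool((diff > t).mean() > area_ratio)

# A uniform +100 brightness jump affects every pixel, so it is
# flagged; an identical frame is not.
prev = np.full((10, 10), 50, dtype=np.uint8)
curr = np.full((10, 10), 150, dtype=np.uint8)
changed = illumination_changed(prev, curr)
```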
Keywords: ViBE, background extractor, adaptive threshold, moving object detection, recall, precision.