Abstract
Computer vision is one of the prime domains that enables the extraction of meaningful, crisp information from digital media such as images, videos, and other visual inputs.
Background: Detecting and correctly tracking moving objects in a video stream remains a challenging problem in India, where the high density of vehicles makes it difficult to identify objects on the roads correctly.
Methods: In this work, we used the YOLOv5 (You Only Look Once) algorithm, the latest in the YOLO family, to identify different objects on the road, such as trucks, cars, trams, and vans. YOLOv5 was trained on the KITTI dataset, which contains 11,682 images of various objects in a traffic surveillance setting. After training and validation, three different models were constructed with varying parameters. To further validate the proposed approach, the results were also evaluated on the Indian traffic dataset DATS_2022.
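Detectors in the YOLO family score a predicted bounding box against the ground truth using intersection over union (IoU), which also drives the matching behind the metrics reported below. A minimal sketch of that computation (the `(x1, y1, x2, y2)` corner format is an assumption for illustration, not the paper's exact convention):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice).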
Results: All models were evaluated using three performance metrics: precision, recall, and mean average precision (mAP). The final model attained the best performance on the KITTI dataset, with 93.5% precision, 90.7% recall, and an mAP of 0.67 across the different object classes. On the Indian traffic dataset DATS_2022, it attained a precision of 0.65, a recall of 0.78, and an mAP of 0.74.
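These metrics follow the standard definitions: precision = TP / (TP + FP), recall = TP / (TP + FN), and mAP is the mean of the per-class average precision. A minimal illustration (the counts and per-class AP values below are invented for the example, not taken from the paper's experiments):

```python
def precision(tp, fp):
    """Fraction of predicted detections that are correct."""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    """Fraction of ground-truth objects that are detected."""
    return tp / (tp + fn) if tp + fn else 0.0

def mean_ap(per_class_ap):
    """mAP: mean of average-precision values over all object classes."""
    return sum(per_class_ap) / len(per_class_ap)

# Hypothetical counts for one class: 90 true positives,
# 10 false positives, 20 missed objects (false negatives).
p = precision(90, 10)      # 0.9
r = recall(90, 20)         # ~0.818
m = mean_ap([0.70, 0.64])  # 0.67
```

In practice, per-class AP is computed from the full ranked precision-recall curve of the detector's scored outputs; the arithmetic above shows only the point-estimate definitions.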
Conclusion: The results show that the proposed model outperforms state-of-the-art approaches in detection performance while also reducing computation time and object loss.
Graphical Abstract