Abstract
The advent of technology has brought seismic shifts to our lives; it is hard to
imagine a world without innovations, such as smartphones and the internet, that
were considered groundbreaking only a short while ago. Among the fields that
have seen tremendous growth in recent years are artificial intelligence and
computer vision. Object detection and recognition is, and will continue to be, one
of the most important areas of research in the coming years, driven by the
ever-increasing demand for technologies that substitute for the human eye. Traffic
sign detection and recognition is an important subset of this area, with
far-reaching real-world benefits.
Various methods and algorithms have been proposed to achieve this in the past few
years, with newer techniques improving upon previous work. The emergence of
advanced driver assistance systems (ADAS) has led many companies to test such
systems on new car models for better accuracy and reliability, although there is
still some way to go before object recognition algorithms are deployed in ADAS
worldwide. Traffic sign detection and recognition is an important part of these
systems. This work
proposes one such traffic sign recognition method. The proposed system is
implemented in two stages, namely detection and recognition. The former is
implemented using the You Only Look Once (YOLO) detection algorithm, which
divides the image into a grid and predicts bounding boxes, followed by the
probability that a particular object is present in each grid cell.
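The grid-based prediction described above can be sketched as follows. This is a minimal illustration of decoding a YOLO-style output grid, not the authors' implementation; the grid size S, the one-box-per-cell layout, and the 0.5 score threshold are all assumptions for illustration.

```python
import numpy as np

# Illustrative decoding of a YOLO-style output grid (not the paper's code).
# Assume an S x S grid where each cell predicts one box (x, y, w, h, confidence)
# plus C class probabilities; shapes and threshold are assumptions.
S, C = 7, 43
rng = np.random.default_rng(0)
grid = rng.random((S, S, 5 + C))                 # stand-in for network output

boxes = grid[..., :4]                            # (x, y, w, h) per cell
confidence = grid[..., 4]                        # objectness per cell
class_probs = grid[..., 5:]                      # per-class probabilities

# Score of each class in each cell: P(object) * P(class | object)
scores = confidence[..., None] * class_probs     # shape (S, S, C)
best_class = scores.argmax(axis=-1)              # most likely class per cell
best_score = scores.max(axis=-1)                 # its score per cell

# Keep only cells whose best detection clears the threshold
detections = [(i, j, best_class[i, j], best_score[i, j], boxes[i, j])
              for i in range(S) for j in range(S)
              if best_score[i, j] > 0.5]
```

In practice the surviving boxes would additionally be filtered with non-maximum suppression before being passed to the recognition stage.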
For the latter stage, a 4-layer CNN model is deployed to classify the detected
object into 43 separate classes. The model is trained on the German Traffic Sign
Recognition Benchmark (GTSRB) dataset.
When tested against other standard models such as VGGNet and ResNet-50, the
proposed model was found to be more accurate. In real-time implementation, the
proposed model achieves a training accuracy of 99.51% and a testing accuracy of
97.13%.
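As an illustration of the recognition stage, the following is a minimal sketch of a 4-layer CNN classifying images into 43 traffic sign classes, written here in PyTorch. The layer widths, kernel sizes, and the 32x32 RGB input resolution are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Hypothetical 4-layer CNN for 43-class traffic sign recognition.
# Layer sizes and the 32x32 RGB input are illustrative assumptions,
# not the architecture reported in the paper.
class TrafficSignCNN(nn.Module):
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            # Layer 1: convolution + ReLU, then downsample 32x32 -> 16x16
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            # Layer 2: 16x16 -> 8x8
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            # Layer 3: 8x8 -> 4x4
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Layer 4: fully connected classifier over the 43 classes
        self.classifier = nn.Linear(128 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TrafficSignCNN()
logits = model(torch.randn(1, 3, 32, 32))   # one dummy 32x32 RGB image
print(logits.shape)                         # one score per traffic sign class
```

In a full pipeline, crops produced by the YOLO detection stage would be resized to the network's input resolution and passed through this classifier.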