Abstract
Background: In recent years, deep learning techniques have dramatically enhanced mobile robot sensing, navigation, and reasoning. Advances in machine vision technology and algorithms have made visual sensors increasingly important in mobile robot applications. However, gaps remain in deploying these techniques on real robots, owing to the low computational efficiency of current neural network architectures and their limited adaptability to the requirements of robotic experimentation. Notably, AI techniques are used to address several problems in mobile robotics, using vision as the sole source of information or in combination with additional sensors such as lasers or GPS. Many approaches have been proposed over the last few years; they typically build a reliable model of the environment, estimate the robot's position within that model, and control the robot's motion from one location to another.
Objective: The proposed method aims to detect objects in smart home and office environments using an optimized Faster R-CNN and to improve accuracy across different datasets.
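As a point of reference for the detection pipeline, the following is a minimal sketch of running a stock Faster R-CNN detector on an indoor image with torchvision. The pretrained model, the input file name, and the confidence threshold are illustrative assumptions; the paper's optimized variant with its modified ROI layers is not reproduced here.

```python
# Minimal sketch: object detection with a stock, pretrained Faster R-CNN.
# This is a generic baseline, not the authors' optimized model.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("office_scene.jpg").convert("RGB")  # hypothetical input image
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Keep detections above a confidence threshold (0.7 is an arbitrary choice).
keep = predictions["scores"] > 0.7
boxes = predictions["boxes"][keep]    # [N, 4] box corner coordinates
labels = predictions["labels"][keep]  # COCO class indices
print(boxes, labels)
```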
Methods: The proposed methodology uses a novel clustering technique built on Faster R-CNN networks, an effective method for detecting groups of measurements with continuous similarity. The resulting groups are combined with the metric information provided by the robot's distance estimation through an agglomerative hierarchical clustering algorithm. The proposed method also optimizes the ROI layers to generate improved features. A sketch of the clustering step follows.
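The sketch below illustrates the described clustering step under stated assumptions: appearance descriptors (e.g. pooled Faster R-CNN ROI features) and metric distance estimates are fused and grouped with agglomerative hierarchical clustering. The function name, the weighting parameter alpha, and the cut threshold are illustrative choices, not the authors' exact formulation.

```python
# Sketch: coupling appearance similarity with metric distance via
# agglomerative hierarchical clustering (all parameters are assumptions).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_places(features, positions, alpha=0.5, threshold=1.0):
    """features: [N, D] appearance descriptors; positions: [N, 2] estimated (x, y)."""
    # Cosine distance captures appearance similarity between observations.
    appearance = pdist(features, metric="cosine")
    # Euclidean distance on estimated positions captures metric separation.
    metric = pdist(positions, metric="euclidean")
    metric = metric / (metric.max() + 1e-9)        # normalize to [0, 1]
    # Weighted combination of the two cues (alpha is an assumed parameter).
    combined = alpha * appearance + (1.0 - alpha) * metric
    # Average-linkage agglomerative clustering over the combined distances.
    tree = linkage(combined, method="average")
    return fcluster(tree, t=threshold, criterion="distance")

# Example usage with random stand-in data.
feats = np.random.rand(20, 128)
poses = np.random.rand(20, 2) * 10.0
print(cluster_places(feats, poses))
```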
Results: The proposed approach is tested on indoor and outdoor datasets, producing topological maps that aid semantic localization. We show that the system correctly categorizes places when the robot returns to the same area, despite potential lighting variations. The developed method provides better accuracy than the VGG-19 and R-CNN methods.
Conclusion: The findings were positive, indicating that accurate categorization can be achieved even under varying illumination conditions by adequately designing the semantic map of an area. The Faster R-CNN model shows the lowest error rate among the three evaluated models.
Keywords: Convolutional neural network, robot localization, semantic segmentation, mobile robotics, deep learning, clustering.