Introduction of Robot Vision on the Aspects from Configuration to Measurement and Control Methods
Page: 1-14 (14)
Author: De Xu
DOI: 10.2174/978160805166311001010001
Abstract
Robot vision is a multidisciplinary field of science and technology that enables a robot to see. The aspects attracting the attention of researchers in the robotics community include the architecture and calibration of visual systems, visual measurement methods, and visual control approaches. Each of these aspects is investigated and analyzed in light of current work, and its future trends are predicted. Visual measurement principles, from parallax to knowledge-based methods, and visual control strategies, from traditional control methods to humanoid approaches, are regarded as having a promising future.
Hardware and Software Design of an Embedded Vision System
Page: 15-29 (15)
Author: Jia Liu
DOI: 10.2174/978160805166311001010015
Abstract
A vision system is essential for a robot to sense the environment in which it works. Recently, embedded vision systems such as smart cameras have been developed rapidly and used widely. In this chapter, an embedded robot vision system built on an ARM processor and a CMOS image sensor is introduced. Its hardware structure, software design, and some useful programs are described in detail.
Embedded Vision Positioning System Based on ARM Processor
Page: 30-46 (17)
Author: Wei Zou, De Xu and Junzhi Yu
DOI: 10.2174/978160805166311001010030
Abstract
This chapter presents an embedded system for image capturing and visual measurement. The system has a single-processor architecture built around an ARM processor operating at 406 MHz. The image capturing device is a CMOS camera based on the OV7620 chip, which grabs images at 30 frames per second. The program code is stored in Flash memory and executed from SDRAM. The system's functions include object segmentation, object detection, and positioning. In the experiments, the object to be identified is a color block, and segmentation and detection are accomplished by checking for its color. Experimental results verify the robustness and performance of the proposed system.
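The color-block segmentation and positioning described above can be sketched as follows. The RGB threshold values and the nested-list image format are illustrative assumptions, not the chapter's actual parameters or data structures.

```python
def segment_color_block(image, lower, upper):
    """Return a binary mask of pixels whose RGB values fall inside the
    [lower, upper] range. `image` is a list of rows of (r, g, b) tuples."""
    return [[all(lo <= c <= hi for c, lo, hi in zip(px, lower, upper))
             for px in row]
            for row in image]

def locate_block(mask):
    """Compute the centroid and bounding box of the segmented pixels,
    i.e. the positioning step after segmentation and detection."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, on in enumerate(row) if on]
    if not pts:
        return None
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    centroid = (sum(xs) / len(pts), sum(ys) / len(pts))
    bbox = (min(xs), min(ys), max(xs), max(ys))
    return centroid, bbox
```

For example, a reddish 2×2 block in a dark 4×4 image yields a centroid at its center and a tight bounding box around it.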
Collaboration Based Self-localization Algorithm for Humanoid Robot with Embedded Vision System
Page: 47-55 (9)
Author: Wei Zou, De Xu and Junzhi Yu
DOI: 10.2174/978160805166311001010047
Abstract
Collaborative self-localization for humanoid robots is a practical challenge; here we present a novel self-localization algorithm for collaborating humanoid robots with an embedded vision system. The algorithm runs on an embedded vision system built around a DM642 DSP with an Ethernet controller and an 802.11g WiFi module, which provides a high-speed network for the large-throughput communication required among multiple robots. The self-localization algorithm analyzes image information at ten frames per second (720×480 pixels) and localizes each robot by fusing the vision information of its collaborating robots via WiFi communication. Experimental results on collaboration among multiple humanoid soccer robots demonstrate the effectiveness of our embedded vision system and self-localization algorithm.
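One common way to fuse position estimates from collaborating robots is inverse-variance weighting, where more certain estimates count more. This is a minimal sketch of that idea only; the chapter's actual fusion rule is not specified here and may differ.

```python
def fuse_estimates(estimates):
    """Fuse (x, y, variance) position estimates from collaborating robots
    by inverse-variance weighting. The weighting scheme is an illustrative
    assumption, not necessarily the chapter's exact method."""
    wsum = sum(1.0 / var for _, _, var in estimates)
    x = sum(px / var for px, _, var in estimates) / wsum
    y = sum(py / var for _, py, var in estimates) / wsum
    fused_var = 1.0 / wsum            # fused estimate is more certain
    return x, y, fused_var
```

Two equally uncertain estimates average out, and the fused variance is lower than either input, which is why sharing observations over WiFi can improve each robot's localization.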
Application of Vision Sensor to Seam Tracking of Butt Joint in Container Manufacture
Page: 56-82 (27)
Author: Zao Jun Fang and De Xu
DOI: 10.2174/978160805166311001010056
Abstract
This chapter presents the design of a vision-based seam tracking system for butt joints of thin plates. An image-based visual control method is adopted for its many merits. A key feature of the system is that it is compact and reliable: a smart camera serves as the vision sensor and a programmable logic controller (PLC) as the controller. Because the seam is very narrow, the visual measurement relies on natural lighting. To extract image features effectively, an "AND" operation is proposed to eliminate the effect of welding splash. Moreover, a new adaptive thresholding method is presented to segment the seam from the image, and a combination of the Hough transform and least-squares line fitting is used to extract the seam line. For the controller, a novel method is proposed to define reference and feedback image features so that a closed loop is formed in image space. In addition, image error filtering and output pulse verification are added to the controller to improve reliability. Finally, a series of experiments verifies the performance of the proposed seam tracking system.
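Two of the steps above can be sketched briefly: the inter-frame "AND" operation that suppresses transient splash, and the least-squares line fit that refines a Hough-transform candidate into the seam line. The slope-intercept parameterization and the frame representation here are illustrative assumptions.

```python
def and_frames(mask_a, mask_b):
    """'AND' two consecutive binary images: weld splash is transient and
    appears in only one frame, while the seam persists across frames, so
    the pixel-wise intersection suppresses splash."""
    return [[a and b for a, b in zip(ra, rb)]
            for ra, rb in zip(mask_a, mask_b)]

def fit_line(points):
    """Least-squares fit of y = a*x + b to seam edge points (x, y),
    a sketch of the refinement applied after the Hough transform."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

In practice the Hough transform supplies a coarse line hypothesis and selects which edge points belong to the seam; the least-squares fit over those points then gives a sub-pixel-accurate line.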
Vision System Design and Motion Planning for Table Tennis Robot
Page: 83-102 (20)
Author: Zheng Tao Zhang, Ping Yang and De Xu
DOI: 10.2174/978160805166311001010083
Abstract
The state of the art of table tennis robots is introduced. Then a binocular stereo vision system and related algorithms are proposed, including the image processing that finds the ball and the trajectory prediction model. The vision system integrates two smart cameras and is used to track the table tennis ball. It adopts a distributed parallel processing architecture based on a local area network. A set of novel algorithms with little computation and good robustness, running in the smart cameras, is proposed to recognize and track the ball in the images. A computer receives the image coordinates of the ball from the cameras via the local area network and computes its 3D positions in the working frame. The flight trajectory of the ball is then estimated and predicted from the measured positions using flight and rebound models. The main motion parameters of the ball, such as the landing point and striking point, are calculated from the predicted trajectory, and the motion of the robot's paddle is planned accordingly. Experimental results show that the developed image processing algorithms are robust enough to distinguish the ball from a complex dynamic background, the predicted landing and striking points have satisfactory precision, and the robot can successfully strike the ball to the opponent's half of the table.
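A flight-and-rebound prediction of the kind described above can be sketched with a simple time-stepped model. Drag and spin are ignored and the restitution coefficient is an illustrative assumption; the chapter's fitted flight and rebound models are more detailed.

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(pos, vel, dt=0.001, restitution=0.88):
    """Step a minimal flight model forward in time and return the (x, y)
    landing point where the ball first reaches the table plane z = 0.
    On impact the vertical velocity is reversed and damped (rebound)."""
    x, y, z = pos
    vx, vy, vz = vel
    while True:
        x += vx * dt
        y += vy * dt
        z += vz * dt
        vz -= G * dt                  # gravity only; drag is ignored
        if z <= 0.0 and vz < 0.0:     # ball meets the table, descending
            vz = -vz * restitution    # rebound model (not used further here)
            return (x, y)
```

With the landing point and the post-rebound velocity, the striking point can be predicted the same way by continuing the integration after the bounce.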
Object Recognition Using Local Context Information
Page: 103-118 (16)
Author: Nong Sang and Changxin Gao
DOI: 10.2174/978160805166311001010103
Abstract
Object recognition based on local features is significant in computer vision. However, its robustness is limited, since it is often sensitive to large intra-class variance, occlusion, significant pose variation, low-resolution conditions, background clutter, etc. Context information provides a way to address this problem: local feature context, object context, and scene context can all be used in a computer vision system. This chapter focuses on the first and presents two object recognition approaches with two different types of local context: neighbour-based context and geometric context. Our experimental results demonstrate the good performance of these methods.
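The idea of neighbour-based context can be illustrated by augmenting a local feature with a histogram of the labels of its nearest neighbouring features. This is a minimal sketch of the general idea only; the chapter's actual descriptors and matching scheme are not specified here.

```python
from collections import Counter
from math import dist

def neighbour_context(features, index, k=3):
    """Describe the feature at `index` by the labels of its k nearest
    neighbours. `features` is a list of ((x, y), label) pairs; the
    resulting label histogram makes the feature less ambiguous than
    its appearance alone."""
    (cx, cy), _ = features[index]
    others = [f for i, f in enumerate(features) if i != index]
    others.sort(key=lambda f: dist(f[0], (cx, cy)))
    return Counter(label for _, label in others[:k])
```

Two visually similar features can then be distinguished by their surroundings: a "wheel" feature next to "door" and "window" features is more likely part of a car than one surrounded by clutter.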
The Structured Light Vision System and Application in Reverse Engineering and Rapid Prototyping
Page: 119-131 (13)
Author: Bingwei He and Shengyong Chen
DOI: 10.2174/978160805166311001010119
Abstract
In recent years, computer vision systems have built computer models of the real world by processing image data from sensors such as cameras or range scanners. Reverse engineering (RE) techniques, which start by acquiring three-dimensional surface data of an object, have been developed to convert point cloud data into CAD models in NURBS or STL (triangular mesh) format. These CAD models can then be fabricated by material-incremental methods such as rapid prototyping (RP).
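The STL (triangular mesh) format mentioned above is simple to generate once a mesh exists. The sketch below serializes triangles to ASCII STL, computing facet normals from the vertex winding; it is a minimal illustration of the output format, not the chapter's RE pipeline.

```python
def write_ascii_stl(triangles, name="model"):
    """Serialize a list of triangles (each three (x, y, z) vertices) to
    ASCII STL text. The facet normal is the normalized cross product of
    the two edge vectors, following the vertex winding order."""
    def normal(a, b, c):
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
        return [c / length for c in n]
    lines = [f"solid {name}"]
    for a, b, c in triangles:
        nx, ny, nz = normal(a, b, c)
        lines.append(f"  facet normal {nx:g} {ny:g} {nz:g}")
        lines.append("    outer loop")
        for vx, vy, vz in (a, b, c):
            lines.append(f"      vertex {vx:g} {vy:g} {vz:g}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

An RP machine's slicer consumes exactly this facet list, which is why STL is a common hand-off format between RE and RP.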
Introduction
Abstract
Embedded vision systems such as smart cameras have developed rapidly in recent years. Vision systems have become smaller and lighter while their performance has improved. The algorithms in embedded vision systems are constrained by CPU frequency, memory size, and architecture. The goal of this e-book is to provide an advanced reference work for engineers, researchers, and scholars in the fields of robotics, machine vision, and automation, and to facilitate the exchange of their ideas, experiences, and views on embedded vision system models. The practical effectiveness of all the methods is emphasized for the systems presented in this e-book.