An efficient multi-camera, multi-target scheme for the three-dimensional control of robots using uncalibrated vision

abstract

  • A vision-based control methodology is presented in this paper that can perform accurate three-dimensional (3D) positioning and path-tracking tasks. Tested with the challenging manufacturing task of welding in an unstructured environment, the proposed methodology has proven highly reliable, consistently achieving a terminal precision of 1 mm. A key limiting factor for this high precision is the camera-space resolution per unit of physical space. This paper also presents a means of preserving, and even increasing, this ratio over a large region of the robot's workspace by using data from multiple vision sensors. In the experiments reported here, a laser is used to facilitate the image-processing aspect of the vision-based control strategy: it projects spots onto the workpiece to gather information about the workpiece geometry. Previous applications of the control method considered only local geometric information about the workpiece, close to the region where the robot's tool is to be placed. This paper presents a methodology that considers all available information about the workpiece geometry. These data are represented in a compact matrix format that the algorithm uses to evaluate an optimal robot configuration. The proposed strategy processes and stores the information coming from the various vision sensors in an efficient manner. An important goal of the proposed methodology is to facilitate the use of industrial robots in unstructured environments. A graphical user interface (GUI) has been developed that simplifies the use of the robot/vision system; with it, complex tasks such as welding can be performed successfully by users with limited experience in robot control and welding techniques. © 2003 Elsevier Ltd. All rights reserved.
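The abstract does not detail the "compact matrix format" used to pool geometric data from several vision sensors. One common way to realize such pooling is to accumulate each sensor's observations in normal-equation form, so that storage stays fixed-size regardless of how many laser-spot samples arrive. The sketch below (Python/NumPy) illustrates this idea on a toy linear estimation problem; the function names and the linear observation model are illustrative assumptions, not the paper's actual camera-space model.

```python
import numpy as np

def accumulate(AtA, Atb, J, r, w=1.0):
    """Fold one sensor's weighted observations (model rows J, samples r)
    into the compact normal-equation accumulators AtA and Atb."""
    AtA += w * (J.T @ J)
    Atb += w * (J.T @ r)
    return AtA, Atb

# Toy problem: recover a 3-parameter configuration x from observations
# contributed by three hypothetical vision sensors.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])
n = len(x_true)
AtA = np.zeros((n, n))   # fixed-size summary, independent of sample count
Atb = np.zeros(n)
for _ in range(3):                        # one block per sensor
    J = rng.standard_normal((5, n))       # that sensor's observation rows
    r = J @ x_true                        # noiseless samples, for the sketch
    AtA, Atb = accumulate(AtA, Atb, J, r)
x_est = np.linalg.solve(AtA, Atb)         # least-squares optimal estimate
```

Because each sensor only contributes a rank update to `AtA` and `Atb`, new measurements can be folded in incrementally without storing the raw observation history, which is the practical appeal of a compact matrix representation.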

publication date

  • 2003-01-01