
Image Synthesis from Projective Displacement: Application to Image-Based Visual Servoing

JAE SEOK PARK and MYUNG JIN CHUNG
Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon, KOREA.
jspark@cheonji.kaist.ac.kr; mjchung@ee.kaist.ac.kr

Abstract: - The projective framework provides useful cues about 3-D structure even without metric information or a calibration process. However, it tends to suffer from image noise. In this paper, an application of the projective framework to image synthesis of a 3-D object is introduced, together with a noise-resistant estimation of the projective displacement. The synthesized images are used to generate image trajectories that handle the limitations of image-based visual servoing. Simulation results demonstrate its effectiveness.

Key-Words: - Projective displacement, image-based visual servoing, image synthesis, uncalibrated stereo rig

1 Introduction

The use of projective geometry is spreading from the fundamental understanding of projective views to a variety of robotic applications. It provides convenient tools for interpreting geometric structure from uncalibrated images. Since the metric calibration of a vision system is a demanding task, a number of approaches that avoid the metric space have been attempted for vision-based applications. The main purpose of this paper is to introduce an approach that handles the known limitations of image-based visual servoing [1] using projective geometry, and to consider some computational issues in the approach. Image-based visual servoing, despite its many positive aspects, has some serious limitations. Since the image Jacobian relates only the tangent space of the image plane to the tangent space of the workspace at a specific configuration, convergence is not guaranteed when the initial pose discrepancy is large.
Another serious problem under a large initial pose discrepancy is that the feature points may leave the camera's field of view [4]. We therefore proposed a method to overcome these problems in [5]: the problems are handled by planning a straight path in the image space such that the current configuration of the manipulator stays close to the reference configuration at every time instant. A number of intermediate views of the robot gripper are synthesized to construct image trajectories that allow the robot gripper to track a straight path in the 3-D workspace. The contribution of that work is that the intermediate views of the 3-D structured object are generated using only image information, within the framework of the projective space. However, the method is likely to fail under relatively large image noise or distortion, since the projective framework tends to suffer from image noise. In the method, the projective displacement of a rigid body is estimated from image correspondences. In this paper, an enhanced estimation is proposed that gives better results when generating the image-space trajectories of the gripper points. As in [5], uncalibrated stereo cameras are assumed: the cameras are independently fixed and observe the gripper attached to a manipulator, and the initial and goal gripper points are assumed to lie within the cameras' field of view. The remainder of this article is structured as follows. Section 2 briefly reviews the image-space path generation method of [5]. Section 3 considers the computational issues and proposes an enhanced estimation of the projective displacement. Image-based visual servoing with the desired image trajectories is presented in section 4, and the results of computer simulations are presented in section 5 to show the feasibility of the proposed approach. Finally, conclusions are given in section 6 with some directions for future work.

2 Projective Representation of Intermediate Poses

In this section, the process of generating the intermediate poses of a rigid body in the projective space [5] is briefly reviewed. The process is composed of two stages. First, a number of screw motions are constructed and represented in the form of projective transformations; the rotation angle associated with each screw motion is interpolated from the rotation angle of the initial projective displacement. Second, appropriate pure translations are added to the constructed screw motions so that the virtual objects transformed by the resulting motions lie on a straight path.

If the corresponding image points between the two cameras and between the initial and the goal gripper points are found, the initial projective displacement H as well as the two 3x4 projection matrices, conveniently denoted by P_L and P_R, can be obtained [7]. Since the projective displacement is conjugate to a Euclidean displacement, H has to have eigenvalues of the form {1, 1, e^{j\theta}, e^{-j\theta}} (after normalizing the arbitrary projective scale so that the repeated real eigenvalue equals one), even though the metric representation of the displacement is unknown. Therefore, the rotation angle \theta about the screw axis is found from the complex eigenvalues of H as

    \theta = \arg(\lambda),   \lambda \in eig(H),  Im(\lambda) > 0.    (1)

Suppose that H_k (k = 1, ..., n) are projective transformations representing the intermediate screw motions enforced on the initial pose of the gripper. If the rotation angles associated with the intermediate screw motions are interpolated as \theta_k = k\theta/n, the eigenvalues of H_k should be of the form {1, 1, e^{j\theta_k}, e^{-j\theta_k}}. To ensure that the screw axis of H_k is the same as that of H, the eigenvectors of H_k also have to be the same as those of H. Therefore, H_k can be constructed as

    H_k = V diag(1, 1, e^{j\theta_k}, e^{-j\theta_k}) V^{-1},    (2)

where the columns of V are eigenvectors of H. In case the eigenvectors are not all linearly independent, one of two identical eigenvectors can be replaced with an arbitrary vector that is linearly independent of the remaining eigenvectors. This modification still ensures that the screw axis of H_k is the same as that of H. A number of virtual grippers are obtained by transforming the initial gripper points with H_k in the projective space. The virtual grippers are then realigned along a straight line by pure translations, using geometric properties of the projective space and incidence relationships [5].

3 Estimation of Projective Displacement

In this section, a robust estimation method for the projective displacement is presented. With at least five feature points on the gripper such that no four of them are linearly dependent, the transformation H that relates two corresponding sets of projective points can be computed by a simple linear equation [7]. However, the linear method usually produces undesirable results because projective reconstructions from real images are too sensitive to noise. When more than five points are available, the estimation of H can be enhanced by several optimization methods, including the one proposed in [9]. The basic idea in [9] is to compare the projections of the transformed and inversely transformed projective points with the true image points. Suppose that m_{Li}, m_{Ri} and m*_{Li}, m*_{Ri} are true image points of the gripper, where the capital letters in the subscripts indicate the left or right camera image and the asterisk indicates the goal pose of the gripper. H is then estimated by minimizing the nonlinear cost function defined as

    C_1(H) = \sum_i [ d^2(m*_{Li}, P_L H X_i) + d^2(m*_{Ri}, P_R H X_i)
                    + d^2(m_{Li}, P_L H^{-1} X*_i) + d^2(m_{Ri}, P_R H^{-1} X*_i) ],    (3)

where X_i and X*_i are the projective coordinates of the initial and goal gripper points and d(.,.) denotes the image distance between a measured point and a projected point.

Although (3) provides a far more reliable solution than any linear equation, it does not guarantee that the resulting transformation corresponds to a Euclidean displacement of the gripper. In consequence, the estimated transformation may not have its eigenvalues in the form {1, 1, e^{j\theta}, e^{-j\theta}}, which may break the subsequent steps of the proposed algorithm. To avoid this situation, additional constraints have to be considered in the cost function (3); more specifically, the eigenvalues of H should be driven to the desirable form by such constraints. If we assume that the transformation has the desirable form of eigenvalues given above, the characteristic equation of H can be written as

    \varphi(\lambda) = (\lambda - 1)^2 (\lambda^2 - 2\lambda\cos\theta + 1) = 0.    (4)

According to the Cayley-Hamilton theorem [10], we have \varphi(H) = 0. Therefore, an additional cost function can be defined as

    C_2(H) = || (H - I)^2 (H^2 - 2H\cos\theta + I) ||^2,    (5)

where ||A||^2 = \sum_{i,j} a_{ij}^2 for a matrix A whose (i,j) element is a_{ij}. Even though the Cayley-Hamilton theorem provides only a necessary condition, (5) can still contribute to obtaining the specific form of eigenvalues. For the unit eigenvalue, a stricter condition can be developed using the determinant: since det(H - I) = 0 if and only if H has an eigenvalue equal to one, another additional cost function is defined as

    C_3(H) = ( det(H - I) )^2.    (6)

If all the cost functions are combined into one objective function, it gives not only an enhanced estimate of H, but also an assurance that H complies with a Euclidean motion. The resulting function to be minimized is written as

    C(H) = C_1(H) + w_2 C_2(H) + w_3 C_3(H),    (7)

where w_2 and w_3 are positive weighting factors.

4 Visual Servoing

The representations of the intermediate poses of the gripper were constructed in section 2. A set of images of the virtual grippers can then be generated using the two projection matrices P_L and P_R. Suppose that the projective coordinates of the virtual grippers are X_{ik}; the corresponding image coordinates on the left and right image planes are obtained by

    s_L m_{Lik} = P_L X_{ik},   s_R m_{Rik} = P_R X_{ik},    (8)

where s_L and s_R are appropriate scale factors. Since k represents the order of the intermediate motions, it can also be treated as a discrete time index, which allows the resulting set of image coordinates to serve as a desired image trajectory. The image trajectory is then applied to the image-based visual servoing system to guide the gripper to the goal pose along a straight path. Suppose that there are N feature points on the gripper and that the image set point at time k is given in the form

    s*(k) = ( m_{L1k}, ..., m_{LNk}, m_{R1k}, ..., m_{RNk} )^T;    (9)

a vision-based task function is then defined as [2]

    e = J^+ ( s - s*(k) ),    (10)

where s is composed of the current image coordinates and J^+ is the pseudo-inverse of the estimated image Jacobian. The velocity control law that forces the task function to vanish exponentially is given by [2]

    \dot{q} = -\lambda e,    (11)

with a proportional gain \lambda. If we denote the discrete time interval by \Delta t, the control law in the discrete time domain can finally be written as

    q(k+1) = q(k) - \lambda \Delta t\, J^+ ( s(k) - s*(k) ).    (12)

In a conventional image-based visual servoing system, the image Jacobian is often treated as a constant matrix during the entire servoing task, since the precision of the task is not strongly affected by the accuracy of the image Jacobian [3]. However, because the considered system starts with a large pose discrepancy, the image Jacobian varies considerably during the servoing task. Therefore, the image Jacobian has to be estimated and dynamically updated in the real-time loop of the visual servoing algorithm, as in [6],[8].

5 Simulations

Fig. 1 shows the environment for the simulations. A six-DOF manipulator with a PUMA-560 kinematic model is given, and an artificial gripper is attached to it. Two cameras are assumed to observe the gripper from fixed positions, using simple pinhole camera models.

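One iteration of the discrete control law (12) can be sketched as follows; the gain and sampling interval are illustrative values, not ones reported in the paper.

```python
import numpy as np

def ibvs_step(q, s, s_star, J_hat, gain=0.5, dt=0.1):
    # One discrete IBVS update in the shape of eq. (12):
    # q(k+1) = q(k) - gain * dt * pinv(J_hat) @ (s(k) - s*(k)).
    e = np.linalg.pinv(J_hat) @ (s - s_star)  # task function, eq. (10)
    return q - gain * dt * e                  # drives the error to zero exponentially
```

Feeding the synthesized trajectory s*(k) of section 4 into s_star at each sampling instant makes the gripper track the planned straight path instead of jumping directly toward the goal image.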

Figure 1: Set-up for the simulations. (a) 3-D structure of the workspace. (b) Left image plane. (c) Right image plane.

The two camera models are depicted as small cylinders in Fig. 1(a). The image planes are limited to 300 x 300 rectangles and all image coordinates are rounded to integers. Image noise is also considered by adding random numbers bounded by (-3, 3). Fig. 1(b) and 1(c) show the artificial views obtained from the left and right camera models. The projection matrices are computed using 31 corresponding image points of the gripper. The goal images are depicted beside the hexahedral object in Fig. 1(b) and 1(c). The projective coordinates of the initial and goal gripper points are reconstructed and applied to (7) with several numeric values of the weighting factors to give estimations of H. Table 1 gives the eigenvalues of the transformations estimated with different combinations of the cost functions. The last set, obtained using all three cost functions, is clearly the closest to the expected form of eigenvalues.

Table 1: Eigenvalues of H obtained in three different ways.

    Cost functions used    Eigenvalues of H
    (3) only               0.91, 0.91, 0.42 + 0.51j, 0.42 - 0.51j
    (3) and (5)            1.12, 0.92, 0.44 + 0.86j, 0.44 - 0.86j
    (3), (5) and (6)       1.03, 1.00, 0.43 + 0.88j, 0.43 - 0.88j

A number of image-based visual servoing tasks have also been tested on a PC using the Matlab toolbox. New views are assumed to be captured every 100 msec for the feedback loop. Among the gripper points, only five are used for the servoing tasks. For the image-space task functions, it is also assumed that the goal images of the gripper are given as five image points. In the first task simulation, the initial joint angles of the manipulator are (-0.3770, -0.9424, 0.8168, -0.1026, -1.9478, 0.6284) rad. To introduce a large pose discrepancy, the goal joint angles, which the manipulator has to reach when the goal images are achieved, are (-0.3770, -0.0628, -0.4398, 0.7540, -0.8168, 0.5027) rad.

This task is performed with and without the generation of the desired image trajectories proposed above. Fig. 2(a) shows the motion of the gripper in the right image plane when desired image trajectories are not used: the gripper points leave the image bounds even though the image Jacobian is computed exactly from the model parameters assumed for the simulations. Even though the final goal appears to be achieved by ignoring the image limits in the simulation, a real system would certainly fail in this situation. Fig. 2(b) shows the trajectories synthesized for this case using the proposed method, and Fig. 2(c) shows the resulting motion of the gripper when the desired image trajectories are used. This time, the image Jacobian is estimated online to honor the assumption that the camera parameters are unknown; the estimation is based on a least-squared-error method with a forgetting factor, as presented in [6]. In spite of this harder condition, the gripper tracks a straight path and stays within the image bounds with the help of the desired image trajectories.
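The online Jacobian update mentioned above can be sketched as recursive least squares with a forgetting factor. The recursion below is a generic RLS form in the spirit of [6]; the class name, the initial covariance, and the forgetting-factor value are assumptions, not details from the paper.

```python
import numpy as np

class JacobianEstimator:
    # Recursive least-squares estimate of the image Jacobian J from observed
    # increments ds ~= J dq. A forgetting factor lam < 1 discounts old data so
    # the estimate can follow the configuration-dependent Jacobian.
    def __init__(self, n_feat, n_joint, lam=0.95):
        self.J = np.zeros((n_feat, n_joint))
        self.P = 1e3 * np.eye(n_joint)  # large initial covariance: weak prior
        self.lam = lam

    def update(self, dq, ds):
        dq = dq.reshape(-1, 1)
        err = ds.reshape(-1, 1) - self.J @ dq              # prediction error
        k = self.P @ dq / (self.lam + dq.T @ self.P @ dq)  # RLS gain vector
        self.J = self.J + err @ k.T                        # correct all rows at once
        self.P = (self.P - k @ (dq.T @ self.P)) / self.lam
        return self.J
```

Each servo cycle supplies the measured joint increment dq and feature increment ds, and the returned estimate is pseudo-inverted for the control law of section 4.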


Figure 2: Visual servoing task for the case 1. Right image planes for (a) the motion of the gripper without desired image trajectories, (b) desired image trajectories planned using the proposed method, and (c) the resulting motion of the gripper when the desired image trajectories are used.


Figure 3: Visual servoing task for the case 2. Right image planes for (a) the motion of the gripper without desired image trajectories, (b) desired image trajectories planned using the proposed method, and (c) the resulting motion of the gripper when the desired image trajectories are used.

The joint angles read at the final position of the first task are (-0.3812, -0.0681, -0.4305, 0.7704, -0.8114, 0.5057) rad. Fig. 3 shows another example of a visual servoing task; Fig. 3(a) gives the result of this task without desired image trajectories. The initial and goal joint angles are (-0.3770, -0.9424, 0.8168, -0.1026, -1.9478, 0.6284) and (0.1784, 0.1154, 0.4670, 0.1708, -0.4523, 0.6115) rad, respectively. In this case the gripper does not even converge to the goal configuration, although the image bounds are again ignored. With the desired image trajectories in Fig. 3(b), however, the final goal is successfully achieved, as depicted in Fig. 3(c), under the same initial conditions. The error trajectories for the five feature points in the right image plane are shown in Fig. 4. The manipulator is stabilized at the joint angles (0.1714, 0.1162, -0.4615, 0.2152, -0.4442, 0.6180) rad.

6 Conclusion

In this paper, an estimation method for the projective displacement has been proposed and used in an image-based visual servoing system to handle the limitations that image-based visual servoing systems may have. It has been shown that the intermediate views can be successfully synthesized from the initial projective displacement, because the estimated transformation complies with a Euclidean motion


within reasonable bounds, with the help of the proposed cost functions in (7). Although the method improves the estimation of the projective displacement, some limitations remain before it becomes fully practical. Since the method uses nonlinear optimization, it may settle at a local minimum. Another problem is that the nonlinear optimization demands too much computing time to be implemented in a real-time application. Further work has to be devoted to these limitations.

Figure 4: Error trajectories for the five feature points in the right image plane. Above: X-axis. Below: Y-axis (time in msec).

References:
[1] S. Hutchinson, G. D. Hager and P. I. Corke, "A tutorial on visual servo control," IEEE Trans. Robotics and Automation, vol. 12, no. 5, pp. 651-670, 1996.
[2] B. Espiau, F. Chaumette and P. Rives, "A new approach to visual servoing in robotics," IEEE Trans. Robotics and Automation, vol. 8, no. 3, pp. 313-326, 1992.
[3] G. D. Hager, "A modular system for robust positioning using feedback from stereo vision," IEEE Trans. Robotics and Automation, vol. 13, no. 4, pp. 582-595, 1997.
[4] Y. Mezouar and F. Chaumette, "Path planning in image space for robust visual servoing," in Proc. IEEE Int. Conf. Robot. Automat., 2000, pp. 2759-2764.
[5] J. Park and M. Chung, "Image space trajectory generation for image-based visual servoing under large pose error," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2001, pp. 1159-1164.
[6] K. Hosoda, K. Sakamoto and M. Asada, "Trajectory generation for obstacle avoidance of uncalibrated stereo visual servoing without 3D reconstruction," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 1995, vol. 1, pp. 29-34.
[7] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, Cambridge, MA: MIT Press, 1993.
[8] J. A. Piepmeier, G. V. McMurray and H. Lipkin, "A dynamic Jacobian estimation method for uncalibrated visual servoing," in Proc. IEEE/ASME Int. Conf. Advanced Intell. Mech., 1999, pp. 944-949.
[9] R. Horaud and G. Csurka, "Self-calibration and Euclidean reconstruction using motions of a stereo rig," in Proc. Sixth IEEE Int. Conf. Computer Vision, 1998, pp. 96-103.
[10] C. T. Chen, Linear System Theory and Design, Oxford University Press, 1984.
