Human–Robot Tactile Interaction*
Thomas Wösch, Wendelin Feiten
Corporate Technology, Siemens AG, 81730 Munich, Germany
{thomas.woesch,wendelin.feiten}@mchp.siemens.de
Abstract

In the field of service robotics, robots serve and assist human beings. It is natural for humans to interact directly with the robot via tactile interfaces. This paper introduces several kinds of tactile interaction between a user and the robot, as well as interactions of the robot with the environment. All interactions are implemented in a single paradigm: forces measured by tactile sensors result in motion vectors at the contact points. The motion vectors from different sensors are superimposed and then determine the robot's joint velocities. We present results from our experimental setup, consisting of an 8-degrees-of-freedom manipulator arm mounted on a mobile platform. In the illustrated example, a human interacts with the robot using only the tactile interface.

1 Introduction

One of the most challenging areas in robotics during recent years was, and still is, service robotics. In general, service robots perform non-repetitive tasks. The working environment is dynamic and in most cases only partially known. Powerful planning and control structures are required to move collision-free. Because these robots serve, assist and work with humans, their first priority is to guarantee human safety. The motions should look human-like: no one will trust a machine that acts in a way that seems unpredictable to an observer. The robot must offer interaction possibilities that appear natural to a user, e.g. natural language interaction or gesture recognition. The subject of this paper is the direct interaction between a human and a robot. We focus on tactile interaction using a new kind of tactile sensor, briefly described in section 3. Different interaction possibilities are introduced in section 4, ranging from special motion modes to a tactile alphabet for setting these modes. When the system gets in contact with the environment or a human, the tactile sensors provide information about the contact points and the pressures. According to the activated mode, we determine for each contact point a motion vector describing a motion for that contact point (see sections 5 and 6). How these motion vectors result in a motion of the robot is described in section 7. The experimental results, together with a short description of our experimental setup, are presented in section 8. The paper is concluded in section 9.
* This work was partially funded under contracts 01IN601A2 and 01IL902DO by BMBF, Morpha project.

2 Related Work

Especially in the field of service robotics, user-robot interaction is of crucial interest. [1] introduces a robot that uses patterns of interaction similar to those found when people interact with each other. This makes the robot's motions appear more familiar to the humans in the robot's environment. Speech and contact panels are used to interact with the robot. Motivational systems for an autonomous robot that express the robot's mood and enable sophisticated human-robot interaction are investigated in [2, 3]. [4] uses tactile information from a force/torque sensor mounted at the tool center point of the robot to modify the tool center point's trajectory. Various interaction possibilities between a human and a robot can be found in the literature. Since this paper is focused on tactile interaction, we will only consider those. [5] proposes an artificial skin for a snake-like manipulator, based on a resistive principle. [6] introduces a modularized sensitive skin using infrared sensors. An acoustic resonant tensor cell for tactile sensing is shown in [7]. [8] proposes a full-sensor flexible suit made of conductive fabric and strings. An often-used principle is the capacitive one, as described in [9, 10]. A disadvantage of most of these sensor systems is that their shock absorbency is low. The sensor information can be used to control a robot platform or manipulator arm. A lot of work has been done in recent years on controlling a robot system in an environment of average complexity. Reactive control schemes based on artificial forces [11, 12, 13] have been very popular for controlling a robot. One reason may be the ease with which they can be used and extended to higher-dimensional problems with

more than four degrees of freedom (DOF). In addition, they are faster than planning techniques [14, 15] and can work on-line. Their major drawback, and the reason why slower planning techniques are still required in robot motion planning, is that reactive mechanisms suffer from local minima. Frameworks [16] have been proposed where reactive control schemes are merged with planning components in order to combine the advantages of reactive controllers (speed) with those of planners (finding a solution whenever one exists).

3 Tactile Sensor

Figure 1: Experimental setup with tactile sensors.

Tactile sensor segments¹ are mounted all over the robot, as illustrated in Figure 1a. Figure 1c shows one tactile sensor segment in detail. The segment consists of two conductive foams separated by an isolating net. When pressure is exerted on the foams, the two foams come into contact. The higher the pressure, the lower the resistance between the foams at the contact spot. Both the point of pressure and the pressure itself are measured. For this purpose, each tactile sensor segment is connected to a micro-controller. When more than one sensor segment is used, the controllers are connected via a CAN bus. The segments are referenced by unique identification numbers. Each controller can act as a master, and bus sharing is done according to the master-master principle, so no polling is required to obtain sensor information. This allows efficient data acquisition.

¹ Siemens patent, cat. no. 19959703.0
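Because every segment controller can act as a bus master, contact readings are pushed onto the CAN bus as they occur and the host merely listens. The sketch below illustrates such an event-driven reader in Python; the channel name, the message layout (segment id in the arbitration id, two voltage readings plus a pressure value in the payload) and the use of python-can are assumptions for illustration, not the actual protocol of the sensor system.

```python
import struct
import can  # python-can; the 'socketcan' channel is an assumption

# Hypothetical payload layout: two 16-bit voltage readings (u_x, u_y)
# followed by a 16-bit pressure value, little-endian.
PAYLOAD_FMT = "<HHH"

def listen_for_contacts(channel: str = "can0") -> None:
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    try:
        while True:
            msg = bus.recv(timeout=1.0)      # segments push events, host just listens
            if msg is None:
                continue                      # no contact reported in this interval
            segment_id = msg.arbitration_id   # unique id of the reporting segment
            u_x, u_y, pressure = struct.unpack(PAYLOAD_FMT, msg.data[:6])
            print(f"segment {segment_id}: voltages=({u_x}, {u_y}) pressure={pressure}")
    finally:
        bus.shutdown()

if __name__ == "__main__":
    listen_for_contacts()
```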

4 Tactile Interaction

Our experimental setup allows us to investigate different types of tactile interaction. These comprise interaction of the user with the robot as well as interaction of the robot with the environment. This section explains these different types of interaction and their modes. The following sections show how all of these types can be implemented in a single paradigm.

4.1 User guided robot motion

Among the first implementations of user guided robot motion is the guidance of a tool using a force/torque sensor and zero force control. The force and torque measured by the sensor are compensated by those induced by the tool; the remaining part is attributed to the user's impact on the tool and controlled to zero by appropriate robot motions. Similarly, with our tactile skin the user can apply a force to the robot and make it move. In contrast to classical zero force control, the tactile skin is not influenced by workpieces or tools, so we do not have to compensate for those influences; the measured forces can be used directly. In the simplest case, the robot will evade the touch. How the motion is derived from the contact force is explained in section 5. However, our tactile skin allows us to implement much more complex modes of tactile interaction than this simple Push mode. If the user touches the robot while it is in Pull mode, retracting the contact point will make the robot follow in order to keep the contact. These modes are not restricted to single contacts, but extend to multiple contacts with the robot. Each sensor segment can be set to Push or Pull individually, and the resulting robot motions are superimposed, as sketched below. This also allows grasping a robot link and guiding it precisely, which can be used to interactively control the redundant degrees of freedom.
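As a rough illustration of how several simultaneous contacts are combined, the sketch below converts each contact into a motion vector at its contact point (moving away from the touch in Push mode, following the retracting contact in Pull mode) and sums the contributions per robot link. The data structures, sign convention and gain are assumptions; the actual mapping from contact forces to robot motion is the subject of sections 5-7.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Contact:
    link: str        # robot link the touched segment is mounted on
    point: Vec3      # contact point on the link surface
    normal: Vec3     # outward surface normal at the contact point
    pressure: float  # measured contact pressure
    mode: str        # "push" or "pull"

GAIN = 0.01  # assumed scaling from pressure to motion magnitude

def motion_vectors(contacts: List[Contact]) -> Dict[str, Vec3]:
    """Superimpose one motion vector per link from all current contacts."""
    result: Dict[str, Vec3] = {}
    for c in contacts:
        # Push: evade the touch, i.e. move into the surface normal's opposite.
        # Pull: follow the retracting contact, i.e. move along the normal.
        sign = -1.0 if c.mode == "push" else 1.0
        v = tuple(sign * GAIN * c.pressure * n for n in c.normal)
        acc = result.get(c.link, (0.0, 0.0, 0.0))
        result[c.link] = tuple(a + b for a, b in zip(acc, v))
    return result
```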

4.2 Tactile robot-human communication on a symbolic level

Since the robot has different modes of tactile interaction, mechanisms are needed to switch between these modes. One intuitive way to set the modes is to use the tactile sensor itself. A meaning is therefore assigned to certain contact points, to the times at which they are pressed and to the order in which they are pressed. This may be compared to the Morse alphabet. For example, two short impacts with a short break could set a given segment to Push mode, and two short impacts with a longer break could set it to Pull mode. Of course, the information given to the robot can also be more complex. For example, touching the inner gripper pads in an alternating fashion with a certain frequency - left-right-left-right within one second - tells the robot to close the gripper and grasp the object between the fingers, i.e. it sets the robot to Grasp mode. Along these lines, a general tactile input alphabet can be defined. However, care has to be taken that this alphabet does not interfere with other tasks of the tactile sensor, and that it is intuitive for the user. This also implies that similar motions are used to convey similar information. Continuing the example of grasping, the robot indicates that it is ready to hand over an object by moving the object left-right-left-right within one second. The user then grasps the object and tells the robot that he or she is now holding it and that the robot should release it by wriggling it left-right-left-right within one second between the robot fingers. Finding the optimal tactile alphabet requires many experiments and is far from easy. In this investigation, inspiration might be found in communication theory.
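A minimal sketch of how such a Morse-like alphabet could be decoded from the contact events of a single segment is given below. The two patterns follow the example above (two short impacts with a short break set Push mode, with a longer break Pull mode); the timing thresholds and the event representation are assumptions.

```python
from typing import List, Optional, Tuple

# Contact events for one segment: (press_time, release_time) in seconds.
TapEvent = Tuple[float, float]

SHORT_TAP_MAX = 0.3    # assumed: a contact shorter than this counts as a short impact
SHORT_BREAK_MAX = 0.4  # assumed: a break below this is "short", above it "long"
LONG_BREAK_MAX = 1.2   # assumed: breaks longer than this are not an alphabet symbol

def decode_mode(taps: List[TapEvent]) -> Optional[str]:
    """Decode two short impacts into Push (short break) or Pull (longer break)."""
    if len(taps) != 2:
        return None
    (p1, r1), (p2, r2) = taps
    if (r1 - p1) > SHORT_TAP_MAX or (r2 - p2) > SHORT_TAP_MAX:
        return None                    # impacts too long to be alphabet symbols
    gap = p2 - r1
    if gap <= SHORT_BREAK_MAX:
        return "push"
    if gap <= LONG_BREAK_MAX:
        return "pull"
    return None                        # taps too far apart: treat as separate contacts

# Example: two 0.1 s taps separated by 0.2 s set the segment to Push mode.
print(decode_mode([(0.0, 0.1), (0.3, 0.4)]))   # -> "push"
print(decode_mode([(0.0, 0.1), (0.9, 1.0)]))   # -> "pull"
```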

4.3 Tactile interaction of the robot with the environment

Obviously, obstacle detection is a very important feature for an autonomous robot. With the tactile sensor, the robot can detect contacts all over its surface. Therefore, if the robot moves autonomously (not under user supervision or tactile guidance), the tactile sensor segments are set to Obstacle Detection mode. In this mode, each contact in the direction of motion is interpreted as a collision; the motion is stopped and then reversed, as sketched below. Since the contact in this case is longer than the short pulses used in the tactile alphabet, tactile communication on the symbolic level remains available even in this mode. The tactile sensor on our robot can also be used to build a model of the environment. In Exploration mode, the robot will actively (though carefully) seek contact with the environment and, with reference to the known robot platform configuration [17], insert vertices into a 3D surface representation.
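The collision test in Obstacle Detection mode can be illustrated as follows: a contact counts as a collision only if it lies in the direction of motion, i.e. if its surface normal opposes the commanded motion. The vector convention and threshold in this sketch are assumptions.

```python
import math
from typing import Sequence

def is_collision(motion_dir: Sequence[float], contact_normal: Sequence[float],
                 threshold: float = 0.0) -> bool:
    """A contact in the direction of motion is interpreted as a collision.

    Assumed convention: the contact normal points from the obstacle towards
    the robot, so a collision shows up as a negative component of the normal
    along the motion direction.
    """
    dot = sum(m * n for m, n in zip(motion_dir, contact_normal))
    norm = math.sqrt(sum(m * m for m in motion_dir)) * \
           math.sqrt(sum(n * n for n in contact_normal))
    return norm > 0.0 and dot / norm < threshold

def react_to_contact(motion_dir, contact_normal):
    if is_collision(motion_dir, contact_normal):
        return [-m for m in motion_dir]   # stop and reverse the motion
    return list(motion_dir)               # contact is not in the way: keep moving

# Example: moving along +x while touching an obstacle whose normal points in -x.
print(react_to_contact([1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]))  # -> [-1.0, 0.0, 0.0]
```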

4.4 Interactive tactile sensor calibration

Figure 2: Interactive tactile sensor calibration.

Due to inhomogeneities of the sensor material and variations in the mechanical construction, the tactile sensor system has to be calibrated. This calibration is based on a geometrical model of the robot, with the robot surface patches associated to tactile sensor segments. The calibration software displays this model on a screen, highlighting characteristic surface points on the robot one by one (see Figure 2a). In our experiments, we chose these points to be the corners of the rectangular surface patches, with additional points on more complicated surface patches. For each highlighted point on the screen, the user decides which spot on which tactile sensor segment corresponds to this point and exerts a force on it (see Figure 2b). From this contact, the sensor system automatically determines the corresponding sensor segment and adds the segment id and the corresponding tuple of voltages to a list. Section 5 shows how such a tuple of voltages is mapped to a point on the robot.
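The bookkeeping behind this procedure can be sketched as a simple correspondence table: for every highlighted model point, the reported segment id and voltage tuple are stored together with the 3D surface point. The data structures below are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Voltage = Tuple[float, float]             # raw voltage tuple reported for a contact
Point3D = Tuple[float, float, float]      # highlighted point on the robot model

@dataclass
class CalibrationTable:
    # segment id -> list of (voltage tuple, corresponding surface point)
    entries: Dict[int, List[Tuple[Voltage, Point3D]]] = field(default_factory=dict)

    def record(self, segment_id: int, voltages: Voltage, model_point: Point3D) -> None:
        """Store one correspondence obtained by pressing a highlighted point."""
        self.entries.setdefault(segment_id, []).append((voltages, model_point))

# Example: the user presses the highlighted corner (0.12, 0.05, 0.30) of a patch
# and the sensor system reports segment 7 with voltage tuple (1.8, 2.4).
table = CalibrationTable()
table.record(7, (1.8, 2.4), (0.12, 0.05, 0.30))
print(table.entries[7])
```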

4.5 Mode switching

The modes of tactile interaction need to be appropriate for the current state and goals of the robot. As discussed above, an important mechanism to set the mode of tactile interaction is a direct user command. This can be given using the tactile sensor itself, but also via other information channels such as natural language interaction, gesture recognition or keyboard commands. Tactile sensor mode switching is also part of autonomous robot operation. Just as other sensor modalities are selected (together with the target state of the environment in the image of this sensor), the mode of the tactile sensor appropriate for achieving a given subgoal is part of the task description (see [18]).

5 Force generation

Figure 3: Force generation and contact point evaluation by pressing a tactile sensor.

Tactile sensors are mounted on our whole experimental system, as shown in Figure 1a. In this section we consider only one tactile sensor segment and explain how pressing on the sensor (see Figure 3a) is turned into the corresponding contact point and force direction (see Figure 3b) used for motion generation (see section 6).

When a force is exerted on the sensor shown in Figure 3a, the sensor system evaluates the pressure point $u \in V \subset \mathbb{R}^2$ that lies within the voltage tuple domain $V \subset \mathbb{R}^2$ of the sensor, as shown in Figure 3c. The vertices

$$v_i \in V \subset \mathbb{R}^2, \quad \forall i = 1, \ldots, 4$$

are the voltage tuples assigned to the corners of the tactile sensor. The voltage tuple set is a convex quadrangle. Introducing the vectors

$$\tilde{v}_1 = v_4 - v_1, \quad \tilde{v}_2 = v_2 - v_1, \quad \tilde{v}_3 = v_3 - v_2,$$

the convex combination of the pressure point can be written as

$$u = v_1 + \lambda \tilde{v}_1 + \alpha\,\bigl(\tilde{v}_2 + \lambda\,(\tilde{v}_3 - \tilde{v}_1)\bigr) \qquad (1)$$

with

$$\lambda, \alpha \in [0, 1]. \qquad (2)$$

Using the notation

$$u = \begin{pmatrix} u_x \\ u_y \end{pmatrix}, \quad v_1 = \begin{pmatrix} v_x \\ v_y \end{pmatrix}, \quad \tilde{v}_1 = \begin{pmatrix} a_x \\ a_y \end{pmatrix}, \quad \tilde{v}_2 = \begin{pmatrix} b_x \\ b_y \end{pmatrix}, \quad \tilde{v}_3 = \begin{pmatrix} c_x \\ c_y \end{pmatrix}, \qquad (3)$$

the parameters $\lambda$ and $\alpha$ are obtained by

$$\lambda = \frac{1}{2F}\left(\pm D + \sqrt{C^2 - 4AB}\right), \qquad \alpha = \frac{1}{2A}\left(\pm E + \sqrt{C^2 - 4AB}\right) \qquad (4)$$

with

$$\begin{aligned}
A &= a_y b_x - a_x b_y + b_y c_x - b_x c_y \\
B &= a_y u_x - a_x u_y - a_y v_x + a_x v_y \\
C &= c_y u_x - c_x u_y - a_y (b_x + u_x - v_x) - c_y v_x + a_x (b_y + u_y - v_y) + c_x v_y \\
D &= c_y u_x - c_x u_y + a_y (b_x - u_x + v_x) + c_x v_y - c_y v_x - a_x (b_y - u_y + v_y) \\
E &= -c_y u_x + c_x u_y + a_y (b_x + u_x - v_x) + c_y v_x - a_x (b_y + u_y - v_y) - c_x v_y \\
F &= a_y c_x - a_x c_y.
\end{aligned}$$

The voltage domain $V$ is aligned with a virtual convex quadrangle $W \subset \mathbb{R}^2$, as illustrated in Figure 3d. The point $f \in W \subset \mathbb{R}^2$ in the domain of the virtual surface $W$ corresponds to the original pressure point $u$ in the voltage tuple domain $V$. Using the vectors

$$\tilde{w}_1 = w_4 - w_1, \quad \tilde{w}_2 = w_2 - w_1, \quad \tilde{w}_3 = w_3 - w_4$$

and the parameters (3) and (4), the point $f$ is given by the convex combination

$$f = w_1 + \lambda \tilde{w}_1 + \alpha\,\bigl(\tilde{w}_2 + \lambda\,(\tilde{w}_3 - \tilde{w}_1)\bigr). \qquad (5)$$

For each corner of the virtual convex quadrangle $W$ in Figure 3d, additional vectors

$$n_i \in \mathbb{R}^3, \quad \forall i = 1, \ldots, 4 \qquad (6)$$

are defined. These vectors describe a direction for the vertices $w_i$ ($\forall i = 1, \ldots, 4$) of the virtual quadrangle. For a point inside the convex quadrangle, the corresponding direction vector is approximated. With the parameters (3), (4) and the vectors

$$\tilde{n}_1 = n_4 - n_1, \quad \tilde{n}_2 = n_2 - n_1, \quad \tilde{n}_3 = n_3 - n_4,$$

the vector $n \in \mathbb{R}^3$ describing a direction for the point $f$ in the convex quadrangle $W$ is

$$n = n_1 + \lambda \tilde{n}_1 + \alpha\,\bigl(\tilde{n}_2 + \lambda\,(\tilde{n}_3 - \tilde{n}_1)\bigr). \qquad (7)$$

The vertices $w_i$ ($\forall i = 1, \ldots, 4$) of the quadrangle shown in Figure 3d are aligned with the corners

$$s_i \in \mathbb{R}^3, \quad \forall i = 1, \ldots, 4$$

on the robot link (see Figure 3b). A simple coordinate transformation is used to describe the alignment. When

$$M = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

describes the coordinate transformation between a point on the virtual surface and the robot link, the point $f$ (5) can be transformed into the contact point $c \in S \subset \mathbb{R}^3$.
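The mapping chain of this section can be illustrated numerically. Instead of the closed-form solution (4), the sketch below recovers (λ, α) from a measured voltage tuple by a few Newton iterations, and then evaluates the interpolations (5) and (7); for brevity it applies the same corner-difference convention to all three quadrangles. All corner values are made up for illustration.

```python
import numpy as np

def bilerp(corners, lam, alpha):
    """Evaluate p1 + lam*t1 + alpha*(t2 + lam*(t3 - t1)) as in (1), (5), (7)."""
    p1, p2, p3, p4 = [np.asarray(c, dtype=float) for c in corners]
    t1, t2, t3 = p4 - p1, p2 - p1, p3 - p2   # single convention used for all quadrangles
    return p1 + lam * t1 + alpha * (t2 + lam * (t3 - t1))

def invert_bilerp(corners, u, iters=20):
    """Recover (lam, alpha) from a measured voltage tuple u by Newton iteration."""
    x = np.array([0.5, 0.5])                 # start in the middle of the quadrangle
    for _ in range(iters):
        r = bilerp(corners, x[0], x[1]) - np.asarray(u, dtype=float)
        eps = 1e-6                           # numerical Jacobian of the residual
        J = np.column_stack([
            (bilerp(corners, x[0] + eps, x[1]) - bilerp(corners, x[0], x[1])) / eps,
            (bilerp(corners, x[0], x[1] + eps) - bilerp(corners, x[0], x[1])) / eps,
        ])
        x -= np.linalg.solve(J, r)
    return np.clip(x, 0.0, 1.0)

# Made-up voltage corners v_i of one segment and a measured pressure point u.
V = [(0.1, 0.1), (0.2, 2.0), (2.1, 2.2), (2.0, 0.2)]
u = (1.0, 1.1)
lam, alpha = invert_bilerp(V, u)

# Corresponding virtual surface corners w_i and corner directions n_i (made up).
W = [(0.0, 0.0), (0.0, 0.05), (0.08, 0.05), (0.08, 0.0)]
N = [(0, 0, 1.0), (0, 0, 1.0), (0.2, 0, 0.98), (0.2, 0, 0.98)]
f = bilerp(W, lam, alpha)   # point on the virtual surface, cf. (5)
n = bilerp(N, lam, alpha)   # interpolated direction at that point, cf. (7)
print(lam, alpha, f, n)
```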