

Orientation and Navigation in Virtual Haptic-Only Environments
Henry König, Jochen Schneider, Thomas Strothotte
Department of Simulation and Graphics, Otto-von-Guericke University of Magdeburg
Universitätsplatz 2, 39106 Magdeburg, Germany
{hkoenig|josch|tstr}@isg.cs.uni-magdeburg.de

Abstract
In this paper we present methods which allow users to orient themselves and navigate through haptic-only environments. The fundamental idea is to use non-realistic haptic rendering to organize and structure the scene into different planes. Based on these planes, we use additional natural language and haptic information to help users determine their current position within the environment, to support them during the examination of the shape of objects, and to guide them to special locations.

Keywords: Orientation, navigation, haptic environments, non-realistic haptic rendering, blind and visually impaired people

Categories: (1) Concepts, models, paradigms of navigation and wayfinding; (3) Wayfinding (determining a strategy, direction and course for a destination) and navigation (determining and following a course through an environment)

1. Introduction
Currently, many applications use visually guided techniques to provide interaction methods within a virtual environment. These techniques are based on the spatial knowledge that users have to collect from visible parts of the environment or from additional information presented (see RITTER et al. [8]). Users also need this collected spatial knowledge to utilize the input devices or an optionally available force feedback device, which in most cases is only an alternative way to interact with the virtual environment or is used to provide additional haptic information. However, there are applications which are based on haptic interaction and use an unusual transformation between the visual and the haptic feedback. When using such applications, the demands placed on users to compensate for these different "views" onto the scene while still solving their tasks are much higher. In particular, the use of non-matching alignments in haptic and visual space causes problems for the users' eye-hand coordination. This makes it difficult for them to carry out their interaction tasks (see TENDICK and HEGARTY [12]). Dispensing with visual feedback and emphasizing haptic information frees users from the task of combining the different kinds of feedback and avoids the problems caused by this combination. Furthermore, the concept of haptic-only environments also promises to improve the possibilities for blind and visually impaired people to gain access to computer applications and virtual worlds (see SJÖSTRÖM [9]; JANNSON, FÄNGER, KÖNIG et al. [5]).

In haptic-only environments, the haptic feedback does not contain global information about the scene, which is necessary for determining the current position and for orientation and navigation within the haptic space. Therefore, users cannot verify their position and, what is even more important, they have no information beyond their own spatial memory about the position and the alignment of objects within a virtual scene. Users of such haptic-only environments face various problems, such as: having trouble orienting themselves, finding objects in the scene, and keeping in contact with objects during exploration (see COLWELL, PETRIE, KORNBROT et al. [1]).

In this paper we investigate methods for passive and active guidance of users through haptic-only environments. The implemented methods are based on what we call non-realistic haptic rendering, which restructures and reorganizes the model and the usable 3D space. This restructuring and reorganization is realized in such a way that essential 2D geometric information (which should be perceivable by the users) is mapped onto different planes. The use of planes reduces the users' efforts to 2D tasks for determining their current position and for carrying out navigation. Furthermore, the methods for passive and active guidance include support for users during the examination of the shape of a single object as well as during the exploration of a whole scene. The exploration task is combined with the determination of the current position: the system informs the users about their position within the scene using additional haptic and natural language information.

This paper is organized as follows: In Section 2 we give an overview of design principles, orientation, and navigation, and determine the requirements which are a prerequisite for providing orientation and navigation information. Section 3 describes the basic idea, the non-realistic haptic rendering, and the extensions of our rendering system which provide support for the different exploration tasks. Section 4 introduces our prototype and, finally, we give an outlook on future work in Section 5.

2. Related Work
During the exploration of visual virtual worlds, users can often be observed having various problems associated with determining the current position, orientation, navigation, and wayfinding. For example, users moved around aimlessly when they wanted to find a place for the first time and had difficulties relocating places which they had visited recently. Furthermore, they were often unable to grasp the topological structure of the virtual space. All these problems were observed in large environments in which users cannot see the whole world from a single point (see DARKEN and SIBERT [3]). Comparable effects noticed in haptic-only virtual environments show the relation between both problem areas in spite of the different systems (see COLWELL, PETRIE, KORNBROT et al. [1]). Taking advantage of this relation, wayfinding strategies in visual virtual worlds as well as orientation and navigation methods can be transferred to haptic-only environments to support the different orientation and navigation tasks.

2.1. Design Principle of Virtual Environments

Independent of the environment or of the feedback used, a prerequisite for controlled movement through it is conceptual spatial knowledge about the environment as a whole and simple access to this information (see DARKEN and SIBERT [2]). Both aspects, the spatial knowledge and the simple access to it, require two different prerequisites which influence the design of the environment.

The first prerequisite is a subdivision of the environment into smaller, more specific parts, which allows these parts and the knowledge about corresponding objects to be used as a frame of reference. The chosen subdivision should be adapted to the environment and to the objects it contains, so that it depends on special features of the environment, for example on districts, paths, objects, and so on. The use of these parts as a frame of reference and their adaptation to special features allows the inclusion of information about object locations (also in relation to other objects) and about object distances if the objects are located in different parts. Object distances can be represented using the parts located between the objects to be described. The second prerequisite is the use of a simple organization principle, for example a grid or a logical spatial ordering which connects the different parts with each other. This organization principle structures the parts and allows easier and more flexible access to the spatial information. Especially when users search for something within the environment, the organization principle helps them use their spatial knowledge for determining the current position and for orientation and navigation tasks.

2.2. Additional Navigation Information

Based on the design principle of the environment, an overview of possible targets is needed which allows interaction and the implementation of different search and navigation tasks. This overview requires further spatial information about the environment, of the kind found in visible navigational aids like maps, so that the users can determine a course through the environment to the target starting from their current position. Depending on the design principle and on the available spatial information about the target, it is possible to use different representations of spatial information: the graph structure, the anchor structure, the square grid structure, and the radial grid structure (see Figure 1, DARKEN and SIBERT [2]). These representations show the general use of spatial information and can serve as a basis for navigation information within haptic-only environments.

2.3. Orientation and Navigation within Haptic-Only Environments

As opposed to visual VR systems, our input device only uses three degrees of freedom. Therefore, we do not have to observe the alignment of the users within the environment for orientation and navigation tasks. Thus, we reduce the orientation task to the determination of the current position, and the navigation task to the determination of the targets' positions in relation to the current position or to other targets. In this context, we define orientation as the process of determining the current position in relation to the surroundings. Furthermore, we define navigation as the process of determining the path or course towards a target. In connection with orientation and navigation, we use the terms passiveness and activeness to describe the feedback used. We call a system passive if it uses acoustic feedback and active if the system uses additional force feedback.

[Figure 1 panels: (a) graph structure, (b) square grid structure, (c) anchor structure, (d) radial grid structure; T marks targets, A marks anchor points.]

Figure 1: Cognitive representations of spatial information using different navigation information

3. Haptic-Only Exploration
3.1. Basic Idea

Our basic idea is to use non-realistic haptic rendering (NRHR). The term non-realistic or non-photorealistic is normally used in computer graphics and describes a principle which uses a more abstract representation to reduce non-essential parts of the included information and to emphasize special, more important aspects (see LANSDOWN et al. [7], STROTHOTTE, PREIM, RAAB et al. [11]). For the viewer, this allows a simplification of the representation by discarding unnecessary and confusing details, and an emphasis of special details. The intentional usage of both aspects focuses the viewers' attention and influences the viewers' perception (see STROTHOTTE et al. [13]). In our work, we apply the term non-realistic rendering to haptic representations with a similar meaning and refer to the process of producing such presentations as non-realistic haptic rendering.

To implement NRHR, we changed the mapping of 3D raw data onto 3D haptic space (see Figure 2, KÖNIG, SCHNEIDER, and STROTHOTTE [6]) and divided it into two parts. The first part is to map the 2D geometric information of our model onto planes within the haptic space. The second is to map the remaining dimension and additional non-geometric data onto haptic information layers. The use of planes which represent the geometric information has two advantages for supporting the different orientation and navigation tasks. If the planes span the whole available space and are aligned with the coordinate system of the haptic space, as in our representation, they are much easier to find than objects, which can be small and located anywhere in haptic space. Furthermore, the users' orientation and navigation tasks are reduced to knowledge about the location of such planes and to movement on the planes (see Figure 3).

[Figure 2 schematic: the 2D data of the model is mapped onto planes; the 3rd dimension and non-geometric information are mapped onto haptic information layers.]

Figure 2: The mapping of 3D model data onto 3D haptic space
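To make this two-part mapping concrete, the following C++ sketch shows one plausible data layout. All names and types, as well as the choice of which axis becomes the "remaining dimension", are our own illustrative assumptions; the paper does not prescribe an implementation.

```cpp
// Hypothetical sketch of the two-part NRHR mapping described above.
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// An abstract information layer attached to a region of a plane: a haptic
// texture/parameter set, a spoken description, or an interaction element.
struct InformationLayer {
    enum class Kind { HapticTexture, NaturalLanguage, InteractionElement };
    Kind kind;
    std::string content;   // e.g. text to speak, or a texture id
    float height;          // value of the dimension removed by the projection
};

// One plane in haptic space carrying 2D geometry plus its layers.
struct HapticPlane {
    std::vector<Vec2> outline;             // projected 2D geometry (e.g. a ground plan)
    std::vector<InformationLayer> layers;  // 3rd-dimension and non-geometric data
};

// Part one of the mapping: drop one axis to obtain plane coordinates.
Vec2 projectToPlane(const Vec3& p) {
    return {p.x, p.y};   // here z is assumed to be the "remaining dimension"
}

// Part two: keep the removed dimension as an information layer entry.
InformationLayer heightLayer(const Vec3& p) {
    return {InformationLayer::Kind::HapticTexture, "height", p.z};
}
```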

A new element to express further aspects are haptic information layers, which we define as abstract information objects. The layers, which represent different parts of the information, are related to corresponding parts of the planes they are mapped onto. Depending on the kind of information, the information layers are represented in the scene as thin haptic layers (e.g. haptic textures or parameters), natural language, or interaction elements. The layers have an important task with respect to the information which was reduced during the abstraction process: they can compensate for the reduced information and provide alternative and additional information about the corresponding geometric parts onto which they are mapped. Using different interaction methods, the users can interact with the scene and choose the associated information they need. The interaction techniques include the possibility to change the current representation and to display further scenes. These scenes can represent different views of objects, which can improve the perception of the geometric information. Furthermore, they can also include objects with simpler geometry, which can make it easier for users to orient themselves and to navigate through the whole scene to recognize their shape (see Figure 4). The different representations allow flexible exploration of the haptic model corresponding to the experience and abilities of the users as well as to the limitations of the device used.

A disadvantage of our method is in fact related to the advantage of reducing and emphasizing geometric information using NRHR. Currently, there is little possibility to measure the quality and the success of the mapping, and there are no comprehensive rules for how the scene should be rendered. Another factor which should be taken into account is the abilities of the users, which can be a cause of misunderstandings.

3.2. Passive Methods

Based on the non-realistic representation, our passive methods exclusively use natural language to provide support for orientation and navigation within the scene as defined above. The additional information informs the users about the structure of the scene and suggests directions they can choose to reach special objects within the neighbourhood. The advantage of using natural language information is that the users can move around without any restrictions and have the choice to go in another direction.

(a) virtual reconstruction

(b) non-realistic representation

Figure 3: Virtual reconstruction of a building (a) as the basis for the non-realistic representation of the ground plan (b)

(a) original

(b) simple

Figure 4: The non-realistic representation of a column (a) as the basis for a simpler representation (b) which allows easier haptic navigation to recognize the shape

However, the passive methods can only suggest directions to the users and describe the corresponding locations; it is not certain that the users will find the chosen destination. Currently, only one-way communication is used, so the users do not have the possibility to ask for information in which they are interested. With this one-way communication, the users cannot control the timing of the acoustic feedback as in a conversation. That is why the users' attention can be overtaxed by too much different acoustic information for each location. To allow selective interaction and to reduce unnecessary repetitions, we interpret very slow movement or a sudden discontinuity as a state of helplessness. In such situations, the users need more help, and the methods generate additional acoustic information. The choice of this information depends on a multitude of statistical information about the users' behavior within the scene, such as the current position (also in relation to destinations within the scene), the distances to these destinations, and the time the users spend in the area around a destination.
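As a rough illustration of this heuristic, the following sketch detects the described state of helplessness from the probe's speed. The thresholds, the smoothing, and all names are assumptions for illustration; the paper only states that slow movement or a sudden discontinuity triggers additional help.

```cpp
// Hypothetical helplessness detector: very slow movement or a sudden drop
// in speed, sustained for a while, triggers additional acoustic information.
class HelplessnessDetector {
public:
    // Called once per haptic frame with the probe speed (mm/s) and the
    // elapsed time dt (s). Returns true when acoustic help should be given.
    bool update(float speed, float dt) {
        // Exponential smoothing so single noisy frames do not trigger help.
        smoothedSpeed_ = 0.9f * smoothedSpeed_ + 0.1f * speed;
        bool verySlow   = smoothedSpeed_ < kSlowSpeed;
        bool suddenStop = (previousSpeed_ - speed) > kStopDrop;
        previousSpeed_  = speed;

        // Accumulate how long the user has been "stuck".
        stuckTime_ = (verySlow || suddenStop) ? stuckTime_ + dt : 0.0f;
        return stuckTime_ > kHelpDelay;
    }

private:
    static constexpr float kSlowSpeed = 2.0f;   // mm/s, assumed threshold
    static constexpr float kStopDrop  = 50.0f;  // mm/s drop taken as a sudden stop
    static constexpr float kHelpDelay = 1.5f;   // seconds before help is offered
    float smoothedSpeed_ = 0.0f;
    float previousSpeed_ = 0.0f;
    float stuckTime_     = 0.0f;
};
```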

Passive Orientation

The first method, passive orientation, employs natural language to support users in determining their current position. Normally, users utilize the position and the movement of their hand to estimate their position within the virtual environment. However, the structure of the scene and special features which could be used as a reference cannot be taken into account this way. To recognize and learn the spatial connections between different objects, users must often repeat the passage between them.

The basis for the passive orientation is a square grid structure, as proposed as a design principle above, which allows a simple mental image of the scene. However, we change the grid so that the borders between the squares are adjusted to fit important edges of the objects, allowing an adaptation to the structure of the scene (see Figure 5). To adapt the grid to the complexity of the scene and to minimize the amount of information required, it is also possible to add new squares or to remove squares from the grid. All squares within the grid can be used for the passive orientation.

Figure 5: A non-realistic representation (a ground plan) of a building with the grid structure used

Based on the square grid structure, the users hear the information for the square nearest to their position. The acoustic information describes the position within the scene using compass points. To define a reference, the scene can be considered as a map and north can be assigned to some direction; in this case, the users should know which direction corresponds to the reference. However, it is also possible to utilize objects within the scene which focus the users' attention as a reference, or to use both methods.
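A minimal sketch of this lookup, assuming the adjusted grid is stored as a list of squares with precomputed spoken descriptions (the names and layout are ours, not from the paper):

```cpp
// Passive orientation: speak the description of the square nearest the probe.
#include <cmath>
#include <limits>
#include <string>
#include <vector>

struct Square {
    float cx, cy;             // cell centre in plane coordinates
    std::string description;  // stored spoken position, phrased in compass points
};

std::string passiveOrientation(const std::vector<Square>& grid,
                               float px, float py) {
    const Square* nearest = nullptr;
    float best = std::numeric_limits<float>::max();
    for (const auto& s : grid) {
        float d = std::hypot(s.cx - px, s.cy - py);
        if (d < best) { best = d; nearest = &s; }
    }
    return nearest ? nearest->description : "outside the scene";
}
```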

Passive Navigation

The second method, passive navigation, uses natural language to support users during the exploration of the scene. Normally, haptic-only environments do not include spatial information about the structure of the scene. That is why users have only collected a few facts about objects within the scene which they have found and explored before. The construction of this knowledge about the objects, their locations, and their spatial connections requires a precise examination of the scene, which takes much time and imposes great demands on the abilities of the users. Analogous to the passive orientation, passive navigation uses the adjusted square grid structure described above. For each square within the grid, the acoustic information describes the directions which guide the users to the different possible destinations within the neighbourhood of their current position (see Figure 6). The directions are likewise described by compass points. To minimize the amount of acoustic information needed, only important objects located in the squares around the current position are described.

Figure 6: Starting from the current position (the circle), the arrows show the described directions for one square.
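The following sketch suggests how such per-square hints could be generated; the compass phrasing and the helper names are illustrative assumptions rather than the authors' implementation.

```cpp
// Passive navigation: announce important destinations around the current
// square with a compass direction, as in Figure 6.
#include <cmath>
#include <string>
#include <vector>

struct Destination { std::string name; float x, y; };

// Assumed helper: converts a direction vector into a compass phrase.
std::string toCompass(float dx, float dy) {
    const char* ns = (dy > 0) ? "north" : "south";
    const char* ew = (dx > 0) ? "east"  : "west";
    if (std::abs(dy) < 0.25f * std::abs(dx)) return ew;  // mostly horizontal
    if (std::abs(dx) < 0.25f * std::abs(dy)) return ns;  // mostly vertical
    return std::string(ns) + "-" + ew;                   // e.g. "north-east"
}

// Builds the spoken hints for the neighbourhood of the current position.
std::vector<std::string> passiveNavigation(const std::vector<Destination>& important,
                                           float px, float py) {
    std::vector<std::string> hints;
    for (const auto& d : important)
        hints.push_back(d.name + " lies to the " + toCompass(d.x - px, d.y - py));
    return hints;
}
```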

Using this acoustic information about the neighbourhood, simple search methods are possible which allow the exploration of the whole scene. Step by step, the users hear the information and can decide whether they want to explore a destination or move on to the next square. An organized sequence of examined squares guarantees that all important destinations can be found by the users.

3.3. Active Methods

As opposed to the passive methods, active methods use additional force components which are added to the force vector calculated as the result of the haptic rendering process. These additional force components are used to support the users in recognizing the shape of objects and to guide them to predefined destinations. Compared to the passive methods, the force components can influence the users' haptic perception directly and can pull them to the corresponding destination. That is why it is certain that the users will reach the chosen destination if they are cooperative. To reach the destination, the users need no knowledge about the direction they have to choose or about the distance. The disadvantage of the active methods is that the users are limited in their freedom of movement by the forces. Although not very strong, these forces are perceivable.

The forces are limited so that the users keep the ability to leave the range of effect, which is necessary if the methods are only to suggest a direction to choose. Another problem is that, when using force components, only one additional force component should be displayed, representing one piece of information. The attempt to represent several components leads to an overlay of the different pieces of information. The users then have the additional tasks of dismantling the overlay into the included pieces of information and choosing the information they want. This is difficult, sometimes impossible, and distracting. Similar to the problem already described for the passive methods, users need a possibility to select the information they require. As opposed to the one-way communication, in many situations the users can draw special figures which are used as gestures. Together with the collected statistical information about the users' behavior, they can thus decide themselves whether and for how long they require support.

Active Orientation

The first method which calculates an additional force vector to support the users is the active orientation. This method is used to reduce the negative consequences of the slip-off problem. In different studies carried out using haptic-only environments, it was observed that users have difficulties keeping in contact with the objects (see COLWELL, PETRIE, KORNBROT et al. [1]). Especially at edges, the users typically slip off into empty haptic space until they find their way back to the objects again. For the users, the slip-off problem has two important consequences. First, they have difficulties recognizing the shape of objects, especially the shape of the different edges. Furthermore, the users can lose their orientation in relation to the object during the movement through the empty haptic space because they do not feel any forces they can use as a reference or as an orientational aid.

The active orientation simulates a force field around the objects. After an object is touched, a force vector is calculated which pulls the users in the direction of the centre of the object and guides them back to the surface. The force vector is only calculated within a small limited area around the objects, so that the effect can be recognized but the vector does not influence the users during the exploration of the rest of the scene. The calculated force counteracts the consequences of the slip-off problem: it opposes the users' leaving the surface and makes it more difficult for them to leave the area around the object. Hence, the users need to use more pressure if they want to leave this area, so that they can recognize when they move away from the surface. Furthermore, the force fills the usually empty haptic space around the object and can be used by the users as a spatial reference within the scene.

Active Navigation

As opposed to the active orientation, the active navigation supports the users during the exploration of the whole scene. Similar to the passive navigation, this method is used to reduce the disadvantage that there is no global spatial information about objects within the usual haptic representation. However, the active navigation also includes objects which are located outside the neighbourhood. The active navigation is based on small marks within the scene which define paths through the whole environment (see Figure 7). After the start, the calculated force vector used to guide the users works in the direction of the next mark until the corresponding destination is touched.

This calculation method avoids limiting the users' freedom of movement and allows gestures to be drawn during the guidance.
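The paper does not specify the force model, but both active methods reduce to adding one extra force component that pulls toward a target point: the object centre for active orientation, the next path mark for active navigation. A minimal sketch under these assumptions:

```cpp
// Illustrative guidance force: a weak, escapable pull toward a target point.
// The falloff, units, and limits are assumptions, not taken from the paper.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 guidanceForce(const Vec3& probe, const Vec3& target,
                   float maxForce,  // N; kept weak so the user can escape
                   float range)     // effect radius around the target
{
    Vec3 d { target.x - probe.x, target.y - probe.y, target.z - probe.z };
    float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (dist < 1e-6f || dist > range)
        return {0.0f, 0.0f, 0.0f};    // no pull outside the limited area
    float scale = maxForce / dist;     // constant-magnitude pull toward target
    return { d.x * scale, d.y * scale, d.z * scale };
}
// The result is simply added to the force vector computed by the normal
// haptic rendering pass, as described at the start of Section 3.3.
```

For active orientation the effect radius would be kept small so the pull is felt only near the object; for active navigation a large radius approximates a pull that is always active along the path.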

Figure 7: Using marks (cubes) to define a path through the ground plan

The advantage of the active navigation is that the users need not know anything about the spatial positions of the destinations or about the direction to choose. Furthermore, the exploration can be organized using statistical information about paths already used and destinations already examined. Based on this method, different techniques are implemented which allow an adaptation to the users. First, a guided tour can be defined which allows the exploration of the whole environment: after the start, the users are guided from destination to destination, so that they can examine the whole environment without missing any important object. Second, the users can be guided to a single destination which they have not found yet.
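A sketch of how such a guided tour could sequence the marks of Figure 7; the touch test and data layout are assumed for illustration:

```cpp
// Guided tour over predefined marks: the guidance force targets the next
// mark, and once it is touched, we advance along the path.
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

class GuidedTour {
public:
    explicit GuidedTour(std::vector<Vec3> marks) : marks_(std::move(marks)) {}

    // Returns the current target mark, advancing when the probe touches it,
    // or nullptr once the tour is finished.
    const Vec3* nextTarget(const Vec3& probe, float touchRadius) {
        if (current_ >= marks_.size()) return nullptr;
        const Vec3& m = marks_[current_];
        float dx = m.x - probe.x, dy = m.y - probe.y, dz = m.z - probe.z;
        if (dx * dx + dy * dy + dz * dz < touchRadius * touchRadius)
            ++current_;   // mark reached, move on to the next one
        return (current_ < marks_.size()) ? &marks_[current_] : nullptr;
    }

private:
    std::vector<Vec3> marks_;
    std::size_t current_ = 0;
};
```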

4. The Prototype
A prototypical implementation combines the different methods into a guidance and non-realistic haptic rendering tool. The tool allows free interactive as well as guided exploration of architectural models. The program has been implemented on a PC using a PHANToM 1.5 from SensAble Technologies Inc. as the haptic display.

The basis of our rendering tool is the extraction of the geometric data to be represented as well as the definition of the possible orientation and navigation methods. Our raw data consists of a 3D model which was used for photorealistic rendering. Admittedly, such models use too many polygons for the visualization and contain artifacts which make straightforward haptic rendering awkward. For example, if two objects are not joined perfectly and overlap, no force is calculated within this area (see HARDWICK, TURNER, and RUSH [4]). Therefore, it is better to design new scenes which are more amenable to non-realistic transformations. The geometric information used in the system was stored in the VRML97 format, which allows a description of 3D scenes. Furthermore, it is possible to employ any of various 3D modeling tools to construct these scenes. However, the disadvantage is that this file format does not contain haptic parameters (e.g. stiffness, static and dynamic friction, as well as damping). Therefore, it is not possible to enrich scenes stored in the VRML97 format with additional haptic parameters and textures, or with the additional information used for the different extensions. This is why we designed our own file format.
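The paper does not publish this file format, but the parameters it lists suggest that each object record must carry at least the following fields; this struct is a hypothetical reading, not the actual format:

```cpp
// Hypothetical per-object record for a custom scene format; the field list
// follows the parameters named in the text, the layout is our own guess.
#include <string>

struct HapticMaterial {
    float stiffness;         // surface spring constant
    float staticFriction;
    float dynamicFriction;
    float damping;
    std::string textureId;   // optional haptic texture reference
    std::string infoText;    // natural-language description spoken on touch
};
```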

Based on the stored information, the users can feel the objects within the scene and hear information about the touched objects and their shape. Furthermore, they can hear the stored information about their position within the scene and can get suggestions where to find important objects in the neighbourhood. If paths are defined, the users can draw gestures with the force feedback device to search for important objects and to activate the guidance. Furthermore, the users can use gestures to get more information about the objects, to change the scene, to change the haptic level of detail, and to scale objects.

5. Future Work
In the future we plan to continue our work in three directions. First, we will perform a systematic evaluation of the different passive and active methods with sighted and blind people; currently, only informal tests have been carried out, which showed promising results. Second, we will change the architecture of our program into a client-server based system, since we currently have some runtime and stability problems when representing complex scenes. Finally, we will write a tool to make it easier to generate the file containing the exploration information.

Acknowledgements
Presently, to demonstrate the abilities of our tool, the virtual reconstruction of the palace of OTTO THE GREAT is used (see [10]). The authors want to thank the students and colleagues of our institute for modelling the palace.

References
[1] C. Colwell, H. Petrie, D. Kornbrot, A. Hardwick, and S. Furner. Haptic Virtual Reality for Blind Computer Users. Proceedings of the Third International ACM Conference on Assistive Technologies, 1998.
[2] R. P. Darken and J. L. Sibert. Navigating Large Virtual Spaces. International Journal of Human-Computer Interaction, 1996.
[3] R. P. Darken and J. L. Sibert. Wayfinding Strategies and Behaviors in Large Virtual Worlds. Proceedings of the Conference on Human Factors in Computing Systems, pages 49–72, 1996.
[4] A. Hardwick, S. Turner, and J. Rush. Tactile Access for Blind People to Virtual Reality on the World Wide Web. IEE Displays Journal, 12, 1996.
[5] G. Jannson, J. Fänger, H. König, and K. Billberger. Visually Impaired Persons' Use of the PHANToM for Information about Texture and 3D Form of Virtual Objects. Proceedings of the Third PHANToM Users Group Workshop, 1998.
[6] H. König, J. Schneider, and Th. Strothotte. Haptic Exploration of Virtual Buildings Using Non-Realistic Rendering. Proceedings of the International Conference on Computers Helping People with Special Needs, in press, 2000.
[7] J. Lansdown and S. Schofield. Expressive Rendering: A Review of Nonphotorealistic Techniques. IEEE Computer Graphics and Applications, pages 29–37, May 1995.
[8] F. Ritter, B. Preim, O. Deussen, and Th. Strothotte. Using a 3D Puzzle as a Metaphor for Learning Spatial Relations. Proceedings of Graphics Interface, pages 171–178, 2000.

[9] C. Sjöström. The Phantasticon: The PHANToM for Blind People. Proceedings of the Second PHANToM Users Group Workshop, 1997.
[10] Th. Strothotte, M. Masuch, and T. Isenberg. Visualizing Knowledge about Virtual Reconstructions of Ancient Architecture. Computer Graphics International, pages 36–43, 1999.
[11] Th. Strothotte, B. Preim, A. Raab, J. Schumann, and D. R. Forsey. How to Render Frames and Influence People. Proceedings of Eurographics, pages 455–466, 1994.
[12] F. Tendick and M. Hegarty. Elucidating, Assessing, and Training Spatial Skills in Minimally Invasive Surgery Using Virtual Environments. Proceedings of the AAAI Spring Symposium, pages 148–155, 2000.
[13] Th. Strothotte et al. Abstraction in Interactive Computational Visualization: Exploring Complex Information Spaces. Springer-Verlag, 1998.

