How far can we get with just visual information? [Elektronische Ressource] : path integration and spatial updating studies in virtual reality / vorgelegt von Bernhard E. Riecke
213 pages
English


How far can we get with just visual information?
Path integration and spatial updating studies in
Virtual Reality
Dissertation
zur Erlangung des Grades eines Doktors
der Naturwissenschaften
der Fakultät für Mathematik und Physik
der Eberhard Karls Universität zu Tübingen
vorgelegt von
Bernhard E. Riecke
aus Reutlingen
2003
Tag der mündlichen Prüfung: 14.07.2003
Dekan: Prof. Dr. Herbert Müther
1. Berichterstatter: Prof. Dr. Hanns Ruder und Prof. Dr. Heinrich H. Bülthoff
2. Berichterstatter: Prof. Dr. Bernhard Schölkopf
1 Summary
How do we find our way around in everyday life? In real world situations, it typically takes a considerable amount of time to get completely lost. In most Virtual Reality (VR) applications, however,
users are quickly lost after only a few simulated turns. This happens even though many recent VR
applications are already quite compelling and look convincing at first glance. So what is missing in
those simulated spaces? Why is spatial orientation there not as easy as in the real world? In other
words, what sensory information is essential for accurate, effortless, and robust spatial orientation?
How are the different sources combined and processed?
In this thesis, these and related questions were approached by performing a series of spatial orientation experiments in various VR setups as well as in the real world. Modeling of the underlying spatial
orientation processes finally led to a comprehensive framework based on logical propositions, which
was applied to both our experiments and selected experiments from the literature. Using VR allowed
us to disentangle the different information sources, sensory modalities, as well as possible spatial orientation processes and strategies. It further offered the precise control, repeatability, and flexibility
of stimuli and experimental conditions, which is difficult to achieve in real world experiments.
A first series of experiments (part II) investigated the usability of purely visual cues, with particular
focus on optic flow, for basic navigation and spatial orientation tasks. According to the prevailing
opinion in the literature, those cues should not be sufficient: Proprioceptive and especially vestibular cues are supposedly prerequisites even for simple navigation and spatial orientation tasks if they
involve rotations of the observer. Furthermore, visual cues alone are often considered insufficient
for good spatial orientation, especially when useful reference points (landmarks) are missing. To
test this notion, we conducted a set of experiments in virtual environments where only visual cues
were provided. Participants had to execute simulated turns, reproduce distances, or perform triangle completion tasks. Most experiments were performed in a simulated 3D field of blobs, thus
restricting navigation strategies to path integration based on optic flow. For our experimental setup
(half-cylindrical 180°×50° projection screen), optic flow information alone proved to be sufficient for
untrained participants to perform turns and reproduce distances with negligible systematic errors, irrespective of movement velocity. Path integration by optic flow was sufficient for homing by triangle
completion, but homing distances were biased towards the mean response. Additional landmarks that
were only temporarily available did not improve homing performance. Navigation by stable, reliable
landmarks, however, led to almost perfect homing. Compared to similar experiments
using virtual environments (Kearns et al., 2002; Péruch et al., 1997) or blind locomotion (Loomis
et al., 1993; Klatzky et al., 1990), we did not find any distance undershoot or strong regression
towards mean turn responses. Using a Virtual Reality setup with a half-cylindrical 180° projection
screen allowed us to demonstrate that visual path integration without any vestibular or kinesthetic
cues can indeed be sufficient for elementary navigation tasks like rotations, translations, and homing
via triangle completion.
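The core computation behind these tasks can be sketched in a few lines. The following is a minimal 2D illustration (not the analysis code used in the thesis, and all function names are invented here): path integration dead-reckons position and heading from the turn/distance signal that optic flow alone can supply, and triangle completion then asks for the distance and egocentric turn back to the start.

```python
import math

def integrate_path(steps):
    """Dead-reckon position and heading from (turn_angle, distance)
    segments -- the kind of self-motion signal optic flow can provide."""
    x = y = heading = 0.0
    for turn, dist in steps:
        heading += turn                      # rotate first,
        x += dist * math.cos(heading)        # then translate along the
        y += dist * math.sin(heading)        # new heading
    return x, y, heading

def homing_response(x, y, heading):
    """Distance and egocentric turn needed to face the start point,
    i.e. the correct triangle-completion response."""
    distance = math.hypot(x, y)
    bearing = math.atan2(-y, -x) - heading   # world bearing to origin, made egocentric
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to (-pi, pi]
    return distance, bearing
```

For example, walking 3 units, turning 90° left, and walking 4 units leaves the navigator 5 units from home, so a correct homing response reports that distance regardless of movement velocity, matching the error pattern the experiments probe.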
Nevertheless, we did observe some systematic errors that could not be convincingly explained by
the literature or by the experiments themselves. A detailed analysis of participants’ behavior suggested that general cognitive abilities and mental spatial reasoning abilities in particular might have been the determining factor. Positive correlations between navigation performance and mental spatial abilities test scores corroborated this hypothesis. In comparable real world situations, however,
no higher cognitive processes seem to be needed (even animals as simple as ants can perform comparable homing tasks). Instead, we seem to know automatically and effortlessly where relevant objects in our immediate surround are when moving about, without having to think much about it. Hence, we hypothesized that this “automatic spatial updating” of self-to-surround relations during ego motion was not functioning properly in our and many other VR studies. So what was missing in the simulations? The literature suggests that vestibular cues from physical motions are indispensable for
automatic spatial updating. Furthermore, visual cues alone should be insufficient, especially when
ego rotations are involved.
To test these hypotheses, we established a rapid pointing paradigm and performed a second series of experiments that investigated the influence and interaction of visual and vestibular stimulus parameters for spatial updating in real and virtual environments (part III). After real and/or visually simulated ego turns, participants were asked to accurately and quickly point towards different previously learned target objects that were currently not visible. The rapid egocentric response ensured that participants could not solve the task cognitively.
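The geometry scored in this paradigm is simple: after a turn, the correct pointing direction to each learned target is its world bearing rotated into the observer's new reference frame. A minimal 2D sketch (illustrative only; the function names are assumptions, not the thesis's analysis code):

```python
import math

def egocentric_bearing(target, observer, heading):
    """Angle (radians, 0 = straight ahead, positive = counterclockwise)
    at which a learned target lies after the observer turns to `heading`."""
    b = math.atan2(target[1] - observer[1],
                   target[0] - observer[0]) - heading
    return math.atan2(math.sin(b), math.cos(b))  # wrap to (-pi, pi]

def pointing_error(pointed, expected):
    """Signed angular pointing error, wrapped to (-pi, pi]."""
    d = pointed - expected
    return math.atan2(math.sin(d), math.cos(d))
```

A target learned straight to the left (bearing +90°) should, after a 90° leftward ego turn, be pointed at straight ahead; correct automatic spatial updating predicts small `pointing_error` values even when the turn was only visually simulated.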
Unpredicted by the literature, visual cues alone proved sufficient for excellent automatic spatial updating performance, even without any vestibular motion cues. Furthermore, participants were virtually unable to ignore or suppress the visual stimulus even when explicitly asked to do so. This indicates that the visual cues alone were even sufficient to evoke reflex-like “obligatory spatial updating”. Comparing performance in the real environment and a photorealistic virtual replica revealed
similar performance as long as the field of view was the same. That is, a simulated view onto a
consistent, landmark-rich environment was as powerful in turning our mental spatial representation (even against our own conscious will) as a corresponding view onto the real world. This highlights the power and flexibility of using highly photorealistic virtual environments for investigating human spatial orientation and spatial cognition. It furthermore validates our VR-based experimental paradigm, and suggests the transferability of results obtained in this VR setup to comparable real
world tasks. From a number of additional parameters investigated, only the field of view and the
availability of landmarks had a consistent influence on spatial updating performance. Unexpectedly, motion parameters did not show any clear influence, which might be interpreted as a dominant
influence of static visual (display) information over dynamic (motion) information.
Modeling spatial orientation processes in a comprehensive framework based on logical propositions
(part IV) allowed for a deeper understanding of the underlying mechanisms in both our experiments
and experiments from the literature. Furthermore, the logical structure of the framework suggests
novel ways of quantifying spatial updating and “spatial presence” (which can be seen as the consistent feeling of being in a specific spatial context, and intuitively knowing where one is with respect
to the immediate surround). In particular, it allows the disambiguation between two complementary
types of automatic spatial updating found in our experiments: on the one hand, the well-known “continuous spatial updating” induced by continuous movement information; on the other hand, a novel type of discontinuous, teleport-like “instantaneous spatial updating” that allowed participants
to quickly adopt the reference frame of a new location without any explicit motion cues, just by
presenting a novel view from a different viewpoint. Last but not least, the framework suggested
novel experiments and experimental paradigms, was used to generate new hypotheses and testable
predictions, and already stimulated the scientific discussion in the presence research community.
In addition to assessing spatial cognition, the logical framework proved helpful in tackling the
human-computer interface issue. Several critical simulation and display parameters required for quick and effortless spatial orientation were pinpointed: First of all, any application that does not enable automatic spatial updating is bound to decrease quick and effortless spatial orientation performance and hence unnecessarily increase cognitive load. In addition, most current VR displays do not allow for effective ego motion simulation and/or tend to produce rather large artifacts in ego motion perception. This is especially true for head-mounted displays. Hence, the importance of designing effective VR displays can hardly be overestimated. Furthermore, the simulated objects should be salient enough, non-repetitive, and constitute one coherent scene that can be updated as a whole.
