
Spatial perception and navigation play an important role in our daily lives, for instance in finding a path or avoiding potential threats. While we rely heavily on vision, audition may play a hitherto neglected part in these tasks. Unlike vision, audition is not limited to the frontal field of view or to objects in the line of sight, and thus provides information omnidirectionally. In contrast to auditory source localization in the horizontal plane, the sensory cues underlying spatial navigation are likely to depend more strongly on the source and on the acoustic properties of the environment. The goal of this project is to bridge the gap between auditory spatial perception, visual spatial navigation, and self-motion. We investigate i) how auditory information contributes to world-centered spatial awareness, ii) how current computational models of spatial behavior can be applied to auditory information and sensory integration, and iii) how auditory sensory cues influence the brain regions involved in world-centered navigation. To do this, we look beyond sound source-based functions, such as head-centric localization or distance perception, to the global spatial information that sound can provide. Experiments will investigate the effect of self-motion on sound source localization, the ability to judge the size of an environment from auditory and visual information, the effect of different modes of representing self-motion in virtual reality (VR) on spatial navigation, the learning of a spatial representation of the environment, and finally, the contribution of auditory information to navigating a complex virtual environment. We employ state-of-the-art VR techniques, using a computer game engine for the visual rendering coupled to our own specifically adapted and optimized (room) acoustics simulator. In addition to behavioral measurements, we will identify the related brain areas with functional magnetic resonance imaging (fMRI).

This project will provide results on:
1) Quantification of the degree of audiovisual sensory integration for global spatial perception of sources and environments, and for navigation within complex environments. Using cognitive computational modeling, we will quantify the weighting of, and biases in, both sensory modalities.
2) How situations with restricted (head) movements and posture, such as in simulator systems or an MRI scanner, change spatial orientation and navigation behavior.
3) Novel insights into how audiovisual information is combined in the brain, and how auditory spatial information affects the brain networks for spatial navigation.
4) Advancement of acoustic VR and of MRI sequences for conditions involving acoustic stimulation.

We will provide concrete recommendations for optimizing the usability of VR software for applications such as training/rehabilitation, teaching, and audiovisual research.
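As an illustration of the kind of cognitive computational modeling referred to under point 1, the sketch below implements the standard maximum-likelihood (reliability-weighted) cue-combination rule commonly used to quantify audiovisual weighting. It is a minimal example only; the function name, the assumption that this particular model is used, and the numerical values are illustrative and not part of the project description.

```python
def ml_integration(est_a, var_a, est_v, var_v):
    """Reliability-weighted (maximum-likelihood) combination of an
    auditory and a visual estimate of the same spatial quantity.

    est_a, est_v : unimodal estimates (e.g., perceived room size or source azimuth)
    var_a, var_v : variances of the unimodal estimates (inverse reliabilities)
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)  # auditory weight
    w_v = 1.0 - w_a                                    # visual weight
    est_av = w_a * est_a + w_v * est_v                 # combined estimate
    var_av = 1.0 / (1.0 / var_a + 1.0 / var_v)         # combined (reduced) variance
    return est_av, var_av, w_a, w_v

# Illustrative (made-up) numbers: vision is more reliable here, so it dominates,
# and the fused estimate is more precise than either cue alone.
est, var, w_a, w_v = ml_integration(est_a=12.0, var_a=4.0, est_v=9.0, var_v=1.0)
print(f"combined estimate = {est:.2f}, variance = {var:.2f}, auditory weight = {w_a:.2f}")
```

In such a framework, the empirically measured weights and any systematic offsets between the unimodal and combined estimates provide the "weighting and biases" of the two sensory modalities.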

People:

Steven van de Par, Carl von Ossietzky Universität Oldenburg (Project Leader)
Stephan Ewert, Carl von Ossietzky Universität Oldenburg (Project Leader)
Virginia Flanagin, University Hospital Munich (Project Leader)