Interactive Virtual Environments (iVEs) open new possibilities for studying human perception and social cognitive processing in complex scenes, and thereby new research perspectives. Although it is commonly agreed that auditory stimuli are crucial for studying perception and cognition in iVEs, the quality of the audio rendering is often neglected. At present, the most advanced level of audio rendering is found in room-acoustical auralisations. Yet even in this audio-centric field, concepts are lacking to validate the transfer of perceived impressions from the original room to its virtual representation delivered in a different reproduction room. More specifically, even when all acoustic cues are represented fully authentically, the mismatch between the physical room in which an observer is present and the acoustic environment that is rendered may cause the rendering to be perceived differently from the original scene.

We therefore propose to first develop a new concept to assess the transfer of acoustic perception from the original to the rendered scene, one that extends beyond the established concepts of authenticity and plausibility. In the context of the proposed project, this concept will be referred to as realism. In addition to auditory stimuli, we will use congruent visual stimuli presented via head-mounted displays (HMDs). One important research question regarding practical applications is whether visual congruence with the acoustically rendered environment alleviates the need to measure individual binaural room impulse responses (BRIRs).

A high degree of auditory realism in iVEs makes it possible to study more complex cognitive processing phenomena in close-to-real-life scenes. Social cognition is a very prominent field of research in which iVEs can help to advance experimental paradigms and to understand the complex process of social interaction. We therefore investigate the impact of the audio rendering on selected cognitive processes relevant for this field, and specifically for the investigation and treatment of social anxiety. These cognitive processes include perception and cross-modal attention. Furthermore, we investigate to what extent realistic audio rendering in social audio-visual iVEs contributes to evoking emotional responses such as fear and to enhancing potential mediating factors such as social presence. Finally, the effect of the audio rendering on social emotional learning processes will be investigated.

The long-term goal of this project is to establish iVEs with binaural audio rendering and visual rendering via HMDs as a research tool in room acoustics and for the investigation and treatment of social interactions and social anxiety. We expect to establish recommendations and procedures for audio rendering in room-acoustics research and in psychological research, especially in clinical and social psychology.
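For readers unfamiliar with BRIR-based auralisation as mentioned above, the sketch below illustrates the basic principle: a dry (anechoic) source signal is convolved with the left-ear and right-ear binaural room impulse responses, imprinting the room's reverberation and the listener's directional cues on the signal for headphone playback. This is a minimal illustration only, not the project's actual rendering pipeline; the file names and sampling-rate handling are assumptions for the example.

```python
# Minimal sketch of BRIR-based binaural auralisation
# (hypothetical file names; not the project's rendering pipeline).
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Dry (anechoic) mono source signal and a measured BRIR with two
# channels (left ear, right ear), assumed to share one sampling rate.
source, fs_src = sf.read("dry_speech.wav")      # shape: (n_samples,)
brir, fs_brir = sf.read("brir_listener1.wav")   # shape: (n_taps, 2)
assert fs_src == fs_brir, "resample first if the rates differ"

# Convolution with each ear's impulse response yields the binaural signal.
left = fftconvolve(source, brir[:, 0])
right = fftconvolve(source, brir[:, 1])
binaural = np.stack([left, right], axis=1)

# Normalise to avoid clipping and store for headphone playback.
binaural /= np.max(np.abs(binaural))
sf.write("auralisation.wav", binaural, fs_src)
```

In the project context, such a binaural signal would be presented over headphones together with congruent visual stimuli shown on the HMD.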

People:

Matthias Blau, Jade Hochschule Wilhelmshaven (Project Leader)
Steven van de Par, Carl von Ossietzky Universität Oldenburg (Project Leader)
Andreas Mühlberger, University of Regensburg (Project Leader)
Felix Stärz, Jade Hochschule Wilhelmshaven (PhD Student)
Leon Kroczek, University of Regensburg (PhD Student)
Sarah Roßkopf, M.Sc., University of Regensburg (PhD Student)