Parametrization, auralization, and authoring of room acoustics for virtual reality applications

Doctoral thesis (article-based)
Date
2003-05-26
Major/Subject
Virtual room acoustics
Language
en
Pages
74, [86]
Series
Report / Helsinki University of Technology, Laboratory of Acoustics and Audio Signal Processing, Raportti / Teknillinen korkeakoulu, akustiikan ja äänenkäsittelytekniikan laboratorio, 70
Abstract
The primary goal of this work has been to develop means to represent the acoustic properties of an environment with a set of spatial-sound-related parameters. These parameters are used to create virtual environments in which sounds are intended to be perceived by the user as if they were heard in a corresponding real space. The virtual world may consist of both visual and audio components; ideally, the sound and the visual parts of the virtual scene are coherent with each other, which should improve the user's immersion in the virtual environment. The second aim was to verify the practical feasibility of the created sound-environment parameter set. A virtual acoustic modeling system was implemented in which any spatial sound scene defined with the developed parameters can be rendered audible in real time; in other words, the user can listen to the sound auralized according to the defined sound scene parameters. Thirdly, the authoring of such parametric sound scene representations was addressed. In this authoring framework, sound scenes and an associated visual scene can be created, then encoded and transmitted in real time to a remotely located renderer. The visual counterpart was created as part of the multimedia scene, acting simultaneously as a user interface for renderer-side interaction.
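To illustrate the idea of a parametric sound scene description in concrete terms, the minimal sketch below models a single sound source with a position and distance-attenuation parameters, and derives a playback gain from them. The class name, fields, and the inverse-distance attenuation model are illustrative assumptions for this sketch, not the parameter set defined in the thesis or in MPEG-4 AudioBIFS.

```python
import math
from dataclasses import dataclass

@dataclass
class SoundSource:
    """Hypothetical parametric description of one source in a virtual sound scene."""
    position: tuple          # (x, y, z) in metres; listener assumed at the origin
    ref_distance: float = 1.0  # distance at which the gain is 1.0 (0 dB)
    rolloff: float = 1.0       # attenuation exponent; 1.0 approximates the inverse-distance law

    def distance_gain(self) -> float:
        """Linear gain from a simple inverse-distance attenuation model."""
        d = math.dist(self.position, (0.0, 0.0, 0.0))
        d = max(d, self.ref_distance)  # clamp: no boost inside the reference distance
        return (self.ref_distance / d) ** self.rolloff

# A source 2 m in front of the listener is attenuated to half amplitude.
src = SoundSource(position=(0.0, 0.0, 2.0))
print(round(src.distance_gain(), 2))  # 0.5
```

A real-time renderer would evaluate such parameters on every update of the scene, so that moving a source in the (possibly visual) scene description immediately changes how it is auralized.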
Description
Keywords
virtual acoustics, room acoustic modeling, 3D sound, sound scene description, MPEG-4, authoring
Other note
Parts
  • Väänänen R., Huopaniemi J., Välimäki V. and Karjalainen J., 1997. Efficient and parametric reverberator for room acoustics modeling. Proceedings of the International Computer Music Conference (ICMC '97). Thessaloniki, Greece, 25-30 September 1997, pages 200-203.
  • Scheirer E., Väänänen R. and Huopaniemi J., 1999. AudioBIFS: Describing audio scenes in MPEG-4 multimedia standard. IEEE Transactions on Multimedia 1, No. 3, pages 237-250.
  • Väänänen R. and Huopaniemi J., 1999. Virtual acoustics rendering in MPEG-4 multimedia standard. Proceedings of the International Computer Music Conference (ICMC 1999). Beijing, China, October 1999, pages 585-588.
  • Väänänen R. and Huopaniemi J., 2000. Spatial processing of sounds in MPEG-4 virtual worlds. Proceedings of EUSIPCO 2000 Conference. Tampere, Finland, September 2000. Vol. 4, pages 2209-2212.
  • Väänänen R., Huopaniemi J. and Pulkki V., 2000. Comparison of sound spatialization techniques in MPEG-4 scene description. Proceedings of the International Computer Music Conference (ICMC 2000). Berlin, Germany, September 2000, pages 288-291.
  • Väänänen R. and Huopaniemi J., Advanced AudioBIFS: Virtual acoustics modeling in MPEG-4 scene description. IEEE Transactions on Multimedia, accepted for publication.
  • Väänänen R., 2003. User interaction and authoring of 3D sound scenes in the Carrouso EU project. Preprint No. 5764 of the 114th Convention of the Audio Engineering Society (AES). Amsterdam, The Netherlands, March 2003.
Citation
Permanent link to this item
https://urn.fi/urn:nbn:fi:tkk-000528