Audioscape

Introduction to Audioscape


Audioscape is an open source research project exploring 3-D audio modelling for live music performance. It provides a framework for immersive spatial audio performance, in which a user's body can be modelled within a virtual 3-D world and the propagation of audio is computed based on acoustic physical modelling. This framework is among only a few that have explored virtual environments from the perspective of music and digital signal processing (DSP). The result is an interactive and perceptually engulfing experience in which users feel as though they are inside an artwork or instrument. Rich 3-D graphics are available for display, providing visual feedback that helps users control the artwork. However, the primary focus of this research is the development of a paradigm for interacting with 3-D sound, and a method of accomplishing complicated signal processing using spatial relations.

Overview


* Multiple users are immersed in a virtual environment, each having a subjective visual and auditory rendering of the scene.
* Spatialized audio is provided using a virtual microphone technique; the number of virtual mics corresponds to the number of loudspeakers being used (a minimal sketch of this idea follows the list).
* Realtime physical modelling of virtual sound is used to provide the necessary signals to the virtual mics.
* The scene also contains other interactive sonic objects, which can perform DSP at specific 3-D locations.
* Users have the ability to ‘steer’ sound through 3-D space to determine how their sound signals will be modified.
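
For illustration, the following Python sketch shows one simple way a virtual-microphone technique of this kind could be realized: one virtual mic per loudspeaker, each aimed like its corresponding speaker, with the per-speaker gain taken from the mic's sensitivity in the direction of the source. The names and the cardioid pickup pattern are assumptions for the example, not Audioscape's actual implementation.

    import math

    def speaker_gains(source_azimuth, speaker_azimuths):
        """Return one gain per loudspeaker for a source at source_azimuth (radians)."""
        gains = []
        for mic_azimuth in speaker_azimuths:
            angle = source_azimuth - mic_azimuth
            # Cardioid pickup: 1.0 on-axis, 0.0 directly behind the virtual mic.
            gains.append(0.5 * (1.0 + math.cos(angle)))
        return gains

    # Example: a quad loudspeaker ring at 45, 135, 225 and 315 degrees.
    ring = [math.radians(a) for a in (45, 135, 225, 315)]
    print(speaker_gains(math.radians(90), ring))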

Framework for Modelling Virtual Audio



Node-based approach with a scene graph data structure:

An audio scene is composed of sound processing nodes that exist at some 3-D location and maintain various parameters to aid in DSP computation. These nodes are organized in a scene graph, a tree-like data structure (commonly found in computer graphics applications) that forms parent-child relationships between nodes. Any operation applied to a node will automatically propagate to all of its children.
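
The Python sketch below illustrates the parent-child idea in a minimal form; the class and method names are hypothetical and do not reflect Audioscape's API. Each node stores only a local offset relative to its parent, so moving a parent automatically moves every descendant.

    class SceneNode:
        def __init__(self, name, local_pos=(0.0, 0.0, 0.0)):
            self.name = name
            self.local_pos = list(local_pos)   # offset relative to the parent
            self.children = []

        def add_child(self, child):
            self.children.append(child)

        def translate(self, dx, dy, dz):
            # Only the local offset changes; children inherit it implicitly.
            self.local_pos[0] += dx
            self.local_pos[1] += dy
            self.local_pos[2] += dz

        def world_pos(self, parent_pos=(0.0, 0.0, 0.0)):
            return tuple(p + l for p, l in zip(parent_pos, self.local_pos))

        def dump(self, parent_pos=(0.0, 0.0, 0.0), indent=0):
            pos = self.world_pos(parent_pos)
            print("  " * indent + f"{self.name}: {pos}")
            for c in self.children:
                c.dump(pos, indent + 1)

    performer = SceneNode("performer", (1.0, 0.0, 0.0))
    mouth = SceneNode("mouth_source", (0.0, 0.0, 1.7))
    performer.add_child(mouth)
    performer.translate(2.0, 0.0, 0.0)   # the attached source follows automatically
    performer.dump()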

The ‘soundNode’:

The soundNode is the fundamental building block of a virtual audio scene. It can behave as a source, as a sink, or as both at the same time. The case where it acts as both is particularly important, since this is how spatially localized DSP is realized: the node collects audio at a specific 3-D location, applies some processing, and radiates the result back into the scene.
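
A minimal sketch of a node that acts as both sink and source might look as follows; the names and the toy gain effect are assumptions for the example, not the actual soundNode implementation.

    class SoundNode:
        def __init__(self, position, dsp=None):
            self.position = position          # (x, y, z) in the scene
            self.dsp = dsp                    # e.g. a filter or ring modulator
            self._in_buffer = []

        def collect(self, samples):
            """Sink behaviour: receive audio arriving at this location."""
            self._in_buffer.extend(samples)

        def radiate(self):
            """Source behaviour: emit processed audio back into the scene."""
            out = self._in_buffer
            self._in_buffer = []
            return self.dsp(out) if self.dsp else out

    # Example: a localized effect that simply halves the amplitude.
    fx = SoundNode((0.0, 2.0, 1.0), dsp=lambda buf: [0.5 * s for s in buf])
    fx.collect([0.1, -0.2, 0.3])
    print(fx.radiate())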

The ‘soundConnection’:

The soundConnection specifies the propagation model between soundNodes. That is, it defines how audio travels from a source node to a sink, based on physically modelled audio propagation (decay with distance, travel time, diffraction, reflection, etc.). When constructing DSP applications, soundConnections perform a similar function to the traditional patch cords used in sound studios, or to those found in patcher-based audio software such as Max/MSP. The obvious difference is that a soundConnection processes the signal according to acoustic models, so connections do not pass signals at unity gain.
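
As a rough illustration of such a propagation model, the sketch below derives a gain and a delay from the distance between two nodes; the function names and the simple inverse-distance law are assumptions for the example rather than Audioscape's exact model.

    import math

    SPEED_OF_SOUND = 343.0  # metres per second, approximately, at room temperature

    def distance(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def connection_params(src_pos, sink_pos, rolloff=1.0):
        d = max(distance(src_pos, sink_pos), 1.0)   # clamp to avoid blow-up near 0
        gain = 1.0 / (d ** rolloff)                 # decay with distance
        delay_sec = d / SPEED_OF_SOUND              # propagation time
        return gain, delay_sec

    print(connection_params((0, 0, 0), (10, 0, 0)))   # ~0.1 gain, ~29 ms delay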

The ‘soundSpace’:

The soundSpace provides volumetric processing rather than the localized processing of a soundNode. These nodes are typically defined by some 3-D model (exported from Maya, 3D Studio Max, Blender, etc.), and capture sound from the nodes within. They are useful for simulating acoustic effects such as reverberation, but can also be used to define spatial regions with specific sound processing.
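
A minimal sketch of volumetric capture follows, using an axis-aligned bounding box in place of an arbitrary 3-D model; the names, the box geometry and the placeholder effect are simplifying assumptions for the example.

    class SoundSpace:
        def __init__(self, min_corner, max_corner, effect):
            self.min_corner = min_corner
            self.max_corner = max_corner
            self.effect = effect            # callable applied to captured audio

        def contains(self, pos):
            return all(lo <= p <= hi for lo, p, hi in
                       zip(self.min_corner, pos, self.max_corner))

        def process(self, node_pos, samples):
            # Nodes inside the volume are routed through the space's effect.
            return self.effect(samples) if self.contains(node_pos) else samples

    hall = SoundSpace((-5, -5, 0), (5, 5, 4), effect=lambda buf: [0.8 * s for s in buf])
    print(hall.process((1.0, 0.0, 1.5), [1.0, 0.5]))   # inside: effect applied
    print(hall.process((9.0, 0.0, 1.5), [1.0, 0.5]))   # outside: passed through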

Physical Simulation … and bending the rules

Sound that travels through the virtual scene is physically modelled to simulate phenomena such as:

* exponential decay of energy during travel
* travel time according to the speed of sound
* diffraction of low frequencies around volumes
* absorption of high frequencies as a function of distance
* etc.

One important feature, however, is the ability to bend the rules of physics that govern the propagation of sound. By manipulating the parameters of the acoustic models, users can achieve results that are more interesting for artistic or musical purposes than those provided by standard audio simulation. For example, decay and delay of sound can be diminished in order to ‘teleport’ sound from one place to another, Doppler shift can be eliminated to preserve the tonal aspects of a musical piece, and sound can be ‘steered’ in a precise direction instead of propagating as a spherical wavefront.
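
The sketch below extends the earlier propagation example with scaling factors that can be pulled toward zero; the parameter names are hypothetical, but they illustrate how diminishing decay and delay ‘teleports’ sound and removes Doppler shift (with no propagation delay, a moving source produces no pitch change).

    import math

    SPEED_OF_SOUND = 343.0

    def propagate(src_pos, sink_pos, decay_scale=1.0, delay_scale=1.0):
        d = max(math.dist(src_pos, sink_pos), 1.0)
        gain = 1.0 / (d ** decay_scale)           # decay_scale=0 -> unity gain at any distance
        delay = delay_scale * d / SPEED_OF_SOUND  # delay_scale=0 -> no delay, hence no Doppler
        return gain, delay

    print(propagate((0, 0, 0), (50, 0, 0)))                                 # physical behaviour
    print(propagate((0, 0, 0), (50, 0, 0), decay_scale=0, delay_scale=0))   # 'teleported' sound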

Publications



1 Cooperstock, J.R., Wozniewski, M. and Settel, Z. (2007). Towards mobile spatial audio for distributed musical systems and multi-user virtual environments. Spatial Audio for Mobile Devices, Workshop in conjunction with the International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), Singapore, Sept. 9.
2 Wozniewski, M., Settel, Z. and Cooperstock, J.R. (2007). User-specific audio rendering and steerable sound for distributed virtual environments. International Conference on Auditory Display, June 26-29, Montreal.
3 Wozniewski, M., Settel, Z. and Cooperstock, J.R. (2007). AudioScape: A Pure Data library for management of virtual environments and spatial audio. Pure Data Convention, Aug. 21-26, Montreal.
4 Wozniewski, M., Settel, Z. and Cooperstock, J.R. (2006). A Paradigm for Physical Interaction with Sound in 3-D Audio Space. International Computer Music Conference, Nov. 6-11, New Orleans.
5 Wozniewski, M., Settel, Z. and Cooperstock, J.R. (2006). A Spatial Interface for Audio and Music Production. International Conference on Digital Audio Effects (DAFx), Sept. 18-20, Montreal.
6 Wozniewski, M., Settel, Z. and Cooperstock, J.R. (2006). A framework for immersive spatial audio performance. New Interfaces for Musical Expression (NIME), June 5-7, Paris.


Artworks

Below are some example artworks realized with Audioscape:

4Dmix3, a multi-user installation for audio remix creation in space. (Zack Settel, 2007)

Menagerie Imaginaire, a live performance work for av-capturer and two musicians playing live in a virtual audiovisual scene. (Dumas, Settel, Wozniewski, 2007)


Acknowledgments

The authors wish to acknowledge the generous support of NSERC and the Canada Council for the Arts, which have funded the research and artistic development for this project through their New Media Initiative.
 