Is SRS Labs Multi-Dimensional Audio the Future of Surround?

May 31, 2012

by Gina Collecchia 

Santa Ana, California-based audio technology company SRS Labs has developed a dynamic way to spatialize sound precisely in three dimensions. Surround-panning plug-ins typically let you move tracks around only on a flat plane, assuming that every speaker sits at a uniform height. SRS's solution instead renders audio to custom speaker locations, including layouts that aren't restricted to a one-dimensional line (stereo) or a two-dimensional plane (5.1 and other popular surround formats).

The technology is called Multi-Dimensional Audio (MDA), and it debuted in January 2012. The user is required to input two things: (1) raw tracks containing “sound objects” such as a solo piano or a dog barking, and (2) the three-dimensional positions of the speakers. The output of the system consists of these objects precisely mapped to your unique speaker configuration, unrestricted by speaker quantity or position.

Sounds that qualify as objects include single tracks or musical instruments recorded in isolation. Sounds containing lots of reverb will need the timing of their reflections altered to be perceptually accurate. However, during the demo I watched and heard, noisy sounds like a cheering audience worked surprisingly well when moved around in virtual space.

Every time you set up a new layout of speakers, you'll need to update the system with their new physical locations. A simple solution is a laser ruler (such as the $24.99 Ultrasonic Distance Meter with Laser Pointer from Harbor Freight), which makes it easy to take the measurements you need to pin down each speaker's three-dimensional position.
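The article doesn't spell out MDA's coordinate convention, so purely as an illustration, here is a minimal Python sketch of how a measured distance plus azimuth and elevation angles (taken from the listening position) could be converted into the Cartesian speaker coordinates a system like this needs. The axis convention (x = right, y = front, z = up) is an assumption for the sketch, not SRS's documented format.

```python
import math

def speaker_position(distance_m, azimuth_deg, elevation_deg):
    """Convert a measured distance and azimuth/elevation angles into
    Cartesian coordinates (assumed convention: x = right, y = front,
    z = up; azimuth measured clockwise from straight ahead)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.sin(az)
    y = distance_m * math.cos(el) * math.cos(az)
    z = distance_m * math.sin(el)
    return (x, y, z)

# Example: a height speaker 3 m away, 45 degrees to the right, 30 degrees up
print(speaker_position(3.0, 45.0, 30.0))
```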

Other plug-ins can spatialize audio in three dimensions, but none of them output an open interchange format the way MDA does. Herein lies the powerful potential of VBAP (vector-based amplitude panning), which MDA drives seamlessly: an MDA session can be automatically remapped to any speaker configuration without the user editing the audio. Once you save an MDA session, you can spatialize it precisely in any auditorium or studio, provided you know the locations of the speakers.

VBAP handles the complicated process of mapping any number of tracks to any number of output channels with minimal effort from the user. Traditionally, converting a 5.1 mix to 7.1 was virtually impossible: the mixer had to account for phase cancellation, gain and saturation problems, and channel overloading. VBAP was formulated by Ville Pulkki in his 2001 dissertation "Spatial Sound Generation and Perception by Amplitude Panning Techniques," and no one has taken advantage of its potential quite like SRS has.
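For a taste of the math involved, here is a bare-bones NumPy sketch of the core 3-D VBAP calculation following Pulkki's formulation: the gains for a triplet of loudspeakers are found by solving a small linear system so that the gain-weighted sum of the speaker directions points at the virtual source. This is a minimal illustration, not SRS's implementation.

```python
import numpy as np

def vbap_gains(source_dir, speaker_dirs):
    """Compute 3-D VBAP gains for one loudspeaker triplet.

    source_dir   : 3-vector pointing toward the virtual source
    speaker_dirs : 3x3 matrix whose rows are unit vectors toward the
                   three loudspeakers forming the active triangle
    Returns gains normalized to constant power (sum of squares = 1).
    """
    p = np.asarray(source_dir, dtype=float)
    p /= np.linalg.norm(p)
    L = np.asarray(speaker_dirs, dtype=float)
    # Solve p = g @ L for the gain vector g.
    g = p @ np.linalg.inv(L)
    # Negative gains mean the source lies outside this triplet;
    # a full renderer would choose a different triplet instead.
    g = np.clip(g, 0.0, None)
    return g / np.linalg.norm(g)

# Example triplet: front-left, front-right, and overhead speakers
speakers = np.array([[-0.707, 0.707, 0.0],   # front left
                     [ 0.707, 0.707, 0.0],   # front right
                     [ 0.0,   0.0,   1.0]])  # overhead
print(vbap_gains([0.2, 0.8, 0.4], speakers))
```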

With the MDA iPad interface (shown), users can move sound objects around in a space, manipulate their gain, and even substitute one object for another (somewhat gimmicky, yes, but totally cool). You can even receive a call on Skype and treat that as another object. The app works over WiFi to talk to MDA Creator (the plug-in) and override the previously stored position information, all in real time.
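SRS hasn't published the protocol the iPad app uses; conceptually, though, the controller just streams position overrides to the plug-in. Purely as an illustration, with an invented address and message format, a controller-side update might look something like this:

```python
import json
import socket
import time

# Hypothetical sketch only: the host, port, and message fields below are
# invented for illustration and are not MDA's actual protocol.
PLUGIN_ADDR = ("192.168.1.50", 9000)

def send_object_position(sock, object_id, x, y, z, gain_db=0.0):
    """Send one real-time position/gain override for a sound object."""
    message = {
        "object": object_id,
        "position": [x, y, z],
        "gain_db": gain_db,
        "timestamp": time.time(),
    }
    sock.sendto(json.dumps(message).encode("utf-8"), PLUGIN_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_object_position(sock, "piano", 1.5, 2.0, 0.8)
```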

SRS calls this extension of raw audio with positional information “PCM+.” It pairs the raw audio samples with an XML file containing the overall gain, 3D coordinates as defined by VBAP, and temporal tags indicating when one or more of these values has changed, all treated as metadata. The user sets and manipulates these values in the plug-in, which looks like the familiar box with a movable ball representing the sound object.
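The actual PCM+ schema isn't public, but to make the idea concrete, here is a hypothetical sketch of what such a metadata file could look like, with invented element and attribute names, generated with Python's standard XML library:

```python
import xml.etree.ElementTree as ET

# Illustrative only: element names ("mda_session", "object", "keyframe",
# etc.) are invented; the real PCM+ schema is not described in the article.
root = ET.Element("mda_session")
obj = ET.SubElement(root, "object", name="piano", audio_file="piano.wav")

# Each keyframe records a time at which the position or gain changed.
for t, (x, y, z), gain_db in [(0.0, (0.0, 2.0, 0.0), 0.0),
                              (4.5, (1.5, 2.0, 1.0), -3.0)]:
    kf = ET.SubElement(obj, "keyframe", time=f"{t:.3f}")
    ET.SubElement(kf, "position", x=str(x), y=str(y), z=str(z))
    ET.SubElement(kf, "gain", db=str(gain_db))

print(ET.tostring(root, encoding="unicode"))
```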

Treating sound tracks as objects is a clever way of handling situations that involve huge numbers of tracks and a variable number of output channels. MDA is most impressive aurally when speakers are located at multiple heights, but being able to switch between conventional and unconventional configurations automatically is the best of both worlds. The flexibility of SRS Labs’ MDA system brings 3D sound into a new era.

Gina Collecchia is the author of Numbers & Notes: An Introduction to Musical Signal Processing, published by Perfectly Scientific Press (Portland, OR). She is also an electronic musician, software engineer, and Master's candidate at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). Read her blog at numbersandnotes.com.
