Friday, November 3, 2017

Wave Field Synthesis

In our Trends in Modern Sound Design class, we often begin the quarter by gathering a list of topics we are interested in learning and proceed to study them over the course of the term. One of our topics delved into how Dolby Atmos and realistic sound imaging function, and so we were tasked with conceptualizing and building a system that could test the capabilities of Wave Field Synthesis. Wave Field Synthesis deals with the production of artificial wavefronts synthesized by a large number of loudspeakers. Creating these artificial wavefronts means giving each virtual source a starting point, recreating that source in the space (whether inside or outside of the space itself) so that a listener can localize to that point regardless of their position within the room.

Andrew, Hunter, and Ning prepping the system


In determining how to properly navigate this project, we began by designing a robust surround system that could allow us to accurately create a wave field. As a team we ended up with a 20-channel surround system, designed in our classroom/lab space and comprising a mix of Meyer cabinets: UP-J, UP-Junior, MM4, and UP-4s. With a variety of cabinets in play, one of the most important factors was not only placing each speaker in the correct position on the rigging supports, but also ensuring that every cabinet was aligned at the driver level.




We decided to use the Focusrite Liquid Saffire 56 for our routing needs. Once we ensured that every speaker had proper power and signal, we took SPL measurements of each cabinet via Smaart and recorded that data in a spreadsheet. Because the cabinets were different models, we needed to standardize the SPL from each one so that the ear would not localize to any single cabinet once signal was sent to all of them.
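For anyone curious, the spreadsheet math boils down to something like the sketch below. The speaker names and SPL figures here are placeholders, not our actual Smaart measurements: measure each cabinet at the same drive level, pick a common target, and trim everything down to match.

```python
# Placeholder numbers, not our actual Smaart data. Each cabinet is measured at the
# same pink-noise drive level; the quietest box becomes the target and everything
# else gets trimmed down to it.

measured_spl = {
    "UPJ_1": 96.2,       # dB SPL at the reference position
    "UPJunior_1": 94.8,
    "MM4_1": 90.5,
    "UP4_1": 92.1,
}

target = min(measured_spl.values())

trim_db = {name: round(target - spl, 1) for name, spl in measured_spl.items()}
print(trim_db)  # {'UPJ_1': -5.7, 'UPJunior_1': -4.3, 'MM4_1': 0.0, 'UP4_1': -1.6}
```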




From this point forward, two tasks needed to happen in order to proceed: generating data points for the attenuation levels and delay times of each speaker based on an imaginary sound source, and building an interface that could apply those data points and send sound to each speaker.
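As a rough illustration of those calculations (a simplified point-source model with made-up coordinates, not our exact spreadsheet), each speaker's delay comes from its extra path length to the imaginary source relative to the nearest speaker, and its attenuation from the inverse-distance drop in level:

```python
import math

C = 343.0  # approximate speed of sound in m/s at room temperature

# Made-up 2D positions in metres; ours came from the ground plan drawing.
speakers = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
source = (1.5, -2.0)   # imaginary source placed behind the array

distances = [math.dist(source, spk) for spk in speakers]
nearest = min(distances)

for spk, d in zip(speakers, distances):
    delay_ms = (d - nearest) / C * 1000.0       # extra travel time vs. the nearest speaker
    atten_db = 20.0 * math.log10(nearest / d)   # 1/r level drop relative to the nearest speaker
    print(f"speaker {spk}: delay {delay_ms:.2f} ms, attenuation {atten_db:.1f} dB")
```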

We decided to use Max/MSP to create a patch that could route the audio and control all of these data points. This approach involved building a 1x20 (one input, twenty outputs) monophonic mixer that let us set initial gain/trim levels for each speaker, with additional points for entering our attenuation and delay times. Visually, the patch began to resemble a standard mixing console or related control software, with movable faders and mutes for each individual channel.
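Outside of Max, the signal flow of that mixer reduces to something like the offline Python stand-in below (not the actual patch, and the sample rate is assumed): copy the mono input to every output, then apply each channel's gain and delay.

```python
import numpy as np

SR = 48000  # sample rate assumed for this sketch

def render_channels(mono, gains_db, delays_ms):
    """Turn one mono signal into per-speaker feeds, each with its own gain and delay."""
    out = np.zeros((len(gains_db), len(mono)))
    for ch, (g_db, d_ms) in enumerate(zip(gains_db, delays_ms)):
        gain = 10.0 ** (g_db / 20.0)              # dB trim/attenuation -> linear gain
        shift = int(round(d_ms * 1e-3 * SR))      # delay time -> whole samples
        out[ch, shift:] = gain * mono[:len(mono) - shift]
    return out
```

With 20 entries in gains_db and delays_ms this mirrors the 1x20 behavior; in the patch the same thing happens per channel with level and delay objects feeding the outputs.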

Attenuation and Delay calculations - 4 locations 
In our first run, we began with a single file, Suzanne Vega’s vocal-only recording of “Tom’s Diner,” and listened to how it behaved in our system. It was a very interesting experience, being able to hear and localize the sound yet being unable to point to any one specific speaker as its source. As we walked around the room, the placed sound held its spot, as if Suzanne Vega were singing in the room… floating above our heads (I’ll talk more about that in a few paragraphs).

Our next step was to add multiple other sources to place within the room. New calculations were taken and stored in the data sheet, and a new version of the patch was programmed. As I learned to navigate this particular system and understood more of Max’s programming, I kept finding oversights and weaknesses that were fixed in updated versions. Though the patch worked completely fine, improving its efficiency and finding a way to create recallable presets became the priority.

In the next round of testing, we upgraded the programming to take four channels of audio and apply the calculations for four distinct points within the room. In Max, despite the seemingly chaotic visuals on screen, this was a simple duplication of the original 1x20 mixer, with each copy taking a different channel of the audio and then being routed back out to the dac~ object. Playing the four channels, the imaging was once again very convincing: sounds sat in their places in the room, and the sense of distance and location came through clearly.
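Conceptually, the four-source version just runs that same per-source routing four times and sums the results per speaker. Building on the render_channels sketch above (again, a stand-in rather than the actual patch):

```python
import numpy as np

def mix_sources(sources, gains_db, delays_ms):
    """sources: list of mono arrays; gains_db/delays_ms: one row of per-speaker values per source."""
    feeds = [render_channels(s, g, d) for s, g, d in zip(sources, gains_db, delays_ms)]
    return np.sum(feeds, axis=0)   # (n_speakers, n_samples): summed feeds, one row per output
```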


WFS - Max Patch (4 source mixer)

Getting back to Suzanne Vega singing above our heads… in doing this, we uncovered some of the holes in our initial methodology for taking calculations. The rig in our lab is built about 10’ above the ground, and thus any speakers hung from it sit roughly 8-10’ above our heads. Our calculations were taken from a 2D ground plan drawing, giving us information for the X and Y axes only. But because our speakers are in the air above our heads, we would also have needed to factor in Z-axis information to get truly exact measurements. As it stood, our measurements were effectively taken in the same plane as the speakers, hence the floating Suzanne Vega. That said, standing on a ladder and getting more level with the speaker plane, the imaging becomes even stronger and more realistic.
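To put a rough number on it (made-up coordinates, only loosely matching our rig height), the ground-plan distance is always a bit shorter than the true 3D path once the height difference is included:

```python
import math

speaker = (3.0, 4.0, 3.0)   # x, y from the ground plan, z = roughly the rig height in metres
source = (1.0, 1.0, 1.5)    # where we'd actually want the virtual source: head height

d_plan = math.dist(speaker[:2], source[:2])   # what our 2D calculations effectively used
d_true = math.dist(speaker, source)           # what a 3D model of the space would give

print(round(d_plan, 2), round(d_true, 2))     # 3.61 vs 3.91 m: delays off by just under a millisecond
```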


Things we took away from this project: Wave Field Synthesis is pretty amazing and very powerful. Creating the sense that someone could be singing or playing music from the room next door, and giving that same experience to anyone within the wave field, is enthralling. The greatest challenge is resources. If we wanted an even better sense of realism, we would need a much larger number of speakers in multiple planes: ideally one at head height, one in the air, and one overhead. Doing something like that is just not practical. If we had more time to play with the system, we would recalculate using a 3D model of the space to get the most accurate information (which would also be needed for multiple planes of speakers).

This project also brings up interesting ideas about how this could be incorporated into theatre and other live performance. We would need to build the patch in a way that allows for easy creation, saving, and recall of multiple presets. In a theatrical environment, the biggest programming challenge is being able to make quick changes immediately in the rehearsal room and, more importantly, during the tech process. Developing an interface that functions like QLab could make realistic spatialization much easier to work with.
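One hedged sketch of how cue recall could work outside the patch, keeping each cue's per-channel values in a plain JSON file (inside Max itself, the preset/pattrstorage objects are probably the more natural route); the cue name and values below are hypothetical:

```python
import json

# Hypothetical cue data, truncated to three channels for readability.
presets = {
    "cue_01_vocal_upstage": {
        "gains_db":  [-3.0, -4.5, 0.0],
        "delays_ms": [2.1, 0.0, 5.4],
    },
}

with open("wfs_presets.json", "w") as f:
    json.dump(presets, f, indent=2)

with open("wfs_presets.json") as f:
    cue = json.load(f)["cue_01_vocal_upstage"]   # recalled by name, ready to push to the mixer
```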

Overall, while not perfect, and with no prior experience of this kind of project, our experiments became an incredibly successful endeavor that yielded a lot of valuable knowledge we hope to apply again in the future, both in theatrical settings and in any other immersive sound system.

- J

2 comments:

  1. Have you guys seen this? http://www.juliusmedia.com/stereo-is-obsolete-l-acoustics-l-isa-is-here/

    Do you think it will be as much of a game-changer as the article's author implies?
