|Andrew, Hunter, and Ning prepping the system
We decided to use the Focusrite Liquid Saffire 56 for our routing needs. Once we ensured that every speaker had proper power and signal, we took SPL measurements of each cabinet via Smaart and recorded the data in a spreadsheet. Because the array mixed different models of speakers, we needed to standardize the SPL from each cabinet so that the ear would not localize to any one cabinet once signal was sent to all of them.
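The standardization step can be sketched as a simple calculation: pick a common reference level and derive a trim offset for each cabinet. This is a minimal illustration in Python, not the actual spreadsheet; the cabinet names and measured levels are hypothetical, and it standardizes to the quietest cabinet so every trim is an attenuation.

```python
# Hypothetical SPL readings per cabinet (dB SPL, measured via Smaart).
measured_spl = {
    "cab_01": 92.5,
    "cab_02": 90.1,
    "cab_03": 94.0,
}

# Standardize to the quietest cabinet so every trim is a cut, never a boost.
target = min(measured_spl.values())
trim_db = {name: round(target - spl, 1) for name, spl in measured_spl.items()}
# cab_02 needs 0.0 dB of trim; cab_03 needs -3.9 dB to match it.
```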
From this point forward, two tasks needed to happen in order to proceed: generating data points for the attenuation levels and delay times of each speaker based on an imaginary sound source, and building an interface that could interpolate those data points and send sound to each speaker.
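The first task boils down to geometry: each speaker's delay comes from its distance to the virtual source divided by the speed of sound, and its attenuation from the inverse-distance law. This is a minimal sketch of that kind of calculation, assuming a simple point-source model; the positions and the `ref_distance` parameter are hypothetical, not values from our spreadsheet.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 °C

def delay_and_attenuation(source_xy, speaker_xy, ref_distance=1.0):
    """Return (delay in ms, level in dB) for one speaker, given a
    virtual source position, using a point-source (1/r) model."""
    d = math.dist(source_xy, speaker_xy)
    delay_ms = (d / SPEED_OF_SOUND) * 1000.0
    atten_db = 20.0 * math.log10(ref_distance / d)  # inverse-distance law
    return delay_ms, atten_db

# Hypothetical positions in meters: source at (2, 3), speaker at (5, 7).
delay, level = delay_and_attenuation((2.0, 3.0), (5.0, 7.0))
# Distance is 5 m, giving roughly 14.58 ms of delay and -13.98 dB of level.
```

Repeating this per speaker per source fills out exactly the table of delay and attenuation data points described above.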
We decided to use Max/MSP to create a patch that could route the audio and control all of these data points. This approach involved building a 1x20 (in/out) monophonic mixer that let us set initial gain/trim levels for each speaker and provided insertion points for our attenuation and delay values. Visually, the patch began to take a shape reminiscent of a standard mixing console or related interface control software, with movable faders and mutes for each individual channel.
|Attenuation and Delay calculations - 4 locations
Our next step tasked us with adding multiple other sources to place within the room. New calculations were taken and stored in the data sheet, and a new version of the patch was programmed. As I learned to navigate this particular system and understood more of Max's programming, I continued to find oversights and weaknesses that were fixed in updated versions. Though the patch worked completely fine, developing methods for increasing time efficiency and finding a way to create recallable presets became priorities.
In the next round of testing, we upgraded the programming to take four channels of audio and apply the calculations for four distinct points within the room. In Max, despite the seemingly chaotic visuals on screen, it was a simple duplication of the original 1x20 mixer, each copy taking a different channel of audio and routing back out to the dac~ object. Playing the four channels, the imaging was once again very successful: sounds were placed in the room, and the sense of distance and location came through clearly.
|WFS - Max Patch (4 source mixer)
Getting back to Suzanne Vega singing above our heads: in doing this process, we uncovered some of the holes in our initial methodology for taking calculations. The rig in our lab sits about 10’ above the ground, so any speakers hung from it end up anywhere from 8-10’ above our heads. Our calculations were taken from a 2D ground plan drawing, which gave us only X- and Y-axis information. Because the speakers are in the air above our heads, we would also have needed Z-axis information to compute the exact distances. As it stood, our measurements assumed the listener was in the same plane as the speakers, hence the floating Suzanne Vega. That said, when we stood on a ladder, more level with the speaker plane, the imaging became even stronger and more realistic.
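The size of that error is easy to see by comparing the ground-plan distance with the true distance. A small sketch, with entirely hypothetical coordinates in feet:

```python
import math

# Hypothetical coordinates in feet. The ground plan gave only X/Y;
# the speakers actually hang roughly 9 ft up, so the true distance
# from listener to speaker needs the Z term as well.
listener = (10.0, 10.0, 5.5)   # ear height around 5.5 ft
speaker = (14.0, 13.0, 9.0)    # hung on the rig

d_2d = math.dist(listener[:2], speaker[:2])  # ground-plan distance
d_3d = math.dist(listener, speaker)          # true distance

# d_2d is 5.0 ft, but d_3d is about 6.1 ft: every delay and level
# derived from the 2D plan alone comes out slightly short.
```

This is exactly why the images read correctly only when the listener climbs into the speakers' plane, where the Z difference collapses to zero.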
Things we took away from this project: Wave Field Synthesis is pretty amazing and very powerful. Creating the sense that someone could be singing or playing music from the room next door, and giving that same experience to anyone within that Wave Field, is enthralling. The greatest challenge is resources. To get an even better sense of realism, we would need an even larger number of speakers in multiple planes: ideally one at head height, one in the air, and one overhead. Doing something like that is just not practical. If we had more time to play with the system, we would recalculate using a 3D model of the space to get the most accurate information (which is also what multiple planes of speakers would require).
This project also raises interesting questions about how this could be incorporated into theatre or other live performance productions. We would need to build the patch so that it allows easy creation, saving, and recall of multiple presets. In a theatrical environment, the biggest programming challenge is being able to make quick changes immediately in the rehearsal room and, more importantly, during the tech process. Developing an interface that functions like QLab could make realistic spatialization much easier.
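One plausible route to recallable presets is a plain JSON cue file, since Max's dict object can read JSON directly. This is only a sketch of what such a format might look like; the cue names, speaker names, and values are all hypothetical.

```python
import json

# Hypothetical preset format: one entry per cue, each storing a source
# position plus per-speaker delay/level values for the patch to recall.
presets = {
    "cue_1": {
        "source_xy": [2.0, 3.0],
        "speakers": {
            "out_1": {"delay_ms": 14.6, "level_db": -14.0},
            "out_2": {"delay_ms": 11.2, "level_db": -11.7},
        },
    }
}

# Round-trip through JSON; in practice this would be written to a file
# that the Max patch loads at showtime.
serialized = json.dumps(presets, indent=2)
recalled = json.loads(serialized)
```

A format like this would let an operator fire cues by name rather than re-entering twenty delay and attenuation values by hand.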
Overall, while not perfect, and though we had no prior experience with a project like this, our experiments were an incredibly successful endeavor that yielded a lot of valuable knowledge we hope to apply again in the future, both in theatrical mediums and in other immersive sound systems.