Friday, November 3, 2017

Wave Field Synthesis

In our Trends in Modern Sound Design class, we often begin the quarter by gathering a list of topics we're interested in learning about, then study them over the course of the term. One of those topics was how Dolby Atmos and realistic sound imaging work, so we were tasked with conceptualizing and building a system that could test the capabilities of Wave Field Synthesis. Wave Field Synthesis deals with the production of artificial wavefronts synthesized by a large number of loudspeakers. The idea is to give each wavefront a virtual starting point, recreating the source at a position in space (whether inside or outside of the room itself) so that listeners localize to that point regardless of where they stand in the room.

Andrew, Hunter, and Ning prepping the system


In determining how to properly navigate this project, we began by designing a robust surround system that could allow us to accurately create a wave field. As a team, we ended up with a 20-channel surround system in our classroom/lab space, comprising a mix of Meyer cabinets: UPJ, UPJunior, MM-4, and UP-4s. With such a variety of cabinets, one of the most important factors was not only placing each speaker in the correct position on the rigging supports, but also ensuring that every cabinet was aligned at the driver level.




We decided to use the Focusrite Liquid Saffire 56 for our routing needs. Once we ensured that every speaker had proper power and signal, we took SPL measurements of each cabinet via Smaart and recorded that data in a spreadsheet. Because we were working with several different speaker models, we needed to standardize the SPL from each cabinet to prevent listeners from localizing to any one particular cabinet once signal was sent to all of them.
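
To make that standardization concrete, here is a minimal Python sketch (not part of our actual workflow, which lived in a spreadsheet) of how per-cabinet trims can be derived from measured SPL. The cabinet names and levels below are hypothetical:

```python
# Hypothetical SPL measurements (dB) per cabinet, as read from Smaart.
measured_spl = {
    "UPJ-1": 92.4,
    "UPJunior-1": 90.1,
    "MM4-1": 86.7,
    "UP4-1": 88.9,
}

# Trim every cabinet down to the quietest one so no speaker sticks out.
target_spl = min(measured_spl.values())
trims_db = {name: target_spl - spl for name, spl in measured_spl.items()}

for name, trim in sorted(trims_db.items()):
    print(f"{name}: {trim:+.1f} dB trim")
```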




From this point forward, two tasks needed to happen in order to proceed: generating data points for both the attenuation levels and the delay times of each speaker, based on an imaginary sound source, and building an interface that could interpolate those data points and send sound to each speaker.
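
As a rough illustration of the first task, here is a hedged Python sketch of the kind of calculation we stored in the spreadsheet. It assumes a simple point-source model: delay is distance divided by the speed of sound, and level falls off at 6 dB per doubling of distance. The coordinates are made up:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def delay_and_attenuation(speaker_xy, source_xy):
    """Return (delay_ms, attenuation_db) for one speaker relative to a
    virtual source, using a simple point-source (1/r) model."""
    distance = math.hypot(speaker_xy[0] - source_xy[0],
                          speaker_xy[1] - source_xy[1])  # metres
    delay_ms = distance / SPEED_OF_SOUND * 1000.0
    attenuation_db = -20.0 * math.log10(max(distance, 0.1))  # clamp near field
    return delay_ms, attenuation_db

# Hypothetical 2D positions (metres): four of the twenty speakers,
# with a virtual source placed "behind" the speaker line.
speakers = [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0), (4.5, 0.0)]
virtual_source = (2.0, -3.0)

for i, spk in enumerate(speakers):
    d_ms, a_db = delay_and_attenuation(spk, virtual_source)
    print(f"speaker {i + 1}: delay {d_ms:6.2f} ms, level {a_db:+.1f} dB")
```

In a real calibration the delays would typically be offset so the earliest speaker sits at zero; it is the relative differences between speakers that shape the wavefront.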

We decided to use Max/MSP to create a patch that could route the audio and manage all of these data points. Our approach involved building a 1x20 (in/out) monophonic mixer that let us set initial gain/trim levels for each speaker and then provided entry points for our attenuation and delay values. Visually, the patch took a shape reminiscent of a standard mixing console or related interface-control software, with movable faders and mutes for each individual channel.
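
Conceptually, each channel strip in the patch does the same simple thing: delay the mono input by its computed time and scale it by its computed gain. Here is a minimal NumPy analogy of that signal flow (the actual patch was built in Max/MSP, so this is an illustration, not our implementation):

```python
import numpy as np

def render_channels(mono, sample_rate, delays_ms, gains_db):
    """Spread one mono signal across N outputs, each with its own delay
    and gain -- the same job the 1x20 mixer does in real time."""
    max_offset = int(round(max(delays_ms) / 1000.0 * sample_rate))
    out = np.zeros((len(delays_ms), len(mono) + max_offset))
    for ch, (d_ms, g_db) in enumerate(zip(delays_ms, gains_db)):
        offset = int(round(d_ms / 1000.0 * sample_rate))
        gain = 10.0 ** (g_db / 20.0)  # dB to linear amplitude
        out[ch, offset:offset + len(mono)] = mono * gain
    return out  # shape: (n_channels, n_samples)
```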

Attenuation and Delay calculations - 4 locations 
In our first run, we began with a single file, Suzanne Vega's vocal-only recording of "Tom's Diner," and listened to how it behaved in our system. It was a very interesting experience: we could hear and localize the sound but were unable to point to any one specific speaker as its source. As we walked around the room, the placed sound retained its spot, as if Suzanne Vega were singing in the room… floating above our heads (more on that in a few paragraphs).

Our next step was to add multiple other sources to place within the room. New calculations were taken and stored in the data sheet, and a new version of the patch was programmed. As I learned how to navigate this particular system and understood more of Max's programming, I kept finding oversights and weaknesses that were fixed in updated versions. Though the patch worked fine, improving time efficiency and finding a way to create recallable presets became priorities.

In the next round of testing, we upgraded the programming to take four channels of audio and apply the calculations for four distinct points within the room. In Max, despite the seemingly chaotic visuals on screen, it was a simple duplication of the original 1x20 mixer, each copy taking a different channel of the audio and then routing back out to the dac~ object. Playing the four channels, the imaging was once again very convincing: sounds were placed in the room, and the sense of distance and location came through clearly.


WFS - Max Patch (4 source mixer)

Getting back to Suzanne Vega singing above our heads… in doing this process, we uncovered some holes in our initial methodology for obtaining the calculations. The rig in our lab is built about 10' above the ground, so any speakers hung from it sit anywhere from 8-10' above our heads. Our calculations were taken from a 2D ground plan drawing, which gave us information for the X and Y axes only. But because our speakers are in the air above our heads, we would also have needed to factor in Z-axis information to achieve exact measurements. As it stood, our measurements were effectively taken in the same plane as the speakers, hence the floating Suzanne Vega. That said, standing on a ladder to get more level with the speaker plane makes the imaging even stronger and more realistic.
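
The size of that error is easy to see with a quick sketch. With the speakers roughly 3 m (about 10') overhead, dropping the Z axis shortens every computed path and therefore every delay; the numbers below are hypothetical:

```python
import math

speaker = (4.0, 2.0, 3.0)  # x, y, z in metres; z = height above head level
source = (1.0, 5.0, 0.0)   # virtual source placed at head height

# 2D distance (what our ground plan gave us) vs. true 3D distance.
d2 = math.hypot(speaker[0] - source[0], speaker[1] - source[1])
d3 = math.dist(speaker, source)

print(f"2D distance: {d2:.2f} m -> delay {d2 / 343 * 1000:.2f} ms")
print(f"3D distance: {d3:.2f} m -> delay {d3 / 343 * 1000:.2f} ms")
```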


Things we took away from this project: Wave Field Synthesis is pretty amazing and very powerful. Creating the sense that someone could be singing or playing music in the next room, and giving that same experience to anyone within the wave field, is enthralling. The greatest challenge is resources. To get an even better sense of realism, we would need an even larger number of speakers in multiple planes: ideally one at head height, one in the air, and one overhead. Doing something like that is just not practical for us. If we had more time to play with the system, we would recalculate using a 3D model of the space to achieve the most accurate information (which is also what multiple planes of speakers would require).

This project also raises interesting ideas about how this technique could be incorporated into theatre and other live performance. We would need to build the patch in a way that allows for easy creation, saving, and recall of multiple presets. In a theatrical environment, the biggest programming challenge is being able to make quick changes immediately in the rehearsal room and, more importantly, during the tech process. Developing an interface that functions like QLab could make realistic spatialization much more accessible.

Overall, while not perfect, and despite our having no prior experience with a project like this, our experiments were an incredibly successful endeavor that yielded a lot of valuable knowledge we hope to apply again in the future, both in theatrical settings and in other immersive sound systems.

- J

Saturday, June 17, 2017

A furious end to our year.

UCI runs on the quarter system (as opposed to the semester system), which means that we're just now, in mid-June, ending our week of final exams.  Most of the rest of the country has been out of classes for almost a month.  Not us.

Last weekend, we ended our school year with a very busy weekend full of activities.  On Friday night, we held the thesis critique for graduating student Andrea Allmond. For thesis productions, Mike and I always invite an outside sound designer to watch the production and provide feedback. For Andrea's work on Avenue Q, we were delighted to host Broadway sound designer (and legend) Tony Meola. Tony flew in on Friday afternoon, had dinner with Andrea, Mike, and me, saw the show, and then joined the rest of the program for a critique in the Meyer Sound Design Studio.  The critique was robust and valuable, and I did manage to snap a very serious-looking photograph of the goings-on.



On Saturday morning, before we shuttled Tony back to LAX for his flight home, the program gathered at my place for breakfast.  It was cloudy and slightly misty at first, but soon the sun came out and we all enjoyed our food and visit.




On Sunday, everyone gathered back at my place for a much sadder occasion.  Mike and I hosted a memorial celebration for our old friend and colleague, BC Keller.  BC passed away in May after a relatively short battle with cancer, and we knew that his friends and colleagues needed an opportunity to grieve and remember him.  We opened the doors to everyone who knew him: family, friends, UCI co-workers, LA co-workers, students, and alumni.  The Claire Trevor School of the Arts Dean's office also helped out by providing some refreshments.

Here's a photo of BC's mom Patty checking out a quilt that one of her friends made from some of BC's t-shirts.






And now, with that flurry of activity behind us, it's time to say good-bye to this school year.  Congratulations to Andrea Allmond and Ben Scheff, our graduating MFA students, and to Jacques Zwielich, our Honors in Design BA! You'll go off and do great things.  But first, take a short break, if you can. You've earned it.

Friday, May 5, 2017

BC Keller, our friend forever

Mike and I are heartbroken to share that BC Keller, UCI’s longtime Sound Supervisor, passed away on the morning of 4 May due to complications from cancer. He was a hell of a guy, a prince among men, and his death has left a big hole in all of our hearts.

I first met BC in 2006. I was living in NYC at the time but contemplating a move to LA. I flew out for a week of meetings and temperature-taking, and during my visit to South Coast Repertory, he gave me a tour of the facilities. Shortly after I started at UCI in 2007, BC left SCR to take the Sound Supervisor position at Dallas Theatre Center.  A few years later, when we had an opening at UCI, Mike and I were able to tempt him back to southern California.  He was with us at UCI for the rest of his life.

BC was super-smart. He knew how to tackle complex concepts and gear, and he was our resident expert on some of the equipment that Mike and I occasionally struggled with.  He taught our LCS & D-Mitri modules, and the students knew that he was the go-to person when they were frustrated with programming.

BC was kind and generous.  He came in early and stayed late. He responded to text messages and phone calls from his students even on his (rare) days off, and he maintained just as active a relationship with our alumni as Mike and I do. We often thought of him as our third faculty member in the sound design program. He didn’t formally teach, but he understood that in his work with the sound department, he was mentoring and teaching students. He took that responsibility seriously.

BC had a love for life that was inspiring. Even when he was at his most exhausted and frustrated, he was full of corny jokes and a mischievous sense of humor.  When I sent an email to the current and former students announcing his death, I asked them to share some of his sense of humor with me.  In response, I’ve gotten lots of screengrabs of text message exchanges.  Everyone had a special relationship with BC. I know I did.

Mike and I are devastated by his death, for reasons both personal and professional.  BC was not only a colleague at work but a friend. We hung out with him, ate with him, drank with him. We visited him in the hospital and at home during his last months as he rested, caught up with family and friends, watched movies, and listened to music. We watched his mother Patty as she cared for him, and we wished, as always, that we could do more, could ease his burden, just as he eased ours.

On behalf of our MFA family: Cory, Joe, Palmer, Tim, Todd, Noelle, Stephen, Patricia, Beth, Jeff, Mark, Matt, Josh, Brian, Kelsi, Matt, Ben, Andrea, Jordan, Ning, and Andrew, and our undergrad family: Sinan, Karli, Elliot, Kate, Cinthia, Ryan, and Jacques, we want to thank BC for spending so much of his short time on Earth with us.


We’ll never forget you.








Wednesday, March 8, 2017

Welcome to our MFA, class of 2020!

UCI Drama is excited to present our two incoming students for next year: Hunter Moody and Jack Bueermann!  


Hunter is a native Arkansan, and he first discovered the fascinating world of sound design during his undergraduate years. After graduation, he set out for eastern Kentucky to join the 2013 summer season of Jenny Wiley Theatre as the A2. When full-time work became available in 2015, he was promoted to resident sound and graphic media designer. He has since designed the sound for, and engineered, over thirty productions between JWT and the budding University of Pikeville Theatre program. Hunter has developed extensive experience in educational theatre, working with the Jenny Wiley Footlights Theatre, a pre-professional TFYA program, and guest lecturing at the University of Pikeville. Outside of theatre, he has designed sound for dance, voice-over, and competition, with experience in audio engineering for concert venues, church events, and live comedy. His research interests include the historical contexts of music, the psychoanalytic effects of sound upon memory and mood, and innovations in audio production technology. In his free time, Hunter is an avid video gamer and loves (trying) to bake. He has a BFA in Theatrical Design/Technology from Arkansas State University.



Jack recently graduated from the University of Redlands, where he designed multiple major productions. Since graduating, he has worked professionally as an assistant sound designer in the Los Angeles area. Most recently, he has been working as the A1 at the San Diego Repertory Theatre, where he mixed the world premiere of Hershey Felder's Our Great Tchaikovsky. Jack is passionate about theater and storytelling through sound, and is excited to collaborate with the talented faculty and students at UCI.

We had a number of excellent applicants this year, and Mike and I are both thrilled that Jack and Hunter will be joining us in the fall.  Be on the lookout for great things from them both!




Friday, January 20, 2017

Field Recording

Last fall, the final meeting of my Trends in Modern Sound Design class was a field recording field trip to 1000 Steps Beach in Orange County.  The beach is about 15 minutes from campus, down a steep stairway (about 232 steps, not that I was counting) from the Pacific Coast Highway.  Once you're on the beach, the road is inaudible, and at 9am there's little human activity.  So, on the Monday of Finals Week, we hauled a bunch of gear down to the beach to make some field recordings.



Earlier in the quarter, we spent some time discussing how to strategize the recording session for optimal content collection and editing speed.  We first talked about what kinds of recordings we'd like to make and settled on these formats:

* close-miked sound effects (water, animals, etc.)
* binaural ambience
* spaced-omni pair ambience
* coincident pair (XY) ambience
* omni ambience
* 5.1 ambience
* M/S ambience
* spaced-cardioid pair ambience

When we looked at the list of formats, it became apparent that we needed three teams of students, so I divided the class accordingly.


  • Ben, Ning, and Mingna would together handle a system that included a SoundField ST-350 microphone and a pair of spaced AKG 414 microphones. The ST-350 sends four channels (ambisonic WXYZ), which can be decoded into omni, coincident-pair, 5.1, and M/S formats (see the decoding sketch after this list). The AKG 414 microphones have an adjustable polar pattern, so the team could choose a spaced omni or spaced cardioid pair. Those six channels (WXYZ & two 414s) fed into a Zoom F8 recorder.  They would set up on the beach and record ambience.

  • Andrea and Andrew would together handle the shotgun mic for close-miked sounds.  They mounted a Sennheiser MKH 416 on a boom pole, recording onto a Zoom H6. Andrea handled the boom.  Andrew handled the recorder and monitored on headphones.  They were tethered by cable.

  • Jordan handled the binaural recording by himself.  He wore a custom-made binaural ball-cap, with the omni elements sewn into the headband at the ear positions.  Those mics fed a Zoom H4.  
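
For a sense of what "decoding" WXYZ means in practice, here is a hedged Python sketch of first-order ambisonic decoding to a virtual microphone. The pattern math is the standard textbook form; the channel conventions of the actual ST-350 software may differ:

```python
import math

SQRT2 = math.sqrt(2.0)

def virtual_mic(w, x, y, azimuth_deg, pattern):
    """Decode horizontal B-format (W, X, Y sample lists) to one virtual mic.
    pattern: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-8."""
    az = math.radians(azimuth_deg)
    return [pattern * SQRT2 * wi
            + (1.0 - pattern) * (xi * math.cos(az) + yi * math.sin(az))
            for wi, xi, yi in zip(w, x, y)]

# A coincident (XY) cardioid pair is just two virtual mics at +/-45 degrees:
#   left  = virtual_mic(W, X, Y, +45, 0.5)
#   right = virtual_mic(W, X, Y, -45, 0.5)
# and a 5.1 bed is five virtual mics aimed at the standard speaker angles.
```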


We met on campus at 8:30 and caravanned to the beach.  Once there, we found a rocky plateau sufficiently far up the beach (the tide was slowly coming in) to make an impromptu camp.  Each team was responsible for collecting all of the gear (including stands, cables, batteries, etc.) they'd need, so the teams started setting up right away.  Teams shotgun and binaural were first out of the gate, heading south to a rocky water cave with lots of great water-against-rock sounds.  Team multi-channel took longer to set up.


The recording session itself was lots of fun.  Team shotgun enjoyed scrambling over rocks to get some great sounds of water lapping against them, but their attempts at wildlife were less than successful.







Team multi-channel spent some time adjusting their microphone spacing and distance from the water, but got some excellent recordings of the surf.



Team binaural (Jordan) had a much-needed opportunity for some quiet communion with nature after a particularly busy quarter (and an emotionally taxing production).



As the session wound to a close, the teams started packing up, and Ning found herself in the wrong place at the wrong time with respect to the tide.  


Ultimately, we collected some great recordings!  After the session, the students went back into the studio to edit and render them.  Teams binaural and shotgun only had to render two-track and one-track recordings, respectively, but team multi-channel needed some specialized software in the Meyer Sound Design Studio to render their recordings into all of the requisite formats.  I created a Google Doc that automatically turned the recording information the teams entered into the appropriate data formats for uploading, both to our private sound effects server and to FreeSound.org, where we've put our recordings for public access.  We're still working on mastering and uploading all the audio, but once it's done, I'll post a link on this blog!
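
For a rough idea of what that document automation did (the actual columns and formats are simplified here, so treat everything below as a placeholder), a script version might look like this:

```python
import csv

# Hypothetical log entries as the teams might have typed them.
recordings = [
    {"file": "surf_spaced_omni_01.wav", "team": "multi-channel",
     "format": "spaced-omni pair", "tags": "ocean surf waves beach"},
    {"file": "water_cave_lap_03.wav", "team": "shotgun",
     "format": "mono", "tags": "water rock cave lapping"},
]

# Write an upload manifest (columns are illustrative, not FreeSound's spec).
with open("upload_manifest.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["file", "team", "format", "tags"])
    writer.writeheader()
    writer.writerows(recordings)
```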


Wednesday, January 11, 2017

Disneyland Field Trip

The 2nd of December marked the last day of the quarter for us, but it was also the long-awaited day of our Disneyland visit - what a way to end the quarter, in the place where dreams come true!

Jerry Tomlinson, hiring manager of Technical Services, was our first point of contact after a 17-mile morning drive to Disneyland Park. Technical Services at Disneyland provides technicians for all live events (stage shows, marching bands, parades, etc.), and Jerry is constantly on the lookout for talented technicians to bring onto the Disney team. His enthusiasm was infectious, and he was extremely detailed about the operations of the park as we headed straight to the Technical Services building, walking past rows of neatly pressed crew uniforms and costumes and finally into their main equipment shop.

It was fascinating to see a central shop area that supports all the equipment for the entire resort. The efficiency and level of precision in equipment preparation was definitely an eye-opener. Each request for equipment is submitted to the shop, then checked and prepared by the full-time technicians before it is sent out to the designated location. This speeds up the pull process but, most importantly, keeps inventory in check. I can definitely see how having a dedicated shop like this would have solved issues I have encountered in professional settings elsewhere.

As we walked past a few rehearsal stages, we came across one for the Disney Performing Arts (DPA) programme. The DPA programme is a series of workshops in which youths learn vocal performance, instrumental performance, and dance, eventually performing in front of Disney Park guests at the end of the series. A team of experienced music recording engineers works with each group to mix and record the ensemble, and each group is presented with a USB drive containing their recording upon completion of the clinic.

No Disneyland tour would be complete without the next character. The theatrical stage showing Mickey and the Magical Map is a heavily automated production, with features such as Mickey's trap door and moving set pieces. Jerry shared an interesting observation: a high proportion of automation technicians started out as sound technicians. We chatted about this and attributed it to automation programming being extremely detailed and layered - possibly similar to programming sound in this digital age.
Photos were off-limits in the backstage area, but we were finally allowed to take pictures upon entering the public area! (yay!) We watched the second half of "Beauty and the Beast" at the Royal Theatre, accompanied by a pianist dressed in a simple yet elegant period costume. Kudos again to the Disney experience! We also observed the stage manager and sound technician calling and operating the show from a little corner behind the audience. The stage manager had a fairytale-like music stand and an iPad controller housed inside a treasure box. After the show ended, everything disappeared into the treasure box (just close the lid!), leaving only the beautiful music stand, clean and unobtrusive.

We then followed Mary Poppins and the marching band to the area in front of the castle and watched their performance. Jerry explained that each show has its own "zone" of speakers; this performance of Mary Poppins was isolated to the area in front of the castle. The system was designed to "draw people in" toward the show without spilling too much into the surrounding areas. Jerry also explained that only the trumpeter and narrator were miked; everything else was acoustic. This was when I asked where the radio-mic antennas, the sound console, and the sound operator were. To my astonishment, he explained that antennas are well hidden inside street signs, in trees, and in other structures all around the park, managed by a central system in Audio Central. This is done so that the RF signal from the wireless microphones is never isolated to one pair of antennas, providing redundancy and flexibility; there is always the freedom to stage a performance in a different location with just some re-programming at Audio Central (which we'd visit later in the tour). Imagine the amount of manual labor required for the changeover if this had been a conventional setup!

Microphone levels and individual EQs are pre-programmed at Audio Central, and cue changes are triggered by a stage manager or technician who follows the performers. Fine adjustments, if required, are communicated to Audio Central via walkie-talkie so that changes can be made in real time, without the need for a sound console and sound operator on the ground. I felt this definitely added to the "magical" experience at Disneyland.

Then came the most important highlight of the entire day - Audio Central. This is truly the mastermind, the brain, the "air traffic control" of everything audio around Disneyland. The most crucial piece of software we were introduced to was the Disneyland Entertainment Control System (DECS).

We talked about how, during parades, each parade truck has its own assigned show music. Playback comes from an SD card housed within the truck itself, but Audio Central can override it at any time if the truck's system fails. The location of each parade truck is visible on the map in Audio Central, color-coded according to which music it is playing. The speakers surrounding the parade route automatically crossfade from the background music to the parade music once a truck enters their zone, and the speaker icons on the screen change color accordingly.
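
As an outsider's guess at the logic (purely speculative; we never saw the DECS code), the zone behavior might be expressed along these lines:

```python
def zone_mix(truck_positions, zone_center, zone_radius, fade_width):
    """Return (bgm_level, parade_level) for one speaker zone, crossfading
    as a parade truck approaches. Positions are distances along the route.
    Purely illustrative; not Disney's actual system."""
    nearest = min(abs(p - zone_center) for p in truck_positions)
    if nearest <= zone_radius:
        return 0.0, 1.0                       # truck in zone: parade music
    if nearest >= zone_radius + fade_width:
        return 1.0, 0.0                       # far away: background music
    t = (nearest - zone_radius) / fade_width  # 0..1 across the fade region
    return t, 1.0 - t
```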

With some research on the internet, I found that this was a system developed by Don Dorsey exclusively for Disneyland. According to a guest post on http://www.mouseinfo.com/forums/disneyland-resort/87894-disneyland-show-control.html:

“The opening of California Adventure and the birth of the Eureka! Parade gave Dorsey the chance to implement a newly enhanced version of DECS. This technology represents a major advance in parade show control. Back in 1980, if Dorsey wanted to change something, he had to go into each of the 20 zones and change every individual cue line manually, a task that took hours. In 2001, DECS uses an “interpretation” feature, which allows a change to be written into one zone and immediately “expanded” to all 33 zones. “You can now rewrite the entire script in the amount of time it takes the parade to turn around,” Dorsey notes. “We finally have a system with the capability to control complicated parades and respond quickly to change requests from the creative team.”

It was fascinating to discover how much detail went into the show control system: every single element is fully controllable, yet can be overridden on the fly to adapt to changes on the ground, transparently to the guests. This is critical in a high-accountability venue, where the failure of a single performance could trigger a multitude of complaints or a loss of revenue and of faith in Disney.

After lunch we took a nice walk over to Disney California Adventure Park. Here we visited Cars Land, where Jerry talked about Walt Disney's attention to detail: every landmark and structure was built to look exactly as it does in the animations.

We then took a breathtaking walk along the boardwalk, visited the control room of World of Color, and eventually caught the parade of the new Disney princess, Elena of Avalor. And then, of course, Jordan had to give in to the lure of the churro kiosk along the way!

We finally got to the other main highlight of the day - Frozen the Musical. We walked through the backstage of the Hyperion Theatre, past the crew and actors getting ready for the show. Jerry explained that it is a video-heavy show: two huge LED panels join at upstage centre to form the backdrop for most of the show, while the rest of the proscenium is covered with projection. The LED wall opens down the middle at certain moments like a door, for entrances and exits from upstage centre. Three larger-than-life doors fly in at different points in the show, one of which is used in the famous Anna and Elsa song "Do You Want to Build a Snowman?". There was a giant, intricate, ice-shaped chandelier above the audience, cleverly masked preshow by bright auditorium lighting. Ice spikes for Elsa's ice castle scene were wonderfully hidden in the gaps of the stage floor; the audience would never have guessed until the pleasant surprise during the scene.

The most impressive elements of the production were the LED wall and the video projection. These set the scene for most of the show and transformed freely between scenes with realistic animation. The sound design by Broken Chord Collective definitely helped make the show. Together with the projection design, it was a fascinating journey into the world of Frozen and definitely a magical experience for the many children and families watching.

We had a wonderful time seeing how dreams come true, and it definitely made some of our own dreams come true too.