Monday, March 8, 2021

UCI Sound MFA, class of 2024

Every year when we interview potential MFA students for our Sound Design program, Mike and I always lament how hard it is to choose just two students from a qualified pool.  There are generally three or four students that are strong candidates, and it can be an excruciating decision to select which students to invite to join us for three years.  Part of selecting two students means necessarily saying 'no' to many more, including the few top-ranked alternates.  

This year was much harder, due mostly to COVID-19.  Because our industry is basically at a standstill, many folks are deciding that now is a good time to get their degree.  The number of qualified, interested students shot up 500% this year.  Instead of a pool of three or four top candidates, we had nine.  And, while in a normal year we'd invite those three or four applicants to campus to sit in on classes, meet with faculty/staff/students, tour the facilities, and generally get to know us, this year UCI's campus is virtually shut down to outsiders.  So we pivoted, scheduling Zoom sessions so that all nine top applicants could sit in on classes, meet with faculty, staff, and students, get a portfolio review, etc.  It was a bit of a logistical nightmare and something I hope we don't have to repeat next year, but it gave us a chance to meet so many students that we'd never have gotten the chance to meet before. 

It also made our decisions harder, because while the number of excellent applicants increased, the number of people we could accept didn't.  We still had to choose two.  So while we're thrilled with the two women who are joining us this fall, we also want to publicly say that we wish we could have taken more of you!  Those applicants who are reading this - we wish you all the best and look forward to seeing your careers grow!

This year, Mike and I are thrilled to welcome to the UCI Sound Design MFA program two extraordinary designers.  I've asked them to write their own bios.  Here they are:

Friday, January 29, 2021


In order to have a fully functional Sound Shop, the concept of the SWIFT (Sound Workshop for Integrated Fine-Arts Technology) needed to be reimagined.  Since the SWIFT has now been renamed the Keller (in honor of BC Keller), this was the perfect opportunity.

When I came on board to attempt to fill BC’s shoes (let’s face it, anyone who knew him knows this is impossible to do), the first thing I noticed was that the sound shop was in chaos and needed some direction; it had been almost two years since a full-time sound supervisor had been part of the CTSA.  As evidenced by the photos below, there was some infrastructure, with equipment housed haphazardly in and around it.  

A change was needed, and a drastic one at that.  The Sound Shop needed to feel renewed, with a sense of newfound purpose.  Over the course of the next 15 months, I devised a plan to slowly implement changes and improvements to how the Sound Shop would function.  The first step was to determine a new layout that would allow the most flexibility and storage capability.  I kept BC’s idea of keeping the center clear for a prep area, and rearranged and purchased new equipment infrastructure to improve the layout.  I removed the old tool chest and donated it to the xMPL booth, removed the old test rack, and got rid of any storage containers that didn’t match the new aesthetic.  I decided to keep the long shelving unit, wireless cabinet, rigging cabinet, workbench, storage bin rack, and filing cabinet.  I then purchased a second matching storage bin rack, replaced the mic cabinet with a different cabinet, purchased a new shelving unit, added a new cabinet, and purchased a tool chest. 

This new layout gave the shop a fresh look and allowed for more storage.  I was able to add the new storage equipment to the Sound Shop and actually gain 12 square feet of floor space.  Once the new layout was implemented, the SWIFT, or Sound Shop, was no more, and The Keller was born.

When you walk into the Keller now, there is a new standard for how the Sound Design program operates for each production.  Since the implementation, shows are pulled faster and more efficiently, and the equipment has a place to live before it gets loaded into a venue.

Since we live in COVID times and only authorized people may be on campus, I will now give a photographic tour of the Keller.


When you open the door to enter, there are three cabinets on the left, rigging, microphones and wireless.  Continuing on is the workbench, with the long storage shelves along the back wall now.  On the right is the electronics cabinet and two units of storage bin racks followed by the tool chest.  Immediately on the right is the computer station.


The rigging cabinet is now fully organized and labeled.  Speaker manufacturer bolts and smaller hardware are separated out into bins or drawers.  It’s now easy to grab what is needed for a show pull and have it.  This is also where bolts, screws, and washers are stored by size.


The microphone cabinet is the latest addition to the Keller.  Each microphone model is grouped together in its own bin, so it is convenient to see what is in stock and how many of each are available for a project.  There are currently empty bins, which leaves room for more mics in the future.  Various mic hardware is also stored in the door, making it more convenient to mount mics on things other than stands.


The wireless cabinet contains the mic elements, the wireless mic transmitters, and the in-ear monitor units we use for wireless speakers.  


The workbench is fully functional, equipped with a soldering station, carbon filter, and vises to assist in repairing equipment.  The desktop also includes the test rack, which contains a cable tester, Clear-Com unit, and mixer.  Above the workbench are Akro-Mils drawers for various components, adapters, and parts.  To the right of the workbench is the repair shelving unit.  The top shelf is for cable that needs repair; the next shelf is for mics and smaller equipment.  Below that is the shelf for rack gear that needs repair.  The bottom shelf is for Clear-Com units and larger equipment.


Along the back wall are the long storage shelving units that store our computer monitors and Clear-Com units.  Since we now build custom racks for each production, our rack-making equipment is also in this unit: panels, panel connectors, vents, lacer bars, and blank panels.  Above the rack-making section are the tech table supplies and computer keyboards and mice.  The very top is where spools of various cable are stored for building new cables.  There is still plenty of room, and many empty bins, for storing future equipment on this unit.


The electronics cabinet is for storing the smaller networking switches (non-rackmountable), wireless routers, MIDI, Yamaha cards, and video equipment.


The storage bin racks are for storing various equipment for easy access.  This includes XLR turnaround cables, RS232 cables, adapter cables, computer cables, and other miscellaneous cables.  The top is for connectors for making new cables, and to the right are power adapters for equipment.


The tool chest has many drawers to keep tools organized for use in the Keller.  These tools are only for the Keller, as we have a separate tool chest for load-ins and strikes.  There are also empty drawers for future tool storage.


The computer station is equipped with a Mac mini on the left and a PC on the right.  This gives us the flexibility to run any software needed in the shop.  Both computers are connected to the 42” HDTV so the display can be seen across the Keller, which is helpful for pull lists and checking out equipment.  The keyboards and mice are wireless, so it is easy to move them around the Keller.  The PC also has a wireless scanner, which helps with checking out and returning equipment.  

So there you have it.  You have witnessed the death of the SWIFT and the birth of the KELLER!

Tuesday, December 15, 2020

Track and Field 2020

 In my Trends in Modern Sound Design class, we often do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:

1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission gets passed to another student, who then uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds of duration, but other than that, they can do anything they want. Students turn in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student usually gets to work on each thread.
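For the curious, that pass-around matrix can be sketched as a simple cyclic rotation (a hypothetical sketch only — the actual class matrix may differ, and the function name is my own invention): in round r, student s works on thread (s + r) mod N, so over N rounds every student touches every thread exactly once.

```python
def track_and_field_matrix(num_students, num_rounds=None):
    """One row per round; matrix[r][s] is the thread that student s
    works on in round r.  A cyclic rotation guarantees each student
    touches each thread exactly once over num_students rounds."""
    if num_rounds is None:
        num_rounds = num_students
    return [[(s + r) % num_students for s in range(num_students)]
            for r in range(num_rounds)]

# Eleven students, eleven threads, as in this quarter's class
matrix = track_and_field_matrix(11)
```

In round 0, everyone starts their own thread; after that, the threads rotate one seat per round.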

This quarter, I had eleven students, so there are eleven threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy! 

(side note: Thread 7 featured some prominent commercial music, so SoundCloud won't let me present it to you.  Your loss!)

Tuesday, December 1, 2020

Teaching D-Mitri Remotely

Every fall, in our Digital Audio Systems class, I teach our first-year sound designers a two-week intensive overview of the Meyer Sound D-Mitri system.  D-Mitri is a powerful tool for live sound that combines the functions of a digital mixing console, a sound content playback device, a multi-channel sound spatialization tool, a room acoustics enhancement tool, and a show control hub all in one package.  D-Mitri systems are found in large-scale sound installations around the world, from theme parks to Broadway and beyond.  D-Mitri is so ubiquitous, and its capabilities so extensive (as is the learning curve, frankly), that we typically have a number of second- and third-year students join us for the D-Mitri training to refresh their skills.

UCI has a small D-Mitri system, and we use it both as a teaching tool and in production.  When we teach with it, we roll the rack into the Meyer Sound Design Studio and patch eight D-Mitri outputs directly into our eight-channel loudspeaker system so that we can learn and work with it while experiencing its spatialization capabilities in real time.  D-Mitri programming happens through software called CueStation, which functions in a client-server capacity.  Multiple users can be logged into D-Mitri at the same time, each working on a different aspect of the programming.  Our D-Mitri classes typically involve everyone in the studio, sitting at their laptops, all wired into D-Mitri with a nest of ethernet cables.

The Meyer Sound Design Studio, in the before-times.

Of course, we can't do that this year.  We could have delayed the training module until we were able to meet safely, but I don't know when that will be, and I'm honestly tired of delaying things because of the freaking pandemic.  I didn't want to let the perfect be the enemy of the good, to paraphrase Voltaire.

So, in a pandemic, how do you teach a class that requires both client-server access AND the ability to perceive spatialized sound?  In order to solve this, I needed to think through a number of different challenges.  Here they are, how I thought through them, and how I eventually solved them.

Physical Locations

We knew that the D-Mitri rack would need to live in the Meyer Sound Design Studio.  The studio is currently cleared to be occupied by three people, but I was uncomfortable coming to campus for in-person class (I'm teaching all of my classes remotely this term).  Plus, I know how important the refresher is to our more senior students, and I didn't want to cut them out of the experience.  So, each student would be remote, logging in with their computers (with wired connections, preferably).  I came into the studio to teach the classes so that I could take care of any issues that came up while teaching that I couldn't deal with remotely.

Even though I'd be teaching from the studio, I expected that I'd need to be able to remote into the host computer in order to tweak details from home.  Early in the quarter, while testing, I found that if I were on campus, I could screenshare with the host computer (an iMac that we call Chalkboard), but when I returned home, I couldn't screenshare with Chalkboard at all.  After consulting with our IT department, we determined that we needed a more robust screensharing tool.  We installed TeamViewer on Chalkboard so that I could control the host computer, restart failed connections, etc.  TeamViewer mostly worked like a champ, though there were a few times when I couldn't log on to Chalkboard at all. 

Connecting CueStation to D-Mitri

The easiest way to share a CueStation screen with the students was to just share my laptop's desktop via Zoom, but if I did that, they'd just be watching me click things, which is hardly useful when teaching a tool.  The students needed to be able to control CueStation on their own in order to get their (virtual) hands on the (virtual) machine.  I asked Richard Bugg at Meyer about how we might address this issue, and he noted that D-Mitri systems can be controlled from around the globe using a proxy server.  The folks at Meyer use this feature to troubleshoot systems without having to fly halfway around the world, but it was just as useful for my needs.  Richard walked me through the steps to set it up and spent some time doing some testing with me.  The proxy server required Chalkboard to be running CueStation, but as long as it was running CueStation and the proxy server was active, I could have up to eight clients logged in at the same time.  Sometimes it took a while to get all the students onto the proxy server at the same time; the folks at Meyer use the proxy server to do maintenance on machines that are difficult to get to, not to teach D-Mitri to a class, so they don't typically have the user count that we did. 
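For readers unfamiliar with the idea, a proxy of this sort is conceptually just a relay: a machine that both parties can reach accepts connections from clients and splices each one onto a connection to the host. Here's a bare-bones sketch of that idea in Python — a generic TCP relay only, not CueStation's actual mechanism, whose internals I don't know:

```python
import socket
import threading

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes in one direction until the source closes."""
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.close()

def proxy(listen_port: int, host: str, host_port: int) -> None:
    """Accept clients on listen_port and splice each one onto a
    fresh connection to the host machine behind the proxy."""
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", listen_port))
    server.listen()
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection((host, host_port))
        # One relay thread per direction so traffic flows both ways
        threading.Thread(target=relay, args=(client, upstream),
                         daemon=True).start()
        threading.Thread(target=relay, args=(upstream, client),
                         daemon=True).start()
```

The real proxy presumably layers authentication, session management, and reconnection on top of something like this.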


So, we'd figured out where everyone would be, and we'd figured out how everyone could control D-Mitri using a proxy server.  But how could we send spatialized sound to the students so that they could all monitor the D-Mitri environment well?

My first thought was to build a SpaceMap (D-Mitri's spatialization tool) replica of the Meyer Sound Design Studio's loudspeaker configuration, take the eight outputs of D-Mitri into a DAW, place them into a 5.1 session, stream the six-channel output over the net, and then have students monitor with 5.1 headphones.  But, we ran into a number of challenges with this idea.  First, I couldn't find a reliable six-channel sample-accurate streaming tool.  We've been using AudioMovers, which does a great job with two-channel signals, but in testing, multiple two-channel instances did not sync to each other (there are rumors of a >2-channel upgrade, but I haven't tested it yet).  Also, six channels of audio is three times the bandwidth of two channels, which could impair networks in dorms and homes.  Finally, I was hoping to avoid having to seek out funds to buy enough 5.1 headphones to outfit the class.  So, back to the drawing board.

SpaceMap of the Meyer Sound Design Studio
A spacemap of the MSDS studio.

I solved my problem in the next iteration.  I still took the eight channels from D-Mitri into my DAW, but instead of mixing to 5.1, I mixed to binaural.  I initially used Binauralizer by Noisemakers to render each D-Mitri input relative to the loudspeaker's position in the room, though I shifted to the dearVR Micro plug-in due to better low-frequency management.  The DAW mixed all eight inputs together, and then I used AudioMovers to send a two-channel stream out to the students.  I cut the audio bandwidth by two-thirds and removed the need for 5.1 headphones!  

ProTools Session Set-Up
ProTools handled the binaural routing.

The eight iterations of NoiseMakers, positioned relative to their corresponding spatialization position.
Eight binauralization plug-ins spatialized the sound.
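For anyone curious what a fold-down like this looks like in code, here's a deliberately crude sketch — level and time differences only, nothing like the HRTF filtering the actual plug-ins perform, and the function and its parameters are my own invention:

```python
import numpy as np

def binauralize(channels, azimuths_deg, sr=48000, max_itd=6.6e-4):
    """Crude binaural fold-down of N loudspeaker feeds.

    channels: (N, samples) array of speaker signals.
    azimuths_deg: each speaker's azimuth (0 = front, +90 = hard right).
    Uses only interaural level and time differences -- a toy stand-in
    for HRTF-based plug-ins like Binauralizer or dearVR Micro.
    """
    channels = np.asarray(channels, dtype=float)
    pad = int(sr * max_itd) + 1          # room for the largest delay
    n = channels.shape[1]
    out = np.zeros((2, n + pad))         # rows: left, right
    for sig, az in zip(channels, np.radians(azimuths_deg)):
        # Constant-power interaural level difference from azimuth
        gain_l = np.sqrt((1 - np.sin(az)) / 2)
        gain_r = np.sqrt((1 + np.sin(az)) / 2)
        # Interaural time difference: the far ear hears it late
        itd = int(round(sr * max_itd * abs(np.sin(az))))
        delay_l, delay_r = (itd, 0) if np.sin(az) > 0 else (0, itd)
        out[0, delay_l:delay_l + n] += gain_l * sig
        out[1, delay_r:delay_r + n] += gain_r * sig
    return out
```

A real binaural render also filters each feed through head-related transfer functions, which is exactly what the plug-ins above provide.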

Ultimately, the students were able to listen to high-quality spatialized audio with a relatively low latency.  It wasn't the same as being in the room, but it was pretty close.

Returning to the Studio

We spent four two-hour sessions learning the basics (and some details) of D-Mitri remotely, and on the fifth and final day of the module, the two students and I met in person in the Meyer to review their work.  They had created a spatialized sonic event from their apartments, but they presented their work in person, through the eight-loudspeaker system that we have in the Meyer.  This gave us an additional opportunity to discuss how well the binaural monitoring situation translated into actual meat-space monitoring.  Their work more or less translated well, but we noted that a sound panned to the center of the room revealed itself differently in speakers than in headphones.  Via headphones, all eight binauralized speaker feeds were active, which imaged the sound to the center of the image.  But in the studio, having all eight speakers firing didn't image to the center.  It either imaged EVERYWHERE (if you were sitting in the sweet spot in the room) or to whatever speaker you were closest to (if you weren't).  

Final Thoughts

You won't catch me yearning to do this again if I have the option to teach in person, but overall, I'm pleased with the results.  If I have to do this again, I'd need to address these issues:

  • Input source.  I was using a single channel of audio from ProTools as an input source.  I set ProTools to loop playback but sometimes the session would stop on its own.  Next time, I'd use a different, more reliable input source.  An FM radio would be a nice low-tech tool.
  • Remote Access via proxy server.  It wasn't as solid as I would have liked it to be.  In fact, on the first day of class, no one could connect except me.  
  • AudioMovers wasn't designed to stream audio 24/7 for 3 weeks, and it occasionally failed.  When that happened, I had to log into the computer, restart the transmission, and send the link around again.  I had to do that once a day or so.  Not a deal breaker.  Just a thing to note.

Overall, this was a huge success!  If you're thinking about doing something like this, let's talk!  I'd be happy to share my thoughts and brainstorm other/better solutions!

Saturday, April 18, 2020

Welcome to the Class of 2023!

So, we usually do this earlier in the academic year, but like everyone else on the planet, we got preoccupied with Covid-19 response. Now that our 'new normal' is feeling settled, I'm able to get back to some of the things that weren't quite as pressing before... which leads me to this:

Mike and I are proud to welcome our incoming first-year students to the UCI Sound Design program: the class of 2023:

Costandina Daros is a soon-to-be graduate of Western Michigan University. Throughout her undergrad, she has worked on several productions in both the Theatre and Dance Departments, her favorites including Sound Design for Jesus Christ Superstar and Tribes. In her summers, she has worked at The Utah Festival Opera and Musical Theatre in Logan, Utah and The Stephen Foster Story in Bardstown, Kentucky. In her time at UCI, she is looking forward to exploring new ways of using sound to enhance storytelling.

Aerik Harbert hails from West Virginia by way of Maryland, where he currently works at Towson University. Before his job at Towson, Aerik worked at the Hangar Theatre and Baltimore Center Stage. He is a graduate of U. Maryland Baltimore County.

Tuesday, January 21, 2020

Field Trip: Warner Brothers Studios

A Hollywood landmark!
Once you hear it, there’s no mistaking what you’re supposed to see – the oafish Patsy clomping together two coconut halves, always two steps behind Arthur as popularized in the 1975 cult classic Monty Python and the Holy Grail. But where did that idea come from? Where did this style of sound design emerge?

Sound effects first became incorporated into the “talkies” in the late 1920s with the work of audio pioneer Jack Foley. The field he developed – the art of creating and recording live sound effects (often with unconventional materials) in sync with a finished reel during the post-production process – revolutionized motion pictures in the early years of cinema and continues to permeate the industry today. This type of work is rarely required within the scope of traditional theatre (unless it is a specific choice for the production), so it was a real treat for the MFA sound design students when we had the opportunity to explore the sound stages at Warner Brothers Studios, taking an in-depth look at one of their Foley facilities.

This field trip (coordinated by Professor Vincent Olivieri as part of DRAMA 255: Trends in Modern Sound Design) began on the cool morning of December 6, 2019. After we carpooled up to Burbank, we left our vehicles with the valets and made our way past the multiple ongoing studio tours to the commissary, where we grabbed a quick bite to eat and some hot joe (side note: they have a killer breakfast spread). After a few minutes to take in the enticing aromas of scrambled eggs and bacon, we headed back outside to meet our host Alyson Dee Moore, an accomplished Foley artist with over thirty years of experience in the industry, half of which she has spent with Warner Brothers. She is also a two-time Emmy award-winning Foley artist (a recent win shared with the department for their work on HBO’s Barry) with numerous Golden Reel nominations from Motion Picture Sound Editors.

One "street" of the studio

Alyson first took us around the various sets situated about the lot – a small township built out of scenic skeletons and optical illusions meant to be filmed from specific angles. Each unit is highly configurable – some even had false walls (they were quite convincing, even up close) creating striking visual partitions within the spaces. Everywhere we looked, there were full crews of carpenters, electricians, and stagehands hard at work preparing for the next set. At times, I felt a bit like a pedestrian on a construction site, but no one seemed to mind us: they are likely used to random people walking about. We also spotted some unique staged statements scattered about meant for quick selfies from tourists. We obviously had to take a few for ourselves…

MFA Sound Design Students, 2019-2020

Next, we headed into the museum on the lot, where technical aspects from titles in the Warner Brothers catalog were on display – from the beautiful gowns worn by Lady Gaga in A Star is Born (2018) to a scenic reproduction of Central Perk from Friends (1994-2004) to the forced perspective table used in The Hobbit (2012). Towards the end of our stroll through the museum, we exited into a small enclosed room. Here, the audio from the film Gravity (2013) was played in a stemmed format so that we could listen to the sound effects, recorded dialogue, and soundtrack independently. Afterwards, we listened to all three together to hear how the tracks were crafted to complement each other – the spatialized mixing in ProTools really brought everything to life. We concluded in, where else? The gift shop!

Costume pieces and props from A Star Is Born (2018)

One of the many stages located on the lot
Next, Alyson took us around for a peek at some of the various sound stages while we waited for the working Foley artists to go to lunch (we didn’t want to disturb them while they were working). There were dozens of stages of various sizes (some akin to aircraft hangars), and most of them had full crews inside, loading in the next production or striking the previous one. Each stage has a unique placard located next to the entrance with a comprehensive list of every title that had been worked on in that space. Then, we meandered through the main properties storage facility on-site, which, to me, looked more like the best-stocked antique store that you could ask for. Alyson said that this was a frequent haunt for her, as many of the items required for Foley could normally be considered props. We finally came full circle, ending up back at the commissary, from which we departed for the recently vacated Foley studio.

The Foley studio, which was underground in the post-production facility, comprised a main room where all of the actual Foley would take place, a kitchenette-style area with large tubs for water work, a smaller side room dedicated to storage, and a control room complete with studio-grade recording equipment. From Alyson’s description, three artists would be working in the facility for a given project – two Foley artists and a mix engineer. She also let us in on some of the more… unique sounds that she has had to come up with over her career, like the use of semi-frozen gelatin to capture the likeness of footsteps on an alien planet. The main space was full of odds and ends (all noisemakers), a pit full of sand, gravel, and debris, and some great shotgun microphones. After she fielded our questions for a bit, we had a chance to make some noise of our own. Then, we headed upstairs towards the daylight and contemplated lunch. 

As the day ended, we headed out with Alyson to one of our favorite cafes right as an afternoon downpour swept over the city. We spoke more with her about some of the specifics of her work, but also what she enjoyed doing in her free time, ongoing hobbies, and the ever-present question of work/life balance. Something that I found insightful is that although longer hours are sometimes inevitable, her daily schedule was fairly regular with hours from around 8 am to 5 pm. After the rain let up a bit, we said our goodbyes and made our way back to Irvine.
All in all, it was a fantastic day full of spectacle and even more insight into a boundlessly creative line of work. Throughout the tour, one descriptor kept coming to my mind that perfectly encapsulated the career and underlined its inherent connection to live theatre - resourcefulness.

I’m incredibly grateful that we had a fantastic quarter in our Trends class last fall and that so much of it was able to be spent out in the field or exploring other industries within sound design; this final excursion was the perfect cherry on top.

Biggest of thank yous to Vincent Olivieri, Alyson Dee Moore, and all the wonderful folks at Warner Brothers.

Photos by Garrett Gagnon, Vincent Olivieri, and Meghan Roche.

Field Trip: Backstage Disneyland

When I told friends and family that my class was going to get to go on a backstage tour of Disneyland to learn about their audio & other tech systems, I think they may have thought I was reading them a page from my dream journal. I have had a longstanding love of theme park history and design that started when I was a kid visiting Disneyland--asking Cast Members (the Disney term for "employees") annoying questions, using terrible dial-up internet to scour the ‘net for ride show scripts, reading every book on Imagineering that I could, and eventually keeping lists and notes and spreadsheets about how the park has changed over time and how it might operate. (Okay, maybe it was a little bit of an obsession. It's fine.) 

(Ah, I should quickly interject here that I'll be using a lot of parentheses in the post ahead--I was perhaps a touch too excited in revisiting the trip & wound up with a lot of vaguely-related sidenotes...sorry about that!)

Getting to combine theatre tech with this longstanding fascination is something I was looking forward to for weeks, so I am happy to report that when the day finally came, it somehow managed to surpass my already-high expectations.

Because Disney has some somewhat strict policies re: secrecy of backstage magic, there’s a limit to what I can share, but here’s a breakdown (sorta) of the day:
  • We started out at TDA (Team Disney Anaheim, a giant complex of what seemed to be primarily administrative buildings) and met our host, Jerry, who is an Entertainment Manager for Technical Services and whose history with the company goes back almost 25 years across a variety of roles! (We also discovered that I have photographic proof of having met him while he performed one of these roles in the late 90s, which was WILD, but due to Disney Magic reasons I’m not sure if I can share much more about that here, unfortunately…)
  • We saw where the fireworks get shot off every night behind ToonTown--for the last 15 years or so, Disneyland has been using a somewhat unique technology to set off their fireworks which significantly reduces both the smoke and noise produced by the show. (If you’re at all interested in the history of their fireworks show and how they used to be set off, as well as how Disney pioneered the use of music synchronization with pyrotechnics, I highly recommend giving this podcast episode a listen:
  • We also saw where the parade floats are all kept when not “on-stage” and learned how audio gets distributed to (and from) them. (There were a number of Entertainment and Imagineering employees also gathered in the warehouse to test out a future parade addition, but that one is definitely going to need to stay a secret for now.)
  • The very first steps we took “on-stage” were somewhere most of us hadn’t had a chance to see yet: the brand-new Galaxy’s Edge expansion! It was gorgeous, the complexity of the audio alone was kind of mind-blowing, and it really contributes a lot to the storytelling the land is able to pull off. I got to talk to a Stormtrooper who snuck up on me and accused me of having Rebel sympathies (and who, we learned, uses a crazy sophisticated system to talk to people in a way that allows them to personalize every single interaction while keeping a consistent voice, more Disney magic I probably shouldn’t divulge here) AND try the infamous Blue Milk. (It was $8, VERY sweet, and had a texture that turned out to be deeply polarizing among those who sampled it.) 
Backstage photos at Disney are very against the rules, so this is, unfortunately,
one of the few photos we have from the day. At least we look very cool in it?
  • We visited the venue for Mickey’s Magical Map in the large outdoor Fantasyland Theatre venue (which, fun fact, initially started out as a teen night club in the 1980s, more info on that here: and got to take a peek into the booth of the theatre before watching the show, chatting a bit with the show’s audio technician and stage manager. It was a good show, but because of the noise of the nearby passing train and our lovely VIP seats (which had our backs against a pretty reflective wall) the overall mix of the show was pretty loud. It took us a minute to recover. (I personally loved watching the little kids get totally enthralled in the action onstage, though.)
Waiting for the show to start! (I think? I am just now noticing that Garrett and
Jack are very focused on something up front...)
  • Lunch! We took a break from walking around the park and grabbed a bite at one of the backstage Cast Member eateries.
  • AUDIO CENTRAL. Okay, again, not sure how much I can divulge here, but the bulk of the audio for the park (especially for parades and other outdoor shows, area music, etc.) is controlled from one very fancy room sitting right above one of the guest-accessible levels of a Main Street, USA building. And when I say fancy, I mean that it sort of resembled a TV show art director’s idea of what a theme park command center might look like--shades all drawn, two people behind glass at giant control desks with many monitors flashing different numbers and graphs, a wall of video feeds and light-up maps of both Disneyland and Disney’s California Adventure…et cetera. (I just found an old Disney Parks Blog post about Audio Central that is relevant here, though it is 5 years old and they have since doubled the size of the facility and apparently updated a ton of the gear. Check it out:
  • Backstage at Frozen in the Hyperion Theatre in Disney’s California Adventure! Because it was a “dark day,” we did not get to see any action happening, but it did mean that we got to walk around the entire theatre without worrying about a huge time crunch. We saw the set-pieces and crazy costumes backstage (sidenote: the troll costumes are TERRIFYING without any actors inside of them), learned how some of their very intense automated lighting tracking worked, and walked all around the house of the theatre--clapping and marveling at how remarkably acoustically dead the designers managed to make the space.
  • Finally, we got back to Disneyland and walked through the park to see parts of the Christmas parade that was running at the time. It was pretty neat to see all of the technical elements we had been learning about all day in action, and a nice way to put a bow on the end of the day.
This blog post is now approximately four times longer than I was told it should be, so I’m going to go ahead and wrap things up now. TL;DR? Theme parks are very cool and very technologically complicated! And we got to go learn about them for a day!

I leave you now with a photo of these two outrageously photogenic Disneyland ducks:

It's amazing how advanced Disney's Audio-Animatronic tech
has gotten over the years!

Sunday, January 19, 2020


Lots of Company! Life is Company! Love is Company!

...Sondheim wrote a hell of a show in Company. The signature Sondheim harmonic structures, overlapping vocal lines, and thick orchestration are all on full display here. I have only had the privilege of designing a Sondheim show twice before (Gypsy and Into the Woods), so I was thrilled when I learned I was going to be designing Company as the season opener.

When I was thinking through an outline for my design process, there were a couple specific challenges to overcome and goals to meet.

First of all, the show was in the Irvine Barclay Theatre (IBT), a fantastic 750-seat proscenium theatre and our usual venue for the season-opening musical. However, because this venue is also its own business entity, we don't get to call the shots the way we might in a UCI educational venue. We couldn't configure the equipment with as much flexibility. We had very strict hours during which we could work; being forced to leave the theatre at midnight during tech week is both a wonderful and terrible thing! And lastly, we didn't have free access to the venue until load-in, so a lot more guessing and reliance on CAD drawings was necessary to put the design together beforehand.

Secondly, some of my design goals created challenges around what gear our shop was able to supply. In a venue the size of the IBT, I wanted arrays, rather than point-source boxes, as my main system. And, being a musical theatre production, we used a left-center-right speaker configuration, as is typical of most amplified musicals. Our main line array cabinet of choice at UCI is the Meyer M1D. We have 16 of these boxes, so I opted to use 8 per side for the L and R hangs. We were then fortunate to receive a little extra funding and rent the newer Meyer LINA system as the center hang (10 LINA cabinets).

My overarching goal of this design was to create a great sounding, reliable musical theatre system. Immersive audio and surround sound are hugely popular explorations in modern sound design, but for this show I wanted to instead focus that energy on perfecting the fundamentals. The show also did not call for a huge amount of additional sound content and effect. I added some NYC soundscapes from time to time, as well as a few incidentals where called for, but for the most part, anything more complex started to step on the score's toes.

The Barclay has a few in-house QSC KLA12 boxes per side, but given some of the restrictions mentioned above, we had much more control bringing in our own M1D arrays. Being able to rent the LINA array for the center vocal system meant that I had an amazing level of quality to work with. The groundplan, section, and array drawings show where I ended up with the system.

As you can see, the system also includes front fills, balcony fills, and "side fills," which were two Meyer UPJs hung immediately next to the LINA array but focused toward the far outside sections of the orchestra level closer to the stage. This is the area the LINA didn't quite cover within a 6 dB tolerance. With those UPJs and the front fills (Meyer UPMs), the orchestra level had even coverage everywhere except for about 4-6 seats.
Arrays and front fills just after the truss went up.
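As an aside for anyone curious where a number like that 6 dB tolerance comes from: for a point source, SPL falls off by roughly 20·log10 of the distance ratio, so a seat at twice the distance of another is about 6 dB down. A quick back-of-the-napkin sketch (the distances here are made up for illustration, not measured from the Barclay):

```python
import math

def spl_drop_db(d_near: float, d_far: float) -> float:
    """Inverse-square SPL difference in dB between two listening
    distances from a point source. (A line array in its near field
    falls off more gently, closer to 3 dB per doubling.)"""
    return 20 * math.log10(d_far / d_near)

# Doubling the distance costs about 6 dB for a point source:
print(round(spl_drop_db(5.0, 10.0), 1))  # 6.0
```

That gentler near-field falloff is a big part of why arrays like the LINA cover a deep house so much more evenly than point-source boxes, and why the close-in outer seats still needed the UPJ side fills.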

In the last year, we have also acquired a slew of new equipment to fold into our designs. The main impetus was the old and decrepit nature of our former large-format mixing console. The Avid VENUE from years past was falling apart physically and quickly becoming outdated as an educational tool. We're now using an Allen & Heath dLive system, consisting of the dLive DM48 MixRack with S7000 surface, and several digital snakes (DX32 and DX168) for expansion. This system finally allows enough I/O to handle most modern designs without restriction. The following shot of the tech table shows the control software for the dLive system. It also shows the Allen & Heath IP8, which is an amazingly versatile fader bank that gives me control of the system in a much more intuitive way without being at the console.

The wireless mics used on performers were the Shure UHF-R series that have been so ubiquitous in the last decade (or two...). We were able to use DPA 4061 microphone elements on all of the UR1 body packs, double miking our lead, "Bobby," since he is so rarely offstage. The fantastic A2 crew, trained by assistant sound designer Kyle Causey, was always keeping tabs on the status and performance of each mic, such that we didn't have any major incidents during the run of the show. They were using Wavetool, which is on the screen of the iPad in the previous picture. It's software that combines the RF monitoring of Wireless Workbench with audio monitoring capabilities to let them look and listen for each microphone on a computer, iPad, or iPod Touch anywhere on the deck. This was our first show using Wavetool, and it proved to be an awesome update to the A2 workflow here!

Another purchase we've made in the last two years is Waves SoundGrid. This is an external FX processing server that loops into the console via an expansion card - it allows us to add any Waves plugins to the live effects chain in the console! In this case, I was able to use Waves plugins like their DeEsser, RVerb, TrueVerb, C6, and a few others to augment the limited processing the board can do. At the far end of the FOH table picture is the monitor with control software for SoundGrid. In front of the S7000 is our mixer/A1 JJ Margolis, who was thankfully able to put up with me during this whole process! The picture was candid for JJ but less so for Amin, our QLab operator.

The orchestra was the last major piece of the [sound] puzzle, and in this case, they were located on stage, between the NYC buildings and the actor platform. Having the new digital snakes with the Allen & Heath equipment made it so much easier to integrate the orchestra into the system. Because of the orchestra's location, there was a pretty large distance from them to the audience, so amplifying them and keeping their stage sound level under control was more manageable than I anticipated. The actors had the advantage of being closer to the orchestra at all times (though we still used a full stage monitor system). And, I actually found it much easier to get a consistent orchestra/vocal blend throughout the audience since the orchestra was not immediately in front of the front row! I didn't snap any up-close pictures of the orchestra, but this might give an idea of their relation to the audience:

I was lucky to have mixed the season opening musical the year prior, Legally Blonde, with Jack Bueermann as designer. That show was also in the Barclay, so I was able to get a grasp on how to work best in the space and what might work best when it came around to designing Company. Because of the extra obstacles that working in the Barclay introduces, I tried to be ahead of the game in as many areas as possible. I was able to start thinking about system design options the previous Spring, and got the bulk of the drafting completed before we came back to the Fall quarter. I was met with a constant feeling that I was missing something, or something was bound to go wrong, but in the end the preparation paid off, and the team was able to get every piece of the puzzle together. I was able to enjoy working in the Barclay and felt rewarded by the success of the show.

Assistant: Meghan Roche
Assistant: Kyle Causey
Mix engineer: JJ Margolis

Wednesday, January 15, 2020

Field Recording, Crystal Cove State Beach

Last fall, I took the students in my Trends in Modern Sound Design class (all six MFA designers, and a few music PhD students) to Crystal Cove State Beach to do some field recording.  We divided the class into three groups: one group made b-format and spaced pair recordings, one group made co-incident pair stereo recordings, and one group made close-up mono and other recordings.  We talked about how to plan for a field recording trip, but each group of students had to plan, prep, pack, and carry their own gear.

We started out at the central section of the beach, where there are a few cafes and lots of people. Some groups chose to wander far from the crowds to get some isolated sounds, but others chose to embrace the public and get some 'folks at the seaside' recordings.

After that, we packed up and drove to the north edge of the park, which was virtually deserted except for a few seagulls.  We hoped to get some sounds of sea spray on rocks, but the surf was decidedly down that day.  Oh well.  At least we got a great sunset!

The students are now madly editing the sound effects and entering metadata.  Once that post-production process is complete, the sounds and metadata will be uploaded to our sound effects server so that the UCI Sound Design community can have access to them in perpetuity.  Once all that is done, I'll share some examples here!

Thursday, January 9, 2020

Track and Field 2019

In my Trends in Modern Sound Design class, we often do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:

1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission gets passed to another student, who then uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds in duration, but other than that, they can do anything they want. Students turn in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student gets to work on each thread.

This quarter, I had nine students, so there are nine threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy!
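For anyone curious how a matrix like that gets built: the simplest version is a plain rotation (a Latin square), where in round r, student i works on thread (i + r) mod n. A tiny sketch of the idea, not the literal spreadsheet we use in class:

```python
def track_and_field_schedule(n: int) -> list[list[int]]:
    """schedule[r][i] is the thread student i works on in round r.
    A simple rotation guarantees that over n rounds, every student
    touches every thread exactly once."""
    return [[(i + r) % n for i in range(n)] for r in range(n)]

# With nine students and nine threads, each student's column is a
# permutation of all nine threads across the nine rounds:
schedule = track_and_field_schedule(9)
for i in range(9):
    assert sorted(row[i] for row in schedule) == list(range(9))
```

The same rotation works for any class size, which is handy since the roster changes from year to year.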

Thursday, January 24, 2019

Are you a Sound Designer/Engineer headed to USITT in Louisville in March? This year, Sound will be included in the Portfolio review sessions, so if you want to schedule a time for some pros to give you advice/feedback, here's your chance! Apply here!

Anyone is welcome to present, at any level of your career. Maybe you're prepping a portfolio for grad school, or maybe you're prepping a portfolio for a tenure review? Maybe you're about to graduate and want to give your portfolio a last bit of polish before hitting the job market?

Maybe you're just looking for a little career advice?

Sign up using the link if you want to present. I'll be looking at each presenter and assigning them a pair of respondents based on what the presenter is interested in.

 And, if you are interested in being a volunteer respondent, please email me at

Wednesday, December 12, 2018

Track and Field, 2018

In my Trends in Modern Sound Design class, we sometimes do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:

1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission gets passed to another student, who then uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds in duration, but other than that, they can do anything they want. Students bring in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student gets to work on each thread.

This quarter, I had ten students, so there are ten threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy!