Tuesday, April 19, 2022

BOURN KIND - Tiny Kindness Project to premiere at Tribeca Film Festival

It was hard keeping this news under wraps for the past few months, but I'm excited to announce that BOURN KIND - Tiny Kindness Project will premiere in June at the Tribeca Film Festival!





This is the second project that director Rachel Myers and I have worked on, and the post-production audio was done entirely in the Meyer Sound Design Studios here at UCI.  I did the foley performance, editing, mixing, and mastering, and MFA student Costa Daros worked as the foley mixer.  Here we are hard at work recording and mixing foley.




Congrats to everyone who made this beautiful film happen!  Check it out if you can!

Tuesday, April 12, 2022

UCI Sound Design: class of 2025

Recruiting students into the MFA Sound Design program is always tough for us.  We can only take two students each year, and there are always terrific students out there that we have to decline. This year was particularly tough: we had four times as many exceptional candidates as usual, which made our decision that much harder.

That said, Mike and I, along with the rest of the program, are delighted to welcome our incoming students for this fall: Melanie Lopez and Jeremiah Turner.  Melanie and Jeremiah stood out to us with exceptional combinations of talent, creativity, curiosity, professionalism, and experience.  We're looking forward to working with them for the next three years!

I asked each of them to submit a short bio, so here they are, in their own words:


Thursday, January 20, 2022

Audio Theater - our return to the stage!

I was going to start this blog post by talking about the impact of COVID-19 on UCI Sound, but there’s nothing new under the sun there.  It stinks, and we all know it.  So let’s skip ahead.


In November 2021, UCI Sound was an integral part of the first live in-person mainstage performance at UCI since the pandemic began.  AUDIO THEATER was a single bill of six student-written plays, performed live and recorded for eventual streaming (stay tuned!).  Each performance of the run was recorded, and at the end of the run, the designers and directors chose the strongest performance; the sound design team then prepared those recordings for streaming.


photo by Paul Kennedy



For UCI Sound, this was a complex task.  Six plays, each with their own sonic landscape, each with their own cast.  Six playwrights, each with their own text, some of which were in more flux than others.  Six directors, each with their own vision, who were serving both the audience live in the theater and the eventual streaming audience.  Five designers (four sound, one lighting – no scenic, costumes, or projections), who needed to serve their plays and support each other.  A phalanx of stage managers, who kept rehearsals efficient, safe, and effective. Dozens of actors, plenty of staff and support personnel.   One sound system, with one production assistant and one A1 (our two first-year MFAs).  


For pandemic reasons, we mounted these productions with actors spaced out, at microphones, in static positions.  They could look at each other and communicate over distance, but they couldn’t leave their microphone positions. This gave the event more of a broadcast feel than a theatrical one.  


photo by Paul Kennedy



Each console input and output was recorded for maximum flexibility when mixing/producing after the run.  Once the designers had finished mixing their pieces, they passed their final mixes off to one student who served as the mastering engineer for the whole bill.


Each of the designers has written a bit about their play and their approach to the sound design for it.  So, here they are in their own words (edited slightly):


DO YOU TRUST ME? – JJ Margolis, Sound Designer


Do You Trust Me, written by Summer Savonna Session and directed by Cybelle Kaehler, is a nostalgic flashback story set in the late ’90s. After meeting in middle school, Leon and his best friend Ty develop a tradition of exploring haunted buildings in their town. In high school, they take their girlfriends to a warehouse prepared with alien-themed props, traps, and sound effects to scare Jess, and in that warehouse they encounter… something they do not expect.

The task of sound for this story was to help support the world and ground the vocal performances in a believable, but not wholly realistic, space. Through the combination of recorded audio effects and synthesized sounds, the ‘scenery’ of the play is established. One of the most interesting challenges was the balancing act of building supernatural suspense without confirming one way or another whether anything unnatural is truly present.

As the core of the play, the vocals stood central at all times. I used different reverb effects to bring the characters into the spaces that they flashed back to. In addition, I needed to separate Leon into both a narrator and a character. I accomplished this by isolating the lines that he spoke as narrator, keeping them clean of any aforementioned reverb and adding a tube-modeled compressor to support the warmth of those lines. Finally, I layered very subtle processing onto some of the characters as the story progressed… but that would be spoilers.



LIE IN STATE – Aerik Harbert, Sound Designer


As we round out a year since the January 6th insurrection in the Capitol, Lie In State, written by Meliza Gutierrez and directed by Angela Cruz, serves as a testimonial to the events of that day. Not of reality, but of possibility; it’s impossible to know the exact complexities of what it meant to be a police officer, congressperson, or innocent bystander on location that day, but Lie In State gives us a taste of their fear, hope, and humanity. When confronting the possibility of death firsthand, who knows what may come to mind?


Many of us have seen video taken on location, but there is an inherent safety in a recording because we know that we were not personally the targets of that mob. I needed to keep the feeling of danger up so that our characters never felt truly at ease, while still leaving space for them to speak their piece. There are discrete events supported by sound as protesters break into the building which highlight specific moments of danger, but the true danger in this setting is the human element, so I chose to use a persistent bed of protesters which always lived in the background.


This background detail ebbs and flows dynamically to carve space for important moments of dialogue, and it peaks only at specific moments where an increase in danger was necessary to keep the stakes up as we moved from dramatic beat to dramatic beat. There are some non-diegetic elements, but overall, the sound lived within the world of the play so that the characters and audience were engaged at the same level of energy.


WHEN WILL MI HIJA VISIT ME? – Meghan Roche, Sound Designer


When Will Mi Hija Visit Me?, written by Cindy El & directed by Leah Jackson, focuses on a mother telling her friend the story of her daughter’s disappearance from her life. Maria is our main (and only speaking) character, and we wanted to keep her “real world” and “memory world” as separate as possible, so there would be moments where the audience felt firmly in the world of Maria literally telling this story and moments where we shifted into her psychological space as she recounted it; that separation became my main focus as I designed the piece.


To achieve the “real world” moments, I used room tone and other low sound effects, like distant outdoor ambiance and a ticking clock, to place us in her kitchen. We also had live foley accompanying the very first and very last moments of the piece as Maria makes her coffee and fiddles with her spoon and mug. This took a fair amount of trial and error; we made good use of UCI’s extensive prop shop to find the right materials to make the sounds we wanted, and I auditioned a lot of different options for room tone—the ones I’d been working with initially wound up feeling much more out of place in the physical space than they did when I was making mock-ups and pulling SFX on my computer at home! But eventually we found a good balance, and the room tones in combination with EQ and reverb processing felt quite convincing in the Claire Trevor Theater.


The “memory world” also made use of EQ and reverb, but it was meant to feel significantly less grounded than the kitchen FX did, and all of the SFX I used in the more involved flashback sequences also had a fair amount of that same reverb. There was also subtle tonal underscoring in a number of places to help support the most emotional moments.


The in-person presentation of this piece was a little different from what you will hear in the recording, as it also involved two non-speaking actors onstage doing movement work as Maria’s husband and daughter, silently acting out the story as she told it in a stylized and very physical manner. Because we do not get that in the recording, a few spots were trimmed down and adjusted to make the pauses in our actor’s speech make more sense.


The Happiest Bunch – Costa Daros, Sound Designer


The Happiest Bunch, written by Mary Hill and directed by Mary Hill and Arizsia Staton, was a slice-of-life comedy break in our night of dramas. I wanted to embrace the endearing sitcom style, so the design was mostly realistic, with some exaggerated moments, like Martha's murmuring and our moment within their phone conversation, to heighten the comedy. Overall, the sound helped to bring the piece to life and give the characters something to react to during their performance.


Nearer By Far – Meghan Roche, Sound Designer


Nearer By Far, written by Erica Clare Bogan and directed by Dylan Seeley, was another one-actor play. It takes place in a submersible unit—essentially a very tiny submarine designed to go deep in the ocean—and is told in the form of the taped journal entries recorded by one-half of an underwater duo who were there doing undisclosed scientific research. Early in the piece, however, we learn that the other half of the team was killed, and our main character has to figure out what went wrong, and how to survive on her own.


From the beginning of our discussions about this piece, we knew that we wanted to lean into the “found tape” aspect of it, using vocal processing to really compress & affect our actor’s vocals in a way that emulated the feeling of listening to an old recording & adding in sound effects that were very specific to starting and stopping a tape recording. This came less from a desire for realism than as a way to heighten drama and show the passage of time between journal entries. I also made heavy use of various drone styles—more easily heard in-person, with our beefy subwoofers and lovely Meyer speakers, than on the average computer speakers, but still present nonetheless—for the drama piece of things. They shifted subtly from entry to entry, but all throughout there was also a separate drone that faded up very slowly and imperceptibly as it went, which made the dropout of almost all of our atmospheric sounds at a power outage moment late in the piece all the more emotionally affecting.



The Informant – Costa Daros, Sound Designer


The Informant, written by Grace McCarney and directed by Lucas Nelson, was the retelling of a woman's involvement in WWII to her granddaughter. As she narrates the story, the characters she talks about come to life and take over the dialogue for her. I designed the sound to help divide the world of the story narrated by her future self from the story taking place in real time. This gave us the opportunity to incorporate live foley into the design, which helped the characters around the woman create the world she talked about.


Right now, we’re putting the finishing touches on the mixes and mastering them.   Once the elements are in the can, we’ll be streaming them over the nets!  I’ll post again then.  

Monday, March 8, 2021

UCI Sound MFA, class of 2024

Every year when we interview potential MFA students for our Sound Design program, Mike and I always lament how hard it is to choose just two students from a qualified pool.  There are generally three or four students that are strong candidates, and it can be an excruciating decision to select which students to invite to join us for three years.  Part of selecting two students means necessarily saying 'no' to many more, including the few top-ranked alternates.  

This year was much harder, due mostly to COVID-19.  Because our industry is basically at a standstill, many folks are deciding that now is a good time to get their degree.  The number of qualified interested students shot up 500% this year. Instead of a pool of three or four top candidates, we had nine.  And, while in a normal year, we'd invite those three or four applicants to campus to sit in on classes, meet with faculty/staff/students, tour the facilities, and generally get to know us, this year UCI's campus is virtually shut down to outsiders.  So, we pivoted, scheduling Zoom sessions so that all nine top applicants could sit in on classes, meet with faculty, staff, and students, get a portfolio review, etc.  It was a bit of a logistical nightmare and something I hope we don't have to repeat next year, but it gave us a chance to meet so many students that we'd never have gotten the chance to meet before. 

It also made our decisions harder, because while the number of excellent applicants increased, the number of people we could accept didn't.  We still had to choose two.  So while we're thrilled with the two women who are joining us this fall, we also want to publicly say that we wish we could have taken more of you!  Those applicants who are reading this - we wish you all the best and look forward to seeing your careers grow!

This year, Mike and I are thrilled to welcome to the UCI Sound Design MFA program two extraordinary designers.  I've asked them to write their own bios.  Here they are:

Friday, January 29, 2021

The KELLER


In order to have a fully functional Sound Shop, the concept of the SWIFT (Sound Workshop for Integrated Fine-Arts Technology) needed to be reimagined.  The renaming of the SWIFT to the Keller (in honor of BC Keller) was the perfect opportunity.


When I came on board to attempt to fill BC’s shoes (let’s face it, anyone who knew him knows this is impossible to do), the first thing I noticed was that the sound shop was in chaos and needed some direction, since it had been almost two years since a full-time sound supervisor was part of the CTSA. As evidenced by the photos below, there was some infrastructure, with equipment housed haphazardly within it.  









A change was needed, and a drastic one at that.  The Sound Shop needed to feel renewed, with a sense of newfound purpose.  Over the course of the next 15 months, I devised a plan to slowly implement changes and improvements to how the Sound Shop would function.  The first step was to determine a new layout that would allow the most flexibility and storage capability.  I kept BC’s idea of keeping the center clear for a prep area, and I rearranged and purchased new equipment infrastructure to improve the layout.  I removed the old tool chest and donated it to the xMPL booth, and I removed the old test rack and any storage containers that didn’t match the new aesthetic.  I decided to keep the long shelving unit, wireless cabinet, rigging cabinet, workbench, storage bin rack, and filing cabinet.  I then purchased a second matching storage bin rack, replaced the mic cabinet with a different cabinet, purchased a new shelving unit, added a new cabinet, and purchased a tool chest. 


This gave the shop a new look and allowed for more storage.  I was able to add the new storage equipment to the Sound Shop and actually gain 12 square feet of floor space.  Once the new layout was implemented, the SWIFT, or Sound Shop, was no more, and The Keller was born.


When you now walk into the Keller, there is a new standard in how the Sound Design program operates for each production.  Since the implementation, shows are pulled faster and more efficiently, and equipment has a place to live before it gets loaded into a venue.


I will now give a photographic tour of the Keller, since we live in COVID times and only authorized people may be on campus.



THE TOUR



When you open the door to enter, there are three cabinets on the left: rigging, microphones, and wireless.  Continuing on is the workbench, with the long storage shelves now along the back wall.  On the right are the electronics cabinet and two storage bin racks, followed by the tool chest.  Immediately to the right of the door is the computer station.




RIGGING CABINET


The rigging cabinet is now fully organized and labeled.  Speaker manufacturer bolts and smaller hardware are separated out into bins or drawers.  It’s now easy to grab what is needed for a show pull.  This is also where bolts, screws, and washers are stored by size.




MICROPHONE CABINET


The microphone cabinet is the latest addition to the Keller.  Each microphone model is grouped together in its own bin, so it is easy to see what is in stock and how many of each are available for a project. There are currently empty bins, which leaves room for more mics in the future.  Various mic hardware is also stored in the door to make mounting mics on things other than stands more convenient.






WIRELESS CABINET


The wireless cabinet contains the mic elements, wireless mic transmitters, and any in-ear monitor units for wireless speakers.  





WORKBENCH AREA


The workbench is fully functional and equipped with a soldering station, carbon filter, and vises to assist in repairing equipment.  The desktop also includes the test rack, which contains a cable tester, Clear-Com unit, and mixer.  Above the workbench are Akro-Mils drawers for various components, adapters, and parts.  To the right of the workbench is the repair shelving unit.  The top shelf is for cable that needs repair, and the next shelf is for mics and smaller equipment.  Below that is the shelf for rack gear that needs repair.  The bottom shelf is for Clear-Com units and larger equipment.




SHELVING UNIT 


Along the back wall are the long storage shelving units that store our computer monitors and Clear-Com units.  Since we now build custom racks for each production, our rack-making equipment is also on this unit.  This includes panels, panel connectors, vents, lacer bars, and blank panels.  Above the rack-making section are the tech table supplies and computer keyboards and mice.  The very top is where spools of various cable are stored for building new cables.  There is still plenty of room, and many empty bins, for storing future equipment on this unit.






ELECTRONICS CABINET


The electronics cabinet stores the smaller networking switches (non-rackmountable), wireless routers, MIDI gear, Yamaha cards, and video equipment.





STORAGE BINS RACK


The storage bin racks are for storing various equipment for easy access.  This includes XLR turnaround cables, RS232 cables, adapter cables, computer cables, and other miscellaneous cables.   The top is for connectors for making new cables, and to the right are power adapters for equipment.





TOOL CHEST


The tool chest has many drawers to keep tools organized for use in the Keller.  These tools are only for the Keller, as we have a separate tool chest for load-ins and strikes.  There are also empty drawers for future tool storage.








COMPUTER STATION


The computer station is equipped with a Mac mini on the left and a PC on the right.  This flexibility gives us the ability to run any software needed in the shop.  Both computers are connected to the 42” HDTV so the display can be seen across the Keller, which is helpful for pull lists and checking out equipment.  The keyboards and mice are wireless, so it is easy to move them around the Keller.  The PC also has a wireless scanner, which assists in the checking out and returning of equipment.  








So there you have it.  You have witnessed the death of the SWIFT and the birth of the KELLER!

Tuesday, December 15, 2020

Track and Field 2020

In my Trends in Modern Sound Design class, we often do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:


1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission gets passed to another student, who uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds of duration, but other than that, they can do anything they want. Students turn in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student usually gets to work on each thread.
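The post doesn't spell out the pass-around matrix, but one simple scheme that satisfies the goal (each student eventually touching each thread, with no one ever holding their own current thread twice in a row) is a round-robin rotation. Here's a hypothetical sketch; `rotation_matrix` is my own illustrative helper, not anything from the actual class:

```python
# Hypothetical round-robin schedule for a Track-and-Field-style pass-around.
# In round r, student i works on thread (i + r) % n, so over n rounds
# every student works on every thread exactly once.

def rotation_matrix(n_students: int) -> list[list[int]]:
    """Return schedule where schedule[r][i] = thread student i works on in round r."""
    return [[(i + r) % n_students for i in range(n_students)]
            for r in range(n_students)]

schedule = rotation_matrix(11)  # eleven students, eleven threads

# Sanity check: each student sees each thread exactly once across all rounds.
for i in range(11):
    assert sorted(schedule[r][i] for r in range(11)) == list(range(11))
```

In round 0, each student starts their own thread; each subsequent round shifts everyone over by one, which is the "matrix set up in such a way that each student usually gets to work on each thread."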

This quarter, I had eleven students, so there are eleven threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy! 

(side note: Thread 7 featured some prominent commercial music, so SoundCloud won't let me present it to you.  Your loss!)

Tuesday, December 1, 2020

Teaching D-Mitri Remotely

Every fall, in our Digital Audio Systems class, I teach our first-year sound designers a two-week intensive overview of the Meyer Sound D-Mitri system.  D-Mitri is a powerful tool for live sound that combines the functions of a digital mixing console, a sound content playback device, a multi-channel sound spatialization tool, a room acoustics enhancement tool, and a show control hub all in one package.  D-Mitri systems are found in large-scale sound installations around the world, from theme parks to Broadway and beyond.  D-Mitri is so ubiquitous and so capable (and, frankly, the learning curve is so steep) that we typically have a number of second- and third-year students join us for the training to refresh their skills.

UCI has a small D-Mitri system, and we use it both as a teaching tool and in production.  When we teach with it, we roll the rack into the Meyer Sound Design Studio and patch eight D-Mitri outputs directly into our eight-channel loudspeaker system so that we can learn and work with it while experiencing its spatialization capabilities in real time.  D-Mitri programming happens through software called CueStation, which functions in a client-server capacity.  Multiple users can be logged into D-Mitri at the same time, each working on a different aspect of the programming.  Our D-Mitri classes typically involve everyone in the studio, sitting at their laptops, all wired into D-Mitri with a nest of ethernet cables.

The Meyer Sound Design Studio, in the before-times.


Of course, we can't do that this year.  We could have delayed the training module until we were able to meet safely, but I don't know when that will be, and I'm honestly tired of delaying things because of the freaking pandemic.  I didn't want to let the perfect be the enemy of the good, to paraphrase Voltaire.

So, in a pandemic, how do you teach a class that requires both client-server access AND the ability to perceive spatialized sound?  In order to solve this, I needed to think through a number of different challenges.  Here they are, how I thought through them, and how I eventually solved them.


Physical Locations

We knew that the D-Mitri rack would need to live in the Meyer Sound Design Studio.  The studio is currently cleared to be occupied by three people, but I was uncomfortable coming to campus for in-person class (I'm teaching all of my classes remotely this term).  Plus, I know how important the refresher is to our more senior students, and I didn't want to cut them out of the experience.  So, each student would be remote, logging in with their computers (with wired connections, preferably).  I came into the studio to teach the classes so that I could take care of any issues that came up while teaching that I couldn't deal with remotely.

Even though I'd be teaching from the studio, I expected that I'd need to be able to remote into the host computer in order to tweak details from home. Early in the quarter, while testing, I found that if I were on campus, I could screenshare with the host computer (an iMac that we call Chalkboard), but when I returned home, I couldn't screenshare with Chalkboard at all.  After consulting with our IT department, we determined that we needed a more robust screensharing tool.  We installed TeamViewer on Chalkboard so that I could control the host computer, restart failed connections, etc.  TeamViewer mostly worked like a champ, though there were a few times where I couldn't log on to Chalkboard at all. 


Connecting CueStation to D-Mitri

The easiest way to share a CueStation screen with the students was to just share my laptop's desktop via Zoom, but if I did that, they'd just be watching me click things, which is hardly useful when teaching a tool.  The students needed to be able to control CueStation on their own in order to get their (virtual) hands on the (virtual) machine.  I asked Richard Bugg at Meyer Sound about how we might address this issue, and he noted that D-Mitri systems can be controlled from around the globe using a proxy server. The folks at Meyer use this feature to troubleshoot systems without having to fly halfway around the world, but it was just as useful for my needs.  Richard walked me through the steps to set it up and spent some time testing with me. The proxy server required Chalkboard to be running CueStation, but as long as it was running CueStation and the proxy server was active, I could have up to eight clients logged in at the same time.  Sometimes it took a while to get all students onto the proxy server at once; the folks at Meyer use the proxy server to do maintenance on machines that are difficult to get to, not to teach D-Mitri to a class, so they don't typically have the user count that we did. 


Monitoring

So, we've figured out where everyone would be, and we've figured out how everyone could control D-Mitri using a proxy server.  How could we send spatialized sound to the students so that they could all monitor the D-Mitri environment well?

My first thought was to build a SpaceMap (D-Mitri's spatialization tool) replica of the Meyer Sound Design Studio's loudspeaker configuration, take the eight outputs of D-Mitri into a DAW, place them into a 5.1 session, stream the six-channel output over the net, and then have students monitor with 5.1 headphones.  But, we ran into a number of challenges with this idea.  First, I couldn't find a reliable six-channel sample-accurate streaming tool.  We've been using AudioMovers, which does a great job with two-channel signals, but in testing, multiple two-channel instances did not sync to each other (there are rumors of a >2-channel upgrade, but I haven't tested it yet).  Also, six channels of audio is three times the bandwidth of two channels, which could impair networks in dorms and homes.  Finally, I was hoping to avoid having to seek out funds to buy enough 5.1 headphones to outfit the class.  So, back to the drawing board.
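The bandwidth concern above is straightforward to put numbers on. As a back-of-the-envelope sketch (assuming uncompressed PCM; AudioMovers' actual codec and network overhead will differ, and `pcm_kbps` is my own illustrative helper):

```python
# Raw PCM stream bandwidth, to compare a stereo stream against a
# six-channel 5.1 stream. Illustrative only; a real streaming tool
# adds codec compression and protocol overhead.

def pcm_kbps(channels: int, sample_rate: int = 48_000, bit_depth: int = 24) -> float:
    """Raw PCM bitrate in kilobits per second."""
    return channels * sample_rate * bit_depth / 1000

stereo = pcm_kbps(2)   # 2304.0 kbps
six_ch = pcm_kbps(6)   # 6912.0 kbps

assert six_ch == 3 * stereo  # six channels = three times the stereo bandwidth
```

Three stereo streams' worth of data is exactly the "three times the bandwidth" that made the 5.1 plan hard on dorm and home networks.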

SpaceMap of the Meyer Sound Design Studio


I solved my problem in the next iteration.  I still took the eight channels from D-Mitri into my DAW, but instead of mixing to 5.1, I mixed to binaural.  I initially used Binauralizer by Noisemakers to render each D-Mitri output relative to its loudspeaker's position in the room, though I shifted to the dearVR Micro plug-in due to better low-frequency management.  The DAW mixed all eight inputs together, and then I used AudioMovers to send a two-channel stream out to the students.  I cut the audio bandwidth by two-thirds and removed the need for 5.1 headphones!  
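To illustrate the idea behind that fold-down (not the plug-ins' actual algorithms, which use proper HRTFs): each speaker feed gets rendered to stereo according to its loudspeaker azimuth. This toy sketch uses only constant-power panning plus a crude interaural time difference; the function name and the azimuth layout in the usage note are my own assumptions, not the real Meyer Sound Design Studio configuration:

```python
import numpy as np

FS = 48_000              # sample rate, Hz
SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, a rough average head radius

def binaural_downmix(channels: np.ndarray, azimuths_deg) -> np.ndarray:
    """Fold (n_speakers, n_samples) speaker feeds down to (2, n_samples) stereo."""
    out = np.zeros((2, channels.shape[1]))
    for sig, az in zip(channels, azimuths_deg):
        theta = np.radians(az)
        pan = np.sin(theta)                      # -1 hard left ... +1 hard right
        g_left = np.cos((pan + 1) * np.pi / 4)   # constant-power gains
        g_right = np.sin((pan + 1) * np.pi / 4)
        # Woodworth's approximation for interaural time difference,
        # rounded to whole samples; the far ear hears the source later.
        itd_samples = int(round(abs(HEAD_RADIUS / SPEED_OF_SOUND
                                    * (theta + np.sin(theta))) * FS))
        left, right = sig * g_left, sig * g_right
        if theta > 0:      # source to the right: delay the left ear
            left = np.roll(left, itd_samples)
        elif theta < 0:    # source to the left: delay the right ear
            right = np.roll(right, itd_samples)
        out[0] += left
        out[1] += right
    return out
```

With an assumed eight-speaker ring like `[-135, -90, -45, 0, 45, 90, 135, 180]` degrees, a signal fed only to the +90° speaker lands almost entirely in the right ear, which is the behavior the per-speaker binaural instances provided.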

ProTools Session Set-Up
ProTools handled the binaural routing.




The eight instances of NoiseMakers' Binauralizer, each positioned relative to its corresponding loudspeaker position.
Eight binauralization plug-ins spatialized the sound.




Ultimately, the students were able to listen to high-quality spatialized audio with a relatively low latency.  It wasn't the same as being in the room, but it was pretty close.


Returning to the Studio

We spent four two-hour sessions learning the basics (and some details) of D-Mitri remotely, and on the fifth and final day of the module, the two students and I met in the Meyer to review their work in person.  They had created spatialized sonic events from their apartments, but they presented their work live, through the eight-loudspeaker system that we have in the Meyer. This gave us an additional opportunity to discuss how well the binaural monitoring situation translated into actual meat-space monitoring.  Their work translated more or less well, but we noted that a sound panned to the center of the room revealed itself differently in speakers than in headphones.  Via headphones, all eight channels were being rendered, which imaged the sound to the center of the stereo image. But in the studio, having all eight speakers firing didn't image to the center.  It either imaged EVERYWHERE (if you were sitting in the sweet spot in the room) or at whichever speaker you were closest to (if you weren't).  


Final Thoughts

You won't catch me yearning to do this again if I have the option to teach in person, but overall, I'm pleased with the results.  If I had to do this again, I'd need to address these issues:

  • Input source.  I was using a single channel of audio from ProTools as an input source.  I set ProTools to loop playback, but sometimes the session would stop on its own.  Next time, I'd use a different, more reliable input source; an FM radio would be a nice low-tech tool.
  • Remote access via proxy server.  It wasn't as solid as I would have liked.  In fact, on the first day of class, no one could connect except me.  
  • AudioMovers wasn't designed to stream audio 24/7 for 3 weeks, and it occasionally failed.  When that happened, I had to log into the computer, restart the transmission, and send the link around again.  I had to do that once a day or so.  Not a deal breaker.  Just a thing to note.

Overall, this was a huge success!  If you're thinking about doing something like this, let's talk!  I'd be happy to share my thoughts and brainstorm other/better solutions!

Saturday, April 18, 2020

Welcome to the Class of 2023!

So, we usually do this earlier in the academic year, but like everyone else on the planet, we got preoccupied with COVID-19 response. Now that our 'new normal' is feeling settled, I'm able to get back to some of the things that weren't quite as pressing before... which leads me to this:

Mike and I are proud to welcome our incoming first-year students to the UCI Sound Design program, the class of 2023:

Costandina Daros is a soon-to-be graduate of Western Michigan University. Throughout her undergrad, she has worked on several productions in both the Theatre and Dance Departments, her favorites including Sound Design for Jesus Christ Superstar and Tribes. In her summers, she has worked at The Utah Festival Opera and Musical Theatre in Logan, Utah and The Stephen Foster Story in Bardstown, Kentucky. In her time at UCI, she is looking forward to exploring new ways of using sound to enhance storytelling.



Aerik Harbert hails from West Virginia by way of Maryland, where he currently works at Towson University. Before his job at Towson, Aerik worked at the Hangar Theatre and Baltimore Center Stage. He is a graduate of the University of Maryland, Baltimore County.


Tuesday, January 21, 2020

Field Trip: Warner Brothers Studios



A Hollywood landmark!
Once you hear it, there’s no mistaking what you’re supposed to see – the oafish Patsy clomping together two coconut halves, always two steps behind King Arthur, as popularized in the 1975 cult classic Monty Python and the Holy Grail. But where did that idea come from? Where did this style of sound design emerge?

Sound effects first became incorporated into the “talkies” in the late 1920s with the work of audio pioneer Jack Foley. The field he developed – the art of creating and recording live sound effects (often with unconventional materials) in sync with a finished reel during the post-production process – revolutionized motion pictures in the early years of cinema and continues to permeate the industry today. This type of work is rarely required within the scope of traditional theatre (unless it is a specific choice for the production), so it was a real treat for the MFA sound design students when we had the opportunity to explore the sound stages at Warner Brothers Studios, taking an in-depth look at one of their Foley facilities.

This field trip (coordinated by Professor Vincent Olivieri as part of DRAMA 255: Trends in Modern Sound Design) began on the cool morning of December 6, 2019. After we carpooled up to Burbank, we left our vehicles with the valets and made our way past the multiple ongoing studio tours to the commissary, where we grabbed a quick bite to eat and some hot joe (side note: they have a killer breakfast spread). After a few minutes to take in the enticing aromas of scrambled eggs and bacon, we headed back outside to meet our host Alyson Dee Moore, an accomplished Foley artist with over thirty years of experience in the industry, half of which she has spent with Warner Brothers. She is also a two-time Emmy award-winning Foley artist (a recent win shared with the department for their work on HBO’s Barry) with numerous Golden Reel nominations from Motion Picture Sound Editors.

One "street" of the studio

Alyson first took us around the various sets situated about the lot – a small township built out of scenic skeletons and optical illusions meant to be filmed from specific angles. Each unit is highly configurable – some even had false walls (they were quite convincing, even up close) creating striking visual partitions within the spaces. Everywhere we looked, there were full crews of carpenters, electricians, and stagehands hard at work preparing for the next set. At times, I felt a bit like a pedestrian on a construction site, but no one seemed to mind us: they are likely used to random people walking about. We also spotted some unique staged set pieces scattered about, meant for quick selfies from tourists. We obviously had to take a few for ourselves…

MFA Sound Design Students, 2019-2020

Next, we headed into the museum on the lot, where technical aspects from titles in the Warner Brothers catalog were on display – from the beautiful gowns worn by Lady Gaga in A Star is Born (2018) to a scenic reproduction of Central Perk from Friends (1994-2004) to the forced-perspective table used in The Hobbit (2012). Towards the end of our stroll through the museum, we exited into a small enclosed room. Here, the audio from the film Gravity (2013) was played in a stemmed format so that we could listen to the sound effects, recorded dialogue, and soundtrack independently. Afterwards, we listened to all three together to hear how the tracks were crafted to complement each other – the spatialized mixing in ProTools really brought everything to life. We concluded in, where else? The gift shop!

Costume pieces and props from A Star Is Born (2018)


One of the many stages located on the lot
Next, Alyson took us around for a peek at some of the various sound stages while we waited for the working Foley artists to go to lunch (we didn’t want to disturb them while they were working). There were dozens of stages of various sizes (some akin to aircraft hangars), and most of them had full crews inside, loading in the next production or striking the previous one. Each stage has a unique placard next to the entrance with a comprehensive list of every title that has been worked on in that space. Then, we meandered through the main properties storage facility on-site, which, to me, looked more like the best-stocked antique store you could ask for. Alyson said that this was a frequent haunt for her, as many of the items required for Foley could normally be considered props. We finally came full circle, ending up back at the commissary, from which we departed for the recently vacated Foley studio.


The Foley studio, which was underground in the post-production facility, consisted of a main room where all of the actual Foley would take place, a kitchenette-style area with large tubs for water work, a smaller side room dedicated to storage, and a control room complete with studio-grade recording equipment. From Alyson’s description, three artists would be working in the facility for a given project – two Foley artists and a mix engineer. She also let us in on some of the more… unique sounds that she has had to come up with over her career, like the use of semi-frozen gelatin to capture the likeness of footsteps on an alien planet. The main space was full of odds and ends (all noisemakers), a pit full of sand, gravel, and debris, and some great shotgun microphones. After she fielded our questions for a bit, we had a chance to make some noise of our own. Then, we headed upstairs towards the daylight and contemplated lunch.

As the day ended, we headed out with Alyson to one of our favorite cafes right as an afternoon downpour swept over the city. We spoke more with her about some of the specifics of her work, but also what she enjoyed doing in her free time, ongoing hobbies, and the ever-present question of work/life balance. Something that I found insightful is that although longer hours are sometimes inevitable, her daily schedule was fairly regular, with hours from around 8 am to 5 pm. After the rain let up a bit, we said our goodbyes and made our way back to Irvine.

All in all, it was a fantastic day full of spectacle and even more insight into a boundlessly creative line of work. Throughout the tour, one descriptor kept coming to mind that perfectly encapsulated the career and underlined its inherent connection to live theatre - resourcefulness.


I’m incredibly grateful that we had a fantastic quarter in our Trends class last fall and that so much of it could be spent out in the field, exploring other industries within sound design; this final excursion was the perfect cherry on top.

Biggest of thank yous to Vincent Olivieri, Alyson Dee Moore, and all the wonderful folks at Warner Brothers.


Photos by Garrett Gagnon, Vincent Olivieri, and Meghan Roche.

Field Trip: Backstage Disneyland


When I told friends and family that my class was going to get to go on a backstage tour of Disneyland to learn about their audio & other tech systems, I think they may have thought I was reading them a page from my dream journal. I have had a longstanding love of theme park history and design that started when I was a kid visiting Disneyland--asking Cast Members (the Disney term for "employees") annoying questions, using terrible dial-up internet to scour the ‘net for ride show scripts, reading every book on Imagineering that I could, and eventually keeping lists and notes and spreadsheets about how the park has changed over time and how it might operate. (Okay, maybe it was a little bit of an obsession. It's fine.) 

(Ah, I should quickly interject here that I'll be using a lot of parentheses in the post ahead--I was perhaps a touch too excited in revisiting the trip & wound up with a lot of vaguely-related sidenotes...sorry about that!)

Getting to combine theatre tech with this longstanding fascination is something I was looking forward to for weeks, so I am happy to report that when the day finally came, it somehow managed to surpass my already-high expectations.

Because Disney has somewhat strict policies re: the secrecy of backstage magic, there’s a limit to what I can share, but here’s a breakdown (sorta) of the day:
  • We started out at TDA (Team Disney Anaheim, a giant complex of what seemed to be primarily administrative buildings) and met our host, Jerry, who is an Entertainment Manager for Technical Services and whose history with the company goes back almost 25 years across a variety of roles! (We also discovered that I have photographic proof of having met him while he performed one of these roles in the late 90s, which was WILD, but due to Disney Magic reasons I’m not sure if I can share much more about that here, unfortunately…)
  • We saw where the fireworks get shot off every night behind ToonTown--for the last 15 years or so, Disneyland has been using a somewhat unique technology to set off their fireworks which significantly reduces both the smoke and noise produced by the show. (If you’re at all interested in the history of their fireworks show and how they used to be set off, as well as how Disney pioneered the use of music synchronization with pyrotechnics, I highly recommend giving this podcast episode a listen: https://www.youtube.com/watch?v=_1DRHSCuqyI)
  • We also saw where the parade floats are all kept when not “on-stage” and learned how audio gets distributed to (and from) them. (There were a number of Entertainment and Imagineering employees also gathered in the warehouse to test out a future parade addition, but that one is definitely going to need to stay a secret for now.)
  • The very first steps we took “on-stage” were somewhere most of us hadn’t had a chance to see yet: the brand-new Galaxy’s Edge expansion! It was gorgeous, the complexity of the audio alone was kind of mind-blowing, and it really contributes a lot to the storytelling the land is able to pull off. I got to talk to a Stormtrooper who snuck up on me and accused me of having Rebel sympathies (and who, we learned, uses a crazy sophisticated system to talk to people in a way that allows them to personalize every single interaction while keeping a consistent voice – more Disney magic I probably shouldn’t divulge here) AND try the infamous Blue Milk. (It was $8, VERY sweet, and had a texture that turned out to be deeply polarizing among those who sampled it.)
Backstage photos at Disney are very against the rules, so this is, unfortunately,
one of the few photos we have from the day. At least we look very cool in it?
  • We visited the venue for Mickey’s Magical Map in the large outdoor Fantasyland Theatre (which, fun fact, initially started out as a teen nightclub in the 1980s – more info on that here: https://www.youtube.com/watch?v=NqgJ0N99eGg) and got to take a peek into the booth of the theatre before watching the show, chatting a bit with the show’s audio technician and stage manager. It was a good show, but because of the noise of the nearby passing train and our lovely VIP seats (which had our backs against a pretty reflective wall), the overall mix of the show was pretty loud. It took us a minute to recover. (I personally loved watching the little kids get totally enthralled in the action onstage, though.)
Waiting for the show to start! (I think? I am just now noticing that Garrett and
Jack are very focused on something up front...)
  • Lunch! We took a break from walking around the park and grabbed a bite at one of the backstage Cast Member eateries.
  • AUDIO CENTRAL. Okay, again, not sure how much I can divulge here, but the bulk of the audio for the park (especially for parades and other outdoor shows, area music, etc.) is controlled from one very fancy room sitting right above one of the guest-accessible levels of a Main Street, USA building. And when I say fancy, I mean that it sort of resembled a TV show art director’s idea of what a theme park command center might look like--shades all drawn, two people behind glass at giant control desks with many monitors flashing different numbers and graphs, a wall of video feeds and light-up maps of both Disneyland and Disney’s California Adventure…et cetera. (I just found an old Disney Parks Blog post about Audio Central that is relevant here, though it is 5 years old and they have since doubled the size of the facility and apparently updated a ton of the gear. Check it out: https://disneyparks.disney.go.com/blog/2014/03/behind-the-sounds-inside-audio-central-at-disney-california-adventure-park/)
  • Backstage at Frozen in the Hyperion Theatre in Disney’s California Adventure! Because it was a “dark day,” we did not get to see any action happening, but it did mean that we got to walk around the entire theatre without worrying about a huge time crunch. We saw the set pieces and crazy costumes backstage (sidenote: the troll costumes are TERRIFYING without any actors inside of them), learned how some of their very intense automated lighting tracking worked, and walked all around the house of the theatre--clapping and marveling at how remarkably acoustically dead a space the theatre's designers managed to make.
  • Finally, we got back to Disneyland and walked through the park to see parts of the Christmas parade that was running at the time. It was pretty neat to see in action all of the technical elements we had been learning about all day, and a nice way to put a bow on the end of it.
This blog post is now approximately 4 times longer than I was told it should be, so I’m going to go ahead and wrap things up now. TL;DR? Theme parks are very cool and very technologically complicated! And we got to go learn about it for a day!

I leave you now with a photo of these two outrageously photogenic Disneyland ducks:

It's amazing how advanced Disney's Audio-Animatronic tech
has gotten over the years!