Tuesday, December 15, 2020

Track and Field 2020

 In my Trends in Modern Sound Design class, we often do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:

1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission gets passed to another student, who then uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds of duration, but other than that, they can do anything they want. Students turn in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student usually gets to work on each thread.
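One way such a matrix can be built (a sketch of the idea, not necessarily the exact scheme we use) is a simple round-robin: on iteration i, thread t goes to student (t + i) mod N, so over N iterations every student touches every thread exactly once.

```python
def rotation_matrix(num_students: int) -> list[list[int]]:
    """Return a schedule where schedule[i][t] is the student
    working on thread t during iteration i."""
    return [
        [(t + i) % num_students for t in range(num_students)]
        for i in range(num_students)
    ]

schedule = rotation_matrix(11)  # eleven students, eleven threads

# Iteration 0: each student starts their own thread.
assert schedule[0] == list(range(11))

# Over all iterations, every thread is touched by every student exactly once.
for t in range(11):
    assert sorted(schedule[i][t] for i in range(11)) == list(range(11))
```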

This quarter, I had eleven students, so there are eleven threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy! 

(side note: Thread 7 featured some prominent commercial music, so SoundCloud won't let me present it to you.  Your loss!)

Tuesday, December 1, 2020

Teaching D-Mitri Remotely

Every fall, in our Digital Audio Systems class, I teach our first-year sound designers a two-week intensive overview of the Meyer Sound D-Mitri system.  D-Mitri is a powerful tool for live sound that combines the functions of a digital mixing console, a sound content playback device, a multi-channel sound spatialization tool, a room acoustics enhancement tool, and a show control hub all in one package.  D-Mitri systems are found in large-scale sound installations around the world, from theme parks to Broadway and beyond.  D-Mitri is so ubiquitous and so capable (and its learning curve so steep, frankly) that we typically have a number of second- and third-year students join us for the training to refresh their skills.

UCI has a small D-Mitri system, and we use it both as a teaching tool and in production.  When we teach with it, we roll the rack into the Meyer Sound Design Studio and patch eight D-Mitri outputs directly into our eight-channel loudspeaker system so that we can learn and work with it while experiencing its spatialization capabilities in real time.  D-Mitri programming happens through software called CueStation, which functions in a client-server capacity.  Multiple users can be logged into D-Mitri at the same time, each working on a different aspect of the programming.  Our D-Mitri classes typically involve everyone in the studio, sitting at their laptops, all wired into D-Mitri with a nest of ethernet cables.

The Meyer Sound Design Studio, in the before-times.

Of course, we can't do that this year.  We could have delayed the training module until we were able to meet safely, but I don't know when that will be, and I'm honestly tired of delaying things because of the freaking pandemic.  I didn't want to let the perfect be the enemy of the good, to paraphrase Voltaire.

So, in a pandemic, how do you teach a class that requires both client-server access AND the ability to perceive spatialized sound?  In order to solve this, I needed to think through a number of different challenges.  Here they are, how I thought through them, and how I eventually solved them.

Physical Locations

We knew that the D-Mitri rack would need to live in the Meyer Sound Design Studio.  The studio is currently cleared to be occupied by three people, but I wasn't comfortable holding class in person (I'm teaching all of my classes remotely this term).  Plus, I know how important the refresher is to our more senior students, and I didn't want to cut them out of the experience.  So each student would be remote, logging in from their own computer (with a wired connection, preferably).  I came into the studio alone to teach the classes so that I could take care of any issues that came up while teaching that I couldn't deal with remotely.

Even though I'd be teaching from the studio, I expected that I'd need to be able to remote into the host computer in order to tweak details from home. Early in the quarter, while testing, I found that if I were on campus, I could screenshare with the host computer (an iMac that we call Chalkboard), but when I returned home, I couldn't screenshare with Chalkboard at all.  After consulting with our IT department, we determined that we needed a more robust screensharing tool.  We installed TeamViewer on Chalkboard so that I could control the host computer, restart failed connections, etc.  TeamViewer mostly worked like a champ, though there were a few times when I couldn't log on to Chalkboard at all.

Connecting CueStation to D-Mitri

The easiest way to share a CueStation screen with the students would have been to share my laptop's desktop via Zoom, but then they'd just be watching me click things, which is hardly useful when teaching a tool.  The students needed to be able to control CueStation on their own in order to get their (virtual) hands on the (virtual) machine.  I asked Richard Bugg at Meyer Sound how we might address this issue, and he noted that D-Mitri systems can be controlled from around the globe using a proxy server. The folks at Meyer use this feature to troubleshoot systems without having to fly halfway around the world, but it was just as useful for my needs.  Richard walked me through the steps to set it up and spent some time testing with me. The proxy server required Chalkboard to be running CueStation, but as long as it was and the proxy server was active, I could have up to eight clients logged in at the same time.  Sometimes it took a while to get all of the students onto the proxy server at once; Meyer uses the proxy server to do maintenance on machines that are difficult to get to, not to teach D-Mitri to a class, so they don't typically have the user count that we did.


So, we'd figured out where everyone would be, and we'd figured out how everyone could control D-Mitri using a proxy server.  How could we send spatialized sound to the students so that they could all monitor the D-Mitri environment well?

My first thought was to build a SpaceMap (D-Mitri's spatialization tool) replica of the Meyer Sound Design Studio's loudspeaker configuration, take the eight outputs of D-Mitri into a DAW, place them into a 5.1 session, stream the six-channel output over the net, and have students monitor with 5.1 headphones.  But we ran into a number of challenges with this idea.  First, I couldn't find a reliable six-channel sample-accurate streaming tool.  We've been using AudioMovers, which does a great job with two-channel signals, but in testing, multiple two-channel instances did not sync to each other (there are rumors of a >2-channel upgrade, but I haven't tested it yet).  Also, six channels of audio is three times the bandwidth of two, which could strain networks in dorms and homes.  Finally, I was hoping to avoid having to seek out funds to buy enough 5.1 headphones to outfit the class.  So, back to the drawing board.
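For a sense of scale, here's the bandwidth math, assuming uncompressed PCM at 48 kHz / 24-bit (AudioMovers' actual codec and settings may differ):

```python
# Back-of-the-envelope stream bandwidth for uncompressed PCM audio.
def pcm_kbps(channels: int, sample_rate: int = 48_000, bit_depth: int = 24) -> float:
    """Raw PCM bitrate in kilobits per second."""
    return channels * sample_rate * bit_depth / 1000

stereo = pcm_kbps(2)      # 2304.0 kbps for a two-channel stream
surround = pcm_kbps(6)    # 6912.0 kbps for a 5.1 stream
print(surround / stereo)  # → 3.0: six channels cost three times the bandwidth
```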

SpaceMap of the Meyer Sound Design Studio
A spacemap of the MSDS studio.

I solved my problem in the next iteration.  I still took the eight channels from D-Mitri into my DAW, but instead of mixing to 5.1, I mixed to binaural.  I initially used Binauralizer by Noisemakers to render each D-Mitri input relative to the loudspeaker's position in the room, though I shifted to the dearVR Micro plug-in due to better low frequency management.  The DAW mixed all eight inputs together, and then I used AudioMovers to send a two-channel stream out to the students.  I cut the audio bandwidth by two-thirds and removed the need for 5.1 headphones!
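To illustrate the routing (eight mono speaker feeds in, one stereo mix out), here's a deliberately crude sketch.  The real plug-ins (Binauralizer, dearVR Micro) use measured HRTFs; this stand-in only approximates interaural level differences with an equal-power pan law, and the eight azimuths are assumptions, not the studio's actual layout:

```python
import numpy as np

def binaural_downmix(feeds: np.ndarray, azimuths_deg) -> np.ndarray:
    """Crude ILD-only binaural downmix.

    feeds: (8, n_samples) array of speaker signals.
    azimuths_deg: azimuth of each loudspeaker (0 = front, +90 = hard right).
    Returns a (2, n_samples) L/R mix.
    """
    theta = np.radians(np.asarray(azimuths_deg, dtype=float))
    # Equal-power panning: gain_l**2 + gain_r**2 == 1 for every speaker.
    gain_r = np.sqrt((1 + np.sin(theta)) / 2)  # full right at +90 degrees
    gain_l = np.sqrt((1 - np.sin(theta)) / 2)  # full left at -90 degrees
    return np.stack([gain_l @ feeds, gain_r @ feeds])

# Eight speakers spread evenly around the listener (assumed layout).
azimuths = [-180, -135, -90, -45, 0, 45, 90, 135]
```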

ProTools Session Set-Up
ProTools handled the binaural routing.

The eight iterations of NoiseMakers, positioned relative to their corresponding spatialization position.
Eight binauralization plug-ins spatialized the sound.

Ultimately, the students were able to listen to high-quality spatialized audio with a relatively low latency.  It wasn't the same as being in the room, but it was pretty close.

Returning to the Studio

We spent four two-hour sessions learning the basics (and some details) of D-Mitri remotely, and on the fifth and final day of the module, the two students and I met in the Meyer to review their work in person.  They had created a spatialized sonic event from their apartments, but they presented their work through the eight-loudspeaker system in the Meyer. This gave us an additional opportunity to discuss how well the binaural monitoring situation translated into actual meat-space monitoring.  Their work translated well, more or less, but we noted that a sound panned to the center of the room revealed itself differently in speakers than in headphones.  Via headphones, all eight binauralized channels contributed equally, which imaged the sound to the center of the stereo field. But in the studio, having all eight speakers firing didn't image to the center.  It either imaged EVERYWHERE (if you were sitting in the sweet spot in the room) or to whatever speaker you were closest to (if you weren't).

Final Thoughts

You won't catch me yearning to do this again if I have the option to teach in person, but overall, I'm pleased with the results.  If I have to do this again, I'd need to address these issues:

  • Input source.  I was using a single channel of audio from ProTools as an input source.  I set ProTools to loop playback, but sometimes the session would stop on its own.  Next time, I'd use a different, more reliable input source.  An FM radio would be a nice low-tech tool.
  • Remote access via proxy server.  It wasn't as solid as I would have liked it to be.  In fact, on the first day of class, no one could connect except me.
  • AudioMovers wasn't designed to stream audio 24/7 for three weeks, and it occasionally failed.  When that happened, I had to log into the computer, restart the transmission, and send the link around again.  I had to do that about once a day.  Not a deal breaker, just a thing to note.

Overall, this was a huge success!  If you're thinking about doing something like this, let's talk!  I'd be happy to share my thoughts and brainstorm other/better solutions!

Saturday, April 18, 2020

Welcome to the Class of 2023!

So, we usually do this earlier in the academic year, but like everyone else on the planet, we got preoccupied with Covid-19 response. Now that our 'new normal' is feeling settled, I'm able to get back to some of the things that weren't quite as pressing before... which leads me to this:

Mike and I are proud to welcome our incoming first-year students to the UCI Sound Design program, the class of 2023:

Costandina Daros is a soon-to-be graduate of Western Michigan University. Throughout her undergrad, she has worked on several productions in both the Theatre and Dance Departments, her favorites including Sound Design for Jesus Christ Superstar and Tribes. In her summers, she has worked at The Utah Festival Opera and Musical Theatre in Logan, Utah and The Stephen Foster Story in Bardstown, Kentucky. In her time at UCI, she is looking forward to exploring new ways of using sound to enhance storytelling.

Aerik Harbert hails from West Virginia by way of Maryland, where he currently works at Towson University. Before his job at Towson, Aerik worked at the Hangar Theatre and Baltimore Center Stage. He is a graduate of U. Maryland Baltimore County.

Tuesday, January 21, 2020

Field Trip: Warner Brothers Studios

A Hollywood landmark!
Once you hear it, there’s no mistaking what you’re supposed to see – the oafish Patsy clomping together two coconut halves, always two steps behind Arthur as popularized in the 1975 cult classic Monty Python and the Holy Grail. But where did that idea come from? Where did this style of sound design emerge?

Sound effects were first incorporated into the “talkies” in the late 1920s with the work of audio pioneer Jack Foley. The field he developed – the art of creating and recording live sound effects (often with unconventional materials) in sync with a finished reel during the post-production process – revolutionized motion pictures in the early years of cinema and continues to permeate the industry today. This type of work is rarely required within the scope of traditional theatre (unless it is a specific choice for the production), so it was a real treat for the MFA sound design students when we had the opportunity to explore the sound stages at Warner Brothers Studios, taking an in-depth look at one of their Foley facilities.

This field trip (coordinated by Professor Vincent Olivieri as part of DRAMA 255: Trends in Modern Sound Design) began on the cool morning of December 6, 2019. After we carpooled up to Burbank, we left our vehicles with the valets and made our way past the multiple ongoing studio tours to the commissary, where we grabbed a quick bite to eat and some hot joe (side note: they have a killer breakfast spread). After a few minutes to take in the enticing aromas of scrambled eggs and bacon, we headed back outside to meet our host Alyson Dee Moore, an accomplished Foley artist with over thirty years of experience in the industry, half of which she has spent with Warner Brothers. She is also a two-time Emmy award-winning Foley artist (a recent win shared with the department for their work on HBO’s Barry) with numerous Golden Reel nominations from Motion Picture Sound Editors.

One "street" of the studio

Alyson first took us around the various sets situated about the lot – a small township built out of scenic skeletons and optical illusions meant to be filmed from specific angles. Each unit is highly configurable – some even had false walls (they were quite convincing, even up close) creating striking visual partitions within the spaces. Everywhere we looked, there were full crews of carpenters, electricians, and stagehands hard at work preparing for the next set. At times, I felt a bit like a pedestrian on a construction site, but no one seemed to mind us: they are likely used to random people walking about. We also spotted some unique staged statements scattered about meant for quick selfies from tourists. We obviously had to take a few for ourselves…

MFA Sound Design Students, 2019-2020

Next, we headed into the museum on the lot where technical aspects from titles in the Warner Brothers catalog were on display – from the beautiful gowns worn by Lady Gaga in A Star is Born (2018) to a scenic reproduction of Central Perk from Friends (1994-2004) to the forced-perspective table used in The Hobbit (2012). Towards the end of our stroll through the museum, we exited into a small enclosed room. Here, the audio from the film Gravity (2013) was played in a stemmed format so that we could listen to the sound effects, recorded dialogue, and soundtrack independently. Afterwards, we listened to all three together to hear how the tracks were crafted to complement each other – the spatialized mixing in ProTools really brought everything to life. We concluded in, where else? The gift shop!

Costume pieces and props from A Star Is Born (2018)

One of the many stages located on the lot
Next, Alyson took us around for a peek at some of the various sound stages while we waited for the working Foley artists to go to lunch (we didn’t want to disturb them while they were working). There were dozens of stages of various sizes (some akin to aircraft hangars), and most of them had full crews within, loading in the next production or striking the previous one. Each stage has a unique placard located next to the entrance with a comprehensive list of every title that had been worked on in that space. Then, we meandered through the main properties storage facility on-site which, to me, looked more like the best-stocked antique store that you could ask for. Alyson said that this was a frequent haunt for her, as many of the items required for Foley could normally be considered props. We finally came full circle, ending up back at the commissary, from which we departed for the recently vacated Foley studio.

The Foley studio, which was underground in the post-production facility, consisted of a main room where all of the actual Foley would take place, a kitchenette-style area with large tubs for water work, a smaller side room dedicated to storage, and a control room complete with studio-grade recording equipment. From Alyson’s description, three artists would be working in the facility for a given project – two Foley artists and a mix engineer. She also let us in on some of the more… unique sounds that she has had to come up with over her career, like the use of semi-frozen gelatin to capture the likeness of footsteps on an alien planet. The main space was full of odds and ends (all noisemakers), a pit full of sand, gravel, and debris, and some great shotgun microphones. After she fielded our questions for a bit, we had a chance to make some noise of our own. Then, we headed upstairs towards the daylight and contemplated lunch.

As the day ended, we headed out with Alyson to one of our favorite cafes right as an afternoon downpour swept over the city. We spoke more with her about some of the specifics of her work, but also what she enjoyed doing in her free time, ongoing hobbies, and the ever-present question of work/life balance. Something that I found insightful is that although longer hours are sometimes inevitable, her daily schedule was fairly regular with hours from around 8 am to 5 pm. After the rain let up a bit, we said our goodbyes and made our way back to Irvine.

All in all, it was a fantastic day full of spectacle and even more insight into a boundlessly creative line of work. Throughout the tour, one descriptor kept coming to mind that perfectly encapsulated the career and underlined its inherent connection to live theatre: resourcefulness.

I’m incredibly grateful that we had a fantastic quarter in our Trends class last fall and that so much of it was able to be spent out in the field or exploring other industries within sound design; this final excursion was the perfect cherry on top.

Biggest of thank yous to Vincent Olivieri, Alyson Dee Moore, and all the wonderful folks at Warner Brothers.

Photos by Garrett Gagnon, Vincent Olivieri, and Meghan Roche.

Field Trip: Backstage Disneyland

When I told friends and family that my class was going to get to go on a backstage tour of Disneyland to learn about their audio & other tech systems, I think they may have thought I was reading them a page from my dream journal. I have had a longstanding love of theme park history and design that started when I was a kid visiting Disneyland--asking Cast Members (the Disney term for "employees") annoying questions, using terrible dial-up internet to scour the ‘net for ride show scripts, reading every book on Imagineering that I could, and eventually keeping lists and notes and spreadsheets about how the park has changed over time and how it might operate. (Okay, maybe it was a little bit of an obsession. It's fine.) 

(Ah, I should quickly interject here that I'll be using a lot of parentheses in the post ahead--I was perhaps a touch too excited in revisiting the trip & wound up with a lot of vaguely-related sidenotes...sorry about that!)

Getting to combine theatre tech with this longstanding fascination is something I was looking forward to for weeks, so I am happy to report that when the day finally came, it somehow managed to surpass my already-high expectations.

Because Disney has some somewhat strict policies re: secrecy of backstage magic, there’s a limit to what I can share, but here’s a breakdown (sorta) of the day:
  • We started out at TDA (Team Disney Anaheim, a giant complex of what seemed to be primarily administrative buildings) and met our host, Jerry, who is an Entertainment Manager for Technical Services and whose history with the company goes back almost 25 years across a variety of roles! (We also discovered that I have photographic proof of having met him while he performed one of these roles in the late 90s, which was WILD, but due to Disney Magic reasons I’m not sure if I can share much more about that here, unfortunately…)
  • We saw where the fireworks get shot off every night behind ToonTown--for the last 15 years or so, Disneyland has been using a somewhat unique technology to set off their fireworks which significantly reduces both the smoke and noise produced by the show. (If you’re at all interested in the history of their fireworks show and how they used to be set off, as well as how Disney pioneered the use of music synchronization with pyrotechnics, I highly recommend giving this podcast episode a listen: https://www.youtube.com/watch?v=_1DRHSCuqyI)
  • We also saw where the parade floats are all kept when not “on-stage” and learned how audio gets distributed to (and from) them. (There were a number of Entertainment and Imagineering employees also gathered in the warehouse to test out a future parade addition, but that one is definitely going to need to stay a secret for now.)
  • The very first steps we took “on-stage” were somewhere most of us hadn’t had a chance to see yet: the brand-new Galaxy’s Edge expansion! It was gorgeous, and the complexity of the audio alone was kind of mind-blowing; it really contributes a lot to the storytelling the land is able to pull off. I got to talk to a Stormtrooper who snuck up on me and accused me of having Rebel sympathies (and who, we learned, uses a crazily sophisticated system to talk to people in a way that allows them to personalize every single interaction while keeping a consistent voice, more Disney magic I probably shouldn’t divulge here) AND try the infamous Blue Milk. (It was $8, VERY sweet, and had a texture that turned out to be deeply polarizing among those who sampled it.)
Backstage photos at Disney are very against the rules, so this is, unfortunately,
one of the few photos we have from the day. At least we look very cool in it?
  • We visited the venue for Mickey’s Magical Map in the large outdoor Fantasyland Theatre venue (which, fun fact, initially started out as a teen night club in the 1980s, more info on that here: https://www.youtube.com/watch?v=NqgJ0N99eGg) and got to take a peek into the booth of the theatre before watching the show, chatting a bit with the show’s audio technician and stage manager. It was a good show, but because of the noise of the nearby passing train and our lovely VIP seats (which had our backs against a pretty reflective wall) the overall mix of the show was pretty loud. It took us a minute to recover. (I personally loved watching the little kids get totally enthralled in the action onstage, though.)
Waiting for the show to start! (I think? I am just now noticing that Garrett and
Jack are very focused on something up front...)
  • Lunch! We took a break from walking around the park and grabbed a bite at one of the Cast Member eateries backstage.
  • AUDIO CENTRAL. Okay, again, not sure how much I can divulge here, but the bulk of the audio for the park (especially for parades and other outdoor shows, area music, etc.) is controlled from one very fancy room sitting right above one of the guest-accessible levels of a Main Street, USA building. And when I say fancy, I mean that it sort of resembled a TV show art director’s idea of what a theme park command center might look like--shades all drawn, two people behind glass at giant control desks with many monitors flashing different numbers and graphs, a wall of video feeds and light-up maps of both Disneyland and Disney’s California Adventure…et cetera. (I just found an old Disney Parks Blog post about Audio Central that is relevant here, though it is 5 years old and they have since doubled the size of the facility and apparently updated a ton of the gear. Check it out: https://disneyparks.disney.go.com/blog/2014/03/behind-the-sounds-inside-audio-central-at-disney-california-adventure-park/)
  • Backstage at Frozen in the Hyperion Theatre in Disney’s California Adventure! Because it was a “dark day,” we did not get to see any action happening, but it did mean that we got to walk around the entire theatre without worrying about a huge time crunch. We saw the set pieces and crazy costumes backstage (sidenote: the troll costumes are TERRIFYING without any actors inside of them), learned how some of their very intense automated lighting tracking worked, and walked all around the house of the theatre--clapping and marveling at what a remarkably acoustically dead space its designers managed to make.
  • Finally, we got back to Disneyland and walked through the park to see parts of the Christmas parade that was running at the time. It was pretty neat to see in action all of the technical elements we had been learning about all day, and a nice way to put a bow on the end of the day.
This blog post is now approximately four times longer than I was told it should be, so I’m going to go ahead and wrap things up now. TL;DR? Theme parks are very cool and very technologically complicated! And we got to go learn about them for a day!

I leave you now with a photo of these two outrageously photogenic Disneyland ducks:

It's amazing how advanced Disney's Audio-Animatronic tech
has gotten over the years!

Sunday, January 19, 2020


Lots of Company! Life is Company! Love is Company!

...Sondheim wrote a hell of a show in Company. The signature Sondheim harmonic structures, overlapping vocal lines, and thick orchestration are all on full display. I have only had the privilege of designing a Sondheim show twice before (Gypsy and Into the Woods), so I was thrilled when I learned I was going to be designing Company as the season opener.

When I was thinking through an outline for my design process, there were a couple specific challenges to overcome and goals to meet.

First of all, the show was in the Irvine Barclay Theatre (IBT), a fantastic 750-seat proscenium theatre and our usual venue for the season-opening musical. However, as this venue is also its own business entity, we don't get to call the shots the way we might in a UCI educational venue. We couldn't configure the equipment with as much flexibility. We had very strict hours during which we could work; being forced to leave the theatre at midnight during tech week is both a wonderful and a terrible thing! And lastly, we didn't have free access to the venue until load-in, so a lot more guessing and relying on CAD drawings was necessary to put the design together beforehand.

Secondly, some of my design goals caused some challenges around what gear our shop was able to supply. In a venue the size of the IBT, I wanted arrays, rather than point-source boxes, as my main system. And, being a musical theatre production, we used a left-center-right speaker position as is typical of most amplified musicals. Our main line array cabinet of choice at UCI is the Meyer M1D. We have 16 of these boxes, so I opted to use 8 per side for the L and R hangs. We were then fortunate to receive a little extra funding and rent the newer Meyer LINA system as the center hang (10 LINA cabinets).

My overarching goal of this design was to create a great sounding, reliable musical theatre system. Immersive audio and surround sound are hugely popular explorations in modern sound design, but for this show I wanted to instead focus that energy on perfecting the fundamentals. The show also did not call for a huge amount of additional sound content and effect. I added some NYC soundscapes from time to time, as well as a few incidentals where called for, but for the most part, anything more complex started to step on the score's toes.

The Barclay has a few in-house QSC KLA12 boxes per side, but given some of the restrictions mentioned above, we had much more control bringing in our own M1D arrays. Being able to rent the LINA array for the center vocal system meant that I had an amazing level of quality to work with. The groundplan, section, and array drawings show where I ended up with the system.

As you can see, the system also includes front fills, balcony fills, and "side fills," which were two Meyer UPJs hung immediately next to the LINA array but focused toward the far outside sections of the orchestra level closer to the stage. This is the area the LINA didn't quite cover within a 6 dB tolerance. With those UPJs and the front fills (Meyer UPMs), the orchestra level had even coverage everywhere except for about 4-6 seats.
Arrays and front fills just after the truss went up.
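As a rough illustration of why coverage tolerance matters: under free-field inverse-square falloff (a simplifying assumption; real rooms and line arrays behave more gently, which is much of the point of using arrays), the level difference between two listening distances is 20·log10(d_far/d_near) dB.

```python
import math

def level_drop_db(d_near_m: float, d_far_m: float) -> float:
    """Free-field (inverse-square) level difference between two distances, in dB."""
    return 20 * math.log10(d_far_m / d_near_m)

# A point source stays within a 6 dB window only while the farthest listener
# is no more than about twice as far away as the nearest one:
print(round(level_drop_db(5, 10), 1))  # → 6.0
```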

In the last year, we have also acquired a slew of new equipment to fold into our designs. The main impetus was the old and decrepit state of our former large-format mixing console. The Avid VENUE from years past was falling apart physically and quickly becoming outdated as an educational tool. We're now using an Allen & Heath dLive system, consisting of the dLive DM48 MixRack with an S7000 surface and several digital snakes (DX32 and DX168) for expansion. This system finally allows enough I/O to handle most modern designs without restriction. The following shot of the tech table shows the control software for the dLive system. It also shows the Allen & Heath IP8, an amazingly versatile fader bank that gives me control of the system in a much more intuitive way without being at the console.

The wireless mics used on performers were the Shure UHF-R series that have been so ubiquitous in the last decade (or two...). We were able to use DPA 4061 microphone elements on all of the UR1 body packs, double-miking our lead, "Bobby," since he is so rarely offstage. The fantastic A2 crew, trained by assistant sound designer Kyle Causey, was always keeping tabs on the status and performance of each mic, such that we didn't have any major incidents during the run of the show. They were using Wavetool, which is on the screen of the iPad in the previous picture. It's software that combines the RF monitoring of Wireless Workbench with audio monitoring capabilities, letting them look and listen to each microphone from a computer, iPad, or iPod Touch anywhere on the deck. This was our first show using Wavetool, and it proved to be an awesome update to the A2 workflow here!

Another purchase we've made in the last two years is Waves SoundGrid. This is an external FX processing server that loops into the console via an expansion card - it allows us to add any Waves plugins to the live effects chain in the console! In this case, I was able to use Waves plugins like their DeEsser, RVerb, TrueVerb, C6, and a few others to augment the limited processing the board can do. At the far end of the FOH table picture is the monitor with control software for SoundGrid. In front of the S7000 is our mixer/A1 JJ Margolis, who was thankfully able to put up with me during this whole process! The picture was candid for JJ but less so for Amin, our QLab operator.

The orchestra was the last major piece of the [sound] puzzle, and in this case, they were located on stage, between the NYC buildings and the actor platform. Having the new digital snakes with the Allen & Heath equipment made it so much easier to integrate the orchestra into the system. Because of the orchestra's location, there was a pretty large distance from them to the audience, so amplifying them and keeping their stage sound level under control was more manageable than I anticipated. The actors had the advantage of being closer to the orchestra at all times (though we still used a full stage monitor system). And, I actually found it much easier to get a consistent orchestra/vocal blend throughout the audience since the orchestra was not immediately in front of the front row! I didn't snap any up-close pictures of the orchestra, but this might give an idea of their relation to the audience:

I was lucky to have mixed the season-opening musical the year prior, Legally Blonde, with Jack Bueermann as designer. That show was also in the Barclay, so I was able to get a grasp on how to work in the space and what might work best when it came time to design Company. Because of the extra obstacles that working in the Barclay introduces, I tried to be ahead of the game in as many areas as possible. I started thinking about system design options the previous Spring and got the bulk of the drafting completed before we came back for the Fall quarter. I was met with a constant feeling that I was missing something, or that something was bound to go wrong, but in the end the preparation paid off, and the team was able to get every piece of the puzzle together. I enjoyed working in the Barclay and felt rewarded by the success of the show.

Assistant: Meghan Roche
Assistant: Kyle Causey
Mix engineer: JJ Margolis

Wednesday, January 15, 2020

Field Recording, Crystal Cove State Beach

Last fall, I took the students in my Trends in Modern Sound Design class (all six MFA designers, and a few music PhD students) to Crystal Cove State Beach to do some field recording.  We divided the class into three groups: one group made b-format and spaced pair recordings, one group made co-incident pair stereo recordings, and one group made close-up mono and other recordings.  We talked about how to plan for a field recording trip, but each group of students had to plan, prep, pack, and carry their own gear.

We started out at the central section of the beach, where there are a few cafes and lots of people. Some groups chose to wander far from the people to get some isolated sounds, but others chose to embrace the public and get some 'folks at the seaside' recordings.

After that, we packed up and drove to the north edge of the park, which was virtually deserted except for a few seagulls.  We hoped to get some sounds of sea spray on rocks, but the surf was decidedly down that day.  Oh well.  At least we got a great sunset!

The students are now madly editing the sound effects and metadata.  Once that post-production process is complete, the sounds and metadata will be uploaded to our sound effects server so that the UCI Sound Design community can have access to it in perpetuity.  Once all that is done, I'll share some examples here!

Thursday, January 9, 2020

Track and Field 2019

In my Trends in Modern Sound Design class, we often do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:

1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission gets passed to another student, then uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds in duration, but other than that, they can do anything they want. Students turn in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student gets to work on each thread.

This quarter, I had nine students, so there are nine threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy!

Thursday, January 24, 2019

Are you a Sound Designer/Engineer headed to USITT in Louisville in March? This year, Sound will be included in the Portfolio review sessions, so if you want to schedule a time for some pros to give you advice/feedback, here's your chance! Apply here!

Anyone is welcome to present, at any level of your career. Maybe you're prepping a portfolio for grad school, or maybe you're prepping a portfolio for a tenure review? Maybe you're about to graduate and want to give your portfolio a last bit of polish before hitting the job market?

Maybe you're just looking for a little career advice?

 Sign up using the link if you want to present. I'll be looking at each presenter and assigning them a pair of respondents based on what the presenter is interested in.

 And, if you are interested in being a volunteer respondent, please email me at olivieri@uci.edu.

Wednesday, December 12, 2018

Track and Field, 2018

In my Trends in Modern Sound Design class, we sometimes do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:

1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission gets passed to another student, then uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds in duration, but other than that, they can do anything they want. Students bring in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student gets to work on each thread.

This quarter, I had ten students, so there are ten threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy!

Tuesday, December 11, 2018

2018 Film Sound Projects

Every fall, all of the Sound Design MFA students take a course called Trends in Modern Sound Design.  The structure of the course is very loose, designed so that we can examine interesting avenues of sound design without having to stick to a strict course outline.

Over the past few years, one of the projects that I've done with the students is a film project.  The class is divided into groups, and each group is given two 60-second clips of film. One of the clips has more dialogue, and the other has more sound content.  The student groups are tasked with replacing all of the audio in the clips. This includes re-recording dialogue, recording and/or editing sounds, and creating and/or editing music.  It's a big project that takes most of the quarter, and while each group is responsible for their own final product, they help each other out with big foley sessions, dialogue sessions, and walla sessions.  Towards the end of the class, we watch the videos and engage in a discussion of the final work.

We don't spend a lot of time in class preparing for the project. We talk a little bit about spotting (i.e., identifying sonic events) and DAW organization for film projects, but this project really is designed to encourage the students to find their own solutions. And, when we review the final work, the conversation is generally more about the creative impact of their work than about the technical implementation.

Here are the videos from this year's project!  (and, I should note that I did not secure rights to use these film clips.  All the work was done for educational purposes only.  If you are or represent the rightsholder and want me to remove the clips, please let me know and I'll do so immediately).

Special thanks to the members and friends of the UCI Sound community who lent their voices to this project: Ezra Anisman, Jack Bueermann, Hailey Byerly, Toni Fajt, Garrett Gagnon, Joanna Hui, Caroline Jones, Joseph King, Hunter Long, Sophia Metcalf, Hunter Moody, Joaquin Palting, Malik Proctor, Andrew J. Tarr.

True Romance (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
Triplets of Belleville (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
Hacksaw Ridge (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
Get Out (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
The Conversation (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
Arrival (UCI Sound replacement project) from Vincent Olivieri on Vimeo.

Tuesday, June 12, 2018

Sound Art: final projects

This term, I taught a class in Sound Art.  The course lent itself to self-direction. Each week, students would read for an hour or two on a sound art-related topic of their choosing.  When the class met, we would incorporate their readings into the discussion.  Our topics ranged broadly, from the definition of sound art, to the social responsibilities (or lack thereof) of artists, to the role of virtuosity in artmaking, to the technical tools for implementing sound art.

Additionally, students worked on three projects, the last of which was a sound art piece of their own creation.  Last week, the students presented their final work.  I won't go into the details of the pieces, except to say that they had a huge range.  One student created a balloon-festooned sound walk. One placed contact microphones on a campus bridge and used the inputs to create new content. One created an interactive sculpture using mirrored cubes, hyperdirectional loudspeakers, and QR codes.  The pieces were varied and very interesting.  Here are some photos!

Tuesday, April 24, 2018

PLUMAS NEGRAS - Thesis in Review

Just over a month ago, we opened and closed our production of Juliette Carrillo’s Plumas Negras, a beautiful three-act play about three women in one family, with each act focusing on one of them and the struggles she faced in her time period. This show was a very large undertaking that involved a lot of challenges I had never encountered before, and I’d like to share my process, thoughts, and reflections with all of you now that I’ve had time to reflect (or, really, time to take a break from AMERICAN IDIOT and come back to this).
Photo: Vincent Olivieri

The music:

From the beginning of the process, I talked with Juliette about the importance and function of the composed music within the world of the play. It had to serve two functions: carry the emotional weight of the characters, by creating character themes that intertwine to form the larger melodic theme we come to know as “Plumas Negras”; and use voices to distinguish the two worlds presented within the play: the world of the living, and the ancestral world, inhabited by souls long gone from the world of the living who take the form of crows. One further point was that no music was to be electronic - all of the sounds used to create the music had to be natural, acoustic, of this earth. Earthiness and naturalism remained a constant theme throughout the entire design process for all departments, and in pursuit of them we landed on the decision to have all of the music played live. I had never composed music for live musicians to play, so this was definitely going to be a challenge.

The research process for the music and the design of the show became an ethnomusicological foray into traditional Mexican and Mexican-American music and instruments. Because we traverse time periods throughout the show, genres and musical tastes shift, so it was wonderful to be able to listen to traditional folk music, moving through norteños, corridos, mariachi, and banda, to name a few genres.

In the past, and largely unrelated to this show, Juliette had worked with renowned South American harpist Alfredo Rolando Ortiz (who teaches in Corona, CA), and she entertained the idea of including harp in our show in some musical form. I took this point of inspiration and delved into uses of the harp in South American countries, moving steadily north to its uses within Mexico. Having found the traditional folk ensemble of conjunto jarocho, I used it as a starting point for one avenue of composition, with the harp representing the ancestral world of our crows. Guitar to represent the world of the living came easily - it is one of the most accessible instruments to learn, and true to form, one could imagine a field worker taking refuge in the shade during a break from a day’s work, playing melodies on a sun-beaten guitar, much like the world of Plumas. Two of our wonderful cast members, Ernest Figueroa and Amilcar Juaregui (AJ), played guitar within the show. Juliette asked Alfredo if he would be able to perform, but due to scheduling conflicts he was unable to; instead, he recommended one of his students whom he greatly trusted, Nina Agelvis, who was studying here at UCI (and who also happened to be our Honors undergraduate in Lighting Design!).

Harp: Nina Agelvis - "Crow's Lullaby"
Photo: Fernando Penaloza

I began by throwing proverbial spaghetti at the wall for musical ideas for the main theme, taking inspiration from traditional folk melodies, popular genres, and soundtracks such as Disney Pixar’s Coco (which is a fantastic film, and you all should go watch it if you haven’t seen it. Or go watch it again and cry, because it’s that good). What I landed on was a mix of all of these, creating threads for each character theme to expand upon as we progress through the show, so that when the theme reprises at the end, it resonates that much more strongly in the hearts and ears of the audience. Think of the music from Disney Pixar’s UP, and the use of its theme to highlight events of happiness, sadness, and everything in between, so that over the course of the entire film the music carries the weight of the narrative, taking the listener on a musical journey similar to that of the characters. This concept forms the core of my compositional process, and this production was no exception.

I was in rehearsal essentially every day for the last three weeks leading up to tech, working and developing the music with Nina and our wonderful guitarists, and seeing how the action on stage blended with what I was trying to do musically. Without that level of interaction, the music would surely have fallen flat and never become another voice within the world.

I also had the task of composing a folk melody, sung a cappella. The lyrics were written by Meliza Gutierrez, the actress playing Concha in the show, and I referenced slight melodic themes from other pieces within the show to create the melody we hear. The piece is heard twice: at the beginning, where only half of the melody is heard, and at the end of the show, where we hear the entire melody.

As for existing period music in the show, popular music of each era was selected with an ear for what these workers would have listened to, and I asked each cast member if there were songs their parents or grandparents would listen to, drawing on popular artists of the time. It was definitely heartwarming and touching to see families in the audience remember songs that their own parents or grandparents might have listened to, perhaps in a similar way to how our characters did. These pieces played out of practicals on the set - a gramophone and a transistor radio, in their respective time periods.
Crows inspecting the phonograph. Photo: Fernando Penaloza

This piece in particular found its way into our hearts:

In the end, I composed seven pieces for the show. But I must give the utmost praise and gratitude to Ernest, Amilcar, and Nina. I created skeletons of each guitar piece, with the musical intentions intact, but left the true voice of the music to be carried by the performers themselves, for their musicality and knowledge of their instruments were far greater than anything I could hope to achieve on my own. This wonderful collaboration let the music of the show really come alive, as it was given life by multiple people. All of the music, in order, can be found here:

The System:

The space for this show was the Robert Cohen Theatre, our small black box space, configurable in any way. And it went through every way: it started as a three-quarter thrust, shifted to a playing space in one corner, went back to three-quarter thrust, and eventually landed in the alley configuration seen here. Regardless of the configuration, though, my main design intention was to take advantage of more realistic sound spatialization and to change the acoustic character of the space using a Virtual Room Acoustic System (VRAS), now integrated as part of Meyer Sound's Constellation system. Thus, aside from the main address system, the space was treated largely the same in every layout.

To achieve realistic spatialization, movement, and VRAS, I had to go with Meyer Sound’s Digital Audio Platform, the Matrix-3 system. We have the newer D-Mitri system here at UCI, but not nearly enough outputs, nor the processor for VRAS, to achieve the design intentions of the show. So I went back to our good ooooooold friend Matrix-3. What resulted was a very large system comprising a few layers: overheads, mains, surrounds (audience listening height +2ft), and ground surrounds, in addition to a truck engine/exhaust system to make a real 1940s Ford F1 come to life. A large system, no doubt, and load-in was further complicated by the absence of a sound supervisor (we have Jeff Polunas aboard now, which is fantastic!), so generating logistical supervision paperwork in addition to the technical documents became a significant time commitment.

Photo: Fernando Penaloza

To function properly, VRAS needs multiple microphones, ideally spread evenly across the space. The signals those microphones pick up are redistributed to every speaker in the system; the speakers' output is then picked up by the microphones again and redistributed further. These seemingly random regenerated signals are what we hear as reverberation, and they help our brains correlate what we see with what we hear - in a large cathedral, for example, we expect to hear a very reverberant space matching the size of the room. The power of VRAS is that it lets us control what we hear, so a space can transform almost in an instant from completely dead to sounding like a cathedral. We were fortunate that this show’s configuration allowed the microphones to hang lower than they would in a proscenium show, since their trim heights did not intrude on the scenery. That gave me greater control of gain before feedback and meant I didn't have to push the microphone gains as high. VRAS also needs the space itself to be as dry as possible, eliminating any naturally occurring reflections in the room; thus, each wall of the theatre was covered in curtains, and any bit of floorspace not used for action was carpeted. To our benefit (but not to our lungs, because dust), the dirt border/stage acted as a fantastic absorber of sound, its porous, thick base soaking up a large range of frequencies and foot noise.
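As a back-of-the-envelope model of that regenerative loop (my own illustration, not Meyer's actual algorithm): treat each mic -> matrix -> speaker -> mic round trip as one delayed, attenuated feedback pass. The loop gain then determines how long the reverberant tail takes to decay:

```python
import math

def rt60_from_loop(loop_gain, loop_delay_s):
    """Rough decay time of a regenerative acoustic loop.

    Each trip through the mic -> matrix -> speaker -> mic path
    multiplies the signal by `loop_gain` after `loop_delay_s` seconds,
    so the level falls by -20*log10(loop_gain) dB per trip. RT60 is
    the time to fall 60 dB.
    """
    db_per_trip = -20.0 * math.log10(loop_gain)
    return 60.0 / db_per_trip * loop_delay_s

# Raising the loop gain lengthens the apparent reverb time:
short = rt60_from_loop(0.5, 0.05)   # ~0.50 s: a dry-ish room
long = rt60_from_loop(0.8, 0.05)    # ~1.55 s: much more cathedral-like
```

It also makes clear why the room has to start dry and why gain before feedback matters: as the loop gain creeps toward 1.0, the decay time shoots toward infinity, i.e. feedback.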


I had used CueStation (Matrix-3 once again) on my previous mainstage here, Our Class, but only really for its fantastic Spacemap tools. I wanted to expand on that tool, but I also took on the challenge of running the entire show off of WildTracks, CueStation's playback engine within the software itself.

This presented a number of challenges. I had never programmed a show solely in CueStation, nor had I used WildTracks this extensively before. What resulted was a lot of time spent in tech, and many, many hours after tech concluded, cleaning up programming and refining the bajillion ways you could execute a single cue. Working in QLab would have been much faster for building and updating all of the cues, but the knowledge I gained from using CueStation outweighs any ease I would have gotten from simply programming in QLab.

Thanks to the control CueStation allowed, I had 256 busses at my disposal for configuring matrices to assign channels. That let me put our class experiment in Wave Field Synthesis (WFS) and Source Oriented Reinforcement (SoR) into practice. Our harpist, Nina, would be playing essentially in one of the seating sections, and a concern was listening levels for the audience bank directly across from her, the farthest away. The conundrum: she couldn't play loud enough for the farthest listeners without deafening those sitting next to her. Thus, the idea of WFS came to mind: subtly reinforce Nina's sound in a way that still lets everyone localize to her position. We took the same calculations and formula from our class, calculated the distances in 3D in Vectorworks, and implemented the amplitude and delay adjustments in a "harp bus" within the software, so that whenever the bus was assigned, the output of the microphone capturing Nina's harp would automatically be matrixed to her exact location. It worked incredibly well and was easily audible from any spot in the theatre without making any one area too loud.
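The per-speaker numbers boil down to distance math. Here's a minimal sketch of the kind of delay/level calculation involved (the helper name is hypothetical, and the real distances came out of Vectorworks): delay each speaker's feed by the harp-to-speaker travel time so the acoustic sound arrives first (the precedence effect keeps localization on Nina), and roll the level off with inverse distance.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at roughly room temperature

def sor_delay_and_level(source, speaker, ref_dist=1.0):
    """Delay (ms) and gain (dB) for one speaker reinforcing a live source.

    Delaying the feed by the source-to-speaker travel time means the
    direct acoustic sound reaches each listener first, so the precedence
    effect keeps the audience localizing to the source. Level follows
    inverse-distance rolloff (-6 dB per doubling of distance).
    """
    dist = math.dist(source, speaker)          # 3D Euclidean distance
    delay_ms = dist / SPEED_OF_SOUND * 1000.0
    gain_db = 20.0 * math.log10(ref_dist / dist)
    return delay_ms, gain_db

# Example: harp at the origin, a speaker 5 m away (3-4-5 triangle in plan)
delay, gain = sor_delay_and_level((0.0, 0.0, 1.2), (3.0, 4.0, 1.2))
# delay ~14.6 ms, gain ~-14 dB relative to a speaker 1 m from the source
```

Run once per speaker and you have exactly the kind of amplitude/delay table that gets baked into a matrix bus.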

Nina's Harp SoR Calculations


I did a fair amount of research, reading as many VRAS documents as I could find, and it was A LOT of math that reminded me of all those years of calculus and physics. It also gave me the same "I want to bang my head against this wall" feeling as I delved further down the rabbit hole. Once we were in tech, however, we set up a matrix for each microphone and added in the attenuations to each speaker. A 12x27 matrix can make for quite the headache, but on the first test run nothing blew up and we heard an echo - progress!
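Conceptually, that mic-to-speaker routing is just a matrix multiply: 12 inputs by 27 outputs, with each entry an attenuation. A toy sketch (illustrative numbers only - the real system does this per-sample in DSP, with the reverb processing inside the loop):

```python
import numpy as np

N_MICS, N_SPEAKERS = 12, 27

# Hypothetical attenuation matrix: entry [m, s] is the level (in dB) at
# which microphone m is fed to speaker s. A flat -12 dB here stands in
# for the hand-tuned values we dialed in during tech.
atten_db = np.full((N_MICS, N_SPEAKERS), -12.0)
gains = 10.0 ** (atten_db / 20.0)       # dB -> linear gain

# One frame of microphone samples routed to all 27 speaker feeds at once.
rng = np.random.default_rng(0)
mic_frame = rng.standard_normal(N_MICS)
speaker_feeds = mic_frame @ gains       # shape: (27,)
```

Tuning the show amounted to massaging those 324 entries (plus reverb and EQ) until each scene's acoustic sat right.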

From there it became a matter of constantly fine-tuning the reverb, EQ, and attenuation values until we landed on a good base to work from. Each scene of the play had its own VRAS treatment, letting our ears take us to the different locations of the play: the open fields of Salinas, slightly distant; a cramped office interior, dry with a short echo; the drifting world of the crows. While challenging, it was definitely rewarding and added a new dimension to the play.

Pre-tech descriptions and planning of VRAS and Spacemap


I have always loved Spacemap and its power to create multichannel panning and movement of sounds. Plumas was no exception, and a fair number of cues took advantage of Spacemap and its triset mapping. In particular, I found the overhead plane and passing trains to be the most effective uses of Spacemap, achieving a very realistic image of sound moving from one location to another.


As is the norm for any UCI show, all of us in team sound watch the production together and give a critique afterward, offering our thoughts and feedback to the designer. Plumas functioned similarly, but as with all thesis projects, an outside industry professional also comes to watch the show and impart their comments and criticism. Sound designer and composer Kari Rae Seekins was my thesis critic and gave me invaluable feedback. Much of it I wish I could go back in time and implement, but when is a show ever truly perfect? There is always something to go back and tweak, ad infinitum.

End of critique with Kari Rae Seekins
Photo: Vincent Olivieri

From my peers and mentors, I received equally strong feedback, both positive and critical, which I appreciate greatly.

I would like to thank my wonderful assistant, Hunter Moody. This show would not have been possible without your help at every step of the process, from shop and load-in tasks to wave field synthesis calculations, Spacemap programming, and making sure I was a human who got some sleep and food. Thank you for everything!

In retrospect, I would have taken advantage of CueStation's 256 busses more efficiently, which would have drastically cut programming time and let me create content and treat fades much more elegantly. Curation of some sound effects would also have taken a stronger presence, as some smaller sounds fell by the wayside in favor of increasing the robustness of the system. It was not a perfect show by any means as far as the content I created sonically; however, in an academic setting that allows for the exploration of new technologies and the challenging of one's own limits, I feel truly thankful to have had the opportunity to learn so much and be a part of this fantastic production. Plumas will forever hold a spot in my heart, not only for what I learned but for the story and message it told, giving the stage to a group of people unfortunately not seen in the limelight as often as they should be, and letting their voices and stories be told. Let fly.

Photo: Fernando Penaloza

- Jordan