Wednesday, December 7, 2016

Audio-for-Video Projects

Most fall terms, I teach a class called Trends in Modern Sound Design. Part of the class is a series of open-topic discussions, but other parts are annual projects that we all look forward to. One of them is a sound-for-picture project. I divided the students into pairs, and each pair got two sixty-second clips of film. Their task was to completely strip and replace the audio in the video, including re-recording all the dialogue, replacing all the sound effects, and layering in music. The students had to work as recordists, directors, and producers. On the last day of class, we watched each video and talked through the team's process in recreating the sound.

Ben & Mingna:


Hateful Eight, UCI Sound Design exercise from Vincent Olivieri on Vimeo.



Usual Suspects, UCI Sound Design exercise from Vincent Olivieri on Vimeo.



Andrea & Andrew:


Mad Max Fury Road, UCI Sound Design exercise from Vincent Olivieri on Vimeo.


Jordan & Ning


The Revenant, UCI Sound Design exercise from Vincent Olivieri on Vimeo.


Trumbo, UCI Sound Design exercise from Vincent Olivieri on Vimeo.


We had lots of help creating these videos, mostly from friends of the program lending their time and voices in the studio. Thanks especially to Sam Arnold, Aaron Arroyo, Amy Bolton, David Hernandez, Brandon Hitchcock, and MFA Sound Designer Ben Scheff for providing their voice work!


(a note on copyright: as these projects were designed as classroom exercises, we did not secure any rights to these films. If you are the rightsholder of any copyrighted material contained herein and would like us to remove these clips, please let us know and we'll be happy to.)

Saturday, December 3, 2016

The Iliad

THE ILIAD
MENIN: RAGE

Just a few weeks ago, we opened and closed our production of The Iliad. It was a huge endeavor for something that ran only five performances at an hour and fifteen minutes each, but I am immensely proud of what came out of it. It involved a lot of firsts for me. I’d like to share some of my process with you fine folks.


THE MUSIC:
I started talking to Annie Louie about this production of The Iliad almost eight months ago. Annie had settled on a translation by Stephen Mitchell, which is relatively recent. Unlike some of the stuffy, stilted, belabored, and ultimately un-engaging translations I had read in school, this version was very much alive. The language was clear and uncomplicated, making room for the incredible metaphor and dialogue to really come through. The whole book was, while still substantial, a pretty easy read. Annie knew that there was no way we could do justice to the whole story in the course of a single show, so she set about narrowing down the length of the story so that we stood a chance of being able to tackle its depth. In terms of design, we were speaking of the show in shamelessly cinematic vocabulary, so it followed that our music would go in an orchestral, cinematic direction.

“Sure,” I said, without hesitation, “We can do that.”

Now, it’s not that the statement wasn’t true, exactly. If I had said “We *know how* to do that,” that would have been a huge lie. I had absolutely no idea how to tackle something of symphonic scope, and certainly no precedent for putting it in front of an audience. I did know a little of the vocabulary, though. A lot of it came from breaking down scores and reading along with symphonic recordings, or possibly just from having been around a huge variety of music ensembles throughout my life. Orchestration and instrumentation are things I’ve made a point of studying, but I hadn’t exactly… you know… done it before.
Because our costumes, lighting, and very limited set were beginning to take a more modern or temporally ambiguous form, I elected to include electronic and synthetic elements generously, to pair and contrast with the more traditional symphonic sounds. Some of this choice was informed by my comfort level in writing for an orchestra. I knew, for example, that if I contrasted synthetic sounds for the gods against traditional symphonic sounds, I was less likely to fail than if I attempted to make that distinction solely with the orchestra. So I was also, in some ways, hedging my bets.
Generally, the way Annie prefers to work requires the sound designer to be present and functionally engaged with the cast during the entire rehearsal process. In the past, I had generally composed by watching rehearsal, taking notes and/or filming, and then sitting down after the fact with what I had gathered and writing then. With rehearsal running eight hours a day, five days a week, however, that approach proved not terribly useful. Similarly, in my usual process I tend to track my cues pretty meticulously from the get-go; here, it was such a tremendous slowdown that I quickly abandoned it. In a few different ways, the first week of rehearsal was a bit of a wash. I had started to build vocabulary and had locked down a couple of brief sequences in great detail, but I hadn’t really been working in the moment with the cast the way I was hoping I could.
So the next week, I made the decision to sketch the music in the room, as basically as I possibly could, and build rhythmic skeletons that I could paint over later. Holy crap, did that make all the difference in the world. For one thing, I didn’t necessarily need to film and cross-reference everything I wanted to write for, which was a huge time saver at home. I could just step into my home studio and play for an hour or two, then bring in the draft and try it the next time it came up in rehearsal. Take notes, rinse, repeat. I first started drafting in Reason, which I know is a painfully limited environment, but it’s one I knew I could work quickly in. Almost immediately, Mike, in traditional mentorly fashion, pointed out that Reason’s sub-par orchestral sample library was limiting my vocabulary, and that I was writing music that played into and around its limitations.
We were about three weeks in at this point, so the idea of integrating a new sample engine into my workflow was daunting; never mind the problems that would come up from just learning it, and double-extra never mind that, since Reason doesn’t play nice with VSTs, whatever I wound up using would have to be rendered and recorded, which stood to be a substantial slowdown. With East/West offering Composer Cloud for about $30/month, though, I figured it was a pretty low-risk thing to try. So I downloaded it, along with all the Hollywood libraries (which was NOT a quick process), and in about four hours, including setup and building the workflow for the first time, the first major music sequence I built went from this:


to this:




So… yeah. There was no looking back. The first time I loaded up a tutti brass patch and hit that tuba/bass trombone/cimbasso octave, I knew that I was in for a great time. I honestly don’t remember most of the time I spent arranging that piece. It was just flowing.


THE SOUNDS:
We had used the term “elemental” when talking about the effects we wanted. Things like rain, wind, and thunder had to be enormous. While initially we had talked about shying away from literal combat foley, it became apparent as we were building a vocabulary for the gods, and for mortals when they were being aided by the gods, that mortal-vs.-mortal c(K)ombat needed to give us somewhere to go. If we made all the mortal clashes big and cool-sounding, it wouldn’t leave the room we needed for the gods. So what we settled on was a selection of pretty typical foley sounds used as sparingly as we could, at slightly heightened volume. The gods had swooping synthesized and repurposed narrative sounds accompanying their actions, which leaned toward showing their power and motivations rather than the physical results. This gave us a lot of room to make some cool in-between sounds when the gods were helping mortals out.
As with the music, I have historically tended to work on my own and come up with a completed idea to try out against a completed scene. I knew for sure that approach wouldn’t fly in this process, so I came prepared to do the majority of my cue sculpting in QLab. Once I settled into the process, it became incredibly reflexive. First I’d build a temporary cue, which might be one element of the final, or might simply be a random sound that fit the gesture. By the end of a given rehearsal day, about half of the cues I had built were just about what ultimately wound up on stage. Often, the remaining work was in creating a convincing variety around a copy-pasted gesture.


THE SYSTEM:
Our space for this show was the Winifred Smith Hall, affectionately known as “The Winnie.” It’s sort of a modified thrust built mostly for chamber and small-ensemble music performance. It carries spoken word beautifully and has a lovely reverb on its own. Having mixed a couple of concerts in there, I knew that this was something I would have to work with, rather than against. While the room diffuses high frequencies well, it has a tendency to build up low-mid energy really quickly: your average electric bass amp generally turns to mud at any serious volume. This is compounded by some aging and questionably conceived rigging in the space, which meant I couldn’t bank on getting a lot of heavy equipment in the air. The back walls are also flush with the back row of seats, so there was no easy spot to get any sort of rear surround. Top that off with the space being under renovation until the week before load-in, which was the second week of classes at the beginning of the school year, and it all meant that I needed a system scaled to the time and labor we had available.


The WSH in a rare empty moment

A shot of the WSH from the stage


The biggest complication this created was subwoofer placement. I needed to put the sub someplace where I could tune it to my mains and still really get some rumble to the audience. Hanging it would inevitably compromise the power I needed. After running a few scenarios through MAPP (and estimating the 3D interactions that it could not predict), I landed on a somewhat strange spot that kept it close enough to the audience for power, in line enough with the mains that I could compensate with delay time, and far enough onstage of the voms that it wasn’t going to just trap all of the energy. What I wound up with was just out of the way, and while it was definitely a bit hotter toward the front few rows, at the wavelengths it was throwing, that more or less evened out. My biggest coverage gaps fell in the aisles.
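As a side note, the “compensate with time” part is really just arithmetic on the path-length difference between the sub and the mains to a reference seat. Here is a minimal sketch of that math in Python; the distances are placeholders I made up for illustration, not the actual Winnie geometry:

```python
# Rough sub-to-mains time alignment: delay the closer source so that
# wavefronts from the sub and the mains arrive together at a chosen
# reference seat. The distances below are made-up placeholders, not
# the real Winnie geometry.
SPEED_OF_SOUND_M_PER_S = 343.0  # at roughly 20 degrees C

def alignment_delay_ms(dist_mains_m: float, dist_sub_m: float) -> float:
    """Delay (in ms) to apply to whichever source is closer to the listener."""
    path_difference_m = abs(dist_mains_m - dist_sub_m)
    return path_difference_m / SPEED_OF_SOUND_M_PER_S * 1000.0

# Example: mains 9.0 m from the reference seat, sub 6.5 m away;
# the sub is closer, so it gets roughly 7.3 ms of delay.
print(f"{alignment_delay_ms(9.0, 6.5):.2f} ms")
```

MAPP (and measurement in the room) does the real work of predicting the interaction; this is only the back-of-the-envelope starting point.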


The surround placement inevitably came at the expense of the back row. Doing overheads instead would’ve ultimately lost me even more coverage, so the back row wound up getting a little hosed. It was still perfectly audible, but about 6-8 dB down above 1 kHz. Not great, but the best compromise I could come up with.




The mains were actually fairly easy. I had a left and right low on the deck, for which I selected UPQs, and a left and right high, which were UPJs. A trio of UPJrs served as a mono front-address main for each section, which I used for effects, for sourcing action on the apron, and as an HF fill for the high mains upstage, which were playing most of my musical content.

What I ultimately arrived at was this paperwork package:






AND SO:

In a rare opportunity here at UCI, it was arranged for an archival video of the production to be made. Unfortunately, my attempt to record a decent surround image of the play failed (and I’m blaming MOTU), so what you can see below is a mono direct sum of the audio sends in addition to the live recording. It’s not a close reproduction of what happened, but it’s certainly enough to give an idea of what the content sounded like. Thanks for reading! Please hit me up in the comments section if you have any questions or comments.
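For what it’s worth, that mono direct sum is about as simple as mixdowns get: average the send channels into a single track. A minimal sketch, assuming a multichannel WAV of the sends and the numpy/soundfile libraries (the filenames are hypothetical, not the actual archival files):

```python
# Collapse a multichannel sends recording to a single mono track.
# "sends.wav" / "sends_mono.wav" are placeholder names.
import numpy as np
import soundfile as sf

data, sample_rate = sf.read("sends.wav")   # shape: (frames, channels)
mono = data.mean(axis=1)                   # average (scaled sum) of all sends
mono /= max(1.0, np.max(np.abs(mono)))     # guard against clipping on write
sf.write("sends_mono.wav", mono, sample_rate)
```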




Thursday, December 1, 2016

Track and Field

In my Trends in Modern Sound Design class, we've started doing an annual project, inspired by alumnus Stephen Swift, called 'Track and Field.' Track and Field is an exercise in both creativity and efficiency, and it's played over the course of the entire quarter with a series of due dates.

On the first due date, each player creates 30 seconds of sound, completely self-driven. They can spend no more than 60 minutes on the piece, and once they're done, they deliver both a rendered mix and a series of stems. As a class, we listen to the mix. After class, a different player takes the stems and uses them to create something entirely new, also spending no more than 60 minutes on the project. At each due date, we listen to the most recent batch of mixes, and then a new student takes the stems to create a new version.

Beyond being a highly creative project, the requirement that no player spend more than 60 minutes on a version motivates everyone to work as efficiently as possible. Different students choose different technical goals for themselves, with some choosing to sample a lot of different DAW platforms and others choosing to delve deep into the intricacies of one particular piece of software. An open-ended project like this allows them to shape their goals and pursue them with focus.

There were six students in the class this quarter, so there are six threads in this year's iteration. I set up a transfer matrix so that each student got to touch each thread. At the end of the quarter, instead of listening just to that week's versions, we listened to each thread all the way through. It's interesting to hear which elements come and go over the course of the project, and which work as throughlines from start to finish.
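For the curious, that transfer matrix is essentially a Latin square: each round, every thread moves to a different student, and after six rounds every student has worked on every thread exactly once. Here is a minimal sketch of one such rotation in Python (the student labels are placeholders, not the actual roster):

```python
# A cyclic Latin square as a pass-around schedule: row = round,
# column = thread, entry = which student works that thread that round.
# After six rounds, every student has touched every thread exactly once.
# Student labels are placeholders, not the actual class roster.
students = ["S1", "S2", "S3", "S4", "S5", "S6"]

def transfer_matrix(players):
    n = len(players)
    return [[players[(rnd + thread) % n] for thread in range(n)]
            for rnd in range(n)]

for rnd, row in enumerate(transfer_matrix(students), start=1):
    print(f"round {rnd}: " + "  ".join(row))
```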

Here is each thread, with each version in sequence. I hope you enjoy the oddity that is this year's Track & Field!