Wednesday, December 12, 2018

Track and Field, 2018

In my Trends in Modern Sound Design class, we sometimes do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:

1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission then gets passed to another student, who uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds of duration, but other than that, they can do anything they want. As before, students bring in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student gets to work on each thread.
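For the curious, the rotation matrix amounts to a Latin square. Here's a minimal sketch (my own illustration, not the actual class matrix, and with hypothetical student names) of a cyclic schedule where every student touches every thread exactly once:

```python
# A cyclic rotation schedule: in round r, the thread that started with
# student t is worked on by student (t + r) mod N, so over N rounds each
# student works on each thread exactly once.
def rotation_schedule(students):
    n = len(students)
    return [[students[(t + r) % n] for t in range(n)] for r in range(n)]

# With four students, round 0 is everyone on their own thread, and each
# later round shifts the assignments by one.
rounds = rotation_schedule(["Ana", "Ben", "Cy", "Dee"])
```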

This quarter, I had ten students, so there are ten threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy!

Tuesday, December 11, 2018

2018 Film Sound Projects

Every fall, all of the Sound Design MFA students take a course called Trends in Modern Sound Design.  The structure of the course is very loose, designed so that we can examine interesting avenues of sound design without having to stick to a strict course outline.

Over the past few years, one of the projects that I've done with the students is a film project.  The class is divided into groups, and each group is given two 60-second clips of film. One of the clips has more dialogue, and the other has more sound content.  The student groups are tasked with replacing all of the audio in the clips. This includes re-recording dialogue, recording and/or editing sounds, and creating and/or editing music.  It's a big project that takes most of the quarter, and while each group is responsible for their own final product, they help each other out with big foley sessions, dialogue sessions, and walla sessions.  Towards the end of the class, we watch the videos and engage in a discussion of the final work.

We don't spend a lot of time in class preparing for the project. We talk a little bit about spotting (i.e. identifying sonic events) and DAW organization for film projects, but this project really is designed to encourage the students to find their own solutions. And, when we review the final work, the conversation is generally more about the creative impact of their work than about the technical implementation.

Here are the videos from this year's project!  (I should note that I did not secure rights to use these film clips; all the work was done for educational purposes only.  If you are or represent the rightsholder and want me to remove the clips, please let me know and I'll do so immediately.)

Special thanks to the members and friends of the UCI Sound community who lent their voices to this project: Ezra Anisman, Jack Bueermann, Hailey Byerly, Toni Fajt, Garrett Gagnon, Joanna Hui, Caroline Jones, Joseph King, Hunter Long, Sophia Metcalf, Hunter Moody, Joaquin Palting, Malik Proctor, Andrew J. Tarr.

True Romance (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
Triplets of Belleville (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
Hacksaw Ridge (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
Get Out (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
The Conversation (UCI Sound replacement project) from Vincent Olivieri on Vimeo.
Arrival (UCI Sound replacement project) from Vincent Olivieri on Vimeo.

Tuesday, June 12, 2018

Sound Art: final projects

This term, I taught a class in Sound Art.  The course lent itself to self-direction. Each week, students would read for an hour or two on a sound art-related topic of their choosing.  When the class met, we would incorporate their readings into the discussion.  Our topics ran broad, from the definition of sound art, to the social responsibilities (or lack thereof) of artists, to the role of virtuosity in artmaking, to the technical tools for implementing sound art.

Additionally, students worked on three projects, the last of which was a sound art piece of their own creation.  Last week, the students presented their final work.  I won't go into the details of the pieces, except to say that they had a huge range.  One student created a balloon-festooned sound walk. One placed contact microphones on a campus bridge and used the inputs to create new content. One created an interactive sculpture using mirrored cubes, hyperdirectional loudspeakers, and QR codes.  The pieces were varied and very interesting.  Here are some photos!

Tuesday, April 24, 2018

PLUMAS NEGRAS - Thesis in Review

Just over a month ago, we opened and closed our production of Juliette Carrillo’s Plumas Negras, a beautiful 3-act play about three women in one family, with each act focusing on one of the women and the struggles she faced in her time period. This show was a very large undertaking that involved many challenges I had never encountered before, and I’d like to share my process, thoughts, and reflections with all of you now that I’ve had time to reflect (or, really, now that I’ve taken a break from AMERICAN IDIOT to come back to this).
Photo: Vincent Olivieri

The music:

From the beginning of the process, I talked with Juliette about the importance and function of the composed music within the world of the play. It had to serve two functions: carry the emotional weight of the characters, by creating character themes that intertwine to form the larger melodic theme we come to know as “Plumas Negras”; and use voices to distinguish between the two worlds presented within the play: the world of the living, and the ancestral world, inhabited by souls long gone from the world of the living who take the form of crows. One further point was that no music was to be electronic - all of the sounds used to create the music had to be natural, acoustic, of this earth. In pursuit of this earthiness and naturalism, which remained a constant theme throughout the entire design process for all departments, we landed on the decision to have all of the music played live. I had never composed music for live musicians to play, so this was definitely going to be a challenge.

The research process for the music and the design of the show became an ethnomusicological foray into traditional Mexican and Mexican-American music and instruments. Given that we traverse time periods throughout the show, genres and musical tastes shift, so it was wonderful to be able to listen to traditional folk music, moving through norteños, corridos, mariachi, and banda, to name a few genres.

In the past, though largely unrelated to this show, Juliette had worked with renowned South American harpist Alfredo Rolando Ortiz (who teaches in Corona, CA), and she entertained the idea of having harp within our show in some form. I took this point of inspiration and delved into uses of the harp in South American countries, then moved steadily north to its uses within Mexico. Having found the traditional folk ensemble of conjunto jarocho, I used it as a starting point for one avenue of composition, and used the harp to represent the ancestral world of our crows. Guitar to represent the world of the living came easily - it is one of the most accessible instruments to learn, and, true to form, one could imagine a field worker playing melodies on a sun-beaten guitar while taking refuge in the shade during a break from a day’s work, much like the world of Plumas. Two of our wonderful cast members, Ernest Figueroa and Amilcar Juaregui (AJ), played guitar within the show. Juliette asked Alfredo if he would be able to perform, but due to scheduling conflicts he was unable to; however, he recommended one of his students whom he trusted greatly, Nina Agelvis, who studied here at UCI (and who also happened to be our Honors undergraduate in Lighting Design!).

Harp: Nina Agelvis - "Crow's Lullaby"
Photo: Fernando Penaloza

I began by throwing proverbial spaghetti at the wall for musical ideas for the main theme, taking inspiration from traditional folk melodies, popular genres, and soundtracks such as Disney Pixar’s Coco (which is a fantastic film, and you all should go watch it if you haven’t seen it. Or go watch it again and cry, because it’s that good). What I landed on was a mix of all of these, creating the threads for each character theme to expand upon as we progress through the show, so that when the theme reprises at the end, it resonates that much more strongly within the hearts and ears of each audience member. Think of the music from Disney Pixar’s UP, and the use of its theme to highlight moments of happiness, sadness, and everything in between, so that over the course of the entire film the music carries the weight of the narrative, taking the listener on a musical journey similar to that of the characters. This concept forms the core of my compositional process, and this production was no exception.

I was in rehearsal essentially every day for the last three weeks leading up to tech, working and developing the music with Nina and our wonderful guitarists, and seeing how the action on stage blended with what I was trying to do musically. Without this level of interaction, the music would surely have fallen flat and wouldn’t have become another voice within the world.

I also had the task of composing a folk melody, sung a cappella. The lyrics were written by Meliza Gutierrez, the actress playing Concha within the show, and I referenced melodic fragments from other pieces within the show to create the melody we hear. The piece appears twice in the show: at the beginning, where only half of the melody is heard, and at the end, where we hear the entire melody.

As for existing period music in the show, popular music of the time was selected with these workers’ listening habits in mind. I asked each cast member if there were any songs that their parents or grandparents would listen to, drawing on popular artists from the time. It was heartwarming and touching to see families in the audience remember songs that their own parents or grandparents might have listened to, perhaps in a similar way to how our characters did. These pieces played out of practicals on the set - a gramophone and a transistor radio - in their respective time periods.
Crows inspecting the phonograph. Photo: Fernando Penaloza

This piece in particular found its way into our hearts:

In the end, I composed seven pieces for the show. But I must give the utmost praise and gratitude to Ernest, Amilcar, and Nina. I created skeletons of each guitar piece, with the musical intentions intact, but left the true voice of the music to be carried by the performers themselves, for their musicality and knowledge of their instruments were far greater than anything I could ever hope to achieve on my own. This wonderful collaboration allowed the music of the show to really come alive, as it was given life by multiple people. All of the music, in order, can be found here:

The System:

The space for this show was the Robert Cohen Theatre, our small black box space, which can be configured in any way. And it went through nearly every configuration: it started as a three-quarter thrust, shifted to a playing space in one corner, went back to three-quarter thrust, and eventually landed in the alley configuration seen here. Regardless of the configuration, however, my main design intention was to take advantage of more realistic sound spatialization and to change the acoustic character of the space using the Virtual Room Acoustic System (VRAS), now integrated as part of Meyer Sound's Constellation system. Thus, aside from the main address system, the space was treated largely the same throughout.

In order to achieve realistic spatialization, movement, and VRAS, I had to go with Meyer Sound’s Digital Audio Platform - the Matrix-3 system. We have the newer D-Mitri system here at UCI, but it has neither enough outputs nor the VRAS processor to achieve the design intentions of the show. Thus, I went back to our good ooooooold friend Matrix-3. What resulted was a very large system comprising several layers: overheads, mains, surrounds (audience listening height +2ft), and ground surrounds, in addition to a truck engine/exhaust system to make a real 1940s Ford F1 come to life. A large system, no doubt, and load-in was further complicated by the absence of a sound supervisor (we have Jeff Polunas aboard now, which is fantastic!), so generating paperwork from a logistical supervision standpoint, in addition to the technical documents, became a significant time commitment.

Photo: Fernando Penaloza

To function properly, VRAS ideally needs multiple microphones spread evenly across the space. The signals that the microphones capture are redistributed to every speaker in the system; the outputs of the speakers are then picked up by the microphones again and redistributed further. These seemingly randomly generated signals are what we hear as reverberation, and they help our brains correlate what we see to what we hear - in a large cathedral, for example, we expect to hear a very reverberant space matching the size of the room. The power of VRAS is that it lets us control what we hear, so a space can transform almost in an instant from completely dead to sounding like a cathedral. We were fortunate in that this show’s configuration allowed the microphones to hang lower than they would in a proscenium show, since the mic trims did not intrude on the sightlines of any scenery. This gave me greater control of gain before feedback, so I didn’t have to push the microphones as hard. VRAS needs the entire space to be treated as dry as possible, eliminating any naturally occurring reflections within the room; thus, each wall of the theatre was covered in curtains, and any bit of floorspace not being used for action was carpeted. To our benefit (but not to our lungs, because dust), the dirt border/stage acted as a fantastic absorber of sound, its very porous and thick base soaking up a large range of frequencies and foot noise.
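That mic-to-matrix-to-speaker recirculation is, at heart, a gain-staged feedback loop, which is why gain before feedback mattered so much: the loop is only stable while each round trip loses energy. Here's a rough back-of-the-envelope sketch (my own toy model, not Meyer's actual algorithm) of how loop gain relates to apparent reverb time:

```python
import math

def rt60_estimate(loop_gain, loop_delay_s):
    """Rough RT60 for a regenerative loop: each pass around the
    mic -> matrix -> speaker loop attenuates the signal by `loop_gain`
    (linear; must be < 1.0, or the system runs away into feedback) and
    takes `loop_delay_s` seconds.  RT60 is the time to decay by 60 dB."""
    if loop_gain >= 1.0:
        raise ValueError("loop gain >= 1.0: unstable (feedback)")
    db_per_pass = -20.0 * math.log10(loop_gain)  # dB lost per round trip
    return 60.0 / db_per_pass * loop_delay_s

# Lower loop gain (more matrix attenuation) shortens the reverb;
# nudging the gain toward 1.0 stretches it out toward cathedral territory.
dry_room = rt60_estimate(0.5, 0.05)     # roughly half a second: fairly dry
cathedral = rt60_estimate(0.93, 0.05)   # several seconds of decay
```

This is also why the curtains and carpet matter: the electronic loop gain sits on top of the room's natural reflections, so the drier the physical space, the more headroom you have to shape the reverb electronically.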


I had used CueStation (Matrix-3 once again) in my previous main stage show here, Our Class, but only really for its fantastic Spacemap tools. I wanted to expand on that tool, but I also took on the challenge of running the entire show off of WildTracks, CueStation’s built-in playback engine.

This presented a number of challenges. I had never programmed a show solely in CueStation, nor had I used WildTracks this extensively before. What resulted was a lot of time spent in tech, and many, many hours after tech concluded, cleaning up programming and refining the bajillion ways you can execute a single cue. Working in QLab would have been much faster for building and updating the cues, but the knowledge I gained from using CueStation surpasses any ease I would have gotten from simply programming within QLab.

Thanks to the control CueStation allowed, I had 256 busses at my disposal for configuring matrices to assign channels. This let me put our class experiment with Wave Field Synthesis (WFS) and Source-Oriented Reinforcement (SoR) into practice. Our harpist, Nina, would be playing essentially in one of the seating sections, and a concern was listening levels for the audience bank directly across from her, the furthest away. The conundrum was that she couldn’t play loudly enough for those furthest away without deafening those sitting next to her. Thus, the idea of WFS came to mind: subtly reinforce Nina’s sound so that everyone still localizes to her position. We took the same calculations and formula from our class, calculated the distances in 3D in Vectorworks, and implemented the amplitude and delay adjustments on a “harp bus” within the software, so that whenever the bus was assigned, the output of the microphone capturing Nina’s harp was automatically matrixed to her exact location. It worked incredibly well and was easily audible from any spot in the theatre without making any one area too loud.
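The per-speaker math boils down to distance-based delay and attenuation. Here's a sketch of the general approach (my own illustration of the technique, not the exact formula or values we used on the show: the extra delay offset and the inverse-distance gain law are assumptions):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def sor_matrix(source, speakers, extra_delay_ms=5.0):
    """For each speaker, compute a (delay_ms, gain) pair so that the
    reinforcement arrives just after the acoustic sound from the source
    and listeners still localize to the source (precedence effect).
    `source` and `speakers` are (x, y, z) positions in meters."""
    rows = []
    for spk in speakers:
        d = math.dist(source, spk)
        # propagation time from source to speaker, plus a small safety
        # offset so the speaker never beats the acoustic arrival
        delay_ms = d / SPEED_OF_SOUND * 1000.0 + extra_delay_ms
        # simple inverse-distance attenuation, clamped at 1 m
        gain = 1.0 / max(d, 1.0)
        rows.append((round(delay_ms, 2), round(gain, 3)))
    return rows
```

Feeding in the 3D positions pulled from the Vectorworks model would then produce the delay/gain rows for the harp bus matrix.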

Nina's Harp SoR Calculations


I did a fair amount of research, reading as many VRAS documents as I could find, and it was A LOT of math that reminded me of all those years of calculus and physics. It also gave me the same “I want to bang my head against this wall” feeling as I delved further down the rabbit hole. However, once we were in tech, we set up a matrix for each microphone and added in the attenuations to each speaker. A 12x27 matrix can make for quite the headache, but in the first test run nothing blew up and we heard an echo - progress!

From there it became constant fine-tuning of the reverb, EQ, and attenuation values until we landed on a good base to work from. Each scene of the play had VRAS treatment, letting our ears take us to the different locations of the play: the open fields of Salinas - slightly distant; a cramped office interior - dry with a short echo; and the drifting world of the crows, for example. While challenging, it was definitely rewarding and added a new dimension to the play.

Pre-tech descriptions and planning of VRAS and Spacemap


I have always loved Spacemap and its power to create multichannel panning and movement of sounds. Plumas was no exception, and a fair number of cues took advantage of Spacemap and its triset mapping. In particular, I found the overhead plane and the passing trains to be the most effective uses of Spacemap, achieving a very realistic image of sound moving from one location to another.
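For readers unfamiliar with trisets: the idea, as I understand it, is that a virtual source inside a triangle of three speaker nodes gets its level distribution from where it sits in that triangle, so moving the point crossfades smoothly between the nodes. A minimal sketch of that behavior using barycentric weights (my own approximation of the concept, not Meyer's implementation):

```python
def triset_gains(p, a, b, c):
    """Barycentric weights of point p within triangle (a, b, c), all 2-D
    (x, y) positions.  At a vertex, that node gets full level; at the
    centroid, all three nodes share equally - a smooth crossfade as the
    virtual source moves across the triset."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    wc = 1.0 - wa - wb
    return wa, wb, wc
```

Chaining many such triangles across a speaker layout is what lets a single panning trajectory, like a plane passing overhead, hand the sound off node to node across the whole room.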


As is the norm for any UCI show, all of us in team sound watch the production together and give a critique following the show, providing our thoughts and feedback to the designer. Plumas functioned similarly, but as with all thesis projects, an outside industry professional also comes to watch the show and impart their comments and criticism. Sound designer and composer Kari Rae Seekins was my thesis critic and gave me invaluable feedback. I wish I could go back in time and implement much of it, but when is a show ever truly perfect? There is always something to go back and tweak, ad infinitum.

End of critique with Kari Rae Seekins
Photo: Vincent Olivieri

From my fellow peers and mentors, I received equally strong feedback, both positive and critical, which I appreciate greatly.

I would like to thank my wonderful assistant, Hunter Moody. This show would not have been possible without your help in every step of the process, from shop and load-in tasks to wave field synthesis calculations, Spacemap programming, and making sure I was a human who got some sleep and food. Thank you for everything!

In retrospect, I would have used CueStation's 256 busses more efficiently, which would have drastically reduced programming time and allowed me to create content and treat fades much more elegantly. Curation of some sound effects would also have taken a stronger presence, as some smaller sounds fell by the wayside in favor of increasing the robustness of the system. It was not a perfect show by any means as far as the sonic content I created; however, in an academic setting that allows for the exploration of new technologies and the challenging of one’s own limits, I feel truly thankful to have had the opportunity to learn so much and to be a part of this fantastic production. Plumas will forever hold a spot in my heart, not only for what I learned, but for the story and message it told: giving the stage to a group of people who are not seen in the limelight as often as they should be, and letting their voices and stories be told. Let fly.

Photo: Fernando Penaloza

- Jordan

Monday, April 2, 2018

Meet our new Sound Supervisor… Jeff Polunas!

After an extensive search, we are thrilled to announce that Jeff Polunas will be joining UCI mid-May as our new Production Sound Supervisor!

Jeff is thrilled to be returning to UCI, where he received his MFA in Sound Design in 2012. Since graduating, Jeff served as Production Sound Supervisor at CSU-Fullerton for 5 years. Jeff is also a member of USA829 as a Sound Designer and recently designed Shakespeare in Love and The Sisters Rosensweig at South Coast Repertory. With over 125 designs to his credit, he has designed for South Coast Repertory, Antaeus, International City Theater, PCPA Theaterfest, Summer Repertory Theater, Atwater Village, and many universities in Southern California. Jeff is also a member of the USITT MainStage Committee, with which he helps organize, install, and run the sound element of the MainStage events each year. Jeff is looking forward to working with the graduate students at UCI and helping mold future sound designers.

It's been a very long year since we lost our dear friend and longtime Sound Supervisor, BC Keller.  I would like to personally thank the following people for doing double-duty getting our shows up and keeping the shop from exploding:  our "unofficial" interim supervisors -- Kate Fechtig, Matt Glenn and Mark Caspary; our incredible grads -- Jordan Tani, Andrew Tarr, Ningru Guo, Hunter Moody, Jack Bueermann, Ben Scheff and Andrea Allmond; our Production Manager -- Keith Bangs; and the over-hire professionals from Diablo Sound.  You are all the very best, but thank goodness Jeff is on the way… and not a minute too soon!

-Mike Hooker

Thursday, March 29, 2018

Meet our incoming students!

We had a terrific group of applicants to the sound design MFA program this year, and it was a tough decision to choose which two we'd invite into the program. But Mike and I are excited to introduce the class of 2021 at UCI Sound! Elisheva and Garrett are both remarkable artists, and we're looking forward to having them join us for three years of intense development.  Stay tuned for great things from them!

Elisheva Anisman recently graduated from Western Washington University where she studied theater, audio recording, and storytelling, in addition to sound designing a plethora of productions. She is passionate about how and why stories get told and how sound can help communicate a narrative. Since graduating she has worked as a sound designer or engineer for a variety of local companies in northwest Washington and has experimented with performing her own music.

Garrett Gagnon is incredibly excited to be joining the Sound Design program at UCI! He has lived in the Southwest Michigan area his whole life, and can’t wait to make the trek to the West Coast. He has been a musician (vocal, piano, drums) his whole life, and has always been able to find work that pulls from his strengths. He has worked at two different recording studios, and engineered live and studio albums in various genres, primarily focusing in jazz and classical. Garrett has also built a relationship with many theatres and schools in the area, and has been resident Sound Designer at Farmers Alley Theatre for the past few years. He cannot wait to start exploring what he can bring to the Sound Design program at UCI!