Tuesday, June 12, 2018

Sound Art: final projects

This term, I taught a class in Sound Art.  The course lent itself to self-direction: each week, students would read for an hour or two on a sound art-related topic of their choosing, and when the class met, we would incorporate their readings into the discussion.  Our topics ranged broadly, from the definition of sound art, to the social responsibilities (or lack thereof) of artists, to the role of virtuosity in artmaking, to the technical tools for implementing sound art.

Additionally, students worked on three projects, the last of which was a sound art piece of their own creation.  Last week, the students presented their final work.  I won't go into the details of the pieces, except to say that they had a huge range.  One student created a balloon-festooned sound walk. One placed contact microphones on a campus bridge and used the inputs to create new content. One created an interactive sculpture using mirrored cubes, hyperdirectional loudspeakers, and QR codes.  The pieces were varied and very interesting.  Here are some photos!




Tuesday, April 24, 2018

PLUMAS NEGRAS - Thesis in Review

Just over a month ago, we opened and closed our production of Juliette Carrillo’s Plumas Negras, a beautiful three-act play about three women in one family, with each act focusing on one of them and the struggles she faced in her time period. This show was a very large undertaking involving many challenges I had never encountered before, and I’d like to share my process, thoughts, and reflections with all of you now that I’ve had time to reflect (or, really, to take a break from AMERICAN IDIOT and come back to this).
Photo: Vincent Olivieri

The music:

From the beginning of the process, I talked with Juliette about the importance and function of the composed music within the world of the play. It had to serve two functions: carry the emotional weight of the characters, with character themes that intertwined to form the larger melodic theme we come to know as “Plumas Negras”; and use voices to distinguish the two worlds presented within the play: the world of the living, and the ancestral world, inhabited by souls long gone from the living world who take the form of crows. One further point was that no music was to be electronic: all of the sounds used to create the music had to be natural, acoustic, of this earth. In pursuit of this earthiness and naturalism, a constant theme throughout the design process for all departments, we landed on the decision to have all of the music played live. I had never composed music for live musicians to play, so this was definitely going to be a challenge.

The research process for the music and the design of the show became an ethnomusicological foray into traditional Mexican and Mexican-American music and instruments. Since the show traverses time periods, genres and musical tastes shift, so it was wonderful to listen to traditional folk music, moving through norteño, corrido, mariachi, and banda, to name a few genres.

In the past, and largely unrelated to this show, Juliette had worked with renowned South American harpist Alfredo Rolando Ortiz (who teaches in Corona, CA), and she entertained the idea of having harp in our show in some form. I took this point of inspiration and delved into uses of the harp in South American countries, then moved steadily north to its uses in Mexico. Having found the traditional folk ensemble of conjunto jarocho, I used it as a starting point for one avenue of composition, with the harp representing the ancestral world of our crows. Guitar to represent the world of the living came easily: it is one of the most accessible instruments to learn, and true to the world of Plumas, one could imagine a field worker playing melodies on a sun-beaten guitar while taking refuge in the shade during a break from a day’s work. Two of our wonderful cast members, Ernest Figueroa and Amilcar Juaregui (AJ), played guitar within the show. Juliette had asked Alfredo if he would be able to perform, but due to scheduling conflicts he could not; however, he recommended one of his students in whom he had great trust, Nina Agelvis, who was studying here at UCI (and who also happened to be our Honors undergraduate in Lighting Design!), to perform instead.

Harp: Nina Agelvis - "Crow's Lullaby"
Photo: Fernando Penaloza

I began by throwing proverbial spaghetti at the wall for musical ideas for the main theme, taking inspiration from traditional folk melodies, popular genres, and soundtracks such as Disney Pixar’s Coco (which is a fantastic film, and you should all go watch it if you haven’t. Or go watch it again and cry, because it’s that good). What I landed on was a mix of all of these, creating threads for each character theme to expand upon as we progress through the show, so that when the theme reprises at the end, it resonates that much more strongly in the hearts and ears of each audience member. Think of the music from Disney Pixar’s UP, and the use of its theme to highlight moments of happiness, sadness, and everything in between, so that over the course of the film the music carries the weight of the narrative, taking the listener on a musical journey much like the characters’ own. That concept forms the core of my compositional process, and this production was no exception.

I was in rehearsal essentially every day for the last three weeks leading up to tech, developing the music with Nina and our wonderful guitarists and seeing how the action on stage blended with what I was trying to do musically. Without that level of interaction, the music would surely have fallen flat, and it wouldn’t have become another voice within the world.

I also had the task of composing a folk melody, sung a cappella. The lyrics were written by Meliza Gutierrez, the actress playing Concha, and I referenced slight melodic themes from other pieces in the show to create the melody we hear. The piece appears twice: at the beginning of the show, where only half of the melody is heard, and at the end, where we hear it in its entirety.

As for the existing period music in the show, popular songs of each era were selected with an ear for what these workers would have listened to; I asked each cast member whether there were songs their parents or grandparents listened to, drawing on popular artists of the time. It was heartwarming and touching to see families in the audience remember songs their own parents or grandparents might have listened to, perhaps much as our characters did. These pieces played out of practicals on the set, a gramophone and a transistor radio, in their respective time periods.
Crows inspecting the phonograph. Photo: Fernando Penaloza

This piece in particular found its way into our hearts:
https://www.youtube.com/watch?v=8L51Pi3vy2o



In the end, I composed seven pieces for the show. But I must give the utmost praise and gratitude to Ernest, Amilcar, and Nina. I created skeletons of each guitar piece with the musical intentions intact, but left the true voice of the music to the performers themselves, for their musicality and knowledge of their instruments were far greater than anything I could ever hope to achieve on my own. This wonderful collaboration allowed the music of the show to really come alive, as it was given life by multiple people. All of the music in order can be found here:


The System:

The space for this show was the Robert Cohen Theatre, our small black box, able to be configured in any way. And configured it was: we started with a three-quarter thrust, shifted the playing space into one corner, went back to three-quarter thrust, and eventually landed on the alley configuration seen here. Regardless of the configuration, my main design intention was to take advantage of more realistic sound spatialization and to change the acoustic character of the space using a Virtual Room Acoustic System (VRAS), now integrated as part of Meyer Sound's Constellation system. Thus, aside from the main address system, the space was treated largely the same throughout.

To achieve realistic spatialization, movement, and VRAS, I had to go with Meyer Sound’s Digital Audio Platform, the Matrix-3 system. We have the newer D-Mitri system here at UCI, but it has neither enough outputs nor the VRAS processor needed to achieve the design intentions of the show. Thus, I went back to our good ooooooold friend Matrix-3. What resulted was a very large system comprising a few layers: overheads, mains, surrounds (audience listening height +2 ft), and ground surrounds, in addition to a truck engine/exhaust system to make a real 1940s Ford F1 come to life. A large system, no doubt, and load-in was further complicated by the lack of a sound supervisor (we have Jeff Polunas aboard now, which is fantastic!), so generating paperwork from a logistical supervision standpoint, in addition to the technical documents, became a significant time commitment.

Photo: Fernando Penaloza


To function properly, VRAS needs multiple microphones spread evenly across the space. The signals the microphones pick up are redistributed to every speaker in the system; the speakers' outputs are then picked up by the microphones again and redistributed further. These regenerated signals are what we hear as reverberation, and they help our brains correlate what we see with what we hear: in a large cathedral, for example, we expect a very reverberant sound to match the size of the room. The power of VRAS is that it lets us control what we hear, so a space can transform almost in an instant from completely dead to sounding like a cathedral. We were fortunate that this show’s configuration allowed the microphones to hang lower than they could in a proscenium show, since the mic trims did not intrude on the visuals of any scenery. This gave me greater gain before feedback, without having to push the microphones as hard. VRAS also needs the space itself to be treated as dry as possible, eliminating naturally occurring reflections in the room; thus, each wall of the theatre was covered in curtains, and any bit of floorspace not used for action was carpeted. To our benefit (though not to our lungs, because dust), the dirt border / stage acted as a fantastic absorber of sound, its porous, thick base soaking up a large range of frequencies and foot noise.
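To build an intuition for why that regenerative loop changes what we hear, here is a minimal back-of-the-envelope sketch in Python. It is a toy model with made-up numbers, not Meyer's actual VRAS math: it just treats the mic → matrix → speaker → room path as a single delayed feedback loop and asks how long the tail takes to decay by 60 dB.

```python
import numpy as np

def simulated_rt60(loop_gain, loop_delay_s):
    """Estimate the RT60 produced by a simple regenerative loop.

    Each trip through the mic -> matrix -> speaker -> room path
    multiplies the signal amplitude by `loop_gain` and adds
    `loop_delay_s` seconds. RT60 is the time for the level to
    fall by 60 dB.
    """
    if not 0 < loop_gain < 1:
        raise ValueError("loop gain must stay below 1 or the system feeds back")
    db_per_pass = -20 * np.log10(loop_gain)  # attenuation per loop, in dB
    passes_to_60db = 60 / db_per_pass        # loops needed to decay 60 dB
    return passes_to_60db * loop_delay_s

# Raising the loop gain stretches the apparent reverb time dramatically:
print(simulated_rt60(0.5, 0.05))   # a short, dry-sounding tail
print(simulated_rt60(0.9, 0.05))   # a much longer, "cathedral" tail
```

The same model also shows why the system must stay well below unity loop gain: as `loop_gain` approaches 1, the decay per pass approaches zero and the tail (and then feedback) runs away.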









CUESTATION:

I had used CueStation (Matrix-3 once again) on my previous main stage here, Our Class, but only really for its fantastic Spacemap tools. I wanted to expand on that tool, but I also took on the challenge of running the entire show off of WildTracks, CueStation’s playback system within the software itself.

This presented a number of challenges: I had never programmed a show solely in CueStation, nor had I used WildTracks this extensively before. The result was a lot of time spent in tech, and many, many hours after tech concluded, cleaning up programming and refining the bajillion ways you can execute a single cue. Working in QLab would have been much faster for building and updating all of the cues, but the knowledge I gained from using CueStation surpasses any ease I would have gotten from simply programming in QLab.

Thanks to the control CueStation allowed, I had 256 busses at my disposal for configuring matrixes and assigning channels. This let me put our class experiment in Wave Field Synthesis (WFS) and Source Oriented Reinforcement (SoR) into practice. Our harpist, Nina, would be playing essentially inside one of the seating sections, and the concern was listening levels for the audience bank directly across from her, the furthest away. The conundrum: she couldn’t play loud enough for those furthest away without deafening those sitting next to her. Thus the idea of subtly reinforcing Nina’s sound so that everyone would still localize to her position. We took the same calculations and formulas from our class, measured the 3D distances in Vectorworks, and implemented the amplitude and delay adjustments on a “harp bus” within the software, so that whenever it was assigned, the output of the microphone capturing Nina’s harp was automatically matrixed to her exact location. It worked incredibly well: the harp was easily audible from any spot in the theatre without any one area being too loud.

Nina's Harp SoR Calculations
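The distance-based adjustments behind those calculations can be sketched roughly like this. This is a simplified illustration with invented coordinates, not the show's actual numbers: each speaker's feed is delayed by the extra travel time from the source position and attenuated by the inverse distance law, so the precedence effect keeps the image at the real harp.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def sor_matrix(source_pos, speaker_positions):
    """Per-speaker (delay, gain) so reinforcement still localizes to the source.

    Each speaker's feed is delayed by the extra time sound would need to
    travel from the source to that speaker, and attenuated by the inverse
    distance law relative to the nearest speaker. Positions are (x, y, z)
    tuples in meters.
    """
    dists = [math.dist(source_pos, p) for p in speaker_positions]
    d_min = min(dists)
    rows = []
    for d in dists:
        delay_ms = (d - d_min) / SPEED_OF_SOUND * 1000.0
        gain_db = 20 * math.log10(d_min / d)  # farther speakers get quieter
        rows.append((round(delay_ms, 2), round(gain_db, 2)))
    return rows

# Harp in one corner, three speakers at increasing distances (made-up numbers):
harp = (0.0, 0.0, 1.0)
speakers = [(2.0, 0.0, 3.0), (8.0, 0.0, 3.0), (14.0, 6.0, 3.0)]
for delay_ms, gain_db in sor_matrix(harp, speakers):
    print(f"delay {delay_ms} ms, gain {gain_db} dB")
```

In practice these per-speaker values would live in a bus's matrix cells, so assigning the harp mic to that bus applies the whole localization pattern at once.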


VRAS:

I did a fair amount of research, reading as many VRAS documents as I could find, and it was A LOT of math that reminded me of all those years of calculus and physics. It also gave me the same “I want to bang my head against this wall” feeling as I delved further down the rabbit hole. Once we were in tech, we set up a matrix for each microphone and added in the attenuations to each speaker. A 12x27 matrix can make for quite the headache, but in the first test run nothing blew up and we heard an echo. Progress!
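For readers unfamiliar with what a 12x27 matrix means in practice, here's a tiny sketch of the underlying arithmetic. The values are random placeholders (the real ones came from tuning in tech, and the real processing happens inside the Matrix-3, not in Python): every one of the 12 mic signals reaches every one of the 27 speakers, scaled by its own attenuation cell.

```python
import numpy as np

# A toy stand-in for the show's 12-mic x 27-speaker routing matrix:
# entry [m, s] is the gain from microphone m to speaker s.
rng = np.random.default_rng(0)
n_mics, n_speakers = 12, 27
atten_db = rng.uniform(-30.0, -12.0, size=(n_mics, n_speakers))
gains = 10 ** (atten_db / 20)            # dB -> linear gain

mic_block = rng.standard_normal((n_mics, 512))   # one block of mic samples
speaker_block = gains.T @ mic_block              # 27 speaker feeds from 12 mics
print(speaker_block.shape)
```

Seen this way, "adding in the attenuations to each speaker" is editing 324 individual cells, which is exactly why it can make for quite the headache.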

From there it became a process of constantly fine-tuning the reverb, EQ, and attenuation values until we landed on a good base to build from. Each scene of the play had its own VRAS treatment, letting our ears take us to the play's different locations: the open fields of Salinas (slightly distant), a cramped office interior (dry, with a short echo), and the drifting world of the crows, for example. While challenging, it was definitely rewarding and added a new dimension to the play.


Pre-tech descriptions and planning of VRAS and Spacemap


SPACEMAP:

I have always loved Spacemap and its power for creating multichannel panning and movement of sound. Plumas was no exception, and a fair number of cues took advantage of Spacemap and its triset mapping. In particular, I found the overhead plane and the passing trains to be the most effective uses of Spacemap, achieving a very realistic image of sound moving from one location to another.
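The core idea behind a triset can be sketched in a few lines. This is my own simplified illustration, not Meyer's actual implementation: a pan point inside a triangle of three speaker nodes gets level weights from its barycentric coordinates, so dragging the point across a map of trisets smoothly crossfades the sound between speakers.

```python
def triset_gains(p, a, b, c):
    """Barycentric level weights for a pan point inside a triangle of nodes.

    The pan point's barycentric coordinates with respect to the triangle's
    corners become the relative levels of the three speakers (or virtual
    nodes) at those corners. All points are (x, y) tuples.
    """
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    wc = 1.0 - wa - wb
    return wa, wb, wc

# A pan point at the triangle's centroid lands at equal thirds on each node:
print(triset_gains((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0)))
```

Chaining many such triangles over a speaker plot is what lets a single moving point trace a plane passing overhead or a train crossing the space.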

CRITIQUE:

As is the norm for any UCI show, all of us in team sound watch the production together and hold a critique afterward, giving our thoughts and feedback to the designer. Plumas functioned similarly, but as with all thesis projects, an outside industry professional also comes to watch the show and impart their comments and criticism. Sound designer and composer Kari Rae Seekins was my thesis critic and gave me invaluable feedback and thoughts, most of which I wish I could go back in time and implement. But when is a show ever truly perfect? There is always something to go back and tweak, ad infinitum.

End of critique with Kari Rae Seekins
Photo: Vincent Olivieri


From my peers and mentors, I received equally strong feedback, both positive and critical, which I appreciate greatly.

I would like to thank my wonderful assistant, Hunter Moody. This show would not have been possible without your help at every step of the process: shop and load-in tasks, wave field synthesis calculations, Spacemap programming, and making sure I was a human who got some sleep and food. Thank you for everything!

In retrospect, I would have taken advantage of CueStation's 256 busses more efficiently, which would have drastically reduced programming time and let me create content and treat fades much more elegantly. Curation of some sound effects would also have taken a stronger presence, as some smaller sounds fell by the wayside in favor of increasing the robustness of the system. It was not a perfect show by any means as far as the sonic content I created; however, in an academic setting that encourages exploring new technologies and challenging one's own limits, I feel truly thankful to have had the opportunity to learn so much and to be a part of this fantastic production. Plumas will forever hold a spot in my heart, not only for what I learned but for the story and message it told, giving the stage to a group of people who are unfortunately not seen in the limelight as often as they should be, and letting their voices and stories be told. Let fly.

Photo: Fernando Penaloza

- Jordan


Monday, April 2, 2018

Meet our new Sound Supervisor… Jeff Polunas!

After an extensive search, we are thrilled to announce that Jeff Polunas will be joining UCI mid-May as our new Production Sound Supervisor!



Jeff is thrilled to be returning to UCI, where he received his MFA in Sound Design in 2012. Since graduating, Jeff served as Production Sound Supervisor at CSU Fullerton for five years. Jeff is also a member of USA829 as a Sound Designer and recently designed Shakespeare in Love and The Sisters Rosensweig at South Coast Repertory. With over 125 designs to his credit, he has designed for South Coast Repertory, Antaeus, International City Theatre, PCPA Theaterfest, Summer Repertory Theatre, Atwater Village, and many universities in Southern California. Jeff is also a member of the USITT MainStage Committee, with which he helps organize, install, and run the sound element of the MainStage events each year. Jeff is looking forward to working with the graduate students at UCI and helping mold future sound designers.


It's been a very long year since we lost our dear friend and longtime Sound Supervisor, BC Keller.  I would like to personally thank the following people for doing double-duty getting our shows up and keeping the shop from exploding:  our "unofficial" interim supervisors -- Kate Fechtig, Matt Glenn and Mark Caspary; our incredible grads -- Jordan Tani, Andrew Tarr, Ningru Guo, Hunter Moody, Jack Bueermann, Ben Scheff and Andrea Allmond; our Production Manager -- Keith Bangs; and the over-hire professionals from Diablo Sound.  You are all the very best, but thank goodness Jeff is on the way… and not a minute too soon!

-Mike Hooker

Thursday, March 29, 2018

Meet our incoming students!

We had a terrific group of applicants to the sound design MFA program this year, and it was a tough decision to choose which two we'd invite into the program. But Mike and I are excited to introduce the class of 2021 at UCI Sound! Elisheva and Garrett are both remarkable artists, and we're looking forward to having them join us for three years of intense development.  Stay tuned for great things from them!


Elisheva Anisman recently graduated from Western Washington University where she studied theater, audio recording, and storytelling, in addition to sound designing a plethora of productions. She is passionate about how and why stories get told and how sound can help communicate a narrative. Since graduating she has worked as a sound designer or engineer for a variety of local companies in northwest Washington and has experimented with performing her own music.


Garrett Gagnon is incredibly excited to be joining the Sound Design program at UCI! He has lived in the Southwest Michigan area his whole life, and can’t wait to make the trek to the West Coast. He has been a musician (vocal, piano, drums) his whole life, and has always been able to find work that pulls from his strengths. He has worked at two different recording studios, and engineered live and studio albums in various genres, primarily focusing in jazz and classical. Garrett has also built a relationship with many theatres and schools in the area, and has been resident Sound Designer at Farmers Alley Theatre for the past few years. He cannot wait to start exploring what he can bring to the Sound Design program at UCI!

Thursday, January 11, 2018

UCI Drama Design poster

Happy New Year!

Here at UCI Sound, we're getting ready for a busy recruiting season, and part of that includes this brand-new poster!  Check it out:



It features two of our Sound Design MFA students - Ning Guo and Andrew Tarr. Scenic Design student Fernando Penaloza is in the center of the image.

We're sending hard copies of this poster to a bunch of great undergraduate schools, but we can't send them everywhere. So, consider this your own personal copy!

Sunday, December 3, 2017

Chess - In Concert

Chess was an invaluable learning experience in designing for a musical, especially this special concert-staged version.

As a creative team, we decided to center all design aspects on a “concert” staging of the show. We worked from the idea of a goal-post truss, since this would let lighting reach many downstage positions and also let me hang arrays downstage of the actors. It also allowed me to extend the array positions into the proscenium opening instead of being limited to the sides of the proscenium.

We placed the 25-piece orchestra center stage to direct the focus of the performance onto the music. Minimal set pieces framed this staging to give context to the music and set the storyline. There was also a large ensemble of 35 on risers on both sides of the orchestra; each section was miked with a pair of small-diaphragm condensers. A total of 7 leads and 10 featured ensemble members were on wireless handheld microphones to capture forward-sounding vocals while giving them the freedom to move around the stage. The featured ensemble had individual lines as well as group vocals, sometimes supported by the large ensemble.


Under Mike's mentorship, we spent a substantial amount of time brainstorming system design configurations, but we finally decided that Chess would be the one show where I could implement a stereo concert-style system without the typical musical-theater vocal center, thanks to the pop-sounding instrumentation. I also intended to skip under-balcony fills if I could design a line array system that delivered enough level to the back of the house under the balcony. This turned out to be very achievable in the Irvine Barclay Theatre. Each array was broken into three components: the top 2 cabinets for the balcony, the middle 3 for the far end of the stalls (including the mix position), and the bottom 3 cabinets for the stall seats nearer the stage. The curvature of the arrays helped focus energy toward the balcony and under-balcony, while a wide splay covered the stalls nearer the stage. We made sure to deliver even SPL to all three seating sections without losing any level at the far-stalls/mix position. This was the final design.
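The even-coverage goal can be sanity-checked with nothing more than the inverse square law. This is a deliberately crude point-source sketch with invented distances (a line array behaves closer to a cylindrical source in its near field, so real losses are smaller, and the actual work was done with prediction software):

```python
import math

def spl_at(distance_m, ref_spl_db=100.0, ref_distance_m=1.0):
    """Point-source SPL estimate at a listener via the inverse square law.

    Level drops 6 dB per doubling of distance from the reference point.
    The reference level and all distances here are made-up numbers.
    """
    return ref_spl_db - 20 * math.log10(distance_m / ref_distance_m)

# Hypothetical distance from each array section to the middle of its seats:
sections = {"balcony (top 2 boxes)": 18.0,
            "rear stalls / mix (middle 3)": 14.0,
            "front stalls (bottom 3)": 9.0}
for name, d in sections.items():
    print(f"{name}: {spl_at(d):.1f} dB SPL")
```

The point of splitting the array into sections is visible even in this toy model: the farthest seats lose the most level, so the cabinets aimed at them get their own drive and curvature rather than sharing one setting with the front stalls.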








This was the implementation of it after we loaded in!


Not unexpectedly, there were some slight discrepancies between the truss height and the audience seats, so we opted to add 4 Meyer MM-4s as front fills. These covered the slight drop-off in the first two rows of the audience.




I had originally decided to automate the orchestra faders at every musical change. This proved too time-consuming during tech and dress, making sure I updated all the correct scenes. In hindsight, perhaps I should have limited myself to one snapshot per song as a start, adding more snapshots only as needed. 🧡

The whole production process was a huge lesson for me in managing a team of this scale; it was the first time I was lead designer on a musical this big. I learnt the importance of effective, clear communication with the director, the musical director, and my team. It was also a good reminder to always think a step ahead of the work schedule and design process. Designing my first big musical here at UCI definitely pulled me out of my comfort zone, learning to interact with a team I had never worked with before. There’s still much for me to learn, but I’m grateful for this wonderfully challenging experience.

Photo credit: Paul Kennedy

Photo credit: Paul Kennedy


Kudos to Hunter for being such a patient assistant designer and for always keeping me on track when things got crazy! And kudos to Jack for being such an amazing mixer and team player!