It turns out that ocean waves are a pretty common through line in many of our shows this year... Metamorphoses, even without real water on stage, has movement that just begs for stormy waves; the directing students are doing Shakespeare's Tempest; and Mark's designs for The Liquid Plain (his thesis!) and The Odyssey (which he will be designing in Italy this winter!) both take place near (or on) the ocean. It's so fortunate that we live so close to the local talent.
(Or maybe we secretly wanted a reason to go to the beach)
Mark monitors the sounds of the ocean
Honestly, who could resist the ability to record a wide multitrack of the Pacific? I gave the California State Parks department a call, and asked for their permission and their thoughts on a quiet time we might be able to record. As it turns out, we found Corona Del Mar after sunset, and Huntington Beach around midday, both fairly empty (but for a group of UCI undergrad students who very kindly agreed to pause their conversations until we finished our recording -- thanks for your patience and help!). Listening back, there were some great results, and we are both very excited to layer the recordings into our shows.
The strip of beach we had to ourselves at Huntington must have been close to a young seagull nest. Probably a dozen baby seagulls, just barely able to flap their wings, were running in and out of the water. In a few seconds' worth of the recordings, you can hear the sound of their feet running on the wet sand, and a quiet squeak when the water comes a little too close.
Completely unedited ocean waves below, or come see Metamorphoses and The Liquid Plain to hear it in context!
Every year, the LA Ovation Awards celebrate the best of LA-area theatre. This year's nominations are out, and for the first time, one of our own alumni is among the nominees! Noelle Hoffman MFA '11 was nominated for her co-design of Wicked Lit 2013, produced by Unbound Productions. Congratulations Noelle!
The rest of the Sound Design nominees include a number of faces familiar to the UCI Sound Design program. Drew Dalzell was Noelle's co-designer for Wicked Lit; Drew is a frequent guest critic for thesis productions (most recently last spring), a guest artist in class, and has been a great supporter of our program in general. John Gottlieb teaches at CalArts, way up the road, but he also has been a thesis critic at UCI. Cricket Myers has also sat in the critic's chair. John Zalewski (who has two nominations, which is actually chump change compared to last year's FIVE) has been both a thesis critic and a guest artist in our classrooms.
Congratulations to all of the nominees, but of course, I hope Noelle & Drew win!
Laser microphones have actually been around for decades. Point a coherent light source at the outside of a window and the vibrations from inside will deflect that beam proportionally. Looks like the folks at MIT have come up with a truly novel way of capturing sound using only high speed video pointed at objects sympathetically vibrating in a sound field:
I really loved that Shazam was able to determine the DNA of the song from one of these video sound captures! The audio interpolated from the 60FPS video really intrigues me, too -- I'm still trying to wrap my head around the physics.
UCI Sound MFA student Matthew Eckstein and Honors in Sound Design undergrad Karli Blalock recently sound designed a project called Elysian Roux. This excitingly unusual piece of theatre was meant to be the first mounting of an immersive theatre piece conceived by Vincent Olivieri and Mike Floyd. We spent two weeks working intensely on the project, and the results were pretty outstanding. Elysian Roux took place on UCI’s campus, all around the Claire Trevor School of the Arts. This presented some obvious challenges for the sound team, especially considering we had no budget. After gathering every extra speaker and amp available in the shop, we were left with 7 passive speakers, 5 amps, and hundreds of feet of Speakon cable. What we hoped to achieve with the design was establishing the location, creating a rather hellish mood, and acting as a device to help the actors know where the rest of the cast was so that each scene would sync up. All of these functions, of course, had to fit in a fluid and consistent bed of sound. With such a large playing space and so few speakers, this presented quite a challenge!
We found ways to use the space to reflect the sound in the most effective ways. For instance, we placed one speaker in the amphitheatre, playing a neutral loop, and because of the acoustics in the amphitheatre, this one speaker filled in the silence of an area of about 300 ft of pathway. Most centrally, we had an alcove that played a 27 minute long loop of unearthly ambience; within the alcove, we placed 2 Ramsa A-80s – one in a corner, facing the opposite wall, and the other in the center facing towards the mouth of the ‘cave.’ Pointing a speaker towards a wall!?! In this case, the reflections and natural reverberations within the cement alcove were exactly what we needed to create the ominous and cavernous sound we were aiming for.
We also placed an RH speaker in a tunnel, which we called Hell’s mouth. This loop was triggered as the spirits began to emerge from the underworld and continued to reverberate throughout the space until the end of the show.
Hell's Mouth Ambience track embedded here:
Of course we couldn’t keep all our equipment lying out every night, so we had to strike after every rehearsal and performance. Vinnie let us keep all of our equipment in his office, along with all the props and set dressings. The entire team’s Tetris skills finally paid off!
Overall this project was a fun challenge and very rewarding. Audience members called it “delicious and disturbing” and “a unique interactive experience.” As this project (and backstory) continue to grow, we are all hoping to see future productions of Elysian Roux. But remember… don’t break the circle.
This past week, five UCI Sound MFA students and our fabulous faculty took a trip south to Fort Worth for the annual USITT convention. This year, thanks to a whole host of sponsors, including our friends at Meyer Sound, d&b audiotechnik, Yamaha, and Shure to name a few, USITT put on its second version of SoundLab! SoundLab was a wonderful way for us to get our hands on some new gear, and have some time to explore it and hear it.
The adventures started on Sunday and Monday, as a whole group of us sound students from across the country loaded in the first PA system - one of their new J-series array systems. Once the arrays were in the air, SoundLab kicked off. One of the coolest parts of the week for me was having the opportunity to mix 4 live bands, with Jim van Bergen (JVB) and Buford Jones looking over our shoulders. It's really not every day you get to mix music with incredible minds like those two gentlemen giving you tips and tricks. A special thanks to Erik Alberg, Eileen Smitheimer, and our own Mike Hooker and Vinnie Olivieri for putting these opportunities together.
Josh and Buford Jones working on their mix
For the second half of the week, we packed up the d&b rig, and flew the brand new Meyer Lyon arrays. WOW! All of us agreed that it was a contender for the best speaker we've ever heard - flat frequency and phase response, and almost no distortion or breakup, even well above 100dB!
Lyon going up
The whole week was filled with other awesome sound stuff, including a memorable slideshow to celebrate Bob McCarthy's distinguished achievement award, JVB's microphone dressing clinic, and the opportunity to learn about some things we've never really gotten an opportunity to dig into before... like Out Board UK's TiMax!! Duncan and Dave were especially gracious in showing us the ropes of their machine and chatting about some of their experiences installing it!
Of course, while you're in Texas, you can't pass up the barbecue, TexMex, Cajun food, the Stockyards, and the Dallas Theater Center (we were able to see DTC's production of The Fortress of Solitude - absolutely wonderful show!). Thanks to everyone who made UCI Sound's journey to Fort Worth so educational, and a whole lot of fun!
Early this year, I had the pleasure of designing Angels in America Part I: Millennium Approaches in the Robert Cohen Theater at UC Irvine. The show represented a portion of my thesis for my MFA degree. The sound team for the show consisted of Mark Caspary (MFA 2) as the composer and Kelsi Halverson (MFA 1) as our assistant. The show was directed by Gavin Cameron-Webb. For those unfamiliar with Tony Kushner’s play, Angels is a fantastical portrayal of the lives of a diverse group of people living in New York City at the peak of the AIDS epidemic in the mid-1980s. The play slips fluidly in and out of real time and the real world as the characters interact with ghosts, invisible voices, hallucinations of each other, and, of course, the Angel herself. This style presents the design team with ample opportunities to change perspective and direct the audience’s focus.
The production took place in the Robert Cohen Theater, configured in three-quarter thrust. With only three rows of seats in each audience section (a total of about 85 seats per show), the experience was both intimate and different for every seat. It became immediately clear during initial talks that the show would be extremely music and sound cue-heavy, so I knew I wanted to design a simple sound delivery system. The main speaker positions consisted of Meyer Ultra-series loudspeakers, with a UMS-1P subwoofer. The team also agreed early on upon the use of small wireless speakers throughout the props and set pieces to localize certain sound cues or to create the effect of a radio on-stage. For this setup, we purchased four Altec-Lansing IM237 battery-powered speakers to combine with UCI’s Shure PSM-900 In-Ear Monitor system. The content played back through Qlab 3 and into a Yamaha LS9-32, allowing me to route a couple of sends from Qlab to some of the console’s built-in reverb effects. I also decided, in conjunction with lighting designer Stacie Marie O’Hara, to trigger certain light cues with Qlab via a MIDI connection.
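For readers who haven't wired Qlab into a lighting console before: Qlab can emit MIDI in several flavors, and one common scheme for cross-triggering cues is MIDI Show Control (MSC). I'm not claiming this is the exact message format we programmed for Angels; the sketch below just shows what an MSC "GO" SysEx message for a lighting cue looks like on the wire, assuming device ID 0 and the lighting command format.

```python
def msc_go(cue: str, device_id: int = 0) -> bytes:
    """Build a MIDI Show Control 'GO' SysEx message for a lighting cue.

    Byte layout: F0 7F <device_id> 02 <cmd_format> <command> <cue ascii> F7
    where 0x01 is the Lighting command format and 0x01 is the GO command.
    """
    header = bytes([0xF0, 0x7F, device_id, 0x02, 0x01, 0x01])
    return header + cue.encode("ascii") + bytes([0xF7])

# The bytes a console would receive for "GO cue 12.5":
msg = msc_go("12.5")
print(msg.hex(" "))
```

In practice Qlab builds these bytes for you; seeing the raw layout mostly helps when sniffing MIDI traffic to debug a cue that refuses to fire.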
Those interested can peruse my system paperwork HERE.
In initial discussions, we decided as a team to create a build of tension and anticipation throughout the show that culminated in the reveal of the angel at the very end. Mark and I decided early on to root the earliest sound and music cues in reality and to slowly expand the soundscape into a more and more otherworldly and non-diegetic style throughout the show. The shape of the space, in combination with my speaker placements, helped facilitate this illusion. Having positions in front, above, and behind the audience, as well as upstage and in the on-stage practical speakers, meant that I could easily define a space and source for the sound and music based on the scene—from a focused point (e.g., Harper’s radio) to a nebulous, reverberant, cerebral space (e.g., Harper/Prior’s shared dream).
The process of discovery and creation on this show began in early October and continued full-force until opening night. I consider it incredibly fortunate that composer Mark Caspary and I have collaborated before, as we have already developed a language and an understanding of each other’s working styles. Despite our respective titles in the program, Mark and I produced most of the sound elements and music together. This took advantage of the power of two brains over one, but also ensured that we were always up to speed with one another’s progress and creative decisions.
With such an enormous show, it was imperative that the design team stayed on top of all changes to the designs. I communicated significantly with Stacie to make sure that lighting and sound gestures worked harmoniously, and to program Qlab’s MIDI light cues with accuracy. David Phillips (scenic) and I kept in touch to track locations of props and furniture in which I hid a wireless micro-speaker. Leanna Moore (costumes) and I made sure that Hayley Palmer, the actress playing the Angel and Emily the nurse, was comfortable wearing a body mic during two key scenes in Act III. Our collective designs went through many drafts, with each design undergoing at least one major revamp along the way.
Angels was an immense undertaking for all and I’m very proud of the work that was done. Team sound worked seamlessly well together (including Tariq Malik and Gabriel Barriga on audio crew). Congratulations to all on a fantastic run!
The Design faculty at UCI is delighted to announce that Karli Blalock has been granted Honors in Sound Design! Karli has only been studying sound for a year and a half, but she made up for lost time with meteoric growth. Her design and technical skills have developed quickly, and she recently created an excellent sound design for a workshop of The 39 Steps (if you know the show, you know how big that design is).
Karli has taken a number of sound design classes with the MFA students, and she's worked alongside them in production and crew work. She's assisted on some large shows and designed a bunch of smaller projects. She is tireless and always up for a new challenge. Mike and I are disappointed that she's graduating this spring, but we know that wherever she ends up, she'll make great work and do great things!
All of us who are part of the UCI Sound Design Program are excited to announce our two incoming students next fall. Here they are, the Class of 2017:
Andrea Allmond is currently working on staff at Dallas Theatre Center. She's a Texan through and through, which is terrific for us, since our current resident Texan is graduating this spring. Andrea did a lot of conceptual sound design work in her undergraduate training, and she's filled out her technical chops at DTC working on the sound crew. Also, she boxes (!)
Ben Scheff will be moving all the way across the country from Boston (my favorite East Coast City). Like Andrea, he's been out of school for a few years. He is currently the Sound Engineer at Boston Conservatory and the TD at Circus Smirkus. When not working those gigs, he also freelances around the New England area.
Mike and I are delighted to have Andrea and Ben join us next year. We had a number of terrific applicants, and we're really looking forward to our new additions.
It's January, which means that soon it will be LATE January, which means that soon Prof. Mike Hooker and I will start looking for two exceptional sound designers to join us at UCI for their graduate studies. That could be you!
Are you ready for graduate school in sound design? Well, go ahead and check us out at our official home on the web. We've got a dandy site there with information about faculty, staff, students, curriculum, and facilities. If you've got any questions, you can always email Prof Hooker or me for more details. We're both happy to answer any questions.
Choosing a graduate program is not easy. When choosing the right program for you, you'll need to consider curriculum, faculty, student colleagues, facilities, geography, design opportunities, and cost! Everyone considers these issues differently, and how they weight these factors can dramatically affect their choice of graduate programs. Prof Hooker and I are both happy to answer any questions you may have about our program here at UCI. We want you to make the best decision for you!
One common concern that I often encounter when talking with potential design students is cost. There is a common belief that grad school is expensive. It can be, but at UCI, we have a generous financial package. It's too early to talk details, but if you are concerned about paying for graduate school, email me, and I'll be happy to explain things in more detail.
If you are interested in UCI's Sound Design MFA program, then there are two avenues available to you for an interview:
On 31 January, Prof Hooker will be at the URTA interviews in Chicago. If you're going, he'd love to meet with you.
If you are not registered with URTA, then you can apply directly to UCI through this link.
Prof Hooker and I are both very proud of the Sound Design program we have built at UCI. It's among the best in the country, and our students are all talented in many different ways. But two will be graduating this spring, and we're going to be looking for two more to take their place. That could be YOU!
In last term's Trends in Modern Sound Design class, the MFA Sound Design students tackled a huge project. I paired them off and gave each group two two-minute video clips. Their assignment was to completely replace the audio. They had to conceptualize (I asked them to be true to the original movie, and not, for example, replace footsteps with bell rings). They had to write or pull new music. They had to record/pull and edit sound effects. They had to cast, manage, and direct voice talent. They had to pull all of their elements into a ProTools session, keeping things organized and flexible for edits. They had to bring all of these elements into alignment, blending them into one aural image.
I tried to give each pair of students two contrasting videos. One was more sound effect-y, and one was more dialogue-y.
I was fairly blown away by the results. There's some exquisite work here, and in some places, I think the students' final projects were better than the source audio. But, why don't you be the judge?
Thanks also to all of the vocal talent that ended up in these projects, including some MFA Acting students. The cast for all six included: Mark Caspary, Matt Glenn, Matt Eckstein, Leslie Lank, Matt Koenig, Blake Morris, Josh Odess-Rubin, and Jade Payton.
(I should be clear here that since these projects were done in a university setting and for academic credit, I did not secure rights to any of the video or music content. If you are a rightsholder for any of the content here and want me to remove it, please email me and I will gladly do so.)
Last weekend, UCI Sound once again took a trip up to Hollywood to visit and tour the sound system of the Lion King national tour. I was really excited to help set this up with Andrew Gorell, a friend of mine from Cleveland (he was an MFA actor when I was in undergrad).
Andrew, who plays Zazu, was able to introduce us to Kevin, the head of sound for the Lion King touring productions. The evening was a great learning experience for all of us! We heard Kevin's process for tuning his system in just five hours in each city (entirely by ear!), he talked us through how he repurposed the old A/B center cluster and how he utilizes the large number of subwoofers throughout the show, and he talked about some of the more unusual moments in the show that are under his thumb as the sound head - MIDI triggered CO2 cannons, and a 4-1/2 octave marimba with pickups on every bar! Kevin has been touring with the Lion King for almost a decade, so it was pretty incredible hearing about his experiences!
Thanks to Kevin, Andrew, Vinnie, and the staffs at the Pantages and Lion King for helping put this experience together!
This year, as part of the Trends in Modern Sound Design class, we worked on a project called Track & Field. Last year, Stephen Swift (MFA '13) oversaw the first version, and this year, we refined it a bit. Track & Field is based on Layer Tennis, a fun exercise in inspiration and workflow.
In Track & Field, each week, the player is given a 30-second clip of sound (bounced and stems). The player has 60 minutes to transform that piece into something else. They can record, remix, process, loop, crush, reconceive, redevelop, and/or do anything they want. The only hard and fast rules are that the player not take more than 60 minutes to work on the piece and that the piece lasts only 30 seconds. The player then creates a bounce and stems of their work and delivers it to the next player. Play repeats.
Track & Field is a great exercise for a number of reasons. First, it's great practice for non-linear conceptualization. Also, it encourages the development of efficient workflows. Doing it in the Fall Term gives the students a chance to reconnect after a summer apart, and it gives everyone an opportunity to listen to and learn from each other.
We have six grad students, so we had six different threads. I developed a rotation scheme so that every student got to work on every thread at least once. Every week during the term, we listened to the current versions of each thread, and on the last days of class, we listened to each thread straight through, from version 1 to version 9. It's fascinating to listen to how sonic elements transform through the piece, and how one idea transforms dramatically into another idea.
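With six students, six threads, and nine rounds, the simplest scheme that guarantees full coverage is a round-robin shift: each week, every student moves one thread down the line. (This is an illustrative sketch of one scheme that satisfies the constraint, not necessarily the exact rotation we used in class.)

```python
def rotation(n_students: int = 6, n_threads: int = 6, n_weeks: int = 9):
    """Round-robin rotation: in week w, student s works thread (s + w) % n_threads.

    With n_weeks >= n_threads, every student is guaranteed to
    visit every thread at least once.
    """
    return [[(s + w) % n_threads for s in range(n_students)]
            for w in range(n_weeks)]

schedule = rotation()
# Sanity check: every student touches every thread at least once.
assert all({week[s] for week in schedule} == set(range(6))
           for s in range(6))
```

The nice side effect of the modular shift is that no two students ever land on the same thread in the same week, so every thread keeps moving every round.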
Here, for your listening enjoyment, are all six threads. Some are crazy rides, and some are confusing. Don't worry about it. Just enjoy the oddball cocktails that the MFA designers made!
Most of us nowadays use an iPhone, iPod, or iPad to listen to music through headphones. I recently found an app designed to enhance the sonic experience and offer some valuable listening information at the same time. The app is “CanOpener” from Good Hertz LLC (I am in no way associated with the app or developers; I just think it’s a great app) and brings some unique features to the listener.

The first thing you see when you open the app is a list of headphones to choose from. The list right now (Version 1.1) includes:
Line-Out
AKG K55
Apple EarPods
Apple Earbuds
Audio-Technica ATH-M50
Beats Pro
Beats Solo HD
Bose AE2
Bose QC15
Bowers & Wilkins P5
Etymotic hf5
Etymotic mc3
Etymotic mc5
Generic Earbuds
Generic Headphones
Generic In-Ears
Klipsch Image ONE
Sennheiser HD 280 PRO
Sennheiser HD 580
Sennheiser HD 598
Sennheiser HD 600
Sennheiser HD 650
Sennheiser HD 800
Shure SE215
Shure SE315
Shure SE425
Shure SE535
Shure SRH440
Shure SRH840
Shure SRH940
Sony MDR-7506
V-MODA M-80
Choosing your headphones (or the closest match) allows the app to monitor what’s happening in your ears more accurately. The app will play anything you have in your iOS Music app (if you have iTunes Match, it won’t stream from iCloud but will play anything that’s been downloaded to your device) and can also play FLAC files if they are transferred via iTunes. Music sorting matches what you have in your Music app, including Playlists. The only very minor bummer I’ve found is that playing files through CanOpener does not register in the iTunes “Play Count” metadata field, but I might be one of the only people in the world who keeps track of that kind of thing.
In my opinion, the app design is beautiful. Here’s what it looks like from launch to music selection:
The crazy spinny circle thing in the middle of the pixelated album artwork is time advance/rewind. Unfortunately it won’t let you scratch your music like DJ Jazzy Jeff. Can’t win ‘em all.
Once you have your song selected, you have a couple of options to modify your listening experience. The first is “Crossfeed,” which seems to be the biggest marketing draw of the app. Here’s how the developers explain it:

“CanOpener’s crossfeed algorithm allows your headphones to behave a bit more like speakers. The design, a result of extensive research and listening tests, incorporates several broad characteristics of an ideal loudspeaker setup, creating a more spacious, natural soundstage over headphones.”
I personally never much minded “headphone” sound, but I find myself really enjoying the crossfeed feature. On a basic level, the app takes some Left and puts it in Right and vice versa, but does so while intelligently managing things like phase cancellation. It noticeably changes how your music is presented via headphones. The adjustable parameters are Amount (values from 0 to 150% in 25% increments), Angle (from 0 to 75 degrees in 15-degree increments), and L/R balance (a pan adjustment from 100% L to 100% R in 10% increments). Also available are toggle buttons for Mono, L/R Flip, and Polarity Reverse (making the app handy for system troubleshooting too!).
There’s a great visual presentation of what’s going on with crossfeed too. I’ll run through the crossfeed presets so you can hear what’s going on in your ears. (I recommend headphones):

Normally, the question mark in the center of the screen is a neat silhouette of your chosen headphones to represent your place in the aural world.
The other way to modify your listening experience in the app is the Equalizer. It is a very basic EQ, offering only Bass and Treble adjustment. Bass frequencies range from 31.5 Hz to 250 Hz and can be gained up or down by 6 dB in 0.5 dB increments. Treble offers the same gain adjustment from 2 kHz to 16 kHz. I personally leave the EQ flat but enjoy the Spectrogram because it’s pretty. I guess it also provides some information on the power levels of frequencies or something, but mainly it’s pretty. And you can change the colors to match your outfit!
Here’s what it looks like:
My favorite feature in the app is the Dosimeter. We all know that people listen to their music WAY too loudly, but there hasn’t really been a good way to know just how much too loudly. As a sound designer, my ears are definitely among my top 5 favorite and most valuable body parts, and I want to keep them healthy and safe. CanOpener has a great Dosimeter which they claim to be pretty accurate. It’s one of the main reasons the app asks you to choose your listening device when you launch it. It’s also realtime and cumulative, giving you a current and lifetime dB SPL reading. Unfortunately, the app is smarter than I am and knows I’m streaming, and therefore won’t show me crap on the dosimeter, so to show you, I have to take a crappy screen video of it working. Also, I just installed the app on my iPad so I could take said crappy video with my phone, which is why the lifetime counter is basically nothing.
Here's what it looks like in action, with awful audio:
I really love having a gauge to know what I’m doing to my ears while I selfishly block out the rest of the world with my headphones. Disclaimer: the dosimeter is accurate, but only for the headphones on the list. Here’s what the developer says: “For supported headphone models, CanOpener can measure precisely how loud you are listening, helping you monitor your listening habits and protect your hearing.” Luckily, I have Bowers & Wilkins P5 headphones (looooooooove them) and therefore can get a good number.
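I don't know exactly what math CanOpener runs under the hood, but the standard occupational-noise dose formula is simple enough to sketch. Under the NIOSH recommendation (85 dB criterion level over 8 hours, 3 dB exchange rate), the allowed listening time halves for every 3 dB above 85:

```python
def noise_dose(level_db, hours, criterion=85.0, exchange=3.0, ref_hours=8.0):
    """Fraction of a full daily noise dose for one listening session.

    NIOSH-style model: 85 dB for 8 hours = 100% dose (1.0), and the
    allowed exposure time halves for every `exchange` dB above
    `criterion`. Values > 1.0 mean you've exceeded the daily limit.
    """
    allowed_hours = ref_hours / (2 ** ((level_db - criterion) / exchange))
    return hours / allowed_hours

# One hour at 94 dB is already a full day's dose:
print(round(noise_dose(94.0, 1.0), 2))  # → 1.0
```

The sobering part is how fast the exponent bites: a subway-commute listening level in the upper 90s burns through a whole day's dose in minutes, which is exactly the kind of thing a cumulative meter makes visible.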
I have been using the app much more than I anticipated when I purchased it. I really enjoy the crossfeed and dosimeter… enough to write this. At the very least, it’s a great audio app to have on your device for three bucks.
Last Friday at the break of dawn, team UCI Sound carpooled up the treacherous 405 freeway to Burbank, California to tour the Warner Brothers Foley studio! Mary Jo Lang is a UCI alum and a world-class Foley mixer who has been in the industry for 30 years and has developed a hugely impressive resume. Even in the midst of the busy season, she kindly invited us to tour the Warner Brothers Foley stage - a place where she works frequently. She and her team, which included veteran Foley artists Alyson Moore and John Roesch, walked us through their process, talked about their careers, and showed us the beautiful facility, which housed everything from sea shells to tide pools. Armed with shotgun microphones and Pro Tools, there didn't seem to be anything this team couldn't create. We talked about the acting and performance aspect of Foley. Did you know that there are only about 400 people in the world that do this work? The team estimated that about 45 of them are always active.
The fun didn't end here. We were in for quite the afternoon. Bob Beresh, Post Production manager at Warner Brothers, surprised us with an expanded tour that started in a Warner Brothers dub stage, where Matthew Iadarola and his team were over-dubbing Japanese dialogue stems for a major motion picture currently in theaters. We sat in on a roughly 15-minute session where translators were verifying the dialogue integration while Matt and his team were critically listening for mix accuracy. What a treat it was to be a part of this process.
We moved to another dub stage and a completely different kind of Japanese film, where the mixer was gracious enough to share with us bits of his mixing process and his career.
We were then able to sneak a quiet peek at a scoring stage where the Bond Quartet was recording some music with legendary composer Michael Giacchino. Although we didn't meet the artists, just being in the room marveling at the acoustics and the equipment was enough to get us all excited.
We ended our day back at the Foley stage where Mary Jo Lang and the Foley team showed us about 5 minutes of sound that they had just created for a DreamWorks short, using everything from a wok to a wet chamois.
We left Warner Brothers feeling awestruck, energized, and inspired. It's great to be a part of a program that creates these kinds of opportunities for its students.
Today, the grad students and I took a trip up to L'Acoustics in Oxnard, northwest of LA. L'Acoustics is one of the premier loudspeaker manufacturers in the world, but most of us had not gotten a chance to take a good hard listen to their products. Scott Sugden was our contact, and he spent half a day with us, talking about science, theory, and lots of listening.
Our day started with a lecture/discussion about sound and physics. Scott talked about coaxial v. non-coaxial cabinets and how L'Acoustics applies those ideas to their smaller cabinets. From there, he shifted seamlessly into line array theory, including a clear illustration of how line arrays function differently from point-source boxes. Along the way, we listened to most of the product line, including a scrimmage shoot-out between a Kiva array and a Kara array (which both sound pretty damn amazing).
Line Array shootout. Be aware of bleeding ears.
I won't get too wonky here except to say that after taking Bob McCarthy's SIM class last month and Jamie Anderson's SMAART school last year, it was fascinating to hear yet another take on sound systems, measuring, design, and theory. The physics don't change, but the focus of attention does.
Thanks to Scott and everyone at L'Acoustics who took care of us and made today happen! We hope to see you again soon!
Josh, Mark, Kelsi, Matt, Scott, Matt, and Brian. Thanks, Kelsi, for the branding!