Wednesday, March 8, 2017

Welcome to our MFA, class of 2020!

UCI Drama is excited to present our two incoming students for next year: Hunter Moody and Jack Bueermann!  


Hunter is a native Arkansan, and he first discovered the fascinating world of sound design in his undergraduate years. After graduation, he set out for eastern Kentucky to join the 2013 summer season of Jenny Wiley Theatre as the A2. With the advent of potential full-time work in 2015, he was promoted to resident sound and graphic media designer. He has since designed the sound for and engineered over thirty productions between JWT and the budding University of Pikeville Theatre program. Hunter has developed extensive experience in educational theatre, working with the Jenny Wiley Footlights Theatre, a pre-professional TFYA program, and guest lecturing at the University of Pikeville. Outside of theatre, he has designed sound for dance, voice-over, and competition, with experience in audio engineering for concert venues, church events, and live comedy. His research interests include the historical contexts of music, the psychoanalytic effects of sound upon memory and mood, and innovations in audio production technology. In his free time, Hunter is an avid video gamer and loves (trying) to bake. He has a BFA in Theatrical Design/Technology from Arkansas State University.



Jack recently graduated from the University of Redlands, where he designed multiple major productions. Since graduating, he has worked professionally as an assistant sound designer in the Los Angeles area. More recently, he has been working as the A1 at the San Diego Repertory Theatre, where he mixed the world premiere of Hershey Felder's Our Great Tchaikovsky. Jack is passionate about theater and storytelling through sound, and is excited to collaborate with the talented faculty and students at UCI.

We had a number of excellent applicants this year, and Mike and I are both thrilled that Jack and Hunter will be joining us in the fall.  Be on the lookout for great things from them both!




Friday, January 20, 2017

Field Recording

Last fall, the final meeting of my Trends in Modern Sound Design class was a field-recording trip to 1000 Steps Beach in Orange County.  The beach is about 15 minutes from campus, down a steep stairway (about 232 steps - not that I was counting) from the Pacific Coast Highway.  Once you're on the beach, the road is inaudible, and at 9am, there's little human activity.  So, on the Monday of Finals Week, we hauled a bunch of gear down to the beach to make some field recordings.



Earlier in the quarter, we spent some time talking about how to strategize the recording session for optimal content collection and editing speed.  We first talked about what kinds of recordings we'd like to make. We settled on these formats:

* close-miked sound effects (water, animals, etc.)
* binaural ambience
* spaced-omni pair ambience
* coincident pair (XY) ambience
* omni ambience
* 5.1 ambience
* M/S ambience
* spaced-cardioid pair ambience

When we looked at the list of formats, it became apparent that we needed three teams of students, so I divided the class accordingly.


  • Ben, Ning, and Mingna would together handle a system that included a SoundField ST-350 microphone and a pair of spaced AKG 414 microphones. The ST-350 sends four channels (ambisonic WXYZ), which can be decoded to omni, coincident pair, 5.1, and M/S (see the decode sketch after this list). The AKG 414 microphones have an adjustable polar pattern, so the team could choose a spaced pair of omnis or cardioids. Those six channels (WXYZ & two 414) fed into a Zoom F8 recorder.  They would set up on the beach and record ambience.  

  • Andrea and Andrew would together handle the shotgun mic for close-miked sounds.  They mounted a Sennheiser MKH 416 on a boom pole, recording onto a Zoom H6. Andrea handled the boom.  Andrew handled the recorder and monitored on headphones.  They were tethered by cable.

  • Jordan handled the binaural recording by himself.  He wore a custom-made binaural ball-cap, with the omni elements sewn into the headband at the ear positions.  Those mics fed a Zoom H4.  
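
For the curious, here's roughly what those after-the-fact decodes involve: a minimal sketch of steering first-order B-format (WXYZ) into virtual microphone signals with numpy. This is illustrative only; the team did their actual decoding with dedicated software in the Meyer Sound Design Studio, and the FuMa-style channel scaling below is an assumption about the ST-350's output, not something we verified.

```python
import numpy as np

def virtual_mic(W, X, Y, azimuth_deg, pattern=0.5):
    """Steer a first-order virtual microphone from horizontal B-format.

    pattern: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-8.
    Assumes FuMa-style B-format, where W carries a -3 dB gain.
    """
    az = np.radians(azimuth_deg)
    return (pattern * np.sqrt(2) * W
            + (1.0 - pattern) * (np.cos(az) * X + np.sin(az) * Y))

# Stand-in B-format channels (W, X, Y, Z): one second of noise at 48 kHz
sr = 48000
W, X, Y, Z = 0.01 * np.random.randn(4, sr)

mono_omni = virtual_mic(W, X, Y, 0, pattern=1.0)   # plain omni
xy_left   = virtual_mic(W, X, Y, +45)              # coincident cardioid pair
xy_right  = virtual_mic(W, X, Y, -45)
ms_mid    = virtual_mic(W, X, Y, 0)                # M/S: mid cardioid...
ms_side   = np.copy(Y)                             # ...and Y as the side figure-8
```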


We met on campus at 8:30 and caravanned to the beach.  Once we were on the beach, we found a rocky plateau sufficiently far up the beach (the tide was slowly coming in) to make an impromptu camp.  Each team was responsible for collecting all of the gear (including stands, cables, batteries, etc.) they'd need, so once we got to the beach, the teams started setting up.  Teams shotgun and binaural were first out of the gate, heading south to a rocky water cave with lots of great water-against-rock sounds.  Team multi-channel took longer to set up.


The recording session itself was lots of fun.  Team shotgun enjoyed scrambling over rocks to get some great sounds of water lapping against the rocks, but their attempts at recording wildlife were less than successful.







Team multichannel spent some time adjusting their microphone spacing and distance from the surf, but got some excellent surf recordings.



Team binaural (Jordan) had a much-needed opportunity for some quiet communion with nature after a particularly busy quarter (and an emotionally-taxing production).



As the session wound to a close, the teams started packing up, and Ning found herself in the wrong place at the wrong time with respect to the tide.  


Ultimately, we collected some great recordings!  After the session, the students went back into the studio to edit and render the recordings.  Teams binaural and shotgun only had two-channel and single-channel recordings to render, but team multi-channel needed to use some specialized software in the Meyer Sound Design Studio to render their recordings into all of the requisite formats.  I created a Google Doc that automatically turned the recording information the teams entered into the appropriate data formats for uploading, both to our private sound effects server and to FreeSound.org, where we've put our recordings for public access.  We're still working on mastering and uploading all the audio, but once it's done, I'll post a link on this blog!
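
If you're curious what that kind of metadata automation boils down to, here's a hypothetical sketch in Python rather than a spreadsheet. The column names and the filename/description patterns are invented; the real Google Doc's fields and FreeSound's required format may differ.

```python
import csv

def format_entry(row):
    """Turn one row of recording metadata into a filename, a description,
    and a tag string. All field names here are hypothetical."""
    filename = "{date}_{location}_{subject}_{format}.wav".format(
        date=row["date"],
        location=row["location"].replace(" ", ""),
        subject=row["subject"].replace(" ", ""),
        format=row["format"])
    description = ("{subject} recorded at {location} on {date}. "
                   "Rig: {rig}. Team: {team}.").format(**row)
    tags = " ".join(["field-recording",
                     row["location"].lower().replace(" ", "-"),
                     row["format"].lower()])
    return filename, description, tags

with open("recording_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(format_entry(row))
```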


Wednesday, January 11, 2017

Disneyland Field Trip

The 2nd of December marked the last day of the quarter for us, but it was also the long-awaited day of our Disneyland visit - what a way to end the quarter, at the place where dreams come true!

Jerry Tomlinson, hiring manager of Technical Services, was our first point of contact after a 17-mile morning drive to Disneyland Park. Technical Services at Disneyland provides technicians for all live events (stage shows, marching bands, parades, etc.), and Jerry is constantly on the lookout for talented technicians to bring onto the Disney team. His enthusiasm was infectious, and he was also extremely detailed about the operations of the park as we headed straight to the Technical Services building, walking past rows of neatly pressed crew uniforms and costumes, and finally to their main equipment shop.

It was fascinating to see a central shop area that supports all equipment for the whole of Disneyland. The efficiency and level of precision in equipment preparation was definitely an eye-opener. Each request for equipment is submitted to the shop, then checked and prepared by the full-time technicians before the gear is sent out to its designated location. This speeds up the pull process but, most importantly, keeps inventory in check. I can definitely see how having a dedicated shop like this would have solved issues I have encountered in professional settings elsewhere.

As we walked past a few rehearsal stages, we came across one for the Disney Performing Arts (DPA) programme. The DPA programme is a series of workshops for youths to learn vocal and instrumental performance and dance, culminating in a performance in front of Disney Park guests at the end of the series. A team of experienced music recording engineers works with each group to mix and record the ensemble, and each group is presented with a USB drive containing their recording upon completion of the clinic.

A tour of Disneyland would never be complete without the next character. The theatrical stage showing Mickey and the Magical Map is a heavily automated production, with features such as Mickey's trap door and moving set pieces. Jerry shared an interesting observation about how many automation technicians started out as sound technicians. We chatted about this and attributed it to automation programming being extremely detailed and layered - possibly similar to programming sound in this digital age.
Photos were off-limits in the backstage area, but we were finally allowed to take pictures upon entering the public area! (yay!) We watched the second half of "Beauty And The Beast" at the Royal Theatre, accompanied by a pianist similarly dressed in a simple yet elegant period costume. Kudos again to the Disney experience! We also observed the stage manager and sound technician calling and operating the show from a little corner behind the audience. The stage manager had a fairytale-like music stand, and an iPad control surface housed inside a treasure box. After the show ended, everything disappeared into the treasure box (just close the lid!), leaving only the beautiful music stand, clean and unobtrusive.

We then followed Mary Poppins and the marching band to the area in front of the castle and watched their performance. Jerry explained that each show has its own "zone" of speakers. This performance of Mary Poppins was isolated to the area in front of the castle. The system was designed to "draw people in" toward the show without spilling too much into the surrounding areas. Jerry also explained that only the trumpeter and narrator were miked, and everything else was acoustic. This was when I asked where the radio mic antennas, the sound console, and the sound operator were. To my astonishment, he explained that antennas were hidden inside street signs, in trees, and in other structures all around the park, managed by a central system in Audio Central. This is done so that the RF signal from wireless microphones is never isolated to a single pair of antennas, providing redundancy and flexibility; there is always the freedom to stage a performance in a different location just by re-programming at Audio Central (which we'd visit later in the tour). Imagine the amount of manual labor required for the changeover if this had been a conventional setup!

Microphone levels and individual EQs are pre-programmed at Audio Central, and cues are triggered by a stage manager or technician who follows the performers. Fine adjustments, if required, are communicated to Audio Central via walkie-talkies so that changes can be made in real time without needing a sound console and sound operator on the ground. I felt this definitely added to the "magical" experience at Disneyland.

Then came the most important highlight of the entire day - Audio Central. This is truly the mastermind, the brain, the “air traffic control” of everything audio around Disneyland. The most crucial software we were introduced to was the Disneyland Entertainment Control System.

We talked about how, during parades, each parade truck has its own assigned show music; playback comes from an SD card housed within the truck itself, but Audio Central can override it at any time if the truck's system fails. The location of each parade truck is visible on a map in Audio Central, color-coded according to which music it is playing. The speakers surrounding each parade automatically crossfade from the background music to the parade music once the parade truck is within their zone, and the speaker icons on the screen change color accordingly.
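
None of us saw the code behind the system, so the sketch below is purely an illustration of the zone-crossfade behavior Jerry described, with invented zone names and numbers; it does not reflect Disney's actual implementation.

```python
# Illustrative only: fade one zone's speakers between area background music
# and the parade feed as a parade unit enters and leaves the zone.
ZONES = {
    "main_street": {"bgm": 1.0, "parade": 0.0},
    "hub":         {"bgm": 1.0, "parade": 0.0},
}

def step_toward(current, target, step):
    """Move a level toward its target by at most one fade step."""
    return current + max(min(target - current, step), -step)

def update_zone(zone_name, truck_zone, fade_step=0.1):
    zone = ZONES[zone_name]
    in_zone = (truck_zone == zone_name)   # stand-in for real position tracking
    zone["bgm"]    = step_toward(zone["bgm"],    0.0 if in_zone else 1.0, fade_step)
    zone["parade"] = step_toward(zone["parade"], 1.0 if in_zone else 0.0, fade_step)

# Called periodically as the truck's reported position updates:
for _ in range(12):
    update_zone("main_street", truck_zone="main_street")
print(ZONES["main_street"])   # BGM faded out, parade feed faded in
```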

With some research on the internet, I found that this was a system developed by Don Dorsey exclusively for Disneyland. According to a guest post on http://www.mouseinfo.com/forums/disneyland-resort/87894-disneyland-show-control.html:

“The opening of California Adventure and the birth of the Eureka! Parade gave Dorsey the chance to implement a newly enhanced version of DECS. This technology represents a major advance in parade show control. Back in 1980, if Dorsey wanted to change something, he had to go into each of the 20 zones and change every individual cue line manually, a task that took hours. In 2001, DECS uses an “interpretation” feature, which allows a change to be written into one zone and immediately “expanded” to all 33 zones. “You can now rewrite the entire script in the amount of time it takes the parade to turn around,” Dorsey notes. “We finally have a system with the capability to control complicated parades and respond quickly to change requests from the creative team.”

It was still fascinating to find out how much detail was put into the show control system to ensure every single element is fully controllable yet can be overridden on the fly to adapt to changes on the ground, all transparently to the guests. This is critical in a high-accountability venue, where the failure of a single performance could trigger a multitude of complaints or a loss of revenue and of faith in Disney.

After lunch we took a nice walk over to Disney California Adventure Park. Here we visited Cars Land, where Jerry talked about Walt Disney's attention to detail: every landmark and structure was built exactly as it would have looked in the animated films.

We then took a breathtaking walk along the Boardwalk, visited the control room of World of Color, and eventually caught the parade of the new Disney princess, Elena of Avalor. And of course Jordan had to give in to the lure of the churros kiosk along the way!

We finally got to the other main highlight of the day - Frozen the Musical. We walked through the backstage of the Hyperion Theatre, past the crew and actors getting ready for the show. Jerry explained that it is a video-heavy show: two huge panels of LED screen join at upstage centre to form the backdrop for most of the show, while the rest of the proscenium is covered with projection. The LED wall opens down the middle like a door for entrances and exits from upstage centre. Three bigger-than-life doors fly in at different moments of the show, one of which is used in the famous Anna and Elsa song "Do You Want to Build a Snowman?". There was a giant, intricate ice chandelier above the audience, very cleverly masked by bright auditorium lighting preshow. Ice spikes for Elsa's ice castle scene were wonderfully hidden in the gaps of the stage floor - the audience would never guess until the pleasant surprise during the scene.

The most impressive elements of the production were the LED wall and video projection. These set the scene for most of the show and transformed freely between scenes with realistic animation. The sound design by Broken Chord Collective definitely helped make the show. Together with the projection design, it was a fascinating journey into the world of Frozen and definitely a magical experience for the many children and families watching.

We had a wonderful time seeing how dreams come true, and it definitely made some of our own dreams come true too.







Wednesday, December 7, 2016

Audio-for-Video Projects

Most fall terms, I teach a class called Trends in Modern Sound Design. Part of the class is a series of open-topic discussions, but other parts are annual projects that we all look forward to. One of them is a sound-for-picture project. I divided the students into pairs, and each group got two sixty-second clips of film. Their task was to completely strip and replace the audio in the video, including re-recording all the dialogue, replacing all the sound effects, and layering in music.  The students had to work as recordists, directors, and producers. On the last day of class, we watched each video and engaged in a conversation about the team's process in recreating the sound.

Ben & Mingna:


Hateful Eight, UCI Sound Design exercise from Vincent Olivieri on Vimeo.



Usual Suspects, UCI Sound Design exercise from Vincent Olivieri on Vimeo.



Andrea & Andrew:


Mad Max Fury Road, UCI Sound Design exercise from Vincent Olivieri on Vimeo.


Jordan & Ning


The Revenant, UCI Sound Design exercise from Vincent Olivieri on Vimeo.


Trumbo, UCI Sound Design exercise from Vincent Olivieri on Vimeo.


We had lots of help creating these videos, mostly from friends of the program lending their time and voices in the studio. Thanks especially to Sam Arnold, Aaron Arroyo, Amy Bolton, David Hernandez, Brandon Hitchcock, and MFA Sound Designer Ben Scheff for providing their voice work!


(a note on copyright: as these projects were designed as classroom exercises, we did not secure any rights to these films. If you are the rightsholder of any copyrighted material contained herein and would like us to remove these clips, please let us know and we'll be happy to.)

Saturday, December 3, 2016

The Iliad

THE ILIAD
MENIN: RAGE

Just a few weeks ago, we opened and closed our production of The Iliad. It was a huge endeavor for something that only ran five performances at one hour and fifteen minutes, but I am immensely proud of what came out of it. It was a lot of firsts for me. I’d like to share some of my process with you fine folks.


THE MUSIC:
I started talking to Annie Louie about this production of The Iliad almost eight months ago. Annie had settled on a translation by Stephen Mitchell, which is a relatively recent iteration. Unlike some of the stuffy, stilted, belabored, and ultimately un-engaging translations I had read in school, this version was very much alive. The language was clear and uncomplicated, making room for the incredible metaphor and dialog to really come through. The whole book was, while still substantial, a pretty easy read. Annie knew that there was no way we could do justice to the whole story in the course of a single show, so she set about narrowing down the length of the story so that we stood a chance of being able to tackle its depth. In terms of design, we were speaking of the show in shamelessly cinematic vocabulary, so it followed that our music would go in an orchestral, cinematic direction.

“Sure,” I said, without hesitation, “We can do that.”

Now, it's not that the statement wasn't true, exactly. If I had said "We *know how* to do that," that would have been a huge lie. I had absolutely no idea how to tackle something of symphonic scope, and certainly no precedent for putting it in front of an audience. I did know a little of the vocabulary, though. A lot of it came from breaking down scores and reading along with symphonic recordings, or possibly just having been around a huge variety of different music ensembles throughout my life. Orchestration and instrumentation are things that I've made a point to study, but I hadn't exactly… you know… done it before.
Because our costumes, lighting, and very limited set were beginning to take a more modern or temporally ambiguous form, I elected to generously include electronic and synthetic elements to pair with and contrast with the more traditional symphonic sounds. Some of this choice was informed by my comfort level in writing for an orchestra. I knew, for example, that if I contrasted synthetic sounds for the gods against traditional symphonic sounds, I was less likely to fail than if I attempted to make that distinction solely using the orchestra. So I was also, in some ways, hedging my bets.
Generally, the way Annie prefers to work requires the sound designer to be present and functionally engaged with the cast during the entire rehearsal process. In the past, I had generally composed by watching rehearsal, taking notes, and/or filming, and then sitting down with what I had gathered after the fact and writing then. With rehearsal running eight hours a day, five days a week, however, that approach proved not terribly useful. Similarly, in my usual process, I tend to track my cues pretty meticulously from the get-go. In this process, it was such a tremendous slow-down that I quickly abandoned it. In a few different ways, the first week of rehearsal was a bit of a wash. I had started to build vocabulary, and had locked down a couple brief sequences in great detail, but I hadn't really been working in the moment with the cast the way I was hoping I could.
So the next week, I made the decision to sketch the music in the room, as basically as I possibly could, and build rhythmic skeletons that I could paint over later. Holy crap, did that make all the difference in the world. For one thing, I didn't necessarily need to film and cross-reference everything I wanted to write for, which was a huge time saver at home. I could just step into my home studio and play for an hour or two, then bring in the draft and try it the next time it came up in rehearsal. Take notes, rinse, repeat. I first started drafting in Reason, which I know is a painfully limited environment, but it's one that I knew I could work quickly in. Almost immediately, Mike, in traditional mentorly fashion, pointed out that Reason's sub-par orchestral sample library was limiting my vocabulary, and that I was writing music that played into and around its limitations.
We were about three weeks in at this point, so the idea of integrating a new sample engine into my workflow was daunting, never mind the problems that would come up from just learning it, and double-extra never mind that Reason doesn't play nice with VSTs, so whatever I wound up using would have to be rendered and recorded, which stood to be a substantial slow-down. With East/West offering Composer Cloud for about $30/month, though, I figured it was a pretty low-risk thing to try. So I downloaded it, and all the Hollywood libraries (which was NOT a quick process), and in about four hours, including setup and building the workflow for the first time, the first major music sequence I built went from this:


to this:




So… yeah. There was no looking back. Even the first time I loaded up a tutti brass patch and hit that tuba/bass trombone/cimbasso octave, I knew that I was in for a great time. I honestly don’t remember most of the time I spent arranging that piece. It was just flowing.


THE SOUNDS:
We had used the term "elemental" when talking about the effects that we wanted. Things like rain, wind, and thunder had to be enormous. While initially we had talked about shying away from literal combat foley, it became apparent as we were building a vocabulary for the gods, and for mortals when they were being aided by the gods, that mortal vs. mortal c(K)ombat needed to give us somewhere to go. If we made all the mortal clashes big and cool sounding, it wouldn't leave the room we needed for the gods. So what we settled on was a selection of pretty typical foley sounds used as sparingly as we could, and at slightly heightened volume. The gods had swooping synthesized and repurposed narrative sounds with their actions, which leaned on showing their power and motivations rather than the physical results of their actions. This gave us a lot of room to make some cool in-between sounds when the gods were helping mortals out.
As with the music, I have historically tended to work on my own and come up with a completed idea to try out against a completed scene. I knew for sure that approach wouldn't fly in this process, so I came prepared to do the majority of my cue sculpting in QLab. Once I settled into the process, it became incredibly reflexive. First I'd build a temporary cue, which might be one element of the final, or might simply be a random sound that fit the gesture. By the end of a given rehearsal day, about half of the cues I had built would be just about what ultimately wound up on stage. Often, the remaining work was in creating a convincing variety around a copy-pasted gesture.
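
One way to cheat that variety, in the same spirit (though not necessarily how I handled any particular cue), is to nudge a copied cue's playback rate before firing it. The sketch below assumes QLab's OSC control on its default port 53000 and its /cue/{number}/start and /cue/{number}/rate addresses; the cue number, IP address, and rate range are made up.

```python
import random
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 53000)   # machine running QLab (hypothetical IP)

def fire_variant(cue_number="50"):
    """Randomize a cue's rate slightly, then start it, so a repeated
    gesture never lands exactly the same way twice."""
    rate = random.uniform(0.92, 1.08)
    client.send_message(f"/cue/{cue_number}/rate", rate)
    client.send_message(f"/cue/{cue_number}/start", [])

fire_variant()
```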


THE SYSTEM:
Our space for this show was the Winifred Smith Hall, affectionately known as "The Winnie." It's sort of a modified thrust built mostly for chamber and small-ensemble music performance. It carries spoken word beautifully and has a lovely reverb on its own. Having mixed a couple of concerts in there, I knew that this was something that I would have to work with, rather than against. While the room diffuses high frequencies well, it has a tendency to build up low-mid energy really quickly: your average electric bass amp generally turns to mud very quickly at high volume. This is compounded by some aging and questionably conceived rigging in the space, which meant I couldn't bank on getting a lot of heavy equipment in the air. Also, for convenience, the back walls are flush to the back row of seats, so there was no easy spot to get any sort of rear surround. Top that off with the space being renovated until the week before load-in (which was the second week of classes at the beginning of the school year), and all of that meant I needed a system scaled to the time and labor we had available.


The WSH in a rare empty moment

A shot of the WSH from the stage


The biggest complication this put forth was subwoofer placement. I needed to put the subs someplace where I could tune them to my mains and still really get some rumble to the audience. Hanging them would inevitably compromise the power I needed. After running a few scenarios through MAPP (and estimating the 3D interactions that it could not predict), I landed on a somewhat strange spot that kept them close enough to the audience for power, in line enough with the mains that I could compensate with time, and far enough onstage of the voms that I wasn't going to just trap all of the energy. What I wound up with was just out of the way, and while it was definitely a bit hotter toward the front few rows, at the wavelengths they were throwing it more or less evened out. My biggest coverage gaps fell in the aisles.
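
The "compensate with time" part is just path-length arithmetic: whichever source sits closer to the reference position gets delayed by the difference in arrival time. A back-of-the-napkin sketch, with invented distances rather than measurements from the Winnie:

```python
SPEED_OF_SOUND = 343.0   # m/s, roughly, at room temperature

def align_delay_ms(dist_main_m, dist_sub_m):
    """Delay (in ms) to add to the closer source so both arrive together."""
    return abs(dist_main_m - dist_sub_m) / SPEED_OF_SOUND * 1000.0

# Hypothetical numbers: mains 9.5 m from the reference seat, subs 7.0 m
print(round(align_delay_ms(9.5, 7.0), 2), "ms of delay on the subs")   # ~7.29 ms
```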


The surrounds were inevitably a compromise of the back row. Doing overheads instead would've ultimately lost me even more coverage, so the back row wound up getting a little hosed. It was still perfectly audible, but about 6-8 dB down above 1k. Not great, but the best compromise I could come up with.




The mains were actually fairly easy. I had a left and right low on the deck, for which I selected UPQs, and a left and right high, which were UPJs. A trio of UPJrs served as a mono front-address main for each section, which I used for effects, sourcing action on the apron, and also as an HF fill for the high mains upstage, which were playing most of my musical content.

What I ultimately arrived at was this paperwork package:






AND SO:

In a rare opportunity here at UCI, it was arranged for an archival video of the production to be made. Unfortunately, my attempt to record a decent surround image of the play failed (and I'm blaming MOTU), so what you can see below is a mono direct sum of the audio sends in addition to the live recording. It's not a close reproduction of what happened, but it's certainly enough to give an idea of what the content sounded like. Thanks for reading! Please hit me up in the comments section if you have any questions or comments.




Thursday, December 1, 2016

Track and Field

In my Trends in Modern Sound Design class, we've started doing an annual project, inspired by alumnus Stephen Swift, called 'Track and Field.' Track and Field is an exercise in both creativity and efficiency, and it's played over the course of the entire quarter with a series of due dates.

On the first due date, each player creates 30 seconds of sound, completely self-driven. They can spend no more than 60 minutes creating the piece, and once they're done, they render both a mix and a series of stems. As a class, we listen to the mix. After class, a different player takes the stems and uses them to create something entirely new, also spending no more than 60 minutes on the project. At each due date, we listen to the most recent batch of mixes, and then a new student takes the stems to create a new version.

The project is highly creative, but the 60-minute cap also motivates each player to work as efficiently as possible.  Different students choose different technical goals for themselves, with some choosing to sample a lot of different DAW platforms, and others choosing to delve deep into the intricacies of one particular piece of software. An open-ended project like this allows them to shape their goals and pursue them with focus.

There were six students in the class this quarter, so there are six threads in this year's iteration. I set up a transfer matrix (a simple rotation, sketched below) so that each student got to touch each version.  At the end of the quarter, instead of listening just to that week's versions, we listened to each thread, all the way through. It's interesting to hear which elements come and go over the course of the project, and which persist as throughlines through the entire piece.
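
The matrix itself is nothing exotic; a plain rotation does the job. The sketch below is a reconstruction of the idea (not necessarily the exact assignment I used): on round k, the thread that a given student started is handled by the student k seats down the list, so over six rounds everyone touches every thread exactly once.

```python
students = ["Ben", "Ning", "Mingna", "Andrea", "Andrew", "Jordan"]
n = len(students)

for k in range(n):
    # Round k: the thread started by students[i] goes to students[(i + k) % n].
    assignments = {students[i]: students[(i + k) % n] for i in range(n)}
    print(f"round {k + 1}: {assignments}")
```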

Here is each thread, with its versions in sequence.  I hope you enjoy the oddity that is this year's Track & Field!

Thursday, November 17, 2016

UCI Sound Field Trip – Warner Brothers Studios, Burbank, CA

On a rainy Monday morning in late October, we met at Bren Events Center outside of the Claire Trevor School of the Arts to make the 55-mile drive up to Burbank to visit Warner Brothers Studios. Ning got scolded at the gate for taking pictures, so we didn’t press our luck on the rest of the lot.






We started our tour with a peek at one of their larger ADR (automatic dialogue replacement) stages. Damon Banks led the tour, giving advice and info about the facilities and process along the way. As scheduling for this room is tight, we were fortunate to get to see one of the most utilized recording spaces in Los Angeles.





From the ADR Stage, we moved on to Rerecording Stage 4. Two editors were prepping for a meeting at which Lionsgate executives would come in and listen to the final mix of an upcoming film.

Our next stop was Rerecording Stage 1, where the Audio Editors and Music Editor walked us through their workflow and fielded questions. Here, two editors are able to work simultaneously on the console: one mixing dialogue while the other works on SFX and ambiences.






One of the highlights of the tour was visiting the Foley stage where UCI Alum MaryJo Lang showed us her typical recording session. We explored their stock of sound props, varying floors, fabrics, metals, and fluids used to create Foley tracks. Two of the staff Foley artists talked with us about mic placement, different recording techniques, and some cool tricks for our Audio for Video projects. It was outrageously cool.


















Our final stop on the Warner Bros Lot was the Scoring Stage.  We just narrowly missed a session with the orchestra recording music for The Simpsons. The space has been used to record music by Hans Zimmer, Danny Elfman, and many other composers, artists, and musicians.




 

We had a great trip! Everyone was really jazzed about working on our upcoming film audio project following the trip. HUGE thanks to Damon and MaryJo for making the trip possible!

Sunday, June 12, 2016

The MiniDSP UMIK-1 USB Measurement Mic


The UMIK-1 Measurement Microphone

I've been following MiniDSP for the last year or so.  They specialize in low-priced, programmable DSP modules – primarily for tuning home stereos and building custom active loudspeakers.  My introduction to them was purchasing a MiniDSP 2x4, which reminds me of a $100 miniature Meyer Galileo, but with unbalanced I/O and DC power.  You choose routing, delay, filtering, and parametric EQ on every in and out.  It's a great little box (I'll try to write about it later).

But what really caught my attention is a new MiniDSP product called the UMIK-1. It's advertised as an omnidirectional measurement microphone with a built-in preamp and A/D converter. It connects via USB and is recognized as a class-compliant (driverless) audio interface. Even more tantalizing is the price: $75 list.  The thought of being able to do transfer function (TF) measurements with nothing more than a laptop and a mic was also really appealing (that would require using the headphone jack on the computer – but for backpack studio junkies it's really tempting).  When the mic arrived, let's just say I was really excited!

The UMIK-1 comes with a substantial kit: microphone, windscreen, mic clamp, large-thread mic stand adaptor, miniature tripod, USB cable and a padded plastic box.  This is quite generous for a mic of any price. The mic itself is all metal, feels solid, and is about the same size as most P48 measurement mics (like the Earthworks M30).  You can look up your mic’s serial number on the MiniDSP website and download a custom correction file for trimming up the response in your measurement program.  I retrieved that file with no issues.



Testing time

The flattest mic in our arsenal is the DPA4007.  It's the standard mic for the Meyer SIM-3 system, with ruler-flat response and minimal phase shift. Measurement dorks come to fisticuffs when discussing the best mic – but the 4007 is my gold standard – and I chose it as the reference for a standard mic-compare TF measurement against the UMIK.  SMAART v.7.5 and a Focusrite Scarlett 2i4 completed the test rig.


Here is the mic compare setup.  Measurement police kitties hard at work in the background. 

For a source, I used a single JBL LSR4328P studio monitor. For a mic-compare test, the only thing that matters is how broadband the speaker is, but it's not a bad monitor (technically, it's -10 dB down at 35 Hz and 25 kHz).  I fed it pink noise, launched SMAART, and I was off to the races. The UMIK is in one USB port and the Focusrite in the other - and both are set to 48K.

Then came a big letdown.  
The TF phase trace showed the UMIK sitting 180 degrees away from the DPA.  I know my workflow with SMAART and the Focusrite/DPA combo is absolute-polarity correct.  I then took a look at the UMIK impulse response (IR) trace, and the spike went downward.  The UMIK is polarity reversed! With a regular analog XLR mic, this is only moderately egregious since one simple adaptor fixes it. But how do you un-flip the polarity on a USB plug? This is inexcusably bad engineering – especially for a measurement mic.  Fortunately, polarity won't affect the frequency-response portion of TF measurements – but the phase trace will always sit 180 degrees off across the whole band. So I set my OCD aside and continued.
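
You can't un-flip polarity in a USB cable, but if you're working with a captured recording (rather than measuring live), the fix in software is a single multiply. A trivial sketch, assuming the capture has already been saved to a WAV and that the soundfile library is available; the filenames are hypothetical:

```python
import soundfile as sf   # assumes the pysoundfile library and a saved capture

audio, sr = sf.read("umik_capture.wav")           # hypothetical filename
sf.write("umik_capture_flipped.wav", -audio, sr)  # invert polarity and write back out
```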

Three bad thoughts.  
I won't call the next anomaly another big letdown – mostly because it was pilot error – but it stumped me for an hour.  As soon as I started the mic compare, I let SMAART add in the appropriate delay compensation on the reference source. The phase trace snapped to 180 (this would be zero if the damn polarity was right – but it was nearly straight) and the coherence cleared up to >95%.  But then the phase trace slowly started to unwind!  And the coherence trace started dripping red blood all over the screen – starting in the HF and slowly cutting across the band.  I didn't panic because I've seen this before.  When a rigged loudspeaker is slowly swinging, you can see the same phenomenon.  But how could my tiny home studio be collapsing?  The delay time was slowly slipping by about 0.1 ms for each second of clock time. That would mean my monitor was sliding back 1.5” every second!

Of course, the monitor stayed put. So my second bad thought was that there must be some stupid DSP problem in the mic that was adding delay. (To me that's plausible, given that someone designed it with the wrong polarity.)  So I engaged the delay tracking option in SMAART.  Sure enough, it was able to compensate, and I watched with fascination as the delay time started a slow countdown.


video


Watch this video.  The first half is with no tracking – and you can see the phase trace slowly unwind as the bad coherence starts to slide left.  The second half is with the delay tracking engaged, and you will see the offset grow.  Sexy time!  You can download a high-res version here.

Then came the third bad thought – which was that I was just a presumptive idiot. There is an inherent problem with having two uncorrelated (no digital sync) digital input devices connected to the same program.  SMAART can handle as many simultaneous interfaces as you can plug into the computer – but (rightfully so) it has no method of sample-rate or sync correction.  So… how in the world can you sync two digital sources without word clock? But then I remembered a stupid Avid product that never had any practical use to me... until now.  Every time you install ProTools, a rather obscure audio engine choice called “Pro Tools Aggregate I/O” shows up in your control panel.  This driver allows you to consolidate multiple interfaces into a single virtual device, which is required because of Pro Tools' 90s-vintage input limitations. What I remembered about the aggregate device is that you can access a hidden layer that provides “drift” correction for exactly this problem.  It creates virtual digital sync.  I'm now beginning to have second thoughts about my years of Avid-bashing.  You can only access this functionality when you call up the Aggregate via the “Audio MIDI” utility in OSX (the preferences audio pane won't do it).  I clocked both devices to the Mac internal D/A output by engaging “drift correction”.  Fortunately, this fixed the issue -- and kudos to SMAART for allowing the Aggregate choice as input.

Here are the settings that allowed Avid to fix the sync problems going into SMAART.
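
For scale, the slip I was seeing works out to a thoroughly mundane clock mismatch: 0.1 ms of drift per second of clock time is one part in ten thousand, i.e. one interface running roughly 100 ppm fast relative to the other. Worked out:

```python
SPEED_OF_SOUND = 343.0       # m/s
slip_per_second = 0.1e-3     # observed drift: 0.1 ms per second of clock time

apparent_motion_in = slip_per_second * SPEED_OF_SOUND * 39.37   # inches per second
drift_ppm = slip_per_second * 1e6                               # parts per million
samples_per_second = 48000 * slip_per_second                    # extra samples per second at 48K

print(f"{apparent_motion_in:.1f} in/s of apparent speaker motion")   # ~1.4 in/s
print(f"{drift_ppm:.0f} ppm clock offset (~{samples_per_second:.1f} samples/s)")
```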

Finally, I am up and running with a legitimate mic-compare TF.  I can work around the polarity problem and concentrate on discovering just how flat the UMIK is.  BTW, it's not.  I ran all my TF measurements with 1/24-octave smoothing, and it's rough.  From 2k to 8k there are multiple resonant peaks – about 16 of them – upwards of +2.5 dB.  My best guess is that this is the resonance of the housing or a problem with the grill. Whatever the cause, it's not good for a measurement mic – but it does have a tolerance of about +/- 3 dB, which is an acceptable deviation and common in many popular recording microphones.

Here is the native frequency response on the UMIK-1 

I imported the correction curve provided by MiniDSP into SMAART and ran another measurement.  It made the response a bit better – but still not great.  MiniDSP claims corrected tolerance to be +/- 1dB and my measurements confirm this. 

I then decided to make my own correction curve. A correction curve is a comma-delimited text file with frequency in one column and deviation in the other. SMAART is also able to export this data, which you can then paste into Excel and export as a file.
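
The generation step is simple enough to script instead of doing it in Excel: negate the measured deviation at each frequency and write the result back out as frequency/dB pairs. A sketch of that, assuming a plain two-column, tab-separated export; the real SMAART export carries more columns, so the parsing would need adjusting, and the filenames are hypothetical.

```python
import csv

correction = []
with open("umik_vs_dpa_tf.txt", newline="") as f:            # hypothetical export filename
    for freq, deviation_db in csv.reader(f, delimiter="\t"):
        # The correction is just the inverse of the measured deviation.
        correction.append((float(freq), -float(deviation_db)))

with open("umik_correction.txt", "w", newline="") as f:
    writer = csv.writer(f)   # comma-delimited, as the correction-curve format expects
    for freq, corr_db in correction:
        writer.writerow([f"{freq:.1f}", f"{corr_db:.2f}"])
```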

video

Watch this video to see the export and creation process for my correction curve. You can download a high-res version of this video here.

The result of the correction curve was quite flat.

Here are the three curves together.  Pink = native, Blue = mfgr corrected, Yellow = custom correction 

With a reasonable correction curve, the UMIK-1 can be used as a reference mic in non-critical applications.  Impulse response (delay timing) and quick system conformity tests should be just fine with this mic.  Because I already had the JBL 4328 set up, I decided to run a final comparison by measuring an actual TF of the loudspeaker with each mic.  The results are shown below.


Pink = DPA4007, Blue = UMIK-1 

It’s not bad.  Some of the HF shift is likely due to the mics being about 2” apart with diffractive reflections stemming from being so close to the JBL waveguide.

 SOME OTHER THINGS TO NOTE

Calling this mic omnidirectional is a stretch. It's really a "generously wide" cardioid.  It exhibits significant HF shading at 90 degrees – so much so that the mfgr. also provides a 90-degree correction curve.  You can also adjust the gain of this mic via the MacOS "Audio MIDI" control panel.  However, I could not get the sample rate to change, even though MiniDSP claims it supports both 44.1 and 48 kHz – the dropdown doesn't work in OS 10.11.

 CONCLUSION

What sells me on this mic is the convenience it provides at a very low price point.  It suffers from some bad engineering and a substandard correction curve, but it travels very easily and costs about 75% less than a decent entry-level (no correction needed) analog measurement mic.  If you can borrow an excellent mic from which to create your own correction curve, have the means to correct for sample-rate drift, and don't mind seeing your phase trace 180 degrees off, this is an acceptable device.  If you need it to work right out of the box, I would stay clear.

-Mike


UMIK-1 street price appx. $98 on Amazon with shipping