Sunday, June 12, 2016

The MiniDSP UMIK-1 USB Measurement Mic


The UMIK-1 Measurement Microphone

 I’ve been following MiniDSP for the last year or so.  They specialize in low-priced, programmable DSP modules – primarily for tuning home stereos and building custom active loudspeakers.  My introduction to them was purchasing a MiniDSP 2x4, which reminds me of a $100 miniature Meyer Galileo, but with unbalanced I/O and DC power.  You choose routing, delay, filtering and parametric EQ on every input and output.  It’s a great little box (I’ll try to write about it later).

But what really caught my attention is a new MiniDSP product called the UMIK-1. It’s advertised as an omnidirectional measurement microphone with a built-in preamp and A/D converter. It connects via USB and is recognized as a class-compliant (driverless) audio interface. Even more tantalizing is the price: $75 list.  The thought of being able to do transfer function (TF) measurements with nothing more than a laptop and a mic was also really appealing (that would mean using the computer’s headphone jack as the output – but for backpack studio junkies it’s really tempting).  When the mic arrived, let’s just say I was really excited!

The UMIK-1 comes with a substantial kit: microphone, windscreen, mic clamp, large-thread mic stand adaptor, miniature tripod, USB cable and a padded plastic box.  This is quite generous for a mic of any price. The mic itself is all metal, feels solid, and is about the same size as most P48 measurement mics (like the Earthworks M30).  You can look up your mic’s serial number on the MiniDSP website and download a custom correction file for trimming up the response in your measurement program.  I retrieved that file with no issues.



Testing time

The flattest mic in our arsenal is the DPA4007.  It’s the standard mic for the Meyer SIM-3 system, with ruler-flat response and minimal phase shift. Measurement dorks come to fisticuffs when discussing the best mic – but the 4007 is my gold standard – so I chose it as the reference for a standard mic-comparison TF measurement against the UMIK.  SMAART v.7.5 and a Focusrite Scarlett 2i4 completed the test rig.


Here is the mic compare setup.  Measurement police kitties hard at work in the background. 

For a source, I used a single JBL LSR4328P studio monitor. For a mic-compare test, the only thing that really matters is how broadband the speaker is, but it’s not a bad monitor (technically, it’s -10 dB down at 35 Hz and 25 kHz).  I fed it pink noise, launched SMAART, and I was off to the races. The UMIK is in one USB port and the Focusrite in the other – and both are set to 48 kHz.

Then came a big letdown.  
The TF phase trace showed the UMIK sitting 180 degrees away from the DPA.  I know my workflow with SMAART and the Focusrite/DPA combo is absolute-polarity correct.  I then took a look at the UMIK impulse response (IR) trace, and the spike went downward.  The UMIK is polarity-reversed! With a regular analog XLR mic this is only moderately egregious, since one simple adaptor fixes it. But how do you un-flip the polarity on a USB plug? This is inexcusably bad engineering – especially for a measurement mic.  Fortunately, polarity won’t affect the magnitude trace of a TF measurement – but the phase trace will always sit 180 degrees off across the whole band. So I set my OCD aside and continued.
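If you’re scripting your own measurements, that downward spike is easy to detect programmatically: find the sample with the largest absolute value in the impulse response and check its sign. A minimal sketch (the synthetic IR below is illustrative data I made up, not a real UMIK capture):

```python
import numpy as np

def polarity(ir):
    """Return +1 if the dominant IR spike is positive, -1 if inverted."""
    peak_index = np.argmax(np.abs(ir))  # sample with the largest magnitude
    return 1 if ir[peak_index] > 0 else -1

# Synthetic example: a decaying impulse arriving at sample 100, flipped
# negative the way the UMIK-1 presented it on the analyzer.
ir = np.zeros(1024)
ir[100:110] = -np.exp(-np.arange(10))  # downward-going spike

print(polarity(ir))  # → -1 (polarity reversed)
```

The same check on the DPA’s IR would return +1, which is the quick way to confirm the inversion is in the device and not in your rig.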

Three bad thoughts.  
I won’t call the next anomaly another big letdown – mostly because it was pilot error – but it stumped me for an hour.  As soon as I started the mic compare, I let SMAART add in the appropriate delay compensation on the reference source. The phase trace snapped to 180 (this would be zero if the damn polarity was right – but it was nearly straight) and the coherence cleared up to >95%.  But then the phase trace slowly started to unwind!  And the coherence trace started dripping red blood all over the screen – starting in the HF and slowly cutting across the band.  I didn’t panic, because I’ve seen this before.  When a rigged loudspeaker is slowly swinging, you can see the same phenomenon.  But how could my tiny home studio be collapsing?  The delay time was slipping by about 0.1 ms for each second of clock time. That would mean my monitor was sliding back about 1.4” every second!

Of course, the monitor stayed put. So my second bad thought was that there must be some stupid DSP problem in the mic that was adding delay. (To me that was plausible, given that someone designed it with the wrong polarity.)  So I engaged the delay tracking option in SMAART.  Sure enough, it was able to compensate, and I watched with fascination as the delay time started a slow countdown.
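The observed slip is enough to estimate the clock mismatch between the two interfaces: a drift of 0.1 ms per second is a fractional offset of 100 parts per million, or nearly five samples of accumulated error every second at a nominal 48 kHz. A quick back-of-the-envelope sketch (the 0.1 ms/s figure is my eyeball estimate from the SMAART delay readout, not a bench measurement):

```python
# Estimate sample-clock mismatch from an observed delay slip.
drift_seconds_per_second = 0.1e-3   # 0.1 ms of delay change per second of clock time
sample_rate = 48_000                # nominal rate of both interfaces (Hz)
speed_of_sound = 343.0              # m/s, for the "moving monitor" sanity check

offset_ppm = drift_seconds_per_second * 1e6                   # fractional offset, ppm
samples_per_second = drift_seconds_per_second * sample_rate   # accumulated error
apparent_motion_in = drift_seconds_per_second * speed_of_sound * 39.37  # inches/sec

print(f"{offset_ppm:.0f} ppm, {samples_per_second:.1f} samples/s, {apparent_motion_in:.1f} in/s")
# → 100 ppm, 4.8 samples/s, 1.4 in/s
```

A 100 ppm offset is well within what cheap crystal oscillators exhibit, which is why two unsynced USB devices can never hold a stable phase trace together.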


video


Watch this video.  The first half is with no tracking – you can see the phase trace slowly unwind as the bad coherence starts to slide left.  The second half is with the delay tracking engaged, and you will see the offset grow.  Sexy time!  You can download a high-res version here.

Then came the third bad thought – which was that I was just a presumptive idiot. There is an inherent problem with having two uncorrelated (no digital sync) digital input devices connected to the same program.  SMAART can handle as many simultaneous interfaces as you can plug into the computer – but (rightfully so) it has no method of sample-rate or sync correction.  So… how in the world can you sync two digital sources without word clock?

But then I remembered a stupid Avid product that never had any practical use to me... until now.  Every time you install Pro Tools, a rather obscure audio engine choice called “Pro Tools Aggregate I/O” shows up in your control panel.  This driver consolidates multiple interfaces into a single virtual device, which works around Pro Tools’ 90’s-vintage single-interface input limitation.  What I remembered about the aggregate device is that you can access a hidden layer that provides “drift” correction for exactly this problem.  It creates virtual digital sync.   I’m now beginning to have second thoughts about my years of Avid-bashing.  You can only access this functionality when you call up the Aggregate via the “Audio MIDI Setup” utility in OS X (the System Preferences audio pane won’t do it).  I clocked both devices to the Mac internal D/A output by engaging “drift correction”.  Fortunately, this fixed the issue – and kudos to SMAART for allowing the Aggregate as an input choice.

Here are the settings that allowed Avid to fix the sync problems going into SMAART.

Finally I am up and running with a legitimate mic-compare TF.  I can work around the polarity problem and concentrate on discovering just how flat the UMIK is.   BTW, it’s not.  I ran all my TF measurements with 1/24-octave smoothing, and it’s rough.  From 2 kHz to 8 kHz there are multiple resonant peaks – about 16 of them – upwards of +2.5 dB.   My best guess is that this is a resonance of the housing or a problem with the grill. Whatever the cause, it’s not good for a measurement mic – though it does hold a tolerance of about +/- 3 dB, which is an acceptable deviation, and common, in many popular recording microphones.

Here is the native frequency response on the UMIK-1 

I imported the correction curve provided by MiniDSP into SMAART and ran another measurement.  It made the response a bit better – but still not great.  MiniDSP claims a corrected tolerance of +/- 1 dB, and my measurements confirm this.

I then decided to make my own correction curve. A correction curve is a comma-delimited text file with frequency in one column and deviation in the other. SMAART can also export this data, which you can then paste into Excel and export as a file.
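If you’d rather skip the Excel step, the same file is easy to generate with a few lines of script: subtract the suspect mic’s exported magnitude trace from the reference mic’s trace at each frequency, and write the result out as frequency/deviation pairs. A minimal sketch with made-up numbers (the function name and the three-point data are mine; a real SMAART export has hundreds of points and extra columns you’d strip first):

```python
import csv

def correction_curve(reference, device_under_test, out_path):
    """Write a freq,deviation CSV: how many dB to add to flatten the DUT.

    reference / device_under_test: lists of (freq_hz, magnitude_db) pairs,
    assumed to share the same frequency points (as two exports from the
    same analyzer do).
    """
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for (freq, ref_db), (_, dut_db) in zip(reference, device_under_test):
            writer.writerow([freq, round(ref_db - dut_db, 2)])

# Illustrative three-point example only.
ref = [(1000, 0.0), (4000, 0.1), (8000, -0.2)]
dut = [(1000, 0.3), (4000, 2.4), (8000, 1.1)]   # UMIK-style HF peaks
correction_curve(ref, dut, "umik_correction.csv")
```

The resulting file has exactly the two-column shape SMAART expects for a mic correction curve, so it can be imported directly.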

video

Watch this video to see the export and creation process for my correction curve. You can download a high-res version of this video here.

The result of the correction curve was quite flat.

Here are the three curves together.  Pink = native, Blue = mfgr corrected, Yellow = custom correction 

With a reasonable correction curve, the UMIK-1 can be used as a reference mic in non-critical applications.  Impulse response (delay timing) and quick system conformity tests should be just fine with this mic.  Because I already had the JBL 4328 set up, I decided to run a final comparison by measuring an actual TF of the loudspeaker with each mic.  The results are shown below.


Pink = DPA4007, Blue = UMIK-1 

It’s not bad.  Some of the HF difference is likely due to the mics being about 2” apart, with diffraction artifacts from being so close to the JBL waveguide.

 SOME OTHER THINGS TO NOTE

Calling this mic omnidirectional is a stretch. It’s really a “generously wide” cardioid.  It exhibits significant HF shading at 90 degrees – so much so that the mfgr. also provides a 90-degree correction curve.  You can also adjust the gain of this mic via the OS X “Audio MIDI Setup” control panel.  However, I could not get the sample rate to change, even though MiniDSP claims it supports both 44.1 and 48 kHz – the dropdown doesn’t work in OS 10.11.

 CONCLUSION

What sells me on this mic is the convenience it provides at a very low price point.  It suffers from some bad engineering and a substandard correction curve, but it travels very easily and costs about 75% less than a decent entry-level (no correction needed) analog measurement mic.  If you can borrow an excellent mic from which to create your own correction curve, you have the means to correct for sample-rate drift, and you don’t mind seeing your phase trace 180 degrees off, this is an acceptable device.  If you need it to work right out of the box, I would steer clear.

-Mike


UMIK-1 street price appx. $98 on Amazon with shipping









Wednesday, April 27, 2016

USITT SoundLab 2016 Project Files

This past March, the annual USITT conference was held in Salt Lake City. This year, the Sound Commission hosted a version of its biennial SoundLab that was focused on multichannel sound for theatrical playback.  A number of people collaborated on the design of the lab, and Jim Van Bergen did the delivery system design.




I served as Director of Programming for the SoundLab, which meant that I ran two projects.  The USITT SoundLab Student Project (which I ran in partnership with Drew Dalzell) was a six-month-long script analysis project using Caryl Churchill's Fen as the source text.  Twenty-four students from around the country worked with mentors to read, analyze, and conceptualize a sound design for Fen. At the end of the project, six students were invited to take time during the conference to build their designs for an excerpt of the play in the SoundLab environment.

Concurrently, I also ran the SoundLab Professional Project, which was similar in scope. Three professional sound designers each took a portion of a different play, conceptualized a design, and had a chunk of time at the conference to install their design. While those designers were working, sound designer Brad Berridge and I hosted a real-time Iron Chef-style analysis of what the professional designers were doing, examining how they used the multi-channel system, how they strategized their designs, and how they organized their programming sessions in QLab. It was a terrific opportunity for the student designers to watch professionals work in real-time in a complex environment.

Student Kate Fraser and I have a conversation while she programs her design.

Dave Mickey, Elisheba Ittoop, and Brad Ward were the three professional designers.

Brad Berridge and I lead a conversation while one of the pros installs their design.


After all nine designers had completed their programming, the designs were available for any SoundLab visitor to listen to, in order to get a stronger understanding of how different designers each approach the same text. I also made recordings of each piece with a SoundField microphone for archival purposes.  The SoundField makes a B-format recording, which I've decoded into multiple formats for your downloading pleasure.  You can download the work of these designers as stereo, 5.1, and b-format recordings:

Student Sound Designers:

Elisheva Anisman, Western Washington University: stereo, 5.1, b-format W, X, Y, Z
Katherine Fraser, Virginia Commonwealth University: stereo, 5.1, b-format W, X, Y, Z
Daniel Grayvold, Michigan Technological University: stereo, 5.1, b-format W, X, Y, Z
Josh Samuels, Ithaca College: stereo, 5.1, b-format W, X, Y, Z
Eric Smith, Michigan Technological University: stereo, 5.1, b-format W, X, Y, Z
Lindsey White, Ohio State University: stereo, 5.1, b-format W, X, Y, Z


Professional Sound Designers:

Elisheba Ittoop: stereo, 5.1, b-format W, X, Y, Z
Dave Mickey: stereo, 5.1, b-format W, X, Y, Z
Brad Ward: stereo, 5.1, b-format W, X, Y, Z

Here's a great vid that the fine folks at Stage Directions made about the SoundLab:




I want to thank all of the sound designers and USITT folk who helped make these SoundLab projects happen. This was an extraordinary amount of work, and I'm appreciative of everyone who pitched in! Special thanks also go to the UC Irvine actors who voiced the roles, Samuel French, who licensed us Churchill's Fen, and Figure 53, who provided the student designers with temporary licenses of QLab.

Friday, April 15, 2016

Telematic Project!


This weekend at UCI Sound, we are opening three productions... two of which are part of The Telematic Project: two plays (Smoke Front, performed on the arts campus, and Fire Road, performed on the Engineering campus), both written by Tira Palmquist and performed simultaneously. While each audience only sees one complete story line, the protagonists of both plays rely on each other to do what they think is the right thing in the face of a challenging emergency response to natural disasters. The interaction between the two plays required careful planning at every level, including things we often take for granted in theater – things like pagination, or communication between designers in tech.

While I was officially the designer, our process necessitated that the associate (Ben) take the lead on one of the productions, and I would take the lead on the other. Our sonic vocabulary and aesthetic lined up, but since the two plays had such disparate sound needs, it was important to have a designer in each space that could dedicate their entire focus to one space's audio.

Our main challenge was figuring out how to effectively send audio and video over our campus network: although the two venues are both on campus, they are separated by almost 2 miles.

Having learned about QSys and its power through some work at my internship in Burbank, I reached out to the folks at QSC, and they were gracious enough to loan us a QSys networked DSP system so that we could send audio back and forth between the spaces with an incredible amount of processing at extremely low latency.  After some careful network configuration with our incredible IT staff (thank you Jason Valdry and Mike Scott!), we were able to get QSys sending audio across campus reliably.  We added a stage management and designer microphone and headphone amp on each end, also running through QSys, so that we could communicate with each other.

We were also responsible for the video system for this show -- using UltraGrid for sending compressed video between the spaces, and QLab for playback.

From a show-control perspective, we used QLab to trigger snapshots in QSys via Telnet for telematic scenes, and QLab triggered the telematic video by running terminal commands for UltraGrid.  Video playback was triggered by the audio computer over OSC.  We also programmed 2 QSC TCP3 touch panels for the stage managers, and wired custom call-button/call-light boxes for the SMs and for Ben and me, so that we could catch each other's attention.
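For the curious, the Telnet leg of that chain is just a line of text over a TCP socket. Here's a rough Python sketch of the pattern – note that the command string below is a placeholder, not real QSys syntax (the actual snapshot-recall commands, port number, and arguments come from QSC's external-control documentation for your firmware), and the bank name is made up:

```python
import socket

def send_control_line(host, port, line, timeout=2.0):
    """Open a TCP connection, send one newline-terminated command, close.

    This is the generic pattern a QLab script cue can drive; the command
    grammar itself is defined by the DSP's external-control protocol.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((line + "\n").encode("ascii"))

def recall_snapshot_command(bank, number):
    # Placeholder syntax -- substitute the real command from the
    # Q-Sys external-control manual.
    return f'SNAPSHOT.LOAD "{bank}" {number}'

# Hypothetical usage (host/port are examples, not our actual config):
# send_control_line("192.168.1.50", 1702, recall_snapshot_command("Telematic", 3))
print(recall_snapshot_command("Telematic", 3))
```

In practice QLab fired these via script cues, so each snapshot recall was just another cue in the stack.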

.... well... most of the time, we just texted each other.

It's been a great process, and I was able to learn an incredible amount about network management, networked DSP, and video system design through this production. Thanks so much to QSC for their support; it was wonderful getting to work with QSys and integrate it into the production!




Click here for full system paperwork



Always good to have a troubleshooting page!

I built an iPad screen to set levels in QSys, and store them as snapshots.

Another tab on the iPad that allowed me to set delay, adjust
compression settings, and adjust echo canceling settings on the fly.

Thursday, March 31, 2016

UCI Sound Design announces incoming class of students!

Mike and I are delighted to announce our two incoming students for the fall of 2016: Ningru Guo and Andrew Tarr!



Ning comes to us from Singapore, where she turned a childhood love of musical theatre into a passion for both musicals and straight plays. She has extensive credits in both fields, and she's coming to UCI to build on her foundation in both aspects of sound design.




Andrew recently graduated from Purdue University, and he has started building his career with productions in NYC, Indiana, Texas, and points in between.  He has a strong foundation in design and composition, and he's coming to UCI to develop both more fully.

Welcome, Andrew and Ning! We can't wait for you to join us this fall!

Thursday, January 21, 2016

UCI Sound will NOT be attending URTA this year.

We previously suggested this, but now here's the official word: UCI Sound Design will definitely NOT be attending the 2016 URTA reviews.

We are, however, actively recruiting for graduate students. So, if you are interested in the Sound Design program at UCI, please do not expect to meet us at URTA. Instead, please reach out directly to either Head of Sound Design Mike Hooker or Head of Design Vincent Olivieri. Both of us are happy to correspond with you as you work to determine which schools are most appropriate for your application this winter.

We look forward to hearing from you!

Sunday, January 17, 2016

Honors in Sound Design: Jacques Zwielich!

We here at UCI Sound are pleased to announce that just before winter break, Jacques Zwielich was granted Honors in Sound Design! Jacques joined UCI just over a year ago as a transfer student and immediately contacted BC and Vinnie to discuss sound opportunities. We welcomed him with open arms and he's been an incredible asset since.
While at UCI, Jacques has participated in shop crew (taking more units than required), taken multiple classes with the MFA students, A2'd 2 musicals, assisted on 2 mainstage plays and designed many Directing Class Projects (AND has completed all of his honors requirements in 4 quarters). On top of all of the amazing things he's accomplished and learned, he's a joy to be around and can lighten any situation. We're incredibly proud to know him, and have him officially enter our UCI Sound family.
Congratulations, Jacques!

Thursday, December 17, 2015

Meyer Workshop: Richard Bugg and Tony Meola



Wrapping up the Fall quarter could not have been more exciting than hosting Meyer Sound masters Richard Bugg and Tony Meola for two workshops.  It goes without saying that learning tips in CueStation (and seeing some of the promising AVB improvements in CueStation 6) from Richard, and mixing tips from Tony, were invaluable, lifelong-memorable experiences.  I was especially grateful for the time in and out of the theater that we were able to spend with Tony, Richard, and Gavin. All three have incredible insight into different facets of the sound design business, and their advice and thoughts are brilliant. It was such a gift getting to spend time with them.  I am so grateful to Helen Meyer, Gavin Canaan, and of course Richard and Tony for your time and passion. It is so inspiring! And of course thank you to BC, Mike, and Vinnie for helping to put this opportunity together for us!
--Matt Eckstein


Last week was one for the books, as we had a three-day intensive with Richard Bugg learning about DMITRI and LCS, followed by the once-in-a-lifetime opportunity to mix Defying Gravity from Wicked for Tony Meola. I’m still reeling from this experience. The first day was all about DMITRI, how to program it, the specifics within Wild Tracks and how to make a cue. The next two days were all about Tony. Not only did he give each of us mixing pointers, we spent a great deal of time simply talking about his life in the theatre, an experience unfathomable in most other programs. Both Tony and Meyer Sound genuinely care about the future of sound design. Thank you so much, Gavin Canaan, Helen Meyer, Richard Bugg, Tony Meola and the rest of Meyer Sound for donating your time and resources to our program.  
--Kelsi Halverson



Wow. Well, this was unreal. I remember when I first started looking at graduate programs, I saw the blog entry here from the last time Tony Meola did a masterclass and I remember thinking “That. I want that.” I am so grateful to have had the chance to take part in this. Tony was so generous with his time and experience, and it was really thrilling to talk with him. I was so happy with how much time we spent discussing, first and foremost, storytelling, then musical theater, then mixing. Tony’s approach of putting as little between the performer and the audience as possible is very much what I look for when I go to see musicals. It is such a delicate and simple through line, and hearing how Tony approaches scaling that up to something as large as an average Broadway musical was fascinating.
I was also really excited to get to learn from Richard Bugg! I had the opportunity to use D-Mitri for my design for Boeing Boeing last year, so I had a ton of questions and Richard was game for all of them. I’m looking forward to bothering him some more at USITT!
-- Ben Scheff


The Meyer seminars we had last week were a special opportunity that most sound people could only dream of.  On day one, Richard Bugg taught us how to program LCS and walked us through some of the new additions to the program.  The rest of the week was spent receiving mixing tips from the legendary designer Tony Meola.  Tony gave great mixing advice and very specific, individualized notes.  Not only did he give fantastic artistic and technical advice; more importantly, he took the time to learn about each student in the program and answered any and every question we could come up with.  I have no doubt that I will use his advice for years to come.
--Andrea Allmond


What a way to wrap up my first quarter here at UCI: an intense 3-day seminar with Meyer Sound!
LCS had been a program notorious for the headaches it caused in our class projects – frustration when something wasn’t working the way we thought it would, or accidentally overwriting someone else’s captured cues in real time. Richard Bugg helped de-mystify some of the inner workings of LCS and what makes it tick! Though a bit confusing at times (because it’s LCS – how do you want to do it?), Richard helped make the process much clearer and provided some great tips on shortcuts and layouts to create a cleaner workflow. Getting everyone connected to the server was great, and with some more tweaking (i.e. bringing our ethernet adapters instead of using WiFi), the ability for all of us to work on a shared project live would be phenomenal. Kudos to Richard and his highly informative lecture!
The mixing seminar with Tony Meola was out of this world. I had never been so nervous and excited at the same time. We broke down Wicked’s Defying Gravity and all took turns at the console, mixing with challenge mode turned on – pink noise rang throughout the system if we were not careful with pickups. This added element of error response was amazing and absolutely brilliant! Definitely something I will implement when I practice mixing for Evita this coming spring. Tony is a great mentor and a wonderful conversationalist with great insight on sound design and mixing.
Thank you to the Meyer team: Richard, Gavin, and Tony, as well as Mike, Vinnie, and BC for this wonderful opportunity!
--Jordan Tani


My first eleven weeks of grad school at UCI were packed with new experiences from beginning to end. Sitting in on two very different Meyer seminars was a perfect way to wrap up a very productive quarter. Starting off the week watching Richard Bugg program D-MITRI was an inspiring look into the mind of the master himself (and actually made LCS seem like much less of an intimidating beast). Having never seen or worked with an LCS show control system before coming to UCI, I feel lucky to both have access to such a powerful system, and to have been able to see Richard at work. The second seminar let us all try our hand at mixing ‘Defying Gravity’ for Tony Meola, which was a totally stress-free experience (all kidding aside, many thanks to Tony for not looming over our shoulders as we sweat at the console). Though the mixing talk was valuable, my favorite parts of the second seminar were our conversations with Tony. I know the talks we had about making a career in theater gave me a lot of hope for the future, which is exactly what I needed at the end of my first quarter of grad school! Thank you so much to Richard, Tony, Gavin, and everyone else at Meyer Sound who donated time, effort, and resources to make this happen for us in the UCI Sound Design program!
--Garrett Hood


Monday, December 7, 2015

Track and Field

In my Trends in Modern Sound Design class, we've started doing an annual project, inspired by alumnus Steven Swift, called 'Track and Field.' Track and Field is an exercise in both creativity and efficiency, and it's played over the course of the entire quarter with a series of due dates.

On the first due date, each player creates 30 seconds of sound, completely self-driven. They can spend no more than 60 minutes creating the piece, and once they're done, they render both a final mix and a series of stems. As a class, we listen to the mix. After the class, a different player takes the stems and uses them to create something entirely new, also spending no more than 60 minutes on the project. At each due date, we listen to the most recent batch of mixes, and then a new student takes the stems to create a new version.

In addition to being a highly-creative project, the requirement that the player not spend more than 60 minutes on their version motivates each player to work as efficiently as possible.

There were seven students in the class this quarter, so there are seven threads in this year's iteration. I set up a transfer matrix so that each student got to touch each version.  At the end of the quarter, instead of listening just to this week's versions, we listened to each thread, all the way through. It's interesting to hear which elements come and go over the course of the project, and which elements work as throughlines throughout the entire project.
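A simple rotation is one way to build that kind of transfer matrix: on week w, the student working thread t is (t + w) mod n, so over n weeks every student touches every thread exactly once. Here's a sketch of that scheme (the rotation rule is my reconstruction of one workable matrix, not necessarily the exact one used in class):

```python
def transfer_matrix(n_students):
    """weeks[w][thread] = index of the student working that thread in week w.

    This is a Latin square: each row (a week) and each column (a thread)
    contains every student exactly once.
    """
    return [[(thread + week) % n_students for thread in range(n_students)]
            for week in range(n_students)]

matrix = transfer_matrix(7)
# Week 0: each student starts their own thread; afterwards the threads rotate.
for week, assignments in enumerate(matrix):
    print(f"week {week}: " + " ".join(f"T{t}->S{s}" for t, s in enumerate(assignments)))
```

Any Latin square works; the rotation is just the easiest one to write on a whiteboard.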

Here is each thread, with its versions in sequence.  I hope you enjoy the oddity that is this year's Track & Field!







Sunday, December 6, 2015

Audio-for-Video Projects

Classes for the quarter are over, and final projects are being turned in. There are a couple that I want to share with you.  The first is a sound-for-picture project. I divided the students into pairs & trios, and each group got two sixty-second clips of film. Their task was to completely strip and replace the audio in the video – re-recording all the dialogue, replacing all the sound effects, and layering in music.  The students had to work as recordists, directors, and producers. On the last day of class, we watched each video and engaged in a conversation about the team's process in recreating the sound.

Andrea & Jordan:


Annie Hall, UCI Sound Design exercise from Vincent Olivieri on Vimeo.

Monsters Inc., UCI Sound Design exercise from Vincent Olivieri on Vimeo.



Kelsi, Garrett, & Jacques:


Empire Strikes Back, UCI Sound Design exercise from Vincent Olivieri on Vimeo.
Maltese Falcon, UCI Sound Design exercise from Vincent Olivieri on Vimeo.



Matt & Ben:

True Romance, UCI Sound Design exercise from Vincent Olivieri on Vimeo.

Requiem for a Dream, UCI Sound Design exercise from Vincent Olivieri on Vimeo.

We had lots of help creating these videos, mostly from friends of the program lending their time and voices in the studio. Thanks especially to Sam Arnold, Sera Bourgeau, Amy Bolton, Martha Carter, David Hernandez, Kelsey Jackson, Kelsey Jenison, Nick Manfredi, Kevin Shewey, and Jessica Van Kempen!


(a note on copyright: as these projects were designed as classroom exercises, we did not secure any rights to these films. If you are the rightsholder of any copyrighted material contained herein and would like us to remove these clips, please let us know and we'll be happy to.)

Friday, December 4, 2015

These(is) Shining Lives

My thesis show, These Shining Lives, by Melanie Marnich, was a whirlwind process, and I am so grateful to have had the opportunity to be a part of telling this story! I had the incredible opportunity to work with third-year graduate director Sarah Butts (her thesis show too!), telling a story that is simultaneously hard-hitting, historical, poetic, and resonant.  


These Shining Lives highlights the strength of women considered expendable in their day, exploring their true story and its continued resonance. Catherine Donohue and her friends have loving families and good jobs painting glow-in-the-dark watch faces at Chicago’s Radium Dial Company, and the 1920s seem full of promise. Tragedy comes when Catherine and her colleagues begin falling ill, one by one, with mystifying ailments. When the cause of their symptoms finally becomes clear, Catherine and her friends find a way to deal with their own truth: that the job they love, that has gifted them with independence, has betrayed them and will slowly kill them. This is a story of survival in its most transcendent sense, as these women refuse to allow the company that stole their health to kill their spirits or endanger the lives of those who come after them.


The play is poetic, theatrical, and, like a memory, ephemeral.  In other moments, it is factual, hard-hitting, and tragic.  The women are not victims; they are stronger than that.  The music follows Catherine's journey – her life made of time, and the release she can finally experience at the end.


Period music was sourced to the radio, and the motif of time was associated with the clock on the header on stage.  To more closely align the melodic part of the music with Catherine's journey, a wireless microphone allowed the actor to interact with the music: tempo, pitch, and harmonies change based on her delivery of the text.  Together with my assistant Adam W. (one of our talented undergrad sound students, who is graduating this quarter), I programmed a Max/MSP patch to trigger samples in QLab based on audio input from the microphone.
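The hand-off from the analysis patch to playback rides on QLab's OSC interface (QLab listens for messages like /cue/{number}/start on UDP port 53000). Max sends these natively with its udpsend object; as a language-neutral illustration, here's a tiny Python sketch that hand-packs the same no-argument OSC message – the cue number is made up, and the patch's actual audio analysis is of course not shown:

```python
import socket

def osc_message(address):
    """Pack a no-argument OSC 1.0 message: padded address + ',' type-tag string."""
    def pad(b):
        # Null-terminate and pad to a 4-byte boundary, per the OSC spec.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode("ascii")) + pad(b",")

def fire_cue(cue_number, host="127.0.0.1", port=53000):
    """Tell QLab (listening on its default OSC port) to start a cue."""
    packet = osc_message(f"/cue/{cue_number}/start")
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (host, port))

# Hypothetical: when the patch detects the trigger in the vocal line, fire cue 12.
# fire_cue(12)
print(osc_message("/cue/12/start"))
```

The point is only that the trigger itself is a twenty-byte UDP packet; all the musicality lives upstream in the analysis.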



This was also my first adventure composing music. It was a challenging experience, but I learned a lot about songwriting, voice leading, harmony and music direction. In particular, I want to thank our incredible department chair, Dr. Gary Busby, for his music direction mentorship and guidance.


I also want to say a huge thank you to my family, girlfriend, and colleagues for their support in and out of the theater, and to Sarah and the incredible design team and cast I had the honor of working with to make this show happen. Thank you to Vinnie for your mentorship throughout the process, and to Martin Carrillo for your thoughts and words at the end.  This was an incredible process and one I'll continue to treasure and reflect upon for a long time.