Wednesday, April 27, 2016

USITT SoundLab 2016 Project Files

This past March, the annual USITT conference was held in Salt Lake City. This year, the Sound Commission hosted a version of its biennial SoundLab focused on multichannel sound for theatrical playback. A number of people collaborated on the design of the lab, and Jim Van Bergen designed the delivery system.




I served as Director of Programming for the SoundLab, which meant that I ran two projects. The USITT SoundLab Student Project (which I ran in partnership with Drew Dalzell) was a six-month-long script analysis project using Caryl Churchill's Fen as the source text. Twenty-four students from around the country worked with mentors to read, analyze, and conceptualize a sound design for Fen. At the end of the project, six students were invited to take time during the conference to build their designs for an excerpt of the play in the SoundLab environment.

Concurrently, I also ran the SoundLab Professional Project, which was similar in scope. Three professional sound designers each took a portion of a different play, conceptualized a design, and had a chunk of time at the conference to install their design. While those designers were working, sound designer Brad Berridge and I hosted a real-time Iron Chef-style analysis of what the professional designers were doing, examining how they used the multi-channel system, how they strategized their designs, and how they organized their programming sessions in QLab. It was a terrific opportunity for the student designers to watch professionals work in real time in a complex environment.

Student Kate Fraser and I have a conversation while she programs her design.

Dave Mickey, Elisheba Ittoop, and Brad Ward were the three professional designers.

Brad Berridge and I lead a conversation while one of the pros installs their design.


After all nine designers had completed their programming, the designs were available for any SoundLab visitor to listen to, in order to get a stronger understanding of how different designers approach the same text. I also made a SoundField microphone recording of each piece for archival purposes. The SoundField makes a B-format recording, which I've decoded into multiple formats for your downloading pleasure; a minimal decode sketch appears below the download lists. You can download the work of these designers as stereo, 5.1, and B-format recordings:

Student Sound Designers:

Elisheva Anisman, Western Washington University: stereo, 5.1, b-format W, X, Y, Z
Katherine Fraser, Virginia Commonwealth University: stereo, 5.1, b-format W, X, Y, Z
Daniel Grayvold, Michigan Technological University: stereo, 5.1, b-format W, X, Y, Z
Josh Samuels, Ithaca College: stereo, 5.1, b-format W, X, Y, Z
Eric Smith, Michigan Technological University: stereo, 5.1, b-format W, X, Y, Z
Lindsey White, Ohio State University: stereo, 5.1, b-format W, X, Y, Z


Professional Sound Designers:

Elisheba Ittoop: stereo, 5.1, b-format W, X, Y, Z
Dave Mickey: stereo, 5.1, b-format W, X, Y, Z
Brad Ward: stereo, 5.1, b-format W, X, Y, Z
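
For anyone curious what "decoding" a B-format recording involves, here is a minimal sketch of a virtual-microphone decode from the W/X/Y channels to stereo, written in Python. This is not the decoder I used for the files above; the file names and the ±45° virtual cardioid angles are just illustrative assumptions, and the same idea extends to 5.1 by pointing more virtual microphones around the circle.

# Minimal first-order B-format (FuMa W/X/Y) to stereo decode.
# File names are hypothetical; this is a sketch, not the decoder used above.
import numpy as np
import soundfile as sf  # pip install soundfile

w, sr = sf.read("design_W.wav")  # omni component (recorded at -3 dB in FuMa)
x, _ = sf.read("design_X.wav")   # front/back figure-8
y, _ = sf.read("design_Y.wav")   # left/right figure-8

def virtual_cardioid(az_deg):
    """Return the signal of a virtual cardioid aimed at az_deg (0 = front, +90 = left)."""
    az = np.radians(az_deg)
    return 0.5 * (np.sqrt(2.0) * w + x * np.cos(az) + y * np.sin(az))

left = virtual_cardioid(+45.0)
right = virtual_cardioid(-45.0)

stereo = np.stack([left, right], axis=-1)
stereo /= max(1.0, np.max(np.abs(stereo)))  # simple peak protection
sf.write("design_stereo.wav", stereo, sr)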

Here's a great vid that the fine folks at Stage Directions made about the SoundLab:




I want to thank all of the sound designers and USITT folks who helped make these SoundLab projects happen. This was an extraordinary amount of work, and I'm appreciative of everyone who pitched in! Special thanks also go to the UC Irvine actors who voiced the roles, Samuel French, who licensed us Churchill's Fen, and Figure 53, who provided the student designers with temporary licenses of QLab.

Friday, April 15, 2016

Telematic Project!


This weekend at UCI Sound, we are opening three productions, two of which are part of The Telematic Project: two plays by Tira Palmquist (Smoke Front, performed on the arts campus, and Fire Road, performed on the engineering campus) that are performed simultaneously. While each audience only sees one complete storyline, the protagonists of both plays rely on each other to do what they think is the right thing in the face of a challenging emergency response to natural disasters. The interaction between the two plays required careful planning at every level, including things we often take for granted in theater, like pagination or communication between designers in tech.

While I was officially the designer, our process necessitated that the associate (Ben) take the lead on one of the productions while I took the lead on the other. Our sonic vocabulary and aesthetic lined up, but since the two plays had such disparate sound needs, it was important to have a designer in each space who could dedicate their entire focus to that space's audio.

Our main challenge was figuring out how to send audio and video effectively over our campus network; although the two venues are both on campus, they are separated by almost two miles.

Having learned about QSys and its power through some work at my internship in Burbank, I reached out to the folks at QSC, and they were gracious enough to loan us a QSys networked DSP system so that we could send audio back and forth between the spaces with an incredible amount of processing at extremely low latency. After some careful network configuration with our incredible IT staff (thank you Jason Valdry and Mike Scott!), we were able to get QSys sending audio across campus reliably. We added a stage management and designer microphone and headphone amp on each end, also running through QSys, so that we could communicate with each other.

We were also responsible for the video system for this show, using UltraGrid to send compressed video between the spaces and QLab for playback.

From a show control perspective, we used QLab to trigger QSys snapshots via Telnet for the telematic scenes, and QLab launched telematic video using terminal commands for UltraGrid. Video playback was triggered by the audio computer over OSC. We also programmed two QSC TCP3 touch panels for the stage managers, and wired custom call button/call light boxes for the SMs and for Ben and me, so that we could catch each other's attention.
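
If you're curious what that show-control glue looks like, here's a rough standalone sketch of the same ideas in Python (in the actual production this lived in QLab script cues). The Q-Sys Core address and port, the "ssl" snapshot-load syntax, the UltraGrid arguments, and the QLab cue number are all assumptions for illustration, not our real show values.

# Sketch of the show-control glue: recall a QSys snapshot over Telnet,
# launch UltraGrid from a terminal command, and start a QLab cue over OSC.
# Addresses, ports, snapshot names, and cue numbers below are hypothetical.
import subprocess
import telnetlib
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

QSYS_CORE = "192.168.1.10"  # hypothetical Q-Sys Core address
QLAB_HOST = "192.168.1.20"  # hypothetical remote QLab/video machine

# 1. Recall a QSys snapshot via Telnet (External Control Protocol syntax assumed).
tn = telnetlib.Telnet(QSYS_CORE, 1702, timeout=5)
tn.write(b'ssl "Telematic Scenes" 3 1.5\n')  # bank name, snapshot number, ramp time
tn.close()

# 2. Launch an UltraGrid sender as a terminal command (arguments are placeholders).
subprocess.Popen(["uv", "-t", "testcard", QLAB_HOST])

# 3. Tell the remote QLab workstation to start a video cue over OSC.
osc = SimpleUDPClient(QLAB_HOST, 53000)  # QLab listens for OSC on port 53000
osc.send_message("/cue/101/start", [])   # cue number is hypothetical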

.... well... most of the time, we just texted each other.

It's been a great process, and I was able to learn an incredible amount about network management, networked DSP, and video system design through this production. Thanks so much to QSC for their support; it was wonderful getting to work with QSys and integrate it into the production!




Click here for full system paperwork



Always good to have a troubleshooting page!

I built an iPad screen to set levels in QSys, and store them as snapshots.

Another tab on the iPad allowed me to set delay, adjust compression settings, and adjust echo canceling settings on the fly.