Tuesday, April 4, 2023

UCI Sound Design class of 2026

After a thorough search and interviews with many exciting candidates, Mike and I are happy to introduce our incoming class of sound designers!

Eric Backus (he/him) is a composer and sound designer, joining UC Irvine’s sound design cohort from Chicago. Originally from Phoenix, Arizona, Eric received a B.A. in Theatre from Arizona State University. His original music and sound designs for theatre have been heard around the country, Off-Broadway, and in many Chicago theatres including The Goodman, Chicago Shakespeare Theater, Steppenwolf, Writers, and Northlight. In addition to theatre, Eric composes and sound designs for podcasts and film. He is beyond thrilled to be back in the West Coast sunshine, and to join UCI’s MFA Class of 2026! www.ericbackus.com

Andrew Yoder is a sound designer/producer/DJ from Boise, Idaho. His start in sound began with a passion for music and DJ culture, which inspired him to blend aspects from across the musical spectrum and apply them to the theatrical world. He began his sound design journey early in his undergraduate career with Sam Shepard's True West. Since then, he has received three awards from the Kennedy Center American College Theatre Festival ('19, '20, '21) for excellence in sound design for his work on A Kind of Alaska, Drowning Ophelia, and The Moors. He was also a nominee for the Don Childs Collaboration Award in 2019. Outside of sound design, he enjoys disc golfing, yoga, and DJ-ing.

Welcome, Andrew and Eric!  We can't wait to start working with you in the fall!

Tuesday, January 10, 2023

UCI Sound Design is Recruiting MFA students!

It's January, which means that UCI Sound Design is officially recruiting for our MFA class of 2026.  If you've been thinking about grad school for sound design, we want to hear from you!

If you think that a Sound Design MFA may be the next step in your career and/or creative life, we think you should give UCI a look.  We're proud of our program and the students who study with us, and we want to talk to you about what we can offer your education and whether we might be a good fit!

If you're interested in learning about UCI Sound Design, here's what you should do:
  1. email Prof. Vinnie Olivieri at olivieri@uci.edu to introduce yourself and set up a time to chat
  2. head over to our program website to read about the program.
  3. bookmark the application page for all UCI graduate admissions.
Drop us a line!  We'd love to chat with you!

And, for the algorithm, here are some photos of UCI MFAs doing awesome things in our program.

Students at Meow Wolf's Omega Mart on a field trip for their Themed Entertainment Class.
Kissing the duck brings good luck.

Walking from building to building on a tour of the Meyer Sound Lab facilities.

Students, alumni, faculty, staff, and a significant other at our annual Holiday Party!

Was the Escape Room trip part of Themed Entertainment research or team-building?  Why not both?

Students, faculty, staff, and alumni at USITT's 2022 conference.

A student mixes on L-ISA at L'Acoustics' North American HQ.


Tuesday, December 6, 2022

UCI Alumna Ning Guo wins Singapore Young Artist Award!

UCI alumna Ning Guo was recently awarded Singapore's Young Artist Award!  Ning is a 2019 MFA graduate of UCI's sound design program.  While at UCI, she designed and composed for a number of different shows, and her thesis focused on spatial audio solutions.  She was a dynamic and creative artist at UCI, and it's clear from this award that she's making an impact in the Singaporean artistic community.

Mike and I are both very proud of Ning!  

You can read more about the award and see a photo here.

Congratulations Ning!

Thursday, November 3, 2022

The Passage @ UCI

This past fall, UCI premiered 'The Passage,' a new dance-theatre play written and directed by UCI Prof. Bryan Reynolds.   'The Passage' is a play about extreme skiing and the emotional journey that the skiers undergo, including huge adrenaline highs and impossible anguish.  I served as sound designer, and I did a small amount of composition as well.

Bryan came into this project with a very clear and specific musical vocabulary, even going so far as to identify which pieces of music needed to be included in each act, and in which order.  Normally, my sound designer spidey-sense gets all in a huff when a director gets this prescriptive, but this time I didn't mind so much, because his specificity allowed me to focus on other aspects of the design.

This fall, our friends at L'Acoustics provided us an L-ISA system for use in one of our mainstage shows (keep checking back here for a post on The Story of Biddy Mason, designed/scored by Nat Houle!).  In order to allow Nat to work with the system and learn how to be effective with it before the stress of tech, we installed the system in our xMPL theatre well before she went into tech.  And, since my show immediately preceded Nat's in the space, I took advantage and incorporated it into my show.  That way, not only did I get a chance to experience this revolutionary tool, but Nat and I got to work together to figure out how best she could use it when it was her turn.  Essentially, I was her guinea pig.

L-ISA was designed primarily as a live mixing spatialization tool, but it's got applications in theatrical sound design as well.   I was interested in exploring how best to use it, and for me, part of that experimentation involved creating alternative methods of positional input.  I don't think that we've truly cracked the puzzle of how to control/program 3D spatialized sound (we're so often stuck using 2D tools), so designing this show was a research project for me in 3D tool design. I spent time this summer designing a few tools that I could experiment with to learn which tools/features were most useful to use when creating positional information.  These tools all sent data to a Max patch, which reformatted it and sent it along to L-ISA.

(I've often wished that I had these positional tools available to me on a big show, where I didn't have the time to develop them.  By taking the time on this show (where the director had already identified much of the initial content), I was taking advantage of a unique opportunity to prototype a new set of ideas).

I built five different interfaces for L-ISA:

1: using a Lemur patch

2: using a Wii remote controller

3: using a Mugic controller

4: using a laptop keyboard & trackpad

5: working in conjunction with Purdue Prof (and general all-round weirdo genius) Davin Huston, we adapted MediaPipe to develop a tool that used my laptop's webcam to map my hand position to L-ISA source positions.

All five of these interfaces delivered data to Max, which transformed it into data that L-ISA could read.
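To give a flavor of that reformatting step, here's a minimal Python sketch (not the actual Max patch; the function name, input ranges, and output conventions are my assumptions) of mapping a normalized 2D control surface to an L-ISA-style azimuth and distance:

```python
import math

def xy_to_lisa(x: float, y: float) -> tuple[float, float]:
    """Map a normalized 2D control position (x = right, y = front, each in -1..1)
    to an L-ISA-style azimuth (degrees, 0 = front, clockwise) and distance (0..1)."""
    azimuth = math.degrees(math.atan2(x, y))  # 0 deg straight ahead, +90 deg hard right
    distance = min(1.0, math.hypot(x, y))     # clamp to the unit circle
    return azimuth, distance
```

A point straight ahead at the edge of the surface, `xy_to_lisa(0.0, 1.0)`, comes back as azimuth 0 at full distance; hard right, `xy_to_lisa(1.0, 0.0)`, as azimuth 90.  The real patch also had to juggle elevation and width, which is exactly where the 2D tools ran out of fingers.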

I prototyped the tools over the summer, months before we loaded into the space, and once we were loaded in, I tweaked each of them while I built the show.  I used the tools primarily while building the design in the space; once we were in tech or running the show, the positional information was either sent by QLab to L-ISA via networked cues or recorded as L-ISA snapshots that were recalled by QLab.
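For the curious, those networked cues are plain OSC over UDP under the hood.  Here's a minimal sketch of how such a message gets encoded on the wire (the address `/src/1/xy` is hypothetical, not L-ISA's actual address scheme, and a real rig would just hand this to QLab or a library like python-osc):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded out to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    # Encode an OSC message with float32 arguments (big-endian, per the OSC 1.0 spec)
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(args)).encode("ascii"))
    for a in args:
        msg += struct.pack(">f", a)
    return msg

# A packet like this could be sent with socket.sendto() to the spatializer's OSC port
packet = osc_message("/src/1/xy", 0.5, -0.25)
```

Each piece (address, type-tag string, arguments) lands on a 4-byte boundary, which is why hand-rolled OSC bugs are so often padding bugs.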

I don't have enough space here to write about all the things I learned while working on this project, but here are a few general observations/notes for future use:

  • USE FEWER TOOLS.  Have fewer things to pick up.  Table space is at a premium.  If you have to reach across the table to grab a sensor, you won't use it.
  • BE GEOGRAPHICALLY CONGRUENT. Want to position something front and left?  It should *feel* front and left to you.  Width should feel wide.  Height should feel high. Intuition is fed by instinct.
  • WE DON'T HAVE THE RIGHT TOOLSET YET.  L-ISA has separate control parameters for polar coordinates on the horizontal plane (radius and angle), elevation, and width.  I wasn't able to build a tool that was able to intuitively incorporate all those controls.  Yet.
Here's a quick pro/con chart for my tools:

  • Lemur
    • PRO: intuitive, clear, able to label interface elements with text, could handle ten sources at a time
    • CON: limited to 2D
  • Wii remote
    • PRO: highly intuitive, lots of buttons that are programmable to control specific parameters
    • CON: didn't handle L-ISA's depth spatialization well, orientation was based on a simulacrum of positional information, and I needed an older mac to connect to the Wii remote itself (current Mac OS doesn't recognize the remote). Only one source at a time.
  • Mugic
    • PRO: intuitive, lightweight. Next time, I'll build the controller into a glove and wear it full-time.
    • CON: could only handle one source at a time, could only transmit positional information, required a dedicated proprietary wifi network to function.
  • Laptop keyboard/trackpad
    • PRO: lots of buttons and surfaces to send comprehensive data
    • CON: not much better than L-ISA's interface.  Only one source at a time.
  • MediaPipe
    • PRO: super intuitive.  I can see future iterations that incorporate elevation and width
    • CON: Only one source at a time.  In my iteration, I could not control elevation and width at the same time that I was controlling pan/distance
Overall, the show was a success. We got great feedback from the audience, and I'm very happy with how the show ended up.  I'm particularly grateful to my sound team: Associate Designer Costa Daros, Assistant Designer Alex Fakayode, and A1 Jayde Farmer. Onward to future productions!

And if you want to talk more about these ideas (and take a look at my Max patch), drop me a line!

Tuesday, September 20, 2022

Attending the American Theatre Wing Annual Gala

 Did you know that the Tony Awards were named after a woman named Antoinette "Tony" Perry? 

On September 12th, I was honored to attend the annual American Theatre Wing Gala in New York City. The theme was celebrating the incredible legacy of Antoinette Perry by uplifting the contribution of women to American theatre. The make-up of the production team, performers, Honorary Chairs, and musicians was in harmony with this purpose.

I was invited as the guest of Cricket Myers who acted as one of the two Honorary Chairs for Sound Design. The second Sound Design pair was Palmer Hefferan joined by Bailey Trierweiler. 

This occasion was an exciting opportunity to meet and network with theatre professionals, to consider the progress each generation of women has made in our field, and to converse with donors who were enthusiastic about meeting the artists behind the art.

Looking back at this experience, the most memorable portion of the night was jumping with joy when I noticed who was sitting behind the console: sound designer Joanna Lynne Staub and audio engineer Adrianna Brannon. 

We often hear today, "If you can't see it, you can't be it." This resonated with me at this event. To be in the combined presence of Palmer, Joanna, Adrianna, and Cricket - four incredible female-identifying theatre artists I deeply look up to - was not only delightful but encouraging.

Attending this event was a powerful reminder that those of us who have traditionally been overlooked, marginalized, and undervalued have immense power together. We all have the capacity to thrive, learn, be a mentor, be a mentee, and stand with one another knowing we have more work ahead.

Tuesday, July 19, 2022

Jeff Polunas to leave UCI Sound

We were disappointed today to learn that Jeff Polunas, our Sound Supervisor for the past 4+ years, will be leaving UCI early next month to take an Assistant Professorship at Kent State University, where he'll be teaching sound design and projections.

Jeff has an MFA from UCI in Sound Design, and he came back to work as the Sound Supervisor from a teaching position at CalState Fullerton.  In addition to his work at UCI's School of the Arts, serving Drama, Dance, and Music productions, he also maintains an active freelance career, designing shows around the southern California area. 

At UCI, Jeff's work as the Sound Supervisor meant that he also always worked closely with the MFA Sound Design students, supplementing their coursework and design work with a strong technical foundation and skillsets.  He truly was a partner to Mike and me in the sound design program, and he leaves big shoes for us to fill.

We all wish him the best as he heads to Ohio for his new position.  Kent State is lucky to be getting him!

Thursday, June 2, 2022

Audio Theater links are up!

Earlier this academic year, the UCI Drama Department's first foray back into in-person production was a bill of six original audio plays. The six plays were written, directed, performed, managed, designed, and crewed by students at UCI. We performed them for a live audience, but we also recorded the performances. After the run was over, each sound designer mixed their play(s) down, and then MFA2 Costa Daros mastered the six plays. We've written about the project at length here, but today I'm delighted to share with you that the recordings are finally being released! You can hear them all at this soundcloud link. Happy listening!

Tuesday, April 19, 2022

BOURN KIND- Tiny Kindness Project to premiere at Tribeca Film Festival

It was hard keeping this news under wraps for the past few months, but I'm excited to announce that BOURN KIND- Tiny Kindness Project will premiere in June at the Tribeca Film Festival!

This is the second project that director Rachel Myers and I have worked on, and the post-production audio was done entirely in the Meyer Sound Design Studios here at UCI.  I did the foley performance, editing, mixing, and mastering, and MFA student Costa Daros worked as the foley mixer.  Here we are hard at work recording and mixing foley.

Congrats to everyone who made this beautiful film happen!  Check it out if you can!

Tuesday, April 12, 2022

UCI Sound Design: class of 2025

Recruiting students into the MFA Sound Design program is always tough for us.  We can only take two students each year, and there are always terrific students out there that we have to decline. This year was particularly difficult: the number of exceptional candidates was four times higher than usual.

That said, Mike and I, along with the rest of the program are delighted to welcome our incoming students for this fall: Melanie Lopez and Jeremiah Turner.  Melanie and Jeremiah stood out to us with exceptional combinations of talent, creativity, curiosity, professionalism, and experience.  We're looking forward to working with them for the next three years!

I asked each of them to submit a short bio, so here they are, in their own words:

Thursday, January 20, 2022

Audio Theater - our return to the stage!

I was going to start this blog post by talking about the impacts on UCI Sound by COVID-19, but there’s nothing new under the sun there.  It stinks, and we all know it.  So let’s skip ahead.

In November 2021, UCI Sound was an integral part of the first live in-person mainstage performance at UCI since the pandemic began.  AUDIO THEATER was a bill of six student-written plays, performed live and recorded for eventual streaming (stay tuned!).  The plays were performed as part of a single bill, and each performance was recorded.  At the end of the run, the designers and directors chose the strongest performance, and the sound design team then prepared the recordings for streaming.

photo by Paul Kennedy

For UCI Sound, this was a complex task.  Six plays, each with their own sonic landscape, each with their own cast.  Six playwrights, each with their own text, some of which were in more flux than others.  Six directors, each with their own vision, who were serving both the audience live in the theater and the eventual streaming audience.  Five designers (four sound, one lighting – no scenic, costumes, or projections), who needed to serve their plays and support each other.  A phalanx of stage managers, who kept rehearsals efficient, safe, and effective. Dozens of actors, plenty of staff and support personnel.   One sound system, with one production assistant and one A1 (our two first-year MFAs).  

For pandemic reasons, we mounted these productions with actors spaced out, at microphones, in static positions.  They could look at each other and communicate over distance, but they couldn’t leave their microphone position. This gave the event more of a broadcast feeling rather than a theatrical feeling.  

photo by Paul Kennedy

Each console input and output was recorded for maximum flexibility when mixing/producing after the run.  Once the designers had finished mixing their pieces, they passed their final mixes off to one student who served as the mastering engineer for the whole bill.

Each of the designers has written a bit about their play and their approach to the sound design for it.  So, here they are in their own words (edited slightly):

DO YOU TRUST ME? – JJ Margolis, Sound Designer

Do You Trust Me, written by Summer Savonna Session and directed by Cybelle Kaehler, is a nostalgic flashback story set in the late 90’s. After meeting in middle school, Leon and his best friend Ty develop a tradition of exploring haunted buildings in their town. In high school, they take their girlfriends to a warehouse prepared with alien-themed props, traps, and sound effects to scare Jess, and in that warehouse they encounter… something they do not expect.

The task of sound for this story was to help support the world and ground the vocal performance in a believable, but not wholly realistic space. Through the combination of recorded audio effects and synthesized sounds, the ‘scenery’ of the play is established. One of the most interesting challenges of this is the balancing act of building a supernatural suspense without confirming in one way or another whether there is truly anything unnatural present.

As the core of the play, the vocals stood central at all times. I used different reverb effects to bring the characters into the spaces that they flashed back to. In addition, I needed to separate Leon into both a narrator and a character. I accomplished this by isolating the lines that he spoke as narrator, keeping them clean of any aforementioned reverb and adding a tube-modeled compressor to support the warmth of those lines. Finally, I layered very subtle processing onto some of the characters as the story progressed… but that would be spoilers.

LIE IN STATE – Aerik Harbert, Sound Designer

As we round out a year since the January 6th insurrection in the Capitol, Lie In State, written by Meliza Gutierrez and directed by Angela Cruz, serves as a testimonial to the events of that day. Not of reality, but of possibility; it’s impossible to know the exact complexities of what it meant to be a police officer, congressperson, or innocent bystander on location that day, but Lie In State gives us a taste of their fear, hope, and humanity. When confronting the possibility of death first hand, who knows what may come to mind?

Many of us have seen video taken on location, but there is an inherent safety in a recording because we know that we were not personally the targets of that mob. I needed to keep the feeling of danger up so that our characters never felt truly at ease, while still leaving space for them to speak their piece. There are discrete events supported by sound as protesters break into the building which highlight specific moments of danger, but the true danger in this setting is the human element, so I chose to use a persistent bed of protesters which always lived in the background.

This background detail ebbs and flows dynamically to carve space for important moments of dialogue for the actors, and it only peaks at specific moments where the increase in danger was necessary to keep stakes up as we moved from dramatic beat to dramatic beat. There are some non-diegetic elements, but overall, the sound lived within the world of the play so that the characters and audience are engaged in the same level of energy.

WHEN WILL MI HIJA VISIT ME? – Meghan Roche, Sound Designer

When Will Mi Hija Visit Me? written by Cindy El & directed by Leah Jackson, focuses in on a mother telling her friend the story of her daughter’s disappearance from her life. Maria is our main (and only speaking) character, and we wanted to keep her “real world” and “memory world” as separate as possible, so there would be moments where the audience felt firmly in the world of Maria literally telling this story, and moments where we moved into her psychological space as she recounted it; that separation became my main focus as I designed the piece.

To achieve the “real world” moments, I used room tone and other low sound effects, like distant outdoor ambiance and a ticking clock, to place us in her kitchen. We also had live foley accompanying the very first and very last moments of the piece as Maria makes her coffee and fiddles with her spoon and mug. This took a fair amount of trial and error; we made good use of UCI’s extensive prop shop to find the right materials to make the sounds we wanted, and I auditioned a lot of different options for room tone—the ones I’d been working with initially wound up feeling much more out of place in the physical space than they did when I was making mock-ups and pulling SFX on my computer at home! But eventually we found a good balance, and the room tones in combination with EQ and reverb processing felt quite convincing in the Claire Trevor Theater.

The “memory world” also made use of EQ and reverb, but was meant to feel significantly less grounded than the kitchen FX did, and all of the SFX I used in the more involved flashback sequences also had a fair amount of that same reverb. There was also subtle tonal underscoring in a number of places to help support the most emotional moments.

The in-person presentation of this piece was a little different than what you will hear in the recording, as it also involved two non-speaking actors onstage doing movement work as Maria’s husband and daughter, silently acting out the story as she told it in a stylized and very physical manner. Because we do not get that in the recording, a few spots were trimmed down and adjusted to make the pauses in our actor’s speech make more sense.

The Happiest Bunch – Costa Daros, Sound Designer

The Happiest Bunch, written by Mary Hill and directed by Mary Hill and Arizsia Staton, was a slice-of-life comedy break in our night of dramas. I wanted to embrace the endearing sitcom style, so the design was mostly realistic with some exaggerated moments to heighten the comedy like Martha's murmuring and our moment within their phone conversation. Overall, the sound helped to bring the piece to life and give the characters something to react to during their performance.

Nearer By Far – Meghan Roche, Sound Designer

Nearer By Far, written by Erica Clare Bogan and directed by Dylan Seeley, was another one-actor play. It takes place in a submersible unit—essentially a very tiny submarine designed to go deep in the ocean—and is told in the form of the taped journal entries recorded by one-half of an underwater duo who were there doing undisclosed scientific research. Early in the piece, however, we learn that the other half of the team was killed, and our main character has to figure out what went wrong, and how to survive on her own.

From the beginning of our discussions about this piece, we knew that we wanted to lean into the “found tape” aspect of it, using vocal processing to really compress & affect our actor’s vocals in a way that emulated the feeling of listening to an old recording & adding in sound effects that were very specific to starting and stopping a tape recording. This came less from a desire for realism than from a desire to heighten drama and show the passage of time between journal entries. I also made heavy use of various drone styles—more easily heard in person, with our beefy subwoofers and lovely Meyer speakers, than on the average computer speakers, but still present nonetheless—for the drama piece of things. They shifted subtly from entry to entry, but all throughout there was also a separate drone that faded up very slowly and imperceptibly as it went, which made the dropout of almost all of our atmospheric sounds at a power-outage moment late in the piece all the more emotionally affecting.

The Informant – Costa Daros, Sound Designer

The Informant, written by Grace McCarney and directed by Lucas Nelson, was the retelling of a woman's involvement in WWII to her granddaughter. As she narrates the story, the characters she talks about come to life and take over the dialogue for her. I designed the sound in order to help divide these worlds of the story being narrated by her future-self from the story taking place in real time. This gave us the opportunity to incorporate live foley into the design which helped the characters around the woman create the world she talked about.

Right now, we’re putting the finishing touches on the mixes and mastering them.   Once the elements are in the can, we’ll be streaming them over the nets!  I’ll post again then.  

Monday, March 8, 2021

UCI Sound MFA, class of 2024

Every year when we interview potential MFA students for our Sound Design program, Mike and I always lament how hard it is to choose just two students from a qualified pool.  There are generally three or four students that are strong candidates, and it can be an excruciating decision to select which students to invite to join us for three years.  Part of selecting two students means necessarily saying 'no' to many more, including the few top-ranked alternates.  

This year was much harder, due mostly to COVID-19.  Because our industry is basically at a standstill, many folks are deciding that now is a good time to get their degree.  The number of qualified interested students shot up 500% this year. Instead of a pool of three or four top candidates, we had nine.  And, while in a normal year, we'd invite those three or four applicants to campus to sit in on classes, meet with faculty/staff/students, tour the facilities, and generally get to know us, this year UCI's campus is virtually shut down to outsiders.  So, we pivoted, scheduling Zoom sessions so that all nine top applicants could sit in on classes, meet with faculty, staff, students, get a portfolio review, etc.  It was a bit of a logistical nightmare and something I hope we don't have to repeat next year, but it gave us a chance to meet so many students that we'd never have gotten the chance to meet before. 

It also made our decisions harder, because while the number of excellent applicants increased, the number of people we could accept didn't.  We still had to choose two.  So while we're thrilled with the two women who are joining us this fall, we also want to publicly say that we wish we could have taken more of you!  Those applicants who are reading this - we wish you all the best and look forward to seeing your careers grow!

This year, Mike and I are thrilled to welcome to the UCI Sound Design MFA program two extraordinary designers.  I've asked them to write their own bios.  Here they are:

Friday, January 29, 2021


In order to have a fully functional Sound Shop, the concept of the SWIFT (Sound Workshop for Integrated Fine-Arts Technology) needed to be reimagined.  Since the SWIFT has now been renamed the Keller (in honor of BC Keller), this was the perfect opportunity.

When I came on board to attempt to fill BC’s shoes (let’s face it, anyone who knew him knows this is impossible to do), the first thing I noticed was that the sound shop was in chaos and needed direction; it had been almost two years since a full-time sound supervisor had been part of the CTSA.  As the photos below show, there was some storage infrastructure, with equipment housed haphazardly in it.

A change was needed, and a drastic one at that.  The Sound Shop needed to feel renewed, with a newfound sense of purpose.  Over the course of the next 15 months, I devised a plan to slowly implement changes and improvements to how the Sound Shop would function.  The first was to determine a new layout that would allow the most flexibility and storage capability.  I kept BC’s idea of keeping the center clear for a prep area, and rearranged and purchased new equipment infrastructure to improve the layout.  I removed the old tool chest and donated it to the xMPL booth, removed the old test rack, and removed any storage containers that didn’t match the new aesthetic.  I decided to keep the long shelving unit, wireless cabinet, rigging cabinet, workbench, storage bin rack, and filing cabinet.  I then purchased a second matching storage bin rack, replaced the mic cabinet with a different cabinet, purchased a new shelving unit, added a new cabinet, and purchased a tool chest.

This gave the shop a new look and allowed for more storage.  I was able to add the new storage equipment in the Sound Shop and actually gain 12 square feet of floor space.  Once the new layout was implemented, the SWIFT, or Sound Shop, was no more, and The Keller was born.

When you now walk into the Keller, there is a new standard in how the Sound Design program operates and runs each production.  Since the implementation, shows are pulled faster and more efficiently, and have a place to live before the equipment gets loaded into a venue.

I will now give a photographic tour of the Keller, since we live in COVID times and only authorized people may be on campus.


When you open the door to enter, there are three cabinets on the left: rigging, microphones, and wireless.  Continuing on is the workbench, with the long storage shelves now along the back wall.  On the right are the electronics cabinet and two storage bin racks, followed by the tool chest.  Immediately on the right is the computer station.


The rigging cabinet is now fully organized and labeled.  Speaker manufacturer bolts and smaller hardware are separated out in bins or drawers.  It’s now easy to grab what is needed for a show pull and have it.  This is also where bolts, screws, and washers are stored by size.


The microphone cabinet is the latest addition to the Keller.  Each microphone model is grouped together in its own bin, so it is convenient to see what is in stock and how many of each are available for a project. There are currently empty bins, which leaves room for more mics in the future.  Various mic hardware is also stored in the door, to make mounting mics on things other than stands more convenient.


The wireless cabinet contains the mic elements, wireless mic transmitters and any in-ear monitor units for wireless speakers.  


The workbench is fully functional and equipped with a soldering station, carbon filter, and vises to assist in repairing equipment.  The desktop also includes the test rack, which contains a cable tester, Clear-Com unit, and mixer.  Above the workbench are Akro-Mils drawers for various components, adapters, and parts.  To the right of the workbench is the repair shelving unit.  The top shelf is for cable that needs repair, and the next shelf is for mics and smaller equipment.  Below that is a shelf for rack gear that needs repair.  The bottom shelf is for Clear-Com units and larger equipment.


Along the back wall are the long storage shelving units that store our computer monitors and Clear-Com units.  Since we now build custom racks for each production, our rack-making equipment is also in this unit.  This includes panels, panel connectors, vents, lacer bars, and blank panels.  Above the rack-making section are the tech table supplies and computer keyboards and mice.  The very top is where spools of various cable are stored for building new cables.  There is still plenty of room, and many empty bins, to store future equipment on this unit.


The electronics cabinet stores the smaller networking switches (non-rackmountable), wireless routers, MIDI gear, Yamaha cards, and video equipment.


The storage bin racks store various equipment for easy access, including XLR turnaround cables, RS232 cables, adapter cables, computer cables, and other miscellaneous cables.  The top is for connectors used to make new cables, and to the right are power adapters for equipment.


The tool chest has many drawers to keep tools organized for use in the Keller.  These tools are only for the Keller, as we have a separate tool chest for load-ins and strikes.  There are also empty drawers for future tool storage.


The computer station is equipped with a Mac mini on the left and a PC on the right.  This flexibility gives us the ability to run any software needed in the shop.  Both computers are connected to the 42” HDTV so the display can be seen across the Keller, which is helpful for pull lists and checking out equipment.  The keyboards and mice are wireless, so it's easy to move them around the Keller.  The PC also has a wireless scanner, which assists in checking out and returning equipment.

So there you have it.  You have witnessed the death of the SWIFT and the birth of the KELLER!

Tuesday, December 15, 2020

Track and Field 2020

In my Trends in Modern Sound Design class, we often do an iterative hot-potato project called Track and Field.  It's an opportunity for the students to work on efficiency, conceptualization, and workflow. Here's how it works:

1) On the first day, each student brings in a 30-second piece of audio of their own creation.  The audio can be anything, from field recordings to original music to a short narrative. Students bring in both a rendered audio file and individual stems, and may spend no more than 60 minutes in the studio working on it.
2) Each submission gets passed to another student, who uses the original material as the starting point for their own work. Again, they must limit themselves to 60 minutes of studio time and 30 seconds of duration, but other than that, they can do anything they want. Students turn in both a rendered audio file and individual stems.
3) We repeat that over and over, with a matrix set up in such a way that each student usually gets to work on each thread.

This quarter, I had eleven students, so there are eleven threads.  I've taken each thread and strung each iteration together in sequence, so you can hear how one thread changes as different designers get their hands on the material.  Enjoy! 

(side note: Thread 7 featured some prominent commercial music, so SoundCloud won't let me present it to you.  Your loss!)
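For the curious, one way to build that pass matrix is a simple rotation (a Latin square), so every student touches every thread exactly once. This is a hypothetical sketch, not necessarily the exact scheme we used:

```python
# Hypothetical sketch of a Track and Field pass matrix.
# Round r assigns thread t to student (t + r) % n -- a Latin square,
# so each student works on each thread exactly once over n rounds.

def pass_matrix(n_students):
    """rounds[r][t] = index of the student working on thread t in round r."""
    return [[(t + r) % n_students for t in range(n_students)]
            for r in range(n_students)]

rounds = pass_matrix(11)  # eleven students, eleven threads

# Round 0: each student starts their own thread.
assert rounds[0] == list(range(11))

# No student ever works the same thread twice.
for t in range(11):
    workers = [rounds[r][t] for r in range(11)]
    assert sorted(workers) == list(range(11))
```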

Tuesday, December 1, 2020

Teaching D-Mitri Remotely

Every fall, in our Digital Audio Systems class, I teach our first-year sound designers a two-week intensive overview of the Meyer Sound D-Mitri system.  D-Mitri is a powerful tool for live sound that combines the functions of a digital mixing console, a sound content playback device, a multi-channel sound spatialization tool, a room acoustics enhancement tool, and a show control hub, all in one package.  D-Mitri systems are found in large-scale sound installations around the world, from theme parks to Broadway and beyond.  D-Mitri's capabilities are so extensive (as is the learning curve, frankly) that we typically have a number of second- and third-year students join us for the training to refresh their skills.

UCI has a small D-Mitri system, and we use it both as a teaching tool and in production.  When we teach with it, we roll the rack into the Meyer Sound Design Studio and patch eight D-Mitri outputs directly into our eight-channel loudspeaker system so that we can learn and work with it while experiencing its spatialization capabilities in real time.  D-Mitri programming happens through software called CueStation, which functions in a client-server capacity.  Multiple users can be logged into D-Mitri at the same time, each working on a different aspect of the programming.  Our D-Mitri classes typically involve everyone in the studio, sitting at their laptops, all wired into D-Mitri with a nest of ethernet cables.

The Meyer Sound Design Studio, in the before-times.

Of course, we can't do that this year.  We could have delayed the training module until we were able to meet safely, but I don't know when that will be, and I'm honestly tired of delaying things because of the freaking pandemic.  I didn't want to let the perfect be the enemy of the good, to paraphrase Voltaire.

So, in a pandemic, how do you teach a class that requires both client-server access AND the ability to perceive spatialized sound?  In order to solve this, I needed to think through a number of different challenges.  Here they are, how I thought through them, and how I eventually solved them.

Physical Locations

We knew that the D-Mitri rack would need to live in the Meyer Sound Design Studio.  The studio is currently cleared to be occupied by three people, but I was uncomfortable coming to campus for in-person class (I'm teaching all of my classes remotely this term).  Plus, I know how important the refresher is to our more senior students, and I didn't want to cut them out of the experience.  So, each student would be remote, logging in with their computers (with wired connections, preferably).  I came into the studio to teach the classes so that I could take care of any issues that came up while teaching that I couldn't deal with remotely.

Even though I'd be teaching from the studio, I expected that I'd need to be able to remote into the host computer in order to tweak details from home. Early in the quarter, while testing, I found that if I were on campus, I could screenshare with the host computer (an iMac that we call Chalkboard), but when I returned home, I couldn't screenshare with Chalkboard at all.  After consulting with our IT department, we determined that we needed a more robust screensharing tool.  We installed TeamViewer on Chalkboard so that I could control the host computer, restart failed connections, etc.  TeamViewer mostly worked like a champ, though there were a few times where I couldn't log on to Chalkboard at all. 

Connecting CueStation to D-Mitri

The easiest way to share a CueStation screen with the students was to just share my laptop's desktop via Zoom, but if I did that, they'd just be watching me click things, which is hardly useful when teaching a tool.  The students needed to be able to control CueStation on their own in order to get their (virtual) hands on the (virtual) machine.  I asked Richard Bugg at Meyer Sound about how we might address this issue, and he noted that D-Mitri systems can be controlled from around the globe using a proxy server. The folks at Meyer use this feature to troubleshoot systems without having to fly halfway around the world, but it was just as useful for my needs.  Richard walked me through the steps to set it up and spent some time doing some testing with me. The proxy server required Chalkboard to be running CueStation, but as long as it was running CueStation and the proxy server was active, I could have up to eight clients logged in at the same time.  Sometimes it took a while to get all students onto the proxy server at the same time. The folks at Meyer use the proxy server to do maintenance on machines that are difficult to get to, not to teach D-Mitri to a class, so they don't typically have the user count that we did.


So, we'd figured out where everyone would be, and we'd figured out how everyone could control D-Mitri using a proxy server.  But how could we send spatialized sound to the students so that they could all monitor the D-Mitri environment well?

My first thought was to build a SpaceMap (D-Mitri's spatialization tool) replica of the Meyer Sound Design Studio's loudspeaker configuration, take the eight outputs of D-Mitri into a DAW, place them into a 5.1 session, stream the six-channel output over the net, and then have students monitor with 5.1 headphones.  But, we ran into a number of challenges with this idea.  First, I couldn't find a reliable six-channel sample-accurate streaming tool.  We've been using AudioMovers, which does a great job with two-channel signals, but in testing, multiple two-channel instances did not sync to each other (there are rumors of a >2-channel upgrade, but I haven't tested it yet).  Also, six channels of audio is three times the bandwidth of two channels, which could impair networks in dorms and homes.  Finally, I was hoping to avoid having to seek out funds to buy enough 5.1 headphones to outfit the class.  So, back to the drawing board.
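For a back-of-the-envelope sense of that bandwidth difference, here's a quick calculation assuming uncompressed 48 kHz / 24-bit PCM (illustrative figures only; AudioMovers' actual codec and overhead differ):

```python
# Rough uncompressed-PCM bandwidth estimate (illustrative numbers only;
# the actual streaming codec and network overhead will differ).
def pcm_mbps(channels, sample_rate=48_000, bit_depth=24):
    """Raw PCM bitrate in megabits per second."""
    return channels * sample_rate * bit_depth / 1e6

stereo = pcm_mbps(2)   # ~2.3 Mbps
six_ch = pcm_mbps(6)   # ~6.9 Mbps -- three times the stereo stream
assert abs(six_ch - 3 * stereo) < 1e-9
```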

SpaceMap of the Meyer Sound Design Studio
A spacemap of the MSDS studio.

I solved my problem in the next iteration.  I still took the eight channels from D-Mitri into my DAW, but instead of mixing to 5.1, I mixed to binaural.  I initially used Binauralizer by Noisemakers to render each D-Mitri input relative to the loudspeaker's position in the room, though I shifted to the dearVR Micro plug-in due to better low-frequency management.  The DAW mixed all eight inputs together, and then I used AudioMovers to send a two-channel stream out to the students.  I cut the audio bandwidth by two-thirds and removed the need for 5.1 headphones!
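The real binaural stage used HRTF plug-ins (Binauralizer, then dearVR Micro), but the routing idea can be illustrated with a much cruder stand-in: equal-power stereo panning of the eight speaker feeds by azimuth. The azimuth list below is illustrative, not the actual MSDS layout:

```python
import math

# Crude stand-in for the binaural stage: equal-power stereo panning of
# eight speaker feeds by azimuth (0 deg = front, +90 = hard right).
# The real chain used HRTF plug-ins; these azimuths are illustrative,
# not the actual Meyer Sound Design Studio layout.
SPEAKER_AZIMUTHS = [0, 45, 90, 135, 180, -135, -90, -45]

def downmix(frames):
    """frames: list of 8-channel samples -> list of (L, R) samples."""
    out = []
    for frame in frames:
        left = right = 0.0
        for sample, az in zip(frame, SPEAKER_AZIMUTHS):
            # Map azimuth to a pan position in [-1, 1], then equal-power gains.
            pan = math.sin(math.radians(az))
            theta = (pan + 1) * math.pi / 4   # 0 .. pi/2
            left += sample * math.cos(theta)
            right += sample * math.sin(theta)
        out.append((left, right))
    return out

# A source in the front-left speaker only should favor the left ear.
frame = [0.0] * 8
frame[7] = 1.0                      # azimuth -45 deg
(l, r), = downmix([frame])
assert l > r
```

Equal-power panning keeps the summed energy constant as a source moves, which is the same property the real spatialization tools preserve (albeit without the HRTF cues that make binaural rendering convincing).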

ProTools Session Set-Up
ProTools handled the binaural routing.

The eight iterations of NoiseMakers, positioned relative to their corresponding spatialization position.
Eight binauralization plug-ins spatialized the sound.

Ultimately, the students were able to listen to high-quality spatialized audio with a relatively low latency.  It wasn't the same as being in the room, but it was pretty close.

Returning to the Studio

We spent four two-hour sessions learning the basics (and some details) of D-Mitri remotely, and on the fifth and final day of the module, the two students and I met in the Meyer to review their work in person.  They had created spatialized sonic events from their apartments, but they presented their work through the eight-loudspeaker system that we have in the Meyer. This gave us an additional opportunity to discuss how well the binaural monitoring situation translated into actual meat-space monitoring.  Their work translated more or less well, but we noted that a sound panned to the center of the room revealed itself differently in speakers than in headphones.  Via headphones, all eight channels were being addressed, which imaged the sound to the center.  But in the studio, having all eight speakers firing didn't image to the center: it either imaged EVERYWHERE (if you were sitting in the sweet spot in the room) or to whichever speaker you were closest to (if you weren't).

Final Thoughts

You won't catch me yearning to do this again if I have the option to teach in person, but overall, I'm pleased with the results.  If I have to do this again, I'd need to address these issues:

  • Input source.  I was using a single channel of audio from ProTools as an input source.  I set ProTools to loop playback, but sometimes the session would stop on its own.  Next time, I'd use a different, more reliable input source.  An FM radio would be a nice low-tech tool.
  • Remote Access via proxy server.  It wasn't as solid as I would have liked it to be.  In fact, on the first day of class, no one could connect except me.  
  • AudioMovers wasn't designed to stream audio 24/7 for 3 weeks, and it occasionally failed.  When that happened, I had to log into the computer, restart the transmission, and send the link around again.  I had to do that once a day or so.  Not a deal breaker.  Just a thing to note.

Overall, this was a huge success!  If you're thinking about doing something like this, let's talk!  I'd be happy to share my thoughts and brainstorm other/better solutions!