Thursday, June 28, 2012

Ain’t Nothin’ But a Phase Thing



Last week, UCI hosted the Meyer Sound seminar on Fundamentals of System Design, Implementation & Optimization, led by Steve Bush. For all of us involved, this was an excellent opportunity to return to the basics of system design and ear training for live sound. Steve started at the ground level and took us through the process of critical decision-making, at a pace that allowed everyone in the room to stay on board.

The first day was spent reinforcing the basics. Steve set up combinations of UPJ arrays, UPJ & M1D subs, & M1D sub arrays, and demonstrated the results both aurally and visually in SIM. We did a lot of listening to comb filtering and polarity reversal with pink noise to get our ears acquainted with those sounds. Steve drilled us on wavelength and period recognition, asking questions like:

Q: If Period = 1/Frequency & Frequency = 1/Period, what is the period of 100 Hz?
A: 10 ms.
Q: What happens if two 100 Hz sine waves are sent to the same speaker with one delayed 5 ms?
A: Complete cancellation.

The underlying point of the exercise was to reinforce the importance of loudspeaker placement when designing systems, allowing us to speedily approximate the basic interaction between two speakers at any distance from each other. Just remember the approximate wavelengths: 100 Hz is a Mini Cooper, 1 kHz is a hoagie, and 10 kHz is your pinkie finger.
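
To make the 5 ms example concrete, here's a quick numpy sanity check (my own sketch, not something from the seminar): a 5 ms delay is half of the 10 ms period of 100 Hz, so the delayed copy is an exact inversion of the original and the sum cancels completely.

    import numpy as np

    fs = 48000                     # sample rate in Hz
    t = np.arange(fs) / fs         # one second of time values
    f = 100.0                      # test frequency (period = 10 ms)
    delay = 0.005                  # 5 ms delay = half a period

    direct = np.sin(2 * np.pi * f * t)
    delayed = np.sin(2 * np.pi * f * (t - delay))

    print(np.max(np.abs(direct + delayed)))   # ~0.0 -- complete cancellation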

Steve heavily reinforced the causes and effects of phase at all stages of system design. He set up a situation in which one source of pink noise was split and sent to a single speaker, with one feed delayed an unknown amount. By examining the frequency and phase traces together in SIM, we could see the correlation between the frequency of the first dip in the combing and the 180º phase cancellation point. We would then double that frequency to find the next peak, and take the period of that peak to determine the delay between the two sources. For instance, if the first dip is at 500 Hz, the next peak is at 1 kHz; the period of 1 kHz is 1 ms, so there is 1 ms of delay present.
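
That trick reduces to simple arithmetic. Here's a minimal sketch of it in Python (my summary of the method, not SIM's actual math): the first dip sits where the delay equals half a period, so doubling the dip frequency gives the first peak, and the period of that peak equals the delay.

    def delay_from_first_dip(dip_hz):
        """Return (first peak in Hz, implied delay in ms) from the first comb dip."""
        peak_hz = 2 * dip_hz            # the next peak is an octave above the dip
        delay_ms = 1000.0 / peak_hz     # the period of that peak equals the delay
        return peak_hz, delay_ms

    print(delay_from_first_dip(500))    # -> (1000, 1.0): 1 ms, as in the example
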
To demonstrate some of these effects, Steve set up a fairly typical tuning situation with a single UPJ and an M1D sub. When tuning these speakers, you typically insert some form of high-pass filter on the UPJ and a low-pass on the M1D sub. As Steve demonstrated, however, these filters introduce a shift in the phase curve of each cabinet: the 2nd order filters used within the Galileo 616 processor create a 90 degree phase shift at the cut frequency for each speaker. In other words, if the two filters are set at the same frequency, they create a 180º phase difference at the crossover point. The solution? Steve merely flipped the polarity on the sub, “turning the nulls into peaks and the peaks into nulls.” This shifted the cancellation point of the two speakers upward in frequency, to a range that the subwoofer does not cover; in other words, there was no perceivable cancellation!
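
If you want to see those 90 degree shifts for yourself, here's a hedged sketch using scipy, assuming ordinary 2nd order Butterworth responses (an assumption on my part; the exact filter shapes inside the Galileo 616 may differ):

    import numpy as np
    from scipy import signal

    fc = 100.0                    # crossover frequency in Hz (arbitrary choice)
    w0 = 2 * np.pi * fc           # same frequency in rad/s for the analog filters

    b_lp, a_lp = signal.butter(2, w0, btype='low', analog=True)    # sub's low-pass
    b_hp, a_hp = signal.butter(2, w0, btype='high', analog=True)   # UPJ's high-pass

    _, h_lp = signal.freqs(b_lp, a_lp, worN=[w0])
    _, h_hp = signal.freqs(b_hp, a_hp, worN=[w0])

    print(np.degrees(np.angle(h_lp[0])))   # ~ -90 degrees at the cut frequency
    print(np.degrees(np.angle(h_hp[0])))   # ~ +90 degrees at the cut frequency
    # 180 degrees apart at crossover -- hence the polarity flip on the sub.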

We also practiced arraying 2 UPJs, but in a different style than has been done here at UCI recently. Steve decided to treat one of the two cabinets as the main source, with the other acting as a kind of fill that expands the width of coverage of the first. The measurement process went as follows:

1.  Capture the response of the main cabinet on-axis in SIM.
2.  Shift the captured trace down 6dB.
3.  Move the mic toward the coverage limit of the horn, until the live trace matches the -6dB one. This is the “-6dB point” of the horn.
4.  Mute the main speaker and bring in the fill.
5.  Adjust the placement of the fill until it matches the -6dB response of the main speaker. This means that the coverage patterns of the two speakers cross over at the -6dB point of each.
6.  Unmute the main speaker and tune the delay of the fill until the responses match.
7.  EQ the fill as necessary.

Effectively, this creates the illusion of a single UPJ that has the extended coverage of an additional UPJ horn in one direction. Very cool, and definitely something we will want to experiment with.
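
Step 6, the delay tune, is ultimately just path-length arithmetic before the fine-tuning. A minimal sketch, with made-up distances and ~1130 ft/s assumed for the speed of sound:

    SPEED_OF_SOUND_FT_S = 1130.0

    def fill_delay_ms(main_to_mic_ft, fill_to_mic_ft):
        """Delay for the closer speaker so both arrivals line up at the mic."""
        path_difference = abs(main_to_mic_ft - fill_to_mic_ft)
        return 1000.0 * path_difference / SPEED_OF_SOUND_FT_S

    # e.g. main speaker 40 ft from the crossover mic, fill only 28 ft away:
    print(round(fill_delay_ms(40.0, 28.0), 2))   # ~10.62 ms of delay on the fill

In practice you'd land on the final value by watching the traces in SIM, but the geometry gets you close.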

One of the most interesting topics we covered was the phase response of a single loudspeaker within itself. Steve definitely blew a few minds in the room when he talked about the slow response of the woofer of a cabinet when compared to the tweeter. He also covered the use of all-pass filters in tuning. All-pass filters leave the magnitude of a signal's frequency response unchanged, but introduce a frequency-dependent phase shift (a delay). Steve explained that Meyer tunes the phase response of its cabinets by splitting the input signal into frequency ranges and inserting all-pass filters in such a way that the higher frequency ranges are delayed to arrive more in time with lower ranges, effectively evening out the phase response of the cabinet. In the case of the UPJ that he used for the demonstration, the phase deviation is no more than 540º. Steve also mentioned that all-pass filters can be useful tools when matching the phase response of two speakers made by different manufacturers, but he warned us that misuse of all-pass filters can result in significant problems in other areas of tuning.
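
To see what an all-pass filter actually does, here's a generic first-order example in scipy (my illustration, not Meyer's actual tuning filters): the magnitude stays flat at every frequency while the phase, and therefore the arrival time, varies with frequency.

    import numpy as np
    from scipy import signal

    w0 = 2 * np.pi * 1000.0          # all-pass "center" at 1 kHz, in rad/s
    b, a = [1.0, -w0], [1.0, w0]     # first-order analog all-pass: (s - w0)/(s + w0)

    freqs_hz = np.array([100.0, 1000.0, 10000.0])
    _, h = signal.freqs(b, a, worN=2 * np.pi * freqs_hz)

    print(np.abs(h))                 # [1. 1. 1.] -- no change in level anywhere
    print(np.degrees(np.angle(h)))   # ~169, 90, ~11 degrees -- phase varies
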
Steve touched on numerous methods of arraying subwoofers to allow directional control of the low frequencies. A sub-gradient array consists of two subs facing forward in a single-file line, with 3.5’ of separation. The rear sub is delayed to the front one, and then the polarity is reversed on the rear sub. This effectively creates strong cancellation in the rear, allowing the overall volume to be pushed without affecting those behind the array as much. An endfire array is a similar setup, except that the front sub is delayed to the back one, and there is no polarity flip. This creates constructive interference in the front, and destructive in the back. Endfire arrays can also consist of more than two subs in a line. Steve also showed examples of arrays-of-arrays with subs — for instance, a line of endfire arrays across the front of a stage.  He cautioned us on the use of these sub arrays, though, as even the smallest problem (like an XLR wired with reversed polarity) can completely destroy the effect.
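
The delays in both recipes come straight from the spacing divided by the speed of sound. A back-of-napkin sketch (my own, using the 3.5’ spacing from above) of the two setups as Steve described them:

    SPACING_FT = 3.5            # the single-file separation between the two subs
    SPEED_FT_S = 1130.0         # approximate speed of sound in air

    travel_ms = 1000.0 * SPACING_FT / SPEED_FT_S    # ~3.1 ms between the cabinets

    # Gradient: delay the REAR sub by the travel time and flip its polarity,
    # so its output time-aligns with, and cancels, the front sub's energy behind.
    gradient = {'rear_delay_ms': travel_ms, 'rear_polarity': 'reversed'}

    # Endfire: delay the FRONT sub by the travel time with no polarity flip,
    # so the arrivals stack up in front and fight each other behind.
    endfire = {'front_delay_ms': travel_ms, 'front_polarity': 'normal'}

    print(f'{travel_ms:.2f} ms')    # ~3.10 ms of delay in either recipe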

One of the goals for the seminar was to practice our newfound skills on the rep system of the Claire Trevor Theater. The rep system includes:
 
   Two line arrays, each with (8) M1Ds & (1) M1D sub, on either side of the proscenium
   (2) arrayed UPQs serving as the center
   (2) MM-4XPDs at the rear of house serving as fills for the FOH mix position and the handicap seating

Steve decided to leave one of the arrays set up in the rep position, and experiment with the other one for comparison. We lowered the array to try new splay angles and heights and listened to how each set of boxes interacted with the room. It was a rare opportunity to go through the steps of tuning a line array while Steve talked us through what we were hearing.

This seminar was an incredible experience for all of us at UCI. Steve was a wonderful teacher who gave some of the best-worded examples and definitions for fundamental concepts I have heard to date. He did a fantastic job of translating the results of SIM measurements into aural examples, and he explained everything in a manner that everyone, at every pace, could grasp. We really can’t wait to try out what we learned in the fall! Thank you to everyone involved and we hope to do this again soon!

^Patricia & Matt

Friday, June 22, 2012

Phase, Phase, Phase

This week, we were happy to welcome Steve Bush from Meyer Sound to UCI. He taught two seminars: Fundamentals of System Design, Implementation and Optimization, and Quick System Setup and Alignment. The recurring theme was "Phase Phase Phase". And my brain, being what it is, kept putting it to the Paul McCartney & Michael Jackson "Say Say Say" song.

Seriously, the classes were great. Everyone loved them, and thought Steve did a wonderful job getting the material across. The little pearls of wisdom, from Mini Cooper/Hoagie/Pinky Finger, to "It's all soup!" were great! It's amazing how those little pearls stick in your head. There was science. There was math. There was art. And discussion of A.R.T. (Absorption, Reflection, Transmission)! I will leave it to the grad students to go into more details of what they learned.

It was also great opening our doors to our community to join in on the fun. We had a wide range of participants. A few folks from the corporate a/v industry. Some freelance FOH engineers. Folks from other schools. Theater people. And a few church folks too. A great smattering of people who use sound! I can't wait until the next one. Not only the opportunity to learn new tricks and tips, but also to meet more of my compadres in the sound world who work in avenues I don't travel down often. I'm always up for a good war story!

Thanks again to Gavin Canaan, Mac Johnson, Steve Bush, and John & Helen Meyer, for making this all possible.

Monday, June 18, 2012

Early Adopters Beware!

Apparently, there are some USB3 compatibility issues that may prove problematic for power users like sound designers.  Check out the link here (thanks to CDM!).

Sunday, June 17, 2012

Obsolete tech sounds

Gizmodo has a great piece about obsolete tech sounds - sounds we used to hear all the time but have now virtually disappeared from our world.

Here's my favorite description:  


Dot Matrix Printout

"Long before the days of color laser printers that churn out 30 pages a minute, we had dot matrix printing. There was something hypnotically relaxing about the sound of the chirpy print head moving from left to right as reams of paper slowly pulled past it. We could meditate to that sound."

You can read the whole piece here.

Saturday, June 16, 2012

Sinan Zafar, BA, UC-Irvine

UCI commencement exercises will be held today for the undergraduate students.  Among those students receiving their Bachelor's degrees will be Sinan Zafar. Sinan is our first real sound design undergraduate student at UCI. He showed up on our radar over two years ago, and quickly made himself indispensable at UCI. He worked on large projects with the graduate students, was the go-to designer for workshop productions with his classmates, and took a significant number of graduate-level classes. This year, in addition to all of that, he also designed two shows for the UCI mainstage season. Not surprisingly, he also became our first Honors in Sound Design student.

Sinan will be moving to LA following graduation to pursue his creative work, and the future has great things in store for him.  Congratulations, Sinan!!

Monday, June 11, 2012

Beth Lake and Jeff Polunas, MFA, UC-Irvine

Congratulations to Beth Lake and Jeff Polunas, our most recent graduates of the sound design program here at UCI! They're both terrific designers, and Mike, BC, and I are proud of them and wish them great success!


Beth hails from Arizona by way of Colorado, but she also spent time at Jacob's Pillow out in Mass.  Her thesis design was Arcadia, and she's looking forward to relocating to NYC.

Jeff spent formative years in Ohio, but has been in California (including work for PCPA) for quite a while now.  His thesis was a design for Dance Visions, our faculty-choreographed dance concert. He'll stay around the SoCal area for the time being.  Jeff is also getting married this summer, so it's quite an exciting time for him!

They have both been terrific students and designers, and I look forward to the great things they'll do in the future!

Sunday, June 10, 2012

Darron West, Tony winner

Just a quick update for those following... Darron West won his Tony Award last night. Congratulations Darron!  We're all proud of you!

Which future Tony winner will visit our classroom next year?  Wait and see...

Friday, June 1, 2012

Final Projects!


Each spring for the past four years, I've taught a class called 'Creating Sounds from Scratch.'  SfS starts out exploring old synthesizer technology through a digital lens, looking at oscillators, filters, and MIDI to create entirely new sounds or to give a unique sheen to old sounds. The bulk of the class time, however, is spent digging into the details of a program called MaxMSP. Max is a piece of software that lets the user build complex creative systems for sound & video based on data and numbers. In SfS, we use Max primarily as a data mapper, taking real-world control (pressing keys or twiddling knobs on a keyboard) and turning that data into opportunity for creative control.
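
The core of that 'data mapper' idea fits in a few lines. Here's a hypothetical Python stand-in (the class itself works in Max's visual patching, not text code) for the kind of range-scaling that Max's [scale] object does:

    def scale(value, in_low, in_high, out_low, out_high):
        """Linearly map a value from one range onto another, like Max's [scale]."""
        normalized = (value - in_low) / (in_high - in_low)
        return out_low + normalized * (out_high - out_low)

    # e.g. a MIDI mod wheel (0-127) driving a filter cutoff (80 Hz to 8 kHz):
    print(scale(64, 0, 127, 80.0, 8000.0))   # ~4071 Hz at mid-wheel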

SfS is a required class for MFA Sound Designers here at UCI, but this year, I opened the class up to students of various backgrounds and aptitudes.  It was a great challenge for me, as I found myself having to frame the class in such a way as to engage both the students who were comfortable and versed in this kind of thinking and control and the students who were totally new to this kind of work. I must admit that I didn’t think it had gone particularly well, but yesterday, my mind was changed.  Final projects were due recently, and I was consistently delighted by the kind of work that the students did.  So impressed, in fact, that I want to share them with you.

The prompt for the final project was simple: ‘Use digital tools to create a modifiable soundscape of an event.’ Over the past few weeks, we looked at ways to get real-world control data into Max and ways to use that data to create/control sonic events.  Let’s look at what the students came up with:



Bryan is an undergrad with a focus in sound for games.  He was my student in a different class in the fall, and I thought that SfS would be useful to him in his game work.  He created a warzone, using his Android phone running TouchOSC as a controller.  Bryan did an excellent job in prioritizing energy and attention on foreground sounds v. background sounds (a requirement for the fast-paced world of game design).  Here’s a screenshot of his work:




Dean (an MFA Stage Manager) and Jenna (an undergraduate Sound Designer) created a car-driving sequence using a Wii Remote as a controller. The user controls the speed of the car and the volume of the radio using the gyroscopic parameters of the controller, and the buttons on the remote take us through a sonic event, ending in a surprising car crash (spoiler alert!).




Elliot (an undergrad Sound Designer), Pablo (an MFA student in the ICIT program in the computer music wing of the Music Department), and Stephen (an MFA Sound Designer) worked together to create a comprehensive sonic event, tracking the result of a botched bank robbery during a Fourth of July celebration. A Wii Remote served as a master controller, an iPhone (running TouchOSC and FaceTime) streamed data and audio to the host computer, and the bank safe was cracked in real time using a cardboard box modded with various sensors.




Kat (an undergrad Sound Designer) and Phil (an MFA Stage Manager) created a very precise aural replication of Super Mario Brothers to be played using a Wii Remote.  They used chance to re-create events, but brute-force programming to get the sounds to behave correctly.




Kristen (an MFA Stage Manager), Michael (another ICIT student), and Patricia (an MFA Sound Designer) created a space battle using a JazzMutant Lemur as a controller.  They implemented their battle using an eight-channel delivery system. Unfortunately, the user loses the space battle.

You may, by now, have noticed a theme of catastrophic death in these projects. Don’t fret – the theme continues.




Phillip, another undergrad game sound student, created a medieval sword fight for two Wii Remotes.  He used comparisons between the two controllers to trigger sonic events: if only one ‘sword’ was swung, that sword would hit, but if both ‘swords’ were swung, they would parry and clang off of each other.




Sinan, our final undergraduate Sound Designer, also created a sword fight, but his vocabulary was Star Wars lightsabers.  Sinan built his sonic event using theme music, differently-pitched sounds for light-side or dark-side fighters, and victory music that changed depending on which side won:





I was thoroughly impressed by all of these projects, and I’m proud of them all!  I’m looking forward to teaching the class again next spring!