Loudspeaker arrays have been part of the sound reinforcement toolkit practically since electrical signals began flowing through moving coils attached to cones. You have probably seen these public address arrays in large concerts and stadiums. In this application, loudspeaker cabinets are mounted in close proximity. Designing a line array typically takes the dispersion pattern of the transducer and cabinet into account while progressively angling successive cabinets in order to achieve maximum coverage with minimal interference. Usually these types of arrays are all driven from a single signal; that is, the sound engineers attempt to uniformly spread the signal to the audience.
Interesting new applications arise, however, when each transducer is driven independently under the control of a signal processor. Imagine two speakers driven by the same signal, radiating in the same direction. At some points in the field, the signal from each speaker arrives in phase, creating a higher sound pressure level; at other points, the signals arrive out of phase and cancel each other. This concept can be taken even further by adding more and more speakers.
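The two-speaker thought experiment is easy to try numerically. Here's a minimal sketch (my own, not from any product mentioned here) that computes the combined amplitude of two identical, in-phase point sources at a listening position, considering only the path-length phase difference and ignoring 1/r spreading:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def combined_amplitude(freq_hz, src_a, src_b, listener):
    """Magnitude of two equal, coherent unit sources summed at a point.

    Positions are (x, y) tuples in meters. Only the phase difference
    from the path-length difference is modeled; distance attenuation
    is ignored for simplicity.
    """
    wavelength = SPEED_OF_SOUND / freq_hz
    d_a = math.dist(src_a, listener)
    d_b = math.dist(src_b, listener)
    phase_diff = 2 * math.pi * (d_a - d_b) / wavelength
    # Sum of two unit phasors: |1 + e^{j*phi}| = 2|cos(phi / 2)|
    return abs(2 * math.cos(phase_diff / 2))

# On the centerline both paths are equal, so the sources add fully:
print(combined_amplitude(1000, (-0.5, 0), (0.5, 0), (0.0, 3.0)))  # → 2.0
# Off-axis, the result swings between 2.0 and 0.0 as the path
# difference moves through whole and half wavelengths.
```

Sweeping the listener position shows the alternating loud and quiet zones described above.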
With recent advances in signal processing technology, today's computers can control vast arrays of loudspeakers individually, applying this interference idea creatively. Using a process called beamforming, the signal processor shifts the phase of the input signal, adjusts its amplitude, and introduces delays independently for each loudspeaker in the array. By harnessing constructive and destructive interference in this way, it can place sounds at specific points or areas in the field.
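The delay part of this is the easiest to illustrate. Below is a minimal sketch (my own simplification, not Yamaha's or Microsoft's actual algorithm) of the delay computation behind delay-and-sum focusing: each speaker is delayed so that all wavefronts arrive at a chosen focal point at the same instant. Amplitude shading and the actual per-channel summing are omitted.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def focusing_delays(speaker_xs, focal_point):
    """Per-speaker delays (seconds) that make every speaker's wavefront
    arrive at the focal point simultaneously.

    speaker_xs: x positions (meters) of a linear array along y = 0.
    focal_point: (x, y) target position in meters.
    The farthest speaker fires immediately (zero delay); nearer
    speakers wait just long enough for its sound to catch up.
    """
    distances = [math.dist((x, 0.0), focal_point) for x in speaker_xs]
    farthest = max(distances)
    return [(farthest - d) / SPEED_OF_SOUND for d in distances]

# Eight speakers spaced 10 cm apart, focusing on a point 2 m out
# and 1 m to the left of the array's first element.
xs = [i * 0.1 for i in range(8)]
delays = focusing_delays(xs, (-1.0, 2.0))
```

Feeding the same signal through each delay makes the contributions add in phase at the focal point and (mostly) interfere elsewhere, which is how a beam gets "aimed" without any moving parts.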
At a recent Audio Engineering Society meeting, Ivan Tashev, Jasha Droppo, and Mike Seltzer of Microsoft Research demonstrated some applications of this technique. One demo used a small linear array of inexpensive loudspeakers. The demo played two songs through the array, and the signal processor used beamforming to place one song in a region to the left of the array and the other song to the right. A listener could stand in one area and hear only one song, then move to the other region and hear only the other.
The second demo the team gave at AES was of a commercial application of this technology, the Yamaha Digital Sound Projector. The Yamaha product contains 42 drivers, a signal processor, and a multichannel amplifier. The signal processor has 5 different beamforming modes for various listening experiences.
The most straightforward mode is stereo mode, which simply mimics a typical music listening experience. Another mode, called 5-beam mode, simulates the 5.1 surround sound listening experience by decoding the surround sound signal, encoding the 5 channels into 5 separate beams, and directing these beams to specific points in the room. The rear channels are reflected off the sides and rear of the room. Another interesting mode the Sound Projector has is the so-called “My Beam” mode, which projects the input to a specific listener but leaves the rest of the sound field miraculously quiet.
These demos were actually quite impressive, even in a large meeting room. The Yamaha product really sounded great, and I was surprised by the quality of the 5-beam surround experience. The focused beam modes are interesting too, though I could still hear the program slightly at other places. I would expect to see the price of these products decrease over time, and I suspect they will also improve in quality by learning room dynamics and adapting their programs to tailor for specific environments automatically (that is, if they don’t already).
Transducer array technology is an active area of research. Berkeley's CNMAT created a spherical array of 120 loudspeakers, which sounds exciting for interactive sound installations and performances. Microphone array technology works the problem in reverse, identifying distinct sound sources by analyzing the phase and delay correlations among the signals arriving at each microphone. This technique can reduce noise for teleconferencing applications, for example.
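The core of that microphone-array trick is estimating the arrival-time difference between two capsules. Here's a toy sketch of that step, using brute-force cross-correlation on a synthetic signal (my own illustration of time-difference-of-arrival estimation, not any particular product's implementation):

```python
import math

def best_lag(ref, sig, max_lag):
    """Return the lag (in samples) at which sig best correlates with ref."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(ref[i] * sig[i + lag]
                    for i in range(len(ref))
                    if 0 <= i + lag < len(sig))
        if score > best_score:
            best, best_score = lag, score
    return best

# A short tone at mic 1, and the same tone arriving 5 samples later
# at mic 2 (a source slightly closer to mic 1).
tone = [math.sin(0.3 * n) for n in range(100)]
mic1 = tone + [0.0] * 10
mic2 = [0.0] * 5 + tone + [0.0] * 5
print(best_lag(mic1, mic2, 8))  # → 5
```

Given the lag, the speed of sound, and the capsule spacing, simple geometry yields the direction of the source; real systems use many capsules and frequency-domain correlation, but the principle is the same.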
I’d love to actually hear what the CNMAT speakerball can do. Many years ago, I remember reading about some of the early computer music compositions created at IRCAM with their signal processing computers to place sounds in a field expressively, using many channels of audio. For example, a cellist would play into a microphone, and the computer could rotate the sound around the field according to the amplitude of the performance. Playing louder might result in the sound rotating faster around the room.
Thinking about that concept so long ago really broke down a barrier in my mind. For the longest time, I wanted to buy an old Leslie speaker and replace the motors that spin the horns and reflector with robotic servo motors. I imagined using analog function generators like envelopes and ramps to direct sounds around the room. Now all this is possible without any motors whatsoever.

Posted: August 16th, 2009