Almost all sound reinforcement and reproduction systems are based on the assumption that sound comes from performers on a stage facing the listeners and is enjoyed by listeners seated or standing in an audience area in front of the stage, facing the performers.

The implication of this arrangement is that, from the listener’s perspective, the positions of the sound sources fall between the far left and far right sides of the stage. If you’re a listener in the front row, close to the stage, you’ll hear the sound sources in a ‘sound panorama’ of close to 180° (90° left and right). If you are at the back of the audience, the panorama may narrow to 60° or even less – depending on the stage width and the depth of the audience area.

The obvious way to copy this concept in sound reinforcement is to place loudspeakers on the stage at the positions of the sound sources. However, to avoid interfering with the performers’ movements on stage and the audience’s lines of sight, and to limit cost, most systems settle on the compromise of a two-channel ‘stereo’ speaker configuration – one speaker at the far left of the stage, one at the far right. The position of each reproduced sound source can then be emulated by balancing its level between the left and right speakers.

Although this ‘pseudo’ positioning confuses our brains a little, because signals come from two fixed positions and not from the actual sound source positions, it actually works very well. Indeed, virtually every mixing console has a special knob to do this - the panorama knob, or ‘panpot’. An advantage of this sound positioning method is that it is variable: if performers move from one position to another, the sound reproduction system can follow them - which would otherwise require the associated speaker to be moved.
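The panpot described above is typically implemented as a pan law that trades level between the two speakers. As a minimal sketch – assuming a constant-power (equal-power) law, which is one common choice, and with illustrative function and parameter names:

```python
import math

def panpot(signal, pan):
    """Constant-power pan law.

    pan ranges from -1.0 (hard left) to +1.0 (hard right).
    Returns the (left, right) channel signals for a mono input.
    """
    # Map pan position [-1, 1] onto an angle [0, pi/2]
    theta = (pan + 1.0) * math.pi / 4.0
    gain_l = math.cos(theta)   # 1.0 at hard left, 0.0 at hard right
    gain_r = math.sin(theta)   # 0.0 at hard left, 1.0 at hard right
    left = [s * gain_l for s in signal]
    right = [s * gain_r for s in signal]
    return left, right
```

With this law the combined power of the two channels stays constant (cos² + sin² = 1), so a source panned across the stage does not appear to get louder or quieter at the centre – which is why equal-power laws are preferred over simple linear crossfades.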

However, there are two developments in professional audio that may lead sound engineers to rethink the ‘stereo’ concept.

First is the increasing use of headphones. In most cases a performance is mixed on a stereo pair of loudspeakers – whether by a live engineer at the audience area’s ‘sweet spot’ or by a recording engineer with the speakers carefully aligned in the control room. In both cases the speakers are probably close to 45° off-axis to the left and right.

When the mix is played back on speakers set up at the same 45° angle, everything’s fine, but listening through a pair of headphones puts the virtual speakers at ±90° instead of ±45°, placing the listener virtually on the stage edge. The result is an unnaturally wide panorama of sounds. In fact this often turns out to sound nice, but after a while the auditory software in the listener’s brain might get tired of solving unsolvable equations, longing for a more natural ‘HiFi’ sound with a narrower panorama.

Note that it’s entirely possible to mix a perfect stereo production for headphones by applying individual level and time differences to each channel for each sound source. However, that mix may have issues when played back on a stereo speaker system. Either way, with the increasing number of listeners using earphones to listen to music on smartphones and other devices, it’s becoming a relevant topic.
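Those per-channel level and time differences correspond to the interaural level and time differences (ILD and ITD) our ears use to localise sources. A toy sketch of the idea – using the Woodworth ITD approximation and a crude level law; real headphone mixes would use measured HRTFs, and all names and constants here are illustrative assumptions:

```python
import math

SAMPLE_RATE = 48000  # assumed sample rate, samples per second

def headphone_pan(mono, azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Position a mono source for headphones using a time difference
    (ITD, Woodworth approximation) plus a level difference per ear.
    azimuth_deg: -90 (hard left) .. +90 (hard right), 0 = centre.
    Returns (left, right) sample lists.
    """
    az = math.radians(azimuth_deg)
    # Woodworth ITD approximation in seconds; larger azimuth -> larger delay
    itd = (head_radius / speed_of_sound) * (az + math.sin(az))
    delay = abs(int(round(itd * SAMPLE_RATE)))
    # Crude level difference: constant-power law over the azimuth range
    theta = (az / math.pi + 0.5) * math.pi / 2.0   # [-pi/2, pi/2] -> [0, pi/2]
    gain_l, gain_r = math.cos(theta), math.sin(theta)
    pad = [0.0] * delay
    scaled_l = [s * gain_l for s in mono]
    scaled_r = [s * gain_r for s in mono]
    if az >= 0:   # source on the right: sound reaches the left ear later
        return pad + scaled_l, scaled_r + pad
    else:         # source on the left: right ear is delayed
        return scaled_l + pad, pad + scaled_r
```

At 48 kHz the maximum ITD this model produces is well under a millisecond – a few dozen samples – which is exactly the kind of cue a loudspeaker playback system then delivers twice (once per speaker, to both ears), causing the compatibility issues mentioned above.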

The second development is the availability of large, affordable channel counts in pro audio equipment: DSP, network infrastructure and multichannel amplifiers have all dropped significantly in price in recent years. The availability of compact yet very powerful loudspeakers that don’t obstruct lines of sight in live entertainment environments also offers new possibilities – following cinema, where the standard has evolved from stereo reproduction to multichannel systems such as Dolby Atmos.

Whereas in cinema it’s very common to use surround sound effects to support ‘off-stage’ events in films, in concert applications sound effects support performances that normally take place within the limited, visible area of the stage. However, in an increasing number of cases, performers have started to extend their show beyond the stage area, so it makes sense to apply surround positioning in the sound reinforcement system as well. This can be done by placing speakers around the audience and applying level panning or wavefront synthesis algorithms to position sound sources.
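The level-panning variant extends the stereo panpot to a ring of speakers around the audience: the source is panned between the pair of adjacent speakers that bracket its direction. A simplified 2-D sketch in the spirit of pairwise amplitude panning (a cousin of VBAP) – speaker layout and names are illustrative assumptions:

```python
import math

def ring_pan(source_az_deg, speaker_az_deg):
    """Pairwise constant-power panning over a ring of speakers
    surrounding the listener. speaker_az_deg: speaker azimuths in
    degrees (0 = front, clockwise). Returns a gain per speaker azimuth.
    """
    spk = sorted(a % 360.0 for a in speaker_az_deg)
    src = source_az_deg % 360.0
    gains = {az: 0.0 for az in spk}
    n = len(spk)
    for i in range(n):
        a, b = spk[i], spk[(i + 1) % n]     # adjacent pair (wraps around)
        span = (b - a) % 360.0
        offset = (src - a) % 360.0
        if span > 0 and offset <= span:     # source lies between a and b
            frac = offset / span            # 0 at speaker a, 1 at speaker b
            gains[a] = math.cos(frac * math.pi / 2.0)
            gains[b] = math.sin(frac * math.pi / 2.0)
            break
    return gains
```

Only the two speakers bracketing the source direction receive signal, and their combined power is constant, just as with the stereo panpot – the same idea, wrapped around the audience.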

Applying a surround speaker system in live entertainment might also make sense when a performance originates from a stage but has an ‘acoustic’ aspect – such as a symphony orchestra playing in an acoustically dry theatre, or in the open air. In these cases reverberation coming from the stereo PA speakers may sound a little strange: all reflections arrive from the front but none from the sides, where our perception of reflections is in fact most sensitive.

Using a surround loudspeaker system and a surround DSP algorithm for the reverberation may provide a more natural reverberation, increasing the quality of the performance. It’s something we are already used to with home surround systems, but now it is also applicable to live sound systems at reasonable cost.
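One reason a surround reverb sounds more natural is that each speaker can be fed a decorrelated reverb tail, so reflections arrive from all around rather than from two correlated front channels. A toy sketch of the principle – a single feedback comb filter per speaker, with slightly different delays per channel; this is a teaching example, not a production reverb algorithm, and all parameter values are illustrative:

```python
def comb_reverb(signal, delay_samples, feedback=0.6):
    """Single feedback comb filter -- the simplest reverb building block.
    Returns the wet (delayed, recirculating) output only.
    """
    buf = [0.0] * delay_samples        # circular delay line
    out = []
    for i, s in enumerate(signal):
        delayed = buf[i % delay_samples]
        buf[i % delay_samples] = s + feedback * delayed
        out.append(delayed)
    return out

def surround_reverb(signal, n_speakers=4, base_delay=1471):
    """Feed each surround speaker a differently tuned comb filter, so
    the reverb tails are decorrelated between channels.
    """
    # Staggered delay lengths decorrelate the channels from one another
    return [comb_reverb(signal, base_delay + 113 * k) for k in range(n_speakers)]
```

Real surround reverbs use networks of combs and all-pass filters (or convolution with measured room responses), but the key design point survives even in this sketch: every output channel gets its own, slightly different tail.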