The Yonnie blog

Surround sound techniques for mixing and composing for the screen

Jul 26, 2016

It seems that, more than ever, filmmakers are requesting sound mixes to be delivered in surround sound. For this reason, becoming familiar with techniques for mixing your score in surround can certainly be an advantage. Additionally, composing in surround from the beginning of a project can inform creative decisions, and instruments and sounds may be treated and mixed to make the most of the multiple channels. Furthermore, having control over the final surround mix of the score means that the composer is not reliant on the re-recording mixer to upmix it during the sound mix.


Philosophy and creative approach

How you intend to place and mix certain sounds and instruments in surround rather than in stereo may affect the creative decisions you make while composing. This is due to the extended channel real estate at your disposal, which allows for greater separation and immersion of instruments and sounds, as well as creative uses of the extended space to shape and inform the composition. This approach is similar in ethos to Pierre Schaeffer's musique concrète compositional practice, which explored using and manipulating recorded sound to shape the composition itself and, in this case, the space in which the sound is reproduced. Creative use of surround mixing can also be compared to dub music, where mixing techniques and effects are used as instruments in their own right to form an integral part of the composition.

Ultimately, composing in surround sound is subjective and there are no rules – it comes down to the individual film, the context of the music, the aesthetics of the composer and filmmakers, what they want to achieve with the score, and the palette of instruments and sounds and how they can best work in surround. For example, some Hollywood blockbusters use surround sound as a bombastic 'effect' to excite audiences, like a theme park ride that is intended to be a 'surround sound experience', while other more nuanced productions use it sparingly so as not to pull attention off the screen and out of the story.

My personal approach has developed and adapted with each project I have worked on. One of the most important things I have found is to not let the technicalities of composing in surround affect or block the creative music writing process. The most obvious way to avoid this would be to separate the composition process from the mixing process by composing in stereo and tackling the surround mix once the music has been written. I find that I generally end up somewhere in the middle, mixing as I go but finalising and refining the mix once everything has been composed. In this process, playing back and processing surround sound, and the sheer number of extra tracks required, can be overwhelming while also putting a large strain on system resources. I have overcome this by trying to keep things as focused as possible and identifying the most critical instruments and sounds in a cue that will benefit from a specific surround treatment. If treating these in surround will affect my overall compositional approach to the cue, I tackle it straight away. If it's just expanding a sound or instrument into surround, I'll use a basic upmixing technique on the individual element and refine it as I get closer to the final mix.

 

Things to look out for when composing in surround

 

Fold-down compatibility and phase coherence

Being able to fold down the surround mix to stereo and even mono while maintaining its balance and phase coherence is very important. Although you may have a beautifully immersive surround mix, the reality is that many people will most likely hear it in stereo from consumer playback systems or even in mono from mobile devices. It's important to check the stereo fold-down of the mix to make sure that everything is translating faithfully, and it is also a good way to check for any phasing problems and unwanted sound colouration introduced by upmixing techniques. Mixing some cues folded down to stereo, then switching between stereo and 5.1 surround to see how the mix translates, can give a good perspective on what the surround mix is actually achieving.
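For anyone who wants to experiment with fold-down checks outside a DAW, here is a minimal Python sketch. It assumes hypothetical channel arrays and uses common Lo/Ro-style downmix coefficients (around -3 dB on the centre and surrounds); the phase check is only a crude indicator, not a substitute for listening.

```python
import numpy as np

def fold_down_to_stereo(l, r, c, lfe, ls, rs, centre_gain=0.707, surround_gain=0.707):
    """Fold a 5.1 mix to stereo using common Lo/Ro-style coefficients.
    Each argument is a 1-D array of samples; the LFE is ignored here,
    as it often is in consumer fold-downs."""
    lo = l + centre_gain * c + surround_gain * ls
    ro = r + centre_gain * c + surround_gain * rs
    return lo, ro

def mono_cancellation_db(lo, ro):
    """Crude phase-coherence check: compare the mono sum against the stereo
    fold-down.  Fully correlated material gives roughly 0 dB, decorrelated
    material around -3 dB; much lower values suggest channels are cancelling."""
    mono = 0.5 * (lo + ro)
    stereo_rms = np.sqrt(np.mean(lo ** 2 + ro ** 2) / 2)
    mono_rms = np.sqrt(np.mean(mono ** 2))
    return 20 * np.log10((mono_rms + 1e-12) / (stereo_rms + 1e-12))
```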

Centre channel

There are generally two points of view on how to use the centre channel when composing in surround. One is that it should be left clear of all music so that it doesn't interfere with dialogue. The other is that if a sound is in the front centre of the mix, it should be placed in the centre channel, as it will give the overall mix better definition, helping to spread the sound evenly over the front Left, Centre and Right channels. Due to the precedence effect, if the same sound is playing through multiple speakers at exactly the same time, it will appear as though it is only coming from the speaker closest to the listener. Particularly in a large theatre environment, mono sounds panned to the phantom centre suffer from this and can create a lopsided perception of the mix. Some re-recording mixers believe that there should always be centre channel information to help combat this problem, and if it isn't present they will sometimes partly mix the front left and right channels into the centre channel to compensate. This can potentially narrow the overall LCR image and also alter the mix balance intended by the composer. The flipside is that if there is any dialogue during a particular music cue, an overbearing use of the centre channel could result in the overall music mix being lowered to compensate. Therefore a strategic and adaptive use of the centre channel can be a good approach to take.

Low frequency effects (LFE) channel

The LFE channel was originally developed to carry sub-frequency cinematic effects. For this reason, the general consensus is that it shouldn't be used as a 'subwoofer' channel to be fed with everything in the low-frequency range, but rather be used sparingly for specific low-frequency sounds and effects. If there is too much continuous, uncontrolled sound in the LFE channel, there is a good chance the re-recording mixer may pull down the entire music LFE channel.

I have found that taking a very considered approach to using the LFE can be an advantage. In some cases, particularly with percussion and musical effects, you can render to audio the specific sound you plan to send to the LFE and assign it its own channel. This allows you to repitch it into the sub frequencies, sculpt the amplitude envelope and position the sound exactly, while monitoring how it is reacting with the original source sound and the rest of the mix.
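As a rough illustration of that workflow, the sketch below (in Python, with made-up parameter values) takes a rendered percussion hit, varispeeds it down into the sub range, low-passes it and shapes its envelope before it would be routed to a dedicated LFE track.

```python
import numpy as np
from scipy.signal import resample_poly, butter, sosfilt

def make_lfe_layer(hit, sr, octaves_down=1, cutoff_hz=80.0, fade_in_s=0.05):
    """Sketch of preparing a rendered percussion hit for a dedicated LFE layer:
    varispeed it down into the sub range, low-pass it, then shape its envelope."""
    # Varispeed-style repitch: stretching the audio lowers its pitch.
    factor = 2 ** octaves_down
    sub = resample_poly(hit, factor, 1)

    # Keep only the sub frequencies for the LFE channel.
    sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
    sub = sosfilt(sos, sub)

    # Simple amplitude envelope: short fade-in, linear fade-out over the rest.
    n = len(sub)
    attack = max(1, int(fade_in_s * sr))
    env = np.ones(n)
    env[:attack] = np.linspace(0.0, 1.0, attack)
    env[attack:] = np.linspace(1.0, 0.0, n - attack)
    return sub * env
```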

 

Surround sound mixing and production techniques

 

There are many ways to use surround during the composition process, and some techniques work better depending on the material. I have listed the main techniques I use, from the most basic to the more experimental.

Recording in surround

A great way to put an instrument or sound in a surround space is to actually record it in a space using a multi-microphone surround recording setup. If there are several instruments, a less reverberant space can work well for utilising the spread of individual instruments. However, I have found that single instruments work best in a space with some reverberation and reflections. Since they are generally a single point source of sound, the recording space can impart a unique colour and context. Recorded sounds and electronic or electric instruments can also be re-amped and recorded effectively in reverberant spaces by playing them back through a speaker or PA system.

There are many ways to record in surround, from a traditional Decca Tree configuration with combined ambient microphones, to a handheld Zoom recorder, and all the way to experimental and customised microphone set-ups. See commonly used techniques.

The way I have approached surround recording is to try to find the best possible space in which to record, as it is the actual space that will colour and affect the sound. Depending on the score, spaces have varied from a reverberant hall to a completely dry recording studio and a concrete car park, while the equipment and mixing techniques I've used have been adapted depending on the individual project. Taking a creative approach that serves the result you are after can yield a rewarding and uniquely nuanced sound that is specific to the project.

Upmixing

Upmixing is essentially mixing or converting a sound into a format with a higher channel count. There are two ways that I approach upmixing: offline (using audio) and real-time (creating playable surround instruments).

In offline upmixing, rendering instruments and sounds into audio or using existing recorded material can be the quickest and most efficient way to employ manual upmixing techniques. I have found using audio to be more solid and tactile, and it generally requires fewer system resources than real-time techniques. The downside is that it's usually best employed once the score has been written and you are into the recording and mixing stages, as audio generally locks you into timing and performance. When using this technique during the early stages of composing a cue, I find that individual percussion hits and impacts, aleatoric musical-effect-style sounds, pad-like or sustained sounds, and atonal sounds are the best to work with, as they can be easily repositioned and edited when developing the cue.

In real-time upmixing, a more performance-orientated approach is to use multitimbral virtual instruments, samplers and synthesizers to create real-time playable surround instruments. Some virtual instruments offer surround capability straight out of the box, like Kontakt, Absynth and Reaktor. Others, such as Omnisphere, are not specifically surround capable, but since they are multitimbral they can be set up for surround by discretely panning individual parts. Even instruments that are not multitimbral can be used in surround by combining two or more instances and routing them to discrete channels, or by applying algorithmic upmix plugins in real time.

There are a number of techniques you can employ with either of these approaches that can be loosely categorised into manual upmixing, algorithmic upmixing, surround effects, creative convolution and sampled space re-creation.

Manual upmixing

 

(Hard panning, divergence and dynamic panning)

There are a number of ways to upmix mono or stereo instruments or sounds manually, and the simplest way is to mix the source into the multiple channels using hard panning or divergence controls (essentially spreading the source into multiple channels by a certain amount). This, however, can create issues depending on the sound source and where the listener is sitting, especially with mono sources, where the precedence effect comes into play. Therefore, mono sounds are better off discretely panned rather than played at equal amplitude through multiple speakers, while stereo sounds are more effective when positioned in such a way that they have some channel separation.
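To make the divergence idea concrete, here is one possible (purely illustrative) gain law in Python for a mono source that starts in the centre; real panners use different curves, and as noted above, pushing a mono source to equal level in every speaker invites precedence-effect problems.

```python
def divergence_gains(amount):
    """Illustrative divergence law for a mono source that starts in the centre:
    amount = 0 keeps it in C only, amount = 1 spreads it equally across L, C, R.
    Real panners use different curves; this only shows the idea."""
    spread = amount / (1.0 + 2.0 * amount)   # share sent to each of L and R
    centre = 1.0 - 2.0 * spread              # remainder stays in the centre
    return {"L": spread, "C": centre, "R": spread}

# divergence_gains(0.0) -> {'L': 0.0, 'C': 1.0, 'R': 0.0}
# divergence_gains(1.0) -> {'L': 0.333..., 'C': 0.333..., 'R': 0.333...}
```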

Hard panning similar instruments and sounds to individual speakers can offer balanced separation and immersion. I have found it to be particularly effective with similar-sounding instruments, pad or ambient sustained-type sounds and musical effects. There are many ways to achieve this, such as multitracking a guitar part and hard panning each take to an individual channel.

Dynamic panning of mono or stereo sounds can also create a sense of movement. When there is a solid base of surround information, it can allow for subtle panning of individual instruments and sounds, creating an organic and immersive effect.

(Offsetting)

Offsetting is one of the most immediate ways of creating surround instruments and sounds, and I have found it to be one of the most powerful upmixing techniques. It generally works best with sustained atmospheric pad-like sounds that have the potential to truly immerse the listener, as opposed to single point sounds like solo instruments and voices. The basic approach is to offset multiple versions of the same instrument or sound by a certain amount (or even reverse it in some cases) while panning them to individual channels. This can result in a very wide and immersive effect, as the sounds from each channel are very closely related yet completely discrete.

A great way to try this is to create a simple mono or stereo pad sound, render it to audio, duplicate and pan each instance to a discrete channel and offset each audio clip by a couple of seconds or more. I usually drop the rear and centre channels by a few decibels to sweeten the overall image so that it feels as though the sound is coming from the front and wrapping itself around the listener. The technique can be extended to real-time instruments, where offsetting the start time of discretely panned channels of a synth or sampler can create playable results.
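Here is a minimal Python sketch of that exercise, assuming hypothetical channel names, offsets of a few seconds and small level drops on the centre and rears; the same idea applies to any sustained, pad-like source.

```python
import numpy as np

def offset_upmix(pad, sr, offsets_s, gains_db):
    """Build a 5.0 bed from one rendered pad by giving each channel its own
    time offset and level.  offsets_s and gains_db are dicts keyed by channel."""
    channels = {}
    for name, offset in offsets_s.items():
        delay = int(offset * sr)
        gain = 10 ** (gains_db[name] / 20)
        # Each copy starts later than the original, trimmed to a common length.
        shifted = np.concatenate([np.zeros(delay), pad])[: len(pad)]
        channels[name] = gain * shifted
    return channels

# Example values only: offsets of a few seconds, rears and centre a touch lower.
offsets = {"L": 0.0, "R": 2.0, "C": 3.0, "Ls": 4.5, "Rs": 6.0}
gains = {"L": 0.0, "R": 0.0, "C": -3.0, "Ls": -2.0, "Rs": -2.0}
# surround_pad = offset_upmix(pad, sr, offsets, gains)
```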

(Repitching)

As with offsetting, repitching can be used as a technique to derive separate yet related sounds from a mono or stereo source. One way of approaching this is to duplicate a mono or stereo sound into multiple versions and then repitch them up or down (usually by no more than a semitone), either by varispeeding or pitch-shifting. Trying various combinations of panning – placing the lower-pitched sound in the rear and the higher-pitched in the front, or vice versa – can help to locate the most balanced and immersive combination. Some level adjustment on the individual channels can sweeten the surround image, while the amount of pitch difference between the sounds can create interesting results as their relationship changes. I have found this to be very effective with percussive, effect-based and atonal sounds, while real-time multi-oscillator detuned synth patches can also work well.
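A small Python sketch of the varispeed version of this, with purely illustrative cent values: resampling shifts pitch and duration together, which is what keeps the duplicates related yet distinct.

```python
from scipy.signal import resample

def varispeed(audio, cents):
    """Varispeed-style repitch: resampling changes pitch and length together.
    Positive cents shifts the copy up (and shortens it), negative shifts it down."""
    ratio = 2 ** (cents / 1200.0)
    new_length = int(round(len(audio) / ratio))
    return resample(audio, new_length)

# Purely illustrative pairing: fronts a quarter tone up, rears a quarter tone down.
# front = varispeed(source, +50)
# rear  = varispeed(source, -50)
```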

Algorithmic upmixing

There are a number of audio plugins that can take a stereo sound and convert it into surround using algorithmic processing. I have used and tested some of these and have found that each one is quite different, working better on some sources than others. So far, I haven't used any single plug-in that sounds great on all material, leaves the spectral balance of the original sound fully intact and folds back into stereo exactly. For this reason I tend to view upmix plug-ins as just another set of tools to be used where they work best, rather than a one-size-fits-all solution. It's important to note that only two of these plug-ins allow you to fold the upmixed sound back down to stereo perfectly, without any alterations to the material: Penteo 5 Composer and Nugen Halo (in exact mode). If maintaining the exact relationship between the original, upmixed and downmixed versions of the score is important, then these are worth testing.

These are some of the upmix plugins that are currently available, many of which offer trial versions:

  • Nugen Halo Upmix
  • Penteo 5 Composer
  • Auro 3D
  • Waves UM225/UM226
  • IOSONO Sound: Anymix Pro
  • TC UnWrap
  • Soundfield UPM-1
  • DTS Neural Upmix

Surround effects

 

(Reverb)

Using reverb to upmix is achieved by sending an instrument or sound to a surround reverb in order to place it within a virtual surround space. There are a number of surround-capable reverbs in both plug-in and hardware form that can do this. It’s also possible to combine two or more stereo reverbs by panning them discretely to the front and rear channels. Another technique is to use a stereo reverb and upmix it into surround using an algorithmic upmix plugin.

Depending on what you are trying to achieve and the source material, using surround reverb can be extremely effective. Algorithmic, convolution and hybrid reverbs can be used to create realistic spaces to place the instruments and sounds within, while they can also be pushed beyond realism into the experimental realm.

One thing to keep in mind with reverb is that its definition and balance can change considerably between an acoustically treated studio space and a large theatre. A large theatre has its own natural reverb and reflections that are added to the overall mix as it plays back. Therefore, not overdoing it, finding middle-ground settings and maintaining a good balance that works well in as many playback scenarios as possible can be a good approach to take.

(Delay and echo)

Delay and echo in a surround context can work well as a creative effect rather than as a recreation of a real space, although they can also be useful in combination with reverb for realistic spatial positioning. There are few dedicated surround delays, and those that I have tried have tended to be idiosyncratic, without exact control over the surround placement and modulation between delay lines. I have found it better in some cases to combine multiple mono and/or stereo delays panned discretely, as this allows exact control over each instance. By adjusting the individual feedback, filter and modulation parameters, the movement within the surround field can be controlled, while setting different delay times per instance can create rhythmic textures and movements.
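The sketch below shows that combination approach in Python, with hypothetical per-channel delay times and a deliberately simple echo-summing delay; each channel of a 5.0 bed gets its own instance, so the repeats land at different times around the room.

```python
import numpy as np

def feedback_delay(x, sr, delay_s, feedback=0.45, mix=0.5, repeats=6):
    """Very simple feedback delay approximated by summing progressively
    quieter, progressively later echoes of the dry signal."""
    d = int(delay_s * sr)
    wet = np.zeros_like(x)
    echo = x.copy()
    for _ in range(repeats):
        echo = np.concatenate([np.zeros(d), echo])[: len(x)] * feedback
        wet += echo
    return (1 - mix) * x + mix * wet

# Hypothetical, tempo-flavoured delay times per channel of a 5.0 bed:
delay_times = {"L": 0.375, "C": 0.25, "R": 0.5, "Ls": 0.75, "Rs": 0.625}
# surround = {ch: feedback_delay(source, sr, t) for ch, t in delay_times.items()}
```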

(Modulation)

Modulation effects such as tremolo, vibrato, chorus, phasing and flanging can be used for movement, thickness, swirls, pulses and spins within the surround space. Both surround-capable and hard-panned mono and stereo instances can be used creatively on all types of material, both subtly and obviously, for immersive effects. I tend to favour fully surround-capable modulation effects, as I use them the majority of the time for full 360-degree surround spins, builds and pulses.
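One way to think about those 360-degree spins, outside any particular plug-in, is as per-speaker gains modulated by a slow LFO. The Python sketch below uses nominal 5.0 speaker angles and a deliberately crude pairwise gain curve, purely to show the shape of the idea.

```python
import numpy as np

# Nominal 5.0 speaker angles in degrees (0 = straight ahead, negative = left).
SPEAKER_ANGLES = {"C": 0.0, "L": -30.0, "R": 30.0, "Ls": -110.0, "Rs": 110.0}

def spin_pan(x, sr, rotations_per_s=0.25, width_deg=90.0):
    """Rotate a mono source around the room by modulating per-speaker gains.
    A crude pairwise gain curve stands in for a real surround panner."""
    t = np.arange(len(x)) / sr
    pan_angle = (360.0 * rotations_per_s * t) % 360.0   # slow LFO sweeping the angle
    out = {}
    for name, angle in SPEAKER_ANGLES.items():
        # Shortest angular distance between the moving pan position and this speaker.
        diff = np.abs((pan_angle - angle + 180.0) % 360.0 - 180.0)
        gain = np.cos(np.clip(diff / width_deg, 0.0, 1.0) * np.pi / 2)
        out[name] = gain * x
    return out
```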

(Creative convolution)

As well as using convolution reverb for surround space creation, some surround convolution reverbs allow you to import your own impulses. This allows the creation of custom surround impulses out of almost anything using upmixing techniques. Once imported into the convolution reverb engine, the impulse can be used to convolve other instruments and sounds, creating interesting surround instruments and sounds derived from both sources. This is very much an experimental way of upmixing and you can end up with all types of results, good and bad, although sounds with similar spectral characteristics seem to work the best. The fact that you can create unique sounds and instruments specific to the project means this upmixing technique can potentially inform the music on a compositional level.
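Stripped of the reverb engine, the core of this technique is just convolving the source with each channel of the custom impulse. The Python sketch below assumes the impulse has already been upmixed into a dict of channel arrays using one of the techniques above.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_with_surround_impulse(source, impulse_channels):
    """Convolve a mono source with each channel of a custom surround impulse.
    impulse_channels is a dict of 1-D arrays, one per speaker; the result
    carries characteristics of both the source and the impulse in every channel."""
    out = {name: fftconvolve(source, ir) for name, ir in impulse_channels.items()}
    # Trim everything by the overall peak so the channel balance is preserved.
    peak = max(np.max(np.abs(y)) for y in out.values()) + 1e-12
    return {name: y / peak for name, y in out.items()}
```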

(Sampled space recreation)

Some sample libraries are recorded with multi-microphone arrays, allowing you to recreate the space in which they were recorded. This can be a good way to create a very realistic sense of space, and is about as close as you can get to actually recording in surround. I find this works well in an orchestral setting, when all of the sampled instruments being used are from the same library and recording space. Combining different libraries can become problematic, as the difference between the spaces in which they were recorded can be too obvious. In such cases, it can be good to send the different libraries’ close mics to the same surround reverb and construct a common surround space for all the instruments.

 

Summary

 

There are many considerations when composing in surround for the screen. But the question is: is it just a mixing process, or is the use of surround an integral part of the composition process? Ultimately, there are no rules; it's a subjective choice to be made by the composer and filmmakers on how a surround score will best serve the project.

It is my hope that the techniques discussed can be taken and built upon by fellow composers so that we may discover even more ways to use surround sound creatively when composing.

 

Written by: Erin McKimm