Thought I’d dig out this older post from the depths of another blog that has yet to go public. I had the pleasure of setting up and mixing for Christmas Praise last December at All Souls Langham Place. It was a great concert, featuring the All Souls Orchestra with guest stars Michelle Todd and Graham Kendrick.
I freely admit that orchestral music isn’t my strong point, but I enjoy the challenge of understanding and working with something a little different to my normal tastes. This event proved challenging because a real orchestra in this space produces a lot of volume on its own, without amplification of any sort. This meant that Michelle’s and Graham’s vocals needed to be much louder than this PA system would normally be asked to produce, and the orchestral backdrop made the quality of any sound reinforcement much more critical if we were to blend reinforced vocals with an acoustic orchestra.
Main challenge – Vocal levels
In the event, the quality of vocal reproduction was far less of a challenge than the sheer quantity! It didn’t help that the solo singers were some way in front of the PA speakers reinforcing them, making gain-before-feedback more of a challenge in this room than usual, especially given the very wide dispersion of the installed Bose 802s! I was very concerned about tipping the system over into feedback (which would have been a huge distraction) as well as missing cues, so despite a good rehearsal/soundcheck time, the first half was spent fighting the system to get cleaner and louder vocals. I was struck that equal numbers of people approached me during the interval saying either “great job on the sound” or “sounds great where we are, but can we have more vocals please?” – that 50/50 split is usually difficult enough to achieve in this building with quieter setups!
By the time the second half started, I felt more comfortable with the system as a whole, and so set about fixing the vocals I’d been struggling with for so long. The key was making the vocals loud enough to be heard above the orchestra without letting them feed back or become too loud for comfort and comprehension.
A use for compression
The first tool I brought out of the box was compression for the two vocal mics, which I set to the “vocal” preset in the iLive board: a ratio of 2:1, soft knee, with the threshold set so that the compressor was just beginning to act (around 1dB of reduction) at each singer’s median volume. With this, I was able to squash the loudest passages of each singer by about 6dB without audible pumping or feedback issues. That reduction figure was important – a 6dB cut is essentially a halving of the level. If I could reduce the loudest passages by that much, I could use the “make-up gain” setting to boost the quieter passages by the same amount without having the loudest passages get too loud. Extremely loud passages get squashed rather more than 6dB, keeping things comfortable on particularly strong notes or when a singer got right in close to the microphone. Boosting the quieter passages while controlling the louder ones meant that the singers were *always* above the orchestra with no need for me to ride the faders – unless the orchestra also got too loud around them, in which case there’s nothing wrong with reacting by pushing the vocal fader(s) up to restore the balance. The make-up gain was turned up gradually, with care to listen out for feedback creeping into the quieter passages.
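For the curious, the arithmetic behind that 2:1 soft-knee setting can be sketched as a static gain curve. Here’s a minimal illustration in Python – the threshold and knee-width figures are example values of my own choosing, not the console’s actual preset:

```python
def compressor_gain_db(level_db, threshold_db, ratio=2.0, knee_db=6.0):
    """Static gain curve of a feed-forward compressor.

    Returns the gain change in dB applied to a signal at level_db
    (negative values mean gain reduction)."""
    over = level_db - threshold_db
    if 2 * over < -knee_db:
        out = level_db                      # below the knee: unity gain
    elif 2 * over <= knee_db:
        # inside the soft knee: compression fades in gradually
        out = level_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db)
    else:
        out = threshold_db + over / ratio   # above the knee: full 2:1 ratio
    return out - level_db

# A passage 12dB over threshold is squashed by 6dB at 2:1 ...
print(compressor_gain_db(-6.0, -18.0))   # -6.0
# ... so +6dB of make-up gain lifts quiet passages while the peaks
# only return to their original level.
```

The point of the example is the symmetry: the make-up gain can be exactly as large as the reduction you achieve on the loudest material.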
A use for gating
Given that Graham’s mic was much louder in the singers’ monitor wedge than Michelle’s, his mic was much more susceptible to feedback. The 6dB of make-up gain put his mic on the very edge of feedback and made it frankly a bit of a liability. I mitigated this by using a gate to take 3dB of gain off his mic whenever he wasn’t singing – 3dB being barely noticeable if I’d set the threshold too high for his quietest passages, but enough to take the mic out of its perpetual near-feedback zone when he backed away from it.
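That trick amounts to using the gate as a very shallow downward expander. A minimal sketch of the idea – the threshold value here is purely illustrative:

```python
def shallow_gate_gain_db(level_db, threshold_db=-45.0, range_db=3.0):
    """Gate with a shallow 'range' setting: instead of muting,
    it drops the channel by range_db when the level falls below
    the threshold, and passes at unity the rest of the time."""
    return -range_db if level_db < threshold_db else 0.0

# Singing into the mic: unity gain.  Backed away: a 3dB safety margin.
print(shallow_gate_gain_db(-20.0))  # 0.0
print(shallow_gate_gain_db(-60.0))  # -3.0
```

The design choice is in the range: a full-depth gate opening and closing mid-performance would be obvious, while a 3dB step is almost inaudible but still pulls the loop gain back from the edge of feedback.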
Other things I did…
Premier Radio came in to record a broadcast mix of the event, for use on Boxing Day I think. I assigned three mono auxes to send an ambient mic feed, a mono mix of the conductor’s mics (more on those shortly) and a mix of everything that was sent directly to the main sound system.
To make sure there was a backup, I decided to try out the M-MADI card in the iLive system and use it to send direct outputs of each incoming mic channel to a laptop. Allen & Heath very kindly arranged a loan M-MADI card when no other supplier was able to provide one. With this arranged, I hired an RME MADIface for use with the one laptop I could find in the building possessing an ExpressCard/34 slot. As I was testing the robustness of the MADI interface, as well as that of the laptop and software receiving the audio stream, I enabled all 64 input channels the iLive can theoretically cope with and set up the recording software to stream all 64 channels to disk.
All was well on the night, it seems – having listened to snippets of the 90GB of audio data we created over the several hours the event lasted, everything seems to be locked together and I’ve yet to hear a dropout. It’s amazing how little noise there is in the recordings compared with similar work I’ve previously done with analogue gear recording to an Alesis ADAT-HD24.
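As a back-of-the-envelope check, that 90GB figure is consistent with a full 64-channel stream – assuming here a 48kHz sample rate and 24-bit samples, which is my guess at the session format rather than a confirmed detail:

```python
def multitrack_rate_gb_per_hour(channels=64, sample_rate=48_000, bytes_per_sample=3):
    """Raw data rate of an uncompressed multitrack recording,
    in gigabytes (10^9 bytes) per hour."""
    bytes_per_second = channels * sample_rate * bytes_per_sample
    return bytes_per_second * 3600 / 1e9

rate = multitrack_rate_gb_per_hour()
print(round(rate, 1))       # ~33.2 GB per hour of 64-track audio
print(round(90 / rate, 1))  # 90GB is roughly 2.7 hours' worth
```

A couple of hours and change of concert plus soundcheck fits that maths comfortably.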
The orchestra certainly packs a punch in this building when fully unleashed – but the strings and woodwind often tend to get lost, drowned out by singers and percussion. To help mitigate this, I put a single condenser mic on a stand in front of the woodwind section, and a crossed pair of condensers in front of the conductor’s stand to capture a stereo image of the whole orchestra. That pair was physically closer to the strings than to any other instruments, giving me a slightly strings-heavy mix of the orchestra. As well as being used for the recording, I found these mics useful in the live mix, boosting the string sections a little so they remained mostly audible even when everything else got loud. I was even able to set them up on their own audio group, so that I could boost their level in the matrix feeds for the speakers covering the sides of the building without having them too loud in the main speakers.
Another useful tool was the channel input delays. Any mic behind the speakers was delayed according to its distance from the central point between the main speakers, using the 1ms-per-foot rule of thumb for the speed of sound in air. This meant that the drum and piano mics in particular effectively disappeared from the mix as identifiable sources, so I could mix in a little more of these instruments for clarity and impact without the extra detail sounding like it was being provided by the sound system. These delays were also captured on the hard disc, so the live recording immediately sounds more natural when played on headphones or a decent hi-fi system.
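That rule of thumb converts neatly into sample delays at the console’s sample rate. A quick sketch, assuming a 48kHz system – the 20ft distance is purely illustrative:

```python
def input_delay_samples(distance_ft, sample_rate=48_000):
    """Delay needed to time-align a mic placed distance_ft behind
    the main speakers, using the 1ms-per-foot rule of thumb
    (sound travels roughly one foot per millisecond in air)."""
    delay_ms = distance_ft * 1.0          # 1ms per foot
    return round(delay_ms * sample_rate / 1000)

# e.g. a drum mic 20ft behind the plane of the main speakers:
print(input_delay_samples(20))  # 960 samples, i.e. 20ms at 48kHz
```

The delayed close mics then arrive at the listener in step with the acoustic sound from the stage, which is why they stop being identifiable as separate sources.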
This was not the best-sounding mix I’ve ever created for an event, but the tools built into our iLive console certainly helped bring things under control with a whole lot less stress and anxiety than I’ve experienced with similar events in this church using the previous analogue console. The ability to stream every input to its own channel in a software recording system certainly made reviewing the work after the concert a whole lot more meaningful, so I’ve now got a list of things I’d do differently next time I run a similar event. More on that note in a future post, I think!