
//abstractnoise

making sound considerate

Category: Sound Engineering

Blast from the past – Yamaha 01V


I was posted to a live audio gig last week where I got to use an elderly Yamaha 01V for the first time in about 8 years. Initially I had misgivings, given how old the board is (launched in 1998 according to Wikipedia – tallies with memory) and how long it had been since I’d faced one in anger. As setup commenced I was pleasantly surprised at how easy it was to mix on this machine. Sure, I only had five live inputs to deal with, so it wasn’t exactly a full band mix; but I did need to pinch out two monitor mixes from house position.

I found myself quickly remembering where everything was, along with some cute foibles – getting into the EQ takes more patience than on more modern desks, as quick adjustments on the rotary encoders often got misinterpreted, with intended cuts coming out as boosts, or vice-versa.

So how did it sound?  Well, once I reminded myself of how noisy the pre-amps can be and sorted my gain structure out a little, my impression was that it sounded a little harsher than more modern digital mixers.  I didn’t seem to need nearly as much gain on the pre-amps as I’m used to with other systems, and I had to be careful too, because there doesn’t seem to be as much headroom in the main mix bus as I’d been used to. But the compression and EQ all did what they were supposed to, and even the on-board reverb sounded surprisingly good with a few tweaks here and there to warm it up a bit.

In the indoor confines of a darkened sound booth, the monochrome LCD was a joy to use too – I could see it at pretty much any normal angle and didn’t struggle to read it at all.

Flipping between layers to set main, monitor and effects mixes was a pleasure, as was the continual access to the “Return 1” ON/OFF button so I could switch off the reverb when the artist talked rather than sang.  All the moving faders jumped to their correct positions without fuss – a testament to build quality and good hygiene by previous users!

All in all I was surprised at how little I actually focussed on the machine and just got on with the job.  The raw simplicity of the device, compared with the workflow needed on, say, a Behringer X32 Compact to get the same work done, was striking – I think other manufacturers could learn a lot from those early days of keeping key things available at a single button-push.

It’s amazing how far we’ve come since the launch of the 01V in terms of raw sound quality and user-interfacing, but I was stunned at how genuinely useful this board was in a real-world gigging situation. I won’t let the thought of using one scare me again, so long as it’s been reasonably well looked after.

Updates on audio-related things in progress…

  1. The long-running archiving project has hit a significant milestone – I’ve now digitised as much of the physical media as I can. Limits are now set by the condition of the incoming media, and by whether it’s really worth digitising four or more decaying copies of the same thing when we already have better copies elsewhere.  The only reel (!) exception is a set of reels for a particular project whose magnetic layer fell off as soon as the reels were unpacked.  That’s not worth spending money to preserve further, given the age, obscurity and limited potential value of the content – not least the cost of the hardware needed to retrieve it once the media itself has been restored.  Shame, but commercial sense has to come into it somewhere along the line.
  2. Now have a DIY automated process for holding an in-DAW mix (working in REAPER) at -23 LUFS or thereabouts, which greatly simplifies things for radio and podcast production.  Even if I only use it for monitoring or other less critical work, it’s an amazing time-saving tool. (A rough sketch of the idea appears after this list.)
    • Obviously this means it can be adapted for -16 LUFS or any other value as needed.
    • It’s *really* not pretty, for a number of reasons – most bothersome to me is that it presently works in stepped values, at somewhere around 10 updates per second, rather than applying gain or attenuation smoothly.
    • For high-end stuff, I’m still happy to do final levelling by hand as it does tend to sound better, but that does add time to any given project.
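For the curious, here’s a minimal offline sketch of the stepped-gain idea described above – emphatically not the actual tool, which lives inside REAPER. It uses plain RMS as a crude stand-in for true K-weighted LUFS (a real implementation would follow ITU-R BS.1770, e.g. via pyloudnorm), and the file names are hypothetical:

```python
import numpy as np
import soundfile as sf

TARGET_DB = -23.0       # target loudness, "-23 LUFS or thereabouts"
UPDATES_PER_SEC = 10    # stepped gain updates, as described above
WINDOW_S = 3.0          # trailing measurement window in seconds

def rms_db(x):
    """Crude loudness proxy; real LUFS needs K-weighting and gating."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

data, rate = sf.read("mix.wav")     # hypothetical input file
step = rate // UPDATES_PER_SEC
win = int(rate * WINDOW_S)
out = data.copy()

for start in range(0, len(data), step):
    # Measure the trailing window, then apply one stepped gain value to
    # this whole block -- hence the audible "stepping" mentioned above.
    seg = data[max(0, start - win):start + step]
    gain_db = np.clip(TARGET_DB - rms_db(seg), -20.0, 20.0)
    out[start:start + step] = out[start:start + step] * 10.0 ** (gain_db / 20.0)

sf.write("mix_levelled.wav", out, rate)
```

A smoother version would interpolate the gain between updates rather than jumping, which is exactly the bit my real tool doesn’t do yet.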

Lossless audio compression for project work and archive – what do you use and why?

I’ve been using lossless audio compression as a matter of course in my audio workflow for a while, and not just for end-distribution.  Services like Bandcamp and Soundcloud support uploading in FLAC format, and many clients happily accept ALAC (Apple Lossless) or FLAC in place of traditional uncompressed WAV or AIFF formats.

As for long-term storage and archiving, I’ve switched to FLAC for my projects, and have back-converted most of my catalogue.  But why FLAC?

Pros:

  • The reduction in network bandwidth and storage requirements is always nice.
  • It brings the possibility of checking the project audio files for errors via the FLAC checksum (applied at the time of creation) against how the file decompresses today (see the verification sketch after this list).
    • This can show up problems with “bit-rot” on disk storage that would otherwise remain hidden.  It’s a useful alternative to deploying ZFS-based file systems and keeps storage and network kit at an affordable “prosumer” level while such technologies mature a little longer, and hopefully come down in cost too.
    • If I find a problem file? Fine – restore from a good backup.  But that does rely of course on having good backups, and the corruption not being carried into those!
  • It’s an established format that many, many software applications can read, across a wide variety of operating systems – and given the number of websites springing up offering “audiophile” digital transfers of consumer audio material based upon the format, I have good confidence that software to read and write to the format will continue to be developed for some years ahead.  And if not, it’s still easy enough to batch-convert files to whatever format replaces it.
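As an illustration of that integrity check, here’s a minimal sketch that walks an archive tree and has the reference `flac` binary decode-test every file. `flac -t` re-decodes the stream and compares it against the MD5 signature stored at encode time, so silent bit-rot shows up as a failed test. The archive path is hypothetical:

```python
import subprocess
from pathlib import Path

ARCHIVE = Path("/mnt/archive")  # hypothetical archive root

bad = []
for f in sorted(ARCHIVE.rglob("*.flac")):
    # "flac -t" decode-tests the file against its stored MD5 signature;
    # "-s" keeps the output quiet unless something goes wrong.
    result = subprocess.run(["flac", "-t", "-s", str(f)],
                            capture_output=True, text=True)
    if result.returncode != 0:
        bad.append(f)
        print(f"FAILED: {f}\n{result.stderr.strip()}")

print(f"{len(bad)} problem file(s) found -- restore these from backup.")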

Cons (none unique to FLAC, I notice):

  • REAPER is my DAW of choice, and exporting large projects out to FLAC is time-consuming, as it still doesn’t multithread FLAC exports for projects.
  • While REAPER *does* support (in version 5.111, at last check) recording directly to FLAC and other formats, recording to anything other than WAV or AIFF on my systems has consistently thrown up audible glitches in the recorded material. With some sample-accurate editing these can be fixed, but that’s still not acceptable.
    • What I do, therefore, is record to WAV/AIFF first, then save the project to FLAC before any significant mixdown happens.
  • Not every DAW supports FLAC natively, or at all.  But then, for me this is a moot point – not every DAW can read project files from every other DAW, so this is just a part of that wider picture.  You pick your tool for the job and deal with the consequences.
  • Conversion takes time, especially offline for large projects. (A minimal batch-encode sketch follows this list.)
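Here’s a rough sketch of that record-to-WAV-then-convert step: batch-encode a project folder to FLAC with verification switched on, so the encoder decode-checks its own output against the WAV before you trust it. The project path is hypothetical:

```python
import subprocess
from pathlib import Path

PROJECT = Path("~/projects/current").expanduser()  # hypothetical project dir

for wav in sorted(PROJECT.rglob("*.wav")):
    # --best favours density over speed; --verify makes the encoder
    # decode-check its own output; -f overwrites any stale .flac.
    subprocess.run(["flac", "--best", "--verify", "-f", str(wav)], check=True)
```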

So – that’s a quick brain-dump of how I’m working with this stuff and why. I’ve missed steps, and I’m sure others will be quick to pick holes in it.

I suppose my question to anyone else reading this with sufficient interest is… What is everyone else doing? What file format would you pick, and why?

 

Preview: “O Holy Night” – performed by ASO and Constantine Andronikou

So here is a quick “faders-up” preview of “O Holy Night”, as performed at All Souls Church on 12th December at Christmas Praise – what a stunning event that was!

Credits as follows:

  • Composer: Adolphe Adam
  • Singer: Constantine Andronikou
  • Orchestra: All Souls Orchestra
  • Conductor: Dr Noel Tredinnick

Notes:

  1. Because there’s such a huge dynamic swing between the loudest and quietest parts of the piece (some 20-40dB, depending on which metering standard one uses), I’ve approximated what I *would* be doing with faders by using compressors.  The result isn’t yet “pretty” but it is functional – at least until it’s decided whether further work will be done!
  2. Though I helped provide the technical links between live and recording “worlds” for this event, I’m not actually involved in producing the radio mix. 🙂

 

On commercial remasters possibly issued without Dolby A decoding: mistakes, art, or…?

Some background…

I’ve commented on this blog before about the possibly questionable quality of some digital remasters released of late. Common subjective complaints in online fan and hifi forums – complaints I’ve made myself, both here and in person among friends – are that some particular remasters might be too loud, too bright, and/or otherwise “overdone” given what we know or perceive of the original source material.  There might well be various artistic or marketing-related reasons for this, so I’m not here to argue for or against these issues.

Further complicating the issue for me, both as a fan and a professional, is that many of these stand-out features are seen overwhelmingly as positive things by many fans, whether they are technically correct or not.  It would seem that a perceived increase in detail and volume outweighs any issues of listening fatigue or known deviation from the presentation of the original master.

I’ve embarked professionally on remastering and restoration processes and have learned, from the coal-face so to speak, much of the reality of what’s involved. To onlookers it appears to be a black art – and believe me, from the inside, it can feel a lot like it too!  Sometimes I’m asked by a client or a reference-listener how or why I made a particular decision; and in some cases, especially those without technically verifiable information or logged conversations to go on, I have to go out on a limb and essentially say something to the effect of “well, because it ‘felt right'”, or “because it brings out the guitar *here*, which really flatters the piece”, or some other abstract quality.  At this point I just have to hope the client agrees.  If they don’t, it’s no big disaster – I am rarely emotionally tied to the decision. I just need to pick up on the feedback, do what I can with it, and move on.  Looking at the process, I guess that’s partly why the word “abstract” appears in my trading name! 🙂

“Okay, so you know a bit about this from both sides, on with the subject already!”

There are two particular commercial albums in my digital collection, both hugely successful upon their original release, whose most recent remasters have bothered me. It’s not fair to name and shame them, especially not while I await confirmation from engineers/labels that my hunch is correct.  Anyways – I’m bothered not because they’re “bad” per se, but because I bought them, took them home, and from the moment I first heard them, something about them stood out as being not quite “right” from a technical perspective. One of them (Album A) was released in 2001, and the other (Album B) earlier this year, in 2015.

What these two albums have in common is that their tonal and dynamic balance is *significantly* different to the original releases, beyond the usual remastering techniques involved with repair, restoration and sweetening of EQ and dynamics carried out to sit alongside contemporary new releases.  The giveaway is that the top-end is both much brighter than usual, and much more compressed – and the result is unnecessarily fatiguing.

Where the two albums differ, then:

  • Album A has not suffered from the “loudness wars”.
    • Its overall dynamics are relatively untouched compared with the original.
    • It appears, looking at the waveform in a DAW, that the album material has been normalised to 0dBFS (so it fills the maximum dynamic range CD has to offer), but it rarely hits such high levels.
  • Album B however, despite never having been a “loud” album on original release, has suffered from the “loudness wars”.
    • Looking at its waveform, it’s clear that it has been maximised; the material has been compressed and limited such that the original dynamics have been squashed, with gain applied so that almost the entire album waveform hits the 0dBFS point. (A rough numerical check for both cases follows this list.)
    • As a result, the album has lost its previous tidal ebb and flow, and while arguably some details are indeed much more audible than before, it no longer has the organic subtlety it once did.  Important instrumental details get masked and actually reduced in level as louder ones come into the foreground, because with that much compression going on, there’s nowhere else for them to go except lower in level.
    • Sure, it’ll play better on an iPod while travelling on the London Underground, or in the car, so it might open up a new market that way – but the rest of us, perhaps looking forward to a better-quality transfer to listen to at home or anywhere else, don’t get that choice.
    • I’ve heard the 2015 vinyl re-release of the latter album, and it seems to not have the same issues – or if it does, nowhere near to the same extremity. There are likely good technical and human reasons for that, but that’s an aside for another post.
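For those who’d rather not eyeball waveforms in a DAW, here’s a minimal numerical version of the same check, comparing peak level against RMS level (the crest factor). A master that has merely been normalised to 0dBFS, like Album A, keeps a high crest factor; a maximised one, like Album B, shows a much lower figure. File names are hypothetical:

```python
import numpy as np
import soundfile as sf

def master_stats(path):
    """Return (peak dBFS, RMS dBFS, crest factor in dB) for a file."""
    data, _ = sf.read(path)
    peak_db = 20.0 * np.log10(np.max(np.abs(data)) + 1e-12)
    rms_db = 20.0 * np.log10(np.sqrt(np.mean(np.square(data))) + 1e-12)
    return peak_db, rms_db, peak_db - rms_db

for name in ("album_a.flac", "album_b.flac"):  # hypothetical rips
    peak, rms, crest = master_stats(name)
    print(f"{name}: peak {peak:+.1f} dBFS, RMS {rms:+.1f} dBFS, "
          f"crest {crest:.1f} dB")
```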

Experiment 1:  Treating the common issues

Last week I had some downtime, and a hunch – a dangerous combination.

Neither album was famed in its day for brightness, except for the singer’s sibilants on Album A, which caused vinyl cutting and playback some serious headaches if alignment wasn’t quite right. Album B does carry a lot of detail in the top end, but being mostly synthetic, and certainly not a modern-sounding album, its spectral content is shifted much more toward the low-mids than anything we’d be producing post-1990.  So there will be some sheen and sparkle, but it should never be in your face, and never compressed.

Such clues told me two things: first, that Dolby A was likely not decoded from the master tape on transfer; second, that in the case of Album B, further dynamic compression had taken place on top of the un-decoded material.

So – out came a Dolby A decoder, and through it I fed a signal from each album in turn, bouncing the decoded signal back into my DAW for storage and further analysis.  Now please understand, it’s hard (if not impossible) to get a correct level-alignment without the test tones from the master tape, but those of us in the know can make some basic assumptions based on known recording practices of the time – and once we know what to listen for, we can also judge by the audible results, especially if we have a known-good transfer from the original tape to work with.

All that said, I’m not claiming here that even with all this processing and educated guesswork, I’m able to get back to the actual sound of the original tape! But I am able to get closer to what it ought to sound like…

The result? Instantly, for both albums, the top-end was back under control – and strangely both albums were suddenly sounding much more like the previous versions I’ve been hearing, be it from vinyl, CD or other sources. Album B’s synth percussion had space between the hits, Album A’s live drums had proper dynamics and “room” space. In both albums, stereo positioning was actually much more distinct. Reverb tails were more natural, easier to place, easier to separate reverb from the “dry” source, especially for vocals. Detail and timbre in all instruments was actually easier to pick out from within the mix.  To top it all off – the albums each sounded much more like their artists’ (and their producers’) work. Both albums were far less fatiguing to listen to, while still delivering their inherent detail; and perhaps some sonic gains over previous issues.

Experiment 2:  Fixing Album B’s over-compression

First things first – we can’t ever fully reverse what has been done to a damaged audio signal without some trace being left behind.  Something will be wrong, whether “audible”, noticeable or not.  But, again, an educated guess at the practices likely used, and an ear on the output helped me get somewhere closer to the original dynamics.  But how?

Well, it was quite simple.  One track from the album has a very insistent hi-hat throughout, which comes from a synth.  If we assume that synths of the time were not MIDI-controlled, and were likely manually mixed, we can assume that it should sit at an essentially constant level throughout the piece, barring fade-in/fade-out moves.  And listening to an “original”, that’s pretty much what it does.  But it does so neither in the clean nor in my “decoded” version of the later album: it bobs up and down in level whenever the other pads and swept instruments come and go.  It was more noticeable on my “decoded” version, but with the frequency and micro-dynamic blends being so much more pleasant, I knew that I’d made progress, and that the way forward was to fix the compression if I could.

Out came a simple expander plug-in.  Inserting this before the Dolby decoder, and tweaking various settings until I was happy that the hi-hat sat at a near-constant level throughout my chosen reference piece, restored the dynamics to something like the original.  In the end we get something like 6-9dB of gain reduction in the quieter passages, and the waveform looks far less squashed.  And sounds it, too.
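For the curious, here’s a minimal sketch of the downward-expander principle at work – emphatically not the plug-in I used, just an illustration, with threshold, ratio and time constants as placeholders that would be set by ear in practice:

```python
import numpy as np
import soundfile as sf

THRESHOLD_DB = -20.0                # placeholder: set by ear vs the hi-hat
RATIO = 2.0                         # 2:1 downward expansion
ATTACK_S, RELEASE_S = 0.005, 0.100  # envelope time constants in seconds

data, rate = sf.read("album_b_decoded.wav")  # hypothetical input
mono = data if data.ndim == 1 else data.mean(axis=1)

# One-pole envelope follower with separate attack and release.
a_att = np.exp(-1.0 / (ATTACK_S * rate))
a_rel = np.exp(-1.0 / (RELEASE_S * rate))
env = np.empty(len(mono))
level = 1e-9
for i, s in enumerate(np.abs(mono)):  # plain loop, kept simple for clarity
    coeff = a_att if s > level else a_rel
    level = coeff * level + (1.0 - coeff) * s
    env[i] = level

# Downward expansion: below the threshold, attenuate by (RATIO - 1) times
# the shortfall, pushing quiet passages back down and widening dynamics.
env_db = 20.0 * np.log10(env + 1e-12)
gain_db = np.where(env_db < THRESHOLD_DB,
                   (env_db - THRESHOLD_DB) * (RATIO - 1.0), 0.0)
gain = 10.0 ** (gain_db / 20.0)

out = data * (gain[:, None] if data.ndim > 1 else gain)
sf.write("album_b_expanded.wav", out, rate)
```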

The trick then was to listen to all four albums – A, B, A restored, B restored – at similar overall loudness levels, and see which works better.  So far, in this house anyways, we’re happier with the restored versions, even among those unfamiliar with the artistic content.

Epilogue – Is this a mistake? And if so, how could it have happened?

When dealing with remasters, especially of older albums, we typically go back to playing analogue tape. There are *many* things that can go wrong here at a technical level. We’re worrying about whether the tape machine is aligned to the tape itself, whether both tape and machine are clean, whether the correct noise-reduction technology is being used, and whether we’re actually getting all the information we can off that tape.

Then there is the human element. I’ve lost count of the number of times, even in my small sample, that I’ve encountered a DAT or 1/2” reel labelled as being pre-EQ’d or Dolby-encoded with some system or another when in fact it wasn’t. Then there are other, similar labelling and human errors I’ve encountered: perhaps it wasn’t labelled as being Dolby-encoded and it really was. Or perhaps the “safety copy” was actually the clean master, and the “master copy” was actually the cruddy “safety” with a 10dB higher noise-floor, recorded at half-speed on lower-grade tape on an inferior machine we know nothing about, with the channels randomly swapped due to a patching error in the studio.

Technology, and technicians, like questions with defined, logical answers – “0 or 1”, “yes or no”, “is this right or is this wrong?”. Unfortunately for us, when dealing with music – as with any other art – and so with the musicians, producers and other artists involved in the creation and production process, we soon find that the lines between “right” and “wrong” very quickly get blurred.

As an engineer, I’m also all too aware of the dichotomy between my *paying* client (usually the artist), and my *unpaying* client (the listener).  Most of the time these are in agreement with what is needed for a project, but sometimes they’re not. The usual issue is the one of being asked for too little dynamic range – “can you turn it up a bit so it sounds as ‘loud’ as everything else?” and the resulting sound is fatiguing even to me as the engineer to work with, let alone the poor saps who’ll be invited to buy it. Sometimes I know that some sounds simply won’t process well to MP3/AAC (that’s less of an issue these days, but still happens).

Anyways – all that to say: if these albums both suffered the same mistake, if indeed it was one, then even without the myriad artistic issues creeping in, I can see how an unlabelled, undecoded Dolby A tape could slip through the net, blow the ears off an artist or engineer used to the previously released versions, and get people saying “YEAH, LET’S DO THAT ONE!” 🙂

CF

abstractnoise reel-to-reel demos

This week I’ve been playing with our recently acquired Revox B77 1/4″ recorder. It’s a stereo half-track model, and I use it at 7.5ips. Currently I’m not sure what actual tape I’m recording to, as it’s a 20ish-minute offcut from a reel that was found to be blank at the end of a transcription job. The two tracks presented here represent two very different production methods now open to me.

“Changes Afoot” was sent track-by-track to tape, from DAW, and back again, to produce a digitally mixed master. A couple of those tracks were recorded particularly “hot” to bring out more tape character. No noise reduction was used at all. Signal-to-noise on any single tape recording in this setup was found to be around 60dB.

“Innocence 2010” started as a digital stereo pre-master that I was never fully happy with, which was sent to tape via a skunkworks noise reduction system I’m working on behind the scenes; the system is not yet complete, but has enough processing built to give a useful 10dB gain in signal-to-noise ratio without any significant audible artefacts, as borne out by the 69-70dB signal-to-noise ratio found in this setup even after peak-limiting.

I suspect the signal-to-noise ratio is limited by noisy pre-amps in my DAW setup; I’ll need to swap to a different audio interface to confirm. That’s something to play with another day. Overall I’m VERY impressed with the sound quality this kit is able to deliver, and the range of analogue “colours” it can provide. I’m really looking forward to finishing the skunkworks noise-reduction project; I have my eyes set on somewhere near 24dB of noise reduction once it’s fully up and running. But it’s good to prove that I can both encode and decode on-the-fly!
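As a note on method, here’s roughly how figures like those above can be measured – a sketch assuming a test transfer containing a reference-level tone followed by unmodulated tape, with the file name and region timings as placeholders:

```python
import numpy as np
import soundfile as sf

def region_rms_db(data, rate, start_s, end_s):
    """RMS level in dB of a time region of the transfer."""
    seg = data[int(start_s * rate):int(end_s * rate)]
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(seg))) + 1e-12)

data, rate = sf.read("b77_test_pass.wav")  # hypothetical test transfer
tone_db = region_rms_db(data, rate, 1.0, 5.0)    # reference tone region
noise_db = region_rms_db(data, rate, 10.0, 14.0)  # tape/preamp noise only
print(f"SNR relative to reference tone: {tone_db - noise_db:.1f} dB")
```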

Watch this space!


The Pilgrims Pod Radio Hour

This radio variety show, hosted by Will Mackerras, is performed live with guest actors and musicians in front of a real live audience in London’s bustling Fitzrovia.  The show is also streamed live online via Mixlr, and recorded for edited podcasts.

So, if you want to go to a live recording at its London home, then please do head over to The Pilgrims Pod Radio Hour website for more details as we get them.

Why am I plugging it here?

I’m involved with the show, primarily as broadcast technician.  This means I mix the show both for the live and online audience, while also recording the show in a way that enables us to produce a more polished product for podcast and possible future re-broadcast.

Here on my blog, you’ll see me post copies of the podcast mixes, with some of my notes on what I’ve learned during that show.  As the show develops its own path, I’m usually trying new things to make the technology work better, with less overall effort than the last show we tried.

Pilgrims Pod Radio Hour – Episode 3

UPDATE (26/2/2014):

This is the edited version, to keep the show length under an hour, and to tidy up some slower-moving passages.

ORIGINAL POST:

Another episode was recorded on Friday 7th February.  A slightly different feel to this one – with more spoken content. Featuring Liz Jadav and Phil Gallagher.

Technical notes

This time, the live-stream was sourced from the software mix that created this edited recording.  I’ve fixed a mistake where I ran out of hands to sort the live-stream mix during the intro, and we re-recorded a song with Paul after he’d choked on some water just before his song!  Aside from those issues, the stream levels were much more easily managed this way, and mixing the recording live with the usual processing in-place also made this edit much quicker to produce!

Also new to us was a Superlux S502 ORTF mic (select “English” from the top-right of the linked page), used for room ambience and audience.  Compared with the AKG 451s we were using before, rigging was much simpler, and the resulting sound slightly more consistent.  I’m really pleased with this mic in this and some other applications; subject for another post, I’m sure!
