Professional audio engineer and hobbyist creative technician, based in Brooklyn Center, MN. This blog hosts some of my personal and leisure interests as well as some examples of my professional work. All works and opinions represented on this site are my own, unless otherwise credited/quoted.
I was booked onto a live audio gig last week where I got to use an elderly Yamaha 01V for the first time in about 8 years. Initially I had misgivings given how old the board is (launched in 1998 according to Wikipedia – which tallies with my memory) and how long it had been since I’d faced one in anger. As setup commenced I was pleasantly surprised at how easy it was to mix on this machine. Sure, I only had five live inputs to deal with, so it wasn’t exactly a full band mix; but I did need to pinch out two monitor mixes from house position.
I found myself quickly remembering where everything was, along with some cute foibles – getting into the EQ takes more patience than on more modern desks, as quick adjustments on the rotary encoders often get misinterpreted, with intended cuts coming out as boosts, or vice-versa.
So how did it sound? Well, once I reminded myself of how the pre-amps can be rather noisy and sorted my gain structure out a little, my impression was that it sounded a little more harsh than more modern digital mixers. I didn’t seem to need nearly as much gain on the pre-amps as I’m used to with other systems, and I had to be careful too because there doesn’t seem to be as much headroom in the main mix bus as I’d been used to. But the compression and EQ all did what they were supposed to, and even the on-board reverb sounded surprisingly good with a few tweaks here and there to warm it up a bit.
In the indoor confines of a darkened sound booth, the monochrome LCD was a joy to use too – I could see it at pretty much any normal angle and didn’t struggle to read it at all.
Flipping between layers to set main, monitor and effects mixes was a pleasure, as was the continual access to the “Return 1” ON/OFF button so I could switch off the reverb when the artist talked rather than sang. All the moving faders jumped to their correct positions without fuss – a testament to build quality and good hygiene by previous users!
All in all I was surprised at how little I actually focussed on the machine and just got on with the job. The raw simplicity of the device compared with the workflow of, say, a Behringer X32 Compact to get the same work done was striking – I think other manufacturers could learn a lot from those early days of keeping key things available at a single button-push.
It’s amazing how far we’ve come since the launch of the 01V in terms of raw sound quality and user-interfacing, but I was stunned at how genuinely useful this board was in a real-world gigging situation. I won’t let the thought of using one scare me again, so long as it’s been reasonably well looked after.
…and I’m rather excited.
I’ve been a fan of the Volumio Project for rather a while now, since discovering it as a good platform for my Raspberry Pi audio player a year or more ago. Several self-built MPD-based setups have come and gone since the Raspberry Pi arrived, but Volumio has been the mainstay for reliable playback with control from numerous devices. The main draw for me has been the combination of its web interface, the fact the hard work has been done for me in terms of getting all the software components working together, and the fact that the whole package does seem to sound good.
On reflection I’m not sure that the various “audio optimizations” at the kernel or any other level really make an audible difference, but I do know that the whole package does seem to work more reliably on the limited resources of Raspberry Pi hardware than anything I’ve been able to cook up myself, at least without significant effort expended.
So why does an x86 port excite me so much? Two reasons:
- More processing power opens the platform up to interesting things like DSP, and to dual-use such as streaming to remote machines without falling over. Presently I’d have multiple Raspberry Pis set up with dedicated tasks. That’s been educational, but arguably a lot of hassle to set up and maintain. A single machine would make some of this stuff easier.
- Opening up the platform to more common (and more powerful) hardware vastly extends the range of audio and storage hardware that can usefully be used with it, and perhaps extends Volumio’s exposure in the wider marketplace.
The Raspberry Pi is an amazing platform for what it is – and audio systems based upon its limited bus bandwidth are capable of sounding incredible. But not everyone has a NAS to throw their music onto, which makes the Pi’s shared USB2 bus a pain to deal with when using it for networking, local storage AND the audio device all at the same time. And even those who do use it with a NAS are hampered by the 100Mbit Ethernet connection. Sure, streaming even “HD” audio files won’t tax it, but storing, backing up and indexing large audio collections will. And THIS is where even an old Netbook could best it.
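To put rough numbers on that claim, here’s a quick back-of-envelope sketch in Python. The figures are nominal and ignore protocol overhead, and the 500GB library size and real-world throughput numbers are purely illustrative assumptions on my part:

```python
# Back-of-envelope: streaming vs bulk-transfer loads on the network.

def stream_rate_mbit(sample_rate_hz, bit_depth, channels):
    """Raw PCM bit-rate in megabits per second (no container/protocol overhead)."""
    return sample_rate_hz * bit_depth * channels / 1_000_000

# Even 192kHz/24-bit stereo is only ~9.2 Mbit/s: a fraction of a
# 100Mbit link, so playback streaming really won't tax it.
hd_stream = stream_rate_mbit(192_000, 24, 2)

# Bulk jobs are another story. Assuming ~11 MB/s real-world throughput
# on 100Mbit Ethernet vs ~110 MB/s on gigabit (illustrative figures),
# copying or re-indexing a 500GB library:
hours_at_100mbit = 500_000 / 11 / 3600   # roughly half a day
hours_at_gigabit = 500_000 / 110 / 3600  # closer to an hour

print(f"192kHz/24-bit stereo stream: {hd_stream:.1f} Mbit/s")
print(f"500GB copy: {hours_at_100mbit:.1f}h vs {hours_at_gigabit:.1f}h")
```

Playback barely registers either way; it’s the library housekeeping that makes the faster link worthwhile.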
At some point where time allows, I’m looking forward to putting my elderly ASUS Netbook through its paces with a 192kHz-capable USB2 audio device and either a USB drive or “Gigabit” Ethernet adaptor (its own onboard Ethernet, like the Pi’s, is limited to 100Mbit), to see how it stacks up against the Pi on the same peripherals. I know from running the RC download today that the distro works and plays audio even through the onboard audio, and the default setup of using the onboard display, keyboard and mouse to show the Web interface is a lovely touch.
A new home, a new country indeed; and so a new showcase, covering a wider variety of work.
What’s not included (and why):
- Restoration work (copyright and attribution issues);
- Sermon recordings I’ve worked on (owned by employer, samples available upon request);
- Other recorded works I’ve not been able to clear for use due to copyright, or other reasons out of respect for the artists and their plans for future development and release.
Thing is, they only just understand what they have now. And they’re in the middle of a larger life project, for which cutting off email access RIGHT NOW with no more than SAME-DAY notice would effectively kill the project stone-dead and potentially leave them in a *terrible* financial mess.
Sure, as the email from MS points out, they *could* continue to collect emails from the web client until the upgrade has been completed. But that’s *another* thing to learn at a time when they’re least able to put time or mental power into processing that kind of change.
From a technical perspective it really annoys me that a simple email service needs upgrades to both server and OS/client just to deliver electronic mail – standards for doing this securely and efficiently have existed for *years* now, and seem to be followed by just about every other service provider on the planet. Even Apple’s iCloud service has, I *think*, eventually got over itself and allowed standard IMAP login from non-Apple clients.
We’ll work out a solution – but this whole thing leaves a very bitter taste in the meantime, especially for those of us who just need to coach our users to get things done, because they don’t have the time and brain-space to keep pace with *everything* going on in tech world and why it’s shaped that way.
We in tech world would do well to put the users and tasks first for a change.
- The long-running archiving project has hit a significant milestone – I’ve now digitised as much of the physical media as I can. Limits are set now by the condition of the incoming media, and whether or not it’s really worth digitising 4 or more decaying copies of the same thing when we already have better copies elsewhere. The only reel (!) exception is a set of reels for a particular project whose magnetic layer fell off as soon as they were unpacked. No way is that worth spending money to preserve further, given the age, obscurity and limited value of the content – not least the cost of the hardware needed to retrieve it once the media itself has been restored. Shame, but commercial sense has to come into it somewhere along the line.
- Now have a DIY automated process for holding an in-DAW mix (working in REAPER) at -23 LUFS or thereabouts, which greatly simplifies things for radio and podcast production. Even if I only use it for monitoring or other less critical work, it’s an amazing time-saving tool.
- Obviously this means it can be adapted for -16 LUFS or any other target value as needed.
- It’s *really* not pretty for a number of reasons – most bothersome to me is that it presently works in stepped values, somewhere around 10 updates per second – rather than more smoothly applying gain or attenuation.
- For high-end stuff, I’m still happy to do final levelling by hand as it does tend to sound better, but that does add time to any given project.
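For the curious, the stepped behaviour I’m describing can be sketched in a few lines of Python. To be clear, this is a toy illustration rather than my actual REAPER setup – it uses plain RMS instead of proper K-weighted LUFS metering, and the block and step sizes are made-up values:

```python
import math

def db(x: float) -> float:
    """Linear amplitude to decibels (guarded against log of zero)."""
    return 20 * math.log10(max(x, 1e-12))

def ride_gain(samples, sample_rate=48_000, target_db=-23.0, max_step_db=0.5):
    """Toy stepped gain-rider: one gain update per block (~10 per second),
    nudging each block's RMS level toward target_db. The per-block jumps
    are exactly the stepped artefact mentioned above; a real tool would
    ramp the gain smoothly between updates."""
    block = max(1, sample_rate // 10)  # ~10 updates per second
    gain_db = 0.0
    out = []
    for i in range(0, len(samples), block):
        chunk = samples[i:i + block]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        error = target_db - (db(rms) + gain_db)
        # Nudge the gain by at most max_step_db per update
        gain_db += max(-max_step_db, min(max_step_db, error))
        g = 10 ** (gain_db / 20)
        out.extend(s * g for s in chunk)
    return out

# A steady sine at roughly -16dB RMS gets walked down toward -23
# over the first second and a half or so.
tone = [0.224 * math.sin(2 * math.pi * 440 * n / 48_000) for n in range(48_000 * 3)]
levelled = ride_gain(tone)
```

Smoothing those 0.5dB jumps with a short gain ramp per block is what would turn this from “functional” into “pretty”.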
I’ve been using lossless audio compression as a matter of course in my audio workflow for a while, and not just for end-distribution. Services like Bandcamp and Soundcloud support uploading in FLAC format, and many clients happily accept ALAC (Apple Lossless) or FLAC in place of traditional uncompressed WAV or AIFF formats.
As for long-term storage and archiving, I’ve switched to FLAC for my projects, and have back-converted most of my catalogue. But why FLAC?
- The reduction in network bandwidth and storage requirements is always nice.
- It brings the possibility of checking the project audio files for errors, by comparing the FLAC checksum (calculated at the time of creation) against how the file decodes today.
- This can show up problems with “bit-rot” on disk storage that would otherwise remain hidden. It’s a useful alternative to deploying ZFS-based file systems and keeps storage and network kit at an affordable “prosumer” level while such technologies mature a little longer, and hopefully come down in cost too.
- If I find a problem file? Fine – restore from a good backup. But that does rely of course on having good backups, and the corruption not being carried into those!
- It’s an established format that many, many software applications can read, across a wide variety of operating systems – and given the number of websites springing up offering “audiophile” digital transfers of consumer audio material based upon the format, I have good confidence that software to read and write to the format will continue to be developed for some years ahead. And if not, it’s still easy enough to batch-convert files to whatever format replaces it.
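As an aside, the integrity-checking pass boils down to something like this Python sketch, which leans on the reference decoder’s test mode (`flac -t`) to verify each file against its embedded checksum. The folder path and function names are my own placeholders:

```python
# Sketch: walk a folder tree and test every FLAC file against its
# embedded MD5 via the reference decoder's test mode ("flac -t").
# Assumes the `flac` command-line tool is installed and on the PATH.
import subprocess
from pathlib import Path

def build_test_cmd(path) -> list:
    """Command line for a silent decode-and-verify pass."""
    return ["flac", "-t", "--silent", str(path)]

def verify_tree(root: str) -> list:
    """Return the paths of any files that fail verification."""
    bad = []
    for f in sorted(Path(root).rglob("*.flac")):
        result = subprocess.run(build_test_cmd(f), capture_output=True)
        if result.returncode != 0:
            bad.append(str(f))
    return bad

# Usage (hypothetical archive path):
#   failures = verify_tree("/music/archive")
#   print("\n".join(failures) or "All files verified OK")
```

A non-zero exit code from `flac -t` flags a file whose decoded audio no longer matches the checksum stored at creation – exactly the “bit-rot” case above.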
Cons (none unique to FLAC, I notice):
- Reaper is my DAW of choice, and exporting large projects out to FLAC is time-consuming as it still doesn’t multithread FLAC exports for projects.
- While Reaper *does* support (in version 5.111, at last check) recording directly to FLAC and other formats, recording to anything other than WAV or AIFF on my systems has consistently thrown up audible glitches in the recorded material. With some sample-accurate editing these can be fixed, but that’s still not acceptable in a recording workflow.
- What I do therefore is to record to WAV/AIFF first, then save the project to FLAC before any significant mix down happens.
- Not every DAW supports FLAC natively, or at all. But then, for me this is a moot point – not every DAW can read project files from every other DAW, so this is just a part of that wider picture. You pick your tool for the job and deal with the consequences.
- Conversion takes time, especially offline for large projects.
So – that’s a quick brain-dump of how I’m working with this stuff and why. I’ve missed steps and I’m sure others will be quick to pick holes in it.
I suppose my question to anyone else reading this with sufficient interest is… What is everyone else doing? What file format would you pick, and why?
I’ve been using my Revox B77 with various audio interfaces and operating systems for a while, and this week I’ve restarted tape imports for a long-running project that needs to come to a final conclusion – at least on the ingest stage.
Various factors, not least compatibility with Mac OS X El Capitan, have forced me into using my Presonus Firestudio Project as the analogue-to-digital converter for this stage. It’s not ideal for a number of reasons, but it gets a job done, without too many sonic compromises (especially when compared with the vagaries of the source material) and so I suck it up and move on with life.
One important thing has come to light this time around though… My B77 *really* doesn’t like feeding the Firestudio’s Channels 1&2 at line level, as these are labelled as “instrument” (presumably Hi-Z, high-impedance) inputs. The preamplifier behind them runs out of headroom rather quickly and ends up giving horrible clipping, especially with the B77’s outputs wound up to give something approximating “line level”.
Plugging into any other channel line input (3 through 8) does reduce the recorded signal level as seen by the DAW software, but also gets rid of this clipping. Lesson learned.
I’m left wondering if this is an expected quirk of the B77 electronics, or whether it’s a quirk of my specific machine? I don’t (yet) have time to go poring through manuals to find out for sure – but I can’t say this result surprises me!
Still – onward and upward!
So here is a quick “faders-up” preview of “O Holy Night”, as performed at All Souls Church on 12th December at Christmas Praise – what a stunning event that was!
Credits as follows:
- Composer: Adolphe Adam
- Singer: Constantine Andronikou
- Orchestra: All Souls Orchestra
- Conductor: Dr Noel Tredinnick
- Because there’s such a huge dynamic swing between the loudest and quietest parts of the piece (some 20–40dB, depending on which metering standard one uses), I’ve approximated what I *would* be doing with faders by using compressors. The result isn’t yet “pretty” but it is functional – at least enough until it’s decided whether further work will be done!
- Though I helped provide the technical links between live and recording “worlds” for this event, I’m not actually involved in producing the radio mix. 🙂
Yup – thought I’d dip my toes into the waters with a new single, out on Bandcamp! Really pleased to have been able to work on this despite the year that’s been. Two new bits of kit have come out to play for this one: Caustic3 on the iPad, and a Revox B77 that I’ve been primarily using to do tape restorations, notably for a certain Christian orchestra. But of course, I *had* to see what it could do with recordings too! 🙂
I’d love it if others could support me and the wider range of non-mixing work I do, so here’s your chance!
I’ve commented on this blog before about the possibly questionable quality of some digital remasters released of late. Common subjective complaints in online fan and hi-fi forums – complaints I’ve made myself, both here and among friends in person – are that some particular remasters might be too loud, too bright, and/or otherwise “overdone” given what we know or perceive of the original source material. There might well be various artistic or marketing-related reasons for this, so I’m not here to argue for or against these issues.
Further complicating the issue for me, both as a fan and a professional, is that many of these stand-out features are seen overwhelmingly as positive things by many fans, whether they are technically correct or not. It would seem that a combination of perceived increase in detail and volume outweighs any issues of listening fatigue, or any known deviation from the presentation of the original master.
I’ve embarked professionally on remastering and restoration processes and have learned, from the coal-face so to speak, much of the reality of what’s involved. To onlookers it appears to be a black art – and believe me, from the inside, it can feel a lot like it too! Sometimes I’m asked by a client or a reference-listener how or why I made a particular decision; and in some cases, especially those without actual technically verifiable information or logged conversations to go on, I have to go out on a limb and essentially say something to the effect of “well, because it ‘felt right'”, or “because it brings out the guitar *here*, which really flatters the piece” or some other abstract quantity. At this point I just have to hope the client agrees. If they don’t, it’s no big disaster, I am rarely emotionally tied to the decision. I just need to pick up on the feedback, do what I can with it, and move on. Looking at the process, I guess that’s partly why the word “abstract” appears in my trading name! 🙂
“Okay, so you know a bit about this from both sides, on with the subject already!”
There are two particular commercial albums in my digital collection, both hugely successful upon their original release, whose most recent remasters have bothered me. It’s not fair to name and shame them, especially not while I await confirmation from engineers/labels that my hunch is correct. Anyways – I’m bothered not because they’re “bad” per se, but because I bought them, took them home, and from the moment I first heard them, something about them stood out as being not quite “right” from a technical perspective. One of them (Album A) was released in 2001; the other (Album B) was released earlier this year, in 2015.
What these two albums have in common is that their tonal and dynamic balance is *significantly* different to the original releases, beyond the usual remastering techniques involved with repair, restoration and sweetening of EQ and dynamics carried out to sit alongside contemporary new releases. The giveaway is that the top-end is both much brighter than usual, and much more compressed – and the result is unnecessarily fatiguing.
Where the two albums differ, then:
- Album A has not suffered from the “loudness wars”.
- Its overall dynamics are relatively untouched compared with the original.
- It appears, looking at the waveform in a DAW, that the album material has been normalised to 0dBFS (so it fills the maximum dynamic range CD has to offer), but it rarely hits such high levels.
- Album B however, despite never having been a “loud” album on original release, has suffered from the “loudness wars”.
- Looking at its waveform, it’s clear that it has been maximised: the material has been compressed and limited so that the original dynamics are squashed, with gain applied until almost the entire album waveform hits the 0dBFS point.
- As a result, the album has lost its previous tidal ebb and flow, and while arguably some details are indeed much more audible than before, it no longer has the organic subtlety it once did. Important instrumental details get masked and actually reduced in level as louder ones come into the foreground, because with that much compression going on, there’s nowhere else for them to go except lower in level.
- Sure, it’ll play better on an iPod while travelling on the London Underground, or in the car, so it might open up a new market that way – but for the rest of us perhaps looking forward to a better quality transfer to listen to at home or anywhere else, we don’t get that choice.
- I’ve heard the 2015 vinyl re-release of the latter album, and it seems to not have the same issues – or if it does, nowhere near to the same extremity. There are likely good technical and human reasons for that, but that’s an aside for another post.
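For anyone wanting to reproduce the “look at the waveform in a DAW” check numerically, a crude crest-factor (peak-to-RMS) measurement tells much the same story. The sketch below uses synthetic toy signals standing in for the two albums – obviously not the real material:

```python
import math

def peak_db(samples):
    return 20 * math.log10(max(abs(s) for s in samples) + 1e-12)

def rms_db(samples):
    return 20 * math.log10(math.sqrt(sum(s * s for s in samples) / len(samples)) + 1e-12)

def crest_db(samples):
    """Peak-to-RMS ratio in dB: an untouched dynamic master sits well
    above ~10dB, while a heavily maximised one collapses toward 3-6dB."""
    return peak_db(samples) - rms_db(samples)

# Two synthetic stand-ins (toy data, not the actual albums):
# "Album A": normalised so a single transient touches 0dBFS,
# but the body of the material sits much lower.
album_a = [0.1 * math.sin(n / 5) for n in range(48_000)]
album_a[1000] = 1.0  # one full-scale peak

# "Album B": squashed so nearly everything sits at the ceiling.
album_b = [
    0.95 * math.copysign(1, math.sin(n / 5)) * (0.9 + 0.1 * abs(math.sin(n / 50)))
    for n in range(48_000)
]

print(f"Album A crest: {crest_db(album_a):.1f}dB")  # large: dynamics intact
print(f"Album B crest: {crest_db(album_b):.1f}dB")  # tiny: maximised
```

Normalisation (Album A’s treatment) moves peak and RMS together, so the crest factor survives; maximisation (Album B’s) drags the RMS up toward the peak, and the crest factor collapses.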
Experiment 1: Treating the common issues
Last week I had some downtime, and a hunch – a dangerous combination.
Neither album was famed in its day for brightness, except for the singer’s sibilants in Album A causing vinyl cutting and playback some serious headaches if alignment wasn’t quite right. Album B does carry a lot of detail in the top end, but being mostly synthetic, and certainly not a modern-sounding album, its spectral content is shifted much more toward the low-mids than anything we’d be producing post-1990. So there will be some sheen and sparkle, but it should never be in your face, and never compressed.
Such clues told me two things: first, that Dolby A was likely not decoded from the master-tape on transfer; next, that in the case of Album B, further dynamic compression has taken place on top of the un-decoded material.
So – out came a Dolby A decoder, and through it I fed a signal from each album in turn, bouncing the decoded signal back into my DAW for storage and further analysis. Now please understand, it’s hard (if not impossible) to get a correct level-alignment without the test tones from the master tape, but those of us in the know can make some basic assumptions based on known recording practices of the time – and once we know what to listen for, we can also fine-tune by ear against the audible results, especially if we have a known-good transfer from the original tape to work with.
All that said, I’m not claiming here that even with all this processing and educated guesswork, I’m able to get back to the actual sound of the original tape! But I am able to get closer to what it ought to sound like…
The result? Instantly, for both albums, the top-end was back under control – and strangely both albums were suddenly sounding much more like the previous versions I’ve been hearing, be it from vinyl, CD or other sources. Album B’s synth percussion had space between the hits, Album A’s live drums had proper dynamics and “room” space. In both albums, stereo positioning was actually much more distinct. Reverb tails were more natural, easier to place, easier to separate reverb from the “dry” source, especially for vocals. Detail and timbre in all instruments was actually easier to pick out from within the mix. To top it all off – the albums each sounded much more like their artists’ (and their producers’) work. Both albums were far less fatiguing to listen to, while still delivering their inherent detail; and perhaps some sonic gains over previous issues.
Experiment 2: Fixing Album B’s over-compression
First things first – we can’t ever fully reverse what has been done to a damaged audio signal without some trace being left behind. Something will be wrong, whether “audible”, noticeable or not. But, again, an educated guess at the practices likely used, and an ear on the output helped me get somewhere closer to the original dynamics. But how?
Well, it was quite simple. One track from the album has a very insistent hi-hat throughout, which comes from a synth. If we assume that synths of the time were not MIDI-controlled, and likely manually mixed, we can assume that it should essentially sit at a constant level throughout the piece, barring fade-in/fade-out moves. And listening to an “original”, that’s pretty much what it does. But in neither the clean nor my “decoded” version of the later album does it do so. It bobs up and down in level whenever the other pads and swept instruments come and go. It was more noticeable on my “decoded” version, but with the frequency and micro-dynamic blends being so much more pleasant, I knew that I’d made progress and the way forward was to fix the compression if I could.
Out came a simple expander plug-in. Inserting this before the Dolby decoder, and tweaking various settings until I was happy that the hi-hat sat at a constant level throughout my chosen reference piece, restored the dynamics to something like the original, returning that hi-hat to a near-constant level as the track plays. In the end we get something like 6–9dB of gain reduction, and the waveform looks far less squashed. And sounds it, too.
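For illustration, the principle of that expander pass can be sketched in Python. This is a minimal downward expander, not the plug-in I actually used, and the threshold, ratio and envelope time are made-up values – the real settings were tuned by ear against that hi-hat:

```python
import math

def expand(samples, sample_rate=48_000, threshold_db=-20.0, ratio=2.0,
           env_ms=10.0):
    """Minimal downward expander. Material whose envelope falls below
    the threshold is pushed further down (1dB below threshold in becomes
    `ratio` dB out), stretching squashed dynamics back out. All the
    parameter values here are illustrative."""
    alpha = math.exp(-1.0 / (sample_rate * env_ms / 1000.0))
    env = 0.0
    out = []
    for s in samples:
        # One-pole envelope follower: instant attack, ~env_ms release
        env = max(abs(s), alpha * env)
        level_db = 20 * math.log10(max(env, 1e-9))
        if level_db < threshold_db:
            gain_db = (level_db - threshold_db) * (ratio - 1)
        else:
            gain_db = 0.0
        out.append(s * 10 ** (gain_db / 20))
    return out
```

Loud passages above the threshold pass through untouched, while quieter material – the stuff the original compression had dragged upward – is pushed back down, which is where that 6–9dB of gain reduction comes from.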
The trick then was to listen to all four versions – A, B, A restored, B restored – at similar overall loudness levels, and see which works better. So far, in this house anyways, we’re happier with the restored versions, even among those who are unfamiliar with the artistic content.
Prologue – Is this a mistake? And if so, how could it have happened?
When dealing with remasters, especially of older albums, we typically go back to playing analogue tape. There are *many* things that can go wrong here at a technical level. We’re worrying about whether the tape machine is aligned to the tape itself, whether both tape and machine are clean, and whether the correct noise-reduction technology is used – in short, whether we’re actually getting all the information we can off that tape.
Then there is the human element. I’ve lost count of the number of times, even in my small sample, that I’ve encountered a DAT or 1/2” reel labelled as being pre-EQ’d or Dolby-encoded with some system or another when in fact it wasn’t. Then there are other similar labelling and human errors I’ve encountered: perhaps it wasn’t labelled as being Dolby-encoded and it really was. Or perhaps the “safety copy” was actually the clean master, and the “master copy” was actually the cruddy “safety” with a 10dB higher noise-floor, recorded at half-speed on lower-grade tape on an inferior machine that we know nothing about, with the channels swapped randomly due to a patching error in the studio.
Technology, and technicians, like questions with defined, logical answers: “0 or 1”, “yes or no”, “is this right or is this wrong?”. Unfortunately for us, when dealing with music – as with any other art – and so with the musicians, producers and other artists involved in the creation and production process, we soon find that the lines between “right and wrong” very quickly get blurred.
As an engineer, I’m also all too aware of the dichotomy between my *paying* client (usually the artist), and my *unpaying* client (the listener). Most of the time the two agree on what a project needs, but sometimes they don’t. The usual issue is being asked for too little dynamic range – “can you turn it up a bit so it sounds as ‘loud’ as everything else?” – and the resulting sound is fatiguing even for me, the engineer, to work with, let alone the poor saps who’ll be invited to buy it. Sometimes I know that some sounds simply won’t process well to MP3/AAC (that’s less of an issue these days, but it still happens).
Anyways – all that to say: if these albums both suffered the same mistake, if indeed it was one, then even without the myriad artistic issues creeping in, I can see how an unlabelled, undecoded Dolby-A tape could slip through the net, blow the ears off an artist or engineer who’s been used to the previous released versions, and get people saying “YEAH, LET’S DO THAT ONE!” 🙂