On commercial remasters possibly issued without Dolby A decoding: mistakes, art, or…?

Some background…

I’ve commented on this blog before about the possibly questionable quality of some digital remasters released of late. Common subjective complaints in online fan and hifi forums – complaints I’ve made myself, both here and among friends in person – are that particular remasters might be too loud, too bright, and/or otherwise “overdone” given what we know or perceive of the original source material.  There might well be various artistic or marketing-related reasons for this, so I’m not here to argue for or against those choices.

Further complicating the issue for me, both as a fan and a professional, is that many of these stand-out features are seen overwhelmingly as positives by many fans, whether they are technically correct or not.  It would seem that the perceived increase in detail and volume outweighs any issue of listening fatigue, or any known deviation from the presentation of the original master.

I’ve embarked professionally on remastering and restoration processes and have learned, from the coal-face so to speak, much of the reality of what’s involved. To onlookers it appears to be a black art – and believe me, from the inside, it can feel a lot like it too!  Sometimes I’m asked by a client or a reference-listener how or why I made a particular decision; and in some cases, especially those without technically verifiable information or logged conversations to go on, I have to go out on a limb and essentially say something to the effect of “well, because it ‘felt right'”, or “because it brings out the guitar *here*, which really flatters the piece”, or some other abstract quality.  At this point I just have to hope the client agrees.  If they don’t, it’s no big disaster; I’m rarely emotionally tied to the decision. I just need to pick up on the feedback, do what I can with it, and move on.  Looking at the process, I guess that’s partly why the word “abstract” appears in my trading name! 🙂

“Okay, so you know a bit about this from both sides, on with the subject already!”

There are two particular commercial albums in my digital collection, both hugely successful upon their original release, whose most recent remasters have bothered me. It’s not fair to name and shame them, especially not while I await confirmation from engineers/labels that my hunch is correct.  Anyways – I’m bothered not because they’re “bad” per se, but because I bought them, took them home, and from the moment I first heard them, something about them stood out as being not quite “right” from a technical perspective. One (Album A) was released in 2001; the other (Album B) was released earlier this year, in 2015.

What these two albums have in common is that their tonal and dynamic balance is *significantly* different to the original releases, beyond the usual remastering techniques involved with repair, restoration and sweetening of EQ and dynamics carried out to sit alongside contemporary new releases.  The giveaway is that the top-end is both much brighter than usual, and much more compressed – and the result is unnecessarily fatiguing.

Where the two albums differ, then:

  • Album A has not suffered from the “loudness wars”.
    • Its overall dynamics are relatively untouched compared with the original.
    • It appears, looking at the waveform in a DAW, that the album material has been normalised so that its loudest peak just touches 0dBFS (filling the maximum range CD has to offer), though it rarely hits such high levels.
  • Album B however, despite never having been a “loud” album on original release, has suffered from the “loudness wars”.
    • Looking at its waveform, it’s clear that it has been maximised; this means that the material has been both compressed and limited such that the original dynamics have been squashed and gain applied such that almost the entire album waveform hits the 0dBFS point.
    • As a result, the album has lost its previous tidal ebb and flow, and while arguably some details are indeed much more audible than before, it no longer has the organic subtlety it once did.  Important instrumental details get masked and actually reduced in level as louder ones come into the foreground, because with that much compression going on, there’s nowhere else for them to go except lower in level.
    • Sure, it’ll play better on an iPod while travelling on the London Underground, or in the car, so it might open up a new market that way – but the rest of us, perhaps looking forward to a better-quality transfer to listen to at home or anywhere else, don’t get that choice.
    • I’ve heard the 2015 vinyl re-release of the latter album, and it seems to not have the same issues – or if it does, nowhere near to the same extremity. There are likely good technical and human reasons for that, but that’s an aside for another post.
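For the curious, the difference between the two treatments is easy to demonstrate numerically: plain peak normalisation only scales the signal, so its crest factor (peak-to-RMS ratio) survives intact, while maximising squashes it. Here’s a quick sketch in Python with NumPy – the signals and figures are invented for illustration, not taken from either album:

```python
import numpy as np

def peak_normalise(x, target_dbfs=0.0):
    """Scale so the highest sample peak hits target_dbfs (Album A's treatment).
    A pure gain change: the crest factor (peak-to-RMS ratio) is untouched."""
    gain = 10 ** (target_dbfs / 20) / np.max(np.abs(x))
    return x * gain

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB; a squashed 'maximised' master shows a low value."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

# A toy 'dynamic' signal: a quiet verse followed by a loud chorus
t = np.linspace(0, 1, 48000, endpoint=False)
sig = np.concatenate([0.1 * np.sin(2 * np.pi * 440 * t),
                      0.8 * np.sin(2 * np.pi * 440 * t)])

normalised = peak_normalise(sig)            # fills 0dBFS, dynamics intact
maximised = np.clip(sig * 4.0, -0.8, 0.8)   # crude stand-in for limiting
```

Watching the crest factor fall between releases is one of the quickest objective tells that a remaster has been “maximised” rather than merely normalised.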

Experiment 1:  Treating the common issues

Last week I had some downtime, and a hunch – a dangerous combination.

Neither album was famed in its day for brightness, except for the singer’s sibilants in Album A causing vinyl cutting and playback some serious headaches if alignment wasn’t quite right. Album B does carry a lot of detail in the top end, but being mostly synthetic, and certainly not a modern-sounding album, its spectral content is shifted much more toward the low-mids than anything we’d be producing post-1990.  So there should be some sheen and sparkle, but it should never be in your face, and never compressed.

Such clues told me two things: first, that Dolby A was likely not decoded from the master-tape on transfer; next, that in the case of Album B, further dynamic compression has taken place on top of the un-decoded material.

So – out came a Dolby A decoder, and through it I fed a signal from each album in turn, bouncing the decoded signal back into my DAW for storage and further analysis.  Now please understand, it’s hard (if not impossible) to get a correct level-alignment without the test tones from the master tape, but those of us in the know can make some basic assumptions based on known recording practices of the time; and once we know what to listen for, we can also judge by the audible results, especially if we have a known-good transfer from the original tape to work with.

All that said, I’m not claiming here that even with all this processing and educated guesswork, I’m able to get back to the actual sound of the original tape! But I am able to get closer to what it ought to sound like…

The result? Instantly, for both albums, the top-end was back under control – and strangely both albums were suddenly sounding much more like the previous versions I’ve been hearing, be it from vinyl, CD or other sources. Album B’s synth percussion had space between the hits, Album A’s live drums had proper dynamics and “room” space. In both albums, stereo positioning was actually much more distinct. Reverb tails were more natural, easier to place, easier to separate reverb from the “dry” source, especially for vocals. Detail and timbre in all instruments was actually easier to pick out from within the mix.  To top it all off – the albums each sounded much more like their artists’ (and their producers’) work. Both albums were far less fatiguing to listen to, while still delivering their inherent detail; and perhaps some sonic gains over previous issues.

Experiment 2:  Fixing Album B’s over-compression

First things first – we can’t ever fully reverse what has been done to a damaged audio signal without some trace being left behind.  Something will be wrong, whether “audible”, noticeable or not.  But, again, an educated guess at the practices likely used, and an ear on the output helped me get somewhere closer to the original dynamics.  But how?

Well, it was quite simple.  One track from the album has a very insistent hi-hat throughout, which comes from a synth.  If we assume that synths of the time were not MIDI-controlled, and that the track was likely manually mixed, we can assume the hi-hat should sit at an essentially constant level throughout the piece, barring fade-in/fade-out moves.  And listening to an “original”, that’s pretty much what it does.  But it does no such thing in either the clean or my “decoded” version of the later release: it rides up and down in level whenever the other pads and swept instruments come and go.  This was more noticeable on my “decoded” version, but with the frequency and micro-dynamic blends being so much more pleasant, I knew that I’d made progress, and that the way forward was to fix the compression if I could.

Out came a simple expander plug-in. Inserting this before the Dolby decoder, and tweaking its settings until the hi-hat sat at a near-constant level throughout my chosen reference piece, restored the dynamics to something like the original.  In the end, we get something like a 6-9dB gain reduction, and the waveform looks far less squashed.  And sounds it, too.
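For anyone wanting to try something similar, the gist of that expander stage can be sketched as below. This is a minimal illustrative downward expander, not the actual plugin or its settings – the threshold, ratio and time constants are placeholders you’d tune by ear against a reference element like that hi-hat:

```python
import numpy as np

def downward_expand(x, sr, threshold_db=-20.0, ratio=2.0,
                    attack_ms=5.0, release_ms=80.0):
    """Minimal feed-forward downward expander.
    Below the threshold, level changes are exaggerated by `ratio`, pushing
    quieter passages back down and undoing some of the upward squashing.
    All settings here are placeholders, tuned by ear in practice."""
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.empty_like(x)
    level = 1e-9
    for i, s in enumerate(np.abs(x)):
        a = a_att if s > level else a_rel      # fast rise, slow fall
        level = a * level + (1.0 - a) * s
        env[i] = level
    env_db = 20 * np.log10(np.maximum(env, 1e-9))
    # Below threshold, apply (ratio - 1) times the shortfall as attenuation
    gain_db = np.where(env_db < threshold_db,
                       (env_db - threshold_db) * (ratio - 1.0), 0.0)
    return x * 10 ** (gain_db / 20)
```

In the album case, the settings were tweaked until that synth hi-hat sat at a near-constant level through the whole reference track.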

The trick then was to listen to all four versions – A, B, A restored and B restored – at similar overall loudness levels, and see which works better.  So far, in this house anyways, we’re happier with the restored versions, even among those unfamiliar with the artistic content.

Epilogue – Is this a mistake? And if so, how could it have happened?

When dealing with remasters, especially of older albums, we typically go back to playing analogue tape. There are *many* things that can go wrong here at a technical level. We’re worrying about whether the tape machine is aligned to the tape itself, whether both tape and machine are clean, and whether the correct noise-reduction technology is used – in short, whether we’re actually getting all the information we can off that tape.

Then there is the human element. I’ve lost count of the number of times, even in my small sample, that I’ve encountered a DAT or 1/2” reel labelled as being pre-EQ’d or Dolby-encoded with some system or another when in fact it wasn’t. Then there are other similar labelling and human errors I’ve encountered: perhaps it wasn’t labelled as being Dolby-encoded and it really was. Or perhaps the “safety copy” was actually the clean master, and the “master copy” was actually the cruddy “safety” with a 10dB higher noise-floor, recorded at half-speed on lower-grade tape on an inferior machine that we know nothing about, with the channels swapped randomly due to a patching error in the studio.

Technology, and technicians, like questions with defined, logical answers: “0 or 1”, “yes or no”, “is this right or is this wrong?”. Unfortunately for us, when dealing with music – as with any other art – and so with the musicians, producers and other artists involved in its creation and production, we soon find that the lines between “right” and “wrong” very quickly get blurred.

As an engineer, I’m also all too aware of the dichotomy between my *paying* client (usually the artist), and my *unpaying* client (the listener).  Most of the time these are in agreement about what is needed for a project, but sometimes they’re not. The usual issue is being asked for too little dynamic range – “can you turn it up a bit so it sounds as ‘loud’ as everything else?” – and the resulting sound is fatiguing even for me, the engineer, to work with, let alone the poor saps who’ll be invited to buy it. Sometimes I know that some sounds simply won’t process well to MP3/AAC (that’s less of an issue these days, but it still happens).

Anyways – all that to say: if these albums both suffered the same mistake, if indeed it was one, then even without the myriad artistic issues creeping in, I can see how an unlabelled, undecoded Dolby A tape could slip through the net, blow the ears off an artist or engineer who’s used to the previously released versions, and get people saying “YEAH, LET’S DO THAT ONE!” 🙂

CF

Feia – cassette restoration case-study

After a few weeks playing with head alignments, audio interfaces, decks, plugins and sanity, I’ve run off a successful “first draft” attempt at restoring these interesting recordings.

About the cassettes themselves…

The cassettes themselves are a little odd – they appear to be using Type-II (CrO2) shells, but I can’t tell from listening or visual inspection whether the formulation on the tape is actually Type-I (Ferric) or Type-II. Both tapes seemed to sound better with Type-I playback EQ, selected in each case by blocking the tape type holes in the shell with judicious use of Scotch-tape.

Noise levels on the tapes were horrendous. Both cassettes seem to have been recorded about 10dB quieter than most commercial tapes given to me in the same batch, and seem to have experienced significant loss of high frequencies – something I noticed getting audibly worse with each playback pass, despite cleaning and demagnetising the heads before each run. At best I was getting something like 15dB signal-to-noise before noise reduction. Much of this is broadband noise, but there’s also a significant rolling static crackle on the right channel, which seems to match the rotational speed either of the pinch-roller on the deck, or perhaps the guide rollers inside the tape shell itself.

Playback

Something I’ve always known about the Akai deck I’ve now inherited and restored to working condition is that it’s always played a little fast. While I’ve not been able to fix this at a hardware level (it seems to involve fiddling with the motor control circuits – a major stripdown and rebuild I’m not convinced I have the time or confidence to complete without an accident), I have taken an average of how fast the machine plays by comparing songs from an assortment of pre-recorded commercial cassettes with digital copies from CD or previews on iTunes. From this I discovered that pulling the playback speed of the sampled audio down to 95.75% gives an acceptable match (within a second or so across the side of a cassette) to the commercially-available digital versions. This is really easy to do in my audio software, as it doesn’t involve convoluted resampling and slicing to keep the original pitch.
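In other words, it’s a plain varispeed correction: pitch and tempo fall together, exactly as if the tape itself were running slower, with no pitch-preserving time-stretch involved. A rough sketch of the idea, using linear interpolation purely for brevity (a real transfer would use a proper band-limited resampler):

```python
import numpy as np

def varispeed(x, speed):
    """Play `x` back at `speed` times its captured rate (speed < 1 slows it).
    Pitch and tempo change together, just like slowing the tape itself.
    Linear interpolation stands in for a band-limited resampler."""
    n_out = int(round(len(x) / speed))
    src_pos = np.arange(n_out) * speed        # fractional read positions
    return np.interp(src_pos, np.arange(len(x)), x)

# The deck runs fast, so the transfer gets slowed to 95.75% speed:
# corrected = varispeed(transfer, 0.9575)
```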

Noise reduction

Challenges

A significant HF-boost was required to get the tape sounding anything like a natural recording, which of course brings the noise levels up. I don’t have access to an external Dolby decoder, and the Akai deck used for doing the transfers sounds very strange with Dolby B engaged even on well-produced pre-recorded material that came to me in excellent condition. The Denon deck I have is technically better than the Akai in many ways, but to beat the Akai in sonic terms needs about an hour spent on alignment (per cassette) and the source material needs to be in excellent condition. So I proceeded to transfer the content from the Akai at a known higher running speed, without Dolby decoding, in the hopes of being able to fix this later in software.

Decoding for playback

There is a lot said online about the mechanics of Dolby B, and many people think it’s a simple fixed 10dB shelving HF EQ boost (emphasis) on recording, easily dealt with by a simple shelving HF EQ cut (de-emphasis) on playback – or even by simply doing nothing with older tapes that have suffered HF loss. Well, without going into detail that might infringe patents and/or copyright, let me tell you that even from listening to the undecoded audio, it really isn’t that simple. What we’re dealing with here is some form of dynamic processing, dependent on both the incoming frequency content AND the incoming levels. Even with its modest headline noise-reduction figure, it’s a beastly-clever system when it works, and remarkably effective in many environments; but as with many complex systems it makes a lot of assumptions, leaving it open to a lot of factors influencing the quality of the output.
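To make that point concrete, here’s a deliberately over-simplified toy: the HF boost is only applied in full when the high-frequency content is quiet, tapering away as levels rise, with the decode side mirroring the curve. Every number and the taper shape here are invented for illustration – real Dolby B uses a sliding filter band and carefully tuned time constants that this ignores entirely:

```python
def toy_hf_boost_db(hf_level_db, mode="encode"):
    """Toy illustration of why Dolby B is not a fixed shelving EQ.
    The headline HF boost only applies to *quiet* high-frequency content;
    loud HF passes almost untouched, and the decoder mirrors the curve.
    The knee, taper and figures below are invented for illustration."""
    max_boost = 10.0     # illustrative headline noise-reduction figure
    knee_db = -40.0      # below this (invented) level, full boost applies
    if hf_level_db <= knee_db:
        boost = max_boost
    elif hf_level_db >= 0.0:
        boost = 0.0
    else:
        boost = max_boost * (hf_level_db / knee_db)
    return boost if mode == "encode" else -boost
```

So quiet, hiss-level HF gets the most lift on record and the most cut on playback – and skipping the decode leaves exactly that over-bright, over-compressed top-end behind.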

Working up a solution

Having no access to a known-good hardware decoder that could be calibrated to the tape, I set about using a chain of bundled plugins in my Reaper workstation software to mimic the decoding process. Having been through the process, with hindsight I can see why there are so few software decoders for Dolby B on the market, even without considering the patenting issues surrounding it. It’s a tough gig.

For this process, I picked out the best-sounding pre-recorded tape in our collection and aligned the Denon deck to it, listening for the most consistent sound, running speed and Dolby decoding.  I got a sound off the cheap ferric formulation that came subjectively very close to the same release on CD or vinyl in terms of listening quality – the tape suffering only slightly from additional HF grain, with some print-through and background noise evident only when listening at high levels on headphones.

I then aligned the Akai to the same tape before sampling (without Dolby B decoding) and correcting for speed. A rip of the CD, and the samples from the Denon, were used as references as I set about creating the software decoding chain – keeping overall levels the same between reference and working tracks to ensure I was comparing like with like.

A day was spent setting up and tweaking the decoder chain before I came out with a chain that gives equivalent subjective performance to what the Denon deck can do with great source material. I tried the same settings on a variety of cassettes, and was able to repeat the results across all of them…

Content, replication and mastering issues?

…until I came to the content of the Feia tapes I was planning to work on!

Once the cassettes were digitised, and playback speed and overall frequency response corrected, each side of the two tapes was given its own stereo channel, so that individual EQ, channel balancing and stereo-width settings could be assigned to each side of the tape, since I noted some differences in each of these areas that were common to each side of each cassette.

While listening to the digitising run, without playback speed correction, I noted a 50Hz hum in the recordings that was common to all sampled media – I tracked this down to issues with signal grounding between the audio interface, the monitor amplifier, and the cassette deck. No amount of tweaking this signal chain could get rid of it, but with the tapes sounding significantly worse with each playback pass, the only way forward was to remove the hum using an FIR/FFT plugin. I therefore set one up on each of the stereo channels, sampled a section of the noise (without the content) into each filter, and tweaked the removal settings to be more subtle than the defaults – this removed the hum but left the remaining signal (including bass notes passing through the hum and its harmonic frequencies) intact.
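A crude stand-in for that kind of hum removal can be sketched with an FFT notch, zeroing narrow bins at the hum fundamental and its first few harmonics. A real noise-reduction plugin works from a sampled noise fingerprint and subtracts far more gently (which is why I tweaked mine to be more subtle than default), but the sketch shows the principle:

```python
import numpy as np

def fft_hum_notch(x, sr, hum_hz=50.0, harmonics=4, width_hz=1.0):
    """Crude spectral notch: zero narrow bins around the hum fundamental
    and its first few harmonics. A proper noise-reduction plugin subtracts
    a sampled noise fingerprint instead of hard-zeroing bins, which is far
    gentler on programme material sharing those frequencies."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    for h in range(1, harmonics + 1):
        X[np.abs(freqs - hum_hz * h) <= width_hz] = 0.0
    return np.fft.irfft(X, n=len(x))
```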

Each stereo channel was then taken out of the master mix and routed to two more stereo channels – one for the noise-reduction decoder and the other for the side-chain trigger telling the decoder what to do.

Listening to the results at this stage was intriguing. Even after tweaking the decoder threshold levels I noted a general improvement in signal quality and a reduction in noise levels, but still a strange compression artefact on the high frequencies. This got me wondering whether the labelled Dolby B encoding was wrong, and whether Dolby C had actually been applied by mistake. Cue another day spent mimicking the Dolby C system by tweaking my homebrew decoding chain. Nope – the compression was still there, and the overall spectral effect of decoding Dolby C had far too much effect on the mid and high frequencies.

So: onto the next likely candidate: dbx noise reduction. I found out more online about how it works and created an encode/decode chain in software, using a ripped CD track as source material.  Applying the decoding stage to the Feia recordings was dynamically a little better in the top-end, but still not right.
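For reference, the core of what I mimicked for the dbx attempt is broadband 2:1 companding: levels are halved (in dB) on encode, so decoding doubles the dB distance between loud and quiet. A very rough sketch of the decode side – the reference level, window size, and the omission of dbx’s pre/de-emphasis EQ are all simplifications of mine, not dbx specifications:

```python
import numpy as np

def dbx_style_decode(x, sr, ref_db=-20.0, window_ms=20.0):
    """Very rough dbx-style decode: broadband 1:2 expansion driven by a
    sliding RMS detector. Real dbx adds fixed pre/de-emphasis EQ and a
    carefully tuned detector; this only captures the companding idea.
    `ref_db` is an assumed unity-gain reference level, not a dbx spec."""
    n = max(1, int(sr * window_ms / 1000.0))
    kernel = np.ones(n) / n
    # Sliding RMS via a moving average of x^2
    rms = np.sqrt(np.convolve(x ** 2, kernel, mode="same") + 1e-12)
    level_db = 20 * np.log10(rms)
    gain_db = level_db - ref_db   # doubles dB distances about the reference
    return x * 10 ** (gain_db / 20)
```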

Combining the homebrew Dolby B chain, and following it with a little dynamic expansion on the top 12dB of the recording made a useful difference.  Suddenly transients and sibilants sounded more natural, with more “bite” and less splashiness on the decay, particularly at higher frequencies.

Neither tape is sonic perfection itself even after this restoration, but I’ve learned a lot through it, and now have a much better understanding of why cassettes *can* sound great but generally don’t, especially with recordings made on one deck and played on another.  I now realise that I’d far rather deal with vinyl and pre-digitised content than extract it from >20-year-old compact cassettes! At some future point, I’ll likely post up some before/after samples so you can judge the results for yourself.

Dual 505-2: Mini-restoration and first impressions


Those who have read my little review of the Tannoy Mercury M20 Golds might be aware I’ve inherited some other items that they were originally purchased with, sometime in 1985, we think. In this little article I’ll be explaining a little about our (immediately) beloved turntable – the Dual 505-2. By modern standards, if purchased new, I would guess this record deck would compare with the likes of the Pro-Ject Debut Essential package or similar. Our example appears to still be fitted with its original Dual ULM165E cartridge and DN165E stylus. We have no idea what playing time the stylus has seen, nor whether it is indeed the original stylus or an after-market replacement.

Fault-finding and repair

Upon arrival the platter would not spin, and I had been warned of the need for replacement drive and pitch-adjustment belts. After some Google abuse in an attempt to find a service or owners’ manual for this unit, I found my way to the Vinyl Engine which had an English owner’s manual available.

Dual 505-2 disassembly/reassembly

After downloading and reading the owners’ manual, I then attempted to take the inner plinth out of the wooden frame. This isn’t as easy as it looks; so, armed with a Leatherman, some Dutch courage and a glance at the owner’s manual, it goes something like this:

  1. Lock the tonearm in place.
  2. Remove the stylus and put it somewhere safe to save it getting damaged.
  3. Remove the rubber turntable mat, and the platter.
  4. Remove the plastic lid.
  5. Ensure the transit screws are in the “playing” position, to give the suspension mounts full movement.
  6. Turn the whole deck on its back (so it rests with the hinges against your work surface).
  7. Slide the suspension spring bases out of their homes in the plastic base plate. The whole plinth assembly can then separate from the base. Note that the captive mains, signal and ground cables prevent the plinth separating completely from the base, without further work to release the cable entry glands.
  8. Breathe. Carefully.
  9. Reassembly is essentially a reversal of steps 1–8.

Checking the tonearm, motor and microswitch interaction

Another thing I learned during my Google session to find the manual was that one of the most common faults with these decks is that the microswitch seizes up if the deck has not been used for a while. This is the switch that starts the motor spinning when the tonearm is moved into the playing position. The cure recommended in most online forum posts I found on the issue was simply to operate the switch with a screwdriver enough times that it starts working again. If I recall correctly, the switch is on the underside of the plinth next to the motor, and has either a yellow or blue plastic cap that connects to the tonearm assembly via a system of levers that I could not easily work out. Within about 10 pushes the switch mechanism freed itself and the motor started spinning. With hindsight it was risky leaving the mains plugged in while I took everything apart, but it paid off, and as it turns out there were no exposed terminals that a stray finger or screwdriver could have found. Phew.

Drivebelt check

As mentioned earlier, I had been warned of the need for a new drivebelt. It turned out that the belt itself was fine, but it needed a little gentle persuasion to realign it so it ran inside the speed selection mechanism. I tested the speed selector a couple of times to check that the belt stayed in the correct position, which it did. While I had the deck apart I also discovered that the toothed pitch adjustment belt had somehow snapped, so without any spare parts to hand I simply removed it and hoped for the best.

Function test, and adjust pitch

Having reassembled the deck I plunked down Suzanne Vega’s debut LP, and found both platter and cue mechanisms to work as designed. The result was quite stunning – the aged stylus and cartridge combination was working well enough with the phono stage of my NAD 302 amplifier to extract a remarkably pleasant sound from the disc, albeit at a slightly higher pitch than normal.

So, off came the platter and out came the Leatherman to attempt a quick-and-dirty adjustment of the pitch control pulley. Having worked out that the belt links the surface-mounted pitch control knob to the control pulley on the motor assembly, I could reason as to which way to turn the pulley to make the required adjustment. Using the narrow flat-blade screwdriver on the Leatherman, I turned the pulley through 90 degrees anti-clockwise by locking the blade in the teeth of the pulley and pushing gently in the right direction.

Trying the test LP again showed I’d adjusted too much – the song was now playing slightly slower than normal, so I went back and halved the difference. The LP now played at what I considered to be ‘normal speed’ (turns out I do have an intuitive sense of ‘perfect pitch’, though it helped to have a CD of the same album to compare against, which did indeed synchronise perfectly in terms of both track length and perceived pitch/tempo).

First impressions of sound quality

This is a subject for another post of its own, I’m sure. What immediately strikes me is that the sound quality on offer is surprisingly good, but there are no good words or phrases I can think of to describe how it differs from the same material played from CD or other known digital sources in my system. When the disc is clean and in good playing condition, and has itself been mastered and manufactured well, the soundstage is noticeably wider and deeper than my digital sources, and the overall presentation is simply more musical. It’s not that the vinyl source is ‘warmer’ or more detailed, as has become the stereotypical wording used by audiophiles writing in print or online when selling the plus-points of vinyl – just that the overall result is simply more pleasing to me. My wife confirmed this, noting that the vinyl feels more ‘real’, more as if the musicians are presented in a space around and in front of us, compared with the digital sources forcing the soundstage to be artificially contained within our room.

Stay tuned for more on what work we carry out on this deck, and for some more in-depth reviews of what it enables us to enjoy!

Update @23:42
Put picture back into the post as originally designed, correct spelling/typos, add the following list:

Things still to do:

  • Install a replacement stylus in case I broke anything in transport or handling. Turns out the current one (and its cartridge) are replacements with only 50 hours’ playing time on them, but I’ve already ordered a Dual DN-165E replacement stylus from Stylus Plus, so at least I’ll have a spare.
  • Replace the pitch control belt – not because it’s needed in normal operation, but I feel it’s only fitting to bring this one back “to spec”.
  • Check alignment of the cartridge. When I first started using the deck, I noticed a significant amount of sibilant distortion when nearing the end of a side. This isn’t uncommon, but a quick alignment according to the “Stevenson Method” has made things noticeably better at the expense of slightly increased sibilant distortion at the beginning of each side.
  • Clean all our LPs and replace inner sleeves.
  • Try the phono stage of our inherited NAD 3020B in place of the current NAD 302.