
//abstractnoise

Professional London-based audio and tech geek

Welcome

Professional audio engineer and hobbyist creative technician, based in South East London.  This blog hosts some of my personal and leisure interests as well as some examples of my professional work.  All works and opinions represented on this site are my own, unless otherwise credited/quoted.

Featured post

Windows 10 forced upgrades from another angle – email

Really annoyed at Microsoft, on behalf of a Windows 7-using relative. They received an email to their Outlook.com address (formerly Hotmail) on 30th June saying that they would lose access to some or all of their messages that very same day, because they’re using Windows Live Mail 2012 and server upgrades require them to upgrade to Windows 10 immediately.
Thing is, they only just understand what they have now. And they’re in the middle of a larger life project, for which cutting off email access RIGHT NOW with no more than SAME-DAY notice would effectively kill the project stone-dead and potentially leave them in a *terrible* financial mess.

Sure, as the email from MS points out, they *could* continue to collect emails from the web client until the upgrade has been completed. But that’s *another* thing to learn at a time when they’re least able to put time or mental power in to processing that kind of change.

From a technical perspective it really annoys me that a simple email service needs upgrades to both server and OS/client just to deliver electronic mail. Standards for doing this securely and efficiently have existed for *years* now, and seem to be followed by just about every other service provider on the planet. Even Apple’s iCloud service has, I *think*, eventually got over itself and allowed standard IMAP login from non-Apple clients.

We’ll work out a solution – but this whole thing leaves a very bitter taste in the meantime, especially for those of us who just need to coach our users to get things done, because they don’t have the time and brain-space to keep pace with *everything* going on in tech world and why it’s shaped that way.

We in tech world would do well to put the users and tasks first for a change.

Updates on audio-related things in progress…

  1. The long-running archiving project has hit a significant milestone – I’ve now digitised as much of the physical media as I can. Limits are now set by the condition of the incoming media, and by whether it’s really worth digitising four or more decaying copies of the same thing when we already have better copies elsewhere.  The only reel (!) exception is a set of reels for a particular project whose magnetic layer fell off as soon as the reels were unpacked.  That’s not worth spending money to preserve further given the age, obscurity and limited potential value of the content – not least the cost of the hardware needed to retrieve it once the media itself has been restored.  Shame, but commercial sense has to come into it somewhere along the line.
  2. Now have a DIY automated process for holding an in-DAW mix (working in REAPER) at -23 LUFS or thereabouts, which greatly simplifies things for radio and podcast production.  Even if I only use it for monitoring or other less critical work, it’s an amazing time-saving tool.  (A rough command-line equivalent of the idea is sketched just after this list.)
    • Obviously this means it can be adapted for -16 LUFS or any other value as needed.
    • It’s *really* not pretty for a number of reasons – most bothersome to me is that it presently works in stepped values, somewhere around 10 updates per second, rather than more smoothly applying gain or attenuation.
    • For high-end stuff, I’m still happy to do final levelling by hand as it does tend to sound better, but that does add time to any given project.
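
For anyone wanting to approximate the same target outside a DAW, here’s a minimal offline sketch using ffmpeg’s loudnorm filter. To be clear, this is *not* my REAPER-based process – it’s a two-pass offline equivalent, and the filenames and the pass-1 measurement figures below are placeholders.

# Pass 1: measure integrated loudness, true peak, LRA and threshold.
ffmpeg -hide_banner -i mix.wav -af loudnorm=I=-23:TP=-1:LRA=7:print_format=json -f null -

# Pass 2: feed the measured values back in with linear=true so a single static
# gain offset is applied, rather than dynamic gain-riding.
# Swap I=-23 for I=-16 (or any other target) as needed.
ffmpeg -hide_banner -i mix.wav \
  -af "loudnorm=I=-23:TP=-1:LRA=7:linear=true:measured_I=-19.2:measured_TP=-3.1:measured_LRA=6.5:measured_thresh=-29.8" \
  -ar 48000 mix_r128.wav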

Lossless audio compression for project work and archive – what do you use and why?

I’ve been using lossless audio compression as a matter of course in my audio workflow for a while, and not just for end-distribution.  Services like Bandcamp and Soundcloud support uploading in FLAC format, and many clients happily accept ALAC (Apple Lossless) or FLAC in place of traditional uncompressed WAV or AIFF formats.

As for long-term storage and archiving, I’ve switched to FLAC for my projects, and have back-converted most of my catalogue.  But why FLAC?

Pros:

  • The reduction in network bandwidth and storage requirements is always nice.
  • It brings the possibility of checking project audio files for errors, by comparing the FLAC checksum (stored at the time of creation) against how the file decodes today (a quick verification sketch follows after this list).
    • This can show up problems with “bit-rot” on disk storage that would otherwise remain hidden.  It’s a useful alternative to deploying ZFS-based file systems and keeps storage and network kit at an affordable “prosumer” level while such technologies mature a little longer, and hopefully come down in cost too.
    • If I find a problem file? Fine – restore from a good backup.  But that does rely of course on having good backups, and the corruption not being carried into those!
  • It’s an established format that many, many software applications can read, across a wide variety of operating systems – and given the number of websites springing up offering “audiophile” digital transfers of consumer audio material based upon the format, I have good confidence that software to read and write to the format will continue to be developed for some years ahead.  And if not, it’s still easy enough to batch-convert files to whatever format replaces it.
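
On that verification point, flac’s own test mode decodes each file and checks it against the MD5 checksum stored at encode time. A minimal sweep over an archive might look something like this – “archive/” and the log filename are placeholders:

# Walk the archive and test every FLAC against its embedded MD5 checksum.
find archive/ -name '*.flac' -print0 |
  while IFS= read -r -d '' f; do
    flac --test --silent "$f" || echo "FAILED: $f" >> flac_errors.log
  done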

Cons (none unique to FLAC, I notice):

  • REAPER is my DAW of choice, and exporting large projects to FLAC is time-consuming, as it still doesn’t multithread FLAC exports.
  • While REAPER *does* support (in version 5.111, at last check) recording directly to FLAC and other formats, recording to anything other than WAV or AIFF on my systems has consistently thrown up audible glitches in the recorded material. With some sample-accurate editing these can be fixed, but that’s still not acceptable.
    • What I do therefore is to record to WAV/AIFF first, then save the project to FLAC before any significant mix down happens.
  • Not every DAW supports FLAC natively, or at all.  But then, for me this is a moot point – not every DAW can read project files from every other DAW, so this is just a part of that wider picture.  You pick your tool for the job and deal with the consequences.
  • Conversion takes time, especially offline for large projects (a simple batch-conversion sketch follows below).
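
Since the flac command-line encoder works through files one at a time, one easy (if unglamorous) workaround for big batches is to run several encodes in parallel. A sketch, assuming GNU or BSD xargs, with “projects/” as a placeholder path:

# Convert every WAV under projects/ to FLAC at maximum compression,
# verifying each encode, with four encodes running in parallel.
find projects/ -name '*.wav' -print0 | xargs -0 -n 1 -P 4 flac --best --verify

The originals are left in place alongside the new .flac files; flac’s --delete-input-file option exists for the brave.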

So  – that’s a quick brain-dump of how I’m working with this stuff and why. I’ve missed steps and I’m sure others will be quick to pick holes in it.

I suppose my question to anyone else reading this with sufficient interest is… What is everyone else doing? What file format would you pick, and why?

 

Quick note on using a Revox B77 with a Presonus Firestudio Project – potential impedance mismatch!

I’ve been using my Revox B77 with various audio interfaces and operating systems for a while, and this week I’ve restarted tape imports for a long-running project that needs to come to a final conclusion – at least on the ingest stage.

Various factors, not least compatibility with Mac OS X El Capitan, have forced me into using my Presonus Firestudio Project as the analogue-to-digital converter for this stage.  It’s not ideal for a number of reasons, but it gets a job done, without too many sonic compromises (especially when compared with the vagaries of the source material) and so I suck it up and move on with life.

One important thing has come to light this time around though… My B77 *really* doesn’t like feeding line-level signal into Channels 1 & 2, which are labelled as “instrument” (presumably Hi-Z, high-impedance) inputs. The preamplifier runs out of headroom rather quickly and ends up clipping horribly, especially with the B77’s outputs wound up to give something approximating “line level”.

Plugging into any other channel line input (3 through 8) does reduce the recorded signal level as seen by the DAW software, but also gets rid of this clipping.  Lesson learned.

I’m left wondering if this is an expected quirk of the B77 electronics, or whether it’s a quirk of my specific machine?  I don’t (yet) have time to go poring through manuals to find out for sure – but I can’t say this result surprises me!

Still – onward and upward!

Preview: “O Holy Night” – performed by ASO and Constantine Andronikou

So here is a quick “faders-up” preview of “O Holy Night”, as performed at All Souls Church on 12th December at Christmas Praise – what a stunning event that was!

Credits as follows:

  • Composer: Adolphe Adam
  • Singer: Constantine Andronikou
  • Orchestra: All Souls Orchestra
  • Conductor: Dr Noel Tredinnick

Notes:

  1. Because there’s such a huge dynamic swing between the loudest and quietest parts of the piece (some 20–40dB, depending on which metering standard one uses), I’ve approximated what I *would* be doing with faders by using compressors.  The result isn’t yet “pretty” but it is functional – at least until it’s decided whether further work will be done!
  2. Though I helped provide the technical links between live and recording “worlds” for this event, I’m not actually involved in producing the radio mix.:)

 

“Changes afoot” – new single

Yup – thought I’d dip my toes into the waters with a new single, out on Bandcamp!  Really pleased to have been able to work on this despite the year that’s been.  Two new bits of kit have come out to play for this one: Caustic3 on the iPad, and a Revox B77 that I’ve been primarily using to do tape restorations, notably for a certain Christian orchestra.  But of course, I *had* to see what it could do with recordings too!:)

I’d love it if others could support me and the wider range of non-mixing work I do, so here’s your chance!

On commercial remasters possibly issued without Dolby A decoding: mistakes, art, or…?

Some background…

I’ve commented on this blog before about the possibly questionable quality of some digital remasters released of late. Common subjective complaints in online fan and hi-fi forums – complaints I’ve made myself, both here and among friends in person – are that some particular remasters might be too loud, too bright, and/or otherwise “overdone” given what we know or perceive of the original source material.  There might well be various artistic or marketing-related reasons for this, so I’m not here to argue for or against these issues.

Further complicating the issue for me, both as a fan and a professional, is that many of these stand-out features are seen overwhelmingly as positive things by many fans, whether they are technically correct or not.  It would seem that a perceived increase in detail and volume outweighs any issues of listening fatigue or known deviation from the presentation of the original master.

I’ve embarked professionally on remastering and restoration processes and have learned, from the coal-face so to speak, much of the reality of what’s involved. To onlookers it appears to be a black art – and believe me, from the inside it can feel a lot like it too!  Sometimes I’m asked by a client or a reference-listener how or why I made a particular decision; and in some cases, especially those without technically verifiable information or logged conversations to go on, I have to go out on a limb and essentially say something to the effect of “well, because it ‘felt right'”, or “because it brings out the guitar *here*, which really flatters the piece”, or some other abstract quality.  At that point I just have to hope the client agrees.  If they don’t, it’s no big disaster – I’m rarely emotionally tied to the decision. I just need to pick up on the feedback, do what I can with it, and move on.  Looking at the process, I guess that’s partly why the word “abstract” appears in my trading name!:)

“Okay, so you know a bit about this from both sides, on with the subject already!”

There are two particular commercial albums in my digital collection, both hugely successful upon their original release, whose most recent remasters have bothered me. It’s not fair to name and shame them, especially not while I await confirmation from engineers/labels that my hunch is correct.  Anyways – I’m bothered not because they’re “bad” per se, but because I bought them, took them home, and from the moment I first heard them, something about them stood out as not quite “right” from a technical perspective. One of them (Album A) was released in 2001, and the other (Album B) earlier this year, in 2015.

What these two albums have in common is that their tonal and dynamic balance is *significantly* different to the original releases, beyond the usual remastering techniques involved with repair, restoration and sweetening of EQ and dynamics carried out to sit alongside contemporary new releases.  The giveaway is that the top-end is both much brighter than usual, and much more compressed – and the result is unnecessarily fatiguing.

Where the two albums differ, then:

  • Album A has not suffered from the “loudness wars”.
    • Its overall dynamics are relatively untouched compared with the original.
    • It appears, looking at the waveform in a DAW, that the album material has been normalised to 0dBFS (so it fills the maximum dynamic range CD has to offer), but it rarely hits such high levels.
  • Album B however, despite never having been a “loud” album on original release, has suffered from the “loudness wars”.
    • Looking at its waveform, it’s clear that it has been maximised; the material has been both compressed and limited such that the original dynamics are squashed, and gain applied such that almost the entire album waveform hits the 0dBFS point (one way to put numbers on this is sketched just after this list).
    • As a result, the album has lost its previous tidal ebb and flow, and while arguably some details are indeed much more audible than before, it no longer has the organic subtlety it once did.  Important instrumental details get masked and actually reduced in level as louder ones come into the foreground, because with that much compression going on, there’s nowhere else for them to go except lower in level.
    • Sure, it’ll play better on an iPod while travelling on the London Underground, or in the car, so it might open up a new market that way – but for the rest of us perhaps looking forward to a better quality transfer to listen to at home or anywhere else, we don’t get that choice.
    • I’ve heard the 2015 vinyl re-release of the latter album, and it seems to not have the same issues – or if it does, nowhere near to the same extremity. There are likely good technical and human reasons for that, but that’s an aside for another post.
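
For anyone wanting to quantify that “squash” rather than eyeballing waveforms, ffmpeg’s ebur128 filter will report integrated loudness, loudness range (LRA) and true peak for each version – a low LRA and true peaks parked at (or above) 0dBFS tell the story. Filenames here are placeholders:

ffmpeg -hide_banner -nostats -i album_B_original.wav -af ebur128=peak=true -f null -
ffmpeg -hide_banner -nostats -i album_B_2015_remaster.wav -af ebur128=peak=true -f null -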

Experiment 1:  Treating the common issues

Last week I had some downtime, and a hunch – a dangerous combination.

Neither album was famed in its day for brightness, except for the singer’s sibilants in Album A causing vinyl cutting and playback some serious headaches if alignment wasn’t quite right. Album B does carry a lot of detail in the top end, but being mostly synthetic, and certainly not a modern-sounding album, the spectral content is much more shifted toward low-mid than anything we’d be producing post-1990.  So there will be some sheen and sparkle, but it should never be in your face, and never compressed.

Such clues told me two things: first, that Dolby A was likely not decoded from the master tape on transfer; and second, that in the case of Album B, further dynamic compression had taken place on top of the un-decoded material.

So – out came a Dolby A decoder, and through it I fed a signal from each album in turn, bouncing the decoded signal back into my DAW for storage and further analysis.  Now please understand, it’s hard (if not impossible) to get a correct level-alignment without the test tones from the master tape, but those of us in the know can make some basic assumptions based on known recording practices of the time – and, once we know what to listen for, we can also judge by the audible results, especially if we have a known-good transfer from the original tape to work with.

All that said, I’m not claiming here that even with all this processing and educated guesswork, I’m able to get back to the actual sound of the original tape! But I am able to get closer to what it ought to sound like…

The result? Instantly, for both albums, the top-end was back under control – and strangely both albums were suddenly sounding much more like the previous versions I’ve been hearing, be it from vinyl, CD or other sources. Album B’s synth percussion had space between the hits, Album A’s live drums had proper dynamics and “room” space. In both albums, stereo positioning was actually much more distinct. Reverb tails were more natural, easier to place, easier to separate reverb from the “dry” source, especially for vocals. Detail and timbre in all instruments was actually easier to pick out from within the mix.  To top it all off – the albums each sounded much more like their artists’ (and their producers’) work. Both albums were far less fatiguing to listen to, while still delivering their inherent detail; and perhaps some sonic gains over previous issues.

Experiment 2:  Fixing Album B’s over-compression

First things first – we can’t ever fully reverse what has been done to a damaged audio signal without some trace being left behind.  Something will be wrong, whether “audible”, noticeable or not.  But, again, an educated guess at the practices likely used, and an ear on the output helped me get somewhere closer to the original dynamics.  But how?

Well, it was quite simple.  One track from the album has a very insistent hi-hat throughout, which comes from a synth.  If we assume that synths of the time were not MIDI-controlled and were likely manually mixed, then that hi-hat should essentially sit at a constant level throughout the piece, barring fade-in/fade-out moves.  And listening to an “original”, that’s pretty much what it does.  But in neither the clean nor my “decoded” version of the later album does it do so.  It jumps up and down in level whenever the other pads and swept instruments come and go.  This was more noticeable on my “decoded” version, but with the frequency and micro-dynamic blends being so much more pleasant, I knew that I’d made progress and that the way forward was to fix the compression if I could.

Out came a simple expander plug-in. Inserting this before the Dolby decoder, and tweaking various settings until the hi-hat sat at a constant level throughout my chosen reference piece, restored the dynamics to something like the original.  In the end we get something like 6–9dB of gain reduction, and the waveform looks far less squashed.  And sounds it, too.

The trick, then, was to listen to all four versions – A, B, A restored, B restored – at similar overall loudness levels, and see which works better.  So far, in this house anyway, we’re happier with the restored versions, even among those unfamiliar with the artistic content.

Prologue – Is this a mistake? And if so, how could it have happened?

When dealing with remasters, especially for older albums, we typically go back to playing analogue tape. There are *many* things that can go wrong here at a technical level. We’re worrying about whether the tape machine is aligned to the tape itself, whether both tape and machine are clean, whether the correct noise reduction technology is used, and whether we’re actually getting all the information we can off that tape.

Then there is the human element. I’ve lost count of the number of times, even in my small sample, that I’ve encountered a DAT or 1/2” reel labelled as being pre-EQ’d or Dolby-encoded with some system or another when in fact it wasn’t. Then there are other, similar labelling and human errors I’ve encountered: perhaps it wasn’t labelled as being Dolby-encoded and it really was. Or perhaps the “safety copy” was actually the clean master, and the “master copy” was actually the cruddy “safety” with a 10dB higher noise-floor, recorded at half-speed on lower-grade tape on an inferior machine that we know nothing about, with the channels swapped randomly due to a patching error in the studio.

Technology, and technicians, like questions with defined, logical answers: “0 or 1”, “yes or no”, “is this right or is this wrong?”. Unfortunately for us, when dealing with music – as with any other art – and so with the musicians, producers and other artists involved in its creation and production, we soon find that the lines between “right” and “wrong” very quickly get blurred.

As an engineer, I’m also all too aware of the dichotomy between my *paying* client (usually the artist) and my *unpaying* client (the listener).  Most of the time the two are in agreement about what a project needs, but sometimes they’re not. The usual issue is being asked for too little dynamic range – “can you turn it up a bit so it sounds as ‘loud’ as everything else?” – and the resulting sound is fatiguing even for me, the engineer, to work with, let alone for the poor saps who’ll be invited to buy it. Sometimes I know that some sounds simply won’t process well to MP3/AAC (that’s less of an issue these days, but it still happens).

Anyways – all that to say: if these albums both suffered the same mistake, if indeed it was one, then even without the myriad artistic issues creeping in, I can see how an unlabelled, undecoded Dolby A tape can slip through the net, blow the ears off an artist or engineer who’s been used to the previously released versions, and get people saying “YEAH, LET’S DO THAT ONE!”:)

CF

Preparing images for Google Slides (and other Google apps too)

User comes to tech, tech solves problem, world moves on. Yawn.

Some months ago, I helped with a project to move someone’s large archive of digital photographs and clipart to Google Drive.  That was easy enough in itself – we installed Google Drive on their Mac, moved everything from the appropriate folder on their Mac to a suitable place inside their Google Drive folder, on a fast (100Mb symmetric) connection, and let time and the Google Drive application do their work.  “Job done…”

Problem 1: Images are over 2MB. Okay, so we’ll shrink them…

…Eeeeeexcept they wanted to use the images immediately, as-is, in Google Slides and other Google apps.  The very first image dropped into their new presentation was rejected as too big.  It was either over 2MB, or over some arbitrary pixel dimensions that the dialog box didn’t tell the user about.  So back the user came to our team, asking what the heck was going on…

Looking at the relevant Google Support page for Docs Editors (as at 10th April 2015, and still not fully populated on 9th November 2015), one might think that just recompressing the images so they *just* squeeze under the 2MB size limit would be enough to comply.  And indeed, on this info, given the thousands of images affected, I synced a copy of the affected Google Drive account to a spare Mac, installed ImageMagick (along with its numerous dependencies) and wrote us a bash script.  Looking at the fileset, I noticed that only the JPGs were over 2MB in size, so I could simply have the script look for any JPG over 2MB, use ImageMagick’s “convert” tool to recompress it, then delete the old file to save confusion at the user end.  The basis of the conversion was this command:

convert image.jpg -define jpeg:extent=1970kb image.jpg.smaller
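
I won’t reproduce the full script here, but stripped of logging and other housekeeping the core loop was something along these lines (“PhotoArchive/” is a placeholder path):

# Find any JPG over 2MB, squeeze it under the limit, then swap it in
# place of the original so the user-facing filename stays the same.
find PhotoArchive/ -iname '*.jpg' -size +2M -print0 |
  while IFS= read -r -d '' f; do
    convert "$f" -define jpeg:extent=1970kb "$f.smaller" && mv "$f.smaller" "$f"
  done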

Sure, we sacrifice some overall quality due to JPG recompression, and the script needs to take care of some housekeeping along the way.  But having looked at a bunch of test-images side-by-side, we decided the work involved, and the results obtained were more than good enough for the intended use-case, and indeed any quality losses were barely detectable in >9 out of 10 cases even when pixel-peeping on a decent calibrated monitor.  So on we went with the live dataset after many test-cases, thinking the job was done.

So with the results looking good, off I went to tell the user that the file conversions were done and to let us know of any problems, while we moved on to the next tasks on our creaking lists.

Problem 2: File size alone wasn’t the issue – pixel dimensions mattered too! D’oh!

Then the dreaded email pinged.  Our poor user had tried to insert the first of the newly recompressed images and, sure enough, it had failed to load, and again the same generic, helpless dialogue box appeared.  Not because it was a bad image in itself – it’s a perfectly valid JPG, and looked very nice despite the high pixel count and our fears over high compression rates and multiple recompression steps.  These images are intended for end use after all, not for further editing, and certainly not for anything other than on-screen viewing at low DPI from a distance.

I had to confirm the issue for myself: dragging a JPG onto the insert-image tool’s “upload from computer” window finally got Google Slides to tell me that the image was too big – and finally gave me the actual limits I was supposed to be working to.

Great.  Now I need to go off and resize my images.  Again.

So, off I went to find a fresh copy of the original images in their original folders (you do still keep backups of what’s on your cloud storage, right?). Then I worked on a new script that would resize the images to fit inside a 3500×2500 window, preserving aspect ratio, and would again work for GIF, JPG or PNG, since those are all supported by Google Docs (the core of that step is sketched below).  THEN I ran the same recompression script as before on any files that were still too big after downscaling.  Overall the process took much the same amount of time as the first run, but with the advantage that the end result looked much better, right up to the limits of the pixel dimensions and the file size.
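
The heart of that resize step is ImageMagick’s “shrink-only” geometry flag: the trailing “>” means “only resize if the image is larger than 3500×2500”, and aspect ratio is preserved automatically. Again, the path is a placeholder, and the 2MB recompression pass from earlier still runs afterwards:

# Shrink any GIF/JPG/PNG that exceeds 3500x2500 to fit within it, in place,
# preserving aspect ratio; smaller images are left untouched.
find PhotoArchive/ \( -iname '*.jpg' -o -iname '*.png' -o -iname '*.gif' \) -print0 |
  while IFS= read -r -d '' f; do
    mogrify -resize '3500x2500>' "$f"
  done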

Summing up

Some time on our end testing the full end-to-end process would have saved both us and the user some time and hassle, for sure – so my own lesson here is that a bit of short-sightedness on our part, for the sake of trying to “get back to other things”, most certainly bit us all in the bum.  In our defence, however, the process would have been *much* easier had the image dimensions, aspect ratio and file size limitations all been given up-front – NOT just at the point of an image throwing up an error, but also on ALL appropriate import screens, and in accompanying documentation that we can search without having to reproduce the user’s situation.  Another couple of lessons for both developers and support folk here!:)

abstractnoise reel-to-reel demos

This week I’ve been playing with our recently acquired Revox B77 1/4″ recorder. It’s a stereo half-track model, and I use it at 7ips. Currently I’m not sure what actual tape I’m recording to, as it’s a 20ish-minute offcut from a reel that was found to be blank at the end of a transcription job. The two tracks presented here represent two very different production methods now open to me. 

“Changes Afoot” was sent track-by-track to tape, from DAW, and back again, to produce a digitally mixed master. A couple of those tracks were recorded particularly “hot” to bring out more tape character. No noise reduction was used at all. Signal-to-noise on any single tape recording in this setup was found to be around 60dB.

“Innocence 2010” started as a digital stereo pre-master that I was never fully happy with, which was sent to tape via a skunkworks noise reduction system I’m working on behind the scenes; the system is not yet complete, but has enough processing built to give a useful 10dB gain in signal-to-noise ratio without any significant audible artefacts, as borne out by the 69-70dB signal-to-noise ratio found in this setup even after peak-limiting.

I suspect the signal-to-noise ratio is limited by noisy pre-amps on my DAW setup; I’ll need to swap to a different audio interface to confirm. That’s something to play with another day. Overall I’m VERY impressed with the overall sound quality this kit is able to deliver, and the range of analogue “colours” it can provide. I’m really looking forward to finishing the skunkworks noise reduction project; I have my eyes set on somewhere near 24dB noise reduction once it’s fully up and running. But it’s good to prove that I can both encode and decode on-the-fly!

Watch this space!
