//abstractnoise

Professional London-based audio and tech geek

Pilgrim’s Pod Radio Hour – Episode 3

UPDATE (26/2/2014):

This is the edited version, to keep the show length under an hour, and to tidy up some slower-moving passages.

ORIGINAL POST:

Another episode was recorded on Friday 7th February.  A slightly different feel to this one – with more spoken content. Featuring Liz Jadav and Phil Gallagher.

Technical notes

This time, the live-stream was sourced from the software mix that created this edited recording.  I’ve fixed a mistake where I ran out of hands to sort the live-stream mix during the intro, and we re-recorded a song with Paul after he’d choked on some water just before his song!  Aside from those issues, the stream levels were much more easily managed this way, and mixing the recording live with the usual processing in place also made this edit much quicker to produce!

Also new to us was a Superlux S502 ORTF mic (select “English” from the top-right of the linked page), used for room ambience and audience.  Compared with the AKG 451s we were using, rigging was much simpler, and the resulting sound was slightly more consistent.  I’m really pleased with this mic in this and some other applications; subject for another post, I’m sure!

Getting an EMU 0202USB working with a Raspberry Pi

In the last couple of weeks, out of curiosity, I’ve bought a Raspberry Pi to play with at home.  It’s really very impressive to see what can be done these days with a $35 computer – an “educational” model at that!

Our Pi is currently in place as our digital audio player, courtesy of the Volumio Linux “audiophile” distribution and an EMU 0202 USB audio interface.

Once the Pi was booting Volumio off the SD card, I found two things that needed doing:

  1. Set up the Pi to pull files off our NAS device.  In theory this can be done from the Volumio web interface, but I had to go hacking around editing config files to make this work seamlessly (a sketch of the kind of mount entry involved follows this list).
  2. Set up the EMU for optimal digital playback.  I take a somewhat different path here from most “audiophiles”: I’m specifically aiming to implement a software volume control, provided I can run the digital audio chain at 88.2 kHz/24-bit or higher.  This means CD/MP3 content gets upsampled, while some recordings made natively at 88.2 kHz/24-bit get played that way.
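
For anyone attempting point 1 themselves, here’s a minimal sketch of the kind of /etc/fstab entry involved.  The NAS address, share name, mount point and options are placeholders for illustration only – your NAS may need different CIFS options, or proper credentials rather than a guest login:

    # /etc/fstab - mount a read-only NAS music share for the player to index
    # (example values only; adjust address, share, mount point and options to suit)
    //192.168.0.10/music  /mnt/nas-music  cifs  ro,guest,iocharset=utf8,nofail  0  0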

The Volumio forums helped me out with point 1, but I’ve lost a lot of brainpower and free time to getting the EMU to work properly.  I could get it to play out at 44.1 kHz/24-bit, but any attempt to play native files at higher rates, or to have MPD upsample, resulted in obviously robotic-sounding distorted playback.  It turns out the key was simple:

It seems the clock rate on the EMU 0202 and 0404 USB devices is assigned to a fader in ALSA, which in this case I accessed using alsamixer.  There were two faders for my 0202: “PCM” and “Clock rate Selector”.

The latter has a range of stepped values, equating to the following sample rates:

  •   0% – 44.1 kHz
  •  20% – 48.0 kHz
  •  40% – 88.2 kHz
  •  60% – 96.0 kHz
  •  80% – 176.4 kHz
  • 100% – 192.0 kHz

What I’ve learned, then, is that to get the setup working I needed not only to set Volumio (or the underlying MPD player) to resample to the target output rate of 88.2 kHz/24-bit, but ALSO to set the “Clock rate Selector” to 40% in alsamixer.
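
To make that concrete, here’s roughly what those two settings look like away from the web interface.  This is a sketch only: it assumes the EMU is ALSA card 0 and that the control is named exactly as alsamixer showed it on my system, so check your own card index and control names before copying anything:

    # set the EMU's clock to 88.2 kHz (40% on the "Clock rate Selector" fader)
    # card index and control name are assumptions - verify with: aplay -l ; amixer -c 0 scontrols
    amixer -c 0 sset 'Clock rate Selector' 40%

…and in mpd.conf (MPD being the player underneath Volumio), something along these lines to force resampling to 88.2 kHz/24-bit:

    audio_output {
        type     "alsa"
        name     "EMU 0202 USB"    # name and device are examples - match your own card
        device   "hw:0,0"
        format   "88200:24:2"      # 88.2 kHz, 24-bit, stereo
    }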

All works happily and I’m loving the more “analogue” sound of the EMU in that mode!

UPDATE, 23RD FEB 2014:

I’ve managed to get MPD to resample reliably to 176.4 kHz/24-bit (32-bit internally, 24-bit at the card) by forcing the Pi’s turbo mode to “always on” and applying a slight overclock. It’s not *quite* perfect yet, so I might see if I can push it a little harder before documenting our full setup.
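
For anyone wanting to try the same, the turbo and overclock settings live in /boot/config.txt on the Pi.  The values below are only an illustration of what I’d call a “slight” overclock on an original Model B (stock 700MHz), not a recommendation – overclock at your own risk:

    # /boot/config.txt - example values only
    force_turbo=1   # turbo "always on", so the CPU stays at full speed during playback
    arm_freq=800    # a slight overclock over the stock 700MHz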

Rocky road ahead: Google Cloud Print (BETA)

Background

An organisation whose IT team I know well has moved a lot of their services across to various Google platforms.  The move has been considered largely positive by users and management alike, not least because it has significantly reduced the management and infrastructure burdens on their organisation, and has genuinely improved IT-related life in many key ways.

The move therefore continues apace.  One problem identified by the organisation is that there seems little sense in paying c.£500-£1000 per head for a computer setup that spends the vast majority of its time being used (legitimately) as a web browser.  The various Chromebooks undergoing trial have been a huge success given their planned usage, but with one common problem:  Users in 2013/14 STILL need to be able to print.

[Enter Google Cloud Print (BETA), Stage Left]

“No problem!” says Google. “Here’s Cloud Print!”  There are two flavours of documentation presented, in “consumer” and “IT Administrator” guises, both essentially saying (and ultimately describing) the same thing.

For those who haven’t come across it yet – the idea is that you buy a “Google Cloud Print Enabled” printer, give it 24/7 power and Internet, and you can print to it from anywhere, using your Google account in various creative ways.  Specifically for my friend, it gives print access to Chromebooks and other portable devices for which no other good printing solution exists.  Essentially, if it can run Google Chrome, it can print.  And the concept is really neat.

Forecast: Storms ahead

Every cloud has its thunderstorms, however, and this service is no exception.  I’ve heard a few common complaints in various pub conversations, and have even investigated a few when I’ve experienced them myself within my own Google Apps domains:

  • First off, some printers, once correctly hooked-up and signed-in, simply stop receiving Cloud Print jobs.  Often, turning them off and back on, and waiting up to a day, solves it.  But sometimes the log-jam becomes permanent.  Printing via local network or direct USB connection works fine from machines that can do it, but all Cloud Print jobs get stuck, forever destined to be “In Progress”.
  • The Cloud Print Management interface looks surprisingly mature for a Beta product, except that it gives very little information about what is really happening.  Once a job inevitably gets stuck, there’s no option to do anything other than to wait, or delete it.  It can’t be diverted to another printer.
  • More worryingly, the status codes are too general.  Sure, I don’t need a verbose running commentary when things are working well, nor perhaps when a job is “in progress”.  But when things get stuck, I’d like more information about the problem than the job simply being flagged “Error”.
  • Google provides no technical support for Cloud Print – so beyond what you can find in documentation provided either by Google or your printer manufacturer, you’re on your own.  No support. No apparent feedback mechanism even.
  • If something does go wrong, often the only way to fix it is to delete the printer in Cloud Print and re-assign it.  This might be fine for single home users, but for anyone looking to share a printer between two or more people it gets complicated, because you then need to share the newly set-up printer again with those who need it.
  • Then there’s the pervading security concern.  Where are those jobs going when travelling between the browser and the printer, and in what format?  Are they encrypted?  Are the documents scanned for content by Google or anyone else on the way?

Google comes close to a partial answer on the FAQ/support page, with the following statement:

Documents you send to print are your personal information and are kept strictly confidential. Google does not access the documents you print for any purpose other than to improve printing.

For home users, that might be good enough.  At least there’s *something* in writing.  But for a business, I’d suggest it’s too vague.  Let’s leave that alone for a moment and look at troubleshooting: how do I get a print queue working again if I’m using a cloud ready printer?  Again, Google has a partial answer:

If you’re using a cloud ready printer…

Okay, done that, and checked that.  Still nothing.  Now what?

Conclusions?

Some reading this might say I’m being too harsh on what is *really* only a beta product.  And they might be right, if the product had been released as betas traditionally are: marketed only to technically interested (and competent) people for evaluation, feedback and improvement before a wider release.  What’s happened instead is that some printer manufacturers have jumped onto the product by offering support (good), but without making it clear that this is a BETA service which may change, break or be taken offline at any time, without warning (bad. Very bad).

Even the business run-down provided by Google doesn’t mention its BETA status, and gives no clue as to where support can be found or how (useful) feedback can be submitted.

So, is this going to be like so many other recent Google BETA products: gather half a head of momentum and then suddenly be killed?  Or will it actually mature, like Gmail did, into a properly supported service, with SLAs available to those who need them?  Only time will tell, but meanwhile, based on what I know now, I’m finding it very hard to recommend deploying Google Cloud Print in my own organisations in its present form…

Random Gmail annoyance – sorting Inbox to show oldest messages first

It is 2013.  Sure, my email practices are probably based on conventions from 1993, but this is an ongoing personal frustration. I should say up-front that it can of course be solved by using a third-party mail client, which I do (on occasion).

On a desktop or laptop device, I’ve always preferred to run with older messages showing first, since they’re the ones that are most likely to *become* important – someone who has been waiting for a week is more likely to need their answer *now* than someone who has only just got in touch a minute ago.

It also means I can think “forward” of where I currently am, no matter what point of the day or workflow I encounter it.  Having to constantly think both forwards and backwards, particularly when dealing with the user interface elements involved in navigating, dealing with and filing messages or whole conversation threads, feels completely counterproductive.  Actually it’s worse – it’s nudging me towards falling into the “tyranny of the urgent” rather than dealing with what’s actually “important” right now.

On the one hand, one could argue that not having this facility forces me to re-evaluate the whole queue every time I look at it, which some would call a healthy discipline.  On the other hand, I find that this approach saps functional time and energy away from the things that really *do* need doing.  And I don’t like that.

So come on, Google – how about enabling that option for more than just multi-page lists?

Google – Another step backward for UI design?

It really doesn’t feel like much time has passed since Google launched the “black bar” to navigate around Docs/Calendars/other services.  And over time, many of us have come to rely on it being there.

Roll on a couple of years (wow, has it really been that long already?), and now we get this:

[Image: the new apps grid icon in the Google toolbar]

Yup. That’s a grid, buried among a couple of other things that rarely get used.  Click on it, and a list of icons appears to take you to your chosen service.  All well and good, except you have to click again to go there.

Those of us relying on pattern or muscle-memory to get things done intuitively will balk at this for a few reasons:

  1. We now need to click twice to get a simple thing done.  Surely hovering over the grid should bring up the menu?
  2. The grid is in no way intuitive – looking at the icon doesn’t tell me anything meaningful about what it’s going to do if I click on it.
  3. The grid is in a completely different place on the page from where the old navigation bar was.

A little car analogy: when I take my car for its annual service, I need to know it will come back with key consumables replaced under the hood, but with the key controls (gas and brakes, for example) in the same place, doing the same things, as when I left the car at the garage.  I don’t want to have to relearn where the pedals are, and what each does, every time I head off on a new journey.  Likewise with software.  Changes and improvements are a good thing, but only when managed in a way that allows the majority to keep up, and to operate the machinery safely in the way they were first trained to.

It’s the small things like this (and Ars Technica has an interesting article listing similar things here) which are turning many of my tech-embracing friends and relatives away from the tech they bought, because they don’t use it often enough to cope with relearning pretty much every task they set out to achieve.  Many of them might only perform a task once every year or two, yet every time they do, enough little things have changed that they’re effectively learning the process again as a new user.

I think that’s a clear example of technology creating more stress and more hassle – far from enabling things by reducing effort and overheads.

Am I the only one thinking this way?

Mid-2012 MacBook Air 13″ – fixing one form of “Black Screen of Death”

Various online forums are abuzz with MacBook Airs of the 2012 and 2013 flavours suffering the “Black Screen of Death” – apparently the machine, mid-use, decides either to shut down completely or just to shut its display off.  It’s the latter case I’m most interested in here, since a colleague just presented me with her mid-2012 128GB, 1.8GHz, 4GB-RAM model.  It’s still exactly as it was when it came out of the box.

The problem

The machine shut down mid-use, and subsequently would boot only as far as turning on the display backlight.

The (apparent) solution

The PRAM (Parameter RAM) reset: hold down the ALT, CMD, P and R keys together immediately after pressing the power button.  While the keys are held down, the machine will reboot with a “clang”.  I usually hold the key combo down until the clang has happened three times, releasing the keys on the third.  This may be superstition, as one cycle might be enough, but it’s a habit from my bad old days doing the same trick on older G4-based iMacs that still hasn’t shifted.

The result

The MacBook Air immediately booted as normal; within a few seconds I was greeted with the usual FileVault 2 login screen, and the machine has behaved impeccably since.

Further preventative maintenance

Apparently the machine had missed a few software update cycles, so I installed everything available, including a Thunderbolt firmware update and the recently-released 10.8.5 update.

Online music streaming – missing a note or two?

[Image: Google Play logo, courtesy Wikipedia]

Quick thought, while I’m procrastinating…

While I’m not planning to let go of physical media anytime soon – not least the vinyl collection – I’m becoming a huge fan of Google Play and its ability to play music “uploaded and matched” from my own collection.  Real bonuses for me are that this happens at no extra cost to my Google Apps domain, and it seems to work well wherever I have a reliable ’net connection.  The quality when listening via headphones and Google Chrome on a laptop is surprisingly good considering they’re MP3s – possibly transparent enough to pass a proper ABX test against the original uncompressed digital stream from CD.

But something is different, and something is missing… quite a lot of things are missing actually.

Where’s the song information?

Geeks might call this “metadata”.  The information about the making and content of a recording is as useful to me as the content itself.  I like knowing things like who wrote the song I’m listening to.  I might want to check the lyrics.  I might also want to know whether I’m listening to a particular remaster or reissue.  While the content and artwork are there on Google Play, I’ve got absolutely no idea at first glance which exact version or release of a song I’m listening to.

At present, I know who the release artist is for a song as it plays, and which album it’s from.  I can even see the album artwork for the majority of my collection, as well as a release year.  What I don’t know, without doing a *lot* more digging, is whether the particular copy of “Bohemian Rhapsody” I’m listening to is from a 1990s remaster or from the more recent (2011?) remasters.  I’m not ordinarily such a geek – a great song is a great song whatever medium it’s carried on.  But it’s good to know nonetheless.  Especially if I happen to like the work of a particular mix/master engineer, or if I purchased a particular CD release of an album for its known heritage, only to have it matched to another version that sounds noticeably different.

I think it would be really nice if digital streaming/shop purveyors could actually provide the full information for the songs they’re sending us.  There are more people involved in most major releases than just the artists, and it’s only right that they get the credit, even if the information serves no other significant commercial purpose.

What even made me think of this?

Listening to the current version of Queen’s “A Kind of Magic” up on Google Play, I’m noticing a lot more musical and tonal detail in the recordings than I remember from my own CD copies.  This is an album I’ve known for the whole of my musical life, and I therefore have some very strong memories of it, and can recall absurd amounts of detail regarding both musical arrangements and sonic character and how they were reproduced differently in each of the releases I’ve owned copies of.  Since I’m hearing so many new things despite listening on familiar equipment, I’d like to understand where they come from.  Since I like the differences, I’d like to know if they are due to a particular engineer’s approach to remastering, and whether I can find more by the same engineer.  Or whether I can learn something about the engineering approach that led to the result I liked so much.

On the one hand the freedom offered by always-on streaming access like this is wonderful – but on the other it comes with a lot of compromises, and with a lot of things “hidden” from view that I feel really should be open to us all…

Touchfreeze – useful tool

Been a while since I last used my Asus EeePC 1011PX for serious typing.  So it came as something of a surprise that, despite the latest Elantech touchpad drivers being installed, the touchpad was *still* being accidentally activated while typing.

So, out went the driver – it simply didn’t function in Windows 8.  Perhaps it doesn’t really support my particular hardware, or perhaps it’s an OS problem.  Either way, it was a whole lotta software for not a lot of function.

Instead, I’ve installed Touchfreeze from the Google Code project.  Left as installed, in automatic mode, it seems to be doing the job just fine, and I can carry on typing huge reams into OneNote 2010 with ease!

Feia – cassette restoration case-study

After a few weeks playing with head alignments, audio interfaces, decks, plugins and sanity, I’ve run off a successful “first draft” attempt at restoring these interesting recordings.

About the cassettes themselves…

The cassettes themselves are a little odd – they appear to use Type-II (CrO2) shells, but I can’t tell from listening or visual inspection whether the formulation on the tape is actually Type-I (Ferric) or Type-II.  Both tapes seemed to sound better with Type-I playback EQ, selected in each case by blocking the tape-type holes in the shell with judicious use of Scotch tape.

Noise levels on the tapes were horrendous.  Both cassettes seem to have been recorded about 10dB quieter than most commercial tapes given to me in the same batch, and seem to have suffered significant loss of high frequencies – something I noticed getting audibly worse with each playback pass, despite cleaning and demagnetising the heads before each run.  At best I was getting something like 15dB signal-to-noise before noise reduction.  Much of this is broadband noise, but there’s also a significant rolling static crackle on the right channel, which seems to match the rotational speed of either the pinch-roller on the deck or the guide rollers inside the tape shell itself.

Playback

Something I’ve always known about the Akai deck I’ve now inherited and restored to working condition is that it has always played a little fast.  While I’ve not been able to fix this at a hardware level (it seems to involve fiddling with the motor control circuits – a major stripdown and rebuild I’m not convinced I have the time or confidence to complete without an accident), I have taken an average of how fast the machine plays by comparing songs from an assortment of pre-recorded commercial cassettes with digital copies from CD or previews on iTunes.  From this I discovered that pulling the playback speed of the sampled audio down to 95.75% gives an acceptable match (within a second or so across the side of a cassette) to the commercially available digital versions.  This is really easy to do in my audio software, as it doesn’t involve convoluted resampling and slicing to keep the original pitch.
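
I did this inside my workstation software, but the same fixed-rate correction can be sketched with a command-line tool such as SoX, whose “speed” effect shifts tempo and pitch together – exactly the behaviour described above, with no pitch-preserving time-stretching.  File names here are placeholders:

    # play the capture back at 95.75% speed; pitch and duration change together,
    # undoing the deck's fast transport without any time-stretching artefacts
    sox side-a-capture.wav side-a-corrected.wav speed 0.9575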

Noise reduction

Challenges

A significant HF boost was required to get the tape sounding anything like a natural recording, which of course brings the noise levels up.  I don’t have access to an external Dolby decoder, and the Akai deck used for the transfers sounds very strange with Dolby B engaged, even on well-produced pre-recorded material that came to me in excellent condition.  The Denon deck I have is technically better than the Akai in many ways, but to beat the Akai in sonic terms it needs about an hour spent on alignment (per cassette), and the source material needs to be in excellent condition.  So I proceeded to transfer the content from the Akai at its known higher running speed, without Dolby decoding, in the hope of being able to fix this later in software.

Decoding for playback

There is a lot said online about the mechanics of Dolby B, and many people think it’s a simple fixed 10dB shelving HF EQ boost (emphasis) on recording, easily dealt with by a matching shelving HF EQ cut (de-emphasis) on playback – or even by simply doing nothing with older tapes that have suffered HF loss.  Well, without going into detail that might infringe patents and/or copyright, let me tell you that even from listening to the undecoded audio, it really isn’t that simple.  What we’re dealing with here is a form of dynamic processing, dependent on both the incoming frequency content AND the incoming level.  Even with the modest amount of noise reduction it offers at best, it’s a beastly clever system when it works, and remarkably effective in many environments; but, as with many complex systems, it makes a lot of assumptions and is open to a lot of factors influencing the quality of its output.

Working up a solution

Having no access to a known-good hardware decoder that could be calibrated to the tape, I set about using a chain of bundled plugins in my Reaper workstation software to mimic the decoding process. Having been through the process, with hindsight I can see why there are so few software decoders for Dolby B on the market, even without considering the patenting issues surrounding it. It’s a tough gig.

For this process, I picked out the best-sounding pre-recorded tape in our collection and aligned the Denon deck to it, listening for the most consistent sound, running speed and Dolby decoding.  I got a sound off the cheap ferric formulation that came subjectively very close to the same release on CD or vinyl in listening quality – the tape suffering only slightly from additional HF grain, with some print-through and background noise evident only when listening at high levels on headphones.

I then aligned the Akai to the same tape before sampling (without Dolby B decoding) and correcting for speed. A rip of the CD, and the samples from the Denon, were used as references as I set about creating the software decoding chain – keeping overall levels the same between reference and working tracks to ensure I was comparing like with like.

A day was spent setting up and tweaking the decoder chain before I arrived at one that gives equivalent subjective performance to what the Denon deck can do with great source material.  I tried the same settings on a variety of cassettes, and was able to repeat the results across all of them…

Content, replication and mastering issues?

…until I came to the content of the Feia tapes I was planning to work on!

Once the cassettes were digitised, and playback speed and overall frequency response corrected, each side of the two tapes was given its own stereo channel, so that individual EQ, channel-balance and stereo-width settings could be applied per side – I’d noted differences in each of these areas that were consistent within each side of each cassette.

While listening to the digitising run, without playback-speed correction, I noted a 50Hz hum in the recordings that was common to all the sampled media.  I tracked this down to signal-grounding issues between the audio interface, the monitor amplifier and the cassette deck.  No amount of tweaking that signal chain could get rid of it, and with the tapes sounding significantly worse on each playback pass, the only way forward was to remove the hum using an FIR/FFT plugin.  I therefore set one up on each of the stereo channels, sampled a section of the noise (without the content) into each filter, and tweaked the removal settings to be more subtle than the default – this removed the hum but left the remaining signal (including bass notes passing through the hum and its harmonic frequencies) intact.
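
For anyone without a DAW to hand, the same “sample the noise, then subtract it” idea can be roughly approximated with SoX’s noise-profiling effects.  To be clear, this isn’t the plugin chain I used – it’s just an illustration of the approach, with placeholder file names and a deliberately gentle reduction amount:

    # build a noise profile from a couple of seconds of hum/noise with no programme material
    # (here, the first two seconds of the capture)
    sox side-a.wav -n trim 0 2 noiseprof hum.prof

    # apply the profile gently; the final argument is the amount (default 0.5 - lower is subtler)
    sox side-a.wav side-a-dehum.wav noisered hum.prof 0.1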

Each stereo channel was then taken out of the master mix and routed to two more stereo channels – one for the noise-reduction decoder and the other for the side-chain trigger telling the decoder what to do.

Listening to the results at this stage was intriguing.  Even after tweaking the decoder threshold levels I noted a general improvement in signal quality and a reduction in noise levels, but still a strange compression artefact on the high frequencies.  This got me wondering whether the labelled Dolby B encoding was actually a mistake, and whether Dolby C had been applied instead.  Cue another day spent mimicking the Dolby C system by tweaking my homebrew decoder.  Nope – the compression was still there, and the overall spectral effect of decoding Dolby C was far too strong on the mid and high frequencies.

So, on to the next likely candidate: dbx noise reduction.  I found out more online about how it works and created an encode/decode chain in software, using a ripped CD track as source material.  Applying the decoding stage to the Feia recordings was dynamically a little better in the top end, but still not right.

Combining the homebrew Dolby B chain with a little dynamic expansion on the top 12dB of the recording made a useful difference.  Suddenly transients and sibilants sounded more natural, with more “bite” and less splashiness on the decay, particularly at higher frequencies.

Neither tape is sonic perfection itself, even after this restoration, but I’ve learned a lot through it, and now have a much better understanding of why cassettes *can* sound great but generally don’t, especially when recordings made on one deck are played back on another.  I now realise that I’d far rather deal with vinyl and pre-digitised content than extract it from >20-year-old compact cassettes!  At some future point I’ll likely post up some before/after samples so you can judge the results for yourself.
