
//abstractnoise

making sound considerate

Category: Technology

Raspberry Pi HDMI audio vs USB CPU use

Quick note after some experiments last night. Not completely scientific, but enough to show a trend. I set out to compare the CPU usage of the Pi running Volumio, upsampling lossless 44.1kHz/16-bit stereo to 192kHz/32-bit stereo with the ‘fastest sinc’ converter.

Streaming to the USB interface uses between 70% and 90% CPU. Streaming to the HDMI output uses 95% and more! Audio gets choppy in the latter case even without other processes getting in the way, whereas the former only gets choppy when the Pi happens to try to update the MPD database at the same time.

I wonder if anyone knows why onboard streaming should use so much extra CPU time to do the same work, and whether I2S output suffers the same fate. I’m not sure I want to spend money on a custom DAC if the current EMU 0202USB is more efficient.
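
For anyone repeating the comparison, here’s a rough sketch of how one might watch the player’s CPU use from a second SSH session; it assumes Volumio’s playback process is named mpd, which it was on my build:

    # Print mpd's CPU usage once a second while a track plays
    top -b -d 1 | grep --line-buffered mpd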

Quick AppleScript debugging tip

It’s been a while since I last had to debug some basic AppleScript – and it’s fair to say programming and scripting really aren’t my cup of tea. I don’t really *know* much about either skill, but with Google and enough time/coffee I can sometimes roll my own or call out simple errors in others’ code.

To help solve today’s problem (a script saving a file to the wrong location despite path names apparently being set correctly), it really helped to do two things:

  1. Use the “say” command to announce each stage of the script, sometimes announcing the outcome (such as the pass or fail of an “if” statement).
  2. Use the “say” or “display dialog” command to announce key variables as they are set or manipulated. Dialogs are useful for long strings (like the full file-name path I was working on) as they remain visible until you click OK.
These tricks probably seem silly or “childish” to pro programmers, I’m sure, but they really helped me understand the code and its structure, so that I could see where a variable was being misinterpreted and apply a fix.
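
For illustration, here’s a minimal sketch of the technique; the folder, file name and path check are hypothetical stand-ins, not the actual script I was debugging:

    -- Announce progress while a file path is built up
    set targetFolder to (path to desktop as text)
    set fileName to "report.txt"

    say "Starting the script"
    set fullPath to targetFolder & fileName

    -- Announce the outcome of a branch
    if fullPath contains "Desktop" then
        say "Path check passed"
    else
        say "Path check failed"
    end if

    -- Dialogs stay visible until dismissed, so they suit long strings
    display dialog "Full path is: " & fullPath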

Pilgrim’s Pod Radio Hour – Episode 3

UPDATE (26/2/2014):

This is the edited version, to keep the show length under an hour, and to tidy up some slower-moving passages.

ORIGINAL POST:

Another episode was recorded on Friday 7th February. This one has a slightly different feel, with more spoken content. Featuring Liz Jadav and Phil Gallagher.

Technical notes

This time, the live stream was sourced from the software mix that created this edited recording. I’ve fixed a mistake where I ran out of hands to sort the live-stream mix during the intro, and we re-recorded a song with Paul after he’d choked on some water just before his performance! Aside from those issues, the stream levels were much more easily managed this way, and mixing the recording live with the usual processing in place also made this edit much quicker to produce!

Also new to us was a Superlux S502 ORTF mic (select “English” from the top-right of the linked page), used for room ambience and audience. Compared with the AKG 451s we were using, rigging was much simpler, and the resulting sound was slightly more consistent. I’m really pleased with this mic in this and some other applications; a subject for another post, I’m sure!

Getting an EMU 0202USB working with a Raspberry Pi

In the last couple of weeks, out of curiosity, I’ve bought a Raspberry Pi to play with at home.  It’s really very impressive to see what can be done these days with a $35 computer – an “educational” model at that!

Our Pi is currently in place as our digital audio player, courtesy of the Volumio Linux “audiophile” distribution and an EMU 0202 USB audio interface.

Once the Pi was booting Volumio off the SD card, I found two things that needed doing:

  1. Set up the Pi to pull files off our NAS device. In theory this can be done from the Volumio web interface, but I had to go hacking around editing config files to make this work seamlessly (see the sketch after this list).
  2. Set up the EMU for optimal digital playback. I take a somewhat different path on this from most “audiophiles”: I’m specifically aiming to implement a software volume control, provided I can run the digital audio chain at 88.2kHz/24-bit or higher. This means CD/MP3 content gets upsampled, while recordings made natively at 88.2kHz/24-bit get played that way.
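
For reference, here’s a minimal sketch of the kind of mount that made point 1 work for me; the server name, share, mount point and credentials are placeholders, not my actual setup:

    # /etc/fstab – mount the NAS music share over CIFS at boot
    //nas.local/music  /mnt/nas  cifs  username=volumio,password=secret,ro,iocharset=utf8  0  0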

The Volumio forums helped me out with point 1, but I’ve lost a lot of brainpower and free time to getting the EMU to work properly. I could get it to play out at 44.1kHz/24-bit, but any attempt to play native files at higher rates, or to have MPD upsample, resulted in obviously robotic-sounding, distorted playback. It turns out the key was simple:

It seems the clock rate on the EMU 0202 and 0404 USB devices is assigned to a fader in ALSA, which in this case I accessed using alsamixer.  There were two faders for my 0202:  PCM and Clock rate Selector.

The latter has a range of stepped values, equating to the following sample rates:

  •   0% 44.1kHz
  •  20% 48.0kHz
  •  40% 88.2kHz
  •  60% 96.0kHz
  •  80% 176.4kHz
  • 100% 192.0kHz

What I’ve learned, then, is that to get the setup working I needed not only to set Volumio (or the underlying MPD player) to resample to the target output rate of 88.2kHz/24-bit but ALSO to set the Clock rate Selector to 40% in alsamixer.
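
To make that concrete, here’s a hedged sketch of the two settings together. The MPD directives are the standard audio_output_format and samplerate_converter options; the card number and control name are what alsamixer and aplay -l reported on my system, and may well differ on yours:

    # /etc/mpd.conf (excerpt) – resample everything to 88.2kHz/24-bit
    audio_output_format "88200:24:2"
    samplerate_converter "Fastest Sinc Interpolator"

    audio_output {
        type    "alsa"
        name    "EMU 0202 USB"
        device  "hw:1,0"    # card/device number: check aplay -l
    }

The matching ALSA control can also be set from the shell instead of alsamixer:

    # 40% on the Clock rate Selector fader = 88.2kHz on my 0202
    amixer -c 1 sset 'Clock rate Selector' 40%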

All works happily and I’m loving the more “analogue” sound of the EMU in that mode!

UPDATE, 23RD FEB 2014:

I’ve managed to get MPD to reliably resample to 176400Hz/24-bit (32-bit internal, 24-bit at the card) by forcing the Pi’s turbo to “always on” and applying a slight overclock. It’s not *quite* perfect yet, so I might see if I can push it a little harder before documenting our full setup.
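
For the record, a sketch of the relevant /boot/config.txt lines; the frequency shown is a guess at a mild overclock for a 700MHz Pi, not necessarily what any given board will tolerate:

    # /boot/config.txt – keep the CPU at full speed, with a mild overclock
    force_turbo=1    # turbo "always on" rather than on-demand
    arm_freq=800     # stock is 700MHz; raise cautiously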

Rocky road ahead: Google Cloud Print (BETA)

Background

An organisation whose IT team I know well has moved a lot of their services across to various Google platforms. The move has been considered largely positive by users and management alike, not least because it has significantly reduced the management and infrastructure burdens on the organisation and has genuinely improved IT-related life in many key ways.

The move therefore continues apace. One problem the organisation has identified is that there seems little sense in paying c.£500-£1000 per head for a computer setup that spends the vast majority of its time being used (legitimately) as a web browser. The various Chromebooks undergoing trial have been a huge success given their planned usage, but with one common problem: users in 2013/14 STILL need to be able to print.

[Enter Google Cloud Print (BETA), Stage Left]


“No problem!” says Google, “Here’s Cloud Print!”.  There are two flavours of documentation presented, in “consumer” and “IT Administrator” guises, both essentially saying (and ultimately describing) the same thing.

For those who haven’t come across it yet, the idea is that you buy a “Google Cloud Print Enabled” printer, give it 24/7 power and Internet, and you can print to it from anywhere, using your Google account in various creative ways. Specifically for my friend, it gives print access to Chromebooks and other portable devices for which no other good printing solutions exist. Essentially, if it can run Google Chrome, it can print. And the concept is really neat.

Forecast: Storms ahead

There’s a thunderstorm lurking in most clouds, however, and this service is no exception. I’ve heard a few common complaints in various pub conversations, and have even investigated a few when I’ve experienced them myself within my own Google Apps domains:

  • First off, some printers, once correctly hooked up and signed in, simply stop receiving Cloud Print jobs. Often, turning them off and back on and waiting up to a day solves it. But sometimes the log-jam becomes permanent. Printing via the local network or a direct USB connection works fine from machines that can do it, but all Cloud Print jobs get stuck, forever destined to be “In Progress”.
  • The Cloud Print management interface looks surprisingly mature for a Beta product, except that it gives very little information about what is really happening. Once a job inevitably gets stuck, there’s no option to do anything other than wait, or delete it. It can’t be diverted to another printer.
  • More worryingly, the status codes are too general. Sure, I don’t need a verbose running commentary when things are working well, nor perhaps when a job is “in progress”. But when things get stuck, I’d like more information about the problem than the job simply being flagged “Error”.
  • Google provides no technical support for Cloud Print – so beyond what you can find in documentation provided by Google or your printer manufacturer, you’re on your own. No support. No apparent feedback mechanism, even.
  • If something does go wrong, often the only way to fix it is to delete the printer in Cloud Print and re-assign it. This might be fine for single home users, but for anyone looking to share a printer between two or more people it gets complicated, because you then need to share the newly set-up printer again with everyone who needs it.
  • Then there’s the pervading security concern.  Where are those jobs going when travelling between the browser and the printer, and in what format?  Are they encrypted?  Are the documents scanned for content by Google or anyone else on the way?

Google comes close to a partial answer on the FAQ/support page, with the following statements:

Documents you send to print are your personal information and are kept strictly confidential. Google does not access the documents you print for any purpose other than to improve printing.

For home users, that might be good enough. At least there’s *something* in writing. But for a business, I’d suggest it’s too vague. Let’s leave that alone for a moment and look at troubleshooting: how do I get a print queue working again if I’m using a cloud-ready printer? Again, Google has a partial answer:

If you’re using a cloud ready printer…

Okay, done that, and checked that.  Still nothing.  Now what?

Conclusions?

Some reading this might say I’m being too harsh about what is *really* only a beta product. And they might be right, if the product were being marketed or released only to technically interested (and competent) people for evaluation, feedback and improvement before a wider release. What’s happened instead is that some printer manufacturers have jumped onto the product by offering support (good), but without making it clear that this is a BETA service which may change, break or be taken offline at any time, without warning (bad. Very bad).

Even the business run-down provided by Google doesn’t mention its BETA status, and gives no clue as to how support can be found or (useful) feedback submitted.

So, is this going to be like so many other recent Google BETA products that gain some momentum and are then suddenly killed? Or will it actually become more like Gmail and mature into a properly supported service, with SLAs available to those who need them? Only time will tell, but meanwhile, based on what I know now, I’m finding it very hard to recommend deploying Google Cloud Print in my own organisations in its present form…

Random Gmail annoyance – sorting Inbox to show oldest messages first

It is 2013.  Sure, my email practices are probably based on conventions from 1993, but this is an ongoing personal frustration. I should say up-front that it can of course be solved by using a third-party mail client, which I do (on occasion).

On a desktop or laptop device, I’ve always preferred to run with older messages showing first, since they’re the ones that are most likely to *become* important – someone who has been waiting for a week is more likely to need their answer *now* than someone who has only just got in touch a minute ago.

It also means I can think “forward” from where I currently am, no matter at what point in the day or workflow I encounter the inbox. Having to constantly think both forwards and backwards, particularly when dealing with the user interface elements involved in navigating, handling and filing messages or whole conversation threads, feels completely counterproductive. Actually it’s worse – it nudges me towards falling into the “tyranny of the urgent” rather than dealing with what’s actually “important” right now.

On the one hand, one could argue that not having this facility means I constantly re-evaluate the whole queue every time I look at it. On the other hand, I find that this saps functional time and energy away from the things that really *do* need doing. And I don’t like that.

So come on Google – how’s about enabling that option for more than just multi-page lists?

Google – Another step backward for UI design?

It really doesn’t feel like much time has passed since Google launched the “black bar” to navigate around Docs/Calendars/other services.  And over time, many of us have come to rely on it being there.

Roll on a couple of years (wow, has it really been that long already?), and now we get this:

[Image: the new app-launcher grid icon in the Google toolbar]

Yup. That’s a grid, buried among a couple of other things that rarely get used. Click on it, and a list of icons appears to take you to your chosen service. All well and good, except you have to click again to go there.

Those of us relying on pattern or muscle-memory to get things done intuitively will balk at this for a few reasons:

  1. We now need to click twice to get a simple thing done. Surely hovering over the grid should bring up the menu?
  2. The grid is in no way intuitive – looking at the icon doesn’t tell me anything meaningful about what it’s going to do if I click on it.
  3. The grid is in a completely different place on the page from where the old navigation bar was.

A little car analogy: when I take my car for its annual service, I need to know that it comes back with key consumables replaced under the hood, but with the key controls (gas and brakes, for example) in the same place, doing the same job, as when I left the car at the garage. I don’t want to have to relearn where the pedals are, and what each does, every time I head off on a new journey. Likewise with software. Changes and improvements are a good thing. But only when managed in a way that allows the majority to keep up, and to operate the machinery safely in the way they were first trained to.

It’s the small things like this (and Ars Technica has an interesting article listing similar things here) that are turning many of my tech-embracing friends and relatives away from the tech they purchased, because they don’t use it often enough to keep relearning pretty much every task they set out to achieve. Many of them might only perform a task once every year or two, yet every time they do, enough little things have changed that they’re relearning the process as a new user.

I think that’s a clear example of technology creating more stress and more hassle – far from enabling things by reducing effort and overheads.

Am I the only one thinking this way?

Mid-2012 MacBook Air 13″ – fixing one form of “Black Screen of Death”

Various online forums are abuzz with MacBook Airs of 2012 and 2013 vintage suffering the “Black Screen of Death” – apparently the machine, mid-use, decides either to shut down completely or just to shut its display off. It’s the latter case I’m most interested in here, since a colleague has just presented me with her mid-2012 128GB, 1.8GHz, 4GB-RAM model. It’s still exactly as it was when it came out of the box.

The problem

The machine shut down mid-use, and subsequently would only boot as far as turning on the display backlight.

The (apparent) solution

The PRAM (Parameter RAM) reset – hold down the ALT, CMD, P and R keys together immediately after pressing the power button. While the keys are held down, the machine will reboot with a “clang”. I usually hold the key combo down until the clang has happened three times, releasing the keys on the third. This may be superstition, as one cycle might be enough, but from my bad old days doing the same trick on older G4-based iMacs this is a habit that still hasn’t shifted.

The result

The MacBook Air immediately booted as normal; within a few seconds I was greeted with the usual FileVault 2 login screen, and the machine has behaved impeccably since.

Further preventative maintenance

Apparently the machine had missed a few software update cycles, so I installed everything available, including a Thunderbolt firmware update and the recently-released 10.8.5 update.

Online music streaming – missing a note or two?

[Image: Google Play logo, courtesy Wikipedia]

Quick thought, while I’m procrastinating…

While I’m not planning to let go of physical media anytime soon (not least the vinyl collection), I’m becoming a huge fan of Google Play and its ability to play music “uploaded and matched” from my own collection. Real bonuses for me are that this happens at no extra cost to my Google Apps domain, and it seems to work well wherever I have a reliable ’net connection. The quality when listening via headphones and Google Chrome on a laptop is surprisingly good considering they’re MP3s – possibly transparent enough to pass a proper ABX test against the original uncompressed digital stream on CD.

But something is different, and something is missing… quite a lot of things are missing actually.

Where’s the song information?

Geeks might call this “metadata”. The information about the making and content of a recording is as useful to me as the content itself. I like knowing things like who wrote the song I’m listening to. I might want to check the lyrics. I might also want to know whether I’m listening to a particular remaster or reissue. While the content and artwork are there on Google Play, I’ve got absolutely no idea at first glance which exact version or release of a song I’m listening to.

At present, I know who the release artist is for a song as it plays, and which album it comes from. I can even see the album artwork for the majority of my collection, as well as a release year. What I don’t know, without doing a *lot* more digging, is whether the particular copy of “Bohemian Rhapsody” I’m listening to is from a 1990s remaster or the more recent (2011?) remasters. I’m not ordinarily such a geek – a great song is a great song whatever medium carries it. But it’s good to know nonetheless, especially if I happen to like the work of a particular mix/mastering engineer, or if I purchased a particular CD release of an album for its known heritage, only to have it matched to another version which sounds distinctly different.

I think it would be really nice if digital streaming/shop purveyors could provide the full information for the songs they’re sending us. More people are involved in most major releases than just the artists, and it’s only right that they get the credit, even if the information serves no other significant commercial purpose.

What even made me think of this?

Listening to the current version of Queen’s “A Kind of Magic” up on Google Play, I’m noticing a lot more musical and tonal detail in the recordings than I remember from my own CD copies. This is an album I’ve known for the whole of my musical life, so I have some very strong memories of it, and can recall absurd amounts of detail about the musical arrangements and sonic character, and how they were reproduced differently in each of the releases I’ve owned. Since I’m hearing so many new things despite listening on familiar equipment, I’d like to understand where they come from. And since I like the differences, I’d like to know whether they’re due to a particular engineer’s approach to remastering, and whether I can find more work by the same engineer – or learn something about the engineering approach that led to the result I liked so much.

On the one hand, the freedom offered by always-on streaming access like this is wonderful – but on the other, it comes with a lot of compromises, and with a lot of things “hidden” from view that I feel really should be open to us all…
