Quick AppleScript debugging tip

Been a while since I last had to debug some basic AppleScript – and it’s fair to say programming and scripting really aren’t my cup of tea. I don’t really *know* much about either these days, but with Google and enough time/coffee I can sometimes roll my own or spot simple errors in others’ code.

To help solve today’s problem (a script saving a file to the wrong location despite path names apparently being set correctly) it really helped to do two things:

  1. Use the “say” command to announce each stage of the script, and sometimes the outcome (such as the pass or fail branch of an “if” statement).
  2. Use the “say” or “display dialog” command to announce key variables as they are set or manipulated. Dialogs are useful for long strings (like the full file path I was working on) as they can remain visible until you click OK.

These tricks probably seem silly or “childish” to professional programmers, but they really helped me understand the code and its structure, so that I could see where a variable was being misinterpreted and apply a fix.
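
To make that concrete, here’s a tiny sketch of the sort of throwaway debugging I mean (the folder name and messages are made up for the example):

-- announce progress out loud, then surface the variable so it can be checked by eye
set exportFolder to (path to desktop) as text
say "Export folder has been set"
if exportFolder contains "Desktop" then
	say "Folder check passed"
else
	display dialog "Unexpected export folder: " & exportFolder
end if

Scattering a few lines like these through a script quickly shows the point at which a path stops being what you expect.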

Further adventures with Raspberry Pi

Having owned a Raspberry Pi for a few months, I’ve found the experience really refreshing.  Despite the limited processing power, the ability to have a silent digital source playing in our front room, with excellent audio quality, has brought our enjoyment of digital sources right up there with our vinyl playback.  We can now choose between the absolute technical superiority that digital can offer and the character of vinyl.

Essentially we’re now in a place where we can judge that well-recorded (and mastered) digital recordings *can* exceed the perceived quality of vinyl in many cases.

It’s not been an easy task to get it there, though.  Our chosen USB audio interface, the EMU 0202 USB, sounds remarkably good at all supported clock rates given its original selling price, but really shines when playing at 192KHz 24-bit.  This remains the case even with content originally mastered at 44.1KHz 16-bit.  There are two likely reasons for this: firstly, increasing the sample rate moves the reconstruction filtering further outside the audible frequency range, giving a cleaner sound (assuming jitter and other distortions are low enough not to introduce their own issues); secondly, digital volume control simply retains more information when working in the 24-bit domain than in the 16-bit domain.  No, it’s not “bit-perfect” – but show me a real-world situation outside of a laboratory or computer simulation that is, and then we’ll talk.
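
To put a rough number on that second point: each bit of resolution is worth about 6dB, so a 24dB digital volume cut costs roughly four bits. That leaves around 12 usable bits from a 16-bit stream, but about 20 when the same attenuation is done in a 24-bit pipeline, which is why the higher bit depth is so much kinder to low-level detail.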

Optimising the Pi for playback

I need to be fair here and point out that the vast majority of the tweaking has been done as part of the Volumio project development.  I get to enjoy the overall benefits and tweak things to suit our system and tastes. Must find their donations page at some point!

I’m now running Volumio 1.4 (still marked as BETA) on our Pi Model B, which brings some speed improvements.  So far as I can tell, it also fixes many of the USB issues I was experiencing with v1.2 and earlier, so playback at all supported sample rates now works without pops and clicks, so long as CPU usage is kept under control.

CPU usage

In the earliest days of my Pi ownership I was perhaps a little disappointed with its limited CPU power.  But then what the heck did I expect for so little money and power?  Anyway, some research told me I could overclock the unit to get some more headroom.  That’s how I had been running it, but I was still having problems with upsampling beyond 88.2KHz 24-bit.  So I got a little more serious: I played with the software config and, noting that I was voiding the warranty on an effectively disposable machine, decided to try over-volting alongside the overclock.  The result has been nothing short of stunning.  I’m now running with the following lines in my /boot/config.txt file:
# GPU gets the minimum 16MB memory split (headless audio use)
gpu_mem=16
# force normal HDMI mode (with audio) rather than DVI
hdmi_drive=2
# always run at the clocks below; disables on-demand scaling (and, with over_voltage, sets the warranty bit)
force_turbo=1
# roughly +0.2V on the core (each over_voltage step is 0.025V)
over_voltage=8
# ARM CPU at 1100MHz, up from the Model B default of 700MHz
arm_freq=1100
# GPU core (and the ARM's L2 cache path) at 500MHz
core_freq=500
# remaining GPU blocks (h264/isp/v3d) kept at their default 250MHz
gpu_freq=250
# SDRAM at 450MHz (note this parameter is more usually spelled sdram_freq)
sdram_frequency=450
That’s a pretty extreme jump from the factory defaults, and the processing stats show it.  I’ve not run any formal benchmarks, but I have noted the following:
  • My previous mild overclock could already manage very listenable audio at 192KHz/24-bit without audible clicks, with better sound quality than upsampling to “only” 88.2KHz/24-bit. CPU usage now sits at around 65% with the new settings, compared with 95% under the previous mild overclock.
  • The Web-GUI speed is much improved.
  • The massive jump in overclocking gives much-improved definition, especially in terms of dynamics and timing.  There are also notable improvements in the stereo image, and finer tonal detail is exposed.
  • I should note that ABX testing is not possible in our rig, so confirmation bias *could* be at play.
  • The benefits were maximised when playing with the “Kernel profile” sound quality tweaks, available in Volumio’s “System” menu.  I’ve settled on the Buscia profile, both because it sounds perceptibly better and because it seems to bring the CPU load average down significantly for the same work.
  • I had hoped to free up enough power to use something better than the “Fastest Sinc Interpolator” resampling algorithm, but as yet I’ve not been able to do so: the audio gets too choppy to listen to (the relevant mpd.conf setting is sketched just below).  I suspect I’ve found the upper limit of what this setup is capable of, unless someone can refactor the code so that all that processing can be carried out by such a low-powered machine.
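
For reference, the resampler in question is chosen by MPD’s samplerate_converter setting, typically found in /etc/mpd.conf. Volumio normally writes this file for you, so treat the lines below as a sketch of where the choice lives rather than an exact recipe:

# /etc/mpd.conf: picks the libsamplerate converter MPD uses when resampling
samplerate_converter "Fastest Sinc Interpolator"
# the heavier, better-sounding options ("Medium Sinc Interpolator",
# "Best Sinc Interpolator") are the ones that proved too much for the Pi here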

Summing up

Much of the value of this exercise is in the tweaking, and that’s really rewarding as it’s a whole new skill-set learned virtually free of charge. The improvements are significant, even if only to my own perception. It’s been a lot of fun getting this far with such cheap and cheerful kit. So now I aim to leave it all alone and just enjoy the music, until I can get around to fitting an LCD display and a control interface and putting it all into a nice custom case.

Pilgrim’s Pod Radio Hour – Episode 3

UPDATE (26/2/2014):

This is the edited version, to keep the show length under an hour, and to tidy up some slower-moving passages.

ORIGINAL POST:

Another episode was recorded on Friday 7th February.  A slightly different feel to this one – with more spoken content. Featuring Liz Jadav and Phil Gallagher.

Technical notes

This time, the live-stream was sourced from the software mix that created this edited recording.  I’ve fixed a mistake where I ran out of hands to sort the live-stream mix during the intro, and we re-recorded a song with Paul after he’d choked on some water just before his song!  Aside from those issues, the stream levels were much more easily managed this way, and mixing the recording live with the usual processing in-place also made this edit much quicker to produce!

Also new to us was a Superlux S502 ORTF mic (select “English” from the top-right of the linked page), used for room ambience and audience.  Compared with the AKG 451s we were using, rigging was much simpler and the resulting sound slightly more consistent.  I’m really pleased with this mic in this and some other applications; subject for another post I’m sure!

Getting an EMU 0202USB working with a Raspberry Pi

In the last couple of weeks, out of curiosity, I’ve bought a Raspberry Pi to play with at home.  It’s really very impressive to see what can be done these days with a $35 computer – an “educational” model at that!

Our Pi is currently in place as our digital audio player, courtesy of the Volumio linux “audiophile” distribution, and an EMU 0202 USB audio interface.

Once the Pi was booting Volumio off the SD card, I found two things that needed doing:

  1. Set up the Pi to pull files off our NAS device.  In theory this can be done from the Volumio web interface, but I had to go hacking around editing config files to make it work seamlessly (an illustrative example of the kind of mount entry involved follows this list).
  2. Set up the EMU for optimal digital playback.  I take a somewhat different path on this from most “audiophiles”: I’m specifically aiming to implement a software volume control, provided I can run the digital audio chain at 88.2KHz/24-bit or higher.  This means CD/MP3 content gets upsampled, while recordings made natively at 88.2KHz/24-bit get played that way.
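
(For context on point 1: a NAS mount of this kind usually ends up as a line in /etc/fstab. The entry below is purely illustrative, with made-up addresses and share names, and isn’t necessarily how Volumio records its own mounts.)

# illustrative CIFS mount for a NAS music share; adjust server, share, mount point and options to suit
//192.168.1.10/music  /mnt/NAS  cifs  guest,ro,iocharset=utf8,_netdev  0  0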

The Volumio forums helped me out with point 1, but I’ve lost a lot of brainpower and free time to getting the EMU to work properly.  I could get it to play out at 44.1KHz/24-bit, but any attempt to play native files at higher rates, or to have MPD upsample, resulted in obviously robotic-sounding distorted playback.  It turns out the key was simple:

It seems the clock rate on the EMU 0202 and 0404 USB devices is assigned to a fader in ALSA, which in this case I accessed using alsamixer.  There were two faders for my 0202:  PCM and Clock rate Selector.

The latter has a range of stepped values, equating to the following sample rates:

  •   0% 44.1KHz
  •  20% 48.0KHz
  •  40% 88.2KHz
  •  60% 96.0KHz
  •  80% 176.4KHz
  • 100% 192.0KHz

What I’ve learned, then, is that to get the setup working I needed not only to set Volumio (or the underlying MPD player) to resample to the target output rate of 88.2KHz/24-bit, but ALSO to set the Clock rate Selector to 40% in alsamixer.
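
Pulling that together, and purely as a sketch rather than a recipe (the card number, device name and file locations are assumptions, and Volumio normally manages mpd.conf itself), the two pieces look something like this:

# 1) in mpd.conf: force MPD's ALSA output to the target rate and depth
#    ("hw:1,0" assumes the EMU shows up as ALSA card 1)
audio_output {
    type    "alsa"
    name    "EMU 0202 USB"
    device  "hw:1,0"
    format  "88200:24:2"
}

# 2) from a shell: set the EMU's clock to 88.2KHz (the 40% step on the stepped fader)
amixer -c 1 sset 'Clock rate Selector' 40%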

All works happily and I’m loving the more “analogue” sound of the EMU in that mode!

UPDATE, 23RD FEB 2014:

I’ve managed to get MPD to reliably resample to 176400Hz/24-bit (32-bit internal, 24-bit at the card) by forcing the Pi’s turbo to “always on” and applying a slight overclock. It’s not *quite* perfect yet, so I might see if I can push it a little harder before documenting our full setup.
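
For the curious, “turbo always on” is a one-line change in /boot/config.txt; the overclock figure below is illustrative only, as I haven’t settled on final numbers yet:

# run at the maximum clocks at all times instead of scaling on demand
force_turbo=1
# illustrative mild overclock over the Model B default of 700MHz
arm_freq=800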

Rocky road ahead: Google Cloud Print (BETA)

Background

An organisation whose IT team I know well has moved a lot of their services across to various Google platforms.  The move has been considered largely positive by users and management alike, not least because it has significantly reduced the management and infrastructure burdens on their organisation, and has genuinely improved IT-related life in many key ways.

The move therefore continues apace.  One problem identified by the organisation is that there seems little sense in paying c.£500-£1000 per head for a computer setup that spends the vast majority of its time being used (legitimately) as a web-browser.  The various Chromebooks undergoing trial have been a huge success given their planned usage, but with one common problem:  Users in 2013/14 STILL need to be able to print.

[Enter Google Cloud Print (BETA), Stage Left]

“No problem!” says Google. “Here’s Cloud Print!”  There are two flavours of documentation presented, in “consumer” and “IT Administrator” guises, both essentially saying (and ultimately describing) the same thing.

For those who haven’t come across it yet – the idea is that you buy a “Google Cloud Print Enabled” printer, give it 24/7 power and Internet, and you can print to it from anywhere, using your Google account in various creative ways.  Specifically for my friend, it gives print access to Chromebooks and other portable devices for which no other good printing solution exists.  Essentially, if it can run Google Chrome, it can print.  And the concept is really neat.

Forecast: Storms ahead

Every cloud has a thunderstorm brewing in it somewhere, though, and this service is no exception.  I’ve heard a few common complaints in various pub conversations, and have even investigated a few when I’ve experienced them myself within my own Google Apps domains:

  • First off, some printers, once correctly hooked-up and signed-in, simply stop receiving Cloud Print jobs.  Often, turning them off and back on, and waiting up to a day, solves it.  But sometimes the log-jam becomes permanent.  Printing via local network or direct USB connection works fine from machines that can do it, but all Cloud Print jobs get stuck, forever destined to be “In Progress”.
  • The Cloud Print Management interface looks surprisingly mature for a Beta product, except that it gives very little information about what is really happening.  Once a job inevitably gets stuck, there’s no option to do anything other than to wait, or delete it.  It can’t be diverted to another printer.
  • More worryingly, the status codes are too general.  Sure, I don’t need a verbose running commentary when things are working well, nor perhaps when a job is “in progress”.  But when things get stuck, I’d like more information about the problem than the job simply being flagged “Error”.
  • Google provides no technical support for Cloud Print – so beyond what you can find in documentation provided either by Google or your printer manufacturer, you’re on your own.  No support. No apparent feedback mechanism even.
  • If something does go wrong, often the only way to fix it is to delete the printer on Cloud Print, and re-assign it.  This might be fine for single home users, but for anyone looking to share a printer between two or more people, this gets complicated, because you then need to share the newly-set up printer again with those who need it.
  • Then there’s the pervading security concern.  Where are those jobs going when travelling between the browser and the printer, and in what format?  Are they encrypted?  Are the documents scanned for content by Google or anyone else on the way?

Google comes close to a partial answer on the FAQ/support page, with the following statements:

Documents you send to print are your personal information and are kept strictly confidential. Google does not access the documents you print for any purpose other than to improve printing.

For home users, that might be good enough.  At least there’s *something* in writing.  But for a business, I’d suggest it’s too vague.  Let’s leave that alone for a moment and look at troubleshooting: how do I get a print queue working again if I’m using a cloud ready printer?  Again, Google has a partial answer:

If you’re using a cloud ready printer…

Okay, done that, and checked that.  Still nothing.  Now what?

Conclusions?

Some reading this might say I’m being too harsh on what is *really* only a beta product.  And they might be right, if the product were being marketed and released only to technically-interested (and competent) people for evaluation, feedback and improvement ahead of a wider release.  What’s happened instead is that some printer manufacturers have jumped onto the product by offering support (good), but without making it clear that this is a BETA service which may change, break or be taken offline at any time, without warning (bad. Very bad).

Even the business run-down provided by Google doesn’t mention its BETA status, and gives no clue as to how support can be obtained or (useful) feedback submitted.

So, is this going to be like so many other recent Google BETA products, which gather a little momentum and are then suddenly killed off?  Or will it actually become more like Gmail and mature into a properly supported service, with SLAs available to those who need them?  Only time will tell, but meanwhile, based on what I know now, I’m finding it very hard to recommend deploying Google Cloud Print in my own organisations in its present form…

Pilgrim’s Pod Radio Hour, Episode 2 – Christmas Special featuring @miriamjones

Well, here’s the second episode of the Pilgrim’s Pod Radio Hour, with our host Will Mackerras, Paul Enns leading the band, and our special guest Miriam Jones!

I’ll possibly expand on this later, but we had a lot of fun making the show, so I hope you enjoy listening to it!

Album-art oddity: Chicane vs Jarre

Just been checking out Chicane’s “Thousand Yard Stare” on Spotify as background music while I’m writing up another project, and as I glanced at the artwork, it struck a familiar chord.  Took me a while, but it just came to me while I was typing…

Here’s the Chicane artwork for the album:

Chicane’s “Thousand Yard Stare” cover – look familiar?

When it occurred to me where I’d seen it before, I kicked myself for it having taken so long.  Here’s what I thought of:

Artwork for Jean Michel Jarre’s “Magnetic Fields” / “Chants Magnetiques”. Inspiration for Chicane, perhaps?

Being a fan of both artists, I love that Chicane has apparently given such a nod to Jarre, who to my mind seems to have laid a lot of the groundwork for Chicane’s work.  And for what it’s worth, both are excellent albums in their own right!

I’d be intrigued to see any other artists who’ve nodded to each other in this way…  Comments are open!
