Professional London-based audio and tech geek


Professional audio engineer and hobbyist creative technician, based in South East London.  This blog covers some of my personal and leisure interests as well as some examples of my professional work.  All works and opinions represented on this site are my own, unless otherwise credited/quoted.

Featured post

On commercial remasters possibly issued without Dolby A decoding: mistakes, art, or…?

Some background…

I’ve commented on this blog before about the possibly questionable quality of some digital remasters released of late. Common subjective complaints in online fan and hi-fi forums (complaints I’ve made myself, both here and among friends in person) are that particular remasters might be too loud, too bright, and/or otherwise “overdone” given what we know or perceive of the original source material.  There may well be various artistic or marketing-related reasons for this, so I’m not here to argue for or against those decisions.

Further complicating the issue for me, both as a fan and as a professional, is that many of these stand-out features are overwhelmingly seen as positive by many fans, whether or not they are technically correct.  It would seem that the perceived increase in detail and volume outweighs any concerns about listening fatigue, or about known deviation from the presentation of the original master.

I’ve embarked professionally on remastering and restoration work and have learned, from the coal-face so to speak, much of the reality of what’s involved. To onlookers it appears to be a black art – and believe me, from the inside it can feel a lot like one too!  Sometimes I’m asked by a client or a reference listener how or why I made a particular decision; in some cases, especially those without technically verifiable information or logged conversations to go on, I have to go out on a limb and say something to the effect of “well, because it ‘felt right'”, or “because it brings out the guitar *here*, which really flatters the piece”, or some other abstract quality.  At that point I just have to hope the client agrees.  If they don’t, it’s no big disaster; I’m rarely emotionally tied to the decision. I just need to take the feedback on board, do what I can with it, and move on.  Looking at the process, I guess that’s partly why the word “abstract” appears in my trading name! :)

“Okay, so you know a bit about this from both sides, on with the subject already!”

There are two particular commercial albums in my digital collection, both hugely successful upon their original release, whose most recent remasters have bothered me. It’s not fair to name and shame them, especially not while I await confirmation from engineers/labels that my hunch is correct.  Anyways – I’m bothered not because they’re “bad” per se, but because I bought them, took them home, and from the moment I first heard them, something about them stood out as being not quite “right” from a technical perspective. One of them (Album A) was released in 2001; the other (Album B) was released earlier this year, in 2015.

What these two albums have in common is that their tonal and dynamic balance is *significantly* different to the original releases, beyond the usual repair, restoration and sweetening of EQ and dynamics carried out so a reissue can sit alongside contemporary new releases.  The giveaway is that the top-end is both much brighter than usual, and much more compressed – and the result is unnecessarily fatiguing.

Where the two albums differ, then:

  • Album A has not suffered from the “loudness wars”.
    • Its overall dynamics are relatively untouched compared with the original.
    • It appears, looking at the waveform in a DAW, that the album material has been normalised to 0dBFS (so its loudest peak just touches the top of the scale CD has to offer), but it rarely hits such high levels.
  • Album B however, despite never having been a “loud” album on original release, has suffered from the “loudness wars”.
    • Looking at its waveform, it’s clear that it has been maximised: the material has been compressed and limited so that the original dynamics are squashed, with gain applied so that almost the entire album waveform hits 0dBFS.
    • As a result, the album has lost its previous tidal ebb and flow, and while arguably some details are indeed much more audible than before, it no longer has the organic subtlety it once did.  Important instrumental details get masked and actually reduced in level as louder ones come into the foreground, because with that much compression going on, there’s nowhere else for them to go except lower in level.
    • Sure, it’ll play better on an iPod while travelling on the London Underground, or in the car, so it might open up a new market that way – but the rest of us, perhaps looking forward to a better-quality transfer to listen to at home or anywhere else, don’t get that choice.
    • I’ve heard the 2015 vinyl re-release of the latter album, and it seems to not have the same issues – or if it does, nowhere near to the same extremity. There are likely good technical and human reasons for that, but that’s an aside for another post.

Experiment 1:  Treating the common issues

Last week I had some downtime, and a hunch – a dangerous combination.

Neither album was famed in its day for brightness, except that the singer’s sibilants on Album A caused vinyl cutting and playback some serious headaches if alignment wasn’t quite right. Album B does carry a lot of detail in the top end, but being mostly synthetic, and certainly not a modern-sounding album, its spectral content is shifted much more toward the low-mids than anything we’d be producing post-1990.  So there should be some sheen and sparkle, but it should never be in your face, and never compressed.

Such clues told me two things: first, that Dolby A was likely not decoded from the master tape on transfer; and second, that in the case of Album B, further dynamic compression had taken place on top of the un-decoded material.

So – out came a Dolby A decoder, and through it I fed the signal from each album in turn, bouncing the decoded signal back into my DAW for storage and further analysis.  Now please understand, it’s hard (if not impossible) to get a correct level alignment without the test tones from the master tape, but those of us in the know can make some basic assumptions based on known recording practices of the time; and once we know what to listen for, we can also judge by the audible results, especially if we have a known-good transfer from the original tape to work with.

All that said, I’m not claiming here that even with all this processing and educated guesswork, I’m able to get back to the actual sound of the original tape! But I am able to get closer to what it ought to sound like…

The result? Instantly, for both albums, the top-end was back under control – and strangely, both albums suddenly sounded much more like the previous versions I’ve been hearing, be it from vinyl, CD or other sources. Album B’s synth percussion had space between the hits; Album A’s live drums had proper dynamics and “room” space. In both albums, stereo positioning was much more distinct. Reverb tails were more natural, easier to place, and easier to separate from the “dry” source, especially on vocals. Detail and timbre in all instruments were easier to pick out from within the mix.  To top it all off, the albums each sounded much more like their artists’ (and their producers’) work. Both albums were far less fatiguing to listen to while still delivering their inherent detail, and perhaps even offering some sonic gains over previous issues.

Experiment 2:  Fixing Album B’s over-compression

First things first – we can’t ever fully reverse what has been done to a damaged audio signal; some trace will always be left behind.  Something will be wrong, whether “audible” and noticeable or not.  But, again, an educated guess at the practices likely used, and an ear on the output, helped me get somewhere closer to the original dynamics.  But how?

Well, it was quite simple.  One track on the album has a very insistent hi-hat throughout, which comes from a synth.  If we assume that synths of the time were not MIDI-controlled, and that the track was likely mixed manually, then that hi-hat should essentially sit at a constant level throughout the piece, barring fade-in/fade-out moves.  And listening to an “original”, that’s pretty much what it does.  But it doesn’t do so in either the clean or my “decoded” version of the later remaster: it moves up and down in level whenever the other pads and swept instruments come and go.  It was more noticeable on my “decoded” version, but with the frequency and micro-dynamic blends being so much more pleasant, I knew I’d made progress, and that the way forward was to fix the compression if I could.

Out came a simple expander plug-in.  Inserting this before the Dolby decoder, and tweaking its settings until I was happy the hi-hat sat at a near-constant level throughout my chosen reference piece, restored the dynamics to something like the original.  In the end we get something like 6-9dB of gain reduction on the quieter passages, and the waveform looks far less squashed.  And sounds it, too.
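
If you want to try something similar without a DAW, here’s a minimal sketch of the same idea using SoX’s compand effect as a downward expander.  SoX is a stand-in for the plug-in I actually used, the filenames are placeholders, and the threshold/ratio/timing numbers are purely illustrative starting points, to be tuned by ear against a reference version just as I did:

# Gentle downward expansion below roughly -30dBFS (about 2:1), applied before
# any subsequent decoding pass. These figures are examples, not my settings.
sox remaster_track.wav expanded_track.wav compand 0.02,0.25 6:-60,-90,-30,-30,0,0 0 -90 0.1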

The trick, then, was to listen to all four versions (A, B, A restored and B restored) at similar overall loudness levels, and see which works better.  So far, in this house anyways, we’re happier with the restored versions, and that includes listeners unfamiliar with the artistic content.
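
As an aside for anyone repeating this at home: one way (not necessarily the one I used) to get the versions to “similar overall loudness” is to measure each file’s integrated loudness with ffmpeg’s EBU R128 filter and offset your playback gain by the difference, so that no version wins the comparison simply by being louder.  Filenames are placeholders:

# Prints a loudness summary (integrated loudness "I" in LUFS) for each file.
ffmpeg -nostats -i album_a_restored.wav -filter_complex ebur128 -f null -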

Epilogue – Is this a mistake? And if so, how could it have happened?

When dealing with remasters, especially of older albums, we typically go back to playing analogue tape. There are *many* things that can go wrong here at a technical level. We’re worrying about whether the tape machine is aligned to the tape itself, whether both tape and machine are clean, and whether the correct noise reduction technology is used – in short, whether we’re actually getting all the information we can off that tape.

Then there is the human element. I’ve lost count of the number of times, even in my small sample, that I’ve encountered a DAT or 1/2” reel labelled as being pre-EQ’d or Dolby-encoded with some system or another when in fact it wasn’t. Then there are the other labelling and human errors: perhaps it wasn’t labelled as being Dolby-encoded and it really was. Or perhaps the “safety copy” was actually the clean master, and the “master copy” was actually the cruddy “safety” with a 10dB higher noise floor, recorded at half-speed on lower-grade tape on an inferior machine we know nothing about, with the channels swapped randomly due to a patching error in the studio.

Technology, and technicians, like questions with defined, logical answers: “0 or 1”, “yes or no”, “is this right or is this wrong?”. Unfortunately for us, when dealing with music, as with any other art, and so with the musicians, producers and other artists involved in the creation and production process, we soon find that the lines between “right” and “wrong” very quickly get blurred.

As an engineer, I’m also all too aware of the dichotomy between my *paying* client (usually the artist) and my *unpaying* client (the listener).  Most of the time the two agree on what a project needs, but sometimes they don’t. The usual issue is being asked for too little dynamic range – “can you turn it up a bit so it sounds as ‘loud’ as everything else?” – and the resulting sound is fatiguing even for me as the engineer to work with, let alone the poor saps who’ll be invited to buy it. Sometimes I know that certain sounds simply won’t encode well to MP3/AAC (less of an issue these days, but it still happens).

Anyways – all that to say: if these albums both suffered the same mistake, if indeed it was one, then even without the myriad artistic issues creeping in, I can see how an unlabelled, undecoded Dolby A tape could slip through the net, blow the ears off an artist or engineer used to the previously released versions, and get people saying “YEAH, LET’S DO THAT ONE!” :)


Preparing images for Google Slides (and other Google apps too)

User comes to tech, tech solves problem, world moves on. Yawn.

Some months ago, I helped with a project to move someone’s large archive of digital photographs and clipart to Google Drive.  That was easy enough in itself – we installed Google Drive on their Mac, moved everything from the appropriate folder on their Mac to a suitable place inside their Google Drive folder, on a fast (100Mb synchronous) connection, and let time and the Google Drive application do their work.  “Job done…”

Problem 1: Images are over 2MB. Okay, so we’ll shrink them…

…Eeeeeexcept they wanted to use the images immediately, as-is, in Google Slides and other Google apps.  The very first image dropped into their new presentation was rejected as too big: it was either over 2MB, or over some arbitrary pixel dimensions that the dialog box didn’t tell the user about.  So back the user came to our team asking what the heck was going on…

Looking at the relevant Google Support page for Docs Editors (as at 10th April 2015, and still not fully populated on 9th November 2015), one might think that just recompressing the images so they *just* squeeze under the 2MB size limit would be enough to comply.  And indeed, on this info, given the thousands of images affected, I synced a copy of the affected Google Drive account to a spare Mac, installed ImageMagick (along with its numerous dependencies) and wrote us a bash script.  Looking at the fileset, I noticed that only the JPGs were over 2MB in size, so I could simply have the script look for any JPG over 2MB, use ImageMagick’s “convert” tool to recompress it, and then replace the original file with the new version to save confusion at the user end.  The basis of the conversion was this command:

convert image.jpg -define jpeg:extent=1970kb image.jpg.smaller
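
The wrapper around that command was nothing clever – roughly the following sketch, with placeholder paths and the housekeeping trimmed for brevity:

#!/bin/bash
# Find any JPG over 2MB, recompress it to squeeze under the limit, then swap
# the new file in for the original. The archive path is a placeholder.
find "$HOME/Google Drive/Archive" -type f -iname '*.jpg' -size +2M | while read -r f; do
  convert "$f" -define jpeg:extent=1970kb "$f.smaller" && mv "$f.smaller" "$f"
done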

Sure, we sacrifice some overall quality to JPG recompression, and the script needs to take care of some housekeeping along the way.  But having looked at a bunch of test images side by side, we decided the work involved and the results obtained were more than good enough for the intended use case; any quality losses were barely detectable in more than 9 out of 10 cases, even when pixel-peeping on a decent calibrated monitor.  So on we went with the live dataset after many test cases, thinking the job was done.

With the results looking good, off I went to tell the user that the file conversions were done and to let us know of any problems, while we moved on to the next tasks on our creaking lists.

Problem 2: File size alone wasn’t the issue – pixel dimensions mattered too! D’oh!

Then the dreaded email pinged.  Our poor user had tried to insert the first of the newly recompressed images and, sure enough, it failed to load, and again the same generic, unhelpful dialogue box appeared.  Not because it was a bad image in itself – it’s a perfectly valid JPG, and looked very nice despite the high pixel count and our fears over high compression rates and the known multiple recompression steps.  These images are intended for end use after all, not for further editing, and certainly not for anything other than on-screen use at low DPI, viewed from a distance.

I had to confirm the issue for myself.  Dragging a JPG onto the insert-image tool’s “upload from computer” window finally got Google Slides to tell me that the image was too big, and at last it gave me the actual limits I was supposed to be working to.

Great.  Now I need to go off and resize my images.  Again.

So, off I went to find a fresh copy of the original images in their original folders (you do still keep backups of what’s on your cloud storage, right?).  Then I worked on a new script that would resize the images to fit inside a 3500×2500 window, preserving aspect ratio, and that would work for GIF, JPG or PNG, since those are all supported by Google Docs.  THEN I ran the same recompression script as before on any files that were still too big after downscaling.  Overall the process took much the same amount of time as the first run, but the end result looked much better, right up to the limits of the pixel dimensions and the file size.
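
The second pass looked roughly like this – again a sketch with placeholder paths; ImageMagick’s “>” geometry flag only shrinks images larger than the given box, and preserves aspect ratio:

#!/bin/bash
# Downscale any GIF/JPG/PNG larger than 3500x2500 to fit inside that box,
# in place; smaller images are left untouched.
find "$HOME/Google Drive/Archive" -type f \( -iname '*.jpg' -o -iname '*.png' -o -iname '*.gif' \) |
while read -r f; do
  mogrify -resize '3500x2500>' "$f"
done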

Summing up

Some time spent on our end testing the full end-to-end process would have saved both us and the user some time and hassle, so my own lesson here is that a bit of short-sightedness on our part, for the sake of trying to “get back to other things”, most certainly bit us all in the bum.  In our defence, however, the process would have been *much* easier had the image dimensions, aspect ratio and file-size limitations all been given up-front, NOT just at the point of an image throwing up an error, but on ALL appropriate import screens, and in accompanying documentation we could search from outside the situation the users face. A couple of lessons for developers and support folk alike here! :)

abstractnoise reel-to-reel demos

This week I’ve been playing with our recently acquired Revox B77 1/4″ recorder. It’s a stereo half-track model, and I use it at 7ips. Currently I’m not sure what actual tape I’m recording to, as it’s a 20ish-minute offcut from a reel that was found to be blank at the end of a transcription job. The two tracks presented here represent two very different production methods now open to me. 

“Changes Afoot” was sent track-by-track from DAW to tape and back again, to produce a digitally mixed master. A couple of those tracks were recorded particularly “hot” to bring out more tape character. No noise reduction was used at all. Signal-to-noise on any single tape pass in this setup was found to be around 60dB.

“Innocence 2010” started as a digital stereo pre-master that I was never fully happy with. It was sent to tape via a skunkworks noise reduction system I’m working on behind the scenes; the system is not yet complete, but has enough processing built to give a useful 10dB gain in signal-to-noise ratio without any significant audible artefacts, as borne out by the 69-70dB signal-to-noise ratio measured in this setup even after peak limiting.
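
For anyone wanting to sanity-check figures like these (this isn’t necessarily how I arrived at mine), SoX’s stats effect gives a quick ballpark: compare the RMS level of a reference tone recorded at operating level against the RMS level of a stretch of unmodulated tape hiss, and the difference approximates the signal-to-noise ratio. Filenames below are placeholders:

# Note the "RMS lev dB" figure from each report; tone level minus hiss level ~ S/N in dB.
sox reference_tone.wav -n stats
sox tape_hiss_only.wav -n stats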

I suspect the signal-to-noise ratio is limited by noisy pre-amps in my DAW setup; I’ll need to swap to a different audio interface to confirm. That’s something to play with another day. Overall I’m VERY impressed with the sound quality this kit is able to deliver, and the range of analogue “colours” it can provide. I’m really looking forward to finishing the skunkworks noise reduction project; I have my sights set on somewhere near 24dB of noise reduction once it’s fully up and running. But it’s good to prove that I can both encode and decode on the fly!

Watch this space!

Protected: Restoration Listening Test 1

This content is password protected.

Mac OS X Yosemite Quarantine issues and workaround…

Getting bored of having to do stuff like this, both at work and play.

Many useful Mac apps still come from places on the Internet OTHER THAN the Mac App Store.  This might be news to the boffins at Apple, but there you go.  This can cause problems at user level, where we end up with warning messages like this every time we try to start an installed application:

“xxxxxxxxxx” is an application downloaded from the Internet. Are you sure you want to open it?

AAAAAAAARGHH!  OF COURSE I want to open it! I installed it! I even used my Admin rights to move it to my Applications folder, and it’s been there for months, perhaps years! So quit telling me about this every time I open it!

Okay, chill, breathe, take your meds, it’s time to fix this.  Again, Google to the rescue, and I found a lot of people have been having this kind of issue since Lion.  I have to admit it had managed not to bite me or my pool of users in the bum at all (except on first use of an application, which is fine, because that’s all it should need) until Yosemite.  And specifically, Yosemite’s 10.10.2 point release.  Ugh.

In all cases, people have reported general success by various sledgehammer-to-crack-a-walnut means, mostly by turning security and quarantine features off.  I prefer not to do that, so I much preferred the more fine-grained solution found here.  I’m not sure how it’ll hold up as apps get upgraded, but even if it needs redoing at that point, it’s better than being prompted every time I open an app I regularly use!

So, rather than rewording, I’ll quote D. W. Hoard’s words from his article (linked above):

The quarantine flag can be removed from files or applications that are already downloaded, without completely disabling the quarantine mechanism, by using the following command:
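
xattr -d com.apple.quarantine /path/to/application-or-file
# (the path above is a placeholder – see the drag-and-drop shortcut below)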


A slight shortcut is to type everything up to the path (including the trailing space) in a Terminal window, then drag the desired application or file from a Finder window into the Terminal window, which will automatically paste in the full path to the application or file. If you perform this process using an Administrator account, then the quarantine will be permanently removed for all users on the same computer, regardless of the Administrator privilege level of their accounts.

Oh gosh, I had a horrible thought… it reminds me of the dark days of MS Vista… :-o

Installing Mavericks or Yosemite on Mac laptop after battery failure

Had a number of issues installing Mavericks or Yosemite on Macs that have had a dead battery.  By “dead” I mean either run flat and left in a cupboard or on a desk for a week or more before we’ve got around to rebuilding them, or where the battery itself has died and needed replacing before the software is rebuilt.

Each time we’ve had to do it, we’ve ended up scratching our heads and usually just cloning a hard drive from a working machine.  Today I had some time waiting for other tasks to complete and managed to hit up Google for some research, and one common thread hit me…

When a laptop battery dies, chances are that if you leave it long enough, any onboard backup battery for CMOS/BIOS/PRAM/NVRAM or whatever other backed up settings, and the onboard realtime clock, will eventually go flat too.

Usually the first sign on replacement or recharge is that the date and time are wrong.  So the first thing a Mac can do is either prompt the user that the date and time are wrong and need resetting, or, if it’s already online, contact an NTP server and correct itself.  But when you’re installing from scratch, it does neither of those things.  In fact, it doesn’t even show you what the date and time are unless you go well out of your way to ask.  So the first sign that something’s wrong at this stage is an error message like:

  • “An Error occurred while preparing the installation.  Try running this application again.”
  • “This copy of the Install OS X Yosemite application can’t be verified. It may have been corrupted or tampered with during downloading.”

The fix, to get an install going here this afternoon, was easy:

  1. Get the installer booting, either from an external USB drive, or from Target Disk Mode from another working Mac.
  2. Once the installer is loaded and showing you the main menu, you should be able to see the “Utilities” menu. Click on it, and go to “Terminal”.
  3. Check the current date and time from your watch or another machine/clock/device of your choice.  Convert it to the following numeric format, so that 6:15pm on 4th December 2014 becomes 120418152014, following the mmddHHMMyyyy format.
  4. Type the following into “Terminal”:  date {mmddHHMMyyyy string above, without these funny brackets}
  5. Press enter, if you haven’t done so already.

Date and time should now be accepted, and Terminal will confirm this.  If you did it correctly, the installers should now work without either of those errors.  Worked like a charm here!
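
Using the example from step 3, that works out as:

date 120418152014
# sets the clock to 6:15pm on 4th December 2014, in mmddHHMMyyyy form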

Without Google, and particularly its quick realisation that I needed to be looking here, I’d never have even thought to check something like the system clock just to get an installer going!

Opinion: Marriott & others wishing to block “personal” WiFi hotspots

Even though I’m firmly based in the UK, I’m somewhat concerned to see the Marriott chain, among others in the US, applying to the FCC for permission to block “personal wifi hotspots” in certain corporate areas of their facilities. I’m not going to go into the technicalities of how this might be done, but articles from The Register give some (perhaps slightly biased) insight.

In theory, and in their defence, I can see why it might be desirable to the hotel chains and their customers. Nobody paying for a wifi service wants it to be interrupted, so anything that can be done to preserve it in a critical area is a Good Thing [tm]. The problem here is both in the message it sends and in the people it unintentionally affects, especially if there’s little or no definition separating a “personal hotspot” (someone tethering their mobile phone to their iPad to look up Twitter feeds or similar) from a functional wifi network that a contractor might have legitimate reasons to bring along as a tool to assist them in what they’re being paid to do in that same space. Let me explain by setting a scene:

Imagine that you’re a sound or AV contractor, brought in to support an event in one of these corporate spaces. These industries being what they are now, you likely have one or more devices in your arsenal that either require wifi or are very much more useful with it, whether for Internet connectivity (still relatively rare, in my experience) or for direct on-site control from a remote-control application running on something like an iPad or Android tablet. And there are likely more on the way.

Such systems being what they are, you likely find with experience, as I have, that they only work well together when configured with one specific brand, and even model, of wifi access point/router. So you cable that into your rack, and wherever you go your wifi network name and IP range are always the same; all you perhaps need to do is change the channel to better fit those around you, so that you (and others) get the most reliable connection and best available throughput.

So you rock up at one of the sites where such blocking is in place, and everything stops working. You grab the attention of a member of staff at the hotel to see what they can do. They point you to their on-site tech-support person, assuming the best-case scenario that they have one, and that this person is knowledgeable and sympathetic to your cause…

“Sorry guys, I know that this is a business device,” says the harassed-looking tech-support person, “but you’ve got caught out as it’s being seen as a ‘personal wifi hotspot’. There’s nothing we can do. You’ll either need to connect everything to our network, which is THIS brand, or get your hardware control surface and some cables out of the truck and use that instead.”

“But your brand doesn’t play nicely with my kit, even though it’s all supposed to be standards-compliant. Don’t ask me why, it just isn’t. I don’t/can’t carry that extra 60lbs/30Kg of kit, I don’t need it anywhere else. Soooo, can’t you just…”

“Nope. Sorry, rules are rules.”

Now sure, we could waste time arguing that relying on wifi, venue-provided or not, for any event-critical functions is asking for trouble. Certainly, I find there’s nothing quite like a piece of physical copper or fibre cable running between my devices for reliability. However we look at it, until the wider industry accepts that using over-saturated and effectively consumer-grade wifi is a no-go, and creates a separate radio band and maybe even a licensed protocol (much like the UK’s PMSE system) away from “consumers” to increase reliability and available bandwidth, we’re stuck with what we have – the often all-too-sucky wifi – for the foreseeable future.

Others might argue “why not just have hotels and other venues make provisions?” Okay, good start, but two points on this:

1) How much provision should they make, and how should they do it? And how do we even define that, and hold them to it?

2) As the owner and operator of such sound and AV kit, would you trust your show/event to someone else’s network and all the risks that entails? I personally choose not to, not least because I don’t want to have to configure numerous devices to talk to each other over someone else’s IP schema, or to be inadvertently bandwidth-restricted by someone else’s careless, callous or plain unwitting action(s) at a crucial moment.

Let’s be clear on why this stuff is important: when I’m working on-site, both my own and my client’s reputations are at stake, so at showtime anything outside my or my client’s direct control is a risk we’d rather not be responsible for without good, clear disclaimers covering our backsides. And even then, the potential reputation damage means we’d want to steer clear of that kind of risk anyways. Just as taking out life and car insurance doesn’t lead a responsible person to drive everywhere 30mph faster than they otherwise would simply because they know they’re covered if someone else runs out of talent on a critical apex and totals both cars and occupants.

So the message currently being sent seems to be “we want our clients to pay for everything, and to not have that paid service (and therefore income stream) interrupted, and consequences be damned”. Perhaps the finer details have indeed been thought through, but if nobody asks the question (I have), how do we know?

Technology is creating some amazing solutions in live events, with real potential not only to make life easier for those of us doing real work there, but also to save a lot of real mass being lugged around in vehicles the world over when much smaller and lighter solutions can be deployed. It would be a shame to have to put the brakes on that because a few shareholders of a few large hotel chains end up changing the landscape by way of “thought leadership”, and this silly idea ends up spreading.

And besides… when was the last time anyone connected to a public wifi service actually suffered because of other wifi networks popping up around the place? It’s never yet happened to me, perhaps because as an events tech I never trust my shows and equipment to such networks, precisely because I roll my own so I know everything plays nicely together. So what other modes of failure have I seen?

  • Run out of IP addresses in the DHCP pool? Sure!  I’ve even managed to do it myself to a bunch of users when I underestimated the usage one of my own networks would actually get.  It was easily fixed because it was my own network, so I saw it happen and fixed the problem immediately on-site. How many other sites can say the same, even at, say, Starbucks or similar?
  • Managed WiFi zones that can’t smoothly hand over a device from one base station to another? Plenty!
    • Granted this can be tricky to manage well, because different client devices respond differently to the management methods implemented by different zone managers. I have experience of trying to do this well across numerous platforms and have yet to find a true one-size-fits-all solution, especially one that doesn’t interrupt a significant group of client devices in some way.
  • Run out of bandwidth for the number of connected users? Yup – this is a biggie, and it happens nearly every time.
    • In the real world, I’ve given up using public wifi (paid or not), simply because most of the time the available bandwidth isn’t enough for the number of client devices (and their users) connected to it. So instead I tether to 3G or 4G mobile/cellphone networks.
    • Even abroad, my costings from travelling around the US in 2013 suggest it’s about 10x cheaper than the equivalent WiFi costs, and 10x more reliable; not to mention that I can take the cheaper mobile/cell service with me wherever I go, whereas the WiFi only works within the confines of the hotel or campus. The same applied in Italy in 2012. Uuuuuh, no-brainer then.

A key thing to note here is that in none of those highlighted cases am I even considering connecting valuable, event-critical tools to such networks for mission-critical tasks – here I’m only talking about personal “holiday” usage: finding out more about the immediate world around me, mailing the odd photo to friends and family, checking email to keep on top of bills and any big family news.

So please, if you’re a hotel chain considering this kind of plan, or an IT provider for a similar corporate planning a similar exercise, don’t even talk to me about this kind of revenue-generating exercise in the world of WiFi until you get your ducks in a row on these and other much simpler provisioning issues, okay?  If I’m going to pay 10x for a service I can get elsewhere and carry with me wherever I go, then you’d better make it worth my while. And a big hint: you don’t do that by blocking me from using the 0.1x-cost (versus yours) service that works. You do it by making your service WORTH 10x the cost of the other one. If you can’t, then perhaps something else is wrong, and it’s time to reassess the cost-benefit analysis.

“Exit to Street Only” – A new album for a new year

On New Year’s Eve 2014, another childhood ambition got ticked off the list: namely, to publish an album for public sale – huzzah! So I’ve taken the liberty of embedding it here and adding some details below:

This album is an electronica set made on and inspired by the sights, sounds and smells of London’s public transport systems.

I’ve been dabbling with music as a hobby for many years, and I’ve long been frustrated by the commute taking time away from home-life that I’d prefer to use for other things, music included. And then along came Nanostudio on the iPhone, and I quickly found myself able to use the commute time to actually *make* music.

I’m immensely proud to have gotten “Exit to street only” this far from my train seat – well, actually, about two and a half years of commutes, some seated but many not.  Composing and sometimes recording on my iPhone, and then my iPad, has been an incredible learning and growing experience.  Yes, some recording and mixing of vocal lines was done at home, as was the mixing and mastering, and during that latter process some sounds were replaced using Sunrizer on my iPad.

Like what you hear? It’s only £5 on Bandcamp: follow the “Buy” link in the embed above, or go to the album page here.

Raspberry Pi HDMI audio vs USB CPU use

Quick note after some experiments last night. Not completely scientific, but enough to show a trend. I set out to compare the CPU usage of the Pi running Volumio, upsampling lossless 44.1kHz 16-bit stereo to 192kHz 32-bit stereo with the ‘fastest sinc’ converter.
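
For reference, the relevant parts of the MPD configuration behind that test looked roughly like the excerpt below. The device names and values here are reconstructions/placeholders rather than a copy of my actual Volumio setup:

# mpd.conf (excerpt)
# 'Fastest Sinc Interpolator' is libsamplerate's fastest sinc converter
samplerate_converter "Fastest Sinc Interpolator"

audio_output {
    type    "alsa"
    name    "EMU 0202 USB"
    # placeholder ALSA device; the HDMI test used the Pi's onboard output instead
    device  "hw:1,0"
    # force 192kHz / 32-bit / stereo
    format  "192000:32:2"
}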

Streaming to the USB interface uses between 70 and 90% CPU. Streaming to the HDMI output uses 95% and more! Audio gets choppy in the latter case even without other processes getting in the way, whereas the former only gets choppy when the Pi happens to try to update the MPD database at the same time.

I wonder if anyone knows why onboard streaming should use so much extra CPU time to do the same work, and whether I2S suffers the same fate? I’m not sure I want to spend on a custom DAC if the current EMU 0202 USB is more efficient.

Quick AppleScript debugging tip

Been a while since I last had to debug some basic AppleScript – and it’s fair to say programming and scripting really aren’t my cup of tea. I don’t really *know* much about either skill these days, but with Google and enough time/coffee I can sometimes roll my own or call out simple errors in others’ code.

To help solve today’s problem (a script saving a file to the wrong location despite path names apparently being set correctly) it really helped to do two things:

  1. Use the “say” command to announce each stage of the script task, sometimes announcing the situation (such as pass or fail of an “if” statement or similar). 
  2. Use the “say” or “display dialog” command to announce key variables as they are set or manipulated. Dialogs are useful for long strings (like the full file name path I was working on) as they can remain visible until you click OK. 

These tricks probably look silly or “childish” to pro programmers, I’m sure, but they really helped me understand the code and its structure, so that I could see where a variable was being misinterpreted and apply a fix.
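
By way of illustration, the kind of lines I sprinkled through the script looked something like this (the variable name and path are made up for the example):

-- announce progress out loud, then show the path in use just before the save step
set savePath to (path to desktop as text) & "output.txt"
say "About to save the file"
display dialog "savePath is currently: " & savePath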
