Mac OS X Yosemite Quarantine issues and workaround…

I’m getting tired of having to deal with this, both at work and at play.

Many useful Mac apps still come from places on the Internet OTHER THAN the Mac App Store.  This might be news to the boffins at Apple, but there you go.  This can cause problems at the user level, where we end up with warning messages like this every time we try to start an installed application:

“xxxxxxxxxx” is an application downloaded from the Internet. Are you sure you want to open it?

AAAAAAAARGHH!  OF COURSE I want to open it! I installed it! I even used my Admin rights to move it to my Applications folder, and it’s been there for months, perhaps years! So quit telling me about this every time I open it!

Okay, chill, breathe, take your meds, it’s time to fix this.  Again, Google to the rescue, and I found a lot of people have been having this kind of issue since Lion.  I have to admit it had never bitten me or my pool of users in the bum (except on first use of an application, which is fine, because that’s all it should do) until Yosemite.  And specifically, Yosemite’s 10.10.2 point release.  Ugh.

In all cases, people have reported general success by various sledgehammer-to-crack-a-walnut means, mostly by turning security and quarantine features off entirely.  I prefer not to do that, so I much preferred the more fine-grained solution found here.  Not sure how it’ll hold up as apps get upgraded, but even if it needs redoing each time, it’s better than being prompted every time I open an app I regularly use!

So, rather than rewording, I’ll quote D. W. Hoard’s words from his article (linked above):

The quarantine flag can be removed from files or applications that are already downloaded, without completely disabling the quarantine mechanism, by using the following command:

xattr -d com.apple.quarantine /PATH/TO/APPLICATION

A slight shortcut is to type everything up to the path (including the trailing space) in a Terminal window, then drag the desired application or file from a Finder window into the Terminal window, which will automatically paste in the full path to the application or file. If you perform this process using an Administrator account, then the quarantine will be permanently removed for all users on the same computer, regardless of the Administrator privilege level of their accounts.
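As a quick sketch of the whole sequence (the application path below is a hypothetical example; substitute your own, or use the drag-and-drop trick above):

```shell
# Hypothetical path; in practice, drag the app in from Finder instead of typing it
APP="/Applications/Example.app"

# Remove just the quarantine flag, leaving the wider quarantine mechanism alone.
# (Guarded so this sketch is harmless on systems without the macOS xattr tool.)
if command -v xattr >/dev/null 2>&1; then
    xattr -d com.apple.quarantine "$APP"
else
    echo "xattr not found -- on a Mac this would run: xattr -d com.apple.quarantine $APP"
fi
```

To check whether the flag is there in the first place, `xattr -p com.apple.quarantine /PATH/TO/APPLICATION` prints its value, and prints nothing if the app was never quarantined.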

Oh gosh, I had a horrible thought… it reminds me of the dark days of MS Vista… 😮

Installing Mavericks or Yosemite on Mac laptop after battery failure

Had a number of issues with installing Mavericks or Yosemite on Macs that have had a dead battery.  By “dead” I mean either run flat and left in a cupboard or on a desk for a week or more before we’ve got around to rebuilding them, or where the battery itself has died and needed replacing before the software is rebuilt.

Each time we’ve had to do it, we’ve ended up scratching our heads and usually just cloning a hard drive from a working machine.  Today, while waiting for other tasks to complete, I managed to hit up Google for some research, and one common thread hit me…

When a laptop battery dies, chances are that if you leave it long enough, any onboard backup battery, the one keeping CMOS/BIOS/PRAM/NVRAM settings and the real-time clock alive, will eventually go flat too.

Usually the first sign after replacement or recharge is that the date and time are wrong.  Normally a Mac can either prompt the user that the date and time need resetting, or, if it’s already online, contact an NTP server and correct itself.  But when you’re installing from scratch, it does neither of those things.  In fact, it doesn’t even show you what the date and time are unless you go well out of your way to ask.  So the first sign that something’s wrong at this stage is an error message like:

  • “An Error occurred while preparing the installation.  Try running this application again.”
  • “This copy of the Install OS X Yosemite application can’t be verified. It may have been corrupted or tampered with during downloading.”

The fix, to get an install going here this afternoon, was easy:

  1. Get the installer booting, either from an external USB drive, or from Target Disk Mode from another working Mac.
  2. Once the installer is loaded and showing you the main menu, you should be able to see the “Utilities” menu. Click on it, and go to “Terminal”.
  3. Check the current date and time from your watch or another machine/clock/device of your choice.  Convert it to the numeric mmddHHMMyyyy format, so that 6:15pm on 4th December 2014 becomes 120418152014.
  4. Type the following into “Terminal”:  date {mmddHHMMyyyy string above, without these funny brackets}
  5. Press enter, if you haven’t done so already.

Date and time should now be accepted, and Terminal will confirm this.  If you did it correctly, the installers should now work without either of those errors.  Worked like a charm here!
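Step 3’s conversion can be sketched in shell, using the example date from above (the variable names are just for illustration):

```shell
# 6:15pm on 4th December 2014, broken into mmddHHMMyyyy components
month=12; day=04; hour=18; minute=15; year=2014

datestr="${month}${day}${hour}${minute}${year}"
echo "$datestr"    # prints 120418152014

# Inside the installer's Terminal you would then run:
#   date "$datestr"
```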

Without Google, and particularly its quick realisation that I needed to be looking here, I’d never have thought to check something like the system clock just to get an installer working!

Raspberry Pi HDMI audio vs USB CPU use

Quick note after some experiments last night. Not completely scientific, but enough to show a trend. I set out to compare the CPU usage of the Pi running Volumio, upsampling lossless 44.1kHz/16-bit stereo to 192kHz/32-bit stereo with the ‘fastest sinc’ converter.

Streaming to the USB interface uses between 70 and 90% CPU. Streaming to the HDMI output uses 95% or more! Audio gets choppy in the latter case even without other processes getting in the way, whereas the former only gets choppy when the Pi happens to try to update the MPD database at the same time.

I wonder if anyone knows why onboard streaming should use so much extra CPU time to do the same work, and whether I2S suffers the same fate? I’m not sure I want to spend money on a custom DAC if the current EMU 0202USB is more efficient.

Quick AppleScript debugging tip

It’s been a while since I last had to debug some basic AppleScript – and it’s fair to say programming and scripting really aren’t my cup of tea. I don’t really *know* much about either skill, but with Google and enough time/coffee I can sometimes roll my own or call out simple errors in others’ code.

To help solve today’s problem (a script saving a file to the wrong location despite path names apparently being set correctly) it really helped to do two things:

  1. Use the “say” command to announce each stage of the script, sometimes announcing the outcome (such as the pass or fail of an “if” statement or similar).
  2. Use the “say” or “display dialog” command to announce key variables as they are set or manipulated. Dialogs are useful for long strings (like the full file path I was working on) as they remain visible until you click OK.

These tricks are probably really silly or “childish” to pro programmers, but they helped me understand the code and its structure, so I could see where a variable was being misinterpreted and apply a fix.
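A minimal sketch of both tricks together (the path and messages here are made-up examples, not from the script I was fixing):

```applescript
set savePath to "/Users/me/Desktop/report.txt" -- hypothetical value under test
say "About to check the save path"
display dialog "Saving to: " & savePath -- stays on screen until you click OK
if savePath starts with "/Users" then
	say "Path looks sane"
else
	display dialog "Unexpected path: " & savePath
end if
```

Run it in Script Editor (or via `osascript` in Terminal) and you can hear and see exactly where the script’s idea of the path diverges from yours.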

Pilgrim’s Pod Radio Hour – Episode 3

UPDATE (26/2/2014):

This is the edited version, to keep the show length under an hour, and to tidy up some slower-moving passages.

ORIGINAL POST:

Another episode was recorded on Friday 7th February.  A slightly different feel to this one – with more spoken content. Featuring Liz Jadav and Phil Gallagher.

Technical notes

This time, the live-stream was sourced from the software mix that created this edited recording.  I’ve fixed a mistake where I ran out of hands to sort the live-stream mix during the intro, and we re-recorded a song with Paul after he’d choked on some water just before his song!  Aside from those issues, the stream levels were much more easily managed this way, and mixing the recording live with the usual processing in-place also made this edit much quicker to produce!

Also new to us was a Superlux S502 ORTF mic (select “English” from the top-right of the linked page), used for room ambience and audience.  Compared with the AKG 451s we were using, rigging was much simpler, and the resulting sound was slightly more consistent.  I’m really pleased with this mic in this and some other applications; subject for another post I’m sure!

Getting an EMU 0202USB working with a Raspberry Pi

In the last couple of weeks, out of curiosity, I’ve bought a Raspberry Pi to play with at home.  It’s really very impressive to see what can be done these days with a $35 computer – an “educational” model at that!

Our Pi is currently in place as our digital audio player, courtesy of the Volumio Linux “audiophile” distribution, and an EMU 0202 USB audio interface.

Once the Pi was booting Volumio off the SD card, I found two things that needed doing:

  1. Set up the Pi to pull files off our NAS device.  In theory this can be done from the Volumio web interface, but I had to go hacking around editing config files to make it work seamlessly.
  2. Set up the EMU for optimal digital playback.  I take a somewhat different path on this to most “audiophiles”.  I’m specifically aiming to implement a software volume control, provided I can run the digital audio chain at 88.2kHz/24-bit or higher.  This means CD/MP3 content gets upsampled, while recordings made natively at 88.2kHz/24-bit get played that way.
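For the record, the sort of thing I ended up with for point 1 was a CIFS line in /etc/fstab along these lines (the server name, share and options are hypothetical placeholders, not our actual setup):

```
# /etc/fstab -- mount the NAS music share read-only at boot
//mynas/music  /mnt/nas  cifs  ro,guest,iocharset=utf8,nofail  0  0
```

The `nofail` option saves the Pi from hanging at boot if the NAS happens to be off.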

The Volumio forums helped me out with point 1, but I lost a lot of brainpower and free time getting the EMU to work properly.  I could get it to play out at 44.1kHz/24-bit, but any attempt to play native files at higher rates, or to have MPD upsample, resulted in obviously robotic-sounding, distorted playback.  It turns out the key was simple:

It seems the clock rate on the EMU 0202 and 0404 USB devices is assigned to a fader in ALSA, which in this case I accessed using alsamixer.  There were two faders for my 0202:  PCM and Clock rate Selector.

The latter has a range of stepped values, equating to the following sample rates:

  •   0% 44.1kHz
  •  20% 48.0kHz
  •  40% 88.2kHz
  •  60% 96.0kHz
  •  80% 176.4kHz
  • 100% 192.0kHz

What I’ve learned, then, is that to get the setup working I needed not only to set Volumio (or the underlying MPD player) to resample to the target output rate of 88.2kHz/24-bit, but ALSO to set the Clock rate Selector to 40% in alsamixer.
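For anyone wanting the MPD side written down, the relevant /etc/mpd.conf fragment for my 88.2kHz/24-bit target looks like this (standard MPD option names; Volumio normally manages this file itself, so treat it as a sketch):

```
# /etc/mpd.conf -- resample everything to the target output rate
audio_output_format    "88200:24:2"
samplerate_converter   "Fastest Sinc Interpolator"
```

The card-side step can also be scripted rather than done interactively, with something like `amixer sset 'Clock rate Selector' 40%` (the exact control name and card index may differ on your system).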

All works happily and I’m loving the more “analogue” sound of the EMU in that mode!

UPDATE, 23RD FEB 2014:

I’ve managed to get MPD to reliably resample to 176.4kHz/24-bit (32-bit internally, 24-bit at the card) by forcing the Pi’s turbo mode to “always on” and applying a slight overclock. It’s not *quite* perfect yet, so I might see if I can push it a little harder before documenting our full setup.

Rocky road ahead: Google Cloud Print (BETA)

Background

An organisation whose IT team I know well has moved a lot of their services across to various Google platforms.  The move has been considered largely positive by users and management alike, not least because it has significantly reduced the management and infrastructure burdens on their organisation, and has genuinely improved IT-related life in many key ways.

The move therefore continues apace.  One problem the organisation has identified is that there seems little sense in paying c.£500-£1000 per head for a computer setup that spends the vast majority of its time being used (legitimately) as a web browser.  The various Chromebooks undergoing trial have been a huge success given their planned usage, but with one common problem: users in 2013/14 STILL need to be able to print.

[Enter Google Cloud Print (BETA), Stage Left]


“No problem!” says Google, “Here’s Cloud Print!”.  There are two flavours of documentation presented, in “consumer” and “IT Administrator” guises, both essentially saying (and ultimately describing) the same thing.

For those who haven’t come across it yet – the idea is that you buy a “Google Cloud Print Enabled” printer, give it 24/7 power and Internet, and you can print to it from anywhere, using your Google account in various creative ways.  Specifically for my friend, it gives print access to Chromebooks and other portable devices for which no other good printing solutions already exist.  Essentially if it can run Google Chrome, it can print.  And the concept is really neat.

Forecast: Storms ahead

There’s a thunderstorm in some clouds however, and this service is no exception.  I’ve heard a few common complaints in various pub-conversations, and even investigated a few when I’ve experienced them myself within my own Google Apps domains:

  • First off, some printers, once correctly hooked up and signed in, simply stop receiving Cloud Print jobs.  Often, turning them off and on again, and waiting up to a day, solves it.  But sometimes the log-jam becomes permanent.  Printing via the local network or a direct USB connection still works fine from machines that can do it, but all Cloud Print jobs get stuck, forever destined to be “In Progress”.
  • The Cloud Print Management interface looks surprisingly mature for a Beta product, except that it gives very little information about what is really happening.  Once a job inevitably gets stuck, there’s no option to do anything other than to wait, or delete it.  It can’t be diverted to another printer.
  • More worryingly, the status codes are too general.  Sure, I don’t need a verbose running commentary when things are working well, nor perhaps when a job is “in progress”.  But when things get stuck, I’d like more information about the problem than the job simply being flagged “Error”.
  • Google provides no technical support for Cloud Print – so beyond what you can find in documentation provided either by Google or your printer manufacturer, you’re on your own.  No support. No apparent feedback mechanism even.
  • If something does go wrong, often the only way to fix it is to delete the printer in Cloud Print and re-assign it.  This might be fine for single home users, but for anyone sharing a printer between two or more people it gets complicated, because the newly set-up printer then needs to be shared again with everyone who uses it.
  • Then there’s the pervading security concern.  Where are those jobs going when travelling between the browser and the printer, and in what format?  Are they encrypted?  Are the documents scanned for content by Google or anyone else on the way?

Google comes close to a partial answer in the FAQ/support page, with the following statement:

Documents you send to print are your personal information and are kept strictly confidential. Google does not access the documents you print for any purpose other than to improve printing.

For home users, that might be good enough.  At least there’s *something* in writing.  But for a business, I’d suggest it’s too vague.  Let’s leave that alone for a moment and look at troubleshooting: how do I get a print queue working again if I’m using a cloud-ready printer?  Again, Google has a partial answer:

If you’re using a cloud ready printer…

Okay, done that, and checked that.  Still nothing.  Now what?

Conclusions?

Some reading this might say I’m being too harsh on what is *really* only a beta product.  And they might be right, if the product had been released in the usual beta context: marketed only to technically-interested (and competent) people for evaluation, feedback and improvement before a wider release.  What’s happened instead is that some printer manufacturers have jumped onto the product by offering support (good), but without making it clear that this is a BETA service which may change, break or be taken offline at any time, without warning (bad.  Very bad).

Even the business run-down provided by Google doesn’t mention its BETA status, and gives no clue as to how support or (useful) feedback can be found, nor even submitted.

So, is this going to be like so many other recent Google BETA products that build up half a head of momentum and then suddenly get killed?  Or will it actually mature, like Gmail did, into a properly supported service with SLAs available to those who need them?  Only time will tell, but meanwhile, based on what I know now, I’m finding it very hard to recommend deploying Google Cloud Print in my own organisations in its present form…