Google – Another step backward for UI design?

It really doesn’t feel like much time has passed since Google launched the “black bar” to navigate around Docs/Calendars/other services.  And over time, many of us have come to rely on it being there.

Fast-forward a couple of years (wow, has it really been that long?), and now we get this:


Yup. That’s a grid, buried among a couple of other things that rarely get used.  Click on it, and a list of icons appears to take you to your chosen service. All well and good, except you have to click again to go there.

Those of us relying on pattern or muscle-memory to get things done intuitively will balk at this for a few reasons:

  1. We now need to click twice to get a simple thing done.  Surely hovering over the grid should bring up the menu?
  2. The grid is in no way intuitive – looking at the icon tells me nothing meaningful about what will happen if I click on it.
  3. The grid is in a completely different place on the page from where the old navigation bar was.

A little car analogy:  I need to know that when I take my car for its annual service, it comes back with key consumables replaced under the hood, but with the key controls (the gas and brake pedals, for example) in the same places, each retaining the same function as when I left the car at the garage.  I don’t want to relearn where the pedals are, and what each does, every time I head off on a new journey.  Likewise with software.  Changes and improvements are a good thing – but only when managed in a way that allows the majority to keep up, and to operate the machinery safely in the way they were first trained to.

It’s the small things like this (and Ars Technica has an interesting article listing similar things here) that are turning many of my tech-embracing friends and relatives away from the tech they purchased, because they don’t use it often enough to keep relearning pretty much every task they set out to achieve.  Many of them might only perform a task once every year or two, yet every time they do, enough little things have changed that they’re relearning the process as a new user.

I think that’s a clear example of technology creating more stress and more hassle – far from enabling things by reducing effort and overhead.

Am I the only one thinking this way?

Mid-2012 MacBook Air 13″ – fixing one form of “Black Screen of Death”

Various online forums are abuzz with MacBook Airs of 2012 and 2013 vintage suffering the “Black Screen of Death” – apparently the machine, mid-use, decides to either shut down completely, or just shut its display off.  It’s the latter case I’m most interested in here, since a colleague just presented me with her Mid-2012 128GB 1.8GHz 4GB-RAM model.  It’s still exactly as it was when it came out of the box.

The problem

The machine shut down mid-use, and subsequently would only boot as far as turning on the display backlight.

The (apparent) solution

The PRAM (Parameter RAM) reset – hold down the ALT (Option), CMD, P and R keys together immediately after pressing the power button.  While the keys are held down, the machine will reboot with a “clang” (the startup chime).  I usually hold the key-combo down until the chime has sounded three times, releasing the keys on the third.  This may be superstition, as one cycle might be enough, but it’s a habit from my bad old days doing the same trick on older G4-based iMacs that still hasn’t been shifted.

The result

The MacBook Air immediately booted as normal; within a few seconds I was greeted with the usual FileVault 2 login screen, and the machine has behaved impeccably since.

Further preventative maintenance

Apparently the machine had missed a few software update cycles, so I installed everything available, including a Thunderbolt firmware update and the recently-released 10.8.5 update.

Online music streaming – missing a note or two?

Google Play logo, courtesy Wikipedia

Quick thought, while I’m procrastinating…

While I’m not planning to let go of physical media anytime soon – not least the vinyl collection – I’m becoming a huge fan of Google Play, and its ability to play music “uploaded and matched” from my own collection.  Real bonuses for me are that this happens at no extra cost to my Google Apps domain, and it seems to work well wherever I have a reliable ’net connection.  The quality when listening via headphones and Google Chrome on a laptop is surprisingly good considering they’re MP3s – possibly transparent enough to pass a proper ABX test against the original uncompressed digital stream on CD.

But something is different, and something is missing… quite a lot of things are missing actually.

Where’s the song information?

Geeks might call this “metadata”. The information about the making and content of the recording is as useful to me as the content itself.  I like knowing things like who wrote the song I’m listening to. I might want to check the lyrics. I might also want to know whether I’m listening to a particular remaster or reissue.  While the content and artwork are there on Google Play, I’ve got absolutely no idea at first glance which exact version or release of a song I’m listening to.

At present, I know who the release artist is for a song as it plays, and from which album. I can even see the album artwork for the majority of my collection, as well as a release year.  What I don’t know without doing a *lot* more digging is whether the particular copy of “Bohemian Rhapsody” I’m listening to is from a 1990s remaster or the more recent (2011?) remasters. I’m not ordinarily such a geek – a great song is a great song whatever the medium it’s carried on.  But it’s good to know nonetheless.  Especially if I happen to like the work of a particular mix/master engineer, or if I purchased a particular CD release of an album for its known heritage, only to have it matched to another version that sounds noticeably different.

I think it would be really nice if digital streaming/shop purveyors could provide the full information for the songs they’re sending us.  There are more people involved in most major releases than just the artists, and it’s only right that they get the credit, even if the information serves no other significant commercial purpose.

What even made me think of this?

Listening to the current version of Queen’s “A Kind of Magic” up on Google Play, I’m noticing a lot more musical and tonal detail in the recordings than I remember from my own CD copies.  This is an album I’ve known for the whole of my musical life, so I have some very strong memories of it, and can recall absurd amounts of detail about the musical arrangements and sonic character, and how they differed between the releases I’ve owned.  Since I’m hearing so many new things despite listening on familiar equipment, I’d like to understand where they come from.  Since I like the differences, I’d like to know whether they are due to a particular engineer’s approach to remastering, and whether I can find more work by the same engineer – or learn something about the engineering approach that led to the result I liked so much.

On the one hand the freedom offered by always-on streaming access like this is wonderful – but on the other it comes with a lot of compromises, and with a lot of things “hidden” from view that I feel really should be open to us all…

Touchfreeze – useful tool

Been a while since I last used my Asus EeePC 1011PX for serious typing. And so it came as something of a surprise that, despite the latest Elantech touchpad drivers being installed, the touchpad was *still* being accidentally activated while typing.

So, out went the driver – it simply didn’t function in Windows 8.  Perhaps it doesn’t really support my particular hardware, or perhaps it’s an OS problem.  Either way, it was a whole lotta software for not a lot of function.

Instead, I’ve installed Touchfreeze from the Google Code project.  Left as installed, in automatic mode, it seems to be doing the job just fine, and I can carry on typing huge reams into OneNote 2010 with ease!

Feia – cassette restoration case-study

After a few weeks playing with head alignments, audio interfaces, decks, plugins and my sanity, I’ve run off a successful “first draft” attempt at restoring these interesting recordings.

About the cassettes themselves…

The cassettes themselves are a little odd – they appear to be using Type-II (CrO2) shells, but I can’t tell from listening or visual inspection whether the formulation on the tape is actually Type-I (Ferric) or Type-II. Both tapes seemed to sound better with Type-I playback EQ, selected in each case by blocking the tape type holes in the shell with judicious use of Scotch-tape.

Noise levels on the tapes were horrendous. Both cassettes seem to have been recorded about 10dB quieter than most commercial tapes given to me in the same batch, and seem to have suffered significant loss of high frequencies – something I noticed getting audibly worse with each playback pass, despite cleaning and demagnetising the heads before each run. At best I was getting something like 15dB signal-to-noise before noise reduction. Much of this is broadband noise, but there’s also a significant rolling static crackle on the right channel, which seems to match the rotational speed of either the pinch-roller on the deck, or perhaps the guide rollers inside the tape shell itself.


Something I’ve always known about the Akai deck I’ve now inherited and restored to working condition is that it’s always played a little fast. While I’ve not been able to fix this at a hardware level (seems to involve fiddling with the motor control circuits – a major stripdown and rebuild I’m not convinced I have the time or confidence to complete without an accident), I have taken an average of how fast the machine is playing by comparing songs from an assortment of pre-recorded commercial cassettes with digital copies from CD or previews on iTunes. From this I discovered that pulling the playback speed down to 95.75% of the sampled audio gives an acceptable match (within 1 second or so across the side of a cassette) to the commercially-available digital versions. This is really easy to do in my audio software as it doesn’t involve convoluted resampling and slicing to keep the original pitch.
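As a rough sketch of that speed correction (assuming the capture is a mono NumPy array; the 0.9575 factor is the figure measured above, and the test tone stands in for real audio), linear interpolation is enough to slow the audio down, shifting pitch along with speed just as a slower tape transport would:

```python
import numpy as np

def change_speed(audio, factor):
    """Resample so the audio plays back at `factor` x its captured
    speed. factor < 1 slows it down; pitch drops along with speed,
    which is exactly what we want when correcting a fast deck."""
    n_out = int(len(audio) / factor)
    positions = np.arange(n_out) * factor      # fractional read positions
    return np.interp(positions, np.arange(len(audio)), audio)

# The deck runs fast, so pull the capture down to 95.75% speed.
fs = 48000
t = np.arange(fs) / fs
capture = np.sin(2 * np.pi * 440 * t)          # stand-in for the digitised tape
corrected = change_speed(capture, 0.9575)      # ~4.4% longer, pitched down
```

Because pitch is allowed to move with speed, no time-stretching or slicing is needed – the whole correction is one interpolation pass.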

Noise reduction


A significant HF-boost was required to get the tape sounding anything like a natural recording, which of course brings the noise levels up. I don’t have access to an external Dolby decoder, and the Akai deck used for doing the transfers sounds very strange with Dolby B engaged even on well-produced pre-recorded material that came to me in excellent condition. The Denon deck I have is technically better than the Akai in many ways, but to beat the Akai in sonic terms needs about an hour spent on alignment (per cassette) and the source material needs to be in excellent condition. So I proceeded to transfer the content from the Akai at a known higher running speed, without Dolby decoding, in the hopes of being able to fix this later in software.

Decoding for playback

There is a lot said online about the mechanics of Dolby B, and many people think it’s a simple fixed 10dB shelving HF EQ boost (emphasis) on recording, easily dealt with by a simple shelving HF EQ cut (de-emphasis) on playback – or even by simply doing nothing with older tapes that have suffered HF loss. Well, without going into detail that might infringe patents and/or copyright, let me tell you that even from listening to the undecoded audio, it really isn’t that simple. What we’re dealing with here is a form of dynamic processing, dependent on both the incoming frequency content AND the incoming levels. Even with the modest amount of noise reduction on offer, it’s a beastly-clever system when it works, and remarkably effective in many environments – but like many complex systems it makes a lot of assumptions, and is open to a lot of factors influencing the quality of the output.
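To illustrate that level-dependent behaviour (and only that – the crossover frequency, threshold and block size below are invented for the sketch, and bear no relation to Dolby’s actual filter shapes, time constants or calibration levels), here is a toy single-band “decoder” that cuts quiet high-frequency content by up to 10dB while leaving loud content alone:

```python
import numpy as np

def toy_dolby_b_decode(x, fs, block=512):
    """Very rough single-band sketch of Dolby-B-style playback
    de-emphasis: quiet high-frequency content is attenuated by up
    to ~10dB, loud content passes untouched. Real Dolby B uses a
    sliding-band filter and carefully calibrated levels; this only
    demonstrates the dynamic, level-dependent principle."""
    # crude LF/HF split with a first-order lowpass around ~1.5 kHz
    alpha = np.exp(-2 * np.pi * 1500 / fs)
    lf = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = alpha * acc + (1 - alpha) * s
        lf[i] = acc
    hf = x - lf

    out = lf.copy()
    for start in range(0, len(x), block):
        seg = hf[start:start + block]
        env = np.sqrt(np.mean(seg ** 2) + 1e-12)   # RMS envelope of the HF band
        # full 10dB cut when the HF band is quiet, fading to unity when loud
        cut_db = 10.0 * max(0.0, 1.0 - env / 0.3)
        out[start:start + block] += seg * 10 ** (-cut_db / 20)
    return out
```

The crucial point the sketch shows: the same input spectrum gets a different EQ depending on its level, which is why a static shelving cut can never properly decode a Dolby-encoded tape.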

Working up a solution

Having no access to a known-good hardware decoder that could be calibrated to the tape, I set about using a chain of bundled plugins in my Reaper workstation software to mimic the decoding process. Having been through the process, with hindsight I can see why there are so few software decoders for Dolby B on the market, even without considering the patenting issues surrounding it. It’s a tough gig.

For this process, I picked out the best-sounding pre-recorded tape in our collection and aligned the Denon deck to it, listening for the most consistent sound, running speed and Dolby decoding.  I got a sound off the cheap ferric formulation that came subjectively very close to the same release on CD or vinyl in terms of listening quality – the tape suffering only slightly from additional HF grain, with some print-through and background noise evident only when listening at high levels on headphones.

I then aligned the Akai to the same tape before sampling (without Dolby B decoding) and correcting for speed. A rip of the CD, and the samples from the Denon, were used as references as I set about creating the software decoding chain – keeping overall levels the same between reference and working tracks to ensure I was comparing like with like.

A day was spent setting up and tweaking the decoder chain before I came out with a chain that gives equivalent subjective performance to what the Denon deck can do with great source material. I tried the same settings on a variety of cassettes, and was able to repeat the results across all of them…

Content, replication and mastering issues?

…until I came to the content of the Feia tapes I was planning to work on!

Once the cassettes were digitised, and playback speed and overall frequency response corrected, each side of the two tapes was given its own stereo channel, so that individual EQ, channel balancing and stereo-width settings could be assigned to each side of the tape, since I noted some differences in each of these areas that were common to each side of each cassette.

While listening to the digitising run, without playback speed correction, I noted a 50Hz hum in the recordings that was common to all sampled media. I tracked this down to signal-grounding issues between the audio interface, the monitor amplifier and the cassette deck. No amount of tweaking the signal chain could get rid of it, and with the tapes sounding significantly worse with each playback pass, the only way forward was to remove the hum using an FIR/FFT plugin. I therefore set one up on each of the stereo channels, sampled a section of the noise (without the content) into each filter, and tweaked the removal settings to be more subtle than the defaults – this removed the hum but left the remaining signal (including bass notes passing through the hum and its harmonic frequencies) intact.
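A minimal FFT-domain version of that hum filter might look like the following sketch. The attenuation depth, notch width and harmonic count are illustrative guesses rather than the plugin’s actual settings, but the key idea matches the “more subtle than default” approach described above: attenuate the 50Hz bins and their harmonics rather than zeroing them, so bass content sharing those frequencies survives.

```python
import numpy as np

def notch_hum(x, fs, hum=50.0, harmonics=4, width=1.0, atten_db=30.0):
    """Attenuate mains hum and its harmonics in the frequency domain.
    Attenuating (rather than zeroing) the affected bins is the subtle
    option: bass notes sharing those frequencies keep most of their
    energy. All parameter defaults here are illustrative guesses."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    g = 10 ** (-atten_db / 20)                 # e.g. 30dB cut -> gain ~0.032
    for k in range(1, harmonics + 1):
        mask = np.abs(freqs - k * hum) <= width
        X[mask] *= g
    return np.fft.irfft(X, n=len(x))
```

In practice a real noise-print-based plugin builds its attenuation profile from the sampled noise section instead of a fixed list of harmonics, but the bin-level attenuation idea is the same.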

Each stereo channel was then taken out of the master mix and routed to two more stereo channels – one for the noise-reduction decoder and the other for the side-chain trigger telling the decoder what to do.

Listening to the results at this stage was intriguing. Even after tweaking the decoder threshold levels I noted a general improvement in signal quality and a reduction in noise levels, but still a strange compression artefact that was evident on high frequencies. This got me wondering whether the Dolby B label was actually a mistake, and Dolby C had been applied instead. Cue another day spent mimicking the Dolby C system by tweaking my homebrew decoder. Nope – the compression was still there, but the overall spectral effect of decoding Dolby C had way too much effect on the mid and high frequencies.

So: onto the next likely candidate: dbx noise reduction. I found out more online about how it works and created an encode/decode chain in software, using a ripped CD track as source material.  Applying the decoding stage to the Feia recordings was dynamically a little better in the top-end, but still not right.
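For reference, the core of a dbx-style decode is broadband 1:2 expansion: every dB of level deviation from a reference point on tape becomes two dB on output, pushing tape hiss further down during quiet passages. A toy block-based sketch (ignoring dbx’s true-RMS time constants and its fixed pre/de-emphasis; the block size and reference level are arbitrary choices for illustration):

```python
import numpy as np

def toy_dbx_decode(x, block=512, ref=0.1):
    """Crude sketch of dbx-style broadband 1:2 expansion on playback:
    each block's dB deviation from a reference level is doubled, so
    quiet passages (and the hiss within them) are pushed further down.
    Real dbx uses true-RMS detection with specific time constants plus
    fixed pre/de-emphasis; `block` and `ref` here are arbitrary."""
    out = np.empty_like(x)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        rms = np.sqrt(np.mean(seg ** 2) + 1e-12)
        dev_db = 20 * np.log10(rms / ref)    # deviation from reference, in dB
        # multiplying by 10**(dev_db/20) doubles the block's dB deviation
        out[start:start + block] = seg * 10 ** (dev_db / 20)
    return out
```

A block at the reference level passes unchanged; anything below it comes out quieter still, which is where the noise reduction happens.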

Combining the homebrew Dolby B chain, and following it with a little dynamic expansion on the top 12dB of the recording made a useful difference.  Suddenly transients and sibilants sounded more natural, with more “bite” and less splashiness on the decay, particularly at higher frequencies.

Neither tape is sonic perfection itself even after this restoration, but I’ve learned a lot through it, and now have a much better understanding of why cassettes *can* sound great, but generally don’t – especially recordings made on one deck and played back on another.  I now realise that I’d far rather deal with vinyl and pre-digitised content than extract it from >20-year-old compact cassettes! At some future point, I’ll likely post some before/after samples so you can judge the results for yourself.

HMV: End of an era?

The news that HMV is calling for administrators is hardly a surprise. As with Comet and Jessops, the question in my mind is “What took so long?”

It’s a cruel irony that I’ve seen some significant improvements to their London Oxford Street and Piccadilly stores in the last few months, especially in vinyl stocks. But that doesn’t really offset the issues I’ve been having with them lately. For instance – none of the stores seem to have put much effort into being places that anyone would want to spend time in. The constant drone of over-loud pap-Muzak pervaded the entire experience, often distracting from what I wanted to buy. The vinyl sections up until a year ago were badly kept, with old bent/warped stock in a perpetual state of disordered chaos. This got better in the last few months at the Piccadilly store, but still wasn’t great.

Even finding CDs was a chore: at Christmas time in the Westfield Stratford branch I was unable to find anything from the shopping-list of well-known artists we had compiled, except for Susan Boyle’s latest. The cheaply-published and packaged best-ofs offered for the remaining artists on the list were hardly good gifts, and often didn’t actually contain the ‘best’ of said artists’ output. DVDs and Blu-rays were easier to find once I could navigate the crowds, but again I only had a 50% hit-rate. The eye-watering queues at the tills also didn’t help, especially for what should have been quick lunchtime purchases!

For me and my household, despite (always) being on a budget, price doesn’t have to rule the spending decision. Part of the fun of building our music and movie collection has been the voyage of discovery, and the sense of a good shopping experience. If the in-store experience is bad, or even just indifferent, then that detracts from my perceived quality of the product. If the store doesn’t care about its contents, then why should I, unless I really know something they don’t? Certainly in that case I won’t order online from the same store – more likely I won’t order anywhere at all until I find a store that does have it, and cares about it. In short – we tend to buy what we are looking for, or discover on the way – not always the cheapest, and rarely online.

An interesting angle on this was found when I took on the project to upgrade my grandparents’ tape collection to CD. Their collection has a surprising number of quality albums from the 80’s and 90’s, none of which I was able to find on CD in the high-street, HMV included. Given the amount of work involved in converting a number of old tapes to CD – restoring them on the way to the “like-new” quality levels associated with CD, so that the transition is an improvement as much as a necessity – it is usually far easier and more cost-effective to replace them with store-bought new copies. The artists get more royalties, the stores get more sales, and I save myself hundreds of pounds in time, software and electricity doing the conversions myself – that’s a win-win situation! This ‘shopping-list’ style of shopping lends itself best to online retailers now – but even online only about 75% of the content is available, and I’d rather support high-street stores where I can actually physically browse, interact with staff, etc etc. In other areas of life I’ve had fabulous conversations with staff and patrons, even leading to increased sales (“hey, you’re looking for Curved Air, right? I just found some over here!”) and offers of real work. That won’t happen if I buy my music on Amazon!

Another negative experience, and one that pervades all the ‘big’ electronics/media stores I’ve encountered recently, is that there’s no real try-before-you-buy facility, especially on things like headphones and media players. Where such facilities are offered, staff tend to be rushed and pushy, and the range of equipment available for real-world comparison is usually much smaller than that available for sale in-store. Where kit is available for demonstration it’s often broken, or priced at such a premium that I couldn’t afford it even if it were the right thing – many “Beats” or “Bose” headphones, for example, are easily outperformed by (sometimes significantly) cheaper competition, but with no way to test this there’s no way for the consumer to sort the genuine star players from the dross.

Seems to me that the lesson in common between Comet, Jessops and HMV is that a basic level of sales service and customer experience is being missed. Sure, the economic situation isn’t helping. Sure, online sales are taking their toll. But the stores I choose to frequent for such things, especially music, are those like Sister Ray and Music and Video Exchange in Soho, where passion, care and, above all, content are king.

If HMV passes, that leaves a niche for the small independents. If they (and we as consumers) can exploit it, that could be a very good thing for the music industry as a whole. If they don’t, then physical music purchases will likely become a niche themselves, with consumer electronics following close behind, beyond what the marketeers can tell us we should all be buying next. Sad times. I enjoyed the variety and excitement in these markets in the 80’s and 90’s, and I’ll miss them now they’re all but gone.

Some thoughts on using Google Docs

Following on from yesterday’s thoughts on using a Chromebook for an extended period, I thought it worth updating it (coming soon!), as well as jotting down some thoughts about Google Docs.  This got so big (and is relevant to all platforms, not just the Chromebook) that for the sake of clarity I decided to hive it off as a separate post.

Game-changing features

I think the main thing I’ve had to learn in terms of my expectation of what Google Docs can do, is to consider them as functions of a large and very advanced database.  From this perspective, the vague consideration of “wow – how do they even do that?” becomes much easier to resolve and put to rest.  With that in mind, I can now take a deep breath and present some major gains I’ve found with Google Docs as opposed to working in traditional desktop productivity apps like MS Office.

Never hit “Save” (or ctrl-S) again

This is a big one.  I type out a sentence, and then pause to look up at the toolbar… the word “Saving…” presents itself for a few seconds, before changing to “All changes saved in Drive”. In theory, this means I can go into a document, type some stuff, then just navigate away from it in the knowledge that the changes were saved without my even having to think about it.  Compare that with MS Office, where it’s quite normal to get completely sucked into writing that important document, have it crash while fine-tuning the formatting, and then find you didn’t manually save the last three hours of work – even the AutoSave functionality often doesn’t keep up with important edits.  The Google Way™ seems so much better, and has saved many a draft.

Always available, on any computer in the world…

…provided that it has an Internet connection and a modern web browser.  This has massive implications for the freedom of users to roam the planet as they need and still have access to the information that’s important to them.  Obviously this doesn’t negate the need for backup of truly valuable data – but it does act as a less admin-intensive solution than providing a full roaming Windows/Mac network account, with all the security and software-licensing hassles that creates.


Real-time collaboration

It’s now routine for my boss and me to dump a load of notes into a document, or run through entries on a spreadsheet, then have both of us view and edit the same document at the same time.  While we remain online and inside the document(s), we can each see who is doing what and where – even where the cursor is for each user.  This helps us greatly in documenting expenses, working through tricky wording of contracts, manuals and specifications, and other basic project-management tasks.  This feature alone, working across documents, spreadsheets and even presentations, has changed our working lives for the better.

Word processing

Generally, for any document created in Google Docs itself, everything pretty much works as expected – at least from a simple “type up some notes, edit them, make them look vaguely presentable, and print/email them” perspective.
That said, some foibles have stood in the way of my making a complete switch to Google Docs full-time and dropping my reliance on MS Office:

  • Previewing of MS Office documents does indeed (mostly) work, but Google Docs’ simpler headings, formatting and layout options mean that document fidelity with formal reports tends to suffer.
    • Sometimes inserted graphics disappear, or are rendered very badly, or appear in the wrong place with text wrapping mangled in the process.
    • Appendices and other numbered/customised headings tend to get lost – sometimes changing the implied meaning and flow of the incoming report.
    • To get around these issues, I tend to ask those reporting to us to submit (both final and draft) reports to me either as email body text (for informal reports), or as PDFs for more formal work.
  • Page layouts that preview well on-screen can end up with very different pagination, especially when printing to A4, or rendering to PDF.
  • Working with headers and footers is basic, but in fairness does allow insertion of tables, images etc for fine control over layout of logos, titles, author details, page numbers etc.
  • While I’m pleased to see that footnotes work, it’s not a full referencing system that can log and tabulate the source of each reference – again this makes full academic and some reporting use-cases awkward, and calls for migration to more powerful desktop software.
  • Table of Contents can be inserted, taking and automatically updating its entries from headings used throughout the document.  Good basic stuff, but:
    • No page numbers alongside the links.
    • No obvious control over which heading classes are included, nor over the specific formatting of the table entry.
    • Headings cannot be formatted with numbering in the way that MS Word or other word-processing apps handle it.  (Collaborative) drafting of formal proposals, reports or academic writing can be done in Google Docs, but really formal documents are best having the final text copy/pasted into MS Word or a more advanced desktop word-processing or page-layout tool of your choice.
    • Table formatting is quite flexible, but there are not as many line styles or formatting options as in MS Word.
      • Also, cell boundaries can only be moved when they are visible, e.g. when they have a border thickness greater than 0pt.
  • Printing and output
    • Page size is set to US Letter by default. This can be changed to any other supported paper size – A4 for me, please!
    • Equations entered through the Equation tool end up inconsistently placed and pixellated on both PDF and printed output.
    • Documents can be downloaded (or shared) as PDF
      • An example of the PDF output, combining this and yesterday’s posts, is here:  SamsungChromebook303Cusability (2)
      • Useful for sending out fixed versions of document files as a reference.
      • The PDF rendering engine can have some strange results, notably with changes to pagination.  Stray blank pages get inserted, and some placement changes made for the onscreen page preview end up looking different on paper.
      • A 20-page report (such as this one, according to the page count in the footers) on-screen ends up coming out as a PDF with 22 or more pages, depending on how and where simple page-breaks have been used.
      • Interestingly, automatically-generated page counts remain correct regardless of whether the document is viewed in the Docs editor, or as a PDF.
      • These are the kind of inconsistencies that most users I know find absolutely maddening for formal work – and a crucial limitation for users to be informed of. It’s like using a camera that takes a photo of the most beautiful mountain range in the world, at sunset, and when you download the photo to your home computer you find it actually gives you a photo of a discarded needle on a wet East London street-corner.
    • Documents can also be downloaded in common MS Office and other (more open) file formats.


Spreadsheets

My needs for spreadsheets tend to fall into one of two categories:

  1. Simple line-entries and basic summaries thereof, for things like expenses, inventory lists and the like.  This kind of use is so easy to cater for that I’ve yet to find any flaws – and the extra collaboration and availability of the files easily wins out over launching a bulky desktop application and opening an actual file from a disk.
  2. Complex mathematical data import, analysis and charting, with templates for print output of charts and tables to be included in other documents.  Such work tends to involve complex and obscure cell functions, and often (in Excel) some customised VBA code.  Such documents have previewed in Google Docs with reasonable fidelity, but there’s no way I’d expect anything other than MS Excel to understand the file, let alone work with it in any meaningful way or timeline.


Presentations

Rather than using presentations in teaching, I tend to use more of a show-and-tell approach, or even use a Google Doc (word processor) as a virtual blackboard to help explain what’s going on.  That said, when I want a simple pack of slides to summarise the points made, or to outline the plan for a day, the Presentations tool does the job.
I’ve not played with the Presentations tool much beyond this, mostly because I expect problems even getting PowerPoint files to open and play out correctly on another copy of MS PowerPoint – let alone transferring them to another app such as Google Presentations.

(Nearly) Two weeks with a Samsung Chromebook 303C

Scope of review

In the week before Christmas, we took delivery of a Samsung Chromebook Series 3 (303C), with the intention of reviewing its suitability for a distinct user group we administer.  To that end I’ve spent many hours using this machine in place of my usual MacBook Pro (for work), and occasionally for personal use in place of my usual Windows 8-based netbook.  I’ve taken notes on the thoughts and issues provoked in daily use, which have been compiled into this review (itself written on the Chromebook in Google Docs) so others can see where I’ve got to with it and why.  Hopefully it will inform and comment rather than poke holes or make fun.
Please note therefore that this review is neither an analysis of Google software/policy/infrastructure, nor an in-depth user manual for this machine or the Chrome OS it runs.  Others have those functions covered far better elsewhere.

Setting the scene

The computing market has been flooded with sub-£400 laptops in recent years, many in the small “netbook” form-factor.  Their primary intended use is the consumption of online content, and getting simple tasks done like email, letter-writing and online banking.  Most of these netbooks run full copies of Windows or Linux and offer enough power to run basic internet, office and even multimedia software – giving us a new class of affordable machines with surprising processing power and flexibility, despite being designed for much simpler tasks.  New models continue to be offered with Windows 8 and Intel/AMD x86-compatible processors.

Cheap, powerful computing – what it *can* be

I bought an Asus EeePC 1011PX to aid study and note-taking in 2011.  As my studies progressed beyond simple note-taking, I used it to write up projects in Microsoft Office 2010, to mix multitrack audio on the move, and to run room-acoustics analysis with a USB test mic.  That’s an amazing amount of processing power and flexibility for £230, even if that figure doesn’t include the extra hardware and software I now use with it.
To get the best out of such a small machine, I’ve had to carefully analyse my needs and find solutions that scale down appropriately.  Document compatibility issues finally pushed me to purchase and relearn Microsoft Office 2010.  To ease that transition I ditched the dog-slow Windows 7 Starter Edition in favour of the two major consumer previews of Windows 8, enjoying both enough to finally upgrade to the release version, Windows 8 Pro.
I’ve also had to deal with what feels like more than my fair share of maintenance.  Within 11 months of purchase both the fan and the hard drive failed; both were dealt with surprisingly quickly by the manufacturer’s UK repair agents.  No surprise that these moving parts needed replacement eventually – but within 11 months?  The OS itself needs to update itself from time to time, as do most of the individual applications – albeit less often and usually without requiring a reboot.
So all this leads me to ask: what makes the Chromebook any better than an arguably similarly-specified Windows machine at a similar price point, and what can one expect from such a machine?

Software and hardware

First off, a Chromebook comes preinstalled with just enough of an operating system (OS) to run Google Chrome and connect to the outside world via WiFi and Bluetooth, alongside slots for USB devices and SD cards.  Anything that can be done inside a web browser can be done with a Chromebook.  This essentially makes it a netbook in the most literal sense of the word.
Additional software is available, but only in the form of web-apps that can be installed inside Google Chrome itself.  This should ensure an increased level of OS security and stability compared with a full-blown Windows, Mac or Linux installation, since the user cannot fiddle with it.  It should also ensure that software updates are much more limited in scope and number, since there are fewer components on the Chromebook to update.
Installing Microsoft Office is out of the question, but that doesn’t mean the machine can’t be useful for everyday office productivity – instead of Office, Google expects you to use its Docs/Drive package with a Google account, and instead of Outlook, Gmail (which includes calendar and contacts functionality).

User data

A Chromebook typically comes with very little built-in storage.  The Samsung 303C tested here comes with a 16GB SSD which is seemingly used for both the built-in OS and any user-data such as downloads, etc.  With such limited onboard storage, multimedia options are limited to anything that can be downloaded from the Internet, or played directly from USB/SD media.
The idea of the Chromebook platform is that it acts as an interface to cloud-based storage and management of email and documents – and is clearly best used with a Google account.  If you don’t have one, the machine will allow you to create an account as part of the login process.

First impressions – hardware


  • Fast boot time (needs measuring)
  • Easy to get going with Google account credentials or as a guest user
  • Fast to sleep and to wake up.




  • Surprisingly nice screen – compares well with existing Asus EeePC 1011PX netbook. Pixel size seems ideal for form-factor.
  • Text rendering looks surprisingly crisp – without being fatiguing.
  • Matte finish much nicer to use than the reflective shiny glass finish on Macs and some PC laptops.



  • HDMI connection to second monitor has yet to work with any DVI or HDMI-equipped TV or computer monitor I’ve tried – usually causing the laptop screen to go dark.  This might make presentations a problem.




  • Thin
  • Light
  • Feels solid in the hand.



  • Fiddly to open one-handed, but too light and small to easily open two-handed.  Could easily have been solved by setting a bigger indent just under the trackpad to offer more grip.
  • Silver coating is really too easy to scratch. The underside of the machine is scratched up after a day’s use, and it’s only ever been on a clean desk, or inside a padded case.
  • The “G” from the Samsung lid decal has fallen off – not good, since the unit has only ever travelled in my hand or a soft case!
  • While the machine feels solid enough in handling, the screen does seem to touch the keyboard when folded down, allowing dust and skin-grease to transfer, particularly from the spacebar, forming lines on the screen.  This is common to all plastic-screened laptops and netbooks I’ve used.  Models such as recent MacBook Pros with much more solid glass-faced screens seem to flex less to begin with, and mark less easily than plastic if they do make contact with the keys.
  • Headphone socket is a very tight fit with most standard 3.5mm plugs encountered during the trial.  Really does feel like I’m going to break the machine if I push too hard.  This is the complete opposite case to most laptops I’ve ever encountered, whose headphone/line-out connections are generally too loose, causing nightmares for corporate presentations.




  • Full-size keyboard is very much like the MacBook (Pro) machines we’ve been using for the last five or more years.
  • Function keys well thought out with dedicated (and marked) keys for tab refresh, maximise, window cycle, brightness, volume mute/down/up, standby.
  • Typing longer documents (like this review, even) is a surprisingly comfortable experience – I’m finding it hard to feel any notable difference between this and a MacBook.
  • Dedicated “Search” button likely more useful to modern users than “Caps Lock”, but…



    • Actually, Alt-Search has the same effect as the old Caps Lock key – which makes sense, since the Search key sits in the traditional Caps Lock position, but this configuration could confuse new users who might not understand why their Chromebook “randomly” brings up a search function!
  • No “Delete” key, nor any obvious way to replicate its function.
  • Left and right arrow function keys would make most sense as a way of moving across tabs in the same window, but don’t appear to do anything?
  • No media keys – would be useful for YouTube, Google Play Music player, etc



  • Like many new machines, this was set a little slow by default. Soon fixed by adjusting settings (more on this later).
  • Right-clicking with two-finger tapping seems hit-and-miss.  Right side seems more sensitive/accurate to touch gestures than left.
  • Works best either with a firm thumb-push at the bottom (where buttons used to be before smooth trackpads became the “in thing”), or using tap-to-click. To this end-user, this feature seems no different to the glass Apple Trackpads fitted to aluminium unibody models.

Built-in software – in use

User accounts


  • Multiple user accounts can be set up on the same Chromebook.
  • “Admin” tools, suitable for remote control and corporate deployment, are available as part of a Google Apps domain (how else?), but at a cost of around $20 per machine per year at a quick glance.
  • Most users will likely be fine with a strong password and normal “user accounts”.
  • “Guest” (browser-only) access can be selected as an option at login/lock screens.
  • Accounts can be “locked” after sleep, requiring password (or switch to guest/alternate account) to wake – important for security.

Taskbar, a.k.a. the Launcher


  • Seems to be fixed at the bottom of the screen – but can be set to auto-hide.
  • Left side shows currently-open apps.
  • Apps can be pinned to the launcher, much like Windows.
    • Some apps open in their own window, some open in a new tab.
  • Right side shows clock, WiFi, battery and account avatar pic by default.  Also shows notification of audio muting and Caps Lock.



  • Relatively few built into the OS itself.
  • Tend to be limited to a particular app (for the browser) or function (for things like WiFi, Bluetooth, etc.).

Network connectivity

This machine’s sole means of connectivity with the outside world is WiFi, which supports WPA, WEP and unencrypted connections on both 2.4GHz (802.11b/g/n) and 5GHz (802.11a/n) networks.  Connectivity has been consistently good with a variety of Ruckus, Netgear and Apple access points.

Bluetooth connectivity

File transfer

Not attempted, as I couldn’t get the Bluetooth stack to connect with any phone that supports the Bluetooth file-transfer profiles.


Pairing an Apple keyboard/mouse set with the Chromebook was easy, once I’d remembered (searched Google for) the method to get the devices into a discoverable state.  Keymapping seemed reasonably logical – with volume, screen brightness, dashboard and windowing keys apparently behaving as expected.
Interesting discovery:  Playing a WAV file from a CF card (via USB card reader) brings up a built-in Music app – which does seem to respond even to the media keys on the Apple keyboard – impressive since there are no marked media keys on the built-in keyboard.  Nice little “easter egg” inserted to make developers’ lives easier perhaps?

Internet tethering

See “Interacting with Smartphones” below.


Apps can be set (usually by right-clicking on them in the Launcher bar or menu) to the following windowing modes:

  1. As standard tab
  2. As pinned tab
  3. Maximised
  4. Fullscreen

In real use, the actual implementation (and terminology) seem confusing and inconsistent.  “Maximised” Gmail has a different (and more minimalist) window style to any other “maximised” tab.  Some other apps (Scratchpad, for example) seem to be able to use the same minimalist maximised style, but not everything.

File management

It’s bound to happen – at some point in using a Chromebook, you’ll find that you’ve got some file(s) from a camera or USB drive that need attaching to an email or uploading to cloud storage somewhere.
Essentially, anything presenting itself as a USB Mass Storage Device, when plugged into one of the USB ports on the back of the machine, will bring up the File Manager window and make the contents available.  Obviously not every file type can be opened directly on the machine, but all files can at least be copied, uploaded or attached to emails.
Pretty much all common disk formats are supported, with no problems found during testing when reading and writing to USB drives formatted to default Mac OS X or Windows 8 settings.  According to the relevant Google support page, common Linux filesystems are compatible too – so the average user should rarely get into a situation where a given USB drive is unreadable.

A note about photos

Inserting an SD card or USB drive full of pics straight from a camera gives access to the pictures via the file manager.

  • Photos can be viewed as a slideshow directly from the drive.
  • Opening a photo will display it fullscreen.
  • Once the photo is open, the file manager also includes some simple editing tools:
    • Editing mode is enabled by clicking on the pencil icon that appears in the bottom-right corner of the preview screen/window.

Web browsing

This machine essentially is Google Chrome, with enough of an OS to run it.  So browsing the web is essentially the same as it would be on any other machine supporting the same version of Chrome.


The Apps menu links to various built-in apps by default, including an app for the webstore where additional software from Google and third-parties can be installed. Note that this doesn’t mean you can install standard Mac, Windows or Linux software on this machine at all, let alone expect it to run.
Any apps installed are essentially plugins that extend the functionality of the Chrome web browser.  If you sync your Chrome settings to your Google account, then all pre-existing bookmarks, settings and apps installed on other machines should find themselves synced on the Chromebook.


All machine settings are essentially available through the Settings tab of the Chrome browser itself – with some shortcuts (date/time, WiFi, battery) on the launcher.



Uses the normal web Gmail interface, just like any other browser.


Available via a free downloadable Offline Google Mail app, from the Chrome web store.

  • Interface looks more like the iPad Gmail app than the usual Gmail web interface.
  • Offline syncing can be set to cover up to the previous month’s worth of messages.
  • Some odd windowing issues when composing or filing messages.
  • Also, default zoom levels needed reducing (e.g. press Ctrl and – to zoom out) to make the text in the “Apply” and “Cancel” boxes readable.

Smartphone interoperability

Given the cloud-based credentials of the Chromebook and Chrome OS, how does one get at photos, audio or video recorded on a smartphone?  It would seem that these should be synced to a suitable cloud-based service via some form of native app running directly on the device itself.  Once in the cloud, they’re accessed through a browser or web-app like any other web content.

Interaction with iPhone 3G (iOS 3.1.3)


  • No way to get photos or other content direct from device over USB.
  • No mobile Internet tethering via USB/Bluetooth. No WiFi tethering via iPhone 3G without jailbreaking the iPhone, which is untested as I don’t want to jailbreak my work phone!
  • All Google services accessible through Safari will be kept in sync with the same services accessed via the Chromebook.

Interaction with iPhone 4 (iOS 6.0.1)

As iPhone 3G above, but:

  • WiFi hotspot may be possible, but I was unable to test it as the feature is locked out on my iPhone/plan.
  • All Google iOS apps, AND services available through Safari/any other browser app, will stay in sync with content accessed via the Chromebook.



Mobile phones, support and contracts… (Submitted by email)

It’s been interesting seeing how the mobile phone market has progressed in a few years. Ten years ago, I’d have walked into a store, picked a handset that did what I needed it to do, and lived with it as-is for the next two years, or however long the contract ran. Then wash, rinse, repeat, adding new features to the ‘necessaries’ list in the meantime to inform each new purchase. If a phone didn’t do what it should, software updates were out of the question – you just checked it thoroughly in the first week and, if required, swapped it for a phone that did work under an exchange policy. My Nokia 6310i worked for years without updates, and was even supported by much newer OSes for Bluetooth sync and data connectivity.

Then the smartphone came along, specifically the iPhone and Android platforms. There are hundreds more features in these things. And that’s great. I love my iPhone and find it very hard to imagine life without one. I’d function, but with more hassle in some ways, especially with regard to navigation and planning journeys on public transport. Email and SMS have become staples of information exchange on the move in ways I didn’t even think possible, let alone useful.

The downsides with this mass proliferation of features and functionality seem to be:

1) usability – it takes longer to learn to use and harness all the new features that come as standard. Learning these features, and optimising them for everyday smoothness, is becoming as big a time drain as not using them at all. iOS 6 has so many additional features over, say, iOS 4 that I’ll never realistically find time to try everything to see if and how it fits with my life and needs.

2) lock-in – there was a time for me in the late 1990s when I came to know about standards such as the POP3 and IMAP email systems and how to deploy them. I think LDAP or something like it was also available. These were worldwide standards – anything that followed the protocol could essentially work with anything else designed to the same protocol, regardless of the software or service provider. Fast-forward ten or more years, and we now have a number of somewhat proprietary systems for the same functionality, branded by, say, Google and Gmail, or Apple and its iCloud services. Taking email as an example, IMAP functionality is claimed but doesn’t quite work as the IMAP standards intended. Gmail IMAP basically works, but needs a bit of tweaking to get it right. On the other hand, I’ve yet to get a bog-standard IMAP client to even authenticate to iCloud’s servers, let alone talk to them. So if I’m to exploit the additional features offered by either platform, I’m forced to use more modern, more expensive hardware for features that really are trivially easy in terms of processing power and network bandwidth, if only the providers would stick to established standards. This isn’t strictly limited to mobile phone platforms, but it’s an important limitation that in part defines the solution deployed on my desktops and laptops.
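The point about standards can be made concrete. Because IMAP is specified in RFC 3501, the same client code should talk to any compliant server with nothing but the hostname changed. A minimal Python sketch of that idea follows – the Gmail endpoint is Google’s published one, while “example” is a hypothetical standards-compliant provider, and the credentials are placeholders (Gmail also requires IMAP access to be switched on in its settings):

```python
import imaplib

# (host, port) for IMAP over SSL.  "gmail" is the published Google endpoint;
# "example" stands in for any hypothetical standards-compliant provider.
IMAP_ENDPOINTS = {
    "gmail": ("imap.gmail.com", 993),
    "example": ("mail.example.com", 993),
}

def imap_endpoint(provider):
    """Look up the (host, port) pair for a named provider."""
    return IMAP_ENDPOINTS[provider]

def count_inbox(provider, user, password):
    """Log in over SSL and return the number of messages in INBOX.

    The code path is identical for every RFC 3501-compliant server --
    only the endpoint differs.
    """
    host, port = imap_endpoint(provider)
    conn = imaplib.IMAP4_SSL(host, port)
    try:
        conn.login(user, password)
        # readonly=True: just count messages, don't mark anything as read.
        status, data = conn.select("INBOX", readonly=True)
        return int(data[0])
    finally:
        conn.logout()
```

If a provider’s “IMAP” needs more than a hostname swap in code like this – special login dances, non-standard folder behaviour – then it isn’t really speaking the standard, which is exactly the lock-in complaint above.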

3) software updates – all these extra functions and solutions, whether built into the device operating systems themselves or bolted on as third-party applications, require regular updates to fix bugs or security holes. This seems to be an increasing need lately, since the devices, operating systems and data protocols involved seem to be too complicated for developers to get right first time – a problem that is human in origin (nobody is perfect, right?) and will likely never be fixed while needs (perceived or otherwise) and functionality continue to grow.

My big question coming out of all this is: do I really *need* all this new technology to survive in this modern age?

If the answer is thought to be “yes”, can I live with the time and patience required to get the best of it?

I’m getting to the point where the madness has to stop – beyond retaining existing functionality, the answer to both questions is trending towards ‘no’. I’m a technology geek – by no means an expert – but this small voice feels that something needs doing to make things easier on these fronts if we are to see this explosion in technological functionality actually translate into useful productivity. Anyone care to add any thoughts on this?