Volumio goes to x86…

…and I’m rather excited.

I’ve been a fan of the Volumio Project for quite a while now, since discovering it as a good platform for my Raspberry Pi audio player a year or more ago. Several self-built MPD-based setups have come and gone since the Raspberry Pi arrived, but Volumio has been the mainstay for reliable playback with control from numerous devices. The main draw for me has been the combination of its web interface, the fact that the hard work of getting all the software components working together has been done for me, and the fact that the whole package does seem to sound good.

On reflection I’m not sure that the various “audio optimizations” at the kernel or any other level really make an audible difference, but I do know that the whole package does seem to work more reliably on the limited resources of Raspberry Pi hardware than anything I’ve been able to cook up myself, at least without significant effort expended.

So why does an x86 port excite me so much?  Two reasons:

  1. More available processing power opens the platform up to interesting things like DSP, and to dual use such as streaming to remote machines, without falling over. Presently I’d need multiple Raspberry Pis set up with dedicated tasks. That’s been educational, but arguably a lot of hassle to set up and maintain. A single machine would make some of this easier.
  2. Opening up the platform to more common (and more powerful) hardware vastly extends the range of audio and storage hardware that can usefully be used with it, and perhaps extends Volumio’s exposure in the wider marketplace.

The Raspberry Pi is an amazing platform for what it is – and audio systems based upon its limited bus bandwidth are capable of sounding incredible. But not everyone has a NAS to throw their music onto, which makes the Pi’s single USB2 bus a pain to deal with when it’s handling networking, local storage AND the audio device all at the same time. And even those who do use it with a NAS are hampered by the 100Mbit Ethernet connection. Sure, streaming even “HD” audio files won’t tax it, but storing, backing up and indexing large audio collections will. And THIS is where even an old Netbook could best it.

At some point where time allows, I’m looking forward to putting my elderly ASUS Netbook through its paces with a 192KHz-capable USB2 audio device and either a USB drive or “Gigabit” Ethernet adaptor (its own onboard Ethernet, like the Pi’s, is limited to 100Mbit), to see how it stacks up against the Pi driving the same audio hardware. I know from running the RC download today that the distro works and plays audio even through the onboard sound, and the default setup, which shows the web interface on the onboard display with keyboard and mouse control, is a lovely touch.


Lossless audio compression for project work and archive – what do you use and why?

I’ve been using lossless audio compression as a matter of course in my audio workflow for a while, and not just for end-distribution.  Services like Bandcamp and Soundcloud support uploading in FLAC format, and many clients happily accept ALAC (Apple Lossless) or FLAC in place of traditional uncompressed WAV or AIFF formats.

As for long-term storage and archiving, I’ve switched to FLAC for my projects, and have back-converted most of my catalogue.  But why FLAC?

Pros:

  • The reduction in network bandwidth and storage requirements is always nice.
  • It brings the possibility of checking the project audio files for errors, by testing how each file decodes today against the FLAC checksum applied at the time of creation (see the sketch after this list).
    • This can show up “bit-rot” problems on disk storage that would otherwise remain hidden. It’s a useful alternative to deploying ZFS-based file systems, and keeps storage and network kit at an affordable “prosumer” level while such technologies mature a little longer, and hopefully come down in cost too.
    • If I find a problem file? Fine – restore from a good backup. But that relies, of course, on having good backups, and on the corruption not having been carried into them!
  • It’s an established format that many, many software applications can read, across a wide variety of operating systems – and given the number of websites springing up offering “audiophile” digital transfers of consumer audio material based upon the format, I have good confidence that software to read and write to the format will continue to be developed for some years ahead.  And if not, it’s still easy enough to batch-convert files to whatever format replaces it.
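
The integrity check mentioned a couple of bullets up is simple to script. Here’s a rough Python sketch of the idea – the archive path is hypothetical, and it assumes the reference flac command-line tool is installed (its -t switch decodes each file and verifies it against the MD5 signature stored at encode time):

```python
#!/usr/bin/env python3
"""Batch-test FLAC files against their embedded checksums."""
import subprocess
from pathlib import Path

ARCHIVE = Path("/path/to/audio/archive")  # hypothetical archive root

failures = []
for flac_file in sorted(ARCHIVE.rglob("*.flac")):
    # -t tests (decodes and verifies) without writing any output;
    # -s silences the per-file progress chatter.
    result = subprocess.run(["flac", "-t", "-s", str(flac_file)],
                            capture_output=True, text=True)
    if result.returncode != 0:
        failures.append(flac_file)
        print(f"FAILED: {flac_file}")

print(f"{len(failures)} file(s) failed verification.")
```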

Cons (none unique to FLAC, I notice):

  • Reaper is my DAW of choice, and exporting large projects to FLAC is time-consuming, as it still doesn’t multithread FLAC exports.
  • While Reaper *does* support recording directly to FLAC and other formats (as of version 5.111, at last check), recording to anything other than WAV or AIFF on my systems has consistently thrown up audible glitches in the recorded material. With some sample-accurate editing these can be fixed, but that’s still not acceptable.
    • What I do, therefore, is record to WAV/AIFF first, then save the project to FLAC before any significant mixdown happens (a conversion sketch follows this list).
  • Not every DAW supports FLAC natively, or at all.  But then, for me this is a moot point – not every DAW can read project files from every other DAW, so this is just a part of that wider picture.  You pick your tool for the job and deal with the consequences.
  • Conversion takes time, especially offline for large projects.
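
The WAV-first-then-FLAC step mentioned above is just as scriptable. A minimal sketch, again assuming the reference flac tool and a hypothetical project path – --verify makes the encoder decode its own output and compare it with the input before declaring success:

```python
#!/usr/bin/env python3
"""Convert a project's WAV files to FLAC, verifying each encode."""
import subprocess
from pathlib import Path

PROJECT = Path("/path/to/project")  # hypothetical project folder

for wav in sorted(PROJECT.rglob("*.wav")):
    out = wav.with_suffix(".flac")
    # --best = maximum compression; --verify = decode-and-compare.
    subprocess.run(["flac", "--best", "--verify", "-o", str(out), str(wav)],
                   check=True)
    print(f"encoded {wav.name} -> {out.name}")
```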

So – that’s a quick brain-dump of how I’m working with this stuff and why. I’ve missed steps, and I’m sure others will be quick to pick holes in it.

I suppose my question to anyone else reading this with sufficient interest is… What is everyone else doing? What file format would you pick, and why?


Raspberry Pi HDMI audio vs USB CPU use

Quick note after some experiments last night. Not completely scientific, but enough to show a trend. I set out to compare the CPU usage of the Pi running Volumio while upsampling lossless 44.1KHz/16-bit stereo to 192KHz/32-bit stereo with the ‘fastest sinc’ resampler.

Streaming to the USB device uses between 70 and 90% CPU. Streaming to the HDMI output uses 95% and more! Audio gets choppy in the latter case even without other processes getting in the way, whereas the former only gets choppy when the Pi happens to try to update the MPD database at the same time.
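
Figures like these are the sort of thing you can log with a few lines of Python rather than squinting at top. A rough sketch using the third-party psutil package (assuming the player process on the Volumio image is simply called “mpd”):

```python
#!/usr/bin/env python3
"""Log the CPU usage of the mpd process once a second."""
import time
import psutil

def find_mpd():
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == "mpd":
            return proc
    raise RuntimeError("mpd process not found")

mpd = find_mpd()
mpd.cpu_percent(interval=None)  # first call primes the counter
while True:
    time.sleep(1.0)
    print(f"mpd CPU: {mpd.cpu_percent(interval=None):5.1f}%")
```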

I wonder if anyone knows why onboard streaming should use so much extra CPU time to do the same work, and whether I2S suffers the same fate? I’m not sure I want to spend on a custom I2S DAC if the current EMU 0202 USB is more efficient.

Getting an EMU 0202USB working with a Raspberry Pi

In the last couple of weeks, out of curiosity, I’ve bought a Raspberry Pi to play with at home.  It’s really very impressive to see what can be done these days with a $35 computer – an “educational” model at that!

Our Pi is currently in place as our digital audio player, courtesy of the Volumio Linux “audiophile” distribution and an EMU 0202 USB audio interface.

Once the Pi was booting Volumio off the SD card, I found two things that needed doing:

  1. Set up the Pi to pull files off our NAS device. In theory this can be done from the Volumio web interface, but I had to go hacking around editing config files to make this work seamlessly (a sketch of the general idea follows this list).
  2. Set up the EMU for optimal digital playback. I take a somewhat different path on this to most “audiophiles”: I’m specifically aiming to implement a software volume control, provided I can run the digital audio chain at 88.2KHz/24-bit or higher. This means CD/MP3 content gets upsampled, while some recordings made natively at 88.2KHz/24-bit get played that way.
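
For point 1, I won’t reproduce my exact config edits, but the shape of the thing is a standard CIFS mount. A purely illustrative Python sketch – the share, mount point and credentials are all hypothetical, it needs root, and the equivalent line can be made permanent in /etc/fstab:

```python
#!/usr/bin/env python3
"""Mount a CIFS/SMB share from the NAS so MPD can index it."""
import subprocess

subprocess.run([
    "mount", "-t", "cifs",
    "//nas.local/music",  # hypothetical NAS share
    "/mnt/NAS",           # mount point MPD's music directory points at
    "-o", "username=music,password=secret,ro,iocharset=utf8",
], check=True)
```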

The Volumio forums helped me out with point 1, but I lost a lot of brainpower and free time getting the EMU to work properly. I could get it to play out at 44.1KHz/24-bit, but any attempt to play native files at higher rates, or to have MPD upsample, resulted in obviously robotic-sounding, distorted playback. It turns out the key was simple:

It seems the clock rate on the EMU 0202 and 0404 USB devices is assigned to a fader in ALSA, which in this case I accessed using alsamixer.  There were two faders for my 0202:  PCM and Clock rate Selector.

The latter has a range of stepped values, equating to the following sample rates:

  •   0% 44.1KHz
  •  20% 48.0KHz
  •  40% 88.2KHz
  •  60% 96.0KHz
  •  80% 176.4KHz
  • 100% 192.0KHz

What I’ve learned, then, is that to get the setup working I needed not only to set Volumio (or the underlying MPD player) to resample to the target output rate of 88.2KHz/24-bit, but ALSO to set the Clock rate Selector to 40% in alsamixer (a scripted version is sketched below).
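
For anyone scripting their setup, that alsamixer step can be automated. A sketch using amixer – the control name comes straight from the device, but the ALSA card index here is an assumption (check the output of aplay -l on your own system):

```python
#!/usr/bin/env python3
"""Set the EMU 0202 USB clock rate from a script."""
import subprocess

CARD = "1"  # assumed ALSA card index of the EMU; confirm with `aplay -l`

# On the stepped "Clock rate Selector" control, 40% selects 88.2KHz
# (0% = 44.1, 20% = 48, 40% = 88.2, 60% = 96, 80% = 176.4, 100% = 192).
subprocess.run(["amixer", "-c", CARD, "sset", "Clock rate Selector", "40%"],
               check=True)
```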

All works happily and I’m loving the more “analogue” sound of the EMU in that mode!

UPDATE, 23RD FEB 2014:

I’ve managed to get MPD to reliably resample to 176400Hz/24-bit (32-bit internal, 24-bit at the card) by forcing the Pi’s turbo mode to “always on” and applying a slight overclock. It’s not *quite* perfect yet, so I might see if I can push it a little harder before documenting our full setup.
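
For reference, the turbo and overclock settings live in the Pi’s /boot/config.txt. A sketch of applying them from Python – force_turbo is the real option name, but the frequency shown is an assumption that needs tuning per board (run as root, then reboot):

```python
#!/usr/bin/env python3
"""Append always-on turbo and a mild overclock to /boot/config.txt."""
CONFIG = "/boot/config.txt"
settings = {"force_turbo": "1",   # never let the CPU clock down
            "arm_freq": "950"}    # assumed value; tune for your board

with open(CONFIG, "a") as f:
    f.write("\n# keep the CPU at full speed for audio resampling\n")
    for key, value in settings.items():
        f.write(f"{key}={value}\n")
```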

Feia – cassette restoration case-study

After a few weeks playing with head alignments, audio interfaces, decks, plugins and sanity, I’ve run off a successful “first draft” attempt at restoring these interesting recordings.

About the cassettes themselves…

The cassettes themselves are a little odd – they appear to be using Type-II (CrO2) shells, but I can’t tell from listening or visual inspection whether the formulation on the tape is actually Type-I (Ferric) or Type-II. Both tapes seemed to sound better with Type-I playback EQ, selected in each case by blocking the tape type holes in the shell with judicious use of Scotch-tape.

Noise levels on the tapes were horrendous. Both cassettes seem to have been recorded about 10dB quieter than most commercial tapes given to me in the same batch, and seem to have suffered significant high-frequency loss – something I noticed getting audibly worse with each playback pass, despite cleaning and demagnetising the heads before each run. At best I was getting something like 15dB signal-to-noise before noise reduction. Much of this is broadband noise, but there’s also a significant rolling static crackle on the right channel, which seems to match the rotational speed of either the pinch-roller on the deck or perhaps the guide rollers inside the tape shell itself.

Playback

Something I’ve always known about the Akai deck I’ve now inherited and restored to working condition is that it’s always played a little fast. I’ve not been able to fix this at the hardware level (it seems to involve fiddling with the motor control circuits – a major stripdown and rebuild I’m not convinced I have the time or confidence to complete without an accident). Instead, I took an average of how fast the machine plays by comparing songs from an assortment of pre-recorded commercial cassettes with digital copies from CD or previews on iTunes. From this I discovered that pulling the playback speed down to 95.75% of the sampled audio gives an acceptable match (within a second or so across the side of a cassette) to the commercially-available digital versions. This is really easy to do in my audio software, because it’s a simple varispeed change: no convoluted time-stretching is needed to keep the original pitch.
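
Outside a DAW, the same correction is a simple resample. A sketch using the soundfile and scipy packages (filenames hypothetical) – because the fast deck raised speed and pitch together, a plain resample is exactly the right fix, and 95.75% reduces to the exact ratio 383/400:

```python
#!/usr/bin/env python3
"""Varispeed correction: slow a too-fast transfer to 95.75% speed."""
import soundfile as sf
from scipy.signal import resample_poly

data, sr = sf.read("side_a_transfer.wav")  # hypothetical transfer file

# Stretching the data by 1/0.9575 = 400/383 and keeping the original
# sample rate slows playback (and lowers pitch) to exactly 95.75%.
corrected = resample_poly(data, 400, 383, axis=0)
sf.write("side_a_corrected.wav", corrected, sr)
```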

Noise reduction

Challenges

A significant HF boost was required to get the tape sounding anything like a natural recording, which of course brings the noise levels up. I don’t have access to an external Dolby decoder, and the Akai deck used for the transfers sounds very strange with Dolby B engaged, even on well-produced pre-recorded material that came to me in excellent condition. The Denon deck I have is technically better than the Akai in many ways, but to beat the Akai in sonic terms it needs about an hour spent on alignment per cassette, and the source material needs to be in excellent condition. So I proceeded to transfer the content from the Akai at a known higher running speed, without Dolby decoding, in the hope of being able to fix both later in software.

Decoding for playback

There is a lot said online about the mechanics of Dolby B, and many people think it’s a simple fixed 10dB shelving HF EQ boost (emphasis) on recording that is easily dealt with by a simple shelving HF EQ cut (de-emphasis) on playback – or even by simply doing nothing with older tapes that have suffered HF loss. Well, without going into detail that might infringe patents and/or copyright, let me tell you that even from listening to the undecoded audio, it really isn’t that simple. What we’re dealing with here is some form of dynamic processing, dependent on both the incoming frequency content AND the incoming levels. Even with its modest noise reduction, it’s a beastly-clever system when it works, and remarkably effective in many environments, but as with many complex systems it makes a lot of assumptions, leaving the quality of the output open to many influencing factors.
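
To make “dynamic processing” concrete, below is a deliberately crude Python sketch of the family of techniques involved: an envelope follower on the high-frequency band driving a level-dependent HF cut, so quiet passages are de-emphasised while loud ones are left alone. It is nothing like a calibrated Dolby B decoder – the real system uses a sliding band and carefully specified time constants, and every parameter here is a placeholder – but it shows the sidechain idea:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def follow(x, sr, attack_ms=1.0, release_ms=60.0):
    """One-pole attack/release envelope follower (slow loop, but clear)."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, level = np.empty(len(x)), 0.0
    for i, v in enumerate(np.abs(x)):
        c = atk if v > level else rel
        level = c * level + (1.0 - c) * v
        env[i] = level
    return env

def crude_b_decode(x, sr, corner_hz=2000.0, max_cut_db=10.0, thresh=0.05):
    """Cut the band above corner_hz by up to max_cut_db when the HF
    content itself (the sidechain) is quiet: dynamic de-emphasis.
    x is a mono float array; all parameter defaults are placeholders."""
    sos = butter(2, corner_hz, btype="highpass", fs=sr, output="sos")
    hf = sosfilt(sos, x)
    env = follow(hf, sr)
    # Quiet HF -> full cut; HF at or above the threshold -> no cut.
    cut_db = max_cut_db * np.clip(1.0 - env / thresh, 0.0, 1.0)
    gain = 10.0 ** (-cut_db / 20.0)
    return (x - hf) + hf * gain  # crude band split, fine for a sketch
```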

Working up a solution

Having no access to a known-good hardware decoder that could be calibrated to the tape, I set about using a chain of bundled plugins in my Reaper workstation software to mimic the decoding process. Having been through the process, with hindsight I can see why there are so few software decoders for Dolby B on the market, even without considering the patenting issues surrounding it. It’s a tough gig.

For this process, I picked out the best-sounding pre-recorded tape in our collection and aligned the Denon deck to it, listening for the most consistent sound, running speed and Dolby decoding. I got a sound off the cheap ferric formulation that came subjectively very close to the same release on CD or vinyl in listening quality – the tape suffering only slightly from additional HF grain, with some print-through and background noise evident only when listening at high levels on headphones.

I then aligned the Akai to the same tape before sampling (without Dolby B decoding) and correcting for speed. A rip of the CD, and the samples from the Denon, were used as references as I set about creating the software decoding chain – keeping overall levels the same between reference and working tracks to ensure I was comparing like with like.

A day was spent setting up and tweaking the decoder chain before I came out with a chain that gives equivalent subjective performance to what the Denon deck can do with great source material. I tried the same settings on a variety of cassettes, and was able to repeat the results across all of them…

Content, replication and mastering issues?

…until I came to the content of the Feia tapes I was planning to work on!

Once the cassettes were digitised, and playback speed and overall frequency response corrected, each side of the two tapes was given its own stereo channel, so that individual EQ, channel-balance and stereo-width settings could be assigned per side – I’d noted differences in each of these areas that were consistent within each side of each cassette.

While listening to the digitising run (without playback speed correction), I noted a 50Hz hum in the recordings, common to all sampled media. I tracked this down to signal-grounding issues between the audio interface, the monitor amplifier and the cassette deck. No amount of tweaking the signal chain could get rid of it, and with the tapes sounding significantly worse with each playback pass, the only way forward was to remove the hum using an FIR/FFT plugin. I set one up on each of the stereo channels, sampled a section of the noise (without the content) into each filter, and tweaked the removal settings to be more subtle than the defaults. This removed the hum but left the remaining signal, including bass notes passing through the hum and its harmonic frequencies, intact.
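
The plugin I used learns the noise from a sampled section; as a rough stand-in, a similar end result can be approximated with narrow notch filters at 50Hz and its first few harmonics. A sketch with scipy – the Q value is an assumption, chosen high enough that bass notes either side of each notch pass untouched:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_hum(x, sr, fundamental=50.0, harmonics=4, q=60.0):
    """Notch out mains hum at the fundamental and its harmonics.

    A high Q keeps each notch only a few Hz wide, so programme
    material either side of 50Hz, 100Hz, ... is left intact."""
    y = np.asarray(x, dtype=float)
    for k in range(1, harmonics + 1):
        b, a = iirnotch(fundamental * k, q, fs=sr)
        y = filtfilt(b, a, y, axis=0)  # zero-phase, so no smearing
    return y
```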

Each stereo channel was then taken out of the master mix and routed to two more stereo channels – one for the noise-reduction decoder and the other for the side-chain trigger telling the decoder what to do.

Listening to the results at this stage was intriguing. Even after tweaking the decoder threshold levels I noted a general improvement in signal quality and a reduction in noise levels, but still a strange compression artefact on the high frequencies. This got me wondering whether the labelled Dolby B encoding was a mistake, and whether Dolby C had actually been applied. Cue another day spent mimicking the Dolby C system with my homebrew decoding chain. Nope: the compression was still there, and the overall spectral effect of decoding Dolby C had far too much effect on the mid and high frequencies.

So, on to the next likely candidate: dbx noise reduction. I found out more online about how it works and created an encode/decode chain in software, using a ripped CD track as source material. Applying the decoding stage to the Feia recordings was dynamically a little better in the top end, but still not right.

Keeping the homebrew Dolby B chain and following it with a little dynamic expansion on the top 12dB of the recording made a useful difference. Suddenly transients and sibilants sounded more natural, with more “bite” and less splashiness on the decay, particularly at higher frequencies.

Neither tape is sonic perfection itself even after this restoration, but I’ve learned a lot through it, and now have a much better understanding of why cassettes *can* sound great but generally don’t, especially recordings made on one deck and played on another. I now realise that I’d far rather deal with vinyl and pre-digitised content than extract it from >20-year-old compact cassettes! At some future point, I’ll likely post up some before/after samples so you can judge the results for yourself.


NAD 3020B: Keeper or Clunker?

Been a while since I last posted on anything audio-related – I’m taking that as a good sign because I know I’ve been enjoying a *lot* of music lately.

Our NAD 3020B in use, where it should be: in our rack! (Please forgive the poor photo.)

Many an audiophile posting online has an extremely polarised attitude towards the humble NAD 3020 series of integrated amplifiers, which seem to be very much a “love ’em or hate ’em” box. I always thought I was in the “love ’em” camp, but until I inherited a 3020B from my father at the end of last year I never quite knew why. It’s not been the easiest of journeys, so please bear with me as I try to explain what I’ve found and what was going on at the time I found it.

If there’s any one lesson to glean from this experience, it’s that getting hifi sounding good is as much about the interaction of components working together as it is about finding well-engineered components and slinging them together according to a spec-sheet. These are also differences that I feel can make or break a system over the long term, but may not be immediately identifiable in the typical demonstration arrangements that most stores can offer.

When inheriting our current system, my intention had been to replace my existing components one by one, so I could check how the sound changed at each stage along the way. I first swapped the speakers, as mentioned in another post. I then started repairing and using the record deck – there are plenty of other posts on that particular subject. With that now mostly bedded in, I’ve come to the final part: using the 3020B.

Build quality

As a whole the unit feels well manufactured. Years of dust needed cleaning out of the phono contacts before connecting anything, but the speaker output binding posts are firm and accept 4mm banana plugs without modification – this amplifier was made in the generation(s) before the EU got their teeth into manufacturing regulations in the mid-’90s.

The source-select buttons on this series are known to be of slightly cheap construction, resulting in the plastic caps flying across the room when a new source is selected. The source input sockets are also somewhat loose. This might be a result of their PCB flexing slightly when connections are made, or it might just be that the dimensional tolerance of the sockets themselves isn’t quite right. Again, this is a common flaw with amplifiers of this series, perhaps even of this generation.

The switches operate silently so far as the audio path is concerned, and the Bass, Treble, Balance and Volume pots/knobs also operate silently – rather impressive for such an old unit, especially if it’s ever been exposed to cigarette smoke, pets, small children and life’s little accidents as I know this one has.

Overall this unit is in better physical condition than I could have asked for. Some surface grime aside, it’s basically unmarked, except for the small hole drilled into its side where an intruder-alarm line used to be threaded through as a crime-prevention measure. It would be an extremely rare find on eBay that turns out in such good condition.

Sound quality – Take 1

Used with the Tannoy Mercury M20 loudspeakers it had been paired with in its previous home, my first impression was that it is far warmer in tone than the 302 I was comparing it to, even with all tone controls at neutral and the loudness control off. Bass has more depth, and stereo imaging is wider and deeper, but the treble felt like a veil had been placed over the speakers.

Some experimentation with the Soft Clipping circuit showed no audible difference whether it was switched “in” or “out”.  I prefer to be safe rather than sorry, so I’ve left it “in” for now.

Another interesting experiment was to assess any audible differences between using the “Normal” (Low and High-pass-filtered) and “Lab” (Unfiltered) power amplifier inputs.  Theoretically the “Normal” input should be used, to filter out frequencies below 20Hz and above 20KHz, enabling the amplifier to use all its power in the audible frequency range and to run without interference.  The “Lab” input sounds better to my ear – soundstaging feels more solid, and the tonal balance a little more accurate throughout the entire frequency range. (See the first comment on this post for more about the correct selection of “Normal” vs “Lab” input).

Even having worked out which signal path to use, and that the “Loudness” button was best avoided, the amplifier was still not producing an overall sound I thought I could live with. I therefore started tweaking to work out where the “problem” was, if only to understand what was going on.

Experimenting with Pre/Power amp combinations

Both the 302 and 3020 have pre-out and power-in socket sets, allowing either to be used as the power amp for the other’s pre-amp section. First of all I wanted to see if the older 3020’s pre-amp section was the cause of the slightly muted treble. Some re-plugging later, I had both CD and LP feeding the 3020 pre-amp section, which in turn was wired to feed the power-amp of the 302. This combination had narrower imaging, slightly leaner bass, and still the soft treble that felt like it was hiding something.

Next I swapped the amp sections round, with the 302 pre-amp now feeding the much older power-amp section of the 3020, and everything seemed better. The soundstage was locked tight between the speakers for centred instruments and vocals, but anything panned between and even outside the speakers was given much freer rein to do its thing. Either amplifier seemed equally capable of playing ‘depth’ information in recordings that have it, and so this was the way I left the units set up for some weeks while I got settled with the record deck and its cartridge.

Listening to the Tannoys through the 302 (using both its pre and power sections) I thought the sound was nicely tonally balanced, but I always felt like I was listening through an imaginary window that the box placed on the musical world being painted in front of me. Conversely, the 302-pre and 3020-power combo gave slightly more extreme bass and treble presence, and effectively took away that windowed effect while fixing the veiled treble of the older amplifier used on its own.

System changes – a second chance?

Having settled on using the 302 pre-amp and the 3020B power amplifier, a couple of things changed. First off, I found the new complexity of the system somewhat frustrating, but was willing to live with it if that’s what was going to give us the best overall sound. Then came the other major shift in our listening: I upgraded the phono cartridge to a Denon DL-160 MC (high output), seeking more accurate sibilants and better soundstaging. This much I got, but many recordings were now too bright. Whether this was a result of longer-than-optimal running times on some discs, or perhaps an active mastering decision, I’ll likely never know.

Phono stages

With the new cartridge in place, switching between 302 and 3020 phono stages showed the differences between them were surprisingly subtle, but the older stage won out. It seems to reveal more midrange detail than the newer design, particularly with female vocals.  There’s also a lot more information being played from the background of mixes, better rendering things like room ambience and reverb tails. It also has better overall dynamics, and the soundstaging is a little deeper and wider.

This surprised me, since on paper the older design looks like it should perform worse than the new one. For one thing, the signal-to-noise ratio quoted by the manufacturer is slightly worse for the older design, and I would expect its component tolerances to have drifted enough with age and use by now to have a significant negative effect, most likely a loss of high-frequency detail and increased noise.

Just one side-note on the 3020 phono stage: it has two modes, one for MM (Moving Magnet) cartridges and the other for MC (Moving Coil) cartridges. MM carts typically have higher output levels than their MC siblings, but our MC is a “high output” model, compatible with conventional MM stages. Having tried the unit in both modes, neither sounds different from the other, even when the setting is “wrong” for the kind of cartridge in use. The phono stage shows ample headroom: I experimented with using the MM cartridge through the extra amplification of MC mode and could hear absolutely no added distortion, even with discs mastered at very high recording levels. Further, using MC mode with its extra gain ought to bring more measurable background noise into the mix, but I’ve yet to hear this in practice.

The 3020B on its own – Take 2

I decided to give the amplifier a second chance to fly solo, with vinyl as the primary source. Soundstaging now sounds wonderful with well-mastered discs in good condition – Pink Floyd’s “Dark Side of the Moon” and Eric Clapton’s “Slowhand” show a lot of their natural recording ambience. Newer, more synthetic recordings like Enya’s “Watermark” or Jean Michel Jarre’s “Revolutions” sound as modern as their source material and production values suggest, with the end result always convincing and really very human. Every instrument and voice has its own space in the mix, with no particular instrument or frequency range standing out above any other.

Poorer or duller discs can easily be improved with an adjustment of the tone controls. Their effect is subtle but effective – I don’t feel like either circuit (Bass or Treble) impedes any other aspect of the sound passing through it, beyond whatever I’m telling it to do. Most bass-light recordings are also too heavy in the treble, so a slight treble reduction usually brings things back into perspective. The inverse tends to be true if a recording is bass-heavy: a slight treble boost usually evens things out.

Turning to digital sources, playback again felt like it was lacking some treble at first, and the soundstage was somewhat vague. For most TV and DVD content we watch this isn’t a bad thing, and easily fixed with a slight adjustment to the treble control.

With playback of CD or downloaded content through our EMU 0202USB, it seemed that while bass and mid-range were coming through with much more timbre than I’d been used to, and a much more even tonal balance, the high-frequency content was slightly reduced and felt slightly hazy, if such a term can apply to audio.

Having noted a slight increase in treble response over the few weeks the system lived in this new state, I’d have been happy to leave it there, concluding that either the increased usage had brought some components and connections back within tolerance, or (more likely) my subconscious processing of what I was hearing had adjusted to the new system.

But then I made a discovery: I could change the settings to run the DAC at a much-increased sample rate of 176.4KHz and 24-bit, with internal volume processing done in the computer at 32 bits. The overall effect was slightly more audible treble, but more importantly a lot more definition and control in the treble content.
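
A toy numerical illustration of why the 32-bit internal volume step matters – not a measurement of my system, just arithmetic on quantisation steps:

```python
import numpy as np

# Attenuate a low-level detail by 40dB (a realistic volume setting),
# then quantise to 16-bit vs 24-bit output word lengths.
gain = 10.0 ** (-40.0 / 20.0)
sample = 0.123456789 * gain

q16 = np.round(sample * 2**15) / 2**15
q24 = np.round(sample * 2**23) / 2**23
print(f"error at 16-bit: {abs(sample - q16):.2e}")
print(f"error at 24-bit: {abs(sample - q24):.2e}")
# The 24-bit quantisation step is 2**8 = 256x finer, which is why
# doing the volume arithmetic at 32 bits and handing 24 bits to the
# DAC preserves low-level treble detail that a 16-bit path would
# throw away.
```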

I’ll likely write separately about this transition, but it really does take the digital playback to a level that competes with the best of what our vinyl source can give us. Listening to Royksopp’s “Senior” album for example, bass frequencies go into (and possibly below) sub-bass territory and the system keeps up, resolving the basslines with good speed – at no time does any bass note feel like it’s stopping later than it should. Synthesised kick drums tend to have very short attack times, and these are resolved wonderfully, the tonality of each kick drum making even different synths identifiable.  This is something I’ve never experienced before.

Remastered recordings I’ve complained about before (Al Stewart’s “Year of the Cat” and Genesis’ “Trick of the Tail”) are still a little too treble-heavy for my tastes, but have huge amounts of spatial and vocal definition, and are finally on a par with the original vinyl releases of the same albums.

Conclusion?

Based on some very practical testing, done by ear and confirmed with others who were unaware of the tweaking going on behind the scenes (except for the cartridge upgrade), I have concluded that my 3020B is very much “a keeper”. Its warm tonal balance is generally flattering and does not interfere with the finer details of dynamics, soundstaging and definition. It is certainly able to show up any flaws in the recordings and source devices it’s amplifying. I think it fair to surmise that it does a good job with entry-level devices as they come out of the box, but a truly great job when fed from higher-end devices, whatever form they take.

Mike Oldfield: Tubular Bells

Don’t worry, I’m not going to do another review of this album. Many of us have read too many of them by now and my conclusion is that it’s something that one either “gets” or doesn’t. Instead I’d like to offer an insight on my experience of the album as a musical piece.

It’s an album I’ve always wanted to understand, and perhaps even grow to like, yet until this evening I had never heard it in a context or from a source that does it justice. I’ve owned a copy on CD since around 1990 I think, when I first became sentient and started to realise I love music. I remember getting that CD home and trying to listen to it on headphones and just… hated every moment of it. It wasn’t that the music itself was uninspiring, or that it needed concentration to really get the most out of, it was more that I felt I simply couldn’t hear enough of what it was made of for it to make sense. Perhaps this then was the start of my interest in audio?

Fast-forward through memories of several life stages, accompanied by several audio playback systems over the last 20 years or so, to the present moment. I find I have on the shelf a “well-loved” 1980s pressing of the album on 12″ vinyl. My head hurts, life is what I might call “full” right now, and some escapism is most welcome. So I put this on the deck and let it play out.

And I’m absolutely gobsmacked. For the first time I feel like I’m actually hearing the work. I can hear the timbre of the instruments and the arrangements. I can feel moods change, and I can appreciate the random non sequiturs that actually add to the intended mood rather than distracting away from it. The work feels right. And so I shut my eyes, listen, and am taken on an obscure journey that has completely set me to rights. Just wonderful.