I’ve been a fan of the Volumio Project for rather a while now, since discovering it as a good platform for my Raspberry Pi audio player a year or more ago. Several self-built MPD-based setups have come and gone since the Raspberry Pi arrived, but Volumio has been the mainstay for reliable playback with control from numerous devices. The main draw for me has been the combination of its web interface, the fact the hard work has been done for me in terms of getting all the software components working together, and the fact that the whole package does seem to sound good.
On reflection I’m not sure that the various “audio optimizations” at the kernel or any other level really make an audible difference, but I do know that the whole package does seem to work more reliably on the limited resources of Raspberry Pi hardware than anything I’ve been able to cook up myself, at least without significant effort expended.
So why does an x86 port excite me so much? Two reasons:
More processing power opens the platform up to interesting things like DSP and dual-use such as streaming to remote machines without falling over. Presently I have multiple Raspberry Pis set up with dedicated tasks. That’s been educational, but arguably a lot of hassle to set up and maintain. A single machine would make some of this stuff easier.
Opening up the platform to more common (and more powerful) hardware vastly extends the range of audio and storage hardware that can usefully be used with it, and perhaps extends Volumio’s exposure in the wider marketplace.
The Raspberry Pi is an amazing platform for what it is – and audio systems based upon its limited bus bandwidth are capable of sounding incredible. But not everyone has a NAS to throw their music onto, which makes the Pi’s shared USB2 bus a pain to deal with when using it for networking, local storage AND the audio device all at the same time. And even those who do use it with a NAS are hampered by the 100Mbit Ethernet connection. Sure, streaming even “HD” audio files won’t tax it, but storing, backing up and indexing large audio collections will. And THIS is where even an old Netbook could best it.
At some point where time allows, I’m looking forward to putting my elderly ASUS Netbook through its paces with a 192KHz-capable USB2 audio device and either a USB drive or “Gigabit” Ethernet adaptor (its own onboard Ethernet, like the Pi’s, is limited to 100Mbit), to see how it stacks up against the Pi performing the same tasks. I know from running the RC download today that the distro works and plays audio even on the onboard audio, and the default setup of using the onboard display, keyboard and mouse to show the Web interface is a lovely touch.
This week I’ve been playing with our recently acquired Revox B77 1/4″ recorder. It’s a stereo half-track model, and I use it at 7ips. Currently I’m not sure what actual tape I’m recording to, as it’s a 20ish-minute offcut from a reel that was found to be blank at the end of a transcription job. The two tracks presented here represent two very different production methods now open to me.
“Changes Afoot” was sent track-by-track to tape, from DAW, and back again, to produce a digitally mixed master. A couple of those tracks were recorded particularly “hot” to bring out more tape character. No noise reduction was used at all. Signal-to-noise on any single tape recording in this setup was found to be around 60dB.
“Innocence 2010” started as a digital stereo pre-master that I was never fully happy with, which was sent to tape via a skunkworks noise reduction system I’m working on behind the scenes. The system is not yet complete, but has enough processing built to give a useful 10dB gain in signal-to-noise ratio without any significant audible artefacts, as borne out by the 69-70dB signal-to-noise ratio found in this setup even after peak-limiting.
I suspect the signal-to-noise ratio is limited by noisy pre-amps on my DAW setup; I’ll need to swap to a different audio interface to confirm. That’s something to play with another day. Overall I’m VERY impressed with the overall sound quality this kit is able to deliver, and the range of analogue “colours” it can provide. I’m really looking forward to finishing the skunkworks noise reduction project; I have my eyes set on somewhere near 24dB noise reduction once it’s fully up and running. But it’s good to prove that I can both encode and decode on-the-fly!
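As a footnote, the dB figures above are straightforward to sanity-check with standard dB arithmetic. The RMS voltages in this little sketch are purely hypothetical, not measurements from my setup:

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in dB, from RMS voltage levels (20*log10)."""
    return 20 * math.log10(signal_rms / noise_rms)

# A 60dB baseline corresponds to a 1000:1 signal-to-noise voltage ratio
baseline = snr_db(1.0, 0.001)                      # 60.0 dB

# A 10dB improvement means the noise voltage drops by 10^(10/20), ~3.16x
improved = snr_db(1.0, 0.001 / (10 ** (10 / 20)))  # ~70.0 dB
print(baseline, improved)
```

Which is exactly the jump the noise reduction prototype delivered: roughly a two-thirds reduction in noise voltage for that 10dB gain.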
I’ve been really happy with our record deck since we inherited it a few months ago, but one common problem has been playing back 70’s and 80’s pop/rock LP’s that have been, shall we say, well loved. I had an idea while last re-aligning the cartridge/stylus that part of the treble reproduction issue I’ve been experiencing with these discs was that the cartridge was slightly vertically offset in comparison to the vertical alignment of the groove – almost as if the cartridge mounting is somehow twisted slightly on the arm.
The Dual 505-2 has no azimuth adjustment to speak of, but the cartridge is held in with a pair of bolts, and so the azimuth could easily be corrected by inserting a washer or some other small object to act as a shim. I didn’t have any washers to hand that were less than 1mm thick (way too thick for this project), so for the sake of experimentation I used a piece of copper tape retrieved from a no-longer-working hard drive enclosure. It’s less than 0.25mm thick, and easy to cut and fold to the right height.
To get a rough measurement of what was required, I chose an older duplicate disc and set it playing, and observed the end of the cartridge right above the needle riding the groove. I estimated that the left side of the cartridge (as it faced me) was about 0.5mm too high compared with the level of the LP spinning underneath it, so I cut a small piece of the copper, folded it to get the approximate thickness required and cut a bolt-hole through it, before inserting it between the cartridge and its mount.
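Out of curiosity, that by-eye estimate can be turned into an approximate azimuth angle with a little trigonometry. A rough sketch – the 12.7mm (half-inch) mounting width is an assumed figure for illustration, not something I measured:

```python
import math

cartridge_width_mm = 12.7  # assumed half-inch mounting width
offset_mm = 0.5            # the estimated height error on one side

# The tilt angle is the arctangent of rise over run
azimuth_error_deg = math.degrees(math.atan(offset_mm / cartridge_width_mm))
print(f"Approximate azimuth error: {azimuth_error_deg:.2f} degrees")
```

That works out at roughly 2.25 degrees – small in absolute terms, but apparently enough to audibly upset the stylus’ tracing of the groove walls.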
After doing up the bolts and re-checking the overall alignment, I started the same disc playing and listened. Much of the treble harshness had gone, and vocal sibilant distortion was down around 50% – much more listenable for some of the older discs in our collection. What’s more, newer/cleaner discs are sounding much more dynamic, and their overall soundstage much more focussed – much more digital, one might say. Bryan Ferry’s “Boys and Girls” was a particularly problematic recording, but was much improved in this evening’s round of listening tests. The same improvement was noted on playing a slightly ratty copy of Michael Jackson’s “Thriller” LP. We’re closer to hearing the music, and further away from hearing the equipment playing it. That’s a good thing in my book.
Next on my list is to trim the copper shim to make the installation invisible, but it certainly proved my little theory and proves a nice illustration for anyone else wanting to try the same thing.
While checking the tracking weight of the deck earlier this week I had the nagging feeling that the tonearm was not pivoting across the platter as easily as perhaps it could. Movement when cueing felt stiff, whether the cue control was lifted or not. I wondered if this was contributing to some of the sibilant distortion I’ve been hearing lately.
So I took the deck apart a second time today, to have a look at re-greasing the internal mechanisms with Lithium white grease as an experiment to see whether freeing up the mechanisms might help tame the subtle distortions I’ve been hearing. I also noted that the platter spindle didn’t want to continue spinning freely in its socket even when the drive belt was removed, so for good measure I removed the spindle and greased its socket.
After putting everything back together I rechecked the cartridge/stylus alignment and tracking weight. The latter was much easier to set, and confirmed that the arm moves much more freely on its pivot than it did. So as a test I’ve started playing the first side of Mike Oldfield’s “Five Miles Out”. It’s a very treble-heavy recording, but it is now much less distorted and demonstrates much more extension beyond 10KHz. It’s also worth noting that I’m hearing a lot less rumble in quieter moments on the disc – so it seems that greasing the spindle bearing/socket too was a success.
Interestingly, low-end rumble from the LP itself is much more obvious – different tones coming from different sides of the disc would indicate a mastering or manufacturing issue rather than motor-bearing noise being passed through.
I’ve also found that tracking with 2g force rather than the recommended 1.5g seems to get more control over the treble, and seems to improve overall dynamics, giving drums more immediacy and impact without dominating or smearing the mix.
Maybe in time I’ll do a complete stripdown for a proper cleanup and to get the old grease out before applying new. At least at this point in time I’ve confirmed the theory of what I wanted to achieve – for which I’m rather happy!
Those who have read my little review of the Tannoy Mercury M20 Golds might be aware I’ve inherited some other items that were originally purchased with them, sometime in 1985 we think. In this little article I’ll be explaining a little about our (immediately) beloved turntable – the Dual 505-2. By modern standards, if purchased new I would guess this record deck would compare with the likes of the Pro-Ject Debut Essential package or similar. Our example appears to still be fitted with its original Dual ULM165E cartridge and DN165E stylus. We have no idea what playing time the stylus has seen, nor whether it is indeed the original stylus or an after-market replacement.
Fault-finding and repair
Upon arrival the platter would not spin, and I had been warned of the need for replacement drive and pitch-adjustment belts. After some Google abuse in an attempt to find a service or owners’ manual for this unit, I found my way to the Vinyl Engine which had an English owner’s manual available.
Dual 505-2 disassembly/reassembly
After downloading and reading the owners’ manual, I then attempted to take the inner plinth out of the wooden frame. This isn’t as easy as it looks; so, armed with a Leatherman, Dutch courage and a glance at the owner’s manual, it goes something like this:
1. Lock the tonearm in place.
2. Remove the stylus and put it somewhere safe to save it getting damaged.
3. Remove the rubber turntable mat, and the platter.
4. Remove the plastic lid.
5. Ensure the transit screws are in the “playing” position, to give the suspension mounts full movement.
6. Turn the whole deck on its back (so it rests with the hinges against your work surface).
7. Slide the suspension spring bases out of their homes in the plastic base plate. The whole plinth assembly can then separate from the base. Note that the captive mains, signal and ground cables prevent the plinth separating completely from the base, without further work to release the cable entry glands.
Reassembly is essentially a reversal of steps 1–7.
Checking the tonearm, motor and microswitch interaction
Another thing I learned during my Google session to find the manual was that one of the most common faults with these decks is that the microswitch seizes up if the deck has not been used for a while. This is the switch that starts the motor spinning when the tonearm is moved into the playing position. The cure recommended in most online forum posts I found on the issue was simply to use a screwdriver to operate the switch enough times until it starts working again. If I recall correctly, the switch is on the underside of the plinth next to the motor, and has either a yellow or blue plastic cap that connects to the tonearm assembly via a system of levers that I could not easily work out. Within about 10 pushes the switch mechanism freed itself and the motor started spinning. With hindsight it was risky leaving the mains plugged in while I took everything apart, but it paid off and as it turns out there were no exposed terminals that a stray finger or screwdriver could have found. Phew.
As mentioned earlier, I had been warned of the need for a new drivebelt. It turned out that the belt itself was fine, but it needed a little gentle persuasion to realign it so it ran inside the speed selection mechanism. I tested the speed selector a couple of times to check that the belt stayed in the correct position, which it did. While I had the deck apart I also discovered that the toothed pitch adjustment belt had somehow snapped, so without any spare parts to hand I simply removed it and hoped for the best.
Function test, and adjust pitch
Having reassembled the deck I plunked down Suzanne Vega’s debut LP, and found both platter and cue mechanisms to work as designed. The result was quite stunning – the aged stylus and cartridge combination was working well enough with the phono stage of my NAD 302 amplifier to extract a remarkably pleasant sound from the disc, albeit at a slightly higher pitch than normal.
So, off came the platter and out came the Leatherman to attempt a quick-and-dirty adjustment of the pitch control pulley. Having worked out that the belt links the surface-mounted pitch control knob to the control pulley on the motor assembly, I could reason as to which way to turn the pulley to make the required adjustment. Using the narrow flat-blade screwdriver on the Leatherman, I turned the pulley through 90 degrees anti-clockwise by locking the blade in the teeth of the pulley and pushing gently in the right direction.
Trying the test LP again showed I’d adjusted too much – the song was now playing slightly slower than normal, so I went back and halved the difference. The LP now played at what I considered to be ‘normal speed’ (turns out I do have an intuitive sense of ‘perfect pitch’, though it helped to have a CD of the same album to compare, which did indeed synchronise perfectly both in terms of track length and perceived pitch/tempo).
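Since I had the CD to hand, there’s also a simple arithmetic check for speed error that avoids relying on pitch perception at all: time the same material on both sources and compare. The durations below are hypothetical, just to show the calculation:

```python
import math

# Hypothetical durations for the same album side on each source
cd_seconds = 20 * 60       # 20:00 on CD (the reference)
lp_seconds = 19 * 60 + 30  # 19:30 on the deck, i.e. running fast

# Speed error as a percentage, and the resulting pitch shift in semitones
speed_error_pct = (cd_seconds / lp_seconds - 1) * 100
pitch_shift_semitones = 12 * math.log2(cd_seconds / lp_seconds)

print(f"{speed_error_pct:+.2f}% fast, ~{pitch_shift_semitones:.2f} semitones sharp")
```

Even a half-minute discrepancy over a twenty-minute side amounts to over 2.5% – nearly half a semitone, which is well into “something sounds wrong” territory for most listeners.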
First impressions of sound quality
This is a subject for another post of its own, I’m sure. What immediately strikes me is that the sound quality on offer is surprisingly good, but there are no good words or phrases I can think of to describe how it differs from the same material played from CD or other known digital sources in my system. When the disc is clean and in good playing condition, and has itself been mastered and manufactured well, the soundstage is noticeably wider and deeper than my digital sources, and the overall presentation is simply more musical. It’s not that the vinyl source is ‘warmer’ or more detailed, as has become the stereotypical wording used by audiophiles writing in print or online when selling the plus-points of vinyl – just that the overall result is simply more pleasing to me. My wife confirmed this, noting that the vinyl feels more ‘real’, more as if the musicians are being presented in a space around and in front of us, compared with the digital sources forcing the soundstage to be artificially contained within our room.
Stay tuned for more on what work we carry out on this deck, and for some more in-depth reviews of what it enables us to enjoy!
Things still to do:
Install a replacement stylus in case I broke anything in transport or handling. It turns out the current stylus (and its cartridge) are replacements with only 50 hours playing time on them, but I’ve already ordered a Dual DN-165E replacement stylus from Stylus Plus, so at least I’ll have a spare.
Replace the pitch control belt – not because it’s needed in normal operation, but I feel it’s only fitting to bring this one back “to spec”.
Check alignment of the cartridge. When I first started using the deck, I noticed a significant amount of sibilant distortion when nearing the end of a side. This isn’t uncommon, but a quick alignment according to the “Stevenson Method” has made things noticeably better at the expense of slightly increased sibilant distortion at the beginning of each side.
Clean all our LP’s and replace inner sleeves.
Try the phono stage of our inherited NAD 3020B in place of the current NAD 302.
Based on the system EQ settings (and process) that inspired this blog entry a couple of weeks ago, I took some time to do some system measurements to see what kind of pink-noise response the system gives now it’s been tuned “by ear” in our building. This measurement session also set the groundwork for another blog entry concerning the spillover from stage monitors into the rest of our building.
I freely admit that these measurements were taken out of mere curiosity, without any specific question in mind or point to investigate. Perhaps for that very reason I find the measurements and their interpretation so interesting.
Theory of RTA measurement in sound systems
For true reproduction of a pink noise source, one would expect the 1/3-octave bars in the RTA display to all be at the same level – pink noise has the property of equal power per octave. This means there is as much power in the range from 20-40Hz as is found from 2000-4000Hz – hence the “flat” response that should be seen on a 1/3-octave RTA such as the iPhone app used for these measurements. Further reading on the theory of pink noise can be found in this wikipedia article on the subject.
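That equal-power-per-octave property is easy to verify numerically: pink noise has a power density proportional to 1/f, so the power in any band is the integral of 1/f across it, which depends only on the ratio of the band edges. A quick sketch:

```python
import math

def band_power(f_lo, f_hi, steps=100_000):
    """Numerically integrate a 1/f power density across a frequency band."""
    df = (f_hi - f_lo) / steps
    # Midpoint-rule sum of 1/f over the band
    return sum(1.0 / (f_lo + (i + 0.5) * df) for i in range(steps)) * df

low_octave = band_power(20, 40)       # the 20-40Hz octave
high_octave = band_power(2000, 4000)  # the 2000-4000Hz octave

# Both come out as ln(2) ~= 0.693 - equal power in every octave
print(low_octave, high_octave)
```

Any band whose upper edge is double its lower edge carries the same power – which is exactly why a 1/3-octave RTA fed with pink noise should draw a flat row of bars.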
A “perfect” sound system
I think the most commonly accepted definition of a “perfect” sound reinforcement or playback system is that it plays back exactly what is fed into it. This definition seems to hold true in most fields, at both consumer and professional levels. The thinking here is that if you play pink noise into the sound system, you should observe an exact replica of that original signal coming out of the speakers. Any tonal (frequency-based) deviation from that original signal will show up on a 1/3-octave RTA display as a peak or a trough relative to the neighbouring bars.
This is the basis on which EQ’ing/analysing with a pink-noise source is built – we know the source, and in theory we can therefore shape the frequency response of the system to get the height of the bars as even as possible. Having made suitable adjustments to the sound system EQ and speaker choice/positioning, we ought to be able to draw a horizontal line from the top of the lowest frequency bar to the top of the highest, with no one bar falling above or below that line.
Of course the reality is that many sound systems, even ones we would judge to be “excellent”, often fall quite short of this aim, due to a combination of physical system/room parameters and interactions, as well as the intentional “voicing” of a system by a human engineer/operator to flatter its usual source material. Humans often like to interpret things in this quite non-scientific way, and when it is done well it is considered an artistic addition to the system. When deviations are present due to the nature of the room or the system itself, or are due to an inappropriate “voicing” by a human engineer, these artefacts tend to be considered bad things.
On with the measurements!
Sound mix position
With all the above in mind, I thought it interesting to measure the output of our system when fed a pink noise source, after the system was EQ’d by ear to a series of test-tones, where I intentionally made a perceptive judgement of the apparent loudness of each tone relative to the others. Since these judgements were taken from the sound mix position, I thought this a good location from which to take my first reading, shown here:
Now, this graph is interesting. Well okay, *I* find it interesting, even if nobody else does! Overall, it suggests a reasonably even response from 20Hz up to around 2KHz; the response then falls off significantly at around 4KHz before climbing steadily back up to 16KHz. Above this point, either the sound system or the measurement mic seems to show a reduction in output. I’m not terribly concerned about anything above 16KHz because many people cannot hear much above this anyway, and those who can will tend not to be too bothered by a slight reduction in output here.
But what about that dip centered around 4KHz?
A good question. Human hearing is an odd thing, especially in buildings when listening to abstract tones that bear little relation to real music or human voices. Different frequencies have different perceived loudness, even if a sound pressure measurement system shows them to be played at the same level. The human ear tends to be most sensitive to mid-range frequencies, from around 1KHz up to around 5-6KHz. This range contains much of the intelligibility content of the human voice, so in some ways it makes sense that our ears are tuned to be most sensitive at such crucial frequencies.
BUT: here’s another thing – our hearing sensitivity changes depending on the volume of the sound we’re hearing and/or responding to. Typical speech levels tend to be in the 50-70dB range in normal conversation in a reasonably quiet room, so I chose 65dB(A) as reference sound pressure level for the test-tones as well as for the pink noise analysis, given that it’s towards the top end of the volume range most people find comfortable when listening to reinforced sound. Any louder and people feel like they’re being shouted at. Any quieter and the effects of tonal inconsistencies tend to be perceived as being less of an issue to the average person.
So what has happened here during my EQ session is that I’ve intentionally pulled a chunk out of the system’s sensitivity to these critical mid-frequencies based on the fact that they seemed so much louder to me than the others, despite their measured sound level being within 1 or 2 dB(A) of those I didn’t find so bothersome. On its own, this is usually a bad move for setting up live sound systems for either speech or music reinforcement, so I checked out the new EQ profile by playing some well-known and much-loved music through the system, and found that the EQ curve still worked. Comparing that to the same music played through a curve without the drop in the 2-8KHz frequency range made the music sound rather harsh and shouty, even brittle somehow. So despite a significant issue showing up on paper, I left the new curve in place on the basis that it sounds pretty reasonable with music and speech playback without any further work being required by the mix engineer to make that material sound good.
Phew – so far so good.
Front row position (seats closest to the stage/chancel)
With my curiosity still ablaze, I took another measurement of the system from the front row of our main downstairs space, using the same pink noise signal at the same input level, and leaving all the other controls alone. This will show up any difference between what is heard at the mix position and what is heard at the front row location.
First off, I note that the measured sound levels at this position are really not appreciably different to those experienced at the mix position – a difference of just 0.3dB between the 10-second measurement periods at each position. Let’s consider for a moment that most people (even many sound engineers) would struggle to reliably notice a 1dB difference between two signal levels. Essentially we’re saying that someone sat at the front of the venue shouldn’t hear any difference in level compared to the same person sitting at the back of the venue, at the mix position. That’s quite astonishing, and it speaks well of the sound system design that the coverage is so even.
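For context, converting that 0.3dB difference back into plain ratios shows just how small it really is – this is generic dB arithmetic, nothing specific to our system:

```python
level_diff_db = 0.3

# dB uses 20*log10 for pressure ratios and 10*log10 for power ratios,
# so inverting gives:
pressure_ratio = 10 ** (level_diff_db / 20)  # ~1.035, about 3.5% in pressure
power_ratio = 10 ** (level_diff_db / 10)     # ~1.072, about 7% in power
print(pressure_ratio, power_ratio)
```

A 3.5% change in sound pressure is well below the roughly-1dB threshold most listeners can reliably detect.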
The second thing I notice is about the content of that sound level. The output at frequencies below 2.5KHz shows to be generally slightly lower than measured at the mix position, but frequencies above 2.5KHz are played slightly higher than at the mix position. Thus, despite the overall levels being so evenly matched, from the measured differences one could expect the system will sound tonally quite different in the front row seats compared to the back.
From experience I can tell you: it really does! I now know that if I’m mixing for “hifi sparkle” (with lots of high-frequency detail) as heard at the mix position, the seats in front of me will receive a mix that is unpleasantly biased towards higher-frequencies, sounding more like the brittle and shouty system that the aforementioned measures-flat-on-a-meter EQ method gave us. Ugh.
The sound system as heard by the most important microphone feeding it
An interesting one this. I’ve often observed that I can hear a whole lot of the main sound system output at the main speaking position we use, and I’ve been telling less experienced readers/service leaders for years not to let this fool them into thinking they can speak more quietly and let the sound system do all the work.
Well now I finally have something approximating proof of what I thought I’ve been hearing for all these years:
Again, there are two things I would note from this graph. The first is that the overall output volume from the sound system is only approximately 1dB(A) lower at the microphone position than it would be at the back of the church. That’s almost no better than having the microphone directly in front of one or more of the speakers its output is fed into. Gain-before-feedback is a significant area of struggle in our building, and with this information I can see why.
Secondly, let’s think about the content of that sound. Again, the output is quite even up to around 3KHz, with less of a relative dip at 4KHz than in either of the measured positions where sound output from the system is desired. In this position we actually don’t want the sound system to be contributing any significant output – and in our current setup we have the very opposite of that desire! A “flat” system response to pink noise would result in a microphone that is already very sensitive to frequencies between 2KHz and 8KHz receiving a lot of its own sound from the system at those very frequencies – leading to vast quantities of feedback if the person speaking into it is delivering less than around 65dB. Sadly for us, many of our less experienced people using the lectern tend to deliver less than 60dB to the mic, so getting them sounding both loud enough and pleasant enough to listen to is a significant challenge.
I hope that rather than bemoaning an ongoing struggle I’ve actually contributed some useful thought and input to the subject that others can either learn from or correct me on. This has been a wonderful learning exercise for me, and I hope the findings can eventually lead to some significant improvements to our systems, our methods and eventually the sound that is heard in our venue.
Since being given a Nikon F60 SLR some twelve years ago or so, I’ve struggled with the standard viewfinders provided on modern consumer-grade SLR cameras. My annoyance was further amplified when I bought my factory-reconditioned D40 in 2008.
The problem shared by both these cameras is that the viewfinder is not terribly precise. It’s easy enough in most lighting situations to see whether the frame is generally in or out of focus, but to do so with any precision is challenging at best. Quite often I find my eye automatically compensates for poor focus by focussing on the subject “through” the screen itself. I do this without even thinking about it, despite whatever conscious effort I make to prevent it happening.
Specific to the D40 is the issue that the viewfinder is about 1.5x smaller than the F60 I had been used to, and certainly smaller than the more professionally-oriented devices Nikon currently offers. Not only is it smaller, but it’s also dimmer. Sure, I willingly bought the D40 knowing about these issues, but they’ve become more of a show-stopper as I’ve grown to move away from shiny new auto-focussing lenses to older manually-focussing units.
I did some research into possible solutions that didn’t involve buying a new camera body, and I soon found that like pretty much every other Nikon SLR out there, the focussing screen on the D40 is easily removable – presumably for cleaning. This also opens the way to replacing the existing screen with something more suited to the way I want to work – that is to use split-prism and/or microprism focussing rather than relying on the built-in autofocus system.
I won’t explain how split-prism focussing works, as it’s done so much better here.
WHAT I BOUGHT, AND HOW I INSTALLED IT
The screen I chose was a generic (no brand name) cheap unit found on eBay, but it’s like this one. Instructions were found online here, which gave me enough information to know what to expect. Lacking specialist tools, I washed my hands and used my index finger (and nail) to operate the latch that holds the screen in place, and used clean fingers to drop the shim and screen into place. The wrapping on the new screen was enough to place between the mirror and the latch to prevent scratching.
I’m not always the most practical of people, and yet this installation job was easily done in less than 20mins including making some tea and reading the manuals – and I’ve never done anything like this before!
The concept proved itself very rapidly in ‘everyday’ snapshot situations. But something was wrong: when I manually focussed using the new screen, the image turned out with the actual focal point being shifted a little behind what I’d focussed on. Pants.
Given that my lenses hadn’t changed, and given the assurance I’ve had from several sources that the choice (or even presence) of screen does not interfere with the camera’s built-in autofocus sensors, there was only one answer: the screen was out of alignment.
Back to Google and Flickr forums I went, and found that this is a common enough issue experienced when installing and using custom screens – particularly cheap ones like mine. Whether this problem reflects the quality of the cheaper screens, or whether it’s caused by the quality of calibration carried out in the manufacture of the camera itself, I cannot tell.
Before going any further, I found and printed a focussing test chart (more on the issues that brought up here!) and took some autofocus-assisted shots of it with the kit lens. First with the new screen, then with the old, and finally even without a screen. This confirmed what I had read, namely that the autofocus works as it should. Good – just the manual focussing to sort then.
The research I’d previously carried out into the problem suggested using some special tools to recalibrate the mirror, so it naturally rests in a different place. In this camera this would inevitably mess up the autofocus system too, which I felt would be too complex a job to realign with the tools and time I had at my disposal. There had to be a simpler solution!
So I gave the issue some thought, and basically I cheated. I figured that if moving the mirror in relation to the screen can solve the problem, then why not move the screen in relation to the mirror instead? If this can be done without putting too much pressure on the screen assembly (damaging either the screen or the camera in the process) then it has to be worth a shot – right?
REFINING THE INSTALLATION
I pulled the screen and shim back out of the camera, and first decided to try installing the new screen without the metal shim. This moved the screen further away from the mirror. I took some test shots, and soon saw that the back-focus problem had gotten worse. Okay – if that was the case, then perhaps I needed to effectively thicken the shim instead, to move the screen closer to the mirror?
Buying a second shim to test this theory seemed like too much work, not to mention that this would mean throwing yet more time and money at the problem, with no guarantee of a workable solution. What if the spendy new shim added too much thickness?
So instead I thought to try sticking some appropriately-sized paper strips to the shim to thicken it slightly. Five minutes with some Pritt-Stick, scissors and an old till-receipt was enough to prepare the existing shim for testing. I reinstalled the shim and screen, and took several hundred (!) test shots. The situation was now bearable but the back-focussing problem was still present enough to be annoying.
Back to the cutting-board I went, this time removing the paper strips and replacing them with similar-sized strips cut from a used rail-ticket, as that was about double the (estimated) thickness of the original paper strips. Pritt-Stick, scissors and patience were still the tools of choice. Eventually I put it all back together and “Hurrah!” – manual focussing with the split-prism now works even more accurately than auto-focus. Problem solved!
Despite, and perhaps even because of, the work I’ve had to put in to get things calibrated properly I’m now very pleased with the end result. I now have a camera that still works very well with autofocussing lenses, but also has the capability to give me even sharper results with manual focussing.
I would heartily recommend such a screen upgrade to anyone who is struggling with manual-focussing their modern DSLR rig, but I would perhaps suggest that they buy an official screen either from the camera manufacturer (Nikon, in my case) or a much more trusted third-party manufacturer such as the Katz-Eye products linked earlier in the article.
With all of that said, I cannot vouch for either of these more expensive solutions as I have not tried them – so as always your mileage may vary.