Opinion: Marriott & others wishing to block “personal” WiFi hotspots

Even though I’m firmly based in the UK, I’m somewhat concerned to see plans by the Marriott chain, among others in the US, to apply to the FCC for permission to block “personal WiFi hotspots” in certain corporate areas of their facilities. I’m not going to go into the technicalities of how this might be done, but some articles from The Register give some (perhaps slightly biased) insight:

In theory, and in their defence, I can see why it might be desirable to the hotel chains and their customers. Nobody paying for a Wifi service wants it to be interrupted, and so anything that can be done to preserve it in a critical area is a Good Thing [tm]. The problem here is both in the message it sends, and also in the people it unintentionally affects, especially if there’s little or no definition that separates a “personal hotspot” (someone tethering their mobile phone to their iPad to look up Twitter feeds or similar) from a functional wifi network that a contractor might have legitimate reasons to bring along as a tool to assist them in what they’re being paid to do in that same space. Let me explain by setting a scene:

Imagine that you’re a sound or AV contractor, brought in to support an event in one of these corporate spaces. These industries being what they are now, you likely have one or more devices in your arsenal that either require or are very much more useful with Wifi, either for Internet connectivity (relatively rare still, in my experience), or for direct on-site control with a remote control application running on something like an iPad or Android tablet. And there are likely more on the way.

Such systems being what they are, you likely find with experience, as I have, that the systems only work well together when configured with one specific brand and even model of wifi access point/router, so you cable that into your rack so that wherever you go, your wifi network name and IP range is always the same, and all you perhaps need to do is change the channel to better fit those around you so you (and others) get the most reliable connection and best available throughput. 

So you rock up at one of the sites where such blocking is in place, and everything stops working, so you grab the attention of a member of staff at the hotel to see what they can do. They point you to their on-site tech-support person – assuming, best case, that they have one, and that this person is knowledgeable and sympathetic to your cause…

“Sorry guys, I know that this is a business device,” says the harassed-looking tech-support person, “but you’ve been caught out because it’s being seen as a ‘personal wifi hotspot’. There’s nothing we can do. You’ll either need to connect everything to our network, which is THIS brand, or get your hardware control surface and some cables out of the truck and use that instead.”

“But your brand doesn’t play nicely with my kit, even though it’s all supposed to be standards-compliant. Don’t ask me why, it just isn’t. I don’t/can’t carry that extra 60lbs/30Kg of kit, I don’t need it anywhere else. Soooo, can’t you just…”

“Nope. Sorry, rules are rules.”

Now sure, we could waste time arguing that relying on wifi, venue-provided or not, for any event-critical functions is asking for trouble. Certainly, I find there’s nothing quite like a piece of physical copper or fibre cable running between my devices for reliability. However we look at it, until the wider industry accepts that using over-saturated and effectively consumer-grade wifi is a no-go, and creates a separate radio band and maybe even a licensed protocol (much like the UK’s PMSE system) away from “consumers” to increase reliability and available bandwidth, we’re stuck with what we have – the often all-too-sucky wifi – for the foreseeable future.

Others might argue “why not just have hotels and other venues make provisions?” Okay, good start, but two points on this:

1) how much provision should they make, and how should they do it? And how do we even define and hold them to it?

2) as the owner and operator of such sound and AV kit, would you trust your show/event to someone else’s network and all the risks that can entail? I personally choose not to, not least because I don’t want to have to configure numerous devices to talk to each other over someone else’s IP schema and to inadvertently be bandwidth-restricted by someone else’s careless, callous or plain unwitting action(s) at a crucial moment. 

Let’s be clear on why this stuff is important: when I’m working on-site, both my own and my client’s reputations are at stake, so in showtime anything outside mine or my client’s direct control is a risk we’d rather not be responsible for without good clear disclaimers covering our backsides. And even then, potential reputation damage means we’d want to steer clear of that kind of risk anyways. Just like taking life and car insurance doesn’t lead a responsible person to drive everywhere 30mph faster than they otherwise would, simply because they know they’re covered if someone else runs out of talent on a critical apex and totals both cars and occupants.

So the message currently being sent seems to be “we want our clients to pay for everything, and to not have that paid service (and therefore income stream) interrupted, and consequences be damned”. Perhaps the finer details have indeed been thought through, but if nobody asks the question (I have), how do we know?

Technology is creating some amazing solutions in live events, that have real potential to not only make lives easier for those of us doing real work there, but also to save a lot of real mass being lugged in vehicles the world over when much smaller and lighter solutions can be deployed. It would be a shame to have to put the brakes on that because a few shareholders of a few large chain hotels end up changing the landscape by way of “thought leadership”, and this silly idea ends up spreading.

And besides… When was the last time anyone connected to a public wifi service actually suffered because of other wifi networks popping up around the place? It’s actually never yet happened to me, perhaps because as an events-tech I never trust my shows and equipment to such networks, precisely because I roll my own so I know everything plays nicely together. But what are the other modes of failure I have seen?

  • Run out of IP addresses in the DHCP pool? Sure! I even managed to do it myself to a bunch of users when I underestimated the usage that one of my own networks would actually get. Easily fixed, because I use my own network, so I saw it happen and fixed the problem immediately on-site. How many other sites can say the same, even at, say, Starbucks or similar?
  • Managed WiFi zones that can’t smoothly hand over a device from one base station to another? Plenty!
    • Granted this can be tricky to manage well, because different client devices respond differently to the management methods implemented by different zone managers. I have experience of trying to do this well across numerous platforms and have yet to find a true one-size-fits-all solution, especially one that doesn’t interrupt a significant group of client devices in some way.
  • Run out of bandwidth for the number of connected users? Yup – this is a biggie, and it happens nearly every time.
    • In the real world, I’ve given up using public wifi (paid or not), simply because the majority of the times I use it, the bandwidth available simply isn’t enough to provide for the number of client devices (and their users) connected to it. So instead I revert to tethering to 3G or 4G mobile/cellphone networks instead.
    • Even abroad, my costings from traveling around the US in 2013 suggest tethering is about 10x cheaper than equivalent WiFi costs, and 10x more reliable; not to mention that I can take the cheaper mobile/cell service with me wherever I go – the WiFi only works within the confines of the hotel or campus. The same held true in Italy in 2012. Uuuuuh, no-brainer then.

A key thing to note here is that in none of those highlighted cases am I even considering connecting valuable event-critical tools to such networks for mission-critical tasks – here I’m only talking about personal “holiday” usage; finding out more about the immediate world around me, mailing the odd photo to friends and family, checking email to keep on top of bills and any big family news.

So please, if you’re a hotel chain considering this kind of plan, or an IT provider for a similar corporation planning such an exercise, don’t even talk to me about this kind of revenue-generating exercise in the world of WiFi until you get your ducks in a row on these and other much simpler provisioning issues, okay?  If I’m going to pay 10x for a service that I can get elsewhere and carry with me wherever I go, then you’d better make it worth my while. And a big hint here is that you don’t do that by blocking me from using the 0.1x-cost (vs yours) service that works. You do it by making your service WORTH 10x the cost of the other one. If you can’t, then perhaps something else is wrong, and it’s time to reassess the cost-benefit analysis.

Acoustics Experiment 2 – Main sound system overview

Based on the system EQ settings (and process) that inspired this blog entry a couple of weeks ago, I took some time to do some system measurements to see what kind of pink-noise response the system gives now it’s been tuned “by ear” in our building.  This measurement session also set the groundwork for another blog entry concerning the spillover from stage monitors into the rest of our building.

I freely admit that these measurements were taken out of mere curiosity, without any specific question in mind or any specific point to investigate.  Perhaps that is exactly why I find the measurements and their interpretation so interesting.

Theory of RTA measurement in sound systems

For true reproduction of a pink noise source, one would expect the 1/3-octave bars in the RTA display to all sit at the same level – pink noise has the property of equal power per octave.  This means there is as much power in the range from 20-40Hz as is found from 2000-4000Hz – hence the “flat” response that should be seen on a 1/3-octave RTA such as the iPhone app I used for these measurements.  Further reading on the theory of pink noise can be found in this wikipedia article on the subject.
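
For the curious, the equal-power-per-octave property is easy to check numerically. The sketch below (Python, assuming NumPy is available; the sample rate and length are arbitrary choices) synthesises 1/f-shaped noise and sums its power in successive octave bands:

```python
import numpy as np

# Sketch: synthesise 1/f ("pink") noise by shaping white noise in the
# frequency domain, then check that successive octave bands carry roughly
# equal power. Sample rate and length are arbitrary choices.
rng = np.random.default_rng(0)
fs, n = 48000, 2**20

white = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
shaping = np.zeros_like(freqs)
shaping[1:] = 1 / np.sqrt(freqs[1:])        # power ~ 1/f => amplitude ~ 1/sqrt(f)
pink = np.fft.irfft(white * shaping)

psd = np.abs(np.fft.rfft(pink)) ** 2
edges = [20 * 2 ** k for k in range(10)]    # octave edges: 20, 40, ... 10240 Hz
for lo, hi in zip(edges[:-1], edges[1:]):
    band = 10 * np.log10(psd[(freqs >= lo) & (freqs < hi)].sum())
    print(f"{lo:>5}-{hi:<5} Hz: {band:6.1f} dB")
```

Because the power density falls as 1/f, integrating over any octave [f, 2f] gives the same total – which is why the RTA bars should sit level for pink noise.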

A “perfect” sound system

I think the most commonly accepted definition of a “perfect” sound reinforcement or playback system is that it plays back exactly what is fed into it.  This definition seems to hold true in most fields, at both consumer and professional levels.  The thinking here is that if you play pink noise into the sound system, you should observe an exact replica of that original signal coming out of the speakers.  Any tonal (frequency-based) deviation from that original signal will show up on a 1/3-octave RTA display as a peak or a trough relative to the neighbouring bars.

This is the basis on which EQ’ing/analysing with a pink-noise source is built – we know the source, and in theory we can therefore shape the frequency response of the system to get the height of the bars as even as possible.  Having made suitable adjustments to the sound system EQ and speaker choice/positioning, we ought to be able to draw a horizontal line from the top of the lowest frequency bar to the top of the highest, with no one bar falling above or below that line.
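
As a rough illustration of what the RTA is doing under the hood, here is a minimal 1/3-octave analyser in Python (NumPy assumed). Real analysers use properly specified band filters (IEC 61260); this sketch simply bins FFT power around the standard centre frequencies:

```python
import numpy as np

def third_octave_rta(signal, fs):
    """Very rough 1/3-octave RTA: bin FFT power around the ISO centre
    frequencies. A sketch only -- real analysers use defined band
    filters (IEC 61260), not rectangular FFT bins."""
    centres = [1000 * 2 ** (k / 3) for k in range(-17, 14)]  # ~20Hz .. ~20KHz
    psd = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    levels = []
    for fc in centres:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # band edges, 1/3 octave wide
        band = psd[(freqs >= lo) & (freqs < hi)].sum()
        levels.append(10 * np.log10(band + 1e-12))     # dB; small floor avoids log(0)
    return centres, levels

# A 1KHz sine should light up the 1KHz bar and little else:
fs = 48000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
centres, levels = third_octave_rta(tone, fs)
peak = centres[int(np.argmax(levels))]
print(f"loudest band centred at {peak:.0f} Hz")
```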

Of course, the reality is that many sound systems, even ones we would judge to be “excellent”, often fall quite short of this aim, due to a combination of physical system/room parameters and interactions, as well as the intentional “voicing” of a system by a human engineer/operator to flatter its usual source material.  Humans often like to interpret things in this quite non-scientific way, and when this is done well it is considered an artistic addition to the system.  When such deviations are present due to the nature of the room or the system itself, or due to an inappropriate “voicing” by a human engineer, these artefacts tend to be considered bad things.

On with the measurements!

Sound mix position

With all the above in mind, I thought it interesting to measure the output of our system when fed a pink noise source, after the system was EQ’d by ear to a series of test-tones, where I intentionally made a perceptive judgement of the apparent loudness of each tone relative to the others.  Since these judgements were taken from the sound mix position, I thought this a good location from which to take my first reading, shown here:

Sound mix position (802/302 only)

Now, this graph is interesting. Well okay, *I* find it interesting, even if nobody else does!  Overall, it suggests a reasonably even response from 20Hz up to around 2KHz; the response then falls off significantly around 4KHz before climbing steadily back up to 16KHz.  Above this point, either the sound system or the measurement mic seems to show a reduction in output.  I’m not terribly concerned about anything above 16KHz because many people cannot hear much above this anyway, and those who can tend not to be too bothered by a slight reduction in output here.

But what about that dip centered around 4KHz?

A good question.  Human hearing is an odd thing, especially in buildings, when listening to abstract tones that bear little relation to real music or human voices.  Different frequencies have different perceived loudness, even when a sound pressure measurement system shows them to be played at the same level.  The human ear tends to be most sensitive to mid-range frequencies, from around 1KHz up to around 5-6KHz.  This range contains much of the intelligibility content of the human voice, so in some ways it makes sense that our ears are tuned to be most sensitive at such crucial frequencies.

BUT:  here’s another thing – our hearing sensitivity changes depending on the volume of the sound we’re hearing and/or responding to.  Typical speech levels tend to be in the 50-70dB range in normal conversation in a reasonably quiet room, so I chose 65dB(A) as reference sound pressure level for the test-tones as well as for the pink noise analysis, given that it’s towards the top end of the volume range most people find comfortable when listening to reinforced sound.  Any louder and people feel like they’re being shouted at.  Any quieter and the effects of tonal inconsistencies tend to be perceived as being less of an issue to the average person.
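
The “A” in dB(A) refers to the standard A-weighting curve, which approximates exactly this frequency-dependent sensitivity at moderate listening levels. For reference, the analytic form from IEC 61672 can be computed directly; a Python sketch:

```python
import math

# A sketch of the standard A-weighting curve (analytic form from IEC 61672),
# which models the ear's reduced sensitivity at low and very high frequencies
# relative to 1KHz.
def a_weighting_db(f):
    ra = (12194 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194 ** 2)
    )
    return 20 * math.log10(ra) + 2.00  # normalised so 1KHz reads ~0dB

for f in (63, 250, 1000, 4000, 16000):
    print(f"{f:>5} Hz: {a_weighting_db(f):+6.1f} dB")
```

Note the strong roll-off at low frequencies and the slight boost in the upper mids – a rough mirror of the sensitivity curve described above.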

So what has happened here during my EQ session is that I’ve intentionally pulled a chunk out of the system’s sensitivity to these critical mid-frequencies based on the fact that they seemed so much louder to me than the others, despite their measured sound level being within 1 or 2 dB(A) of those I didn’t find so bothersome.  On its own, this is usually a bad move for setting up live sound systems for either speech or music reinforcement, so I checked out the new EQ profile by playing some well-known and much-loved music through the system, and found that the EQ curve still worked.  Comparing that to the same music played through a curve without the drop in the 2-8KHz frequency range made the music sound rather harsh and shouty, even brittle somehow.  So despite a significant issue showing up on paper, I left the new curve in place on the basis that it sounds pretty reasonable with music and speech playback without any further work being required by the mix engineer to make that material sound good.

Phew – so far so good.

Front row position (seats closest to the stage/chancel)

With my curiosity still ablaze, I took another measurement of the system from the front row of our main downstairs space, using the same pink noise signal at the same input level, and leaving all other controls alone.  This will show up any difference between what is heard at the mix position and what is heard at the front row location.

First off, I note that the measured sound levels at this position are really not appreciably different from those at the mix position – a 0.3dB difference between the 10-second measurement periods at each position.  Let’s consider for a moment that most people (even many sound engineers) would struggle to reliably notice a 1dB difference between two signal levels.  Essentially we’re saying that someone sat at the front of the venue shouldn’t hear a signal level difference compared to the same person sitting at the back of the venue, at the mix position.  That’s quite astonishing, and speaks well of the sound system design that the coverage is so even.
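
To put that 0.3dB in perspective: SPL is a logarithmic measure of sound pressure, so a level difference in dB corresponds to a pressure ratio of 10^(dB/20). A quick conversion (Python sketch) shows just how small these differences are physically:

```python
import math

# SPL differences map to sound-pressure ratios via 10^(dB/20); this makes
# plain how tiny a 0.3dB difference is in physical terms.
def db_to_pressure_ratio(db):
    return 10 ** (db / 20)

for db in (0.3, 1.0, 3.0, 6.0):
    print(f"{db:4.1f} dB -> pressure ratio {db_to_pressure_ratio(db):.3f}")
```

0.3dB works out to roughly a 3.5% change in sound pressure – well below what most listeners can reliably detect.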

The second thing I notice is about the content of that sound level.  The output at frequencies below 2.5KHz is generally slightly lower than measured at the mix position, while frequencies above 2.5KHz play back slightly louder.  Thus, despite the overall levels being so evenly matched, from the measured differences one could expect the system to sound tonally quite different in the front row seats compared to the back.

From experience I can tell you: it really does!  I now know that if I’m mixing for “hifi sparkle” (with lots of high-frequency detail) as heard at the mix position, the seats in front of me will receive a mix that is unpleasantly biased towards higher-frequencies, sounding more like the brittle and shouty system that the aforementioned measures-flat-on-a-meter EQ method gave us.  Ugh.

Centre aisle, front row level (802/302 only)

The sound system as heard by the most important microphone feeding it

An interesting one this.  I’ve often observed that I can hear a whole lot of the main sound system output at the main speaking position we use, and I’ve been telling less experienced readers/service leaders for years not to let this fool them into thinking they can speak more quietly and let the sound system do all the work.

Well now I finally have something approximating proof of what I thought I’ve been hearing for all these years:

Central Aisle, Lectern mic position (802/302 only)

Again, there are two things I would note from this graph.  The first is that the overall output volume from the sound system at the microphone position is only approximately 1dB(A) down from what we would measure at the back of the church.  It’s almost no better than having the microphone directly in front of one or more of the speakers its output is fed into.  Gain-before-feedback is a significant area of struggle in our building, and with this information I can see why.

Secondly, let’s think about the content of that sound.  Again, the output is quite even up to around 3KHz, with less of a relative dip at 4KHz than in either of the measured positions where sound output from the system is desired.  In this position we actually don’t want the sound system to be contributing any significant output – and in our current setup we have the very opposite of that desire!  A “flat” system response to pink noise would result in a microphone that is already very sensitive to frequencies between 2KHz and 8KHz receiving a lot of its own sound from the system at those very frequencies – leading to vast quantities of feedback if the person speaking into it is delivering less than around 65dB.  Sadly for us, many of our less experienced people using the lectern tend to deliver less than 60dB to the mic, so getting them sounding both loud enough and pleasant enough to listen to is a significant challenge.

Signing off…

I hope that rather than bemoaning an ongoing struggle I’ve actually contributed some useful thought and input to the subject that others can either learn from or correct me on.  This has been a wonderful learning exercise for me, and I hope the findings can eventually lead to some significant improvements to our systems, our methods and eventually the sound that is heard in our venue.

Another reason to not mix on headphones…

I got curious earlier today and had a listen to some test-tones on some Sennheiser HD25-1 headphones.  Like you do.

These are the headphones that we supply with all our PA systems for troubleshooting and for quickly checking that recordings are happening and that their signals are as clean as can be.  They sound quite flattering when listening in “hifi” or “walkman” situations and I’ve always liked them for this, especially where significant isolation is required from external sound sources.  Trouble is, it seems they don’t do so well in the very high frequencies, which I noticed this morning while listening to the 30th Anniversary remaster/re-recording of Oxygene. I thought it was just my ears or setup, but the sparkle I remembered from the same recording played on my home “hifi” was all but gone.  That’s what got me doing a sweep with some test-tones, and that’s what had me doing a quick Google search, which turned up this plot, which very much explains what I thought I was hearing.

Frequency response for the Sennheiser HD25-1, shown in green. Frequency is shown in log scale. Note the peak between 8 and 10KHz, and the falloff thereafter.

In live mixing, much of the “air” of a vocal or instrument will be up past 10KHz, and if the headphones are audibly struggling at 13KHz, it’s likely they’ll not be doing what they should be even as low as 10KHz.  If these are used for monitoring, one would likely think everything sounds a bit dull and crank up the HF from 10KHz upwards on pretty much every channel.  And I regularly see our guys mixing with our headphones as a reference doing exactly that, taking the paint off the walls in the room in the process.  My HD25SPs have a similar response, and it explains why anything I mix on headphones tends to end up sounding far too bright when played on other systems.

One answer might be “get better headphones” – all well and good, but these are absolutely no substitute for listening to, and mixing for, the room you’re actually standing/sitting in, rather than mixing for the silent space between your own ears.  Everybody hears the room; only you hear your cans.

“The Faders” Cartoon on ProSoundWeb

Found this earlier today while doing some research for a big project.  It’s sad it made me laugh, but thought I’d share anyway…

Acoustics Experiment 1 – Choir Foldback

I’ve known our main building to be tricky from a mixing/acoustics perspective, even if the usual ‘reverb tail’ people associate with difficult acoustics is actually quite short at just under 2 seconds (based on measurements taken by a Bose dealer some years ago).

I’ve long thought that this building is unusual in that there’s really not much sound absorption or diffusion going on between the ‘stage’/chancel area and the congregation/audience areas.

Our sound mix position is now at the back of the church, tucked in just in front and to the side of the main doors, so it’s about 15m from the action on the chancel. I’ve noted that singers sound better from the mix position when they’re running without monitors, so I did some quick-and-dirty measurements with pink noise generated by the desk and my iPhone to see what was going on.

I set up the choir monitors in their usual position relative to the choir, and took these measurements:

Mid-field (front row of choir, as heard by choir member)
Sound mix position

Now, the boxes are small Ramsa single-driver boxes mounted on mic stands (without booms) to get them up off the floor and closer to the ears that need to hear them. They have some EQ and HPF applied to roll off from 150Hz down (4-inch drivers tend to struggle with low frequencies at reasonable volume!) and to gently shape their sound so that they punch above the ambient stage-noise without causing feedback issues. Typical output levels would be expected to be between 60 and 85dB – they top out at around 95dB (@1m in their intended location) whatever their specs might suggest.

What the measurements show is that their on-axis response is far from linear, though in use they’re just about ‘good enough’ for what we need. Their off-axis response is much reduced at higher frequencies. Over the 15m distance between them (turned away from the mix position, towards the choir) and our mix position, we’re only losing around 7dB overall sound pressure level (SPL, A-weighted). That seems rather less than I would like. Of that loss, most is from 3KHz upwards – we lose much less at lower frequencies, which I believe explains the boxiness they add to the vocal sound when driven alongside our main sound system.
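
For comparison, pure inverse-square (free-field) spreading would predict a much larger drop over that distance. The sketch below assumes the boxes behave as point sources referenced at 1m – which certainly won’t strictly hold indoors – and the gap between the prediction and the ~7dB measured hints at how much the room itself is sustaining the level:

```python
import math

# Under an idealised point-source, free-field (no reflections) model, SPL
# falls by 20*log10(distance ratio). The 1m reference is an assumption here.
def free_field_loss_db(d_near, d_far):
    return 20 * math.log10(d_far / d_near)

print(f"Predicted loss over 15m: {free_field_loss_db(1, 15):.1f} dB")  # ~23.5 dB
```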

It’s worth noting too that the RTA app has picked up a lot of energy below 100Hz, which I was neither hearing nor expecting from such small boxes set up the way they were. I’d suggest this inaccuracy is built into the design of the measuring device and the software algorithms, as well as possible noise due to hand-held operation.

Maybe my theory is out of whack, but above 100Hz or at higher SPLs the measurements certainly tally with my experience.

Anyone else done similar measurements of their systems/buildings that I could compare this to?

Note to self: Reverb in live sound mixing

Always keep the reverb send muted until singers actually start singing. This would save embarrassing mistakes when someone unexpectedly introduces the song to their audience rather than just getting on and playing it! ;o)

You don't want your singers to sound like they're doing this when they talk.

Interesting “quick” sound system EQ tip #1

Last Friday evening I found myself again having to EQ our main sound system at work, due to a combination of what I believe to be environmental factors and physical changes to the speaker setup, namely the replacement of some faulty speaker drivers in a bass cabinet that needed taking into account – itself a blog subject for another time I’m sure!

My usual experience with setting system EQ has usually been centred around one of two things:

  1. Making the system sound as good as possible with CD-sourced playback material, in the hope that this will provide a known starting point for the sound of any mix we create on said system during a live event, or…
  2. …putting a key microphone into its usual position (such as a lectern for a church) and having someone speak into it, making their voice sound as natural as possible (without resorting to desk EQ beyond a simple high-pass-filter). Once this is done I’d then slowly turn up the gain for that microphone (keeping the fader level constant) and using some form of EQ to pull out any frequency bands that feed back.

Both methods have been “good enough” for rock-n-rolling into a venue and making something more than reasonable come out of the speakers, but neither method is terribly scientific, nor does it lead to consistent results.

More recently I’ve been playing with using pink noise and an RTA to show me what the system’s doing, then EQ’ing the system so that pink noise played out of it and measured with a flat-response microphone is shown on the RTA as being as close to the original pink noise as I can get.  This has led to more consistent results than either of my previous methods, and has cut down the time spent on the task by something like 50-70%, but the resulting system sound is still somewhat variable, to say the least.

So at a pinch on Friday evening, I happened upon what seemed to be a better method, and one I’ve not tried since my earliest days of sound mixing/system engineering:

  1. Make a CD with test-tones at a fixed level (usually -20dB), centered at the typical frequencies found on the faders of a 31-band (1/3 octave) graphic equaliser.
  2. Starting with 40Hz (the lowest audible frequency in most mixes/systems I deal with), get the tone playing through the system at about 65dB on a typical SPL meter.  A or C weighting doesn’t matter at this point – what does matter is that I get it set in my mind how “loud” that tone sounds/feels.
  3. Then go to the next tone up.
  4. Is this playing at the same perceived level as the 40Hz tone that preceded it?
  5. If yes, move on to the next tone.
  6. If no, then set the system EQ (I had both parametric and graphics to hand) to compensate.  Keep comparing and adjusting until the level sounds comparable.
  7. Repeat steps 3-6 until all frequencies that you can hear, either due to the system itself or your hearing (!), are pretty much perceptively even.
Note 1:  If you have a parametric EQ, and you find that frequencies progressively become more or less prominent than those preceding them, you can set an EQ curve centered at the point where the smallest difference occurs between adjoining tones, boosting or cutting accordingly.  The width of the filter is roughly defined by the number of tones you find to be different.  It’s hard to explain in text, but becomes more obvious the more you play with the EQ parameters.  Using a parametric EQ here gives more precision and control over what you do to the signal, without the distortions of graphic EQ, which essentially is a chain of 31 or more audio filters run in series.
Note 2:  This is best done as an iterative process, so it’s worth playing through the test tones up and down the scale and adjusting until you feel you can’t make any more positive adjustments.
Note 3:  On our system, I was able to accurately do this up to around 16KHz, as with the combination of my hearing and our system I wasn’t able to discern anything beyond around 17KHz.  Not bad for a tired near-30-year-old engineer, working late at night on a combination of Bose 802/402/302’s!
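
For anyone wanting to try this, step 1 can be automated. The sketch below (Python, standard library only; the filename, tone length and gap are arbitrary choices for illustration) renders sine tones at the ISO 1/3-octave centre frequencies from roughly 40Hz to 16KHz at -20dBFS into a mono WAV file, ready for burning to CD:

```python
import math
import struct
import wave

# Sketch of step 1: render sine test tones at the ISO 1/3-octave centre
# frequencies (~40Hz to 16KHz) at -20dBFS into a mono 16-bit WAV, ready
# for burning to CD. Filename, tone length and gap are arbitrary choices.
fs = 44100
tones = [1000 * 2 ** (k / 3) for k in range(-14, 13)]  # ~40Hz .. 16KHz
amp = 10 ** (-20 / 20)                                 # -20dBFS
dur, gap = 1.0, 0.25                                   # seconds of tone / silence

frames = bytearray()
for f in tones:
    for i in range(int(fs * dur)):
        s = amp * math.sin(2 * math.pi * f * i / fs)
        frames += struct.pack('<h', int(s * 32767))
    frames += b'\x00\x00' * int(fs * gap)              # short silence between tones

with wave.open('test_tones.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(fs)
    w.writeframes(bytes(frames))
```

In practice you would probably want longer tones and gaps than sketched here, to leave time to listen and adjust between steps.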

Having applied this method to our system I played a couple of favourite songs through it from my laptop, which has a pretty good quality sound output (equivalent to most “hifi” grade CD players when fed with CD-quality content), and the system sounded immediately more musical, more involving and less “PA-like”.

Out of interest, I then measured the pink-noise response of our system. What I found was that my system curve had a notable reduction in the 2-6KHz range compared to what would be obtained using the pink-noise method above, which many engineers might see as a significant disadvantage.

I then had a couple of our other engineers use the system in live services with this new system EQ curve, and their feedback was that the system sounded so much better than they’ve been used to.  They were making much more subtle (and arguably more accurate) changes to desk channel EQ for both speech and music, and the usual issues we have with feedback or tinny-sounding speech microphones were much reduced.

On reflection, I wonder whether part of the success story here is that my chosen reference level of 65dB (SPL, A-weighted) is pretty close to someone talking passionately to another in a quiet lounge – and given that the sensitivity of human hearing at specific frequencies changes depending on the overall sound level, this coincides quite nicely with our main material, speech reinforcement with some louder music that doesn’t often get much louder than 90dB.

I’m sure I’ve done many things wrong by working this way, but it was quick, easy and seems to have worked out well for us – our engineers are happier working with the system set up this way than they have been for a long, long time.  I’m sure I’ve missed a few crucial things out in my explanation here, so I might re-visit the topic in the future.  But meanwhile I hope this stands as a demonstration of another way of using EQ to get more out of your sound system.

As always, your mileage may vary – and your needs might be very different to ours!