The concept of surround sound and multiple-channel audio has been...well...surrounding us since the 1940s, with Disney’s Fantasia. And now we have the technology, so here we are. And this is the story with rock concerts too. We’ve been up close and personal with the sound setups at our share of performances, and for all their sophistication, they really just boil down to “put speakers in room, and turn up the volume”.
But a few years ago — somewhere near Mark Knopfler’s performance in Mumbai — in a casual conversation, someone brought up this crazy idea: instead of using ten big speakers and hoping that they’re loud enough to cover the entire auditorium, why not just use a hundred smaller speakers all around the auditorium? This way, you don’t wake the neighbors, and you give everyone — even the cheapskates in the back — a better overall experience. Win-win. It’s highly possible that this someone threw the idea at the scientists at the Fraunhofer Institute, because at CeBIT 2007, we saw it in action.
In its physical form, Fraunhofer’s IOSONO technology is exactly what we just described: a continuous band of small speakers running along all four walls. The result, however, is the most immersive experience we’ve ever had — more immersive than any of the expensive tech that’s passed through these offices. Also — and this is more significant than one would think — we could walk right up to the speakers and not have our ears blown off. If you’ve ever stood next to the speakers at a club or concert, you’ll appreciate this.
IOSONO’s secret is the big sweet spot — unlike with traditional sound setups, you get the best audio experience all over the auditorium, rather than the tiny, center-of-the-hall sweet spot we hunt for all the time.
Perhaps we should dial back a bit and elaborate a little more on this sweet spot business. But first...
We hear sound because of the disturbances it creates in the air around our ears. If you think about this in slow motion, you can picture the sound leaving a speaker in expanding spheres:
These spheres are called wavefronts, and when you’re in their way, you hear sound. And sound gets less intense with distance, so you want to be an “optimal” distance from the speaker — too close and your eyeballs may burst, too far and, well, why bother? We’re building up to a point here...
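The “less intense with distance” bit is just the textbook inverse-square law: the wavefront is a growing sphere, so the same energy gets smeared over a surface that grows with the square of the radius. A quick sketch (the numbers are our own, purely for illustration):

```python
def relative_intensity(distance_m, reference_m=1.0):
    """Sound intensity at `distance_m`, relative to the intensity
    at the reference distance. Energy spreads over a sphere whose
    surface area grows with radius squared, hence the 1/r² falloff."""
    return (reference_m / distance_m) ** 2

# Double your distance from the speaker and intensity drops to a quarter;
# quadruple it and you're down to a sixteenth.
print(relative_intensity(2.0))  # 0.25
print(relative_intensity(4.0))  # 0.0625
```

Which is why “optimal distance” is a real thing, and why the back rows get such a raw deal.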
Yes, there’s a reason volume icons look like this

The sweet spot
Picture yourself in, say, a movie theatre. The speakers, like your household 5.1 system (if you are in such a household), are placed at the four corners of the room, and right in front, with the screen. Each of these speakers is sending out sound in wavefronts, and at the point where the wavefronts from all these speakers meet, you get to experience all the goodness of 5.1-channel audio — you’re in a “sweet spot”. Unfortunately, this sweet spot is usually tiny — you may be one of only ten people in a theatre getting the full experience.
Most people watch their movies and concerts from outside the sweet spot
Here’s a bit of advice you didn’t expect from this article — when you’re headed to the movies, pick seats in the center of the theatre.
Obviously, if more of us are to enjoy our outings, we must all be in the sweet spot. One option would have been to place speakers really far from the audience space. In an alternate universe, maybe they do have enough real estate to do that. In our world, we must find another way. It’s called wave field synthesis.
Imagine this: you’re about ten feet away from a large speaker, looking (hypothetically) at the wavefront. This may be the optimal non-eyeball-melting-but-still-loud-enough distance, but what if you don’t have ten feet to spare? What if you live in a dingy, high-rent studio in the wrong part of the wrong metropolis, cursing your sad, pathetic...sorry, sometimes we get carried away. Anyway, when you’re pressed for space — like, say, in a new multiplex in a crowded metro — it is possible to use an array of smaller speakers and a mathematical algorithm or two to simulate that same wavefront, just inches away from you.
The array of speakers (in front) can create a wavefront that simulates the effect of a speaker far behind
Creating the simulated wavefront is simple enough. The speaker in the center starts emitting sound first, the ones on either side next, and so on. And there you are — if the speakers simulate the wavefronts at the sweet spot, suddenly the sweet spot becomes every spot that’s at least six inches away from the wall. The big selling point is that this works with regular 5.1 audio as well, so it even makes business sense. Right. Here we are, then.
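That fire-the-center-first, then-the-neighbours trick can be sketched in a few lines. This is not Fraunhofer’s actual algorithm — just the basic geometry of wave field synthesis, with speaker positions and a “virtual” source we made up: each speaker waits exactly as long as it would take the imaginary far-away wavefront to reach it.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, in air at room temperature

def wfs_delays(speaker_xs, virtual_source):
    """Delay (in seconds) for each speaker in a line array along y=0, so
    their combined output mimics a wavefront from a virtual source behind
    the wall (y < 0).

    speaker_xs: x-positions of the speakers along the wall (metres)
    virtual_source: (x, y) of the imaginary distant speaker
    """
    vx, vy = virtual_source
    # Distance from the virtual source to each real speaker
    dists = [math.hypot(x - vx, vy) for x in speaker_xs]
    nearest = min(dists)
    # The speaker closest to the virtual source fires first (delay 0);
    # the rest wait for the imaginary wavefront to "reach" them.
    return [(d - nearest) / SPEED_OF_SOUND for d in dists]

# Ten speakers, 10 cm apart, simulating a speaker 3 m behind the centre
xs = [i * 0.1 for i in range(10)]
delays = wfs_delays(xs, virtual_source=(0.45, -3.0))
```

The two center speakers come out with zero delay and the edges fire a fraction of a millisecond later — exactly the centre-outward ripple described above.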
We don’t normally do this, but permit us, if you will, to get up on our soapbox and yell for a bit. You see, as the beginning of this article plainly suggests, the “future” of technology already exists. With lossless formats like FLAC, and gigabytes of space available for peanuts, it’s entirely possible for us to enjoy music exactly as the artists recorded it. You’ll have to part with many months’ worth of family income, but it’s there. We don’t get to enjoy this because of good old human idiocy.
Here’s how we got the idea for a “future of sound” article: we’ve gone from CRT monitors to 22-inch HD-capable screens in almost no time. We’ve gone from lugging around CDs and a portable CD player in a backpack, to dumping entire music collections on to a PMP and still having space left over. Our experiences with audio, on the other hand, haven’t changed in decades. Whether today or in 1979, there’s only one way to bring a “concert experience” home — get the music in its purest form (no, not by kidnapping the band), put it through the best equipment money can buy (ideally the kind that the studios own), and turn up the volume. With IOSONO, our concert experience may come closer to the experience of being in the studio with the artist. And if you can set up an IOSONO system at home, you can bring that experience home as well.
But the real problem — the rotten banana skin that even the most agile technology will slip on — is what’s known as the “loudness war”. At some point in the jukebox era, recording industry executives decided that consumers were such brain-dead twits that they couldn’t be bothered to get up and turn up the volume when the noise in the bar/café/establishment-of-your-choice was drowning out the song. And so they decided to make “hot” masters — recorded audio whose volume is artificially boosted to make it stand out more when it’s played. This came to be regarded as a really good idea, so they kept doing it. We’ve labored under the impression that the audio on CDs is as good as it gets, but in reality, they’re shining examples of artificially loud music — a louder track sounds “better” on a quick comparison, so every release tried to out-shout the last, at the cost of the quiet-to-loud contrast the artist recorded.
The worst part is that even though the audio CD, for all practical purposes, is dead, this practice continues. So even though you can get yourself the best equipment, the old garbage-in, garbage-out rule applies, and all that money would be spent for naught.
But does this matter? When we take into account economics, the demands of consumers, and other such factors that have nothing to do with technology, we come to one key phrase...
The sound wave for “Something”, by The Beatles, from its original(ish) form in 1983, to what it became by 2000. You’ll notice that not only has the song been made much louder, but it’s been boosted disproportionately. The original has a few quiet and a few not-so-quiet bits, which have now been turned into bits that are annoyingly loud or annoyingly slightly-less-loud
The Audio Spotlight
Since we have already touched on the subject of performance audio, we should also mention another form of audio in public places — obnoxious public advertising. If you’ve been in a mall or supermarket and heard the repeated wail of an announcement or advertisement you didn’t really care about, you’ll be interested to know that it may soon be obsolete. Like most audio innovations, the “audio spotlight” has been around for ages — in fact, we read about it in Digit way back when we were readers ourselves. The basic principle is this: instead of transmitting audible sound waves, transmit ultrasound waves that distort as they travel through the air, in such a way that they turn into regular, audible sound waves by the time they reach your ears. This distortion can be controlled quite precisely, too. Ultrasonic waves are directional — they don’t spread out everywhere, just in the direction the speaker is pointing. This means that you could sit right in front of your TV and watch your soaps on full blast, while the rest of your family could sit as little as a foot to your right and not hear a thing, leaving them free to think about what a disappointment you’ve become.
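Why ultrasound stays in a tight beam while ordinary speech sprays everywhere comes down to standard diffraction physics: a source only beams sound when it is much larger than the wavelength it emits. A back-of-the-envelope sketch (our own rule-of-thumb numbers, nothing to do with any particular product’s design):

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air

def wavelength(freq_hz):
    """Wavelength in metres of a tone at the given frequency."""
    return SPEED_OF_SOUND / freq_hz

def beams_well(emitter_width_m, freq_hz):
    """Crude rule of thumb: a source is strongly directional only when
    its wavelength is much smaller (here, 10x) than the emitter."""
    return wavelength(freq_hz) < 0.1 * emitter_width_m

# A 20 cm emitter playing a 1 kHz audible tone vs a 40 kHz ultrasonic one:
audible = beams_well(0.2, 1_000)      # wavelength ~34 cm: sprays everywhere
ultrasonic = beams_well(0.2, 40_000)  # wavelength ~8.6 mm: a tight beam
```

A 1 kHz tone has a wavelength longer than the emitter itself, so it radiates in all directions; at 40 kHz the wavelength is under a centimetre, and the same emitter throws a narrow spotlight of sound.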
If you, like so many of us, like your music in your pocket while you travel, do you really think — even with noise-cancelling headphones — that you’ve ever cared that the music is artificially “enhanced”? Or do you just kick back and enjoy? A soundgasm may be highly desirable, but its lack isn’t really going to tear the world in two. If you can get joy out of sound without caring what bitrate it’s reaching you at, then you probably aren’t even bothered about what this article is saying.
If we demand great technology, it may or may not come. At least there’ll always be great music.