The Soundstage: Home vs. Studio Control Room

 

Moderator
Username: Admin

Post Number: 99
Registered: Dec-03
The Soundstage: Home vs. Studio Control Room

A couple of months ago, I was involved in a discussion about the differences between the acoustics in the home and those of the recording studio. The various responses to the thread, on the audioasylum.com-sponsored Rives Acoustic Forums, were diverse and very interesting in that they underscored what an extremely subjective thing listening to music can be.

As a recording engineer, on more than a few occasions, I have been reminded of this subjectivity in some very amusing ways. For instance, one evening, after a full day of mixing, I was invited to visit some friends in the Hollywood Hills on the way home. Just stop by and say hello. As I had a cassette of the mix that I was working on with me, I asked if I could play it on their system, which consisted of McIntosh components and JBL speakers. A very nice system at that time, the mid-1970s. As we sat on the couch facing the speakers, I was horrified by what I heard! My mix sounded nothing like the well-defined track that I'd spent the better part of the day in the studio crafting. The bass was out of proportion, loud and boomy, and the relative balances between the instruments and voices seemed way off from what I remembered from an hour or so before. I started apologizing, saying that it was just a rough, unfinished mix that I was taking home to evaluate, all of which was true. But my friend, who was a fine musician himself, waved me off, saying that it sounded really great to him. He said that he knew his speakers and offered to put on some other records for comparison, which we did.

Over the years, I've since learned that you can adapt to pretty much anything. That is, you can train your ears to get used to a certain pair of speakers and the room that they are in. This has proved useful because many of the professional studio control rooms that I've worked in, all over the world, have left much to be desired with respect to the sonic accuracy of their playback systems. I usually carry a few of my favorite CDs around with me for comparison and to get a quick start on evaluating control room playback speakers.

There is a difference between the acoustic environment in which music is mixed and the acoustic environment in which it is listened to for enjoyment. In a 1981 publication called "Studio Acoustics", Michael Rettinger gives a classic definition of the control room as a working environment:

"Control rooms for music studios are for the purpose of regulating the quality of the recorded program rendered in the studio...The operations may involve level adjustment, the addition of a reverberatory note to the renditions, tonal modifications by means of equalizers, the use of limiting, compression, or expansion of various passages and other checks and modifications"

By contrast, the listening room is usually designed for no other purpose than leaning back in a comfortable chair and enjoying music. If I had to single out one area in which the relative differences were greatest between the control room and the listening room, I would have to say that it was the soundstage. Whenever we talk about the soundstage, we are talking about what can be a very complex and convoluted subject, one that can involve everything from the relative diffusion and reverberation time of a room to the interaction of its modes. The emphasis on diffusion and reverberation time might be explained by the fact that audiophiles invariably use the 'room' as a creative part of their listening experience, certainly to a much greater degree than their counterparts, the recording engineers. This is because the audiophile is seeking to add to his musical experience, enhancing and broadening the musical soundstage, sometimes beyond the intent of the original recording, whereas the recording engineer does not really have the freedom to do this. The recording engineer - or mixer, as he or she is sometimes referred to - tries to keep the musical soundstage within the confines of the information coming from the speakers, eliminating, or at least minimizing, any acoustic artifacts that might artificially enhance or otherwise adversely color what they are listening to and evaluating. The reason for this should be obvious: if the room is providing the reverberation (and with it the illusion of depth) or any other complementary aural quality, then the engineer might not feel that he had to introduce it himself (and a mix that relies on the room's contribution may not translate to other systems), thus seriously affecting the final outcome of the mix.

Reverberation and echo are used in the mixing of music primarily to separate instruments and vocals and to give an overall feeling (illusion) of depth and size to the performance. In this context, reverberation is used to counter and make up for the recording techniques sometimes employed while making records. By way of explanation: studios are, for the most part, tightly controlled environments, designed specifically to record instruments in close proximity to each other. Studio acoustics have changed drastically since the advent of multi-track recording. Studios were originally designed to complement and acoustically enhance musical ensembles (groups, bands or orchestras et cetera) playing live, with their performance being documented on either mono or, later, stereo tape. Today, studios are more like workshops, where musical performances are no longer documented with all the musicians playing together in the same room as a norm, but created piecemeal, instrument by instrument, track by track. More common is that instruments are recorded just for tracking purposes - getting the structure of the song down - and replaced later, with more concentration being paid to the performance. The caveat to this is that, in order to have control over the instruments and record them on separate tracks with as many musical options as possible, the studio acoustics must be controlled and can be quite dead, sterile and lacking in kinetic energy (i.e., excitement). It is not hard to imagine the sound of one or two 100-watt guitar amplifiers, a bass amplifier and a full drum set all in the same room. If that room were live, that is, without any acoustic treatment, the sound would most probably be uncontrollable, albeit exciting, depending on your point of view. To counter this, many studios have what are called Iso (Isolation) Rooms, individual smaller soundproof rooms within the studio area in which to put instruments, ensuring that they can be recorded with a high degree of acoustic isolation from other instruments. In mixing, the separate instruments are recombined to make them sound as if they were all playing live together in a huge room without acoustic damping.

Having described the basic methodology behind the recording process as it is today, together with a few of the reasons for the controlled acoustic environment, let's look at what a 'soundstage' is with respect to the control room and the listening room. In general terms, the soundstage might be said to be the area in which the listener sits and the speakers perform. By extension, it includes anything that has an acoustic influence or aural effect on that area. For instance, if the walls of the listening room are dead, without reflection, and the floor is carpeted, then the soundstage will be relying, for the most part, only on the sound coming out of the speakers themselves. This will be particularly true at lower volumes, where the room is not significantly involved in the reproduction of the lower frequencies. By this I mean that the various room modes (relative to the primary physical dimensions of the room) are not being excited and reinforced, because of the low playback volume. As the volume is increased, the room modes can become major contributors to the low-frequency response curve, which can be good or bad. So far, this description is common to both the control room and the listening room. At this point the relative philosophies part ways, as the engineer and the listener require different soundstages. As mentioned above, the engineer does not want his soundstage to be overly live or reverberant, as it could prevent him from hearing, or at least cause him to misinterpret, certain musical and aural nuances of the performance. The listener, in contrast, might want to create a 'stage' - literally.
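To make the point about room modes concrete, here is a minimal sketch in Python (the room dimensions and the speed-of-sound figure are assumptions chosen purely for illustration, not measurements of any particular room) showing how the axial mode frequencies follow directly from the primary dimensions of a room, at f = n x c / 2L:

# Axial room-mode frequencies for a hypothetical rectangular room.
# All dimensions are illustrative assumptions, not measurements.
SPEED_OF_SOUND = 1130.0  # ft/s, approximate at room temperature

def axial_modes(dimension_ft, count=4):
    """First few axial mode frequencies (Hz) for one room dimension."""
    return [n * SPEED_OF_SOUND / (2.0 * dimension_ft) for n in range(1, count + 1)]

room = {"length": 19.0, "width": 13.0, "height": 8.0}  # feet (assumed)

for name, dim in room.items():
    freqs = ", ".join(f"{f:.0f} Hz" for f in axial_modes(dim))
    print(f"{name:>6} ({dim:.0f} ft): {freqs}")

The lowest of these only become audible contributors once the playback level is high enough to excite them, which is exactly the volume dependence described above.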

The best acoustic soundstages are, to me, ambient but diffuse. Diffuse is the key word here. I want my (early) reflections to be spread out, not concentrated on just one small area. Live areas that rely on singular or low multiples of acoustic reflections tend to have adverse and harsh coloration when compared to areas whose source is multiple, closely spaced reflections that do not converge directly on any one listening position in the general area. Small rooms tend to be harder to work with as far as creating well-diffused soundstages, because the boundary walls, and hence the reflective surfaces, are so close and so easily overwhelmed by loud music. This is one of the main sources of adverse reflective coloration. One of the biggest victims of this type of reflective coloration is the human voice - voice intelligibility.

We've all been in halls, churches or rooms where the acoustics have reduced voice intelligibility to zero, or thereabouts. Churches are prime examples of environments that have been designed primarily to enhance speech, and as soon as a sound reinforcement system is powered up in them, as is the fashion, those beautiful acoustics are overpowered and all bets are off. Likewise, most older concert halls were designed to acoustically enhance orchestral performance and other unamplified events. It is interesting to note that the RT-60s (reverberation times) of some concert halls extend to 1-2 seconds when measured in the critical bandwidth of the human voice (500 Hz - 5 kHz). Listening rooms and even studio control rooms can suffer the same fate if not enough attention is paid to their acoustical treatments. Although smaller than churches and concert halls, they are nevertheless bound by the same laws of physics, just on a different, albeit less grand, scale.
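Those RT-60 figures follow directly from a space's volume and total absorption. As a back-of-the-envelope illustration (the room size and absorption coefficients below are assumptions, not measurements of any real room), the classic Sabine estimate is RT60 = 0.161 * V / A in metric units:

# Sabine estimate of reverberation time: RT60 = 0.161 * V / A (metric units).
# The room size and absorption coefficients are illustrative assumptions.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)  # metric sabins
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 6 m x 4 m x 2.5 m listening room at mid frequencies:
volume = 6 * 4 * 2.5
surfaces = [
    (6 * 4, 0.30),               # carpeted floor
    (6 * 4, 0.05),               # painted ceiling
    (2 * (6 + 4) * 2.5, 0.10),   # drywall walls
]
print(f"Estimated RT60: {rt60_sabine(volume, surfaces):.2f} s")  # roughly 0.7 s

A small, reasonably furnished room lands well under a second; a large stone church, with vastly more volume and very little absorption, is how you end up with the multi-second decay that swallows speech.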

One way to counter the 'closed-in' effect of small rooms is to deaden the walls: stop the mid/high frequencies from reflecting before they start. This is really a brute-force approach and not altogether ideal or advisable. It is often used in smaller control room situations where a more controlled listening area (soundstage) is usually preferred. It is also useful in some instances in the smaller home listening room, when other, more diffusive acoustic treatments are precluded because of room size or budget et cetera. The effect of diffusion, as stated above, is to produce multiple and closely spaced acoustic reflections which can, if correctly designed and implemented, enhance the listening area by adding dimension, excitement and kinetic energy to the musical soundstage.

Kinetic energy, in an acoustic context, can best be explained as follows: when low frequencies travel around in a room, they will eventually reach their physical limits, which are the boundaries of the room - the walls, ceiling and floor. After striking these boundaries, much of the acoustic energy (depending on the construction of the room) is turned back into the room and proceeds to complete another full cycle, relative to its wavelength, until it naturally dies away, having expended all its energy. As they move through the room, low frequencies are modulating the air, moving it around in complex patterns and forms. In this way, all the sound energy in the room is in constant flux, and this is sometimes referred to as kinetic energy. I like to think of it as musical excitement. In a room that is too heavily damped, the low frequencies can be overly absorbed, causing them to have less or no acoustic energy when they are returned from the room's boundaries. This can cause the mid/high frequencies to travel in more or less straight lines through the room, in basically unmodulated (unexcited) air, and the resultant sound can be harsh and unpleasing to the ear. In this way, the low-frequency characteristics of the room can act upon the soundstage - the critical listening area. There is a myth about so-called 'bass traps': that they should act so efficiently that they totally remove the room modes and cure all low-frequency problems. This is not going to happen, as the modal behavior of sound in any environment is governed by physics. Anyway, the idea is not to remove all the bass from the room; that can take all the excitement and energy out of a room. Rather, it is to allow the bass to reach its full physical potential naturally, without unduly loading up due to modal influences, thereby complementing the musical performance rather than overpowering it.
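A quick worked calculation (using an approximate speed of sound and a handful of illustrative frequencies) shows why the bass is so tightly bound to the room's boundaries: the wavelengths involved are as long as, or longer than, the room itself.

# Low-frequency wavelengths vs. typical domestic room dimensions.
# Speed of sound is approximate; frequencies chosen for illustration.
SPEED_OF_SOUND = 1130.0  # ft/s

for freq in (30, 40, 60, 80, 120):
    wavelength = SPEED_OF_SOUND / freq
    print(f"{freq:>3} Hz -> wavelength {wavelength:5.1f} ft "
          f"(half-wave {wavelength / 2:4.1f} ft)")

A 40 Hz note is roughly 28 feet long, so in most rooms it cannot complete even one full cycle before meeting a boundary; whatever energy the boundaries hand back is what keeps the air 'moving' in the sense described above.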

Having come this far, it is only fair to mention that there is a school of thought that supports a different kind of soundstage, one that has the immediate area around the speakers dead and the rear of the room, behind the listening position, live. This is the LEDE concept, developed and championed by the designer/acousticians Don and Carolyn Davis. LEDE stands for Live End - Dead End, and it was, funnily enough, initially intended for studio control rooms. I first had a chance to mix in an LEDE control room in the mid-seventies. The studio was one of four or five studios belonging to Wally Heider, who had by far the hottest studios in both Los Angeles and San Francisco in the late sixties and early seventies. Wally refurbished one of his control rooms and, wanting to stay on the cutting edge, incorporated the LEDE design into it. The studio did not get the rave reviews that Wally had hoped for. Just the opposite, and he quickly changed it back to a more conventional acoustic design.

The philosophy, as I mentioned above, was to have the front of the room acoustically non-reflective, literally dead, and to rely on reflections generated by diffusive wooden acoustic arrays on the back and, sometimes, the rear side walls. I found it extremely hard to mix in it and could not really get used to the 'feeling' of the room at all. In thinking it through, I realized that I didn't want to rely on reflected sound from behind me when listening to a stereo mix in front of me. Apart from feeling alien, I started to see some potential pitfalls in the concept. For instance, while, for some, it might work quite well in a smaller control room, what would happen if the depth of the room was, say, 25'-0" or more? That would mean that the sound had to travel past me to the back wall and then be reflected back to my position at the recording console: in a 25'-0" deep room, say 23'-0" to the back wall (if the speakers are a little off the front wall), plus the distance back to me, sitting behind the recording console, say 14'-0". That equals a path of 37'-0", round trip. It is not hard to see how these reflected signals could really be in conflict with the original signal. At that distance, it is starting to become problematic. Any longer and the reflected signal would be in real danger of becoming a discernible echo of the initial signal, rather than being easily integrated into it by the brain. This condition would fall afoul of the Haas Effect, which states that the human brain has the wonderful ability to integrate sounds that arrive at the ear within around 35-50 milliseconds of the original. After that, instead of being perceived by the brain as part of the original sound, such late reflections are heard as an echo, reverberation or an addition to it. This causes ear/brain confusion and loss of clarity in the sound.
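Putting the numbers from that example into milliseconds makes the point (the direct-path distance of 23' minus 14' is inferred from the figures above, and the speed of sound used is approximate):

# The article's 25-foot LEDE room example, converted to arrival-time delay.
# Distances are taken from the text; the direct path (23 - 14 = 9 ft) is inferred.
SPEED_OF_SOUND = 1130.0  # ft/s, approximate

direct_path = 23.0 - 14.0        # speakers to mix position, feet
reflected_path = 23.0 + 14.0     # speakers -> back wall -> mix position, feet
extra_ms = (reflected_path - direct_path) / SPEED_OF_SOUND * 1000.0

print(f"Direct path:    {direct_path:4.1f} ft")
print(f"Reflected path: {reflected_path:4.1f} ft")
print(f"Rear-wall reflection arrives {extra_ms:.1f} ms after the direct sound")
print("Haas integration window: roughly 35-50 ms")

At about 25 ms the rear-wall return is still inside the integration window, but only just; make the room a few feet deeper, or move the mix position forward, and it starts to be heard as a separate event.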

I hope that this short article has helped explain some of the differences between the control room and listening room environments and the reasoning behind them. Simply put: one is for creating a reproduction of the musical performances and the other is for enjoying them.

Christopher Huston
Senior Acoustical Engineer
Rives Audio

Chris is a recording engineer and producer with over 80 gold and platinum albums to his credit. He has recorded Led Zeppelin, the Who, Van Morrison, Patti LaBelle and many, many more. Chris has 30 years of experience in studio and listening room design. His experience and insight into this subject are second to none.
 

Silver Member
Username: Kegger

MICHIGAN

Post Number: 769
Registered: Dec-03
I'm sure myself and others would be very interested
in a similar account/perspective/opinion on the
new surround music formats.

and how they relate to this!

anything on that?

btw very good article.
 

Bronze Member
Username: Donaldekelly

Washington, DC Usa

Post Number: 65
Registered: Jul-04
What can one do in the real world - the living room you must share with your wife?

Ceiling panels? She probably wouldn't let me touch the walls. Carpet and upholstered furniture? Spiky punk hairdo?
 

Dragonfyr
Unregistered guest
I fear this description of the Haas effect is 'backwards'. Signals arriving within the 35-50 ms window are not able to be resolved and localized with respect to their point of origin, and thus result in increased unintelligibility and a muddled 'soundstage', and are thus damped in practice by absorptive materials in order to eliminate them.
Signals arriving after this are subject to time-domain superposition and the phase anomalies manifested in the observed polar and frequency responses within the frequency domain. These anomalies are created within the time domain and cannot be corrected in the frequency domain without first resolving the time-domain arrival-time issues. Until a minimum-phase delta condition between all the multi-source signal arrival times is achieved, adjustments within the frequency domain will not be effective. Thus the oft-used first-response frequency-domain RTA and EQ tools CANNOT correct anomalies that inhabit non-minimum-phase-difference environments. (Although LRC filter EQ adjustments can make small changes to the phase of the component waveforms and thus to the effects manifested as a result of superposition - thus moving the problems around! I think this has been referred to as 'array steering'! - as the polar pattern is affected and the problem null moved, thus creating the impression that it is fixed relative to the listening position!)
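To illustrate the superposition point (the delay, reflection level and sample rate in this sketch are arbitrary assumptions): summing a direct signal with a single delayed copy produces comb-filter notches whose spacing is fixed by the arrival-time difference, and no amount of static frequency-domain boost can fill a null created by cancellation.

# Comb filtering from one delayed reflection: a time-domain problem that shows
# up in the frequency domain. Delay, level and sample rate are assumptions.
import numpy as np

fs = 48000                  # sample rate, Hz
delay_ms = 5.0              # arrival-time difference of the reflection
level = 0.7                 # reflection amplitude relative to the direct sound

delay_samples = int(fs * delay_ms / 1000)
impulse = np.zeros(4096)
impulse[0] = 1.0                   # direct sound
impulse[delay_samples] += level    # delayed reflection

magnitude_db = 20 * np.log10(np.abs(np.fft.rfft(impulse)) + 1e-12)
freqs = np.fft.rfftfreq(len(impulse), 1.0 / fs)

# With a 5 ms delay the nulls land near 100 Hz, 300 Hz, 500 Hz, ...;
# boosting those bands with an EQ just feeds more energy into the cancellation.
for f, db in zip(freqs, magnitude_db):
    if 80 <= f <= 600 and db < -9:
        print(f"deep notch near {f:6.1f} Hz ({db:5.1f} dB)")

The notch positions move only if the arrival times change, which is the point about fixing the time domain first.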

A short answer to the dilemma is to employ an effective time-domain analyzer such as TEF to resolve the time-domain issues into a minimum-phase condition. Thus the fundamental time-domain factors affecting intelligibility, influenced by the 'short term' arrival times (Haas effect), and the major acoustical anomalies resulting from phase superposition (so often measured in the frequency domain with RTAs and then futilely treated by EQing) must first be identified and resolved in the time domain.

Once these time-domain issues are resolved to establish a minimum-phase-difference relationship between the component signal arrival times and the decay times (RT60 etc.), then you can play with your EQ to your heart's content!

And the new trendy inverse frequency response feedback 'room EQ' methods are simply fancy room EQ marketing schemes! More marketing snake oil!

May I suggest anyone interested in this topic attend a Syn-Aud-Con seminar!!!!


 

Silver Member
Username: Cheapskate

Post Number: 517
Registered: Mar-04
i don't have any WAF problems, so my front side walls have cheap yellow egg crate mattress liners on them, LEDE (live end dead end) style, and i think i'll get a couple more for my front wall soon too.

considering that my living room is asymmetrical, i found that treatment really tightened my image up.

sometimes i loosely drape a blanket over my TV to tame backwaves too.

acoustical treatments look like the most ignored aspect of most people's home systems from what i've seen.

as small as my room is, damping helps a lot.

i can't wait to get a behringer DEQ2496 digital room correction EQ to further improve my in-room response to within 1/2 dB of flat.