Silver MemberUsername: James_the_god
Doncaster, South Yorkshire England
Post Number: 112
I have a card in one of my computers which I think has a 48kHz sampling rate (a Creative SB Live! 5.1), and I can play songs with all different bitrates. Why is this? I don't understand the difference between an actual song's bitrate and the sampling rate you see in the specs of a card. Could anyone tell me the difference? I need to know before I purchase an M-Audio 2496 soundcard for one of my PCs.
New memberUsername: Ensomniac
Gresham, Oregon USA
Post Number: 7
According to the Nyquist theorem, one must sample a signal at a minimum of twice the rate of the highest frequency in the signal, in order to be able to reconstruct it correctly at some later point. Since humans can (nominally) hear up to 20kHz, the absolute minimum sample rate required to encode a full-range audio signal is 40kHz.
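To see what the Nyquist limit means in practice, here's a small sketch (function name is my own, just for illustration) of how a tone above half the sample rate "folds" back down to a lower frequency after sampling:

```python
# Illustration of the Nyquist limit: a tone above fs/2 aliases
# (folds back) to a lower frequency once sampled.

def aliased_frequency(f_tone, f_sample):
    """Return the frequency actually captured when sampling a pure
    tone of f_tone Hz at f_sample Hz."""
    f = f_tone % f_sample          # the spectrum repeats every fs
    if f > f_sample / 2:           # above Nyquist: folds back down
        f = f_sample - f
    return f

# A 10 kHz tone sampled at 44.1 kHz survives intact:
print(aliased_frequency(10_000, 44_100))   # -> 10000
# But a 30 kHz tone sampled at 40 kHz aliases down to 10 kHz:
print(aliased_frequency(30_000, 40_000))   # -> 10000
```

This is exactly why anything above half the sample rate has to be filtered out *before* conversion: once it aliases, it's indistinguishable from a genuine in-band tone.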
CD quality audio is actually 16 bit / 44.1kHz. The 44.1kHz sampling rate comes from the minimum (40kHz) plus a margin, so that the slope of the low-pass reconstruction filter required after conversion from digital back to analogue does not need to be quite so sharp.
Now, bitrate simply specifies the number of bits per second used to encode the audio. The uncompressed bitrate for CD audio is 16 bits x 44,100 samples/second x 2 channels = 1,411,200 bps, or roughly 1411kbps. When audio is uncompressed, the bitrate is a linear function of the sample rate; i.e. doubling the sample rate doubles the bitrate.
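That arithmetic is simple enough to sketch directly (the function name is mine, just to label the calculation):

```python
# Uncompressed PCM bitrate = bit depth x sample rate x channel count.

def pcm_bitrate(bit_depth, sample_rate, channels):
    """Bits per second of uncompressed PCM audio."""
    return bit_depth * sample_rate * channels

cd = pcm_bitrate(16, 44_100, 2)
print(cd)            # -> 1411200 bps, i.e. ~1411 kbps
# Doubling the sample rate doubles the bitrate:
print(pcm_bitrate(16, 88_200, 2))   # -> 2822400
```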
However these days we often compress digital audio using various algorithms; MPEG layer 3 being the most common. As such, there is no longer a direct correlation between bitrate and sample rate. Almost every MP3 file I've ever seen has a sample rate of 44.1kHz (since the audio has usually been ripped from CDs, which use that sample rate as previously mentioned) but the bitrates can vary wildly from 96kbps to 320kbps.
Just as lowering the sample rate means more information from the original audio signal is lost, lowering the bitrate generally means the same; i.e. a 256kbps MP3 will result in a more faithful reproduction than a 96kbps MP3, all else being equal. However, the effect on audio quality is different. A lower sampling rate cuts out high frequencies, whereas a lower bitrate MP3 suffers variously from the 'underwater' effect, bubbling, smeared transients, etc.
The need for 96kHz and 192kHz sampling frequencies is hotly debated in pro audio circles. Since we can't hear above 20kHz (and by the time you hit 30 it's usually more like 15kHz), it seems a bit pointless to use higher sample rates just to encode audio beyond 20kHz.
One benefit that higher sample rates do have is that one can use a much gentler reconstruction filter slope (without going into too much detail, converting from digital to analogue causes mirrored replicas of the original signal centered at multiples of the sample rate, and these need to be filtered out without affecting the signal we do want to hear).
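A quick sketch of where those mirrored replicas (images) land makes the filtering argument concrete. For a tone at f Hz sampled at fs Hz, images appear at n*fs ± f (the function here is just illustrative):

```python
# Where the spectral images land after digital-to-analogue conversion:
# for a tone at f Hz sampled at fs Hz, images sit at n*fs - f and n*fs + f.

def image_frequencies(f, fs, n_images=1):
    out = []
    for n in range(1, n_images + 1):
        out.extend([n * fs - f, n * fs + f])
    return out

# A 20 kHz tone at 44.1 kHz: the first image is at 24.1 kHz, only
# 4.1 kHz above the audio band, so the filter must be very steep.
print(image_frequencies(20_000, 44_100))   # -> [24100, 64100]
# At 96 kHz the first image is up at 76 kHz: room for a gentle filter.
print(image_frequencies(20_000, 96_000))   # -> [76000, 116000]
```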
The difference between 16 bit and 24 bit however is very audible, with the latter being far superior. Pretty much any decent digital studio will record and/or mix at 24 bits, and then dither down to 16 for the final CD (dithering is a process that adds a very small amount of noise to a signal during bit reduction, so that the quantization error is decorrelated from the audio and becomes a benign low-level noise floor rather than audible distortion).
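Here's a minimal sketch of dithered bit reduction, assuming integer samples and a simple TPDF (triangular) dither spanning one 16-bit LSB; real mastering tools use more sophisticated noise shaping, and the function name is my own:

```python
import random

# Sketch of dithered 24-bit -> 16-bit reduction. TPDF dither is the
# difference of two uniform random values, spanning +/- 1 LSB of the
# target depth; it decorrelates quantization error from the signal.

def dither_to_16bit(sample_24, rng=random):
    step = 1 << 8                                  # one 16-bit LSB in 24-bit units
    tpdf = (rng.random() - rng.random()) * step    # triangular dither noise
    return int(round((sample_24 + tpdf) / step)) * step

random.seed(0)
print(dither_to_16bit(1_000_000))   # a 24-bit sample snapped to a 16-bit level
```

The requantized value is always within a sample step or so of the original; what the dither buys you is that the *pattern* of those small errors is random rather than signal-dependent.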
There are a lot of topics regarding sample rates, bit rates, and bit depth, and I only covered them very briefly. If you're interested in learning more I suggest you get a book called "The Principles of Digital Audio" which if memory serves is by a guy called Ken Pohlmann. It explains a lot of this stuff in great detail.