Sample Rate
Sample Rate is the number of times an incoming audio signal is measured (sampled) per second.
It is to audio what frame rate (Frames Per Second) is to video.
Sample Rate values are typically written in kHz (kilohertz).
Sample Rates come in 'bands' and common examples include:
- Single-band - 44.1kHz & 48kHz
- Dual-band - 88.2kHz & 96kHz
- Quad-band - 176.4kHz & 192 kHz
For example, when recording at a sample rate of 48kHz, 48,000 (forty-eight thousand) samples are captured each second by your audio recording device.
As you increase the sample rate, you capture more samples of the incoming audio signal each second.
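The arithmetic above can be sketched in a few lines of Python (the function name and clip lengths are just illustrative):

```python
def total_samples(sample_rate_hz: int, duration_s: float) -> int:
    """Number of samples captured for a clip of the given length."""
    return round(sample_rate_hz * duration_s)

# At 48kHz, one second of audio is 48,000 samples.
print(total_samples(48_000, 1.0))    # 48000
# A 3-minute (180-second) recording at 44.1kHz:
print(total_samples(44_100, 180.0))  # 7938000
```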
The maximum frequency that can be captured correctly by a recording device¹ is limited by the sample rate the device is set to.
This relationship follows a simple rule²:
Sample rate ÷ 2 = maximum frequency that can be correctly captured
This means, when using a sample rate of 48kHz, we can capture audio frequencies up to 24kHz.
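The rule above can be sketched as a one-line calculation (the function name is illustrative):

```python
def max_capturable_frequency(sample_rate_hz: float) -> float:
    """Nyquist limit: the highest frequency a given sample rate can
    correctly capture is half the sample rate."""
    return sample_rate_hz / 2

print(max_capturable_frequency(48_000))  # 24000.0 -> 24kHz
print(max_capturable_frequency(44_100))  # 22050.0 -> 22.05kHz
print(max_capturable_frequency(96_000))  # 48000.0 -> 48kHz
```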
The human hearing range typically spans from 20Hz to 20kHz (and our ability to hear higher frequencies diminishes as we age). To capture the complete range of sounds audible to humans, sample rates of 44.1kHz and 48kHz are therefore more than sufficient.
As such, the vast majority of digital music available by typical distribution methods (streaming on Spotify/Apple Music, CDs) is at a 44.1kHz sample rate, and audio for film tends to be at 48kHz³.
What's the point of higher Sample Rate options?
Since sample rates of 44.1/48kHz allow us to capture frequencies spanning the full range of human hearing, you may wonder what the purpose of higher sample rate options is.
There is debate in the audio community about the value (or lack thereof) of using higher sample rates for situations that don't fall into the above categories (i.e., for general recording purposes). We won't get into that here…
Bit-Depth
Bit-Depth is the number of “bits” of information captured in each individual sample.
As bit-depth increases, so does the dynamic range. Dynamic range is the difference between the quietest and loudest levels of a signal that can be recorded. Increasing the bit-depth therefore expands the range of levels your recording software can capture. However, the dynamic range of human hearing typically does not exceed about 120 dB.
Common Bit-Depths: 16-bit, 24-bit, and 32-bit float
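The link between bit-depth and dynamic range can be approximated with a standard rule of thumb: each bit of fixed-point resolution adds roughly 6.02 dB of dynamic range. A minimal sketch (the function name is illustrative):

```python
def theoretical_dynamic_range_db(bit_depth: int) -> float:
    """Approximate dynamic range of fixed-point audio: ~6.02 dB per bit."""
    return round(6.02 * bit_depth, 2)

print(theoretical_dynamic_range_db(16))  # 96.32 dB (CD quality)
print(theoretical_dynamic_range_db(24))  # 144.48 dB
```

Note that this approximation only applies to fixed-point formats; 32-bit float stores audio differently and offers far more headroom than a per-bit estimate suggests.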
Buffer Size
Buffer Size is the amount of time your computer is given to process the audio coming from your sound card or audio interface.
Buffer size matters when you experience latency, which is a delay in processing audio in real time. Reducing your buffer size reduces latency, but it places a higher load on your computer, which can cause glitchy audio or drop-outs.
This can often be fixed by increasing your buffer size in the audio preferences of your DAW or driver control panel.
When introducing more audio tracks to your session, you may need a larger buffer size to record the signal accurately with no distortion and limited latency. Increasing the buffer size gives your computer more time to process each block of audio without distortion.
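The trade-off above can be quantified: the time one buffer represents is the buffer size (in samples) divided by the sample rate. A minimal sketch (the function name is illustrative, and real round-trip latency also includes driver and converter overhead):

```python
def buffer_latency_ms(buffer_size_samples: int, sample_rate_hz: int) -> float:
    """Time one audio buffer represents, in milliseconds."""
    return round(buffer_size_samples / sample_rate_hz * 1000, 2)

print(buffer_latency_ms(64, 48_000))    # 1.33 ms  (low latency, higher CPU load)
print(buffer_latency_ms(1024, 48_000))  # 21.33 ms (relaxed, suited to mixing)
```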
It is important to find the right buffer size for your session, as this can vary depending on the number of tracks, plug-ins, audio files, etc. We do not recommend a specific setting because the right value depends on your project. But as a general rule:
When Recording:
- Set the buffer size as low as you can to reduce latency. If you start hearing clicks and pops or your DAW gives you an error message, either raise the buffer size or reduce the number of effects plug-ins/audio tracks in your project.
When Mixing:
- As latency is not really a factor when mixing, you can afford to put the buffer size at its highest setting. This will reduce the chances of any clicks and pops being heard when you add effects plug-ins.
When listening to general Music/Audio outside a recording project:
- Latency is not a factor when just listening to music outside a DAW (YouTube/Spotify/Media Players), so the buffer size can be set to its highest setting.
For more information about latency, please see the below article.
Latency Issues with Interfaces
¹ This assumes that neither the analogue circuitry nor the analogue-to-digital converter in the input stage has any filtering to cut out or attenuate higher frequencies.
² This rule is known as the Nyquist Theorem.
³ Audio for film tends to be recorded at either 48kHz or a higher multiple of 48kHz for better synchronisation against film frame rates.