How Audacity Works

This page is for technical questions about the algorithms used in Audacity's source code. An algorithm can be defined as a finite list of instructions for accomplishing a task that, given an initial state, will terminate in a defined end-state.
If you have questions about how Audacity works, please post them here and the developers will answer them!
 
Related article(s):
Eventually we will get a lot more organised and have explanations of the Audacity algorithms in Doxygen format, so that they can appear both in the Audacity source code and on a web page like this one.


When is Gain Applied?

Q: I know that Audacity has a 32-bit sample resolution, and that when mixed down to normal 16-bit wav, it renders much of the following moot... however... When gain/amplification (either negative or positive) is applied, the resulting interpolation must result in a less accurate representation of the original waveform. I'm wondering if when running down the EDL (edit decision list), Audacity performs each gain change calculation separately, or if it's smart enough to look at all the gain adjustments in total, and interpolate only once, thereby reducing the accumulation of error?

A: No. Audacity is not that smart. The order in which effects are applied to the tracks is exactly the same as the order that you apply them in. Audacity doesn't actually have an EDL at all. Effects are applied at the time that you request them.
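
To illustrate the point behind the question, here is a minimal sketch (not Audacity code; the gains and sample values are made up) showing that applying two gains one after the other in 32-bit floating point can differ very slightly from applying the combined gain once:

 #include <algorithm>
 #include <cmath>
 #include <cstdio>
 
 int main()
 {
     // Hypothetical gains for two successive Amplify operations.
     const float gain1 = 0.6f;
     const float gain2 = 1.7f;
 
     float maxError = 0.0f;
     for (int i = 0; i < 1000; ++i) {
         // A made-up 32-bit float sample in the range -1..1.
         float sample = std::sin(0.01f * i);
 
         // What happens when each effect is applied, and its result
         // stored, at the time you request it.
         float sequential = (sample * gain1) * gain2;
 
         // What an EDL-style approach could do instead:
         // combine the gains and multiply only once.
         float combined = sample * (gain1 * gain2);
 
         maxError = std::max(maxError, std::fabs(sequential - combined));
     }
     // Any difference is only floating-point rounding, far below
     // what survives a mix-down to 16-bit samples.
     std::printf("largest difference: %g\n", maxError);
     return 0;
 }

Because each intermediate result is kept at 32-bit float resolution, the accumulated error is tiny, but this is why combining the gains before multiplying would in principle be slightly more accurate.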

How do Effects Work?

Resampling

Q: I'd like to know which resampling algorithm Audacity uses. I'm studying resampling for my thesis and I'm testing the influence of Audacity's resampler on perceived audio quality.

A: Audacity uses a library called libresample, which is an implementation of the resampling algorithm from Julius Orion Smith's Resample project. Audacity contains code to use Erik de Castro Lopo's libsamplerate as an alternative, but we can't distribute that with Audacity because of licensing issues.
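
As a rough illustration of how such a resampler is driven, here is a minimal sketch against the libresample C API bundled in Audacity's lib-src/libresample (resample_open, resample_process, resample_close). The signatures are quoted from memory and the 44100-to-48000 conversion and buffer sizes are made up for the example, so check libresample.h before relying on it:

 #include <vector>
 #include <cmath>
 #include "libresample.h"   // bundled with Audacity in lib-src/libresample
 
 int main()
 {
     const double factor = 48000.0 / 44100.0;   // made-up conversion ratio
 
     // Some made-up 32-bit float input audio: one second of a 440 Hz tone.
     std::vector<float> in(44100);
     for (std::size_t i = 0; i < in.size(); ++i)
         in[i] = std::sin(2.0 * 3.14159265 * 440.0 * i / 44100.0);
 
     // highQuality = 1 selects the better (slower) filter.
     void *handle = resample_open(1, factor, factor);
 
     std::vector<float> out(static_cast<std::size_t>(in.size() * factor) + 1000);
     int inUsed = 0;
     // lastFlag = 1 tells the resampler this is the final block,
     // so it flushes its internal filter state.
     int outGenerated = resample_process(handle, factor,
                                         in.data(), static_cast<int>(in.size()),
                                         1 /* lastFlag */, &inUsed,
                                         out.data(), static_cast<int>(out.size()));
     resample_close(handle);
 
     // outGenerated now holds the number of resampled samples in 'out';
     // real code would loop, feeding blocks and collecting output each time.
     return outGenerated > 0 ? 0 : 1;
 }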

For more information on our choice of resampling algorithms:

Interpolation

Q: Which interpolation algorithm does Audacity use to interpolate between frequency values in the spectrum analysis?

Waveform dB

Q: How is the Waveform dB scale calculated?

Gale: 04Apr13: The previous text, which was confusing and seemingly incorrect, said: "If the sound amplitude (air pressure) goes up by a factor of 10 the dB goes up by one point. If it increases 100-fold then in dB it goes up by 2 and so on. This is very like the Richter scale for earthquakes. A one point change is a 10-fold increase in pressure."

A: See Wikipedia for full details. The basic idea is that dB is a logarithmic scale indicating a ratio of power or amplitude relative to a specified or implied reference level. In Audacity's case the ratio is of amplitude relative to 0 dBFS, which is the maximum possible level of a digital signal without clipping. We use an amplitude ratio because doubling the power of an audio signal does not double its amplitude.

To give a couple of examples, doubling amplitude raises it by 6 dB (applies a gain of +6 dB) and halving amplitude reduces it by 6 dB (applies a gain of -6 dB). Increasing amplitude ten-fold (by a factor of 10) applies a gain of +20 dB and reducing amplitude to one-tenth of the original applies a gain of -20 dB.

To compare that last example with Audacity's linear Waveform scale, an amplitude of 0 dB is 1 on that scale and an amplitude of -20 dB is 0.1 on that scale.
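
The relationship between the linear Waveform scale and the Waveform dB scale is just the standard amplitude-to-decibel formula. A minimal sketch (the example values match the figures above):

 #include <cmath>
 #include <cstdio>
 
 // Convert a linear amplitude (1.0 = 0 dBFS) to decibels, and back.
 double amplitudeToDb(double amplitude) { return 20.0 * std::log10(amplitude); }
 double dbToAmplitude(double db)        { return std::pow(10.0, db / 20.0); }
 
 int main()
 {
     std::printf("%6.2f dB\n", amplitudeToDb(1.0));   //   0.00 dB (full scale)
     std::printf("%6.2f dB\n", amplitudeToDb(0.5));   //  about -6.02 dB (halved)
     std::printf("%6.2f dB\n", amplitudeToDb(0.1));   // -20.00 dB (one tenth)
     std::printf("%6.3f\n",    dbToAmplitude(-20.0)); //  0.100 on the linear scale
     return 0;
 }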

There is also some disorganised but useful information about decibels here which has never been put anywhere more appropriate.

Audio Mixing

Q: What is the algorithm used by Audacity to mix separate sound tracks (i.e. what is the process of merging the tracks to a single one when the "Mix and Render" command is used)?

A: Mixing is just addition. The waveforms show the air pressure moment by moment, and if two sounds occur at the same time their air pressures add, so we simply add the waveform values.

It is a little more complex than that: for stereo we add right channels to right channels, left channels to left channels, and mono tracks to both, and we apply gain and amplitude envelopes before adding. Applying gain just means multiplying the signal by some value. Left-right panning, which is also done during mixing, is similar in that it applies different gains to the left and right channels. If the tracks being mixed are not at the project's sample rate, we first have to do sample rate conversion as well.

There is also the problem of 'clipping', where the value after mixing is too loud. At the moment Audacity mixes the tracks as indicated by the waveform values and the settings of the gain and pan sliders on the Track Control Panels, without preventing clipping in the result.
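
A minimal sketch of that idea (not Audacity's mixer; the track data, gains and pans are made up, and a simple linear pan law is used purely for illustration):

 #include <vector>
 #include <algorithm>
 
 struct Track {
     std::vector<float> samples;  // mono samples, already at the project rate
     float gain;                  // from the track's gain slider, as a factor
     float pan;                   // -1 = full left, 0 = centre, +1 = full right
 };
 
 // Mix mono tracks down to one interleaved stereo buffer: just add them,
 // after applying each track's gain and a simple linear pan law.
 std::vector<float> mix(const std::vector<Track> &tracks, std::size_t length)
 {
     std::vector<float> stereo(2 * length, 0.0f);
     for (const Track &t : tracks) {
         const float leftGain  = t.gain * (1.0f - std::max(0.0f, t.pan));
         const float rightGain = t.gain * (1.0f + std::min(0.0f, t.pan));
         const std::size_t n = std::min(length, t.samples.size());
         for (std::size_t i = 0; i < n; ++i) {
             stereo[2 * i]     += t.samples[i] * leftGain;   // left channel
             stereo[2 * i + 1] += t.samples[i] * rightGain;  // right channel
         }
     }
     // No clipping prevention here, matching the behaviour described above:
     // summed values can exceed the nominal -1..+1 range.
     return stereo;
 }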
