Advanced Docs Project

From Audacity Wiki
The 'Advanced Docs' project is a proposal to document more fully how Audacity works.
It would be led by James, with ongoing feedback from Steve, Peter and Paul.
I will try to keep the project small, and will attempt a three-month schedule.

As a spin-off, the project should clarify big chunks of Audacity code, whether the code is viewed via Doxygen or in an IDE. The docs will also help advanced users understand concepts such as digital audio, compression and spectrograms.

  • Images so far...
  • Progress at...

Feedback from newer developers is that our code needs more documentation.

  • GSoC students have found that there is not enough explanation in the code.
  • More experienced developers have much less trouble finding their way around, but they notice the inconsistencies and the multiple ways of doing 'the same thing'.

The proposed changes to code and documentation will be topic-driven. I will write documentation about design topics, to provide a better guide for developers. These topics will be held in our wiki.

We also have advanced users, who are progressing from basic editing to using spectrograms and automation.

  • We have relatively little information in Wiki or Manual for advanced users.
  • It would be good to provide them with more background, so that they understand the concepts.

The proposed changes to WIT and to documentation will again be topic-driven.

  • Developers will get a quick route into the code via WIT.
    • They get high level design topics in the WIT sidebar.
    • They can dive into the code, and into cleaner class documentation, using the Doxygen links.


  • Advanced users get more information about what is going on with audio.
    • My hope is that spectrograms will make more sense to them.
    • There will be a new graphical overview page on digital audio on WIT.

I will be reviewing Audacity code as I write these topics. Often it will be easier and clearer to rename classes, so that their purpose is more obvious and the naming more consistent.

  • The new topics from the wiki will be available in the WIT interface, which in turn links to Doxygen. Just as 'What is That?' covers the Audacity user interface, we will gain a more complete 'What is That?' for Audacity internals.

I think it is important that the project is topic-led. Having the many topics from the manual is what made WIT work for the UI.

Annotation Feature

Existing 'What is That' for developers, which I will extend.

The work on explaining spectrograms and digital audio may suggest changes we could make to Audacity UI to make things clearer for users. For example:

  • The Digital Audio section uses ruler markings over the waveform, and a smoothed waveform.
  • The spectrogram view could be annotated to show the size and overlap of buckets, when zoomed in, and a graphical representation of the window function.

These would be good things to have in the advanced-user/developer documentation. Making such changes in Audacity code too is outside the scope of this project.
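To make the 'buckets', overlap and window-function idea concrete, here is a minimal Python sketch. The values and function names are illustrative only, not Audacity's actual analysis code:

```python
import math

def hann(n):
    """Hann window of length n: the tapered shape an annotation could draw."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def frame_positions(num_samples, window_size, overlap):
    """Start offsets of successive analysis 'buckets' with the given overlap."""
    hop = window_size - overlap
    return list(range(0, num_samples - window_size + 1, hop))

# Illustrative values, not Audacity's settings: 1024-sample buckets, 50% overlap.
starts = frame_positions(num_samples=4096, window_size=1024, overlap=512)
print(starts)  # a new bucket starts every 512 samples

w = hann(1024)
print(round(w[0], 3), round(w[512], 3))  # tapers from 0 at the edge to ~1 mid-window
```

An annotated spectrogram view would essentially be drawing `starts` as bucket boundaries and `w` as the window curve.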

Planned Deliverables

1. Doxygenation

We have partial coverage of the purpose of each Audacity class at:

I aim to get coverage to 100%, with an explanation of the purpose of each class. These explanations will say more than the current ones do.


Doxygen produces clickable images, like the one above, that allow developers to explore classes.

The WIT top level page for developers will link out to these doxygen pages.

The doxygenation WILL lead to some renaming of classes in Audacity, to signpost design patterns and similarities in the code. For example:

  • Meter becomes MeterPanel
  • Lyrics becomes LyricsPanel

This makes it clearer that these classes are primarily GUI elements. After the changes there will be greater consistency in naming classes and better signposting of code idioms (patterns such as Visitor, Facade, Prototype).

2. Developer Topics

The AOSA book chapter contains eight starter topics on Audacity architecture.

The chapter is Creative Commons (CC-BY) licensed. I will build on these topics and write new ones for areas not yet covered.


The image above is the AOSA logo. It is an indication of how data can flow (and is also reminiscent of a house). I intend to use a small number of images like this in the developer topics.

Other Developer Topics

The doxygenation will draw attention to additional topics that are needed. In doing this I would welcome help from other developers on producing/refining documentation on:

  • Spectral refinement
    • Refer to the paper, but an overview still needs to be given
  • Nyquist

I would again like to focus on images, as these make a big difference to clarity, but they can take a lot of time to produce if not automated in some way.

Some topics for developers at various levels of experience:

  • How we draw waveforms quickly.
  • Why you can't compress audio twice.
  • How the 'Repair' function works.
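On the first of those topics, one common technique for drawing waveforms quickly is to precompute per-block (min, max) pairs, so each pixel column is drawn from two numbers rather than by scanning every sample under it. This is a sketch of the general idea, not necessarily Audacity's exact implementation:

```python
def minmax_summary(samples, block):
    """Reduce samples to one (min, max) pair per block, so a pixel column
    can be drawn as a vertical line from two numbers."""
    pairs = []
    for i in range(0, len(samples), block):
        chunk = samples[i:i + block]
        pairs.append((min(chunk), max(chunk)))
    return pairs

# 8 samples summarised into 2 blocks of 4.
print(minmax_summary([0.1, -0.5, 0.3, 0.2, -0.1, 0.9, -0.8, 0.0], 4))
# → [(-0.5, 0.3), (-0.8, 0.9)]
```

In practice such summaries would be cached at several block sizes, so that redrawing at any zoom level stays cheap.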

3. User Topics

This page is a rare example of existing documentation for users that explains what is going on in Audacity.

Waveform sample formats.png

I intend to extend and augment this page, and also to add an interactive version in WIT to go with it.

That way the images on that page, which are currently made by hand, can be generated.
I also intend to split the topic up, so it can be extended more easily.

  • Digital Audio (Waveforms, Samples, Frequency)
  • Digital Audio (Clipping)
  • Digital Audio (Compression)

These are 'Gateway' topics for people learning about how code can handle audio. The topics are also relevant to those users who want to dig deeper into understanding audio concepts.
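As a taste of what the first two of these topics cover, here is a minimal Python sketch (illustrative sample rate and amplitudes, not tied to Audacity): a wave becomes a list of amplitude samples, and samples beyond full scale get flattened, which is clipping:

```python
import math

SAMPLE_RATE = 8000  # illustrative rate, not an Audacity default

def sample_sine(freq, seconds, amplitude):
    """Digital audio: a continuous wave becomes a list of amplitude samples."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def clip(samples, limit=1.0):
    """Clipping: samples outside the representable range are flattened."""
    return [max(-limit, min(limit, s)) for s in samples]

loud = sample_sine(freq=440, seconds=0.01, amplitude=1.5)  # peaks above full scale
clipped = clip(loud)
print(max(loud), max(clipped))  # the over-range peaks are flattened at 1.0
```

Plotting `loud` against `clipped` gives exactly the flat-topped waveform shape that the Clipping topic needs to illustrate.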

Other planned topics for users

  • Spectrograms - Several topics showing the relationship between spectrograms and waves.
    • These will cover how the parameters affect the display.
    • I also intend to show how and why cutting a chunk out of a wave typically introduces a click.
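The click from an unlucky cut can be shown in a few lines of Python (an illustrative sketch, not Audacity code): deleting a chunk that is not a whole number of periods leaves a step at the splice point far larger than any normal sample-to-sample change, and that step is heard as a click:

```python
import math

def sine(n, period):
    return [math.sin(2 * math.pi * i / period) for i in range(n)]

wave = sine(400, period=100)

# Delete a chunk whose length (125 samples) is not a whole number of periods:
# the sample after the cut no longer lines up with the one before it.
cut = wave[:150] + wave[275:]

jump = abs(cut[150] - cut[149])  # the discontinuity at the splice
smooth = max(abs(wave[i + 1] - wave[i]) for i in range(len(wave) - 1))
print(round(jump, 3), round(smooth, 3))  # the splice step dwarfs the normal step
```

The same sketch also shows the fix: cut at zero crossings (or crossfade), and `jump` collapses back toward the normal sample-to-sample step.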

I'd welcome help from the documentation team in writing and refining these, so that I can concentrate on creating the images. My ideal is that Paul will have implemented showing both waves and spectrograms (as extra channels) on the same track in time for this; then I can use those images. If not, I can 'fake it' with duplicated audio and two tracks.