QA Procedures overview

From Audacity Wiki
Revision as of 08:29, 29 April 2021 by PeterSampson (talk | contribs) (7. Comparison Speed testing: layout)
This is a set of notes outlining Audacity QA procedures.

1. Alpha testing

We continually test the Master branch alpha builds hosted here

These builds are referenced by the commit log.

I, (Peter Sampson) personally, tend to be bold (live on the bleeding-edge) and use the alphas for my production work, as the things I do are mostly easily repeatable.

2. Release Candidates

When the RM (Release Manager) is ready, Release Candidates are built for external (and further internal) testing on all three platforms:


3. Bug finding

So how do we find bugs?

  1. Our own testing – reports via dev and quality email lists
  2. User reports on the Forum – review daily at least
  3. User reports on the three Facebook sites – review daily at least

4. Logging Bugs

All bugs are logged in the Bugzilla bug-tracker.

  • A key part of the bug entry is the “Steps to Reproduce” – this is mandatory; the bug will not log without it.
  • Platforms: we need to ascertain and record if the bug is single or multi-platform
  • Priority setting: this is perhaps the trickiest part to decide. We work with five levels
    1. P1: these are top priority and block releases
    2. P2: important, but the RM decides if they can be waved through into release
    3. P3: must have a Release Note and ideally a workaround
    4. P4 & P5: bugs are not release noted. Many of these may be "enhancement issues"


5. Testing fixed bugs

When a bug is fixed by a developer it gets marked “DEVEL FIX Made”. If you are subscribed (I’m subscribed to “all bugs”) you get an email on the status change. In theory this should be enough to track fixed bugs that need testing – but I found that hard to manage, so I created a workbook in the Wiki:

This enables me to:

  1. See it all at a glance
  2. Monitor the progress of testing on the three platforms
  3. Keep a record of bugs fixed for the current alpha

At release time we copy over the key fixed bugs to the Manual’s What’s New… page:

See the 3.0.2 version:

6. Monitoring outstanding bugs

See the table in Bugzilla

or with added ENH enhancements

7. Comparison Speed testing

I like to do comparison speed testing between versions for:

  1. I/O project open and close
  2. Exports and Imports
  3. Effects (and Generators and Analyzers)

I have developed Macros for this.
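The Macros themselves run inside Audacity and are not reproduced here. Purely as an illustration of the idea, a comparison harness along these lines could time the same operation repeatedly and report old vs. new (the workloads below are placeholders, not the actual Macros; in practice each operation would drive Audacity, e.g. to open or export a fixed test project):

```python
# Minimal sketch of comparative speed testing between two versions.
# NOTE: the operations passed in are stand-ins for real Audacity
# workloads (project open/close, import/export, effects).

import time
from statistics import median

def time_operation(op, repeats=5):
    """Run op() several times and return the median wall-clock seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        op()
        samples.append(time.perf_counter() - start)
    return median(samples)

def compare(name, old_op, new_op, repeats=5):
    """Time the old and new versions of an operation; return (old_s, new_s, speedup)."""
    old_s = time_operation(old_op, repeats)
    new_s = time_operation(new_op, repeats)
    speedup = old_s / new_s if new_s else float("inf")
    print(f"{name}: old {old_s:.3f}s, new {new_s:.3f}s, speedup x{speedup:.2f}")
    return old_s, new_s, speedup

if __name__ == "__main__":
    # Placeholder workloads only; substitute real project open/export calls.
    compare("Project open",
            lambda: sum(range(200_000)),
            lambda: sum(range(100_000)))
```

Using the median rather than a single run helps smooth out disk-cache and background-load noise, which matters for I/O-heavy tests like project open and export.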

8. Manual

For me, the Manual is a key part of QA – plus it’s now integrated as a part of the app via the “?” help buttons in many error messages and warnings. Given the lack of any specs, the Manual can often be a good guide as to how the app is supposed to work for the user.

There are always two current Manuals: a) the released version – this is immutable as it gets released with, and as part of, the app; b) the alpha version:

I find it easiest to keep the alpha Manual in lock-step with changes to the app, new developments and bug fixes – playing catch-up later is too much hard work.

9. Connie

An important couple of pages in the Manual help us to maintain consistency in the Manual (and to some extent the app) – we anthropomorphise this and refer to it as “Connie”.


10. Wiki

The Wiki should not need checking for malefactors as editing accounts are strictly limited:

The Wiki has a shared dev/QA section.

Pages of QA interest: a) b) c) d) - RM’s workbench e) f)

11. Wikipedia

From time to time the Wikipedia page needs checking for malefactors who may corrupt the page: