QA Procedures overview
This is a set of notes outlining Audacity QA procedures.
1. Alpha testing
We continually test the Master branch alpha builds hosted here:
Referenced by the commit log:
I (Peter Sampson) personally tend to be bold and live on the bleeding edge, using the alphas for my production work, as the things I do are mostly easily repeatable.
2. Release Candidates
When the Release Manager (RM) is ready, Release Candidates are built for external (and further internal) testing on all three platforms:
3. Bug finding
So how do we find bugs?
- Our own testing, with reports via the dev and quality email lists
- User reports on the Forum (reviewed at least daily)
- User reports on the dev and quality email lists (reviewed at least daily)
- User reports on the three Facebook sites (reviewed at least daily)
4. Logging Bugs
All bugs are logged in Bugzilla bug-tracker https://bugzilla.audacityteam.org/
- A key part of the bug entry is the “Steps to Reproduce” field; it is mandatory, and the bug cannot be logged without it.
- Platforms: we need to ascertain and record whether the bug is single-platform or multi-platform.
- Priority setting: this is perhaps the trickiest part to decide. We work with five levels:
- P1: top priority; these block releases
- P2: important, but the RM decides whether they can be waved through into a release
- P3: must have a Release Note and ideally a workaround
- P4 & P5: not release-noted; many of these may be "enhancement issues"
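The priority policy above can be summarized as a small lookup. This is only an illustrative sketch of my own (the function name and return strings are not project code):

```python
def release_gate(priority):
    """Map a bug priority (P1..P5) to its release handling, per the
    five-level policy described above. Hypothetical helper for
    illustration only."""
    if priority == "P1":
        return "blocks release"          # top priority
    if priority == "P2":
        return "RM decides"              # may be waved through
    if priority == "P3":
        return "release-noted"           # ideally with a workaround
    if priority in ("P4", "P5"):
        return "not release-noted"       # often enhancement issues
    raise ValueError(f"unknown priority: {priority}")

print(release_gate("P3"))
```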
5. Testing fixed bugs
When a bug is fixed by a developer it gets marked DEVEL FIX Made. If you are subscribed (I’m subscribed to “all bugs”) you get an email on the status change. In theory this should be enough to track fixed bugs that need testing, but I found that hard to manage, so I created a workbook in the Wiki:
This enables me to:
- See it all at a glance
- Monitor the progress of testing on the three platforms
- Keep a record of bugs fixed for the current alpha
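Bugzilla installations generally expose a REST API, which offers another way to pull a list of fixed bugs for a tracking worksheet. As a sketch only: the status/resolution values and field names below are standard Bugzilla defaults, and I have not confirmed that this instance enables the REST endpoint or uses these exact values:

```python
from urllib.parse import urlencode

# Real tracker host; REST availability on this instance is an assumption.
BUGZILLA_REST = "https://bugzilla.audacityteam.org/rest/bug"

def fixed_bug_query_url(status="RESOLVED", resolution="FIXED", limit=50):
    """Build a REST query URL listing recently fixed bugs.

    The status/resolution defaults are stock Bugzilla values, not
    necessarily the custom ones this tracker uses (e.g. DEVEL FIX Made).
    """
    params = {
        "status": status,
        "resolution": resolution,
        "limit": limit,
        # Request only the fields a test-tracking worksheet needs.
        "include_fields": "id,summary,platform,last_change_time",
    }
    return f"{BUGZILLA_REST}?{urlencode(params)}"

print(fixed_bug_query_url())
```

Fetching that URL (e.g. with `urllib.request`) returns JSON that could seed the per-platform testing columns of the workbook.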
At release time we copy over the key fixed bugs to the Manual’s What’s New… page:
See the 3.0.2 version:
6. Monitoring outstanding bugs
See the table in Bugzilla:
or the version with ENH (enhancement) entries added:
7. Comparison Speed testing
I like to do comparison speed testing between versions for:
- I/O project open and close
- Exports and Imports
- Effects (and Generators and Analyzers)
I have developed Macros for this.
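The idea behind the comparison runs can be sketched in Python. This is a generic timing harness of my own devising, a stand-in for the actual Macros (which run inside Audacity); the two lambdas are placeholder workloads, not real Audacity operations:

```python
import time
from statistics import median

def time_runs(action, repeats=5):
    """Time a callable several times and return the median duration in
    seconds; the median is less noisy than a single run when comparing
    two versions of the same operation."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()
        samples.append(time.perf_counter() - start)
    return median(samples)

# Placeholder workloads standing in for, say, the same export performed
# in version A vs. version B of the app.
version_a = lambda: sum(i * i for i in range(200_000))
version_b = lambda: sum(i * i for i in range(100_000))
print(f"A: {time_runs(version_a):.4f}s  B: {time_runs(version_b):.4f}s")
```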
For me, the Manual is a key part of QA; it is now integrated into the app via the “?” help buttons in many error messages and warnings. Given the lack of any specs, the Manual can often be a good guide as to how the app is supposed to work for the user.
There are always two current Manuals:
- the released version: this is immutable, as it gets released with, and as part of, the app:
- the alpha version:
I find it easiest to keep the alpha Manual in lock-step with changes to the app, new developments and bug fixes; playing catch-up later is too much hard work.
A couple of important pages in the Manual help us to maintain consistency in the Manual (and, to some extent, the app); we anthropomorphize these and refer to them as “Connie”:
The Wiki should not need checking for malefactors as accounts are strictly limited:
The Wiki has a shared dev/QA section
Pages of QA interest:
- Feature Requests
- Next Release - the RM’s workbench
- Audacity Versions
From time to time the Wikipedia page needs checking for malefactors who may corrupt the page:
The page appears to have a "guardian-angel" in the shape of Adam Hunt.