Automation Project

The Automation Project is a project to help testing and documentation using scripts. As a spin-off, it should help automation in Audacity using mod-script-pipe. The project is being led by James, with ongoing feedback from documentation (mainly from Peter) and from QA (mainly from Gale).
The project started on 26th May 2017 and Phase 1 runs until November 2017.


Strategy

Helping on the manual, I've seen first-hand how much work is required to track GUI changes. If we don't automate the routine stuff, GUI changes take too much time and attention from the manual team. By automating the things that can be automated and that help testing and documentation, time is freed up for the things that can't be automated.

A key idea is that the automation should do double, or even treble, duty. Image generation and capture saves a lot of repetitive work for the documentation team. It is also a 'good workout' for Audacity, and as a test script it should give us more confidence that Audacity is behaving correctly. The work on it can also feed into code that benefits Audacity in other ways, such as making modules more mainstream so that we can release automation via mod-script-pipe to users.

The project has deliberate limitations on what could be produced, to keep it in bounds. Compared to the ideal, these may include more fragile scripts, less control over automatic naming of files, more image annotation still to be done by hand, and no specific support for the bot and scripts also working on the translated pages. The limitations should steer the work towards the parts that give the most benefit soonest.


Extension for PDF Manual

The project can help with production of a PDF manual. As that is something the Audacity support team would like to have, it has been added to the project as a deliverable.

A wikibot, and automation of images, will make it easier to make mechanical transformations of the manual. An example would be if we want control over page breaks in the PDF manual. A bot could (more easily) go through and mark up every page with template markers to indicate automatically generated page breaks. Human skill could then tweak the few bad breaks that the algorithm was not smart enough about, and those tweaks would be remembered for the future. Without that, we would likely be adding and correcting page breaks in the PDF manually, each time we generate the PDF manual. Image maps in the PDF should be easier too, rather than being individually hand-crafted after creating the bulk of the manual automatically.
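
For illustration, here is a minimal sketch of that kind of mechanical transformation, assuming a hypothetical {{pagebreak}} template. The template name and the 'break before each level-2 heading' rule are placeholders, not decisions:

  import re

  PAGEBREAK = '{{pagebreak}}'  # hypothetical template name

  def add_pagebreak_markers(wikitext):
      """Insert a page break marker before each level-2 heading,
      unless one is already there (so hand tweaks are preserved)."""
      out = []
      for line in wikitext.splitlines():
          is_heading = re.match(r'==[^=].*==\s*$', line)
          if is_heading and (not out or out[-1].strip() != PAGEBREAK):
              out.append(PAGEBREAK)
          out.append(line)
      return '\n'.join(out)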

Extension for "What's That For?" page

The 'Image Map' at the front of the wiki manual is hand-produced and does not lend itself well to automatic production. However, a much extended version that is more interactive does lend itself to production via the script. In particular, it can show all the menus and connect into the manual to provide help on them. It could also act as a very compact summary of the test results, rather than pages and pages of screenshots to browse through. So this has been added to the deliverables too.

The URL will be whatsthat4.audacityteam.org.
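
For illustration, the interactive page could be assembled from menu coordinates captured by the scripts, using an ordinary HTML image map. A minimal sketch; the menu names, coordinates and manual URLs below are invented:

  # Hypothetical menu data: (name, left, top, right, bottom, manual page).
  MENUS = [
      ('File', 0, 0, 40, 20, 'https://manual.audacityteam.org/man/file_menu.html'),
      ('Edit', 40, 0, 80, 20, 'https://manual.audacityteam.org/man/edit_menu.html'),
  ]

  def image_map_html(image_src):
      """Emit an <img> plus a <map> of clickable menu rectangles."""
      areas = '\n'.join(
          '  <area shape="rect" coords="{},{},{},{}" href="{}" alt="{} menu">'
          .format(l, t, r, b, url, name)
          for name, l, t, r, b, url in MENUS)
      return ('<img src="{}" usemap="#menus" alt="Audacity menus">\n'
              '<map name="menus">\n{}\n</map>'.format(image_src, areas))

  print(image_map_html('menubar.png'))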


Planned 'Deliverables'

Automation

Mod-script-pipe will be progressed to a point where we are OK with releasing it to enthusiasts. This requires new code in Audacity too.

Mod-script-pipe is language agnostic. The commands sent over the pipe have a text format, and the updates will continue to use and extend that format. Example scripts will be in Python.

Example new Command: SetSelection

SetSelection sets the audio selection to a particular time interval. The default is to select the same tracks as were previously selected.

Parameters:

  • Start: A time in samples
  • End: A time in samples
  • Tracks: Either 'All', 'None' or 'Unchanged'


Example:

  Send:     SetSelection: Start=567912 End=789882 Tracks=All
  Receive:  Selection Set in 4 tracks
            SetSelection finished: OK
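
For illustration, here is a minimal Python sketch of sending that command over mod-script-pipe. The pipe paths follow the convention used by mod-script-pipe's test scripts, and treating a 'finished' line as the end-of-response marker is an assumption based on the reply shown above:

  import os
  import sys

  if sys.platform == 'win32':
      TO_PIPE = '\\\\.\\pipe\\ToSrvPipe'
      FROM_PIPE = '\\\\.\\pipe\\FromSrvPipe'
      EOL = '\r\n\0'
  else:
      TO_PIPE = '/tmp/audacity_script_pipe.to.' + str(os.getuid())
      FROM_PIPE = '/tmp/audacity_script_pipe.from.' + str(os.getuid())
      EOL = '\n'

  TO_FILE = open(TO_PIPE, 'w')    # Audacity must already be running
  FROM_FILE = open(FROM_PIPE, 'r')

  def do_command(command):
      """Send one command to Audacity and return its textual response."""
      TO_FILE.write(command + EOL)
      TO_FILE.flush()
      response = ''
      while True:
          line = FROM_FILE.readline()
          response += line
          if 'finished' in line:  # assumed end-of-response marker
              break
      return response

  print(do_command('SetSelection: Start=567912 End=789882 Tracks=All'))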


Image-Script

A Python script to generate, annotate and image-link images for the manual. It doubles as a test script for Audacity, with a report showing pass/fail and speed for each test. Requires new code in Audacity as well as the Python.

When run as a test script, the results will be in HTML format. There will also be comma-separated output that can be (manually) pasted into a spreadsheet so that we can look at how performance has changed over time. Some of the extensions to the screenshot tools already built in to Audacity might be checked in to 2.2.0 HEAD (subject to discussion/agreement with the RM); we need these extensions pretty urgently for the 2.2.0 manual. Possibly I'd put them within an #ifdef EXPERIMENTAL_DOC_SUPPORT. This new code could live in base classes for dialogs (especially the effects dialog) and for toolbars, to make it easier to capture their screen coordinates.
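
As a sketch of the reporting side only (the record fields are assumptions), the script could accumulate one record per test and write it out twice, once as HTML and once as comma-separated values for the spreadsheet:

  import csv
  import html

  # Assumed record shape: (test name, pass/fail, elapsed seconds).
  RESULTS = [('CaptureAmplify', 'pass', 0.41),
             ('CaptureEqualization', 'fail', 2.03)]

  def write_reports(results, html_path='report.html', csv_path='report.csv'):
      with open(csv_path, 'w', newline='') as f:
          writer = csv.writer(f)
          writer.writerow(['test', 'result', 'seconds'])
          writer.writerows(results)
      rows = '\n'.join(
          '<tr><td>{}</td><td>{}</td><td>{:.2f}</td></tr>'
          .format(html.escape(name), status, secs)
          for name, status, secs in results)
      with open(html_path, 'w') as f:
          f.write('<table>\n<tr><th>Test</th><th>Result</th><th>Seconds</th>'
                  '</tr>\n{}\n</table>\n'.format(rows))

  write_reports(RESULTS)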

In Phase 1, I will not be extending interface-building features, so we could not run a Python script to add/remove items from menus/toolbars. I also won't be providing a macro recorder to record steps for playback. Those are things that could be considered in a future phase; they are less about test/documentation and more about user-facing features.

Wikibot


The wikibot is for updating images and image links in the manual. It will use the images created by the Image-Script.

The Wikibot will be in Python. It is likely to be based on Pywikibot, which is the most popular bot framework for Wikipedia. I may create a new repo for our version and its test scripts. The Image-Script and Wikibot will run from a user's machine. Hosting them on a server, e.g. for continuous integration or as an autonomous wikibot, would be nice, but isn't part of this project.

The most immediate need for the Image-Script and Wikibot combination is to capture, upload and templatise the images of the built-in and Nyquist effects, generators and analyzers.
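
For a flavour of what the upload step could look like in Pywikibot (the site family, file names and upload summary here are all assumptions):

  import pywikibot

  # Assumes a 'manual' family has been configured for the Audacity manual wiki.
  site = pywikibot.Site('en', 'manual')

  def upload_capture(local_path, wiki_name, summary):
      """Upload (or re-upload) one captured image to the wiki."""
      page = pywikibot.FilePage(site, 'File:' + wiki_name)
      site.upload(page, source_filename=local_path,
                  comment=summary, ignore_warnings=True)

  upload_capture('captures/Amplify.png', 'Amplify dialog.png',
                 'Automated capture from Image-Script')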

New Foreign Functions

Nyquist will be extended by adding the scripting functions to it. See the list below.

I will not be checking new Nyquist commands into Audacity HEAD for 2.2.0. They will most likely be in a branch ready for 2.2.1.

SWIG is on my radar for this part of the work, but I can probably manage without it. It is a question of whether it will save me work overall or not. SWIG could 'regularise' all the mod-script-pipe command handling, and at the same time help us move towards a unified 'Audacity API'. However, that is a big change in the source code, and it is not essential for the immediate goals of the automation project.

List of Nyquist 'foreign functions'

  • Ability to set a selection programmatically.
  • Ability to set focus and cursor programmatically.
  • Ability to interrogate Audacity for clip types/locations.
  • Ability to interrogate Audacity for label types/locations.
  • Create and delete tracks of all types.
  • Resize tracks and set zoom level and scroll position.
  • Set size, visibility and position of toolbars.
  • Set parameters for any built-in effect.
  • Read/write any preference.
  • Invoke any command that can be 'commanded' via a menu item and an OK.
  • Ability to get the coordinates of any window/dialog/track/clip.
  • Ability to screen-capture any rectangle given the coordinates.
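
Once exposed over the pipe, the same capabilities would be scriptable from Python too. A sketch, assuming the do_command helper from the pipe example above; the command names and parameters are hypothetical placeholders for whatever the final API settles on:

  # Hypothetical command names, for illustration only.
  do_command('NewMonoTrack:')                        # create a track
  do_command('SetSelection: Start=0 End=441000 Tracks=All')
  do_command('GetWindowCoords: Name=EffectsDialog')  # interrogate coordinates
  do_command('Screenshot: Left=0 Top=0 Width=800 Height=600 File=capture.png')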


HTML5-What's-That-For-Page

A souped-up version of Bill's front page, with explorable menus and more dynamic annotation. It connects into the manual.


A PDF version of the manual

This will be indexed, searchable, and demonstrate working image maps.

PDF manual generation will be mostly in Python, though I expect to be patching PDF libs written in C as well to make that work.

I will be using ReportLab for generating the basic PDF. However, it is limited in its handling of hyperlinks, so to get the hyperlinks I will be using QPDF. QPDF allows a textual version of the PDF to be generated, which can then be edited, patched and re-formed into a PDF document again. This should allow more flexibility with hyperlinks than ReportLab provides. It is also an essential step in linearizing a PDF document so that it can be read on the web before the whole document has been downloaded.
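
A minimal sketch of that pipeline, assuming qpdf and its companion fix-qdf tool are on the PATH; the patching step is a trivial stand-in for the real hyperlink edits:

  import subprocess
  from reportlab.pdfgen import canvas

  # 1. Generate a basic PDF with ReportLab, including one hyperlink.
  c = canvas.Canvas('manual.pdf')
  c.drawString(72, 720, 'Audacity Manual (draft)')
  c.linkURL('https://manual.audacityteam.org/', (72, 710, 300, 735))
  c.save()

  # 2. Expand it into QPDF's editable textual form (QDF mode).
  subprocess.run(['qpdf', '--qdf', 'manual.pdf', 'manual.qdf.pdf'], check=True)

  # 3. Patch the textual form. A real script would rewrite link
  #    annotations here; this placeholder reads and rewrites it unchanged.
  with open('manual.qdf.pdf', 'rb') as f:
      data = f.read()
  with open('manual.qdf.pdf', 'wb') as f:
      f.write(data)

  # 4. Repair the object offsets that editing invalidated.
  with open('manual.fixed.pdf', 'wb') as out:
      subprocess.run(['fix-qdf', 'manual.qdf.pdf'], stdout=out, check=True)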


Whilst PDF is an open standard (ISO 32000-1), the code that Adobe use to produce an index is not. I believe I can work around that by using a Python script to build the index, and by checking and debugging it using printfs etc. in a debug version of Evince. If not, it may work out easier to feed an unindexed document into Adobe Acrobat and have it do the indexing step.
