Proposal Page Tabled Memory and Event Layout
Proposal pages help us get from feature requests into actual plans. This proposal is related to both Real Time Adjustment and Threading and Acceleration.
Proposal pages are used on an ongoing basis by the Audacity development team and are open to edits from visitors to the wiki. They are a good way to get community feedback on a proposal.
- Note: Proposals for Google Summer of Code projects are significantly different in structure, are submitted via Google's web app and may or may not have a corresponding proposal page.
Proposed Feature: Page Tabled Memory and Event Layout
The goal is to facilitate an environment conducive to both multi-threading and non-destructive processing.
- Andrew Hallendorff
In the abstract, this implementation can be thought of as three streams:
- Base Indexes - 32-bit integers that store both data and instructions.
- Data Blocks - Indexed chunks of data of a fixed size, most likely 512 (32-bit samples) or 1024 (16-bit samples).
- Events (instructions) - An indexed list of actions. The size and scope of these actions would be variable.
- In its simplest form, to just play data, the processing agent would walk the Base Indexes and move the Data Blocks into the play buffers. A linear stream of data would be indexed by a list of incremental integers from [0..N], each pointing to a Data Block sized chunk of data.
- The Data Blocks could be stored in any order and indexed multiple times, providing functional sequencing of audio with a resolution of blocksize / rate (512 / 48k ≈ 10.7 ms). Sequencing at a higher resolution would require events.
- Data Blocks would be further paged in groups of 512 to fit the typical page table size of most computers and to allow them to be read from and written to disk as necessary.
- How events interact with the streams: within the stream of Base Indexes, a negative number would correspond to a certain event. The event would preserve the removed index entry while linking to an operation to be performed upon the stream. This maintains the original data while allowing almost unlimited customization (2^31 events).
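As a rough sketch of the three streams and the simplest playback walk (all names hypothetical, assuming 512-sample 32-bit Data Blocks):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kBlockSize = 512;          // samples per Data Block (32-bit)
using DataBlock = std::array<float, kBlockSize>; // one indexed chunk of audio

// An Event preserves the Base Index entry it replaced and names an
// operation to be performed upon the stream.
struct Event {
    int32_t savedIndex; // the original Base Index entry that was removed
    int32_t jumpTarget; // e.g. for a cut: next bin to continue from
};

struct Streams {
    std::vector<int32_t>   baseIndexes; // >= 0: block index, < 0: event reference
    std::vector<DataBlock> dataBlocks;  // fixed-size chunks, any order, reusable
    std::vector<Event>     events;      // indexed list of actions
};

// Simplest playback: walk the Base Indexes in order and copy each referenced
// Data Block into the play buffer. A linear stream is just indexes [0..N].
void PlayLinear(const Streams& s, std::vector<float>& out) {
    for (int32_t idx : s.baseIndexes) {
        if (idx < 0) continue; // events handled elsewhere
        const DataBlock& b = s.dataBlocks[static_cast<std::size_t>(idx)];
        out.insert(out.end(), b.begin(), b.end());
    }
}
```

Because blocks may be indexed multiple times, repeating an index in `baseIndexes` replays the same Data Block without duplicating its storage.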
A simple cut:

| Indexes before cut of 3 and 4 | 0 | 1 | 2 |  3 | 4 | 5 |
| Indexes after cut of 3 and 4  | 0 | 1 | 2 | -1 | 4 | 5 |

Event 1 - A simple cut: hold saved entry 3, jump to bin 5.
The processor would walk the data. For bins 0..2 it would simply copy each block and play it. When it reaches bin 3 it would find a negative number and thus look up Event 1, which tells it to jump to bin 5. In this case the saved entry 3 goes unused, but for other events, such as sub-Data-Block edits or effects, the original data would remain in the stream.
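The walk for this cut could be sketched like so (hypothetical names; negative entries are resolved through a 1-based event table that supplies the jump target):

```cpp
#include <cstdint>
#include <vector>

// Event 1 from the table above: preserve the removed entry (3) and jump to bin 5.
struct CutEvent {
    int32_t savedIndex; // original entry, kept so the cut is non-destructive
    int32_t jumpTo;     // bin to continue playback from
};

// Walk the Base Indexes, resolving negative entries through the event list.
// Returns the sequence of Data Block indexes that would be played.
std::vector<int32_t> WalkIndexes(const std::vector<int32_t>& baseIndexes,
                                 const std::vector<CutEvent>& events) {
    std::vector<int32_t> played;
    std::size_t bin = 0;
    while (bin < baseIndexes.size()) {
        int32_t entry = baseIndexes[bin];
        if (entry >= 0) {
            played.push_back(entry); // copy block and play
            ++bin;
        } else {
            // -1 -> event 1, -2 -> event 2, ... (event ids 1-based here)
            const CutEvent& e = events[static_cast<std::size_t>(-entry - 1)];
            bin = static_cast<std::size_t>(e.jumpTo); // skip the cut region
        }
    }
    return played;
}
```

With indexes {0, 1, 2, -1, 4, 5} and Event 1 = {saved 3, jump to 5}, the walk plays blocks 0, 1, 2, 5; the saved entry 3 (and the skipped 4) stay in place, so the cut can be undone.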
To complement the general page granularity of modern computers, allocations should be made in megabyte-sized units.
- The initial Index Group would be allocated at 256k entries (32-bit ints).
- Each Block Group would be 512x512 (32-bit samples) or 512x1024 (16-bit samples). For 32-bit samples this equates to 1 MB of data, or about 5.46 seconds of audio at 48 kHz; the 16-bit case would hold about 10.92 seconds.
- At these sizes, a 256k-entry Index Group would scale to 128M samples across 512 Block Groups. This would address 46.6 minutes of audio at 48 kHz; double that for 16-bit.
- For 32-bit samples, the above would overflow at 2^40 (31+9) samples, or about 6362.9 hours of data. Further banking could easily extend this far beyond that.
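The capacity arithmetic above can be checked with compile-time constants (48 kHz assumed, names hypothetical):

```cpp
#include <cstdint>

constexpr uint64_t kBlockSamples = 512;        // 32-bit samples per Data Block
constexpr uint64_t kGroupBlocks  = 512;        // Data Blocks per Block Group
constexpr uint64_t kIndexEntries = 256 * 1024; // initial Index Group: 256k entries
constexpr uint64_t kRate         = 48000;      // sample rate

// One Block Group: 512 x 512 32-bit samples = 1 MiB.
constexpr uint64_t kGroupBytes = kGroupBlocks * kBlockSamples * sizeof(int32_t);
static_assert(kGroupBytes == 1024 * 1024, "one Block Group is 1 MiB");

// 256k index entries address 128M samples, i.e. 512 Block Groups.
constexpr uint64_t kTotalSamples = kIndexEntries * kBlockSamples; // 2^27
static_assert(kTotalSamples / (kGroupBlocks * kBlockSamples) == 512,
              "512 Block Groups");

// ~46.6 minutes of 32-bit audio at 48 kHz.
constexpr double kMinutes = double(kTotalSamples) / kRate / 60.0;

// 2^31 positive indexes x 2^9 samples per block overflow at 2^40 samples,
// ~6362.9 hours at 48 kHz.
constexpr double kOverflowHours = double(uint64_t(1) << 40) / kRate / 3600.0;
```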
The raw processing engine and allocator would be completely self-contained and abstracted from custom interfaces. Wrapper classes would provide interoperability with existing data structures (e.g. WaveTrack) for either import or emulation.
There would be sister modules providing static transformations of the above implementation for rendering and drawing, the goal being to make everything, including the display, a piecewise transformation of the data.
(end first pass)