This folder provides the tools and scripts used to conduct the empirical evaluation reported in Section V of the paper. The following guide describes how to reproduce the plots shown and discussed there.
This folder provides the following instructions:

- README.md: the Markdown source of this file.
- README.html: this file.

The following files and directories are required for the setup:

- requirements.txt: a list of the Python packages needed to run the experiments.
- response-time-analysis: the Rust backend library implementing the actual response-time analyses. See response-time-analysis/README.md for a description of the library.
- rta-tool: the Rust frontend tool that parses workloads, invokes the appropriate analyses, and stores results. See rta-tool/README.md for a description of the tool and its input and output formats.

The following scripts are used to run the experiments:

- tsgen.py: a Python script that generates synthetic task sets.
- run-rta.sh: a Bash script that invokes rta-tool on all generated task sets for all considered scheduling policies.
- eval.py: a Python script that aggregates the results of the experiments.
- copy-results.sh: a Bash script that extracts the final results from the output of eval.py.
- plot.py: a Python script that plots the results.

There are three initially empty folders:

- workloads: populated with synthetic workloads by tsgen.py.
- results: the final result datasets, populated by copy-results.sh.
- plots: the final figures, rendered by plot.py.

Finally, the Docker directory provides two scripts for running the experiments within a Docker container:

- Docker/run-all-in-Docker.sh: a simple wrapper script to launch the experiments in an ephemeral Docker container.
- Docker/steps-in-Docker.sh: an executable summary of the steps described next, for execution within the Docker container launched by run-all-in-Docker.sh.

This and the following steps explain how to reproduce the experiments manually. For convenience, we also provide a fully automated Docker-based alternative that requires no user interaction. Readers interested only in the Docker-based approach may skip ahead to the section entitled "Alternative: Steps 0 - 5 in Docker".
The following instructions have been tested on macOS 12.4 “Monterey”. They are also expected to work on recent mainstream Linux distributions such as Ubuntu 22.04 LTS.
NB: Microsoft Windows is not supported.
The following standard software is required and assumed to already be in place:

- rustup

To install the Python dependencies, we recommend using a virtual Python environment; the commands below create one with Python's built-in venv module.
In the terminal, navigate to the folder 2_Evaluation
and run:
```
python3 -m venv pyenv
source pyenv/bin/activate
pip3 install -r requirements.txt
```
This will create a virtual Python installation in the pyenv
folder and ensure that all required Python dependencies have been installed, without interfering with the system’s globally installed packages.
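To double-check that the virtual environment is active, you can verify that the python3 and pip3 commands now resolve to the copies inside the pyenv folder (the exact paths printed will differ on your machine):

```
# Should print paths ending in .../pyenv/bin/python3 and .../pyenv/bin/pip3
which python3
which pip3
```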
To compile the Rust-based RTA tool (called rta-tool), just run cargo as usual in the rta-tool folder:
```
cd rta-tool
cargo build --release
cd ..
```
The minimum stable version of Rust required is 1.61.0. The current stable release should work without problems, too.
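To check which toolchain version is currently installed (and update it via rustup if needed), you can run:

```
# Verify that the installed Rust toolchain is at least version 1.61.0
rustc --version
cargo --version
```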
The experiments involve the creation of many small files. If you are running the experiments on macOS, it is recommended that you turn off Spotlight indexing for the folder holding the experiments (e.g., by adding it to Spotlight's Privacy list in the system settings) to avoid inducing excessive CPU load in Spotlight's continuous indexing component.
That’s it: we are ready to run the experiments. However, before doing so, let’s first run the test suites and check out the documentation.
This step may be skipped entirely — jump ahead to Step 1 if you are in a hurry.
The Rust implementation is fully documented and comes with test cases. We recommend first running the test suites and then browsing the generated documentation.
To run the test suite of the Rust response-time analysis library, run cargo test
in the response-time-analysis
folder:
```
cd response-time-analysis
cargo test
cd ..
```
The output should show 79 passing unit tests and 3 passing doc tests.
To run the test suite of the rta-tool frontend, run cargo test in the rta-tool folder:
```
cd rta-tool
cargo test
cd ..
```
The output should show 7 passing unit tests.
The source code of both the analysis library and the frontend tool is extensively commented. To generate hyperlinked documentation, run cargo doc
in the rta-tool
folder:
```
cd rta-tool
cargo doc
cd ..
```
To open the documentation in your default browser, simply run cargo doc --open
in the rta-tool
folder:
```
cd rta-tool
cargo doc --open
cd ..
```
NB: if the opened HTML document looks corrupted, try turning off your ad blocker or using a browser other than Safari.
We invite the reader to study the Rust codebase implementing the analyses. As starting points, we recommend the following parts of the codebase:
NB: The following links in this section will become clickable only after the documentation has been generated as described above.
- First, read the README.md file describing the rta-tool and its input format to get an idea of its functionality.
- Next, open the overview page for rta-tool and browse through the modules. It will quickly become apparent that the tool itself is fairly simple.
- In particular, consider the method TaskSet::rta() and check out its source code. Aside from dispatching to a policy-specific method (such as TaskSet::rta_np_fp()), all rta-tool does is call the appropriate RTA provided by the response-time-analysis (short rta::) crate. analysis.rs:75 shows how TaskSet::rta_np_fp() calls the underlying analysis provided by the library (rta::fixed_priority::fully_nonpreemptive::dedicated_uniproc_rta in this case).
- Next, consider the overview page for the response-time analysis library and explore its module structure as desired.
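If you prefer reading the code in a terminal rather than in the generated documentation, the dispatching code referenced above can be located directly in the source tree (line numbers such as 75 may shift slightly between revisions):

```
# Show the region of analysis.rs referenced above
sed -n '60,90p' rta-tool/src/analysis.rs

# Or locate the dispatching methods by name
grep -n "fn rta" rta-tool/src/analysis.rs
```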
The relevant analyses for the experiments are the following:

- rta::fixed_priority::fully_nonpreemptive::dedicated_uniproc_rta, implemented in the file response-time-analysis/src/fixed_priority/fully_nonpreemptive.rs.
- rta::edf::fully_nonpreemptive::dedicated_uniproc_rta, implemented in the file response-time-analysis/src/edf/fully_nonpreemptive.rs.
- rta::fifo::dedicated_uniproc_rta, implemented in the file response-time-analysis/src/fifo/rta.rs.

The sole exception is the AFIFO analysis, which is implemented directly in rta-tool as TaskSet::rta_fifo_altmeyer() in the file rta-tool/src/analysis.rs.
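As a convenience, the entry points of these library-provided analyses can also be located by name from the command line (assuming a POSIX shell with grep available):

```
# Find the dedicated_uniproc_rta entry points in the library sources
grep -rn "fn dedicated_uniproc_rta" response-time-analysis/src/
```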
To get started with the experiments, generate synthetic task sets by running the following command:
```
./tsgen.py
```
This will take some time (about 5 minutes on a recent laptop) and generate a large number of files in subfolders of ./workloads, using the method described in the paper.
No output is expected while the tool is running, nor upon successful completion.
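Since the generator is silent, a quick way to confirm that it ran successfully is to check that ./workloads has been populated (the exact subfolder names and file count depend on the generation parameters):

```
# The workload folders and the number of generated files
ls workloads/
find workloads -type f | wc -l
```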
Invoke the Rust-based analysis tool on the generated workloads with the following command:
```
./run-rta.sh
```
Compared to Step 1, this should be fairly quick.
The script will mention which experiment it is processing as it proceeds through the workloads. No other output is expected while the tool is running.
The tool produces YAML files in subfolders of ./workloads (look for files with the prefix results-).
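To verify that the analysis results are in place, you can list the newly created files:

```
# List the analysis result files produced by run-rta.sh
find workloads -type f -name 'results-*'
```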
Process the RTA results by running the following command:
```
./eval.py
```
This will take quite a while since it must ingest and process all analysis results. You may want to get a coffee or brew a tea…
The tool will provide progress updates while it is working.
The script will produce aggregated statistics and store them as CSV files.
For example:
```
workloads/ex1/ratios_by_tcount.csv
workloads/ex1/ratios_by_util.csv
workloads/ex2/ratios_by_tcount.csv
workloads/ex2/ratios_by_util.csv
workloads/ex3/ratios_by_tcount.csv
workloads/ex3/ratios_by_util.csv
```
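To get a first impression of the aggregated statistics, you can inspect one of these files directly; assuming standard comma-separated CSV, the following prints the first few rows in column-aligned form:

```
# Peek at one of the aggregated result files
column -s, -t < workloads/ex1/ratios_by_util.csv | head
```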
The previous step creates a lot of files. Only a few of those are relevant for the figures in the paper. To copy the relevant files to the results/
directory, run the following command:
```
./copy-results.sh
```
The final data files (in CSV format) may now be found in the subfolder results/.
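A quick listing confirms which curated data files were copied:

```
# The final data files selected for the figures
ls results/
```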
To visualize the data copied out in the previous step, run the following command:
```
./plot.py
```
This will create a number of plots in PDF format in the folder ./plots.
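To view the rendered figures, list or open the generated PDFs (the open command is macOS-specific; on Linux, xdg-open works similarly):

```
# List the generated figures
ls plots/*.pdf

# On macOS, open them in the default PDF viewer
open plots/*.pdf
```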
For your convenience, we provide a script to execute Steps 0–5 completely automatically within an ephemeral Docker container.
Assuming the docker
binary is in your shell’s $PATH
, run the following command to execute the above steps:
```
Docker/run-all-in-Docker.sh
```
Docker will download an official Rust image and then execute the above steps one by one. This will take quite a while, in part due to Docker-induced I/O overheads, so we recommend letting it run in the background or overnight.
The script will provide progress updates as it completes the steps.
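If you want to leave the container running unattended, one way to do so (the log file name here is just an example) is:

```
# Run the Docker-based experiments in the background and capture all output
nohup Docker/run-all-in-Docker.sh > run-all-in-docker.log 2>&1 &

# Follow the progress updates
tail -f run-all-in-docker.log
```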
Of the plots generated in ./plots
, only six could be shown in the paper.
NB: The links in this section will become clickable only after the plots have been rendered as described in the previous step.
Because the corpus of evaluated workloads is generated randomly, one cannot expect to obtain exactly the same curves as shown in the paper. However, the same trends should be clearly apparent.