FIFO RTA: Description of Experiments (Section V)

This folder provides the tools and scripts used to conduct the empirical evaluation reported in Section V of the paper.

In the following, this guide describes how to reproduce the plots shown and discussed in Section V.


Folder Contents

This folder provides the following instructions:


The following files and directories are required for the setup:


The following scripts are used to run the experiments:

  1. tsgen.py — a Python script that generates synthetic task sets.
  2. run-rta.sh — a Bash script that invokes rta-tool on all generated task sets for all considered scheduling policies.
  3. eval.py — a Python script that aggregates the results of the experiments.
  4. copy-results.sh — a Bash script that extracts the final results from the output of eval.py.
  5. plot.py — a Python script that plots the results.

There are three initially empty folders: workloads/ (filled by Steps 1–3), results/ (filled by Step 4), and plots/ (filled by Step 5).


Finally, we provide two scripts for running the experiments within a Docker container; these are located in the Docker directory.

Step 0 — Setup and Preliminaries

This and the following steps explain how to reproduce the experiments manually. For convenience, we also provide a fully automated Docker-based alternative that requires no user interaction. Readers interested only in the Docker-based approach may skip ahead to the section entitled Alternative: Steps 0–5 in Docker.

Environment

The following instructions have been tested on macOS 12.4 “Monterey”. They are also expected to work on recent mainstream Linux distributions such as Ubuntu 22.04 LTS.

NB: Microsoft Windows is not supported.

The following standard software is required and is assumed to already be in place in the remainder of this guide:

Python

To install the Python dependencies, we recommend using a virtual environment, created with Python’s built-in venv module as shown below.

In the terminal, navigate to the folder 2_Evaluation and run:

  python3 -m venv pyenv     
  source pyenv/bin/activate
  pip3 install -r requirements.txt 

This will create a virtual Python environment in the pyenv folder and ensure that all required Python dependencies are installed without interfering with the system’s globally installed packages.

Compile the Rust-Based RTA Tool

To compile the Rust-based RTA tool (called rta-tool), just run cargo as usual in the rta-tool folder:

cd rta-tool
cargo build --release
cd ..

The minimum stable version of Rust required is 1.61.0. The current stable release should also work without problems.
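
If your installed toolchain is older, it can be updated via rustup (assuming Rust was installed through rustup in the first place):

rustc --version        # check the installed version (must be >= 1.61.0)
rustup update stable   # update to the current stable release, if needed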

Hint: On macOS, Turn off Spotlight Indexing

The experiments involve the creation of many small files. If you are running the experiments on macOS, we recommend turning off Spotlight indexing for the folder holding the experiments to avoid excessive CPU load from Spotlight’s continuous indexing.
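
One way to do so is to add the folder to the exclusion list under System Preferences → Spotlight → Privacy. Alternatively, if the experiments reside on a dedicated volume, indexing can be disabled for that entire volume with mdutil (the path below is only an example; adjust it to your setup):

# Disable Spotlight indexing for the volume holding the experiments
sudo mdutil -i off /Volumes/Experiments

# Re-enable indexing once the experiments are done
sudo mdutil -i on /Volumes/Experiments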

That’s it: we are ready to run the experiments. However, before doing so, let’s first run the test suites and check out the documentation.

Run Tests and Peruse the Documentation

This step may be skipped entirely — jump ahead to Step 1 if you are in a hurry.

The Rust implementation is fully documented and comes with test cases. We recommend first running the test suites and then browsing the generated documentation.

Run the Analysis Unit Tests

To run the test suite of the Rust response-time analysis library, run cargo test in the response-time-analysis folder:

cd response-time-analysis
cargo test
cd ..

The output should show 79 passing unit tests and 3 passing doc tests.

Run the rta-tool Unit Tests

To run the test suite of the rta-tool frontend, run cargo test in the rta-tool folder:

cd rta-tool
cargo test
cd ..

The output should show 7 passing unit tests.

Generate the Documentation

The source code of both the analysis library and the frontend tool is extensively commented. To generate hyperlinked documentation, run cargo doc in the rta-tool folder:

cd rta-tool
cargo doc
cd ..

To open the documentation in your default browser, simply run cargo doc --open in the rta-tool folder:

cd rta-tool
cargo doc --open
cd ..

NB: If the opened HTML document looks corrupted, try turning off your ad blocker or using a browser other than Safari.

Peruse the Documentation and/or Study the Source Code

We invite the reader to study the Rust codebase implementing the analyses. As starting points, we recommend the following parts of the codebase:

NB: The links in this section become clickable only after the documentation has been generated as described above.

First, read the README.md file describing the rta-tool and its input format to get an idea of its functionality.

Next, open the overview page for rta-tool and browse through the modules. It will quickly become apparent that the tool itself is fairly simple.

In particular, consider the method TaskSet::rta() and check out its source code.

Next, consider the overview page for the response-time analysis library and explore its module structure as desired.

The relevant analyses for the experiments are the following:

The sole exception is the AFIFO analysis, which is implemented directly in rta-tool as TaskSet::rta_fifo_altmeyer() in the file rta-tool/src/analysis.rs.

Step 1 — Generate Synthetic Workloads

To get started with the experiments, generate synthetic task sets by running the following command:

./tsgen.py

This will take some time (about 5 minutes on a recent laptop) and generate a large number of files in subfolders of ./workloads, using the method described in the paper.

No output is expected while the tool is running, nor upon successful completion.
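
To check that the generation succeeded, you can count the generated files, e.g.:

# The exact count depends on the generation parameters
find ./workloads -type f | wc -l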

Step 2 — Run the RTAs

Invoke the Rust-based analysis tool on the generated workloads with the following command:

./run-rta.sh

Compared to Step 1, this should be fairly quick.

The script will mention which experiment it is processing as it proceeds through the workloads. No other output is expected while the tool is running.

The tool produces YAML files in subfolders of ./workloads (look for files with the prefix results-).
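
For example, the result files can be located as follows:

# List all analysis results produced by run-rta.sh
find ./workloads -type f -name 'results-*'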

Step 3 — Crunch the Data

Process the RTA results by running the following command:

./eval.py

This will take quite a while since it must ingest and process all analysis results. You may want to get a coffee or brew a tea…

The tool will provide progress updates while it is working.

The script will produce aggregated statistics and store them as CSV files.

For example:

workloads/ex1/ratios_by_tcount.csv
workloads/ex1/ratios_by_util.csv
workloads/ex2/ratios_by_tcount.csv
workloads/ex2/ratios_by_util.csv
workloads/ex3/ratios_by_tcount.csv
workloads/ex3/ratios_by_util.csv
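
To get a first impression of the aggregated data, you can peek at one of these files (the column layout is determined by eval.py):

# Show the header and the first few rows of one aggregate file
head workloads/ex1/ratios_by_util.csv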

Step 4 — Copy the Results

The previous step creates a lot of files. Only a few of those are relevant for the figures in the paper. To copy the relevant files to the results/ directory, run the following command:

./copy-results.sh

The final data files (in CSV format) may now be found in the subfolder results/.
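
A quick listing confirms that the copy step succeeded:

# List the final data files
ls -l results/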

Step 5 — Plot the Results

To visualize the data copied out in the previous step, run the following command:

./plot.py

This will create a number of plots in PDF format in the folder ./plots.
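
To verify, list the generated figures:

# List the rendered PDF plots
ls plots/*.pdf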

Alternative: Steps 0–5 in Docker

For your convenience, we provide a script to execute Steps 0–5 completely automatically within an ephemeral Docker container.

Assuming the docker binary is in your shell’s $PATH, run the following command to execute the above steps:

Docker/run-all-in-Docker.sh

Docker will download an official Rust image and then execute the above steps one by one. This will take quite a while, in part due to Docker-induced I/O overheads, so we recommend letting this run in the background or overnight.
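
If you prefer not to keep a terminal attached for the whole run, one option (a sketch; the log file name is arbitrary) is:

# Run the full pipeline in the background, keeping a log
nohup Docker/run-all-in-Docker.sh > docker-run.log 2>&1 &
tail -f docker-run.log   # follow the progress updates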

The script will provide progress updates as it completes the steps.

Step 6 — Compare with the Paper

Of the plots generated in ./plots, only six could be shown in the paper.

NB: The links in this section will become clickable only after the plots have been rendered as described in the previous step.

Because the corpus of evaluated workloads is generated randomly, one cannot expect to obtain exactly the same curves as shown in the paper. However, the same trends should be clearly apparent.