RTSS 2016 — Artifact Evaluation

This tutorial explains how to reproduce the experiments discussed in the paper

A. Biondi, B. Brandenburg, and A. Wieder, “A Blocking Bound for Nested FIFO Spin Locks”, In Proceedings of the 37th IEEE Real-Time Systems Symposium (RTSS 16), Porto, Portugal, November 29 - December 2, 2016.

This document is organized as follows.

  1. Environment setup

  2. How to run a demo experiment

  3. How to run the experiments discussed in Section V

  4. Overview of the code structure

Please follow the instructions detailed in each section. In case of problems or further questions, feel free to contact us by email.

1. Environment Setup

The schedulability analyses presented in the paper have been implemented as part of the SchedCAT library. While SchedCAT supports both Linux and Mac OS X environments in principle, this guide assumes the use of Linux.

In particular, we recommend using Ubuntu Linux 16.04, the version used to test these instructions.

Packages needed for SchedCAT

To run the experiments, the following standard packages are required: make, python2.7, python-dev, python-numpy, python-scipy, swig, g++, and libgmp3-dev.

On a Debian-based Linux distribution, these packages can be installed with the following apt command:

# apt-get install make python2.7 python-dev python-numpy python-scipy swig g++ libgmp3-dev

The above command has been tested on the latest release of Ubuntu Linux (16.04).
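To verify that the Python dependencies are in place before building SchedCAT, a small sanity check can be run. This is a sketch, not part of the artifact; the module names are derived from the package list above:

```python
import importlib

def missing_modules(names):
    """Return the subset of the given module names that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# numpy and scipy correspond to the python-numpy and python-scipy packages.
print(missing_modules(["numpy", "scipy"]))  # an empty list means all is well
```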

Installing a Linear Program Solver

A linear program solver is also required. SchedCAT supports IBM CPLEX.

IBM CPLEX is commercial software, hence it cannot be redistributed with the package provided for this artifact evaluation. Free academic licenses are available.

Warning: The CPLEX Community Edition (90-day free trial) cannot be used to evaluate this artifact due to artificial limitations concerning the maximum problem size that can be solved with it.

At the time of writing, IBM provides the file cplex_studio1263.linux-x86-64.bin for Linux (x86_64). The filename may change when newer versions of CPLEX are released.

To install CPLEX, it is sufficient to execute the following commands (note that the installer must be run as root, as indicated by the # prompt):

$ chmod +x cplex_studio1263.linux-x86-64.bin
# ./cplex_studio1263.linux-x86-64.bin

The above commands have been tested on the latest release of Ubuntu Linux (16.04) in September 2016.

2. How to run a demo experiment

The following steps explain how to run a demo experiment to test the setup described in the previous section.

This demo considers the example task set discussed in Section II.D of the paper. A possible schedule of this task set is illustrated in Figure 2 and its corresponding blocking graph is illustrated in Figure 3.

First, download the provided ZIP file including SchedCAT and a set of Python scripts to run the experiments:

$ mkdir ~/nFIFO; cd ~/nFIFO
$ wget http://www.mpi-sws.org/~bbb/papers/ae/rtss16/nFIFO.zip
$ unzip nFIFO.zip

Move into the SchedCAT folder, compile the source code, and move back to the root folder:

$ cd lib/schedcat
$ make
$ cd ../../

Note: The makefiles try to locate an installation of CPLEX automatically. In some cases this may fail, and the build process then terminates with the error “No LP Solver available”. If this happens, the CPLEX path must be configured manually with the following command:

$ export CPLEX_PATH=<your_CPLEX_path_here>

For instance, if CPLEX version 12.6.3 is used under Linux (x86_64) and all default options were accepted during installation, then the path must be configured as follows:

$ export CPLEX_PATH=/opt/ibm/ILOG/CPLEX_Studio1263

In addition, some installations of CPLEX may not create symbolic links to the CPLEX static libraries in the /usr/lib/ directory, which is where the makefiles look for them. For CPLEX installations on 64-bit Linux, the links can be created with the following commands:

# ln -s <your_CPLEX_path_here>/concert/lib/x86-64_linux/static_pic/libconcert.a /usr/lib/libconcert.a
# ln -s <your_CPLEX_path_here>/cplex/lib/x86-64_linux/static_pic/libcplex.a /usr/lib/libcplex.a 
# ln -s <your_CPLEX_path_here>/cplex/lib/x86-64_linux/static_pic/libilocplex.a /usr/lib/libilocplex.a 
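Whether the links are in place can be checked with a few lines of Python. The helper below is hypothetical, not part of the artifact:

```python
import os

# Static libraries the makefiles expect to find in the library directory.
CPLEX_LIBS = ["libconcert.a", "libcplex.a", "libilocplex.a"]

def missing_cplex_libs(libdir="/usr/lib"):
    """Return the names of the CPLEX static libraries not present in libdir."""
    return [lib for lib in CPLEX_LIBS
            if not os.path.exists(os.path.join(libdir, lib))]
```

If the returned list is non-empty, create the corresponding symbolic links as shown above.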

Before launching the experiment, it is necessary to set an environment variable so that Python can locate SchedCAT:

$ export PYTHONPATH=$PYTHONPATH:./lib/schedcat
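Equivalently, the path can be added from within a Python script, which may be convenient if you forget to set the variable. A sketch, assuming the working directory is the ~/nFIFO root folder:

```python
import sys

SCHEDCAT_PATH = "./lib/schedcat"  # relative to the ~/nFIFO root folder
if SCHEDCAT_PATH not in sys.path:
    sys.path.append(SCHEDCAT_PATH)
```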

We are now ready to launch the demo experiment.

$ python example.py

The output of this command should look as follows.

[nFIFO Locks] Blocking times:
Task #1 ==> 8
Task #2 ==> 9
Task #3 ==> 8
Task #4 ==> 6
Task #5 ==> 1
[Group Locks with MSRP] Blocking times:
Task #1 ==> 8
Task #2 ==> 13
Task #3 ==> 12
Task #4 ==> 10
Task #5 ==> 10

Explanation: The script example.py sets up the task set used as a running example in the paper and then computes for each task the blocking bounds under (i) the nFIFO policy and (ii) group locks protected with the Multiprocessor Stack Resource Policy (MSRP).
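The blocking bounds printed above can also be checked programmatically. The following Python sketch (not part of the artifact) hard-codes the values from the demo output and verifies that, for this example, the nFIFO bound of each task never exceeds the corresponding group-lock bound:

```python
# Blocking bounds copied from the demo output above (tasks #1..#5).
nfifo = [8, 9, 8, 6, 1]
group_locks = [8, 13, 12, 10, 10]

# For this task set, nFIFO never blocks longer than group locks with MSRP.
for b_nfifo, b_gl in zip(nfifo, group_locks):
    assert b_nfifo <= b_gl

# Per-task improvement of nFIFO over group locks.
print([b_gl - b_nfifo for b_nfifo, b_gl in zip(nfifo, group_locks)])
# → [0, 4, 4, 4, 9]
```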

This example serves to demonstrate that the basic analysis works. We next discuss how to reproduce the quantitative experiments reported in the paper.

3. How to run the experiments discussed in Section V

Before proceeding with the next steps, it is necessary to introduce the notion of a configuration.

A configuration represents all the settings for the task set generators (e.g., how many processors, how many resources, etc.) and defines the experiment to be performed (e.g., varying the number of tasks from 10 to 80).

The directory ~/nFIFO/confs/rtss16/tcount contains a .conf file for each configuration tested in the experimental study discussed in the paper. The name of each .conf file contains key-value pairs with the values for (most of) the parameters.

As an example, the configuration file named

sd_exp=rtss16-tcount__m=04__util=uni-70-90__per=10-100__nr=4__p_outer=25__p_nest=25__NG=2__ND=2__cs=short__nreq=4.conf

corresponds to a configuration for an experiment named rtss16-tcount; each double-underscore-separated field in the name encodes one generator parameter (e.g., m=04 sets the number of processors and nr=4 the number of shared resources).

Please refer to the paper (Section V) for further details concerning these parameters.
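The key-value pairs encoded in a configuration filename can be decoded mechanically, for instance with the following Python sketch (not part of the artifact):

```python
def parse_conf_name(filename):
    """Split a configuration filename into a dict of its key-value pairs."""
    stem = filename[:-len(".conf")] if filename.endswith(".conf") else filename
    return dict(field.split("=", 1) for field in stem.split("__"))

params = parse_conf_name(
    "sd_exp=rtss16-tcount__m=04__util=uni-70-90__per=10-100__nr=4"
    "__p_outer=25__p_nest=25__NG=2__ND=2__cs=short__nreq=4.conf")
print(params["m"], params["nr"], params["cs"])  # → 04 4 short
```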

How to run the configurations considered in Figure 5

In the paper (Figure 5), we reported the results for the following three representative configurations:

(a) sd_exp=rtss16-tcount__m=04__util=uni-70-90__per=10-100__nr=4__p_outer=25__p_nest=25__NG=2__ND=2__cs=short__nreq=4.conf
(b) sd_exp=rtss16-tcount__m=04__util=uni-50-70__per=10-100__nr=16__p_outer=25__p_nest=25__NG=3__ND=4__cs=short__nreq=1.conf
(c) sd_exp=rtss16-tcount__m=08__util=uni-50-70__per=10-100__nr=16__p_outer=25__p_nest=25__NG=3__ND=2__cs=short__nreq=1.conf

The following command launches the schedulability experiment for a specific configuration:

$ python -m exp -f ./confs/rtss16/tcount/<configuration file>

where <configuration file> must be replaced with the filename of the configuration of interest. This command will produce a CSV output file ./output/rtss16/tcount/<configuration string>.csv that contains the measured schedulability ratios for each tested analysis. For instance, the output file for the configuration (a) should look structurally as follows, although the exact numbers will vary due to sampling noise.

#################################### DATA ####################################
#   TASKS per CPU      no blocking            nFIFO      group-locks
                1,            1.00,            1.00,            1.00
                2,            0.98,            0.98,            0.98
                3,            0.91,            0.87,            0.82
                4,            0.92,            0.83,            0.75
                5,            0.91,            0.80,            0.68
                6,            0.91,            0.76,            0.60
                7,            0.93,            0.70,            0.53
                8,            0.88,            0.63,            0.41
                9,            0.87,            0.57,            0.34
               10,            0.86,            0.48,            0.24

The first column represents the parameter varied in the experiment (number of tasks per processor in this example) while the other columns report the schedulability ratio for each analysis. Specifically, the label no blocking refers to the P-FP analysis where blocking is ignored, nFIFO considers the analysis of the nFIFO protocol presented in the paper and group-locks considers group locks protected by the MSRP.
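The output files can be post-processed with standard tools. The following Python 3 sketch (not part of the artifact) parses the format shown above, assuming that comment lines start with '#' and that the columns appear in the order just described:

```python
import csv
import io

# Sample taken from the output excerpt shown above.
SAMPLE = """\
#################################### DATA ####################################
#   TASKS per CPU      no blocking            nFIFO      group-locks
                1,            1.00,            1.00,            1.00
                2,            0.98,            0.98,            0.98
"""

def read_results(lines):
    """Yield (x, no_blocking, nfifo, group_locks) tuples, skipping comments."""
    rows = (l for l in lines if l.strip() and not l.lstrip().startswith("#"))
    for row in csv.reader(rows):
        x, nb, nf, gl = (field.strip() for field in row)
        yield int(x), float(nb), float(nf), float(gl)

results = list(read_results(io.StringIO(SAMPLE)))
print(results[0])  # → (1, 1.0, 1.0, 1.0)
```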

To ease the reproduction of Figure 5, we prepared a separate folder, confs/paper/, which includes the three configurations shown in the figure. To launch the three experiments, execute the following command:

$ python -m exp -p -d ./confs/paper

The output files will be produced in output/paper/.

Note: The experiments may take quite some time to complete, depending on the machine used to run them: on the order of 10 hours in general. To speed up the experiments, we provide modified versions of the configurations in the folder confs/paper-simpl/, which run with fewer samples (at the cost of reduced accuracy due to higher sampling noise). Alternatively, the number of samples can be configured freely by modifying the samples parameter inside each configuration file.
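Because the schedulability ratio is the fraction of randomly generated task sets that a given analysis deems schedulable, the accuracy cost of using fewer samples can be estimated with the standard error of a binomial proportion, sqrt(p(1-p)/n). A back-of-the-envelope Python sketch follows; the sample counts below are illustrative, not the values used in the artifact's configuration files:

```python
import math

def stderr(p, n):
    """Standard error of a schedulability ratio p estimated from n task sets."""
    return math.sqrt(p * (1.0 - p) / n)

# Illustrative sample counts: quartering n doubles the width of the error bars.
for n in (1600, 400, 100):
    print(n, stderr(0.5, n))
```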

Running times of Figure 4

For each configuration, a .RUNTIMES file is produced in the same folder as the CSV output file. The measured running times are, of course, strongly dependent on the machine used to run the experiments. However, we expect the trend visible in Figure 4 to also be observable when the experiments are run on a machine different from the one we used.

How to run all the configurations

The experiments for all the configurations inside a directory can be run with the following command, after performing all the steps reported in the previous section:

$ python -m exp -p -d ./confs/rtss16/tcount

Note: The configurations were generated for the large-scale experimental study we conducted for the paper. We executed a subset of the configurations on 8 machines, each equipped with 48 cores, and it took days to obtain the results.

4. Overview of the code structure

The following files, found at the relative path lib/schedcat/native/src/blocking/linprog/, contain the implementations of the analyses used in the experiments.

File                       Description
lpspinlocknested_fifo.cpp  Implementation of the analysis for nested FIFO spin locks presented in the paper
lpspinlockmsrp.cpp         Implementation of the analysis for the MSRP, used to check schedulability with group locks