# RTSS’16 Supplemental Materials

B. Brandenburg and M. Gül, Global Scheduling Not Required: Simple, Near-Optimal Multiprocessor Real-Time Scheduling with Semi-Partitioned Reservations, Proceedings of the 37th IEEE Real-Time Systems Symposium, 2016.

The following information is provided:

## Pseudo-Code for the Meta-Heuristics

Due to space constraints, we were not able to provide pseudo-code for the proposed meta-heuristics in the paper itself. For ease of understanding, we provide brief sketches of the algorithms here.

### Pre-Assign Failures (PAF) Meta-Heuristic

At a high level, the Pre-Assign Failures (PAF) meta-heuristic uses assignment failures as a feedback signal: tasks that could not be assigned are likely the “most difficult to assign,” and hence should be placed before the remaining tasks are considered. See Section IV.B of the paper for additional details.

Inputs:

• $$h_1$$ — a (semi-)partitioning heuristic to pre-assign troublesome tasks
• $$h_2$$ — a (semi-)partitioning heuristic to assign the remaining tasks
• $$\tau$$ — the task set that is to be partitioned

Initialization:

• failures ← $$\emptyset$$
• rest ← $$\tau$$

PAF procedure:

```
# Iterate until success or failure.
while no complete assignment has been found:

    # First place any troublesome tasks.
    # In the first iteration, this does nothing.
    pre_assignment ← h1(failures)

    if the pre-assignment is incomplete:
        # If even the pre-assignment phase failed, we give up.
        return "no suitable semi-partitioning found"

    # Otherwise, let h2 assign the remaining tasks
    # ...while respecting the partial pre-assignment of h1.
    final_assignment ← h2(rest, pre_assignment)

    # Did we find a solution?
    if final_assignment is complete:
        # Good, we have a solution!
        return "success", final_assignment
    else:
        # Nope, add any tasks that we couldn't assign to the failures set
        # and then try again.
        failures ← failures ∪ {tasks not assigned by h2}
        rest ← τ \ failures
```

The above procedure terminates either when the pre-assignment phase fails, or when a valid semi-partitioning has been found by $$h_2$$. Either exit condition is reached after at most $$|\tau|$$ iterations.
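As an illustration, the PAF loop can be sketched in Python. The heuristic interface (`h(tasks, partial_assignment)` returning an assignment and a list of unassigned tasks), the `(name, utilization)` task tuples, and the toy first-fit heuristic below are all hypothetical stand-ins for illustration, not the SchedCAT API.

```python
def paf(h1, h2, tasks):
    """Pre-Assign Failures meta-heuristic (sketch)."""
    failures = set()
    # Either exit condition is reached after at most |tasks| iterations.
    for _ in range(len(tasks) + 1):
        # First place any troublesome tasks (a no-op in the first iteration).
        pre_assignment, unplaced = h1(sorted(failures), {})
        if unplaced:
            # Even the pre-assignment phase failed: give up.
            return None
        # Let h2 assign the remaining tasks, respecting the pre-assignment.
        rest = [t for t in tasks if t not in failures]
        assignment, unassigned = h2(rest, pre_assignment)
        if not unassigned:
            return assignment  # success
        # Remember which tasks were troublesome and try again.
        failures.update(unassigned)
    return None

def make_first_fit(num_cpus, capacity=1.0):
    """Toy first-fit bin-packing heuristic with the assumed interface."""
    def ff(tasks, partial):
        assignment = dict(partial)
        load = [0.0] * num_cpus
        for (_name, util), cpu in partial.items():
            load[cpu] += util
        unassigned = []
        for task in tasks:
            _name, util = task
            for cpu in range(num_cpus):
                if load[cpu] + util <= capacity + 1e-9:
                    assignment[task] = cpu
                    load[cpu] += util
                    break
            else:
                unassigned.append(task)
        return assignment, unassigned
    return ff

tasks = [("a", 0.4), ("b", 0.4), ("c", 0.6), ("d", 0.6)]
ff = make_first_fit(2)
# Plain first-fit fails to place ("d", 0.6) on two processors, but PAF
# pre-assigns it in the second iteration and then succeeds.
print(ff(tasks, {})[1])   # → [('d', 0.6)]
print(paf(ff, ff, tasks))
```

Note how the failed task `d` is placed first in the second iteration, which resolves the packing conflict that plain first-fit runs into.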

### Reduce Periods (RP) Meta-Heuristic

The Reduce Periods (RP) Meta-Heuristic repeatedly period-transforms both (i) all tasks with periods above a certain threshold (which is lowered from iteration to iteration) and (ii) any tasks that failed to be assigned in previous iterations (as in the PAF meta-heuristic). See Section IV.B of the paper for additional details.

For an explanation of the period-transformation technique, see Section II.B of the paper. In a nutshell, given an integer $$k_i \geq 2$$, an implicit-deadline sporadic task $$\tau_i$$ (or an equivalent polling reservation, see Section II.A) is period-transformed into an equivalent task $$\tau_i'$$ with a (much) shorter period by dividing both its WCET $$C_i$$ and minimum inter-arrival time $$T_i$$ by $$k_i$$:

$$C_i' = \frac{C_i}{k_i} \qquad\qquad T_i' = \frac{T_i}{k_i}$$
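In code, the transformation is a one-liner; the helper below is a minimal sketch (not SchedCAT code) that assumes $$k_i$$ evenly divides the period, and uses exact rational arithmetic to make it obvious that the utilization $$C_i/T_i$$ is preserved.

```python
from fractions import Fraction

def period_transform(C, T, k):
    """Divide both WCET C and period T by k (k >= 2).
    Utilization C/T is unchanged; assumes k evenly divides T."""
    assert k >= 2 and T % k == 0
    return Fraction(C) / k, T // k

# A task with C = 3, T = 12 transformed with k = 3 becomes C' = 1, T' = 4.
print(period_transform(3, 12, 3))
```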

Inputs:

• $$h$$ — a semi-partitioning heuristic used to actually split and place tasks
• $$L$$ — a set of period thresholds (or limits)
• $$C$$ — a set of candidate periods to try (can be the same as $$L$$)
• $$min$$ — a lower limit on the minimum period length
• $$\tau$$ — the task set that is to be partitioned

Initialization:

• failures ← $$\emptyset$$

Main RP procedure:

```
# Progressively reduce the maximum allowed period.
for each limit ∈ L in decreasing order:

    # Period-transform the task set (see below).
    tau' ← transform(tau, failures, limit)

    # Try to semi-partition the transformed task set.
    final_assignment ← h(tau')

    # Did we find a solution?
    if final_assignment is complete:
        # Good, we have a solution!
        return "success", final_assignment
    else:
        # Nope, add any tasks that we couldn't assign to the failures set
        # and then try again.
        failures ← failures ∪ {tasks not assigned by h}

# Finally, if we could not find a solution even with the shortest allowed
# period limit, then give up and signal failure.
return "no suitable semi-partitioning found"
```

The main RP loop relies on a transformation procedure, which works as follows.

```
transform(tau, failures, limit):

    # Don't modify the original task set.
    tau' ← make copy of tau

    # Consider periods not exceeding limit, and at least min.
    possible_choices ← {p | p ∈ C and p ≤ limit and p ≥ min}

    # As a fallback, give up on the minimum threshold.
    fallback_choices ← {p | p ∈ C and p ≤ limit}

    # Look for tasks with periods exceeding the limit, or that failed
    # to be assigned previously.
    for each task t in tau' such that t.period > limit or t ∈ failures:

        # Try the longest candidate that divides the period.
        for each p in possible_choices in decreasing order:
            if t.period mod p = 0:
                k ← t.period / p
                t.period ← p
                t.wcet ← t.wcet / k
                break

        # If that didn't work, try the fallback choices.
        if t was not transformed:
            for each p in fallback_choices in decreasing order:
                if t.period mod p = 0:
                    k ← t.period / p
                    t.period ← p
                    t.wcet ← t.wcet / k
                    break

    return tau'
```
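The RP loop and its transformation step can be sketched together in Python. As with the PAF sketch, the `(name, wcet, period)` task tuples and the heuristic interface (`h(tasks)` returning an assignment and a list of unassigned tasks) are hypothetical stand-ins, not the SchedCAT implementation.

```python
from fractions import Fraction

def transform(tasks, failures, limit, candidates, min_period):
    """RP transform step (sketch): period-transform every task whose period
    exceeds `limit`, and every previously failed task, to the longest
    candidate period that evenly divides its current period."""
    preferred = sorted((p for p in candidates if min_period <= p <= limit),
                       reverse=True)
    fallback = sorted((p for p in candidates if p <= limit), reverse=True)
    out = []
    for name, wcet, period in tasks:
        if period > limit or name in failures:
            # Preferred candidates first; as a fallback, also consider
            # periods below the min threshold.
            for p in preferred + fallback:
                if period % p == 0:
                    k = period // p
                    wcet, period = Fraction(wcet) / k, p
                    break
        out.append((name, wcet, period))
    return out

def rp(h, limits, candidates, min_period, tasks):
    """Main RP loop (sketch)."""
    failures = set()
    # Progressively reduce the maximum allowed period.
    for limit in sorted(limits, reverse=True):
        transformed = transform(tasks, failures, limit, candidates, min_period)
        assignment, unassigned = h(transformed)
        if not unassigned:
            return assignment
        # Remember the troublesome tasks for the next, stricter iteration.
        failures.update(name for name, _, _ in unassigned)
    return None
```

For example, with the candidate set from the paper's experiments, a task with WCET 50 and period 1000 is transformed under limit 100 to WCET 5 and period 100 (k = 10), preserving its utilization of 0.05.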

The sets $$L$$ and $$C$$ need to be chosen appropriately for the type of expected workloads $$\tau$$ (i.e., to match the expected period ranges). In the experiments discussed in the paper, task periods (in milliseconds) were chosen uniformly at random from the set $$\mathcal P$$ = {1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 100, 125, 200, 250, 500, 1000}. To ensure a short hyperperiod after period transformation, we simply chose $$L = C = \mathcal P$$ for the experiments discussed in the paper.
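That this choice yields a short hyperperiod is easy to check: every element of $$\mathcal P$$ divides 1000, so the hyperperiod of any task set drawn from $$\mathcal P$$ is at most 1000 ms. A quick sanity check:

```python
from math import gcd
from functools import reduce

# Candidate periods P from the paper's experiments (in milliseconds).
P = [1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 100, 125, 200, 250, 500, 1000]

def lcm(a, b):
    return a * b // gcd(a, b)

# The least common multiple of all of P is just 1000 ms.
print(reduce(lcm, P))  # → 1000
```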

## Actual Code for all Heuristics and Meta-Heuristics

All heuristics and meta-heuristics have been implemented in SchedCAT, a schedulability analysis toolkit and library implemented in Python and C++.

The heuristics and meta-heuristics used in this paper have been implemented in Python in the schedcat.mapping.apa module; the underlying QPA algorithm was already available in SchedCAT and is implemented in C++ for speed reasons. The name of the Python module derives from the fact that the heuristics have been implemented to also work with tasks that are subject to arbitrary processor affinity (APA) constraints, which, however, is beyond the scope of the RTSS paper.

For convenience, the following table provides stable links to the relevant code.

| Algorithm | Code in SchedCAT | Remarks |
| --- | --- | --- |
| QPA | native/src/edf/qpa.cpp:140 | Accessed via the Python wrapper qpa_it_fits() (➞ schedcat/mapping/apa.py:63). |
| C=D split | native/src/edf/qpa.cpp:212 | Accessed via the Python wrapper qpa_split_off_chunk() (➞ schedcat/mapping/apa.py:63). |
| FFD | schedcat/mapping/apa.py:193 | |
| WFD | schedcat/mapping/apa.py:150 | |
| FFD–C=D | schedcat/mapping/apa.py:193 | With with_splits set to True. |
| WFD–C=D | schedcat/mapping/apa.py:150 | With with_splits set to True. |
| WFD–C=D–MS | schedcat/mapping/apa.py:150 | With both with_splits and max_chunk_split set to True. |
| 2WFD–C=D | schedcat/mapping/apa.py:150 | |
| WWFD | schedcat/mapping/apa.py:274 | |
| FWFD | schedcat/mapping/apa.py:299 | |
| WFFD | schedcat/mapping/apa.py:324 | |
| FFFD | schedcat/mapping/apa.py:349 | |
| PAF | schedcat/mapping/apa.py:374 | |
| RP | schedcat/mapping/apa.py:406 | In this implementation, $$L = C$$ (candidate_periods). |

All file and line references should be understood with regard to the SchedCAT master branch at revision ab81f87f0467e4228aafe2e7568a88c3085805c4 (October 2016).

Note: If you want to reuse this code, always use the latest SchedCAT version, and not the code dump provided as part of the artifact evaluation instructions (see below). The code dump is provided as-is, whereas SchedCAT is being actively maintained.

## Complete Data Set and All Graphs

For the sake of transparency, we provide all relevant data and graphs for download. Each archive contains a detailed README document that explains the respective contents and data formats.

## Reproduction Instructions

To aid in reproducing our results, and to help with any potential reuse of our code, we provide detailed reproduction instructions. Originally, these instructions were prepared for the RTSS’16 Artifact Evaluation process.

The paper contains two sets of experiments:

1. Sections V and VI present results from synthetic schedulability experiments, which were conducted with SchedCAT, and

2. Section VII presents results from overhead and migration-rate experiments on real hardware, which were conducted with an implementation of the proposed reservation-based scheduling approach in LITMUS^RT (a real-time extension of the Linux kernel).

Accordingly, we provide two separate sets of instructions. As both guides are rather lengthy, we provide the instructions on separate pages for ease of reading.

Note: If you want to build upon this work, always use the latest SchedCAT and/or LITMUS^RT versions, and not the code dumps provided as part of these artifact evaluation instructions. The code dumps are provided as-is, whereas the main projects are being actively maintained. By starting with the old code dumps, you lose out on any cleanups and fixes that have gone into the code base since work on this paper was completed.