Network Experiment Configuration
This chapter explains a complete AMNES experiment configuration using a simple example. First, the experimental network and its different scenarios are described. After that, the creation of the required AMNES project configuration is shown step by step.
Info
For experiment execution, AMNES must be installed on all network devices that should be controlled. Please note that only AMNES standard modules and self-developed modules can be used within an experiment. In addition, the specified node addresses and ports must be reachable by the AMNES controller, and the specified testbed links must exist.
As an example of experiment configuration with AMNES, we investigate the use of Multipath TCP (MPTCP) over two network connections. In this case, two endpoints, representing for example a client and a web server, are connected via two paths.
Example Testbed
For this experiment, the testbed is configured as follows: two endpoints (client and server) are connected to two Ethernet bridges that represent two different paths. No direct connection between the endpoints is used, so bandwidth limitations and packet delays on the paths can be configured using Traffic Control without direct optimization by the kernels of the respective endpoints. The following figure illustrates this structure.
Experiment Design
The necessary random traffic is generated using the tool iperf3: the iperf3 server is started on endpoint A and the iperf3 client on endpoint B. Traffic Control is used on the bridges to set bandwidth limits and to delay the packet flow. The tool tcpdump, started on both network interfaces of endpoint A, records the data traffic for evaluation. The Ethernet bridges as well as the endpoints are realized as virtual machines. On every virtual machine runs an instance of the AMNES Worker, which is responsible for starting measurements and data transfers on the endpoints and for configuring Traffic Control on the Ethernet bridges. Furthermore, all virtual machines are located in a separate network that the externally running AMNES controller uses for control and data exchange.
When an experiment runs, the Ethernet bridges are configured via Traffic Control according to the desired scenario, and the correct MPTCP settings are applied on the endpoints. After that, the iperf3 client running on endpoint B sends randomized data to the iperf3 server running on endpoint A for two minutes. When the measurement is complete, the previously applied configurations are reverted.
Configure the corresponding AMNES project
In the following section, the creation of the corresponding YAML configuration for this example experiment is explained step by step. First, the AMNES project, which represents the entire network configuration together with its possible parameterizations, must be named with a slug. Optionally, a name and a more detailed description can be specified as well.
```yaml
# AMNES Example Project
slug: amnes-example
name: AMNES Example Project
description: Simple example project for the AMNES documentation.

# Config
repetitions: 10
```
Defining NodeTasks
The next step is to define the tasks to be executed later during an experiment.
Info
To allow these definitions to be reused several times and for a better overview, YAML node anchors are used under the key `.ignored`. All definitions below this key are ignored when the experiment is imported later. It is also possible to insert these parts directly at the intended position.
Here we first define two different ways of handling output from executed tasks that we want to use later. With `to_devnull` we discard the output from STDOUT and STDERR. The `to_file` definition redirects the two streams into separate files and stores them persistently for evaluation.
```yaml
#
# Anchors and Templates
#
.ignored:
  # Destinations for generated Output
  output_destinations:
    to_devnull: &out_dest_devnull
      stdout: "DEVNULL"
      stderr: "DEVNULL"
    to_file: &out_dest_files
      stdout: "out.txt"
      stderr: "err.txt"
```
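Since anchors and merge keys carry most of this configuration, it helps to spell out their semantics. As a plain-Python analogy (not how AMNES actually processes the file), a YAML anchor (`&name`) names a mapping, `*name` references it, and the merge key `<<:` copies the referenced entries into the current mapping:

```python
# The YAML anchor "&out_dest_files" names this mapping; a later
# "<<: *out_dest_files" behaves like merging it into the task parameters.
out_dest_files = {"stdout": "out.txt", "stderr": "err.txt"}

task_params = {
    **out_dest_files,  # effect of "<<: *out_dest_files"
    "custom": "-U -n -i enp0s9 -s 150 -w -",
    "timeout": "140",
}

print(task_params["stdout"])  # out.txt
```

Keys written explicitly in the task definition take precedence over merged keys, which is why the anchored defaults can be shared across many tasks.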
The first tasks we define are based on the tcpdump module. As parameters we pass a custom command and a timeout. After this time the task should be finished, but no error should be raised; therefore we set `timeouterr` to `False`. In addition, we use the anchor defined above to store the output in files.
```yaml
  # TCPDump Tasks
  tasks_tcpdump:
    tcpdump_enp0s9: &task_tcpdump_enp0s9
      module: amnes.modules.TcpDump
      params:
        <<: *out_dest_files
        custom: "-U -n -i enp0s9 -s 150 -w -"
        timeout: "140"
        timeouterr: "False"
    tcpdump_enp0s10: &task_tcpdump_enp0s10
      module: amnes.modules.TcpDump
      params:
        <<: *out_dest_files
        custom: "-U -n -i enp0s10 -s 150 -w -"
        timeout: "140"
        timeouterr: "False"
```
The NetEm tasks configure Traffic Control on the emulator bridges. In their custom commands we use placeholders in double square brackets (e.g. `[[netem_p1_rate_enp0s9]]`) instead of the concrete values. This allows us to perform different series of measurements with different ParameterSets later. Since we are not interested in the output of this module, we use the previously defined `to_devnull` destinations for STDOUT and STDERR.
```yaml
  # NetEm Tasks
  tasks_netem:
    # NetEm - Rate - Emulator-P1
    netem_setup_enp0s9_rate_p1: &task_netem_setup_enp0s9_rate_p1
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc add dev enp0s9 root netem rate [[netem_p1_rate_enp0s9]]"
    netem_setup_enp0s10_rate_p1: &task_netem_setup_enp0s10_rate_p1
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc add dev enp0s10 root netem rate [[netem_p1_rate_enp0s10]]"
    # NetEm - Rate - Emulator-P2
    netem_setup_enp0s9_rate_p2: &task_netem_setup_enp0s9_rate_p2
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc add dev enp0s9 root netem rate [[netem_p2_rate_enp0s9]]"
    netem_setup_enp0s10_rate_p2: &task_netem_setup_enp0s10_rate_p2
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc add dev enp0s10 root netem rate [[netem_p2_rate_enp0s10]]"
    # NetEm - Delay - Emulator-P1
    netem_setup_enp0s9_delay_p1: &task_netem_setup_enp0s9_delay_p1
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc change dev enp0s9 root netem delay [[netem_p1_delay_enp0s9]]"
    netem_setup_enp0s10_delay_p1: &task_netem_setup_enp0s10_delay_p1
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc change dev enp0s10 root netem delay [[netem_p1_delay_enp0s10]]"
    # NetEm - Delay - Emulator-P2
    netem_setup_enp0s9_delay_p2: &task_netem_setup_enp0s9_delay_p2
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc change dev enp0s9 root netem delay [[netem_p2_delay_enp0s9]]"
    netem_setup_enp0s10_delay_p2: &task_netem_setup_enp0s10_delay_p2
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc change dev enp0s10 root netem delay [[netem_p2_delay_enp0s10]]"
    # NetEm - Cleanup
    netem_cleanup_enp0s9: &task_netem_cleanup_enp0s9
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc del dev enp0s9 root"
    netem_cleanup_enp0s10: &task_netem_cleanup_enp0s10
      module: amnes.modules.NetEm
      params:
        <<: *out_dest_devnull
        custom: "qdisc del dev enp0s10 root"
```
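AMNES later fills these placeholders with values from a ParameterSet. As a rough illustration of the substitution (a hypothetical helper, not AMNES's actual implementation):

```python
import re

def fill_placeholders(text, assignment):
    """Replace every [[name]] placeholder with its value from `assignment`.

    Hypothetical sketch of the substitution step; AMNES performs this
    internally when expanding a template with a ParameterSet.
    """
    return re.sub(r"\[\[(\w+)\]\]", lambda m: assignment[m.group(1)], text)

command = "qdisc add dev enp0s9 root netem rate [[netem_p1_rate_enp0s9]]"
print(fill_placeholders(command, {"netem_p1_rate_enp0s9": "5mibps"}))
# qdisc add dev enp0s9 root netem rate 5mibps
```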
The traffic-generation tasks are based on the Iperf module, which we configure via its `version` parameter to run iperf3. To ensure that the server is started before the client, a `sleep` parameter can be specified for the client in this module. The task is then started only after the specified number of seconds.
```yaml
  # IPerf Tasks
  tasks_iperf:
    .iperf_default: &task_iperf_defaults
      <<: *out_dest_files
      version: "3"
    iperf_server: &task_iperf_server
      module: amnes.modules.Iperf
      params:
        <<: *task_iperf_defaults
        custom: "-s"
        timeout: "130"
        timeouterr: "False"
    iperf_client: &task_iperf_client
      module: amnes.modules.Iperf
      params:
        <<: *task_iperf_defaults
        custom: "-c 10.254.11.101 -t 120"
        timeout: "130"
        timeouterr: "False"
        sleep: "5"
```
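The timeouts above leave headroom for every task to finish: the client starts after its sleep of 5 seconds and transmits for 120 seconds, which fits within the iperf timeout of 130 seconds and the tcpdump timeout of 140 seconds. A quick check of that budget:

```python
# Timing budget for the Iperf stage (values from the task definitions above).
sleep, duration = 5, 120            # client start delay and transfer time (-t 120)
iperf_timeout, tcpdump_timeout = 130, 140

client_done = sleep + duration      # client finishes after 125 s
print(client_done)                  # 125
assert client_done <= iperf_timeout <= tcpdump_timeout
```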
The MPTCP settings on the endpoints are applied with the ShellCommandBase module, which can be used to execute arbitrary command-line commands.
```yaml
  # Multipath
  tasks_multipath_interfaces:
    multipath_interfaces: &task_multipath_interfaces
      module: amnes.modules.ShellCommandBase
      params:
        <<: *out_dest_devnull
        command: "ip link set dev enp0s3 multipath off && ip link set dev enp0s8 multipath off && sysctl -w net.mptcp.mptcp_checksum=0"
    multipath_scheduler: &task_multipath_scheduler
      module: amnes.modules.ShellCommandBase
      params:
        <<: *out_dest_devnull
        command: "sysctl -w net.mptcp.mptcp_scheduler=[[scheduler]]"
```
Finally, we define a task that simply executes a sleep command. We will see its use later.
```yaml
  # Sleep
  tasks_sleep:
    sleep: &task_sleep
      module: amnes.modules.ShellCommandBase
      params:
        <<: *out_dest_devnull
        command: "sleep 3"
```
Template Configuration
Now that all tasks we want to use in our experiment are defined, we can create the template for a concrete experiment. In this part of the configuration, the order of execution is defined by stages: all tasks assigned to the same stage may be executed in parallel, and the stages themselves are executed in a fixed order. This makes it very easy to model dependencies in the experiment execution process. We can now define the stages we need under the `stages` key.
```yaml
#
# Template
#
template:
  stages:
    - Multipath-Setup
    - NetEm-Setup-Rate
    - NetEm-Setup-Delay
    - Iperf
    - NetEm-Cleanup
    - Sleep
  nodes:
    endpoint_a:
      endpoint:
        address: "10.254.0.101"
        port: 22022
      tasks:
        multipath_interfaces:
          stage: Multipath-Setup
          <<: *task_multipath_interfaces
        multipath_scheduler:
          stage: Multipath-Setup
          <<: *task_multipath_scheduler
        tcpdump_enp0s9:
          stage: Iperf
          <<: *task_tcpdump_enp0s9
        tcpdump_enp0s10:
          stage: Iperf
          <<: *task_tcpdump_enp0s10
        iperf_server:
          stage: Iperf
          <<: *task_iperf_server
        sleep:
          stage: Sleep
          <<: *task_sleep
    endpoint_b:
      endpoint:
        address: "10.254.0.102"
        port: 22022
      tasks:
        multipath_interfaces:
          stage: Multipath-Setup
          <<: *task_multipath_interfaces
        multipath_scheduler:
          stage: Multipath-Setup
          <<: *task_multipath_scheduler
        iperf_client:
          stage: Iperf
          <<: *task_iperf_client
        sleep:
          stage: Sleep
          <<: *task_sleep
    emulator_1:
      endpoint:
        address: "10.254.0.201"
        port: 22022
      tasks:
        netem_setup_enp0s9_rate:
          stage: NetEm-Setup-Rate
          <<: *task_netem_setup_enp0s9_rate_p1
        netem_setup_enp0s10_rate:
          stage: NetEm-Setup-Rate
          <<: *task_netem_setup_enp0s10_rate_p1
        netem_setup_enp0s9_delay:
          stage: NetEm-Setup-Delay
          <<: *task_netem_setup_enp0s9_delay_p1
        netem_setup_enp0s10_delay:
          stage: NetEm-Setup-Delay
          <<: *task_netem_setup_enp0s10_delay_p1
        netem_cleanup_enp0s9:
          stage: NetEm-Cleanup
          <<: *task_netem_cleanup_enp0s9
        netem_cleanup_enp0s10:
          stage: NetEm-Cleanup
          <<: *task_netem_cleanup_enp0s10
    emulator_2:
      endpoint:
        address: "10.254.0.202"
        port: 22022
      tasks:
        netem_setup_enp0s9_rate:
          stage: NetEm-Setup-Rate
          <<: *task_netem_setup_enp0s9_rate_p2
        netem_setup_enp0s10_rate:
          stage: NetEm-Setup-Rate
          <<: *task_netem_setup_enp0s10_rate_p2
        netem_setup_enp0s9_delay:
          stage: NetEm-Setup-Delay
          <<: *task_netem_setup_enp0s9_delay_p2
        netem_setup_enp0s10_delay:
          stage: NetEm-Setup-Delay
          <<: *task_netem_setup_enp0s10_delay_p2
        netem_cleanup_enp0s9:
          stage: NetEm-Cleanup
          <<: *task_netem_cleanup_enp0s9
        netem_cleanup_enp0s10:
          stage: NetEm-Cleanup
          <<: *task_netem_cleanup_enp0s10
```
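Conceptually, the execution semantics of this template can be sketched as follows: stages run in their fixed order, and all tasks assigned to the same stage may run in parallel. This is a simplified model for illustration, not AMNES's actual scheduler:

```python
from concurrent.futures import ThreadPoolExecutor

def run_experiment(stages, tasks):
    """Run stages in fixed order; tasks within a stage run in parallel.

    `tasks` maps a stage name to a list of zero-argument callables.
    Simplified model of the stage semantics, not AMNES's scheduler.
    """
    log = []
    for stage in stages:                    # stages execute strictly in order
        with ThreadPoolExecutor() as pool:  # tasks of one stage run in parallel
            for result in pool.map(lambda task: task(), tasks.get(stage, [])):
                log.append(result)
    return log

# Abbreviated stage plan from the template above.
stages = ["NetEm-Setup-Rate", "Iperf", "NetEm-Cleanup"]
tasks = {
    "NetEm-Setup-Rate": [lambda: "rate enp0s9", lambda: "rate enp0s10"],
    "Iperf": [lambda: "iperf server", lambda: "iperf client"],
    "NetEm-Cleanup": [lambda: "cleanup"],
}
print(run_experiment(stages, tasks))
```

Because every stage waits for all of its tasks to finish before the next stage starts, dependencies such as "clean up NetEm only after the measurement" fall out naturally from the stage order.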
Adding ParameterSets
Since we used placeholders in the NodeTask definitions, we can now perform different series of experiments with different ParameterSets. Within a ParameterSet, the cartesian product of all parameter assignments is formed, and the template is executed once for each resulting assignment. We use this possibility in our example experiment to compare scenarios of different network heterogeneity using different MPTCP schedulers.
```yaml
parameters:
  homogeneous:
    name: "Homogeneous"
    assignments:
      "scheduler": ["default", "redundant", "roundrobin"]
      "netem_p1_rate_enp0s9": ["5mibps"]
      "netem_p1_rate_enp0s10": ["5mibps"]
      "netem_p1_delay_enp0s9": ["10ms"]
      "netem_p1_delay_enp0s10": ["10ms"]
      "netem_p2_rate_enp0s9": ["5mibps"]
      "netem_p2_rate_enp0s10": ["5mibps"]
      "netem_p2_delay_enp0s9": ["10ms"]
      "netem_p2_delay_enp0s10": ["10ms"]
  heterogeneous-dsl:
    name: "Heterogeneous Slow DSL"
    assignments:
      "scheduler": ["default", "redundant", "roundrobin"]
      "netem_p1_rate_enp0s9": ["5mibps"]
      "netem_p1_rate_enp0s10": ["5mibps"]
      "netem_p1_delay_enp0s9": ["10ms"]
      "netem_p1_delay_enp0s10": ["10ms"]
      "netem_p2_rate_enp0s9": ["2mibps"]
      "netem_p2_rate_enp0s10": ["2mibps"]
      "netem_p2_delay_enp0s9": ["20ms"]
      "netem_p2_delay_enp0s10": ["20ms"]
  heterogeneous-umts:
    name: "Heterogeneous UMTS Mobile Network"
    assignments:
      "scheduler": ["default", "redundant", "roundrobin"]
      "netem_p1_rate_enp0s9": ["5mibps"]
      "netem_p1_rate_enp0s10": ["5mibps"]
      "netem_p1_delay_enp0s9": ["10ms"]
      "netem_p1_delay_enp0s10": ["10ms"]
      "netem_p2_rate_enp0s9": ["50kibps"]
      "netem_p2_rate_enp0s10": ["50kibps"]
      "netem_p2_delay_enp0s9": ["20ms"]
      "netem_p2_delay_enp0s10": ["20ms"]
```
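To see how many experiment runs such a ParameterSet produces, recall that the cartesian product of all value lists is formed. A sketch for the homogeneous set above (abbreviated to the varying `scheduler` parameter plus one fixed parameter):

```python
from itertools import product

# Cartesian-product expansion of a ParameterSet into concrete assignments.
# Only "scheduler" has multiple values here, so the product has three entries.
assignments = {
    "scheduler": ["default", "redundant", "roundrobin"],
    "netem_p1_rate_enp0s9": ["5mibps"],
}
keys = list(assignments)
runs = [dict(zip(keys, values)) for values in product(*assignments.values())]

print(len(runs))             # 3
print(runs[0]["scheduler"])  # default
```

Each set therefore expands to three assignments, and combined with `repetitions: 10` from the project configuration, each assignment is measured repeatedly.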
Data Extraction
After the experiment has been executed using the CLI, the collected files can be further processed and analyzed via the StorageBackend interface described in the API documentation.