Switch scalability test with active MT Cbench switches


Test description

This is a switch scalability test with switches emulated by MT-Cbench. Its goal is to explore the maximum number of switches the controller can sustain while they consistently initiate traffic towards it (active), and how the controller's serving throughput scales as more switches are added. MT-Cbench switches send artificial OF1.0 Packet-In messages to the controller, which replies with equally artificial OF1.0 Flow-Mod messages; these message types dominate the traffic exchanged between the switches and the controller. The controller should be started with the drop-test feature installed so that it can reply to MT-Cbench messages. The emulated switches are arranged in a disconnected topology, meaning they have no interconnections between them. This, together with its limited protocol support, makes MT-Cbench a special-purpose OF traffic generator rather than a full-fledged, realistic OF switch emulator.

Usage

A switch scalability test with active MT-Cbench switches can be started by specifying the following options on the NSTAT command line:

  • --test=sb_active_scalability_mtcbench
  • --sb-generator-base-dir=<MT-Cbench dir>

Under the stress_test/sample_test_confs/ directory, the JSON files ending in _sb_active_scalability_mtcbench can serve as template configuration files for this kind of test scenario. Pass one of them to the --json-config option to run a sample test. For larger-scale stress tests, have a look at the corresponding files under the stress_test/stress_test_confs/ directory.
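To locate these templates quickly, the following command should list them; this is a convenience sketch assuming the layout described above, where the samples live in release-specific subdirectories (e.g. lithium_sr3/):

    # from the NSTAT base directory: list the sample configurations
    # for this test scenario
    ls stress_test/sample_test_confs/*/*_sb_active_scalability_mtcbench.json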

Deployment

VMs Deployment

  • Follow the instructions of the installation wiki up to step 3.

  • Edit the file /vagrant/packaged_multi/Vagrantfile and uncomment the following section:

    # VMs setup for sample MT-Cbench tests------------------------------------------
    nstat_node_names = ['ndnstat', 'ndcntrlr', 'ndcbench']
    nstat_node_vm_ram_arr = [2048, 16384, 2048] # in MB
    nstat_node_vm_cpu_arr = [1, 4, 1] # number of CPUs

    and comment out all other sections:

    # VMs setup for sample Mininet tests--------------------------------------------
    #nstat_node_names = ['ndmn', 'ndcntrlr', 'ndnstat']
    #nstat_node_vm_ram_arr = [2048, 16384, 2048]
    #nstat_node_vm_cpu_arr = [1, 4, 1]
    
    # VMs setup for sample Multinet tests-------------------------------------------
    #nstat_node_names = ['mn01', 'mn02', 'mn03', 'ndcntrlr', 'ndnstat']
    #nstat_node_vm_ram_arr = [2048, 2048, 2048, 16384, 2048]
    #nstat_node_vm_cpu_arr = [1, 1, 1, 4, 1]
    
    # VMs setup for non multinet tests----------------------------------------------
    #nstat_node_names = ['mn01']
    #nstat_node_vm_ram_arr = [4096] # in MB
    #nstat_node_vm_cpu_arr = [2] # number of CPUs
    # VMs setup for multinet tests--------------------------------------------------
    #nstat_node_names = ['mn01', 'mn02', 'mn03', 'mn04', 'mn05', 'mn06', 'mn07', 'mn08', 'mn09', 'mn10', 'mn11', 'mn12', 'mn13', 'mn14', 'mn15', 'mn16']
    #nstat_node_vm_ram_arr = [4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096]
    #nstat_node_vm_cpu_arr = [...]
  • cd into the /vagrant/packaged_multi directory:

    cd <NSTAT base dir>/vagrant/packaged_multi

    and run the command:

    vagrant up
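
    Provisioning the VMs may take several minutes. As an optional sanity check (not part of the original procedure), verify that all nodes are up and running:

    vagrant status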

Change the test configuration file

The IP addresses of all deployed VMs, along with the credentials for opening SSH connections to them, must be configured in the JSON configuration file of the sample test we want to run. This must be done on the NSTAT node (nstat_node).

  • SSH into the NSTAT node

    The password to connect is vagrant.
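
    For example, using the nstat_node_ip from the sample configuration below (adjust it if you changed the address):

    ssh vagrant@192.168.100.20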

  • Edit the JSON file /home/vagrant/nstat/stress_test/sample_test_confs/lithium_sr3/lithium_RPC_sb_active_scalability_mtcbench.json, adjusting the IP addresses and SSH credentials in the following lines:

    "nstat_node_spec":"/nstat/vagrant/node_nstat/Vagrantfile",
    "nstat_host_ip":"127.0.0.1",
    "nstat_node_ip":"192.168.100.20",
    "nstat_node_ssh_port":"22",
    "nstat_node_username":"vagrant",
    "nstat_node_password":"vagrant",
    
    "controller_node_spec":"/nstat/vagrant/node_controller/Vagrantfile",
    "controller_host_ip":"127.0.0.1",
    "controller_node_ip":"192.168.100.21",
    "controller_node_ssh_port":"22",
    "controller_node_username":"vagrant",
    "controller_node_password":"vagrant",
    
    "cbench_node_spec":"/nstat/vagrant/node_cbench/Vagrantfile",
    "cbench_host_ip":"127.0.0.1",
    "cbench_node_ip":"192.168.100.22",
    "cbench_node_ssh_port":"22",
    "cbench_node_username":"vagrant",
    "cbench_node_password":"vagrant",

Run the test

To run the test:

  • Execute the following commands:

    export CONFIG_FILENAME="lithium_RPC_sb_active_scalability_mtcbench"
    export WORKSPACE='/home/vagrant/nstat'
    export PYTHONPATH=$WORKSPACE
    export OUTPUT_FILENAME=$CONFIG_FILENAME
    export RESULTS_DIR="results_"$CONFIG_FILENAME
    
    python3.4 ./stress_test/nstat_orchestrator.py \
        --test="sb_active_scalability_mtcbench" \
        --ctrl-base-dir=$WORKSPACE/controllers/odl_lithium_sr3_pb/ \
        --sb-generator-base-dir=$WORKSPACE/emulators/mt_cbench/ \
        --json-config=$WORKSPACE/stress_test/sample_test_confs/lithium_sr3/$CONFIG_FILENAME".json" \
        --json-output=$WORKSPACE/$OUTPUT_FILENAME"_results.json" \
        --html-report=$WORKSPACE/report.html \
        --output-dir=$WORKSPACE/$RESULTS_DIR/
  • Inspect the results under /home/vagrant/nstat/results_lithium_RPC_sb_active_scalability_mtcbench
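
    The --json-output file gathers the per-sample result keys described in the Result keys section below. Assuming it holds a JSON array of sample objects (an assumption about the layout, which is not documented here), a quick look at the throughput samples could be taken with a tool such as jq:

    # hypothetical: print the throughput of every sample
    jq '.[].throughput_responses_sec' $WORKSPACE/$OUTPUT_FILENAME"_results.json"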

Configuration keys

The configuration keys that must be specified in the JSON configuration file are:

| Config key | Type | Description |
|------------|------|-------------|
| nstat_node_spec | string | Vagrant provisioning script for the NSTAT VM |
| nstat_host_ip | string | IP address of the host machine where the NSTAT VM will be created |
| nstat_node_ip | string | IP address of the NSTAT VM |
| nstat_node_ssh_port | string | SSH port of the NSTAT VM |
| nstat_node_username | string | username for SSH login to the NSTAT VM |
| nstat_node_password | string | password for SSH login to the NSTAT VM |
| controller_node_spec | string | Vagrant provisioning script for the controller VM |
| controller_host_ip | string | IP address of the host machine where the controller VM will be created |
| controller_node_ip | string | IP address of the controller VM |
| controller_node_ssh_port | string | SSH port of the controller VM |
| controller_node_username | string | username for SSH login to the controller VM |
| controller_node_password | string | password for SSH login to the controller VM |
| cbench_node_spec | string | Vagrant provisioning script for the MT-Cbench VM |
| cbench_host_ip | string | IP address of the host machine where the MT-Cbench VM will be created |
| cbench_node_ip | string | IP address of the MT-Cbench VM |
| cbench_node_ssh_port | string | SSH port of the MT-Cbench VM |
| cbench_node_username | string | username for SSH login to the MT-Cbench VM |
| cbench_node_password | string | password for SSH login to the MT-Cbench VM |
| controller_build_handler | string | executable for building the controller (relative to the --ctrl-base-dir command line parameter) |
| controller_start_handler | string | executable for starting the controller (relative to the --ctrl-base-dir command line parameter) |
| controller_stop_handler | string | executable for stopping the controller (relative to the --ctrl-base-dir command line parameter) |
| controller_status_handler | string | executable for querying controller status (relative to the --ctrl-base-dir command line parameter) |
| controller_clean_handler | string | executable for cleaning up the controller directory (relative to the --ctrl-base-dir command line parameter) |
| controller_statistics_handler | string | executable for changing the period at which the controller collects topology statistics (relative to the --ctrl-base-dir command line parameter) |
| controller_logs_dir | string | controller logs directory (relative to the --ctrl-base-dir command line parameter) |
| controller_rebuild | boolean | whether to build the controller during test initialization |
| controller_cleanup | boolean | whether to clean up the controller after test completion |
| controller_name | string | descriptive name for the controller |
| controller_port | number | controller port number where OF switches should connect |
| **controller_statistics_period_ms** | array of numbers | statistics period values for the controller (in ms) |
| controller_cpu_shares | number | percentage of the physical machine's CPU resources to be assigned to the controller process |
| cbench_build_handler | string | executable for building MT-Cbench (relative to the --sb-generator-base-dir command line parameter) |
| cbench_run_handler | string | executable for running MT-Cbench (relative to the --sb-generator-base-dir command line parameter) |
| cbench_clean_handler | string | executable for cleaning up MT-Cbench (relative to the --sb-generator-base-dir command line parameter) |
| cbench_rebuild | boolean | whether to build MT-Cbench during test initialization |
| cbench_cleanup | boolean | whether to clean up MT-Cbench after test completion |
| cbench_name | string | descriptive name for MT-Cbench |
| **cbench_simulated_hosts** | array of numbers | number of hosts (MACs) simulated by MT-Cbench |
| **cbench_threads** | array of numbers | total number of MT-Cbench threads |
| **cbench_switches_per_thread** | array of numbers | number of OF switches simulated per MT-Cbench thread |
| **cbench_thread_creation_delay_ms** | array of numbers | delay (in ms) between creation of consecutive MT-Cbench threads |
| **cbench_delay_before_traffic_ms** | array of numbers | delay (in ms) before MT-Cbench threads start transmitting OF traffic |
| cbench_mode | string | MT-Cbench mode ("Latency" or "Throughput") |
| cbench_warmup | number | number of initial internal iterations treated as "warmup" and not considered when computing aggregate performance results |
| cbench_ms_per_test | number | duration (in ms) of a generator internal iteration |
| cbench_internal_repeats | number | number of internal iterations during traffic transmission where performance and other statistics are sampled |
| cbench_cpu_shares | number | percentage of the physical machine's CPU resources to be assigned to the MT-Cbench process |
| java_opts | array of strings | Java options to initialize the JAVA_OPTS environment variable |
| test_repeats | number | number of external iterations for a test, i.e. the number of times a test should be repeated to derive aggregate results (average, min, max, etc.) |
| plots | array of plot objects | configurations for plots to be produced after the test |

The configuration keys shown in bold (all array-valued) are the test dimensions of the test scenario. The stress test will be repeated over all possible combinations of their values.

The most important configuration keys are:

  • cbench_threads
  • cbench_switches_per_thread
  • cbench_thread_creation_delay_ms

These keys determine how switches are progressively booted into the SDN network, and allow finding the combination of values that boots a topology of a given size optimally. The values of cbench_threads and cbench_switches_per_thread define the total number of network nodes (the topology size) connected to the controller, which is equal to (cbench_threads * cbench_switches_per_thread).
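
For illustration, a hypothetical excerpt of these keys (the values below are made up for this example and are not taken from any shipped configuration file):

    "cbench_threads": [10, 20],
    "cbench_switches_per_thread": [5, 10],
    "cbench_thread_creation_delay_ms": [500]

Since the stress test iterates over all combinations of the array values, this excerpt would produce four runs with topologies of 10*5 = 50, 10*10 = 100, 20*5 = 100 and 20*10 = 200 switches.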

Plot configuration

See the plotting page.

Result keys

The result keys produced by this kind of test scenario, which can subsequently be used to generate custom plots, are the following:

| Result key | Type | Description |
|------------|------|-------------|
| global_sample_id | number | unique (serial) ID for this sample |
| timestamp | number | unique timestamp for this sample |
| date | string | date this sample was taken |
| test_repeats | number | number of times the test was repeated (for reliability reasons) |
| repeat_id | number | ID of the external iteration of this sample |
| cbench_internal_repeats | number | number of internal iterations during traffic transmission where performance and other statistics were sampled |
| internal_repeat_id | number | ID of the internal MT-Cbench iteration corresponding to this sample |
| **throughput_responses_sec** | number | measured controller throughput (responses/sec) |
| cbench_simulated_hosts | number | number of hosts (MACs) simulated by MT-Cbench |
| cbench_switches | number | total number of MT-Cbench simulated switches (equals cbench_threads * cbench_switches_per_thread) |
| cbench_threads | number | total number of MT-Cbench threads |
| cbench_switches_per_thread | number | number of OF switches simulated per MT-Cbench thread |
| cbench_thread_creation_delay_ms | number | delay (in ms) between creation of consecutive threads |
| cbench_delay_before_traffic_ms | number | delay (in ms) before MT-Cbench threads start transmitting OF traffic |
| cbench_ms_per_test | number | duration (in ms) of an MT-Cbench internal iteration |
| cbench_warmup | number | number of initial internal iterations treated as "warmup" and not considered when computing aggregate performance results |
| cbench_mode | string | generator mode ("Latency" or "Throughput") |
| cbench_cpu_shares | number | percentage of the physical machine's CPU resources allocated to the MT-Cbench process |
| controller_node_ip | string | controller IP address where OF switches connected |
| controller_port | number | controller port number where OF switches connected |
| controller_java_xopts | array of strings | controller Java optimization flags (-X) |
| one_minute_load | number | one-minute average system load |
| five_minute_load | number | five-minute average system load |
| fifteen_minute_load | number | fifteen-minute average system load |
| used_memory_bytes | number | system used memory in bytes |
| total_memory_bytes | number | system total memory in bytes |
| controller_cpu_shares | number | percentage of the physical machine's CPU resources allocated to the controller process |
| controller_cpu_system_time | number | CPU system time for the controller |
| controller_cpu_user_time | number | CPU user time for the controller |
| controller_num_threads | number | number of controller threads measured when this sample was taken |
| controller_num_fds | number | number of open file descriptors measured when this sample was taken |
| controller_statistics_period_ms | number | statistics period of the controller (in ms) |

The result key in bold (throughput_responses_sec) is the main performance metric produced by this test scenario.

Sample experimental results

The following figures show sample results from switch scalability stress tests with the OpenDaylight controller operating in two modes:

  • RPC mode: the controller is configured to directly reply to the switches with a predefined Flow-Mod message at the OpenFlow plugin level (use of the start_droptestRPC.sh handler)
  • DataStore mode: the controller additionally performs updates in its DataStore (use of the start_droptestDS.sh handler)
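
The mode is selected through the controller_start_handler configuration key, which points to the corresponding start script relative to --ctrl-base-dir. A hypothetical excerpt for RPC mode (the exact handler path inside the controller distribution may differ):

    "controller_start_handler": "start_droptestRPC.sh",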
