# Switch scalability test with idle Multinet switches
This test is an expansion of the Switch scalability test with idle Mininet switches. The only difference is that, instead of using Mininet custom topologies to generate the topology of idle switches, we use Multinet. Multinet is also a Mininet-based topology generator, but one that can generate much larger topologies using clustering. The integration of Multinet into NSTAT makes it possible to stress the controller more efficiently.
A switch scalability test with the Multinet topology generator can be started by specifying the following option on the NSTAT command line:

```
--test=sb_idle_scalability_multinet
```
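To see all supported values of `--test`, along with the rest of the command-line options, you can ask the orchestrator itself (assuming it exposes the standard `--help` flag):

```bash
# Print the orchestrator's usage message (run from the NSTAT base directory)
python3.4 ./stress_test/nstat_orchestrator.py --help
```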
Under the `stress_test/sample_test_confs/` directory, the JSON files ending in `_sb_idle_scalability_multinet` can be used as template configuration files for this kind of test scenario. You can pass one of them to the `--json-config` option to run a sample test. For larger-scale stress tests, have a look at the corresponding files under the `stress_test/stress_test_confs/` directory.
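For instance, to locate the available templates for this scenario (a quick sketch; the exact file names depend on the NSTAT release):

```bash
# Run from the NSTAT base directory; lists the matching sample configurations
find stress_test/sample_test_confs/ -name '*sb_idle_scalability_multinet*.json'
```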
- Follow the instructions of the installation wiki up to step 3.
- Edit the file `/vagrant/packaged_multi/Vagrantfile` and uncomment the following section:
```ruby
# VMs setup for sample Multinet tests-------------------------------------------
nstat_node_names = ['mn01', 'mn02', 'mn03', 'ndcntrlr', 'ndnstat']
nstat_node_vm_ram_arr = [2048, 2048, 2048, 16384, 2048]
nstat_node_vm_cpu_arr = [1, 1, 1, 4, 1]
```
and comment out all the other sections:
```ruby
# VMs setup for sample MT-Cbench tests------------------------------------------
#nstat_node_names = ['ndnstat', 'ndcntrlr', 'ndcbench']
#nstat_node_vm_ram_arr = [2048, 16384, 2048]
#nstat_node_vm_cpu_arr = [1, 4, 1]

# VMs setup for sample Mininet tests--------------------------------------------
#nstat_node_names = ['ndmn', 'ndcntrlr', 'ndnstat']
#nstat_node_vm_ram_arr = [2048, 16384, 2048]
#nstat_node_vm_cpu_arr = [1, 4, 1]

# VMs setup for non multinet tests----------------------------------------------
#nstat_node_names = ['mn01']
#nstat_node_vm_ram_arr = [4096]  # in MB
#nstat_node_vm_cpu_arr = [2]     # number of CPUs

# VMs setup for multinet tests--------------------------------------------------
#nstat_node_names = ['mn01', 'mn02', 'mn03', 'mn04', 'mn05', 'mn06', 'mn07', 'mn08', 'mn09', 'mn10', 'mn11', 'mn12', 'mn13', 'mn14', 'mn15', 'mn16']
#nstat_node_vm_ram_arr = [4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096, 4096]
#nstat_node_vm_cpu_arr = [...]
```
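After editing, you can double-check which section is active (an optional sanity check, not part of the original instructions):

```bash
# Only the three arrays of the "sample Multinet tests" section should be printed
grep -E '^nstat_node_(names|vm_ram_arr|vm_cpu_arr)' /vagrant/packaged_multi/Vagrantfile
```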
- cd into the `packaged_multi` directory:

```bash
cd <NSTAT base dir>/vagrant/packaged_multi
```

and run:

```bash
vagrant up
```
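Once provisioning finishes, you can verify that all five VMs are up (optional check):

```bash
# All nodes defined in the Vagrantfile should report state "running"
vagrant status
```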
The IP addresses of all deployed VMs, and the credentials used to open SSH connections to them, must be configured in the JSON configuration file of the sample test we want to run. This must be done on the NSTAT node (nstat_node).
- SSH into nstat_node. The password to connect is `vagrant`.
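For example, using the NSTAT node IP address from the sample configuration below (adjust it to your own deployment):

```bash
# Password: vagrant
ssh vagrant@192.168.100.24
```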
- Edit the JSON file `/home/vagrant/nstat/stress_test/sample_test_confs/lithium_sr3/lithium_sb_idle_scalability_multinet.json`, changing the IP addresses and SSH credentials in the following lines:
"nstat_node_spec":"/nstat/vagrant/node_nstat/Vagrantfile", "nstat_host_ip":"127.0.0.1", "nstat_node_ip":"192.168.100.24", "nstat_node_ssh_port":"22", "nstat_node_username":"vagrant", "nstat_node_password":"vagrant", "controller_node_spec":"/nstat/vagrant/node_controller/Vagrantfile", "controller_host_ip":"127.0.0.1", "controller_node_ip":"192.168.100.23", "controller_node_ssh_port":"22", "controller_node_username":"vagrant", "controller_node_password":"vagrant", "topology_node_spec":"/nstat/vagrant/node_cbench/Vagrantfile", "topology_host_ip":"127.0.0.1", "topology_node_ip":"192.168.100.20", "topology_node_ssh_port":"22", "topology_node_username":"vagrant", "topology_node_password":"vagrant",
In the same file you must also change the IP address list and the REST interface port list of the Multinet workers:
"multinet_worker_ip_list":["192.168.100.20", "192.168.100.21", "192.168.100.22"], "multinet_worker_port_list":[3333, 3333, 3333],
In this setup the Multinet master node acts as a worker too.
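If you deploy additional worker VMs, both lists simply grow in lockstep, one entry per worker. A sketch with a hypothetical fourth worker at 192.168.100.25:

```json
"multinet_worker_ip_list":["192.168.100.20", "192.168.100.21", "192.168.100.22", "192.168.100.25"],
"multinet_worker_port_list":[3333, 3333, 3333, 3333],
```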
In order to run the test:

- cd into `/home/vagrant/nstat`:

```bash
cd /home/vagrant/nstat
```
- Execute the following commands:

```bash
export CONFIG_FILENAME="lithium_sb_idle_scalability_multinet"
export WORKSPACE='/home/vagrant/nstat'
export PYTHONPATH=$WORKSPACE
export OUTPUT_FILENAME=$CONFIG_FILENAME
export RESULTS_DIR=results_"$CONFIG_FILENAME"

python3.4 ./stress_test/nstat_orchestrator.py \
    --test="sb_idle_scalability_multinet" \
    --ctrl-base-dir=$WORKSPACE/controllers/odl_lithium_sr3_pb/ \
    --sb-generator-base-dir=$WORKSPACE/emulators/multinet/ \
    --json-config=$WORKSPACE/stress_test/sample_test_confs/lithium_sr3/$CONFIG_FILENAME".json" \
    --json-output=$WORKSPACE/$OUTPUT_FILENAME"_results.json" \
    --html-report=$WORKSPACE/report.html \
    --output-dir=$WORKSPACE/$RESULTS_DIR/
```
Once test execution is over, inspect the results under `/home/vagrant/nstat/results_lithium_sb_idle_scalability_multinet`.
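For a quick look at the raw per-sample results without opening the HTML report, you can pretty-print the file produced by `--json-output` (standard Python tooling, nothing NSTAT-specific):

```bash
python3 -m json.tool /home/vagrant/nstat/lithium_sb_idle_scalability_multinet_results.json | less
```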
The configuration keys that must be specified in the JSON configuration file are:
| config key | type | description |
|---|---|---|
| `nstat_node_spec` | string | Vagrant provisioning script for the NSTAT host machine. This configuration key is currently not in use and is reserved for future releases of NSTAT. |
| `nstat_host_ip` | string | IP address of the NSTAT host machine. This configuration key is currently not in use and is reserved for future releases of NSTAT. |
| `nstat_node_ip` | string | IP address of the NSTAT node |
| `nstat_node_ssh_port` | string | the SSH port of the NSTAT node |
| `nstat_node_username` | string | username for SSH login to the NSTAT node |
| `nstat_node_password` | string | password for SSH login to the NSTAT node |
| `controller_node_spec` | string | Vagrant provisioning script for the controller VM. This configuration key is currently not in use and is reserved for future releases of NSTAT. |
| `controller_host_ip` | string | IP address of the host machine where the controller VM will be created. This configuration key is currently not in use and is reserved for future releases of NSTAT. |
| `controller_node_ip` | string | IP address of the controller node |
| `controller_node_ssh_port` | string | the SSH port of the controller node |
| `controller_node_username` | string | username for SSH login to the controller node |
| `controller_node_password` | string | password for SSH login to the controller node |
| `topology_node_spec` | string | Vagrant provisioning script for the Multinet node. This configuration key is currently not in use and is reserved for future releases of NSTAT. |
| `topology_host_ip` | string | IP address of the Multinet host machine. This configuration key is currently not in use and is reserved for future releases of NSTAT. |
| `topology_node_ip` | string | IP address of the Multinet node. Based on the Multinet documentation, this is the IP address of the Multinet master node. This configuration key is currently not in use and is reserved for future releases of NSTAT. |
| `topology_node_ssh_port` | string | the SSH port of the Multinet node |
| `topology_node_username` | string | username for SSH login to the Multinet node |
| `topology_node_password` | string | password for SSH login to the Multinet node |
| `controller_build_handler` | string | executable for building the controller (relative to the `--ctrl-base-dir` command-line parameter) |
| `controller_start_handler` | string | executable for starting the controller (relative to the `--ctrl-base-dir` command-line parameter) |
| `controller_stop_handler` | string | executable for stopping the controller (relative to the `--ctrl-base-dir` command-line parameter) |
| `controller_status_handler` | string | executable for querying controller status (relative to the `--ctrl-base-dir` command-line parameter) |
| `controller_clean_handler` | string | executable for cleaning up the controller directory (relative to the `--ctrl-base-dir` command-line parameter) |
| `controller_statistics_handler` | string | executable for changing the period at which the controller collects topology statistics (relative to the `--ctrl-base-dir` command-line parameter) |
| `controller_logs_dir` | string | controller logs directory (relative to the `--ctrl-base-dir` command-line parameter) |
| `controller_rebuild` | boolean | whether to build the controller during test initialization |
| `controller_cleanup` | boolean | whether to clean up the controller after test completion |
| `controller_name` | string | descriptive name for the controller |
| `controller_restart` | boolean | whether to restart the controller in every iteration of the test |
| `controller_port` | number | controller port number where OF switches should connect |
| `controller_restconf_port` | number | controller RESTCONF port number |
| `controller_restconf_user` | string | controller RESTCONF user name |
| `controller_restconf_password` | string | controller RESTCONF password |
| **`controller_statistics_period_ms`** | array of numbers | the different statistics period values for the controller (in ms) |
| `controller_cpu_shares` | number | the percentage of CPU resources of the physical machine to be assigned to the controller process |
| `topology_rest_server_boot` | string | executable that boots up all REST servers on the Multinet master and worker nodes. The root path of this executable is defined by the `--sb-generator-base-dir` command-line parameter |
| `topology_rest_server_stop` | string | executable that stops all REST servers initiated by Multinet on the master and worker nodes. The root path of this executable is defined by the `--sb-generator-base-dir` command-line parameter |
| `topology_server_rest_port` | number | the port that the Multinet server will listen to |
| `topology_init_handler` | string | executable that initializes a Multinet topology. The root path of this executable is defined by the `--sb-generator-base-dir` command-line parameter |
| `topology_start_switches_handler` | string | executable that starts a Multinet topology. The root path of this executable is defined by the `--sb-generator-base-dir` command-line parameter |
| `topology_stop_switches_handler` | string | executable that stops a Multinet topology. The root path of this executable is defined by the `--sb-generator-base-dir` command-line parameter |
| `topology_get_switches_handler` | string | executable that retrieves the number of booted switches in a Multinet topology. The root path of this executable is defined by the `--sb-generator-base-dir` command-line parameter |
| **`topology_size`** | array of numbers | number of Multinet switches per worker. The total number of switches is equal to `topology_size` × number of workers |
| **`topology_type`** | array of strings | type of Multinet topology {`RingTopo`, `LinearTopo`, `DisconnectedTopo`} |
| **`topology_hosts_per_switch`** | array of numbers | number of Multinet hosts per switch |
| **`topology_group_size`** | array of numbers | size of a group of switches |
| **`topology_group_delay_ms`** | array of numbers | delay between different switch groups (in milliseconds) |
| `multinet_clean_handler` | string | executable that locally cleans up the Multinet files cloned from the Multinet repository. The root path of this executable is defined by the `--sb-generator-base-dir` command-line parameter |
| `multinet_build_handler` | string | executable that locally clones the Multinet files from the Multinet repository. The root path of this executable is defined by the `--sb-generator-base-dir` command-line parameter |
| `multinet_worker_ip_list` | array of strings | a list of the IP addresses of all worker nodes |
| `multinet_worker_port_list` | array of numbers | a list of the port numbers of the REST servers on all worker nodes |
| `multinet_switch_type` | string | the type of software switch that will be used in the Multinet topology |
| `java_opts` | array of strings | Java options (usually used to define the Java VM memory and garbage collector configuration) |
| `plots` | array of plot objects | configurations for plots to be produced after the test |
**Note: all IP addresses described for the configuration keys in the table above must be in the same subnet. Also, all SSH users must be able to execute commands in privileged mode without a password.**
The array-valued configuration keys shown in bold are the test dimensions of the test scenario. The stress test will be repeated over all possible combinations of their values.
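For example, with the (hypothetical) values below, and assuming every other test dimension holds a single value, the test would be repeated 2 × 1 × 2 = 4 times, once per combination:

```json
"topology_size":[100, 200],
"topology_group_size":[10],
"topology_group_delay_ms":[0, 500],
```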
The most important configuration keys, which control the stress level applied to the controller, are:

- `topology_size`: defines the total number of switches in the network (controls switch scalability).
- `topology_group_size`: defines the batch size of switches to be started.
- `topology_group_delay_ms`: defines the delay between batches of switches.

With the `topology_group_size` and `topology_group_delay_ms` parameters we can start a topology of switches gradually, rather than all at once (the latter is actually the case where `topology_group_delay_ms` = 0), as illustrated in the sketch below.
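As an illustration with hypothetical numbers: the values below make each worker boot its switches in 200 / 10 = 20 batches, spaced 500 ms apart, i.e. roughly 10 seconds of staggered start-up per worker instead of an instantaneous burst:

```json
"topology_size":[200],
"topology_group_size":[10],
"topology_group_delay_ms":[500],
```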
For details on the `plots` configuration key, see the plotting page.
The result keys produced by this test scenario, which can subsequently be used to generate custom plots, are the following:
| Result key | type | description |
|---|---|---|
| `global_sample_id` | number | unique (serial) ID for this sample |
| `timestamp` | number | unique timestamp for this sample |
| `date` | string | date this sample was taken |
| **`bootup_time_secs`** | number | topology bootup time: the time interval (in seconds) between the start of the topology elements and the discovery of the full topology in the controller datastore |
| `discovered_switches` | number | the number of switches discovered by the controller |
| `multinet_size` | number | number of Multinet switches connected to the controller |
| `multinet_worker_topo_size` | number | number of switches created within a single worker node |
| `multinet_workers` | number | total number of worker nodes deployed by Multinet |
| `multinet_topology_type` | string | Multinet network topology type {`ring`, `linear`, `disconnected`, `mesh`} |
| `multinet_hosts_per_switch` | number | number of Multinet hosts per switch |
| `multinet_group_size` | number | size of a group of switches |
| `multinet_group_delay_ms` | number | delay between different switch groups (in ms) |
| `controller_node_ip` | string | controller IP address where OF switches were connected |
| `controller_port` | number | controller port number where OF switches should connect |
| `controller_java_xopts` | array of strings | controller Java optimization flags (`-X`) |
| `one_minute_load` | number | one-minute average system load |
| `five_minute_load` | number | five-minute average system load |
| `fifteen_minute_load` | number | fifteen-minute average system load |
| `used_memory_bytes` | number | system used memory in bytes |
| `total_memory_bytes` | number | system total memory in bytes |
| `controller_cpu_shares` | number | the percentage of CPU resources of the physical machine allocated to the controller process |
| `controller_cpu_system_time` | number | CPU system time for the controller |
| `controller_cpu_user_time` | number | CPU user time for the controller |
| `controller_num_threads` | number | number of controller threads measured when this sample was taken |
| `controller_num_fds` | number | number of open file descriptors measured when this sample was taken |
| `controller_statistics_period_ms` | number | the interval (in ms) of the statistics period of the controller |
The result key shown in bold (`bootup_time_secs`) is the main performance metric produced by this test scenario. Another important result key is `discovered_switches`: in successful test cases its value must be equal to `multinet_size`.
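A minimal way to eyeball this equality in the results file (the path follows from the sample run above; the file layout is an assumption here, so adjust as needed):

```bash
# Print the two result keys side by side for every sample
python3 -m json.tool /home/vagrant/nstat/lithium_sb_idle_scalability_multinet_results.json \
  | grep -E '"(discovered_switches|multinet_size)"'
```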
Sample experimental results for this test are shown below.