fragattacks/tests/remote/test_devices.py


tests: Add remote directory to tests

Add tests/remote directory and files:
  config.py - handle the devices/setup_params tables
  run-tests.py - run test cases
  test_devices.py - run basic configuration tests

You can add your own configuration file (cfg.py by default) and put the
devices and setup_params definitions there, in the format shown in
config.py (a sketch follows below). Use the -c option or simply create
a cfg.py file.

Print available devices/test cases:
  ./run-tests.py

Check devices (ssh connection, authorized_keys, interfaces):
  ./run-tests.py -t devices

Run sanity tests (test_sanity_*):
  ./run-tests.py -d <dut_name> -t sanity

Run all tests:
  ./run-tests.py -d <dut_name> -t all

Run test_A and test_B:
  ./run-tests.py -d <dut_name> -t "test_A, test_B"

Set a reference device and run sanity tests:
  ./run-tests.py -d <dut_name> -r <ref_name> -t sanity

Multiple DUTs/refs/monitors can be set up, e.g.:
  ./run-tests.py -d <dut_name> -r <ref1_name> -r <ref2_name> -t sanity

A monitor can be set like this:
  ./run-tests.py -d <dut_name> -t sanity -m all -m <standalone_monitor>

You can also add filters for the tests you would like to run:
  ./run-tests.py -d <dut_name> -t all -k wep -k g_only
  ./run-tests.py -d <dut_name> -t all -k VHT80

run-tests.py doesn't start/terminate wpa_supplicant or hostapd; the test
cases are responsible for that, since we don't know the test case
requirements.

Restart (-R), trace (-T) and perf (-P) options are available; these
request trace/perf logs from the hosts (if possible).

As parameters, each test case gets:
  - devices - table of available devices
  - setup_params
  - duts - names of the DUTs that should be tested
  - refs - names of the reference devices that should be used
  - monitors - list of monitor names

Each test case may return append_text.

Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
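
For illustration only, a minimal cfg.py along the lines described above might look like the sketch below. Only the 'name', 'wpa_supplicant', 'hostapd' and 'iperf' keys are actually referenced by test_devices.py; the remaining field names (hostname, ifname, port, flags, setup_hw) and all values are assumptions, and config.py in this directory remains the authoritative reference for the format.

# Hypothetical cfg.py sketch; key names not used in test_devices.py are assumptions.
devices = [
    {"hostname": "192.168.1.10", "ifname": "wlan0", "port": "22",
     "name": "my-dut", "flags": ""},
    {"hostname": "192.168.1.11", "ifname": "wlan1", "port": "22",
     "name": "my-ref", "flags": ""},
]

setup_params = {"setup_hw": "./tests/setup_hw.sh",
                "hostapd": "/usr/local/bin/hostapd",
                "wpa_supplicant": "/usr/local/bin/wpa_supplicant",
                "iperf": "iperf"}
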
#!/usr/bin/env python2
#
# Show/check devices
# Copyright (c) 2016, Tieto Corporation
#
# This software may be distributed under the terms of the BSD license.
# See README for more details.
import time
import traceback
import config
import os
import sys
import getopt
import re
import logging
logger = logging.getLogger()
import rutils
from remotehost import Host
from wpasupplicant import WpaSupplicant
import hostapd

def show_devices(devices, setup_params):
    """Show/check available devices"""
    print("Devices:")
    for device in devices:
        host = rutils.get_host(devices, device['name'])
        # simple check if authorized_keys works correctly
        status, buf = host.execute(["id"])
        if status != 0:
            print("[" + host.name + "] - ssh communication: FAILED")
            continue
        else:
            print("[" + host.name + "] - ssh communication: OK")
        # check setup_hw works correctly
        rutils.setup_hw_host(host, setup_params)
        # show uname
        status, buf = host.execute(["uname", "-s", "-n", "-r", "-m", "-o"])
        print("\t" + buf)
        # show ifconfig
        ifaces = re.split('; | |, ', host.ifname)
        for iface in ifaces:
            status, buf = host.execute(["ifconfig", iface])
            if status != 0:
                print("\t" + iface + " failed\n")
                continue
            lines = buf.splitlines()
            for line in lines:
                print("\t" + line)
        # check hostapd, wpa_supplicant, iperf exist
        status, buf = host.execute([setup_params['wpa_supplicant'], "-v"])
        if status != 0:
            print("\t" + setup_params['wpa_supplicant'] + " not found\n")
            continue
        lines = buf.splitlines()
        for line in lines:
            print("\t" + line)
        print("")
        status, buf = host.execute([setup_params['hostapd'], "-v"])
        if status != 1:
            print("\t" + setup_params['hostapd'] + " not found\n")
            continue
        lines = buf.splitlines()
        for line in lines:
            print("\t" + line)
        print("")
        status, buf = host.execute([setup_params['iperf'], "-v"])
        if status != 0 and status != 1:
            print("\t" + setup_params['iperf'] + " not found\n")
            continue
        lines = buf.splitlines()
        for line in lines:
            print("\t" + line)
        print("")

def check_device(devices, setup_params, dev_name, monitor=False):
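    """Check ssh access, interfaces and required tools on a single device"""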
    host = rutils.get_host(devices, dev_name)
    # simple check if authorized_keys works correctly
    status, buf = host.execute(["id"])
    if status != 0:
        raise Exception(dev_name + " - ssh communication FAILED: " + buf)

    rutils.setup_hw_host(host, setup_params)
    ifaces = re.split('; | |, ', host.ifname)
    # check interfaces (multi for monitor)
    for iface in ifaces:
        status, buf = host.execute(["ifconfig", iface])
        if status != 0:
            raise Exception(dev_name + " ifconfig " + iface + " failed: " + buf)

    # monitor doesn't need wpa_supplicant/hostapd ...
    if monitor == True:
        return

    status, buf = host.execute(["ls", "-l", setup_params['wpa_supplicant']])
    if status != 0:
        raise Exception(dev_name + " - wpa_supplicant: " + buf)

    status, buf = host.execute(["ls", "-l", setup_params['hostapd']])
    if status != 0:
        raise Exception(dev_name + " - hostapd: " + buf)

    status, buf = host.execute(["which", setup_params['iperf']])
    if status != 0:
        raise Exception(dev_name + " - iperf: " + buf)
status, buf = host.execute(["which", "tshark"])
if status != 0:
logger.debug(dev_name + " - tshark: " + buf)
def check_devices(devices, setup_params, refs, duts, monitors):
    """Check duts/refs/monitors devices"""
    for dut in duts:
        check_device(devices, setup_params, dut)
    for ref in refs:
        check_device(devices, setup_params, ref)
    for monitor in monitors:
        if monitor == "all":
            continue
        check_device(devices, setup_params, monitor, monitor=True)