Commit Graph

5 Commits

Jouni Malinen
fab49f6145 tests: Python coding style cleanup (pylint3 bad-whitespace)
Signed-off-by: Jouni Malinen <jouni@codeaurora.org>
2019-03-16 18:52:09 +02:00
Jonathan Afek
8efc83d4e7 tests: Add support for wlantest for remote hwsim tests
Use a monitor interface given on the command line, one that is not
also a station or an AP, to run wlantest on the channel used by the
test. This makes all the tests that use wlantest available for
execution on real hardware on remote hosts.
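
As a rough illustration (the device and interface names are
placeholders, and this assumes the existing -m option of run-tests.py
is used to name the monitor interface), a dedicated monitor could be
passed alongside the DUT and reference devices:
./run-tests.py -d <dut_name> -r <ref_name> -m <monitor_iface> -h ap_open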

Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
2016-05-28 16:34:09 +03:00
Janusz Dziedzic
a185e9b10b tests/remote: Add hwsim wrapper
This allows running the hwsim test cases.

DUTs are mapped to apdev while reference devices (refs) are mapped to
dev; a rough sketch of this mapping follows the example runs below.

So far, the following invocations have been tested:
./run-tests.py -d hwsim0 -r hwsim1 -h ap_open -h dfs
./run-tests.py -r hwsim0 -r hwsim1 -h ibss_open -v
./run-tests.py -r hwsim0 -r hwsim1 -r hwsim2 -d hwsim3 -d hwsim4 -h ap_vht80 -v
./run-tests.py -r hwsim0 -r hwsim1 -r hwsim2 -d hwsim3 -d hwsim4 -h all -k ap -k vht
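
A rough sketch of the duts/refs mapping (get_host() is a hypothetical
helper that resolves a device name from the devices table; the real
wrapper code may differ):

# Hypothetical sketch only - not the actual wrapper code.
def build_hwsim_params(devices, duts, refs):
    dev = [get_host(devices, name) for name in refs]    # refs -> dev (station side)
    apdev = [get_host(devices, name) for name in duts]  # duts -> apdev (AP side)
    return dev, apdev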

Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
2016-05-14 17:56:37 +03:00
Janusz Dziedzic
ede4719718 tests/remote: Add monitor.py
Add monitor support. Monitors can be attached to the current
interfaces, and a standalone monitor with multiple interfaces is also
supported. This makes it possible to capture traffic from different
channels at the same time into a single pcap file.

An example t3-monitor entry is added to the config.py file.
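
A hypothetical sketch of what such an entry might look like (the keys
and values here are illustrative only; the authoritative format is in
config.py):

devices = [
    # ... other device entries ...
    {"hostname": "192.168.1.5", "ifname": "wlan0, wlan1", "name": "t3-monitor"},
]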

Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
2016-05-14 17:56:37 +03:00
Janusz Dziedzic
5865186e31 tests: Add remote directory to tests
Add the tests/remote directory and files:
config.py - handles the devices/setup_params tables
run-tests.py - runs test cases
test_devices.py - runs basic configuration tests

You can add your own configuration file (by default cfg.py) and
define the devices and setup_params there in the format found in
config.py. Use the -c option or simply create a cfg.py file.
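
As a minimal, purely illustrative sketch of such a cfg.py (hostnames,
ports, keys, and the setup_hw path are placeholders; copy the real
structure from config.py):

# Illustrative only - see config.py for the real format.
devices = [
    {"hostname": "192.168.1.2", "ifname": "wlan0", "port": "9877", "name": "my-dut"},
    {"hostname": "192.168.1.3", "ifname": "wlan0", "port": "9877", "name": "my-ref"},
]

setup_params = {"setup_hw": "./tests/setup_hw.sh"}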

Print available devices/test_cases:
./run-tests.py

Check devices (ssh connection, authorized_keys, interfaces):
./run-tests.py -t devices

Run sanity tests (test_sanity_*):
./run-tests.py -d <dut_name> -t sanity

Run all tests:
./run-tests.py -d <dut_name> -t all

Run test_A and test_B:
./run-tests.py -d <dut_name> -t "test_A, test_B"

Set a reference device and run sanity tests:
./run-tests.py -d <dut_name> -r <ref_name> -t sanity

Multiple DUTs/refs/monitors can be set up, e.g.:
./run-tests.py -d <dut_name> -r <ref1_name> -r <ref2_name> -t sanity

A monitor can be set like this:
./run-tests.py -d <dut_name> -t sanity -m all -m <standalone_monitor>

You can also add filters to select which tests to run:
./run-tests.py -d <dut_name> -t all -k wep -k g_only
./run-tests.py -d <dut_name> -t all -k VHT80

./run-tests.py does not start/terminate wpa_supplicant or hostapd;
the test cases are responsible for that, since the test case
requirements are not known in advance.

Restart (-R), trace (-T), and perf (-P) options are available.
These request trace/perf logs from the hosts (if possible).

Each test case gets the following parameters:
- devices - table of available devices
- setup_params
- duts - names of the DUTs to be tested
- refs - names of the reference devices to be used
- monitors - list of monitor names

Each test case can return append_text (see the sketch below).
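
As a minimal sketch of a test case (the name and body are
hypothetical; only the parameter list and the append_text return
value follow the description above):

def test_sanity_example(devices, setup_params, duts, refs, monitors):
    # Hypothetical body: just check that a DUT was provided.
    if len(duts) < 1:
        raise Exception("No DUT specified")
    return "checked " + str(len(duts)) + " DUT(s)"  # returned as append_text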

Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
2016-05-14 17:56:35 +03:00