author     drebs <drebs@leap.se>          2017-06-30 11:48:35 -0300
committer  Kali Kaneko <kali@leap.se>     2017-07-07 21:12:39 +0200
commit     6a3fd4e40236b37399720d5620afd12ddbe2168c
tree       8ce357b3cdc0f1707eae50a95bbaaaf26a0c84ea
parent     305318a6b2a9cbd638c6c48ce447fb228d7fe47d
[test] change memory sampling interval to 0.1 on benchmarks
Diffstat (limited to 'testing/tests/benchmarks')
-rw-r--r--  testing/tests/benchmarks/README.md    | 44
-rw-r--r--  testing/tests/benchmarks/conftest.py  |  2
2 files changed, 45 insertions(+), 1 deletion(-)
diff --git a/testing/tests/benchmarks/README.md b/testing/tests/benchmarks/README.md
new file mode 100644
index 00000000..06a75282
--- /dev/null
+++ b/testing/tests/benchmarks/README.md
@@ -0,0 +1,44 @@
+Benchmark tests
+===============
+
+This folder contains benchmark tests for Soledad. They aim to provide a fair
+account of the time and resources taken to perform certain actions.
+
+These benchmarks are built on top of `pytest-benchmark`, a `pytest` plugin that
+provides a fixture for running test functions multiple times and generating
+reports. The results are printed to the screen and also posted to elasticsearch.
+
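+As a rough illustration (not code from this test suite), a minimal
+`pytest-benchmark` test looks something like this:
+
+```python
+def test_json_roundtrip(benchmark):
+    # The `benchmark` fixture calls the target repeatedly and records
+    # timing statistics for each round.
+    import json
+    data = {"key": "value", "numbers": list(range(100))}
+    result = benchmark(json.dumps, data)
+    assert json.loads(result) == data
+```
+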
+`pytest-benchmark` runs each test multiple times so it can provide meaningful
+statistics for the time taken by a typical run of a test function. The number
+of times a test is run can be configured manually or automatically. When
+configured automatically, the number of runs is decided by taking into account
+several `pytest-benchmark` configuration parameters. See the following page
+for more details on how `pytest-benchmark` works:
+
+ https://pytest-benchmark.readthedocs.io/en/stable/calibration.html
+
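+As an illustrative sketch (again, not code from this suite), the automatic
+calibration can be bypassed and the number of rounds fixed explicitly through
+the fixture's `pedantic` mode:
+
+```python
+def test_fixed_rounds(benchmark):
+    # benchmark.pedantic() skips automatic calibration and runs the target
+    # exactly rounds * iterations times.
+    result = benchmark.pedantic(sum, args=([1, 2, 3],), rounds=10, iterations=5)
+    assert result == 6
+```
+
+Calibration can also be influenced from the command line, e.g. with options
+such as `--benchmark-min-rounds` and `--benchmark-max-time`.
+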
+Some graphs and analysis resulting from these tests can be seen at:
+
+ https://benchmarks.leap.se/
+
+
+Resource consumption
+--------------------
+
+For each test, CPU and memory usage statistics are also collected by querying
+`cpu_percent()` and `memory_percent()` from `psutil.Process` for the current
+test process. Some notes on how resource consumption is currently estimated:
+
+* Currently, resources are measured across the whole set of rounds in which a
+  test function is run. That means the CPU and memory percentages include the
+  overhead of the `pytest` and `pytest-benchmark` machinery. Even so, for now
+  this should provide a fair approximation of per-run resource usage.
+
+* CPU usage is measured before and after the benchmark run; the value reported
+  is the percentage of CPU time that the current process occupied between the
+  two calls.
+
+* Memory is sampled during the benchmark run by a separate thread (see the
+  sketch below). The sampling interval might have to be configured per test,
+  as different tests take different times to execute (from milliseconds to
+  tens of seconds). For now, an interval of 0.1s seems to cover all tests.
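+
+The sketch below illustrates the general idea of the memory sampling thread. It
+is a simplified, hypothetical example, not the actual `ResourceWatcher`
+implementation from `conftest.py`:
+
+```python
+import threading
+
+import psutil
+
+
+class MemorySampler(threading.Thread):
+    """Sample memory usage of the current process at a fixed interval."""
+
+    def __init__(self, interval=0.1):
+        threading.Thread.__init__(self)
+        self.interval = interval
+        self.samples = []
+        self._stop_event = threading.Event()
+        self._process = psutil.Process()
+
+    def run(self):
+        while not self._stop_event.is_set():
+            # memory_percent() returns the current process' memory usage as a
+            # percentage of total system memory.
+            self.samples.append(self._process.memory_percent())
+            self._stop_event.wait(self.interval)
+
+    def stop(self):
+        self._stop_event.set()
+        self.join()
+```
+
+In such a sketch, the sampler would be started before the benchmarked function
+runs and stopped right after it returns; the collected samples can then be
+reduced to, e.g., the maximum memory percentage observed during the run.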
diff --git a/testing/tests/benchmarks/conftest.py b/testing/tests/benchmarks/conftest.py
index cfad458a..4dbc4377 100644
--- a/testing/tests/benchmarks/conftest.py
+++ b/testing/tests/benchmarks/conftest.py
@@ -88,7 +88,7 @@ def txbenchmark_with_setup(monitored_benchmark_with_setup):
 class ResourceWatcher(threading.Thread):
-    sampling_interval = 1
+    sampling_interval = 0.1
     def __init__(self):
         threading.Thread.__init__(self)