author    drebs <drebs@riseup.net>  2017-09-17 12:08:25 -0300
committer drebs <drebs@riseup.net>  2017-09-17 15:50:55 -0300
commit    cfff46ff9becdbe5cf48816870e625ed253ecc57 (patch)
tree      8d239e4499f559d86ed17ea3632008303b25d485 /testing/tests/benchmarks/README.md
parent    f29abe28bd778838626d12fcabe3980a8ce4fa8c (diff)
[refactor] move tests to root of repository
Tests entrypoint was in a testing/ subfolder in the root of the repository. This was done mainly because we had some common files for tests that we didn't want to ship (files in testing/test_soledad, which is itself a python package). This sometimes caused errors when loading tests (it seems setuptools gets confused by having one python package in a subdirectory of another). This commit moves the tests entrypoint to the root of the repository. Closes: #8952
Diffstat (limited to 'testing/tests/benchmarks/README.md')
-rw-r--r--  testing/tests/benchmarks/README.md  51
1 file changed, 0 insertions(+), 51 deletions(-)
diff --git a/testing/tests/benchmarks/README.md b/testing/tests/benchmarks/README.md
deleted file mode 100644
index b2465a78..00000000
--- a/testing/tests/benchmarks/README.md
+++ /dev/null
@@ -1,51 +0,0 @@
-Benchmark tests
-===============
-
-This folder contains benchmark tests for Soledad. It aims to provide a fair
-account of the time and resources taken to perform some actions.
-
-These benchmarks are built on top of `pytest-benchmark`, a `pytest` plugin
-that provides a fixture for running test functions multiple times and
-generating reports. The results are printed to screen and also posted to
-elasticsearch.
-
-`pytest-benchmark` runs each test multiple times so it can provide meaningful
-statistics for the time taken by a typical run of a test function. The number
-of times that the test is run can be manually or automatically configured. When
-automatically configured, the number of runs is decided by taking into account
-multiple `pytest-benchmark` configuration parameters. See the following page
-for more details on how `pytest-benchmark` works:
-
- https://pytest-benchmark.readthedocs.io/en/stable/calibration.html
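The multiple-runs idea can be sketched with the standard library alone. The `benchmark` helper below is purely illustrative (it is not the `pytest-benchmark` fixture, and ignores the calibration logic described in the page above), but it shows why repeated rounds are needed for meaningful statistics:

```python
import statistics
import time

def benchmark(func, rounds=50, warmup=5):
    """Run func repeatedly and report timing statistics,
    roughly mimicking what pytest-benchmark collects."""
    for _ in range(warmup):
        # Warmup runs are executed but discarded, so caches and
        # imports do not skew the measured rounds.
        func()
    timings = []
    for _ in range(rounds):
        start = time.perf_counter()
        func()
        timings.append(time.perf_counter() - start)
    return {
        "min": min(timings),
        "max": max(timings),
        "mean": statistics.mean(timings),
        "stddev": statistics.stdev(timings),
        "rounds": rounds,
    }

stats = benchmark(lambda: sum(range(1000)))
```

A single timing is dominated by noise (scheduler, caches); the spread between `min`, `mean` and `stddev` over many rounds is what makes a run comparable across commits.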
-
-Some graphs and analysis resulting from these tests can be seen on:
-
- https://benchmarks.leap.se/
-
-
-Resource consumption
---------------------
-
-For each test, CPU and memory usage statistics are also collected by querying
-`cpu_percent()` and `memory_percent()` from `psutil.Process` for the current
-test process. Some notes about the current resource estimation process:
-
-* Currently, resources are measured for the whole set of rounds that a test
-  function is run. That means that the CPU and memory percentages include the
-  `pytest` and `pytest-benchmark` machinery overhead. Even so, for now this
-  should provide a fair approximation of per-run test function resource usage.
-
-* CPU usage is measured before and after the benchmark function runs, and is
-  reported as the percentage of CPU time that the current process occupied
-  between the two calls.
-
-* Memory is sampled during the benchmark run by a separate thread. Sampling
- interval might have to be configured on a per-test basis, as different tests
- take different times to execute (from milliseconds to tens of seconds). For
- now, an interval of 0.1s seems to cover all tests.
-
-
-Benchmarks website
-------------------
-
-To update the benchmarks website, see the documentation in
-``../../../docs/misc/benchmarks-website.rst``.