 docs/benchmarks.rst                   | 26 ++++++++++++-----------
 testing/tests/benchmarks/test_sync.py |  4 ++++
 2 files changed, 19 insertions(+), 11 deletions(-)
diff --git a/docs/benchmarks.rst b/docs/benchmarks.rst
index 8c9d7677..d24590f5 100644
--- a/docs/benchmarks.rst
+++ b/docs/benchmarks.rst
@@ -68,26 +68,30 @@ sizes are in KB):
| stddev | 1376.930 | 732.933 |
+--------+-----------+-----------+
-Test scenarios
---------------
+Sync test scenarios
+-------------------
Ideally, we would want to run tests for a big data set (i.e. a high number of
documents and a big payload size), but that may be infeasible given time and
resource limitations. Because of that, we choose a smaller data set and assume
that the behaviour scales roughly linearly, to extrapolate to larger sets.
-Supposing a data set size of 10MB, some possibilities for number of documents
-and document sizes for testing download and upload are:
+Supposing a total data set size of 10MB, some possibilities for number of
+documents and document sizes for testing download and upload are listed below.
+Scenarios marked in bold are the ones that are actually run in the current sync
+benchmark tests, and you can see the current graphs for each one by following
+the corresponding links:
+
* 10 x 1M
-* 20 x 500K
-* 100 x 100K
+* **20 x 500K** (`upload <https://benchmarks.leap.se/test-dashboard_test_upload_20_500k.html>`_, `download <https://benchmarks.leap.se/test-dashboard_test_download_20_500k.html>`_)
+* **100 x 100K** (`upload <https://benchmarks.leap.se/test-dashboard_test_upload_100_100k.html>`_, `download <https://benchmarks.leap.se/test-dashboard_test_download_100_100k.html>`_)
* 200 x 50K
-* 1000 x 10K
+* **1000 x 10K** (`upload <https://benchmarks.leap.se/test-dashboard_test_upload_1000_10k.html>`_, `download <https://benchmarks.leap.se/test-dashboard_test_download_1000_10k.html>`_)
-The above scenarios all have documents of the same size. If we want to account
-for some variability on document sizes, it is sufficient to come up with a
-simple scenario where the average, minimum and maximum sizes are somehow
-coherent with the above statistics, like the following one:
+In each of the above scenarios all the documents are of the same size. If we
+want to account for some variability in document sizes, it is sufficient to
+come up with a simple scenario whose average, minimum and maximum sizes are
+roughly consistent with the above statistics, like the following one:
* 60 x 15KB + 1 x 1MB
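The fixed-size scenarios in the docs hunk above are all chosen so that document
count times document size totals the supposed 10MB data set. A quick sanity
check of that arithmetic (illustrative only, not part of the patch; sizes use
the tests' decimal convention of 1K = 1000 bytes):

```python
# Sanity-check the fixed-size sync scenarios: docs * size should equal 10MB.
K = 1000
M = 1000 * K

scenarios = {
    "10 x 1M": (10, 1 * M),
    "20 x 500K": (20, 500 * K),
    "100 x 100K": (100, 100 * K),
    "200 x 50K": (200, 50 * K),
    "1000 x 10K": (1000, 10 * K),
}

for name, (docs, size) in scenarios.items():
    # Every fixed-size scenario sums to the same 10MB total.
    assert docs * size == 10 * M, name
```

The variable-size scenario (60 x 15KB + 1 x 1MB) is not included in the check,
since it targets the size statistics rather than the 10MB total.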
diff --git a/testing/tests/benchmarks/test_sync.py b/testing/tests/benchmarks/test_sync.py
index fcfab998..04566678 100644
--- a/testing/tests/benchmarks/test_sync.py
+++ b/testing/tests/benchmarks/test_sync.py
@@ -30,6 +30,8 @@ def create_upload(uploads, size):
return test
+# ATTENTION: update the documentation in ../../../docs/benchmarks.rst if you
+# change the number of docs or the doc sizes for the tests below.
test_upload_20_500k = create_upload(20, 500 * 1000)
test_upload_100_100k = create_upload(100, 100 * 1000)
test_upload_1000_10k = create_upload(1000, 10 * 1000)
@@ -63,6 +65,8 @@ def create_download(downloads, size):
return test
+# ATTENTION: update the documentation in ../../../docs/benchmarks.rst if you
+# change the number of docs or the doc sizes for the tests below.
test_download_20_500k = create_download(20, 500 * 1000)
test_download_100_100k = create_download(100, 100 * 1000)
test_download_1000_10k = create_download(1000, 10 * 1000)
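The `create_upload` / `create_download` factories that the patch parametrizes
are only partially visible in these hunks. A minimal sketch of what such a
factory might look like (the body is an assumption, not the project's actual
test code, which would hand the work to a benchmark fixture):

```python
# Hypothetical sketch of a factory like create_upload: it returns a test
# function closed over a given document count and size, so module-level names
# such as test_upload_20_500k can be generated instead of hand-written.
def create_upload(uploads, size):
    def test():
        # Build the payload for this scenario; the real suite would then
        # benchmark syncing these documents to a server.
        docs = [b"x" * size for _ in range(uploads)]
        assert len(docs) == uploads
        assert all(len(d) == size for d in docs)
    return test

# Matches one of the scenarios wired up in the patch above.
test_upload_20_500k = create_upload(20, 500 * 1000)
```

This pattern is why the diff's ATTENTION comments matter: the scenario
parameters live only in these factory calls, so the docs must be updated by
hand whenever the counts or sizes change.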