author    drebs <drebs@riseup.net>  2017-09-28 14:47:04 -0300
committer drebs <drebs@riseup.net>  2017-09-28 14:48:07 -0300
commit    672a6d4e62376a834378aa70c91ab8612b15094c (patch)
tree      6dd760167814b9b5a90add413a2e34a76a096c04 /docs/development
parent    fba9c274f21e593a3d041949d873435abe922dc9 (diff)
[doc] move development docs to a subdir
Diffstat (limited to 'docs/development')
-rw-r--r--  docs/development/benchmarks.rst    106
-rw-r--r--  docs/development/contributing.rst   48
-rw-r--r--  docs/development/deprecation.rst    34
-rw-r--r--  docs/development/tests.rst          33
4 files changed, 221 insertions, 0 deletions
diff --git a/docs/development/benchmarks.rst b/docs/development/benchmarks.rst
new file mode 100644
index 00000000..25e39ae7
--- /dev/null
+++ b/docs/development/benchmarks.rst
@@ -0,0 +1,106 @@
+.. _benchmarks:
+
+Benchmarks
+==========
+
+We currently use `pytest-benchmark <https://pytest-benchmark.readthedocs.io/>`_
+to write tests to assess the time and resources taken by various tasks.
+
+To run benchmark tests, once inside a cloned Soledad repository, do the
+following::
+
+ tox -e benchmark
+
+Results of automated benchmarking for each commit in the repository can be
+seen at https://benchmarks.leap.se/.
+
+Benchmark tests also depend on ``tox`` and CouchDB. See the :ref:`tests` page
+for more information on how to set up the test environment.
+
+Test repetition
+---------------
+
+``pytest-benchmark`` runs tests multiple times so it can provide meaningful
+statistics for the time taken by a typical run of a test function. The number
+of times a test is run can be configured manually or automatically.
+
+When automatically configured, the number of runs is decided by taking into
+account multiple ``pytest-benchmark`` configuration parameters. See the
+`corresponding documentation
+<https://pytest-benchmark.readthedocs.io/en/stable/calibration.html>`_ for more
+details on how automatic calibration works.
+
+To balance a meaningful number of repetitions against a reasonable total run
+time, we let ``pytest-benchmark`` choose the number of repetitions for faster
+tests and manually limit the number of repetitions for slower tests.
+
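To illustrate what fixed-round benchmarking does, here is a minimal, self-contained sketch in plain Python that mimics the repeat-and-measure loop. This is not ``pytest-benchmark``'s actual implementation, just the idea behind running a test a fixed number of times (e.g. 4 rounds) and reporting statistics:

```python
import statistics
import time

def run_benchmark(func, rounds=4):
    """Call ``func`` a fixed number of times and collect timing statistics,
    similar in spirit to a pytest-benchmark run with a fixed round count."""
    timings = []
    for _ in range(rounds):
        start = time.perf_counter()
        func()
        timings.append(time.perf_counter() - start)
    return {
        "rounds": rounds,
        "min": min(timings),
        "max": max(timings),
        "mean": statistics.mean(timings),
        "stddev": statistics.stdev(timings),
    }

# Benchmark a toy workload for 4 rounds, as the slower Soledad tests are.
stats = run_benchmark(lambda: sum(range(100000)), rounds=4)
print(stats["rounds"])  # 4
```

In the real suite, ``pytest-benchmark`` additionally handles warmup, calibration and timer selection; the sketch only shows the repetition and aggregation step.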
+Currently, tests for `synchronization` and `sqlcipher asynchronous document
+creation` are fixed to run 4 times each. All the other tests are left for
+``pytest-benchmark`` to decide how many times to run each one. With this setup,
+the benchmark suite takes approximately 7 minutes to run on our CI server.
+As the benchmark suite is run twice (once for time and CPU stats and a second
+time for memory stats), the whole benchmark run takes around 15 minutes.
+
+The actual number of times a test is run when calibration is done automatically
+by ``pytest-benchmark`` depends on many parameters: the time taken for a sample
+run and the configuration of the minimum number of rounds and maximum time
+allowed for a benchmark. For a snapshot of the number of rounds for each test
+function, see `the Soledad benchmarks wiki page
+<https://0xacab.org/leap/soledad/wikis/benchmarks>`_.
+
+Sync size statistics
+--------------------
+
+Currently, the main use of Soledad is to synchronize client-encrypted email
+data. Because of that, it makes sense to measure the time and resources taken
+to synchronize an amount of data that is realistically comparable to a user's
+mailbox.
+
+In order to determine a good example dataset for synchronization tests, we
+used the sizes of messages from one week of incoming and outgoing email flow
+of a friendly provider. The resulting statistics are (all sizes in KB):
+
++--------+-----------+-----------+
+| | outgoing | incoming |
++========+===========+===========+
+| min | 0.675 | 0.461 |
++--------+-----------+-----------+
+| max | 25531.361 | 25571.748 |
++--------+-----------+-----------+
+| mean | 252.411 | 110.626 |
++--------+-----------+-----------+
+| median | 5.320 | 14.974 |
++--------+-----------+-----------+
+| mode | 1.404 | 1.411 |
++--------+-----------+-----------+
+| stddev | 1376.930 | 732.933 |
++--------+-----------+-----------+
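
These are ordinary descriptive statistics over message sizes. As a sketch of how such figures can be computed, the snippet below uses Python's standard library on a made-up sample (the values here are illustrative only, not the provider's actual data, which we don't have):

```python
import statistics

# Hypothetical message sizes in KB, for illustration only; the table
# above was computed over a full week of real email flow.
sizes_kb = [0.675, 1.404, 1.404, 5.320, 14.974, 252.411]

summary = {
    "min": min(sizes_kb),
    "max": max(sizes_kb),
    "mean": statistics.mean(sizes_kb),
    "median": statistics.median(sizes_kb),
    "mode": statistics.mode(sizes_kb),
    "stddev": statistics.stdev(sizes_kb),
}
print(summary["mode"])  # 1.404
```

Note how a few very large messages pull the mean far above the median, which is exactly the pattern visible in the table: the payload distribution is heavily skewed.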
+
+Sync test scenarios
+-------------------
+
+Ideally, we would want to run tests for a big data set (i.e. a high number of
+documents and a big payload size), but that may be infeasible given time and
+resource limitations. Because of that, we choose a smaller data set and assume
+that the behaviour is roughly linear to extrapolate to larger sets.
+
+Supposing a data set total size of 10MB, some possibilities for number of
+documents and document sizes for testing download and upload can be seen below.
+Scenarios marked in bold are the ones that are actually run in the current sync
+benchmark tests, and you can see the current graphs for each one by following
+the corresponding links:
+
+* 10 x 1M
+* **20 x 500K** (`upload <https://benchmarks.leap.se/test-dashboard_test_upload_20_500k.html>`_, `download <https://benchmarks.leap.se/test-dashboard_test_download_20_500k.html>`_)
+* **100 x 100K** (`upload <https://benchmarks.leap.se/test-dashboard_test_upload_100_100k.html>`_, `download <https://benchmarks.leap.se/test-dashboard_test_download_100_100k.html>`_)
+* 200 x 50K
+* **1000 x 10K** (`upload <https://benchmarks.leap.se/test-dashboard_test_upload_1000_10k.html>`_, `download <https://benchmarks.leap.se/test-dashboard_test_download_1000_10k.html>`_)
+
+In each of the above scenarios all the documents have the same size. If we
+want to account for some variability in document sizes, it is sufficient to
+come up with a simple scenario where the average, minimum and maximum sizes
+are roughly consistent with the above statistics, like the following one:
+
+* 60 x 15KB + 1 x 1MB
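
The arithmetic behind the uniform scenarios is simple enough to check mechanically. The sketch below verifies that each (count, size) pair from the list above amounts to the same 10 MB total, counting 1 MB as 1000 KB for simplicity:

```python
# (document count, document size in KB) pairs from the scenario list
# above, counting 1 MB as 1000 KB for simplicity.
scenarios = [(10, 1000), (20, 500), (100, 100), (200, 50), (1000, 10)]

totals_mb = [count * size_kb / 1000 for count, size_kb in scenarios]
print(totals_mb)  # [10.0, 10.0, 10.0, 10.0, 10.0]
```

Keeping the total constant across scenarios is what lets the benchmarks isolate the effect of document count versus document size on sync time.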
diff --git a/docs/development/contributing.rst b/docs/development/contributing.rst
new file mode 100644
index 00000000..76e7450c
--- /dev/null
+++ b/docs/development/contributing.rst
@@ -0,0 +1,48 @@
+Contributing
+============
+
+Thank you for your interest in contributing to Soledad!
+
+Filing bug reports
+------------------
+
+Bug reports are very welcome. Please file them on the `0xacab issue tracker
+<https://0xacab.org/leap/soledad/issues>`_. Please include an extensive
+description of the error, the steps to reproduce it, the version of Soledad
+used and the provider used (if applicable).
+
+Patches
+-------
+
+All patches to Soledad should be submitted in the form of pull requests to the
+`main Soledad repository <https://0xacab.org/leap/soledad>`_. These pull
+requests should satisfy the following properties:
+
+* The pull request should focus on one particular improvement to Soledad.
+ Create different pull requests for unrelated features or bugfixes.
+* Code should follow `PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_,
+ especially in the "do what code around you does" sense.
+* Pull requests that introduce code must test all new behavior they introduce,
+  as well as previously untested or poorly tested behavior that they touch.
+* Pull requests must not break existing tests. We will consider pull requests
+  that break CI as work in progress.
+* When introducing new functionality, please remember to write documentation.
+* Please sign all of your commits. If you are merging code from other
+ contributors, you should sign their commits.
+
+Review
+~~~~~~
+
+Pull requests must be reviewed before merging. The final responsibility for
+reviewing merged code lies with the person merging it. Exceptions to this rule
+are modifications to documentation, packaging, or tests, when merging quickly
+without review has little or no impact on the security and functionality of
+working code.
+
+Getting help
+~~~~~~~~~~~~
+
+If you need help you can reach us through one of the following means:
+
+* IRC: ``#leap`` at ``irc.freenode.org``.
+* Mailing list: `leap-discuss@lists.riseup.net <https://lists.riseup.net/www/info/leap-discuss>`_
diff --git a/docs/development/deprecation.rst b/docs/development/deprecation.rst
new file mode 100644
index 00000000..d0454d5a
--- /dev/null
+++ b/docs/development/deprecation.rst
@@ -0,0 +1,34 @@
+Backwards-compatibility and deprecation policy
+==============================================
+
+Since Soledad has not reached a stable `1.0` release yet, no guarantees are made
+about the stability of its API or the backwards-compatibility of any given
+version.
+
+Currently, the internal storage representation is undergoing changes that will
+take some time to mature and settle. For the moment, no Soledad release offers
+any backwards-compatibility guarantees.
+
+Although serious efforts are being made to ensure no data is corrupted or lost
+while upgrading Soledad versions, it is not advisable to use Soledad for any
+critical storage at the moment, or to upgrade versions without an external
+data backup (for instance, an email application that uses Soledad should
+allow exporting mail data or PGP keys in a portable format before upgrading).
+
+Deprecation Policy
+------------------
+
+The points above notwithstanding, the development team behind Soledad will
+strive to provide clear migration paths between any two given, consecutive
+**minor releases**, in an automated form wherever possible.
+
+This means, for example, that a migration script will be provided with the
+``0.10`` release to migrate data stored by any of the ``0.9.x`` Soledad
+versions. Another script will be provided to migrate from ``0.10`` to
+``0.11``, and so on (but not, for instance, from ``0.8`` to ``0.10``).
+
+At the same time, there is a backwards-compatibility policy of **deprecating
+APIs after 2 minor releases**. This means, for example, that a feature marked
+as deprecated in ``0.10`` will raise a warning for 2 minor releases, and the
+API will disappear completely no sooner than ``0.12``.
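
In Python, such a deprecation warning is typically emitted with the standard ``warnings`` module. The sketch below is a hypothetical example of the pattern; ``old_api`` and ``new_api`` are made-up names, not actual Soledad APIs:

```python
import warnings

def new_api():
    return "ok"

def old_api():
    """Hypothetical function deprecated in 0.10, to be removed in 0.12."""
    warnings.warn(
        "old_api() is deprecated and will be removed in 0.12; "
        "use new_api() instead.",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not this frame
    )
    return new_api()

# Callers keep working, but see a DeprecationWarning for the two minor
# releases preceding removal.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()

print(result)  # ok
print(caught[0].category.__name__)  # DeprecationWarning
```

The warning message names both the removal version and the replacement, which is what makes the two-release grace period actionable for callers.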
+
diff --git a/docs/development/tests.rst b/docs/development/tests.rst
new file mode 100644
index 00000000..72e2d087
--- /dev/null
+++ b/docs/development/tests.rst
@@ -0,0 +1,33 @@
+.. _tests:
+
+Tests
+=====
+
+We use `pytest <https://docs.pytest.org/en/latest/>`_ as a testing framework
+and `Tox <https://tox.readthedocs.io>`_ as a test environment manager.
+Currently, tests reside in the ``testing/`` folder, and some of them need a
+CouchDB server to run against.
+
+If you have a CouchDB server running on localhost on the default port, the
+following command should be enough to run the tests::
+
+ tox
+
+CouchDB dependency
+------------------
+
+If you want to use a CouchDB server on another host or port, use the
+``--couch-url`` parameter for ``pytest``::
+
+ tox -- --couch-url=http://couch_host:5984
+
+If you want to exclude all tests that depend on CouchDB, deselect tests
+marked with ``needs_couch``::
+
+ tox -- -m 'not needs_couch'
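
The ``needs_couch`` marker is a regular pytest marker applied to CouchDB-dependent tests. A hypothetical marked test might look like this (the test name and body are made up for illustration; real tests live under ``testing/``):

```python
import pytest

@pytest.mark.needs_couch
def test_sync_with_couch():
    """Hypothetical test that would talk to a CouchDB server; real
    tests would connect to the address given via --couch-url."""
    assert True
```

Running ``tox -- -m 'not needs_couch'`` deselects every test carrying this marker, so the rest of the suite can run without a CouchDB server.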
+
+Benchmark tests
+---------------
+
+A set of benchmark tests is provided to measure the time and resources taken to
+perform some actions. See the :ref:`documentation for benchmarks <benchmarks>`.