path: root/testing/tests/benchmarks
Age         Commit message                                              Author
2017-07-18  [benchmarks] store test function docstring  (drebs)
2017-07-13  [doc] add info on how to update benchmarks website  (drebs)
2017-07-13  [benchmarks] change 20_500k to 10_1000k  (drebs)
2017-07-12  [doc] mark which sync benchmark scenarios are actually run  (drebs)
2017-07-12  [benchmarks] limit number of runs of sqlcipher benchmark tests  (drebs)
As sqlcipher benchmark tests take longer, we want to limit the number of repetitions. Before this change, these tests were calibrated automatically and would run 5 times, because that is pytest-benchmark's default minimum number of rounds. By changing the runner to pedantic mode, they now run 4 times, the same number of rounds as the benchmark sync tests.
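A minimal sketch of what pedantic mode looks like with pytest-benchmark; the test and fixture names here are illustrative, not the actual soledad test code:

    # illustrative pedantic-mode benchmark: rounds are pinned to 4
    # instead of being auto-calibrated (which defaults to at least 5)
    def test_sqlcipher_create(benchmark, db):
        def create_doc():
            db.create_doc({'payload': 'x' * 1000})
        benchmark.pedantic(create_doc, rounds=4, iterations=1)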
2017-07-12  [benchmarks] allow passing args and kwargs to txbenchmark_with_setup  (drebs)
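Roughly, a setup-aware runner fixture can forward extra arguments like this. This is an assumed structure, omitting soledad's Twisted plumbing; note that pedantic mode requires iterations=1 when a setup function is given:

    import pytest

    @pytest.fixture
    def txbenchmark_with_setup(benchmark):
        def runner(setup, f, *args, **kwargs):
            def make_args():
                # pedantic passes the (args, kwargs) returned by setup
                # to the benchmarked function on every round
                return (setup(*args, **kwargs),), {}
            return benchmark.pedantic(f, setup=make_args,
                                      rounds=4, iterations=1)
        return runner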
2017-07-09  [benchmarks] separate memory sampling from cpu measurement  (drebs)
2017-07-08  [benchmarks] run benchmarks twice, for time and resources  (drebs)
We noticed that the instrumentation added for watching resources has an impact on time statistics (i.e. it increases the average and the standard deviation). This commit makes the benchmark tests run twice: once to measure time and a second time to measure resources.
2017-07-08  [benchmarks] add --watch-resources option  (drebs)
This commit adds the --watch-resources command line option for benchmark tests, which allows running the benchmark test suite with or without resource monitoring instrumentation. This is needed because resource consumption monitoring impacts the mean and standard deviation of the time taken to run benchmarked tests.
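A sketch of how such a flag is typically wired through a conftest.py; the fixture name below is an assumption:

    import pytest

    def pytest_addoption(parser):
        parser.addoption(
            '--watch-resources', action='store_true', default=False,
            help='sample cpu and memory while benchmarks run')

    @pytest.fixture
    def monitored_benchmark(request, benchmark):
        if not request.config.getoption('--watch-resources'):
            return benchmark  # plain timing run, no instrumentation
        # ... wrap `benchmark` with resource sampling here ...
        return benchmark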
2017-07-08  [benchmarks] skip some tests by default  (drebs)
2017-07-07  [test] change memory sampling interval to 0.1 on benchmarks  (drebs)
2017-04-27  [test] monitor cpu/mem for all benchmarks  (drebs)
2017-04-27  [test] add memory measurement  (drebs)
2017-04-27  [test] measure cpu percentage during benchmark  (drebs)
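An illustrative sampler using psutil, polling memory on a background thread (at the 0.1s interval mentioned above) and reading the cpu percentage accumulated while the benchmark ran; this is a sketch of the approach, not the actual instrumentation code:

    import time
    import threading
    import psutil

    class ResourceWatcher(threading.Thread):

        def __init__(self, interval=0.1):
            threading.Thread.__init__(self)
            self.process = psutil.Process()  # the current process
            self.interval = interval
            self.memory_samples = []
            self.cpu_percent = None
            self.running = True

        def run(self):
            self.process.cpu_percent()  # reset the cpu counter
            while self.running:
                self.memory_samples.append(self.process.memory_info().rss)
                time.sleep(self.interval)

        def stop(self):
            self.running = False
            self.join()
            # cpu usage accumulated since the counter was reset
            self.cpu_percent = self.process.cpu_percent()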
2017-04-19  [test] avoid running sqlcipher synchronous tests when benchmarking  (drebs)
SQLCipher synchronous benchmark tests were introduced when we started developing benchmark tests to compare synchronous and asynchronous code. Synchronous access to the sqlcipher database is not used in soledad, and those tests are much slower than the asynchronous ones (more than 10 times slower on an SSD drive), so we want to avoid running them on CI. This commit introduces a "synchronous" marker and avoids running tests marked as such in the CI environment.
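In conftest.py terms, the marker and the CI deselection look roughly like this (the CI environment variable check is an assumption):

    import os
    import pytest

    def pytest_configure(config):
        config.addinivalue_line(
            'markers', 'synchronous: synchronous sqlcipher access (slow)')

    def pytest_collection_modifyitems(config, items):
        if os.environ.get('CI') is None:
            return  # not on ci: run everything
        skip = pytest.mark.skip(reason='too slow for the ci environment')
        for item in items:
            if 'synchronous' in item.keywords:
                item.add_marker(skip)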
2017-04-04  [feat] add the host's hostname to benchmark machine info  (drebs)
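pytest-benchmark exposes a hook for amending the stored machine info, so this change amounts to something like the following (the 'host' key name is an assumption):

    import socket

    def pytest_benchmark_update_machine_info(config, machine_info):
        # record which machine produced the numbers
        machine_info['host'] = socket.gethostname()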
2017-03-02  [test] add comments explaining behaviour of upload/download benchmark  (drebs)
2017-03-02  [test] mark benchmark tests using their group names  (drebs)
2017-03-02  [test] bugfix: actually use an empty local db in download benchmarks  (drebs)
We were previously not using an empty local db for download benchmark tests, so there was actually nothing to sync. This commit fixes that by adding a way to force an empty local db on soledad client instantiation.
2016-12-12  [feature] Add retro compat on secrets.py ciphers  (Victor Shyba)
Integrated the secrets file's JSON key that specifies ciphers into _crypto and added optional GCM. Also added a test to check that both cipher types can be imported.
Resolves: #8680
Signed-off-by: Victor Shyba <victor1984@riseup.net>
2016-12-12  [feature] use GCM instead of CTR+HMAC  (Victor Shyba)
Resolves: #8668 - client: substitute usage of CTR mode + HMAC by GCM cipher mode
Signed-off-by: Victor Shyba <victor1984@riseup.net>
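For illustration only (not soledad's actual _crypto code), AES-GCM with the cryptography library both encrypts and authenticates in one pass, which is what removes the need for a separate HMAC:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # 96-bit nonce, must be unique per message
    aesgcm = AESGCM(key)
    # the ciphertext carries the authentication tag; decrypt() raises
    # InvalidTag if the data was tampered with
    ciphertext = aesgcm.encrypt(nonce, b'document content', None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b'document content'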
2016-12-12  [feature] speed up sync benchmark setup code  (Victor Shyba)
We aren't testing huge payloads on CI, so it doesn't make sense to insert docs one by one. 'gatherResults' can speed up bench setup.
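The idea, sketched with Twisted (the create_doc call is assumed here): fire all insert deferreds at once and wait for the batch, instead of yielding on each insert in turn:

    from twisted.internet import defer

    def insert_docs(db, payloads):
        # all inserts run concurrently; the returned deferred fires
        # when every one of them has completed
        deferreds = [db.create_doc({'payload': p}) for p in payloads]
        return defer.gatherResults(deferreds)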
2016-12-12  [feature] make _crypto stream on decryption  (Victor Shyba)
We are already doing this on encryption; now we can also stream on decryption. This unblocks the reactor and will be valuable for blobs-io.
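An illustrative chunked decryption with the cryptography library (not _crypto's real interface): feeding the decryptor incrementally yields plaintext piece by piece instead of blocking the reactor with one large call:

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    def decrypt_chunks(key, nonce, tag, chunks):
        decryptor = Cipher(
            algorithms.AES(key), modes.GCM(nonce, tag),
            backend=default_backend()).decryptor()
        for chunk in chunks:
            yield decryptor.update(chunk)  # plaintext, chunk by chunk
        decryptor.finalize()  # raises InvalidTag on authentication failure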
2016-12-12  [tests] fixes test_crypto bench  (Victor Shyba)
encrypt returns a deferred and needs the adapted benchmark runner.
2016-12-12  [tests] use options instead of marks  (Victor Shyba)
When we use marks, the new pytest tests from the benchmarks folder are collected and then ignored, but this sometimes causes trial to fail. Using --ignore prevents them from being loaded at all, while --benchmark-only properly selects the benchmarks for tox, as intended.
2016-12-12  [test] rename benchmark tests directory and tag  (drebs)