Age | Commit message | Author |
|
If we create all documents at once, we can't test higher loads, because
everything is held in memory at the same time. This code is also
smaller and more readable.
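A minimal sketch of the lazy approach, with illustrative names: yield
one payload at a time, so only a single document is in memory at any
moment.

```python
def payloads(count, size):
    # lazy: each payload is created only when the consumer asks for it
    for _ in range(count):
        yield 'x' * size

# eager version kept for contrast; it holds count * size bytes at once:
# all_payloads = ['x' * size for _ in range(count)]
```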
|
|
|
|
1) enable HTTP/1.1 chunked upload on the server
2) make the client sync.py generate a list of function calls instead of
a list of full docs
3) disable the encryption pool
4) make doc encryption a list of function calls
5) create a Twisted protocol for sending
6) make a producer that calls the doc generation as necessary (see the
sketch after this list)
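Below is a hedged sketch of the producer idea in item 6, assuming
Twisted's web client API: an IBodyProducer with unknown length (which
makes the HTTP/1.1 client send the body chunked, as in item 1) that
pulls one serialized doc at a time from a list of function calls. The
class and argument names are illustrative, not the project's actual
code.

```python
from twisted.internet import defer
from twisted.web.iweb import IBodyProducer, UNKNOWN_LENGTH
from zope.interface import implementer

@implementer(IBodyProducer)
class DocStreamProducer(object):
    # unknown length makes twisted.web's Agent use chunked encoding
    length = UNKNOWN_LENGTH

    def __init__(self, doc_calls):
        # doc_calls: iterable of zero-argument callables, each one
        # generating (e.g. encrypting) a single serialized document
        self._calls = iter(doc_calls)

    def startProducing(self, consumer):
        for call in self._calls:
            # each doc exists in memory only while it is being written
            consumer.write(call())
        return defer.succeed(None)

    def pauseProducing(self):
        pass

    def stopProducing(self):
        pass
```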
|
|
It's not being used
|
|
Temporary fix for server streaming
|
|
Conflicts:
server/src/leap/soledad/server/__init__.py
testing/tests/conftest.py
|
|
|
|
We were using 'x'*size as payload, but in real usage the payload will be
random. This commit randomizes the payload using a predefined seed, so
the random payload is the same across benchmark runs.
Random payloads also improve the accuracy of measuring compression or
encoding impacts, and we will be evaluating those changes for resource
usage issues.
Also note that the payload is base64-encoded. That was needed for utf8
safety, but the encoding overhead is trimmed off so payloads keep the
sizes defined by the benchmarks.
Base64 was also chosen due to its popular usage in MIME encoding, which
is used for mail attachments (our current scenario).
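A minimal sketch of this payload scheme, assuming a module-level fixed
seed; PAYLOAD_SEED and make_payload are illustrative names:

```python
import base64
import random

PAYLOAD_SEED = 42  # any fixed value keeps payloads identical across runs

def make_payload(size):
    rnd = random.Random(PAYLOAD_SEED)
    raw = bytes(bytearray(rnd.getrandbits(8) for _ in range(size)))
    # base64 keeps the payload utf8-safe (as in MIME mail attachments);
    # trimming back to `size` removes the encoding overhead so the
    # benchmark sees exactly the size it asked for
    return base64.b64encode(raw)[:size]
```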
|
|
This isn't a test, but a benchmark. 'Initialization' sounds more like an
operation, while 'instance' is just a thing.
|
|
We are using lower values on test_encdecpool due to the high memory
usage described in #7370. Added a comment to explain it.
|
|
The defer parameter wasn't clear.
|
|
Otherwise it will add unrelated overhead to results.
|
|
|
|
They aren't used so far, and using empty dicts to make them work is ugly.
Removing them leaves the return function in the setup code clean and
readable.
|
|
1000 docs at 100k~500k explode memory usage (4GB + 4GB of swap).
Changed to 100 docs in order to be able to take measurements on higher
loads. The sizes are now 10k, 100k and 500k.
|
|
Hypothesis: raw vs doc
Added the same size set (10k, 100k, 500k, 1M, 10M, 50M) as the document
crypto test, so we can compare how close the higher-level operation is
to raw crypto.
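A sketch of sharing one size set between the raw-crypto and doc-crypto
benchmarks so the numbers are directly comparable; raw_encrypt and
doc_encrypt are stand-ins for the real crypto calls, not the actual
benchmark code:

```python
import pytest

SIZES = [10 * 10**3, 100 * 10**3, 500 * 10**3,
         10**6, 10 * 10**6, 50 * 10**6]

def raw_encrypt(payload):
    return payload  # stand-in for the raw cipher call

def doc_encrypt(payload):
    return payload  # stand-in for the whole-document crypto path

@pytest.mark.parametrize('size', SIZES)
def test_raw_encrypt(benchmark, size):
    benchmark(raw_encrypt, b'x' * size)

@pytest.mark.parametrize('size', SIZES)
def test_doc_encrypt(benchmark, size):
    benchmark(doc_encrypt, b'x' * size)
```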
|
|
10k, 100k, 500k, 1M, 10M and 50M for encryption and decryption of a
whole document.
|
|
Most of them are commented out, as memory usage is going out of control
for now.
|
|
It does heavy scrypt hashing, which has room for improvement.
|
|
Syncing without any changes was reported as slow. This benchmark will
help measure it.
|
|
Use a new one to avoid reusing the same database.
|
|
'function' is the default scope, so there is no need to pass this
parameter. Previously, one of the scopes was 'module', but the fixture
is a nested function that fires on demand, so it should clean up after
itself from test to test in order to avoid conflicts while putting
documents.
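A small illustration of the scope point, with an illustrative fixture
name: pytest fixtures default to function scope, so each test gets a
fresh object and nothing leaks between tests.

```python
import pytest

@pytest.fixture  # same as @pytest.fixture(scope='function')
def payload():
    # rebuilt for every test; torn down before the next one runs
    return b'x' * 10 * 1000
```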
|
|
|
|
Creating 20/500k, 100/100k and 1000/10k.
|
|
If we have many scenarios (like 20/500k, 100/100k, 1000/10k), then
making a nested function that generates tests based on scenario
parameters simplifies the code a lot, as sketched below.
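A hedged sketch of that factory pattern; make_upload_test and
upload_docs are illustrative stand-ins for the real benchmarked
operation:

```python
def upload_docs(num_docs, payload_size):
    # placeholder for the real operation being benchmarked
    return [b'x' * payload_size for _ in range(num_docs)]

def make_upload_test(num_docs, payload_size):
    # one nested function per (num_docs, payload_size) scenario;
    # pytest collects the module-level test_* names assigned below
    def test(benchmark):
        benchmark(upload_docs, num_docs, payload_size)
    return test

test_upload_20_500k = make_upload_test(20, 500 * 1000)
test_upload_100_100k = make_upload_test(100, 100 * 1000)
test_upload_1000_10k = make_upload_test(1000, 10 * 1000)
```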
|
|
Adapted pytest-benchmark to Twisted, since it is synchronous, and added
fixtures for benchmarking.
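One possible shape for such an adaptation, shown only as a hedged
sketch (the fixture name txbenchmark and the reactor-in-a-thread setup
are assumptions, not the project's actual conftest): pytest-benchmark
times a synchronous callable, so a Deferred-returning function is
wrapped in a blocking call.

```python
import pytest
from twisted.internet import threads

@pytest.fixture
def txbenchmark(benchmark):
    # assumes the reactor is already running in a separate thread;
    # blockingCallFromThread runs fn on the reactor thread and blocks
    # here until the returned Deferred fires, giving pytest-benchmark
    # the synchronous callable it needs to time
    def wrapper(fn, *args, **kwargs):
        from twisted.internet import reactor
        return benchmark(
            threads.blockingCallFromThread, reactor, fn, *args, **kwargs)
    return wrapper
```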
|
|
`pytest_addoption` was declared twice, making the second declaration
replace the first and thus removing the couch url parameter.
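For illustration, the fix amounts to keeping a single hook definition
per conftest.py; `--num-docs` below is a hypothetical second option,
included only to show both options living in one hook:

```python
def pytest_addoption(parser):
    # a second pytest_addoption definition in the same conftest.py
    # would silently shadow this one and drop its options
    parser.addoption('--couch-url', default='http://127.0.0.1:5984',
                     help='the url for the couch server')
    parser.addoption('--num-docs', type=int, default=100,  # hypothetical
                     help='number of documents used in benchmarks')
```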
|
|
|
|
|
|
|
|
perf tests
|
|
|