Age | Commit message | Author |
|
It's not being used
|
|
We were using 'x'*size as the payload, but in real usage the payload will
be random. This commit randomizes the payload using a predefined seed, so
the random payload will be the same across benchmarks.
Using random payloads also improves the accuracy of measuring compression
or encoding impacts, and we will be evaluating those changes for resource
usage issues.
Also note that base64 is applied to the payload. That was needed for utf8
safety, but the encoding overhead was removed so payloads stay at the
sizes defined by the benchmarks.
Base64 was also chosen due to its popular use in MIME encoding, which is
used for mail attachments (our current scenario).
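A minimal sketch of the idea, assuming Python; the helper name
random_payload, the seed value, and the truncation step are illustrative
assumptions, not code from this repository:

    import base64
    import random

    PAYLOAD_SEED = 42  # assumed value; any fixed seed keeps payloads identical across runs

    def random_payload(size):
        # Seeded RNG so every benchmark run sees the same "random" bytes.
        rng = random.Random(PAYLOAD_SEED)
        raw = bytes(rng.getrandbits(8) for _ in range(size))
        # base64 keeps the payload utf8-safe; truncating strips the ~33%
        # encoding overhead so the payload stays at the requested size.
        return base64.b64encode(raw)[:size]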
|
|
We are using lower values in test_encdecpool due to the high memory usage
described in #7370. Added a comment to explain it.
|
|
Otherwise it will add unrelated overhead to the results.
|
|
They aren't used so far, and using empty dicts to make them work is ugly.
Removing them leaves the return function in the setup code clean and readable.
|
|
1000 docs at 100k~500k were exploding memory (4GB RAM + 4GB swap).
Changed to 100 docs in order to be able to take measurements at higher
loads. The sizes are now 10k, 100k and 500k.
|
|
10k, 100k, 500k, 1m, 10m and 50m for encryption and decryption of a
whole document.
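A hedged sketch of how such a size matrix could be parametrized, assuming
pytest-benchmark's benchmark fixture; encrypt_doc is a stand-in for the
real whole-document encryption call, not this repository's actual API:

    import pytest

    SIZES = [10000, 100000, 500000, 1000000, 10000000, 50000000]

    def encrypt_doc(payload):
        # Placeholder for the real encryption of a whole document.
        return payload[::-1]

    @pytest.mark.parametrize('size', SIZES)
    def test_encrypt_doc(benchmark, size):
        # 'x'*size payload; a later commit replaces this with a seeded
        # random payload (see above).
        payload = b'x' * size
        benchmark(encrypt_doc, payload)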
|
|
Most of them are commented out, as memory usage is going out of control
for now.
|