Age | Commit message | Author |
|
This should avoid tox virtualenv recreation.
|
|
When importing the server module, couch_state loads itself against the
couch_db url configured on the server. This fails when running on Docker,
as couchdb is on another node.
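A minimal sketch of one way to avoid such an import-time side effect: load the couch state lazily, on first use. The helper name, the import path, and the bare `CouchServerState(couch_url)` call are assumptions for illustration, not the actual fix.

```python
_couch_state = None


def get_couch_state(couch_url):
    # Hypothetical lazy initialization: connect on first use instead of at
    # import time, so importing this module still works when couchdb runs
    # on another node that is not reachable yet.
    global _couch_state
    if _couch_state is None:
        # import path assumed for illustration
        from leap.soledad.common.couch.state import CouchServerState
        _couch_state = CouchServerState(couch_url)
    return _couch_state
```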
|
|
This is necessary for keymanager, and since this image is shared, it is
added here with a comment explaining why. Also explained why
jessie-backports is used.
|
|
create_cmd lacked an explanation, and check_schema_versions lacked the
reasoning for why it defaults to False.
|
|
CouchServerState is spread across the test codebase, and this option is
intended to be used only on server startup. This commit makes it default
to False and explicitly sets it to True where necessary.
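A minimal sketch of the intended shape, assuming a constructor roughly like the following; this is illustrative, not the exact Soledad signature.

```python
class CouchServerState(object):
    """Hypothetical sketch: schema verification is opt-in."""

    def __init__(self, couch_url, check_schema_versions=False):
        self.couch_url = couch_url
        if check_schema_versions:
            # Only server startup passes True; tests can instantiate this
            # freely without hitting the database for schema verification.
            self.check_schema_versions()

    def check_schema_versions(self):
        # placeholder for the actual verification against CouchDB
        pass
```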
|
|
code-check was randomly running with py3 on CI; this commit should pin
it.
|
|
Instead of hardcoding a version. This gives us the flexibility to change
images without changing code.
|
|
The current docker image is broken due to missing libsqlcipher. This
commit adds it, plus jessie-backports due to package needs.
Resolves: #8508
|
|
We discovered that the class was registering a `finalClose` to be
executed on reactor shutdown.
In the multiuser scenario, a logout destroys Soledad and should
properly terminate everything related to it. That SQLCipherU1DBSync
instance was being held by the reactor even after logout so it
could call that `finalClose` on shutdown.
`finalClose` only set running to False and set a `shutdownID` that
was not used anywhere else, so we removed it and moved the setting of
running to False into the `close` method. That way we preserve the
functionality but let the instance be properly garbage collected
on logout.
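A minimal sketch of the change, assuming a class holding a `running` flag; the method bodies are illustrative rather than the actual Soledad code.

```python
class SQLCipherU1DBSync(object):

    def __init__(self):
        self.running = True
        # Before: a shutdown trigger was registered here, which kept the
        # reactor holding a reference to this instance until process exit:
        #   self.shutdownID = reactor.addSystemEventTrigger(
        #       'during', 'shutdown', self.finalClose)

    def close(self):
        # After: the flag is cleared on explicit close, so nothing keeps
        # the instance alive and it can be garbage collected on logout.
        self.running = False
```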
|
|
|
|
|
|
Otherwise it will put the exception as an additional parameter.
|
|
|
|
Tests that were imported from u1db or created on top of that structure
were leaving temporary directories behind. This could cause problems on
test servers, either by filling the partition or by exceeding the
maximum number of files in a directory.
This commit replaces all usages of temporary directories in the old test
structure with the pytest tmpdir fixture, which takes care of removing
temporary directories.
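A minimal sketch of the pytest pattern, with an illustrative test name; `tmpdir` is the built-in pytest fixture that hands each test its own temporary directory and cleans old ones up.

```python
def test_writes_to_scratch_dir(tmpdir):
    # tmpdir is a py.path.local object unique to this test invocation
    db_path = tmpdir.join("soledad-test.db")
    db_path.write("some contents")
    assert db_path.check(file=1)
```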
|
|
|
|
|
|
In order to configure performance tests to run on a specific machine, we
need to add a tagged job to the .gitlab-ci.yml file. That job will only
execute the perf tests, and then we can have runners that will only run
those jobs.
|
|
|
|
|
|
add coverage reports too.
(hereby we swear not to write stupid tests just because it feels good to
have an increased coverage metric).
- Resolves: #8416
|
|
this is needed for some mail tests.
|
|
|
|
|
|
We were using 'x'*size as the payload, but in real usage the payload will
be random. This commit randomizes the payload using a predefined seed, so
the random payload will be the same across benchmarks.
Using random payloads also improves the accuracy of measuring compression
or encoding impacts, and we will be evaluating such changes for resource
usage issues.
Also note that base64 is used on the payload. That was needed for utf8
safety, but its overhead was removed so payloads stay at the sizes
defined by the benchmarks.
Base64 was also chosen due to its popular usage in MIME encoding, which
is used for mail attachments (our current scenario).
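A minimal sketch of generating such a payload, assuming a fixed seed; the helper name and exact seed value are illustrative.

```python
import base64
import random

SEED = 1  # any fixed value keeps payloads identical across benchmark runs


def payload(size):
    random.seed(SEED)
    # Generate roughly 3/4 of the requested size in raw random bytes, so
    # the base64-encoded payload (used for utf8 safety) ends up close to
    # `size`.
    raw = bytearray(random.getrandbits(8) for _ in range(int(size * 0.75)))
    return base64.b64encode(bytes(raw))
```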
|
|
TestSyncEncrypterPool.test_encrypt_doc_and_get_it_back was trying to do
an operation and asserting the number of attempts. This test is about
putting a doc on the encrypter pool and getting it back encrypted. If we
don't wait for the encryption operation to succeed, then complex
trial-and-error happens, but if we just ask Twisted to wait for one
operation before starting the other, this is not needed.
-- Resolves: #8398
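A minimal sketch of that waiting pattern with Twisted trial; the pool construction and method names are assumptions for illustration, not the actual test code.

```python
from twisted.internet import defer
from twisted.trial import unittest


class TestSyncEncrypterPool(unittest.TestCase):

    @defer.inlineCallbacks
    def test_encrypt_doc_and_get_it_back(self):
        pool, doc = self._make_pool_and_doc()  # hypothetical helper
        yield pool.encrypt_doc(doc)            # wait for encryption to end
        encrypted = yield pool.get_encrypted_doc(doc.doc_id, doc.rev)
        self.assertIsNotNone(encrypted)
```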
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
If the moving of the config document is the last action of the couch
schema migration script, then we can test for successful migration of a
certain db by checking if the config document was already moved. This
commit just changes the order of migration actions to enforce this
situation.
|
|
Previous versions of the couchdb schema used documents "u1db_sync_log"
and "u1db_sync_state" to store sync metadata. At some point this was
changed, but the documents might have stayed as leftovers. This commit
adds the deletion of such documents to the migration script.
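A minimal sketch of that cleanup with the couchdb-python client; the server URL and the iteration over all databases are illustrative.

```python
import couchdb

server = couchdb.Server('http://127.0.0.1:5984')  # assumed CouchDB URL

for dbname in server:
    db = server[dbname]
    for doc_id in ('u1db_sync_log', 'u1db_sync_state'):
        doc = db.get(doc_id)
        if doc is not None:
            # leftover sync metadata from previous schema versions
            db.delete(doc)
```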
|
|
|
|
This isn't a test, but a benchmark. Initialization sounds more like an
operation, while instance is just a thing.
|
|
|
|
We are using lower values on test_encdecpool due to high memory usage,
described in #7370. Added a comment to explain it.
|
|
The defer parameter wasn't clear.
|
|
Otherwise it will add unrelated overhead to results.
|
|
|
|
They aren't used so far, and using empty dicts to make them work is ugly.
Removing them leaves the return function in the setup code clean and
readable.
|
|
1000 docs at 100k~500k were exploding memory (4GB + 4GB of swap).
Changed to 100 docs in order to be able to get measurements on higher
loads. Now it's 10k, 100k and 500k.
|
|
Hypothesis: raw vs doc.
Added the same set of sizes (10k, 100k, 500k, 1M, 10M, 50M) as the
document crypto test, so we can compare how close to raw the higher-level
operation is.
|
|
10k, 100k, 500k, 1M, 10M and 50M for encryption and decryption of a
whole document.
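A minimal sketch of wiring those sizes into pytest-benchmark; `make_doc_with_payload` and `encrypt_doc` are hypothetical stand-ins for the actual crypto entry points.

```python
import pytest

SIZES = [10 * 1000, 100 * 1000, 500 * 1000,
         1000 * 1000, 10 * 1000 * 1000, 50 * 1000 * 1000]


@pytest.mark.parametrize('size', SIZES)
def test_encrypt_doc(benchmark, size):
    doc = make_doc_with_payload(size)  # hypothetical helper
    benchmark(encrypt_doc, doc)        # hypothetical crypto entry point
```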
|
|
This was discovered during load tests: trying to process more than 999
docs triggers an error on SQLite, because a select query cannot take
more than 999 bound values.
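A minimal sketch of working around that limit by chunking the query; the table and column names are assumptions for illustration.

```python
SQLITE_MAX_VARIABLE_NUMBER = 999  # SQLite's default limit on bound values


def get_docs(conn, doc_ids):
    rows = []
    for i in range(0, len(doc_ids), SQLITE_MAX_VARIABLE_NUMBER):
        chunk = doc_ids[i:i + SQLITE_MAX_VARIABLE_NUMBER]
        placeholders = ', '.join('?' * len(chunk))
        query = 'SELECT * FROM document WHERE doc_id IN (%s)' % placeholders
        rows.extend(conn.execute(query, chunk).fetchall())
    return rows
```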
|
|
Most of them are commented out, as memory usage is going out of control
for now.
|
|
It does heavy scrypt hashing, with room for improvement.
|
|
Syncing without any changes was reported as slow. This benchmark will
help measure it.
|
|
Use a new one to avoid reusing the same database.
|