Age | Commit message (Collapse) | Author |
|
Current implementation can allow tampering, and the CTR->GCM exchange
helps to avoid it.
This commit also alters the behaviour where we moved ahead after
failing to decrypt a recovery document. IMHO we can't move ahead, as
this is a fatal error.
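A minimal sketch of the CTR->GCM idea using the cryptography library's
AESGCM (key handling and names are illustrative, not Soledad's actual
API):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # 96-bit nonce, as recommended for GCM
    aesgcm = AESGCM(key)
    ciphertext = aesgcm.encrypt(nonce, b'recovery document', None)

    # GCM authenticates the ciphertext: any tampering makes decrypt()
    # raise InvalidTag, which we now treat as fatal instead of moving on.
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)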
Signed-off-by: Victor Shyba <victor1984@riseup.net>
|
|
Integrated the secrets' JSON key that specifies ciphers into _crypto
and added optional GCM. Also added a test to check that both cipher
types can be imported.
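A hypothetical sketch of selecting the cipher from the secrets JSON
(key names and defaults are assumptions, not the real schema):

    KNOWN_CIPHERS = ('aes-256-ctr', 'aes-256-gcm')

    def get_cipher(secret):
        # default to CTR so existing secrets keep working
        cipher = secret.get('cipher', 'aes-256-ctr')
        if cipher not in KNOWN_CIPHERS:
            raise ValueError('unknown cipher: %s' % cipher)
        return cipher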
Resolves: #8680
Signed-off-by: Victor Shyba <victor1984@riseup.net>
|
|
Resolves: #8668 - client: replace usage of CTR mode + HMAC with GCM
cipher mode
Signed-off-by: Victor Shyba <victor1984@riseup.net>
|
|
Our magic value wasn't being used and was represented as a string.
Refactored it into a constant, increased its size to 2 bytes, and
optimized is_symmetrically_encrypted to look for the magic and the
symmetrically-encrypted flag under base64 encoding. Most file formats
use a magic value like this to identify themselves, so the refactor
makes it serve the purpose it was created for.
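A rough sketch of the check, under assumptions about the exact layout
(the magic bytes and offsets are illustrative):

    import base64

    MAGIC = b'\x13\x37'  # hypothetical 2-byte magic constant

    def is_symmetrically_encrypted(content):
        # decode just enough of the base64 prefix to see the magic,
        # instead of decoding the whole document
        prefix = base64.b64decode(content[:8])
        return prefix.startswith(MAGIC)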
|
|
We aren't testing huge payloads on CI, so it doesn't make sense to
insert docs one by one. 'gatherResults' can speed up bench setup.
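A sketch of the setup change, assuming a client whose create_doc
returns a Deferred (names are illustrative):

    from twisted.internet import defer

    def insert_docs(client, payloads):
        # fire all inserts and wait for the whole batch,
        # instead of inserting docs one by one
        deferreds = [client.create_doc({'payload': p}) for p in payloads]
        return defer.gatherResults(deferreds)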
|
|
AESWriter and HMACWriter just apply HMAC or AES to a flow of data.
Abstracted the application of those operations into a superclass,
leaving only the differences in each implementation.
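Illustrative shape of the refactor (class internals are assumptions):

    class GenericWriter(object):
        """Consumes a flow of data, applying one operation to each chunk."""

        def write(self, data):
            return self._apply(data)  # subclasses define _apply

    class AESWriter(GenericWriter):
        def __init__(self, encryptor):
            self._encryptor = encryptor

        def _apply(self, data):
            return self._encryptor.update(data)

    class HMACWriter(GenericWriter):
        def __init__(self, hmac):
            self._hmac = hmac

        def _apply(self, data):
            self._hmac.update(data)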
|
|
After adding the streaming decrypt, some classes were doing almost the
same thing, so they were unified.
Also fixed some module-level variables to upper case and some class
names to CamelCase.
|
|
Unfortunately, if a doc finishes decryption before the previous one, we
will still have an issue while inserting. This commit solves it by
moving the parse and decrypt steps inside the semaphore.
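A minimal sketch with Twisted's DeferredSemaphore (the helpers are
hypothetical):

    from twisted.internet.defer import DeferredSemaphore

    semaphore = DeferredSemaphore(1)

    def process(entry):
        def task():
            doc = parse(entry)         # hypothetical parse helper
            d = decrypt_doc(doc)       # hypothetical, returns a Deferred
            d.addCallback(insert_doc)  # insert happens in arrival order
            return d
        # parse + decrypt now run inside the semaphore as well
        return semaphore.run(task)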
|
|
We already do this on encryption; now we can stream from decryption as
well. This unblocks the reactor and will be valuable for blobs-io.
|
|
We now encode the preamble and the ciphertext+hmac as two distinct
payloads separated by a space. This allows metadata to be extracted and
used before decoding the whole document.
It also introduces a single packer for packing and unpacking data,
instead of ad-hoc reads and writes. Downside: doc_id and rev are now
limited to 255 chars.
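A sketch of the layout with a struct-based packer; the exact format
string is an assumption. The length-prefixed '255p' fields are what cap
doc_id and rev at 255 bytes:

    import base64
    import struct

    # magic, scheme, method, iv, doc_id, rev (illustrative field layout)
    PACKER = struct.Struct('2sbb16s255p255p')

    def encode(magic, scheme, method, iv, doc_id, rev, ciphertext_and_mac):
        preamble = PACKER.pack(magic, scheme, method, iv, doc_id, rev)
        # two payloads split by a space: the preamble comes first, so
        # metadata can be read without touching the ciphertext
        return base64.urlsafe_b64encode(preamble) + b' ' + \
            base64.urlsafe_b64encode(ciphertext_and_mac)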
|
|
The IV was being set during tests, and this required some defensive
coding to avoid the IV being set in production. This commit makes the
tests use the generated IV and "hides" it behind a read-only property
to make it clear that this should never happen.
Also refactored out some parameters that are generated automatically,
to reduce some lines of code and enhance readability.
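A sketch of the read-only property (the class name is illustrative; it
mirrors the pattern, not the actual code):

    import os

    class AESWriter(object):
        def __init__(self, key):
            self._key = key
            self._iv = os.urandom(16)  # always generated, never passed in

        @property
        def iv(self):
            # tests can read the generated IV, but nothing can set it
            return self._iv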
|
|
|
|
Document sending happens after encryption, so the last sent document
needs to be signalled after the request ends.
|
|
encrypt returns a Deferred and needs the adapted benchmark runner.
|
|
When we use marks, the new pytests from the benchmarks folder are
collected and then ignored, but this sometimes causes trial to fail.
Using --ignore prevents them from being loaded at all, while
--benchmark-only properly selects the benchmarks for tox, as intended.
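An illustrative tox.ini fragment of the idea (paths and env names are
assumptions):

    [testenv]
    commands = py.test --ignore=tests/benchmarks {posargs}

    [testenv:benchmark]
    commands = py.test --benchmark-only tests/benchmarks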
|
|
test_deprecated_crypto was using pytest, which unfortunately doesn't
work when mixed with trial. Migrated it back.
Also added the norecursedirs option back, as it is necessary for
parallel testing mode.
|
|
|
|
|
|
|
|
|
|
|
|
Fixes setup.cfg by adding the current exclude rules, simplifies tox.ini
to use setup.cfg, and fixes all the issues that surfaced.
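An illustrative setup.cfg fragment (the exclude list is an assumption):

    [flake8]
    exclude = .git,.tox,dist,docs,*.egg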
|
|
Also refactored tests and code to stop relying on old parameters, which
passed docs around instead of using get_doc calls.
|
|
Deferred encryption option is gone.
|
|
Will be removed when we have the proper tool to migrate data.
|
|
We now have benchmarks to test sync limits, and 100mb is far beyond
current needs.
|
|
If we create everything at once we can't test higher loads, because it
will try to hold everything in memory at the same time. Also, this code
is smaller and more readable.
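A sketch of the lazy creation (names are illustrative):

    def payloads(count, size):
        # yield one payload at a time instead of building a huge list,
        # so higher loads never need to fit in memory all at once
        for _ in range(count):
            yield 'x' * size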
|
|
This test was using pytest, but it was making other trial-based tests
fail. I couldn't figure out why, but falling back to TestCase solved
it.
|
|
We aren't using the leap.common.http implementation, and we need
specific features from the original Twisted Web Agent. This commit
implements it on the HTTP Target.
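A minimal sketch of wiring Twisted's Agent into the target (class and
helper names are illustrative):

    from twisted.internet import reactor
    from twisted.web.client import Agent, readBody

    class HTTPTargetBase(object):
        def __init__(self, url):
            self._url = url  # bytes, e.g. b'http://localhost:2424'
            self._agent = Agent(reactor)

        def _request(self, method=b'GET', body_producer=None):
            d = self._agent.request(method, self._url,
                                    bodyProducer=body_producer)
            d.addCallback(readBody)
            return d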
|
|
By parsing the metadata we can store the total number of docs and pass
it to the doc parser, in order to keep the events info consistent.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1) enable HTTP 1.1 chunked upload on the server
2) make the client sync.py generate a list of function calls instead of
a list of full docs
3) disable the encryption pool
4) make the doc encryption a list of function calls
5) create a twisted protocol for sending
6) make a producer that calls the doc generation as necessary (see the
sketch below)
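A rough sketch of item 6, a body producer that invokes each doc
generation call only when data is needed (names are illustrative, not
the actual implementation):

    from twisted.internet import defer
    from twisted.web.iweb import IBodyProducer, UNKNOWN_LENGTH
    from zope.interface import implementer

    @implementer(IBodyProducer)
    class DocStreamProducer(object):
        # chunked upload: the total size is not known upfront
        length = UNKNOWN_LENGTH

        def __init__(self, doc_calls):
            self._calls = doc_calls  # list of function calls, not full docs

        @defer.inlineCallbacks
        def startProducing(self, consumer):
            for call in self._calls:
                entry = yield call()  # generate/encrypt one doc on demand
                consumer.write(entry)

        def pauseProducing(self):
            pass

        def stopProducing(self):
            pass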
|
|
This commit finishes the reversion to the original u1db streaming
protocol for downloads.
|
|
It's not being used.
|
|
Temporary fix for server streaming
|
|
CouchServerState is spread across the test codebase, but this option is
intended to be used only on server startup. This commit makes it
default to False and explicitly sets it to True where necessary.
|
|
code-check was randomly running with py3 on CI; this commit should pin
it.
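An illustrative tox.ini pin (the env name is an assumption):

    [testenv:code-check]
    basepython = python2.7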
|
|
|
|
|
|
Tests that were imported from u1db or created on top of that structure
were leaving temporary directories behind. This could cause problems on
test servers, either by filling the partition or by exceeding the
maximum number of files in a directory.
This commit replaces all usages of temporary directories in the old
test structure with pytest's tmpdir fixture, which properly takes care
of removing temporary directories.
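A sketch of the pattern (the test body and helper are hypothetical):

    def test_create_database(tmpdir):
        # pytest hands each test a fresh temporary directory and prunes
        # old runs automatically, so nothing is left behind
        db_path = tmpdir.join('test.db').strpath
        db = open_database(db_path)  # hypothetical helper under test
        assert db is not None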
|
|
In order to configure performance tests to run on a specific machine,
we need to add a tagged job to the .gitlab-ci.yml file. That job will
only execute the perf tests, and then we can have runners that will
only run those jobs.
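An illustrative .gitlab-ci.yml job (the tag and command are
assumptions):

    benchmark:
      tags:
        - benchmarks
      script:
        - tox -e perf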
|
|
|
|
Add coverage reports too.
(Hereby we swear not to write stupid tests just because it feels good
to have an increased coverage metric.)
- Resolves: #8416
|
|
We were using 'x'*size as the payload, but in real usage the payload
will be random. This commit randomizes the payload using a predefined
seed, so the random payload will be the same across benchmarks.
Using random payloads also improves the accuracy of measuring
compression or encoding impacts, and we will be evaluating those
changes for resource usage issues.
Also note that base64 is used on the payload. That was needed for utf8
safety, but its overhead was compensated for, to leave payloads at the
sizes defined by the benchmarks.
Base64 was also chosen due to its popular use in MIME encoding, which
is used for mail attachments (our current scenario).
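A sketch of the seeded generation; the seed value is illustrative.
Generating 3/4 of the target size compensates for base64's 4/3
overhead, keeping payloads at the sizes the benchmarks define:

    import base64
    import random

    random.seed(1337)  # predefined seed: same "random" payload every run

    def payload(size):
        # base64 expands 3 bytes into 4 chars, so generate 3/4 of `size`
        raw = bytes(bytearray(random.getrandbits(8)
                              for _ in range(size * 3 // 4)))
        return base64.b64encode(raw)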
|