Age | Commit message | Author |
|
Also explaining how we are using Twisted's consumer interfaces.
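For reference, a minimal sketch (not the project's actual code) of the shape of Twisted's consumer interface that those docs refer to: a consumer accepts bytes through write() and has a producer registered on it that feeds the data.

    from zope.interface import implementer
    from twisted.internet.interfaces import IConsumer


    @implementer(IConsumer)
    class PrintingConsumer(object):
        """Toy consumer: just reports how much data it receives."""

        def registerProducer(self, producer, streaming):
            # streaming=True means a push producer, False a pull producer
            self._producer = producer

        def unregisterProducer(self):
            self._producer = None

        def write(self, data):
            print("received %d bytes" % len(data))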
|
|
Received docs make no sense for a single-request download, so that code was
removed along with all its comments and docstrings. Also updated docstrings
for other methods.
The method that tests whether a SQLCipher database is encrypted can return a
db handle that can be used right away. If we ignore it and reopen the
database, we can end up with a lost open cursor.
|
|
Batching is now decided on the server side, so the code can be simplified.
Also, sync_db and other parameters were used to initialize encdecpool,
which is no longer supported.
|
|
Document sending happens after encryption, so the last sent document
needs to be signalled after the request ends.
|
|
We need to emit zmq status during doc prepare, which is called during
upload.
|
|
Fixes setup.cfg by adding the current exclude rules, simplifies tox.ini to
use setup.cfg, and fixes everything that came up.
|
|
Also refactored tests and code to stop relying on the old parameters, which
included docs instead of get_doc calls.
|
|
Giving proper names to the function and its arguments helps make the
producer wizardry less magical.
|
|
|
|
Code was complex and raised a flag during review.
|
|
|
|
|
|
Asserts aren't a good solution for stream parsing; it's cleaner to check
and raise in place. Also, asserts can be skipped entirely (e.g. when Python
runs with -O).
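A minimal sketch of the check-and-raise pattern, with hypothetical names; unlike an assert, the check still runs when assertions are disabled.

    class InvalidStreamEntry(Exception):
        """Hypothetical error for a malformed sync stream line."""


    def check_entry(line):
        # explicit check instead of `assert line.startswith(b'{')`,
        # which would be skipped entirely under `python -O`
        if not line.startswith(b'{'):
            raise InvalidStreamEntry("expected a JSON line, got: %r" % (line,))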
|
|
Will be removed when we have the proper tool to migrate data.
|
|
This is supposed to be used only for temporary backwards compatibility,
while we develop a proper migration tool.
|
|
A dict was used to store references to the synchronizers, keyed by URL. This
commit removes it, as it doesn't make sense with the current code.
|
|
We aren't using the leap.common.http implementation, and we need specific
features from the original Twisted Web Agent. This commit implements that on
the HTTP Target.
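For reference, a minimal sketch of talking to a server with the stock twisted.web Agent (the function and URL handling here are illustrative, not the actual HTTP target code):

    from twisted.internet import reactor
    from twisted.web.client import Agent, readBody


    def fetch(url):
        # Agent.request returns a Deferred that fires with a Response;
        # readBody then collects the body into a single byte string.
        agent = Agent(reactor)
        d = agent.request(b'GET', url)
        d.addCallback(readBody)
        return d

The stock Agent also accepts a bodyProducer argument on request(), which is what the upload producer sketched further down plugs into.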
|
|
By parsing the metadata we can store the total number of docs and hand it to
the doc parser, in order to keep the events info consistent.
|
|
|
|
Some duplicated code got removed. Additional comments were added to document
such a critical and complex part as the protocol.
|
|
Both classes hold u1db error handling. Making DocStreamReceiver a
subclass reduces the error handling to a single place, thus removing
duplicated code.
|
|
Insertion is synchronous and blocks the reactor. That's a temporary
solution, like the one we used to have with the decpool.
|
|
This allows different paths for raw data and metadata, avoiding
unnecessary JSON parsing.
|
|
There was an if without an else in the error handler, so errors that fell
through the current logic weren't handled at all. Added a generic else at
the tail so we don't miss them.
|
|
1) enable HTTP 1.1 chunked upload on the server
2) make the client sync.py generate a list of function calls instead of
a list of full docs
3) disable the encryption pool
4) make the doc encryption a list of function calls
5) create a Twisted protocol for sending
6) make a producer that calls the doc generation as necessary (see the
sketch below)
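A minimal sketch, with hypothetical names, of how items 2 and 6 could fit together: the request body is produced from a list of zero-argument callables, so each doc is generated and encrypted only when it is about to be written, and the unknown length makes the HTTP 1.1 request go out chunked.

    from zope.interface import implementer
    from twisted.internet import defer
    from twisted.web.iweb import IBodyProducer, UNKNOWN_LENGTH


    @implementer(IBodyProducer)
    class DocCallsProducer(object):
        """Hypothetical body producer fed by a list of callables."""

        length = UNKNOWN_LENGTH  # no Content-Length -> chunked upload

        def __init__(self, doc_calls):
            # e.g. doc_calls = [lambda: encrypt_doc(d) for d in doc_ids]
            self._doc_calls = doc_calls

        def startProducing(self, consumer):
            for call in self._doc_calls:
                consumer.write(call())  # the doc only exists from here on
            return defer.succeed(None)

        def pauseProducing(self):
            pass

        def resumeProducing(self):
            pass

        def stopProducing(self):
            pass

Such a producer would be handed to Agent.request() as the bodyProducer argument.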
|
|
This commit finishes the reversion to the original u1db streaming protocol
for downloads.
|
|
It's not being used.
|
|
If a doc doesn't have content, it means it was deleted. The sync stream was
unable to represent this state.
|
|
Check if the backend provides a commit method before calling it, or we will
break the tests with InMemoryDatabase.
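A sketch of the guard (the helper name is made up; `db` stands for whatever backend instance the sync code holds):

    def maybe_commit(db):
        # InMemoryDatabase has no commit(), so only call it when present
        commit = getattr(db, 'commit', None)
        if callable(commit):
            commit()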
|
|
Make the client parse a 2-line doc on the sync download stream.
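A rough sketch of what parsing such an entry could look like, assuming the first line carries the doc metadata as JSON and the second the raw content; the names and field handling are illustrative, not necessarily the exact wire format:

    import json


    def parse_doc_entry(metadata_line, content_line):
        # line 1: JSON metadata; line 2: raw (possibly encrypted) content,
        # which never has to go through the JSON parser
        meta = json.loads(metadata_line)
        content = content_line or None  # an empty line could mean a deleted doc
        return meta, content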
|
|
Temporary fix for server streaming
|
|
We were using 1 transaction per doc, which is bad.
Reference:
http://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite
Code now uses 1 transaction for the whole sync.
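An illustrative toy comparison (in-memory database, made-up schema): all inserts share one transaction instead of paying a commit per document, which is the change described above.

    import sqlite3

    docs = [('doc-%d' % i, '{}') for i in range(1000)]

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE documents (doc_id TEXT, content TEXT)')

    # one transaction for the whole batch; one commit per row is what
    # made the per-doc approach slow
    with conn:
        conn.executemany(
            'INSERT INTO documents (doc_id, content) VALUES (?, ?)', docs)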
|
|
We discovered that class was registering a `finalClose` to be
executed on reactor shutdown.
In the multiuser scenario, a logout destroys Soledad and should
properly terminate everything related to it. That SQLCipherU1DBSync
instance was being held even after logout by the reactor, so it
could call that `finalClose` on shutdown.
The `finalClose` only set running to False and set a `shutdownID` that
was not used anywhere else, so we removed it and moved setting
running to False to the `close` method. That way we preserve
the functionality but let the instance be properly garbage collected
on logout.
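For context, a minimal sketch of the mechanism (not the project's code): registering a shutdown trigger makes the reactor keep a reference to the bound method, and therefore to the instance, until shutdown.

    from twisted.internet import reactor


    class SyncLike(object):
        """Toy stand-in for the old SQLCipherU1DBSync behaviour."""

        def __init__(self):
            self.running = True
            # the reactor now holds self._final_close (and thus self)
            # until shutdown, so logout alone can't free this object
            self.shutdown_id = reactor.addSystemEventTrigger(
                'before', 'shutdown', self._final_close)

        def _final_close(self):
            self.running = False

        def close(self):
            # after the change, the flag is simply cleared here and no
            # shutdown trigger is registered at all
            self.running = False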
|
|
Otherwise it will pass the exception as an additional parameter.
|
|
|
|
|
|
This is needed for some mail tests.
|
|
|