Age | Commit message | Author |
|
We aren't using the leap.common.http implementation and we need specific
features from the original Twisted Web Agent. This commit implements it
in HTTP Target.
|
|
By parsing the metadata we can store the total number of docs and hand
it to the doc parser, in order to keep the events info consistent.
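A minimal sketch of the idea, assuming a hypothetical JSON metadata entry at the head of the download stream (the field name is illustrative):

    import json

    def read_stream_metadata(first_line):
        """Return the total number of docs announced by the stream metadata."""
        metadata = json.loads(first_line)
        return metadata['count']  # hypothetical field name

    # The doc parser can then emit consistent progress events such as
    # "received n of total docs" for every entry it processes.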
|
|
|
|
Some code was duplicated and got removed. Additional comments were added
to document such a critical and complex part as the protocol.
|
|
Both classes hold u1db error handling. Making DocStreamReceiver a
subclass reduces the error handling to a single place, thus removing
duplicated code.
|
|
Insertion is synchronous and blocks the reactor. That's a temporary
solution, similar to what we used to have with the decpool.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This allows different paths for raw data and metadata, avoiding
unnecessary JSON parsing.
|
|
There was an if without an else in the error handler, so errors that
fell through the current logic were not handled. A generic handler was
added at the tail so we don't miss them.
|
|
1) enable HTTP 1.1 chunked upload on server
2) make the client sync.py generate a list of function calls instead of
a list of full docs
3) disable encryption pool
4) make the doc encryption a list of function calls
5) create a twisted protocol for sending
6) make a producer that calls the doc generation as necessary (sketched below)
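A minimal sketch of items 5 and 6, assuming a hypothetical list of (function, args) pairs that lazily serialize each doc; reporting UNKNOWN_LENGTH is what makes Twisted's Agent switch to HTTP/1.1 chunked upload:

    from twisted.internet import defer
    from twisted.web.iweb import IBodyProducer, UNKNOWN_LENGTH
    from zope.interface import implementer

    @implementer(IBodyProducer)
    class DocStreamProducer(object):
        """Produce the request body by invoking doc generators on demand."""

        length = UNKNOWN_LENGTH  # unknown size -> chunked transfer-encoding

        def __init__(self, calls):
            self._calls = calls  # list of (function, args) pairs (hypothetical)

        def startProducing(self, consumer):
            # Serialize and write one doc at a time instead of building the
            # whole payload in memory.
            for func, args in self._calls:
                consumer.write(func(*args))
            return defer.succeed(None)

        def pauseProducing(self):
            pass

        def stopProducing(self):
            pass

Such a producer would be passed as the bodyProducer argument of Agent.request().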
|
|
This commit finishes the reversion to the original u1db streaming
protocol for downloads.
|
|
It's not being used
|
|
If a doc doesn't have content, it means it was deleted. The sync stream
was unable to represent this state.
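A hedged illustration of how the receiving side can now map that state (helper names are hypothetical):

    def handle_incoming_entry(db, doc_id, rev, content):
        # An entry arriving without content means the doc was deleted at the
        # source, so record a deletion instead of an empty document.
        if content is None:
            db.mark_deleted(doc_id, rev)               # hypothetical helper
        else:
            db.put_received_doc(doc_id, rev, content)  # hypothetical helper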
|
|
Check if the backend provides a commit method before calling it, or we
will break the tests with InMemoryDatabase.
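A minimal sketch of the guard (the function name is illustrative):

    def maybe_commit(backend):
        # Only commit if the backend actually exposes a commit method;
        # the InMemoryDatabase used by the tests does not.
        if getattr(backend, 'commit', None) is not None:
            backend.commit()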
|
|
Make the client parse a 2-line doc in the sync download stream.
|
|
Temporary fix for server streaming
|
|
We were using one transaction per doc, which is bad for performance.
Reference:
http://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite
The code now uses one transaction for the whole sync.
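A minimal illustration of the change using plain sqlite3 (path, schema and variable names are hypothetical); the same idea applies to the sqlcipher-backed connection:

    import sqlite3

    conn = sqlite3.connect('soledad.db')  # hypothetical path
    docs = []  # (doc_id, rev, content) tuples coming from the sync stream

    # Before: one transaction (and one fsync) per doc.
    # After: a single transaction wrapping the whole sync.
    with conn:  # opens a transaction and commits once on success
        for doc_id, rev, content in docs:
            conn.execute(
                "INSERT INTO docs (doc_id, rev, content) VALUES (?, ?, ?)",
                (doc_id, rev, content))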
|
|
We discovered that the class was registering a `finalClose` to be
executed on reactor shutdown.
In the multiuser scenario, a logout destroys Soledad and should
properly terminate everything related to it. That SQLCipherU1DBSync
instance was being held by the reactor even after logout so it
could call that `finalClose` on shutdown.
The `finalClose` only set running to False and set a `shutdownID` that
was not used anywhere else, so we removed it and moved the setting of
running to False into the `close` method. That way we preserve the
functionality but let the instance be properly garbage collected
on logout.
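Roughly, the change looks like this (a simplified sketch, attribute names illustrative):

    class SQLCipherU1DBSyncSketch(object):

        def __init__(self):
            self.running = True
            # Before: registering a shutdown trigger made the reactor hold a
            # reference to this instance until process exit:
            #   self.shutdownID = reactor.addSystemEventTrigger(
            #       'during', 'shutdown', self.finalClose)

        def close(self):
            # After: clear the flag here, so nothing outlives logout and the
            # instance can be garbage collected.
            self.running = False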
|
|
Otherwise it will put the exception as an additional parameter.
|
|
|
|
|
|
This is needed for some mail tests.
|
|
|
|
This was discovered during load tests: trying to process more than 999
docs triggers an error on SQLite, because a SELECT query cannot bind
more than 999 values.
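The usual workaround is to chunk the bound parameters, as in this sketch (table and function names are hypothetical):

    SQLITE_MAX_VARS = 999  # SQLite's default host-parameter limit

    def fetch_docs(conn, doc_ids):
        """Run the SELECT in batches so no query binds more than 999 values."""
        rows = []
        for i in range(0, len(doc_ids), SQLITE_MAX_VARS):
            chunk = doc_ids[i:i + SQLITE_MAX_VARS]
            placeholders = ', '.join('?' * len(chunk))
            rows.extend(conn.execute(
                "SELECT doc_id, content FROM docs WHERE doc_id IN (%s)"
                % placeholders, chunk))
        return rows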
|
|
test_processing_order aims to check that unordered docs won't be
processed, but if we let the pool start and advance the Twisted
LoopingCall clock right before calling the processing method manually,
the processing method will run concurrently and cause a race condition.
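One common way to make such a test deterministic is to drive the LoopingCall with a fake clock instead of the real reactor (a sketch, not the actual test; the pool attribute name is hypothetical):

    from twisted.internet import task

    def test_processing_order_sketch(pool):
        # Give the pool's LoopingCall a controllable clock so it can never
        # fire concurrently with the manual call below.
        clock = task.Clock()
        pool._loop.clock = clock  # hypothetical attribute holding the LoopingCall

        # insert unordered docs here, call the processing method manually,
        # and assert that the out-of-order docs were not processed.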
|
|
|
|
|
|
SQLCipher database access errors can raise Soledad exceptions. Database access
and multithreading resources are allocated in different places, so we have to
be careful to close all multithreading mechanisms in case of database access
errors. If we don't, zombie threads may haunt the reactor.
This commit adds SQLCipher exception trapping and Soledad exception raising
for database access errors, while properly shutting down multithreading
resources.
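The shape of the fix is roughly this (exception and attribute names are illustrative, not the exact Soledad API):

    from pysqlcipher import dbapi2

    class SoledadSqlcipherWrapperSketch(object):
        """Illustrative only; not the actual Soledad class."""

        def _open_database(self):
            try:
                self._db_handle = dbapi2.connect(self._db_path)  # may raise
            except dbapi2.DatabaseError as e:
                # Don't leave zombie threads behind: tear down the threading
                # machinery before translating the error.
                self._sync_threadpool.stop()
                raise DatabaseAccessError(str(e))  # hypothetical Soledad exception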
|
|
|
|
|
|
From this moment on, we embed a fork of u1db called l2db.
|
|
|
|
It was breaking E126 and E202 before
|
|
|
|
Do not initialize the openssl context on each call to decrypt.
I'm not 100% sure of the causal chain, but it seems that the
initialization of the osrandom engine done by the openssl backend might
be wreaking havoc when sqlcipher calls rand_bytes concurrently.
Further testing is needed to confirm this is the ultimate cause, but in
my tests this change avoids the occurrence of the dreaded Floating Point
Exception in soledad/sqlcipher.
- Resolves: #8180
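The gist of the change, using the cryptography library (a simplified sketch of the pattern, not the exact Soledad code):

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Initialize the OpenSSL backend (and its osrandom engine) once at import
    # time instead of on every decrypt call.
    CRYPTO_BACKEND = default_backend()

    def decrypt_ctr(key, iv, ciphertext):
        """Decrypt AES-CTR data reusing the module-level backend."""
        cipher = Cipher(algorithms.AES(key), modes.CTR(iv),
                        backend=CRYPTO_BACKEND)
        decryptor = cipher.decryptor()
        return decryptor.update(ciphertext) + decryptor.finalize()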
|
|
|
|
|
|
For the case where the user already has synced data, this commit
migrates the docs_received table to include a sync_id column.
That is required by the refactoring in the previous commits.
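A minimal sketch of such a migration (table and column names follow the commit message; the rest is illustrative):

    def migrate_docs_received(conn):
        """Add the sync_id column to docs_received if it is not there yet."""
        columns = [row[1] for row in
                   conn.execute("PRAGMA table_info(docs_received)")]
        if 'sync_id' not in columns:
            conn.execute("ALTER TABLE docs_received ADD COLUMN sync_id TEXT")
            conn.commit()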
|
|
Docs created by one failed sync would still be there for the next one,
possibly causing a lot of hard-to-find errors. This commit adds a
sync_id field to track each sync's documents in isolation, and cleans
up the pool on start instead of in the constructor.
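A hedged sketch of the cleanup on start (function name is illustrative):

    def start_sync(conn, sync_id):
        # Drop leftovers from any previous (possibly failed) sync before this
        # one starts, keyed by the new sync_id column.
        conn.execute(
            "DELETE FROM docs_received WHERE sync_id != ?", (sync_id,))
        conn.commit()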
|