Age | Commit message | Author |
|
|
|
From this moment on, we embed a fork of u1db called l2db.
|
|
|
|
It was breaking pep8 checks E126 and E202 before
|
|
|
|
Do not initialize the openssl context on each call to decrypt.
I'm not 100% sure of the causal chain, but it seems that the
initialization of the osrandom engine done by the openssl backend might
be wreaking havoc when sqlcipher calls rand_bytes concurrently.
Further testing is needed to confirm this is the ultimate cause, but in
my tests this change avoids the occurrence of the dreaded Floating Point
Exception in soledad/sqlcipher.
- Resolves: #8180
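A minimal sketch of the kind of change described above, assuming the cryptography library is used for decryption; the helper name and cipher parameters are illustrative and not the actual Soledad code:

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Initialize the backend (and its osrandom engine) once, at import
    # time, instead of on every call to decrypt.
    _CRYPTO_BACKEND = default_backend()

    def decrypt_ctr(key, iv, ciphertext):
        # hypothetical helper; AES-CTR is just an example cipher here
        cipher = Cipher(algorithms.AES(key), modes.CTR(iv),
                        backend=_CRYPTO_BACKEND)
        decryptor = cipher.decryptor()
        return decryptor.update(ciphertext) + decryptor.finalize()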
|
|
|
|
|
|
For the case where the user already has synced data, this commit
migrates the docs_received table to add the sync_id column, which is
required by the refactoring in the previous commits.
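A rough sketch of that kind of migration, assuming a DB-API connection to the local database; only the docs_received table and sync_id column names come from the commit message, the rest is illustrative:

    def migrate_docs_received(conn):
        # add the sync_id column only if it is not already there
        cursor = conn.cursor()
        cursor.execute("PRAGMA table_info(docs_received)")
        columns = [row[1] for row in cursor.fetchall()]
        if 'sync_id' not in columns:
            cursor.execute(
                "ALTER TABLE docs_received ADD COLUMN sync_id TEXT")
            conn.commit()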
|
|
Docs created by one failed sync would still be there for the next one,
possibly causing a lot of hard-to-find errors. This commit adds a
sync_id field to track each sync's documents in isolation, and cleans up
the pool on start instead of in the constructor.
|
|
This commit adds tests for doc ordering and encdecpool control
(start/stop). It also optimizes by deleting in batches and by checking
for a sequence in memory before asking the local staging for documents.
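A rough sketch of the batch-delete part of that optimization; the docs_received table name comes from an earlier commit in this log, while the column name and function are assumptions:

    def delete_received_docs(conn, doc_ids):
        # delete all processed documents in a single statement instead
        # of issuing one DELETE per document
        placeholders = ', '.join(['?'] * len(doc_ids))
        conn.execute(
            "DELETE FROM docs_received WHERE doc_id IN (%s)" % placeholders,
            doc_ids)
        conn.commit()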
|
|
This commit removes the multiprocessing pool and takes a step closer to
making encdecpool simpler. Download speed is now at a constant rate, CPU
usage is lower, and the reactor responds fast when running with an HTTP
server like Pixelated.
|
|
|
|
Theoretically (until now), Soledad inherits from U1DB the behaviour of only
accepting valid JSON for document contents. JSON documents only allow
unicode strings. Despite that, until now we had implemented a lossy conversion
to unicode to avoid encoding errors when dumping/loading JSON content. This
allowed API users to pass non-unicode content to Soledad, but made the
application take more time because of the conversion.
There were 2 problems with this: (1) conversion may take a long time and a lot
of memory when converting large payloads; and (2) conversion was being done
before deferring to the adbapi, and this was blocking the reactor.
This commit completely removes the conversion to unicode, thus leaving the
responsibility of unicode conversion to users of the Soledad API.
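A minimal sketch of what this means on the caller's side, assuming the application builds JSON content itself; the function name is illustrative and not part of the Soledad API:

    import json

    def make_doc_content(raw_bytes):
        # Decode explicitly at the application boundary: Soledad no
        # longer performs a lossy conversion on the caller's behalf, so
        # invalid input now fails here instead of being silently mangled.
        text = raw_bytes.decode('utf-8')
        return json.dumps({'payload': text})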
|
|
The constructor method of Soledad was receiving two arguments for the
user id. One of them was optional, with None as default, which could
cause an inconsistent state with the uuid set but the userid unset.
This change removes the optional user_id argument from the
initialization method and returns the uuid whenever the Soledad.userid
method is called.
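A rough sketch of the resulting behaviour, trimmed down to the bare minimum; this is not the actual Soledad class and the attribute names are illustrative:

    class Soledad(object):

        def __init__(self, uuid):
            # a single identifier is kept; no separate optional user_id
            self.uuid = uuid

        @property
        def userid(self):
            # anyone asking for the userid simply gets the uuid back
            return self.uuid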
|
|
|
|
|
|
|
|
Shared db locking was used to avoid the case in which two different devices
try to store/modify remotely stored secrets at the same time. We want to
avoid remote locks because of the problems they create, and prefer to crash
locally.
For the record, we are currently using the user's password to encrypt the
secrets stored in the server, and while we continue to do this we will have to
re-encrypt the secrets and update the remote storage whenever the user changes
her password.
|
|
|
|
for some reason, available_backends does not work inside a frozen
PyInstaller binary.
- Resolves: #7952
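A hedged sketch of this kind of workaround: instead of relying on available_backends(), import the OpenSSL backend explicitly so PyInstaller's static analysis can see it. The module path matches the cryptography releases contemporary to this commit and may differ or be gone in newer versions; the helper is illustrative.

    # explicit import, discoverable by PyInstaller's static analysis
    from cryptography.hazmat.backends.openssl.backend import backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def make_aes_ctr_cipher(key, iv):
        # illustrative helper, not the actual Soledad code
        return Cipher(algorithms.AES(key), modes.CTR(iv), backend=backend)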
|
|
cryptography is already pulled in by the OpenSSL and Twisted
dependencies, so it is already installed.
This commit removes a compiled dependency, possibly also making it
easier to use on Windows.
|
|
- Move them to a thread so the reactor can continue
processing e.g. http requests
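A minimal sketch of that pattern using Twisted's thread helpers; decrypt_blocking is an illustrative stand-in for whatever blocking work is being moved off the reactor:

    from twisted.internet import threads

    def decrypt_blocking(doc):
        # placeholder for CPU-bound or otherwise blocking work
        return doc

    def decrypt_in_thread(doc):
        # returns a Deferred; meanwhile the reactor keeps serving
        # e.g. http requests
        return threads.deferToThread(decrypt_blocking, doc)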
|
|
|
|
|
|
this allows switching between online and offline mode on a running
soledad instance.
|
|
|
|
for the moment, the userid has to be passed to the constructor.
eventually, we might drop support for passing the uuid, since it will be
mapped in the service tree
|
|
- Resolves: #7656
- Releases: 0.8.0
|
|
With this commit, all batching code has no effect by default. Since we
know this is a dangerous new feature, we will enable it only on our test
servers and check it manually before setting it as default or adding
more configuration features.
Use SyncTarget and the server conf file to enable it for testing.
|
|
u1db provides batching by default, but the current Soledad HTTPS Sync
Target was stuck at 1 doc per request. This commit adds batching
capability, limiting the batch size to a predefined value.
Default size limit: 500kB
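A rough sketch of size-limited batching; the 500 kB figure comes from the commit message, everything else (names, the way docs are measured) is illustrative:

    BATCH_SIZE_LIMIT = 500 * 1024  # bytes

    def make_batches(encoded_docs):
        # group already-encoded docs into batches that stay under the limit
        batch, size = [], 0
        for doc in encoded_docs:
            if batch and size + len(doc) > BATCH_SIZE_LIMIT:
                yield batch
                batch, size = [], 0
            batch.append(doc)
            size += len(doc)
        if batch:
            yield batch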
|
|
|
|
This line was missing a yield, and without it we ended up inserting a
document that was being retrieved, and bad things happened.
This is the core fix from yesterday's debugging session. During
sequential syncs the pool was inserting and querying at the same time,
sometimes repeating documents or failing to delete them.
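A hedged sketch of this class of bug, written in Twisted's inlineCallbacks style; the pool methods here are illustrative, not the actual encdecpool API:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def insert_then_query(pool, doc):
        # Without the yield below, the insert would still be in flight
        # while the following query runs, and the two would race.
        yield pool.insert_received_doc(doc)
        remaining = yield pool.get_docs_to_process()
        defer.returnValue(remaining)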
|
|
If we reset the vars after firing the finish callback, another thread
can pick up a dirty state due to concurrency.
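A minimal sketch of the ordering this implies; the attribute and method names are illustrative:

    def _finish(self):
        # grab the deferred and clear the shared state *before* firing
        # the callback, so no other thread can observe stale values
        deferred, self._deferred = self._deferred, None
        self._processed_docs = []
        deferred.callback(None)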
|
|
by subclassing the MissingDesignDocError, we don't have to import the
soledad.common.couch submodule into the soledad.client.sync
- Resolves: #7626
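A rough sketch of the subclassing approach, with the module layout simplified and the base class name hypothetical; the point is that client code can catch a base class defined in a common module instead of importing the couch-specific one:

    # in a common errors module (name simplified / hypothetical)
    class SoledadError(Exception):
        """Base class the client already knows about."""

    # in the couch-specific module
    class MissingDesignDocError(SoledadError):
        """Raised when a CouchDB design document is missing."""

    # client code can now catch SoledadError without importing
    # soledad.common.couch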
|
|
errors.py was holding a few specific CouchDB errors, which are now moved
into the couch.errors module. Also, some of CouchDatabase's methods were
declared as private, but external classes need them.
|
|
A MissingDesignDocError raised on get_sync_info due to a missing design
document will be handled by the server during sync.
ensure is now False by default, so database creation can deliver an
empty database that will be ensured during sync, according to the ensure
parameter.
|
|
this is a workaround to reduce the chances of a failed sync due to
timeouts. this should be properly tackled by:
1. implementing a proper cancellable for the sync operation.
2. implementing a retry count at the level of a single request, handled
internally by soledad.
in this way we can remove the retries logic from the soledadbootstrapper
in the bitmask client.
- Related: #7382
|
|
- Related: #7503
|
|
This error is raised while getting info on the target (or server)
replica. In the previous implementation there was nothing to do here,
but now that we have db creation in place this error should be handled
just like in the original u1db implementation.
The reason is that db creation occurs during the sync exchange, but
before that we ask for info, and the code that checks for info raises an
error that is used to signal the client that a database will be created
and that it must save the replica uid returned by the server after the
sync exchange (where we send/fetch documents).
|
|
- bug: we were dumping the received secrets locally to disk *before*
setting the received property for the active secret, and therefore the
'active_secret' was always marked as null.
- refactor common code into a utility method.
|
|
|
|
We are getting "too many files open" while running tests with a limit of
1024 max open files. This is a leak from close methods. Some of them
should be fixed by this commit, but further investigation may be
necessary.
|
|
in this way we use the reactor pattern to dispatch the events, instead
of having the overhead of running a separate client thread.
- Resolves: #7274
|
|
The code was trying to close an already closed threadpool, which raises
errors on Twisted 15.4.
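A minimal sketch of the kind of guard this implies; started is a real attribute of Twisted's ThreadPool, but the surrounding helper is illustrative:

    def stop_pool(threadpool):
        # only stop the pool if it was actually started, to avoid the
        # errors Twisted 15.4 raises when stopping a stopped pool
        if threadpool.started:
            threadpool.stop()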
|
|
- Resolves: #7412
|
|
The http_target.py refactor started in 8adf2dedb74941352520d8de42326b0c59818728
forgot to remove the old file.
|
|
|
|
Preparing many docs is useful for batching only. As we are sending them
one by one, I will leave the prepare_one_doc method to do the encryption
as the docs go to upload.
Also fixes the method name as kaliy suggested.
|
|
isinstance is better, as kaliy pointed out, and the constructor is also
in a safer place in __init__.py, where it is explicit.
Also re-apply a change from the last rebase.
|