Age | Commit message | Author |
|
|
|
All the response parse tests are passing now. The response with no
entries was broken because that case wasn't being handled, and the
others were broken because of calls that no longer exist.
|
|
Created a setup for the http target tests.
Fixed two tests relying on the http target that were outdated.
Fixed a reference to an exception that doesn't exist; the code won't
break anymore if it reaches that exception path.
|
|
A line break before a binary operator breaks PEP8; fixed that in the
client's api.py.
|
|
before the sqlcipher backend, or the attribute is not found.
This is a leftover of the recent refactor.
|
|
|
|
|
|
|
|
SoledadCrypto took Soledad as a parameter in order to use
SoledadSecrets. SoledadSecrets took SoledadCrypto as a parameter in
order to use the *crypt_sym methods. This commit removes the circular
dependency by passing the secret that SoledadCrypto cares about
directly to its constructor and removing the *crypt_sym methods from
SoledadCrypto.
- Resolves: #7338
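A rough sketch of the shape of the change; parameter names besides the two
classes are illustrative, not the actual Soledad signatures:

    # Before (circular): each side needed the other.
    #   crypto = SoledadCrypto(soledad)        # reached the secret through soledad
    #   secrets = SoledadSecrets(..., crypto)  # used crypto's *crypt_sym methods
    #
    # After: the secret is handed to SoledadCrypto directly, and
    # SoledadSecrets uses standalone encrypt_sym/decrypt_sym helpers
    # instead of going through SoledadCrypto.

    class SoledadCrypto(object):
        def __init__(self, secret):
            self._secret = secret  # no reference back to Soledad or its secrets


    class SoledadSecrets(object):
        def __init__(self, uuid, passphrase, secrets_path):
            # no SoledadCrypto parameter anymore
            self._uuid = uuid
            self._passphrase = passphrase
            self._secrets_path = secrets_path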
|
|
* change the close method name to stop
* add start/stop methods to both enc/dec classes (sketched below)
* remove any delayed calls on pool shutdown
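A minimal sketch of what start/stop looks like for such a pool, assuming
Twisted's reactor.callLater drives the periodic work (class and method
names are illustrative):

    from twisted.internet import reactor

    class EncDecPool(object):
        """Shared start/stop behaviour for the encrypter/decrypter pools."""

        def __init__(self, period=1):
            self._period = period
            self._delayed_call = None
            self.running = False

        def start(self):
            self.running = True
            self._schedule()

        def stop(self):  # formerly close()
            self.running = False
            # cancel any pending delayed call so nothing fires after shutdown
            if self._delayed_call is not None and self._delayed_call.active():
                self._delayed_call.cancel()
            self._delayed_call = None

        def _schedule(self):
            if self.running:
                self._delayed_call = reactor.callLater(self._period, self._do_work)

        def _do_work(self):
            # ... encrypt/decrypt one batch here ...
            self._schedule()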
|
|
|
|
The boolean operator must come before a line break, not after it,
according to pep8.
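For illustration only (the variable names are made up), the placement
both pep8 fixes enforce:

    sync_enabled, syncing = True, False

    # pep8-clean here: the boolean operator comes before the line break
    should_sync = (sync_enabled and
                   not syncing)

    # flagged by pep8: the line break comes before the binary operator
    # should_sync = (sync_enabled
    #                and not syncing)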
|
|
Because of how the incoming document queue is implemented, it could be the
case that a document was sent to the async decryption queue more than once.
This commit creates a list of documents to be decrypted, so we avoid
sending the same document to the queue more than once.
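A minimal sketch of the bookkeeping, with illustrative names; the real
queue and pool wiring is more involved:

    class IncomingDocs(object):
        """Avoid sending the same incoming document to decryption twice."""

        def __init__(self, decrypter_pool):
            self._pool = decrypter_pool
            self._docs_to_decrypt = set()  # ids already handed to the pool

        def enqueue(self, doc):
            if doc.doc_id in self._docs_to_decrypt:
                return  # already queued once, skip it
            self._docs_to_decrypt.add(doc.doc_id)
            self._pool.insert_encrypted_received_doc(doc)

        def done(self, doc_id):
            # called once the document was decrypted and processed
            self._docs_to_decrypt.discard(doc_id)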
|
|
|
|
The incoming documents events are meant to be used by a progress bar for
soledad sync, yet to be implemented. When deferred decryption was used, the
events were sent out of order, depending on the order of arrival of the
documents. This commit changes it so that the content of the emitted events
is in order, so it is meaningful for the implementation of a progress bar.
Note that even after documents are received from the server, they will still
be decrypted asynchronously, so another event could be added later to signal
completion of the decryption of incoming documents.
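A sketch of the ordering guarantee, assuming a (received, total) counter is
what the events carry; the emitter callable stands in for whatever signaling
mechanism is used:

    class ReceiveProgress(object):
        """Emit ordered progress events while documents are downloaded."""

        def __init__(self, total, emit):
            self._total = total
            self._received = 0
            self._emit = emit  # e.g. a function sending the soledad event

        def doc_received(self, doc_id):
            # the counter follows download order, not the (asynchronous)
            # decryption order, so event contents are monotonically increasing
            self._received += 1
            self._emit("%d/%d" % (self._received, self._total))

    # usage: ReceiveProgress(total=30, emit=some_signal) emits
    # "1/30", "2/30", ... as each document arrives.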
|
|
This is how a secret was stored in the secrets json file:
* each secret is symmetrically encrypted and MACed with keys derived from
the user's passphrase.
* the encrypted secrets dictionary is then MACed with another key derived
from the user's passphrase.
* each key is derived using scrypt and a unique random salt.
There are disadvantages to this approach:
* repeating scrypt many times is a waste of time.
* an attacker could crack whichever derivation has weaker parameters, if
they get out of sync.
* if an attacker can modify the secret in a way that it still decrypts the
database, then she can also modify the MAC.
The solution for this is:
* completely eliminate the MAC from the storage secrets file.
* attempt to decrypt the database with whatever results from the decryption
of the secret. If that is wrong, report an error (sketched below).
Closes #6980.
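A sketch of where the error now gets reported, assuming a sqlcipher DB-API
driver like pysqlcipher; the pragma form, query and message are illustrative:

    from pysqlcipher import dbapi2

    def open_with_secret(db_path, secret):
        """Open the sqlcipher database with the decrypted storage secret.

        With the MAC gone from the secrets file, this is where a wrong or
        tampered secret shows up: sqlcipher simply cannot read the file.
        """
        conn = dbapi2.connect(db_path)
        conn.execute("PRAGMA key = '%s'" % secret)  # passphrase form, for illustration
        try:
            conn.execute("SELECT count(*) FROM sqlite_master")
        except dbapi2.DatabaseError:
            raise RuntimeError("could not decrypt database with stored secret")
        return conn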
|
|
resulting from the previous pep8 cleanup
|
|
|
|
|
|
|
|
to make all CIs happy :)
|
|
Deferred encryption was disabled because the soledad u1db wrapper for adbapi
did not correctly update the parameter that controls it. Also, it did not
contain the encrypter pool. This commit moves the sync db and encrypt pool to
the main api, so they can be passed to the wrapper and deferred encryption
can work.
|
|
It makes the code simpler and clearer to use a deferred instead of
having to poll 'has_finished'.
- Related: #7234
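A sketch of the difference, with hypothetical names:

    from twisted.internet import defer

    class DocumentDownloader(object):
        def __init__(self):
            self._finished = defer.Deferred()

        def when_finished(self):
            # callers chain on this deferred instead of polling has_finished()
            return self._finished

        def _finish(self, result):
            self._finished.callback(result)

    # before:  while not downloader.has_finished(): time.sleep(0.1)
    # after:   downloader.when_finished().addCallback(lambda _: continue_sync())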
|
|
HTTP client cached connections will hang around in the reactor if they are
not properly cleaned up, and might cause a "reactor unclean" message on
shutdown. This commit adds a close() method to the client http target that
will clean up those connections.
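A sketch of what close() does, assuming Twisted's persistent
HTTPConnectionPool is what keeps the connections around (the target class
name here is illustrative):

    from twisted.internet import reactor
    from twisted.web.client import Agent, HTTPConnectionPool

    class HTTPDocTarget(object):
        def __init__(self):
            self._pool = HTTPConnectionPool(reactor, persistent=True)
            self._agent = Agent(reactor, pool=self._pool)

        def close(self):
            # returns a deferred that fires once all cached persistent
            # connections are closed, leaving the reactor clean on shutdown
            return self._pool.closeCachedConnections()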
|
|
after suggestions in the review
|
|
|
|
|
|
|
|
|
|
implementing a generic plugin interface to allow other modules to react
to soledad syncs, receiving a list of document ids that they've
subscribed to.
- Resolves: #6996
- Releases: 0.7.1
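A rough sketch of what such a plugin interface can look like with
zope.interface; the names here are illustrative, not necessarily the ones
the module exposes:

    from zope.interface import Interface, Attribute

    class ISoledadPostSyncPlugin(Interface):
        """Implemented by modules that want to react to soledad syncs."""

        watched_doc_types = Attribute(
            "document type prefixes this plugin subscribes to")

        def process_received_docs(doc_ids):
            """Called after a sync with the ids of the received documents
            matching the subscribed types."""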
|
|
|
|
When async decrypting, we want to finish as fast as possible. When
encrypting, though, we are not in such a rush. With an encryption loop
period of 2 seconds, we are able to encrypt 30 documents in one minute (the
current bitmask client sync period), which is meaningful: it should use the
processor moderately while not syncing and relieve it of some work when
actually syncing.
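A sketch of the loop with Twisted's LoopingCall; the two-second period is
from the text above, the rest is illustrative:

    from twisted.internet import task

    ENC_LOOP_PERIOD = 2  # seconds between encryption rounds

    def encrypt_next_doc():
        # take at most one queued document and encrypt it
        pass

    loop = task.LoopingCall(encrypt_next_doc)
    loop.start(ENC_LOOP_PERIOD)  # 60 s / 2 s = 30 documents per minute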
|
|
Previous to this change, the actual encryption method used to run on its own
thread. When the close method was called from another thread, the queue could
be deleted after the encryption method loop had started, but before the queue
was checked for new items.
By removing that thread and moving the encryption loop to the reactor, that
race condition should disappear.
Closes: #7088.
|
|
Queue exceptions are not in the multiprocessing.Queue module, but in the
plain Queue module instead.
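For reference, the imports in question (Python 2 module names; in Python 3
the plain module is called queue):

    from multiprocessing import Queue  # the queue class
    from Queue import Empty            # the exception lives here, not in multiprocessing

    q = Queue()
    try:
        q.get(block=False)
    except Empty:
        pass  # nothing queued yet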
|
|
|
|
- Related: #6359
|
|
|
|
I tested that code and this can't happen. We need to iterate over the keys
first and only then 'del' the entries. The previous method raised:
RuntimeError: dictionary changed size during iteration
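The pattern, with a throwaway dict:

    docs = {"doc-1": True, "doc-2": False, "doc-3": True}

    # raises "RuntimeError: dictionary changed size during iteration":
    #   for doc_id in docs:
    #       if docs[doc_id]:
    #           del docs[doc_id]

    # iterate over a copy of the keys first, then delete:
    for doc_id in list(docs.keys()):
        if docs[doc_id]:
            del docs[doc_id]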
|
|
When handling this exception, Python got lost because the import was
incorrect. Queue.Empty comes from Queue, not from multiprocessing.Queue.
|
|
|
|
Instead of opening one TCP connection for each HTTP request, we want to reuse
connections. Also, we need to be able to verify SSL certificates. This commit
implements both features in the twisted http client sync.
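A sketch of both pieces with Twisted's Agent; the CA path and URL are
illustrative:

    from twisted.internet import reactor
    from twisted.internet.ssl import Certificate
    from twisted.web.client import Agent, BrowserLikePolicyForHTTPS, HTTPConnectionPool

    # persistent pool: requests to the same host reuse the TCP connection
    pool = HTTPConnectionPool(reactor, persistent=True)

    # verify the server certificate against the provider's CA certificate
    with open("/path/to/provider-ca.crt") as f:
        policy = BrowserLikePolicyForHTTPS(
            trustRoot=Certificate.loadPEM(f.read()))

    agent = Agent(reactor, contextFactory=policy, pool=pool)
    d = agent.request(b"GET", b"https://soledad.example.org:2323/")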
|
|
The whole idea of the encrypter/decrypter pool is to be able to use multiple
cores to allow parallel encryption/decryption. Previous to this commit, the
encryptor/decryptor pools could be configured to not use workers and instead
do encryption/decryption inline. That was meant for testing purposes and
defeated the purpose of the pools.
This commit removes the possibility of inline encrypting/decrypting when using
the pools. It also refactors the enc/dec pool code so any failures while using
the pool are correctly grabbed and raised to the top of the sync deferred
chain.
|
|
When we initialized the async decrypter pool in the target's init method we
needed a proxy to ensure we could update the insert doc callback with the
correct method later on. Now we initialize the decrypter only when we need it,
so we don't need this proxy anymore. This commit removes the unneeded proxy.
|
|
We have to make sure any failures in the asynchronous decryption code are
grabbed and properly transmitted up the deferred chain so they can be
logged. This commit adds errbacks in the decryption pool that grab any
failure, and a check on the http target that raises the failure if there is
one.
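A sketch of the two halves, with illustrative names:

    class DecrypterPool(object):
        def __init__(self):
            self._failure = None

        def _errback(self, failure):
            # grab any failure coming out of the async decryption deferreds
            self._failure = failure

        def decrypt_doc(self, deferred_work):
            deferred_work.addErrback(self._errback)
            return deferred_work

        def raise_if_async_decryption_failed(self):
            # called from the http target between sync steps; re-raising here
            # puts the failure back on the sync deferred chain, where it gets
            # logged
            if self._failure is not None:
                self._failure.raiseException()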
|
|
|
|
|
|
|
|
This commit does the following:
* Remove the autocreate parameter from the sync() method.
* Remove the syncing lock from the sync module because it did the same job
as the lock in the sqlcipher module.
* Remove the close/stop methods from sync module as they don't make sense
after we started to use twisted in client-side sync.
|
|
This change uses twisted deferreds for the whole syncing process and paves
the way to implementing other transport schemes. It removes a lot of
threaded code that used locks and was very difficult to maintain, and lets
twisted do the dirty work. Furthermore, all blocking network i/o is now
handled asynchronously by twisted.
This commit removes the possibility of interrupting a sync; we should
reimplement it using cancellable deferreds if we need it.
|
|
The access to the sync db was modified to use twisted.enterprise.adbapi, but
only the asynchronous decryption of incoming documents during sync was
adapted. This commit modifies the asynchronous encryption of documents to
also use the adbapi for accessing the sync db.
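A sketch of the adbapi usage this moves to; the table name and path are
illustrative:

    from twisted.enterprise import adbapi

    # one connection pool for the sync db, shared by the asynchronous
    # encryption and decryption code; queries return deferreds
    dbpool = adbapi.ConnectionPool(
        "pysqlcipher.dbapi2", "/path/to/soledad/sync.db",
        check_same_thread=False)

    def store_doc_for_encryption(doc_id, rev, content):
        return dbpool.runOperation(
            "INSERT OR REPLACE INTO docs_tosync VALUES (?, ?, ?)",
            (doc_id, rev, content))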
|