Age | Commit message | Author |
|
* change close method name to stop
* add start/stop methods to both enc/dec classes
* remove any delayed calls on pool shutdown
|
|
|
|
generate_wheels uses $WHEELHOUSE to generate and store the wheels for
requirements.pip and requirements-testing.pip (if it exists).
pip_install_requirements.sh installs requirements.pip from them if
possible (otherwise it fetches them from PyPI) or, if passed the
--testing flag, it installs requirements-testing.pip.
Related: #7327
|
|
requirements-latest.pip will try to clone and install. Since it is meant
to be latest, I added a small change to specify the branch 'develop'.
|
|
The boolean operator must come before a line break, not after,
according to PEP 8.
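A minimal illustration of the style this refers to (the names are made up
for the example)::

    def can_decrypt(document, deferred_decryption_enabled):
        # The boolean operator ends the line; the line break comes after it.
        return (document is not None and
                document.get('content') is not None and
                deferred_decryption_enabled)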
|
|
With this, you can set up soledad for local use and run the tests
against the latest head in a simpler way.
|
|
|
|
|
|
- update pip
- install base reqs, with insecure flags for dirspec and u1db
|
|
Because of how the incoming document queue is implemented, it could be the
case that a document was sent to the async decryption queue more than once.
This commit creates a list of documents to be decrypted, so we avoid sending
the same document to the queue more than once.
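A minimal sketch of that deduplication, with hypothetical names::

    class IncomingQueueSketch(object):
        """Keep track of which document ids are already scheduled for
        decryption, so a document is never enqueued twice."""

        def __init__(self):
            self._to_decrypt = []      # the list mentioned above
            self._scheduled = set()

        def enqueue(self, doc_id, content):
            if doc_id in self._scheduled:
                return False           # already waiting for decryption
            self._scheduled.add(doc_id)
            self._to_decrypt.append((doc_id, content))
            return True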
|
|
|
|
The incoming documents events are meant to be used by a progress bar for
soledad sync, yet to be implemented. When deferred decryption was used, the
events were sent out of order, depending on the order of arrival of the
documents. This commit changes that so the contents of the emitted events are
in order, which makes them meaningful for the implementation of a progress bar.
Note that even after documents are received from the server, they will still
be decrypted asynchronously, so another signal could be implemented to indicate
when the decryption of incoming documents has finished.
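A rough sketch of emitting the events in order, assuming a hypothetical
emit_event helper::

    def emit_ordered_events(emit_event, total, arrived):
        # 'arrived' maps the position assigned by the server to a doc id;
        # documents may arrive in any order, but the events are emitted
        # with monotonically increasing contents.
        for position in sorted(arrived):
            emit_event("document-received",
                       {"received": position, "total": total})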
|
|
This is how a secret was stored in the secrets json file:
* each secret is symmetrically encrypted and MACed with keys derived from
  the user's passphrase.
* the encrypted secrets dictionary is then MACed with another key derived
  from the user's passphrase.
* each key is derived using scrypt and a unique random salt.
There are disadvantages to this approach:
* repeating scrypt many times is a waste of time.
* an attacker could crack whichever has weaker parameters, if they get out
  of sync.
* if an attacker can modify the secret in a way that still decrypts the
  database, then she can also modify the MAC.
The solution for this is:
* completely eliminate the MAC from the storage secrets file.
* attempt to decrypt the database with whatever comes out of the decryption
  of the secret (sketched below). If that is wrong, report an error.
Closes #6980.
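A minimal sketch of the resulting scheme, assuming the scrypt package and a
hypothetical symmetric decryption helper::

    import scrypt

    def derive_key(passphrase, salt, length=32):
        # A single scrypt derivation; the extra MAC key derivation is gone.
        return scrypt.hash(passphrase, salt, buflen=length)

    def unlock_secret(passphrase, salt, encrypted_secret, decrypt_fn):
        # No MAC check: just decrypt the secret and let the caller try to
        # open the database with it, reporting an error if that fails.
        key = derive_key(passphrase, salt)
        return decrypt_fn(key, encrypted_secret)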
|
|
resulting from the previous pep8 cleanup
|
|
|
|
|
|
|
|
to make all CIs happy :)
|
|
Deferred encryption was disabled because the soledad u1db wrapper for adbapi
did not correctly update the parameter that controls it. Also, it did not
contain the encrypter pool. This commit moves the sync db and encrypt pool to
the main api, so they can be passed to the wrapper and deferred encryption
can work.
|
|
this is part of a process to make the setup of the development mode less
troublesome. from now on, setting up a virtualenv in pure development
mode will be as easy as telling pip to just install the external dependencies::
pip install -r pkg/requirements.pip
and traversing all the leap repos for the needed leap dependencies doing::
python setup.py develop
- Related: #7288
|
|
It makes the code simpler and clearer to use a deferred instead of
having to poll 'has_finished'.
- Related: #7234
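A small sketch of the deferred-based approach described above, with
hypothetical names::

    from twisted.internet import defer

    class PoolSketch(object):
        """Completion is exposed as a Deferred instead of a
        'has_finished' flag that callers would have to poll."""

        def __init__(self):
            self._pending = 0
            self._finished = defer.Deferred()

        def add_work(self):
            self._pending += 1

        def work_done(self):
            self._pending -= 1
            if self._pending == 0 and not self._finished.called:
                self._finished.callback(None)

        def deferred(self):
            # Callers chain callbacks here instead of polling.
            return self._finished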
|
|
bump leap.common min required version; the new change needs
'collect_plugins'.
|
|
|
|
HTTP client cached connections will hang around in the reactor if they are not
properly cleaned up, and might trigger a "reactor unclean" message on shutdown.
This commit adds a close() method to the client http target that cleans up
those connections.
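Assuming the target keeps a persistent HTTPConnectionPool for its Agent, the
cleanup can look roughly like this::

    from twisted.internet import reactor
    from twisted.web.client import Agent, HTTPConnectionPool

    class TargetSketch(object):
        def __init__(self):
            self._pool = HTTPConnectionPool(reactor, persistent=True)
            self._agent = Agent(reactor, pool=self._pool)

        def close(self):
            # Returns a Deferred that fires once all cached persistent
            # connections are closed, avoiding "unclean reactor" errors.
            return self._pool.closeCachedConnections()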
|
|
after suggestions in the review
|
|
|
|
|
|
|
|
|
|
|
|
implementing a generic plugin interface to allow other modules to react
to soledad syncs, receiving a list of document ids that they've
subscribed to.
- Resolves: #6996
- Releases: 0.7.1
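A rough sketch of such a plugin, using zope.interface; the interface and
attribute names are illustrative, not necessarily the ones used in the code::

    from zope.interface import Interface, implementer

    class ISoledadPostSyncPlugin(Interface):
        def process_received_docs(doc_ids):
            """React to a finished sync, given the received document ids."""

    @implementer(ISoledadPostSyncPlugin)
    class LoggingPlugin(object):
        watched_doc_types = ("keys",)

        def process_received_docs(self, doc_ids):
            print("sync delivered: %s" % (doc_ids,))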
|
|
|
|
Tag version 0.7.0.
Conflicts:
client/pkg/requirements.pip
common/pkg/requirements.pip
|
|
When async decrypting, we want to finish as fast as possible. When encrypting,
though, we are not in such a rush. With an encryption loop period of 2
seconds, we're able to encrypt 30 documents in one minute (the current bitmask
client sync period), which is meaningful: it should use the processor
moderately while not syncing and be relieved of some work when actually
syncing.
|
|
Previous to this change, the actual encryption method used to run on its own
thread. When the close method was called from another thread, the queue could
be deleted after the encryption method loop had started, but before the queue
was checked for new items.
By removing that thread and moving the encryption loop to the reactor, that
race condition should disappear.
Closes: #7088.
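A rough sketch of the reactor-based loop with twisted's LoopingCall (names
are hypothetical)::

    from twisted.internet import task

    class EncrypterSketch(object):
        ENCRYPT_LOOP_PERIOD = 2  # seconds

        def __init__(self):
            self._queue = []
            self._loop = task.LoopingCall(self._process_queue)

        def start(self):
            self._loop.start(self.ENCRYPT_LOOP_PERIOD)

        def stop(self):
            # Both stop() and _process_queue() run in the reactor thread,
            # so the queue cannot vanish mid-iteration.
            if self._loop.running:
                self._loop.stop()
            self._queue = []

        def _process_queue(self):
            while self._queue:
                doc = self._queue.pop(0)
                # encryption of 'doc' omitted in this sketch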
|
|
Queue exceptions are not in the multiprocessing.Queue module, but in the plain
Queue module instead.
|
|
|
|
|
|
|
|
|
|
- Related: #6359
|
|
|
|
I tested that code and this can't happen. We need to iterate over the keys
first and only then 'del' the entries. The previous method raised:
RuntimeError: dictionary changed size during iteration
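For illustration, the failing pattern and the fix::

    docs = {"doc-1": "payload", "doc-2": "payload"}

    # Deleting while iterating the dict directly raises:
    #   RuntimeError: dictionary changed size during iteration
    #
    # for doc_id in docs:
    #     del docs[doc_id]

    # Iterating over a snapshot of the keys first, then deleting, works:
    for doc_id in list(docs.keys()):
        del docs[doc_id]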
|
|
When handling this exception, Python got lost because the import was
incorrect: Queue.Empty comes from Queue, not from multiprocessing.Queue.
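On Python 2 (as used at the time), the correct import looks like this::

    # The Empty exception lives in the plain Queue module, even when the
    # queue object itself is a multiprocessing.Queue.
    from Queue import Empty
    from multiprocessing import Queue

    q = Queue()
    try:
        item = q.get(block=False)
    except Empty:
        item = None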
|
|
|
|
Instead of opening one TCP connection for each HTTP request, we want to reuse
connections. Also, we need to be able to verify SSL certificates. This commit
implements both features in the twisted http client sync.
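A minimal sketch of those two features with twisted.web.client; soledad
actually validates against the provider's CA certificate, which this sketch
does not reproduce::

    from twisted.internet import reactor
    from twisted.web.client import (
        Agent, BrowserLikePolicyForHTTPS, HTTPConnectionPool, readBody)

    # Persistent pool: TCP connections are kept open and reused.
    pool = HTTPConnectionPool(reactor, persistent=True)

    # BrowserLikePolicyForHTTPS verifies server certificates.
    agent = Agent(reactor, contextFactory=BrowserLikePolicyForHTTPS(),
                  pool=pool)

    d = agent.request(b"GET", b"https://example.com/")
    d.addCallback(readBody)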
|
|
The whole idea of the encrypter/decrypter pool is to be able to use multiple
cores to allow parallel encryption/decryption. Previous to this commit, the
encryptor/decryptor pools could be configured to not use workers and instead
do encryption/decryption inline. That was meant for testing purposes and
defeated the purpose of the pools.
This commit removes the possibility of inline encrypting/decrypting when using
the pools. It also refactors the enc/dec pool code so any failures while using
the pool are correctly grabbed and raised to the top of the sync deferred
chain.
|
|
When we initialized the async decrypter pool in the target's init method we
needed a proxy to ensure we could update the insert doc callback with the
correct method later on. Now we initialize the decrypter only when we need it,
so we don't need this proxy anymore. This commit removes the unneeded proxy.
|
|
We have to make sure any failures in the asynchronous decryption code are
caught and properly transmitted up the deferred chain so they can be logged.
This commit adds errbacks in the decryption pool that grab any failure, and a
check on the http target that raises the failure if that is the case.
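A sketch of the errback/check pattern, with hypothetical names::

    from twisted.internet import defer

    class DecrypterPoolSketch(object):
        def __init__(self):
            self._failure = None

        def _errback(self, failure):
            # Grab any failure from the asynchronous decryption deferreds.
            self._failure = failure

        def decrypt(self, doc, decrypt_fn):
            d = defer.maybeDeferred(decrypt_fn, doc)
            d.addErrback(self._errback)
            return d

        def check_errors(self):
            # Called from the http target: re-raise so the failure travels
            # up the sync deferred chain and gets logged.
            if self._failure is not None:
                self._failure.raiseException()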
|
|
|