Age | Commit message | Author |
|
- Releases: 0.8.0
|
|
ensure_ddocs is a privileged operation. The code was defaulting to True,
which caused unprivileged code to fail. This commit changes the default to
False, forcing callers to check their privileges and pass the argument
explicitly, so that design docs are only ensured from privileged code
paths.
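A minimal sketch of the new calling convention (class and argument names
follow the message above, but the body is an assumption, not the exact
soledad code):

    class CouchDatabase(object):
        def __init__(self, url, dbname, ensure_ddocs=False):  # default was True
            self.url = url
            self.dbname = dbname
            if ensure_ddocs:
                self.ensure_ddocs()  # privileged operation, now opt-in only

        def ensure_ddocs(self):
            print("creating design documents (requires admin privileges)")

    # unprivileged code keeps working without touching design docs:
    db = CouchDatabase("http://localhost:5984", "user-db")
    # privileged code must now opt in explicitly:
    admin_db = CouchDatabase("http://localhost:5984", "user-db", ensure_ddocs=True)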
|
|
|
|
- Resolves: #7509
|
|
Wheezy is still at python-couchdb 0.8 and is still supported.
This commit changes all necessary calls from python-couchdb 1.0 back to
python-couchdb 0.8. We can migrate back to the simpler python-couchdb 1.0
implementation when support for Wheezy is dropped.
|
|
Wheezy has python-couchdb 0.8 and python-beaker 1.6.3.
Pinning them to avoid false positives in tests.
|
|
This is a workaround to reduce the chances of a failed sync due to
timeouts. It should be properly tackled by:
1. implementing a proper cancellable sync operation;
2. implementing a retry count at the level of a single request, handled
internally by soledad.
That way we can remove the retry logic from the soledad bootstrapper in
the bitmask client.
- Related: #7382
|
|
- Related: #7503
|
|
- Releases: 0.8.0
|
|
The netrc file path was hardcoded inside create-user-db. Now the script
reads the path from /etc/leap/soledad-server.conf, as the server process
already does. The new configuration property is called 'admin_netrc'.
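A hedged sketch of how the script could read the new option (the section
name and the parsing code are assumptions; only the file path and the
'admin_netrc' property come from this message):

    from ConfigParser import ConfigParser  # Python 2, as used by soledad server

    parser = ConfigParser()
    parser.read('/etc/leap/soledad-server.conf')
    # hypothetical section name; the real config layout may differ
    netrc_path = parser.get('soledad-server', 'admin_netrc')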
|
|
|
|
|
|
- Releases: 0.8.0
|
|
|
|
|
|
Those hardcoded mocks are leaking into other tests and are unnecessary.
|
|
Updated the README with information about the latest change, added missing
docs and licenses, and fixed variable naming and pep8 issues.
|
|
This error is raised while getting info on the target (or server) replica.
In the previous implementation there was nothing to do here, but now that
we have db creation in place this error should be handled just like in the
original u1db implementation.
The reason is that db creation occurs during the sync exchange, but before
that we ask for info, and the code that checks for info raises an error
used to signal the client that a database will be created and that it must
save the replica uid returned by the server after the sync exchange (where
we send/fetch documents).
|
|
As the other tests do. Make sure that a fresh database gets a proper
security doc after calling the ensure_security method.
|
|
Besides ensuring ddocs, it is also necessary to ensure the presence of the
_security doc when creating a database.
This document tells CouchDB to grant access to the 'soledad' user as a
member and to grant admin access to no one.
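A hedged sketch of what that _security document could look like (the
structure is CouchDB's standard _security format; the exact names and
roles are assumptions based on the description above):

    security_doc = {
        'members': {'names': ['soledad'], 'roles': []},  # only 'soledad' may access
        'admins':  {'names': [], 'roles': []},           # admin access granted to no one
    }
    # this dict would be PUT to the database's /_security endpoint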
|
|
Added a simple script for user db and design doc creation.
It uses a netrc file from /etc/couchdb/couchdb-admin.netrc and the same
database name validator used in couch.py.
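A hedged sketch of how such a script might read the admin credentials (the
netrc path comes from this message; function and variable names are
assumptions):

    import netrc

    NETRC_PATH = '/etc/couchdb/couchdb-admin.netrc'

    def get_admin_credentials(path=NETRC_PATH):
        entries = netrc.netrc(path)
        # take the first (and typically only) host entry
        host, (login, _, password) = list(entries.hosts.items())[0]
        return host, login, password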
|
|
ensure_database needs to return a db and its replica_uid. Updated tests,
docs and code to reflect that.
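A small usage sketch of the new return value (the server-state class here
is a stand-in, not the real CouchServerState):

    import uuid

    class DummyServerState(object):
        def ensure_database(self, dbname):
            db = {'name': dbname}            # placeholder for the real db object
            replica_uid = uuid.uuid4().hex   # placeholder for the db's replica uid
            return db, replica_uid

    db, replica_uid = DummyServerState().ensure_database('user-1234')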
|
|
We can now use a custom script to create databases by setting the
'create_cmd' parameter in the soledad configuration.
CouchServerState will then use it in ensure_database.
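A hedged sketch of the wiring (attribute names and the example command are
assumptions; only the 'create_cmd' parameter name comes from this
message):

    import subprocess

    class CouchServerState(object):
        def __init__(self, couch_url, create_cmd=None):
            self.couch_url = couch_url
            self.create_cmd = create_cmd

        def ensure_database(self, dbname):
            if self.create_cmd:
                # shell out to the configured command to create the missing db
                subprocess.check_call(self.create_cmd.split(' ') + [dbname])
            # ...then open the (now existing) database as before

    # the command itself would come from the soledad configuration, e.g.:
    # create_cmd = sudo -u soledad-admin /usr/bin/create-user-db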
|
|
Tests that Unauthorized is raised in any failure scenario, leaving the
user without hints about what happened during execution. This should lower
the chances of information disclosure on execution failure.
|
|
If CouchServerState is created with a create_cmd parameter, it can now
use this parameter to invoke a command that creates databases. A database
name validator is also used to ensure that command injection is not
possible if a user manages to manipulate the database name argument.
|
|
Checks that argument validation occurs properly and that command execution
returns the status code and stdout or stderr in the relevant scenarios.
|
|
This commit adds a way to validate and execute commands using an
argument validator. Commands are executed via subprocess.
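A hedged sketch of the idea (function and parameter names are assumptions,
not the exact soledad API): each argument is checked by a validator before
the command is run through subprocess.

    import re
    import subprocess

    def exec_validated_cmd(cmd, argument, validator=None):
        """Run `cmd argument`, but only if the argument passes validation."""
        if validator and not validator(argument):
            return 1, 'invalid argument'
        process = subprocess.Popen(
            cmd.split(' ') + [argument],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = process.communicate()
        return process.returncode, out if process.returncode == 0 else err

    # example validator: only accept well-formed user db names
    is_user_db = lambda name: re.match(r'^user-[a-f0-9]+$', name) is not None
    print(exec_validated_cmd('echo', 'user-1234abcd', validator=is_user_db))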
|
|
- Releases: 0.8.0
|
|
|
|
As meskio commented, setting this attribute directly is ugly;
CouchDatabase now has an init_caching method for setting up the cache
instance.
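A minimal sketch of the new initializer (the method body is an
assumption):

    class CouchDatabase(object):
        def __init__(self):
            self.cache = None

        def init_caching(self, cache):
            """Set up the cache instance used by this database."""
            self.cache = cache

    db = CouchDatabase()
    db.init_caching({})  # sync.py would pass the real cache object here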
|
|
We use CouchDB with single-doc reads and writes. Following this
documentation about performance, we should gain performance by allowing
CouchDB to delay commits.
See: http://guide.couchdb.org/draft/performance.html#single
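Whether soledad uses this exact mechanism is an assumption; one standard
way to let CouchDB 1.x delay the fsync on a single write is the
X-Couch-Full-Commit request header (the server-wide equivalent is the
delayed_commits option in CouchDB's ini config), roughly:

    import requests  # illustrative only; soledad itself uses python-couchdb

    requests.put(
        'http://localhost:5984/user-db/doc-1',
        json={'payload': 'example'},
        headers={'X-Couch-Full-Commit': 'false'})  # delay the commit, flush later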
|
|
Now each backend object is retrieved from the cache for sync.py, and
values live for 3600 seconds by default. That can be changed via a
parameter if needed.
|
|
Before this change, we used a complicated update handler for storing the
sync state on the couchdb backend. That update handler was an attempt to
make couchdb take care of some validation for updates to the sync log
during the sync exchange, mainly to allow concurrent insertion of received
documents during a sync.
Right now we rely on the remote sending one document at a time and do not
support concurrent insertions in the remote couch-backed database. Because
of that, the code removed by this commit was unneeded. Moreover, it was a
bottleneck of the sync process, because we were writing to a single file
and using unnecessary couch design doc processing for it. So this commit
both simplifies the storage of the remote sync state and removes a
bottleneck of the sync process.
Conflicts:
common/src/leap/soledad/common/couch.py
common/src/leap/soledad/common/tests/test_couch.py
|
|
The CouchDB backend implementation was accessing CouchDB too many times
for the same values. Those values are known inside the same sync_id, which
is the id of the current sync session.
This commit adds caching for all redundant calls to Couch inside the same
sync_id for each replica.
Refactoring is still needed, but for now couch.py works normally as if
caching were not present, while sync.py injects the cache as an attribute
to enable it. This needs a simpler implementation.
|
|
There are two functions in couch.py used to save and retrieve the last
known gen and trans id for the syncing replica. The get function is called
very often, but the value is only set at one point. Added simple caching
to avoid querying couch for a value we already have.
If the cache is empty, it just queries as usual and fills it.
|
|
Python has a native ThreadPool implementation that fits our needs.
Changed the code to use it instead, making some calls simpler.
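For reference, a minimal example of the stdlib thread pool referred to
above:

    from multiprocessing.pool import ThreadPool

    pool = ThreadPool(processes=4)
    results = pool.map(lambda x: x * x, range(10))  # runs in worker threads
    pool.close()
    pool.join()
    print(results)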
|
|
_put_doc_if_newer is already implemented on CommonBackend. It was copied
over to CouchBackend just to add the conflict-ensuring step. We can do
that before calling the super method instead.
|
|
This commit changes sync_state to be in memory, with all tests passing.
The memory variable for now is a dict with each key composed of
source_replica_uid and sync_id, replicating the CouchDB implementation.
Next steps include migrating this to Beaker and refactoring/cleaning up
the code.
Changed the module's INFO dict to use Beaker's caching and adapted methods
to get and save from it. It still needs refactoring; all tests pass.
Beaker now uses memory as the default backend. It is configurable, but we
are not exposing that configuration yet, for security; we need to check
what can be misconfigured first.
We are not sure whether Beaker will be the definitive solution for server
side caching, so this change isolates it with more granularity.
To replace it, just change get_cache_for to return the proper caching
object from another implementation. This caching object is supposed to
behave like a dict.
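A hedged sketch of what get_cache_for could look like on top of Beaker
(module layout and the exact key scheme are assumptions; the dict-like
usage mirrors the description above):

    from beaker.cache import CacheManager

    _manager = CacheManager(type='memory')

    def get_cache_for(key, expire=3600):
        """Return a dict-like Beaker cache namespace for the given key."""
        return _manager.get_cache(key, expire=expire)

    # keyed by source_replica_uid + sync_id, as described above
    cache = get_cache_for('source-replica-uid' + 'sync-id')
    cache['seen_ids'] = {}
    print(cache['seen_ids'])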
|
|
Soledad server will use Beaker as cache provider, starting with
sync_state being in memory.
|
|
|
|
- bug: we were dumping the received secrets locally to disk *before*
setting the received property for the active secret, and therefore
'active_secret' was always marked as null.
- refactor common code into a utility method.
|
|
|
|
This tests the previous fix by ensuring a db that is missing a design doc
other than 'docs'.
|
|
This code only checked for the presence of 'docs', while we have 3 design
documents. If one of them is missing but 'docs' is not, the others will
not be ensured.
This fix is needed to properly ensure ddocs in the database creation
command line script.
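A hedged sketch of the check (the three design doc names follow soledad's
couch backend but should be treated as assumptions here):

    DESIGN_DOCS = ['docs', 'syncs', 'transactions']

    def missing_design_docs(db):
        """Return the design docs that still need to be ensured in `db`."""
        return [name for name in DESIGN_DOCS if '_design/%s' % name not in db]

    # with a dict standing in for a couch db that only has 'docs':
    print(missing_design_docs({'_design/docs': {}}))  # -> ['syncs', 'transactions']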
|
|
|
|
|
|
|
|
|
|
This was used during db isolation to make sure that everything created
was destroyed, but it fails with -j (multiprocess). Removing it allows
parallelism.
|
|
We are getting "too many open files" errors while running tests with a
limit of 1024 open files. This is caused by leaks in close methods. Some
of them are fixed in this commit, but further investigation may be
necessary.
|