path: root/src/leap/soledad/client
Age  Commit message  Author
2017-10-27  [refactor] use semaphore.run instead of acquire/release  (Victor Shyba)
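A minimal sketch of the difference, using Twisted's DeferredSemaphore and a hypothetical _transfer() call (not the actual Soledad code):

    from twisted.internet import defer

    semaphore = defer.DeferredSemaphore(3)  # illustrative limit

    # before: manual acquire/release, where the release is easy to miss on error paths
    @defer.inlineCallbacks
    def transfer_manual(blob_id):
        yield semaphore.acquire()
        try:
            yield _transfer(blob_id)  # hypothetical transfer call
        finally:
            semaphore.release()

    # after: semaphore.run acquires, calls the function, and always releases
    def transfer(blob_id):
        return semaphore.run(_transfer, blob_id)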
2017-10-27  [refactor] add a table for sync_status  (Victor Shyba)
As defined in #8970, this table and the new module will ease adding sync features such as priority queues and streaming. --Resolves: #8970
2017-10-27  [style] E722 do not use bare except  (Victor Shyba)
2017-10-27  [refactor] split blobs into modules  (Victor Shyba)
So we can have manager, sync, sql and errors each in its own place. --Related: #8970
2017-10-12  [bug] limit number of concurrent requests to local db  (drebs)
When running stress tests on blobs storage, we get weird errors when hundreds of requests are made concurrently to the sqlite backend. This commit adds a limit so only 10 requests will be delivered to the backend at a time.
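A minimal sketch of such a cap, assuming a Twisted DeferredSemaphore placed in front of a hypothetical adbapi connection pool (names are illustrative):

    from twisted.internet import defer

    _db_semaphore = defer.DeferredSemaphore(10)  # at most 10 requests reach the backend

    def run_db_query(dbpool, query, params=()):
        # dbpool is assumed to be a twisted.enterprise.adbapi.ConnectionPool
        return _db_semaphore.run(dbpool.runQuery, query, params)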
2017-10-05  [bug] fix retries for blobs download  (drebs)
- add a MaximumRetriesError exception to encapsulate other exceptions
- record the pending status before trying to download
- modify update_sync_status to insert or update
- modify retry tests to check number of retries
- add a test for download retry limit
2017-10-05  [bug] retry blob download for all retriable errors  (drebs)
Because the exception catching was done inside _download_and_decrypt() and only accounted for InvalidBlob exceptions, not all retriable errors would lead to an actual retry. This commit moves the exception handling one level up and catches any kind of exception, as is done in the upload part, allowing a retry on all retriable errors.
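A rough sketch of that pattern, with the retry loop one level above _download_and_decrypt() catching any exception (the loop shape and retry count are assumptions, not the actual implementation):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def download_with_retries(blob_id, max_retries=3):
        for attempt in range(1, max_retries + 1):
            try:
                result = yield _download_and_decrypt(blob_id)  # may raise any retriable error
            except Exception as e:
                if attempt == max_retries:
                    raise MaximumRetriesError(e)  # wraps the underlying failure
            else:
                defer.returnValue(result)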
2017-10-05  [bug] improve error message on blob download error  (drebs)
The previous error message had some problems:
- the connection should not be a problem, as this is going over TCP. If the HTTP request was successful, there's no reason to think its contents could have been corrupted by a connection problem.
- I am not sure what the best communication strategy here is, but the real problem is either a bug or actual tampering, so I make this explicit.
- a problem like this should always be reported, not only when it persists.
2017-10-05  [bug] log the exception on blob download error  (drebs)
2017-10-05  [bug] don't use hardcoded number of retries when downloading blobs  (drebs)
2017-10-05  [style] use python3 compatible try/except syntax  (drebs)
2017-10-05  [bug] ensure maximum concurrency on blobs transfer  (drebs)
The concurrency limit was being enforced in a way that spawned transfer attempts in groups of 3, and all of them had to finish before a new group could be spawned. This modification allows the maximum concurrency level to be used at all times.
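A sketch of the difference, using Twisted primitives and a hypothetical _transfer() call (illustrative only):

    from twisted.internet import defer

    # before: fixed groups of 3; a whole group had to finish before the next started
    @defer.inlineCallbacks
    def transfer_in_groups(blob_ids, group_size=3):
        for i in range(0, len(blob_ids), group_size):
            group = blob_ids[i:i + group_size]
            yield defer.gatherResults([_transfer(b) for b in group])

    # after: a semaphore keeps up to `limit` transfers in flight at all times
    def transfer_all(blob_ids, limit=3):
        sem = defer.DeferredSemaphore(limit)
        return defer.gatherResults([sem.run(_transfer, b) for b in blob_ids])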
2017-10-05  [style] param is not keyword argument  (drebs)
2017-10-05  [style] name keyword arguments in function calls  (drebs)
2017-10-05  [style] rename exception to match standards  (drebs)
We have been using "Error" instead of "Exception" in exception names, so this commit is only enforcing an unwritten policy.
2017-10-05  [bug] set as PROCESSING during processing flow  (Victor Shyba)
It was previously being set to PROCESSED. Also added some tests to check whether the underlying wrapped calls match the intent. -- Resolves: #8955
2017-10-05  [feature] notify, retry and fail from invalid tag  (Victor Shyba)
Notify, log something meaningful and retry at most 3 times before marking the download as unusable (FAILED_DOWNLOAD). -- Related: #8825
2017-10-05  [feature] retry during upload + proper wait  (Victor Shyba)
Added retry to upload and modified the retry implementation to comply with the discussed spec. According to it, we should wait between retries: something like 1s, 10s, and so on, up to 1 minute. -- Resolves: #8822
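A sketch of such a waiting scheme (the exact delays, retry count and helper names here are assumptions based on the description above, not the actual values used):

    from twisted.internet import defer, reactor, task

    def _backoff(attempt, cap=60):
        return min(10 ** attempt, cap)  # 1s, 10s, then capped at 60s

    @defer.inlineCallbacks
    def upload_with_retries(blob_id, max_retries=3):
        last_error = None
        for attempt in range(max_retries):
            try:
                result = yield _upload(blob_id)  # hypothetical upload call
            except Exception as e:
                last_error = e
                yield task.deferLater(reactor, _backoff(attempt), lambda: None)
            else:
                defer.returnValue(result)
        raise MaximumRetriesError(last_error)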
2017-10-05  [feature] retry during download  (Victor Shyba)
-- Related: #8822
2017-10-05  [feature] send/fetch missing using local statuses  (Victor Shyba)
Instead of querying the server, fetch_missing and send_missing now use the PENDING_DOWNLOAD and PENDING_UPLOAD statuses to decide what to do. This allows the sync mechanism to control when/how to query data from the server and to reuse the query results during the sync. -- Related: #8822
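A minimal sketch of the idea, with table, column and helper names that are purely illustrative:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def fetch_missing(dbpool):
        rows = yield dbpool.runQuery(
            "SELECT blob_id FROM sync_state WHERE sync_status = ?",
            ("PENDING_DOWNLOAD",))
        for (blob_id,) in rows:
            yield download(blob_id)  # hypothetical download helper

    @defer.inlineCallbacks
    def send_missing(dbpool):
        rows = yield dbpool.runQuery(
            "SELECT blob_id FROM sync_state WHERE sync_status = ?",
            ("PENDING_UPLOAD",))
        for (blob_id,) in rows:
            yield upload(blob_id)  # hypothetical upload helper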
2017-10-05  [feature] blob get/put handle unavailable statuses  (Victor Shyba)
PENDING_DOWNLOAD is an empty blob, so during blob_manager.get we need to return empty, as it's not available. This status is used during sync. During put, if we have an empty unavailable blob, we delete it and replace it with what is being put, marking it as SYNCED. -- Related: #8822
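A sketch of that behavior (status constants and storage helpers are hypothetical):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def get(blob_id):
        status = yield get_sync_status(blob_id)   # hypothetical helper
        if status == PENDING_DOWNLOAD:
            defer.returnValue(None)               # placeholder row, content not available yet
        blob = yield read_local_blob(blob_id)     # hypothetical helper
        defer.returnValue(blob)

    @defer.inlineCallbacks
    def put(blob_id, content):
        status = yield get_sync_status(blob_id)
        if status == PENDING_DOWNLOAD:
            yield delete_local_blob(blob_id)      # drop the empty placeholder
        yield write_local_blob(blob_id, content)
        yield set_sync_status(blob_id, SYNCED)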
2017-10-05  [feature] persist pending_download remote listing  (Victor Shyba)
-- Related: #8822
2017-10-05  [feature] filter out unavailable blobs on listing  (Victor Shyba)
-- Related: #8822
2017-10-05  [feature] concurrent blob download/upload  (Victor Shyba)
-- Related: #8932
2017-09-30  [bug] fix argument passing in blobs queries  (drebs)
2017-09-29  [refactor] make parameters of blobmanager methods explicit  (drebs)
2017-09-29  [bug] check all http response status codes  (drebs)
2017-09-29  [bug] raise when trying to get flags of a nonexistent blob  (drebs)
2017-09-28  [doc] add api reference to the docs  (drebs)
2017-09-13  [refactor] remove dead code and improve naming  (Victor Shyba)
From code review. -- Related: #8945
2017-09-11  [bug] use sql file handler from adbapi threadpool  (Victor Shyba)
This commit makes all write calls happen inside the same thread that opened the blob handle. Doing it outside using FileBodyProducer will yield and run the writes across random reactor threads. This is an attempt to fix #8945 -- Resolves: #8945
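A simplified sketch of the idea (deferToThread stands in for the adbapi thread pool; handle and writer names are illustrative):

    from twisted.internet import threads

    def _write_in_one_thread(open_handle, chunks):
        handle = open_handle()        # e.g. a sqlcipher blob handle, opened in this thread
        try:
            for chunk in chunks:
                handle.write(chunk)   # every write happens in the same thread
        finally:
            handle.close()

    def save_blob(open_handle, chunks):
        # run open + writes + close as one unit in a single pool thread,
        # instead of letting a FileBodyProducer push writes from other threads
        return threads.deferToThread(_write_in_one_thread, open_handle, chunks)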
2017-09-11  [bug] do not allow concurrent schema creation  (Victor Shyba)
Moved schema creation and migrations into the pragma-locked call, so we avoid them running concurrently on a thread pool. -- Resolves: #8945
2017-09-11  [bug] close consumer on FileBodyProducer  (Victor Shyba)
It isn't closed by Twisted like the producer is. -- Resolves: #8924 -- Related: #8932
2017-09-11  [style] fixes from code review  (Victor Shyba)
2017-09-11  [feature] save sync status on client side  (Victor Shyba)
Adds two new columns for sync status and retries. Also some initial rough logic for upload retry limiting. -- Resolves: #8823 -- Related: #8822
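A sketch of the kind of local schema change this implies (table and column names are illustrative, not the actual migration):

    SYNC_STATUS_MIGRATION = [
        "ALTER TABLE blobs ADD COLUMN sync_status INTEGER",
        "ALTER TABLE blobs ADD COLUMN retries INTEGER DEFAULT 0",
    ]

    def upgrade(cursor):
        for statement in SYNC_STATUS_MIGRATION:
            cursor.execute(statement)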
2017-09-05  [feat] toggle http persistence depending on environment variable  (drebs)
2017-09-05  [feat] use a persistent connection pool in http agent  (drebs)
2017-09-05  [feat] use cookies in the client syncer  (drebs)
2017-09-05  [bug] ensure the number of threads in blobs thread pool  (drebs)
The number of threads in the blobs database thread pool can't be smaller than the number of attempts to write concurrently to the database, otherwise different kinds of concurrency problems may arise. By setting the minimum and maximum number of threads to the same number, we make sure there will always be that number of available threads for interaction with the blobs db.
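A sketch of pinning the pool size with Twisted's adbapi (driver, path and size are illustrative):

    from twisted.enterprise import adbapi

    POOL_SIZE = 10  # must cover the number of concurrent writers

    dbpool = adbapi.ConnectionPool(
        "pysqlcipher.dbapi2",      # illustrative driver module
        "/path/to/blobs.db",       # illustrative database path
        cp_min=POOL_SIZE,          # minimum connections/threads in the pool
        cp_max=POOL_SIZE,          # maximum equals minimum, so the size is fixed
    )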
2017-09-05  [bug] use a different name for each user's blobs db  (drebs)
2017-08-31  [bug] revert pool size change pushed by mistake  (drebs)
2017-08-25  [bug] increase number of connections in local blobs db pool  (drebs)
If the number of threads on the connection pool is small and the local blobs db is stressed, different concurrent access problems may arise.
2017-08-23  [bug] use remote secret for uploading blobs  (drebs)
2017-08-23  [bug] use correct uuid in blobmanager setup  (drebs)
2017-08-11  [refactor] make blobs client unaware of 'default'  (Victor Shyba)
This value was hardcoded on the client, but the server already assumes it as the default, so there is no need for it to be hardcoded. -- Resolves: #8882
2017-08-11  [bug] track namespace information on blobs client  (Victor Shyba)
A reported bug in the namespace feature was that we couldn't delete a namespaced blob after a cold start, since the client wasn't able to check which namespace it belongs to. This commit completes the tracking of namespaces in client-side code, making it possible to query and store namespace information on disk, through sqlcipher. -- Resolves: #8882
2017-08-11  [feature] add namespace to local blobs db table  (Victor Shyba)
This column will keep track of namespace locally. -- Related: #8882
2017-08-07  [bug] skip processing if no consumers to avoid data loss  (Victor Shyba)
2017-08-07  [refactor] use endStream public method instead of private one  (Victor Shyba)
2017-08-07  [docs] fix typos and improve text from code review  (Victor Shyba)