path: root/src/leap/soledad
Age  Commit message  Author
2017-10-31  [feature] add sync progress attribute  (Victor Shyba)
--Resolves: #8848
2017-10-27  [bug] there is no retry limit for usual transfers  (Victor Shyba)
Retry limit was originally specified in #8825 as a protection mechanism, but #8822 (retry) doesn't specify a retry limit. In fact, blobs is supposed to retry until the transfer is complete, using timed delays between attempts but never giving up. -- Related: #8822 -- Related: #8825
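A minimal sketch of that retry-forever behaviour with Twisted, assuming a hypothetical transfer_blob callable and an illustrative delay (the real blobs code differs):

```python
from twisted.internet import defer, reactor, task


@defer.inlineCallbacks
def transfer_until_done(transfer_blob, blob_id, delay=10):
    # Keep retrying until the transfer succeeds; wait `delay` seconds
    # between attempts instead of giving up after a fixed count.
    while True:
        try:
            result = yield transfer_blob(blob_id)  # hypothetical transfer call
        except Exception:
            yield task.deferLater(reactor, delay, lambda: None)
        else:
            defer.returnValue(result)
```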
2017-10-27  [refactor] semaphore.run instead of acquire/release  (Victor Shyba)
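For illustration, the general difference between the two styles with Twisted's DeferredSemaphore; do_transfer is a hypothetical callable, not soledad's actual code:

```python
from twisted.internet import defer

sem = defer.DeferredSemaphore(3)  # at most 3 concurrent transfers


def transfer_with_acquire(do_transfer, blob_id):
    # manual style: the token must be released on success *and* failure
    d = sem.acquire()
    d.addCallback(lambda _: do_transfer(blob_id))
    d.addBoth(lambda result: (sem.release(), result)[1])
    return d


def transfer_with_run(do_transfer, blob_id):
    # run() acquires, calls and releases for us, even if the call fails
    return sem.run(do_transfer, blob_id)
```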
2017-10-27  [refactor] add a table for sync_status  (Victor Shyba)
As defined in #8970, this table and the new module will ease adding sync features such as priority queues and streaming. --Resolves: #8970
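The exact schema introduced by #8970 is not shown in this log; the sketch below only illustrates the idea of a local table tracking per-blob sync state, and the column names are assumptions:

```python
import sqlite3

# Hypothetical schema: one row per blob recording its local sync state.
SYNC_STATUS_SCHEMA = """
CREATE TABLE IF NOT EXISTS sync_status (
    blob_id   TEXT PRIMARY KEY,
    status    TEXT NOT NULL,      -- e.g. PENDING_UPLOAD, PENDING_DOWNLOAD, SYNCED
    priority  INTEGER DEFAULT 0,  -- would allow priority queues later
    retries   INTEGER DEFAULT 0
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SYNC_STATUS_SCHEMA)
```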
2017-10-27  [style] E722 do not use bare except  (Victor Shyba)
2017-10-27  [bug] TypeError: Incorrect padding  (Victor Shyba)
preamble.py wasn't using the urlsafe version of base64, while all other parts of blobs were using it. --Resolves: #8980
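For illustration, the standard and urlsafe base64 alphabets differ in two characters, so every component handling the value has to agree on one of them:

```python
import base64
import os

raw = os.urandom(32)

standard = base64.b64encode(raw)          # may contain '+' and '/'
urlsafe = base64.urlsafe_b64encode(raw)   # uses '-' and '_' instead

# Mixing encoders and decoders between the two alphabets is what led to
# the padding error; using the urlsafe pair consistently avoids it.
assert base64.urlsafe_b64decode(urlsafe) == raw
```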
2017-10-27  [refactor] split blobs into modules  (Victor Shyba)
So we can have manager, sync, sql and errors each in its own place. --Related: #8970
2017-10-23  [bug] revert unintentional changes from last commit  (drebs)
2017-10-23  [doc] add script for copying doc to leap_se repo  (drebs)
2017-10-16  [bug] use all default server config values  (drebs)
Server config dictionary was being poorly updated, and not all default values were being added at runtime. This was mainly a problem in tests, but fixing it may avoid possible bugs with this implementation in the future.
2017-10-12  [bug] limit number of concurrent requests to local db  (drebs)
When running stress tests on blobs storage, we get weird errors when hundreds of requests are made concurrently to the sqlite backend. This commit adds a limit so only 10 requests will be delivered to the backend at a time.
2017-10-12  [feature] make concurrent blob writes configurable  (drebs)
2017-10-11  [bug] limit concurrent blob writes in server  (drebs)
If there's no limit to the number of concurrent blob writes in the server, the maximum limit of open files will eventually be reached, and the processing of requests will start crashing. This commit adds a semaphore to limit the number of concurrent writes in the server. Related: #8973
2017-10-10  [feature] log OS errors when writing blobs  (drebs)
2017-10-10  [bug] handle put errors in the incoming blobs api  (drebs)
An errback was missing in the PUT renderer method of the incoming API. Because of that, requests to that endpoint were not being correctly finished in case of errors when writing blobs. That was causing delivery requests to hang until timeout. Closes: #8977
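A generic sketch of the callback/errback pairing in a Twisted web resource, with a stubbed-out write_blob standing in for the real asynchronous write (the actual incoming API differs):

```python
from twisted.internet import defer
from twisted.web import resource, server


def write_blob(fileobj):
    # stand-in for the real asynchronous blob write; returns a Deferred
    return defer.succeed(None)


class IncomingPut(resource.Resource):
    """Hypothetical resource showing the pattern described above."""

    def render_PUT(self, request):
        d = write_blob(request.content)

        def done(_):
            request.setResponseCode(200)
            request.finish()

        def failed(failure):
            # without an errback, a failed write leaves the request
            # hanging until the client times out
            request.setResponseCode(500)
            request.finish()

        d.addCallbacks(done, failed)
        return server.NOT_DONE_YET
```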
2017-10-05  [bug] fix retries for blobs download  (drebs)
- add a MaximumRetriesError exception to encapsulate other exceptions.
- record the pending status before trying to download
- modify update_sync_status to insert or update
- modify retry tests to check number of retries
- add a test for download retry limit
2017-10-05  [bug] retry blob download for all retriable errors  (drebs)
Because the exception catching was done inside _download_and_decrypt() and only accounted for InvalidBlob exceptions, not all retriable errors would lead to an actual retry. This commit moves the exception catching one level up and catches any kind of exception, as is done in the upload part. This allows retrying on all retriable errors.
2017-10-05  [bug] improve error message on blob download error  (drebs)
The previous error message had some problems:
- the connection should not be a problem, as this is going over TCP. If the HTTP request was successful, there's no reason to think its contents could have been corrupted by a connection problem.
- I am not sure what the best communication strategy is here, but the real problem is either a bug or actual tampering, so I make this explicit.
- A problem like this should always be reported, not only when it persists.
2017-10-05  [bug] log the exception on blob download error  (drebs)
2017-10-05  [bug] don't use hardcoded number of retries when downloading blobs  (drebs)
2017-10-05  [style] use python3 compatible try/except syntax  (drebs)
2017-10-05  [bug] ensure maximum concurrency on blobs transfer  (drebs)
The way in which the concurrency limit was being enforced was such that transfer attempts were spawned in groups of 3, and all of them had to finish before a new group could be spawned. This modification allows the maximum concurrency level to be used at all times.
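A sketch of the difference, assuming a hypothetical transfer callable that returns a Deferred per blob; names are illustrative only:

```python
from twisted.internet import defer


def transfer_all_in_groups(transfer, blob_ids, limit=3):
    # Old behaviour: spawn `limit` transfers, wait for the whole group to
    # finish, then spawn the next group; one slow transfer stalls its group.
    d = defer.succeed(None)
    for i in range(0, len(blob_ids), limit):
        group = blob_ids[i:i + limit]
        d.addCallback(lambda _, g=group: defer.gatherResults(
            [transfer(b) for b in g]))
    return d


def transfer_all_concurrently(transfer, blob_ids, limit=3):
    # New behaviour: keep `limit` transfers in flight at all times; as soon
    # as one finishes, the semaphore admits the next one.
    sem = defer.DeferredSemaphore(limit)
    return defer.gatherResults([sem.run(transfer, b) for b in blob_ids])
```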
2017-10-05  [style] param is not keyword argument  (drebs)
2017-10-05  [style] name keyword arguments in function calls  (drebs)
2017-10-05  [style] rename exception to match standards  (drebs)
We have been using "Error" instead of "Exception" in exception names, so this commit is only enforcing an unwritten policy.
2017-10-05  [bug] refuse to start if blobs is misconfigured  (Victor Shyba)
As kali pointed out, one can disable blobs after enabling it, which would cause data loss as blobs documents would become unreachable. This commit adds a warning and refuses to start the server. -- Resolves: #8866
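A minimal sketch of such a startup guard, with a hypothetical config flag and blobs path; the real server check differs:

```python
import os


class ImproperlyConfiguredService(Exception):
    """Raised when the configuration would make existing data unreachable."""


def check_blobs_config(blobs_enabled, blobs_path):
    # Hypothetical check: if blobs was turned off but blobs data already
    # exists on disk, refuse to start instead of silently hiding it.
    has_local_blobs = os.path.isdir(blobs_path) and os.listdir(blobs_path)
    if not blobs_enabled and has_local_blobs:
        raise ImproperlyConfiguredService(
            "blobs is disabled but %r contains blobs data; refusing to start"
            % blobs_path)
```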
2017-10-05  [style] fix typos on filenames and comments  (Victor Shyba)
2017-10-05  [bug] set as PROCESSING during processing flow  (Victor Shyba)
It was previously setting it to PROCESSED. Also added some tests to check that the underlying wrapped calls match the intent. -- Resolves: #8955
2017-10-05  [feature] notify, retry and fail from invalid tag  (Victor Shyba)
Notify, log something meaningful and retry at most 3 times before marking the download as unusable (FAILED_DOWNLOAD). -- Related: #8825
2017-10-05  [feature] retry during upload + proper wait  (Victor Shyba)
Added retry to upload and modified the retry implementation to comply with the discussed spec. According to it, we should wait between retries, something like 1s, 10s, ... up to 1 minute. -- Resolves: #8822
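A sketch of such a wait schedule; the exact progression and cap used by soledad may differ from this illustration:

```python
def next_backoff(attempt, cap=60):
    # Grow the wait between retries, capped at one minute, following the
    # "1s, 10s, ... up to 1 minute" idea from the commit message.
    return min(cap, 10 ** attempt)


delays = [next_backoff(n) for n in range(5)]
# [1, 10, 60, 60, 60]
```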
2017-10-05  [feature] retry during download  (Victor Shyba)
-- Related: #8822
2017-10-05  [feature] send/fetch missing using local statuses  (Victor Shyba)
Instead of querying the server, fetch_missing and send_missing now use the PENDING_DOWNLOAD and PENDING_UPLOAD statuses to decide what to do. This allows the sync mechanism to control when/how to query data from the server and to reuse the query data during the sync. -- Related: #8822
2017-10-05  [feature] blob get/put handle unavailable statuses  (Victor Shyba)
PENDING_DOWNLOAD is an empty blob, so during blob_manager.get we need to return empty as it's not available. This status is used during sync. During put, if we have an empty unavailable blob, we delete it and replace it with what is being put, marking it as SYNCED. -- Related: #8822
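A rough sketch of that flow with hypothetical status constants and a hypothetical local storage object; the real BlobManager API differs:

```python
PENDING_DOWNLOAD, SYNCED = "PENDING_DOWNLOAD", "SYNCED"


def get_blob(blob_id, local):
    if local.get_status(blob_id) == PENDING_DOWNLOAD:
        return None  # only an empty placeholder exists locally
    return local.read(blob_id)


def put_blob(blob_id, content, local):
    if local.get_status(blob_id) == PENDING_DOWNLOAD:
        local.delete(blob_id)  # drop the empty placeholder first
    local.write(blob_id, content)
    local.set_status(blob_id, SYNCED)
```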
2017-10-05  [refactor] change default dict params  (Victor Shyba)
As raised by kali, they can bring some bugs, and avoiding them is pretty easy. -- Resolves: #8957
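For illustration, the classic mutable-default-argument pitfall this refers to, and the usual fix:

```python
def add_tag_buggy(tag, tags={}):
    # the same dict is shared across every call that relies on the default
    tags[tag] = True
    return tags


def add_tag_fixed(tag, tags=None):
    # a fresh dict per call unless the caller passes one explicitly
    if tags is None:
        tags = {}
    tags[tag] = True
    return tags


add_tag_buggy("a")
print(add_tag_buggy("b"))   # {'a': True, 'b': True}  <- surprise
print(add_tag_fixed("b"))   # {'b': True}
```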
2017-10-05  [feature] improve preamble comparisons  (Victor Shyba)
We were comparing the raw content of preambles. This commit adds a way to compare excluding time so comparisons don't suffer from false negatives caused by time deltas. -- Resolves: #8920
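A minimal sketch of a time-insensitive comparison, assuming a preamble-like structure with a timestamp field; the real preamble has different fields:

```python
from collections import namedtuple

# Illustrative only: the real preamble carries more than these fields.
Preamble = namedtuple("Preamble", ["doc_id", "rev", "timestamp"])


def equal_ignoring_time(one, other):
    # Compare two preambles while ignoring the timestamp, so small time
    # deltas between writer and reader do not produce false negatives.
    return one._replace(timestamp=0) == other._replace(timestamp=0)


a = Preamble("doc-1", "rev-1", 1507000000)
b = Preamble("doc-1", "rev-1", 1507000003)
assert equal_ignoring_time(a, b)
```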
2017-10-05  [feature] persist pending_download remote listing  (Victor Shyba)
-- Related: #8822
2017-10-05  [feature] filter out unavailable blobs on listing  (Victor Shyba)
-- Related: #8822
2017-10-05  [feature] concurrent blob download/upload  (Victor Shyba)
-- Related: #8932
2017-09-30  [bug] fix argument passing in blobs queries  (drebs)
2017-09-29  [refactor] make parameters of blobmanager methods explicit  (drebs)
2017-09-29  [bug] check all http response status codes  (drebs)
2017-09-29  [bug] raise when trying to get flags of a nonexistent blob  (drebs)
2017-09-28  [doc] add api reference to the docs  (drebs)
2017-09-14  [pkg] standardize location of services tokens file  (drebs)
Introduction of local services authentication added a configuration file containing the auth tokens for each service. There were different names for that file, and this commit standardizes all of them to the same value: /etc/soledad/services.tokens
2017-09-14  [pkg] use /var/lib/soledad/blobs to store blobs  (drebs)
Soledad Server was previously using something in /srv to store blobs on the server side. Debian/lintian doesn't like that at all, so we are changing to /var/lib/soledad/blobs. Closes: #8948
2017-09-13  [refactor] remove dead code and improve naming  (Victor Shyba)
From code review. -- Related: #8945
2017-09-11  [bug] use sql file handler from adbapi threadpool  (Victor Shyba)
This commit makes all write calls happen inside the same thread that opened the blob handle. Doing it outside, using FileBodyProducer, would yield and run the writes across random reactor threads. This is an attempt to fix #8945. -- Resolves: #8945
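A generic sketch of the same-thread idea using twisted.enterprise.adbapi; the database file, table and column names below are assumptions, not soledad's actual schema:

```python
from twisted.enterprise import adbapi

pool = adbapi.ConnectionPool("sqlite3", "blobs.db", check_same_thread=False)


def _write_blob(txn, blob_id, fileobj):
    # Everything that touches the database handle happens here, inside a
    # single threadpool thread, instead of being driven from the reactor.
    data = fileobj.read()
    txn.execute(
        "INSERT OR REPLACE INTO blobs (blob_id, payload) VALUES (?, ?)",
        (blob_id, data))


def write_blob(blob_id, fileobj):
    return pool.runInteraction(_write_blob, blob_id, fileobj)
```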
2017-09-11  [bug] do not allow concurrent schema creation  (Victor Shyba)
Moved schema creation and migrations to the pragma-locked call, so we avoid them running concurrently on a thread pool. -- Resolves: #8945
2017-09-11  [bug] close consumer on FileBodyProducer  (Victor Shyba)
It isn't closed by Twisted like the producer is. -- Resolves: #8924 -- Related: #8932
2017-09-11  [style] fixes from code review  (Victor Shyba)