To avoid corrupting data, Soledad Server checks all user databases
during startup to make sure they all use the correct schema version.
This used to be done synchronously, so startup would take a long time
when there were many databases. This commit makes that verification
asynchronous, thus speeding up server startup.
-- Resolves: #8848
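A minimal sketch of the idea using Twisted's cooperative task runner;
ensure_schema_version and the databases list are hypothetical names,
not the actual Soledad code:

    from twisted.internet import task

    def verify_schema_versions(databases):
        # Iterate the checks cooperatively instead of blocking startup:
        # the reactor keeps serving while databases are verified.
        def checks():
            for db in databases:
                # hypothetical per-database check; it may return a
                # Deferred, which the cooperator waits for before
                # moving on to the next database
                yield ensure_schema_version(db)
        return task.cooperate(checks()).whenDone()

The returned Deferred fires once every database has been checked, but
the server can finish starting up without waiting for it.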
The retry limit was originally specified in #8825 as a protection
mechanism, but #8822 (retry) doesn't specify one. In fact, blobs
transfers are supposed to retry until they complete, using timed delays
between attempts but never giving up.
-- Related: #8822
-- Related: #8825
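A minimal sketch of that behavior, assuming a hypothetical _transfer
call; the delay schedule follows the spec mentioned further down this
log (1s, 10s, ... up to one minute):

    from twisted.internet import defer, reactor, task

    DELAYS = [1, 10, 30, 60]  # seconds; capped at one minute

    @defer.inlineCallbacks
    def transfer_with_retries(blob_id):
        attempt = 0
        while True:
            try:
                # hypothetical upload/download call
                result = yield _transfer(blob_id)
                defer.returnValue(result)
            except Exception:
                # every error is treated as retriable; never give up
                delay = DELAYS[min(attempt, len(DELAYS) - 1)]
                attempt += 1
                yield task.deferLater(reactor, delay, lambda: None)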
As defined in #8970, this table and the new module will make it easier
to add sync features such as priority queues and streaming.
-- Resolves: #8970
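A rough sketch of what such a table could look like; the table and
column names are illustrative, not the actual schema:

    import sqlite3

    conn = sqlite3.connect('blobs.db')
    conn.execute("""
        CREATE TABLE IF NOT EXISTS sync_state (
            blob_id     TEXT PRIMARY KEY,
            sync_status INTEGER NOT NULL,   -- e.g. PENDING_UPLOAD, SYNCED
            priority    INTEGER NOT NULL DEFAULT 0,
            retries     INTEGER NOT NULL DEFAULT 0
        )
    """)

A status column driven by the sync code (see the PENDING_DOWNLOAD and
PENDING_UPLOAD entries below) plus a priority column is enough to build
priority queues on top of plain SQL ordering.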
preamble.py wasn't using the urlsafe version of base64, while all other
parts of blobs were.
-- Resolves: #8980
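The difference matters because standard base64 can emit '+' and '/',
which are unsafe in URLs and file names. A quick illustration with
Python's standard library:

    import base64

    data = b'\xfb\xef\xff'
    base64.b64encode(data)          # b'++//'  ('+' and '/' appear)
    base64.urlsafe_b64encode(data)  # b'--__'  (safe '-' and '_' instead)

Mixing the two alphabets in different parts of the code means one side
can produce tokens the other side fails to decode or match.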
So we can have manager, sync, sql and errors each in its own place.
-- Related: #8970
The server config dictionary was being updated incorrectly, so not all
default values were being added at runtime. This was mainly a problem
in tests, but fixing it may avoid bugs in this implementation in the
future.
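A minimal sketch of the pattern, with made-up keys; the point is to
start from a full copy of the defaults and then apply overrides, rather
than patching a shared dictionary in place:

    DEFAULTS = {
        'blobs': False,
        'concurrent_blob_writes': 50,  # illustrative keys, not the real ones
    }

    def build_config(overrides):
        conf = dict(DEFAULTS)   # every default is always present
        conf.update(overrides)  # configured values win over defaults
        return conf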
When running stress tests on blobs storage, we got unexpected errors
when hundreds of requests were made concurrently to the sqlite backend.
This commit adds a limit so that only 10 requests are delivered to the
backend at a time.
If there's no limit on the number of concurrent blob writes in the
server, the open-file limit will eventually be reached, and the
processing of requests will start crashing. This commit adds a
semaphore to limit the number of concurrent writes in the server.
-- Related: #8973
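Twisted ships a primitive that fits both this commit and the sqlite
limit above; a minimal sketch, with write_blob standing in for the real
writer:

    from twisted.internet import defer

    # shared by all requests; the token count is illustrative
    write_semaphore = defer.DeferredSemaphore(tokens=10)

    def limited_write(blob_id, data):
        # run() waits for a free token, calls write_blob, and releases
        # the token when the returned Deferred fires (success or failure)
        return write_semaphore.run(write_blob, blob_id, data)

Requests beyond the limit queue up inside the semaphore instead of
piling up open file descriptors.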
An errback was missing in the PUT renderer method of the incoming API.
Because of that, requests to that endpoint were not being finished
correctly when writing a blob failed, causing delivery requests to hang
until timeout.
Closes: #8977
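A minimal sketch of the fix in twisted.web terms; the handler and
helper methods are hypothetical, but the shape is the usual one:

    from twisted.web.server import NOT_DONE_YET

    def render_PUT(self, request):
        d = self._write_blob(request)  # hypothetical Deferred-returning write
        d.addCallback(lambda _: self._finish(request, 200))
        # Without this errback, a failed write leaves the request open
        # and the client hangs until its timeout fires.
        d.addErrback(lambda failure: self._finish_error(request, failure))
        return NOT_DONE_YET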
- add a MaximumRetriesError exception to encapsulate other exceptions
  (see the sketch after this list)
- record the pending status before trying to download
- modify update_sync_status to insert or update
- modify retry tests to check number of retries
- add a test for download retry limit
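A minimal sketch of such a wrapper exception; the attribute name is
illustrative:

    class MaximumRetriesError(Exception):
        """Raised when a transfer exceeded its retry limit.

        Wraps the last underlying failure so callers can still inspect
        the original cause.
        """
        def __init__(self, wrapped):
            self.wrapped = wrapped
            Exception.__init__(self, 'retry limit reached: %r' % (wrapped,))

(Note that a later commit, further up this log, removed the retry limit
again in favor of retrying indefinitely.)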
Because the exception catching was done inside _download_and_decrypt()
and only accounted for InvalidBlob exceptions, not all retriable errors
would lead to an actual retry.
This commit moves the exception catching one level up and catches any
kind of exception, as is done in the upload part. This allows retrying
on all retriable errors.
The previous error message had some problems:
- the connection should not be a problem, as this goes over TCP. If
  the HTTP request was successful, there's no reason to think its
  contents could have been corrupted by a connection problem.
- I am not sure what the best communication strategy is here, but the
  real problem is either a bug or actual tampering, so I make this
  explicit.
- a problem like this should always be reported, not only when it
  persists.
The way the concurrency limit was being enforced meant that transfer
attempts were spawned in groups of 3, and all of them had to finish
before a new group could be spawned. This modification keeps the
maximum concurrency level in use at all times.
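A minimal sketch of the usual Twisted pattern for this, with transfer
standing in for the real upload/download call: several consumers share
one iterator of work, so a new transfer starts the moment a slot frees
up rather than waiting for a whole batch to finish:

    from twisted.internet import defer, task

    def transfer_all(blob_ids, transfer, concurrency=3):
        work = (transfer(blob_id) for blob_id in blob_ids)
        # each cooperate() call drives the shared generator; together
        # they keep up to `concurrency` transfers in flight at all times
        workers = [task.cooperate(work).whenDone()
                   for _ in range(concurrency)]
        return defer.gatherResults(workers)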
We have been using "Error" instead of "Exception" in exception names, so
this commit is only enforcing an unwritten policy.
As kali pointed out, one can disable blobs after enabling it, which
would cause data loss as blobs documents would become unreachable. This
commit adds a warning and refuses to start the server.
-- Resolves: #8866
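A minimal sketch of such a guard, with hypothetical config keys; the
blobs path matches the one standardized further down this log (#8948):

    import os

    def check_blobs_not_disabled(conf):
        blobs_dir = '/var/lib/soledad/blobs'
        has_blobs = os.path.isdir(blobs_dir) and os.listdir(blobs_dir)
        if not conf.get('blobs') and has_blobs:
            # blobs was enabled before and data exists; starting with
            # blobs disabled would make those documents unreachable
            raise RuntimeError(
                'blobs is disabled but %s contains data; refusing to '
                'start to avoid data loss' % blobs_dir)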
It was previously being set to PROCESSED. Also added some tests to
check that the underlying wrapped calls match the intent.
-- Resolves: #8955
Notify, log something meaningful and retry at most 3 times before
marking the download as unusable (FAILED_DOWNLOAD).
-- Related: #8825
Added retry to upload and modified the retry implementation to comply
with the discussed spec. According to it, we should wait between
retries: something like 1s, 10s, ... up to 1 minute.
-- Resolves: #8822
-- Related: #8822
Instead of querying the server, fetch_missing and send_missing now use
the PENDING_DOWNLOAD and PENDING_UPLOAD statuses to decide what to do.
This allows the sync mechanism to control when and how to query data
from the server, and to reuse the query results during the sync.
-- Related: #8822
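A minimal sketch of the upload side under that scheme; the method and
attribute names are illustrative, not the actual Soledad API:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def send_missing(self):
        # no server round-trip here: the local sync table already
        # records which blobs are waiting to be uploaded
        pending = yield self.local.list_status('PENDING_UPLOAD')
        for blob_id in pending:
            yield self._send(blob_id)  # marks the blob SYNCED on success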
PENDING_DOWNLOAD marks an empty local blob, so during blob_manager.get
we need to return empty, as the content is not available yet. This
status is used during sync. During put, if we find such an empty
unavailable blob, we delete it and replace it with what is being put,
marking it as SYNCED.
-- Related: #8822
As raised by kali, they can bring in bugs, and avoiding them is pretty
easy.
-- Resolves: #8957
We were comparing the raw content of preambles. This commit adds a way
to compare them excluding the timestamp, so comparisons don't suffer
from false negatives caused by time deltas.
-- Resolves: #8920
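A minimal sketch of the idea, treating the preamble as a plain object
with a timestamp attribute (the field name is illustrative):

    def preambles_equal_except_time(p1, p2):
        d1, d2 = vars(p1).copy(), vars(p2).copy()
        # drop the volatile field before comparing everything else
        d1.pop('timestamp', None)
        d2.pop('timestamp', None)
        return d1 == d2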
-- Related: #8822
-- Related: #8822
-- Related: #8932
The introduction of local services authentication added a configuration
file containing the auth tokens for each service. There were different
names for that file; this commit standardizes all of them to the same
value: /etc/soledad/services.tokens
Soledad Server was previously using a directory under /srv to store
blobs on the server side. Debian/lintian doesn't like that at all, so
we are changing it to /var/lib/soledad/blobs.
Closes: #8948
From code review.
-- Related: #8945