download was being inferred. It's now set explicitly in each case and will raise
an error if node is provided. Also removed a duplicated params
variable.
|
Stream production wasn't pausing or stopping when the protocol asked it to.
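A minimal sketch of the pattern this fix restores, assuming a Twisted-style push producer (the class and names below are illustrative, not Soledad's actual code): the transport signals back-pressure through pauseProducing/stopProducing, and production must check those flags before writing more data.

```python
from zope.interface import implementer
from twisted.internet.interfaces import IPushProducer


@implementer(IPushProducer)
class ChunkProducer(object):
    """Stream chunks to a consumer, honoring pause/resume/stop.

    Register with consumer.registerProducer(producer, True), then call
    resumeProducing() to start.
    """

    def __init__(self, consumer, chunks):
        self.consumer = consumer
        self.chunks = iter(chunks)
        self.paused = False
        self.stopped = False

    def pauseProducing(self):
        # Transport buffers are full: stop writing until resumed.
        self.paused = True

    def resumeProducing(self):
        self.paused = False
        self._produce()

    def stopProducing(self):
        # The consumer is gone: abort the stream for good.
        self.stopped = True

    def _produce(self):
        while not self.paused and not self.stopped:
            try:
                chunk = next(self.chunks)
            except StopIteration:
                self.consumer.unregisterProducer()
                return
            self.consumer.write(chunk)
```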
|
-- Resolves: #8809
|
-- Resolves: #8773
|
-- Closes: #9004
|
It was generating spaces, which sometimes caused the split to fail.
|
First version, still missing the consumer/producer model and some tweaks,
but working.
-- Related: #8809
|
Closes: #8691
|
Intercept the creation of the protocol factory in the HTTP connection
pool to use twisted.protocols.policies.ThrottlingFactory and control the
incoming and outgoing bandwidth.
The factory only controls one connection, so when throttling we limit
the pool to one connection per host. This way, throttling happens on a
per-host basis.
Closes: #8931
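A hedged sketch of the mechanism described, with illustrative limit values; it shows the idea rather than the exact pool interception performed here:

```python
from twisted.protocols import policies
from twisted.web.client import HTTPConnectionPool


def throttle(factory, kbps=100):
    # ThrottlingFactory limits are in bytes per second; the rate here
    # is an illustrative value, not a project constant.
    limit = kbps * 1024
    return policies.ThrottlingFactory(
        factory, readLimit=limit, writeLimit=limit)


def throttled_pool(reactor):
    pool = HTTPConnectionPool(reactor, persistent=True)
    # A ThrottlingFactory only sees its own connections, so keeping a
    # single connection per host makes the limit effectively per-host.
    pool.maxPersistentPerHost = 1
    return pool
```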
|
Adds the ability to have documents that won't be synced. This enables
applications to use Soledad to store temporary blobs that should be
discarded later, instead of unnecessarily keeping the sync loop busy.
-- Resolves: #8819
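A hypothetical sketch of how such a flag can work; the table, column, and parameter names (including local_only) are assumptions for illustration. Mark the blob when it is stored, then exclude it from the sync loop's pending list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE blobs (
                    blob_id TEXT PRIMARY KEY,
                    payload BLOB,
                    local_only INTEGER NOT NULL DEFAULT 0)""")


def put_blob(blob_id, payload, local_only=False):
    conn.execute("INSERT INTO blobs VALUES (?, ?, ?)",
                 (blob_id, payload, 1 if local_only else 0))


def pending_sync():
    # The sync loop never sees local-only blobs.
    rows = conn.execute("SELECT blob_id FROM blobs WHERE local_only = 0")
    return [r[0] for r in rows]
```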
|
Add a sync method to propagate deletions locally in batch.
-- Resolves: #8961
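A small sketch of the batching idea, with assumed table and column names: one DELETE statement for the whole set instead of one per blob.

```python
def batch_delete(conn, deleted_ids):
    # Build "?, ?, ..." so the whole batch goes in a single statement.
    placeholders = ", ".join("?" for _ in deleted_ids)
    conn.execute("DELETE FROM blobs WHERE blob_id IN (%s)" % placeholders,
                 list(deleted_ids))
```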
|
-- Related: #8961
|
We were doing it for downloads, but not for uploads.
|
This commit creates a PENDING_DELETE sync status, which can be used to
keep track of what's deleted locally in order to propagate the deletions
to the server later.
-- Related: #8961
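An illustrative sketch of the status flow, with assumed names and status values: local deletion marks the row PENDING_DELETE so the next sync can tell the server, and only then is the bookkeeping row dropped.

```python
class SyncStatus:
    SYNCED = 1          # values here are assumptions for the example
    PENDING_UPLOAD = 2
    PENDING_DELETE = 3


def delete_locally(conn, blob_id):
    # Drop the content, but remember that a deletion must be propagated.
    conn.execute("DELETE FROM blobs WHERE blob_id = ?", (blob_id,))
    conn.execute("UPDATE sync_state SET sync_status = ? WHERE blob_id = ?",
                 (SyncStatus.PENDING_DELETE, blob_id))


def propagate_deletions(conn, server):
    rows = conn.execute("SELECT blob_id FROM sync_state WHERE sync_status = ?",
                        (SyncStatus.PENDING_DELETE,)).fetchall()
    for (blob_id,) in rows:
        server.delete(blob_id)   # tell the server first...
        conn.execute("DELETE FROM sync_state WHERE blob_id = ?",
                     (blob_id,))  # ...then drop the bookkeeping row
```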
|
Closes: #8981
|
-- Resolves: #8848
|
A retry limit was originally specified in #8825 as a protection mechanism,
but #8822 (retry) doesn't specify one. In fact, blobs is supposed to
retry until the transfer is complete, using timed delays between
attempts but never giving up.
-- Related: #8822
-- Related: #8825
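A sketch of that policy under Twisted, with an illustrative backoff schedule (the actual delays aren't specified here): retry with timed delays, indefinitely.

```python
from twisted.internet import defer, reactor, task


@defer.inlineCallbacks
def retry_forever(attempt, max_delay=60):
    # Per #8822: keep retrying with timed delays, never give up.
    delay = 1
    while True:
        try:
            result = yield attempt()
        except Exception:
            # Any retriable failure: wait, back off, and try again.
            yield task.deferLater(reactor, delay, lambda: None)
            delay = min(delay * 2, max_delay)
        else:
            defer.returnValue(result)
```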
|
As defined in #8970, this table and the new module will ease adding sync
features such as priority queues and streaming.
-- Resolves: #8970
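A guess at the shape such a table could take, based only on the features mentioned (priority queues, retry tracking); the actual schema may differ:

```python
# Assumed schema, for illustration: one row per blob with its sync
# status, a priority for queue ordering, and a retry counter.
SCHEMA = """
CREATE TABLE IF NOT EXISTS sync_state (
    blob_id     TEXT PRIMARY KEY,
    sync_status INTEGER NOT NULL,
    priority    INTEGER NOT NULL DEFAULT 10,
    retries     INTEGER NOT NULL DEFAULT 0
);
"""
```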
|
So we can have manager, sync, sql and errors each in its own place.
-- Related: #8970
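A plausible layout implied by the message; the file roles are inferred from the module names, not confirmed:

```
blobs/
    __init__.py   # public entry points
    manager.py    # blob manager: local storage plus transfer coordination
    sync.py       # sync loop and sync-status handling
    sql.py        # sqlite backend and the sync table
    errors.py     # exception hierarchy
```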
|
When running stress tests on blobs storage, we got weird errors when
hundreds of requests were made concurrently to the sqlite backend. This
commit adds a limit so that only 10 requests are delivered to the
backend at a time.
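A sketch of one way to enforce such a cap with Twisted's DeferredSemaphore; the database path and wiring are illustrative:

```python
from twisted.enterprise import adbapi
from twisted.internet import defer

# At most 10 concurrent requests reach the sqlite backend; the rest
# wait in the semaphore's queue.
semaphore = defer.DeferredSemaphore(10)
dbpool = adbapi.ConnectionPool("sqlite3", "/tmp/blobs.db",
                               check_same_thread=False)


def run_query(query, *args):
    # Every call goes through the semaphore, so at most 10 queries are
    # in flight against the backend at any moment.
    return semaphore.run(dbpool.runQuery, query, *args)
```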
|
- add a MaximumRetriesError exception to encapsulate other exceptions
  (see the sketch after this list)
- record the pending status before trying to download
- modify update_sync_status to insert or update
- modify retry tests to check the number of retries
- add a test for the download retry limit
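Hedged sketches for the first and third items, with illustrative names and SQL (sqlite's INSERT OR REPLACE stands in for whatever upsert the real code uses):

```python
class MaximumRetriesError(Exception):
    """Wraps the exception that caused the final failed attempt."""

    def __init__(self, wrapped):
        self.wrapped = wrapped
        Exception.__init__(self, repr(wrapped))


def update_sync_status(conn, blob_id, status, retries=0):
    # Insert the row if it's new, overwrite it if it already exists.
    conn.execute("INSERT OR REPLACE INTO sync_state"
                 " (blob_id, sync_status, retries) VALUES (?, ?, ?)",
                 (blob_id, status, retries))
```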
|
Because the exception catching was done inside
_download_and_decrypt() and only accounted for InvalidBlob exceptions,
not all retriable errors would lead to an actual retry.
This commit moves the exception catching one level up and catches any
kind of exception, as is done in the upload part. This allows
retrying on all retriable errors.
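A sketch of the resulting shape, reusing the MaximumRetriesError sketched earlier and the _download_and_decrypt helper named above; the signature and retry count are assumptions:

```python
from twisted.internet import defer


@defer.inlineCallbacks
def download(blob_id, max_retries=3):
    # The retry policy lives here now; _download_and_decrypt no longer
    # catches anything itself.
    last_error = None
    for _ in range(max_retries):
        try:
            result = yield _download_and_decrypt(blob_id)
            defer.returnValue(result)
        except Exception as e:
            # Any retriable error counts, not just InvalidBlob.
            last_error = e
    raise MaximumRetriesError(last_error)
```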
|
The previous error message had some problems:
- the connection should not be a problem, as this is going over TCP. If
  the HTTP request was successful, there's no reason to think its
  contents could have been corrupted by a connection problem.
- I am not sure what the best communication strategy is here, but the
  real problem is either a bug or actual tampering, so I make this
  explicit.
- a problem like this should always be reported, not only when it
  persists.
|
The concurrency limit was being enforced in such a way that transfer
attempts were spawned in groups of 3, and all of them had to finish
before a new group could be spawned. This modification allows the
maximum concurrency level to be used at all times.
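One common Twisted idiom that achieves this, shown as an illustration rather than the actual implementation: several cooperative loops share a single generator of work, so a new transfer starts the moment any slot frees up.

```python
from twisted.internet import defer, task


def transfer_all(items, transfer_one, concurrency=3):
    # `concurrency` cooperative loops pull from one shared generator of
    # Deferreds. When a transfer finishes, its loop immediately starts
    # the next item, instead of waiting for the slowest member of a
    # fixed group of 3.
    coop = task.Cooperator()
    work = (transfer_one(item) for item in items)
    return defer.gatherResults(
        [coop.coiterate(work) for _ in range(concurrency)])
```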
|
We have been using "Error" instead of "Exception" in exception names, so
this commit is only enforcing an unwritten policy.
|
It was previously being set to PROCESSED. Also added some tests to check
that the underlying wrapped calls match the intent.
-- Resolves: #8955
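An illustrative shape for such a test, using a stand-in manager class (the real wiring, method names, and status values may differ); the point is asserting on the wrapped call rather than on state:

```python
from unittest import mock


class SyncStatus:
    PENDING_DELETE = 3   # values are assumptions for the example
    PROCESSED = 4


class BlobManager(object):
    """Minimal stand-in for the real manager (illustrative only)."""

    def __init__(self, backend):
        self.backend = backend

    def delete(self, blob_id):
        # The fix: mark PENDING_DELETE, where PROCESSED was set before.
        self.backend.update_sync_status(blob_id, SyncStatus.PENDING_DELETE)


def test_delete_sets_pending_delete():
    backend = mock.Mock()
    BlobManager(backend).delete("blob-1")
    backend.update_sync_status.assert_called_once_with(
        "blob-1", SyncStatus.PENDING_DELETE)
```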