So we can have manager, sync, sql and errors in their own places.

Related: #8970
Fix this lintian error:

00:14:29 W: soledad source: maintainer-script-lacks-debhelper-token debian/soledad-server.postinst
00:14:29 N:
00:14:29 N: This package is built using debhelper commands that may modify
00:14:29 N: maintainer scripts, but the maintainer scripts do not contain the
00:14:29 N: "#DEBHELPER#" token debhelper uses to modify them.
00:14:29 N:
00:14:29 N: Adding the token to the scripts is recommended.
00:14:29 N:
00:14:29 N: Severity: normal, Certainty: possible
00:14:29 N:
00:14:29 N: Check: debhelper, Type: source
00:14:29 N:
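The fix is to place the #DEBHELPER# token in the maintainer script so debhelper can substitute its generated snippets there. A minimal sketch of what a fixed debian/soledad-server.postinst might look like (the configure-time steps shown are illustrative, not the package's actual logic):

```shell
#!/bin/sh
# Sketch of a postinst with the token in place. "#DEBHELPER#" is a plain
# shell comment until debhelper replaces it with generated snippets at
# build time.
set -e

case "$1" in
    configure)
        # package-specific configuration steps would go here
        ;;
esac

#DEBHELPER#
```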
The server config dictionary was being poorly updated, and not all
default values were being added at runtime. This was mainly a problem in
tests, but fixing it may avoid bugs with this implementation in the
future.
With the introduction of semaphores at the blobmanager level, there is
no longer a need for them in the benchmark tests.
When running stress tests on blobs storage, we get weird errors when
hundreds of requests are made concurrently to the sqlite backend. This
commit adds a limit so that only 10 requests are delivered to the
backend at a time.
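The actual code uses Twisted's deferred machinery; the asyncio sketch below only illustrates the idea of bounding how many requests reach the backend at once. All names here (backend_query, run_stress, the state dict) are illustrative, not Soledad's API:

```python
import asyncio

async def backend_query(i, state):
    # Hypothetical stand-in for a sqlite call; it tracks concurrency so
    # the effect of the limit is observable.
    state["active"] += 1
    state["peak"] = max(state["peak"], state["active"])
    await asyncio.sleep(0)  # simulate I/O, yielding to other tasks
    state["active"] -= 1
    return i

async def run_stress(n, limit=10):
    # A shared semaphore caps concurrent backend calls at `limit`.
    sem = asyncio.Semaphore(limit)
    state = {"active": 0, "peak": 0}

    async def limited(i):
        async with sem:
            return await backend_query(i, state)

    results = await asyncio.gather(*(limited(i) for i in range(n)))
    return results, state["peak"]
```

Even with hundreds of concurrent callers, no more than `limit` of them are ever inside the backend at the same time.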
If there's no limit to the number of concurrent blob writes in the
server, the maximum number of open files will eventually be reached, and
the processing of requests will start crashing. This commit adds
a semaphore to limit the number of concurrent writes in the server.
Related: #8973
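The server-side limiter can be sketched with a bounded semaphore guarding the file descriptors. This is an illustration under assumptions (the real server uses Twisted, and write_blob here is a hypothetical name), not the actual implementation:

```python
import tempfile
import threading

# Cap the number of concurrently open blob files. 10 is illustrative;
# the point is only that the cap is well below the fd limit.
WRITE_LIMIT = threading.BoundedSemaphore(10)

def write_blob(data: bytes) -> int:
    # Each in-flight write holds an open file descriptor; the semaphore
    # keeps at most 10 open at once, so a burst of uploads cannot
    # exhaust the process's file-descriptor limit.
    with WRITE_LIMIT:
        with tempfile.TemporaryFile() as f:
            return f.write(data)
```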
With the set -o xtrace option, all the variables are printed to stdout,
thereby leaking all secrets into the CI output. This commit removes that
option from the relevant scripts.
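A small illustration of the leak: with xtrace enabled, every expanded command line, including assignments of secret values, is echoed and ends up in the CI log. The variable names below are made up for the example:

```shell
#!/bin/sh
set -o xtrace
PUBLIC_STEP="build"   # traced, e.g.: + PUBLIC_STEP=build
set +o xtrace         # turn tracing off before touching secrets
API_TOKEN="s3cr3t"    # not traced, so it never reaches the log
echo "token length: ${#API_TOKEN}"
```

An alternative to removing the option entirely is bracketing only the secret-handling section with `set +o xtrace` / `set -o xtrace`, as sketched above.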
If pyzmq is built with --install-option, then all dependencies will be
forced to install from source. By removing that option, we allow pip to
use wheels.
An errback was missing in the PUT renderer method of the incoming API.
Because of that, requests to that endpoint were not being correctly
finished when writing a blob failed, which caused delivery requests to
hang until timeout.
Closes: #8977
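The real handler is a Twisted renderer with an errback chain; the asyncio sketch below only illustrates the invariant being restored: the request must be finished on both the success and the error path. All names (write_blob, handle_put, the request dict) are hypothetical:

```python
import asyncio

async def write_blob(data: bytes) -> None:
    # Stand-in for the blob write; fails for an empty payload.
    if not data:
        raise IOError("write failed")

async def handle_put(request: dict, data: bytes) -> None:
    # Finish the request on BOTH paths; without the error branch the
    # client hangs until its timeout, as the commit describes.
    try:
        await write_blob(data)
        request["code"] = 200
    except Exception as e:
        request["code"] = 500
        request["body"] = str(e)
    finally:
        request["finished"] = True
```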
- add a MaximumRetriesError exception to encapsulate other exceptions
- record the pending status before trying to download
- modify update_sync_status to insert or update
- modify retry tests to check the number of retries
- add a test for the download retry limit
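The insert-or-update behavior of update_sync_status can be sketched with SQLite's INSERT OR REPLACE. The table and column names below are illustrative, not the actual Soledad schema:

```python
import sqlite3

def update_sync_status(conn, blob_id, status, retries=0):
    # INSERT OR REPLACE turns the call into an upsert: it inserts a new
    # row for an unseen blob_id and overwrites the existing row otherwise.
    conn.execute(
        "INSERT OR REPLACE INTO sync_status (blob_id, status, retries)"
        " VALUES (?, ?, ?)",
        (blob_id, status, retries),
    )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sync_status"
    " (blob_id TEXT PRIMARY KEY, status TEXT, retries INTEGER)"
)
update_sync_status(conn, "blob-1", "PENDING")    # recorded before download
update_sync_status(conn, "blob-1", "FAILED", 3)  # same row, updated in place
```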
Because the exception catching was being done inside
_download_and_decrypt() and only accounted for InvalidBlob exceptions,
not all retriable errors would lead to an actual retry.
This commit moves the exception catching one level up and catches any
kind of exception, as is done in the upload part. This allows for
retrying on all retriable errors.
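The shape of the fix can be sketched as a retry wrapper that catches broadly at the outer level and wraps the last failure, assuming a MaximumRetriesError as introduced above (with_retries and the retry count are illustrative names and values):

```python
class MaximumRetriesError(Exception):
    """Wraps the last underlying error once all retries are exhausted."""

def with_retries(fn, max_retries=3):
    # Catch any exception at this level, as the upload path does, so that
    # every retriable error triggers a retry instead of only InvalidBlob.
    last_error = None
    for _ in range(max_retries):
        try:
            return fn()
        except Exception as e:
            last_error = e
    raise MaximumRetriesError(last_error)
```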
The previous error message had some problems:
- the connection should not be the problem, as this goes over TCP: if
  the HTTP request was successful, there's no reason to think its
  contents could have been corrupted by a connection problem.
- I am not sure what the best communication strategy is here, but the
  real problem is either a bug or actual tampering, so I make this
  explicit.
- a problem like this should always be reported, not only when the
  problem persists.
The way the concurrency limit was being enforced meant that transfer
attempts were spawned in groups of 3, and all of them had to finish
before a new group could be spawned. This modification allows the
maximum concurrency level to be used at all times.
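The difference can be sketched with a shared semaphore instead of fixed-size batches: a new transfer starts as soon as any slot frees, so the window slides rather than draining. This is an asyncio illustration, not the project's Twisted code, and transfer/transfer_all are made-up names:

```python
import asyncio

async def transfer(sem: asyncio.Semaphore, blob_id: int) -> int:
    async with sem:
        await asyncio.sleep(0)  # placeholder for the actual transfer
        return blob_id

async def transfer_all(blob_ids, concurrency=3):
    # One semaphore shared by all tasks keeps up to `concurrency`
    # transfers in flight at all times, instead of waiting for a whole
    # group of 3 to finish before spawning the next group.
    sem = asyncio.Semaphore(concurrency)
    return await asyncio.gather(*(transfer(sem, b) for b in blob_ids))
```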