and serve /, banner and robots to anonymous users.
Instead of returning 401 for all cases, the unauthenticated case is treated as a special case and the service tree is switched accordingly. This allows serving a different resource tree to unauthenticated users. The new URLs are registered with the mapper. I don't really like that dependency (it could be handled by Twisted alone), but meh.
- Resolves: #8764
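As a rough illustration of the idea (not the actual Soledad code, whose routing goes through its URL mapper), a Twisted site can hand out one resource tree to anonymous clients and another to clients that present credentials; the class names below are hypothetical:

```python
# Hypothetical sketch: serve a banner and robots.txt to anonymous clients,
# and delegate everything else to an authenticated tree.
from twisted.internet import reactor
from twisted.web import resource, server
from twisted.web.static import Data


class Banner(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        return b"Welcome to this server.\n"


class SwitchingRoot(resource.Resource):
    """Pick the service tree based on the presence of credentials."""

    def __init__(self, anon_tree, auth_tree):
        resource.Resource.__init__(self)
        self._anon = anon_tree
        self._auth = auth_tree

    def getChildWithDefault(self, path, request):
        tree = self._auth if request.getHeader(b"authorization") else self._anon
        return tree.getChildWithDefault(path, request)


anon = resource.Resource()
anon.putChild(b"", Banner())  # the "/" banner for anonymous users
anon.putChild(b"robots.txt", Data(b"User-agent: *\nDisallow: /\n", "text/plain"))

auth = resource.Resource()  # the real, auth-protected tree would go here

reactor.listenTCP(8080, server.Site(SwitchingRoot(anon, auth)))
reactor.run()
```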
The authentication wrapper is going to look for the _credentialFactories attribute; it will raise an exception if it is not found.
- Resolves: #8766
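This mirrors the convention of Twisted's twisted.web.guard.HTTPAuthSessionWrapper, which is built from a portal and a list of credential factories and keeps the latter in a _credentialFactories attribute. A minimal sketch of that wiring, with a hypothetical realm and a toy checker (not Soledad's token checker):

```python
from zope.interface import implementer
from twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse
from twisted.cred.portal import IRealm, Portal
from twisted.web import guard, resource


@implementer(IRealm)
class ExampleRealm(object):
    """Hypothetical realm handing every authenticated user an empty resource."""

    def requestAvatar(self, avatarId, mind, *interfaces):
        if resource.IResource in interfaces:
            return resource.IResource, resource.Resource(), lambda: None
        raise NotImplementedError()


checker = InMemoryUsernamePasswordDatabaseDontUse()
checker.addUser(b"user", b"secret")
portal = Portal(ExampleRealm(), [checker])
factories = [guard.BasicCredentialFactory(b"example realm")]

# The wrapper keeps the factories (in Twisted, as _credentialFactories) and
# uses them to issue 401 challenges when credentials are missing or invalid.
wrapped = guard.HTTPAuthSessionWrapper(portal, factories)
```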
- Resolves: #8765

The code for passing the configuration to the couch initialization was never called; it seems the entrypoint module never ended up being hooked as expected. I think this fixes the problem, but further review is needed here: either the entrypoint module should be used, or it should be removed. In the first case, this workaround probably needs to be reverted.

This is to ease the packaging flow used in some environments, like Pixelated, that use a debian branch against different branches.
- Resolves: #8762

- add a new ServerInfo resource for /
- move entrypoint to its own module
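A ServerInfo resource for / typically just reports basic server metadata as JSON; a minimal sketch (the field names and values are examples, not what the Soledad server actually returns):

```python
# Hypothetical ServerInfo-style resource answering GET /
import json

from twisted.web import resource


class ServerInfo(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        request.setHeader(b"content-type", b"application/json")
        info = {"server": "example-server", "version": "0.0.1"}  # example fields
        return json.dumps(info).encode("utf-8")
```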
Conflicts:
server/src/leap/soledad/server/_resource.py
testing/tests/server/test__resource.py

Conflicts:
server/src/leap/soledad/server/_wsgi.py
server/src/leap/soledad/server/entrypoint.py
server/src/leap/soledad/server/resource.py
testing/tests/server/test__resource.py

The need for token caching in the server is a matter of debate, as is the ideal way to do it. Twisted sessions store the session id in a cookie and use that session id to persist. It is not clear whether that implementation is needed, whether it works with future features (such as multiple Soledad servers), or whether it represents a security problem in some way. Because of this, this commit removes it for now. The feature is left in git history so we can bring it back later if needed.

Because the wsgi resource has its own threadpool, tests might get confused when shutting down, and the reactor may get clogged waiting for the threadpool to be stopped. By refactoring the URLMapper into its own module, server tests can avoid loading the resource module, where the wsgi threadpool resides, so the threadpool will not be started.
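The wsgi resource in question is presumably built with twisted.web.wsgi.WSGIResource, which takes a dedicated threadpool that must be started and stopped alongside the reactor; a minimal sketch of that wiring, with a placeholder WSGI app:

```python
# Sketch of a WSGIResource with its own threadpool: importing the module
# that builds it leaves a pool that needs to be started and stopped.
from twisted.internet import reactor
from twisted.python.threadpool import ThreadPool
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource


def placeholder_app(environ, start_response):
    # Hypothetical WSGI application standing in for the real one.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]


pool = ThreadPool(name="wsgi")
pool.start()
# Stop the pool on shutdown, otherwise the reactor hangs waiting for it,
# which is the symptom the commit describes for the test suite.
reactor.addSystemEventTrigger("after", "shutdown", pool.stop)

site = Site(WSGIResource(reactor, pool, placeholder_app))
reactor.listenTCP(8080, site)
reactor.run()
```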
Something happened during rebase: this configuration is supposed to be True by default now.

A 'received docs' count makes no sense for a single-request download, so it goes away along with all its comments and docstrings; docstrings for other methods were also updated. The method that tests whether a SQLCipher database is encrypted can return a db handle that can be used right away; if we ignore it and reopen, we can end up with a lost open cursor.
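The handle-reuse point can be sketched roughly as follows, assuming a pysqlcipher-style DB-API module; the function name and the exact pragma handling are illustrative, not the actual Soledad helper:

```python
# Illustrative sketch: check that a key decrypts a SQLCipher database and
# return the already-open connection instead of discarding it.
from pysqlcipher import dbapi2


def open_and_verify(path, key):
    """Return an open connection if `key` decrypts the database at `path`."""
    conn = dbapi2.connect(path)
    conn.execute("PRAGMA key = '%s'" % key)
    try:
        # Any read fails if the key is wrong or the file is not a database.
        conn.execute("SELECT count(*) FROM sqlite_master")
    except dbapi2.DatabaseError:
        conn.close()
        raise
    # Callers should keep using this handle; ignoring it and reopening the
    # database leaves this connection's open cursor dangling.
    return conn
```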
Request size on a stream can't be measured upfront, and a limit doesn't make much sense; the real limit is the user's quota, still to be implemented.

Moved magic numbers out into a constant and simplified the logic during doc upload.

Batch insert is slower than a plain insert for a single doc, so if a document exceeds the buffer, commit the pending batch (if any) and send the oversized document through a traditional insert. A refactor is coming.
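In other words, the upload path buffers docs for batched writes but falls back to a direct write for oversized ones. A rough sketch of that decision; BUFFER_LIMIT, save_doc and save_docs are illustrative names, not the actual couch backend API:

```python
# Illustrative sketch of the batching decision described above.
BUFFER_LIMIT = 10 * 1024 * 1024  # hypothetical buffer size, in bytes


class BatchWriter(object):
    def __init__(self, db):
        self._db = db
        self._batch = []
        self._batch_size = 0

    def add(self, doc, size):
        if size > BUFFER_LIMIT:
            # Huge doc: flush whatever is pending, then write it on its own,
            # since a batch is slower than a plain insert for a single doc.
            self.flush()
            self._db.save_doc(doc)
            return
        self._batch.append(doc)
        self._batch_size += size
        if self._batch_size >= BUFFER_LIMIT:
            self.flush()

    def flush(self):
        if self._batch:
            self._db.save_docs(self._batch)
            self._batch = []
            self._batch_size = 0
```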
This allows different paths for raw data and metadata, avoiding unnecessary JSON parsing.

We enabled chunking, which means that a user can upload their entire database in a single request. This commit makes the server enable this and throttle downloads, since Twisted can't control the payload producer code, which is synchronous and blocking.

1) enable HTTP 1.1 chunked upload on the server
2) make the client sync.py generate a list of function calls instead of a list of full docs
3) disable the encryption pool
4) make the doc encryption a list of function calls
5) create a twisted protocol for sending
6) make a producer that calls the doc generation as necessary (see the sketch below)
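Item 6 is Twisted's producer/consumer pattern: a pull producer that generates (or encrypts) the next doc only when the transport asks for more data. A minimal sketch, where the list of zero-argument doc generators stands in for the function calls of items 2 and 4 (names are illustrative):

```python
# Sketch of a pull producer that creates each doc lazily, so the whole
# database never has to sit in memory at once.
from zope.interface import implementer
from twisted.internet.interfaces import IPullProducer


@implementer(IPullProducer)
class DocStreamProducer(object):
    def __init__(self, consumer, doc_generators):
        # doc_generators: callables that each return one serialized doc.
        self._consumer = consumer
        self._calls = iter(doc_generators)

    def start(self):
        self._consumer.registerProducer(self, streaming=False)

    def resumeProducing(self):
        # Called by the consumer whenever it is ready for more data.
        try:
            generate = next(self._calls)
        except StopIteration:
            self._consumer.unregisterProducer()
            return
        self._consumer.write(generate())

    def stopProducing(self):
        pass
```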
This commit finishes the reversion to u1db's original streaming protocol for downloads.

If read_content is False, a file object is put in place of the doc's JSON string; otherwise the content is fetched and filled in as usual. This is useful for improving server throughput on the sync download stream, by receiving a bulk-get without attachments and consuming the file objects as they come.
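A sketch of what such a read_content flag looks like from the caller's side; get_docs, fetch_json and open_attachment are assumed names for illustration, not the actual backend API:

```python
# Illustrative sketch of a bulk doc getter with a read_content flag.
class Document(object):
    def __init__(self, doc_id, content):
        self.doc_id = doc_id
        # Either the JSON string (read_content=True) or an open file-like
        # object to be streamed later (read_content=False).
        self.content = content


def get_docs(db, doc_ids, read_content=True):
    docs = []
    for doc_id in doc_ids:
        if read_content:
            docs.append(Document(doc_id, db.fetch_json(doc_id)))
        else:
            # Defer the body: hand back a file object so the download stream
            # can consume it lazily instead of parsing JSON upfront.
            docs.append(Document(doc_id, db.open_attachment(doc_id)))
    return docs
```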