Conflicts:
	apps/couch/src/couch_rep.erl

This reverts commit faf9071260147275bbac1633b599e85b4a302e8b.

Allow building on Gentoo

Modify SConscript so that it recognizes the standard name for the
mozjs/spidermonkey lib on Gentoo.

This fixes stale=update_after.

A replication with both an HTTP source and target on the same host and
port could end up in a deadlock due to ibrowse request pipelining when
attachments are present on the source. The ibrowse HTTP worker would
end up forming a multipart/mime body using anonymous reader functions
for attachment stubs. When the attachment stub functions are executed
it is possible that they end up assigned to the same ibrowse worker.
This is a bit of a long path, but the end result is equivalent to
calling gen_server:call(self(), Args, infinity) from a gen_server
callback.

A quick workaround for users is to set up a DNS alias (possibly in
/etc/hosts) or to use a combination of hostname and IP address so that
ibrowse assigns the requests to different pools.

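A minimal sketch of that blocking self-call, with a hypothetical module
standing in for an ibrowse worker (this is not the actual
couch_rep/ibrowse code):

    %% Hypothetical gen_server illustrating the deadlock shape: while
    %% serving one call the worker synchronously calls itself, then
    %% blocks forever waiting for a reply that only it could produce.
    -module(self_call_demo).
    -behaviour(gen_server).
    -export([start_link/0, init/1, handle_call/3, handle_cast/2]).

    start_link() ->
        gen_server:start_link(?MODULE, [], []).

    init([]) ->
        {ok, undefined}.

    handle_call({fetch, Url}, _From, State) ->
        %% Deadlock: this process cannot serve {fetch_stub, Url} while
        %% it is still inside handle_call/3 for {fetch, Url}.
        Body = gen_server:call(self(), {fetch_stub, Url}, infinity),
        {reply, Body, State};
    handle_call({fetch_stub, _Url}, _From, State) ->
        {reply, <<>>, State}.

    handle_cast(_Msg, State) ->
        {noreply, State}.

In the replication case the cycle is indirect, via the anonymous
attachment stub readers, but it collapses to exactly this shape.
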
As Filipe correctly points out, we want the parent to die if the child dies.
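In OTP terms that is simply a link rather than a bare spawn; a minimal
sketch of the semantics (illustrative only, not the patched code):

    %% With spawn_link/1 and no exit trapping, an abnormal child exit
    %% propagates to the parent and kills it too; plain spawn/1 would
    %% leave the parent running.
    -module(link_demo).
    -export([start/0]).

    start() ->
        spawn_link(fun() -> exit(boom) end),
        %% the 'boom' exit signal kills this process while it waits
        receive after infinity -> ok end.
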
Fix attachment replication

BugzID: 13133

BugzID: 13133

The fabric bump fixes some corner cases for read repair. The chttpd
bump improves error handling for delayed responses.

This makes sure that we only optionally require the same version of cURL
that CouchDB does.

I was forgetting to pass the args through to evalcx so that it could use
the stack size specified on the command line.

BigCouch 0.3 cannot parse requests of the form /db/_changes?since="123-foo",
so the recent ?JSON_ENCODE addition to Since in two places causes 0.3 <-> 0.4
replication to fail with json_encode/badterm errors.

This patch applies JSON encoding only when the Since value is not already a
binary (i.e., when it's an [integer(), binary()] list) and interop is
restored.

BugzID: 12833

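A sketch of that guard as a hypothetical helper (?JSON_ENCODE is the
macro from couch_db.hrl; the real change is inline in the replicator's
changes-feed request code):

    %% Only JSON-encode Since when it is not already a binary, so a
    %% 0.3-style <<"123-foo">> sequence passes through untouched.
    encode_since(Since) when is_binary(Since) ->
        Since;
    encode_since(Since) ->
        %% the [integer(), binary()] form used by 0.4
        ?JSON_ENCODE(Since).
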
Conflicts:
	acinclude.m4.in
	configure.ac
	couchjs/c_src/http.c
	src/erlang-oauth/Makefile.am
	src/erlang-oauth/oauth.app.in
	src/erlang-oauth/oauth_hmac_sha1.erl
	src/erlang-oauth/oauth_http.erl
	src/erlang-oauth/oauth_plaintext.erl
	src/etap/etap_web.erl

Our headers start with a <<1>> and then four bytes indicating the length
of the header and its checksum. When the header is larger than 4090
bytes it will be split across multiple blocks in the file and will need
to be reassembled on read. The reassembly consists of stripping out
<<0>> from the beginning of each subsequent block in the
remove_block_prefixes/2 function. The bug here is that we tell
remove_block_prefixes that we're starting 1 byte into the current block
instead of 5, so it ends up removing one good byte from the header and
injecting one or more random <<0>>s.

Headers larger than 4k are very rare and generally require a view group
with a huge number of indexes or indexes with fairly large reductions,
which explains why this bug has gone undetected until now.

Closes COUCHDB-1319.

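A sketch of the offset arithmetic described above (SIZE_BLOCK is 4096;
function names follow the commit text, and load_header_data/1 is a
simplified stand-in for the real read path):

    -define(SIZE_BLOCK, 4096).

    %% By the time the header data is reassembled, 5 bytes of the first
    %% block are already consumed: the <<1>> marker plus the 4-byte
    %% length. Passing 1 (the marker only) made the function strip a
    %% good header byte at every block boundary.
    load_header_data(RawBin) ->
        remove_block_prefixes(5, RawBin).  %% was: remove_block_prefixes(1, RawBin)

    %% Posn is the offset into the current 4096-byte block; each block
    %% after the first begins with a <<0>> prefix that must be dropped.
    remove_block_prefixes(_Posn, <<>>) ->
        [];
    remove_block_prefixes(0, <<_BlockPrefix, Rest/binary>>) ->
        remove_block_prefixes(1, Rest);
    remove_block_prefixes(Posn, Bin) ->
        BytesAvailable = ?SIZE_BLOCK - Posn,
        case byte_size(Bin) of
            Size when Size > BytesAvailable ->
                <<Data:BytesAvailable/binary, Rest/binary>> = Bin,
                [Data | remove_block_prefixes(0, Rest)];
            _ ->
                [Bin]
        end.

With 5, the first block contributes 4091 bytes of header data and every
later block is correctly stripped of its single <<0>> prefix.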