From 84a80882489596f5c3ac3275f12f46aa4fc002ed Mon Sep 17 00:00:00 2001
From: drebs
Date: Wed, 12 Jul 2017 14:59:59 -0300
Subject: [doc] clarify what we mean with "big data set"

---
 docs/benchmarks.rst | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/benchmarks.rst b/docs/benchmarks.rst
index 2c5e9eeb..8c9d7677 100644
--- a/docs/benchmarks.rst
+++ b/docs/benchmarks.rst
@@ -71,10 +71,10 @@ sizes are in KB):
 Test scenarios
 --------------
 
-Ideally, we would want to run tests for a big data set, but that may be
-infeasible given time and resource limitations. Because of that, we choose a
-smaller data set and suppose that the behaviour is somewhat linear to get an
-idea for larger sets.
+Ideally, we would want to run tests for a big data set (i.e. a high number of
+documents and a big payload size), but that may be infeasible given time and
+resource limitations. Because of that, we choose a smaller data set and suppose
+that the behaviour is somewhat linear to get an idea for larger sets.
 
 Supposing a data set size of 10MB, some possibilities for number of documents
 and document sizes for testing download and upload are:
-- 
cgit v1.2.3
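The doc text in this patch reasons about splitting a fixed 10MB data set into (number of documents, per-document size) combinations. A minimal sketch of that enumeration, assuming illustrative candidate document sizes (the specific size list is a hypothetical choice, not taken from docs/benchmarks.rst):

```python
# Sketch: enumerate (num_docs, doc_size_kb) pairs whose product equals a
# fixed total data set size, as the benchmarks doc discusses.
# The candidate per-document sizes below are assumed for illustration.

TOTAL_KB = 10 * 1000  # 10MB total, with sizes expressed in KB as in the docs


def scenarios(total_kb, doc_sizes_kb=(1, 10, 100, 1000)):
    """Yield (num_docs, doc_size_kb) pairs that sum to total_kb exactly."""
    for size in doc_sizes_kb:
        if total_kb % size == 0:
            yield total_kb // size, size


for n_docs, size_kb in scenarios(TOTAL_KB):
    print(f"{n_docs} docs x {size_kb}KB = {n_docs * size_kb}KB total")
```

Each yielded pair keeps the total payload constant, so download/upload runs differ only in how the 10MB is partitioned across documents.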