authorKali Kaneko (leap communications) <kali@leap.se>2016-03-31 21:38:14 -0400
committerKali Kaneko (leap communications) <kali@leap.se>2016-03-31 21:48:12 -0400
commit37218f7881e5f94a0cb14ccad9ad101efe1203fd (patch)
tree9da277fbfb211f7770cc90045ef0ca654cde8487
parent7d6356c9802d01819549d40c22ccc426d27030f1 (diff)
some preliminary analysis
-rw-r--r--.gitignore1
-rw-r--r--Makefile8
-rw-r--r--README.rst51
3 files changed, 54 insertions, 6 deletions
diff --git a/.gitignore b/.gitignore
index bee8a64..8d35cb3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,2 @@
__pycache__
+*.pyc
diff --git a/Makefile b/Makefile
index b5ca7dc..9bb781a 100644
--- a/Makefile
+++ b/Makefile
@@ -1,5 +1,11 @@
perf:
- httperf --server localhost --port 8080 --num-calls 200 --num-conns 10 --uri /
+ httperf --server localhost --port 8080 --num-calls 200 --num-conns 20 --uri /
+
+perf-little:
+ httperf --server localhost --port 8080 --num-calls 5 --num-conns 20 --uri /
+
+perf-easy:
+ httperf --server localhost --port 8080 --num-calls 5 --num-conns 20 --uri /hi
inline-server:
python server.py
diff --git a/README.rst b/README.rst
index 33b2026..77f0ec3 100644
--- a/README.rst
+++ b/README.rst
@@ -5,10 +5,10 @@ small prototypes to compare async behavior under load.
intended to evolve to small prototypes that:
- - run soledad
- - try different things for the decryption pool
- - serve also a simple server to test how reactor is able to respond under cpu
- load.
+* run soledad
+* try different options for the decryption pool (inline, threads, processes).
+* also serve a simple server to test how the reactor responds under cpu
+  load.
You can use the makefile to launch different servers and compare their
@@ -16,5 +16,46 @@ performance::
make inline-server # executes cpu load inline
make thread-server # cpu load in a twisted threadpool
make ampoule-server # cpu load in an ampoule process pool
- make perf # runs httperf against the server
+ make perf # runs httperf against the server, with a moderate cpu load.
+	make perf-little	# runs httperf against the server (fewer requests).
+	make perf-easy  	# runs httperf against a no-cpu-load endpoint.
+
+Analysis
+---------------
+Let's run some experiments in the three situations.
+
+A) Compare a **fixed, moderate cpu load** (i.e., parameters to the fib function in the range fib(25) to fib(30)) in terms of req/s.
+
+* Very similar rates. For fib(30), this gives something like 3 req/s on my machine.
+* Some overhead is noticeable.
+* RAM usage??
+* graph it!!
+
+B) Establish a **baseline for the no-cpu** perf case (perf-easy). On my machine this varies, but
+   it's in the range 600-800 req/s. Note: since without cpu load this target runs very
+   fast, this can be scripted to log one measurement every 500ms or so.
+
+C) **Simultaneous easy+load**: Observe how the no-cpu perf case degrades when run
+   against each one of the three servers, *while the servers are handling a moderate cpu load*.
+   I still have to graph this properly and measure std dev etc., but looking quickly
+   at the data I have three conclusions (yes, I'm biased!).
+
+   * inline cpu load is a no-go: it blocks the reactor.
+   * threaded is better,
+   * but ampoule multiprocessing is superior when we look at how responsive the reactor remains.
+
+   Still to do for this case:
+
+   * RAM usage?? graph before, during, after
+   * graph it!!
+   * experiment with different parameters for the process pool.
+
+
+To-Do
+--------------
+* [ ] make the cpu load variable (parameter to fib function: pass it as request parameter)
+* [ ] graph req/sec in response to variable cpu loads (parameter to fib).
+* [ ] graph response of perf-easy DURING a run of perf/perf-little.
+* [ ] compare the rate of responsiveness against variable cpu loads.
+* [ ] scale these minimalistic examples to realistic payload decryption using gnupg.