author    elijah <elijah@riseup.net>  2015-02-18 23:44:14 -0800
committer elijah <elijah@riseup.net>  2015-02-18 23:44:14 -0800
commit    e53e113dcde3e3686095c3661307efccc5c7e64e (patch)
tree      2d5219d73587750ec478811c65499325a95a04db /pages/docs
initial conversation from leap_doc and leap_website
Diffstat (limited to 'pages/docs')
-rw-r--r--  pages/docs/client/bundle-testing.md | 101
-rw-r--r--  pages/docs/client/dev-environment.md | 200
-rw-r--r--  pages/docs/client/en.md | 57
-rw-r--r--  pages/docs/client/known-issues.md | 64
-rw-r--r--  pages/docs/client/testers-howto.md | 131
-rw-r--r--  pages/docs/client/user-install.md | 30
-rw-r--r--  pages/docs/client/user-running.md | 42
-rw-r--r--  pages/docs/design/bonafide.text | 290
-rw-r--r--  pages/docs/design/cuttlefish.md | 7
-rw-r--r--  pages/docs/design/en.haml | 5
-rw-r--r--  pages/docs/design/nicknym-draft.md | 578
-rw-r--r--  pages/docs/design/nicknym.md | 498
-rw-r--r--  pages/docs/design/overview.md | 403
-rw-r--r--  pages/docs/design/soledad.md | 423
-rw-r--r--  pages/docs/en.haml | 4
-rw-r--r--  pages/docs/get-involved/bad-project-ideas.md | 69
-rw-r--r--  pages/docs/get-involved/coding.haml | 73
-rw-r--r--  pages/docs/get-involved/communication.md | 25
-rw-r--r--  pages/docs/get-involved/en.haml | 4
-rw-r--r--  pages/docs/get-involved/project-ideas.md | 412
-rw-r--r--  pages/docs/get-involved/source.haml | 85
-rw-r--r--  pages/docs/platform/details/couchdb.md | 74
-rw-r--r--  pages/docs/platform/details/development.md | 355
-rw-r--r--  pages/docs/platform/details/en.haml | 4
-rw-r--r--  pages/docs/platform/details/faq.md | 65
-rw-r--r--  pages/docs/platform/details/under-the-hood.md | 26
-rw-r--r--  pages/docs/platform/details/webapp.md | 282
-rw-r--r--  pages/docs/platform/en.md | 77
-rw-r--r--  pages/docs/platform/guide/commands.md | 419
-rw-r--r--  pages/docs/platform/guide/config.md | 263
-rw-r--r--  pages/docs/platform/guide/en.haml | 4
-rw-r--r--  pages/docs/platform/guide/environments.md | 69
-rw-r--r--  pages/docs/platform/guide/keys-and-certificates.md | 194
-rw-r--r--  pages/docs/platform/guide/miscellaneous.md | 14
-rw-r--r--  pages/docs/platform/guide/nodes.md | 187
-rw-r--r--  pages/docs/platform/service-diagram.odg | bin 0 -> 12131 bytes
-rw-r--r--  pages/docs/platform/service-diagram.png | bin 0 -> 25988 bytes
-rw-r--r--  pages/docs/platform/troubleshooting/en.haml | 3
-rw-r--r--  pages/docs/platform/troubleshooting/known-issues.md | 115
-rw-r--r--  pages/docs/platform/troubleshooting/tests.md | 33
-rw-r--r--  pages/docs/platform/troubleshooting/where-to-look.md | 249
-rw-r--r--  pages/docs/platform/tutorials/en.haml | 4
-rw-r--r--  pages/docs/platform/tutorials/quick-start.md | 82
-rw-r--r--  pages/docs/platform/tutorials/single-node-email.md | 338
-rw-r--r--  pages/docs/platform/tutorials/single-node-vpn.md | 389
-rw-r--r--  pages/docs/tech/en.haml | 5
-rw-r--r--  pages/docs/tech/hard-problems/en.md | 169
-rw-r--r--  pages/docs/tech/hard-problems/pt.md | 133
-rw-r--r--  pages/docs/tech/infosec/_table-style.haml | 59
-rw-r--r--  pages/docs/tech/infosec/_table.haml | 233
-rw-r--r--  pages/docs/tech/infosec/en.haml | 105
-rw-r--r--  pages/docs/tech/limitations.md | 123
-rw-r--r--  pages/docs/tech/routing.md | 65
-rw-r--r--  pages/docs/tech/secure-email/en.md | 578
-rw-r--r--  pages/docs/test/release_tests | 15
55 files changed, 8232 insertions, 0 deletions
diff --git a/pages/docs/client/bundle-testing.md b/pages/docs/client/bundle-testing.md
new file mode 100644
index 0000000..24890b0
--- /dev/null
+++ b/pages/docs/client/bundle-testing.md
@@ -0,0 +1,101 @@
+@nav_title = "Bundle QA"
+@title = "Guidelines for bundle QA"
+
+Recommended setup
+-----------------
+
+VirtualBox (or similar) with virtual machines installed for supported OSs
+
+For each system that you are going to test, you should do:
+
+- Install the VM
+- Restart the VM and check that the process is finished.
+- Turn it off and make a snapshot named 'fresh install' or similar.
+
+The OS should be installed with the default settings and no extra packages. However, you can choose your language, username, timezone, etc.
+
+
+Test process
+------------
+
+- roll back the virtual machine to its *fresh install* state, to make sure that you're testing against a reproducible environment.
+- download the bundle, verify the signature (if applicable), extract and run the app
+- test the application (see next section)
+
+
+Tests to do
+-----------
+
+- **check if the version number is the same as the current bundle version**
+ - 'Help->About Bitmask'
+ - `./bitmask --version`
+- **correct installation of files to 'better protect privacy'**
+ - `/etc/leap/resolv-update`
+ - `/usr/share/polkit-1/actions/net.openvpn.gui.leap.policy`
+
+ You should check that they get copied when the user says 'yes' and they don't get copied if the user says 'no'.
+- **installation of tun/tap in Windows and MAC**
+ TODO: explain more here
+
+- **account creation**
+
+ Recommended username template: test_bundleversion_os_arch, that way you avoid conflicts between test iterations.
+ e.g.: 'test_036_debian7_64', 'test_036_win7_32', etc
+
+ If you need to create extra users in order to test a bug or whatever, you can use 'test_036_ubuntu1204_32a', 'test_036_ubuntu1204_32b', etc
+
+ If many users are testing a version, you may want to use your username instead of 'test', e.g.: 'johndoe_036_ubuntu1204_32'.
+- **eip connection**
+
+ You can check whether the VPN is working by visiting http://wtfismyip.com
+
+ or using the console:
+ `shell> wget -qO- wtfismyip.com/json`
+- **Soledad key generation**
+- **Thunderbird configuration manually and using add-on**
+- **Send and receive mail**
+
+ You need to test communication between inside and outside users, e.g.: someuser@bitmask.net and otheruser@gmail.com
+
+ A good thing to do is to subscribe to a mailing list that has a lot of activity.
+
+- **Check if the account data is correctly synced.**
+
+ After creating the account, with everything working and the app closed:
+ - remove the configuration files created by the app (`~/.config/leap` in linux)
+ - log in with your recently created credentials and check that everything is working and your mails are there too.
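The external-IP check above can be scripted. Here is a minimal Python sketch (the JSON key name used by wtfismyip.com is an assumption; verify it against the live response before relying on it):

```python
import json
import urllib.request

WTF_URL = "https://wtfismyip.com/json"

def ip_from_json(payload):
    """Extract the address from a wtfismyip.com/json response body."""
    # Key name as returned by the service (assumption -- check the live output).
    return json.loads(payload)["YourFuckingIPAddress"]

def external_ip(url=WTF_URL):
    """Fetch your current external IP (network call)."""
    with urllib.request.urlopen(url) as resp:
        return ip_from_json(resp.read().decode("utf-8"))
```

Run it once before and once after connecting EIP; the two addresses should differ, and the second one should belong to the VPN gateway.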
+
+
+Reporting problems
+------------------
+
+You should create an issue with the following information:
+
+- OS, version, architecture, desktop environment (if relevant).
+- bitmask.log file located in the root folder of the uncompressed bundle
+- steps to reproduce
+
+If you find a problem, try to reproduce and take note of the steps needed to get the same error.
+
+Also, in some cases a failure appears but is gone if you run it again (e.g.: some initialization issue); please report that too.
+
+For more details look at [Reporting bugs](client/testers-howto) section.
+
+
+Utils
+-----
+
+Download, extract and run helper script for linux:
+
+ shell> ./download-extract-run-bitmask.sh
+
+Script contents:
+
+ #!/bin/bash
+ HOST="https://dl.bitmask.net/client/linux/"
+ VERSION="0.3.7"
+ # FOLDER="Bitmask-linux32-${VERSION}"
+ FOLDER="Bitmask-linux64-${VERSION}"
+ FILE="${FOLDER}.tar.bz2"
+
+ wget ${HOST}${FILE} && tar xjf ${FILE} && cd ${FOLDER} && ./bitmask
diff --git a/pages/docs/client/dev-environment.md b/pages/docs/client/dev-environment.md
new file mode 100644
index 0000000..b41e7af
--- /dev/null
+++ b/pages/docs/client/dev-environment.md
@@ -0,0 +1,200 @@
+@nav_title = 'Hacking'
+@title = 'Setting up a development environment'
+
+Quick start
+===========
+
+This document will guide you through getting an environment ready to
+contribute code to Bitmask.
+
+Using an automagic script
+=========================
+
+You can use a helper script that will get you started with bitmask and all the
+related repos.
+
+1. download automagic script
+2. run it :)
+
+Commands so you can copy/paste:
+
+ $ mkdir bitmask && cd bitmask
+ $ wget https://raw.githubusercontent.com/leapcode/bitmask_client/develop/pkg/scripts/bootstrap_develop.sh
+ $ chmod +x bootstrap_develop.sh
+ $ ./bootstrap_develop.sh help # check out the options :)
+ $ ./bootstrap_develop.sh deps # requires sudo
+ $ ./bootstrap_develop.sh init ro
+ $ ./bootstrap_develop.sh helpers # requires sudo
+ $ ./bootstrap_develop.sh run
+
+This script allows you to get started, update and run the bitmask app with all
+its repositories.
+
+Note: the `deps` option is meant to be used in a Debian based Linux distro.
+
+
+Doing the work manually
+=======================
+
+Clone the repo
+--------------
+
+> **note**
+>
+> Stable releases are in *master* branch. Development code lives in
+> *develop* branch.
+
+ git clone https://leap.se/git/bitmask_client
+ git checkout develop
+
+Install Dependencies
+--------------------
+
+Bitmask depends on these libraries:
+
+- python 2.6 or 2.7
+- qt4 libraries
+- openssl
+- [openvpn](http://openvpn.net/index.php/open-source/345-openvpn-project.html)
+
+### Install dependencies in a Debian based distro
+
+In debian-based systems:
+
+ $ sudo apt-get install git python-dev python-setuptools python-virtualenv python-pip libssl-dev python-openssl libsqlite3-dev g++ openvpn pyside-tools python-pyside libffi-dev
+
+
+Working with virtualenv
+-----------------------
+
+### Intro
+
+*Virtualenv* is the *Virtual Python Environment builder*.
+
+It is a tool to create isolated Python environments.
+
+> The basic problem being addressed is one of dependencies and versions,
+and indirectly permissions. Imagine you have an application that needs
+version 1 of LibFoo, but another application requires version 2. How can
+you use both these applications? If you install everything into
+`/usr/lib/python2.7/site-packages` (or whatever your platform's standard
+location is), it's easy to end up in a situation where you
+unintentionally upgrade an application that shouldn't be upgraded.
+
+Read more about it in the [project documentation
+page](http://www.virtualenv.org/en/latest/virtualenv.html).
+
+### Create and activate your dev environment
+
+You first create a virtualenv in any directory that you like:
+
+ $ mkdir ~/Virtualenvs
+ $ virtualenv ~/Virtualenvs/bitmask
+ $ source ~/Virtualenvs/bitmask/bin/activate
+ (bitmask)$
+
+Note the change in the prompt.
+
+### Avoid compiling PySide inside a virtualenv
+
+If you attempt to install PySide inside a virtualenv with pip, along with
+the rest of the dependencies, it will take ages to compile.
+
+As a workaround, you can run the following script after creating your
+virtualenv. It will symlink to your global PySide installation (*this is
+the recommended way if you are running a debian-based system*):
+
+ $ pkg/postmkvenv.sh
+
+A second option if that does not work for you would be to install PySide
+globally and pass the `--system-site-packages` option when you are creating
+your virtualenv:
+
+ $ apt-get install python-pyside
+ $ virtualenv --system-site-packages .
+
+After that, you must export `LEAP_VENV_SKIP_PYSIDE` to skip the
+installation:
+
+ $ export LEAP_VENV_SKIP_PYSIDE=1
+
+And now you are ready to proceed with the next section.
+
+### Install python dependencies
+
+You can install python dependencies with `pip`. If you do it inside your
+working environment, they will be installed avoiding the need for
+administrative permissions:
+
+ $ pip install -r pkg/requirements.pip
+
+This step is not strictly needed, since `setup.py develop` in the next
+section will also fetch the needed dependencies. But you need to know about it:
+whenever you or anyone on the development team adds a new dependency,
+you will have to repeat this command so that the new dependencies are installed
+inside your virtualenv.
+
+Install Bitmask
+---------------
+
+Normally we would install the `leap.bitmask` package as any other package
+inside the virtualenv.
+But, instead, we will be using setuptools **development mode**. The difference
+is that, instead of installing the package in a permanent location in your
+regular installed packages path, it will create a link from the local
+site-packages to your working directory. In this way, your changes will always
+be in the installation path without the need to reinstall the package you are
+working on:
+
+ (bitmask)$ python2 setup.py develop --always-unzip
+
+After this step, your Bitmask launcher will be located at
+`~/Virtualenvs/bitmask/bin/bitmask`, and it will be in the path as long as you
+have sourced your virtualenv.
+
+Note: the `--always-unzip` option prevents some dependencies from being
+installed as a zip/egg, which causes issues with libraries like 'scrypt'
+that need to access files directly from the filesystem.
+
+Compile Qt resources
+--------------------
+
+We also need to compile the resource files:
+
+ (bitmask)$ make
+
+Note: you need to repeat this step each time you change a `.ui` file.
+
+Running openvpn without root privileges
+---------------------------------------
+
+On Linux, we use `policykit` to run openvpn without root privileges; a
+policy file needs to be installed for that to be possible.
+The setup script tries to install the policy file when installing bitmask
+system-wide, so if you have installed bitmask in your global site-packages at
+least once it should have copied this file for you.
+
+If you *only* are running bitmask from inside a virtualenv, you will need to
+copy this file by hand:
+
+ $ sudo cp pkg/linux/polkit/se.leap.bitmask.policy /usr/share/polkit-1/actions/
+
+Installing the bitmask EIP helper
+---------------------------------
+
+On Linux, we have an `openvpn` and `firewall` helper that is needed to run EIP.
+You need to manually copy it from `bitmask_client/pkg/linux/bitmask-root`.
+Use the following command to do so:
+
+ $ sudo cp bitmask_client/pkg/linux/bitmask-root /usr/sbin/
+
+
+Running!
+--------
+
+If everything went well, you should be able to run your client by invoking
+`bitmask`. If it does not get launched, or you just want to see more verbose
+output, try the debug mode:
+
+ (bitmask)$ bitmask --debug
diff --git a/pages/docs/client/en.md b/pages/docs/client/en.md
new file mode 100644
index 0000000..15c55bf
--- /dev/null
+++ b/pages/docs/client/en.md
@@ -0,0 +1,57 @@
+@nav_title = "Bitmask"
+@title = 'Bitmask'
+@summary = "The Internet Encryption Toolkit: Encrypted Internet Proxy and Encrypted Mail"
+
+**Bitmask** is the multiplatform desktop client for the services offered by [the LEAP Platform](platform).
+
+It is written in python using [PySide](http://qt-project.org/wiki/PySide) and licensed under the GPL3. Currently we distribute pre-compiled bundles for Linux and OSX, with Windows bundles following soon.
+
+We include below some sections of the user guide and the development documentation so you can get started.
+
+User Guide
+----------
+* [Installing Bitmask](client/user-install)
+* [Running Bitmask](client/user-running)
+
+Tester Guide
+------------
+
+This part of the documentation details how to fetch the latest development version and how to report bugs.
+
+* [Howto for testers](client/testers-howto)
+
+Hackers Guide
+-------------
+
+If you want to contribute to the project, we wrote this for you.
+
+* [Setting up a development environment](client/dev-environment)
+
+
+<!--
+* [Running latest code](client/bleeding-edge)
+* [Getting started with development](client/dev-guide)
+* [Configuration](client/configuration)
+* [Client API](client/client-api) -->
+
+
+Supported OSs
+-------------
+
+We currently support:
+
+### Through the bundle
+
+* Debian 7
+* Ubuntu 12.04 (LTS)
+* Ubuntu 14.04 (LTS)
+* Ubuntu 14.10 (latest)
+* Mac OSX >= 10.8 (coming very soon)
+* Note: It *should* work in other Debian based distros
+
+### Through the debian package
+
+* Ubuntu 14.04 (Trusty Tahr)
+* Ubuntu 14.10 (Utopic Unicorn) coming very soon
+* Debian 7.0 (Wheezy)
+* Debian 8.0 (Jessie)
diff --git a/pages/docs/client/known-issues.md b/pages/docs/client/known-issues.md
new file mode 100644
index 0000000..e1507d7
--- /dev/null
+++ b/pages/docs/client/known-issues.md
@@ -0,0 +1,64 @@
+@title = 'Bitmask known issues'
+@nav_title = 'Known issues'
+@summary = 'Known issues in Bitmask.'
+@toc = true
+
+Here you can find documentation about known issues and potential
+workarounds in the current Bitmask release.
+
+No polkit agent available
+-------------------------
+
+To run Bitmask and the services correctly you need to have a running polkit
+agent. If you don't have one you will get an error and won't be able to start
+Bitmask.
+
+The currently recognized polkit agents are:
+
+| process name | Who uses it? |
+|---------------------------------------|-----------------------------------|
+| `polkit-gnome-authentication-agent-1` | Gnome |
+| `polkit-kde-authentication-agent-1` | KDE |
+| `polkit-mate-authentication-agent-1` | Mate |
+| `lxpolkit` | LXDE |
+| `gnome-shell` | Gnome shell |
+| `fingerprint-polkit-agent` | the `fingerprint-gui` package |
+
+
+If you have a different polkit agent running that is not in that list,
+please report a bug so we can include it in our checks.
+
+You can get the list of running processes that match polkit with the following
+command: `ps aux | grep -i polkit`.
+Here is an example on my KDE desktop:
+
+ ➜ ps aux | grep polkit
+ root 1392 0.0 0.0 298972 6120 ? Sl Sep22 0:02 /usr/lib/policykit-1/polkitd --no-debug
+ user 1702 0.0 0.0 12972 920 pts/16 S+ 16:42 0:00 grep polkit
+ user 3259 0.0 0.4 559764 38464 ? Sl Sep22 0:05 /usr/lib/kde4/libexec/polkit-kde-authentication-agent-1
+
+
+Other Issues
+------------
+
+- You may get the error: "Unable to connect: Problem with provider" in
+ situations when the problem is the network instead of the provider.
+ See: https://leap.se/code/issues/4023
+
+Mail issues
+-----------
+
+Note that email is not stable yet so this list may not be accurate.
+
+- If you have received a large amount of mail (tested with more than 400), you
+  may experience that Thunderbird won't respond.
+
+That problem does not happen if you keep the client open so Thunderbird
+loads mails as they reach your inbox.
+
+
+- Opening the same account from more than one box at the same time will
+ possibly break your account.
+
+- Managing a huge amount of mail (e.g.: moving mails to a folder) will block
+ the UI (see https://leap.se/code/issues/4837)
diff --git a/pages/docs/client/testers-howto.md b/pages/docs/client/testers-howto.md
new file mode 100644
index 0000000..9e6ff7d
--- /dev/null
+++ b/pages/docs/client/testers-howto.md
@@ -0,0 +1,131 @@
+@nav_title = "Testing"
+@title = "Howto for Testers"
+
+This document covers:
+
+1. Where and how to report bugs for Bitmask, and
+2. How to quickly fetch the latest development code.
+
+Let's go!
+
+Reporting bugs
+--------------
+
+Report all the bugs you can find to us! If something is not quite
+working yet, we really want to know. Reporting a bug to us is the best
+way to get it fixed quickly, and get our unconditional gratitude.
+
+It is quick, easy, and probably the best way to contribute to Bitmask
+development, other than submitting patches.
+
+> **Reporting better bugs**
+>
+> New to bug reporting? Here you have a [great document about this noble
+> art](http://www.chiark.greenend.org.uk/~sgtatham/bugs.html).
+
+### Where to report bugs
+
+We use the [Bitmask Bug
+Tracker](https://leap.se/code/projects/eip-client), although you can
+also use [Github
+issues](https://github.com/leapcode/bitmask_client/issues). But we
+reaaaally prefer if you sign up in the former to send your bugs our way.
+
+### What to include in your bug report
+
+- The symptoms of the bug itself: what went wrong? What items appear
+ broken, or do not work as expected? Maybe a UI element that appears
+ to freeze?
+- The Bitmask version you are running. You can get it by running
+ `bitmask --version`, or from the Help -> About Bitmask menu.
+- The installation method you used: bundle? from source code? debian
+ package?
+- Your platform version and other details: Ubuntu 12.04? Debian
+ unstable? Windows 8? OSX 10.8.4? If relevant, your desktop system
+ also (gnome, kde...)
+- When does the bug appear? What actions trigger it? Does it always
+ happen, or is it sporadic?
+- The exact error message, if any.
+- Attachments of the log files, if possible (see section below).
+
+Also, try not to mix several issues in your bug report. If you are
+finding several problems, it's better to issue a separate bug report for
+each one of them.
+
+### Attaching log files
+
+If you can spend a little time getting them, please add some logs to the
+bug report. They are **really** useful when it comes to debugging a problem.
+To do it:
+
+Launch Bitmask in debug mode. Logs are way more verbose that way:
+
+ bitmask --debug
+
+Get your hands on the logs. You can achieve that either by clicking on
+the "Show log" button, and saving to file, or directly by specifying the
+path to the logfile in the command line invocation:
+
+ bitmask --debug --logfile /tmp/bitmask.log
+
+Attach the logfile to your bug report.
+
+### Need human interaction?
+
+You can also find us in the `#leap` channel on the [freenode
+network](https://freenode.net). If you do not have an IRC client at hand,
+you can [enter the channel via
+web](http://webchat.freenode.net/?nick=leaper....&channels=%23leap&uio=d4).
+
+Fetching latest development code
+--------------------------------
+
+Normally, testing the latest client bundles should
+be enough. We are engaged in a three-week release cycle with minor
+releases that are as stable as possible.
+
+However, if you want to test that some issue has *really* been fixed
+before the next release is out (if you are testing a new provider, for
+instance), you are encouraged to try out the latest in the development
+branch. If you do not know how to do that, or you prefer an automated
+script, keep reading for a way to painlessly fetch the latest
+development code.
+
+We have put together a script to allow rapid testing on different
+platforms for the brave souls like you. Check it out in the
+*Using an automagic script* section of the
+[Hacking](client/dev-environment) page; there it is presented in a more
+compact way, suitable (ahem) also for non-developers.
+
+> **note**
+>
+> At some point in the near future, we will be using standalone bundles
+> with the ability to self-update.
+
+### Local config files
+
+If you want to start fresh without config files, just move them. In
+linux:
+
+ mv ~/.config/leap ~/.config/leap.old
+
+### Testing the packages
+
+When we have a release candidate for the supported platforms, we will
+also announce the URI where you can download the release candidate for
+testing on your system. Stay tuned!
+
+Testing the status of translations
+----------------------------------
+
+We need translators! You can go to
+[transifex](https://www.transifex.com/projects/p/bitmask/), get an
+account and start contributing.
+
+If you want to check the current status of bitmask localization in a
+language other than the one set in your machine, you can do it with a
+simple trick (under linux). For instance, do:
+
+    $ LANG=es_ES bitmask
+
+to run Bitmask with the Spanish locales.
diff --git a/pages/docs/client/user-install.md b/pages/docs/client/user-install.md
new file mode 100644
index 0000000..173a9c5
--- /dev/null
+++ b/pages/docs/client/user-install.md
@@ -0,0 +1,30 @@
+@nav_title = 'Installing'
+@title = 'Installing Bitmask'
+
+For download links and installation instructions go to https://dl.bitmask.net/
+
+Distribute & Pip
+----------------
+
+**Note**
+
+If you are familiar with python code and can find your way through the
+process of installing dependencies, you can install the already released
+versions of Bitmask using [pip](http://www.pip-installer.org/):
+
+ $ pip install leap.bitmask
+
+Show me the code!
+-----------------
+
+For the brave souls that can find their way through python packages, you can
+get the code from the LEAP public git repository:
+
+ $ git clone https://leap.se/git/bitmask_client
+
+Or from the github mirror:
+
+ $ git clone https://github.com/leapcode/bitmask_client.git
+
+For more information go to the [Hacking](client/dev-environment) section :)
+
diff --git a/pages/docs/client/user-running.md b/pages/docs/client/user-running.md
new file mode 100644
index 0000000..2fda469
--- /dev/null
+++ b/pages/docs/client/user-running.md
@@ -0,0 +1,42 @@
+@nav_title = 'Running'
+@title = 'Running Bitmask'
+
+This document covers how to launch Bitmask. Also known as: where the
+magic happens.
+
+Launching Bitmask
+-----------------
+
+After a successful installation, there should be a launcher called
+bitmask somewhere in your path:
+
+ % bitmask
+
+The first time you launch it, it should launch the first run wizard that
+will guide you through the mostly automatic configuration of the LEAP
+Services.
+
+> **note**
+>
+> You will need to enter a valid test provider running the LEAP
+> Platform. You can use the LEAP test service, *<https://bitmask.net>*
+
+Debug mode
+----------
+
+If you are happy having lots of output in your terminal, you will like
+to know that you can run bitmask in debug mode:
+
+ $ bitmask --debug
+
+If you ask for it, you can also have all that debug info in a beautiful
+file ready to be attached to your bug reports:
+
+ $ bitmask --debug --logfile /tmp/leap.log
+
+I want all the options!
+-----------------------
+
+To see all the available command line options:
+
+ $ bitmask --help
diff --git a/pages/docs/design/bonafide.text b/pages/docs/design/bonafide.text
new file mode 100644
index 0000000..1db2311
--- /dev/null
+++ b/pages/docs/design/bonafide.text
@@ -0,0 +1,290 @@
+@title = 'Bonafide'
+@summary = 'Secure user registration, authentication, and provider discovery.'
+@toc = true
+
+h1. Introduction
+
+Bonafide is a protocol that allows a user agent to communicate with a service provider. It includes the following capabilities:
+
+* Discover basic information about a provider.
+* Register a new account with a provider.
+* Discover information about all the services offered by a provider.
+* Authenticate with a provider.
+* Destroy a user account.
+
+Bonafide uses SRP (Secure Remote Password) for password-based authentication.
+
+h1. Configuration Files
+
+h2. JSON files
+
+h3. GET /provider.json
+
+The @provider.json@ file includes basic information about a provider. The URL for provider.json is always the same for all providers (`http://DOMAIN/provider.json`). This is the basic 'bootstrap' file that informs the user agent what URLs to use for the other actions.
+
+JSON files are always UTF-8 encoded. When loaded in the browser they may not be displayed as UTF-8, so non-ASCII characters can look off, but the files are correct.
+
+Here is an example `provider.json` (from https://demo.bitmask.net/provider.json):
+
+bc.. {
+ "api_uri": "https://api.demo.bitmask.net:4430",
+ "api_version": "1",
+ "ca_cert_fingerprint": "SHA256: 0f17c033115f6b76ff67871872303ff65034efe7dd1b910062ca323eb4da5c7e",
+ "ca_cert_uri": "https://demo.bitmask.net/ca.crt",
+ "default_language": "en",
+ "description": {
+ "en": "A demonstration provider."
+ },
+ "domain": "demo.bitmask.net",
+ "enrollment_policy": "open",
+ "languages": [
+ "en"
+ ],
+ "name": {
+ "en": "Bitmask"
+ },
+ "services": [
+ "openvpn"
+ ]
+}
+
+p. In this document, `API_BASE` consists of `api_uri/api_version`.
+TODO: define a schema for this file.
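The bootstrap step can be sketched in a few lines of Python. This is a minimal illustration, not part of any official client library (the helper names are hypothetical):

```python
import json
import urllib.request

def fetch_provider(domain):
    """Fetch and parse https://DOMAIN/provider.json (network call)."""
    with urllib.request.urlopen("https://%s/provider.json" % domain) as resp:
        return json.loads(resp.read().decode("utf-8"))

def api_base(provider):
    """Build API_BASE from the api_uri and api_version fields."""
    return "%s/%s" % (provider["api_uri"].rstrip("/"), provider["api_version"])
```

For the demo provider above, `api_base` would yield `https://api.demo.bitmask.net:4430/1`.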
+
+h3. GET API_BASE/configs.json
+
+For each supported service code, `configs.json` lists the available configuration files (there might be more than one for a particular service if different formats are available). The service codes are listed in "services" in `provider.json`. A provider can use whatever service codes they want, but the user agent will only respond to the ones that it understands.
+
+For example:
+
+bc.. {
+ "openvpn": {
+ "formats": ["1", "2"],
+ "1": "eip-service.json",
+ "2": "eip-service-2.json"
+ },
+ "soledad": {
+ "formats": ["1"],
+ "1": "soledad-service.json"
+ },
+ "mx": {
+ "formats": ["1"],
+ "1": "smtp-service.json"
+ }
+}
+
+h3. GET API_BASE/config/eip-service.json
+
+e.g. https://api.bitmask.net:4430/1/config/eip-service.json
+
+This file defines the "encrypted internet proxy" capabilities and gateways.
+
+h2. Keys
+
+h3. GET /ca.crt
+
+e.g. https://bitmask.net/ca.crt
+
+This is the CA certificate for the provider. It is used to validate servers when not using the web browser. In particular, for OpenVPN. The URL for this is the same for all providers. The fingerprint for this CA cert should be distributed with the client whenever possible.
+
+
+h1. REST API
+
+h2. Version
+
+The API_BASE for the webapp API is constructed from 'api_uri' and 'api_version' from provider.json.
+
+For example, given this in provider.json:
+
+<code>
+{
+ "api_uri": "https://api.bitmask.net:4430",
+ "api_version": "1"
+}
+</code>
+
+The API_BASE would be https://api.bitmask.net:4430/1
+
+The API_VERSION will increment if breaking changes to the api are made. The API might be enhanced without incrementing the version. For Version 1 this may include sending additional data in json responses.
+
+h2. Session
+
+h3. Handshake
+
+Starts authentication process (values A and B are part of the two step SRP authentication process).
+
+<table class="table table-bordered table-striped">
+<thead>
+ <tr>
+ <th colspan="2">POST API / sessions(.json)</th>
+ </tr>
+</thead>
+<tr>
+ <td>Query params:</td>
+ <td>@{"A": "12…345", "login": "swq055"}@</td>
+</tr>
+<tr>
+ <td>Response:</td>
+ <td>200 @{"B": "17…651", "salt": "A13CDE"}@</td>
+</tr>
+</table>
+
+If the query params leave out @A@, then no @B@ will be included and only the salt for the given login is sent out:
+
+<table class="table table-bordered table-striped">
+<thead>
+ <tr>
+ <th colspan="2">POST API / sessions(.json)</th>
+ </tr>
+</thead>
+<tr>
+ <td>Query params:</td>
+ <td>@{"login": "swq055"}@</td>
+</tr>
+<tr>
+ <td>Response:</td>
+ <td>200 @{"salt": "A13CDE"}@</td>
+</tr>
+</table>
+
+h3. Authenticate
+
+Finishes authentication handshake, after which the user is successfully authenticated (assuming no errors). This needs to be run after the Handshake.
+
+<table class="table table-bordered table-striped">
+<thead>
+ <tr>
+ <th colspan="2">PUT API / sessions/:login(.json)</th>
+ </tr>
+</thead>
+<tr>
+ <td>Query params:</td>
+ <td>@{"client_auth": "123…45", "A": "12…345"}@</td>
+</tr>
+<tr>
+ <td>Response:</td>
+ <td>200 @{"M2": "A123BC", "id": "234863", "token": "Aenfw893-zh"}@</td>
+</tr>
+<tr>
+ <td>Error Response:</td>
+ <td>500 @{"field":"password","error":"wrong password"}@</td>
+</tr>
+</table>
+
+Variables:
+
+* *A*: same as A param from the first Handshake request (POST).
+* *client_auth*: SRP authentication value M, calculated by client.
+* *M2*: Server response for SRP.
+* *id*: User id for updating user record
+* *token*: Unique identifier used to authenticate the user (until the session expires).
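The two requests can be driven by a small function like the sketch below. The SRP math itself is delegated to a callback (e.g. backed by an SRP library such as pysrp; that choice is an assumption), and the HTTP transport is injected so the flow is easy to test:

```python
def bonafide_login(api_base, login, A, make_client_auth, http):
    """Drive the two-step Bonafide SRP session flow (sketch).

    make_client_auth(salt, B) must return the SRP value M (assumption:
    computed by an SRP library; not shown here).  http is an injected
    callable: http(method, url, params) -> dict parsed from the JSON body.
    """
    # Step 1 -- handshake: send login and A, receive salt and B.
    challenge = http("POST", api_base + "/sessions", {"login": login, "A": A})

    # Step 2 -- authenticate: send M as client_auth; receive M2, id, token.
    return http("PUT", api_base + "/sessions/" + login,
                {"A": A,
                 "client_auth": make_client_auth(challenge["salt"],
                                                 challenge["B"])})
```

The returned dict carries the session `token` used to authenticate subsequent API requests.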
+
+h3. Token Authentication
+
+Tokens returned by the authentication request are used to authenticate further requests to the API and stored as a Hash in the couch database. Soledad directly queries the couch database to ensure the authentication of a user. It compares a hash of the token to the one stored in the database. Hashing prevents timing attacks.
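The hashed-comparison idea looks roughly like this (the digest choice is an assumption; the platform may use a different one):

```python
import hashlib
import hmac

def hash_token(token):
    # Only the hash of the token is stored in the couch database
    # (assumption: SHA-512; the actual digest may differ).
    return hashlib.sha512(token.encode("utf-8")).hexdigest()

def token_valid(presented, stored_hash):
    # Constant-time comparison, so lookups do not leak timing information.
    return hmac.compare_digest(hash_token(presented), stored_hash)
```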
+
+h3. Logout
+
+Destroy the current session and invalidate the token. Requires authentication.
+
+<table class="table table-bordered table-striped">
+<thead>
+ <tr>
+ <th colspan="2">DELETE API / logout(.json)</th>
+ </tr>
+</thead>
+<tr>
+ <td>Query params:</td>
+ <td>@{"login": "swq055"}@</td>
+</tr>
+<tr>
+ <td>Response:</td>
+ <td>204 NO CONTENT</td>
+</tr>
+</table>
+
+h2. Certificates
+
+h3. Get a VPN client certificate
+
+The client certificate will be a "free" cert unless the client is authenticated.
+
+<table class="table table-bordered table-striped">
+<thead>
+ <tr>
+ <th colspan="2">POST API / cert</th>
+ </tr>
+</thead>
+<tr>
+ <td>Response:</td>
+ <td>200 @PEM ENCODED CERT@</td>
+</tr>
+</table>
+
+The response also includes the corresponding private key.
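Since key and certificate arrive concatenated in one PEM body, a client typically splits them apart. A simple sketch (helper name is hypothetical):

```python
def split_pem(body):
    """Split the response of POST API_BASE/cert into its PEM blocks
    (typically the private key followed by the certificate)."""
    blocks, current = [], []
    for line in body.splitlines():
        if not line.strip():
            continue
        current.append(line)
        if line.startswith("-----END"):
            blocks.append("\n".join(current))
            current = []
    return blocks
```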
+
+h3. Get a SMTP client certificate
+
+The client certificate will include the user's email address, and the fingerprint will be stored with the user's identity along with the date it was created. Authentication is required.
+
+<table class="table table-bordered table-striped">
+<thead>
+ <tr>
+ <th colspan="2">POST API / smtp_cert</th>
+ </tr>
+</thead>
+<tr>
+ <td>Response:</td>
+ <td>200 @PEM ENCODED CERT@</td>
+</tr>
+</table>
+
+The response also includes the corresponding private key.
+
+h2. Users
+
+h3. Signup
+
+Create a new user.
+
+<table class="table table-bordered table-striped">
+<thead>
+ <tr>
+ <th colspan="2">POST API / users(.json)</th>
+ </tr>
+</thead>
+<tr>
+ <td>Query params:</td>
+ <td>@{"user[password_salt]": "5A...21", "user[password_verifier]": "12...45", "user[login]": "that_s_me"}@</td>
+</tr>
+<tr>
+ <td>Response:</td>
+ <td>200 @{"password_salt":"5A...21","login":"that_s_me"}@</td>
+</tr>
+</table>
+
+h3. Update user record
+
+Update information about the user. Requires Authentication.
+
+<table class="table table-bordered table-striped">
+<thead>
+ <tr>
+ <th colspan="2">PUT API /users/:uid(.json)</th>
+ </tr>
+</thead>
+<tr>
+ <td>Query params:</td>
+ <td>@{"user[param1]": "value1", "user[param2]": "value2" }@</td>
+</tr>
+<tr>
+ <td>Response:</td>
+ <td>204 @NO CONTENT@</td>
+</tr>
+</table>
+
+Possible parameters to update:
+
+* @login@ (requires @password_verifier@)
+* @password_verifier@ combined with @salt@
+* @public_key@
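As an illustration, the form parameters for such an update might be assembled like this (a sketch; the helper and its validation rule are hypothetical, field names follow the tables above):

```python
def update_user_params(login=None, password_verifier=None, salt=None, public_key=None):
    """Build the 'user[...]' form parameters for PUT /users/:uid.

    A hypothetical helper; field names follow the tables above.
    """
    params = {}
    if login is not None:
        # A login change requires a new password_verifier.
        if password_verifier is None:
            raise ValueError("changing login requires a password_verifier")
        params["user[login]"] = login
    if password_verifier is not None:
        params["user[password_verifier]"] = password_verifier
        if salt is not None:
            params["user[password_salt]"] = salt
    if public_key is not None:
        params["user[public_key]"] = public_key
    return params
```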
diff --git a/pages/docs/design/cuttlefish.md b/pages/docs/design/cuttlefish.md
new file mode 100644
index 0000000..6b2c0f5
--- /dev/null
+++ b/pages/docs/design/cuttlefish.md
@@ -0,0 +1,7 @@
+@title = 'Cuttlefish'
+@toc = true
+@summary = "Federated events and callback notifications."
+
+Not yet written.
+
+About the name: Cuttlefish are able to communicate by creating [different patterns on their skin](http://www.newscientist.com/article/dn3728-mathematics-reveals-the-cuttlefishs-wink.html) and communicate secretly with each other by [changing the polarization of their skin](http://www.ncbi.nlm.nih.gov/pubmed/9319987). Also, cuttlefish are [freakishly smart](http://www.pbs.org/wgbh/nova/nature/spineless-smarts.html).
diff --git a/pages/docs/design/en.haml b/pages/docs/design/en.haml
new file mode 100644
index 0000000..427dbb6
--- /dev/null
+++ b/pages/docs/design/en.haml
@@ -0,0 +1,5 @@
+- @nav_title = "Design Docs"
+- @title = "Design Documents"
+- @summary = "Design documents and specifications for various LEAP components and protocols."
+
+= child_summaries \ No newline at end of file
diff --git a/pages/docs/design/nicknym-draft.md b/pages/docs/design/nicknym-draft.md
new file mode 100644
index 0000000..9398a9f
--- /dev/null
+++ b/pages/docs/design/nicknym-draft.md
@@ -0,0 +1,578 @@
+@title = 'Nicknym'
+@nav_title = 'Nicknym'
+@toc = true
+@summary = "Automatic discovery and validation of public keys."
+
+Introduction
+==========================================
+
+Although many interesting key validation infrastructure schemes have been recently proposed, it is not at all clear what someone writing secure email software today should do.
+
+1. **Automatic Management Of Keys (Amok)**: concrete rules for software agents that automatically manage keys, with forward support for new validation protocols as they are developed.
+1. **X-Key-Validation Email Header**: a simple, in-line method of advertising support for different key validation schemes.
+1. **Super Basic Provider Endorsement Protocol**: a super basic, easy, and simple protocol for provider endorsement of public keys over HTTP on the web.
+
+**What is Nicknym?**
+
+Nicknym is a protocol to map user nicknames to public keys. With Nicknym, the user is able to think solely in terms of nicknames, while still being able to communicate with a high degree of security (confidentiality, integrity, and authenticity). Essentially, Nicknym is a system for binding a human-memorable nickname to a cryptographic key via automatic discovery and automatic validation.
+
+Nicknym is a federated protocol: a Nicknym address is in the form `username@domain`, just like an email address, and Nicknym includes both a client and a server component. Although the client can fall back to legacy methods of key discovery when needed, domains that run the Nicknym server component enjoy much stronger identity guarantees.
+
+Nicknym is key agnostic, and supports whatever public key information is available for an address (OpenPGP, OTR, X.509, RSA, etc).
+
+**Why is Nicknym needed?**
+
+Existing forms of secure identity are deeply flawed. These systems rely on either a single trusted entity (e.g. Skype), a vulnerable Certificate Authority system (e.g. S/MIME), or key identifiers that are not human memorable (e.g. fingerprints used in OpenPGP, OTR, etc). When an identity system is hard to use, it is effectively compromised because too few people take the time to use it properly.
+
+The broken nature of existing identity systems (in either security or usability) is especially troubling because identity remains a bedrock precondition for any message security: you cannot ensure confidentiality or integrity without confirming the authenticity of the other party. Nicknym is a protocol that solves this problem in a way that is backward compatible, easy for the user, and provides very strong authenticity.
+
+Goals
+==========================================
+
+**High level goals**
+
+* Pseudo-anonymous and human friendly addresses in the form `username@domain`.
+* Automatic discovery and validation of public keys associated with an address.
+* The user should be able to use Nicknym without understanding anything about public/private keys or signatures.
+
+**Technical goals**
+
+* Wide utility: Nicknym should be a general purpose protocol that can be used in a wide variety of contexts.
+* Prevent dangerous actions: Nicknym should fail hard when there is a possibility of an attack.
+* Minimize false positives: because Nicknym fails hard, we should minimize false positives where it fails incorrectly.
+* Resistant to malicious actors: Nicknym should be externally auditable in order to ensure that service providers are not compromised or advertising bogus keys.
+* Resistant to association analysis: Nicknym should not reveal to any actor or network observer a map of a user's associations.
+
+**Non-goals**
+
+* Nicknym does not try to create a decentralized peer-to-peer identity system. Nicknym is federated, akin to the way email is federated.
+
+Nicknym Overview
+=============================================
+
+1. Nicknym Key Management Rules (NickKMR)
+1. Nicknym Key Discovery Protocol (NickKDP)
+1. Nicknym Key Endorsement Protocol (NickKEP)
+1. Nicknym Key Auditing Protocol ()
+
+
+Nicknym attempts to solve the binding problem using several strategies:
+
+1. **TOFU**:
+1. **Provider Endorsement**:
+1. **Network Perspective**:
+
+Related work
+===================================
+
+**The Binding Problem**
+
+Nicknym attempts to solve the problem of binding a human memorable identifier to a cryptographic key. If you have the identifier, you should be able to get the key with a high level of confidence, and vice versa. The goal is to have federated, human memorable, globally unique public keys.
+
+There are a number of established methods for binding identifier to key:
+
+* [X.509 Certificate Authority System](https://en.wikipedia.org/wiki/X.509)
+* Trust on First Use (TOFU)
+* Mail-back Verification
+* [Web of Trust (WOT)](http://en.wikipedia.org/wiki/Web_of_trust)
+* [DNSSEC](https://en.wikipedia.org/wiki/Dnssec)
+* [Shared Secret](https://en.wikipedia.org/wiki/Socialist_millionaire)
+* [Network Perspective](http://convergence.io/)
+* Nonverbal Feedback (a la ZRTP)
+* Global Append-only Log
+* Key fingerprint as unique identifiers
+
+The methods differ widely, but they all try to solve the same general problem of proving that a person or organization is in control of a particular key.
+
+**Nyms**
+
+http://nyms.io
+
+**DANE**
+
+[DANE](https://datatracker.ietf.org/wg/dane/), and the specific proposal for [OpenPGP user keys using DANE](https://datatracker.ietf.org/doc/draft-wouters-dane-openpgp/), offer a standardized method for securely publishing and locating OpenPGP public keys in DNS.
+
+As noted above, DANE will be very cool if ever adopted widely, but user keys are probably not a good fit for DNSSEC, because of issues of observability of DNS queries and complexity on the server and client end.
+
+By relying on the central authority of the root DNS zone, and the authority of TLDs (many of which are of doubtful trustworthiness), DANE potentially suffers from problems of compromised or nefarious authorities. Because DNS queries are not secure, a single user is particularly vulnerable to MiTM attacks that rewrite all their DNS queries. Adopting an alternate DNS query system, like [DNSCurve](http://dnscurve.org/), [DNSCrypt](https://www.opendns.com/technology/dnscrypt/), an alternate HTTPS based API, or restricting DNS queries to a VPN, would go a long way to fix this problem, and would effectively turn any supporting DNS server into a network perspectives notary. Regardless, the other problems with using DANE for user keys remain.
+
+**DIME**
+
+DIME, formerly DarkMail, uses DNSSEC for provider endorsement, in a manner similar to DANE. Each key endorsement includes the fingerprint of the previously endorsed key, allowing for some limited form of eventual consistency auditing.
+
+**End-To-End**
+
+https://code.google.com/p/end-to-end/wiki/KeyDistribution
+
+Certificate Transparency, but applied to email addresses.
+
+**Prism Proof Email**
+
+http://prismproof.org/
+
+* S/MIME
+* TOFU for legacy clients. Most mail user agents already support S/MIME, and will TOFU the key when they get a new message.
+
+**STEED**
+
+[STEED](http://g10code.com/steed.html) is a proposal with very similar goals to Nicknym. In a nutshell, Nicknym looks very similar to STEED when the domain owner does not support Nicknym. STEED includes four main ideas:
+
+* trust upon first contact: Nicknym uses this as well, although this is the fallback mechanism when others fail.
+* automatic key distribution and retrieval: Nicknym uses this as well, although we used HTTP for this instead of DNS.
+* automatic key generation: Nicknym is designed specifically to support automatic key generation, but this is outside the scope of the Nicknym protocol and it is not required.
+* opportunistic encryption: Again, Nicknym is designed to support opportunistic encryption, but does not require it.
+
+Additional differences include:
+
+* Nicknym is key agnostic: Nicknym does not make an assumption about what types of public keys a user wants to associate with their address.
+* Nicknym is protocol agnostic: Nicknym can be used with SMTP, XMPP, SIP, etc.
+* Nicknym relies on service provider adoption: With Nicknym, the strength of public key verification rests on the degree to which a service provider adopts Nicknym. If a service provider does not support Nicknym, then Nicknym effectively operates like STEED for that domain.
+
+**The Simple Thing**
+
+"The Simple Thing" (TST) is not really a protocol, but it could be. The idea is to just do the simple thing: ignore any type of key endorsement, TOFU all keys, and allow people who care to manually verify the fingerprints of the keys they hold.
+
+In all the other proposals, the burden of key validation is on the person who owns the key. TST works in the opposite way: all the burden for key validation is placed on the person using the public key, not on the key's owner.
+
+If written as a rule, TST might look like this:
+
+1. The client should use whatever latest key is advertised inline via headers in email it receives. Ideally, this would be validated by the provider via a very simple mechanism (such as grabbing user Bob's key from a well-known https URL, or via DNSSEC/DANE).
+2. To cold start, sender can grab recipient's key via this well-known method.
+3. Sender should confirm before sending a message that they have the most up to date key. Messages received that are encrypted to unsupported keys should be bounced.
+
+For a long discussion of the simple thing, see the [messaging list](https://moderncrypto.org/mail-archive/messaging/2014/000855.html).
+
+**WebID and Mozilla Persona**
+
+What about [WebID](http://www.w3.org/wiki/WebID) or [Mozilla Persona](https://www.mozilla.org/en-US/persona/)? These are both interesting standards for cryptographically proving identity, so why do we need something new?
+
+These protocols, and the poorly conceived OpenID Connect, are designed to address a fundamentally different problem: authenticating a user to a website. The problem of authenticating users to one another requires a different architecture entirely. There are some similarities, however, and in the long run a Nicknym provider could also be a WebID and Mozilla Persona provider.
+
+
+Nicknym protocol
+==============================
+
+Definitions
+-------------------------
+
+General terms:
+
+* **address**: A globally unique handle in the form username@domain (i.e. an email, SIP, or XMPP address) that we attempt to bind to a particular key.
+
+Actors:
+
+* **user**: the person with an email account through a service provider.
+* **provider**: A service provider that offers end-user services on a particular domain.
+* **key manager**: The key manager is a trusted user agent that is responsible for storing a database of all the keys for the user, updating these keys, and auditing the endorsements of the user's own keys. Typically, the key manager will run on the user's device, but might be running on any device the user chooses to trust.
+* **key directory**: An online service that stores public keys and allows clients to search for keys by address or fingerprint. A key directory does not make any assertions regarding the validity of an address + key binding. Existing OpenPGP keyservers are a type of key directory in this context, but several of the key validation proposals include new protocols for key directories.
+* **key endorser**: A key endorser is an organization that makes assertions regarding the binding of username@domain address to public key, typically by signing public keys. When supported, all such endorsement signatures must apply only to the uid corresponding to the address being endorsed.
+* **nickagent**: A key manager that supports nicknym.
+* **nickserver**: A daemon that acts as a key directory and key endorser for nicknym.
+
+Keys:
+
+* **user key**: A public/private key pair associated with a user address. If not specified, "user key" refers to the public key.
+* **endorsement key**: The public/private key pair that a service provider or third party endorser uses to sign user keys.
+* **provider key**: A public/private key pair owned by the provider used as an endorsement key.
+* **validated key**: A key is "validated" if the nickagent has bound the user address to a public key.
+
+Key actions:
+
+* **key discovery**: The act of encountering a new key, either inline in a message, via URL, or via a key directory.
+* **verified key transition**: A process where a key owner generates a new public/private key pair and signs the new key with a prior key. Someone verifying this new key then must check to see if there is a signature on the new key from a key previously validated for that particular email address. In effect, "verified key transition" is a process where verifiers treat all keys as name-constrained signing authorities, with the ability to sign any new key matching the same email address. In the case of a system that supports signing particular uids, like OpenPGP, the signatures for key transition must apply only to the relevant uid.
+* **key registration**: the key has been stored by the key manager, and assigned a validation level. The user agent always uses registered keys. This is analogous to adding a key to a user's keyring, although implementations may differ.
+
+Key information:
+
+* **binding information**: evidence that the key manager uses to make an educated guess regarding what key to associate with what email address. This information could come from the headers in an email, a DNS lookup, a key endorser, etc.
+* **key validation level**: the level of confidence the key manager has that we have the right key for a particular address. For automatic key management, we don't say that a key is ever "trusted" unless the user has manually verified the fingerprint.
+
+
+Nickserver requests
+-----------------------
+
+A nickagent will attempt to discover the public key for a particular user address by contacting a nickserver. The nickserver returns JSON encoded key information in response to a simple HTTP request with a user's address. For example:
+
+ curl -X POST -d address=alice@domain.org https://nicknym.domain.org:6425
+
+* The port is always 6425.
+* The HTTP verb may be POST or GET.
+* The request must use TLS (see [Query security](#Query.security)).
+* The query data should have a single field 'address'.
+* For POST requests to nicknym.domain.org, the query data may be encrypted to the public OpenPGP key nicknym@domain.org (see [Query security](#Query.security)).
+* The request may include an "If-Modified-Since" header. In this case, the response might be "304 Not Modified".
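The request above might be assembled in client code like so (a hypothetical helper, not part of the protocol itself):

```python
from urllib.parse import urlencode

NICKNYM_PORT = 6425  # the port is always 6425

def nickserver_query(address):
    """Return the (url, form_body) for an unencrypted nickserver lookup.

    For "alice@domain.org" the query goes to the provider's nickserver
    at nicknym.domain.org over TLS.
    """
    domain = address.split("@")[-1]
    url = "https://nicknym.%s:%d/" % (domain, NICKNYM_PORT)
    body = urlencode({"address": address})  # single 'address' field
    return url, body
```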
+
+Requests may be local or foreign, and for user keys or for provider keys.
+
+* **local** requests are for information for which the nickserver is authoritative. In other words, the requested address is on the same domain that the nickserver is running on.
+* **foreign** requests are for information about other domains.
+* **user key** requests are for addresses in the form "username@domain".
+* **provider key** requests are for addresses in the form "domain".
+
+**Local, Provider Key request**
+
+For example:
+
+ https://nicknym.domain.org:6425/?address=domain.org
+
+The response is the authoritative provider key for that domain.
+
+**Local, User Key request**
+
+For example:
+
+ https://nicknym.domain.org:6425/?address=alice@domain.org
+
+The nickserver returns authoritative key information from the provider's own user database. Every public key returned for local requests must be signed by the provider's key.
+
+**Foreign, Provider Key request**
+
+For example:
+
+ https://nicknym.domain.org:6425/?address=otherdomain.org
+
+1. First, check the nickserver's cache database of discovered keys. If the cached entry is not stale, return this key. This step is skipped if the request is encrypted to the foreign provider's key.
+2. Otherwise, fetch the provider key from the provider's nickserver, cache the result, and return it.
+
+**Foreign, User Key request**
+
+For example:
+
+ https://nicknym.domain.org:6425/?address=bob@otherdomain.org
+
+* First, check the nickserver's database cache of nicknyms. If the cache is not old, return the key information found in the cache. This step is skipped if the request is encrypted to a foreign provider key.
+* Otherwise, attempt to contact a nickserver run by the provider of the requested address. If the nickserver exists, query that nickserver, cache the result, and return it in the response.
+* Otherwise, fall back to querying existing SKS keyservers, cache the result, and return it.
+* Otherwise, return a 404 error.
+
+If the key returned for a foreign request contains multiple user addresses, they are all ignored by nicknym except for the user address specified in the request.
+
+Nickserver response
+---------------------------------
+
+The nickserver will respond with one of the following status codes:
+
+* "200 Success": found keys for this address, and the result is in the body of the response encoded as JSON.
+* "304 Not Modified": if the request included an "If-Modified-Since" header and the keys in question have not been modified, the response will have status code 304.
+* "404 Not Found": no keys were found for this address.
+* "500 Error": An unknown error occurred. Details may be included in the body.
+
+Responses with status code 200 include a body text that is a JSON encoded map with a field "address" plus one or more of the following fields: "openpgp", "otr", "rsa", "ecc", "x509-client", "x509-server", "x509-ca". For example:
+
+ {
+ "address": "alice@example.org",
+ "openpgp": "6VtcDgEKaHF64uk1c/crFhRHuFW9kTvgxAWAK01rXXjrxEa/aMOyXnVQuQINBEof...."
+ }
+
+Responses with status codes other than 200 include a body text that is a JSON encoded map with the following fields: "address", "status", and "message". For example:
+
+ {
+ "address": "bob@otherdomain.org",
+ "status": 404,
+ "message": "Not Found"
+ }
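A client-side handler for these status codes might look like this sketch (the function name and error handling are assumptions; signature verification is handled separately):

```python
import json

def parse_nickserver_response(status, body):
    """Interpret a nickserver response per the status codes above.

    Returns a dict of key material on 200, None on 304 (cached keys
    are still valid), and raises LookupError otherwise.
    """
    if status == 200:
        data = json.loads(body)
        # Everything except "address" is key material (openpgp, otr, ...).
        return {k: v for k, v in data.items() if k != "address"}
    if status == 304:
        return None
    error = json.loads(body)
    raise LookupError("%(status)s %(message)s for %(address)s" % error)
```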
+
+A nickserver response is always signed with an OpenPGP public signing key associated with the provider. Both successful AND unsuccessful responses are signed. Responses to successful local requests must be signed by the key associated with the address "nicknym@domain.org". Foreign requests and non-200 responses may alternately be signed with a key associated with the address nickserver@domain.org. This allows for user keys to be signed off-site and in advance, if the provider so chooses. The signature is ASCII armored and appended to the JSON.
+
+ {
+ "address": "alice@example.org",
+ "openpgp": "6VtcDgEKaHF64uk1c/crFhRHuFW9kTvgxAWAK01rXXjrxEa/aMOyXnVQuQINBEof...."
+ }
+ -----BEGIN PGP SIGNATURE-----
+ iQIcBAEBCgAGBQJRhWO+AAoJEIaItIgARAAl2IwP/24z9CjKjD0fd27pQs+r+e3h
+ p8KAYDbVac3+c3vm30DjHO/RKF4Zq6+sTAIkrFvXOwYJl9KgjMpQVV/voInjxATz
+ -----END PGP SIGNATURE-----
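Since the armored signature is simply appended to the JSON, a nickagent can split the two parts before verification. A sketch (the helper is hypothetical; verifying the signature itself requires an OpenPGP library and is not shown):

```python
SIG_MARKER = "-----BEGIN PGP SIGNATURE-----"

def split_signed_response(body):
    """Split a signed nickserver response into (json_text, armored_sig).

    The ASCII-armored signature is appended after the JSON, so the
    first occurrence of the armor header marks the boundary.
    """
    index = body.index(SIG_MARKER)
    return body[:index].strip(), body[index:].strip()
```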
+
+If the data in the request was encrypted to the public key nicknym@domain.org, then the JSON response and signature are additionally encrypted to the symmetric key found in the request and returned base64 encoded.
+
+TBD: maybe we should just switch to a raw RSA or ECC signature.
+
+Query balancing
+------------------------
+
+A nickagent must choose what IP address to query by selecting randomly from among hosts that resolve from `nicknym.domain.org` (where `domain.org` is the domain name of the provider).
+
+If a host does not respond, a nickagent must skip over it and attempt to contact another host in the pool.
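These two rules can be sketched as follows (the reachability probe is a caller-supplied assumption, e.g. a TCP connect with a short timeout):

```python
import random

def pick_nickserver(resolved_hosts, is_reachable):
    """Pick a host at random from the pool resolved for
    nicknym.domain.org, skipping hosts that do not respond.
    `is_reachable` is a caller-supplied probe (hypothetical).
    """
    candidates = list(resolved_hosts)
    random.shuffle(candidates)  # random selection among resolved hosts
    for host in candidates:
        if is_reachable(host):
            return host
    raise ConnectionError("no reachable nickserver in pool")
```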
+
+Query security
+--------------------------
+
+TLS is required for all nickserver queries.
+
+When querying https://nicknym.domain.org, nickagent must validate the TLS connection in one of four possible ways:
+
+1. Using a commercial CA certificate distributed with the host operating system.
+2. Using DANE TLSA record to discover and validate the server certificate.
+3. Using a seeded CA certificate (see [Discovering nickservers](#Discovering.nickservers)).
+4. Using a custom self-signed CA certificate discovered for the domain, so long as the CA certificate was discovered via #1 or #2 or #3. Custom CA certificates may be discovered for a domain by making a provider key request of a nickserver (e.g. https://nicknym.known-domain.org/?address=new-domain.org).
+
+Optionally, a nickagent may make an encrypted query like so:
+
+0. Suppose the nickagent wants to make an encrypted query regarding the address alice@domain-x.org.
+1. Nickagent discovers the public key for nicknym@domain-y.org.
+2. The nickagent makes a POST request to a nickserver with two fields: address and ciphertext.
+3. The address field contains only the domain part of the address (unlike in an unencrypted request).
+4. The ciphertext field is encrypted to the public key for nicknym@domain-y.org. The corresponding cleartext contains the full address on the first line, followed by a randomly generated symmetric key on the second line.
+5. If the request is local, the nickserver handles the request. If the request is foreign, the nickserver proxies it to the domain specified in the address field.
+6. When the request reaches the right nickserver, the body of the nickserver response is encrypted using the symmetric key. The first line of the response specifies the cipher and mode used (allowed ciphers TBD).
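The request side of these steps can be sketched like this (`encrypt_to_provider` stands in for OpenPGP encryption to the nicknym@domain-y.org key; all names are hypothetical):

```python
import os
import base64

def encrypted_query_fields(full_address, encrypt_to_provider):
    """Assemble the two POST fields for an encrypted nickserver query.

    Returns (fields, symmetric_key); the caller keeps the key to
    decrypt the response body later.
    """
    domain = full_address.split("@")[-1]
    symmetric_key = base64.b64encode(os.urandom(32)).decode("ascii")
    # Cleartext: full address on the first line, symmetric key on the second.
    cleartext = "%s\n%s" % (full_address, symmetric_key)
    fields = {
        "address": domain,  # only the domain part is sent in the clear
        "ciphertext": encrypt_to_provider(cleartext),
    }
    return fields, symmetric_key
```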
+
+Comment: although it may seem excessive to encrypt both the request via TLS and the request body via OpenPGP, the reason for this is that many requests will not use OpenPGP.
+
+Automatic key validation
+----------------------------------
+
+A key is "validated" if the nickagent has bound the user address to a public key.
+
+Nicknym supports three different levels of key validation:
+
+* Level 3 - **path trusted**: A path of cryptographic signatures can be traced from a trusted key to the key under evaluation. By default, only the provider key from the user's provider is a "trusted key".
+* Level 2 - **provider signed**: The key has been signed by a provider key for the same domain, but the provider key is not validated using a trust path (i.e. it is only registered).
+* Level 1 - **registered**: The key has been encountered and saved, it has no signatures (that are meaningful to the nickagent).
+
+A nickagent will try to validate using the highest level possible.
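The levels can be sketched as a simple ranking (the evidence flags are assumptions about what the agent has already checked):

```python
# Validation levels, highest first.
PATH_TRUSTED = 3
PROVIDER_SIGNED = 2
REGISTERED = 1

def validation_level(has_trust_path, provider_signature_valid):
    """Assign the highest level the available evidence supports."""
    if has_trust_path:
        return PATH_TRUSTED
    if provider_signature_valid:
        return PROVIDER_SIGNED
    return REGISTERED
```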
+
+Automatic renewal
+-----------------------------
+
+A validated public key is replaced with a new key when:
+
+* The new key is **path trusted**
+* The new key is **provider signed**, but the old key is only **registered**.
+* The new key has a later expiration, and the old key is only **registered** and will expire "soon" (exact time TBD).
+* The agent discovers a new subkey, but the master signing key is unchanged.
+
+In all other cases, the new key is rejected.
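The renewal rules above can be sketched as a decision function (the key fields and the 30-day reading of "soon" are placeholders; the spec leaves the exact time TBD):

```python
PATH_TRUSTED, PROVIDER_SIGNED, REGISTERED = 3, 2, 1
EXPIRY_SOON_DAYS = 30  # "soon" is TBD in the spec; 30 days is a placeholder

def should_replace_key(new, old):
    """Apply the four renewal rules; keys are dicts with hypothetical
    fields: level (1-3), days_to_expiry, master_key_id."""
    if new["level"] == PATH_TRUSTED:
        return True
    if new["level"] == PROVIDER_SIGNED and old["level"] == REGISTERED:
        return True
    if (old["level"] == REGISTERED
            and old["days_to_expiry"] <= EXPIRY_SOON_DAYS
            and new["days_to_expiry"] > old["days_to_expiry"]):
        return True
    if new["master_key_id"] == old["master_key_id"]:
        return True  # new subkey, unchanged master signing key
    return False
```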
+
+The nickagent will attempt to refresh a key by making a request to a nickserver of its choice when the key is past 3/4 of its lifespan, and again when it is about to expire.
+
+Nicknym encourages, but does not require, the use of short lived public keys, in the range of X to Y days. It is recommended that short lived keys are not uploaded to OpenPGP keyservers.
+
+Automatic invalidation
+----------------------------
+
+A key is invalidated if:
+
+* The old key has expired, and no new key can be discovered with equal or greater validation level.
+
+This means validation is a one-way street: once a certain level of validation is established for a user address, no client should accept any future keys for that address with a lower level of validation.
+
+Discovering nickservers
+--------------------------------
+
+It is entirely up to the nickagent to decide what nickservers to query. If it wanted to, a nickagent could send all its requests to a single nickserver.
+
+However, nickagents should discover new nickservers and balance their queries to these nickservers for the purposes of availability, load balancing, network perspective, and hiding the user's association map.
+
+Whenever the nickagent is asked by a locally running application for a public key corresponding to an address on the domain `domain.org`, it may check to see if the host `nicknym.domain.org` exists. If the domain resolves, then the nickagent may add it to the pool of known nickservers. A nickagent should only perform this DNS check if it is able to do so over an encrypted tunnel.
+
+Additionally, a nickagent may be distributed with an initial list of "seed" nickservers. In this case, the nickagent is distributed with a copy of the CA certificate used to validate the TLS connection with each respective seed nickserver.
+
+Cross-provider signatures
+----------------------------------
+
+Nicknym does not support user signatures on user keys. There is no trust path from user to user. However, a service provider may sign the provider key of another provider.
+
+To be written.
+
+Auditing
+----------------------------
+
+In order to keep the user's provider from handing out bogus public keys, a nickagent should occasionally make foreign queries of the user's own address against nickservers run by third parties. The recommended frequency of these queries is once per day, at a random time during local waking hours.
+
+In order to prevent a nickserver from handing out bogus provider keys, a nickagent should query multiple nickservers before a provider key is registered or path trusted.
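The recommended self-audit schedule (once per day, at a random time during local waking hours) might be sketched as follows (the waking-hours window is an assumption; the document does not define it):

```python
import random
import datetime

WAKING_HOURS = range(9, 22)  # local waking hours; this window is an assumption

def next_audit_time(now):
    """Pick a random time tomorrow, within waking hours, for the daily
    self-audit query against a third-party nickserver."""
    tomorrow = now + datetime.timedelta(days=1)
    return tomorrow.replace(hour=random.choice(list(WAKING_HOURS)),
                            minute=random.randrange(60),
                            second=0, microsecond=0)
```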
+
+Possible attacks:
+
+**Attack 1 - Intercept Outgoing:**
+
+* Attack: provider `A` signs an impostor key for provider `B` and distributes it to users of `A` (in order to intercept outgoing messages sent to `B`).
+* Countermeasure: By querying multiple nickservers for the provider key of `B`, the nickagent can detect if provider `A` is attempting to distribute impostor keys.
+
+**Attack 2 - Intercept Incoming:**
+
+* Attack: provider `A` signs an impostor key for one of its own users, and distributes it to users of provider `B` (in order to intercept incoming messages).
+* Countermeasure: By querying for its own keys, a nickagent can detect if a provider is giving out bogus keys for its addresses.
+
+**Attack 3 - Association Mapping:**
+
+* Attack: A provider tracks all the requests for key discovery in order to build a map of association.
+* Countermeasure: By performing foreign key queries via third party nickservers, an agent can prevent any particular entity from tracking their queries.
+
+Known vulnerabilities
+------------------------------------------
+
+The nicknym protocol does not yet have a good solution for dealing with the following problems:
+
+* Enumeration attack: an attacker can enumerate the list of all users for a provider by simply querying every possible username combination. We have no defense against this, although it would surely take a while.
+* DDoS attack: by their very nature, nickservers perform a bit of work for every request. Because of this, they are vulnerable to being overloaded by a flood of bogus requests.
+* Besmirch attack: a MitM attacker can sully the reputation of a provider by generating many bad responses (responses signed with the wrong key), thus leading other nickservers and nicknym agents to consider the provider compromised.
+
+Future enhancements
+---------------------
+
+**Additional discovery mechanisms**
+
+In addition to nickservers and SKS keyservers, there are two other potential methods for discovering public keys:
+
+* **Webfinger** includes a standard mechanism for distributing a user's public key via a simple HTTP request. This is very easy to implement on the server, and very easy to consume on the client side, but there are not many webfinger servers with public keys in the wild.
+* **DNS** is used by multiple competing standards for key discovery. When and if one of these emerges as predominant, Nicknym should attempt to use that method when available.
+
+Discussion
+----------------------
+
+*Why not use WoT?* Most users are empirically unable to properly maintain a web of trust. The concepts are hard, it is easy to mess up the signing practice, most people default to TOFU anyway, and very few users use revocation properly. Most importantly, the WOT exposes a user's social network, potentially highly sensitive information in its own right. When first proposed, WOT was a clever innovation, but contemporary threats have greatly reduced its usefulness.
+
+*Why not use DANE/DNSSEC?* DANE is great for discovery and validation of server keys, but there are many reasons why it is not so good for user keys: DNS records are slow to update; DNS queries are observable, unlike HTTP over TLS; it is difficult for a provider to publish thousands of keys in DNS; it is much easier for a client to do a simple HTTP fetch (and more possible for HTML5 clients). Also, RSA public keys will soon be too big for UDP packets (though this is not true of ECC), so putting keys in DNS would mean putting a URL to a key in DNS, and you might as well just use HTTP anyway.
+
+*Why not use Shared Secret?* Shared secrets, like with the Socialist Millionaire protocol, are cool in theory but prone to user error and frustration in practice. A typical user is not in a position to have established a prior secret with most of the people they need to make first contact with. Shared secrets also cannot be scaled to a group setting. Finally, shared secrets are often typed incorrectly (e.g. was the secret "Invisible Zebra" or "invisibleZebra"? This could be fixed with rules for secret normalization, but this is tricky and language specific). For the special case of advanced users with special security needs, however, a shared secret provides a much stronger validation than other methods of key binding (so long as the validation window is small).
+
+*Why not use Mail-back Verification?* If the provider distributes user keys, there is not any benefit to mail-back verification. The nicknym protocol could potentially benefit from a future enhancement to support mail-back for users on a non-cooperating legacy provider. However, at its best, mail-back is a very weak form of key validation.
+
+*Why not use Global Append-only Log?* Maybe we should, they are neat. However, current implementations are slow, resource intensive, and experimental (e.g. namecoin).
+
+*Why not use Nonverbal Feedback?* ZRTP can use non-verbal clues to establish secure identity because of the nature of a live phone call. This doesn't work for text only messaging.
+
+*Why not use the key fingerprint as the unique identifier?* This is the strategy taken by all systems for peer-to-peer messaging (e.g. retroshare, bitmessage, etc). Depending on the length of the fingerprint, this method is very secure. Essentially, this approach neatly solves the binding problem by collapsing the key and the identifier together as one. The problem, of course, is that this is not very user friendly. Users must either have pre-arranged some way to exchange fingerprints, or they must fall back to one of the other methods for verification (shared secret, WoT, etc). The friction associated with pre-arranged sharing of fingerprints can be reduced with technology, using QR-codes and hand held devices, for example. In the best case scenario, however, fingerprints as identifiers will always be much less user friendly than simple username@domain.org addresses. The motivating premise behind Nicknym is that when an identity system is hard to use, it is effectively compromised because too few people take the time to use it properly.
+
+Reference nickagent implementation
+====================================================
+
+https://github.com/leapcode/keymanager
+
+There is a reference nickagent implementation called "key manager" written in Python and integrated into the LEAP client. It uses Soledad to store its data.
+
+Public API
+----------------------------
+
+**refresh_keys()**
+
+updates the keys with fresh ones, as needed.
+
+**get_key(address, type)**
+
+returns a single public key for address. type is one of 'openpgp', 'otr', 'x509', or 'rsa'.
+
+**send_key(address, public_key, type)**
+
+authenticates with the appropriate provider and saves the public_key in the user database.
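+
+The API above can be sketched as follows. This is a hypothetical stand-in, not the actual keymanager code: storage is a plain dict rather than Soledad, and no network or authentication calls are made.

```python
# Hypothetical sketch of the public API shape described above. Storage is
# a plain dict standing in for Soledad; refresh_keys is a stub because it
# would require network access to a nickserver.
class KeyManager:
    VALID_TYPES = ('openpgp', 'otr', 'x509', 'rsa')

    def __init__(self, address):
        self.address = address
        self._db = {}  # maps (address, type) -> public key blob

    def get_key(self, address, ktype):
        """Return a single public key for address, or None if unknown."""
        if ktype not in self.VALID_TYPES:
            raise ValueError('unknown key type: %r' % ktype)
        return self._db.get((address, ktype))

    def send_key(self, address, public_key, ktype):
        """Save public_key for address in the local stand-in database."""
        self._db[(address, ktype)] = public_key

    def refresh_keys(self):
        """Would update stored keys with fresh ones, as needed."""
        pass
```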
+
+Storage
+--------------------------
+
+Key manager uses Soledad for storage. GPGME, however, requires keys to be stored in keyrings, which are read from disk.
+
+For now, Key Manager deals with this by storing each key in its own keyring. In other words, every key lives in a keyring containing exactly one key, and this keyring is stored in a Soledad document. To avoid confusing this keyring with a normal keyring, I will call it a 'unitary keyring'.
+
+Suppose Alice needs to communicate with Bob:
+
+1. Alice's Key Manager copies her private key and Bob's public key to disk. The key manager gets these from Soledad, in the form of unitary keyrings.
+2. Client code uses GPGME, feeding it these temporary keyring files.
+3. The keyrings are destroyed.
+
+TBD: how best to ensure destruction of the keyring files.
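+
+One way the destruction step could be handled is sketched below. This assumes best-effort overwriting is acceptable; truly secure deletion on journaling filesystems is a harder problem. The keyring contents here are placeholder blobs standing in for data fetched from Soledad.

```python
import os
import tempfile
from contextlib import contextmanager

# Sketch of ensuring destruction of the temporary keyring files: write
# each unitary keyring into a TemporaryDirectory, then overwrite the file
# contents with zeros before the directory (and files) are removed.
@contextmanager
def temporary_keyrings(keyrings):
    """Yield paths to on-disk keyring files; shred them on exit."""
    with tempfile.TemporaryDirectory() as tmpdir:
        paths = []
        for name, blob in keyrings.items():
            path = os.path.join(tmpdir, name + '.gpg')
            with open(path, 'wb') as f:
                f.write(blob)
            paths.append(path)
        try:
            yield paths
        finally:
            for path in paths:
                size = os.path.getsize(path)
                with open(path, 'wb') as f:
                    f.write(b'\0' * size)  # best-effort overwrite before removal
```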
+
+An example Soledad document for an address:
+
+    {
+      "address": "alice@example.org",
+      "keys": [
+        {
+          "type": "openpgp",
+          "key": "binary blob",
+          "keyring": "binary blob",
+          "expires_on": "2014-01-01",
+          "validation": "provider_signed",
+          "first_seen_at": "2013-04-01 00:11:00",
+          "last_audited_at": "2013-04-02 12:00:00"
+        },
+        {
+          "type": "otr",
+          "key": "binary blob",
+          "expires_on": "2014-01-01",
+          "validation": "registered",
+          "first_seen_at": "2013-04-01 00:11:00",
+          "last_audited_at": "2013-04-02 12:00:00"
+        }
+      ]
+    }
+
+Pseudocode
+---------------------------
+
+get_key
+
+ #
+ # return a key for an address
+ #
+ function get_key(address, type)
+ if key for address exists in soledad database?
+ return key
+ else
+ fetch key from nickserver
+ save it in soledad
+ return key
+ end
+ end
+
+send_key
+
+ #
+ # send the user's provider the user's key. this key will get signed by the provider, and replace any prior keys
+ #
+ function send_key(type)
+ if not authenticated:
+ error!
+ end
+ get (self.address, type)
+ send (key_data, type) to the provider
+ end
+
+refresh_keys
+
+ #
+ # update the user's db of validated keys to see if there are changes.
+ #
+ function refresh_keys()
+ for each key in the soledad database (that should be checked?):
+ newkey = fetch_key_from_nickserver()
+      if key is about to expire and newkey complies with the renewal parameters:
+        replace key with newkey
+      else if fingerprint(key) != fingerprint(newkey):
+        freak out, something is wrong
+        maybe handle revocation, or fetch the key via multiple routes (e.g. through tor or vpn) and keep whichever key is most commonly seen
+ else:
+ everything's cool for this key, continue
+ end
+ end
+ end
+
+private fetch_key_from_nickserver
+
+ function fetch_key_from_nickserver(key)
+ randomly pick a subset of the available nickservers we know about
+ send a tcp request to each in this subset in parallel
+ first one that opens a successful socket is used, all the others are terminated immediately
+ make http request
+ parse json for the keys
+ return keys
+ end
+
+
+Reference nickserver implementation
+=====================================================
+
+https://github.com/leapcode/nickserver
+
+The reference nickserver is written in Ruby 1.9 and licensed GPLv3. It is lightweight and scalable, supporting high concurrency and reasonable latency. Data is stored in CouchDB.
diff --git a/pages/docs/design/nicknym.md b/pages/docs/design/nicknym.md
new file mode 100644
index 0000000..3f94875
--- /dev/null
+++ b/pages/docs/design/nicknym.md
@@ -0,0 +1,498 @@
+@title = 'Nicknym'
+@toc = true
+@summary = "Automatic discovery and validation of public keys."
+
+Introduction
+==========================================
+
+**What is Nicknym?**
+
+Nicknym is a protocol to map user nicknames to public keys. With Nicknym, the user is able to think solely in terms of nickname, while still being able to communicate with a high degree of security (confidentiality, integrity, and authenticity). Essentially, Nicknym is a system for binding human-memorable nicknames to a cryptographic key via automatic discovery and automatic validation.
+
+Nicknym is a federated protocol: a Nicknym address is in the form `username@domain`, just like an email address, and Nicknym includes both a client and a server component. Although the client can fall back to legacy methods of key discovery when needed, domains that run the Nicknym server component enjoy much stronger identity guarantees.
+
+Nicknym is key agnostic, and supports whatever public key information is available for an address (OpenPGP, OTR, X.509, RSA, etc). However, Nicknym enforces a strict one-to-one mapping of address to public key.
+
+**Why is Nicknym needed?**
+
+Existing forms of secure identity are deeply flawed. These systems rely on either a single trusted entity (e.g. Skype), a vulnerable Certificate Authority system (e.g. S/MIME), or key identifiers that are not human memorable (e.g. fingerprints used in OpenPGP, OTR, etc). When an identity system is hard to use, it is effectively compromised because too few people take the time to use it properly.
+
+The broken nature of existing identity systems (either in security or in usability) is especially troubling because identity remains a bedrock precondition for any message security: you cannot ensure confidentiality or integrity without confirming the authenticity of the other party. Nicknym is a protocol to solve this problem in a way that is backward compatible, easy for the user, and includes very strong authenticity.
+
+Goals
+==========================================
+
+**High level goals**
+
+* Pseudo-anonymous and human friendly addresses in the form `username@domain`.
+* Automatic discovery and validation of public keys associated with an address.
+* The user should be able to use Nicknym without understanding anything about public/private keys or signatures.
+
+**Technical goals**
+
+* Wide utility: nicknym should be a general purpose protocol that can be used in wide variety of contexts.
+* No revocation: instead of key revocation, support short lived keys that frequently and automatically refresh.
+* Prevent dangerous actions: Nicknym should fail hard when there is a possibility of an attack.
+* Minimize false positives: because Nicknym fails hard, we should minimize false positives where it fails incorrectly.
+* Resistant to malicious actors: Nicknym should be externally auditable in order to assure service providers are not compromised or advertising bogus keys.
+* Resistant to association analysis: Nicknym should not reveal to any actor or network observer a map of a user's associations.
+
+**Non-goals**
+
+* Nicknym does not try to create a decentralized peer-to-peer identity system. Nicknym is federated, akin to the way email is federated.
+
+The binding problem
+=============================================
+
+Nicknym attempts to solve the problem of binding a human memorable identifier to a cryptographic key. If you have the identifier, you should be able to get the key with a high level of confidence, and vice versa. The goal is to have federated, human memorable, globally unique public keys.
+
+There are a number of established methods for binding identifier to key:
+
+* [X.509 Certificate Authority System](https://en.wikipedia.org/wiki/X.509)
+* Trust on First Use (TOFU)
+* Mail-back Verification
+* [Web of Trust (WOT)](http://en.wikipedia.org/wiki/Web_of_trust)
+* [DNSSEC](https://en.wikipedia.org/wiki/Dnssec)
+* [Shared Secret](https://en.wikipedia.org/wiki/Socialist_millionaire)
+* [Network Perspective](http://convergence.io/)
+* Nonverbal Feedback (a la ZRTP)
+* Global Append-only Log
+* Key fingerprint as unique identifiers
+
+The methods differ widely, but they all try to solve the same general problem of proving that a person or organization is in control of a particular key.
+
+**Nicknym overview**
+
+Nicknym solves the binding problem by using a combination of methods, utilizing TOFU, X.509, Network Perspective, and additional methods we call "Provider Keys" and "Federated Web of Trust" (FWOT).
+
+1. Nicknym starts with TOFU of user keys, because it is easy to do and backward compatible with legacy providers. In TOFU, your client naively accepts the key of another user when it first encounters it. When you accept a key via TOFU, you are making a bet that possible attackers against you did not have the foresight to specifically target you with a false key during discovery.
+2. Next, we add X.509. For those providers that publish the public keys of their users, we require that these keys be fetched over validated TLS. This makes third party attacks against TOFU more difficult, but also places a lot of trust in the providers (and the Certificate Authorities).
+3. Next, we add a simple form of Network Perspective where the client can ask one provider what key another provider is distributing. This allows a user's client to be able to audit their provider and keep them honest in an automated manner. If a service provider distributes bogus keys, their users and other providers will be quickly alerted to the problem.
+4. Next, we add Provider Keys. If a service provider supports Nicknym, the public keys of its users are additionally signed by a "provider key". If your client has the correct provider key, you no longer need to TOFU the keys of the provider's users. This has the benefit of making it possible for a user to issue new keys, and of supporting very short-lived keys rather than relying on key revocation. A service provider is much less likely to lose their private key or have it compromised, a significant problem with TOFU of user keys.
+5. Finally, we add a Federated Web of Trust. The system works like this: each service provider is responsible for the due diligence of properly signing the keys of a few other providers, akin to the distributed web of trust model of OpenPGP, but with all the hard work of proper signature validation placed upon the service provider. When a user communicates with another party who happens to use a service provider that participates in the FWOT, the user's software will automatically trace a chain of signatures from the other party's key, to their service provider, to the user's own service provider (with some possible intermediary signatures). This allows for identity that is verified through an end-to-end trust path from any user to any other user in a way that can be automated and is human memorable. Support for a FWOT allows us to bypass X.509 Certificate Authorities entirely, to gracefully handle short-lived provider keys, and to handle emergency re-key events if a provider's key is lost.
+
+As we move down this list, each measure taken gets more complicated, requires more provider cooperation, and provides less additional benefit than the one before it. Nevertheless, each measure contributes some important benefit toward the goal of automatic binding of user identity to public key.
+
+**Questions**
+
+*Why not use WoT?* Most users are empirically unable to properly maintain a web of trust. The concepts are hard, it is easy to mess up the signing practice, most people default to TOFU anyway, and very few users use revocation properly. Most importantly, the WOT exposes a user's social network, potentially highly sensitive information in its own right. When first proposed, WOT was a clever innovation, but contemporary threats have greatly reduced its usefulness.
+
+*Why not use DANE/DNSSEC?* DANE is great for discovery and validation of server keys, but there are many reasons why it is not so good for user keys: DNS records are slow to update; DNS queries are observable, unlike HTTP over TLS; it is difficult for a provider to publish thousands of keys in DNS; it is much easier for a client to do a simple HTTP fetch (and more possible for HTML5 clients). Also, RSA Public keys will soon be too big for UDP packets (though this is not true of ECC), so putting keys in DNS will mean putting a URL to a key in DNS, so you might as well just use HTTP anyway.
+
+*Why not use Shared Secret?* Shared secrets, like with the Socialist Millionaire protocol, are cool in theory but prone to user error and frustration in practice. A typical user is not in a position to have established a prior secret with most of the people they need to make first contact with. Shared secrets also cannot be scaled to a group setting. Finally, shared secrets are often typed incorrectly (e.g. was the secret "Invisible Zebra" or "invisibleZebra"? This could be fixed with rules for secret normalization, but this is tricky and language specific). For the special case of advanced users with special security needs, however, a shared secret provides a much stronger validation than other methods of key binding (so long as the validation window is small).
+
+*Why not use Mail-back Verification?* If the provider distributes user keys, there is not any benefit to mail-back verification. The nicknym protocol could potentially benefit from a future enhancement to support mail-back for users on a non-cooperating legacy provider. However, at its best, mail-back is a very weak form of key validation.
+
+*Why not use Global Append-only Log?* Maybe we should, they are neat. However, current implementations are slow, resource intensive, and experimental (e.g. namecoin).
+
+*Why not use Nonverbal Feedback?* ZRTP can use non-verbal clues to establish secure identity because of the nature of a live phone call. This doesn't work for text only messaging.
+
+*Why not use the key fingerprint as the unique identifier?* This is the strategy taken by all systems for peer-to-peer messaging (e.g. retroshare, bitmessage, etc). Depending on the length of the fingerprint, this method is very secure. Essentially, this approach neatly solves the binding problem by collapsing the key and the identifier together as one. The problem, of course, is that this is not very user friendly. Users must either have pre-arranged some way to exchange fingerprints, or they must fall back to one of the other methods for verification (shared secret, WoT, etc). The friction associated with pre-arranged sharing of fingerprints can be reduced with technology, using QR-codes and hand held devices, for example. In the best case scenario, however, fingerprints as identifiers will always be much less user friendly than simple username@domain.org addresses. The motivating premise behind Nicknym is that when an identity system is hard to use, it is effectively compromised because too few people take the time to use it properly.
+
+Related work
+===================================
+
+**WebID and Mozilla Persona**
+
+What about [WebID](http://www.w3.org/wiki/WebID) or [Mozilla Persona](https://www.mozilla.org/en-US/persona/)? These are both interesting standards for cryptographically proving identity, so why do we need something new?
+
+These protocols, and the poorly conceived OpenID Connect, are designed to address a fundamentally different problem: authenticating a user to a website. The problem of authenticating users to one another requires a different architecture entirely. There are some similarities, however, and in the long run a Nicknym provider could also be a WebID and Mozilla Persona provider.
+
+**STEED**
+
+[STEED](http://g10code.com/steed.html) is a proposal with very similar goals to Nicknym. In a nutshell, Nicknym looks very much like STEED when the domain owner does not support Nicknym. STEED includes four main ideas:
+
+* trust upon first contact: Nicknym uses this as well, although this is the fallback mechanism when others fail.
+* automatic key distribution and retrieval: Nicknym uses this as well, although we used HTTP for this instead of DNS.
+* automatic key generation: Nicknym is designed specifically to support automatic key generation, but this is outside the scope of the Nicknym protocol and it is not required.
+* opportunistic encryption: Again, Nicknym is designed to support opportunistic encryption, but does not require it.
+
+Additional differences include:
+
+* Nicknym is key agnostic: Nicknym does not make an assumption about what types of public keys a user wants to associate with their address.
+* Nicknym is protocol agnostic: Nicknym can be used with SMTP, XMPP, SIP, etc.
+* Nicknym relies on service provider adoption: With Nicknym, the strength of verification of public keys rests on the degree to which a service provider adopts Nicknym. If a service provider does not support Nicknym, then Nicknym effectively operates like STEED for that domain.
+
+**DANE**
+
+[DANE](https://datatracker.ietf.org/wg/dane/), and the specific proposal for [OpenPGP user keys using DANE](https://datatracker.ietf.org/doc/draft-wouters-dane-openpgp/), offer a standardized method for securely publishing and locating OpenPGP public keys in DNS.
+
+As noted above, DANE will be very cool if ever adopted widely, but user keys are probably not a good fit for DNSSEC, because of issues of observability of DNS queries and complexity on the server and client end.
+
+By relying on the central authority of the root DNS zone, and the authority of TLDs (many of which are of doubtful trustworthiness), DANE potentially suffers from problems of compromised or nefarious authorities. Because DNS queries are not secure, a single user is particularly vulnerable to MiTM attacks that rewrite all their DNS queries. Adopting an alternate DNS query system, like [DNSCurve](http://dnscurve.org/), [DNSCrypt](https://www.opendns.com/technology/dnscrypt/), an alternate HTTPS based API, or restricting DNS queries to a VPN, would go a long way to fix this problem, and would effectively turn any supporting DNS server into a network perspectives notary. Regardless, the other problems with using DANE for user keys remain.
+
+Nicknym protocol
+==============================
+
+Definitions
+-------------------------
+
+* **address**: A globally unique handle in the form user@domain (i.e. an email, SIP, or XMPP address).
+* **provider**: A service provider that offers end-user services on a particular domain.
+* **user key**: A public/private key pair associated with a user address. If not specified, "user key" refers to the public key.
+* **provider key**: A public/private key pair owned by the provider. The address associated with this key is just the domain of the service provider.
+* **validated key**: A key is "validated" if the nickagent has bound the user address to a public key.
+* **nickagent**: Client side program that manages a user's contact list, the public keys they have encountered and validated, and the user's own key pairs. The nickagent may also expose an API for other local applications to query for a public key.
+* **nickserver**: Server side daemon run by providers who support Nicknym. A nickserver is responsible for answering the question "what public key do you see for this address"?
+
+Nickserver requests
+-----------------------
+
+A nickagent will attempt to discover the public key for a particular user address by contacting a nickserver. The nickserver returns JSON encoded key information in response to a simple HTTP request with a user's address. For example:
+
+ curl -X POST -d address=alice@domain.org https://nicknym.domain.org:6425
+
+* The port is always 6425.
+* The HTTP verb may be POST or GET.
+* The request must use TLS (see [Query security](#Query.security)).
+* The query data should have a single field 'address'.
+* For POST requests to nicknym.domain.org, the query data may be encrypted to the public OpenPGP key nicknym@domain.org (see [Query security](#Query.security)).
+* The request may include an "If-Modified-Since" header. In this case, the response might be "304 Not Modified".
+
+Requests may be local or foreign, and for user keys or for provider keys.
+
+* **local** requests are for information for which the nickserver is authoritative. In other words, when the requested address is on the same domain that the nickserver is running on.
+* **foreign** requests are for information about other domains.
+* **user key** requests are for addresses in the form "username@domain".
+* **provider key** requests are for addresses in the form "domain".
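+
+The two axes above can be sketched as a small classification helper. The function and parameter names here are illustrative, not part of the protocol:

```python
# Classify a nickserver request along the two axes described above:
# local vs. foreign, and user key vs. provider key. 'our_domain' is the
# domain this nickserver is authoritative for.
def classify_request(address, our_domain):
    """Return a (locality, kind) pair for a request address."""
    if '@' in address:
        kind = 'user key'
        domain = address.split('@', 1)[1]
    else:
        kind = 'provider key'
        domain = address
    locality = 'local' if domain == our_domain else 'foreign'
    return locality, kind
```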
+
+**Local, Provider Key request**
+
+For example:
+
+ https://nicknym.domain.org:6425/?address=domain.org
+
+The response is the authoritative provider key for that domain.
+
+**Local, User Key request**
+
+For example:
+
+ https://nicknym.domain.org:6425/?address=alice@domain.org
+
+The nickserver returns authoritative key information from the provider's own user database. Every public key returned for local requests must be signed by the provider's key.
+
+**Foreign, Provider Key request**
+
+For example:
+
+ https://nicknym.domain.org:6425/?address=otherdomain.org
+
+1. First, check the nickserver's cache database of discovered keys. If the cache is not old, return this key. This step is skipped if the request is encrypted to the foreign provider's key.
+2. Otherwise, fetch provider key from the provider's nickserver, cache the result, and return it.
+
+**Foreign, User Key request**
+
+For example:
+
+ https://nicknym.domain.org:6425/?address=bob@otherdomain.org
+
+* First, check the nickserver's database cache of nicknyms. If the cache is not old, return the key information found in the cache. This step is skipped if the request is encrypted to a foreign provider key.
+* Otherwise, attempt to contact a nickserver run by the provider of the requested address. If the nickserver exists, query that nickserver, cache the result, and return it in the response.
+* Otherwise, fall back to querying existing SKS keyservers, cache the result and return it.
+* Otherwise, return a 404 error.
+
+If the key returned for a foreign request contains multiple user addresses, they are all ignored by nicknym except for the user address specified in the request.
+
+Nickserver response
+---------------------------------
+
+The nickserver will respond with one of the following status codes:
+
+* "200 Success": found keys for this address, and the result is in the body of the response encoded as JSON.
+* "304 Not Modified": if the request included an "If-Modified-Since" header and the keys in question have not been modified, the response will have status code 304.
+* "404 Not Found": no keys were found for this address.
+* "500 Error": An unknown error occurred. Details may be included in the body.
+
+Responses with status code 200 include a body text that is a JSON encoded map with a field "address" plus one or more of the following fields: "openpgp", "otr", "rsa", "ecc", "x509-client", "x509-server", "x509-ca". For example:
+
+ {
+ "address": "alice@example.org",
+ "openpgp": "6VtcDgEKaHF64uk1c/crFhRHuFW9kTvgxAWAK01rXXjrxEa/aMOyXnVQuQINBEof...."
+ }
+
+Responses with status codes other than 200 include a body text that is a JSON encoded map with the following fields: "address", "status", and "message". For example:
+
+ {
+ "address": "bob@otherdomain.org",
+ "status": 404,
+ "message": "Not Found"
+ }
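+
+The two response shapes can be sketched as a helper a nickserver might use to compose its body text. `make_response` is a hypothetical name, not part of the protocol:

```python
import json

# Compose the two JSON response shapes described above: a 200 body with
# the address plus key fields, or an error body with status and message.
def make_response(address, status, keys=None, message=None):
    if status == 200:
        body = {'address': address}
        body.update(keys or {})
    else:
        body = {'address': address, 'status': status, 'message': message}
    return json.dumps(body)
```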
+
+A nickserver response is always signed with an OpenPGP public signing key associated with the provider. Both successful AND unsuccessful responses are signed. Responses to successful local requests must be signed by the key associated with the address "nicknym@domain.org". Foreign requests and non-200 responses may alternatively be signed with a key associated with the address nickserver@domain.org. This allows for user keys to be signed off-site and in advance, if they so choose. The signature is ASCII armored and appended to the JSON.
+
+ {
+ "address": "alice@example.org",
+ "openpgp": "6VtcDgEKaHF64uk1c/crFhRHuFW9kTvgxAWAK01rXXjrxEa/aMOyXnVQuQINBEof...."
+ }
+ -----BEGIN PGP SIGNATURE-----
+ iQIcBAEBCgAGBQJRhWO+AAoJEIaItIgARAAl2IwP/24z9CjKjD0fd27pQs+r+e3h
+ p8KAYDbVac3+c3vm30DjHO/RKF4Zq6+sTAIkrFvXOwYJl9KgjMpQVV/voInjxATz
+ -----END PGP SIGNATURE-----
+
+If the data in the request was encrypted to the public key nicknym@domain.org, then the JSON response and signature are additionally encrypted to the symmetric key found in the request and returned base64 encoded.
+
+TBD: maybe we should just switch to a raw RSA or ECC signature.
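+
+A client-side sketch of separating the JSON body from the appended armored signature. Actual signature verification would require an OpenPGP implementation and is omitted here:

```python
import json

SIG_HEADER = '-----BEGIN PGP SIGNATURE-----'

# Split a signed nickserver response into its JSON body and the
# ASCII-armored signature appended after it, as described above.
def split_signed_response(text):
    """Return (parsed_json, armored_signature)."""
    body, sep, sig = text.partition(SIG_HEADER)
    if not sep:
        raise ValueError('response is missing its signature')
    return json.loads(body), sep + sig
```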
+
+Query balancing
+------------------------
+
+A nickagent must choose what IP address to query by selecting randomly from among hosts that resolve from `nicknym.domain.org` (where `domain.org` is the domain name of the provider).
+
+If a host does not respond, a nickagent must skip over it and attempt to contact another host in the pool.
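+
+The balancing rule can be sketched as follows. `try_connect` is a hypothetical stand-in for a TCP/TLS connection attempt; the host pool would come from resolving `nicknym.domain.org`:

```python
import random

# Pick a nickserver at random from the resolved pool, skipping hosts
# that fail to respond, per the balancing rule above.
def pick_nickserver(hosts, try_connect):
    """Return the first reachable host, trying the pool in random order."""
    for host in random.sample(hosts, len(hosts)):
        if try_connect(host):
            return host
    raise ConnectionError('no reachable nickserver in pool')
```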
+
+Query security
+--------------------------
+
+TLS is required for all nickserver queries.
+
+When querying https://nicknym.domain.org, nickagent must validate the TLS connection in one of four possible ways:
+
+1. Using a commercial CA certificate distributed with the host operating system.
+2. Using DANE TLSA record to discover and validate the server certificate.
+3. Using a seeded CA certificate (see [Discovering nickservers](#Discovering.nickservers)).
+4. Using a custom self-signed CA certificate discovered for the domain, so long as the CA certificate was discovered via #1 or #2 or #3. Custom CA certificates may be discovered for a domain by making a provider key request of a nickserver (e.g. https://nicknym.known-domain.org/?address=new-domain.org).
+
+Optionally, a nickagent may make an encrypted query like so:
+
+0. Suppose the nickagent wants to make an encrypted query regarding the address alice@domain-x.org.
+1. Nickagent discovers the public key for nicknym@domain-y.org.
+2. The nickagent makes a POST request to a nickserver with two fields: address and ciphertext.
+3. The address only contains the domain part of the address (unlike an unencrypted request).
+4. The ciphertext field is encrypted to the public key for nicknym@domain-y.org. The corresponding cleartext contains the full address on the first line followed by randomly generated symmetric key on the second line.
+5. If the request is local, the nickserver handles it directly. If the request is foreign, the nickserver proxies it to the domain specified in the address field.
+6. When the request gets to the right nickserver, the body of the nickserver response is encrypted using the symmetric key. The first line of the response specifies the cipher and mode used (allowed ciphers TBD).
+
+Comment: although it may seem excessive to encrypt both the request via TLS and the request body via OpenPGP, the reason for this is that many requests will not use OpenPGP.
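+
+Steps 1 through 4 can be sketched as follows. `encrypt_to_key` is a hypothetical placeholder for OpenPGP encryption to nicknym@domain-y.org, and `build_encrypted_query` is an illustrative name:

```python
import base64
import os

# Assemble the two request fields for an encrypted query: 'address'
# carries only the domain part, and 'ciphertext' wraps the full address
# plus a fresh random symmetric key, per steps 1-4 above.
def build_encrypted_query(full_address, encrypt_to_key):
    domain = full_address.split('@', 1)[1]
    symmetric_key = base64.b64encode(os.urandom(32)).decode('ascii')
    cleartext = full_address + '\n' + symmetric_key
    request = {'address': domain, 'ciphertext': encrypt_to_key(cleartext)}
    return request, symmetric_key  # keep the key to decrypt the response
```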
+
+Automatic key validation
+----------------------------------
+
+A key is "validated" if the nickagent has bound the user address to a public key.
+
+Nicknym supports three different levels of key validation:
+
+* Level 3 - **path trusted**: A path of cryptographic signatures can be traced from a trusted key to the key under evaluation. By default, only the provider key from the user's provider is a "trusted key".
+* Level 2 - **provider signed**: The key has been signed by a provider key for the same domain, but the provider key is not validated using a trust path (i.e. it is only registered).
+* Level 1 - **registered**: The key has been encountered and saved, it has no signatures (that are meaningful to the nickagent).
+
+A nickagent will try to validate using the highest level possible.
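+
+Treating the three levels as an ordered scale makes "highest level possible" (and the "lower level of validation" check under Automatic invalidation) a simple integer comparison. A minimal sketch, with illustrative names:

```python
# The three validation levels described above, as an ordered scale.
VALIDATION_LEVELS = {
    'registered': 1,
    'provider_signed': 2,
    'path_trusted': 3,
}

def best_validation(available):
    """Pick the highest validation level a nickagent can establish."""
    return max(available, key=VALIDATION_LEVELS.__getitem__)
```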
+
+Automatic renewal
+-----------------------------
+
+A validated public key is replaced with a new key when:
+
+* The new key is **path trusted**
+* The new key is **provider signed**, but the old key is only **registered**.
+* The new key has a later expiration, and the old key is only **registered** and will expire "soon" (exact time TBD).
+* The agent discovers a new subkey, but the master signing key is unchanged.
+
+In all other cases, the new key is rejected.
+
+The nickagent will attempt to refresh a key by making a request to a nickserver of its choice when a key is past 3/4 of its lifespan, and again when it is about to expire.
+
+Nicknym encourages, but does not require, the use of short lived public keys, in the range of X to Y days. It is recommended that short lived keys are not uploaded to OpenPGP keyservers.
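+
+The replacement rules above can be sketched as a decision function. Keys are plain dicts with illustrative fields; the "soon" window (still TBD) is passed in as a predicate, and the subkey rule is omitted because it would require parsing the actual key material:

```python
# Decide whether a validated key 'old' should be replaced by 'new',
# following the first three replacement rules above.
def should_replace(old, new, expires_soon):
    if new['validation'] == 'path_trusted':
        return True
    if new['validation'] == 'provider_signed' and old['validation'] == 'registered':
        return True
    if (old['validation'] == 'registered' and expires_soon(old)
            and new['expires_on'] > old['expires_on']):
        return True
    return False  # in all other cases, the new key is rejected
```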
+
+Automatic invalidation
+----------------------------
+
+A key is invalidated if:
+
+* The old key has expired, and no new key can be discovered with equal or greater validation level.
+
+This means validation is a one way street: once a certain level of validation is established for a user address, no client should accept any future keys for that address with a lower level of validation.
+
+Discovering nickservers
+--------------------------------
+
+It is entirely up to the nickagent to decide what nickservers to query. If it wanted to, a nickagent could send all its requests to a single nickserver.
+
+However, nickagents should discover new nickservers and balance their queries to these nickservers for the purposes of availability, load balancing, network perspective, and hiding the user's association map.
+
+Whenever the nickagent is asked by a locally running application for a public key corresponding to an address on the domain `domain.org`, it may check to see if the host `nicknym.domain.org` exists. If the domain resolves, then the nickagent may add it to the pool of known nickservers. A nickagent should only perform this DNS check if it is able to do so over an encrypted tunnel.
+
+Additionally, a nickagent may be distributed with an initial list of "seed" nickservers. In this case, the nickagent is distributed with a copy of the CA certificate used to validate the TLS connection with each respective seed nickserver.
+
+Cross-provider signatures
+----------------------------------
+
+Nicknym does not support user signatures on user keys. There is no trust path from user to user. However, a service provider may sign the provider key of another provider.
+
+To be written.
+
+Auditing
+----------------------------
+
+In order to keep the user's provider from handing out bogus public keys, a nickagent should occasionally make foreign queries of the user's own address against nickservers run by third parties. The recommended frequency of these queries is once per day, at a random time during local waking hours.
+
+In order to prevent a nickserver from handing out bogus provider keys, a nickagent should query multiple nickservers before a provider key is registered or path trusted.
+
+Possible attacks:
+
+**Attack 1 - Intercept Outgoing:**
+
+* Attack: provider `A` signs an impostor key for provider `B` and distributes it to users of `A` (in order to intercept outgoing messages sent to `B`).
+* Countermeasure: By querying multiple nickservers for the provider key of `B`, the nickagent can detect if provider `A` is attempting to distribute impostor keys.
+
+**Attack 2 - Intercept Incoming:**
+
+* Attack: provider `A` signs an impostor key for one of its own users, and distributes to users of provider `B` (in order to intercept incoming messages).
+* Countermeasure: By querying for its own keys, a nickagent can detect if a provider is giving out bogus keys for its addresses.
+
+**Attack 3 - Association Mapping:**
+
+* Attack: A provider tracks all requests for key discovery in order to build an association map.
+* Countermeasure: By performing foreign key queries via third party nickservers, an agent can prevent any particular entity from tracking their queries.
+
+Known vulnerabilities
+------------------------------------------
+
+The nicknym protocol does not yet have a good solution for dealing with the following problems:
+
+* Enumeration attack: an attacker can enumerate the list of all users for a provider by simply querying every possible username combination. We have no defense against this, although it would surely take a while.
+* DDoS attack: by their very nature, nickservers perform a bit of work for every request. Because of this, they are vulnerable to being overloaded by a flood of bogus requests.
+* Besmirch attack: a MitM attacker can sully the reputation of a provider by generating many bad responses (responses signed with the wrong key), thus leading other nickservers and nicknym agents to consider the provider compromised.
+
+Future enhancements
+---------------------
+
+**Additional discovery mechanisms**
+
+In addition to nickservers and SKS keyservers, there are two other potential methods for discovering public keys:
+
+* **Webfinger** includes a standard mechanism for distributing a user's public key via a simple HTTP request. This is very easy to implement on the server, and very easy to consume on the client side, but there are not many webfinger servers with public keys in the wild.
+* **DNS** is used by multiple competing standards for key discovery. When and if one of these emerges as predominant, Nicknym should attempt to use this method when available.
+
+
+Reference nickagent implementation
+====================================================
+
+https://github.com/leapcode/keymanager
+
+There is a reference nickagent implementation called "key manager" written in Python and integrated into the LEAP client. It uses Soledad to store its data.
+
+Public API
+----------------------------
+
+**refresh_keys()**
+
+updates the keys with fresh ones, as needed.
+
+**get_key(address, type)**
+
+returns a single public key for address. type is one of 'openpgp', 'otr', 'x509', or 'rsa'.
+
+**send_key(address, public_key, type)**
+
+authenticates with the appropriate provider and saves the public_key in the user database.
+
+Storage
+--------------------------
+
+Key manager uses Soledad for storage. GPGME, however, requires keys to be stored in keyrings, which are read from disk.
+
+For now, Key Manager deals with this by storing each key in its own keyring. In other words, every key is in a keyring with exactly 1 key, and this keyring is stored in a Soledad document. To keep from confusing this keyring with a normal keyring, I will call it a 'unitary keyring'.
+
+Suppose Alice needs to communicate with Bob:
+
+1. Alice's Key Manager copies her private key and Bob's public key to disk. The Key Manager gets these from Soledad, in the form of unitary keyrings.
+2. Client code uses GPGME, feeding it these temporary keyring files.
+3. The keyrings are destroyed.
+
+TBD: how best to ensure destruction of the keyring files.
+
+An example Soledad document for an address:
+
+    {
+      "address": "alice@example.org",
+      "keys": [
+        {
+          "type": "openpgp",
+          "key": "binary blob",
+          "keyring": "binary blob",
+          "expires_on": "2014-01-01",
+          "validation": "provider_signed",
+          "first_seen_at": "2013-04-01 00:11:00",
+          "last_audited_at": "2013-04-02 12:00:00"
+        },
+        {
+          "type": "otr",
+          "key": "binary blob",
+          "expires_on": "2014-01-01",
+          "validation": "registered",
+          "first_seen_at": "2013-04-01 00:11:00",
+          "last_audited_at": "2013-04-02 12:00:00"
+        }
+      ]
+    }
+
+Pseudocode
+---------------------------
+
+get_key
+
+ #
+ # return a key for an address
+ #
+ function get_key(address, type)
+ if key for address exists in soledad database?
+ return key
+ else
+ fetch key from nickserver
+ save it in soledad
+ return key
+ end
+ end
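+
+A runnable rendering of the pseudocode above, with a dict standing in for Soledad and a stubbed nickserver fetch (both stand-ins are illustrative assumptions, not the key manager's actual code):
+
```python
soledad_db = {}  # address -> public key; a dict standing in for Soledad

def fetch_key_from_nickserver(address):
    # Stub: a real agent would query a nickserver over an encrypted channel.
    return "PUBLIC KEY FOR " + address

def get_key(address):
    """Return the key for an address, fetching and caching it on a miss."""
    if address not in soledad_db:
        soledad_db[address] = fetch_key_from_nickserver(address)
    return soledad_db[address]
```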
+
+send_key
+
+ #
+ # send the user's provider the user's key. this key will get signed by the provider, and replace any prior keys
+ #
+ function send_key(type)
+ if not authenticated:
+ error!
+ end
+ get (self.address, type)
+ send (key_data, type) to the provider
+ end
+
+refresh_keys
+
+ #
+ # update the user's db of validated keys to see if there are changes.
+ #
+ function refresh_keys()
+      for each key in the soledad database that is due for a check:
+        newkey = fetch_key_from_nickserver()
+        if key is about to expire and newkey complies with the renewal parameters:
+          replace key with newkey
+        else if fingerprint(key) != fingerprint(newkey):
+          something is wrong: flag this key for attention.
+          possibly handle revocation, or fetch the key from several network perspectives (e.g. via tor or vpn) and keep the most commonly returned key.
+ else:
+ everything's cool for this key, continue
+ end
+ end
+ end
+
+private fetch_key_from_nickserver
+
+ function fetch_key_from_nickserver(key)
+ randomly pick a subset of the available nickservers we know about
+ send a tcp request to each in this subset in parallel
+ first one that opens a successful socket is used, all the others are terminated immediately
+ make http request
+ parse json for the keys
+ return keys
+ end
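+
+The connection race described above could be sketched like this (an illustration, not the key manager's actual code; unlike the pseudocode, losing connections are simply closed as they finish rather than terminated immediately):
+
```python
import random
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def first_reachable(nickservers, subset_size=3, timeout=5.0):
    """Race TCP connections to a random subset of nickservers and return
    the (host, port) of the first server that answers."""
    subset = random.sample(nickservers, min(subset_size, len(nickservers)))

    def try_connect(addr):
        # Opens (and immediately closes) a TCP connection to the server.
        with socket.create_connection(addr, timeout=timeout):
            return addr

    with ThreadPoolExecutor(max_workers=len(subset)) as pool:
        futures = [pool.submit(try_connect, addr) for addr in subset]
        for future in as_completed(futures):
            try:
                return future.result()
            except OSError:
                continue  # this server was unreachable; try the next winner
    raise RuntimeError("no nickserver reachable")
```
+
+The winning server would then receive the HTTP request whose JSON reply is parsed for the keys.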
+
+
+Reference nickserver implementation
+=====================================================
+
+https://github.com/leapcode/nickserver
+
+The reference nickserver is written in Ruby 1.9 and licensed GPLv3. It is lightweight and scalable (supporting high concurrency and reasonable latency), and uses EventMachine for asynchronous network IO. Data is stored in CouchDB.
+
diff --git a/pages/docs/design/overview.md b/pages/docs/design/overview.md
new file mode 100644
index 0000000..e477806
--- /dev/null
+++ b/pages/docs/design/overview.md
@@ -0,0 +1,403 @@
+@nav_title = "Overview"
+@title = "Overview of LEAP architecture"
+@summary = "Bird's eye view of how all the pieces fit together."
+
+The LEAP Platform allows an organization to deploy and manage a complete infrastructure for providing user communication services.
+
+This document gives a brief overview of how the pieces fit together.
+
+LEAP Client
+===================
+
+The LEAP Client is an application that runs on a user's own device and is responsible for all encryption of user data. The client must be installed on a user's device before they can access any LEAP services (except for user support via the web application).
+
+Desktop Client
+--------------------------
+
+LEAP Client for Linux, Windows, and Mac.
+
+Written in: Python
+
+Libraries used: QT, PyQT, OpenVPN, Sqlite, Sqlcipher, U1DB, OpenSSL, GPG.
+
+User interface:
+
+* First run wizard: walks the user through the bootstrap process when the client is first run (either registering a new user or authenticating as an existing user)
+* Preferences panel: A mac system-preferences-like place to edit all the LEAP client settings (does not exist yet).
+* Task bar: Show the status of LEAP services (connected? syncing?), and lets the user open the preferences panel.
+* Update wizard: a dialog that shows the code update progress.
+
+Android Client
+------------------------------
+
+LEAP Client for Android.
+
+Written in: Java (possibly with some Python in the future)
+
+Libraries used: sqlcipher, sqlite, bouncycastle, U1DB, OpenVPN.
+
+User interface:
+
+* Single button to connect or disconnect encrypted internet
+* A notification drawer item indicating status of VPN
+* A first run wizard
+
+Features (planned):
+
+* a sync provider to allow contacts and calendar data to be sync'ed via Soledad.
+* eventually, match the desktop client in features.
+
+
+LEAP Admin Tools
+====================================
+
+Platform Recipes
+------------------------------
+
+The LEAP platform recipes define an abstract service provider. They consist of puppet modules designed to work together to give a system administrator everything they need to manage a service provider infrastructure that delivers secure communication services.
+
+Typically, a system administrator will not need to modify the LEAP platform recipes, although they are free to fork and merge as desired. Most service providers using the LEAP platform will use the same platform recipes.
+
+The recipes are abstract. In order to configure settings for a particular service provider, a system administrator creates a provider instance. The platform recipes also include a base provider that provider instances inherit from.
+
+Provider Instance
+----------------------------------
+
+A "provider instance" is a directory tree (typically tracked in git) containing all the configurations for a service provider's infrastructure. A provider instance primarily consists of:
+
+* A configuration file for each server (node) in the provider's infrastructure (e.g. nodes/vpn1.json)
+* A global configuration file for the provider (e.g. provider.json).
+* Additional files, such as certificates and keys (e.g. files/nodes/vpn1/vpn1_ssh.pub).
+* A pointer to the platform recipes (as defined in "Leapfile")
+
+A minimal provider instance directory looks like this:
+
+
+ └── bitmask # provider instance directory.
+ ├── common.json # settings common to all nodes.
+ ├── Leapfile # various settings for this instance.
+ ├── provider.json # global settings of the provider.
+ ├── files/ # keys, certificates, and other files.
+ ├── nodes/ # a directory for node configurations.
+ └── users/ # public key information for privileged sysadmins.
+
+A provider instance directory contains everything needed to manage all the servers that compose a provider's infrastructure. Because of this, you can use normal git development work-flow to manage your provider instance.
+
+Command line program
+-------------------------------
+
+The command line program `leap` is used by sysadmins to manage everything about a service provider's infrastructure. Except when creating a new provider instance, `leap` is run from within the directory tree of a provider instance.
+
+The `leap` command line has many capabilities, including:
+
+* create an initial provider instance
+* create, initialize, and deploy nodes (e.g. servers)
+* manage keys and certificates
+* query information about the node configurations
+
+Traditional system configuration automation systems, like puppet or chef, deploy changes to servers using a pull method. Each server pulls a manifest from a central master server and uses this to alter the state of the server.
+
+Instead, LEAP uses a masterless push method: The user runs 'leap deploy' from the provider instance directory on their desktop machine to push the changes out to every server (or a subset of servers). LEAP still uses puppet, but there is no central master server that each node must pull from.
+
+One other significant difference between LEAP and typical system automation is how interactions among servers are handled. Rather than store a central database of information about each server that can be queried when a recipe is applied, the `leap` command compiles a static representation of all the information a particular server will need in order to apply the recipes. In compiling this static representation, `leap` can use arbitrary programming logic to query and manipulate information about other servers.
+
+These two approaches, masterless push and pre-compiled static configuration, allow the sysadmin to manage a set of LEAP servers using traditional software development techniques of branching and merging, to more easily create local testing environments using virtual servers, and to deploy without the added complexity and failure potential of a master server.
+
+Server-side Components
+=======================================
+
+These are components where most of the code and logic runs on a server (as opposed to client-side components, where most of the code runs on the client).
+
+Databases
+------------------------------------
+
+All user data is stored using BigCouch, a decentralized and high-availability version of CouchDB.
+
+The databases are used by the different services and sometimes work as communication channels between the services.
+
+These are the databases we currently use:
+
+* customers -- payment information for the webapp
+* identities -- alias information, written by the webapp, read by leap_mx and nickserver
+* keycache -- used by the nickserver
+* sessions -- web session persistence for the webapp
+* shared -- used by soledad
+* tickets -- help tickets issued in the webapp
+* tokens -- created by the webapp on login, used by soledad to authenticate
+* users -- user records used by the webapp including the authentication data
+* user-...id... -- client-encrypted user data accessed from the client via soledad
+
+### Database Setup
+
+The main couch databases are initially created, seeded and updated when deploying the platform.
+
+The site_couchdb module contains the database description and security settings in `manifests/create_dbs.pp`. The design docs are seeded from the files in `files/designs/:db_name`. If these files change the next puppet deploy will update the databases accordingly. Both the webapp and soledad have scripts that will dump the required design docs so they can be included here.
+
+The per-user databases are created upon user registration by [Tapicero](https://leap.se/docs/design/tapicero). Tapicero also adds security and design documents. The design documents for per-user databases are stored in the [tapicero repository](https://github.com/leapcode/tapicero) in `designs`. Tapicero can be used to update existing user databases with new security settings and design documents.
+
+### BigCouch
+
+Like many NoSQL databases, BigCouch is inspired by [Amazon's Dynamo paper](http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf) and works by sharding each database among many servers using a circular ring hash. The number of shards might be greater than the number of servers, in which case each server would have multiple shards of the same database. Each server in the BigCouch cluster appears to contain the entire database, but when it does not hold a requested document itself, it transparently proxies the request to the server that does.
+
+Important BigCouch constants:
+
+* Q -- The number of shards over which a database will spread.
+* N -- The number of redundant copies of each document. Default is 3.
+* W -- The number of document copies that must be saved before a document is considered 'written'. Default is 2.
+* R -- The number of document copies that must be found before a document is considered 'read'. Default is 2.
+* Z -- The number of zones in the cluster. Each zone will have a complete copy of all the data. Default is 1.
+
+In LEAP, every service that needs to interact with the database runs a local HTTP load balancer that distributes database requests randomly to the BigCouch cluster. If a BigCouch node dies, the load balancer detects this and takes it out of rotation (this usage is typical of BigCouch installations).
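+
+A much-simplified sketch of how a document id might map onto Q shards with N consecutive replicas (BigCouch's actual hash function and placement logic differ in detail; this only illustrates the ring idea):
+
```python
import hashlib

Q = 8  # number of shards a database is spread over
N = 3  # number of redundant copies of each document

def shard_for(doc_id, q=Q):
    """Hash a document id onto a ring of q shard slots."""
    h = int.from_bytes(hashlib.sha1(doc_id.encode()).digest()[:8], "big")
    return h % q

def replica_shards(doc_id, q=Q, n=N):
    """The n consecutive ring positions holding copies of the document."""
    first = shard_for(doc_id, q)
    return [(first + i) % q for i in range(n)]
```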
+
+Web App
+------------------------------
+
+The LEAP Web App provides the following functions:
+
+* User registration and management
+* Help tickets
+* Client certificate renewal
+* Webfinger access to user's public keys
+* Email aliases and forwarding
+* Localized and customizable documentation
+
+Written in: Ruby, Rails.
+
+The Web App communicates with:
+
+* CouchDB is used for all data storage.
+* Web browsers of users accessing the user interface in order to edit their settings or fill out help tickets. Additionally, admins may delete users.
+* LEAP Clients access the web app's REST API in order to register new users, authenticate existing ones, and renew client certificates.
+* Tokens are stored in CouchDB upon successful authentication, to allow the client to authenticate against other services.
+
+Nickserver
+------------------------------
+
+Written in: Ruby
+Libraries: EventMachine, GPG
+
+Nickserver is the opposite of a key server. A key server allows you to lookup keys, and the UIDs associated with a particular key. A nickserver allows you to query a particular 'nick' (e.g. username@example.org) and get back relevant public key information for that nick.
+
+Nickserver has the following properties:
+
+* Written in Ruby, licensed GPLv3
+* Lightweight and scalable (high concurrency, reasonable latency)
+* Uses asynchronous network IO for both server and client connections (via EventMachine)
+* Attempts to reply to queries using four different methods:
+ * Cached key in CouchDB
+ * Webfinger
+ * DNS
+ * HKP keyserver pool (https://hkps.pool.sks-keyservers.net)
+
+Why bother writing Nickserver instead of just using the existing HKP keyservers?
+
+* Keyservers are fundamentally different: Nickserver is a registry of 1:1 mapping from nick (uid) to public key. Keyservers are directories of public keys, which happen to have some uid information in the subkeys, but there is no way to query for an exact uid.
+* Support clients: the goal is to provide clients with a cloud-based method of rapidly and easily converting nicks to keys. Client code can stay simple by pushing more of the work to the server.
+* Enhancements over keyservers: the goal with Nickserver is to support future enhancements like webfinger, DNS key lookup, mail-back verification, network perspective, and fast distribution of short lived keys.
+* Scalable: the goal is for a service that can handle many simultaneous requests very quickly with low memory consumption.
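+
+A minimal sketch of a client-side lookup against a nickserver. The URL shape and the JSON reply format shown here are assumptions for illustration; consult the nickserver repository for the real API:
+
```python
import json
from urllib.parse import urlencode

def build_query(nickserver_host, address):
    """Build a nickserver lookup URL (URL shape assumed for illustration)."""
    return "https://%s/?%s" % (nickserver_host, urlencode({"address": address}))

def parse_reply(body):
    """Pull the address and key out of a JSON reply shaped like
    {"address": ..., "openpgp": ...} (an assumed shape)."""
    reply = json.loads(body)
    return reply["address"], reply["openpgp"]
```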
+
+Miscellaneous
+------------------------------
+
+A LEAP service provider might also run servers with the following services:
+
+* git -- private git repository hosting.
+* Domain Name Server -- Authoritative name server for the provider's domain.
+* Tapicero -- headless daemon that watches couch changes for new users and creates their databases
+
+Client-side Components
+======================================
+
+Most of the code and processing for these components happens on the client-side, although they all include some interaction with cloud services.
+
+Soledad
+------------------------------
+
+Soledad stands for "Synchronization Of Locally Encrypted Data Among Devices". On the client side, Soledad is responsible for client-encrypting user data, keeping it in sync with the copy in the cloud, and for providing local applications with a simple API for data storage. This "client-side Soledad" is essentially a local database that is kept in sync with the cloud. The "Soledad Server" is the cloud-based component that the client syncs with.
+
+Written in: Python (on desktops and servers), possibly Java (on android, not yet written).
+
+Libraries used:
+
+* Client-side: U1DB, Sqlite, Sqlcipher, GPG.
+* Server: U1DB (forked), CouchDB.
+
+Client-side Soledad communicates with:
+
+* Other client application code, providing a storage API.
+* Soledad Server via the U1DB synchronization protocol.
+
+Soledad Server communicates with:
+
+* LEAP Client via the U1DB synchronization protocol
+* CouchDB or OpenStack Object Storage for backend storage.
+
+Client-side Soledad Notes:
+
+* Soledad is a modification of the U1DB python reference implementation, with changes to support client-side encryption and to replace sqlite with sqlcipher.
+* Local data is stored on disk in SQLite database files that are block-encrypted with sqlcipher (AES128).
+* Before being synced to the server, each document is block-encrypted using a symmetric key composed from an HMAC of the document id and a long secret (the Soledad secret).
+* The Soledad secret is stored on disk encrypted to the user's OpenPGP key. A copy is stored on the server as well. The same secret is shared among all the clients a user has activated.
+* Soledad inherits these traits from U1DB:
+ * The storage API used by client code is similar to couchdb (schema-less document storage with indexes).
+ * Application code using Soledad is responsible for resolving sync conflicts.
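+
+The per-document sync key described in the notes above (an HMAC of the document id with the long Soledad secret) can be sketched like this; the digest choice and secret shown are assumptions, not Soledad's exact parameters:
+
```python
import hashlib
import hmac

# A stand-in secret; the real Soledad secret is long, random, and stored
# encrypted to the user's OpenPGP key.
soledad_secret = b"example-long-soledad-secret"

def document_sync_key(doc_id):
    """Derive the symmetric key that encrypts one document before sync."""
    return hmac.new(soledad_secret, doc_id.encode(), hashlib.sha256).digest()

k1 = document_sync_key("doc-1")
k2 = document_sync_key("doc-2")
```
+
+Deriving a distinct key per document means the server never sees one key reused across documents, while the client only has to protect the single long secret.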
+
+Secrets Manager
+------------------------------
+
+Not yet written.
+
+Written in: Python
+Libraries used: GPG, GnuTLS
+
+Communicates with: Nickserver (cloud), Soledad (local).
+
+The Secrets Manager is a library that exposes to local client code an API for managing cryptographic material. It is responsible for:
+
+* private secrets: the user's private and public keys and certificates.
+* public keys: discovering, registering, and trusting the public keys of other people.
+* creation: creating keys as needed.
+* renewal: fetch a new client certificate when the current one is about to expire.
+* recovery: allow the user to recover their data if they lose everything except for a recovery code.
+* crypto hardware: allow the user to unlock secrets via an OpenPGP smart card like cryptostick.
+
+**Example secrets**
+
+* A user's OpenPGP keypair
+* The symmetric key used to encrypt local data (used by sqlcipher)
+* The client certificate used to auth with OpenVPN gateway.
+* The client certificate used to auth with the SMTP gateway.
+
+**Public key management**
+
+Some functionality of public key management:
+
+* Discover the public keys of recipients and senders via a Nickserver.
+* "Register" the discovered keys, either using a federated path through the provider, directly, or via trust on first use (TOFU). For now, we will start initially with TOFU.
+* Allow the user to choose between two competing keys when a recipient has multiple candidate keys.
+* Allow the user to specify keys that should be not used.
+* Allow the user to manually specify a user's public key.
+
+**Recovery**
+
+* Allow the user to generate and print out a recovery code. This creates a record on the server, in an anonymized way, that can be used to restore all the secrets stored by the key/secret manager and thus recover all your data. The provider should not know what recovery information maps to which user.
+* Eventually, perhaps allow the user to specify other users who have the power to recover their lost secrets in the event that the user forgets their password.
+* Allow the user to enter this recovery code when they have lost their username and password. If this is enabled, the user's private keys are stored in the cloud, albeit encrypted and anonymized.
+* Give some users the option of full recovery via email reset by storing the user's password on the server. This would be a very low security option, but one that some users may wish to opt-in for.
+
+**Notes**
+
+* All secrets are stored in Soledad, except the secret to unlock Soledad storage. This way, all clients will have access to the same secrets. For some things, like validated public keys, this is exactly what we want. For other things, this could be a problem, and should be refined in future revisions.
+* The current scheme is to store the user's private keys and private secrets in their Soledad storage. This allows a user to login with a different device and be all set up. There are, however, certainly problems with this approach.
+
+
+Bootstrap
+------------------------------
+
+Parts of this are written.
+
+Written in: Python
+
+* Register new accounts or authenticate via the REST API, using SRP.
+* Download the provider's definition file, and various service definition files.
+* Validate the CA certificate of the service provider.
+* If using an existing account on a new device, fetch user's secrets from the cloud (not yet written).
+* If creating a new account, generate a key pair and store in the cloud (not yet written).
+
+Update Manager
+------------------------------
+
+Not yet written.
+
+Handles upgrading the client by downloading and installing signed code.
+
+Three goals:
+
+* Frequent Updates: we want to be able to push out small and frequent updates should the need arise.
+* Secure Updates: we want to ensure that the update mechanism cannot be used as an attack vector.
+* Third Party Updates: we want a third party to be responsible for updates, NOT the service provider itself.
+
+End User Services
+=========================================
+
+Email
+------------------------------
+
+Not yet working, some of the parts are written.
+
+Written in: Python
+
+Email in the client consists of three parts:
+
+* SMTP Proxy: for outgoing mail.
+ * Communicates with user's MUA (local), Key Manager (local), Nickserver (cloud), and SMTP relay (cloud).
+* Message Receiver: for incoming mail.
+ * Communicates with Soledad (local), Key Manager (local).
+* IMAP Server: for reading and writing to user's mailbox.
+ * Communicates with Soledad (local), user's MUA (local).
+
+Outgoing mail workflow:
+
+* LEAP client runs a thin SMTP proxy on the user's device, bound to localhost.
+* User's MUA is configured to use localhost for outgoing SMTP.
+* When the SMTP proxy receives an email from the MUA:
+  * SMTP proxy queries Key Manager for the user's private key and the public keys of all recipients.
+  * Message is signed by the sender and encrypted to the recipients.
+  * If a recipient's key is missing, email goes out in cleartext (unless the user has configured the option to send only encrypted email).
+  * Finally, the message is relayed to the provider's SMTP relay.
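+
+The proxy's encrypt-or-cleartext decision above can be sketched as a pure function (the function name and return values are illustrative assumptions, not the client's actual API):
+
```python
def prepare_outgoing(recipients, known_keys, encrypted_only=False):
    """Decide how the SMTP proxy handles a message, per the workflow above.

    Returns 'encrypt' when every recipient has a known public key,
    'cleartext' when a key is missing and the user permits it, and
    'reject' when the user has opted to send only encrypted email.
    """
    if all(r in known_keys for r in recipients):
        return "encrypt"
    if encrypted_only:
        return "reject"
    return "cleartext"
```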
+
+Incoming email workflow:
+
+* Incoming message is received by provider's MX servers.
+* Message is encrypted to the user's public key (if not already so), and stored in the user's incoming message queue.
+* Message queue is synced to client device via Soledad.
+* "Message Receiver" in the LEAP Client empties message queue, unencrypting each message and saving it in the user's inbox, stored in local Soledad database.
+* Local database gets client-encrypted and sync'ed to cloud and other devices owned by the user via Soledad.
+
+Mail storage workflow:
+
+* LEAP client runs a thin IMAP server on the user's device, bound to localhost.
+* User's MUA is configured to use localhost for the mail account.
+* Local IMAP server runs against a local database of the user's email data (accessed via Soledad).
+* Soledad will sync changes made to mailboxes with the cloud and other clients.
+
+Encrypted Internet
+------------------------------
+
+The goal behind the encrypted internet service is to provide an automatic, always on, trouble-free way to encrypt a user's network traffic. For now, we use OpenVPN for the transport (OpenVPN uses TLS for session negotiation and its own encrypted channel for the data).
+
+Written in: C (OpenVPN binary), Python (desktop controlling code), Java (android controlling code)
+Libraries: QT
+Uses: OpenVPN
+
+Communicates with:
+
+* All traffic is routed through one of the provider's OpenVPN gateways
+* OpenVPN binary and LEAP client communicate via a telnet administration interface to OpenVPN.
+* Client discovers gateways and fetches client certificate from the provider's HTTP API.
+
+User Interface:
+
+* Initial connection attempt takes place in the first run wizard, displaying any errors along the way.
+* After first run, the client will display the status of the encrypted internet in the task tray (windows, linux), menu bar (mac), or notification drawer (android).
+* The three main UI functions of the encrypted internet will be: connect/disconnect, choose gateway, view errors.
+
+Notes:
+
+* OpenVPN must be started with superuser privileges (or have the ability to execute network changes as superuser). Afterwards, it can drop the privileges.
+* OpenVPN authentication with the gateway uses an x.509 client certificate. This certificate is short lived, and is acquired by the client from the provider's HTTP API as needed.
+
+Workflow:
+
+* user installs client
+* on first run
+ * client downloads and validates service provider's definition file, CA cert, and encrypted internet service definition file.
+ * user registers new account or authenticates with provider's webapp REST API
+ * SRP is used, server never sees the password and does not store a hash of the password.
+ * if registering, new record is created for user in distributed users db.
+* client gets a new client certificate from webapp, if missing or expired
+ * authenticate via SRP with webapp
+ * webapp retrieves client cert from a pool of pre-generated certificates.
+  * cert pool is filled as needed by a background CA daemon.
+* client connects to openvpn gateway, picked from among those listed in service definition file, authenticating with client certificate.
+* by default, when user starts computer the next time, client autoconnects.
diff --git a/pages/docs/design/soledad.md b/pages/docs/design/soledad.md
new file mode 100644
index 0000000..a0eeed4
--- /dev/null
+++ b/pages/docs/design/soledad.md
@@ -0,0 +1,423 @@
+@title = 'Soledad'
+@summary = 'A server daemon and client library to provide client-encrypted application data that is kept synchronized among multiple client devices.'
+@toc = true
+
+Introduction
+=====================
+
+Soledad allows client applications to securely share synchronized document databases. Soledad aims to provide a cross-platform, cross-device, syncable document storage API, with the addition of client-side encryption of database replicas and document contents stored on the server.
+
+Key aspects of Soledad include:
+
+* **Client and server:** Soledad includes a server daemon and client application library.
+* **Client-side encryption:** Soledad puts very little trust in the server by encrypting all data before it is synchronized to the server and by limiting ways in which the server can modify the user's data.
+* **Local storage:** All data cached locally is stored in an encrypted database.
+* **Document database:** An application using the Soledad client library is presented with a document-centric database API for storage and sync. Documents may be indexed, searched, and versioned.
+
+The current reference implementation of Soledad is written in Python and distributed under a GPLv3 license.
+
+Soledad is an acronym of "Synchronization of Locally Encrypted Documents Among Devices" and means "solitude" in Spanish.
+
+Goals
+======================
+
+**Security goals**
+
+* *Client-side encryption:* Before any data is synced to the cloud, it should be encrypted on the client device.
+* *Encrypted local storage:* Any data cached in the client should be stored in an encrypted format.
+* *Resistant to offline attacks:* Data stored on the server should be highly resistant to offline attacks (i.e. an attacker with a static copy of data stored on the server would have a very hard time discerning much from the data).
+* *Resistant to online attacks:* Analysis of storing and retrieving data should not leak potentially sensitive information.
+* *Resistance to data tampering:* The server should not be able to provide the client with old or bogus data for a document.
+
+**Synchronization goals**
+
+* *Consistency:* multiple clients should all get sync'ed with the same data.
+* *Sync flag:* the ability to partially sync data. For example, so a mobile device doesn't need to sync all email attachments.
+* *Multi-platform:* supports both desktop and mobile clients.
+* *Quota:* the ability to identify how much storage space a user is taking up.
+* *Scalable cloud:* distributed master-less storage on the cloud side, with no single point of failure.
+* *Conflict resolution:* conflicts are flagged and handed off to the application logic to resolve.
+
+**Usability goals**
+
+* *Availability*: the user should always be able to access their data.
+* *Recovery*: there should be a mechanism for a user to recover their data should they forget their password.
+
+**Known limitations**
+
+These are currently known limitations:
+
+* The server knows when the contents of a document have changed.
+* There is no facility for sharing documents among multiple users.
+* Soledad is not able to prevent the server from withholding new documents or new revisions of a document.
+* Deleted documents are never actually removed, just emptied. This is useful for security reasons, but could lead to DB bloat.
+
+**Non-goals**
+
+* Soledad is not for filesystem synchronization, storage or backup. It provides an API for application code to synchronize and store arbitrary schema-less JSON documents in one big flat document database. One could model a filesystem on top of Soledad, but it would be a bad fit.
+* Soledad is not intended for decentralized peer-to-peer synchronization, although the underlying synchronization protocol does not require a server. Soledad takes a cloud approach in order to ensure that a client has quick access to an available copy of the data.
+
+Related software
+==================================
+
+[Crypton](https://crypton.io/) - Similar goals to Soledad, but in javascript for HTML5 applications.
+
+[Mylar](https://github.com/strikeout/mylar) - Like Crypton, Mylar can be used to write secure HTML5 applications in javascript. Uniquely, it includes support for homomorphic encryption to allow server-side searches.
+
+[Firefox Sync](https://wiki.mozilla.org/Services/Sync) - A client-encrypted data sync from Mozilla, designed to securely synchronize bookmarks and other browser settings.
+
+[U1DB](http://pythonhosted.org/u1db/) - Similar API as Soledad, without encryption.
+
+Soledad protocol
+===================================
+
+Document API
+-----------------------------------
+
+Soledad's document API is similar to the [API used in U1DB](http://pythonhosted.org/u1db/reference-implementation.html).
+
+* Document storage: `create_doc()`, `put_doc()`, `get_doc()`.
+* Synchronization with the server replica: `sync()`.
+* Document indexing and searching: `create_index()`, `list_indexes()`, `get_from_index()`, `delete_index()`.
+* Document conflict resolution: `get_doc_conflicts()`, `resolve_doc()`.
+
+For example, create a document, modify it and sync:
+
+ sol.create_doc({'my': 'doc'}, doc_id='mydoc')
+ doc = sol.get_doc('mydoc')
+ doc.content = {'new': 'content'}
+ sol.put_doc(doc)
+ sol.sync()
+
+Storage secret
+-----------------------------------
+
+The `storage_secret` is a long, randomly generated key used to derive encryption keys for both the documents stored on the server and the local replica of these documents. The `storage_secret` is block encrypted using a key derived from the user's password and saved locally on disk in a file called `<user_uid>.secret`, which contains a JSON structure that looks like this:
+
+    {
+        "storage_secrets": {
+            "<secret_id>": {
+                "kdf": "scrypt",
+                "kdf_salt": "<b64 repr of salt>",
+                "kdf_length": <key_length>,
+                "cipher": "aes256",
+                "length": <secret_length>,
+                "secret": "<encrypted storage_secret>"
+            }
+        },
+        "kdf": "scrypt",
+        "kdf_salt": "<b64 repr of salt>",
+        "kdf_length": <key_length>
+    }
+
+The `storage_secrets` entry is a map that stores information about available storage keys. Currently, Soledad uses only one storage key per provider, but this may change in the future.
+
+The following fields are stored for one storage key:
+
+* `secret_id`: a handle used to refer to a particular `storage_secret` and equal to `sha256(storage_secret)`.
+* `kdf`: the key derivation function to use. Only scrypt is currently supported.
+* `kdf_salt`: the salt used in the kdf. The salt for scrypt is not random, but encodes important parameters like the limits for time and memory.
+* `kdf_length`: the length of the derived key resulting from the kdf.
+* `cipher`: the cipher used to encrypt `storage_secret`. Its key length must match `kdf_length` (i.e. the length of the `derived_key`).
+* `length`: the length of `storage_secret`, when not encrypted.
+* `secret`: the encrypted `storage_secret`, created by `sym_encrypt(cipher, storage_secret, derived_key)` (base64 encoded).
+
+Other variables:
+
+* `derived_key` is equal to `kdf(user_password, kdf_salt, kdf_length)`.
+* `storage_secret` is equal to `sym_decrypt(cipher, secret, derived_key)`.
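The derivation step above can be sketched with Python's standard-library `hashlib.scrypt`. This is an illustrative sketch, not Soledad's actual code; the scrypt parameters used are the ones listed later in this document (N=2^14, r=8, p=1, dkLen=32):

```python
import hashlib

def derive_key(user_password: bytes, kdf_salt: bytes, kdf_length: int = 32) -> bytes:
    # derived_key = kdf(user_password, kdf_salt, kdf_length), using scrypt
    # with the parameters documented later in this page.
    return hashlib.scrypt(
        user_password,
        salt=kdf_salt,
        n=16384,  # CPU/memory cost parameter (2^14)
        r=8,      # block size parameter
        p=1,      # parallelization parameter
        dklen=kdf_length,
    )
```

The `storage_secret` is then recovered as `sym_decrypt(cipher, secret, derived_key)`, which requires an AES implementation and is omitted here.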
+
+When a client application first wants to use Soledad, it must provide the user's password to unlock the `storage_secret`:
+
+ from leap.soledad.client import Soledad
+ sol = Soledad(
+ uuid='<user_uid>',
+ passphrase='<user_passphrase>',
+ secrets_path='~/.config/leap/soledad/<user_uid>.secret',
+ local_db_path='~/.config/leap/soledad/<user_uid>.db',
+ server_url='https://<soledad_server_url>',
+ cert_file='~/.config/leap/providers/<provider>/keys/ca/cacert.pem',
+ auth_token='<auth_token>',
+ secret_id='<secret_id>') # optional argument
+
+
+Currently, the `storage_secret` is shared among all devices with access to a particular user's Soledad database. See [Recovery and bootstrap](#Recovery.and.bootstrap) for how the `storage_secret` is initially installed on a device.
+
+We don't use the `derived_key` as the `storage_secret` because we want the user to be able to change their password without needing to re-key.
+
+Document encryption
+------------------------
+
+Before a JSON document is synced with the server, it is transformed into a document that looks like this:
+
+ {
+ "_enc_json": "<ciphertext>",
+ "_enc_scheme": "symkey",
+ "_enc_method": "aes256ctr",
+ "_enc_iv": "<initialization_vector>",
+ "_mac": "<auth_mac>",
+ "_mac_method": "hmac"
+ }
+
+About these fields:
+
+* `_enc_json`: The original JSON document, encrypted and hex encoded. Calculated as:
+ * `doc_key = hmac(storage_secret[MAC_KEY_LENGTH:], doc_id)`
+ * `ciphertext = hex(sym_encrypt(cipher, content, doc_key))`
+* `_enc_scheme`: Information about the encryption scheme used to encrypt this document (i.e.`pubkey`, `symkey` or `none`).
+* `_enc_method`: Information about the block cipher that is used to encrypt this document.
+* `_mac`: A MAC to prevent the server from tampering with stored documents. Calculated as:
+ * `mac_key = hmac(storage_secret[:MAC_KEY_LENGTH], doc_id)`
+ * `_mac = hmac(doc_id|rev|ciphertext|_enc_scheme|_enc_method|_enc_iv, mac_key)`
+* `_mac_method`: The method used to calculate the mac above (currently hmac).
+
+Other variables:
+
+* `doc_key`: This value is unique for every document and only kept in memory. We use `doc_key` instead of simply `storage_secret` in order to hinder possible derivation of `storage_secret` by the server. Every `doc_id` is unique.
+* `content`: equal to `sym_decrypt(cipher, ciphertext, doc_key)`.
+
+When receiving a document with the above structure from the server, Soledad client will first verify that `_mac` is correct, then decrypt the `_enc_json` to find `content`, which it saves as a cleartext document in the local encrypted database replica.
+
+The document MAC includes the document revision and the client will refuse to download a new document if the document does not include a higher revision. In this way, the server cannot rollback a document to an older revision. The server also cannot delete a document, since document deletion is handled by removing the document contents, marking it as deleted, and incrementing the revision. However, a server can withhold from the client new documents and new revisions of a document (including withholding document deletion).
+
+The currently supported encryption ciphers are AES256 (CTR mode) and XSalsa20. The currently supported MAC method is HMAC with SHA256.
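The key-splitting and MAC calculation above can be sketched with Python's standard library. This is illustrative only: the value of `MAC_KEY_LENGTH` and the plain byte concatenation of the MAC payload are assumptions, and the actual cipher step is omitted:

```python
import hashlib
import hmac

MAC_KEY_LENGTH = 32  # assumed split point into storage_secret

def per_document_keys(storage_secret: bytes, doc_id: bytes):
    # doc_key = hmac(storage_secret[MAC_KEY_LENGTH:], doc_id)
    # mac_key = hmac(storage_secret[:MAC_KEY_LENGTH], doc_id)
    doc_key = hmac.new(storage_secret[MAC_KEY_LENGTH:], doc_id, hashlib.sha256).digest()
    mac_key = hmac.new(storage_secret[:MAC_KEY_LENGTH], doc_id, hashlib.sha256).digest()
    return doc_key, mac_key

def document_mac(mac_key, doc_id, rev, ciphertext, enc_scheme, enc_method, enc_iv):
    # _mac = hmac(doc_id|rev|ciphertext|_enc_scheme|_enc_method|_enc_iv, mac_key)
    payload = b"".join([doc_id, rev, ciphertext, enc_scheme, enc_method, enc_iv])
    return hmac.new(mac_key, payload, hashlib.sha256).hexdigest()
```

Because the revision is part of the MAC payload, any attempt by the server to substitute an older revision changes the expected MAC and can be detected by the client.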
+
+Document synchronization
+-----------------------------------
+
+Soledad follows the U1DB synchronization protocol, with some changes:
+
+* Add the ability to flag some documents so they are not synchronized by default (not fully supported yet).
+* Refuse to synchronize a document if it is encrypted and the MAC is incorrect.
+* Always use `https://<soledad_server_url>/user-<user_uid>` as the synchronization URL.
+
+For example, to flag a document so it is not synchronized:
+
+ doc = sol.create_doc({'some': 'data'})
+ doc.syncable = False
+ sol.sync() # will not send the above document to the server!
+
+Document IDs
+--------------------
+
+Like U1DB, Soledad allows the programmer to use whatever ID they choose for each document. However, it is best practice to let the library choose random IDs for each document so as to ensure you don't leak information. In other words, leave the second argument to `create_doc()` empty.
+
+Re-keying
+-----------
+
+Sometimes there is a need to change the `storage_secret`. Rather than re-encrypt every document, Soledad implements a system called "lazy revocation" where a new `storage_secret` is generated and used for all subsequent encryption. The old `storage_secret` is still retained and used when decrypting older documents that have not yet been re-encrypted with the new `storage_secret`.
+
+Authentication
+-----------------------
+
+Unlike U1DB, Soledad only supports token authentication and does not support OAuth. Soledad itself does not handle authentication. Instead, this job is handled by a thin HTTP WSGI middleware layer running in front of the Soledad server daemon, which retrieves valid tokens from a shared database and compares them with the user-provided token. How the session token is obtained is beyond the scope of Soledad.
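A hypothetical sketch of such a middleware follows. The class name and the dict-based token lookup are invented stand-ins; the real middleware queries the shared token database instead:

```python
class TokenAuthMiddleware:
    """Hypothetical WSGI middleware that checks a per-user session token.

    `valid_tokens` stands in for the shared database of valid tokens; a
    real deployment would query that database rather than a dict."""

    def __init__(self, app, valid_tokens):
        self.app = app
        self.valid_tokens = valid_tokens  # token -> database name, e.g. "user-<uuid>"

    def __call__(self, environ, start_response):
        token = environ.get("HTTP_AUTHORIZATION", "")
        # Sync URLs look like /user-<user_uid>/...; take the first path segment.
        target_db = environ.get("PATH_INFO", "/").split("/")[1]
        if self.valid_tokens.get(token) != target_db:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"invalid token"]
        return self.app(environ, start_response)
```

The wrapped application only ever sees requests whose token matches the database being accessed; everything else is rejected before reaching it.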
+
+Bootstrap and recovery
+------------------------------------------
+
+Because documents stored on the server's database replica have their contents encrypted with keys based on the `storage_secret`, initial synchronizations of newly configured provider accounts are only possible if the secret is transferred from one device to another. Thus, installation of Soledad in a new device or account recovery after data loss is only possible if specific recovery data has previously been exported and either stored on the provider or imported on a new device.
+
+Soledad may export a recovery document containing recovery data, which may be password-encrypted and stored in the server, or stored in a safe environment in order to be later imported into a new Soledad installation.
+
+**Recovery document**
+
+An example recovery document:
+
+    {
+        "storage_secrets": {
+            "<secret_id>": {
+                "kdf": "scrypt",
+                "kdf_salt": "<b64 repr of salt>",
+                "kdf_length": <key_length>,
+                "cipher": "aes256",
+                "length": <secret_length>,
+                "secret": "<encrypted storage_secret>"
+            }
+        },
+        "kdf": "scrypt",
+        "kdf_salt": "<b64 repr of salt>",
+        "kdf_length": <key_length>,
+        "_mac_method": "hmac",
+        "_mac": "<mac>"
+    }
+
+About these fields:
+
+* `secret_id`: a handle used to refer to a particular `storage_secret` and equal to `sha256(storage_secret)`.
+* `kdf`: the key derivation function to use. Only scrypt is currently supported.
+* `kdf_salt`: the salt used in the kdf. The salt for scrypt is not random, but encodes important parameters like the limits for time and memory.
+* `kdf_length`: the length of the derived key resulting from the kdf.
+* `length`: the length of the secret.
+* `secret`: the encrypted `storage_secret`.
+* `cipher`: the cipher used to encrypt `secret`. Its key length must match `kdf_length` (i.e. the length of the `derived_key`).
+* `_mac_method`: The method used to calculate the mac above (currently hmac).
+* `_mac`: Defined as `hmac(doc_id|rev|ciphertext, doc_key)`. The purpose of this field is to prevent the server from tampering with the stored documents.
+
+Currently, scrypt parameters are:
+
+ N (CPU/memory cost parameter) = 2^14 = 16384
+    p (parallelization parameter) = 1
+ r (length of block mixed by SMix()) = 8
+ dkLen (length of derived key) = 32 bytes = 256 bits
+
+Other fields we might want to include in the future:
+
+* `expires_on`: the month in which this recovery document should be purged from the database. The server may choose to purge documents before their expiration, but it should not let them linger after it.
+* `soledad`: the encrypted `soledad.json`, created by `sym_encrypt(cipher, contents(soledad.json), derived_key)` (base64 encoded).
+* `reset_token`: an optional encrypted password reset token, if supported by the server, created by `sym_encrypt(cipher, password_reset_token, derived_key)` (base64 encoded). The purpose of the reset token is to allow recovery using the recovery code even if the user has forgotten their password. It is only applicable if using recovery code method.
+
+**Recovery database**
+
+In order to support easy recovery, the Soledad client stores a recovery document in a special recovery database. This database is shared among all users.
+
+The recovery database supports two functions:
+
+* `get_doc(doc_id)`
+* `put_doc(doc_id, recovery_document_content)`
+
+Anyone may perform an unauthenticated `get_doc` request. To mitigate potential attacks, responses to queries of the recovery database must have a long delay of X seconds. Also, the `doc_id` is very long (see below).
+
+Although the database is shared, the user must authenticate via the normal means before they are allowed to put a recovery document. Because of this, a nefarious server might potentially record which user corresponds to which recovery documents. A well behaved server, however, will not retain this information. If the server supports authentication via blind signatures, then this will not be an issue.
+
+
+**Recovery code (yet to be implemented)**
+
+We intend to offer data recovery by specifying a username and a recovery code. The choice of recovery type (using a password or a recovery code) must be made in advance of attempting recovery (e.g. at some point after the user has Soledad successfully running on a device).
+
+About the optional recovery code:
+
+* The recovery code should be randomly generated, at least 16 characters in length, and contain all lowercase letters (to make it sane to type into mobile devices).
+* The recovery code is not stored by Soledad. When the user needs to bootstrap a new device, a new code is generated. To be used for actual recovery, a user will need to record their recovery code by printing it out or writing it down.
+* The recovery code is independent of the password. In other words, if a recovery code is generated and the user then changes their password, the recovery code is still sufficient to restore the user's account even if the user has lost the password. This feature depends on the server supporting a password reset token. Also, generating a new recovery code does not affect the password.
+* When a new recovery code is created, a new recovery document must be pushed to the recovery database. A code should not be shown to the user before this happens.
+* The recovery code expires when the recovery database record expires (see below).
+
+The purpose of the recovery code is to prevent a compromised or nefarious Soledad service provider from decrypting a user's storage. The benefit of a recovery code over the user password is that the password has a greater opportunity to be compromised by the server. Even if authentication is performed via Secure Remote Password, the server may still perform a brute force attack to derive the password.
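Generating a code with the properties listed above is straightforward; a sketch using Python's `secrets` module (the function name is illustrative):

```python
import secrets
import string

def new_recovery_code(length: int = 16) -> str:
    # At least 16 characters, lowercase letters only, so it is sane to
    # type on a mobile device. 16 lowercase letters give roughly 75 bits
    # of entropy (16 * log2(26)).
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))
```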
+
+Reference implementation of client
+===================================
+
+https://github.com/leapcode/soledad
+
+Dependencies:
+
+* [U1DB](https://launchpad.net/u1db) provides an API and protocol for synchronized databases of JSON documents.
+* [SQLCipher](http://sqlcipher.net/) provides a block-encrypted SQLite database used for local storage.
+* python-gnupg
+* scrypt
+* pycryptopp
+
+Local storage
+--------------------------
+
+U1DB reference implementation in Python has an SQLite backend that implements the object store API over a common SQLite database residing in a local file. To allow for encrypted local storage, Soledad adds a SQLCipher backend, built on top of U1DB's SQLite backend, which adds [SQLCipher API](http://sqlcipher.net/sqlcipher-api/) to U1DB.
+
+**Responsibilities**
+
+The SQLCipher backend is responsible for:
+
+* Providing the SQLCipher API for U1DB (`PRAGMA` statements that control encryption parameters).
+* Guaranteeing that the local database used for storage is indeed encrypted.
+* Guaranteeing secure synchronization:
+ * All data being sent to a remote replica is encrypted with a symmetric key before being sent.
+  * Ensuring that data received from the remote replica is indeed encrypted with a symmetric key when it arrives, and then decrypted before being included in the local database replica.
+* Correctly representing and handling new Document properties (e.g. the `sync` flag).
+
+Part of the Soledad `storage_key` is used directly as the key for the SQLCipher encryption layer. SQLCipher supports the use of a raw 256-bit key if it is provided as a 64-character hex string. This skips the key derivation step (PBKDF2), which is redundant in our case. For example:
+
+ sqlite> PRAGMA key = "x'2DD29CA851E7B56E4697B0E1F08507293D761A05CE4D1B628663F411A8086D99'";
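Formatting the raw key for that `PRAGMA` can be sketched as follows (the helper name is hypothetical):

```python
import binascii

def raw_key_pragma(storage_key: bytes) -> str:
    # SQLCipher expects a raw 256-bit key as a 64-character hex string
    # inside x'...'; passing a raw key skips SQLCipher's PBKDF2 step.
    assert len(storage_key) == 32, "raw SQLCipher keys must be 256 bits"
    hex_key = binascii.hexlify(storage_key).decode().upper()
    return "PRAGMA key = \"x'%s'\";" % hex_key
```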
+
+**Classes**
+
+SQLCipher backend classes:
+
+* `SQLCipherDatabase`: An extension of `SQLitePartialExpandDatabase` used by Soledad Client to store data locally using SQLCipher. It implements the following:
+  * Requiring a password to instantiate the db.
+  * Verifying that the db instance is indeed encrypted.
+  * Using a LeapSyncTarget to encrypt content before synchronizing over HTTP.
+  * A "syncable" option for documents (users can mark documents as not syncable, so they do not propagate to the server).
+
+Encrypted synchronization target
+--------------------------------------------------
+
+To allow for database synchronization among devices, Soledad uses the following conventions:
+
+* Centralized synchronization scheme: Soledad clients always sync with a server, and never between themselves.
+* The server stores its database in a CouchDB database using a REST API over HTTP.
+* All data sent to the server is encrypted with a symmetric secret before being sent. Note that this ensures all data received by the server and stored in the CouchDB database has been encrypted by the client.
+* All data received from the server is validated as being an encrypted blob, and then is decrypted before being stored in local database. Note that the local database provides a new encryption layer for the data through SQLCipher.
+
+**Responsibilities**
+
+Provide sync between local and remote replicas:
+
+* Encrypt outgoing content.
+* Decrypt incoming content.
+
+**Classes**
+
+Synchronization-related classes:
+
+* `SoledadSyncTarget`: an extension of `HTTPSyncTarget` modified to encrypt documents' content before sending them to the network and to have more control of the syncing process.
+
+Reference implementation of server
+======================================================
+
+https://github.com/leapcode/soledad
+
+Dependencies:
+
+* [CouchDB](https://couchdb.apache.org/) for server storage, via the [python client library](https://pypi.python.org/pypi/CouchDB/0.8).
+* [Twisted](http://twistedmatrix.com/trac/) to run the WSGI application.
+* scrypt
+* pycryptopp
+* PyOpenSSL
+
+CouchDB backend
+-------------------------------
+
+On the server side, Soledad stores its database replicas in CouchDB servers. Soledad's CouchDB backend implementation is built on top of U1DB's `CommonBackend`, and stores and fetches data using a remote CouchDB server. It lacks indexing, first because we don't need that functionality on the server side, but also because, if not done very carefully, indexing could leak sensitive information about document contents.
+
+CouchDB backend is responsible for:
+
+* Initializing and maintaining the following U1DB replica data in the database:
+ * Transaction log.
+ * Conflict log.
+ * Synchronization log.
+* Mapping the U1DB API to CouchDB API.
+
+**Classes**
+
+* `CouchDatabase`: A backend used by Soledad Server to store data in CouchDB.
+* `CouchSyncTarget`: Just a target for syncing with Couch database.
+* `CouchServerState`: Interface of the WSGI server with the CouchDB backend.
+
+WSGI Server
+-----------------------------------------
+
+The U1DB server reference implementation provides for an HTTP API backed by SQLite databases. Soledad extends this with token-based auth HTTP access to CouchDB databases.
+
+* Soledad makes use of `twistd` from Twisted API to serve its WSGI application.
+* Authentication is done by means of a token.
+* Soledad implements a WSGI middleware in server side that:
+ * Uses the provided token to verify read and write access to each user's private databases and write access to the shared recovery database.
+ * Allows reading from the shared remote recovery database.
+ * Uses CouchDB as its backend.
+
+**Classes**
+
+* `SoledadAuthMiddleware`: implements the WSGI middleware with token based auth as described before.
+* `SoledadApp`: The WSGI application. For now, not different from `u1db.remote.http_app.HTTPApp`.
+
+**Authentication**
+
+The Soledad Server authentication middleware controls access to each user's private databases and to the shared recovery database. The Soledad client provides a token, and the Soledad server checks the validity of this token for the user's session by querying a certain database.
+
+A valid token for this user's session is required for:
+
+* Read and write access to this user's database.
+* Read and write access to the shared recovery database.
+
+Tests
+===================
+
+To make sure the newly implemented backends work correctly, we included in Soledad the U1DB tests that are relevant to the new pieces of code (backends, document, http(s) and sync tests). We also added specific tests for the new functionality we are building.
diff --git a/pages/docs/en.haml b/pages/docs/en.haml
new file mode 100644
index 0000000..41f68b3
--- /dev/null
+++ b/pages/docs/en.haml
@@ -0,0 +1,4 @@
+- @title = "LEAP Technical Documentation"
+- @nav_title = "Documentation"
+
+= child_summaries \ No newline at end of file
diff --git a/pages/docs/get-involved/bad-project-ideas.md b/pages/docs/get-involved/bad-project-ideas.md
new file mode 100644
index 0000000..dbe4bd2
--- /dev/null
+++ b/pages/docs/get-involved/bad-project-ideas.md
@@ -0,0 +1,69 @@
+### Import current GPG Key to be used with leap mail.
+
+* Contact: drebs, chiiph
+* Difficulty: Medium
+* Description: Current GPG users already have their key, and they may not need or want to migrate to a new key with their bitmask user, so it would be great if, instead of generating a new key, the client could ask for an alternative key to be imported. Another option would be the ability to have multiple keys for a user, with the client configurable enough that an advanced user can choose which to use.
+
+### Certificate perspectives through Tor or other methods.
+
+* Contact: chiiph
+* Difficulty: Easy to medium
+* Description: Properly trusting a certificate is not the easiest thing to do; if you are the target of a man-in-the-middle attack on your network, chances are you are going to be in trouble. One way to solve this problem is to have a better network perspective. This can be accomplished by launching Tor, building 3 circuits that exit from different parts of the world, downloading the certificate from each exit point, and then comparing the results.
+
+### Contact list replacement for Android based on Soledad
+
+* Dependencies: Soledad port on Android
+* Contact: drebs, chiiph
+* Difficulty: Easy to medium.
+* Description: A client-encrypted, synchronized solution for all your contacts across your devices can be easily built by using Soledad for storage and implementing a custom SyncAdapter for contacts and calendar.
+* Resources: https://developer.android.com/training/sync-adapters/creating-sync-adapter.html
+
+### Support for KVM, OpenVZ, LXC
+
+* Contact: elijah, micah, varac
+* Difficulty: -
+
+### Add OAuth2 auth to soledad server or other methods.
+
+* Contact: drebs, chiiph
+* Difficulty: Easy to medium
+* Description: OAuth is one of the most widely used authentication methods. Currently, the Soledad server only supports our own token authentication method, but that won't necessarily be the case for other Soledad adopters, so it would be great to use our pluggable auth design in the Soledad server to add as many authentication methods as possible, such as OAuth.
+
+### Tor support
+
+* Contact: chiiph, drebs
+* Difficulty: Easy to medium
+* Description: It would be great to be able to access a Soledad server through Tor, the idea is to add the necessary code in Soledad for this to be possible, and later on add that as a configuration option for the bitmask client.
+
+### Encrypted filesystem based on Soledad (FUSE)
+
+* Contact: chiiph, elijah
+* Difficulty: Medium to hard.
+* Description: There are certain issues with building a fully distributed secure file system solution, all of which can be solved with Soledad. One possible approach to this problem would be to use something like Tahoe-LAFS and use Soledad as the collector of your caps. Another approach could be using Soledad directly and handling problems like chunking by hand directly in this app.
+
+### Calendar app
+
+* Contact: chiiph, drebs
+* Difficulty: Easy to medium
+* Description: This task would involve basically building a UI for a calendar application that is Soledad backed, which would be easily sync'ed among all the user's devices.
+
+### Add leap token auth to Vines XMPP server
+* Contact: elijah
+* Difficulty: -
+* Skills: Ruby
+
+### Add MUC to Vines XMPP server
+* Contact: elijah
+* Difficulty: -
+* Description: -
+
+### Port Soledad to Android
+* Contact: drebs, chiiph
+* Difficulty: Medium to hard
+* Description: Soledad is currently built on top of U1DB's reference implementation, which is in Python. It also uses OpenSSL and pycryptopp for the cryptography bits. The possibilities for porting Soledad to Android are: implement it in pure C and use cryptopp (since that is what pycryptopp uses underneath), do a pure Java implementation, or try to run the Python code we are already using. It would be reasonable not to have the fastest implementation at first if running our Python code is possible and would shorten development time.
+
+### Port Keymanager to Android
+* Dependencies: Soledad for Android
+* Contact: drebs, chiiph
+* Difficulty: Medium
+* Description: The way we try to solve the key distribution problem is by having a NickServer and handling key logic through what we call the KeyManager. Currently, as most of our components, it's implemented in Python, so the same ideas apply here as for the Soledad port.
diff --git a/pages/docs/get-involved/coding.haml b/pages/docs/get-involved/coding.haml
new file mode 100644
index 0000000..236aa64
--- /dev/null
+++ b/pages/docs/get-involved/coding.haml
@@ -0,0 +1,73 @@
+- @title = "Contributing Code"
+- @summary = "How to issue a pull request."
+
+%p All development happens via pull requests. To add a new feature or fix a bug, the developer must create a new branch off <code>develop</code>, make their changes, and then issue a pull request for another developer to review before the changes get merged back into <code>develop</code>.
+
+%p Here is an example, using github with username <code>rms</code> and repository <code>bitmask_client</code>. You don't need to use github, but it is a friendly way to get started.
+
+%p
+ %b Step 1 &mdash; Fork on github
+
+%p Login to github.com as 'rms', browse to #{link 'https://github.com/leapcode/bitmask_client'}, and click the fork button.
+
+%p Now you should have a fork of the code available at <code>https://github.com/rms/bitmask_client</code>.
+
+%p
+ %b Step 2 &mdash; Set up local clone
+
+%p Clone the upstream repository:
+
+%pre
+ %code
+ git clone https://leap.se/git/bitmask_client
+ cd bitmask_client
+
+%blockquote
+  %p NOTE: Alternately, you can use the github mirror at <code>https://github.com/leapcode/bitmask_client</code>. It does not matter which one you choose.
+
+%p
+ %b Step 4 &mdash; Add a "remote" for your fork
+
+%p Next, you need to add the fork you created on github as an alternate remote in your local repository:
+
+%pre
+ %code
+ git remote add rms https://github.com/rms/bitmask_client.git
+
+%p
+ %b Step 5 &mdash; Create a new feature branch
+
+%pre
+ %code
+ git fetch origin
+ git checkout develop
+ git checkout -b feature/my_new_feature
+
+%p
+ %b Step 6 &mdash; Hack away
+
+%P Make all your changes in your <code>feature/my_new_feature</code> branch, with a separate <code>git commit</code> for each discrete modification you make.
+
+%p
+ %b Step 7 &mdash; Prepare for pull request
+
+%p Once you are happy with your branch, prepare it for a pull request by rebasing on the latest upstream <code>develop</code> branch. This will also give you an opportunity to clean up your commit history by squashing and changing commit messages.
+
+%pre
+ %code
+ git fetch origin # ensure the latest
+ git checkout feature/my_new_feature # if not already checked out
+ git rebase -i develop # rebase and clean up commits
+
+%p
+ %b Step 8 &mdash; Submit pull request
+
+%p Next, you will push your local feature branch to your fork on github, and then issue a pull request.
+
+%pre
+ %code
+ git push rms feature/my_new_feature
+
+%p Then browse to <code>https://github.com/rms/bitmask_client</code>, where you will see a handy button to issue a pull request. Make sure that the upstream branch is <code>leapcode/bitmask_client:develop</code> and you are requesting the merge of <code>rms/bitmask_client:feature/my_new_feature</code>.
+
+%p Then you are done. Some other developer will get a notice of your pull request, review the changes, and merge it into the upstream <code>develop</code> branch. If they have questions or comments, you will get an email from github.
diff --git a/pages/docs/get-involved/communication.md b/pages/docs/get-involved/communication.md
new file mode 100644
index 0000000..6102bde
--- /dev/null
+++ b/pages/docs/get-involved/communication.md
@@ -0,0 +1,25 @@
+@nav_title = "Communication channels"
+@title = "Communication channels for development"
+@summary = "How to communicate with other people working on the code."
+
+### IRC
+
+Probably the fastest and most reliable way to contact anyone involved with LEAP. Don't despair if you don't get a reply right away, we are all in different time zones and we all are able to read the scrollback history, so someone will reply eventually.
+
+ #leap on irc.freenode.net
+
+Topics related to coding, bugs, and development issues. Also general discussion and anything related to LEAP.
+
+### Mailing lists
+
+<div class="well">discuss&#x0040;leap&#x002e;se</div>
+
+* To subscribe, send mail to <code>discuss-subscribe&#x0040;leap&#x002e;se</code>
+* To unsubscribe, send mail to <code>discuss-unsubscribe&#x0040;leap&#x002e;se</code>
+* List archives are <a href='https://lists.riseup.net/www/arc/leap-discuss'>also available</a>
+
+### Email
+
+To contact someone from LEAP, you can send an email to:
+
+<div class="well">info&#x0040;leap&#x002e;se</div>
diff --git a/pages/docs/get-involved/en.haml b/pages/docs/get-involved/en.haml
new file mode 100644
index 0000000..b58afb7
--- /dev/null
+++ b/pages/docs/get-involved/en.haml
@@ -0,0 +1,4 @@
+- @title = "Get Involved"
+- @summary = "Contributing to LEAP software development."
+
+= child_summaries \ No newline at end of file
diff --git a/pages/docs/get-involved/project-ideas.md b/pages/docs/get-involved/project-ideas.md
new file mode 100644
index 0000000..5da5dd8
--- /dev/null
+++ b/pages/docs/get-involved/project-ideas.md
@@ -0,0 +1,412 @@
+@title = "Project Ideas"
+@summary = "Ideas for discrete, unclaimed development projects that would greatly benefit the LEAP ecosystem."
+
+Interested in helping with LEAP? Not sure where to dive in? This list of project ideas is here to help.
+
+These are discrete projects that would really be a great benefit to the LEAP development effort, but are separate enough that you can dive right in without stepping on anyone's toes.
+
+If you are interested, [contact us on IRC or the mailing list](communication). We will put you in touch with the contact listed under each project.
+
+If you have your own ideas for projects, we would love to hear about them!
+
+Bitmask Client Application
+=======================================
+
+Email
+---------------------------------------
+
+### Apple Mail plugin
+
+We have an extension for Thunderbird to autoconfigure for use with Bitmask. It would be great to do the same thing for Apple Mail. See [some tips to get started](http://blog.adamnash.com/2007/09/17/getting-ready-to-write-an-apple-mailapp-plug-in-for-mac-os-x/) and a [list of many existing Mail.app plugins](http://www.tikouka.net/mailapp/).
+
+* Contact: drebs
+* Difficulty: Medium
+* Skills: MacOS programming, Objective-C or Python (maybe other languages too?)
+
+### Microsoft Outlook plugin
+
+We have an extension for Thunderbird to autoconfigure for use with Bitmask. It would be great to do the same thing for Outlook.
+
+* Contact: drebs
+* Difficulty: Medium
+* Skills: Windows programming
+
+### Mailpile fork
+
+[Mailpile](http://www.mailpile.is/) is a new mail client written in Python with an HTML interface. Mailpile is interesting because it is one of the few actively developed cross-platform mail clients. Since the Bitmask application is also in Python, it would be nice to distribute a version of Mailpile with Bitmask that is preconfigured to work with whatever email accounts you have in Bitmask. Additionally, you would need to modify Mailpile so that it does not cache a copy of all email itself (since the Bitmask app already keeps a copy in a client-encrypted database), and remove the OpenPGP parts of Mailpile (since these are already handled by Bitmask).
+
+* Contact: chiiph
+* Difficulty: Medium
+* Skills: Python
+
+Linux
+---------------------------
+
+### Package application for non-Debian linux flavors
+
+The Bitmask client application is entirely ported to Debian, with every dependency library now submitted to unstable. However, many of these packages are not available in other flavors of Linux, including RedHat/Fedora, SUSE, Arch, and Gentoo.
+
+* Contact: kali, micah, chiiph
+* Difficulty: Medium
+* Skills: Linux packaging
+
+### Package application for BSD
+
+The Bitmask client application is entirely ported to Debian, with every dependency library now submitted to unstable. However, many of these packages are not available in *BSD.
+
+* Contact: chiiph
+* Difficulty: Medium
+* Skills: BSD packaging
+
+Mac OS
+-------------------------
+
+### Proper privileged execution on Mac
+
+We currently use cocoasudo to run OpenVPN with admin privileges. We should not depend on a third-party app, and should handle this ourselves. The proper way to do it is with the [Service Management framework](https://developer.apple.com/library/mac/#samplecode/SMJobBless/Introduction/Intro.html).
+
+* Contact: chiiph, kali
+* Difficulty: Medium
+* Skills: Mac programming
+
+### Prevent DNS leakage on Mac OS
+
+Currently, we block DNS leakage on the OpenVPN gateway. This works, but it would be better to do this on the client. The problem is there are a lot of weird edge cases that can lead to DNS leakage. See [dnsleaktest.com](http://www.dnsleaktest.com/) for more information.
+
+* Contact: kali, chiiph
+* Difficulty: Medium
+* Skills: Mac programming
+
+### Support for older Mac OSs
+
+We support 64-bit x86 OS X >= 10.7. In order to support versions < 10.7, a list of libraries needs to be rebuilt against the specific SDK version and with PPC support (basically, boost and certain Python modules).
+
+* Contact: chiiph, kali
+* Difficulty: Medium to hard
+* Skills: Mac programming
+
+Windows
+-------------------------------
+
+### Code signing on Windows
+
+The bundle needs to be a properly signed application in order to make it safer and more usable when we need administrative privileges to run things like OpenVPN.
+
+* Contact: chiiph
+* Difficulty: Easy to medium
+* Skills: Windows programming
+
+### Proper privileged execution on Windows
+
+Right now we are building OpenVPN with a manifest so that it runs as Administrator. Perhaps it would be better to handle this with User Account Control.
+
+* Contact: chiiph, kali
+* Difficulty: Medium
+* Skills: Windows programming
+
+### Prevent DNS leakage on Windows
+
+Currently, we block DNS leakage on the OpenVPN gateway. This works, but it would be better to do this on the client. The problem is there are a lot of weird edge cases that can lead to DNS leakage. See [dnsleaktest.com](http://www.dnsleaktest.com/) for more information.
+
+* Contact: kali, chiiph
+* Difficulty: Medium
+* Skills: Windows programming
+
+### Add Windows support for Soledad and all the different bundle components
+
+We dropped Windows support because we couldn't keep up with all the platforms. Windows support should be re-added, which means making sure that the gpg modules, Soledad, and all the other components are written in a properly multiplatform manner.
+
+* Contact: chiiph, drebs
+* Difficulty: Easy to Medium
+* Skills: Windows programming, Python
+
+### Create proper Windows installer for the bundle
+
+We aim to distribute bundles with everything needed in them, but some users will want a proper Windows installer, and we should provide one.
+
+* Contact: chiiph, kali
+* Difficulty: Medium
+* Skills: Windows programming
+
+### Document how to build everything with Visual Studio Express
+
+All the Python modules tend to be built with MinGW32, and the current Windows bundle is built entirely with MinGW32 for this reason. Proper Windows support means using Visual Studio (in our case the Express edition, unless the proper licenses are bought).
+
+* Contact: chiiph
+* Difficulty: Medium to Hard
+* Skills: Windows programming
+
+### Support Windows 64bits
+
+We have support for 32-bit Windows. 64-bit Windows seems to be able to use that, except for the TAP driver for OpenVPN. So this task is either really easy, because it's just a matter of calling the installer in a certain way, or really hard, because it involves low-level driver handling.
+
+* Contact: chiiph
+* Difficulty: Either hard or really easy.
+* Skills: Windows programming
+
+Android
+----------------------------------------------
+
+### Dynamic OpenVPN configuration
+
+Currently the Android app chooses which VPN gateway to connect to based on the smallest timezone difference, and builds a configuration for connecting to it by a biased selection of options (port, protocol, etc.) from the set declared by the provider through the API. In cases where a gateway is unavailable, or a network restricts traffic that our configuration matches (e.g. UDP out to port 443), being able to try different configurations or gateways would help find one that works.
+
+* Contact: meanderingcode, parmegv, or richy
+* Difficulty: Easy to medium
+* Skills: Android programming
+
+### Ensure OpenVPN fails closed
+
+For enhanced security, we would like the VPN on android to have the option of blocking all network traffic if the VPN dies or when it has not yet established a connection. Network traffic would be restored when the user manually turns off the VPN or the VPN connection is restored. Currently, there is no direct way to do this with Android, but we have a few ideas for tackling this problem.
+
+* Contact: meanderingcode, parmegv, or richy
+* Difficulty: Hard (Medium but meticulous, or harder than we think)
+* Skills: Android programming, applicable Linux skills like iptables
+
+### Port libraries to Android
+
+Before we can achieve full functionality on Android, a lot of Python libraries need to either be ported to run directly on Android or be rewritten natively in Java or JNI. We have been pursuing both strategies, for different libraries, but we have a lot more work to do.
+
+* Contact: richy, meanderingcode, parmegv
+* Difficulty: varies
+* Skills: Android programming, compiling, Python programming.
+
+Installer and Build Process
+----------------------------------------------
+
+### Reproducible builds with Gitian for bundles
+
+Our bundles rely on a number of binary components, including libraries like boost, Qt, PySide, and pycryptopp, among many others. All of these should be built in a reproducible way, so that the bundles can be signed in many places without having to send the bundle from the main build machine to the rest of the signers. This will also allow better integration with our automatic updates infrastructure.
+
+* Contact: chiiph
+* Difficulty: Medium to hard
+
+### Automatic dependency collector for bundle creation
+
+The bundles are now used as a template for new versions; the first bundle was basically built by hand, adding one dependency after another until it all worked. We would like to automate this process completely, since new dependencies tend to be added over time. One possibility would be to use PyInstaller's dependency collection code; another would be to use Python's module introspection to recursively collect dependencies.
+
+* Contact: chiiph, kali
+* Difficulty: Medium to hard
+
+### Lightweight network installer
+
+The bundles are big. It would be great if we could reduce their size, but that's not always possible when you are providing so many different things in one application. One way to work around this would be a really tiny application that runs Thandy, ships the proper certificates, and has a lightweight UI, so that the user can install the bundle's packages one by one and even skip parts they don't want. Just want to run email? Then there's no need to download OpenVPN and all the chat and file sync code.
+
+* Contact: chiiph
+* Difficulty: Medium to hard
+* Skills: C/C++, Python
+
+
+New Services
+----------------------------------
+
+### Password keeper
+
+Multiple password keepers exist today, but they don't necessarily have a way to sync your passwords from device to device. Building a Soledad-backed password keeper would solve these problems implicitly; it's only a matter of UI and random password generation.
+
+* Contact: drebs, chiiph, elijah
+* Difficulty: Easy to medium.
+* Skills: Python
+
+### Notepad app
+
+This idea is basically a simple notepad application that saves all its notes as Soledad documents and syncs them securely with a Soledad server.
+
+* Contact: chiiph, kali, drebs
+* Difficulty: Easy to medium
+* Skills: Python
+
+Miscellaneous
+-------------------------------
+
+### Token-based user registration
+
+The idea is to allow or require tokens in the new user signup process. These tokens might let you claim a particular username, give you a credit when you sign up, allow you to sign up at all, etc.
+
+* Dependency: token-based signup in webapp API.
+* Contact: elijah, chiiph
+* Difficulty: Easy
+* Skills: Python
+
+### General QA
+
+One thing that we really need is a team of people constantly updating their versions of the code and testing the new additions. Basic knowledge of Git is needed, and some really basic Python.
+
+* Contact: mcnair, elijah, chiiph
+* Difficulty: Easy to medium, depending on the QA team that is managed.
+
+### Translations
+
+Do you speak a language that's not English? Great! We can use your help! We are always looking for translators for every language possible.
+
+* Contact: ivan, kali, chiiph
+* Difficulty: Easy
+
+### Support for OpenPGP smart cards
+
+OpenPGP smart cards are a really nice piece of hardware. What would be needed is a way to save the generated key on the smart card instead of in Soledad (or both; this should be configurable), and then migrate the regular OpenPGP workflow to support this change.
+
+* Contact: chiiph, drebs
+* Difficulty: Medium
+
+### Device blessing
+
+Add the option to require a one-time code in order to allow an additional device to be synchronized with your account.
+
+* Contact: elijah
+* Difficulty: Hard
+* Skills: Python
+
+### Push notifications from the server
+
+There are situations where the service provider you are using through the Bitmask client might want to notify all its users of some event: maybe downtime, or other problems or situations. There should be an easy way to push such notifications to the client.
+
+* Contact: chiiph, elijah
+* Difficulty: Easy to medium
+* Skills: Python
+
+### Quick wipe of all data
+
+Some users might be in situations where being caught with software like OpenVPN is illegal or simply problematic. There should be a quick way to wipe all traces of the bundle and your identity from the provider.
+
+* Contact: chiiph, kali, ivan, elijah
+* Difficulty: Medium to hard
+* Skills: Python
+
+### Add support for obfsproxy to Bitmask client
+
+After obfsproxy support is added to the platform, it needs to be enabled in the client.
+
+* Contact: chiiph, ivan, kali
+* Difficulty: Easy
+* Skills: Python
+
+
+LEAP Platform
+===========================
+
+Soledad
+---------------------------
+
+### Add support for quota
+
+The Soledad server only handles authentication and basic interaction for sync. It would be good to have a way to limit the quota each user gets and to enforce it on the server.
+
+* Contact: chiiph, drebs
+* Difficulty: Medium to hard
+* Skills: Python
+
+### Add support for easier soledad server deployment
+
+Currently Soledad relies on a fairly complex CouchDB setup. It can be deployed with just one CouchDB instance, but if you are only using one instance, SQLite or another easy-to-set-up storage method might be good enough. The same applies to authentication: maybe you only want a handful of users to be able to use your Soledad server, in which case something like client certificate authentication might be enough. It would be good to support these non-scalable options for deploying a Soledad server.
+
+* Contact: chiiph, drebs
+* Difficulty: Medium
+* Skills: Python
+
+### A soledad management tool
+
+Bootstrapping Soledad and being able to sync with it is not necessarily an easy task: you need to take care of auth and other values like server, port, and user id. An easy-to-use command line application that can interact with Soledad would ease testing, both on the client and on the server.
+
+* Contact: chiiph, drebs
+* Difficulty: Easy to medium
+* Skills: Python
+
+### Federated Soledad
+
+Currently, each user's Soledad database is their own and no one else ever has access. It would be mighty useful to allow two or more users to share a Soledad database.
+
+* Contact: drebs, elijah
+* Difficulty: Hard
+* Skills: Python
+
+DNS
+--------------------------------
+
+### Add DNSSEC entries to DNS zone file
+
+We should add commands to the leap command line tool to make it easy to generate KSK and ZSK, and sign DNS entries.
+
+* Contact: elijah, micah, varac
+* Difficulty: Easy
+* Skills: Ruby
+
+### Add DANE entries to DNS zone file
+
+Every node has one or more server certificates. We should publish these using DANE.
+
+* Contact: elijah, micah, varac
+* Difficulty: Easy
+
+### Add DKIM entries to DNS zone file
+
+We need to generate and publish [DKIM](https://en.wikipedia.org/wiki/DKIM) keys.
+
+* Contact: elijah, micah, varac
+* Difficulty: Easy
+
+OpenVPN
+-----------------------------------
+
+### OpenVPN with ECC PFS support
+
+Currently, OpenVPN is configured to use a non-ECC DH cipher with perfect forward secrecy, but it would be nice to get it working with an elliptic curve cipher, which would greatly reduce the CPU load of the OpenVPN gateway.
+
+* Contact: elijah, varac
+* Difficulty: Medium
+* Skills: OpenVPN, X.509
+
+### Add support for obfsproxy to the platform
+
+Sometimes OpenVPN will be blocked by firewalls or governments if the protocol is detected. Obfsproxy 3 is the most advanced tool available for circumventing this detection. Obfsproxy was conceived as a tool to reach the Tor network, but it can be used for other protocols too. We want the ability to use it for our Encrypted Internet solution. For more information, see the [OpenVPN and Obfsproxy howto guide](http://www.dlshad.net/?p=135) and the [Obfsproxy project page](https://www.torproject.org/projects/obfsproxy.html.en).
+
+* Contact: varac, elijah
+* Difficulty: Easy
+* Skills: OpenVPN, Linux, networking
+
+Email
+--------------------------
+
+### Mailing list support
+
+Adapt the PSELS mailing list for use with the LEAP platform. PSELS uses OpenPGP in a novel way to achieve proxy re-encryption, allowing for a mailing list in which the server does not ever have access to messages in cleartext, but subscribers don't need to encrypt each message to the public key of all subscribers. For more information, read the [paper](http://www.ncsa.illinois.edu/people/hkhurana/ICICS.pdf).
+
+* Contact: elijah
+* Difficulty: Extremely hard
+* Skills: Cryptography, Python
+
+
+LEAP Webapp
+============================
+
+### Add support for bitcoin payments to the billing module
+
+The webapp has a payment infrastructure set up (Braintree), but it only supports credit card and bank wire payments. The webapp should be extended to also accept Bitcoin payments.
+
+* Contact: azul, elijah, jessi
+* Difficulty: Easy
+
+### Add support for newsletter
+
+Sometimes simple push notifications aren't enough: you may want to mail your users a newsletter or more descriptive notifications. It should be possible for a provider administrator to use the webapp to quickly send mail to all users.
+
+* Contact: chiiph, azul, elijah
+* Difficulty: Easy
+
+### Add support for quota
+
+Once the Soledad server quota enforcement code is in place, it would be good to have the ability to configure the quota for a user and check the user's quota via the webapp.
+
+* Dependency: Soledad server quota enforcement.
+* Contact: azul, elijah
+* Difficulty: Easy
+* Skills: Ruby
+
+### Add support for token-based user registration
+
+The idea is to allow or require tokens in the signup process. These tokens might let you claim a particular username, give you a credit when you sign up, allow you to sign up at all, etc.
+
+* Contact: azul, jessi, elijah
+* Difficulty: Easy to medium
+* Skills: Ruby and Javascript
+
diff --git a/pages/docs/get-involved/source.haml b/pages/docs/get-involved/source.haml
new file mode 100644
index 0000000..4b7abb5
--- /dev/null
+++ b/pages/docs/get-involved/source.haml
@@ -0,0 +1,85 @@
+- @title = "Source Code"
+- @summary = "Overview of the main code repositories"
+
+%p This page gives an overview of the most important repositories. The authoritative code is hosted at #{link 'leap.se/git' => 'https://leap.se/git/'}, but all repositories are also mirrored to #{link 'github' => 'https://github.com/leapcode/'}.
+
+%p In general, all LEAP code repositories have <code>develop</code> and <code>master</code> branches. The <code>master</code> branch should be a stable, released version of the software, and all feature and bugfix branches are merged into the <code>develop</code> branch.
+
+%h3 Client code
+
+%table.table.table-bordered
+ %tr
+ %td bitmask_client
+ %td The Bitmask desktop client application, supporting encrypted internet proxy, secure email, and secure chat (coming soon). The client is written in Python, runs on Linux, Mac, and Windows, and is licensed under the GPLv3.
+ %td
+ = link 'https://leap.se/git/bitmask_client.git'
+ = link 'https://github.com/leapcode/bitmask_client'
+ %tr
+ %td bitmask_android
+ %td Android version of the Bitmask client, supporting encrypted internet proxy. Future development will include support for secure email. Licensed under the GPLv3.
+ %td
+ = link 'https://leap.se/git/bitmask_android.git'
+ = link 'https://github.com/leapcode/bitmask_android'
+
+%h3 Service provider platform
+
+%table.table.table-bordered
+ %tr
+ %td leap_platform
+ %td Server automation recipes for running secure communication services via the LEAP Platform. Written mostly using puppet, and licensed under the GPLv3.
+ %td
+ = link 'https://leap.se/git/leap_platform.git'
+ = link 'https://github.com/leapcode/leap_platform'
+
+ %tr
+ %td leap_cli
+ %td Command line interface for managing a service provider running the LEAP platform. Written in Ruby and released under the GPLv3.
+ %td
+ = link 'https://leap.se/git/leap_cli.git'
+ = link 'https://github.com/leapcode/leap_cli'
+
+ %tr
+ %td soledad
+ %td Soledad (Synchronization of Locally Encrypted Data Among Devices) provides a synchronized, client-encrypted document database. Written in Python.
+ %td
+ = link 'https://leap.se/git/soledad.git'
+ = link 'https://github.com/leapcode/soledad'
+
+ %tr
+ %td nickserver
+ %td Nickserver is a daemon supporting nicknym, a protocol to map user nicknames to public keys. Written in Ruby, released under the GPLv3.
+ %td
+ = link 'https://leap.se/git/nickserver.git'
+ = link 'https://github.com/leapcode/nickserver'
+
+
+%h3 Web applications and libraries
+
+%table.table.table-bordered
+ %tr
+ %td leap_web
+ %td Web application for the LEAP platform, providing user management, tickets, billing, and REST API.
+ %td
+ = link 'https://leap.se/git/leap_web.git'
+ = link 'https://github.com/leapcode/leap_web'
+ %tr
+ %td leap_website
+ %td This website
+ %td
+ = link 'https://leap.se/git/leap_website.git'
+ %tr
+ %td leap_doc
+ %td LEAP Documentation (everything under leap.se/docs including this page)
+ %td= link 'https://leap.se/git/leap_doc.git'
+ %tr
+ %td srp_js
+ %td Secure Remote Password (SRP) library for Javascript.
+ %td
+ = link 'https://leap.se/git/srp_js.git'
+ = link 'https://github.com/leapcode/srp_js'
+ %tr
+ %td ruby_srp
+ %td Secure Remote Password (SRP) library for Ruby.
+ %td
+ = link 'https://leap.se/git/ruby_srp.git'
+ = link 'https://github.com/leapcode/ruby_srp' \ No newline at end of file
diff --git a/pages/docs/platform/details/couchdb.md b/pages/docs/platform/details/couchdb.md
new file mode 100644
index 0000000..276bfdc
--- /dev/null
+++ b/pages/docs/platform/details/couchdb.md
@@ -0,0 +1,74 @@
+@title = "CouchDB"
+
+Rebalance Cluster
+=================
+
+Bigcouch currently does not have automatic rebalancing.
+It will probably be added after merging into couchdb.
+If you add or remove a node in the cluster:
+
+. make sure you have a backup of all DBs!
+
+ /srv/leap/couchdb/scripts/couchdb_dumpall.sh
+
+
+. delete all dbs
+. shut down old node
+. check the couchdb members
+
+    curl -s --netrc-file /etc/couchdb/couchdb.netrc -X GET http://127.0.0.1:5986/nodes/_all_docs
+    curl -s --netrc-file /etc/couchdb/couchdb.netrc http://127.0.0.1:5984/_membership
+
+
+. remove bigcouch from all nodes
+
+ apt-get --purge remove bigcouch
+
+
+. deploy to all couch nodes
+
+ leap deploy development +couchdb
+
+. most likely, the deploy will fail because bigcouch will complain that not all nodes are connected. Let the deploy finish, restart the bigcouch service on all nodes, and re-deploy:
+
+ /etc/init.d/bigcouch restart
+
+
+. restore the backup
+
+ /srv/leap/couchdb/scripts/couchdb_restoreall.sh
+
+
+Re-enabling blocked account
+===========================
+
+When a user account is destroyed from the webapp, a leftover doc remains in the identities db so that other people can't claim that account without an admin's intervention. Here's how to delete that doc and therefore re-enable registration for that particular account:
+
+. grep the identities db for the email address:
+
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc -X GET http://127.0.0.1:5984/identities/_all_docs?include_docs=true|grep test_127@bitmask.net
+
+
+. look up "id" and "rev" to delete the doc:
+
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc -X DELETE 'http://127.0.0.1:5984/identities/b25cf10f935b58088f0d547fca823265?rev=2-715a9beba597a2ab01851676f12c3e4a'
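These two steps can be glued together by extracting `id` and `rev` from the lookup output and building the DELETE URL from them. A minimal sed-based sketch (the sample row below is hypothetical, shaped like the `_all_docs` output above):

```shell
# Hypothetical row as returned by the _all_docs lookup above
row='{"id":"b25cf10f935b58088f0d547fca823265","value":{"rev":"2-715a9beba597a2ab01851676f12c3e4a"}}'

# Extract the doc id and revision with sed (no jq required)
id=$(echo "$row" | sed 's/.*"id":"\([^"]*\)".*/\1/')
rev=$(echo "$row" | sed 's/.*"rev":"\([^"]*\)".*/\1/')

# Build the DELETE URL; pass it to curl -X DELETE as shown above
echo "http://127.0.0.1:5984/identities/$id?rev=$rev"
```

The printed URL is what you would hand to `curl -s --netrc-file /etc/couchdb/couchdb.netrc -X DELETE`.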
+
+
+How to find out which userstore belongs to which identity?
+===========================================================
+
+ /usr/bin/curl -s --netrc-file /etc/couchdb/couchdb.netrc '127.0.0.1:5984/identities/_all_docs?include_docs=true' | grep testuser
+
+ {"id":"665e004870ee17aa4c94331ff3ecb173","key":"665e004870ee17aa4c94331ff3ecb173","value":{"rev":"2-2e335a75c4b79a5c2ef5c9950706fe1b"},"doc":{"_id":"665e004870ee17aa4c94331ff3ecb173","_rev":"2-2e335a75c4b79a5c2ef5c9950706fe1b","user_id":"665e004870ee17aa4c94331ff3cd59eb","address":"testuser@example.org","destination":"testuser@example.org","keys": ...
+
+* search for the "user_id" field
+* in this example testuser@example.org uses the database user-665e004870ee17aa4c94331ff3cd59eb
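The same extraction can be scripted. A minimal sketch, using a hypothetical identity doc trimmed to the relevant fields:

```shell
# Hypothetical identity doc, reduced to the fields we care about
doc='{"user_id":"665e004870ee17aa4c94331ff3cd59eb","address":"testuser@example.org"}'

# The userstore database is named "user-" followed by the user_id
user_id=$(echo "$doc" | sed 's/.*"user_id":"\([^"]*\)".*/\1/')
echo "user-$user_id"
# prints: user-665e004870ee17aa4c94331ff3cd59eb
```

The printed name is the userstore database to query.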
+
+
+How much disk space is used by a userstore
+==========================================
+
+Beware that this returns the uncompacted disk size (see http://wiki.apache.org/couchdb/Compaction).
+
+ echo "`curl --netrc -s -X GET 'http://127.0.0.1:5984/user-dcd6492d74b90967b6b874100b7dbfcf'|json_pp|grep disk_size|cut -d: -f 2`/1024"|bc
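The one-liner above is hard to read. An equivalent, step-by-step version, assuming the response is shaped like CouchDB's database-info document (the sample values are hypothetical):

```shell
# Hypothetical database-info response; in real use it would come from
# curl --netrc -s -X GET 'http://127.0.0.1:5984/user-<id>'
response='{"db_name":"user-dcd6492d74b90967b6b874100b7dbfcf","doc_count":42,"disk_size":8388608}'

# Pull out disk_size (in bytes) and convert to kilobytes
disk_size=$(echo "$response" | sed 's/.*"disk_size":\([0-9]*\).*/\1/')
echo "$((disk_size / 1024)) KB"
# prints: 8192 KB
```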
+
diff --git a/pages/docs/platform/details/development.md b/pages/docs/platform/details/development.md
new file mode 100644
index 0000000..ab7ef87
--- /dev/null
+++ b/pages/docs/platform/details/development.md
@@ -0,0 +1,355 @@
+@title = "Development Environment"
+@summary = "Setting up an environment for modifying the leap_platform."
+@toc = true
+
+If you want to make local changes to your provider, or want to contribute fixes back to LEAP, we recommend that you follow this guide to set up a development environment where you can test your changes first. Using this method, you can quickly test changes without deploying them to your production environment, while benefiting from the convenience of reverting to known good states in order to retry things from scratch.
+
+This page will walk you through setting up nodes using [Vagrant](http://www.vagrantup.com/) for convenient deployment testing, snapshotting known good states, and reverting to previous snapshots.
+
+Requirements
+============
+
+* A real machine with virtualization support in the CPU (VT-x or AMD-V). In other words, not a virtual machine.
+* At least 4GB of RAM.
+* A fast internet connection (you will be downloading a lot of big files, like virtual machine images).
+* You should do everything described below as an unprivileged user, running only the commands prefixed with *sudo* as root. Other than those commands, there is no need for privileged access to your machine, and in fact things may not work correctly if you run them as root.
+
+Install prerequisites
+--------------------------------
+
+For development purposes, you will need everything that you need for deploying the LEAP platform:
+
+* LEAP cli
+* A provider instance
+
+You will also need to set up a virtualized Vagrant environment. To do so, please make sure you have the following
+prerequisites installed:
+
+*Debian & Ubuntu*
+
+Install core prerequisites:
+
+ sudo apt-get install git ruby ruby-dev rsync openssh-client openssl rake make
+
+Install Vagrant in order to be able to test with local virtual machines (typically optional, but required for this tutorial). You probably want a more recent version directly from [vagrant](https://www.vagrantup.com/downloads.html).
+
+ sudo apt-get install vagrant virtualbox
+
+
+*Mac OS X 10.9 (Mavericks)*
+
+Install Homebrew package manager from http://brew.sh/ and enable the [System Duplicates Repository](https://github.com/Homebrew/homebrew/wiki/Interesting-Taps-&-Branches) (needed to update old software versions delivered by Apple) with
+
+ brew tap homebrew/dupes
+
+Update OpenSSH to support ECDSA keys. Follow [this guide](http://www.dctrwatson.com/2013/07/how-to-update-openssh-on-mac-os-x/) to let your system use the Homebrew binary.
+
+ brew install openssh --with-brewed-openssl --with-keychain-support
+
+The certtool provided by Apple is really old; install the one provided by GnuTLS and shadow the system's default:
+
+ sudo brew install gnutls
+    ln -sf /usr/local/bin/gnutls-certtool /usr/local/bin/certtool
+
+Install the Vagrant and VirtualBox packages for OS X from their respective Download pages.
+
+* http://www.vagrantup.com/downloads.html
+* https://www.virtualbox.org/wiki/Downloads
+
+Verify vagrantbox download
+--------------------------
+
+Import LEAP archive signing key:
+
+ gpg --search-keys 0x1E34A1828E207901
+
+Now, either you already have a trust path to it through one of the people
+who signed it, or you can verify it by checking this fingerprint:
+
+ gpg --fingerprint --list-keys 1E34A1828E207901
+
+ pub 4096R/1E34A1828E207901 2013-02-06 [expires: 2015-02-07]
+ Key fingerprint = 1E45 3B2C E87B EE2F 7DFE 9966 1E34 A182 8E20 7901
+ uid LEAP archive signing key <sysdev@leap.se>
+
+If the fingerprint matches, you can locally sign the key so you remember that you already
+verified it:
+
+ gpg --lsign-key 1E34A1828E207901
+
+Then download the SHA512SUMS file and its signature file:
+
+ wget https://downloads.leap.se/platform/SHA512SUMS.sign
+ wget https://downloads.leap.se/platform/SHA512SUMS
+
+and verify the signature against your locally imported LEAP archive signing pubkey:
+
+ gpg --verify SHA512SUMS.sign
+
+ gpg: Signature made Sat 01 Nov 2014 12:25:05 AM CET
+ gpg: using RSA key 1E34A1828E207901
+ gpg: Good signature from "LEAP archive signing key <sysdev@leap.se>"
+
+Make sure that the last line says "Good signature from...", which tells you that your
+downloaded SHA512SUMS file has the right contents!
+
+Now you can compare the sha512sum of your downloaded vagrantbox with the one in the SHA512SUMS file:
+
+ wget https://downloads.leap.se/platform/vagrant/virtualbox/leap-wheezy.box
+ sha512sum leap-wheezy.box
+ cat SHA512SUMS
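Rather than comparing the two checksums by eye, you can let `sha512sum -c` do it; it prints `OK` and exits zero only when the hashes match. A self-contained demo of the same flow (file names are illustrative):

```shell
# Create a demo file and record its checksum, like SHA512SUMS does for the box
echo "example data" > leap-demo.box
sha512sum leap-demo.box > SHA512SUMS.demo

# -c re-hashes the file and compares; prints "leap-demo.box: OK" on success
sha512sum -c SHA512SUMS.demo
```

For the real box, the equivalent is `grep leap-wheezy.box SHA512SUMS | sha512sum -c -`.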
+
+
+
+Adding development nodes to your provider
+=========================================
+
+Now you will add local-only Vagrant development nodes to your provider.
+
+You do not need to set up a separate provider instance for development; in fact, it is more convenient if you do not, but you can if you wish. If you do not have a provider already, you will need to create and configure one before continuing (it is recommended that you go through the [Quick Start](quick-start) first).
+
+
+Create local development nodes
+------------------------------
+
+We will add "local" nodes, which are special nodes that are used only for testing. These nodes exist only as virtual machines on your computer, and cannot be accessed from the outside. Each "node" is a server that can have one or more services attached to it. We recommend that you create different nodes for different services to better isolate issues.
+
+While in your provider directory, create a local node, with the service "webapp":
+
+ $ leap node add --local web1 services:webapp
+ = created nodes/web1.json
+ = created files/nodes/web1/
+ = created files/nodes/web1/web1.key
+ = created files/nodes/web1/web1.crt
+
+This command creates a node configuration file in `nodes/web1.json` with the webapp service.
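For orientation, `nodes/web1.json` is a small JSON document along these lines (the field names and values shown here are illustrative; the exact contents depend on your leap_cli version):

```json
{
  "ip_address": "10.5.5.102",
  "services": "webapp"
}
```

You can edit this file later to adjust the node's configuration.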
+
+Starting local development nodes
+--------------------------------
+
+In order to test the node "web1" we need to start it. Starting a node for the first time will spin up a virtual machine. The first time you do this it will take a while, because a VM base image (about 700MB) needs to be downloaded. After that, the downloaded image is re-used (until you need to update it).
+
+NOTE: Many people have difficulties getting Vagrant working. If the following commands do not work, please see the Vagrant section below to troubleshoot your Vagrant install before proceeding.
+
+ $ leap local start web1
+ = created test/
+ = created test/Vagrantfile
+ = installing vagrant plugin 'sahara'
+ Bringing machine 'web1' up with 'virtualbox' provider...
+ [web1] Box 'leap-wheezy' was not found. Fetching box from specified URL for
+ the provider 'virtualbox'. Note that if the URL does not have
+ a box for this provider, you should interrupt Vagrant now and add
+ the box yourself. Otherwise Vagrant will attempt to download the
+ full box prior to discovering this error.
+ Downloading or copying the box...
+ Progress: 3% (Rate: 560k/s, Estimated time remaining: 0:13:36)
+ ...
+ Bringing machine 'web1' up with 'virtualbox' provider...
+ [web1] Importing base box 'leap-wheezy'...
+ 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
+
+Now the virtual machine 'web1' is running. You can add another local node using the same process. For example, the webapp node needs a database to run, so let's add a "couchdb" node:
+
+ $ leap node add --local db1 services:couchdb
+ $ leap local start
+ = updated test/Vagrantfile
+ Bringing machine 'db1' up with 'virtualbox' provider...
+ [db1] Importing base box 'leap-wheezy'...
+ [db1] Matching MAC address for NAT networking...
+ [db1] Setting the name of the VM...
+ [db1] Clearing any previously set forwarded ports...
+ [db1] Fixed port collision for 22 => 2222. Now on port 2202.
+ [db1] Creating shared folders metadata...
+ [db1] Clearing any previously set network interfaces...
+ [db1] Preparing network interfaces based on configuration...
+ [db1] Forwarding ports...
+ [db1] -- 22 => 2202 (adapter 1)
+ [db1] Running any VM customizations...
+ [db1] Booting VM...
+ [db1] Waiting for VM to boot. This can take a few minutes.
+ [db1] VM booted and ready for use!
+ [db1] Configuring and enabling network interfaces...
+ [db1] Mounting shared folders...
+ [db1] -- /vagrant
+
+You can now follow the normal LEAP process: initialize each node and then deploy your recipes to it:
+
+ $ leap node init web1
+ $ leap deploy web1
+ $ leap node init db1
+ $ leap deploy db1
+
+
+Useful local development commands
+=================================
+
+There are many useful things you can do with a virtualized development environment.
+
+Listing what machines are running
+---------------------------------
+
+Now that you have the two virtual machines "web1" and "db1" running, you can list the running machines as follows:
+
+ $ leap local status
+ Current machine states:
+
+ db1 running (virtualbox)
+ web1 running (virtualbox)
+
+ This environment represents multiple VMs. The VMs are all listed
+ above with their current state. For more information about a specific
+ VM, run `vagrant status NAME`.
+
+Stopping machines
+-----------------
+
+It is not recommended that you leave your virtual machines running when you are not using them. They consume memory and other resources! To stop your machines, simply do the following:
+
+ $ leap local stop web1 db1
+
+Connecting to machines
+----------------------
+
+You can connect to your local nodes just like you do with normal LEAP nodes, by running 'leap ssh node'.
+
+However, if you cannot connect to your local node, because the networking is not set up properly or you have deployed a firewall that locks you out, you may need to access the graphical console.
+
+In order to do that, you will need to configure Vagrant to launch a graphical console; then you can log in as root there to diagnose the networking problem. To do this, add the following to your $HOME/.leaprc:
+
+ @custom_vagrant_vm_line = 'config.vm.provider "virtualbox" do |v|
+ v.gui = true
+ end'
+
+and then start, or restart, your local Vagrant node. You should see a VirtualBox graphical console showing the boot process and, eventually, a login prompt.
+
+Snapshotting machines
+---------------------
+
+A very useful feature of local Vagrant development nodes is the ability to snapshot the current state and revert to it when you need.
+
+For example, perhaps the base image is a little bit out of date and you want to update the packages before continuing. You can do that by starting the node, connecting to it, updating the packages, and then snapshotting the node:
+
+ $ leap local start web1
+ $ leap ssh web1
+ web1# apt-get -u dist-upgrade
+ web1# exit
+ $ leap local save web1
+
+Now you can deploy to web1, and if you decide you want to revert to the state before deployment, you simply reset the node to your previous save:
+
+ $ leap local reset web1
+
+More information
+----------------
+
+See `leap help local` for a complete list of local-only commands and how they can be used.
+
+
+Limitations
+===========
+
+Please consult the known issues for Vagrant: see the [Known Issues](known-issues) page, section *Special Environments*.
+
+
+Other useful plugins
+====================
+
+* The [vagrant-cachier plugin](http://fgrehm.viewdocs.io/vagrant-cachier/) lets you cache .deb packages on your host so they are not downloaded over and over again by multiple machines, or after resetting to a previous state.
+
+Troubleshooting Vagrant
+=======================
+
+To troubleshoot Vagrant issues, try going through these steps:
+
+* Try plain Vagrant using the [Getting started guide](http://docs.vagrantup.com/v2/getting-started/index.html).
+* If that fails, make sure that you can run virtual machines (VMs) in plain VirtualBox (VirtualBox GUI or VBoxHeadless).
+  We don't maintain a special howto for that; [this one](http://www.thegeekstuff.com/2012/02/virtualbox-install-create-vm/) seems pretty decent, or you can follow the [Oracle VirtualBox User Manual](http://www.virtualbox.org/manual/UserManual.html). There is also specific documentation for [Debian](https://wiki.debian.org/VirtualBox) and for [Ubuntu](https://help.ubuntu.com/community/VirtualBox). If you succeed, check again whether you can now start Vagrant nodes using plain Vagrant (see the first step).
+* If plain Vagrant works for you, you're very close to using Vagrant with LEAP! If you encounter any problems now, please [contact us](https://leap.se/en/about-us/contact) or use our [issue tracker](https://leap.se/code).
+
+Known working combinations
+--------------------------
+
+Other combinations might work for you as well; these are just the combinations we tried that worked for us:
+
+
+Debian Wheezy
+-------------
+
+* `virtualbox-4.2 4.2.16-86992~Debian~wheezy` from Oracle and `vagrant 1.2.2` from vagrantup.com
+
+
+Ubuntu Raring 13.04
+-------------------
+
+* `virtualbox 4.2.10-dfsg-0ubuntu2.1` from Ubuntu raring and `vagrant 1.2.2` from vagrantup.com
+
+Mac OS X 10.9
+-------------
+
+* `VirtualBox 4.3.10` from virtualbox.org and `vagrant 1.5.4` from vagrantup.com
+
+
+Using Vagrant with libvirt/kvm
+==============================
+
+Vagrant can be used with different providers/backends; one of them is [vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). Here are the steps for using it. Be sure to use a recent Vagrant version (>= 1.5) for the vagrant-libvirt plugin; at the moment, it can only be fetched from http://www.vagrantup.com/downloads.html.
+
+Install vagrant-libvirt plugin and add box
+------------------------------------------
+ sudo apt-get install libvirt-bin libvirt-dev
+ # activate the new 'libvirtd' group for your user in the running session (or log out and log in again):
+ newgrp libvirtd
+ # to build the vagrant-libvirt plugin you need the following packages:
+ sudo apt-get install ruby-dev libxslt-dev libxml2-dev libvirt-dev
+ vagrant plugin install vagrant-libvirt
+ vagrant plugin install sahara
+ vagrant box add leap-wheezy https://downloads.leap.se/platform/vagrant/libvirt/leap-wheezy.box --provider libvirt
+
+Remove Virtualbox
+-----------------
+ sudo apt-get remove virtualbox*
+
+Debugging
+---------
+
+If you get an error in any of the above commands, try to get some debugging information; it will often tell you what is wrong. To get debugging logs, re-run the command that produced the error with VAGRANT_LOG=info prepended, for example:
+ VAGRANT_LOG=info vagrant box add leap-wheezy https://downloads.leap.se/platform/vagrant/libvirt/leap-wheezy.box
+
+Start it
+--------
+
+Use this example Vagrantfile:
+
+ Vagrant.configure("2") do |config|
+ config.vm.define :testvm do |testvm|
+ testvm.vm.box = "leap-wheezy"
+ testvm.vm.network :private_network, :ip => '10.6.6.201'
+ end
+
+ config.vm.provider :libvirt do |libvirt|
+ libvirt.connect_via_ssh = false
+ end
+ end
+
+Then:
+
+ vagrant up --provider=libvirt
+
+If everything works, you should export libvirt as the VAGRANT_DEFAULT_PROVIDER:
+
+ export VAGRANT_DEFAULT_PROVIDER="libvirt"
+
+Now you should be able to use the `leap local` commands.
+
+Known Issues
+------------
+
+* 'Call to virConnectOpen failed: internal error: Unable to locate libvirtd daemon in /usr/sbin (to override, set $LIBVIRTD_PATH to the name of the libvirtd binary)': you don't have the libvirtd daemon installed or running. Be sure you installed the 'libvirt-bin' package and that the daemon is running.
+* 'Call to virConnectOpen failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied': you need to be in the libvirt group to access the socket. Do 'sudo adduser <user> libvirt' and then re-login to your session.
+* If each call to vagrant ends in a segfault, it may be because you still have VirtualBox around. If so, remove VirtualBox and keep only libvirt + KVM. According to https://github.com/pradels/vagrant-libvirt/issues/75, having two virtualization engines installed simultaneously can lead to such weird issues.
+* See the [vagrant-libvirt issue list on github](https://github.com/pradels/vagrant-libvirt/issues).
+* Be sure to use vagrant-libvirt >= 0.0.11 and sahara >= 0.0.16 (the latest stable gems you would get with `vagrant plugin install [vagrant-libvirt|sahara]`) for proper libvirt support.
+* For shared folder support, you need nfs-kernel-server installed on the host machine, and sudo set up to allow unprivileged users to modify /etc/exports (see [vagrant-libvirt#synced-folders](https://github.com/pradels/vagrant-libvirt#synced-folders)):
+
+
+ sudo apt-get install nfs-kernel-server
diff --git a/pages/docs/platform/details/en.haml b/pages/docs/platform/details/en.haml
new file mode 100644
index 0000000..fe7a4c8
--- /dev/null
+++ b/pages/docs/platform/details/en.haml
@@ -0,0 +1,4 @@
+- @nav_title = "Details"
+- @title = 'Platform Details'
+
+= child_summaries \ No newline at end of file
diff --git a/pages/docs/platform/details/faq.md b/pages/docs/platform/details/faq.md
new file mode 100644
index 0000000..57afb6c
--- /dev/null
+++ b/pages/docs/platform/details/faq.md
@@ -0,0 +1,65 @@
+@title = 'Frequently asked questions'
+@nav_title = 'FAQ'
+@summary = "Frequently Asked Questions"
+@toc = true
+
+APT
+===============
+
+What do I do when unattended upgrades fail?
+--------------------------------------------------
+
+When you receive notification e-mails with a subject of 'unattended-upgrades result for $machinename', that means that some package couldn't be automatically upgraded and needs manual interaction. The reasons vary, so you have to be careful. Most often you can simply login to the affected machine and run `apt-get dist-upgrade`.
+
+Puppet
+======
+
+Where do I find the time a server was last deployed?
+-----------------------------------------------------
+
+The puppet state file on the node indicates the last puppet run:
+
+ ls -la /var/lib/puppet/state/state.yaml
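+If you want the timestamp in a machine-readable form, note that the state file's modification time is the time of the last puppet run, and `stat` prints it directly. A sketch using GNU stat (demonstrated on a scratch file; on the node, point it at /var/lib/puppet/state/state.yaml):

```shell
# Create a scratch file standing in for the puppet state file:
touch /tmp/state-demo.yaml

# '%y' prints the last modification time in full ISO-like form:
stat -c '%y' /tmp/state-demo.yaml
```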
+
+What resources are touched by puppet/leap_platform (services/packages/files, etc.)?
+-----------------------------------------------------------------------------------
+
+Log into your server and issue:
+
+ grep -v '!ruby/sym' /var/lib/puppet/state/state.yaml | sed 's/\"//' | sort
+
+
+How can I customize the leap_platform puppet manifests?
+--------------------------------------------------------
+
+You can create custom puppet modules under `files/puppet`.
+The custom puppet entry point is in class 'custom' which can be put into
+`files/puppet/modules/custom/manifests/init.pp`. This class gets automatically included
+by site_config::default, which is applied to all nodes.
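+For example, a minimal custom class that installs an extra package on every node could look like this (the package is just an illustration):

```puppet
# files/puppet/modules/custom/manifests/init.pp
#
# site_config::default includes class 'custom' on every node, so
# anything declared here is applied provider-wide.
class custom {
  package { 'htop':
    ensure => installed,
  }
}
```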
+
+Of course, you can also create a different git branch and change whatever you want, if you are
+familiar with git.
+
+Facter
+======
+
+How can I see custom facts distributed by leap_platform on a node?
+-------------------------------------------------------------------
+
+On the server, export the FACTERLIB environment variable to include the path of the custom fact in question:
+
+ export FACTERLIB=/var/lib/puppet/lib/facter:/srv/leap/puppet/modules/stdlib/lib/facter/
+ facter
+
+
+Etc
+===
+
+How do I change the domain of my provider?
+-------------------------------------------
+
+* First of all, you need access to the nameserver config of your new domain.
+* Update the domain in provider.json.
+* Remove all ca and cert files: `rm files/cert/* files/ca/*`
+* Create ca, csr and certs: `leap cert ca; leap cert csr; leap cert dh; leap cert update`
+* Deploy.
diff --git a/pages/docs/platform/details/under-the-hood.md b/pages/docs/platform/details/under-the-hood.md
new file mode 100644
index 0000000..dcbddb3
--- /dev/null
+++ b/pages/docs/platform/details/under-the-hood.md
@@ -0,0 +1,26 @@
+@title = "Under the hood"
+@summary = "Various implementation details."
+
+This page contains various details on how the platform is implemented. You can safely ignore this page, although it may be useful if you plan to make modifications to the platform.
+
+Puppet Details
+======================================
+
+Tags
+----
+
+Tags are used to deploy different classes.
+
+* leap_base: site_config::default (configures hostname + resolver, sshd, etc.)
+* leap_slow: site_config::slow (slow things: apt-get update, apt-get dist-upgrade)
+* leap_service: configures the platform service (openvpn, couchdb, etc.)
+
+You can pass any combination of tags, i.e. use
+
+* "--tags leap_base,leap_slow,leap_service" (DEFAULT): Deploy all
+* "--tags leap_service": Only deploy service(s) (useful for debugging/development)
+* "--tags leap_base": Only deploy basic configuration (again, useful for debugging/development)
+
+See http://docs.puppetlabs.com/puppet/2.7/reference/lang_tags.html for puppet tag usage.
+
+
diff --git a/pages/docs/platform/details/webapp.md b/pages/docs/platform/details/webapp.md
new file mode 100644
index 0000000..2b078af
--- /dev/null
+++ b/pages/docs/platform/details/webapp.md
@@ -0,0 +1,282 @@
+@title = 'LEAP Web'
+@summary = 'The web component of the LEAP Platform, providing user management, support desk, documentation and more.'
+@toc = true
+
+Introduction
+===================
+
+"LEAP Web" is the webapp component of the LEAP Platform, providing the following services:
+
+* REST API for user registration.
+* Admin interface to manage users.
+* Client certificate distribution and renewal.
+* User support help tickets.
+* Billing.
+* Customizable and localized user documentation.
+
+This web application is written in Ruby on Rails 3, using CouchDB as the backend data store.
+
+It is licensed under the GNU Affero General Public License (version 3.0 or higher). See http://www.gnu.org/licenses/agpl-3.0.html for more information.
+
+Known problems
+====================
+
+* Client certificates are generated without a CSR. The problem is that this makes the web
+ application extremely vulnerable to denial of service attacks. This was not an issue until we
+ started to allow the possibility of anonymously fetching a client certificate without
+ authenticating first.
+
+* By its very nature, the user database is vulnerable to enumeration attacks. These are
+ very hard to prevent, because our protocol is designed to allow query of a user database via
+ proxy in order to provide network perspective.
+
+Integration
+===========
+
+LEAP Web is part of the LEAP Platform. Most of the time it will be customized and deployed in that context. This section describes the integration of LEAP Web in the wider framework. The Development section focuses on development of LEAP Web itself.
+
+Configuration & Customization
+------------------------------
+
+The customization of the webapp for a leap provider happens via two means:
+ * configuration settings in services/webapp.json
+ * custom files in files/webapp
+
+### Configuration Settings
+
+The webapp ships with a fairly large set of default settings for all environments, stored in config/defaults.yml. During deployment, the platform creates config/config.yml from the settings in services/webapp.json. These settings overwrite the defaults.
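+For example, a provider might pin a few settings in services/webapp.json. The structure shown here is illustrative; consult config/defaults.yml for the authoritative setting names:

```json
{
  "webapp": {
    "admins": ["myusername", "otherusername"]
  }
}
```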
+
+### Custom Files
+
+Any file placed in files/webapp in the provider's repository will overwrite the content of config/customization in the webapp. These files will override files of the same name.
+
+This mechanism allows customizing basically all aspects of the webapp.
+See files/webapp/README.md in the provider's repository for more.
+
+### Provider Information ###
+
+The leap client fetches provider information via json files from the server. The platform prepares that information and stores it in the webapp in public/1/config/*.json. (1 being the current API version).
+
+Provider Documentation
+-------------
+
+LEAP web already comes with a bit of user documentation. It mostly resides in app/views/pages and thus can be overwritten by adding files to files/webapp/views/pages in the provider repository. You probably want to add your own Terms of Services and Privacy Policy here.
+The webapp will render haml, erb and markdown templates, and pick translated content from localized files such as privacy_policy.es.md. In order to add or remove languages, you have to modify the available_locales setting in the config (see Configuration Settings above).
+
+Development
+===========
+
+Installation
+---------------------------
+
+Typically, this application is installed automatically as part of the LEAP Platform. To install it manually for testing or development, follow these instructions:
+
+### TL;DR ###
+
+Install git, ruby 1.9, rubygems and couchdb on your system. Then run
+
+ gem install bundler
+ git clone https://leap.se/git/leap_web
+ cd leap_web
+ git submodule update --init
+ bundle install --binstubs
+ bin/rails server
+
+### Install system requirements
+
+First of all you need to install ruby, git and couchdb. On Debian-based systems, this would be achieved by something like
+
+ sudo apt-get install git ruby1.9.3 rubygems couchdb
+
+We install most gems we depend upon through [bundler](http://gembundler.com). So first install bundler
+
+ sudo gem install bundler
+
+On Debian Wheezy or later, there is a Debian package for bundler, so you can alternatively run ``sudo apt-get install bundler``.
+
+### Download source
+
+Simply clone the git repository:
+
+ git clone git://leap.se/leap_web
+ cd leap_web
+
+### SRP Submodule
+
+We currently use a git submodule to include srp-js. This will soon be replaced by a ruby gem, but for now you need to run
+
+ git submodule update --init
+
+### Install required ruby libraries
+
+ cd leap_web
+ bundle
+
+Typically, you run ``bundle`` as a normal user and it will ask you for a sudo password when it is time to install the required gems. If you don't have sudo, run ``bundle`` as root.
+
+Configuration
+----------------------------
+
+The configuration file `config/defaults.yml` provides good defaults for most
+values. You can override these defaults by creating a file `config/config.yml`.
+
+There are a few values you should make sure to modify:
+
+ production:
+ admins: ["myusername","otherusername"]
+ domain: example.net
+ force_ssl: true
+ secret_token: "4be2f60fafaf615bd4a13b96bfccf2c2c905898dad34..."
+ client_ca_key: "/etc/ssl/ca.key"
+ client_ca_cert: "/etc/ssl/ca.crt"
+ ca_key_password: nil
+
+* `admins` is an array of usernames that are granted special admin privilege.
+* `domain` is your fully qualified domain name.
+* `force_ssl`, if set to true, will require secure cookies and turn on HSTS. Don't do this if you are using a self-signed server certificate.
+* `secret_token`, used for cookie security, you can create one with `rake secret`. Should be at least 30 characters.
+* `client_ca_key`, the private key of the CA used to generate client certificates.
+* `client_ca_cert`, the public certificate of the CA used to generate client certificates.
+* `ca_key_password`, used to unlock the client_ca_key, if needed.
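+The `secret_token` can also be generated without the rails app. Any sufficiently long random string works; for example, with openssl (assuming it is installed):

```shell
# 32 random bytes, hex-encoded: a 64-character token, well above the
# 30-character minimum suggested above.
openssl rand -hex 32
```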
+
+### Provider Settings
+
+The leap client fetches provider information via json files from the server.
+If you want to use that functionality, please add your provider files to the public/1/config directory (1 being the current API version).
+
+Running
+-----------------------------
+
+ cd leap_web
+ bin/rails server
+
+You will find Leap Web running on `localhost:3000`.
+
+Testing
+--------------------------------
+
+To run all tests:
+
+ rake test
+
+To run an individual test:
+
+ rake test TEST=certs/test/unit/client_certificate_test.rb
+
+or
+
+ ruby -Itest certs/test/unit/client_certificate_test.rb
+
+Engines
+---------------------
+
+Leap Web includes some Engines. All things in `app` will overwrite the engine behaviour. You can clone the leap web repository and add your customizations to the `app` directory. Including leap_web as a gem is currently not supported. It should not require too much work though and we would be happy to include the changes required.
+
+If you have no use for one of the engines you can remove it from the Gemfile. Engines should really be plugins - no other engines should depend upon them. If you need functionality in different engines it should probably go into the toplevel.
+
+# Deployment #
+
+We strongly recommend using the LEAP platform to deploy. Most of the things documented here are automated as part of the platform. If you want to research how the platform deploys, or to work on your own mechanism, this section is for you.
+
+These instructions are targeting a Debian GNU/Linux system. You might need to change the commands to match your own needs.
+
+## Server Preparation ##
+
+### Dependencies ###
+
+The following packages need to be installed:
+
+* git
+* ruby1.9
+* rubygems1.9
+* couchdb (if you want to use a local couch)
+
+### Setup Capistrano ###
+
+We use puppet to deploy, but we also ship an untested config/deploy.rb.example. Edit it to match your needs if you want to use Capistrano.
+
+Run `cap deploy:setup` to create the directory structure.
+
+Run `cap deploy` to deploy to the server.
+
+## Customized Files ##
+
+Please make sure your deploy includes the following files:
+
+* public/1/config/*.json (see Provider Settings section)
+* config/couchdb.yml
+
+## Couch Security ##
+
+We recommend against using an admin user for running the webapp. To avoid this, the couch design documents need to be created ahead of time and the auto-update mechanism needs to be disabled.
+Take a look at test/setup_couch.sh for an example of securing the couch.
+
+## Design Documents ##
+
+After securing the couch design documents need to be deployed with admin permissions. There are two ways of doing this:
+ * rake couchrest:migrate_with_proxies
+ * dump the documents as files with `rake couchrest:dump` and deploy them
+ to the couch by hand or with the platform.
+
+### CouchRest::Migrate ###
+
+The before_script block in .travis.yml illustrates how to do this:
+
+ mv test/config/couchdb.yml.admin config/couchdb.yml # use admin privileges
+ bundle exec rake couchrest:migrate_with_proxies # run the migrations
+ bundle exec rake couchrest:migrate_with_proxies # looks like this needs to run twice
+ mv test/config/couchdb.yml.user config/couchdb.yml # drop admin privileges
+
+### Deploy design docs from CouchRest::Dump ###
+
+First of all we get the design docs as files:
+
+ # put design docs in /tmp/design
+ bundle exec rake couchrest:dump
+
+Then we add them to files/design in the site_couchdb module in leap_platform so they get deployed with the couch. You could also upload them using curl or something similar.
+
+# Troubleshooting #
+
+Here are some less common issues you might run into when installing Leap Web.
+
+## Cannot find Bundler ##
+
+### Error Messages ###
+
+`bundle: command not found`
+
+### Solution ###
+
+Make sure bundler is installed. `gem list bundler` should list `bundler`.
+The `bundle` executable also needs to be in your PATH.
+
+## Outdated version of rubygems ##
+
+### Error Messages ###
+
+`bundler requires rubygems >= 1.3.6`
+
+### Solution ###
+
+`gem update --system` will install the latest rubygems.
+
+## Missing development tools ##
+
+Some required gems will compile C extensions. They need a bunch of utils for this.
+
+### Error Messages ###
+
+`make: Command not found`
+
+### Solution ###
+
+Install the required tools. For Linux, the `build-essential` package provides most of them. For Mac OS you probably want the Xcode Command Line Tools.
+
+## Missing libraries and headers ##
+
+Some gem dependencies might not compile because they lack the needed C libraries.
+
+### Solution ###
+
+Install the libraries in question including their development files.
+
+
diff --git a/pages/docs/platform/en.md b/pages/docs/platform/en.md
new file mode 100644
index 0000000..d0dcfcc
--- /dev/null
+++ b/pages/docs/platform/en.md
@@ -0,0 +1,77 @@
+@title = 'LEAP Platform for Service Providers'
+@nav_title = 'Provider Platform'
+@summary = 'Software platform to automate the process of running a communication service provider.'
+@toc = true
+
+The *LEAP Platform* is a set of complementary packages and server recipes to automate the maintenance of LEAP services in a hardened Debian environment. Its goal is to make it as painless as possible for sysadmins to deploy and maintain a service provider's infrastructure for secure communication.
+
+The LEAP Platform consists of three parts, detailed below:
+
+1. The platform recipes.
+2. The provider instance.
+3. The `leap` command line tool.
+
+The platform recipes
+--------------------
+
+The LEAP platform recipes define an abstract service provider. They are a set of [Puppet](https://puppetlabs.com/puppet/puppet-open-source/) modules designed to work together to provide sysadmins with everything they need to manage a service provider infrastructure that provides secure communication services.
+
+LEAP maintains a repository of platform recipes, which typically do not need to be modified, although it can be forked and merged as desired. Most service providers using the LEAP platform can use the same set of platform recipes.
+
+Because these recipes consist of abstract definitions, a system administrator has to create a provider instance (see below) in order to configure the settings for a particular service provider.
+
+LEAP's platform recipes are distributed as a git repository: `https://leap.se/git/leap_platform`
+
+The provider instance
+---------------------
+
+A provider instance is a directory tree (typically tracked in git) containing all the configurations for a service provider's infrastructure. A provider instance primarily consists of:
+
+* A pointer to the platform recipes.
+* A global configuration file for the provider.
+* A configuration file for each server (node) in the provider's infrastructure.
+* Additional files, such as certificates and keys.
+
+A minimal provider instance directory looks like this:
+
+ └── bitmask # provider instance directory.
+ ├── Leapfile # settings for the `leap` command line tool.
+ ├── provider.json # global settings of the provider.
+ ├── common.json # settings common to all nodes.
+ ├── nodes/ # a directory for node configurations.
+ ├── files/ # keys, certificates, and other files.
+ └── users/ # public key information for privileged sysadmins.
+
+
+A provider instance directory contains everything needed to manage all the servers that compose a provider's infrastructure. Because of this, any versioning tool and development work-flow can be used to manage your provider instance.
+
+The `leap` command line tool
+----------------------------
+
+The `leap` [command line tool](commands) is used by sysadmins to manage everything about a service provider's infrastructure. Except when creating a new provider instance, `leap` is run from within the directory tree of a provider instance.
+
+The `leap` command line has many capabilities, including:
+
+* Create, initialize, and deploy nodes.
+* Manage keys and certificates.
+* Query information about the node configurations.
+
+Traditional system configuration automation systems, like [Puppet](https://puppetlabs.com/puppet/puppet-open-source/) or [Chef](http://www.opscode.com/chef/), deploy changes to servers using a pull method. Each server pulls a manifest from a central master server and uses this to alter the state of the server.
+
+Instead, the `leap` tool uses a masterless push method: The sysadmin runs `leap deploy` from the provider instance directory on their desktop machine to push the changes out to every server (or a subset of servers). LEAP still uses Puppet, but there is no central master server that each node must pull from.
+
+One other significant difference between LEAP and typical system automation is how interactions among servers are handled. Rather than store a central database of information about each server that can be queried when a recipe is applied, the `leap` command compiles a static representation of all the information a particular server will need in order to apply the recipes. In compiling this static representation, `leap` can use arbitrary programming logic to query and manipulate information about other servers.
+
+These two approaches, masterless push and pre-compiled static configuration, allow the sysadmin to manage a set of LEAP servers using traditional software development techniques of branching and merging, to more easily create local testing environments using virtual servers, and to deploy without the added complexity and failure potential of a master server.
+
+The `leap` command line tool is distributed as a git repository: `https://leap.se/git/leap_cli`. It can be installed with `sudo gem install leap_cli`.
+
+Getting started
+----------------------------------
+
+We recommend reading the platform documentation in the following order:
+
+1. [Quick start tutorial](tutorials/quick-start).
+2. [Platform Guide](platform/guide).
+3. [Configuration format](platform/config).
+4. The `leap` [command reference](platform/commands).
diff --git a/pages/docs/platform/guide/commands.md b/pages/docs/platform/guide/commands.md
new file mode 100644
index 0000000..0cee709
--- /dev/null
+++ b/pages/docs/platform/guide/commands.md
@@ -0,0 +1,419 @@
+@title = 'Command Line Reference'
+@summary = "A copy of leap --help"
+
+The command "leap" can be used to manage a bevy of servers running the LEAP platform from the comfort of your own home.
+
+
+# Global Options
+
+* `--log FILE`
+Override default log file
+Default Value: None
+
+* `-v|--verbose LEVEL`
+Verbosity level 0..5
+Default Value: 1
+
+* `--[no-]color`
+Disable colors in output
+
+* `--debug`
+Enable debugging library (leap_cli development only)
+
+* `--help`
+Show this message
+
+* `--version`
+Display version number and exit
+
+* `--yes`
+Skip prompts and assume "yes"
+
+
+# leap add-user USERNAME
+
+Adds a new trusted sysadmin by adding public keys to the "users" directory.
+
+
+
+**Options**
+
+* `--pgp-pub-key arg`
+OpenPGP public key file for this new user
+Default Value: None
+
+* `--ssh-pub-key arg`
+SSH public key file for this new user
+Default Value: None
+
+* `--self`
+Add yourself as a trusted sysadmin by choosing among the public keys available for the current user.
+
+
+# leap cert
+
+Manage X.509 certificates
+
+
+
+## leap cert ca
+
+Creates two Certificate Authorities (one for validating servers and one for validating clients).
+
+To see what values are used in the generation of the certificates (like name and key size), run `leap inspect provider` and look for the "ca" property. To see the details of the created certs, run `leap inspect <file>`.
+
+## leap cert csr
+
+Creates a CSR for use in buying a commercial X.509 certificate.
+
+Unless specified, the CSR is created for the provider's primary domain. The properties used for this CSR come from `provider.ca.server_certificates`.
+
+**Options**
+
+* `--domain DOMAIN`
+Specify what domain to create the CSR for.
+Unless specified, the CSR is created for the provider's primary domain. The properties used for this CSR come from `provider.ca.server_certificates`.
+Default Value: None
+
+
+## leap cert dh
+
+Creates a Diffie-Hellman parameter file.
+
+
+
+## leap cert update FILTER
+
+Creates or renews an X.509 certificate/key pair for a single node or all nodes, but only if needed.
+
+This command will generate a new certificate for a node if some value included in the certificate (like hostname or IP address) has changed, or if the old certificate will expire soon. Sometimes you might want to force the generation of a new certificate, such as when you have changed a CA parameter for server certificates, like bit size or digest hash. In this case, use `--force`. If <node-filter> is empty, this command will apply to all nodes.
+
+**Options**
+
+* `--force`
+Always generate new certificates
+
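+For example, after changing a CA parameter like `provider.ca.server_certificates.bit_size`, you could force new certificates for every node (or name a single node as the filter):
+
+    leap cert update --force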
+
+# leap clean
+
+Removes all files generated with the "compile" command.
+
+
+
+# leap compile
+
+Compile generated files.
+
+
+
+## leap compile all [ENVIRONMENT]
+
+Compiles node configuration files into hiera files used for deployment.
+
+
+
+## leap compile zone
+
+Compile a DNS zone file for your provider.
+
+
+Default Command: all
+
+# leap db
+
+Database commands.
+
+
+
+## leap db destroy [FILTER]
+
+Destroy all the databases. If present, limit to FILTER nodes.
+
+
+
+# leap deploy FILTER
+
+Apply recipes to a node or set of nodes.
+
+The FILTER can be the name of a node, service, or tag.
+
+**Options**
+
+* `--ip IPADDRESS`
+Override the default SSH IP address.
+Default Value: None
+
+* `--port PORT`
+Override the default SSH port.
+Default Value: None
+
+* `--tags TAG[,TAG]`
+Specify tags to pass through to puppet (overriding the default).
+Default Value: leap_base,leap_service
+
+* `--dev`
+Development mode: don't run 'git submodule update' before deploy.
+
+* `--fast`
+Makes the deploy command faster by skipping some slow steps. A "fast" deploy can be used safely if you recently completed a normal deploy.
+
+* `--force`
+Deploy even if there is a lockfile.
+
+* `--[no-]sync`
+Sync files, but don't actually apply recipes.
+
+
+# leap env
+
+Manipulate and query environment information.
+
+The 'environment' node property can be used to isolate sets of nodes into entirely separate environments. A node in one environment will never interact with a node from another environment. Environment pinning works by modifying your ~/.leaprc file and is dependent on the absolute file path of your provider directory (pins don't apply if you move the directory).
+
+## leap env ls
+
+List the available environments. The pinned environment, if any, will be marked with '*'.
+
+
+
+## leap env pin ENVIRONMENT
+
+Pin the environment to ENVIRONMENT. All subsequent commands will only apply to nodes in this environment.
+
+
+
+## leap env unpin
+
+Unpin the environment. All subsequent commands will apply to all nodes.
+
+
+Default Command: ls
+
+# leap facts
+
+Gather information on nodes.
+
+
+
+## leap facts update FILTER
+
+Query servers to update facts.json.
+
+Queries every node included in FILTER and saves the important information to facts.json
+
+# leap help command
+
+Shows a list of commands or help for one command
+
+Gets help for the application or its commands. Can also list the commands in a way helpful for creating a bash-style completion function.
+
+**Options**
+
+* `-c`
+List commands one per line, to assist with shell completion
+
+
+# leap inspect FILE
+
+Prints details about a file. Alternately, the argument FILE can be the name of a node, service or tag.
+
+
+
+**Options**
+
+* `--base`
+Inspect the FILE from the provider_base (i.e. without local inheritance).
+
+
+# leap list [FILTER]
+
+List nodes and their classifications
+
+Prints out a listing of nodes, services, or tags. If present, the FILTER can be a list of names of nodes, services, or tags. If the name is prefixed with +, this acts like an AND condition. For example:
+
+`leap list node1 node2` matches all nodes named "node1" OR "node2"
+
+`leap list openvpn +local` matches all nodes with service "openvpn" AND tag "local"
+
+**Options**
+
+* `--print arg`
+What attributes to print (optional)
+Default Value: None
+
+* `--disabled`
+Include disabled nodes in the list.
+
+
+# leap local
+
+Manage local virtual machines.
+
+This command provides a convenient way to manage Vagrant-based virtual machines. If the FILTER argument is missing, the command runs on all local virtual machines. The Vagrantfile is automatically generated in 'test/Vagrantfile'. If you want to run vagrant commands manually, cd to 'test'.
+
+## leap local destroy [FILTER]
+
+Destroys the virtual machine(s), reclaiming the disk space
+
+
+
+## leap local reset [FILTER]
+
+Resets virtual machine(s) to the last saved snapshot
+
+
+
+## leap local save [FILTER]
+
+Saves the current state of the virtual machine as a new snapshot
+
+
+
+## leap local start [FILTER]
+
+Starts up the virtual machine(s)
+
+
+
+## leap local status [FILTER]
+
+Print the status of local virtual machine(s)
+
+
+
+## leap local stop [FILTER]
+
+Shuts down the virtual machine(s)
+
+
+
+# leap mosh NAME
+
+Log in to the specified node with an interactive shell using mosh (requires node to have mosh.enabled set to true).
+
+
+
+# leap new DIRECTORY
+
+Creates a new provider instance in the specified directory, creating the directory if necessary.
+
+
+
+**Options**
+
+* `--contacts arg`
+Default email address contacts.
+Default Value: None
+
+* `--domain arg`
+The primary domain of the provider.
+Default Value: None
+
+* `--name arg`
+The name of the provider.
+Default Value: None
+
+* `--platform arg`
+File path of the leap_platform directory.
+Default Value: None
+
+
+# leap node
+
+Node management
+
+
+
+## leap node add NAME [SEED]
+
+Create a new configuration file for a node named NAME.
+
+If specified, the optional argument SEED can be used to seed values in the node configuration file.
+
+The format is property_name:value.
+
+For example: `leap node add web1 ip_address:1.2.3.4 services:webapp`.
+
+To set nested properties, property name can contain '.', like so: `leap node add web1 ssh.port:44`
+
+Separate multiple values for a single property with a comma, like so: `leap node add mynode services:webapp,dns`
+
+**Options**
+
+* `--local`
+Make a local testing node (by automatically assigning the next available local IP address). Local nodes are run as virtual machines on your computer.
+
+
+## leap node init FILTER
+
+Bootstraps a node or nodes, setting up SSH keys and installing prerequisite packages
+
+This command prepares a server to be used with the LEAP Platform by saving the server's SSH host key, copying the authorized_keys file, installing packages that are required for deploying, and registering important facts. Node init must be run before deploying to a server, and the server must be running and available via the network. This command only needs to be run once, but there is no harm in running it multiple times.
+
+**Options**
+
+* `--ip IPADDRESS`
+Override the default SSH IP address.
+Default Value: None
+
+* `--port PORT`
+Override the default SSH port.
+Default Value: None
+
+* `--echo`
+If set, passwords are visible as you type them (default is hidden)
+
+
+## leap node mv OLD_NAME NEW_NAME
+
+Renames a node file, and all its related files.
+
+
+
+## leap node rm NAME
+
+Removes all the files related to the node named NAME.
+
+
+
+# leap ssh NAME
+
+Log in to the specified node with an interactive shell.
+
+
+
+**Options**
+
+* `--port arg`
+Override ssh port for remote host
+Default Value: None
+
+* `--ssh arg`
+Pass through raw options to ssh (e.g. --ssh '-F ~/sshconfig')
+Default Value: None
+
+
+# leap test
+
+Run tests.
+
+
+
+## leap test init
+
+Creates files needed to run tests.
+
+
+
+## leap test run
+
+Run tests.
+
+
+
+**Options**
+
+* `--[no-]continue`
+Continue over errors and failures (default is --no-continue).
+
+Default Command: run
diff --git a/pages/docs/platform/guide/config.md b/pages/docs/platform/guide/config.md
new file mode 100644
index 0000000..be67e6b
--- /dev/null
+++ b/pages/docs/platform/guide/config.md
@@ -0,0 +1,263 @@
+@title = "Configuration Files"
+@summary = "How to edit configuration files."
+
+Files
+-------------------------------------------
+
+Here is a list of some of the common files that make up a provider. Except for Leapfile and provider.json, the files are optional. Unless otherwise specified, all file names are relative to the 'provider directory' root (where the Leapfile is).
+
+`Leapfile` -- If present, this file tells `leap` that the directory is a provider directory. This file is usually empty, but can contain global options.
+
+`~/.leaprc` -- Evaluated the same as Leapfile, but not committed to source control.
+
+`provider.json` -- Global options related to this provider.
+
+`provider.ENVIRONMENT.json` -- Global options for the provider that are applied to only a single environment.
+
+`common.json` -- All nodes inherit from this file.
+
+`secrets.json` -- An automatically generated file that contains any randomly generated strings needed in order to deploy. These strings are often secret and should be protected. Whenever the platform needs a persistent random string or number, a new entry is added to this file. The file is refreshed each time you run `leap compile` or `leap deploy`, and entries that are no longer needed are removed. If you want to change a secret, you can remove this file and have it regenerated, or remove the particular line item and just those items will be created anew.
+
+`facts.json` -- If some of your servers are running on AWS or OpenStack, you will need to discover certain properties about how networking is configured on these machines in order for a full deploy to work. In these cases, make sure to run `leap facts update` to periodically regenerate the facts.json file.
+
+`nodes/NAME.json` -- The configuration file for node called NAME.
+
+`services/SERVICE.json` -- The properties in this configuration file are applied to any node that includes SERVICE in its `services` property.
+
+`services/SERVICE.ENVIRONMENT.json` -- The properties in this configuration file are applied to any node that includes SERVICE in its `services` property and has `environment` equal to ENVIRONMENT.
+
+`tags/TAG.json` -- The properties in this configuration file are applied to any node that includes TAG in its `tags` property.
+
+`tags/TAG.ENVIRONMENT.json` -- The properties in this configuration file are applied to any node that includes TAG in its `tags` property and has `environment` property equal to ENVIRONMENT.
+
+`files/*` -- Various static files used by the platform (e.g. keys, certificates, webapp customization, etc).
+
+`users/USER/` -- A directory that stores the public keys of the sysadmin with name USER. This person will have root access to all the servers.
+
+
+Leapfile
+-------------------------------------------
+
+A `Leapfile` defines options for the `leap` command and lives at the root of your provider directory. `Leapfile` is evaluated as ruby, so you can include whatever weird logic you want in this file. In particular, there are several variables you can set that modify the behavior of leap. For example:
+
+ @platform_directory_path = '../leap_platform'
+ @log = '/var/log/leap.log'
+
+Additionally, you can create a `~/.leaprc` file that is loaded after `Leapfile` and is evaluated the same way.
+
+Platform options:
+
+* `@platform_directory_path` (required). This must be set to the path where `leap_platform` lives. The path may be relative.
+
+Vagrant options:
+
+* `@vagrant_network`. Allows you to override the default network used for local nodes. It should include a netmask like `@vagrant_network = '10.0.0.0/24'`.
+* `@custom_vagrant_vm_line`. Insert arbitrary text into the auto-generated Vagrantfile. For example, `@custom_vagrant_vm_line = "config.vm.boot_mode = :gui"`.
+
+Logging options:
+
+* `@log`. If set, all command invocation and results are logged to the specified file. This is the same as the switch `--log FILE`, except that the command line switch will override the value in the Leapfile.
+
+
+JSON format
+-------------------------------------------
+
+All configuration files, other than `Leapfile`, are in the JSON format. For example:
+
+ {
+ "key1": "value1",
+ "key2": "value2"
+ }
+
+Keys should match `/[a-z0-9_]/`.
+
+Unlike traditional JSON, comments are allowed. If the first non-whitespace characters are `//` then the line is treated as a comment.
+
+ // this is a comment
+ {
+ // this is a comment
+ "key": "value" // this is an error
+ }
+
+Options in the configuration files might be nested hashes, arrays, numbers, strings, or boolean. Numbers and boolean values should **not** be quoted. For example:
+
+ {
+ "openvpn": {
+ "ip_address": "1.1.1.1",
+ "protocols": ["tcp", "udp"],
+ "ports": [80, 53],
+ "options": {
+ "public_ip": false,
+ "adblock": true
+ }
+ }
+ }
+
+If the value string is prefixed with an '=' character, the result is evaluated as ruby. For example:
+
+    {
+        "domain": {
+            "public": "domain.org"
+        },
+        "api_domain": "= 'api.' + domain.public"
+    }
+
+In this case, the property "api_domain" will be set to "api.domain.org". So long as you do not create unresolvable circular dependencies, you can reference other properties in evaluated ruby that are themselves evaluated ruby.
+
+See "Macros" below for information on the special macros available to the evaluated ruby.
+
+TIP: In rare cases, you might want to force the evaluation of a value to happen in a later pass after most of the other properties have been evaluated. To do this, prefix the value string with "=>" instead of "=".
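+
+For example, a hypothetical property that should only be computed after all other nodes have been evaluated (the property name and filter here are illustrative):
+
+    {
+      "vpn_ips": "=> nodes[:services => 'openvpn'].field('ip_address')"
+    }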
+
+Node inheritance
+----------------------------------------
+
+Every node inherits from common.json and also any of the services or tags attached to the node. Additionally, the `leap_platform` contains a directory `provider_base` that defines the default values for tags, services and common.json.
+
+Suppose you have a node configuration for `bitmask/nodes/willamette.json` like so:
+
+ {
+ "services": "webapp",
+ "tags": ["production", "northwest-us"],
+ "ip_address": "1.1.1.1"
+ }
+
+This node will have hostname "willamette" and it will inherit from the following files (in this order):
+
+1. common.json
+ - load defaults: `provider_base/common.json`
+ - load provider: `bitmask/common.json`
+2. service "webapp"
+ - load defaults: `provider_base/services/webapp.json`
+ - load provider: `bitmask/services/webapp.json`
+3. tag "production"
+ - load defaults: `provider_base/tags/production.json`
+ - load provider: `bitmask/tags/production.json`
+4. tag "northwest-us"
+ - load: `bitmask/tags/northwest-us.json`
+5. finally, load node "willamette"
+ - load: `bitmask/nodes/willamette.json`
+
+The `provider_base` directory is under the `leap_platform` specified in the file `Leapfile`.
+
+To see all the variables a node has inherited, you could run `leap inspect willamette`.
+
+Common configuration options
+----------------------------------------
+
+You can use the command `leap inspect` to see what options are available for a provider, node, service, or tag configuration. For example:
+
+* `leap inspect common` -- show the options inherited by all nodes.
+* `leap inspect --base common` -- show the common.json from `provider_base` without the local `common.json` inheritance applied.
+* `leap inspect webapp` -- show all the options available for the service `webapp`.
+
+Here are some of the more important options you should be aware of:
+
+* `ip_address` -- Required for all nodes, no default.
+* `ssh.port` -- The SSH port you want the node's OpenSSH server to bind to. This is also the default port used when connecting to a node, but if the node currently has OpenSSH running on a different port, run deploy with `--port` to override the `ssh.port` configuration value.
+* `mosh.enabled` -- If set to `true`, then mosh will be installed on the server. The default is `false`.
+
+Macros
+----------------------------------------
+
+When using evaluated ruby in a JSON configuration file, there are several special macros that are available. These are evaluated in the context of a node (available as the variable `self`).
+
+The following methods are available to the evaluated ruby:
+
+`variable.variable`
+
+ > Any variable defined or inherited by a particular node configuration is available by just referencing it using either hash notation or object field notation (e.g. `['domain']['public']` or `domain.public`). Circular references are not allowed, but otherwise it is OK to nest evaluated values in other evaluated values. If a value has not been defined, the hash notation will return nil but the field notation will raise an exception. Properties of services, tags, and the global provider can all be referenced the same way. For example, `global.services['openvpn'].x509.dh`.
+
+`nodes`
+
+ > A hash of all nodes. This list can be filtered.
+
+`nodes_like_me`
+
+ > A hash of nodes that have the same deployment tags as the current node (e.g. 'production' or 'local').
+
+`global.services`
+
+ > A hash of all services, e.g. `global.services['openvpn']` would return the "openvpn" service.
+
+`global.tags`
+
+ > A hash of all tags, e.g. `global.tags['production']` would return the "production" tag.
+
+`global.provider`
+
+ > Can be used to access variables defined in `provider.json`, e.g. `global.provider.contacts.default`.
+
+`file(filename)`
+
+ > Inserts the full contents of the file. If the file is an erb template, it is rendered. The filename can either be one of the pre-defined file symbols, or it can be a path relative to the "files" directory in your provider instance. E.g. `file :ca_cert` or `file 'ca/ca.crt'`.
+
+`file_path(filename)`
+
+ > Ensures that the file will get rsynced to the node as an individual file. The value returned by `file_path` is the full path where this file will ultimately live when deployed to the node, e.g. `file_path :ca_cert` or `file_path 'branding/images/logo.png'`.
+
+`secret(:symbol)`
+
+ > Returns the value of a secret in secrets.json (or creates it if necessary). E.g. `secret :couch_admin_password`
+
+`hosts_file`
+
+ > Returns a data structure that puppet will use to generate /etc/hosts. Care is taken to use the local IP of other hosts when needed.
+
+`known_hosts_file`
+
+ > Returns the lines needed in a SSH `known_hosts` file.
+
+`stunnel_client(node_list, port, options={})`
+
+ > Returns a stunnel configuration data structure for the client side. Argument `node_list` is an `ObjectList` of nodes running stunnel servers. Argument `port` is the real port of the ultimate service running on the servers that the client wants to connect to.
+
+`stunnel_server(port)`
+
+ > Generates a stunnel server entry. The `port` is the real port of the targeted service.
+
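+Putting a few of these macros together, a hypothetical service file might look like this (the property names are illustrative):
+
+    {
+      "couch": {
+        "admin_password": "= secret :couch_admin_password",
+        "peers": "= nodes_like_me[:services => 'couchdb'].field('ip_address')"
+      }
+    }
+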
+Hash tables
+-----------------------------------------
+
+The macros `nodes`, `nodes_like_me`, `global.services`, and `global.tags` all return a hash table of configuration objects (either nodes, services, or tags). There are several ways to filter and process these hash tables:
+
+Access an element by name:
+
+ nodes['vpn1'] # returns node named 'vpn1'
+ global.services['openvpn'] # returns service named 'openvpn'
+
+Create a new hash table by applying filters:
+
+ nodes[:public_dns => true] # all nodes where public_dns == true
+ nodes[:services => 'openvpn', 'location.country_code' => 'US'] # openvpn service OR in the US.
+ nodes[[:services, 'openvpn'], [:services, 'tor']] # services equal to openvpn OR tor
+ nodes[:services => 'openvpn'][:tags => 'production'] # openvpn AND production
+ nodes[:name => "!bob"] # all nodes that are NOT named "bob"
+
+Create an array of values by selecting a single field:
+
+ nodes.field('location.name')
+ ==> ['seattle', 'istanbul']
+
+Create an array of hashes by selecting multiple fields:
+
+ nodes.fields('domain.full', 'ip_address')
+ ==> [
+ {'domain_full' => 'red.bitmask.net', 'ip_address' => '1.1.1.1'},
+ {'domain_full' => 'blue.bitmask.net', 'ip_address' => '1.1.1.2'},
+ ]
+
+Create a new hash table of hashes, with only certain fields:
+
+ nodes.pick_fields('domain.full', 'ip_address')
+ ==> {
+ "red" => {'domain_full' => 'red.bitmask.net', 'ip_address' => '1.1.1.1'},
+      "blue" => {'domain_full' => 'blue.bitmask.net', 'ip_address' => '1.1.1.2'},
+ }
+
+With `pick_fields`, if there is only one field, it will generate a simple hash table:
+
+ nodes.pick_fields('ip_address')
+ ==> {
+ "red" => '1.1.1.1',
+      "blue" => '1.1.1.2',
+ }
diff --git a/pages/docs/platform/guide/en.haml b/pages/docs/platform/guide/en.haml
new file mode 100644
index 0000000..61c24ea
--- /dev/null
+++ b/pages/docs/platform/guide/en.haml
@@ -0,0 +1,4 @@
+- @nav_title = "Guide"
+- @title = "Platform Guide"
+
+= child_summaries \ No newline at end of file
diff --git a/pages/docs/platform/guide/environments.md b/pages/docs/platform/guide/environments.md
new file mode 100644
index 0000000..67d8ace
--- /dev/null
+++ b/pages/docs/platform/guide/environments.md
@@ -0,0 +1,69 @@
+@title = "Working with environments"
+@nav_title = "Environments"
+@summary = "How to partition the nodes into separate environments."
+
+With environments, you can divide your nodes into different and entirely separate sets. For example, you might have sets of nodes for 'testing', 'staging' and 'production'.
+
+Typically, the nodes in one environment are totally isolated from the nodes in a different environment. Each environment will have its own separate database, for example.
+
+There are a few exceptions to this rule: backup nodes, for example, will by default attempt to back up data from all the environments (excluding local).
+
+## Assign an environment
+
+To assign an environment to a node, you just set the `environment` node property. This is typically done with tags, although it is not necessary. For example:
+
+`tags/production.json`
+
+ {
+ "environment": "production"
+ }
+
+`nodes/mynode.json`
+
+ {
+ "tags": ["production"]
+ }
+
+There are several built-in tags that will apply a value for the environment:
+
+* `production`: An environment for nodes that are in use by end users.
+* `development`: An environment to be used for nodes that are being used for experiments or staging.
+* `local`: This environment gets automatically applied to all nodes that run only on local VMs. Nodes with a `local` environment are treated specially and excluded from certain calculations.
+
+You don't need to use these and you can add your own.
+
+## Environment commands
+
+* `leap env` -- List the available environments and display which one is active.
+* `leap env pin ENV` -- Pin the current environment to ENV.
+* `leap env unpin` -- Remove the environment pin.
+
+The environment pin is only active for your local machine: it is not recorded in the provider directory and not shared with other users.
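+
+A typical pinning session might look like this:
+
+    leap env pin production
+    leap deploy        # applies only to nodes in 'production'
+    leap env unpin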
+
+## Environment specific JSON files
+
+You can add JSON configuration files that are only applied when a specific environment is active. For example, if you create a file `provider.production.json`, these values will only get applied to the `provider.json` file for the `production` environment.
+
+This will also work for services and tags. For example:
+
+ provider.local.json
+ services/webapp.development.json
+ tags/seattle.production.json
+
+In this example, `local`, `development`, and `production` are the names of environments.
+
+## Bind an environment to a Platform version
+
+If you want to ensure that a particular environment is bound to a particular version of the LEAP Platform, you can add a `platform` section to the `provider.ENV.json` file (where ENV is the name of the environment in question).
+
+The available options are `platform.version`, `platform.branch`, or `platform.commit`. For example:
+
+ {
+ "platform": {
+ "version": "1.6.1",
+ "branch": "develop",
+ "commit": "5df867fbd3a78ca4160eb54d708d55a7d047bdb2"
+ }
+ }
+
+You can use any combination of `version`, `branch`, and `commit` to specify the binding. The values for `branch` and `commit` only work if the `leap_platform` directory is a git repository.
diff --git a/pages/docs/platform/guide/keys-and-certificates.md b/pages/docs/platform/guide/keys-and-certificates.md
new file mode 100644
index 0000000..6139acd
--- /dev/null
+++ b/pages/docs/platform/guide/keys-and-certificates.md
@@ -0,0 +1,194 @@
+@title = "Keys and Certificates"
+@summary = "Working with SSH keys, secrets, and X.509 certificates."
+
+Working with SSH
+================================
+
+Whenever the `leap` command needs to push changes to a node or gather information from a node, it tunnels this command over SSH. Another way to put this: the security of your servers rests entirely on SSH. Because of this, it is important that you understand how `leap` uses SSH.
+
+SSH related files
+-------------------------------
+
+Assuming your provider directory is called 'provider':
+
+* `provider/nodes/crow/crow_ssh.pub` -- The public SSH host key for node 'crow'.
+* `provider/users/alice/alice_ssh.pub` -- The public SSH user key for user 'alice'. Anyone with the private key that corresponds to this public key will have root access to all nodes.
+* `provider/files/ssh/known_hosts` -- An autogenerated known_hosts, built from combining `provider/nodes/*/*_ssh.pub`. You must not edit this file directly. If you need to change it, remove or change one of the files that is used to generate `known_hosts` and then run `leap compile`.
+* `provider/files/ssh/authorized_keys` -- An autogenerated list of all the user SSH keys with root access to the nodes. It is created from `provider/users/*/*_ssh.pub`. You must not edit this file directly. If you need to change it, remove or change one of the files that is used to generate `authorized_keys` and then run `leap compile`.
+
+All of these files should be committed to source control.
+
+If you rename, remove, or add a node with `leap node [mv|add|rm]` the SSH key files and the `known_hosts` file will get properly updated.
+
+SSH and local nodes
+-----------------------------
+
+Local nodes are run as Vagrant virtual machines. The `leap` command handles SSH slightly differently for these nodes.
+
+Basically, all the SSH security is turned off for local nodes. Since local nodes only exist for a short time on your computer and can't be reached from the internet, this is not a problem.
+
+Specifically, for local nodes:
+
+1. `known_hosts` is never updated with local node keys, since the SSH public key of a local node is different for each user.
+2. `leap` entirely skips the checking of host keys when connecting with a local node.
+3. `leap` adds the public Vagrant SSH key to the list of SSH keys for a user. The public Vagrant SSH key is a shared and insecure key that has root access to most Vagrant virtual machines.
+
+When SSH host key changes
+-------------------------------
+
+If the host key for a node has changed, you will get an error "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED".
+
+To fix this, you need to remove the file `nodes/stompy/stompy_ssh.pub` and run `leap node init stompy`, where the node's name is 'stompy'. **Only do this if you are ABSOLUTELY CERTAIN that the node's SSH host key has changed**.
+
+Changing the SSH port
+--------------------------------
+
+Suppose you have a node `blinky` that has SSH listening on port 22 and you want to make it port 2200.
+
+First, modify the configuration for `blinky` to specify the variable `ssh.port` as 2200. Usually, this is done in `common.json` or in a tag file.
+
+For example, you could put this in `tags/production.json`:
+
+ {
+ "ssh": {
+ "port": 2200
+ }
+ }
+
+Run `leap compile` and open `hiera/blinky.yaml` to confirm that `ssh.port` is set to 2200. The port number must be specified as a number, not a string (no quotes).
+
+Then, you need to deploy this change so that SSH will bind to 2200. You cannot simply run `leap deploy blinky` because this command will default to using the variable `ssh.port` which is now `2200` but SSH on the node is still bound to 22.
+
+So, you manually override the port in the deploy command, using the old port:
+
+ leap deploy --port 22 blinky
+
+Afterwards, SSH on `blinky` should be listening on port 2200 and you can just run `leap deploy blinky` from then on.
+
+Sysadmins with multiple SSH keys
+-----------------------------------
+
+The command `leap add-user --self` allows only one SSH key. If you want to specify more than one key for a user, you can do it manually:
+
+ users/userx/userx_ssh.pub
+ users/userx/otherkey_ssh.pub
+
+All keys matching 'userx/*_ssh.pub' will be usable.
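+
+For example, to add a second key for 'userx' (the file name is arbitrary, so long as it ends in `_ssh.pub`):
+
+    cp ~/.ssh/id_ed25519.pub users/userx/otherkey_ssh.pub
+    leap compile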
+
+Removing sysadmin access
+--------------------------------
+
+Suppose you want to remove `userx` from having any further ssh access to the servers. Do this:
+
+ rm -r users/userx
+ leap deploy
+
+X.509 Certificates
+================================
+
+Configuration options
+-------------------------------------------
+
+The `ca` option in provider.json provides settings used when generating CAs and certificates. The defaults are as follows:
+
+ {
+ "ca": {
+ "name": "= global.provider.ca.organization + ' Root CA'",
+ "organization": "= global.provider.name[global.provider.default_language]",
+ "organizational_unit": "= 'https://' + global.provider.domain",
+ "bit_size": 4096,
+ "digest": "SHA256",
+ "life_span": "10y",
+ "server_certificates": {
+ "bit_size": 2048,
+ "digest": "SHA256",
+ "life_span": "1y"
+ },
+ "client_certificates": {
+ "bit_size": 2048,
+ "digest": "SHA256",
+ "life_span": "2m",
+ "limited_prefix": "LIMITED",
+ "unlimited_prefix": "UNLIMITED"
+ }
+ }
+ }
+
+You should not need to override these defaults in your own provider.json, but you can if you want to. To see what values are used for your provider, run `leap inspect provider.json`.
+
+NOTE: A certificate `bit_size` greater than 2048 will probably not be recognized by most commercial CAs.
+
+Certificate Authorities
+-----------------------------------------
+
+There are three X.509 certificate authorities (CAs) associated with your provider:
+
+1. **Commercial CA:** It is strongly recommended that you purchase a commercial cert for your primary domain. The goal of the platform is to not depend on the commercial CA system, but purchasing a certificate does increase security and usability. The cert for the commercial CA must live at `files/cert/commercial_ca.crt`.
+2. **Server CA:** This is a self-signed CA responsible for signing all the **server** certificates. The private key lives at `files/ca/ca.key` and the public cert lives at `files/ca/ca.crt`. The key is very sensitive information and must be kept private. The public cert is distributed publicly.
+3. **Client CA:** This is a self-signed CA responsible for signing all the **client** certificates. The private key lives at `files/ca/client_ca.key` and the public cert lives at `files/ca/client_ca.crt`. Neither file is distributed publicly. It is not a big deal if the private key for the client CA is compromised: you can just generate a new one and re-deploy.
+
+To generate both the Server CA and the Client CA, run the command:
+
+ leap cert ca
+
+Server certificates
+-----------------------------------
+
+Most every server in your service provider will have a x.509 certificate, generated by the `leap` command using the Server CA. Whenever you modify any settings of a node that might affect it's certificate (like changing the IP address, hostname, or settings in provider.json), you can magically regenerate all the certs that need to be regenerated with this command:
+
+ leap cert update
+
+Run `leap help cert update` for notes on usage options.
+
+Because the server certificates are generated locally on your personal machine, the private key for the Server CA need never be put on any server. It is up to you to keep this file secure.
+
+Client certificates
+--------------------------------
+
+Every leap client gets its own time-limited client certificate. This cert is use to connect to the OpenVPN gateway (and probably other things in the future). It is generated on the fly by the webapp using the Client CA.
+
+To make this work, the private key of the Client CA is made available to the webapp. This might seem bad, but compromise of the Client CA simply allows the attacker to use the OpenVPN gateways without paying. In the future, we plan to add a command to automatically regenerate the Client CA periodically.
+
+There are two types of client certificates: limited and unlimited. A client using a limited cert will have its bandwidth limited to the rate specified by `provider.service.bandwidth_limit` (in Bytes per second). An unlimited cert is given to the user if they authenticate and the user's service level matches one configured in `provider.service.levels` without bandwidth limits. Otherwise, the user is given a limited client cert.
+
+Commercial certificates
+-----------------------------------
+
+We strongly recommend that you use a commercial signed server certificate for your primary domain (in other words, a certificate with a common name matching whatever you have configured for `provider.domain`). This provides several benefits:
+
+1. When users visit your website, they don't get a scary notice that something is wrong.
+2. When a user runs the LEAP client, selecting your service provider will not cause a warning message.
+3. When other providers first discover your provider, they are more likely to trust your provider key if it is fetched over a commercially verified link.
+
+The LEAP platform is designed so that it assumes you are using a commercial cert for the primary domain of your provider, but all other servers are assumed to use non-commercial certs signed by the Server CA you create.
+
+To generate a CSR, run:
+
+ leap cert csr
+
+This command will generate the CSR and private key matching `provider.domain` (you can change the domain with `--domain=DOMAIN` switch). It also generates a server certificate signed with the Server CA. You should delete this certificate and replace it with a real one once it is created by your commercial CA.
+
+The related commercial cert files are:
+
+ files/
+ certs/
+ domain.org.crt # Server certificate for domain.org, obtained by commercial CA.
+ domain.org.csr # Certificate signing request
+ domain.org.key # Private key for you certificate
+ commercial_ca.crt # The CA cert obtained from the commercial CA.
+
+The private key file is extremely sensitive and care should be taken with its provenance.
+
+If your commercial CA has a chained CA cert, you should be OK if you just put the **last** cert in the chain into the `commercial_ca.crt` file. This only works if the other CAs in the chain have certs in the debian package `ca-certificates`, which is the case for almost all CAs.
+
+If you want to add additional fields to the CSR, like country, city, or locality, you can configure these values in provider.json like so:
+
+ "ca": {
+ "server_certificates": {
+ "country": "US",
+ "state": "Washington",
+ "locality": "Seattle"
+ }
+ }
+
+If they are not present, the CSR will be created without them. \ No newline at end of file
diff --git a/pages/docs/platform/guide/miscellaneous.md b/pages/docs/platform/guide/miscellaneous.md
new file mode 100644
index 0000000..c38c007
--- /dev/null
+++ b/pages/docs/platform/guide/miscellaneous.md
@@ -0,0 +1,14 @@
+@title = "Miscellaneous"
+@summary = "Miscellaneous commands you may need to know."
+
+Facts
+==============================
+
+There are a few cases when we must gather internal data from a node before we can successfully deploy to other nodes. This is what `facts.json` is for. It stores a snapshot of certain facts about each node, as needed. Entries in `facts.json` are updated automatically when you initialize, rename, or remove a node. To manually force a full update of `facts.json`, run:
+
+ leap facts update FILTER
+
+Run `leap help facts update` for more information.
+
+The file `facts.json` should be committed to source control. You might not have a `facts.json` if one is not required for your provider.
+
diff --git a/pages/docs/platform/guide/nodes.md b/pages/docs/platform/guide/nodes.md
new file mode 100644
index 0000000..cf22544
--- /dev/null
+++ b/pages/docs/platform/guide/nodes.md
@@ -0,0 +1,187 @@
+@title = "Nodes"
+@summary = "Working with nodes, services, tags, and locations."
+
+Node types
+================================
+
+Every node has one or more services that determines the node's function within your provider's infrastructure.
+
+When adding a new node to your provider, you should ask yourself four questions:
+
+* **many or few?** Some services benefit from having many nodes, while some services are best run on only one or two nodes.
+* **required or optional?** Some services are required, while others can be left out.
+* **who does the node communicate with?** Some services communicate very heavily with other particular services. Nodes running these services should be close together.
+* **public or private?** Some services communicate with the public internet, while others only need to communicate with other nodes in the infrastructure.
+
+Brief overview of the services:
+
+* **webapp**: The web application. Runs both webapp control panel for users and admins as well as the REST API that the client uses. Needs to communicate heavily with `couchdb` nodes. You need at least one, good to have two for redundancy. The webapp does not get a lot of traffic, so you will not need many.
+* **couchdb**: The database for users and user data. You can get away with just one, but for proper redundancy you should have at least three. Communicates heavily with `webapp`, `mx`, and `soledad` nodes.
+* **soledad**: Handles the data syncing with clients. Typically combined with `couchdb` service, since it communicates heavily with couchdb.
+* **mx**: Incoming and outgoing MX servers. Communicates with the public internet, clients, and `couchdb` nodes.
+* **openvpn**: OpenVPN gateway for clients. You need at least one, but want as many as needed to support the bandwidth your users are doing. The `openvpn` nodes are autonomous and don't need to communicate with any other nodes. Often combined with `tor` service.
+* **monitor**: Internal service to monitor all the other nodes. Currently, you can have zero or one `monitor` service defined. It is required that the monitor be on the webapp node. It was not designed to be run as a separate node service.
+* **tor**: Sets up a tor exit node, unconnected to any other service.
+* **dns**: Not yet implemented.
+
+Webapp
+-----------------------------------
+
+The webapp node is responsible for both the user face web application and the API that the client interacts with.
+
+Some users can be "admins" with special powers to answer tickets and close accounts. To make an account into an administrator, you need to configure the `webapp.admins` property with an array of user names.
+
+For example, to make users `alice` and `bob` into admins, create a file `services/webapp.json` with the following content:
+
+ {
+ "webapp": {
+ "admins": ["bob", "alice"]
+ }
+ }
+
+And then redeploy to all webapp nodes:
+
+ leap deploy webapp
+
+By putting this in `services/webapp.json`, you will ensure that all webapp nodes inherit the value for `webapp.admins`.
+
+Services
+================================
+
+What nodes do you need for a provider that offers particular services?
+
+<table class="table table-striped">
+<tr>
+ <th>Node Type</th>
+ <th>VPN Service</th>
+ <th>Email Service</th>
+ <th>Notes</th>
+</tr>
+<tr>
+ <td>webapp</td>
+ <td>required</td>
+ <td>required</td>
+ <td></td>
+</tr>
+<tr>
+ <td>couchdb</td>
+ <td>required</td>
+ <td>required</td>
+<td></td>
+</tr>
+<tr>
+ <td>soledad</td>
+ <td>not used</td>
+ <td>required</td>
+<td></td>
+</tr>
+<tr>
+ <td>mx</td>
+ <td>not used</td>
+ <td>required</td>
+ <td></td>
+</tr>
+<tr>
+ <td>openvpn</td>
+ <td>required</td>
+ <td>not used</td>
+ <td></td>
+</tr>
+<tr>
+ <td>monitor</td>
+ <td>optional</td>
+ <td>optional</td>
+ <td>This service must be on the webapp node</td>
+</tr>
+<tr>
+ <td>tor</td>
+ <td>optional</td>
+ <td>optional</td>
+ <td></td>
+</tr>
+</table>
+
+Locations
+================================
+
+All nodes should have a `location.name` specified, and optionally additional information about the location, like the time zone. This location information is used for two things:
+
+* Determine which nodes can, or must, communicate with one another via a local network. The way some virtualization environments work, like OpenStack, requires that nodes communicate via the local network if they are on the same network.
+* Allows the client to prefer connections to nodes that are closer in physical proximity to the user. This is particularly important for OpenVPN nodes.
+
+The location stanza in a node's config file looks like this:
+
+ {
+ "location": {
+ "id": "ankara",
+ "name": "Ankara",
+ "country_code": "TR",
+ "timezone": "+2",
+ "hemisphere": "N"
+ }
+ }
+
+The fields:
+
+* `id`: An internal handle to use for this location. If two nodes have match `location.id`, then they are treated as being on a local network with one another. This value defaults to downcase and underscore of `location.name`.
+* `name`: Can be anything, might be displayed to the user in the client if they choose to manually select a gateway.
+* `country_code`: The [ISO 3166-1](https://en.wikipedia.org/wiki/ISO_3166-1) two letter country code.
+* `timezone`: The timezone expressed as an offset from UTC (in standard time, not daylight savings). You can look up the timezone using this [handy map](http://www.timeanddate.com/time/map/).
+* `hemisphere`: This should be "S" for all servers in South America, Africa, or Australia. Otherwise, this should be "N".
+
+These location options are very imprecise, but good enough for most usage. The client often does not know its own location precisely either. Instead, the client makes an educated guess at location based on the OS's timezone and locale.
+
+If you have multiple nodes in a single location, it is best to use a tag for the location. For example:
+
+`tags/ankara.json`:
+
+ {
+ "location": {
+ "name": "Ankara",
+ "country_code": "TR",
+ "timezone": "+2",
+ "hemisphere": "N"
+ }
+ }
+
+`nodes/vpngateway.json`:
+
+ {
+ "services": "openvpn",
+ "tags": ["production", "ankara"],
+ "ip_address": "1.1.1.1",
+ "openvpn": {
+ "gateway_address": "1.1.1.2"
+ }
+ }
+
+Unless you are using OpenStack or AWS, setting `location` for nodes is not required. It is, however, highly recommended.
+
+Disabling Nodes
+=====================================
+
+There are two ways to temporarily disable a node:
+
+**Option 1: disabled environment**
+
+You can assign an environment to the node that marks it as disabled. Then, if you use environment pinning, the node will be ignored when you deploy. For example:
+
+ {
+ "environment": "disabled"
+ }
+
+Then use `leap env pin ENV` to pin the environment to something other than 'disabled'. This only works if all the other nodes are also assigned to some environment.
+
+**Option 2: enabled == false**
+
+If a node has a property `enabled` set to false, then the `leap` command will skip over the node and pretend that it does not exist. For example:
+
+ {
+ "ip_address": "1.1.1.1",
+ "services": ["openvpn"],
+ "enabled": false
+ }
+
+**Options 3: no-deploy**
+
+If the file `/etc/leap/no-deploy` exists on a node, then when you run the commmand `leap deploy` it will halt and prevent a deploy from going through (if the node was going to be included in the deploy).
diff --git a/pages/docs/platform/service-diagram.odg b/pages/docs/platform/service-diagram.odg
new file mode 100644
index 0000000..09265c2
--- /dev/null
+++ b/pages/docs/platform/service-diagram.odg
Binary files differ
diff --git a/pages/docs/platform/service-diagram.png b/pages/docs/platform/service-diagram.png
new file mode 100644
index 0000000..85e6243
--- /dev/null
+++ b/pages/docs/platform/service-diagram.png
Binary files differ
diff --git a/pages/docs/platform/troubleshooting/en.haml b/pages/docs/platform/troubleshooting/en.haml
new file mode 100644
index 0000000..f0f1359
--- /dev/null
+++ b/pages/docs/platform/troubleshooting/en.haml
@@ -0,0 +1,3 @@
+- @title = "Troubleshooting"
+
+= child_summaries \ No newline at end of file
diff --git a/pages/docs/platform/troubleshooting/known-issues.md b/pages/docs/platform/troubleshooting/known-issues.md
new file mode 100644
index 0000000..f274462
--- /dev/null
+++ b/pages/docs/platform/troubleshooting/known-issues.md
@@ -0,0 +1,115 @@
+@title = 'Leap Platform Release Notes'
+@nav_title = 'Known issues'
+@summary = 'Known issues in the Leap Platform.'
+@toc = true
+
+Here you can find documentation about known issues and potential work-arounds in the current Leap Platform release.
+
+0.6.0
+==============
+
+Upgrading
+------------------
+
+Upgrade your leap_platform to 0.6 and make sure you have the latest leap_cli.
+
+**Update leap_platform:**
+
+ cd leap_platform
+ git pull
+ git checkout -b 0.6.0 0.6.0
+
+**Update leap_cli:**
+
+If it is installed as a gem from rubygems:
+
+ sudo gem update leap_cli
+
+If it is installed as a gem from source:
+
+ cd leap_cli
+ git pull
+ git checkout master
+ rake build
+ sudo rake install
+
+If it is run directly from source:
+
+ cd leap_cli
+ git pull
+ git checkout master
+
+To upgrade:
+
+ leap --version # must be at least 1.6.2
+ leap cert update
+ leap deploy
+ leap test
+
+If the tests fail, try deploying again. If a test fails because there are two tapicero daemons running, you need to ssh into the server, kill all the tapicero daemons manually, and then try deploying again (sometimes the daemon from platform 0.5 would put its PID file in an odd place).
+
+OpenVPN
+------------------
+
+On deployment to a openvpn node, if the following happens:
+
+ - err: /Stage[main]/Site_openvpn/Service[openvpn]/ensure: change from stopped to running failed: Could not start Service[openvpn]: Execution of '/etc/init.d/openvpn start' returned 1: at /srv/leap/puppet/modules/site_openvpn/manifests/init.pp:189
+
+this is likely the result of a kernel upgrade that happened during the deployment, requiring that the machine be restarted before this service can start. To confirm this, login to the node (leap ssh <nodename>) and look at the end of the /var/log/daemon.log:
+
+ # tail /var/log/daemon.log
+ Nov 22 19:04:15 snail ovpn-udp_config[16173]: ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such device (errno=19)
+ Nov 22 19:04:15 snail ovpn-udp_config[16173]: Exiting due to fatal error
+
+if you see this error, simply restart the node.
+
+CouchDB
+---------------------
+
+You can't deploy new couchdb nodes after one or more have been deployed. Make *sure* that you configure and deploy all your couchdb nodes when starting the provider. The problem is that we dont not have a clean way of adding couch nodes after initial creation of the databases, so any nodes added after result in improperly synchronized data. See Bug [#5601](https://leap.se/code/issues/5601) for more information.
+
+In some scenarios, such as when certain components are unavailable, the couchdb syncing will be broken. When things are brought back to normal, shortly after restart, the nodes will attempt to resync all their data, and can fail to complete this process because they run out of file descriptors. A symptom of this is the webapp wont allow you to register or login, the /opt/bigcouch/var/log/bigcouch.log is huge with a lot of errors that include (over multiple lines): {error, emfile}}. We have raised the limits for available file descriptors to bigcouch to try and accommodate for this situation, but if you still experience it, you may need to increase your /etc/sv/bigcouch/run ulimit values and restart bigcouch while monitoring the open file descriptors. We hope that in the next platform release, a newer couchdb will be better at handling these resources.
+
+You can also see the number of file descriptors in use by doing:
+
+ # watch -n1 -d lsof -p `pidof beam`|wc -l
+
+The command `leap db destroy` will not automatically recreate new databases. You must run `leap deploy` afterwards for this.
+
+User setup and ssh
+------------------
+
+At the moment, it is only possible to add an admin who will have access to all LEAP servers (see: https://leap.se/code/issues/2280)
+
+The command `leap add-user --self` allows only one SSH key. If you want to specify more than one key for a user, you can do it manually:
+
+ users/userx/userx_ssh.pub
+ users/userx/otherkey_ssh.pub
+
+All keys matching 'userx/*_ssh.pub' will be used for that user.
+
+Deploying
+---------
+
+If you have any errors during a run, please try to deploy again as this often solves non-deterministic issues that were not uncovered in our testing. Please re-deploy with `leap -v2 deploy` to get more verbose logs and capture the complete output to provide to us for debugging.
+
+If when deploying your debian mirror fails for some reason, network anomoly or the mirror itself is out of date, then platform deployment will not succeed properly. Check the mirror is up and try to deploy again when it is resolved (see: https://leap.se/code/issues/1091)
+
+Deployment gives 'error: in `%`: too few arguments (ArgumentError)' - this is because you attempted to do a deploy before initializing a node, please initialize the node first and then do a deploy afterwards (see: https://leap.se/code/issues/2550)
+
+This release has no ability to custom configure apt sources or proxies (see: https://leap.se/code/issues/1971)
+
+When running a deploy at a verbosity level of 2 and above, you will notice puppet deprecation warnings, these are known and we are working on fixing them
+
+IPv6
+----
+
+As of this release, IPv6 is not supported by the VPN configuration. If IPv6 is detected on your network as a client, it is blocked and instead it should revert to IPv4. We plan on adding IPv6 support in an upcoming release.
+
+
+Special Environments
+--------------------
+
+When deploying to OpenStack release "nova" or newer, you will need to do an initial deploy, then when it has finished run `leap facts update` and then deploy again (see: https://leap.se/code/issues/3020)
+
+It is not possible to actually use the EIP openvpn server on vagrant nodes (see: https://leap.se/code/issues/2401)
diff --git a/pages/docs/platform/troubleshooting/tests.md b/pages/docs/platform/troubleshooting/tests.md
new file mode 100644
index 0000000..8406404
--- /dev/null
+++ b/pages/docs/platform/troubleshooting/tests.md
@@ -0,0 +1,33 @@
+@title = 'Tests and Monitoring'
+@summary = 'Testing and monitoring your infrastructure.'
+@toc = true
+
+## Troubleshooting Tests
+
+At any time, you can run troubleshooting tests on the nodes of your provider infrastructure to check to see if things seem to be working correctly. If there is a problem, these tests should help you narrow down precisely where the problem is.
+
+To run tests on FILTER node list:
+
+ leap test run FILTER
+
+Alternately, you can run test on all nodes (probably only useful if you have pinned the environment):
+
+ leap test
+
+## Monitoring
+
+In order to set up a monitoring node, you simply add a `monitor` service tag to the node configuration file. It could be combined with any other service, but we propose that you add it to the webapp node, as this already is public accessible via HTTPS.
+
+After deploying, this node will regularly poll every node to ask for the status of various health checks. These health checks include the checks run with `leap test`, plus many others.
+
+We use [Nagios](http://www.nagios.org/) together with [Check MK agent](https://en.wikipedia.org/wiki/Check_MK) for running checks on remote hosts.
+
+You can log into the monitoring web interface via [https://MONITORNODE/nagios3/](https://MONITORNODE/nagios3/). The username is `nagiosadmin` and the password is found in the secrets.json file in your provider directory.
+
+### Log Monitoring
+
+At the moment, we use [check-mk-agent-logwatch](https://mathias-kettner.de/checkmk_check_logwatch.html) for searching logs for irregularities.
+Logs are parsed for patterns using a blacklist, and are stored in `/var/lib/check_mk/logwatch/<Nodename>`.
+
+In order to "acknowledge" a log warning, you need to log in to the monitoring server, and delete the corresponding file in `/var/lib/check_mk/logwatch/<Nodename>`. This should be done via the nagios webinterface in the future.
+
diff --git a/pages/docs/platform/troubleshooting/where-to-look.md b/pages/docs/platform/troubleshooting/where-to-look.md
new file mode 100644
index 0000000..fbd9593
--- /dev/null
+++ b/pages/docs/platform/troubleshooting/where-to-look.md
@@ -0,0 +1,249 @@
+@title = 'Where to look for errors'
+@nav_title = 'Where to look'
+@toc = true
+
+
+General
+=======
+
+* Please increase verbosity when debugging / filing issues in our issue tracker. You can do this with adding i.e. `-v 5` after the `leap` cmd, i.e. `leap -v 2 deploy`.
+
+Webapp
+======
+
+Places to look for errors
+-------------------------
+
+* `/var/log/apache2/error.log`
+* `/srv/leap/webapp/log/production.log`
+* `/var/log/syslog` (watch out for stunnel issues)
+* `/var/log/leap/*`
+
+Is haproxy ok ?
+---------------
+
+
+ curl -s -X GET "http://127.0.0.1:4096"
+
+Is couchdb accessible through stunnel ?
+---------------------------------------
+
+* Depending on how many couch nodes you have, increase the port for every test
+ (see /etc/haproxy/haproxy.cfg for the server/port mapping):
+
+
+ curl -s -X GET "http://127.0.0.1:4000"
+ curl -s -X GET "http://127.0.0.1:4001"
+ ...
+
+
+Check couchdb acl as admin
+--------------------------
+
+ mkdir /etc/couchdb
+ cat /srv/leap/webapp/config/couchdb.yml.admin # see username and password
+ echo "machine 127.0.0.1 login admin password <PASSWORD>" > /etc/couchdb/couchdb-admin.netrc
+ chmod 600 /etc/couchdb/couchdb-admin.netrc
+
+ curl -s --netrc-file /etc/couchdb/couchdb-admin.netrc -X GET "http://127.0.0.1:4096"
+ curl -s --netrc-file /etc/couchdb/couchdb-admin.netrc -X GET "http://127.0.0.1:4096/_all_dbs"
+
+Check couchdb acl as unpriviledged user
+---------------------------------------
+
+ cat /srv/leap/webapp/config/couchdb.yml # see username and password
+ echo "machine 127.0.0.1 login webapp password <PASSWORD>" > /etc/couchdb/couchdb-webapp.netrc
+ chmod 600 /etc/couchdb/couchdb-webapp.netrc
+
+ curl -s --netrc-file /etc/couchdb/couchdb-webapp.netrc -X GET "http://127.0.0.1:4096"
+ curl -s --netrc-file /etc/couchdb/couchdb-webapp.netrc -X GET "http://127.0.0.1:4096/_all_dbs"
+
+
+Check client config files
+-------------------------
+
+ https://example.net/provider.json
+ https://example.net/1/config/smtp-service.json
+ https://example.net/1/config/soledad-service.json
+ https://example.net/1/config/eip-service.json
+
+
+Soledad
+=======
+
+ /var/log/soledad.log
+
+
+Couchdb
+=======
+
+Places to look for errors
+-------------------------
+
+* `/opt/bigcouch/var/log/bigcouch.log`
+* `/var/log/syslog` (watch out for stunnel issues)
+
+
+
+Bigcouch membership
+-------------------
+
+* All nodes configured for the provider should appear here:
+
+<pre>
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc -X GET 'http://127.0.0.1:5986/nodes/_all_docs'
+</pre>
+
+* All configured nodes should show up under "cluster_nodes", and the ones online and communicating with each other should appear under "all_nodes". This example output shows the configured cluster nodes `couch1.bitmask.net` and `couch2.bitmask.net`, but `couch2.bitmask.net` is currently not accessible from `couch1.bitmask.net`
+
+
+<pre>
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc 'http://127.0.0.1:5984/_membership'
+ {"all_nodes":["bigcouch@couch1.bitmask.net"],"cluster_nodes":["bigcouch@couch1.bitmask.net","bigcouch@couch2.bitmask.net"]}
+</pre>
+
+* Sometimes a `/etc/init.d/bigcouch restart` on all nodes is needed, to register new nodes
+
+Databases
+---------
+
+* Following output shows all neccessary DBs that should be present. Note that the `user-0123456....` DBs are the data stores for a particular user.
+
+<pre>
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc -X GET 'http://127.0.0.1:5984/_all_dbs'
+ ["customers","identities","sessions","shared","tickets","tokens","user-0","user-9d34680b01074c75c2ec58c7321f540c","user-9d34680b01074c75c2ec58c7325fb7ff","users"]
+</pre>
+
+
+
+
+Design Documents
+----------------
+
+* Is User `_design doc` available ?
+
+
+<pre>
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc -X GET "http://127.0.0.1:5984/users/_design/User"
+</pre>
+
+Is couchdb cluster backend accessible through stunnel ?
+-------------------------------------------------------
+
+* Find out how many connections are set up for the couchdb cluster backend:
+
+<pre>
+ grep "accept = 127.0.0.1" /etc/stunnel/*
+</pre>
+
+
+* Now connect to all of those local endpoints to see if they up. All these tests should return "localhost [127.0.0.1] 4000 (?) open"
+
+<pre>
+ nc -v 127.0.0.1 4000
+ nc -v 127.0.0.1 4001
+ ...
+</pre>
+
+
+MX
+==
+
+Places to look for errors
+-------------------------
+
+* `/var/log/mail.log`
+* `/var/log/leap_mx.log`
+* `/var/log/syslog` (watch out for stunnel issues)
+
+Is couchdb accessible through stunnel ?
+---------------------------------------
+
+* Depending on how many couch nodes you have, increase the port for every test
+ (see /etc/haproxy/haproxy.cfg for the server/port mapping):
+
+
+ curl -s -X GET "http://127.0.0.1:4000"
+ curl -s -X GET "http://127.0.0.1:4001"
+ ...
+
+Query leap-mx
+-------------
+
+* for useraccount
+
+
+<pre>
+ postmap -v -q "joe@dev.bitmask.net" tcp:localhost:2244
+ ...
+ postmap: dict_tcp_lookup: send: get jow@dev.bitmask.net
+ postmap: dict_tcp_lookup: recv: 200
+ ...
+</pre>
+
+* for mailalias
+
+
+<pre>
+ postmap -v -q "joe@dev.bitmask.net" tcp:localhost:4242
+ ...
+ postmap: dict_tcp_lookup: send: get joe@dev.bitmask.net
+ postmap: dict_tcp_lookup: recv: 200 f01bc1c70de7d7d80bc1ad77d987e73a
+ postmap: dict_tcp_lookup: found: f01bc1c70de7d7d80bc1ad77d987e73a
+ f01bc1c70de7d7d80bc1ad77d987e73a
+ ...
+</pre>
+
+
+Check couchdb acl as unpriviledged user
+---------------------------------------
+
+
+
+ cat /etc/leap/mx.conf # see username and password
+ echo "machine 127.0.0.1 login leap_mx password <PASSWORD>" > /etc/couchdb/couchdb-leap_mx.netrc
+ chmod 600 /etc/couchdb/couchdb-leap_mx.netrc
+
+ curl -s --netrc-file /etc/couchdb/couchdb-leap_mx.netrc -X GET "http://127.0.0.1:4096/_all_dbs" # pick one "user-<hash>" db
+ curl -s --netrc-file /etc/couchdb/couchdb-leap_mx.netrc -X GET "http://127.0.0.1:4096/user-de9c77a3d7efbc779c6c20da88e8fb9c"
+
+
+* you may check multiple times, cause 127.0.0.1:4096 is haproxy load-balancing the different couchdb nodes
+
+
+Mailspool
+---------
+
+* Any file in the leap_mx mailspool longer for a few seconds ?
+
+
+
+<pre>
+ ls -la /var/mail/vmail/Maildir/cur/
+</pre>
+
+* Any mails in postfix mailspool longer than a few seconds ?
+
+<pre>
+ mailq
+</pre>
+
+
+
+Testing mail delivery
+---------------------
+
+ swaks -f alice@example.org -t bob@example.net -s mx1.example.net --port 25
+ swaks -f varac@cdev.bitmask.net -t varac@cdev.bitmask.net -s chipmonk.cdev.bitmask.net --port 465 --tlsc
+ swaks -f alice@example.org -t bob@example.net -s mx1.example.net --port 587 --tls
+
+
+VPN
+===
+
+Places to look for errors
+-------------------------
+
+* `/var/log/syslog` (watch out for openvpn issues)
+
+
diff --git a/pages/docs/platform/tutorials/en.haml b/pages/docs/platform/tutorials/en.haml
new file mode 100644
index 0000000..1c73fc0
--- /dev/null
+++ b/pages/docs/platform/tutorials/en.haml
@@ -0,0 +1,4 @@
+- @nav_title = "Tutorials"
+- @title = "Platform Tutorials"
+
+= child_summaries \ No newline at end of file
diff --git a/pages/docs/platform/tutorials/quick-start.md b/pages/docs/platform/tutorials/quick-start.md
new file mode 100644
index 0000000..132fd32
--- /dev/null
+++ b/pages/docs/platform/tutorials/quick-start.md
@@ -0,0 +1,82 @@
+@title = 'LEAP Platform quick start tutorial'
+@nav_title = 'Quick Start'
+@summary = 'Getting leap platform up, the quick way'
+
+
+Testing Leap platform with Vagrant
+==================================
+
+There are two ways how you can setup leap platform
+using vagrant.
+
+Use the leap_cli vagrant integration
+------------------------------------
+
+Install leap_cli and leap_platform on your host,
+configure a provider from scratch and use the
+`leap local` commands to manage your vagrant node(s).
+
+See https://leap.se/en/docs/platform/development how to use
+the leap_cli vagrant integration and
+https://leap.se/en/docs/platform/tutorials/single-node how
+to setup a single node mail server.
+
+
+Using the Vagrantfile provided by Leap Platform
+-----------------------------------------------
+
+This is by far the easiest way.
+It will install a single node mail Server in the default
+configuration with one single command.
+
+Clone the 0.6.1 platform branch with
+
+ git clone -b 0.6.1 https://github.com/leapcode/leap_platform.git
+
+Start the vagrant box with
+
+ cd leap_platform
+ vagrant up
+
+Follow the instructions how to configure your `/etc/hosts`
+in order to use the provider!
+
+You can login via ssh with the systemuser `vagrant` and the same password.
+
+There are 2 users preconfigured:
+
+. `testuser` with pw `hallo123`
+. `testadmin` with pw `hallo123`
+
+Testing your provider
+=====================
+
+Using the bitmask client
+------------------------
+
+Download the provider ca:
+
+ wget --no-check-certificate https://example.org/ca.crt -O /tmp/ca.crt
+
+Start bitmask:
+
+ bitmask --ca-cert-file /tmp/ca.crt
+
+
+
+Recieving Mail
+--------------
+
+Use i.e. swaks to send a testmail
+
+ swaks -f noone@example.org -t testuser@example.org -s example.org
+
+and use your favorite mail client to examine your inbox.
+You can also use [offlineimap](http://offlineimap.org/) to fetch mails:
+
+ offlineimap -c vagrant/.offlineimaprc.example.org
+
+WARNING: Use offlineimap *only* for testing/debugging,
+because it will save the mails *decrypted* locally to
+your disk !
+
diff --git a/pages/docs/platform/tutorials/single-node-email.md b/pages/docs/platform/tutorials/single-node-email.md
new file mode 100644
index 0000000..8e7ff50
--- /dev/null
+++ b/pages/docs/platform/tutorials/single-node-email.md
@@ -0,0 +1,338 @@
+@title = 'Single node email tutorial'
+@nav_title = 'Single node email'
+@summary = 'A single node email provider.'
+
+Quick Start - Single node setup
+===============================
+
+This tutorial walks you through the initial process of creating and deploying a minimal service provider running the [LEAP platform](platform).
+We will guide you through building a single node mail provider.
+
+Our goal
+------------------
+
+We are going to create a minimal LEAP provider offering Email service. This basic setup can be expanded by adding more webapp and couchdb nodes to increase availability (performance wise, a single couchdb and a single webapp are more than enough for most usage, since they are only lightly used, but you might want redundancy). Please note: currently it is not possible to safely add additional couchdb nodes at a later point. They should all be added in the beginning, so please consider carefully if you would like more before proceeding.
+
+Our goal is something like this:
+
+ $ leap list
+ NODES SERVICES TAGS
+ node1 couchdb, mx, soledad, webapp local
+
+NOTE: You won't be able to run that `leap list` command yet, not until we actually create the node configurations.
+
+Requirements
+------------
+
+In order to complete this Quick Start, you will need a few things:
+
+* You will need one real or paravirtualized virtual machine (Vagrant, KVM, Xen, Openstack, Amazon, …) that has a basic Debian stable installed.
+* You should be able to SSH into it remotely, and know its root password, IP address and SSH host key.
+* The ability to create/modify DNS entries for your domain is preferable, but not needed. If you don't have access to DNS, you can work around this by modifying your local resolver, e.g. by editing `/etc/hosts`.
+* You need to be aware that this process will make changes to your system, so please be sure that this machine is a basic install with nothing configured or running for other purposes.
+* Your machine will need to be connected to the internet, and not behind a restrictive firewall.
+* You should work locally on your laptop/workstation (one that you trust and that is ideally full-disk encrypted) while going through this guide. This is important because the provider configurations you are creating contain sensitive data that should not reside on a remote machine. The `leap` command-line utility will log in to your servers and configure the services.
+* You should do everything described below as an unprivileged user, and only run those commands as root that are noted with *sudo* in front of them. Other than those commands, there is no need for privileged access to your machine, and in fact things may not work correctly.
+
+All the commands in this tutorial are run on your sysadmin machine. In order to complete the tutorial, the sysadmin will do the following:
+
+* Install pre-requisites
+* Install the LEAP command-line utility
+* Check out the LEAP platform
+* Create a provider and its certificates
+* Setup the provider's node and the services that will reside on it
+* Initialize the node
+* Deploy the LEAP platform to the node
+* Test that things worked correctly
+* Some additional commands
+
+We will walk you through each of these steps.
+
+
+Prepare your environment
+========================
+
+There are a few things you need to setup before you can get going. Just some packages, the LEAP cli and the platform.
+
+Install pre-requisites
+--------------------------------
+
+*Debian & Ubuntu*
+
+Install core prerequisites:
+
+ $ sudo apt-get install git ruby ruby-dev rsync openssh-client openssl rake make bzip2
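As an optional convenience (not part of the official setup), a small loop like the following reports any of the commands the tutorial relies on that are missing from your `$PATH`:

```shell
# Optional sanity check: report any required commands that are missing,
# before continuing with the tutorial.
for cmd in git ruby rsync ssh openssl rake make bzip2; do
  command -v "$cmd" > /dev/null || echo "missing: $cmd"
done
```

If this prints nothing, all the prerequisites are installed.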
+
+<!--
+*Mac OS*
+
+1. Install rubygems from https://rubygems.org/pages/download (unless the `gem` command is already installed).
+-->
+
+NOTE: leap_cli should work with ruby 1.8, but has only been tested with ruby 1.9.
+
+
+Install the LEAP command-line utility
+-------------------------------------------------
+
+Install the `leap` command from rubygems.org:
+
+ $ sudo gem install leap_cli
+
+Alternately, you can install `leap` from source:
+
+ $ git clone https://leap.se/git/leap_cli
+ $ cd leap_cli
+ $ rake build
+ $ sudo rake install
+
+You can also install from source as an unprivileged user, if you want. For example, instead of `sudo rake install` you can do something like this:
+
+ $ rake install
+    # watch out for the directory leap is installed to, then e.g.
+ $ sudo ln -s ~/.gem/ruby/1.9.1/bin/leap /usr/local/bin/leap
+
+With either `rake install` or `sudo rake install`, you can now use `/usr/local/bin/leap`, which in most cases will be in your $PATH.
+
+If you have successfully installed the `leap` command, then you should be able to do the following:
+
+ $ leap --help
+
+This will list the command-line help options. If you receive an error when doing this, please read through the README.md in the `leap_cli` source to try and resolve any problems before going forwards.
+
+Check out the platform
+--------------------------
+
+The LEAP Platform is a series of puppet recipes and modules that will be used to configure your provider. You will need a local copy of the platform that will be used to setup your nodes and manage your services. To begin with, you will not need to modify the LEAP Platform.
+
+First we'll create a directory for LEAP things, and then we'll check out the platform code and initialize the modules:
+
+ $ mkdir ~/leap
+ $ cd ~/leap
+ $ git clone --recursive https://leap.se/git/leap_platform.git
+
+
+Provider Setup
+==============
+
+A provider instance is a directory tree, usually stored in git, that contains everything you need to manage an infrastructure for a service provider. In this case, we create one for example.org and call the instance directory 'example'.
+
+ $ mkdir -p ~/leap/example
+
+Bootstrap the provider
+-----------------------
+
+Now, we will initialize this directory to make it a provider instance. Your provider instance will need to know where it can find the local copy of the git repository leap_platform, which we setup in the previous step.
+
+ $ cd ~/leap/example
+ $ leap new .
+
+NOTE: make sure you include that trailing dot!
+
+The `leap new` command will ask you for several required values:
+
+* domain: The primary domain name of your service provider. In this tutorial, we will be using "example.org".
+* name: The name of your service provider (we use "Example").
+* contact emails: A comma separated list of email addresses that should be used for important service provider contacts (for things like postmaster aliases, Tor contact emails, etc).
+* platform: The directory where you have a copy of the `leap_platform` git repository checked out.
+
+You could also have passed these configuration options on the command-line, like so:
+
+ $ leap new --contacts your@email.here --domain example.org --name Example --platform=~/leap/leap_platform .
+
+You may want to poke around and see what is in the files we just created. For example:
+
+ $ cat provider.json
+
+Optionally, commit your provider directory using the version control software you fancy. For example:
+
+ $ git init
+ $ git add .
+ $ git commit -m "initial provider commit"
+
+Now add yourself as a privileged sysadmin who will have access to deploy to servers:
+
+ $ leap add-user --self
+
+NOTE: in most cases, `leap` must be run from within a provider instance directory tree (e.g. ~/leap/example).
+
+Create provider certificates
+----------------------------
+
+Create two certificate authorities, one for server certs and one for client
+certs (note: you only need to run this one command to get both):
+
+ $ leap cert ca
+
+Create a temporary cert for your main domain (you should replace it with a real commercial cert at some point):
+
+ $ leap cert csr
+
+To see details about the keys and certs that the prior two commands created, you can use `leap inspect` like so:
+
+ $ leap inspect files/ca/ca.crt
+
+NOTE: the files `files/ca/*.key` are extremely sensitive and must be carefully protected. The other key files are much less sensitive and can simply be regenerated if needed.
+
+
+Edit provider.json configuration
+--------------------------------------
+
+There are a few required settings in provider.json. At a minimum, you must have:
+
+ {
+ "domain": "example.org",
+ "name": "Example",
+ "contacts": {
+ "default": "email1@example.org"
+ }
+ }
+
+For a full list of possible settings, you can use `leap inspect` to see how provider.json is evaluated after including the inherited defaults:
+
+ $ leap inspect provider.json
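Since provider.json is edited by hand, a JSON syntax error is an easy mistake to make. As an optional sketch (assuming `python3` is available; shown here against a sample file in `/tmp`, though you would point it at your real provider.json), you can validate the syntax before deploying:

```shell
# Write a minimal sample provider.json and check that it parses as valid
# JSON; a syntax error makes `python3 -m json.tool` exit non-zero.
cat > /tmp/provider-sample.json <<'EOF'
{
  "domain": "example.org",
  "name": "Example",
  "contacts": { "default": "email1@example.org" }
}
EOF
python3 -m json.tool /tmp/provider-sample.json > /dev/null && echo "provider.json: OK"
```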
+
+
+Setup the provider's node and services
+--------------------------------------
+
+A "node" is a server that is part of your infrastructure. Every node can have one or more services associated with it. Some nodes are "local" and used only for testing, see [Development](development) for more information.
+
+Create a node, with all the services needed for Email - "couchdb", "mx", "soledad" and "webapp":
+
+ $ leap node add node1 ip_address:x.x.x.w services:couchdb,mx,soledad,webapp tags:production
+
+NOTE: replace x.x.x.w with the actual IP address of this node
+
+This created a node configuration file in `nodes/node1.json`, but it did not do anything else. It also added the tag 'production' to this node. Tags allow us to conveniently group nodes together. When creating nodes, you should give them the tag 'production' if the node is to be used in your production infrastructure.
+
+Initialize the node
+--------------------
+
+Node initialization only needs to be done once, but there is no harm in doing it multiple times:
+
+ $ leap node init production
+
+This will initialize the node with the tag "production". When `leap node init` is run, you will be prompted to verify the fingerprint of the SSH host key and to provide the root password of the server. You should only need to do this once.
+
+
+Deploy the LEAP platform to the node
+--------------------
+
+Now you should deploy the platform recipes to the node. [Deployment can take a while to run](http://xkcd.com/303/), especially on the first run, as it needs to update the packages on the new machine.
+
+ $ leap deploy
+
+Watch the output for any errors (in red); if everything worked fine, you should now have your first running node. If you do have errors, try running the deploy again.
+
+
+Setup DNS
+---------
+
+Now that you have the node configured, you should create the DNS entries for this node.
+
+Set up your DNS with these hostnames:
+
+ $ leap list --print ip_address,domain.full,dns.aliases
+ node1 x.x.x.w, node1.example.org, example.org, api.example.org, nicknym.example.org
+
+Alternately, you can adapt this zone file snippet:
+
+ $ leap compile zone
+
+If you cannot edit your DNS zone file, you can still test your provider by adding this entry to your local resolver hosts file (`/etc/hosts` on Linux):
+
+ x.x.x.w node1.example.org example.org api.example.org nicknym.example.org
+
+Please don't forget about these entries: they will override DNS queries if you set up your DNS later.
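A cautious way to make that change is to edit a copy of the hosts file first, review the diff, and only then install it with sudo. This is just a sketch; `x.x.x.w` is the placeholder IP from above:

```shell
# Stage the new hosts entry in a copy, so the change can be reviewed
# before it is applied to the real /etc/hosts.
cp /etc/hosts /tmp/hosts.new
echo "x.x.x.w node1.example.org example.org api.example.org nicknym.example.org" >> /tmp/hosts.new
diff /etc/hosts /tmp/hosts.new || true    # shows the one added line
# When the diff looks right:
# sudo cp /tmp/hosts.new /etc/hosts
```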
+
+
+What is going on here?
+--------------------------------------------
+
+First, some background terminology:
+
+* **puppet**: Puppet is a system for automating deployment and management of servers (called nodes).
+* **hiera files**: In puppet, you can use something called a 'hiera file' to seed a node with a few configuration values. In LEAP, we go all out and put *every* configuration value needed for a node in the hiera file, and automatically compile a custom hiera file for each node.
+
+When you run `leap deploy`, a bunch of things happen, in this order:
+
+1. **Compile hiera files**: The hiera configuration file for each node is compiled in YAML format and saved in the directory `hiera`. The source material for this hiera file consists of all the JSON configuration files imported or inherited by the node's JSON config file.
+2. **Copy required files to node**: All the files needed for puppet to run are rsync'ed to each node. This includes the entire leap_platform directory, as well as the node's hiera file and other files needed by puppet to set up the node (keys, binary files, etc).
+3. **Puppet is run**: Once the node is ready, leap connects to the node via ssh and runs `puppet apply`. Puppet is applied locally on the node, without a daemon or puppetmaster.
+
+You can run `leap -v2 deploy` to see exactly what commands are being executed.
+
+<!-- See [under the hood](under-the-hood) for more details. -->
+
+
+Test that things worked correctly
+=================================
+
+You should now have one machine with the LEAP platform email service deployed to it.
+
+
+Access the web application
+--------------------------------------------
+
+In order to connect to the web application in your browser, you need to point your domain at the IP address of your new node.
+
+Next, you can connect to the web application either using a web browser or via the API using the LEAP client. To use a browser, connect to https://example.org (replacing that with your domain). Your browser will complain about an untrusted cert, but for now just bypass this. From there, you should be able to register a new user and login.
+
+Testing with leap_cli
+---------------------
+
+Use the test command to run a set of different tests:
+
+ leap test
+
+
+Additional information
+======================
+
+It is useful to know a few additional things.
+
+Useful commands
+---------------
+
+Here are a few useful commands you can run on your new local nodes:
+
+* `leap ssh web1` -- SSH into node web1 (requires `leap node init web1` first).
+* `leap list` -- list all nodes.
+* `leap list production` -- list only those nodes with the tag 'production'.
+* `leap list --print ip_address` -- list a particular attribute of all nodes.
+* `leap cert update` -- generate new certificates if needed.
+
+See the full command reference for more information.
+
+Node filters
+-------------------------------------------
+
+Many of the `leap` commands take a "node filter". You can use a node filter to target a command at one or more nodes.
+
+A node filter consists of one or more keywords, with an optional "+" before each keyword.
+
+* keywords can be a node name, a service type, or a tag.
+* the "+" before the keyword constructs an AND condition
+* otherwise, multiple keywords together construct an OR condition
+
+Examples:
+
+* `leap list openvpn` -- list all nodes with service openvpn.
+* `leap list openvpn +production` -- only nodes of service type openvpn AND tag production.
+* `leap deploy webapp openvpn` -- deploy to all webapp OR openvpn nodes.
+* `leap node init vpn1` -- just init the node named vpn1.
+
+Keep track of your provider configurations
+------------------------------------------
+
+You should commit your provider changes to your favorite VCS whenever things change. This way you can share your configurations with other admins; all they have to do is pull the changes to stay up to date. Every time you make a change to your provider, such as adding nodes, services, or generating certificates, you should add those changes to your VCS, commit them and push them to where your repository is hosted.
+
+Note that your provider directory contains secrets! Those secrets include passwords for various services. You do not want to have those passwords readable by the world, so make sure that wherever you are hosting your repository, it is not public for the world to read.
+
+What's next
+-----------------------------------
+
+Read the [LEAP platform guide](guide) to learn about planning and securing your infrastructure.
+
diff --git a/pages/docs/platform/tutorials/single-node-vpn.md b/pages/docs/platform/tutorials/single-node-vpn.md
new file mode 100644
index 0000000..0e3d486
--- /dev/null
+++ b/pages/docs/platform/tutorials/single-node-vpn.md
@@ -0,0 +1,389 @@
+@title = 'Single node VPN tutorial'
+@nav_title = 'Single node VPN'
+@summary = 'Three node OpenVPN provider.'
+
+Quick Start
+===========
+
+This tutorial walks you through the initial process of creating and deploying a minimal service provider running the [LEAP platform](platform). This Quick Start guide will guide you through building a three node OpenVPN provider.
+
+Our goal
+------------------
+
+We are going to create a minimal LEAP provider offering OpenVPN service. This basic setup can be expanded by adding more OpenVPN nodes to increase capacity, or more webapp and couchdb nodes to increase availability (performance-wise, a single couchdb and a single webapp are more than enough for most usage, since they are only lightly used, but you might want redundancy). Please note: currently it is not possible to safely add additional couchdb nodes at a later point, so they must all be added in the beginning. Please consider carefully whether you will want more before proceeding.
+
+Our goal is something like this:
+
+ $ leap list
+ NODES SERVICES TAGS
+ cheetah couchdb production
+ wildebeest webapp production
+ ostrich openvpn production
+
+NOTE: You won't be able to run that `leap list` command yet, not until we actually create the node configurations.
+
+Requirements
+------------
+
+In order to complete this Quick Start, you will need a few things:
+
+* You will need three real or paravirtualized virtual machines (KVM, Xen, Openstack, Amazon, but not Vagrant - sorry) that have a basic Debian Stable installed. If you allocate 20G of disk space to each node for the system, after this process is completed, you will have used less than 10% of that disk space. If you allocate 2 CPUs and 8G of memory to each node, that should be more than enough to begin with.
+* You should be able to SSH into them remotely, and know their root password, IP addresses and their SSH host keys
+* You will need four different IPs. Each node gets a primary IP, and the OpenVPN gateway additionally needs a gateway IP.
+* The ability to create/modify DNS entries for your domain is preferable, but not needed. If you don't have access to DNS, you can work around this by modifying your local resolver, e.g. by editing `/etc/hosts`.
+* You need to be aware that this process will make changes to your systems, so please be sure that these machines are a basic install with nothing configured or running for other purposes.
+* Your machines will need to be connected to the internet, and not behind a restrictive firewall.
+* You should work locally on your laptop/workstation (one that you trust and that is ideally full-disk encrypted) while going through this guide. This is important because the provider configurations you are creating contain sensitive data that should not reside on a remote machine. The `leap` command will login to your servers and configure the services.
+* You should do everything described below as an unprivileged user, and only run those commands as root that are noted with *sudo* in front of them. Other than those commands, there is no need for privileged access to your machine, and in fact things may not work correctly.
+
+All the commands in this tutorial are run on your sysadmin machine. In order to complete the tutorial, the sysadmin will do the following:
+
+* Install pre-requisites
+* Install the LEAP command-line utility
+* Check out the LEAP platform
+* Create a provider and its certificates
+* Setup the provider's nodes and the services that will reside on those nodes
+* Initialize the nodes
+* Deploy the LEAP platform to the nodes
+* Test that things worked correctly
+* Some additional commands
+
+We will walk you through each of these steps.
+
+
+Prepare your environment
+========================
+
+There are a few things you need to setup before you can get going. Just some packages, the LEAP cli and the platform.
+
+Install pre-requisites
+--------------------------------
+
+*Debian & Ubuntu*
+
+Install core prerequisites:
+
+ $ sudo apt-get install git ruby ruby-dev rsync openssh-client openssl rake make bzip2
+
+<!--
+*Mac OS*
+
+1. Install rubygems from https://rubygems.org/pages/download (unless the `gem` command is already installed).
+-->
+
+NOTE: leap_cli requires ruby 1.9 or later.
+
+
+Install the LEAP command-line utility
+-------------------------------------------------
+
+Install the `leap` command from rubygems.org:
+
+ $ sudo gem install leap_cli
+
+Alternately, you can install `leap` from source:
+
+ $ git clone https://leap.se/git/leap_cli
+ $ cd leap_cli
+ $ rake build
+ $ sudo rake install
+
+You can also install from source as an unprivileged user, if you want. For example, instead of `sudo rake install` you can do something like this:
+
+ $ rake install
+    # watch out for the directory leap is installed to, then e.g.
+ $ sudo ln -s ~/.gem/ruby/1.9.1/bin/leap /usr/local/bin/leap
+
+With either `rake install` or `sudo rake install`, you can now use `/usr/local/bin/leap`, which in most cases will be in your $PATH.
+
+If you have successfully installed the `leap` command, then you should be able to do the following:
+
+ $ leap --help
+
+This will list the command-line help options. If you receive an error when doing this, please read through the README.md in the `leap_cli` source to try and resolve any problems before going forwards.
+
+Check out the platform
+--------------------------
+
+The LEAP Platform is a series of puppet recipes and modules that will be used to configure your provider. You will need a local copy of the platform that will be used to setup your nodes and manage your services. To begin with, you will not need to modify the LEAP Platform.
+
+First we'll create a directory for LEAP things, and then we'll check out the platform code and initialize the modules:
+
+ $ mkdir ~/leap
+ $ cd ~/leap
+ $ git clone https://leap.se/git/leap_platform.git
+ $ cd leap_platform
+ $ git submodule sync; git submodule update --init
+
+
+Provider Setup
+==============
+
+A provider instance is a directory tree, usually stored in git, that contains everything you need to manage an infrastructure for a service provider. In this case, we create one for example.org and call the instance directory 'example'.
+
+ $ mkdir -p ~/leap/example
+
+Bootstrap the provider
+-----------------------
+
+Now, we will initialize this directory to make it a provider instance. Your provider instance will need to know where it can find the local copy of the git repository leap_platform, which we setup in the previous step.
+
+ $ cd ~/leap/example
+ $ leap new .
+
+NOTE: make sure you include that trailing dot!
+
+The `leap new` command will ask you for several required values:
+
+* domain: The primary domain name of your service provider. In this tutorial, we will be using "example.org".
+* name: The name of your service provider (we use "Example").
+* contact emails: A comma separated list of email addresses that should be used for important service provider contacts (for things like postmaster aliases, Tor contact emails, etc).
+* platform: The directory where you have a copy of the `leap_platform` git repository checked out.
+
+You could also have passed these configuration options on the command-line, like so:
+
+ $ leap new --contacts your@email.here --domain leap.example.org --name Example --platform=~/leap/leap_platform .
+
+You may want to poke around and see what is in the files we just created. For example:
+
+ $ cat provider.json
+
+Optionally, commit your provider directory using the version control software you fancy. For example:
+
+ $ git init
+ $ git add .
+ $ git commit -m "initial provider commit"
+
+Now add yourself as a privileged sysadmin who will have access to deploy to servers:
+
+ $ leap add-user --self
+
+NOTE: in most cases, `leap` must be run from within a provider instance directory tree (e.g. ~/leap/example).
+
+Create provider certificates
+----------------------------
+
+Create two certificate authorities, one for server certs and one for client
+certs (note: you only need to run this one command to get both):
+
+ $ leap cert ca
+
+Create a temporary cert for your main domain (you should replace it with a real commercial cert at some point):
+
+ $ leap cert csr
+
+To see details about the keys and certs that the prior two commands created, you can use `leap inspect` like so:
+
+ $ leap inspect files/ca/ca.crt
+
+Create the Diffie-Hellman parameters file, needed for forward secret OpenVPN ciphers:
+
+ $ leap cert dh
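For reference, the file this produces is the same kind of thing openssl generates directly. The sketch below uses 512 bits only so that it finishes quickly; a real deployment should use 2048 bits or more:

```shell
# Generate small (demo-only!) DH parameters and verify them; `leap cert dh`
# produces an equivalent file with a production-sized prime.
openssl dhparam -out /tmp/dh-example.pem 512 2> /dev/null
openssl dhparam -in /tmp/dh-example.pem -check -noout
```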
+
+NOTE: the files `files/ca/*.key` are extremely sensitive and must be carefully protected. The other key files are much less sensitive and can simply be regenerated if needed.
+
+
+Edit provider.json configuration
+--------------------------------------
+
+There are a few required settings in provider.json. At a minimum, you must have:
+
+ {
+ "domain": "example.org",
+ "name": "Example",
+ "contacts": {
+ "default": "email1@example.org"
+ }
+ }
+
+For a full list of possible settings, you can use `leap inspect` to see how provider.json is evaluated after including the inherited defaults:
+
+ $ leap inspect provider.json
+
+
+Setup the provider's nodes and services
+---------------------------------------
+
+A "node" is a server that is part of your infrastructure. Every node can have one or more services associated with it. Some nodes are "local" and used only for testing, see [Development](development) for more information.
+
+Create a node, with the service "webapp":
+
+ $ leap node add wildebeest ip_address:x.x.x.w services:webapp tags:production
+
+NOTE: replace x.x.x.w with the actual IP address of this node
+
+This created a node configuration file in `nodes/wildebeest.json`, but it did not do anything else. It also added the tag 'production' to this node. Tags allow us to conveniently group nodes together. When creating nodes, you should give them the tag 'production' if the node is to be used in your production infrastructure.
+
+The web application and the VPN nodes require a database, so let's create the database server node:
+
+ $ leap node add cheetah ip_address:x.x.x.x services:couchdb tags:production
+
+NOTE: replace x.x.x.x with the actual IP address of this node
+
+Now we need the OpenVPN gateway, so let's create that node:
+
+ $ leap node add ostrich ip_address:x.x.x.y openvpn.gateway_address:x.x.x.z services:openvpn tags:production
+
+NOTE: replace x.x.x.y with the IP address of the machine, and x.x.x.z with the second IP. OpenVPN gateways must be assigned two IP addresses: one for the host itself and one for the openvpn gateway. We do this to prevent incoming and outgoing VPN traffic on the same IP. Without this, the client might send some traffic to other VPN users in the clear, bypassing the VPN.
+
+
+Setup DNS
+---------
+
+Now that you have the nodes configured, you should create the DNS entries for these nodes.
+
+Set up your DNS with these hostnames:
+
+ $ leap list --print ip_address,domain.full,dns.aliases
+ cheetah x.x.x.w, cheetah.example.org, null
+ wildebeest x.x.x.x, wildebeest.example.org, api.example.org
+ ostrich x.x.x.y, ostrich.example.org, null
+
+Alternately, you can adapt this zone file snippet:
+
+ $ leap compile zone
+
+If you cannot edit your DNS zone file, you can still test your provider by adding entries to your local resolver hosts file (`/etc/hosts` on Linux):
+
+ x.x.x.w cheetah.example.org
+ x.x.x.x wildebeest.example.org api.example.org example.org
+ x.x.x.y ostrich.example.org
+
+Please don't forget about these entries: they will override DNS queries if you set up your DNS later.
+
+
+Initialize the nodes
+--------------------
+
+Node initialization only needs to be done once, but there is no harm in doing it multiple times:
+
+ $ leap node init production
+
+This will initialize all nodes with the tag "production". When `leap node init` is run, you will be prompted to verify the fingerprint of the SSH host key and to provide the root password of the server(s). You should only need to do this once.
+
+If you prefer, you can initialize each node, one at a time:
+
+ $ leap node init wildebeest
+ $ leap node init cheetah
+ $ leap node init ostrich
+
+Deploy the LEAP platform to the nodes
+--------------------
+
+Now you should deploy the platform recipes to the nodes. [Deployment can take a while to run](http://xkcd.com/303/), especially on the first run, as it needs to update the packages on the new machine.
+
+*Important note:* currently nodes must be deployed in a certain order: the underlying couchdb node(s) must be deployed first, and then all other nodes. Also, you need to configure and deploy all of the couchdb nodes that you plan to use at this time, since currently you cannot add more of them later ([see known issues](https://leap.se/es/docs/platform/known-issues#CouchDB.Sync)).
+
+ $ leap deploy cheetah
+
+Watch the output for any errors (in red); if everything worked fine, you should now have your first running node. If you do have errors, try running the deploy again.
+
+The LEAP web application requires the database, which is why we deployed cheetah first. Now let's deploy to the webapp and openvpn nodes:
+
+ $ leap deploy wildebeest
+ $ leap deploy ostrich
+
+
+What is going on here?
+--------------------------------------------
+
+First, some background terminology:
+
+* **puppet**: Puppet is a system for automating deployment and management of servers (called nodes).
+* **hiera files**: In puppet, you can use something called a 'hiera file' to seed a node with a few configuration values. In LEAP, we go all out and put *every* configuration value needed for a node in the hiera file, and automatically compile a custom hiera file for each node.
+
+When you run `leap deploy`, a bunch of things happen, in this order:
+
+1. **Compile hiera files**: The hiera configuration file for each node is compiled in YAML format and saved in the directory `hiera`. The source material for this hiera file consists of all the JSON configuration files imported or inherited by the node's JSON config file.
+2. **Copy required files to node**: All the files needed for puppet to run are rsync'ed to each node. This includes the entire leap_platform directory, as well as the node's hiera file and other files needed by puppet to set up the node (keys, binary files, etc).
+3. **Puppet is run**: Once the node is ready, leap connects to the node via ssh and runs `puppet apply`. Puppet is applied locally on the node, without a daemon or puppetmaster.
+
+You can run `leap -v2 deploy` to see exactly what commands are being executed.
+
+
+Test that things worked correctly
+=================================
+
+You should now have three machines with the LEAP platform deployed to them, one for the web application, one for the database and one for the OpenVPN gateway.
+
+To run troubleshooting tests:
+
+ leap test
+
+If you want to confirm for yourself that things are working, you can perform the following manual tests.
+
+### Access the web application
+
+In order to connect to the web application in your browser, you need to point your domain at the IP address of the web application node (named wildebeest in this example).
+
+There are a lot of different ways to do this, but one easy way is to modify your `/etc/hosts` file. First, find the IP address of the webapp node:
+
+ $ leap list webapp --print ip_address
+
+Then modify `/etc/hosts` like so:
+
+ x.x.x.w leap.example.org
+
+Replace 'leap.example.org' with whatever you specified as the `domain` in the `leap new` command.
+
+Next, you can connect to the web application either using a web browser or via the API using the LEAP client. To use a browser, connect to https://leap.example.org (replacing that with your domain). Your browser will complain about an untrusted cert, but for now just bypass this. From there, you should be able to register a new user and login.
+
+### Use the VPN
+
+You should be able to simply test that the OpenVPN gateway works properly by doing the following:
+
+ $ leap test init
+ $ sudo openvpn test/openvpn/production_unlimited.ovpn
+
+Or, you can use the LEAP client (called "bitmask") to connect to your new provider, create a user and then connect to the VPN.
+
+
+Additional information
+======================
+
+It is useful to know a few additional things.
+
+Useful commands
+---------------
+
+Here are a few useful commands you can run on your new local nodes:
+
+* `leap ssh wildebeest` -- SSH into node wildebeest (requires `leap node init wildebeest` first).
+* `leap list` -- list all nodes.
+* `leap list production` -- list only those nodes with the tag 'production'.
+* `leap list --print ip_address` -- list a particular attribute of all nodes.
+* `leap cert update` -- generate new certificates if needed.
+
+See the full command reference for more information.
+
+Node filters
+-------------------------------------------
+
+Many of the `leap` commands take a "node filter". You can use a node filter to target a command at one or more nodes.
+
+A node filter consists of one or more keywords, with an optional "+" before each keyword.
+
+* A keyword can be a node name, a service type, or a tag.
+* A "+" before a keyword creates an AND condition.
+* Otherwise, multiple keywords together create an OR condition.
+
+Examples:
+
+* `leap list openvpn` -- list all nodes with service openvpn.
+* `leap list openvpn +production` -- only nodes of service type openvpn AND tag production.
+* `leap deploy webapp openvpn` -- deploy to all webapp OR openvpn nodes.
+* `leap node init ostrich` -- just init the node named ostrich.
+
+Keep track of your provider configurations
+------------------------------------------
+
+You should commit your provider changes to your favorite VCS whenever things change. This way you can share your configurations with other admins; all they have to do is pull the changes to stay up to date. Every time you make a change to your provider, such as adding nodes or services or generating certificates, add the changed files to your VCS, commit them, and push them to wherever your repository is hosted.
+
+Note that your provider directory contains secrets! These secrets include passwords for various services. You do not want those passwords readable by the world, so make sure that wherever you host your repository, it is not publicly readable.
+
+What's next
+-----------------------------------
+
+Read the [LEAP platform guide](guide) to learn about planning and securing your infrastructure.
+
diff --git a/pages/docs/tech/en.haml b/pages/docs/tech/en.haml
new file mode 100644
index 0000000..c03b89b
--- /dev/null
+++ b/pages/docs/tech/en.haml
@@ -0,0 +1,5 @@
+- @title = "LEAP Technology Notes"
+- @nav_title = "Technology"
+- @summary = "Musings, notes, and background information on various technology issues that relate to LEAP."
+
+= child_summaries
\ No newline at end of file
diff --git a/pages/docs/tech/hard-problems/en.md b/pages/docs/tech/hard-problems/en.md
new file mode 100644
index 0000000..4d9598a
--- /dev/null
+++ b/pages/docs/tech/hard-problems/en.md
@@ -0,0 +1,169 @@
+@title = 'Hard problems in secure communication'
+@nav_title = 'Hard problems'
+@summary = "How LEAP addresses the difficult problems in secure communication"
+
+## The big seven
+
+If you take a survey of interesting initiatives to create more secure communication, a pattern starts to emerge: it seems that any serious attempt to build a system for secure message communication eventually comes up against the following list of seven hard problems.
+
+1. **Public key problem**: Public key validation is very difficult for users to manage, but without it you cannot have confidentiality.
+2. **Availability problem**: People want to smoothly switch devices, and restore their data if they lose a device, but this is very difficult to do securely.
+3. **Update problem**: Almost universally, software updates are done in ways that invite attacks and device compromises.
+4. **Meta-data problem**: Existing protocols are vulnerable to meta-data analysis, even though meta-data is often much more sensitive than content.
+5. **Asynchronous problem**: For encrypted communication, you must currently choose between forward secrecy or the ability to communicate asynchronously.
+6. **Group problem**: In practice, people work in groups, but public key cryptography doesn't.
+7. **Resource problem**: There are no open protocols to allow users to securely share a resource.
+
+These problems appear to be present regardless of which architectural approach you take (centralized authority, distributed peer-to-peer, or federated servers).
+
+It is possible to safely ignore many of these problems if you don't particularly care about usability or matching the features that users have grown accustomed to with contemporary methods of online communication. But if you do care about usability and features, then you are stuck with finding solutions to these problems.
+
+## Our solutions
+
+In our work, LEAP has tried to directly face down these seven problems. In some cases, we have come up with solid solutions. In other cases, we are moving forward with temporary stop-gap measures and investigating long term solutions. In two cases, we have no current plan for addressing the problems.
+
+### Public key problem
+
+The problem:
+
+> Public key validation is very difficult for users to manage, but without it you cannot have confidentiality.
+
+If proper key management is a precondition for secure communication, but it is too difficult for most users, what hope do we have?
+
+The problem of public keys breaks down into four discrete issues:
+
+* **Key discovery** is the process of obtaining the public key for a particular user identifier. Currently, there is no commonly accepted standard for mapping an identifier to a public key. For OpenPGP, many people use keyservers for this (although the keyserver infrastructure was not designed to be used in this way). A related problem is how a client can discover public key information for all the contacts in their address book or phone book without revealing this information to a third party.
+* **Key validation** is the process of ensuring that a public key really does map to a particular user identifier. This is also called the "binding problem" in computer science. Traditional methods of key validation have recently become discredited.
+* **Key availability** is the assurance that the user will have access, whenever needed, to their keys and the keys of other users. Almost every attempt to solve the key validation problem turns into a key availability problem, because once you have validated a public key, you need to make sure that this validation is available to the user on all the possible devices they might want to send or receive messages on.
+* **Key revocation** is the process of ensuring that people do not use an old public key that has been superseded by a new one.
+
+Of these problems, key validation is the most difficult and most central to proper key management. The two traditional methods of key validation are either the X.509 Certificate Authority (CA) system or the decentralized "Web of Trust" (WoT). Recently, these schemes have come under intense criticism. Repeated security lapses at many of the Certificate Authorities have revealed serious flaws in the CA system. On the other hand, in an age where we better understand the power of social network analysis and the sensitivity of the social graph, the exposure of metadata by a "Web of Trust" is no longer acceptable from a security standpoint.
+
+An alternative method of key validation is called TOFU for Trust On First Use. With TOFU, a public key is assumed to be the right key the first time it is used. TOFU can work well for long term associations and for people who are not being targeted for attack, but its security relies on the security of the discovery transport and the application's ability to retain a memory of discovered keys.
+
+TOFU can break down in many real-world situations where a user might need to generate new keys or securely communicate with a new contact. TOFU is widely used for protocols like SSH, where the user receives confirmation of key continuity each time they connect to a server. There is no such confirmation with asynchronous messaging protocols, making TOFU much less appropriate in these situations.
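
The core of TOFU is a simple pinning rule. A minimal sketch (hypothetical store; a real client must also persist the pinned keys across restarts and decide what to do on a conflict):

```python
class TofuStore:
    """Toy Trust-On-First-Use key store: remember the first key seen
    for each address, and flag any later, different key."""

    def __init__(self):
        self.pinned = {}  # address -> key fingerprint

    def check(self, address, fingerprint):
        """Returns 'first-use', 'match', or 'CONFLICT'."""
        if address not in self.pinned:
            self.pinned[address] = fingerprint  # trust on first use
            return "first-use"
        if self.pinned[address] == fingerprint:
            return "match"
        # The key changed: a legitimate new key or an attack. TOFU alone
        # cannot tell the difference -- exactly the weakness noted above.
        return "CONFLICT"

store = TofuStore()
assert store.check("alice@example.org", "AB:CD:EF") == "first-use"
assert store.check("alice@example.org", "AB:CD:EF") == "match"
assert store.check("alice@example.org", "12:34:56") == "CONFLICT"
```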
+
+Other strategies for addressing parts of the key management problem include:
+
+1. Inline Keys: Many projects plan to facilitate discovery by simply including the user's public key in every outgoing message (as an attachment, in a footer, or in a header).
+1. DNS: Keys distributed via DNSSEC, where a service provider adds a DNS entry for each user containing the user's public key or fingerprint. This places all the trust in the DNS owner.
+1. Network perspective: Validation by key endorsement (third party signatures), with audits performed via network perspective.
+1. Introductions: Discovery and validation of keys through acquaintance introduction.
+1. Mobile: Although too lengthy to manually transcribe, an app on a mobile device can be used to easily exchange keys in person (for example, via a QR code or bluetooth connection).
+1. Append-only log: There is a proposal to modify Certificate Transparency to handle user accounts, where audits are performed against append-only logs.
+1. Biometric feedback: In the specific case of voice communication, you can use recognition of the other person's voice as a means to validate the key (when used in combination with a Short Authentication String). This is how ZRTP works.
+
+For LEAP, we have developed a unique federated system called [Nicknym](/nicknym) that automatically discovers and validates public keys, allowing the user to take advantage of public key cryptography without knowing anything about keys or signatures. Nicknym uses a combination of TOFU, provider endorsement, and network perspective. There is also a new proposal very similar to Nicknym called [Nyms](https://nymsio.github.io/), which we also hope to support. Nyms adds the ability for users of non-participating service providers to register their keys.
+
+### Availability problem
+
+The problem:
+
+> People want to smoothly switch devices, and restore their data if they lose a device, but this is very difficult to do securely.
+
+Users today demand the ability to access their data on multiple devices and to have peace of mind that their data will not be lost forever if they lose a device.
+
+At LEAP, we have worked to solve the availability problem with a system we call [Soledad](/soledad) (for Synchronization of Locally Encrypted Documents Among Devices). Soledad gives the client application an encrypted, synchronized, searchable document database. All data is client encrypted, both when it is stored on the local device and when it is synced with the cloud. This is very powerful, as it allows the client developer to take advantage of a rich document database without needing to worry about how it is backed up or synchronized.
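
The encrypt-before-sync flow can be sketched as follows. This is a structural illustration only: the "cipher" is a toy SHA-256 keystream, not the vetted symmetric cryptography Soledad actually uses, and the document layout is hypothetical:

```python
import hashlib
import json

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy deterministic keystream (SHA-256 in counter mode). NOT secure;
    for illustrating the data flow only."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_doc(key: bytes, doc_id: str, doc: dict) -> bytes:
    """Encrypt on the device; only this blob is synced to the server."""
    plaintext = json.dumps(doc).encode()
    ks = keystream(key, doc_id.encode(), len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt_doc(key: bytes, doc_id: str, blob: bytes) -> dict:
    """Any of the user's devices holding the key can recover the document."""
    ks = keystream(key, doc_id.encode(), len(blob))
    return json.loads(bytes(b ^ k for b, k in zip(blob, ks)))

key = b"device-local secret, never sent to the server"
doc = {"subject": "hello", "body": "private note"}

blob = encrypt_doc(key, "doc-1", doc)          # what the server stores
assert blob != json.dumps(doc).encode()        # server never sees plaintext
assert decrypt_doc(key, "doc-1", blob) == doc  # devices can recover it
```

The point of the design is that sync and backup operate purely on ciphertext, so the rich-database features live entirely on the client.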
+
+As far as we know, there is nothing else like it, either in the free software or commercial world. However, there are several projects in a similar problem space:
+
+* [Mylar](http://css.csail.mit.edu/mylar/), for client-encrypting web application data.
+* [Crypton](https://crypton.io/), for client-encrypting web application data.
+* [Firefox Sync](https://wiki.mozilla.org/Services/Sync)
+
+Soledad tries to solve the problem of general data availability, but other initiatives have tried to tackle the more narrow problem of availability of private keys and discovered public keys. These initiatives include:
+
+* [Whiteout key sync](https://blog.whiteout.io/2014/07/07/secure-pgp-key-sync-a-proposal/)
+* Nilcat, experimental [code for cloud storage of keys](https://github.com/mettle/nilcat)
+* Ben Laurie's [proposed protocol for storing secrets in the cloud](http://www.links.org/files/nigori/nigori-protocol-01.html)
+* Phillip Hallam-Baker's [thoughts along similar lines](http://tools.ietf.org/html/draft-hallambaker-prismproof-key-00)
+
+### Update problem
+
+The problem:
+
+> Almost universally, software updates are done in ways that invite attacks and device compromises.
+
+The sad state of update security is especially troublesome because update attacks can now be purchased off the shelf by repressive regimes. The problem of software updates is particularly bad on desktop platforms. In the case of mobile and HTML5 apps, the vulnerabilities are not as dire, but the issues are also harder to fix.
+
+To address the update problem, LEAP is adopting a unique update system called Thandy from the Tor project. Thandy is complex to manage, but is very effective at preventing known update attacks.
+
+Thandy, and the related [TUF](https://updateframework.com), are designed to address the many [security vulnerabilities in existing software update systems](https://github.com/theupdateframework/tuf/blob/develop/SECURITY.md). For example, many update systems suffer from the client's inability to confirm that it has the most up-to-date copy, opening a huge vulnerability: the attacker simply waits for a security upgrade, prevents the upgrade, and launches an attack exploiting the vulnerability that should have just been fixed. Thandy/TUF provides a unique mechanism for distributing and verifying updates so that no client device will install the wrong update or miss an update without knowing it.
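
Two of those protections, metadata expiry (anti-freeze) and monotonic versions (anti-rollback), can be sketched like this. The field names are hypothetical, not the actual Thandy/TUF format:

```python
from datetime import datetime, timezone

def check_update_metadata(meta, last_seen_version, now=None):
    """Reject stale or rolled-back update metadata (TUF-style sketch)."""
    now = now or datetime.now(timezone.utc)
    if now > datetime.fromisoformat(meta["expires"]):
        # Stale metadata: the client now KNOWS it may be missing updates,
        # instead of silently trusting a frozen copy.
        raise ValueError("metadata expired -- possible freeze attack")
    if meta["version"] < last_seen_version:
        raise ValueError("version went backwards -- rollback attack")
    return meta["version"]

meta = {"version": 42, "expires": "2015-03-01T00:00:00+00:00"}
now = datetime(2015, 2, 19, tzinfo=timezone.utc)

assert check_update_metadata(meta, last_seen_version=41, now=now) == 42
```

Real TUF additionally splits signing keys across roles and signs the metadata itself; this sketch only shows why expiry and version checks close the "wait and withhold the patch" attack.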
+
+Related to the update problem is the backdoor problem: how do you know that an update does not have a backdoor added by the software developers themselves? Probably the best approach is that taken by [Gitian](https://gitian.org/), which provides a "deterministic build process to allow multiple builders to create identical binaries". We hope to adopt Gitian in the future.
+
+### Meta-data problem
+
+The problem:
+
+> Existing protocols are vulnerable to meta-data analysis, even though meta-data is often much more sensitive than content.
+
+As a short term measure, we are integrating opportunistic encrypted transport (TLS) for email and chat messages when relayed among servers. There are two important aspects to this:
+
+* Relaying servers need a solid way to discover and validate the keys of one another. For this, we are initially using DNSSEC/DANE.
+* An attacker must not be able to downgrade the encrypted transport back to cleartext. For this, we are modifying software to ensure that encrypted transport cannot later be downgraded.
+
+This approach is potentially effective against external network observers, but does not protect the meta-data from the service providers themselves. Also, it does not, by itself, protect against more advanced attacks involving timing and traffic analysis.
+
+In the medium term, LEAP plans to support direct delivery from client to server via Tor for service providers that support this. These anonymously delivered messages would be kept in a separate folder and only displayed to the user if the message signatures are valid. There is a lot of open debate as to the efficacy of using a low-latency onion routing network like Tor for something that might be better suited to a high latency mixing network. For now, Tor is useful because it exists and has a lot of traffic already. For a great discussion comparing mix networks and onion routing, see [Tom Ritter's blog post on the topic](https://ritter.vg/blog-mix_and_onion_networks.html).
+
+In the long term, we plan to adopt one of the proposed schemes for securely routing meta-data. These include:
+
+* Auto-alias-pairs: Each party auto-negotiates aliases for communicating with each other. Behind the scenes, the client then invisibly uses these aliases for subsequent communication. The advantage is that this is backward compatible with existing routing. The disadvantage is that the user's server stores a list of their aliases. As an improvement, you could add the possibility of a third party service to maintain the alias map.
+* Onion-routing-headers: A message from user A to user B is encoded so that the "to" routing information only contains the name of B's server. When B's server receives the message, it unwraps (decrypts) a supplementary header that contains the actual user "B". Like aliases, this provides no benefit if both users are on the same server. As an improvement, the message could be routed through intermediary servers.
+* Third-party-dropbox: To exchange messages, user A and user B negotiate a unique "dropbox" URL for depositing messages, potentially using a third party. To send a message, user A would post the message to the "dropbox". To receive a message, user B would regularly poll this URL to see if there are new messages.
+* Mixmaster-with-signatures: Messages are bounced through a mixmaster-like set of anonymization relays and then finally delivered to the recipient's server. The user's client only displays the message if it is encrypted, has a valid signature, and the user has previously added the sender to an 'allow list' (perhaps automatically generated from the list of validated public keys).
+* Tor: One scheme employed by Pond is to simply allow for direct delivery over Tor from the sender's device to the recipient's server. This is fairly simple, and places all the work on the existing Tor network.
+
+In all of these cases, meta-data protected routing can make abuse prevention more difficult. For this reason, it probably makes sense to allow one of these options only once both parties have already exchanged key material, in order to prevent the user from being flooded with anonymous spam.
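
To make the onion-routing-headers scheme above concrete, here is a toy sketch in which the sealed inner layer merely stands in for public-key encryption to B's server (all names and formats are hypothetical):

```python
import json

def seal_for_server(server: str, user: str, message: str) -> dict:
    """Sender side: the outer routing names only the recipient's server.
    The inner layer would really be encrypted to the server's public key;
    here it is just wrapped, to show the structure."""
    inner = {"user": user, "message": message}
    return {"to": server, "sealed": json.dumps(inner)}

def open_at_server(envelope: dict):
    """B's server unwraps the supplementary header to find the mailbox."""
    inner = json.loads(envelope["sealed"])
    return inner["user"], inner["message"]

env = seal_for_server("example.org", "bob", "hi bob")
assert env["to"] == "example.org"  # all the routing layer reveals
user, msg = open_at_server(env)
assert (user, msg) == ("bob", "hi bob")
```

With real encryption on the sealed layer, an external observer of the routing information learns only which servers are talking, not which users.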
+
+### Asynchronous problem
+
+The problem:
+
+> For encrypted communication, you must currently choose between forward secrecy or the ability to communicate asynchronously.
+
+With the pace of growth in digital storage and decryption, forward secrecy is increasingly important. Otherwise, any encrypted communication you engage in today is likely to become cleartext communication in the near future.
+
+In the case of email and chat, we have OpenPGP for email and OTR for chat: the former provides asynchronous capabilities and the latter forward secrecy, but neither supports both. We need both better security for email and the ability to send/receive offline chat messages.
+
+In the short term, we are layering forward secret transport for email and chat relay on top of traditional object encryption (OpenPGP). This approach is identical to our stop-gap approach for the meta-data problem, with the one addition that relaying servers need the ability to not simply negotiate TLS transport, but to also negotiate forward secret ciphers and to prevent a cipher downgrade.
+
+This approach is potentially effective against external network observers, but does not achieve forward secrecy from the service providers themselves.
+
+In the long term, we plan to work with other groups to create new encryption protocol standards that can be both asynchronous and forward secret:
+
+* [Triple elliptical curve Diffie-Hellman handshake](https://whispersystems.org/blog/simplifying-otr-deniability/)
+* [Forward Secrecy Extensions for OpenPGP](http://tools.ietf.org/html/draft-brown-pgp-pfs-03)
+
+The [Axolotl protocol](https://github.com/trevp/axolotl/wiki) used by both Pond and TextSecure currently has the most mature approach to asynchronous forward secrecy. This could be added as an invisible upgrade to normal email encryption when the client detects that both parties support it.
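
The core forward-secrecy trick in such protocols can be illustrated with a bare symmetric hash ratchet. This is a drastic simplification of Axolotl, which additionally mixes fresh Diffie-Hellman results into the chain:

```python
import hashlib

def advance(chain_key: bytes):
    """Derive a one-time message key and the next chain key, then the
    caller must forget the old chain key. Because SHA-256 cannot be run
    backwards, compromising today's chain key does not reveal the keys
    that protected earlier messages -- that is forward secrecy."""
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain

ck = hashlib.sha256(b"shared secret from initial handshake").digest()
k1, ck = advance(ck)  # key for message 1; old chain key is discarded
k2, ck = advance(ck)  # key for message 2

assert k1 != k2
# Given only the current ck, k1 and k2 cannot be recomputed.
```

The asynchronous half of the problem, agreeing on the initial secret without both parties being online, is what the triple Diffie-Hellman handshake linked above addresses.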
+
+### Group problem
+
+The problem:
+
+> In practice, people work in groups, but public key cryptography doesn't.
+
+We have a lot of ideas, but we don't yet have any solutions to this problem. Essentially, the question is how to use existing public key primitives to create strong cryptographic groups, where membership and permissions are based on keys and not on arbitrary server-maintained access control lists.
+
+Most of the interesting work in this area has been done by companies working on secure file backup/sync/sharing, such as Wuala and Spideroak. Unfortunately, there are not yet any good open protocols or free software packages that can handle group cryptography.
+
+At the moment, probably the best approach is the simple approach: a protocol where the client encrypts each message to each recipient individually, and has some mechanism to verify the transcript to ensure that all parties received the same messages.
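
That simple approach can be sketched as follows, with per-recipient "encryption" stood in by a placeholder and a transcript hash that members compare out of band (all structures hypothetical):

```python
import hashlib
import json

def send_to_group(message: str, members: list) -> dict:
    """Encrypt the message to each member individually (simulated), and
    attach a transcript hash binding the message to the member list."""
    transcript = hashlib.sha256(
        json.dumps({"msg": message, "to": sorted(members)}).encode()
    ).hexdigest()
    # One "ciphertext" per member; a real client would encrypt to each
    # member's public key here.
    return {m: {"ct": f"enc[{m}]({message})", "transcript": transcript}
            for m in members}

def verify_transcripts(copies: dict) -> bool:
    """Members compare transcript hashes; all copies must agree, proving
    everyone was sent the same message to the same member list."""
    return len({c["transcript"] for c in copies.values()}) == 1

copies = send_to_group("meeting at 5", ["ana", "bo", "cy"])
assert verify_transcripts(copies)
```

The transcript check is what upgrades N separate encryptions into something group-like: a server cannot show different members different messages or member lists without the hashes diverging.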
+
+There is some free software work on interesting building blocks that could be useful for group cryptography. For example:
+
+* [Proxy re-encryption](https://en.wikipedia.org/wiki/Proxy_re-encryption): This allows the server to re-encrypt to new recipients without gaining access to the cleartext. The [SELS mailing list manager](http://sels.ncsa.illinois.edu/) uses OpenPGP to implement a [clever scheme for proxy re-encryption](http://spar.isi.jhu.edu/~mgreen/proxy.pdf).
+* [Ring signatures](https://en.wikipedia.org/wiki/Ring_signature): This allows any member of a group to sign, without anyone knowing which member signed.
+
+### Resource problem
+
+The problem:
+
+> There are no open protocols to allow users to securely share a resource.
+
+For example, when using secure chat or secure federated social networking, you need some way to link to external media, such as an image, video or file, that has the same security guarantees as the message itself. Embedding this type of resource in the messages themselves is prohibitively inefficient.
+
+We don't have a proposal for how to address this problem. There are a lot of great initiatives working under the banner of read-write-web, but these do not take encryption into account. In many ways, solutions to the resource problem are dependent on solutions to the group problem.
+
+As with the group problem, most of the progress in this area has been made by people working on encrypted file sync (e.g. strategies like Lazy Revocation and Key Regression).
+
diff --git a/pages/docs/tech/hard-problems/pt.md b/pages/docs/tech/hard-problems/pt.md
new file mode 100644
index 0000000..50c0541
--- /dev/null
+++ b/pages/docs/tech/hard-problems/pt.md
@@ -0,0 +1,133 @@
+@title = 'Problemas difíceis na comunicação segura'
+@nav_title = 'Problemas difíceis'
+@summary = "Como o LEAP aborda os problemas difíceis na comunicação segura"
+
+## Os sete grandes
+
+Se você pesquisar iniciativas interessantes para a criação de formas mais seguras de comunicação, verá que surge um padrão: aparentemente toda tentativa séria de construir um sistema para transmissão de mensagens seguras eventualmente se depara com a seguinte lista de sete problemas difíceis:
+
+1. **Problema da autenticidade**: a validação de chaves públicas é muito difícil para ser gerenciada por usuários, mas sem isso não é possível obter confidencialidade.
+2. **Problema dos metadados**: os protocolos existentes são vulneráveis à análise de metadados, mesmo que os metadados muitas vezes sejam mais sensíveis do que o conteúdo da comunicação.
+3. **Problema da assincronicidade**: para estabelecer comunicação criptografada, atualmente é necessário escolher entre sigilo futuro (forward secrecy) e a habilidade de se comunicar de forma assíncrona.
+4. **Problema do grupo**: na prática, pessoas trabalham em grupos, mas a criptografia de chave pública não.
+5. **Problema dos recursos**: não existem protocolos abertos que permitam aos usuários compartilharem um recurso de forma segura.
+6. **Problema da disponibilidade**: as pessoas querem alternar suavemente entre dispositivos e restaurar seus dados se perderem um dispositivo, mas isso é bem difícil de se fazer com segurança.
+7. **Problema da atualização**: quase que universalmente, atualizações de software são feitas de maneiras que são convidativas a ataques e comprometimento de dispositivos.
+
+Tais problemas parecem estar presentes independentemente da abordagem arquitetônica escolhida (autoridade centralizada, peer-to-peer distribuído ou servidores federados).
+
+É possível ignorar muitos desses problemas se você não se importar especificamente com a usabilidade ou com o conjunto de funcionalidades com as quais os/as usuários/as se acostumaram nos métodos contemporâneos de comunicação online. Mas se você se importa com a usabilidade e recursos, então você terá que encontrar soluções para esses problemas.
+
+## Nossas soluções
+
+Em nosso trabalho, o LEAP tentou enfrentar diretamente esses sete problemas. Em alguns casos, chegamos a soluções sólidas. Noutros, estamos avançando com medidas paliativas temporárias e investigando soluções de longo prazo. Em dois casos não temos nenhum plano atual para lidar com os problemas.
+
+### O problema da autenticidade
+
+O problema:
+
+> A validação de chaves públicas é muito difícil para ser gerenciada por usuários, mas sem isso não é possível obter confidencialidade.
+
+Se a validação de chaves adequada é um pressuposto para uma comunicação segura, mas é muito difícil para a maioria dos usuários/as, que esperança temos? Desenvolvemos um sistema federado único chamado [Nicknym](/nicknym) que descobre e valida automaticamente as chaves públicas, permitindo ao usuário tirar partido de criptografia de chave pública sem saber nada sobre chaves ou assinaturas.
+
+O protocolo padrão que existe hoje para solucionar este problema chama-se [DANE](https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities). O DANE pode ser a melhor opção no longo prazo, mas atualmente é difícil de ser configurado, difícil de ser utilizado por clientes, vaza informações sobre associação para um observador da rede, e depende da confiança na zona raíz do DNS e nas zonas TLD.
+
+### Problema dos metadados
+
+O problema:
+
+> Os protocolos existentes são vulneráveis à análise de metadados, mesmo que os metadados muitas vezes sejam mais sensíveis do que o conteúdo da comunicação.
+
+Como medida de curto prazo, estamos integrando transporte criptografado oportunístico (TLS) para email e mensagens de chat ao serem retransmitidas entre servidores. Há dois aspectos importantes nisso:
+
+* Servidores repetidores (relaying servers) precisam de uma maneira sólida para descobrir e validar as chaves uns dos outros. Para isso, estamos utilizando inicialmente DNSSEC/DANE.
+* Um atacante não deve ser capaz de fazer o downgrade do transporte criptografado para texto não cifrado. Para isso, estamos modificando o software para assegurar que o transporte criptografado não possa sofrer downgrade.
+
+Tal abordagem é potencialmente eficaz contra observadores externos na rede, mas não protege os metadados dos próprios provedores de serviços. Além disso, ela não protege, por si só, contra ataques mais avançados que envolvam análise de tráfego e de tempo.
+
+No longo prazo, pretendemos adotar um dos vários esquemas distintos para roteamento seguro de metadados. Estes incluem:
+
+* Pareamento automático de pseudônimos (auto-alias-pairs): cada uma das partes autonegocia pseudônimos para se comunicarem umas com as outras. Nos bastidores, o cliente utiliza de forma invisível esses pseudônimos para a comunicação subsequente. A vantagem é que isso é compatível com o roteamento existente. A desvantagem é que o servidor do usuário/a armazena uma lista de seus pseudônimos. Como uma melhoria, pode-se adicionar a possibilidade de usar um serviço de terceiros para manter o mapa dos pseudônimos.
+* Cabeçalhos de roteamento do tipo "cebola" (onion-routing-headers): uma mensagem de um/a usuário/a para o/a usuário/a B é codificada de forma que as informações de roteamento do destinatário/a contenham apenas o nome do servidor usado por B. Quando o servidor de B recebe a mensagem, decodifica um cabeçalho adicional que contém o utilizador real "B". Como o uso de pseudônimos, isso não proporciona benefícios se os usuários estão no mesmo servidor. Como uma melhoria, a mensagem pode ser encaminhada por meio de servidores intermediários.
+* Caixa de depósito de terceiros (third-party dropbox): para trocar mensagens, o/a usuário/a A e o/a usuário/a B negociam uma URL única de uma "caixa de depósito" (dropbox) para depositar mensagens, potencialmente usando um agente intermediário. Para enviar uma mensagem, o usuário A depositaria a mensagem na caixa. Para receber uma mensagem, o usuário B acessaria regularmente esta URL para ver se há novas mensagens.
+* Misturador com assinaturas (mixmaster-with-signatures): as mensagens são enviadas através de um conjunto de repetidores anonimizadores do tipo mixmaster e ao final são entregues ao servidor do destinatário. O programa cliente do usuário apenas exibe a mensagem se ela for criptografada, tiver uma assinatura válida, e se o usuário tiver adicionado anteriormente o remetente a uma 'lista de permissões' (talvez gerada automaticamente a partir da lista de chaves públicas validadas).
+
+Para uma boa discussão comparando redes misturadoras com roteamento cebola, veja a [postagem no blog de Tom Ritter](https://ritter.vg/blog-mix_and_onion_networks.html) sobre o tema.
+
+### Problema da assincronicidade
+
+O problema:
+
+> Para estabelecer comunicação criptografada, atualmente é necessário escolher entre sigilo futuro (forward secrecy) e a habilidade de se comunicar de forma assíncrona.
+
+Com o ritmo de crescimento do armazenamento digital e da criptanálise, o sigilo futuro é cada vez mais importante. Caso contrário, qualquer comunicação criptografada que você fizer hoje possivelmente se tornará uma comunicação em texto não cifrado num futuro próximo.
+
+No caso do email e do bate-papo, existem o OpenPGP para email e o OTR para bate-papo: o primeiro fornece recursos assíncronos e o segundo fornece sigilo futuro, mas nenhum deles possui ambas as habilidades. Precisamos tanto de uma melhor segurança para email quanto da capacidade de enviar e receber mensagens de bate-papo em modo offline.
+
+No curto prazo, estamos empilhando transporte de email com sigilo futuro e relay de chat em cima de criptografia tradicional de objetos (OpenPGP). Esta abordagem é idêntica à nossa abordagem paliativa para o problema dos metadados, com o acréscimo de que os servidores repetidores precisam ter a capacidade de não apenas negociar transporte TLS mas também de negociar cifras que suportem sigilo futuro e que evitem uma precarização (downgrade) da cifra utilizada.
+
+Esta abordagem é potencialmente eficaz contra os observadores externos na rede, mas não obtém sigilo futuro dos próprios prestadores de serviço.
+
+No longo prazo, pretendemos trabalhar com outros grupos para criar novos padrões de protocolo de criptografia que podem ser tanto assíncronos quanto permitir o sigilo futuro:
+
+ * [Extensões para sigilo futuro para o OpenPGP](http://tools.ietf.org/html/draft-brown-pgp-pfs-03).
+ * [Handshake Diffie-Hellman triplo com curvas elípticas](https://whispersystems.org/blog/simplifying-otr-deniability/).
+
+### Problema do grupo
+
+O problema:
+
+> Na prática, as pessoas trabalham em grupos, mas a criptografia de chave pública não.
+
+Temos um monte de ideias, mas não temos ainda uma solução para corrigir este problema. Essencialmente, a questão é como usar primitivas de chaves públicas existentes para criar grupos criptográficos fortes, onde a adesão e as permissões são baseadas em chaves, e não em listas arbitrárias de controle de acesso mantidas no lado do servidor.
+
+A maioria dos trabalhos interessantes nesta área tem sido feitos por empresas que trabalham com backup/sincronização/compartilhamento seguro de arquivos, como Wuala e Spideroak. Infelizmente, ainda não há quaisquer protocolos abertos bons ou pacotes de software livre que possam lidar com criptografia para grupos.
+
+Neste momento, é provável que a melhor abordagem seja a abordagem simples: um protocolo no qual o cliente criptografa cada mensagem para cada destinatário individualmente, e que tenha algum mecanismo para verificação da transcrição de forma a garantir que todas as partes tenham recebido a mesma mensagem.
+
+Existem alguns trabalhos em software livre com blocos construtivos interessantes que podem ser úteis na construção da criptografia para grupos. Por exemplo:
+
+ * [Re-criptografia de proxy (proxy re-encryption)](https://en.wikipedia.org/wiki/Proxy_re-encryption): permite que o servidor cifre o conteúdo para novos beneficiários sem que tenha acesso ao texto não cifrado. O [gerenciador de lista de discussão SELS](http://sels.ncsa.illinois.edu/) usa OpenPGP para implementar um [sistema inteligente para o proxy de re-encriptação](http://spar.isi.jhu.edu/~mgreen/proxy.pdf).
+ * [Assinaturas em anel (ring signatures)](https://en.wikipedia.org/wiki/Ring_signature): permite que qualquer membro do grupo assine, sem que se possa saber qual membro fez a assinatura.
+
+### The resource problem
+
+The problem:
+
+> There are no open protocols that allow users to securely share a resource.
+
+For example, when using secure chat or a secure federated social network, you need some way to link to external media, such as an image, video, or file, with the same security guarantees as the message itself. Embedding this kind of resource in the messages themselves is prohibitively inefficient.
+
+We do not have a proposal for how to solve this problem. There are many great initiatives working under the banner of the read-write web, but they do not take encryption into account. In many ways, solutions to the resource problem depend on solutions to the group problem.
+
+As with the group problem, most of the progress in this area has been made by people working on encrypted file synchronization (for example, strategies such as Lazy Revocation and Key Regression).
+
+### The availability problem
+
+The problem:
+
+> People want to smoothly switch between devices, and to restore their data if they lose a device, but this is very hard to do securely.
+
+Today's users demand the ability to access their data on multiple devices, and expect that their data will not be lost forever if they lose a device. In the free software world, only Firefox has addressed this problem adequately and securely (with Firefox Sync).
+
+At LEAP, we have been working to solve the availability problem with a system we call [Soledad](/soledad) (an acronym for "Synchronization of Locally Encrypted Data Among Devices"). Soledad gives the client application a synchronized, searchable, encrypted document database. All data is encrypted on the client side, both when it is stored on the local device and when it is synced with the cloud. As far as we know, there is nothing else quite like it, in either the free software or the commercial world.
+
+Soledad tries to solve the generic problem of data availability, while other initiatives have tried to address the more specific problem of private keys and public key discovery. These initiatives include:
+
+* [Ben Laurie's proposed protocol for storing secrets in the cloud](http://www.links.org/files/nigori/nigori-protocol-01.html).
+* [Code for cloud key storage](https://github.com/mettle/nilcat), experimental and similar to the above.
+* [Comments from Phillip Hallam-Baker on similar issues](http://tools.ietf.org/html/draft-hallambaker-prismproof-key-00).
+
+### The update problem
+
+The problem:
+
+> Almost universally, software updates are done in ways that invite attack and device compromise.
+
+The sorry state of security updates is especially problematic because update attacks can now be purchased off the shelf by repressive regimes. The software update problem is especially bad on desktop platforms. In the case of HTML5 or mobile applications, the vulnerabilities are not as dire, but the problems are also harder to fix.
+
+To address the update problem, LEAP is adopting a unique update system called Thandy, from the Tor project. Thandy is complex to administer, but it is very effective at preventing known update attacks.
+
+Thandy, and the related [TUF](https://updateframework.com/) project, are designed to address the many [security vulnerabilities in existing software update systems](https://updateframework.com/projects/project/wiki/Docs/Security). As one example, other update systems suffer from the client's inability to confirm that it has the most up-to-date copy, opening up a huge vulnerability where the attacker simply waits for a security update, prevents the update from happening, and launches an attack exploiting the vulnerability that should have just been fixed. Thandy/TUF provide a unique mechanism for distributing and verifying updates so that no client device will install the wrong update or miss an update without knowing it.
+
+A problem related to the update problem is the backdoor problem: how do you know that an update does not have a backdoor added by the software's own developers? Probably the best approach is the one taken by [Gitian](https://gitian.org/), which provides a "deterministic build process to allow multiple builders to create identical binaries". We intend to adopt Gitian in the future.
diff --git a/pages/docs/tech/infosec/_table-style.haml b/pages/docs/tech/infosec/_table-style.haml
new file mode 100644
index 0000000..c9a3495
--- /dev/null
+++ b/pages/docs/tech/infosec/_table-style.haml
@@ -0,0 +1,59 @@
+%style
+ :sass
+ table.properties
+ td
+ vertical-align: top
+ padding-bottom: 0.75em
+ th
+ vertical-align: top
+ padding-right: 1em
+ text-align: right
+ font-weight: normal
+ font-style: italic
+
+ table.infosec
+ width: 100%
+ border-collapse: collapse
+ tbody
+ border-top: 1px solid black
+ border-bottom: 1px solid black
+ th span.normal
+ font-weight: normal
+ th.first, th.second, th.cell
+ width: 14.285714286%
+ th.spacer, td.spacer
+ width: 1px !important
+ padding: 0 !important
+ background: black
+ // border: 1px dotted black
+ //border: 0 !important
+ //th.second
+ // width: 0%
+ //th.cell
+ // width: 20%
+ td.cell
+ border-top: 1px solid black
+ border-bottom: 1px solid black
+ //border-right: 1px dotted rgba(0,0,0,.1)
+ border-left: 1px dotted rgba(0,0,0,.1)
+ text-align: center
+ padding: 4px
+ &.none
+ //background-color: #ccc
+ background: #888
+ &.low, &.lower, &.worse
+ //background-color: #FFCCCC
+ background: #aaa
+ &.medium, &.higher
+ //background-color: #FFFFCC
+ background: #ccc
+ &.high, &.better
+ //background-color: #CCFFCC
+ background: #e6e6e6
+ &.better, &.worse
+ font-weight: bold
+ tr.footer td
+ border-left: 1px dotted rgba(0,0,0,.1)
+ text-align: center
+ font-size: 0.8em
+
diff --git a/pages/docs/tech/infosec/_table.haml b/pages/docs/tech/infosec/_table.haml
new file mode 100644
index 0000000..6981dac
--- /dev/null
+++ b/pages/docs/tech/infosec/_table.haml
@@ -0,0 +1,233 @@
+:ruby
+ table_type = locals[:table_type]
+ if table_type == :small
+ ##
+ ## SMALL TABLE
+ ##
+ columns = [:p2p, :ssilo, :sfed]
+ column_data = {
+ :ssilo => [:silo, :encrypted],
+ :sfed => [:federation, :encrypted],
+ :p2p => [:peer_to_peer, :encrypted]
+ }
+ rows = [:availability, :usability, :compatibility, :authenticity, :control, :anonymity]
+ row_groups = []
+ footer = false
+ cells = {
+ :ssilo => {
+ :control => [:lower],
+ :compatibility => [:lower],
+ :usability => [:higher],
+ :authenticity => [:lower],
+ :availability => [:higher],
+ :anonymity => [:lower]
+ },
+ :sfed => {
+ :control => [:higher],
+ :compatibility => [:higher],
+ :usability => [:lower],
+ :authenticity => [:higher],
+ :availability => [:lower],
+ :anonymity => [:lower]
+ },
+ :p2p => {
+ :control => [:higher],
+ :compatibility => [:lower],
+ :usability => [:lower],
+ :authenticity => [:higher],
+ :availability => [:lower],
+ :anonymity => [:higher]
+ }
+ }
+ elsif table_type == :big
+ ##
+ ## BIG TABLE
+ ##
+ columns = [:silo, :fed, :ssilo, :sfed, :p2p]
+ column_data = {
+ :silo => [:silo, :cleartext, :silo_example],
+ :fed => [:federation, :cleartext, :fed_example],
+ :ssilo => [:silo, :encrypted, :ssilo_example],
+ :sfed => [:federation, :encrypted, :sfed_example],
+ :p2p => [:peer_to_peer, :encrypted, :p2p_example],
+ :spacer => [:spacer, :spacer, :spacer]
+ }
+ rows = [
+ :control, :compatibility, :usability,
+ :anonymity, :unmappability, :authenticity,
+ :availability, :confidentiality, :integrity
+ ]
+ row_groups = [:message_security, :identity_security, :user_freedom]
+ row_groups_data = {
+ :user_freedom => [:control, :compatibility, :usability],
+ :identity_security => [:authenticity, :anonymity, :unmappability],
+ :message_security => [:confidentiality, :integrity, :availability]
+ }
+ footer = true
+ cells = {
+ :silo => {
+ :control => [:none],
+ :compatibility => [:none],
+ :usability => [:high],
+ :anonymity => [:none],
+ :unmappability => [:none],
+ :authenticity => [:none],
+ :availability => [:high],
+ :confidentiality => [:none],
+ :integrity => [:none]
+ },
+ :fed => {
+ :control => [:medium],
+ :compatibility => [:high],
+ :usability => [:medium],
+ :anonymity => [:none],
+ :unmappability => [:none],
+ :authenticity => [:none],
+ :availability => [:medium],
+ :confidentiality => [:none],
+ :integrity => [:none]
+ },
+ :ssilo => {
+ :control => [:none],
+ :compatibility => [:none],
+ :usability => [:high],
+ :anonymity => [:low],
+ :unmappability => [:none],
+ :authenticity => [:none],
+ :availability => [:high],
+ :confidentiality => [:high],
+ :integrity => [:high]
+ },
+ :sfed => {
+ :control => [:medium],
+ :compatibility => [:medium],
+ :usability => [:low],
+ :anonymity => [:low],
+ :unmappability => [:none],
+ :authenticity => [:low],
+ :availability => [:medium],
+ :confidentiality => [:high],
+ :integrity => [:high]
+ },
+ :p2p => {
+ :control => [:high],
+ :compatibility => [:none],
+ :usability => [:low],
+ :anonymity => [:medium],
+ :unmappability => [:medium],
+ :authenticity => [:low],
+ :availability => [:low],
+ :confidentiality => [:high],
+ :integrity => [:high]
+ },
+ :spacer => {
+ :control => [:spacer],
+ :compatibility => [:spacer],
+ :usability => [:spacer],
+ :anonymity => [:spacer],
+ :unmappability => [:spacer],
+ :authenticity => [:spacer],
+ :availability => [:spacer],
+ :confidentiality => [:spacer],
+ :integrity => [:spacer]
+ }
+ }
+ elsif table_type == :leap
+ ##
+ ## LEAP TABLE
+ ##
+ columns = [:fed, :sfed, :leap]
+ column_data = {
+ :ssilo => [:silo, :encrypted],
+ :sfed => [:federation, :encrypted],
+ :p2p => [:peer_to_peer, :encrypted],
+ :fed => [:federation, :cleartext],
+ :leap => [:leap, :encrypted]
+ }
+ rows = [
+ :control, :compatibility, :usability,
+ :anonymity, :unmappability, :authenticity,
+ :availability, :confidentiality, :integrity
+ ]
+ row_groups = [:message_security, :identity_security, :user_freedom]
+ row_groups_data = {
+ :user_freedom => [:control, :compatibility, :usability],
+ :identity_security => [:authenticity, :anonymity, :unmappability],
+ :message_security => [:confidentiality, :integrity, :availability]
+ }
+ footer = false
+ cells = {
+ :fed => {
+ :control => [:medium],
+ :compatibility => [:high],
+ :usability => [:medium],
+ :anonymity => [:none],
+ :unmappability => [:none],
+ :authenticity => [:none],
+ :availability => [:medium],
+ :confidentiality => [:none],
+ :integrity => [:none]
+ },
+ :sfed => {
+ :control => [:medium],
+ :compatibility => [:medium],
+ :usability => [:low],
+ :anonymity => [:low],
+ :unmappability => [:none],
+ :authenticity => [:low],
+ :availability => [:medium],
+ :confidentiality => [:high],
+ :integrity => [:high]
+ },
+ :leap => {
+ :control => [:medium],
+ :compatibility => [:worse],
+ :usability => [:better],
+ :anonymity => [:low],
+ :unmappability => [:better],
+ :authenticity => [:better],
+ :availability => [:medium],
+ :confidentiality => [:high],
+ :integrity => [:high]
+ }
+ }
+ end
+
+%table.infosec
+ %tr
+ %th.first
+ - if row_groups.any?
+ %th.second
+ - columns.each do |column|
+ - if column == :spacer
+ %th.spacer
+ - else
+ %th.cell
+ = I18n.t(column_data[column][0], :scope => 'infosec')
+ %br<>
+ %span.normal
+ = I18n.t(column_data[column][1], :scope => 'infosec')
+ - if row_groups.any?
+ - row_groups.each do |row_group|
+ %tbody
+ - rows = row_groups_data[row_group]
+ - rows.each do |row|
+ %tr
+ - if rows.first == row
+ %td{:rowspan=>3}= I18n.t(row_group, :scope => 'infosec').sub(' ', '<br/>')
+ %td= I18n.t(row, :scope => 'infosec')
+ - columns.each do |column|
+ %td.cell{:class => cells[column][row]}= I18n.t(cells[column][row].first, :scope => 'infosec')
+ - else
+ - rows.each do |row|
+ %tbody
+ %tr
+ %td= I18n.t(row, :scope => 'infosec')
+ - columns.each do |column|
+ %td.cell{:class => cells[column][row]}= I18n.t(cells[column][row].first, :scope => 'infosec')
+ - if footer
+ %tr.footer
+ %td{:colspan=>2}= I18n.t(:for_example, :scope => 'infosec')
+ - columns.each do |column|
+ %td= I18n.t(column_data[column][2], :scope => 'infosec')
+
diff --git a/pages/docs/tech/infosec/en.haml b/pages/docs/tech/infosec/en.haml
new file mode 100644
index 0000000..6b042ec
--- /dev/null
+++ b/pages/docs/tech/infosec/en.haml
@@ -0,0 +1,105 @@
+- @title = "Architecture comparison"
+- @nav_title = "Architecture comparison"
+- @summary = "A comparison of the trade-offs made by different communication architectures"
+
+= render :partial => 'table-style'
+
+%h1.first You can't have it all
+
+%p Every messaging architecture makes certain design choices that privilege one property of information security over another. Although there is no intrinsically necessary trade-off among different information security properties, when we examine the technical limitations of actual implementations we see clearly that existing architectures are structurally biased toward certain properties and against others.
+
+%h1 A fancy table
+
+%p This table provides a rough comparison of the choices made by common messaging architectures. See #{link 'below for details' => '#table-notes'} regarding the column and row headings.
+
+.p
+ %b Table 1. Information security of common messaging architectures
+ = render partial: 'table', locals: {table_type: :big}
+
+%p Reasonable people may disagree: this table represents one defensible assessment of the various architecture categories. Many people would adjust one or two cells, but on the whole we believe this table is a fair and accurate comparison. Some squares get low marks because of user error. For example, peer-to-peer systems have a hard time with user friendly keys, leading to high user error and low effective authenticity.
+
+%p In table 2 we see a simplified representation that highlights the relative differences between the encrypted architectures:
+
+.p
+ %b Table 2. Relative trade-offs of encrypted messaging architectures
+ = render partial: 'table', locals: {table_type: :small}
+
+%p Relatively better is not necessarily good. For example, federated and peer-to-peer models have better authenticity than silo models, but still in practice have many authenticity problems.
+
+%h1 The LEAP strategy
+
+%p In a nutshell, the LEAP strategy is this: take a federated architecture and improve the authenticity, unmappability, and usability. In table form, that looks like this:
+
+.p
+ %b Table 3. The LEAP strategy for improving federated architectures
+ = render partial: 'table', locals: {table_type: :leap}
+
+%p Why this focus on authenticity, unmappability, and usability?
+
+%p First, there is a lot of room for improvement. We believe that there is actually no particular structural reason why these properties are so low in existing federated encrypted architectures.
+
+%p Second, these properties are extremely important and yet are typically given low priority or are ignored completely.
+
+%ul
+ %li
+ %b Authenticity:
+    Message security rests entirely on a foundation of authenticity. Without proper validation of encryption keys, you cannot be assured of confidentiality or integrity. Unfortunately, current systems of establishing message authenticity are so difficult to use that most users simply ignore this step. LEAP will address these problems with a system of #{link 'strong and automatic identity validation' => 'nicknym'}.
+ %li
+ %b Usability:
+    There are plenty of high security tools that are nearly impossible for the common user to use correctly. If the tool is too difficult, it will not be widely adopted and will likely be used incorrectly by those who do adopt it. LEAP will address these problems with the #{link 'LEAP client' => 'client'} that is tightly coupled with the server-side software and is autoconfiguring.
+ %li
+ %b Unmappability:
+    Recent advances in social network analysis, and the greatly expanded ability of state and corporate actors to gather social graph information, have made unmappability an urgent requirement for any architecture that seeks to address the surveillance situation we face today. LEAP will address these problems with our proposal for #{link 'graph resistant routing' => 'routing'}.
+
+%p Improvement in these areas will come at a price. Although LEAP communication tools will be backward compatible with existing federated standards, a user of the LEAP platform will not have the same degree of choice in client software and provider as does a user of a traditional federated system. Our goal is to actively help providers adopt the LEAP platform, in order to give users more options in the long run.
+
+%h1#table-notes Decoding the table
+
+%h2 Communication architectures (columns)
+
+(to be written)
+
+%h2 Aspects of information security (rows)
+
+%p Classical information security consists of a trio of properties: confidentiality, integrity, availability. To this list, others have added authenticity, control, and anonymity (among many others).
+
+%p For our purposes here, we also add usability, compatibility, and unmappability. What do all these mean? Let's use the example of a single message, and group these nine properties in three categories:
+
+%h3 Message Security
+
+%table.properties
+ %tr
+ %th Confidentiality
+    %td A message has high confidentiality if only the intended recipients are able to read the message.
+ %tr
+ %th Integrity
+ %td A message has high integrity if the recipient is assured the message has not been altered.
+ %tr
+ %th Availability
+ %td A message has high availability if the user is able to get to the message when they so desire.
+
+%h3 Identity Security
+
+%table.properties
+ %tr
+ %th Authenticity
+ %td A message has high authenticity if the recipient is certain who sent the message and the sender is certain who received it.
+ %tr
+ %th Anonymity
+ %td A message has high anonymity if the identity of the sender cannot be established by examining the message or the pattern of message delivery.
+ %tr
+ %th Unmappability
+    %td A message has high unmappability if the social network that you communicate with cannot be easily discovered. Unmappability is often collapsed under anonymity. This is unfortunate. It is true that anonymity is one of the issues at stake with social network mapping, but it is just one of many. Because of recent advances in social network analysis and the ability to gather social graph information, we feel that unmappability deserves to be highlighted on its own.
+
+%h3 User Freedom
+
+%table.properties
+ %tr
+ %th Control
+ %td If a user is in possession of their own data and can do with it exactly what they want (and others cannot use the data in ways contrary to the wishes of the user), then we say that they have high control.
+ %tr
+ %th Usability
+ %td For a communication system to have high usability, the common user must be able to operate the system in a way that does not compromise their security.
+ %tr
+ %th Compatibility
+ %td For a system to have high compatibility, the user must not be locked into a particular provider or technology, but should have competing and compatible options available to them. In other words, a user's data should be portable and they should have a choice of clients.
diff --git a/pages/docs/tech/limitations.md b/pages/docs/tech/limitations.md
new file mode 100644
index 0000000..77592f3
--- /dev/null
+++ b/pages/docs/tech/limitations.md
@@ -0,0 +1,123 @@
+@title = 'Known Limitations'
+@toc = true
+@summary = 'Known limitations, issues, and security problems with the LEAP platform'
+
+Herein lie the known limitations, issues, and security problems of the LEAP platform and the Bitmask client application.
+
+Provider problems
+==========================================
+
+Meta-data can be recorded by the provider
+-------------------------------------------------
+
+Currently, the service provider is able to observe the meta-data routing information of its own users' messages (email and chat) while they are in transit. This information is not stored, but a nefarious provider could observe it in transit and record it.
+
+We have several [plans to eliminate this](/routing), but these are not part of the initial release.
+
+Incoming cleartext email can be recorded by the provider
+---------------------------------------------------------------------
+
+Currently, if an incoming email is not already encrypted, the provider encrypts the email to the recipient's public key.
+
+Potentially, a compromised or nefarious service provider could alter the LEAP software to keep a copy of these cleartext emails. Over time, as more people send encrypted email, this will become less of an issue. Providers will simply see fewer and fewer incoming cleartext emails.
+
+The provider can undermine the security of the web application
+-------------------------------------------------------------------------
+
+Both the client and the web application use something called SRP (Secure Remote Password) to prevent the server from ever seeing a cleartext copy of the password. This is in contrast to normal password systems, where the password is hashed on the server, so the server could record a copy of the password when it is initially set.
+
+However, because all the javascript cryptographic libraries used by the user's web browser to perform the SRP negotiation are loaded from the provider's server, a nefarious or compromised provider could give the user's browser bad libraries that secretly send a cleartext copy of the user's password.
+
+There are three methods that can be used to prevent this:
+
+* We could offer the option of first visiting a third party website that loads the authentication libraries before redirecting to the provider's website. Unfortunately, this user experience is a bit awkward.
+* We could allow providers the option of not allowing authentication or signup through the webapp. Instead, the client could authenticate with the provider's session API, receiving a session token, and then pass this token to the web browser.
+* Currently, the web application is needed for email settings, help tickets and billing, but potentially these functions could be rolled into the Bitmask client.
+
+It is not either/or: we could support a combination of these options.
+
+The details are a bit more tricky than these simple descriptions suggest, because of the need to work around the single-origin policy, but it is still entirely possible to do this securely (using either CORS or postMessage).
+
+Still possible to brute force the password verifier
+-----------------------------------------------------------------
+
+SRP (Secure Remote Password) does not store a hash of the password in the provider's user database, but instead a password verifier. However, if an attacker gains access to the database of password verifiers (plus salts) they can perform a brute force attack similar to a normal attack on a database of password hashes. The attack in the case of SRP is more difficult, since there is much more cryptographic work involved, but it is still possible.
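To make the attack concrete, here is a minimal Python sketch of the verifier construction and the offline guessing it permits. The parameters are toy values chosen for readability, not a real SRP group, and SHA-256 is used in place of the SHA-1 of the standard construction; this is an illustration, not LEAP's actual implementation.

```python
import hashlib

# Toy modulus so the arithmetic is visible -- real SRP deployments use the
# large safe-prime groups defined in RFC 5054. The verifier construction
# itself follows RFC 5054: x = H(salt | H(user ":" pass)), v = g^x mod N.
N = 2**127 - 1   # toy modulus, NOT a real SRP group
g = 2

def srp_verifier(username: str, password: str, salt: bytes) -> int:
    inner = hashlib.sha256((username + ":" + password).encode()).digest()
    x = int.from_bytes(hashlib.sha256(salt + inner).digest(), "big")
    return pow(g, x, N)

# An attacker who steals (salt, verifier) can test candidate passwords
# offline -- each guess costs one hash plus one modular exponentiation:
salt = b"\x00" * 16
stolen_verifier = srp_verifier("alice", "hunter2", salt)
for guess in ["123456", "password", "hunter2"]:
    if srp_verifier("alice", guess, salt) == stolen_verifier:
        print("cracked:", guess)
        break
```

The extra modular exponentiation per guess is what makes this attack slower than cracking a plain hash database, but as the sketch shows, nothing prevents it.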
+
+To mitigate exposure of the password verifiers, we plan to separate them out into a separate database with only a separate single-purpose authentication API daemon granted access. Additionally, we hope to offer a password-less option that uses OpenPGP smart cards.
+
+The provider can observe VPN traffic
+--------------------------------------------------
+
+The "Encrypted Internet" feature of LEAP currently works by using an OpenVPN secure tunnel to proxy all network traffic to a gateway operated by the provider. Once the traffic exits this gateway, it is cleartext (unless otherwise encrypted on the client, e.g. HTTPS) and could be observed and recorded by the provider, or by any network observer able to monitor all traffic into and out of the gateway.
+
+This limitation is mitigated by having the LEAP client authenticate with the VPN gateway using semi-anonymous or anonymous client certificates. A nefarious or compromised provider could attempt to record the moment that a user fetches a new client certificate, and record the IP address or authentication credentials of the user at that time.
+
+In the future, we plan to remove this vulnerability in two ways:
+
+* Allow the client to fetch new client certificates using blind signatures, so that there is no way for the provider to associate a user with a certificate, while still ensuring that only valid users get client certificates.
+* Use Tor as an alternate and optional transport for the "Encrypted Internet". From the standpoint of the user, it would work the same (perhaps using a tun-based SOCKS proxy and dnscrypt-proxy). This option would be slower and would not support UDP traffic, but it would be much more secure. The Tor project prefers that every application that uses Tor be specifically designed for Tor so that it does not leak information in other ways. Using Tor as a default route, like we do with OpenVPN, would violate this, but would be more user friendly.
+
+Device problems
+==================================
+
+A compromised device is a sad device
+----------------------------------------------
+
+The LEAP client tries to minimize attacks related to physical access to the device. In particular, we try to be very resistant to offline attacks, where an attacker has captured the user's device while the LEAP client does not have an open session. For example, locally stored data is kept in an encrypted database that is only unlocked when the user authenticates with the application.
+
+However, if an attacker gains access to the device, and then the device is returned to the user, they can do all kinds of nasty things, like install a keylogger that captures every keystroke.
+
+This vulnerability is true of all software, not just LEAP, but it is worth noting.
+
+Mail clients cache data in cleartext
+--------------------------------------------------
+
+Currently, LEAP relies on the use of a standard email client like Thunderbird, Apple Mail, or Outlook. Although all LEAP data is stored encrypted on the user's device, these mail clients cache and index email data in the clear on their own.
+
+To fix this problem, we have two plans:
+
+* Write plugins for Thunderbird, Apple Mail, and Outlook to make integration with the Bitmask client easier and to automatically configure the email client not to cache or index email.
+* Distribute a custom email client with the Bitmask application, perhaps based on mailpile.
+
+The Bitmask application provides a client-encrypted searchable database, so it should be possible to get the same functionality provided by the indexing done by the existing mail clients.
+
+User problems
+=================================
+
+Passwords are never going to be very good
+---------------------------------------------------
+
+LEAP relies on the user's password to unlock access to the user's client encrypted data storage. It does this the right way, using a solid KDF, but many users choose passwords that are weak, offering marginal security if an attacker gains offline access to the user's encrypted storage (for example, if they obtain the device).
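As an illustration of the kind of derivation involved, here is a sketch using PBKDF2 from Python's standard library. The helper name and the parameters are hypothetical; the actual KDF and settings used by the client may differ.

```python
import hashlib, os

# Hypothetical helper: derive the key that unlocks the client's encrypted
# storage from the user's passphrase. PBKDF2 is shown because it is in the
# standard library; the KDF actually used by LEAP may differ.
def derive_storage_key(passphrase: str, salt: bytes,
                       iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)     # stored alongside the encrypted database
key = derive_storage_key("correct horse battery staple", salt)
assert len(key) == 32     # 256-bit key for the local database
```

Note that the iteration count only slows an offline attacker down; it cannot rescue a genuinely weak passphrase, which is the point of the paragraph above.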
+
+In the future, we hope to add support for OpenPGP smart cards in order to overcome many of the problems associated with passwords.
+
+Design problems
+============================================
+
+Enumeration of usernames
+-----------------------------
+
+The system LEAP uses to validate the public keys of users is inherently vulnerable to an attacker enumerating usernames. Because requests for public keys may be proxied through other providers, there is no good method of preventing an attacker from launching many queries for public keys and eventually mapping most of the usernames.
+
+This is unfortunate, but it is also a problem with all other such systems of key discovery and validation (e.g. DANE). For now, we consider this to be an acceptable compromise.
+
+Much trust is placed in LEAP
+-------------------------------------------
+
+In order to shield the service provider from being pressured by a host government or criminal organization to add a backdoor into the client, the model with the LEAP platform is that the client is normally downloaded from the leap.se website and subsequent updates are signed by LEAP developers.
+
+This is good for the provider, but not so good for LEAP, since this system could potentially place pressure on LEAP. Because LEAP does not have a provider-customer relationship with any user, LEAP cannot target a compromised application at particular users. LEAP could, however, introduce a backdoor into the client used by all users.
+
+To prevent this, we plan to adopt [Gitian](https://gitian.org/) or something equivalent. Gitian allows for a way to standardize the entire build environment and build process in order for third parties to be able to verify that the released binary application does indeed match the correct source code.
+
+External authority problems
+=================================================
+
+Certificate authorities considered dangerous
+---------------------------------------------------
+
+The long term goal with LEAP is to entirely rid ourselves of reliance on the x.509 certificate authority system. However, there are a few places where the platform still relies on it:
+
+* When the client first validates a new provider, it will assume the provider's TLS connection is valid if presented with a server certificate signed by a commercial CA recognized by the operating system. Subsequent connections to the provider's API use pinned certificates.
+* When a nicknym agent discovers new public keys for users, it uses a TLS connection validated by a commercial CA recognized by the operating system. In the future, nicknym responses will also be signed, eliminating some of the vulnerability.
+* Currently, the web application does not get deployed with any other TLS validation than the standard commercial CA method. Eventually, we plan to support [DNS-based Authentication of Named Entities (DANE)](https://datatracker.ietf.org/wg/dane/), [Trust Assertions for Certificate Keys (TACK)](http://tack.io/), [Public Key Pinning Extension for HTTP](https://datatracker.ietf.org/doc/draft-ietf-websec-key-pinning/), or [Certificate Transparency](http://www.certificate-transparency.org/) (whatever gets the most traction).
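The certificate pinning mentioned in the first bullet amounts to a fingerprint comparison, sketched below. `matches_pin()` is a hypothetical helper, not the Bitmask client's actual code.

```python
import hashlib

# Hypothetical pinning check: on first validated contact the client records
# the SHA-256 fingerprint of the provider's API certificate, and on later
# connections refuses any certificate whose fingerprint differs.
def matches_pin(der_cert: bytes, pinned_fingerprint: str) -> bool:
    return hashlib.sha256(der_cert).hexdigest() == pinned_fingerprint

# In real use der_cert would come from the TLS handshake, e.g. via
# ssl_socket.getpeercert(binary_form=True).
der_cert = b"placeholder DER bytes"
pin = hashlib.sha256(der_cert).hexdigest()   # recorded on first contact
assert matches_pin(der_cert, pin)
```

Once pinned, the commercial CA system only matters for the very first contact with a provider, which is the trust-on-first-use trade-off described above.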
diff --git a/pages/docs/tech/routing.md b/pages/docs/tech/routing.md
new file mode 100644
index 0000000..46af93f
--- /dev/null
+++ b/pages/docs/tech/routing.md
@@ -0,0 +1,65 @@
+@title = "Graph Resistant Routing"
+@summary = "LEAP's plans for protecting routing meta-data"
+
+# A social graph is highly sensitive data
+
+As messages are sent and delivered, they contain "meta-data" describing where these messages should be routed. With existing protocols for email and chat, this meta-data is sent in the clear and can be used to build a social graph of how people interact.
+
+As the field of network analysis has advanced in recent years, the social graph has become highly sensitive and critical information. Knowledge of the social graph can give an attacker a blueprint into the inner workings of an organization or reveal surprisingly intimate personal details, such as sexual orientation or even health status.
+
+In the short term, LEAP is opportunistically encrypting the message transport whenever possible. This protects the meta-data routing information from an external observer, but does not protect against a nefarious or compromised service provider. This page is about our plans for a better way.
+
+# Possible solutions
+
+There are five strategies we might employ to add protection of routing information to email and chat:
+
+* Auto-alias-pairs: Each party auto-negotiates aliases for communicating with each other. Behind the scenes, the client then invisibly uses these aliases for subsequent communication. The advantage is that this is backward compatible with existing routing. The disadvantage is that the user's server stores a list of their aliases. As an improvement, you could add the possibility of a third party service to maintain the alias map.
+* Onion-routing-headers: A message from user A to user B is encoded so that the "to" routing information only contains the name of B's server. When B's server receives the message, it unwraps (decrypts) a supplementary header that contains the actual user "B". Like aliases, this provides no benefit if both users are on the same server. As an improvement, the message could be routed through intermediary servers.
+* Third-party-dropbox: To exchange messages, user A and user B negotiate a unique "dropbox" URL for depositing messages, potentially using a third party. To send a message, user A would post the message to the "dropbox". To receive a message, user B would regularly poll this URL to see if there are new messages.
+* Mixmaster-with-signatures: Messages are bounced through a mixmaster-like set of anonymization relays and then finally delivered to the recipient's server. The user's client only displays the message if it is encrypted, has a valid signature, and the user has previously added the sender to an 'allow list' (perhaps automatically generated from the list of validated public keys).
+* Direct-delivery: Instead of relaying messages from client to server to server to client, in this model the sender's client delivers the message directly to the recipient's server. This delivery would need to happen over an anonymization network akin to Tor, or through proxies set up for this purpose. In order to prevent spam, the recipient server would only accept messages delivered in this manner if the message was signed using a group signature (this ensures that the server doesn't know who the sender is, but can confirm that they are allowed to deliver to a particular user). This would require advanced confirmation on the part of both users that they may send messages to one another. This is how Pond works.
+
+
+None of these are currently used for email or chat.
+
+# Auto alias pairs
+
+How would auto alias pairs work? Imagine users Alice and Bob. Alice wants to correspond with Bob but doesn't want either her service provider or Bob's service provider to be able to record the association. She also doesn't want a network observer to be able to map the association.
+
+When Alice first sends a message to Bob, Alice's client will initiate the following chain of events on her behalf, automatically and behind the scenes.
+
+* Alice's client requests a new alias from her service provider. If her normal address is alice@domain.org, she receives an alias hjj3fpwv84fn@domain.org that will forward messages to her account.
+* Alice's client uses automatic key discovery and validation to find Bob's public key and discover if Bob's service provider supports map resistant routing.
+* If Bob does support it, Alice's client will then send a special message, encrypted to Bob's key, that contains Alice's public address and her private alias.
+* When Bob's client encounters this special message, it records a mapping between Alice's public address (alice@domain.org) and the private alias she has created for Bob's use (hjj3fpwv84fn@domain.org).
+* Bob's client then creates an alias for Bob and sends it to Alice.
+* Alice's client receives this alias, and records the mapping between Bob's public address and his private alias.
+* Alice's client then relays the original message to Bob's alias.
+
+Subsequently, whenever Alice or Bob wants to communicate, they use the normal public addresses for one another, but behind the scenes their clients will rewrite the source and recipient of the messages to use the private aliases.
+
+This scheme is backwards compatible with existing messaging protocols, such as SMTP and XMPP.
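The client-side rewriting step can be sketched in a few lines. This is a hedged illustration with hypothetical names and addresses; a real client would persist the alias map in encrypted storage.

```python
# Sketch of auto-alias-pair rewriting: a client-side store mapping each
# contact's public address to the negotiated private alias pair.
# All names and addresses here are hypothetical.

class AliasMap:
    def __init__(self):
        # public address -> (alias we send to, alias they send to us)
        self._pairs = {}

    def record_pair(self, contact, their_alias, our_alias):
        self._pairs[contact] = (their_alias, our_alias)

    def rewrite_outgoing(self, sender, recipient):
        """Rewrite headers just before handing the message to the transport."""
        if recipient in self._pairs:
            their_alias, our_alias = self._pairs[recipient]
            return our_alias, their_alias   # use private aliases on the wire
        return sender, recipient            # fall back to public addresses

aliases = AliasMap()
aliases.record_pair("bob@otherdomain.org",
                    their_alias="x7ck2nr9q@otherdomain.org",
                    our_alias="hjj3fpwv84fn@domain.org")

# The user still addresses mail to bob@otherdomain.org; the wire sees aliases.
frm, to = aliases.rewrite_outgoing("alice@domain.org", "bob@otherdomain.org")
```

The user interface keeps showing public addresses; only the transport layer ever sees the aliases.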
+
+## Limitations
+
+There are five major limitations to this scheme:
+
+1. **Alias unmasking attacks:** because each service provider maintains an alias map for their own users, an attacker who has gained access to this alias map can de-anonymize all the associations into and out of that particular provider, for the past and future.
+2. **Timing attacks:** a statistical analysis of the time that messages of a particular size are sent, received, and relayed by both service providers could reveal the map of associations.
+3. **Log correlation problem:** a powerful attacker could gain access to the logs of both service providers, and thereby reconstruct the associations between the two providers.
+4. **Single provider problem:** this scheme does not protect associations between people on the same service provider.
+5. **Client problem:** if the user's device is compromised, the record of their actual associations can be discovered.
+
+At this point, we feel that it is OK to live with these limitations for the time being in order to simplify the logic of the user's client application and to ensure backward compatibility with existing messaging protocols.
+
+Possible future enhancements could greatly mitigate these attacks:
+
+* We could use temporary aliases that rotate daily, perhaps derived from an HMAC counter keyed with a secret shared between the two users.
+* The 'alias service' could be run by a third party, so that providers don't have access to the alias maps (thus mitigating problems 1, 3, and 4).
+* A service provider with sufficient traffic could be in a very good position to be able to aggregate and time-shift the messages it relays in order to disrupt timing attacks.
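The rotating-alias enhancement can be sketched as follows, assuming (hypothetically) that both clients share a secret and agree on a simple day counter:

```python
import hashlib
import hmac

def daily_alias(shared_secret: bytes, day_counter: int, domain: str) -> str:
    """Derive the alias both peers would compute for a given day.

    Both sides hold `shared_secret`; `day_counter` is, e.g., days since
    the epoch, so the alias rotates daily with no new negotiation.
    Parameters are illustrative, not a specified protocol.
    """
    digest = hmac.new(shared_secret, day_counter.to_bytes(8, "big"),
                      hashlib.sha256).hexdigest()
    return f"{digest[:12]}@{domain}"

# Both clients derive the same alias for the same day...
a = daily_alias(b"secret shared by alice and bob", 19000, "domain.org")
b = daily_alias(b"secret shared by alice and bob", 19000, "domain.org")
assert a == b
# ...and a different one the next day, limiting the value of captured logs.
assert a != daily_alias(b"secret shared by alice and bob", 19001, "domain.org")
```

Because yesterday's alias is never reused, a leaked alias map or server log only exposes a single day of associations.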
+
+Ultimately, however, most of these attacks are a problem when faced with an extremely powerful adversary. This scheme is not designed for these situations. Instead, it is designed to prevent the casual, mass surveillance of all communication that currently takes place in repressive and democratic countries alike, by both governments and corporations. It greatly reduces the capacity for association mapping of both traffic over the wire and in stored communication. It is not designed to make a particular user anonymous when specifically targeted by a powerful adversary.
+
+# How would X work?
+
+How would onion-routing-headers, third-party-dropbox, or mixmaster-with-signatures work? To be written.
diff --git a/pages/docs/tech/secure-email/en.md b/pages/docs/tech/secure-email/en.md
new file mode 100644
index 0000000..fe1f51f
--- /dev/null
+++ b/pages/docs/tech/secure-email/en.md
@@ -0,0 +1,578 @@
+@title = "Secure Email Report"
+@summary = "A report on the state of the art in secure email projects"
+@toc = false
+
+There are an increasing number of projects working on next generation secure email or email-like communication. This is an initial draft report highlighting the projects and comparing the approaches. Please help us fill in the missing details and correct any inaccuracies. To contribute to this document, fork the repository found at https://github.com/OpenTechFund/secure-email and issue a pull request.
+
+Contents:
+
+1. [Common Problems](#common-problems)
+ 1. [Key Management](#key-management)
+ 1. [Metadata Protection](#metadata-protection)
+ 1. [Forward Secrecy](#forward-secrecy)
+ 1. [Data Availability](#data-availability)
+ 1. [Secure Authentication](#secure-authentication)
+1. [Web Mail](#web-mail)
+ 1. [Lavaboom](#lavaboom)
+ 1. [Mega](#mega)
+ 1. [PrivateSky](#privatesky)
+ 1. [Scramble](#scramble)
+ 1. [Startmail](#startmail)
+ 1. [Whiteout](#whiteout)
+1. [Browser Extensions](#browser-extensions)
+ 1. [Mailvelope](#mailvelope)
+1. [Mail Clients](#mail-clients)
+ 1. [Bitmail](#bitmail)
+ 1. [Mailpile](#mailpile)
+ 1. [Parley](#parley)
+1. [Self-Hosted Email](#self-hosted-email)
+ 1. [Dark Mail Alliance](#self-hosted-dark-mail)
+ 1. [FreedomBox](#freedombox)
+ 1. [Mailpile](#self-hosted-mailpile)
+ 1. [Mail-in-a-box](#mail-in-a-box)
+ 1. [kinko](#kinko)
+1. [Email Infrastructure](#email-infrastructure)
+ 1. [Dark Mail Alliance](#dark-mail-alliance)
+ 1. [LEAP Encryption Access Project](#leap)
+1. [Post-email alternatives](#post-email-alternatives)
+ 1. [Bitmessage](#bitmessage)
+ 1. [Bote mail](#bote-mail)
+ 1. [Cables](#cables)
+ 1. [Dark Mail Alliance](#p2p-dark-mail-alliance)
+ 1. [Enigmabox](#enigmabox)
+ 1. [FlowingMail](#flowingmail)
+ 1. [Goldbug](#goldbug)
+ 1. [Pond](#pond)
+1. [Related Works](#related-works)
+
+<a name="common-problems"></a>Common Problems
+===========================================================
+
+All of the technologies listed here face a common set of problems when trying to make email (or email-like communication) secure and easy to use. These problems are hard, and have defied easy solutions, because there are no quick technological fixes: at issue is the complex interaction between user experience, real world infrastructure, and security. Although no consensus has yet emerged on how best to tackle any of these problems, the diversity of projects listed in this report reflects a surge of interest in this area and an encouraging spirit of experimentation.
+
+<a name="key-management"></a>Key Management
+-----------------------------------------------------------
+
+All the projects in this report use public-key encryption to allow a user to send a confidential message to the intended recipient, and for the recipient to verify the authorship of the message. Unfortunately, public-key encryption is notoriously difficult to use properly, even for advanced users. The very concepts are confusing for most users: public key versus private key, key signing, key revocation, signing keys versus encryption keys, bit length, and so on.
+
+Traditionally, public key cryptography for email has relied on either the X.509 Certificate Authority (CA) system or a decentralized "Web of Trust" (WoT) for key validation (authenticating that a particular person owns a particular key). Recently, both schemes have come under intense criticism. Repeated security lapses at many of the Certificate Authorities have revealed serious flaws in the CA system. On the other hand, in an age where we better understand the power of social network analysis and the sensitivity of the social graph, the exposure of metadata by a "Web of Trust" is no longer acceptable from a security standpoint.
+
+This is where we are now: we have public key technology that is excessively difficult for the common user, and our only methods of key validation have fallen into disrepute. The projects listed here have plunged into this void, attempting to simplify the usage of public-key cryptography. These efforts have four elements:
+
+* Key discovery: There is no commonly used standard for discovering the public key attached to a particular email address. All the projects here that use OpenPGP intend to initially use, as a stop-gap measure, the OpenPGP keyservers for key discovery, although the keyserver infrastructure was not designed to be used in this way.
+* Key validation: If not Certificate Authorities or Web of Trust, what then? Nearly every project here uses Trust On First Use (TOFU) in one way or another. With TOFU, a key is assumed to be the right key the first time it is used. TOFU can work well for long term associations and for people who are not being targeted for attack, but its security relies on the security of the discovery transport and the application's ability to retain a memory of discovered keys. TOFU can break down in many real-world situations where a user might need to generate new keys or securely communicate with a new contact. The projects here are experimenting with TOFU in different ways, and these problems can likely be mitigated by combining TOFU with other measures.
+* Key availability: Almost every attempt to solve the key validation problem turns into a key availability problem, because once you have validated a public key, you need to make sure that this validation is available to the user on all the possible devices they might want to send or receive messages on.
+* Key revocation: What happens when a private key is lost, and a user wants to issue a new public key? None of the projects in this report have an answer for how to deal with this in a post-CA and post-WoT world.
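Trust On First Use, mentioned above, reduces to a simple memory check. Here is a minimal sketch (a hypothetical helper, not any listed project's actual API):

```python
# Minimal Trust-On-First-Use (TOFU): remember the first key fingerprint
# seen for an address and flag any later change. A real client would
# persist this store and present the KEY-CHANGED case carefully to the user.

class TofuStore:
    def __init__(self):
        self._seen = {}   # address -> key fingerprint

    def check(self, address: str, fingerprint: str) -> str:
        known = self._seen.get(address)
        if known is None:
            self._seen[address] = fingerprint
            return "trusted-first-use"     # assume the first key is right
        if known == fingerprint:
            return "match"
        return "KEY-CHANGED"               # possible attack, or a legitimate rekey

store = TofuStore()
assert store.check("alice@example.org", "AB12") == "trusted-first-use"
assert store.check("alice@example.org", "AB12") == "match"
assert store.check("alice@example.org", "CD34") == "KEY-CHANGED"
```

The ambiguity of the last case is exactly the weakness noted above: TOFU alone cannot distinguish an attacker from a user who regenerated their keys.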
+
+The projects that use a public key as a unique identifier do not have the key validation problem, because they do not need to try to bind a human memorable identifier to a long non-memorable public key: they simply enforce the use of the public key as the user's address. For example, rather than `alice@example.org` as the identifier, these systems might use `8b3b2213ff00e5fb684b003d005ed2fb`. In place of the key validation problem, this approach raises the key exchange problem: how do two parties initially exchange long public keys with one another? This approach is taken by all the P2P projects listed here (although there do exist some P2P applications that don't use public key identifiers).
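A minimal sketch of the key-as-identifier idea, assuming SHA-256 and a 32-character truncation chosen purely for illustration:

```python
import hashlib

def key_identifier(public_key: bytes) -> str:
    """Derive a self-authenticating address from the key itself.

    Anyone holding the public key can recompute and verify the address,
    so no third-party validation step is needed. The hash and truncation
    length here are illustrative assumptions, not any project's scheme.
    """
    return hashlib.sha256(public_key).hexdigest()[:32]

# Instead of alice@example.org, the address IS a fingerprint of her key.
addr = key_identifier(b"alice's public key material")
```

The cost is usability: the address is meaningless to humans, which is why these systems face a key exchange problem instead of a key validation problem.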
+
+Some of the major experimental approaches to solving the problem of public key discovery and validation include:
+
+1. Inline: Many of the projects here plan to simply include the user's public key as an attachment to every outgoing email (or in a footer or SMTP header).
+1. DNS: Key distributed via DNSSEC, where a service provider adds a DNS entry for each user containing the user's public key or fingerprint.
+1. Append-only log: Proposal to modify Certificate Transparency to handle user accounts, where audits are performed against append-only logs.
+1. Network perspective: Validation by key endorsement (third party signatures), with audits performed via network perspective.
+1. Introductions: Discovery and validation of keys through acquaintance introduction.
+1. Mobile: Although too lengthy to manually transcribe, an app on a mobile device can be used to easily exchange keys in person (for example, via a QR code or bluetooth connection).
+
+<a name="metadata-protection"></a>Metadata Protection
+-----------------------------------------------------------
+
+Traditional schemes for secure email have left metadata exposed. We now know that metadata is often more sensitive than message content: metadata is structured data, easily stored forever, and subject to powerful techniques of social network analysis that can be incredibly revealing.
+
+Metadata protection, however, is **hard**. In order to protect metadata, the message routing protocol must hide the sender and recipient from all the intermediaries responsible for relaying the message. This is not possible with the traditional protocol for email transport, although it will probably be possible to piggyback additional (non-backward compatible) protocols on top of traditional email transport in order to achieve metadata protection.
+
+Alternately, some projects reject traditional email transport entirely. These decentralized peer-to-peer approaches to metadata protection generally fall into four camps: (1) directly relay the message from the sender's device to the recipient's device; (2) relay messages through a network of friends; (3) broadcast messages to everyone; (4) relay messages through an anonymization network such as Tor. The first two approaches protect metadata, but at the expense of increasing vulnerability to traffic analysis that could reveal the same metadata. The third solution faces serious problems of scalability. Pond uses the fourth method, discussed below.
+
+All schemes for metadata protection face the prospect of increasing Spam (since one of the primary methods used to prevent Spam is analysis of metadata). This is why some schemes with strong metadata protection make it impossible to send or receive messages to anyone you are not already in contact with. This works brilliantly for reducing Spam, but is unlikely to be a viable long term strategy for entirely replacing the utility of email.
+
+<a name="forward-secrecy"></a>Forward Secrecy
+-----------------------------------------------------------
+
+Forward secrecy is a security property that prevents an attacker from saving messages today and then later decrypting these messages once they have captured the user's private key. Without forward secrecy, an attacker is more likely to be able to capture messages today and simply wait for computers to become powerful enough to crack the encryption by brute force. Traditional email encryption offers no forward secrecy.
+
+All methods for forward secrecy involve a process where two parties negotiate an ephemeral key that is used for a short period of time to secure their communication. In many cases, the ephemeral key is generated anew for every single message. Traditional schemes for forward secrecy are incompatible with the asynchronous nature of email communication, since with email you still need to be able to send someone a message even if they are not online, and ephemeral key generation requires a back-and-forth exchange between both parties.
+
+There are several new experimental (and tricky) protocols that attempt to achieve both forward secrecy and support for asynchronous communication, but none have yet emerged as a standard. These protocols either (1) require an initial bootstrap message that is not forward secret, (2) require an initial synchronous exchange to start the process, or (3) rely on a pool of pre-generated ephemeral key pairs that can be used on first contact. When the continually changing ephemeral key for a conversation is lost by either party, then the initialization stage is performed again.
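The third approach, a pool of pre-generated ephemeral keys, can be sketched with Diffie-Hellman. The group parameters below are a toy choice for illustration only; a real protocol would use a standardized group or an elliptic curve.

```python
import secrets

# Toy sketch of a pre-generated ephemeral key pool ("prekeys").
# P and G are illustrative and NOT secure parameters.
P = 2**127 - 1   # a Mersenne prime, used here only as a toy modulus
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# The recipient pre-generates ephemeral keys and uploads the public
# halves to her server before going offline.
pool = [keypair() for _ in range(3)]
published = [pub for _, pub in pool]

# The sender, while the recipient is offline, fetches one prekey,
# generates his own ephemeral key, and derives a forward-secret key.
prekey_index = 0
sender_priv, sender_pub = keypair()
sender_shared = pow(published[prekey_index], sender_priv, P)

# The recipient later uses the matching private half, then deletes it,
# so a future key compromise cannot decrypt this message.
recipient_priv, _ = pool[prekey_index]
recipient_shared = pow(sender_pub, recipient_priv, P)

assert sender_shared == recipient_shared
```

Because each prekey is used once and then destroyed, first-contact messages gain forward secrecy without requiring both parties to be online at the same time.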
+
+Another possible approach is to use traditional encryption with no support for forward secrecy but instead rely on a scheme for automatic key discovery and validation in order to frequently rotate keys. This way, a user could throw away their private key every few days, achieving a very crude form of forward secrecy.
+
+<a name="data-availability"></a>Data Availability
+-----------------------------------------------------------
+
+Users today demand data availability: they want to be able to access their messages and send messages from any device they choose, wherever they choose, and whenever they choose. Most importantly, they don't want the loss of any particular device to result in a loss of all their data. For insecure communication, achieving data availability is dead simple: store everything in the cloud. For secure communication, however, we have no proven solutions to this problem. As noted above, the key management problem is also really a data availability problem.
+
+Most of the email projects here have postponed dealing with the data availability problem. A few have used IMAP to synchronize data or developed their own secure synchronization protocol. Several of the email-like P2P approaches rely on a P2P network for data availability.
+
+<a name="secure-authentication"></a>Secure Authentication
+-----------------------------------------------------------
+
+For those projects that make use of a service provider, one of the key problems is how to authenticate securely with the service provider without revealing the password (since the password is probably also used to encrypt the private key and other secure storage, so it is important that the service provider does not have cleartext access as with typical password authentication schemes). The possible schemes include:
+
+* Separate passwords. The application can use one password for authentication and a separate password for securing secrets.
+* Pre-hash the password on the client before sending it to the server. This method can work, although it does not also authenticate the server (an impostor server can always reply with a success message), and is still vulnerable to brute force dictionary attacks.
+* Use Secure Remote Password (SRP), a type of cryptographic zero-knowledge proof designed for password authentication in which the client and server mutually authenticate. SRP has been around a while, and is fairly well analyzed, but it is still vulnerable to brute force dictionary attacks (albeit much less than traditional password schemes).
+* Sign a challenge from the server with the user's private key. This has the advantage of being nearly impossible to brute force attack, but is vulnerable to impostor server providers and requires that the user's device has the private key.
+
+No consensus or standard has yet emerged.
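The first two schemes above can be combined in a sketch like the following, assuming PBKDF2 with hypothetical parameters: the password is stretched client-side and split, so the server only ever sees the authentication half. As noted above, this still does not authenticate the server and remains open to dictionary attacks.

```python
import hashlib

def derive_credentials(password: str, salt: bytes):
    """Stretch the password once on the client, then split the output.

    The first half is sent to the server as an authentication token;
    the second half never leaves the client and protects local secrets.
    Iteration count and split sizes are illustrative assumptions.
    """
    stretched = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                    iterations=100_000, dklen=64)
    auth_token = stretched[:32]    # sent to the server on login
    storage_key = stretched[32:]   # used locally, e.g. to encrypt the private key
    return auth_token, storage_key

auth, storage = derive_credentials("correct horse battery staple",
                                   b"per-user-salt")
# The server stores only (a hash of) auth_token, never the password
# and never storage_key, so it cannot decrypt the user's secrets.
assert auth != storage
```

Since the derivation is deterministic, any of the user's devices can recompute both halves from the password alone, which also helps with the data availability problem discussed earlier.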
+
+<a name="web-mail"></a>Web Mail
+===========================================================
+
+Most users are familiar with web-based email and the incredible convenience it offers: you can access your email from any device, and you don't need to worry about data synchronization. Developers of web-based email face several difficult challenges when attempting to make a truly secure application. These challenges can be overcome, but not easily.
+
+First, because the web application is loaded from the web server each time you use it, the service provider could serve you a version of the client that includes a backdoor. To overcome this vulnerability, it is possible to load the code for the web application from a third party. There are two ways of doing this:
+
+1. App Store: Most web browsers support special extensions in the form of "Browser Applications". These are loaded from some kind of app store and installed on the user's device. In this case, the third party that provides the application is the app store. The user is therefore relying on the app store to furnish them with a secure version of the app. For example, this is the approach taken by [cryptocat](https://crypto.cat).
+2. Third Party: There are two advanced mechanisms to allow a web application to be loaded from one website and allow it to access data from another website. One is called CORS ([Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing)) and the other is HTML5's [window.postMessage command](https://developer.mozilla.org/en-US/docs/Web/API/window.postMessage). With either method, anyone can be the third party furnishing the application, or it can be self hosted. For example, this is the approach taken by [Unhosted](https://unhosted.org).
+
+Second, even if the application is loaded from a trusted third party, web browsers are not an ideal environment for sensitive data: there are many ways for an in-browser application to leak data and web browsers are notoriously prone to security holes (it is a very difficult problem to be able to run untrusted code locally in a secure sandbox). To their credit, the browser developers are often vigilant about fixing these holes (depending on who you ask), but the browser environment is far from a secure computing environment. It continues to be, however, the most convenient environment.
+
+Third, developers of web-based secure email face an additional challenge when dealing with offline data or data caching. Modern HTML5 apps typically store a lot of data locally on the user's device using the localStorage facility. Currently, however, no browser stores this encrypted. A secure web-based email application must either choose to not support any local storage, or develop a scheme for individually encrypting each object put in localStorage, a process which is very inefficient. Even storing keys temporarily in short lived session storage is problematic, since these can be easily read from disk later.
+
+These challenges do not apply to downloaded mail clients that happen to use HTML5 as their interface (Mailpile, for example).
+
+<a name="lavaboom"></a>Lavaboom
+-----------------------------------------------------------
+
+[lavaboom.com](https://www.lavaboom.com)
+
+Lavaboom is a new web-based mail provider from Germany using client-side encryption in the browser. No further details are available at this time.
+
+Lavaboom's name is a tribute to the shuttered Lavabit service, although Lavaboom has no affiliation or people in common with Lavabit.
+
+<a name="mega"></a>Mega
+-----------------------------------------------------------
+
+[mega.co.nz](https://mega.co.nz)
+
+The relaunch of Mega has featured client-side encryption via a javascript application running in the browser. Mega has announced plans to extend their offerings to include email service with a similar design. No details are yet forthcoming. In interviews, Mega has said the javascript running in the browser will be open source, but the server component will be proprietary.
+
+<a name="privatesky"></a>PrivateSky
+-----------------------------------------------------------
+
+PrivateSky was a secure web-based email service that chose to shut down because their design was not compatible with UK law. Many in the press have [said GCHQ forced the closure](http://www.ibtimes.co.uk/articles/529392/20131211/gchq-forced-privatesky-secure-email-service-offline.htm), which the [company refutes](http://www.certivox.com/blog/bid/359788/The-real-story-on-the-PrivateSky-takedown).
+
+<a name="startmail"></a>StartMail
+-----------------------------------------------------------
+
+[startmail.com](http://startmail.com)
+
+The makers of the secure search engine [startpage.com](https://startpage.com) have announced they will be providing secure email service.
+
+Despite its tagline as the "world's most private email," StartMail is remarkably insecure. It offers regular IMAP service and a webmail interface that supports OpenPGP, but the user still must trust StartMail entirely. For example, when you authenticate, your password string is sent to StartMail, and new OpenPGP keypairs are generated on the server, not the client. The website also makes some dubious statements, such as claiming to be more secure because their TLS server certificate supports extended validation.
+
+Verdict: oil of snake
+
+<a name="scramble"></a>Scramble
+-----------------------------------------------------------
+
+[scramble.io](https://scramble.io)
+
+Scramble is an OpenPGP email application that can be loaded from a website (with plans to add app store support). Additionally, you can sign up for email service from scramble.io.
+
+**Keys:** Private keys are generated in the browser app, encrypted with the user's passphrase, and then stored on the server. The server never sees the user's passphrase (the password is hashed using scrypt before being sent to the server during account creation and authentication). The master storage secret (symmetric key) used to encrypt keys is stored in the browser's sessionStorage, which is erased when the user logs out. Keys are validated using notaries.
+
+**Infrastructure:** Scramble uses a system of network perspectives to discover and validate public keys. The client will come with a list of pre-blessed notaries that can be used to query for public keys. If the notaries agree, the client will consider the key to be validated.
+
+**Application:** Currently, Scramble is a traditional HTML5 javascript application loaded from the website. In the future, Scramble will also be an installable browser app.
+
+* Written in: Go, Javascript
+* Source code: https://github.com/dcposch/scramble
+* Design documentation: https://github.com/dcposch/scramble/wiki/Scramble-Protocol
+* License: LGPL
+* Platforms: Windows, Mac, Linux (with Android planned).
+
+<a name="whiteout"></a>Whiteout
+-----------------------------------------------------------
+
+[whiteout.io](https://whiteout.io)
+
+Whiteout is a commercial service featuring an HTML5-based OpenPGP email client that is loaded from the web.
+
+* Written in: Javascript
+* Source code: https://github.com/whiteout-io/mail-html5
+* License: proprietary, but the code is available for inspection.
+
+<a name="browser-extensions"></a>Browser Extensions
+===========================================================
+
+A browser extension modifies the behavior of the web browser (not to be confused with a browser application, which is self-contained and has far fewer permissions). Browser extensions are able to modify how the user interacts with a variety of websites. Browser extensions share many of the same advantages and disadvantages of [web mail approaches](#webmail).
+
+<a name="mailvelope"></a>Mailvelope
+-----------------------------------------------------------
+
+[mailvelope.com](http://mailvelope.com)
+
+Mailvelope is a browser extension that allows you to use OpenPGP email with traditional web-mail providers like Gmail, Yahoo, and Outlook.com.
+
+**Keys:** The private key is generated for you, password protected, and stored in the browser's local storage (along with public keys). In the future, the plan is to support automatic discovery and validation of public keys using OpenPGP keyservers and message footers.
+
+**Application:** When the extension detects that you have opened a web page from a supported web-mail provider such as Gmail, it offers you the opportunity to encrypt what you type in the compose window and to decrypt messages you receive.
+
+**Limitations:** Because of an inherent limitation in the way Mailvelope can interface with web-mail, it is not able to send OpenPGP/MIME (although it can read it fine). As mentioned elsewhere, browser storage is not a particularly ideal place to be storing keys. When a web-mail provider changes their UI (or API if they happen to have one), the extension must be updated to handle the new format.
+
+* Contact: info@mailvelope.com
+* Written in: Javascript
+* Source code: https://github.com/toberndo/mailvelope
+* Design documentation: http://www.mailvelope.com/help
+* License: AGPL
+* Platforms: Windows, Mac, Linux (with Android planned).
+
+<a name="mail-clients"></a>Mail Clients
+===========================================================
+
+An email client, or MUA (Mail User Agent), provides a user interface to access email from any service provider. Traditional examples of email clients include Thunderbird or Microsoft Outlook (although both these applications include a lot of other functionality as well). Nearly all email clients communicate with the email service provider using IMAP or POP and SMTP, although some also support local mailboxes in mbox or Maildir format.
+
+There are two primary advantages to the mail client approach:
+
+1. Existing accounts: By using a custom secure mail client, a user can continue to use their existing email accounts.
+1. Tailored UI: A custom client has the potential to rethink the email user experience in order to better convey security related details to the user.
+
+The mail client approach, however, also has several disadvantages:
+
+1. Insecure service providers: A mail client cannot address many of the core problems with email security when used with a traditional email provider. For example, metadata will not be protected in storage or transit, and the provider cannot aid in key discovery or validation. Most importantly, many existing mail providers are highly vulnerable, since few rely on DNSSEC for their MX records or validate their StartTLS connections for mail relay (when they even bother to enable StartTLS). A traditional email provider also requires authentication via password that is seen by the provider in clear text, and might be recorded by them. Finally, most service providers retain significant personally identifiable information, such as IP address of clients.
+1. Install a new app: As with many of the other approaches, the custom mail client approach requires that users download and install a specialized application on their device before they can use it.
+
+Ultimately, the level of email security that is possible with the custom mail client approach will always be limited. However, custom email clients may be an excellent strategy for gradually weaning users away from email and toward a different and more secure protocol. Most of the projects in this section see email support as a gateway to ease the transition to something that can replace email.
+
+<a name="bitmail"></a>Bitmail
+-----------------------------------------------------------
+
+[bitmail.sf.net](http://bitmail.sf.net)
+
+Bitmail is a desktop application that provides a user interface for traditional IMAP-based mail, but also supports a custom peer-to-peer protocol for relaying email through a network of friends. Bitmail will support both OpenPGP and S/MIME.
+
+**Keys:** Keys are validated using a shared secret or fingerprint validation. Public keys are discovered over the P2P network. Keys are stored locally in an encrypted database.
+
+**Routing:** Bitmail uses an opportunistic message distribution model, called "Echo", in which every message is sent to every neighbor. It is very similar to the protocols used by RetroShare and Briar.
+
+**Application:** Bitmail uses the Qt library for cross platform UI.
+
+There are also plans to include a Bitmail MUA extension.
+
+* Written in: C
+* Source code: http://sourceforge.net/projects/bitmail/files
+* Design documentation: http://sourceforge.net/p/bitmail/code/HEAD/tree/branches/BitMail.06.2088_2013-11-03/BitMail/branches/BitMail/Documentation/
+* License: GPL v2
+* Platforms: Windows (with Mac and Linux planned).
+
+Note: it is unclear to me which of the features above are planned and which are currently working.
+
+<a name="mailpile"></a>Mailpile
+-----------------------------------------------------------
+
+[mailpile.is](http://mailpile.is)
+
+Mailpile is an email client designed to quickly handle large amounts of email and also support user-friendly encryption. The initial focus is on email, with plans to eventually support post-email protocols like bitmessage, flowingmail, or darkmail. Also, the developers hope to add support for XMPP-based chat in the future. Since the Mozilla Foundation has not committed the resources necessary to keep Thunderbird contemporary, the Mailpile initiative holds a lot of promise as a cross-platform mail client that seeks to redesign how we interact with email.
+
+**Keys:** Mailpile email encryption is based on OpenPGP (it uses your GPG keyring). Key discovery will be handled using OpenPGP keyservers and by including public keys as attachments to outgoing email. Public keys are trusted on first use, with plans for validation via DANE and manual fingerprint verification (future support for a P2P protocol might include additional methods, such as Certificate Transparency or Short Authentication Strings). Currently, keys are not backed up.
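+
+Trust-on-first-use, the key validation policy Mailpile and several other projects here start from, can be sketched in a few lines. This is only an illustration of the policy (hypothetical class and method names, not Mailpile's actual code):
+
```python
import hashlib

class TOFUKeyStore:
    """Minimal trust-on-first-use store: pin the first key seen for
    each address and flag any later change for manual verification."""

    def __init__(self):
        self.seen = {}  # address -> pinned key fingerprint

    @staticmethod
    def fingerprint(public_key_bytes):
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check(self, address, public_key_bytes):
        fpr = self.fingerprint(public_key_bytes)
        if address not in self.seen:
            self.seen[address] = fpr       # first use: trust and pin
            return "trusted-first-use"
        if self.seen[address] == fpr:
            return "match"                 # same key as before
        return "MISMATCH"                  # key changed: require manual check

store = TOFUKeyStore()
print(store.check("alice@example.org", b"key-material-1"))  # trusted-first-use
print(store.check("alice@example.org", b"key-material-1"))  # match
print(store.check("alice@example.org", b"key-material-2"))  # MISMATCH
```
+
+The weakness of this policy, and the reason projects plan to layer DANE or fingerprint verification on top of it, is the MISMATCH case: TOFU alone cannot tell a legitimate key rotation from an attack.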
+
+**Application:** The Mailpile UI is written using HTML5 and Javascript, running against a self-hosted Python application (which typically lives locally on the device, but might run on your own server).
+
+**Limitations:** Mailpile does not currently have a scheme for recovery if your device is destroyed or a method for securely synchronizing keys among devices. Although the search index is stored encrypted on disk (if the user already has GPG installed and a key pair generated), it is encrypted in a way that requires the index to be loaded entirely into memory. Mailpile is under very active development, so these and other issues may change in the near future.
+
+* Written in: Python, Javascript
+* Source code: http://github.com/pagekite/mailpile
+* Design documentation: http://github.com/pagekite/mailpile
+* License: AGPL & Apache
+* Platforms: Windows, Mac, Linux (with Android and iOS planned).
+* Contact: team@mailpile.is
+
+<a name="parley"></a>Parley
+-----------------------------------------------------------
+
+[parley.co](https://parley.co)
+
+Parley is a desktop mail client with a UI written using HTML5 and Javascript, with a local backend written in Python.
+
+**Keys:** Although Parley can be used with any service provider, the Parley servers are used to publish public keys and back up client-encrypted private keys. For key discovery, Parley uses a central repository and the OpenPGP keyservers. For key validation, Parley relies on trust on first use and Parley key endorsement.
+
+**Infrastructure:** All users of the Parley client also sign up for the Parley service, but they use their existing email account. The Parley server acts as a proxy that uses [context.io](http://context.io) for email storage (context.io is a commercial service that provides a REST API for a traditional IMAP account). The Parley server also handles key discovery, validation, and backup. Both the client and server are released as free software.
+
+**Application:** Parley is currently bundled into an executable using [Appcelerator](http://www.appcelerator.com/). The Parley client does not speak IMAP or SMTP directly. Rather, it uses the email REST API of context.io.
+
+**Limitations:** All user email is stored by context.io, albeit encrypted in OpenPGP format. Metadata, however, is exposed to context.io (in addition to your service provider).
+
+* Written in: Python, Javascript
+* Source code: https://github.com/blackchair/parley
+* Design documentation: https://parley.co/#how-it-works
+* License: BSD
+* Platforms: Windows, Mac, Linux (with Android and iOS planned).
+
+<a name="self-hosted-email"></a>Self-Hosted Email
+===========================================================
+
+Traditionally, email is a federated protocol: when you send an email it travels from your computer, to the server of your email provider, to the server of the recipient's provider, and finally to the recipient's computer. The key idea with self-hosted email is to cut out the middleman and run your own email server.
+
+In the United States, much of the interest in self-hosted email is driven by the Supreme Court's current (and particularly odd) interpretation of the 4th amendment called the ["Third-Party Doctrine"](https://en.wikipedia.org/wiki/Third-Party_Doctrine). Essentially, you have much weaker privacy protections in the US if you entrust any of your data to a third party. Additionally, the Court has so far afforded much greater protections to items physically inside your home. "Aha!" say the hackers and the lawyers, "we will just put email in the home."
+
+Unfortunately, it is not so simple. There are some major challenges to putting email servers in everyone's home:
+
+* **Delegated reputation**: The current email infrastructure is essentially a system of delegated reputation. In order to be able to send mail to most providers and not have a large percentage of it marked as Spam, a service provider must gradually build up a good reputation. Users are able to send mail because their provider has cultivated this reputation and maintained it by closing abusive accounts. It is certainly possible to run an email provider with a single user, but it is much harder to build up a good reputation. Also, many email providers block all relay attempts from IP addresses that have been flagged as "home" addresses, on the (probable) assumption that the message is coming from a virus and not a legitimate email server.
+* **Servers are on a hostile network**: Because a server needs to have open ports that are publicly accessible from the internet at all times, running one is much trickier than running a simple desktop computer. It is much more critical to make sure security upgrades are applied in a timely manner, and that you are able to respond to external attacks, such as "Spam Bombs". Any publicly addressable IP that is put on the open internet will be continually probed for vulnerabilities. Self-hosting will probably work great for a protocol like Pond, where there are strict restrictions on who may deliver incoming messages. Email, however, is a protocol that is wide open and prone to abuse.
+* **Sysadmins are not robots**: No one has yet figured out how to make self-healing servers that don't require a skilled sysadmin to keep them healthy. Once someone does, a lot of sysadmins will be out of work, but they are presently not very worried. There are many things that commonly go wrong with servers, such as upgrades failing, drives filling up, daemons crashing, memory leaks, hardware failures, and so on.
+* **Does not address the important problems**: Moving the physical location of a device does nothing to solve the hard problems associated with easy-to-use email security (such as data availability and key validation). Some of the approaches to these problems rely on service provider infrastructure that would be infeasible to self host.
+* **DNS is hard**: One of the important security problems with traditional email is the vulnerability of MX DNS records. Doing DNS correctly is hard, and not something that can be expected of the common user.
+
+Self-hosted email is an intriguing "legal hack", albeit one that faces many technical challenges.
+
+<a name="self-hosted-dark-mail"></a>Dark Mail Alliance
+-----------------------------------------------------------
+
+The Dark Mail Alliance has said they want to support self-hosting for the server component of the system. No details yet.
+
+<a name="freedombox"></a>FreedomBox
+-----------------------------------------------------------
+
+[freedomboxfoundation.org](https://freedomboxfoundation.org)
+
+From its early conception, part of FreedomBox was "email and telecommunications that protects privacy and resists eavesdropping". Email, however, is not currently being worked on as part of FreedomBox (as far as I can tell).
+
+<a name="self-hosted-mailpile"></a>Mailpile
+-----------------------------------------------------------
+
+Although Mailpile is primarily a mail client, the background Python component can read the Maildir format for email. This means you could install Mailpile on your own server running a Mail Transfer Agent (MTA) like postfix or qmail. You would then access your mail remotely by connecting to your server via a web browser.
+
+<a name="Mail-in-a-box"></a>Mail-in-a-box
+-----------------------------------------------------------
+
+<a href="https://github.com/JoshData/mailinabox">github.com/JoshData/mailinabox</a>
+
+Mail-in-a-box helps linux hobbyists and email developers set up self-hosted email. It will install and configure the necessary Debian packages required to turn a machine running Ubuntu into a self-hosted email server. It provides a fairly straightforward, standard email server with IMAP, SMTP, greylisting, DKIM and SPF. It also includes a command line tool for adding and removing accounts.
+
+**Advantages:** Something quick for anyone with some linux skill who wants to experiment with email.
+
+**Limitations:** Setting up an email server is the easy part, maintaining the service over time is the tricky part. Without any automation recipes using something like Puppet, Chef, Salt, or CFEngine, mail-in-a-box is unlikely to be useful to anyone but the curious hobbyist.
+
+* Written in: Bash
+* Source code: https://github.com/JoshData/mailinabox
+* License: CC0 1.0 Universal
+
+<a name="kinko"></a>kinko
+-----------------------------------------------------------
+
+[kinko](https://kinko.me) implements an encrypting/decrypting SMTP and IMAP proxy on ARM-class hardware, the kinko box. Emails are synced from the user's email accounts via IMAP to the box and are stored in plaintext in a secure storage area on the box. The kinko box also includes a webmail application, so that email can be used from a browser.
+
+Connections to the kinko box are secured by TLS using a private key only known to the box itself. Furthermore, the kinko box is tunnelled to a public internet location. Consequently, users can access secure email from everywhere, using IMAP compatible email clients and/or browsers, including mobile clients.
+
+kinko uses GnuPG for encryption, with the addition of encrypting the email subject. Further additions should allow "Post-email alternatives" (a la bitmessage) to be used with the email clients that users are using today already. Other, privacy-related additions are planned as well.
+
+**Key discovery and validation:** Users can upload existing PGP keyrings. PGP keys are discovered via email
+addresses, email content, and PGP key servers. Keys are trusted on first use (but this policy can be changed
+to explicit fingerprint validation.)
+
+**Project status:** An alpha prototype exists. We are preparing for the release of a beta package in Q2/2014.
+
+**Languages:** The kinko base system is implemented in ruby and shell, with minor portions in native code.
+Applications can be implemented in more or less any language.
+
+**Webmail:** The currently included webmail application is roundcube webmail. That might change in the future.
+
+**Licenses:** All portions of the kinko system will be released under the AGPL license. (Included 3rd party
+applications will use their respective open source licenses). The hardware is open sourced as
+per [olimex](https://www.olimex.com/wiki/A10-OLinuXino-LIME).
+
+<a name="email-infrastructure"></a>Email Infrastructure
+===========================================================
+
+The "infrastructure" projects give a service provider the opportunity to offer secure email accounts to end-users. By modifying how both email clients and email servers work, these projects have the potential to deploy greater security measures than are possible with a client-only approach. For example:
+
+* Encrypted relay: A secure email provider is able to support, and enforce, encrypted transport when relaying mail to other providers. This is an important mechanism for preventing mass surveillance of metadata (which is otherwise not protected by OpenPGP client-side encryption of message contents).
+* Easier key management: A secure email provider can endorse the public keys of its users, and provide assistance to various schemes for automatic validation. Additionally, a secure email provider, coupled with a custom client, can make it easy to securely manage and back up the essential private keys which are otherwise cumbersome for most users to manage.
+* Invisible upgrade to better protocols: A secure email provider has the potential to support multiple protocols bound to a single user@domain address, allowing automatic and invisible upgrades to more secure post-email protocols when both parties detect the capability.
+* A return to federation: The recent concentration of email to a few giant providers greatly reduces the health and resiliency of email as an open protocol, since now only a few players essentially monopolize the medium. Projects that seek to make it easier to offer secure email as a service have the potential to reverse this trend.
+* Secure DNS: A secure provider can support DNSSEC and DANE, while most other email providers are unlikely to anytime soon. This is very important, because it is easy to hijack the MX records of a domain without DNSSEC.
+* Minimal data retention: A service provider that follows "best practices" will choose to retain less personally identifiable information on their users, such as their home IP addresses.
+
+The goal of both projects in this category is to build systems where the service provider is untrusted and cannot compromise the security of its users.
+
+Despite the potential of this approach, there are several unknown factors that might limit its appeal:
+
+* In order to benefit from a more secure provider, a user will need to switch their email account and email address, a very high barrier to adoption.
+* Where once there were many ISPs that offered email service, it is no longer clear if there is either the demand to sustain many email providers or the supply of providers interested in offering email as a service.
+* Users must download and install a custom application.
+
+<a name="dark-mail-alliance"></a>Dark Mail Alliance
+-----------------------------------------------------------
+
+[darkmail.info](https://darkmail.info)
+
+The Dark Mail Alliance will include both a client application and server software. The plan is to support traditional encrypted email (both OpenPGP and S/MIME), a new federated email-like protocol adapted from SilentCircle's instant message protocol (SCIMP), and a pure peer-to-peer messaging protocol. Both the client and server will be made available as free software.
+
+**Keys:** Key pairs will be generated on the user's device and uploaded to the service provider. [Certificate Transparency](http://certificate-transparency.org) will be used to automatically validate the service provider's endorsement of these public keys. Dark Mail additionally plans to support fingerprint confirmation, short authentication strings, and shared secret for manual key validation. Automatic discovery of public keys will happen using DNS, HTTPS, and via the messages themselves.
+
+**Routing:** The post-email messaging protocol promises to have forward secrecy and protection from metadata analysis (details have not yet been posted, and SCIMP does not currently support meta-data protection). Dark Mail Alliance plans to additionally support pure peer-to-peer messaging using a key fingerprint as the user identifier.
+
+**Infrastructure:** Dark Mail plans to support three types of architectures: traditional client/server, self-hosted, and pure peer-to-peer. No details yet on how these will work.
+
+**Application:** The client application will work with any existing MUA by exposing a local IMAP/SMTP server that the MUA can connect to.
+
+**Limitations:** Dark Mail has not yet released any code or design documents. However, they certainly have the resources to carry out their plans.
+
+* Written in: C
+* Source code: none yet
+* Design documentation: none yet
+* License: planned to be OSI-compatible
+* Platforms: initially Android and iOS, followed by Windows, OS X, Linux, and Windows Phone.
+* Contact: press@darkmail.info
+
+<a name="leap"></a>LEAP Encryption Access Project
+-----------------------------------------------------------
+
+[leap.se](https://leap.se)
+
+LEAP includes both a client application and turn-key system to automate the process of running a secure service provider. Currently, this includes user registration and management, help tickets, billing, VPN service, and secure email service. The secure email service is based on OpenPGP.
+
+**Keys:** Key pairs are generated on the user's device. Keys, and all user data, are stored in a client-encrypted database that is synchronized among the user's devices and backed up to the service provider. Keys are automatically validated using a combination of provider endorsement and network perspective (coming soon). Keys are discovered via the OpenPGP keyservers, the OpenPGP header, email footers, and a custom HTTP-based discovery protocol.
+
+**Infrastructure:** LEAP follows a traditional federated client/server architecture. The client is designed to work with any LEAP-compatible service provider (with plans to support legacy IMAP providers in the future). For security reasons, users are encouraged to get the application from LEAP and not their service provider.
+
+**Application:** The client application works with any existing MUA by exposing a local IMAP/SMTP server that the MUA can connect to. There is a Thunderbird extension to automate configuration of the account in Thunderbird. The client application communicates with the service provider using a custom protocol for synchronizing encrypted databases. The application is a very small C program that launches the Python code. The user interface is written using Qt.
+
+**Limitations:** In the current implementation, the security properties of forward secrecy and metadata protection are not end-to-end. Instead, the client relies on the service provider to ensure these properties. This limitation is due to some inherent limitations in the existing protocols for secure email. As with many of the other projects, LEAP's plan is to invisibly upgrade to a post-email protocol when possible in order to overcome these limitations.
+
+* Written in: Python
+* Source code: https://leap.se/source
+* Design documentation: https://leap.se/docs
+* License: mostly GPL v3, some MIT and AGPL.
+
+<a name="post-email-alternatives"></a>Post-email alternatives
+===========================================================
+
+There are several projects to create alternatives to email that are more secure yet still email-like.
+
+These projects share some common advantages:
+
+1. **Trust no one:** These projects share an approach that treats the network, and all parties on the network, as potentially hostile and not to be trusted. With this approach, a user's security can only be betrayed if their own device is compromised or the software is flawed or tampered with, but the user is protected from attacks against any service provider (because there typically is not one).
+1. **Fingerprint as identifier:** All these projects also use the fingerprint of the user's public key as the unique routing identifier for a user, allowing for decentralized and unique names. This neatly solves the problem of validating public keys, because every identifier basically *is* a key, so there is no need to establish a mapping from an identifier to a key.
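+
+The fingerprint-as-identifier idea can be illustrated with a short sketch. This is purely illustrative (projects differ in hash choice, key format, and encoding); the point is that the identifier is derived from the key itself, so no external authority is needed to map one to the other:
+
```python
import hashlib

def key_identifier(public_key_bytes):
    """Derive a user identifier directly from the public key, so the
    identifier *is* a commitment to the key and needs no external
    mapping from name to key."""
    digest = hashlib.sha256(public_key_bytes).digest()
    # Render the first 128 bits as groups of four hex characters,
    # similar in style to a PGP fingerprint.
    hexstr = digest.hex().upper()
    return " ".join(hexstr[i:i + 4] for i in range(0, 32, 4))

alice_id = key_identifier(b"alice-public-key")
print(alice_id)
# Anyone holding the key can recompute and verify the identifier.
assert alice_id == key_identifier(b"alice-public-key")
```
+
+The trade-off, discussed below, is that such identifiers are impossible for humans to remember.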
+
+Except for Pond, all these alternatives take a pure peer-to-peer approach. As such, they face particular challenges:
+
+1. **The "Natural" Network**: Many advocates of peer-to-peer networking advance the notion that decentralized networks are the most efficient networks and are found everywhere in nature (in the neurons in our brain, in how mold grows, in how insects communicate, etc). This notion is only partially true. Networks are found in nature, but these networks are not radically decentralized. Instead, natural networks tend to follow a power law distribution (aka "[scale free networks](https://en.wikipedia.org/wiki/Scale-free_network)"), where there is a high degree of partial centralization that balances "brokerage" (ability to communicate far in the network) with "closure" (ability to communicate close in the network). Thus, in practice, digital networks rely on "super hubs" that process most of the traffic. These hubs need to be maintained and hosted by someone, often at great expense (and making the network much more vulnerable to Sybil attacks).
+1. **The Internet:** Sadly, the physical internet infrastructure is actually very polycentric rather than decentralized (more akin to a tree than a spider's web). One reason for the rise of cloud computing is that resources are much cheaper near the core of the internet than near the periphery. Technical strategies that attempt to leverage the periphery will always be disadvantaged from an efficiency standpoint.
+1. **Traffic Analysis:** Most of the peer-to-peer approaches directly relay messages from the sender's device to the recipient's device, or route messages through the participants' contacts. Such an approach to message routing makes it potentially very easy for a network observer to map the network of associations, even if the message protocol otherwise offers very strong metadata protection.
+1. **Sybil Attacks:** By their nature, peer-to-peer networks do not have a method of blocking participation in the network. This makes them potentially very vulnerable to [Sybil attacks](https://en.wikipedia.org/wiki/Sybil_attack), where an attacker creates a very large number of fake participants in order to control the network or reveal the identity of other network participants.
+1. **Mobile:** Peer-to-peer networks are resource intensive, typically with every node in the network responsible for continually relaying traffic and keeping the network healthy. Unfortunately, this kind of thing is murder on the battery life of a mobile device, and requires a lot of extra network traffic.
+1. **Identifiers**: Using key fingerprints as unique identifiers has some advantages, but it also makes user identifiers impossible to remember. There is a lot of utility in the convenience of memorable username handles, as evidenced by the use of email addresses and Twitter handles.
+1. **Data Availability**: Unless also paired with a cloud component, peer-to-peer networks have much lower data availability than other approaches. For example, it takes much longer to update message deliveries from a peer network than from a server, particularly when the device has been offline for a while. Also, if a device is lost or destroyed, generally the user loses all their data.
+
+Most of these challenges have possible technological solutions that might make peer-to-peer approaches the most attractive option in the long run. For example, researchers may discover ways to make P2P networks less battery intensive. For this reason, it is important that research continue in this area. However, [in the long run we are all dead](https://en.wikiquote.org/wiki/John_Maynard_Keynes) and peer-to-peer approaches face serious hurdles before they can achieve the kind of user experience demanded today.
+
+<a name="bitmessage"></a>Bitmessage
+-----------------------------------------------------------
+
+[Bitmessage](https://bitmessage.org)
+
+Bitmessage is a peer-to-peer email-like communication protocol. It is totally decentralized and places no trust in any organization for services or validation.
+
+Advantages:
+
+* resistant to metadata analysis
+* relatively easy to use
+* works and is actively used by many people.
+
+Disadvantages:
+
+* no forward secrecy
+* unsolved scaling issues: all messages are broadcast to everyone
+* because there is no forward secrecy, it is especially problematic that anyone can grab an encrypted copy of any message in the system: if a private key is ever compromised, all of that user's past messages can be decrypted easily by anyone on the network.
+* relies on proof of work for spam prevention, which is probably not actually that preventative (spammers often steal CPU anyway).
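+
+The hashcash-style proof of work used for spam prevention can be sketched as follows. This is a simplified illustration of the general technique, not Bitmessage's actual algorithm or difficulty parameters:
+
```python
import hashlib
from itertools import count

def proof_of_work(message: bytes, difficulty_bits: int = 16) -> int:
    """Find a nonce such that sha256(message || nonce) falls below a
    target: costly to produce, cheap to verify."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(message + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    digest = hashlib.sha256(message + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# The sender burns ~2^16 hash operations; the receiver checks with one.
nonce = proof_of_work(b"hello", 16)
assert verify(b"hello", nonce, 16)
```
+
+The asymmetry (one hash to verify, thousands to produce) is the entire spam deterrent, which is why stolen CPU time undermines it.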
+
+<a name="bote-mail"></a>Bote mail
+-----------------------------------------------------------
+
+[i2pbote.i2p.us](http://i2pbote.i2p.us) (or [i2pbote.i2p](http://i2pbote.i2p) if using i2p)
+
+Bote mail (aka [IMail](https://en.wikipedia.org/wiki/IMail)) is an email-like communication protocol that uses the anonymizing network I2P for transport. Bote mail stores messages in a global distributed hash table for up to 100 days, during which time the client has an opportunity to download and store the message.
+
+**Keys**: Bote mail uses public-key based addresses. You can create as many identities as you want, each corresponding to an ECDSA or NTRU key pair.
+
+**Application**: Users interact with Bote mail through a webmail interface, although the client runs locally.
+
+* Written in: Java
+* License: GPLv3
+
+<a name="cables"></a>Cables
+-----------------------------------------------------------
+
+[github.com/mkdesu/cables](https://github.com/mkdesu/cables)
+
+* Written in: C, Bash
+* License: GPL v2
+
+<a name="p2p-dark-mail-alliance"></a>Dark Mail Alliance
+-----------------------------------------------------------
+
+The Dark Mail Alliance plans to incorporate traditional email, a federated email alternative, and a second email alternative that is pure peer-to-peer. Details are not yet forthcoming.
+
+<a name="enigmabox"></a>Enigmabox
+-----------------------------------------------------------
+
+[enigmabox.net](https://enigmabox.net)
+
+Enigmabox is a device that you install on your local network, between your computer and the internet. It acts as a secure proxy, providing VPN and communication services analogous to email and VoIP. In order to communicate with another user, that user must also have an Enigmabox.
+
+Data is routed peer-to-peer directly from one Enigmabox to another using cjdns, a system of virtual mesh networking in which IP addresses are derived from public keys. End-to-end encryption of messages is provided entirely by the cjdns transport layer.
+
+With this scheme, messages are forward secret, but not entirely asynchronous: at some point, the sender and the recipient must have their Enigmaboxes online at the same time.
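+
+The cjdns idea of deriving an address from a public key can be sketched like this. It is a simplification based on the commonly described cjdns scheme (first 16 bytes of a double SHA-512 of the public key, valid only inside fc00::/8), with a toy byte string standing in for a real key:
+
```python
import hashlib
import ipaddress

def cjdns_style_address(public_key: bytes):
    """Derive an IPv6-style address from a public key by hashing it.
    Only keys whose derived address starts with 0xfc (the fc00::/8
    range) are usable, so key generation retries until one is found."""
    digest = hashlib.sha512(hashlib.sha512(public_key).digest()).digest()
    if digest[0] != 0xFC:
        return None  # key unusable: derived address outside fc00::/8
    return ipaddress.IPv6Address(digest[:16])

# Brute-force a toy "key" whose derived address lands in fc00::/8.
# Roughly 1 in 256 candidates qualifies.
n = 0
while cjdns_style_address(n.to_bytes(4, "big")) is None:
    n += 1
print(cjdns_style_address(n.to_bytes(4, "big")))
```
+
+Because the address is a hash of the key, possessing the matching private key proves ownership of the address, which is what lets cjdns provide end-to-end encryption at the transport layer.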
+
+<a name="flowingmail"></a>FlowingMail
+-----------------------------------------------------------
+
+[flowingmail.com](http://flowingmail.com)
+
+A secure, encrypted, peer-to-peer email system.
+
+<a name="goldbug"></a>Goldbug
+-----------------------------------------------------------
+
+[goldbug.sf.net](http://goldbug.sf.net)
+
+* Written in: C++, Qt
+* License: BSD
+
+<a name="pond"></a>Pond
+-----------------------------------------------------------
+
+[pond.imperialviolet.org](https://pond.imperialviolet.org/)
+
+Pond is an email-like messaging application with several unique architectural and cryptographic features that make it stand out in the field.
+
+**Message Encryption**: Pond uses [Axolotl](https://github.com/trevp/axolotl/wiki) for asynchronous forward secret messages where the key is frequently ratcheted (akin to OTR, but more robust).
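+
+The core of the symmetric half of such a ratchet can be sketched as a KDF chain. This is a minimal illustration of how frequent key ratcheting yields forward secrecy; the real Axolotl protocol additionally mixes in Diffie-Hellman ratcheting and authenticated encryption, none of which is shown here:
+
```python
import hmac
import hashlib

def ratchet(chain_key: bytes):
    """Advance the KDF chain one step: derive a per-message key and the
    next chain key, after which the old chain key can be erased."""
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain

chain = hashlib.sha256(b"shared secret from key agreement").digest()
keys = []
for _ in range(3):
    mk, chain = ratchet(chain)  # old chain key is overwritten
    keys.append(mk)

# Each message gets a distinct key, and because the chain only runs
# forward, compromising the current chain key does not reveal keys
# that were already used and erased.
assert len(set(keys)) == 3
```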
+
+**Routing**: Pond uses a unique architecture where every user relies on a service provider for receiving messages, but sent messages are delivered directly to the recipient's server (over Tor). This allows for strong metadata protection, but does not suffer from the other problems that peer-to-peer systems typically do. In order to prevent excessive Spam under this scheme, Pond uses a clever system of group signatures to allow the server to check if a sender is authorized to deliver to a particular user without leaking any information to the server.
+
+**Keys**: Pond uses PANDA, a system for secure peer validation using short authentication strings.
+
+Pond's advantages include:
+
+* Very high security: forward secrecy, metadata protection, resistant to traffic analysis.
+* Pond hybrid federated and peer-to-peer approach is cool and holds a lot of promise.
+* Written in Go, and thus probably has many fewer security flaws than programs written in C or C++.
+* Pond is written by Adam Langley, an extremely well respected crypto-engineer.
+
+Pond's disadvantages include:
+
+* Uses unique identifiers that are not human-memorable, although this is not a necessary element of the design.
+* Requires that you set up contacts in advance before you can communicate with them (via a Short Authentication String or full public key exchange).
+* Pond is still very difficult to install and use.
+
+Pond is an exciting experiment in how you could build a very secure post-email protocol. Although Pond currently uses non-human identifiers, Pond could be easily modified to use traditional email username@domain.org identifiers (because it relies on service providers for message reception). The requirement in Pond that both parties pre-exchange keys could also be modified to allow users to set up addresses that could receive messages from anyone, albeit at the cost of likely Spam. Currently, Pond uses Tor to anonymize message routing, but the Tor network was designed for low-latency. Pond could potentially use a more secure anonymization network that was designed for higher-latency asynchronous messages.
+
+Ultimately, Pond's unique design makes it a very strong candidate for incorporation into a messaging application that could automatically upgrade from email to Pond should it detect that both parties support it.
+
+* Written in: Go
+* Source code: https://github.com/agl/pond
+* License: BSD
+* Platforms: anything you can compile Go on (for command line interface) or anything you can compile Go + Gtk (for GUI interface).
+
+<a name="related-works"></a>Related Works
+===========================================================
+
+There are many technologies that don't belong in this document because they either (a) are not trying to make encrypted email-like communication easier, (b) use some kind of weird proprietary escrow system, or (c) we just don't know enough about them yet. Here is a place to store links to such projects.
+
+* [Virtru](https://www.virtru.com) has a secure email product that relies on a centralized key escrow. For details, see [here](http://www.theregister.co.uk/Print/2014/01/24/ex_nsa_cloud_guru_email_privacy_startup) and [here](https://www.virtru.com/how-virtru-works).
+* [OpenCom](http://opencom.io) is a secure email and email-like communication project in the planning stages.
+* [Ubiquitous Encrypted Email](https://github.com/tomrittervg/uee) is a protocol draft for standards that could lead to universal adoption of encrypted email.
+* [Redecentralize](https://github.com/redecentralize/alternative-internet) has a list of decentralized networks, such as Tor.
diff --git a/pages/docs/test/release_tests b/pages/docs/test/release_tests
new file mode 100644
index 0000000..58c60fb
--- /dev/null
+++ b/pages/docs/test/release_tests
@@ -0,0 +1,15 @@
+what to test before a release
+=============================
+
+deployment tests
+----------------
+deployment to PC
+deployment to KVM
+deployment to vagrant
+deployment of everything to one node
+
+post deployment tests
+---------------------
+webapp works? create a user? login as that user?
+client works with the above user?
+check firewall ports?