author     varac <varacanero@zeromail.org>  2014-06-11 10:23:48 +0200
committer  varac <varacanero@zeromail.org>  2014-06-11 10:23:48 +0200
commit     dfa5bc15c9a866b3dad595ba9965539cbd229c93 (patch)
tree       1d5aa97e3905128e73acb02d22c12d1edfe78894
parent     364db6ffafd036ed768b126b53ad7909e066e152 (diff)
parent     4c3543f175252fae9ae48ac8f6accca207eeed8d (diff)
Merge branch 'master' of ssh://code.leap.se/leap_doc
-rw-r--r--  docs/client/dev-environment.md        140
-rw-r--r--  docs/client/en.md                       32
-rw-r--r--  docs/client/testers-howto.md            95
-rw-r--r--  docs/client/user-install.md             93
-rw-r--r--  docs/design/overview.md                 34
-rw-r--r--  docs/design/tapicero.md                 98
-rw-r--r--  docs/design/webapp.md                  282
-rw-r--r--  docs/get-involved/communication.haml     3
-rw-r--r--  docs/get-involved/project-ideas.md      16
-rw-r--r--  docs/platform/development.md            37
-rw-r--r--  docs/platform/guide.md                  56
-rw-r--r--  docs/platform/known-issues.md            7
-rw-r--r--  docs/platform/quick-start.md            11
-rw-r--r--  docs/tech/hard-problems/en.md           38
-rw-r--r--  docs/tech/hard-problems/pt.md           96
-rw-r--r--  docs/tech/secure-email/en.md           578
-rw-r--r--  menu.txt                                 3
17 files changed, 1302 insertions, 317 deletions
diff --git a/docs/client/dev-environment.md b/docs/client/dev-environment.md
index 37aefa6..eb78b3b 100644
--- a/docs/client/dev-environment.md
+++ b/docs/client/dev-environment.md
@@ -4,7 +4,7 @@
Setting up a development environment
====================================
-This document covers how to get an enviroment ready to contribute code
+This document covers how to get an environment ready to contribute code
to Bitmask.
Cloning the repo
@@ -18,27 +18,22 @@ Cloning the repo
git clone https://leap.se/git/bitmask_client
git checkout develop
-Base Dependencies
------------------
+Dependencies
+------------
Bitmask depends on these libraries:
-- python 2.6 or 2.7
-- qt4 libraries (see also
- Troubleshooting PySide install \<pysidevirtualenv\> about how to
- install inside your virtualenv)
-- openssl
-- [openvpn](http://openvpn.net/index.php/open-source/345-openvpn-project.html)
+- python 2.6 or 2.7
+- qt4 libraries
+- openssl
+- [openvpn](http://openvpn.net/index.php/open-source/345-openvpn-project.html)
-### Debian
+### Install dependencies in a Debian based distro
In debian-based systems:
- $ apt-get install openvpn python-pyside python-openssl
-
-To install the software from sources:
+ $ sudo apt-get install git python-dev python-setuptools python-virtualenv python-pip libssl-dev python-openssl libsqlite3-dev g++ openvpn pyside-tools python-pyside libffi-dev
- $ apt-get install python-pip python-dev
Working with virtualenv
-----------------------
@@ -49,28 +44,27 @@ Working with virtualenv
It is a tool to create isolated Python environments.
-The basic problem being addressed is one of dependencies and versions,
+> The basic problem being addressed is one of dependencies and versions,
and indirectly permissions. Imagine you have an application that needs
version 1 of LibFoo, but another application requires version 2. How can
you use both these applications? If you install everything into
-/usr/lib/python2.7/site-packages (or whatever your platform's standard
+`/usr/lib/python2.7/site-packages` (or whatever your platform's standard
location is), it's easy to end up in a situation where you
unintentionally upgrade an application that shouldn't be upgraded.
Read more about it in the [project documentation
-page](http://pypi.python.org/pypi/virtualenv/).
-
-> **note**
->
-> this section could be completed with useful options that can be passed
-> to the virtualenv command (e.g., to make portable paths,
-> site-packages, ...). We also should document how to use
-> virtualenvwrapper.
+page](http://www.virtualenv.org/en/latest/virtualenv.html).
### Create and activate your dev environment
- $ virtualenv </path/to/new/environment>
- $ source </path/to/new/environment>/bin/activate
+You first create a virtualenv in any directory that you like:
+
+ $ mkdir ~/Virtualenvs
+ $ virtualenv ~/Virtualenvs/bitmask
+ $ source ~/Virtualenvs/bitmask/bin/activate
+ (bitmask)$
+
+Note the change in the prompt.
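+
+If you later want to leave the virtualenv, the standard `deactivate` command
+(plain virtualenv behaviour, nothing Bitmask-specific) restores your normal
+prompt:
+
+ (bitmask)$ deactivate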
### Avoid compiling PySide inside a virtualenv
@@ -84,14 +78,14 @@ the recommended way if you are running a debian-based system*):
$ pkg/postmkvenv.sh
A second option if that does not work for you would be to install PySide
-globally and pass the `--site-packages` option when you are creating
+globally and pass the `--system-site-packages` option when you are creating
your virtualenv:
$ apt-get install python-pyside
- $ virtualenv --site-packages .
+ $ virtualenv --system-site-packages .
After that, you must export `LEAP_VENV_SKIP_PYSIDE` to skip the
-isntallation:
+installation:
$ export LEAP_VENV_SKIP_PYSIDE=1
@@ -105,40 +99,90 @@ administrative permissions:
$ pip install -r pkg/requirements.pip
+This step is not strictly needed, since the `setup.py develop` in the next
+paragraph will also fetch the needed dependencies. But you need to know about it:
+whenever you or anyone else on the development team adds a new dependency,
+you will have to repeat this command so that the new dependencies are installed
+inside your virtualenv.
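+
+If you want to check which packages and versions actually ended up inside
+your virtualenv, plain `pip freeze` (standard pip, nothing Bitmask-specific)
+will list them:
+
+ (bitmask)$ pip freeze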
+
+Install Bitmask
+---------------
+
+Normally we would install the `leap.bitmask` package as any other package
+inside the virtualenv.
+But, instead, we will be using setuptools **development mode**. The difference
+is that, instead of installing the package in a permanent location in your
+regular installed packages path, it will create a link from the local
+site-packages to your working directory. In this way, your changes will always
+be in the installation path without needing to reinstall the package you are
+working on:
+
+ (bitmask)$ python2 setup.py develop
+
+After this step, your Bitmask launcher will be located at
+`~/Virtualenvs/bitmask/bin/bitmask`, and it will be in the path as long as you
+have sourced your virtualenv.
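+
+As a quick sanity check, you can confirm that the launcher inside the
+virtualenv is the one your shell will pick up (the path shown is just an
+example and will reflect your own home directory):
+
+ (bitmask)$ which bitmask
+ /home/user/Virtualenvs/bitmask/bin/bitmask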
+
+Make resources
+--------------
+
+We also need to compile the resource files:
+
+ (bitmask)$ make resources
+
+You need to repeat this step each time you change a `.ui` file.
+
Copy script files
-----------------
-The openvpn invocation expects some files to be in place. If you have
-not installed bitmask from a debian package, you must copy these files
-manually by now:
+The openvpn invocation expects some files to be in place. If you have not
+installed `bitmask` from a debian package, you must copy these files manually
+for now:
$ sudo mkdir -p /etc/leap
$ sudo cp pkg/linux/resolv-update /etc/leap
+
Running openvpn without root privileges
---------------------------------------
-In linux, we are using `policykit` to be able to run openvpn without
-root privileges, and a policy file is needed to be installed for that to
-be possible. The setup script tries to install the policy file when
-installing bitmask system-wide, so if you have installed bitmask in your
-global site-packages at least once it should have copied this file for
-you.
+In linux, we are using `policykit` to be able to run openvpn without root
+privileges, and a policy file needs to be installed for that to be
+possible.
+The setup script tries to install the policy file when installing bitmask
+system-wide, so if you have installed bitmask in your global site-packages at
+least once it should have copied this file for you.
-If you *only* are running bitmask from inside a virtualenv, you will
-need to copy this file by hand:
+If you are *only* running bitmask from inside a virtualenv, you will need to
+copy this file by hand:
$ sudo cp pkg/linux/polkit/net.openvpn.gui.leap.policy /usr/share/polkit-1/actions/
-### Missing Authentication agent
-If you are using linux and running a desktop other than unity or gnome,
-you might get an error saying that you are not running the
-authentication agent. For systems with gnome libraries installed you can
-launch it like this:
+Running!
+--------
+
+If everything went well, you should be able to run your client by invoking
+`bitmask`. If it does not get launched, or you just want to see more verbose
+output, try the debug mode:
+
+ (bitmask)$ bitmask --debug
+
+
+Using automagic helper script
+-----------------------------
+
+You can use a helper script that will get you started with bitmask and all the related repos.
+
+1. install system dependencies
+2. download automagic script
+3. run it :)
- /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 &
+Commands so you can copy/paste:
-or if you are a kde user:
+ $ mkdir bitmask && cd bitmask
+ $ wget https://raw.githubusercontent.com/leapcode/bitmask_client/develop/pkg/scripts/bootstrap_develop.sh
+ $ chmod +x bootstrap_develop.sh
+ $ ./bootstrap_develop.sh init # use help parameter for more information
- /usr/lib/kde4/libexec/polkit-kde-authentication-agent-1 &
+This script allows you to get started, update and run the bitmask app with all its repositories.
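+
+As the comment in the last command suggests, the script also accepts a `help`
+parameter that prints the available actions (the exact output depends on the
+version of the script you downloaded):
+
+ $ ./bootstrap_develop.sh help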
diff --git a/docs/client/en.md b/docs/client/en.md
index 0436ec2..4dd6953 100644
--- a/docs/client/en.md
+++ b/docs/client/en.md
@@ -5,15 +5,11 @@
Bitmask
=======
-**Bitmask** is the multiplatform desktop client for the services offered by
-[the LEAP Platform](platform). It is written in python using
-[PySide](http://qt-project.org/wiki/PySide) and licensed under the GPL3. Currently we distribute pre-compiled bundles for Linux and OSX, with Windows bundles following soon.
+**Bitmask** is the multiplatform desktop client for the services offered by [the LEAP Platform](platform).
-You can find the complete up-to-date documentation [at the python package documentation
-site.](http://pythonhosted.org/leap.bitmask "Bitmask documentation")
+It is written in python using [PySide](http://qt-project.org/wiki/PySide) and licensed under the GPL3. Currently we distribute pre-compiled bundles for Linux and OSX, with Windows bundles following soon.
-We include below some sections of the user guide and the development documentation so
-you can get started.
+We include below some sections of the user guide and the development documentation so you can get started.
User Guide
----------
@@ -23,8 +19,7 @@ User Guide
Tester Guide
------------
-This part of the documentation details how to fetch the last development
-version and how to report bugs.
+This part of the documentation details how to fetch the last development version and how to report bugs.
* [Howto for testers](client/testers-howto)
@@ -46,12 +41,19 @@ If you want to contribute to the project, we wrote this for you.
Supported OSs
-------------
-Currently supported OSs (32 and 64 bits) are:
+We currently support:
-* Debian 7 (32bits lxde and 64 bits gnome3)
-* Ubuntu 12.04 (LTS, unity)
-* Ubuntu 13.10 (latest, unity)
+### Through the bundle
+
+* Debian 7
+* Ubuntu 12.04 (LTS)
+* Ubuntu 13.10 (latest)
* Mac OSX >= 10.8
-* Windows 7 (32 bits only)
-* Windows 8 (planned)
+* Note: It *should* work in other Debian based distros
+
+### Through the debian package
+* Ubuntu 13.04 (Raring Ringtail)
+* Ubuntu 13.10 (Saucy Salamander)
+* Debian 7.0 (Wheezy)
+* Debian 8.0 (Jessie)
diff --git a/docs/client/testers-howto.md b/docs/client/testers-howto.md
index 10a436d..8311b95 100644
--- a/docs/client/testers-howto.md
+++ b/docs/client/testers-howto.md
@@ -75,16 +75,16 @@ Attach the logfile to your bug report.
### Need human interaction?
-You can also find us in the `#leap-dev` channel on the [freenode
+You can also find us in the `#leap` channel on the [freenode
network](https://freenode.net). If you do not have a IRC client at hand,
you can [enter the channel via
-web](http://webchat.freenode.net/?nick=leaper....&channels=%23leap-dev&uio=d4).
+web](http://webchat.freenode.net/?nick=leaper....&channels=%23leap&uio=d4).
Fetching latest development code
--------------------------------
Normally, testing the latest client bundles \<standalone-bundle\> should
-be enough. We are engaged in a two-week release cycle with minor
+be enough. We are engaged in a three-week release cycle with minor
releases that are as stable as possible.
However, if you want to test that some issue has *really* been fixed
@@ -95,86 +95,16 @@ script, keep reading for a way to painlessly fetch the latest
development code.
We have put together a script to allow rapid testing in different
-platforms for the brave souls like you. It more or less does all the
-steps covered in the Setting up a Work Enviroment \<environment\>
-section, only that in a more compact way suitable (ahem) also for non
-developers.
+platforms for the brave souls like you. Check it out in the
+*Using automagic helper script* section of the
+[Hacking](client/dev-environment) page; it covers the same steps in a more
+compact way suitable (ahem) also for non-developers.
> **note**
>
> At some point in the near future, we will be using standalone bundles
> \<standalone-bundle\> with the ability to self-update.
-### Install dependencies
-
-First, install all the base dependencies plus git, virtualenv and
-development files needed to compile several extensions:
-
- apt-get install openvpn git-core python-dev python-pyside python-setuptools python-virtualenv
-
-### Bootstrap script
-
-> **note**
->
-> This will fetch the *develop* branch. If you want to test another
-> branch, just change it in the line starting with *pip install...*.
-> Alternatively, bug kali so she add an option branch to an improved
-> script.
-
-> **note**
->
-> This script could make use of the after\_install hook. Read
-> <http://pypi.python.org/pypi/virtualenv/>
-
-Download and source the following script in the parent folder where you
-want your testing build to be downloaded. For instance, to \`/tmp/:
-
-.. code-block:: bash
-
- cd /tmp
- wget https://raw.github.com/leapcode/bitmask\_client/develop/pkg/scripts/bitmask\_bootstrap.sh
- source bitmask\_bootstrap.sh
-
-Tada! If everything went well, you should be able to run bitmask by typing::
-
- bitmask --debug
-
-Noticed that your prompt changed? That was \*virtualenv\*. Keep reading...
-
-Activating the virtualenv
-\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^\^
-The above bootstrap script has fetched latest code inside a virtualenv, which is
-an isolated, \*virtual\* python local environment that avoids messing with your
-global paths. You will notice you are \*inside\* a virtualenv because you will see
-a modified prompt reminding it to you (\*bitmask-testbuild\* in this case).
-
-Thus, if you forget to \*activate your virtualenv\*, bitmask will not run from the
-local path, and it will be looking for something else in your global path. So,
-\*\*you have to remember to activate your virtualenv\*\* each time that you open a
-new shell and want to execute the code you are testing. You can do this by
-typing::
-
- \$ source bin/activate
-
-from the directory where you \*sourced\* the bootstrap script.
-
-Refer to :ref:Working with virtualenv \<virtualenv\>\` to learn more
-about virtualenv.
-
-### Copying config files
-
-If you have never installed `bitmask` globally, **you need to copy some
-files to its proper path before running it for the first time** (you
-only need to do this once). This, unless the virtualenv-based
-operations, will need root permissions. See
-copy script files \<copyscriptfiles\> and
-running openvpn without root privileges \<policykit\> sections for more
-info on this. In short:
-
- $ sudo cp pkg/linux/polkit/net.openvpn.gui.leap.policy /usr/share/polkit-1/actions/
- $ sudo mkdir -p /etc/leap
- $ sudo cp pkg/linux/resolv-update /etc/leap
-
### Local config files
If you want to start fresh without config files, just move them. In
@@ -182,17 +112,6 @@ linux:
mv ~/.config/leap ~/.config/leap.old
-### Pulling latest changes
-
-You should be able to cd into the downloaded repo and pull latest
-changes:
-
- (bitmask-testbuild)$ cd src/bitmask
- (bitmask-testbuild)$ git pull origin develop
-
-However, you are encouraged to run the whole bootstrapping process from
-time to time to help us catching install and versioning bugs too.
-
### Testing the packages
When we have a release candidate for the supported platforms, we will
diff --git a/docs/client/user-install.md b/docs/client/user-install.md
index e29d63e..2a99d66 100644
--- a/docs/client/user-install.md
+++ b/docs/client/user-install.md
@@ -4,86 +4,16 @@
Installation
============
-This part of the documentation covers the installation of Bitmask. We
-assume that you want to get it properly installed before being able to
-use it. But we can we wrong.
-
-Standalone bundle
------------------
-
-Maybe the quickest way of running Bitmask in your machine is using the
-standalone bundle. That is the recommended way to use Bitmask for the
-time being.
-
-You can get the latest bundles, and their matching signatures at [the
-downloads page](https://downloads.leap.se/client/).
-
-### Linux
-
-- [Linux 32 bits
- bundle](https://downloads.leap.se/client/linux/Bitmask-linux32-latest.tar.bz2)
- ([signature](https://downloads.leap.se/client/linux/Bitmask-linux32-latest.tar.bz2.asc))
-- [Linux 64 bits
- bundle](https://downloads.leap.se/client/linux/Bitmask-linux64-latest.tar.bz2)
- ([signature](https://downloads.leap.se/client/linux/Bitmask-linux64-latest.tar.bz2.asc))
-
-### OSX
-
-- [OSX
- bundle](https://downloads.leap.se/client/osx/Bitmask-OSX-latest.dmg)
- ([signature](https://downloads.leap.se/client/osx/Bitmask-OSX-latest.dmg.asc))
-
-### Windows
-
-- [Windows 32 bits
- bundle](https://downloads.leap.se/client/windows/Bitmask-win32-latest.zip)
- ([signature](https://downloads.leap.se/client/windows/Bitmask-win32-latest.zip.asc))
-
-### Signature verification
-
-For the signature verification you can use :
-
- $ gpg --verify Bitmask-linux64-latest.tar.bz2.asc
-
-Asuming that you downloaded the linux 64 bits bundle.
-
-Debian / Ubuntu packages
-------------------------
-
-First, you need to bootstrap your apt-key:
-
- # gpg --recv-key 0x1E34A1828E207901 0x485B12FA218E81EB
- # gpg --list-sigs 0x1E34A1828E207901
- # gpg --list-sigs 0x485B12FA218E81EB
- # gpg -a --export 0x1E34A1828E207901 | sudo apt-key add -
-
-Add the archive to your sources.list, replace <suite> below with your Debian or
-Ubuntu suite, which you can find by typing 'lsb_release -c' in a terminal.
-Currently the following are available: sid, jessie, trusty, saucy, raring, quantal
-
- # echo "deb http://deb.leap.se/debian <suite> main" >> /etc/apt/sources.list
- # apt-get update
- # apt-get install leap-keyring
-
-And then you can happily install bitmask:
-
- apt-get install bitmask
+For download links and installation instructions go to https://dl.bitmask.net/
Distribute & Pip
----------------
-> **note**
->
-> The rest of the methods described below in this page assume you are
-> familiar with python code, and you can find your way through the
-> process of dependencies install. For more insight, you can also refer
-> to the sections setting up a working environment or
-> fetching latest code for testing.
-
-![image](https://pypip.in/v/leap.bitmask/badge.png%0A%20%20%20%20%20:target:%20https://crate.io/packages/leap.bitmask)
+**Note**
-Installing Bitmask is as simple as using
-[pip](http://www.pip-installer.org/) for the already released versions :
+If you are familiar with python code and you can find your way through the
+process of installing dependencies, you can install Bitmask using [pip](http://www.pip-installer.org/)
+for the already released versions:
$ pip install leap.bitmask
@@ -99,16 +29,5 @@ Or from the github mirror :
$ git clone https://github.com/leapcode/bitmask_client.git
-Once you have grabbed a copy of the sources, and installed all the base
-dependencies, the recommended way to proceed is to install things in a virtualenv.
-
- $ virtualenv bitmask && source bitmask/bin/activate
- $ make # compile the resources
- $ python2 setup.py install
-
-Or you can install it into your global site-packages easily :
-
- $ make # compile the resources
- $ sudo python2 setup.py install
+For more information go to the [Hacking](client/dev-environment) section :)
-WARNING: installing a package in the global site-packages can be harmful because the dependency installation can overwrite some of the existing packages.
diff --git a/docs/design/overview.md b/docs/design/overview.md
index 2d257c7..e477806 100644
--- a/docs/design/overview.md
+++ b/docs/design/overview.md
@@ -113,13 +113,29 @@ Databases
All user data is stored using BigCouch, a decentralized and high-availability version of CouchDB.
-There are three "main" databases:
+The databases are used by the different services and sometimes work as communication channels between the services.
-* users -- stores basic information about each user, such as their username, a SRP password verifier, and any email aliases or forwards.
-* tickets -- database of help desk tickets.
-* client_certificates -- a pool of short-lived client x.509 certificates that are distributed to authenticated clients when their client certificate has expired.
+These are the databases we currently use:
-Additionally, each user may have multiple databases for storing client-encrypted data, such as email messages.
+* customers -- payment information for the webapp
+* identities -- alias information, written by the webapp, read by leap_mx and nickserver
+* keycache -- used by the nickserver
+* sessions -- web session persistence for the webapp
+* shared -- used by soledad
+* tickets -- help tickets issued in the webapp
+* tokens -- created by the webapp on login, used by soledad to authenticate
+* users -- user records used by the webapp including the authentication data
+* user-...id... -- client-encrypted user data accessed from the client via soledad
+
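+On a couch node you can get a quick overview of which of the databases listed
+above actually exist by asking the couch for all database names. This uses the
+standard CouchDB `_all_dbs` endpoint; host, port and credentials below are
+placeholders for whatever your deployment uses:
+
+ $ curl -s http://admin:password@localhost:5984/_all_dbs
+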
+### Database Setup
+
+The main couch databases are initially created, seeded and updated when deploying the platform.
+
+The site_couchdb module contains the database description and security settings in `manifests/create_dbs.pp`. The design docs are seeded from the files in `files/designs/:db_name`. If these files change the next puppet deploy will update the databases accordingly. Both the webapp and soledad have scripts that will dump the required design docs so they can be included here.
+
+The per-user databases are created upon user registration by [Tapicero](https://leap.se/docs/design/tapicero). Tapicero also adds security and design documents. The design documents for per-user databases are stored in the [tapicero repository](https://github.com/leapcode/tapicero) in `designs`. Tapicero can be used to update existing user databases with new security settings and design documents.
+
+### BigCouch
Like many NoSQL databases, BigCouch is inspired by [Amazon's Dynamo paper](http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf) and works by sharding each database among many servers using a circular ring hash. The number of shards might be greater than the number of servers, in which case each server would have multiple shards of the same database. Each server in the BigCouch cluster appears to contain the entire database, but actually it will just proxy the request to the actual database that has the content (if it does not have the document itself).
@@ -142,7 +158,8 @@ The LEAP Web App provides the following functions:
* Help tickets
* Client certificate renewal
* Webfinger access to user's public keys
-* Email alias and forwarding
+* Email aliases and forwarding
+* Localized and Customizable documentation
Written in: Ruby, Rails.
@@ -151,6 +168,7 @@ The Web App communicates with:
* CouchDB is used for all data storage.
* Web browsers of users accessing the user interface in order to edit their settings or fill out help tickets. Additionally, admins may delete users.
* LEAP Clients access the web app's REST API in order to register new users, authenticate existing ones, and renew client certificates.
+* Tokens are stored upon successful authentication to allow the client to authenticate against other services.
Nickserver
------------------------------
@@ -185,7 +203,7 @@ A LEAP service provider might also run servers with the following services:
* git -- private git repository hosting.
* Domain Name Server -- Authoritative name server for the provider's domain.
-* CA Daemon -- headless daemon that generates x.509 certificates and puts them in the distributed database.
+* Tapicero -- headless daemon that watches couch changes for new users and creates their databases
Client-side Components
======================================
@@ -382,4 +400,4 @@ Workflow:
* webapp retrieves client cert from a pool of pre-generated certificates.
* cert pool is filled as needed by background CA deamon.
* client connects to openvpn gateway, picked from among those listed in service definition file, authenticating with client certificate.
-* by default, when user starts computer the next time, client autoconnects. \ No newline at end of file
+* by default, when user starts computer the next time, client autoconnects.
diff --git a/docs/design/tapicero.md b/docs/design/tapicero.md
new file mode 100644
index 0000000..cb7be7c
--- /dev/null
+++ b/docs/design/tapicero.md
@@ -0,0 +1,98 @@
+@title = 'Tapicero'
+@summary = 'Creating per-user databases on the couch for soledad.'
+@toc = true
+
+Tapicero
+==============
+
+**Create databases for the leap platform users**
+
+
+Tapicero is part of the leap platform. It's deployed to the couch nodes and watches the users database as a daemon. When a user is added, it creates a new database for that user. It also removes these databases on user destruction. This way neither the webapp nor soledad need couch admin privileges.
+
+"Tapicero" is Spanish for upholsterer - the person who creates your couch.
+
+Running
+--------------------
+
+Tapicero is usually deployed with the leap platform and run as a daemon from an init script. It also serves as a tool to modify existing user databases. You can find it in `/srv/leap/tapicero` on the couch nodes or play with it on your own machine.
+
+Run in foreground:
+
+ bundle exec /bin/tapicero run
+
+Run as a daemon:
+
+ bundle exec /bin/tapicero start
+ bundle exec /bin/tapicero stop
+
+Run once, process all changes so far and then exit:
+
+ bundle exec tapicero --run-once
+
+Configuration
+---------------------
+
+Tapicero reads the following configurations files, in this order:
+
+* ``$(tapicero_source)/config/default.yaml``
+* ``/etc/leap/tapicero.yaml``
+* Any file passed to ARGV like so ``tapicero start -- /etc/tapicero.yaml``
+
+Files that come later will overwrite settings from earlier ones.
+
+### Sequence File
+
+Tapicero keeps track of the last change processed in a sequence file. The location of the sequence file is configured as `seq_file` and defaults to `/var/log/leap/tapicero.seq`
+
+After restarting Tapicero it will only process changes that happened after the change with the sequence id given in the sequence file. This behaviour can be altered by using the --rerun flag or removing the sequence file.
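+
+For example, to make Tapicero reprocess all changes from the beginning you can
+stop the daemon and remove the sequence file before starting it again (the path
+below is the default mentioned above; you may need root depending on who owns
+the file):
+
+ bundle exec /bin/tapicero stop
+ sudo rm /var/log/leap/tapicero.seq
+ bundle exec /bin/tapicero start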
+
+### Logging
+
+Tapicero logs its activity to syslog in a production environment. Logging details can be configured via `log_level`.
+Configure `log_file` if you want to log to a file instead of syslog.
+
+Flags
+---------------------
+
+--run-once:
+ process the existing users and then exit
+
+--rerun:
+ also work on users that have been processed before
+
+--overwrite-security:
+ write the security settings even if the user database already has some
+
+Combining these flags you can migrate the security settings of all existing per user databases.
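+
+For example, a one-off pass over all existing per-user databases could look
+like this (a sketch based on the flags documented above; adjust it to what you
+actually want to change):
+
+ bundle exec tapicero --run-once --rerun --overwrite-security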
+
+
+Installation
+---------------------
+
+Tapicero is normally deployed as part of the leap platform. If you want to install it outside of this context these instructions are for you.
+
+Prerequisites:
+
+ sudo apt-get install ruby ruby-dev couchdb
+ # for development, you will also need git, bundle, and rake.
+
+From source:
+
+ git clone git://leap.se/tapicero
+ cd tapicero
+ bundle
+ bundle exec bin/tapicero {run, start, status, ...}
+
+From gem:
+
+ sudo gem install tapicero
+
+License
+--------
+
+This program is written in Ruby and is distributed under the following license:
+
+> GNU Affero General Public License
+> Version 3.0 or higher
+> http://www.gnu.org/licenses/agpl-3.0.html
diff --git a/docs/design/webapp.md b/docs/design/webapp.md
new file mode 100644
index 0000000..2b078af
--- /dev/null
+++ b/docs/design/webapp.md
@@ -0,0 +1,282 @@
+@title = 'LEAP Web'
+@summary = 'The web component of the LEAP Platform, providing user management, support desk, documentation and more.'
+@toc = true
+
+Introduction
+===================
+
+"LEAP Web" is the webapp component of the LEAP Platform, providing the following services:
+
+* REST API for user registration.
+* Admin interface to manage users.
+* Client certificate distribution and renewal.
+* User support help tickets.
+* Billing
+* Customizable and Localized user documentation
+
+This web application is written in Ruby on Rails 3, using CouchDB as the backend data store.
+
+It is licensed under the GNU Affero General Public License (version 3.0 or higher). See http://www.gnu.org/licenses/agpl-3.0.html for more information.
+
+Known problems
+====================
+
+* Client certificates are generated without a CSR. The problem is that this makes the web
+ application extremely vulnerable to denial of service attacks. This was not an issue until we
+ started to allow the possibility of anonymously fetching a client certificate without
+ authenticating first.
+
+* By its very nature, the user database is vulnerable to enumeration attacks. These are
+ very hard to prevent, because our protocol is designed to allow query of a user database via
+ proxy in order to provide network perspective.
+
+Integration
+===========
+
+LEAP web is part of the leap platform. Most of the time it will be customized and deployed in that context. This section describes the integration of LEAP web in the wider framework. The Development section focuses on development of LEAP web itself.
+
+Configuration & Customization
+------------------------------
+
+The customization of the webapp for a leap provider happens via two means:
+ * configuration settings in services/webapp.json
+ * custom files in files/webapp
+
+### Configuration Settings
+
+The webapp ships with a fairly large set of default settings for all environments. They are stored in config/defaults.yml. During deploy the platform creates config/config.yml from the settings in services/webapp.json. These settings will overwrite the defaults.
+
+### Custom Files
+
+Any file placed in files/webapp in the provider's repository will be copied into config/customization in the webapp, overriding existing files of the same name.
+
+This mechanism allows customizing basically all aspects of the webapp.
+See files/webapp/README.md in the providers repository for more.
+
+### Provider Information ###
+
+The leap client fetches provider information via json files from the server. The platform prepares that information and stores it in the webapp in public/1/config/*.json. (1 being the current API version).
+
+Provider Documentation
+-------------
+
+LEAP web already comes with a bit of user documentation. It mostly resides in app/views/pages and thus can be overwritten by adding files to files/webapp/views/pages in the provider repository. You probably want to add your own Terms of Services and Privacy Policy here.
+The webapp will render haml, erb and markdown templates and pick translated content from localized files such as privacy_policy.es.md. In order to add or remove languages you have to modify the available_locales setting in the config. (See Configuration Settings above)
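+
+As a sketch (the source filename is just an example), overriding one of these
+pages is a matter of dropping a file with the right name into the provider
+repository:
+
+ cp my_privacy_policy.es.md files/webapp/views/pages/privacy_policy.es.md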
+
+Development
+===========
+
+Installation
+---------------------------
+
+Typically, this application is installed automatically as part of the LEAP Platform. To install it manually for testing or development, follow these instructions:
+
+### TL;DR ###
+
+Install git, ruby 1.9, rubygems and couchdb on your system. Then run
+
+ gem install bundler
+ git clone https://leap.se/git/leap_web
+ cd leap_web
+ git submodule update --init
+ bundle install --binstubs
+ bin/rails server
+
+### Install system requirements
+
+First of all you need to install ruby, git and couchdb. On debian based systems this would be achieved by something like
+
+ sudo apt-get install git ruby1.9.3 rubygems couchdb
+
+We install most gems we depend upon through [bundler](http://gembundler.com). So first install bundler
+
+ sudo gem install bundler
+
+On Debian Wheezy or later, there is a Debian package for bundler, so you can alternately run ``sudo apt-get install bundler``.
+
+### Download source
+
+Simply clone the git repository:
+
+ git clone git://leap.se/leap_web
+ cd leap_web
+
+### SRP Submodule
+
+We currently use a git submodule to include srp-js. This will soon be replaced by a ruby gem, but for now you need to run
+
+ git submodule update --init
+
+### Install required ruby libraries
+
+ cd leap_web
+ bundle
+
+Typically, you run ``bundle`` as a normal user and it will ask you for a sudo password when it is time to install the required gems. If you don't have sudo, run ``bundle`` as root.
+
+Configuration
+----------------------------
+
+The configuration file `config/defaults.yml` provides good defaults for most
+values. You can override these defaults by creating a file `config/config.yml`.
+
+There are a few values you should make sure to modify:
+
+ production:
+ admins: ["myusername","otherusername"]
+ domain: example.net
+ force_ssl: true
+ secret_token: "4be2f60fafaf615bd4a13b96bfccf2c2c905898dad34..."
+ client_ca_key: "/etc/ssl/ca.key"
+ client_ca_cert: "/etc/ssl/ca.crt"
+ ca_key_password: nil
+
+* `admins` is an array of usernames that are granted special admin privilege.
+* `domain` is your fully qualified domain name.
+* `force_ssl`, if set to true, will require secure cookies and turn on HSTS. Don't do this if you are using a self-signed server certificate.
+* `secret_token`, used for cookie security, you can create one with `rake secret`. Should be at least 30 characters.
+* `client_ca_key`, the private key of the CA used to generate client certificates.
+* `client_ca_cert`, the public certificate the CA used to generate client certificates.
+* `ca_key_password`, used to unlock the client_ca_key, if needed.
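+
+For example, to generate a value for `secret_token` you can run the rake task
+mentioned above from your leap_web checkout:
+
+ cd leap_web
+ bundle exec rake secret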
+
+### Provider Settings
+
+The leap client fetches provider information via json files from the server.
+If you want to use that functionality please add your provider files to the public/1/config directory (1 being the current API version).
+
+Running
+-----------------------------
+
+ cd leap_web
+ bin/rails server
+
+You will find Leap Web running on `localhost:3000`
+
+Testing
+--------------------------------
+
+To run all tests
+
+ rake test
+
+To run an individual test:
+
+ rake test TEST=certs/test/unit/client_certificate_test.rb
+ or
+ ruby -Itest certs/test/unit/client_certificate_test.rb
+
+Engines
+---------------------
+
+Leap Web includes some Engines. All things in `app` will overwrite the engine behaviour. You can clone the leap web repository and add your customizations to the `app` directory. Including leap_web as a gem is currently not supported. It should not require too much work though and we would be happy to include the changes required.
+
+If you have no use for one of the engines you can remove it from the Gemfile. Engines should really be plugins - no other engines should depend upon them. If you need functionality in different engines it should probably go into the toplevel.
+
+# Deployment #
+
+We strongly recommend using the LEAP platform for deploy. Most of the things documented here are automated as part of the platform. If you want to research how the platform deploys or work on your own mechanism this section is for you.
+
+These instructions are targeting a Debian GNU/Linux system. You might need to change the commands to match your own needs.
+
+## Server Preparation ##
+
+### Dependencies ###
+
+The following packages need to be installed:
+
+* git
+* ruby1.9
+* rubygems1.9
+* couchdb (if you want to use a local couch)
+
+### Setup Capistrano ###
+
+We use puppet to deploy. But we also ship an untested config/deploy.rb.example. Edit it to match your needs if you want to use capistrano.
+
+run `cap deploy:setup` to create the directory structure.
+
+run `cap deploy` to deploy to the server.
+
+## Customized Files ##
+
+Please make sure your deploy includes the following files:
+
+* public/1/config/*.json (see Provider Settings section)
+* config/couchdb.yml
+
+## Couch Security ##
+
+We recommend against using an admin user for running the webapp. To avoid this, couch design documents need to be created ahead of time and the auto-update mechanism needs to be disabled.
+Take a look at test/setup_couch.sh for an example of securing the couch.
+
+## Design Documents ##
+
+After securing the couch, design documents need to be deployed with admin permissions. There are two ways of doing this:
+ * rake couchrest:migrate_with_proxies
+ * dump the documents as files with `rake couchrest:dump` and deploy them
+ to the couch by hand or with the platform.
+
+### CouchRest::Migrate ###
+
+The before_script block in .travis.yml illustrates how to do this:
+
+ mv test/config/couchdb.yml.admin config/couchdb.yml # use admin privileges
+ bundle exec rake couchrest:migrate_with_proxies # run the migrations
+ bundle exec rake couchrest:migrate_with_proxies # looks like this needs to run twice
+ mv test/config/couchdb.yml.user config/couchdb.yml # drop admin privileges
+
+### Deploy design docs from CouchRest::Dump ###
+
+First of all we get the design docs as files:
+
+ # put design docs in /tmp/design
+ bundle exec rake couchrest:dump
+
+Then we add them to files/design in the site_couchdb module in leap_platform so they get deployed with the couch. You could also upload them using curl or something similar.
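+
+If you go the manual route, uploading one of the dumped design documents with
+curl could look roughly like this (database name, file path, credentials and
+host are placeholders; adapt them to your setup):
+
+ curl -X PUT http://admin:password@localhost:5984/users/_design/User \
+      --data-binary @/tmp/design/users/User.json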
+
+# Troubleshooting #
+
+Here are some less common issues you might run into when installing Leap Web.
+
+## Cannot find Bundler ##
+
+### Error Messages ###
+
+`bundle: command not found`
+
+### Solution ###
+
+Make sure bundler is installed. `gem list bundler` should list `bundler`.
+You also need to be able to access the `bundler` executable in your PATH.
+
+## Outdated version of rubygems ##
+
+### Error Messages ###
+
+`bundler requires rubygems >= 1.3.6`
+
+### Solution ###
+
+`gem update --system` will install the latest rubygems
+
+## Missing development tools ##
+
+Some required gems will compile C extensions. They need a bunch of utils for this.
+
+### Error Messages ###
+
+`make: Command not found`
+
+### Solution ###
+
+Install the required tools. For linux the `build-essential` package provides most of them. For Mac OS you probably want the XCode Commandline tools.
+
+## Missing libraries and headers ##
+
+Some gem dependencies might not compile because they lack the needed C libraries.
+
+### Solution ###
+
+Install the libraries in question including their development files.
+
+
diff --git a/docs/get-involved/communication.haml b/docs/get-involved/communication.haml
index bd9d705..20508a9 100644
--- a/docs/get-involved/communication.haml
+++ b/docs/get-involved/communication.haml
@@ -9,7 +9,7 @@
.well
%p
- %code #leap on irc.freednode.net
+ %code #leap on irc.freenode.net
%br/
Topics related to coding, bugs, and development issues. Also general discussion and anything related to LEAP.
@@ -21,6 +21,7 @@
%ul
%li To subscribe, send mail to <code>discuss-subscribe&#x0040;leap&#x002e;se</code>
%li To unsubscribe, send mail to <code>discuss-unsubscribe&#x0040;leap&#x002e;se</code>
+ %li List archives are <a href='https://lists.riseup.net/www/arc/leap-discuss'>also available</a>
%h3 Email
diff --git a/docs/get-involved/project-ideas.md b/docs/get-involved/project-ideas.md
index 48727af..f86c43a 100644
--- a/docs/get-involved/project-ideas.md
+++ b/docs/get-involved/project-ideas.md
@@ -150,19 +150,27 @@ We have support for Windows 32bits, 64bits seems to be able to use that, except
Android
----------------------------------------------
+### Dynamic OpenVPN configuration
+
+Currently the Android app chooses which VPN gateway to connect to based on the least difference of timezones and establishes a configuration for connecting to it by a biased selection of options (port, proto, etc) from the set declared by the provider through the API. For cases where a gateway is unavailable or a network is restricting traffic that our configuration matches (e.g. UDP out to port 443), being able to attempt different configurations or gateways would help find a configuration that works.
+
+* Contact: meanderingcode, parmegv, or richy
+* Difficulty: Easy to medium
+* Skills: Android programming
+
### Ensure OpenVPN fails closed
For enhanced security, we would like the VPN on android to have the option of blocking all network traffic if the VPN dies or when it has not yet established a connection. Network traffic would be restored when the user manually turns off the VPN or the VPN connection is restored. Currently, there is no direct way to do this with Android, but we have a few ideas for tackling this problem.
-* Contact: meandering-code, parmegv, or richy
-* Difficulty: not sure
-* Skills: Android programming
+* Contact: meanderingcode, parmegv, or richy
+* Difficulty: Hard (Medium but meticulous, or harder than we think)
+* Skills: Android programming, applicable linux skill like iptables
### Port libraries to Android
Before we can achieve full functionality on Android, we have a lot of Python libraries that need to either be ported to run directly on Android or to rewrite them natively in Java or JNI. We have been pursing both strategies, for different libraries, but we have a lot more work to do.
-* Contact: richy, meandering-code, parmegv
+* Contact: richy, meanderingcode, parmegv
* Difficulty: varies
* Skills: Android programming, compiling, Python programming.
diff --git a/docs/platform/development.md b/docs/platform/development.md
index f23ac71..cc7fd32 100644
--- a/docs/platform/development.md
+++ b/docs/platform/development.md
@@ -11,6 +11,7 @@ Requirements
* Be a real machine with virtualization support in the CPU (VT-x or AMD-V). In other words, not a virtual machine.
* Have at least 4gb of RAM.
* Have a fast internet connection (because you will be downloading a lot of big files, like virtual machine images).
+* You should do everything described below as an unprivileged user, and only run those commands as root that are noted with *sudo* in front of them. Other than those commands, there is no need for privileged access to your machine, and in fact things may not work correctly.
Install prerequisites
--------------------------------
@@ -29,16 +30,33 @@ Install core prerequisites:
sudo apt-get install git ruby ruby-dev rsync openssh-client openssl rake make
-Install Vagrant in order to be able to test with local virtual machines (typically optional, but required for this tutorial):
+Install Vagrant in order to be able to test with local virtual machines (typically optional, but required for this tutorial). You probably want a more recent version directly from [vagrant](https://www.vagrantup.com/downloads.html).
sudo apt-get install vagrant virtualbox
-<!--
-*Mac OS*
-1. Install rubygems from https://rubygems.org/pages/download (unless the `gem` command is already installed).
-2. Install Vagrant.dmg from http://downloads.vagrantup.com/
--->
+*Mac OS X 10.9 (Mavericks)*
+
+Install Homebrew package manager from http://brew.sh/ and enable the [System Duplicates Repository](https://github.com/Homebrew/homebrew/wiki/Interesting-Taps-&-Branches) (needed to update old software versions delivered by Apple) with
+
+ brew tap homebrew/dupes
+
+Update OpenSSH to support ECDSA keys. Follow [this guide](http://www.dctrwatson.com/2013/07/how-to-update-openssh-on-mac-os-x/) to let your system use the Homebrew binary.
+
+ brew install openssh --with-brewed-openssl --with-keychain-support
+
+The certtool provided by Apple is really old; install the one provided by GnuTLS and shadow the system's default.
+
+ sudo brew install gnutls
+ ln -sf /usr/local/bin/gnutls-certtool /usr/local/bin/certtool
+
+Install the Vagrant and VirtualBox packages for OS X from their respective Download pages.
+
+* http://www.vagrantup.com/downloads.html
+* https://www.virtualbox.org/wiki/Downloads
+
Adding development nodes to your provider
@@ -71,7 +89,7 @@ In order to test the node "web1" we need to start it. Starting a node for the fi
NOTE: Many people have difficulties getting Vagrant working. If the following commands do not work, please see the Vagrant section below to troubleshoot your Vagrant install before proceeding.
- $ leap local start web
+ $ leap local start web1
= created test/
= created test/Vagrantfile
= installing vagrant plugin 'sahara'
@@ -218,6 +236,11 @@ Ubuntu Raring 13.04
* `virtualbox 4.2.10-dfsg-0ubuntu2.1` from Ubuntu raring and `vagrant 1.2.2` from vagrantup.com
+Mac OS X 10.9
+-------------
+
+* `VirtualBox 4.3.10` from virtualbox.org and `vagrant 1.5.4` from vagrantup.com
+
Using Vagrant with libvirt/kvm
==============================
diff --git a/docs/platform/guide.md b/docs/platform/guide.md
index 99147a8..4b3086e 100644
--- a/docs/platform/guide.md
+++ b/docs/platform/guide.md
@@ -16,15 +16,15 @@ When adding a new node to your provider, you should ask yourself four questions:
Brief overview of the services:
* **webapp**: The web application. Runs both webapp control panel for users and admins as well as the REST API that the client uses. Needs to communicate heavily with `couchdb` nodes. You need at least one, good to have two for redundancy. The webapp does not get a lot of traffic, so you will not need many.
-* **couchdb**: The database for users and user data. You can get away with just one, but for proper redundancy you should have at least three. Communicates heavily with `webapp` and `mx` nodes.
-* **soledad**: Handles the data syncing with clients. Typically combined with `couchdb` service, since it communicates heavily with couchdb. (not currently in stable release)
-* **mx**: Incoming and outgoing MX servers. Communicates with the public internet, clients, and `couchdb` nodes. (not currently in stable release)
+* **couchdb**: The database for users and user data. You can get away with just one, but for proper redundancy you should have at least three. Communicates heavily with `webapp`, `mx`, and `soledad` nodes.
+* **soledad**: Handles the data syncing with clients. Typically combined with `couchdb` service, since it communicates heavily with couchdb.
+* **mx**: Incoming and outgoing MX servers. Communicates with the public internet, clients, and `couchdb` nodes.
* **openvpn**: OpenVPN gateway for clients. You need at least one, but want as many as needed to support the bandwidth your users are doing. The `openvpn` nodes are autonomous and don't need to communicate with any other nodes. Often combined with `tor` service.
* **monitor**: Internal service to monitor all the other nodes. Currently, you can have zero or one `monitor` nodes.
* **tor**: Sets up a tor exit node, unconnected to any other service.
* **dns**: Not yet implemented.
-webapp
+Webapp
-----------------------------------
The webapp node is responsible for both the user face web application and the API that the client interacts with.
@@ -45,6 +45,54 @@ And then redeploy to all webapp nodes:
By putting this in `services/webapp.json`, you will ensure that all webapp nodes inherit the value for `webapp.admins`.
+Services
+================================
+
+What nodes do you need for a provider that offers particular services?
+
+<table class="table table-striped">
+<tr>
+<th>Node Type</th>
+<th>VPN Service</th>
+<th>Email Service</th>
+</tr>
+<tr>
+<td>webapp</td>
+<td>required</td>
+<td>required</td>
+</tr>
+<tr>
+<td>couchdb</td>
+<td>required</td>
+<td>required</td>
+</tr>
+<tr>
+<td>soledad</td>
+<td>not used</td>
+<td>required</td>
+</tr>
+<tr>
+<td>mx</td>
+<td>not used</td>
+<td>required</td>
+</tr>
+<tr>
+<td>openvpn</td>
+<td>required</td>
+<td>not used</td>
+</tr>
+<tr>
+<td>monitor</td>
+<td>optional</td>
+<td>optional</td>
+</tr>
+<tr>
+<td>tor</td>
+<td>optional</td>
+<td>optional</td>
+</tr>
+<table>
+
Locations
================================
diff --git a/docs/platform/known-issues.md b/docs/platform/known-issues.md
index 46a77de..5bf41a6 100644
--- a/docs/platform/known-issues.md
+++ b/docs/platform/known-issues.md
@@ -5,6 +5,13 @@
Here you can find documentation about known issues and potential work-arounds in the current Leap Platform release.
+0.5.1
+=====
+CouchDB Sync
+------------
+You can't deploy new couchdb nodes after one or more have been deployed. Make *sure* that you configure and deploy all your couchdb nodes when starting the provider. The problem is that we do not have a clean way of adding couch nodes after the initial creation of the databases, so any nodes added later end up with improperly synchronized data. See Bug [#5601](https://leap.se/code/issues/5601) for more information.
+
+
0.5.0rc1
========
diff --git a/docs/platform/quick-start.md b/docs/platform/quick-start.md
index 3171674..9bebe3e 100644
--- a/docs/platform/quick-start.md
+++ b/docs/platform/quick-start.md
@@ -10,7 +10,7 @@ If you are curious how this will look like without trying it out yourself, you c
Our goal
------------------
-We are going to create a minimal LEAP provider offering OpenVPN service. This basic setup can be expanded by adding more OpenVPN nodes to increase capacity, or more webapp and couchdb nodes to increase availability (performance wise, a single couchdb and a single webapp are more than enough for most usage, since they are only lightly used, but you might want redundancy).
+We are going to create a minimal LEAP provider offering OpenVPN service. This basic setup can be expanded by adding more OpenVPN nodes to increase capacity, or more webapp and couchdb nodes to increase availability (performance wise, a single couchdb and a single webapp are more than enough for most usage, since they are only lightly used, but you might want redundancy). Please note: currently it is not possible to safely add additional couchdb nodes at a later point. They should all be added in the beginning, so please consider carefully if you would like more before proceeding.
Our goal is something like this:
@@ -27,13 +27,14 @@ Requirements
In order to complete this Quick Start, you will need a few things:
-* You will need three real or paravirtualized virtual machines (KVM, Xen, Openstack, Amazon, but not Vagrant - sorry) that have a basic Debian Stable installed. If you allocate 10G to each node, that should be plenty.
-* You should be able to SSH into them remotely, and know their IP addresses and their SSH host keys
+* You will need three real or paravirtualized virtual machines (KVM, Xen, Openstack, Amazon, but not Vagrant - sorry) that have a basic Debian Stable installed. If you allocate 20G of disk space to each node for the system, after this process is completed, you will have used less than 10% of that disk space. If you allocate 2 CPUs and 8G of memory to each node, that should be more than enough to begin with.
+* You should be able to SSH into them remotely, and know their root password, IP addresses and their SSH host keys
* You will need four different IPs, one for each node, and a second one for the VPN gateway
* The ability to create/modify DNS entries for your domain is preferable, but not needed. If you don't have access to DNS, you can workaround this by modifying your local resolver, i.e. editing `/etc/hosts`.
* You need to be aware that this process will make changes to your systems, so please be sure that these machines are a basic install with nothing configured or running for other purposes
* Your machines will need to be connected to the internet, and not behind a restrictive firewall.
* You should work locally on your laptop/workstation (one that you trust and that is ideally full-disk encrypted) while going through this guide. This is important because the provider configurations you are creating contain sensitive data that should not reside on a remote machine. The leap cli utility will login to your servers and configure the services.
+* You should do everything described below as an unprivileged user, and only run those commands as root that are noted with *sudo* in front of them. Other than those commands, there is no need for privileged access to your machine, and in fact things may not work correctly.
All the commands in this tutorial are run on your sysadmin machine. In order to complete the tutorial, the sysadmin will do the following:
@@ -272,8 +273,8 @@ If you prefer, you can initalize each node, one at a time:
Deploy the LEAP platform to the nodes
--------------------
-Now you should deploy the platform recipes to the nodes. Deployment can take a while to run, especially on the first run, as it needs to update the packages on the new machine.
-Note that currently, nodes must be deployed in a certain order. The underlying couch database node(s) must be deployed first, and then all other nodes.
+Now you should deploy the platform recipes to the nodes. [Deployment can take a while to run](http://xkcd.com/303/), especially on the first run, as it needs to update the packages on the new machine.
+*Important notes:* currently nodes must be deployed in a certain order. The underlying couch database node(s) must be deployed first, and then all other nodes. Also you need to configure and deploy all of the couchdb nodes that you plan to use at this time, as currently you cannot add more of them later (see the [known issues](https://leap.se/es/docs/platform/known-issues#CouchDB.Sync)).
$ leap deploy couch1
diff --git a/docs/tech/hard-problems/en.md b/docs/tech/hard-problems/en.md
index 635ae16..c419006 100644
--- a/docs/tech/hard-problems/en.md
+++ b/docs/tech/hard-problems/en.md
@@ -6,12 +6,12 @@
If you take a survey of interesting initiatives to create more secure communication, a pattern starts to emerge: it seems that any serious attempt to build a system for secure message communication eventually comes up against the following list of seven hard problems.
-1. **Authenticity problem**: Public key validation is very difficult for users to manage, but without it you cannot have confidentiality.
+1. **Public key problem**: Public key validation is very difficult for users to manage, but without it you cannot have confidentiality.
2. **Meta-data problem**: Existing protocols are vulnerable to meta-data analysis, even though meta-data is often much more sensitive than content.
3. **Asynchronous problem**: For encrypted communication, you must currently choose between forward secrecy or the ability to communicate asynchronously.
4. **Group problem**: In practice, people work in groups, but public key cryptography doesn't.
5. **Resource problem**: There are no open protocols to allow users to securely share a resource.
-6. **Availability problem**: People want to smoothly switch devices, and restore their data if they lose a device, but this very difficult to do securely.
+6. **Availability problem**: People want to smoothly switch devices, and restore their data if they lose a device, but this is very difficult to do securely.
7. **Update problem**: Almost universally, software updates are done in ways that invite attacks and device compromises.
These problems appear to be present regardless of which architectural approach you take (centralized authority, distributed peer-to-peer, or federated servers).
@@ -22,15 +22,36 @@ It is possible to safely ignore many of these problems if you don't particularly
In our work, LEAP has tried to directly face down these seven problems. In some cases, we have come up with solid solutions. In other cases, we are moving forward with temporary stop-gap measures and investigating long term solutions. In two cases, we have no current plan for addressing the problems.
-### Authenticity problem
+### Public key problem
The problem:
-> Public key validation is very difficult for users to manage, but without it you cannot have confidentiality.
+> Public keys are very difficult for users to manage, but without them you cannot have confidentiality.
-If proper key validation is a precondition for secure communication, but it is too difficult for most users, what hope do we have? We have developed a unique federated system called [Nicknym](/nicknym) that automatically discovers and validates public keys allowing the user to take advantage of public key cryptography without knowing anything about keys or signatures.
+If proper key management is a precondition for secure communication, but it is too difficult for most users, what hope do we have?
-The standard protocol that exists today to solve this problem is [DANE](https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities). DANE might be the better option in the long run, but currently DANE is complex to set up, complex for clients to consume, leaks association information to a network observer, and relies on trusting the DNS root zone and TLD zones.
+The problem of public keys breaks down into four discrete issues:
+
+* **Key discovery** is the process of obtaining the public key for a particular user identifier. Currently, there is no commonly accepted standard for mapping an identifier to a public key. For OpenPGP, many people use keyservers for this (although the keyserver infrastructure was not designed to be used in this way). A related problem is how a client can discover public key information for all the contacts in their address book or phone book without revealing this information to a third party.
+* **Key validation** is the process of ensuring that a public key really does map to a particular user identifier. This is also called the "binding problem" in computer science. Traditional methods of key validation have recently become discredited.
+* **Key availability** is the assurance that the user will have access, whenever needed, to their keys and the keys of other users. Almost every attempt to solve the key validation problem turns into a key availability problem, because once you have validated a public key, you need to make sure that this validation is available to the user on all the possible devices they might want to send or receive messages on.
+* **Key revocation** is the process of ensuring that people do not use an old public key that has been superseded by a new one.
+
+Of these problems, key validation is the most difficult and most central to proper key management. The two traditional methods of key validation are either the X.509 Certificate Authority (CA) system or the decentralized "Web of Trust" (WoT). Recently, these schemes have come under intense criticism. Repeated security lapses at many of the Certificate Authorities have revealed serious flaws in the CA system. On the other hand, in an age where we better understand the power of social network analysis and the sensitivity of the social graph, the exposure of metadata by a "Web of Trust" is no longer acceptable from a security standpoint.
+
+An alternative method of key validation is called TOFU for Trust On First Use. With TOFU, a public key is assumed to be the right key the first time it is used. TOFU can work well for long term associations and for people who are not being targeted for attack, but its security relies on the security of the discovery transport and the application's ability to retain a memory of discovered keys. TOFU can break down in many real-world situations where a user might need to generate new keys or securely communicate with a new contact.
+
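+As a rough illustration of the TOFU approach (not the actual implementation of any project mentioned here), a client can pin the fingerprint of the first key it sees for an address and warn when that key later changes. A minimal sketch, with hypothetical storage and return values:
+
+    import hashlib
+
+    class TofuKeyStore:
+        """Remember the first key seen for each address; flag later changes."""
+
+        def __init__(self):
+            self.pinned = {}  # address -> fingerprint of the first key seen
+
+        @staticmethod
+        def fingerprint(public_key_bytes):
+            return hashlib.sha256(public_key_bytes).hexdigest()
+
+        def check(self, address, public_key_bytes):
+            fpr = self.fingerprint(public_key_bytes)
+            if address not in self.pinned:
+                self.pinned[address] = fpr   # first use: trust and pin
+                return "pinned"
+            if self.pinned[address] == fpr:
+                return "match"               # same key as before
+            return "conflict"                # key changed: warn the user
+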
+Other strategies for addressing parts of the key management problem include:
+
+1. Inline Keys: Many projects plan to facilitate discovery by simply including the user's public key in every outgoing message (as an attachment, in a footer, or in a header).
+1. DNS: Key distributed via DNSSEC, where a service provider adds a DNS entry for each user containing the user's public key or fingerprint.
+1. Append-only log: There is a proposal to modify Certificate Transparency to handle user accounts, where audits are performed against append-only logs.
+1. Network perspective: Validation by key endorsement (third party signatures), with audits performed via network perspective.
+1. Introductions: Discovery and validation of keys through acquaintance introduction.
+1. Mobile: Although too lengthy to manually transcribe, an app on a mobile device can be used to easily exchange keys in person (for example, via a QR code or bluetooth connection).
+1. Biometric feedback: In the case of voice communication, you can use recognition of the other person's voice as a means to validate the key (when used in combination with a Short Authentication String). This is how ZRTP works.
+
+For LEAP, we have developed a unique federated system called [Nicknym](/nicknym) that automatically discovers and validates public keys, allowing the user to take advantage of public key cryptography without knowing anything about keys or signatures. Nicknym uses a combination of TOFU, provider endorsement, and network perspective.
### Meta-data problem
@@ -51,6 +72,9 @@ In the long term, we plan to adopt one of several different schemes for securely
* Onion-routing-headers: A message from user A to user B is encoded so that the "to" routing information only contains the name of B's server. When B's server receives the message, it unwraps (decrypts) a supplementary header that contains the actual user "B". Like aliases, this provides no benefit if both users are on the same server. As an improvement, the message could be routed through intermediary servers.
* Third-party-dropbox: To exchange messages, user A and user B negotiate a unique "dropbox" URL for depositing messages, potentially using a third party. To send a message, user A would post the message to the "dropbox". To receive a message, user B would regularly poll this URL to see if there are new messages.
* Mixmaster-with-signatures: Messages are bounced through a mixmaster-like set of anonymization relays and then finally delivered to the recipient's server. The user's client only displays the message if it is encrypted, has a valid signature, and the user has previously added the sender to an 'allow list' (perhaps automatically generated from the list of validated public keys).
+* Tor: One scheme employed by Pond is to simply allow for direct delivery over Tor from the sender's device to the recipient's server. This is fairly simple, and places all the work on the existing Tor network.
+
+In all of these cases, meta-data protected routing can make abuse prevention more difficult. For this reason, it probably makes sense to allow one of these options only once both parties have already exchanged key material, in order to prevent the user from being flooded with anonymous spam.
For a great discussion comparing mix networks and onion routing, see [Tom Ritter's blog post on the topic](https://ritter.vg/blog-mix_and_onion_networks.html).
@@ -128,6 +152,6 @@ The sad state of update security is especially troublesome because update attack
To address the update problem, LEAP is adopting a unique update system called Thandy from the Tor project. Thandy is complex to manage, but is very effective at preventing known update attacks.
-Thandy, and the related [TUF](https://updateframework.com), are designed to address the many [security vulnerabilities in existing software update systems](https://updateframework.com/projects/project/wiki/Docs/Security). In one example, other update systems suffer from an inability of the client to confirm that they have the most up-to-date copy, thus opening a huge vulnerability where the attacker simply waits for a security upgrade, prevents the upgrade, and launches an attack exploiting the vulnerability that should have just been fixed. Thandy/TUF provides a unique mechanism for distributing and verifying updates so that no client device will install the wrong update or miss an update without knowing it.
+Thandy, and the related [TUF](https://updateframework.com), are designed to address the many [security vulnerabilities in existing software update systems](https://github.com/theupdateframework/tuf/blob/develop/SECURITY.md). In one example, other update systems suffer from an inability of the client to confirm that they have the most up-to-date copy, thus opening a huge vulnerability where the attacker simply waits for a security upgrade, prevents the upgrade, and launches an attack exploiting the vulnerability that should have just been fixed. Thandy/TUF provides a unique mechanism for distributing and verifying updates so that no client device will install the wrong update or miss an update without knowing it.
Related to the update problem is the backdoor problem: how do you know that an update does not have a backdoor added by the software developers themselves? Probably the best approach is that taken by [Gitian](https://gitian.org/), which provides a "deterministic build process to allow multiple builders to create identical binaries". We hope to adopt Gitian in the future.
diff --git a/docs/tech/hard-problems/pt.md b/docs/tech/hard-problems/pt.md
index 44f5e95..50c0541 100644
--- a/docs/tech/hard-problems/pt.md
+++ b/docs/tech/hard-problems/pt.md
@@ -4,69 +4,71 @@
## Os sete grandes
-Se você pesquisar as iniciativas para a criação de formas de comunicação mais seguras, um padrão começa a surgir: aparentemente toda tentativa séria de construir um sistema para transmissão de mensagens seguras eventualmente se coloca contra a seguinte lista dos sete problemas difíceis:
+Se você pesquisar iniciativas interessantes para a criação de formas mais seguras de comunicação, verá que surge um padrão: aparentemente toda tentativa séria de construir um sistema para transmissão de mensagens seguras eventualmente se depara com a seguinte lista de sete problemas difíceis:
-1. **Problema da autenticidade**: a validação de chaves públicas é muito difícil para os/as usuários gerenciarem, mas sem isso você não pode ter confidencialidade.
-2. **Problma dos metadados**: os protocolos existentes são vulneráveis à análise de metadados, mesmo considerando que muitas vezes os metadados são muito mais sensíveis do que o conteúdo.
-3. **Problema da assincronicidade**: para comunicação criptografada, atualmente você precisa escolher entre sigilo futuro (forward secrecy) e a habilidade de se comunicar de forma assíncrona.
+1. **Problema da autenticidade**: a validação de chaves públicas é muito difícil para ser gerenciada por usuários, mas sem isso não é possível obter confidencialidade.
+2. **Problema dos metadados**: os protocolos existentes são vulneráveis à análise de metadados, mesmo que os metadados muitas vezes sejam mais sensíveis do que o conteúdo da comunicação.
+3. **Problema da assincronicidade**: para estabelecer comunicação criptografada, atualmente é necessário escolher entre sigilo futuro (forward secrecy) e a habilidade de se comunicar de forma assíncrona.
4. **Problema do grupo**: na prática, pessoas trabalham em grupos, mas a criptografia de chave pública não.
-5. **Problema dos recursos**: não existem protocolos abertos que permitam a usuários/as compartilharem recursos (como arquivos) de forma segura.
-6. **Problema da disponibilidade**: pessoas querem alternar suavemente entre dispositivos e restaurar seus dados se elas perderem um dispositivo, mas isso é bem difícil de se fazer com segurança.
-7. **Problema da atualização**: quase que univesalmente, atualizações de software são feitas de maneiras convidativas para ataques e comprometimento de dispositivos.
+5. **Problema dos recursos**: não existem protocolos abertos que permitam aos usuários compartilharem um recurso de forma segura.
+6. **Problema da disponibilidade**: as pessoas querem alternar suavemente entre dispositivos e restaurar seus dados se perderem um dispositivo, mas isso é bem difícil de se fazer com segurança.
+7. **Problema da atualização**: quase que universalmente, atualizações de software são feitas de maneiras que são convidativas a ataques e comprometimento de dispositivos.
-Tais problemas parecem estar presentes independentemente da abordagem arquitetônica escolhida (autoridade certificadora, peer-to-peer distribuído ou servidores federados).
+Tais problemas parecem estar presentes independentemente da abordagem arquitetônica escolhida (autoridade centralizada, peer-to-peer distribuído ou servidores federados).
-É possível ignorar muitos desses problemas se você não se importar particularmente com a usabilidade ou com o conjunto de funcionalidades com as quais os usuários/as se acostumaram nos métodos contemporâneos de comunicação online. Mas se você se importa com a usabilidade e recursos, então você terá que encontrar soluções para esses problemas.
+É possível ignorar muitos desses problemas se você não se importar especificamente com a usabilidade ou com o conjunto de funcionalidades com as quais os/as usuários/as se acostumaram nos métodos contemporâneos de comunicação online. Mas se você se importa com a usabilidade e recursos, então você terá que encontrar soluções para esses problemas.
## Nossas soluções
-Em nosso trabalho, o LEAP tentou enfrentar diretamente esses sete problemas. Em alguns casos, chegamos a soluções sólidas. Noutros, estamos avançando com medidas paliativas temporárias e investigando soluções de longo prazo. Em dois casos, não temos nenhum plano atual para lidar com os problemas.
+Em nosso trabalho, o LEAP tentou enfrentar diretamente esses sete problemas. Em alguns casos, chegamos a soluções sólidas. Noutros, estamos avançando com medidas paliativas temporárias e investigando soluções de longo prazo. Em dois casos não temos nenhum plano atual para lidar com os problemas.
### O problema da autenticidade
O problema:
-> A validação de chaves públicas é muito difícil para que os/as usuários gerenciem, mas sem ela você não pode ter sigilo .
+> A validação de chaves públicas é muito difícil para ser gerenciada por usuários, mas sem isso não é possível obter confidencialidade.
-Se a validação de chave adequada é um pressuposto para uma comunicação segura, mas é muito difícil para a maioria dos usuários/as, que esperança temos? Desenvolvemos um sistema federado único chamado [Nicknym](/nicknym)que descobre e valida automaticamente as chaves públicas, permitindo ao usuário tirar partido de criptografia de chave pública sem saber nada sobre chaves ou assinaturas.
+Se a validação de chaves adequada é um pressuposto para uma comunicação segura, mas é muito difícil para a maioria dos usuários/as, que esperança temos? Desenvolvemos um sistema federado único chamado [Nicknym](/nicknym) que descobre e valida automaticamente as chaves públicas, permitindo ao usuário tirar partido de criptografia de chave pública sem saber nada sobre chaves ou assinaturas.
+
+O protocolo padrão que existe hoje para solucionar este problema chama-se [DANE](https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities). O DANE pode ser a melhor opção no longo prazo, mas atualmente é difícil de ser configurado, difícil de ser utilizado por clientes, vaza informações sobre associação para um observador da rede, e depende da confiança na zona raiz do DNS e nas zonas TLD.
### Problema dos metadados
O problema:
-> Os protocolos existentes são vulneráveis à análise de metadados, mesmo quando os metadados muitas vezes são mais importantes do que o conteúdo da comunicação.
+> Os protocolos existentes são vulneráveis à análise de metadados, mesmo que os metadados muitas vezes sejam mais sensíveis do que o conteúdo da comunicação.
-Como medida de curto prazo, estamos integrando transporte criptografado oportunístico (TLS) para email e mensagens de chat quando retransmitidas entre os servidores. Há dois aspectos importantes nisso:
+Como medida de curto prazo, estamos integrando transporte criptografado oportunístico (TLS) para email e mensagens de chat ao serem retransmitidas entre servidores. Há dois aspectos importantes nisso:
* Servidores repetidores (relaying servers) precisam de uma maneira sólida para descobrir e validar as chaves uns dos outros. Para isso, estamos utilizando inicialmente DNSSEC/DANE.
-* Um atacante não deve ser capaz de fazer o downgrade do transporte criptografado para texto não cifrado. Para isso, estamos modificando o software para assegurar que o transporte criptografado não pode sofrer downgrade.
+* Um atacante não deve ser capaz de fazer o downgrade do transporte criptografado para texto não cifrado. Para isso, estamos modificando o software para assegurar que o transporte criptografado não possa sofrer downgrade.
-Tal abordagem é potencialmente eficaz contra observadores externos na rede, mas não protege os metadados dos próprios provedores de serviços. Além disso, ele não tem, por si só, como proteger contra ataques mais avançados que envolvam análise de tráfego e de tempo.
+Tal abordagem é potencialmente eficaz contra observadores externos na rede, mas não protege os metadados dos próprios provedores de serviços. Além disso, ela não protege, por si só, contra ataques mais avançados que envolvam análise de tráfego e de tempo.
-No longo prazo, pretendemos adotar um dos vários esquemas distintos para a segurança de roteamento metadados. Estes incluem:
+No longo prazo, pretendemos adotar um dos vários esquemas distintos para roteamento seguro de metadados. Estes incluem:
-* Pareamento automático de aliases (auto-alias-pairs): cada parte autonegocia aliases para se comunicarem umas com as outras. Nos bastidores, o cliente -- então invisível -- usa esses aliases para a comunicação subsequente. A vantagem é que isso é compatível com o roteamento existente. A desvantagem é que o servidor do usuário/a armazena uma lista de seus aliases. Como uma melhoria, você poderia adicionar a possibilidade de um serviço de terceiros para manter o mapa dos aliases.
-* Cabeçalhos de roteamento do tipo "cebola" (onion-routing-headers): uma mensagem de um/a usuário/a para o/a usuário/a B é codificada para que o as informações de roteamento do destinatário/a contenha apenas o nome do servidor usado por B. Quando o servidor de B recebe a mensagem, ele/a decodifica um cabeçalho adicional que contém o utilizador real "B". Como aliases, isso não proporciona benefícios se os usuários estão no mesmo servidor. Como uma melhoria, a mensagem pode ser encaminhada por meio de servidores intermediários.
-* Caixa de despejo de terceiros (third-party dropbox): para trocar mensagens, o/a usuário/a A e o/a usuário/a B negociam uma URL única de "dropbox" para depositar mensagens, potencialmente usando um agente intermediário. Para enviar uma mensagem, o usuário A que postar a mensagem para o "dropbox". Para receber uma mensagem, o usuário B acessaria regularmente esta URL para ver se há novas mensagens.
-* Mixmaster (misturador) com assinaturas (mixmaster-with-signatures): as mensagens são enviadas através de um mixmaster -- um conjunto de misturadores para anonimato -- e, finalmente, entregues ao servidor do destinatário. O programa cliente do usuário exibe apenas a mensagem se ela é criptografada, tem uma assinatura válida e se o usuário tenha adicionado anteriormente ao remetente para uma 'lista de permissões' (talvez gerada automaticamente a partir da lista de chaves públicas validadas).
+* Pareamento automático de pseudônimos (auto-alias-pairs): cada uma das partes autonegocia pseudônimos para se comunicarem umas com as outras. Nos bastidores, o cliente utiliza de forma invisível esses pseudônimos para a comunicação subsequente. A vantagem é que isso é compatível com o roteamento existente. A desvantagem é que o servidor do usuário/a armazena uma lista de seus pseudônimos. Como uma melhoria, pode-se adicionar a possibilidade de usar um serviço de terceiros para manter o mapa dos pseudônimos.
+* Cabeçalhos de roteamento do tipo "cebola" (onion-routing-headers): uma mensagem de um/a usuário/a para o/a usuário/a B é codificada de forma que as informações de roteamento do destinatário/a contenham apenas o nome do servidor usado por B. Quando o servidor de B recebe a mensagem, decodifica um cabeçalho adicional que contém o utilizador real "B". Como o uso de pseudônimos, isso não proporciona benefícios se os usuários estão no mesmo servidor. Como uma melhoria, a mensagem pode ser encaminhada por meio de servidores intermediários.
+* Caixa de depósito de terceiros (third-party dropbox): para trocar mensagens, o/a usuário/a A e o/a usuário/a B negociam uma URL única de uma "caixa de depósito" (dropbox) para depositar mensagens, potencialmente usando um agente intermediário. Para enviar uma mensagem, o usuário A depositaria a mensagem na caixa. Para receber uma mensagem, o usuário B acessaria regularmente esta URL para ver se há novas mensagens.
+* Misturador com assinaturas (mixmaster-with-signatures): as mensagens são enviadas através de um conjunto de repetidores anonimizadores do tipo mixmaster e ao final são entregues ao servidor do destinatário. O programa cliente do usuário apenas exibe a mensagem se ela for criptografada, tiver uma assinatura válida, e se o usuário tiver adicionado anteriormente o remetente a uma 'lista de permissões' (talvez gerada automaticamente a partir da lista de chaves públicas validadas).
-Para uma grande discussão comparando redes misturadoras com roteamento cebola, veja a [postagem no blog de Tom Ritter](https://ritter.vg/blog-mix_and_onion_networks.html) sobre o tema.
+Para uma boa discussão comparando redes misturadoras com roteamento cebola, veja a [postagem no blog de Tom Ritter](https://ritter.vg/blog-mix_and_onion_networks.html) sobre o tema.
### Problema da assincronicidade
O problema:
-> Para a comunicação criptografada, você atualmente precisa escolher entre sigilo futuro (forward secrecy) ou a capacidade se de comunicar de modo assíncrono.
+> Para estabelecer comunicação criptografada, atualmente é necessário escolher entre sigilo futuro (forward secrecy) e a habilidade de se comunicar de forma assíncrona.
-Com o ritmo de crescimento no armazenamento digital e da criptanálise, o sigilo futuro é cada vez mais importante. Caso contrário, qualquer comunicação criptografada que você fizer hoje provavelmente se torne em comunicação em texto puro num futuro próximo.
+Com o ritmo de crescimento do armazenamento digital e da criptanálise, o sigilo futuro é cada vez mais importante. Caso contrário, qualquer comunicação criptografada que você fizer hoje possivelmente se tornará uma comunicação em texto não cifrado num futuro próximo.
-No caso do email e do chat, temos o OpenPGP para email e OTR para bate-papo: o primeiro fornecendo recursos assíncronos e o segundo fornecendo sigilo futuro, mas nenhum deles possuem ambas habilidades. Precisamos tanto de uma melhor segurança para email e a capacidade de enviar e receber mensagens de bate-papo em modo offline.
+No caso do email e do bate-papo, existem o OpenPGP para email e o OTR para bate-papo: o primeiro fornece recursos assíncronos e o segundo fornece sigilo futuro, mas nenhum deles possui ambas as habilidades. Precisamos tanto de uma melhor segurança para email quanto da capacidade de enviar e receber mensagens de bate-papo em modo offline.
-No curto prazo, estamos empilhando transporte de email com sigilo futuro e relay de chat em cima de criptografia tradicional de objetos (OpenPGP). Esta abordagem é idêntica à nossa abordagem paliativa para o problema dos metadados, com o acréscimo de que os servidores repetição precisam ter a capacidade de não apenas negociar transporte TLS mas também para negociar cifras que suportem sigilo futuro e que evitem um rebaixamento (downgrade) da cifra utilizada.
+No curto prazo, estamos empilhando transporte de email com sigilo futuro e relay de chat em cima de criptografia tradicional de objetos (OpenPGP). Esta abordagem é idêntica à nossa abordagem paliativa para o problema dos metadados, com o acréscimo de que os servidores repetidores precisam ter a capacidade de não apenas negociar transporte TLS mas também de negociar cifras que suportem sigilo futuro e que evitem uma precarização (downgrade) da cifra utilizada.
Esta abordagem é potencialmente eficaz contra os observadores externos na rede, mas não obtém sigilo futuro dos próprios prestadores de serviço.
-No longo prazo, pretendemos trabalhar com outros grupos para criar novos padrões de protocolo de criptografia que podem ser tanto assíncronas quanto com sigilo futuro:
+No longo prazo, pretendemos trabalhar com outros grupos para criar novos padrões de protocolo de criptografia que possam ser assíncronos e, ao mesmo tempo, oferecer sigilo futuro:
* [Extensões para sigilo futuro para o OpenPGP](http://tools.ietf.org/html/draft-brown-pgp-pfs-03).
* [Handshake Diffie-Hellman triplo com curvas elípticas](https://whispersystems.org/blog/simplifying-otr-deniability/).
@@ -77,47 +79,55 @@ O problema:
> Na prática, as pessoas trabalham em grupos, mas a criptografia de chave pública não.
-Temos um monte de idéias, mas não temos ainda todas as soluções para corrigir isso. Essencialmente, a questão é como usar primitivas existentes de chaves públicas para criar grupos criptográficos fortes, onde a adesão e as permissões são baseadas em chaves e em listas de controle de acesso mantidas no lado do servidor.
+Temos um monte de ideias, mas não temos ainda uma solução para corrigir este problema. Essencialmente, a questão é como usar primitivas de chaves públicas existentes para criar grupos criptográficos fortes, onde a adesão e as permissões são baseadas em chaves e em listas de controle de acesso mantidas no lado do servidor.
+
+A maioria dos trabalhos interessantes nesta área tem sido feitos por empresas que trabalham com backup/sincronização/compartilhamento seguro de arquivos, como Wuala e Spideroak. Infelizmente, ainda não há quaisquer protocolos abertos bons ou pacotes de software livre que possam lidar com criptografia para grupos.
-A maioria dos trabalhos interessantes nesta área tem sido feitos por empresas que trabalham com backup/sincronização/compartilhamento seguro de arquivos, como Wuala e Spideroak. Infelizmente, ainda não há quaisquer protocolos abertos bons ou pacotes de software livre que possam lidar com criptografia grupo.
+Neste momento, é provável que a melhor abordagem seja a abordagem simples: um protocolo no qual o cliente criptografa cada mensagem para cada destinatário individualmente, e que tenha algum mecanismo para verificação da transcrição de forma a garantir que todas as partes tenham recebido a mesma mensagem.
-Existem alguns trabalhos em software livre com blocos construtivos interessantes que podem ser úteis na construção da criptografia de grupo. Por exemplo:
+Existem alguns trabalhos em software livre com blocos construtivos interessantes que podem ser úteis na construção da criptografia para grupos. Por exemplo:
- * [Re-criptografia de proxy (proxy re-encryption)](https://en.wikipedia.org/wiki/Proxy_re-encryption): permite que o servidor cifre novamente conteúdo para novos beneficiários sem acesso ao texto não-encriptado. O [gerenciador de lista de discussão SELS](http://sels.ncsa.illinois.edu/) usa OpenPGP para implementar um [sistema inteligente para o proxy de re-encriptação](http://spar.isi.jhu.edu/~mgreen/proxy.pdf).
- * [Assinaturas em anel (ring signatures)](https://en.wikipedia.org/wiki/Ring_signature): permite que qualquer membro do grupo assine, sem que ninguém saiba qual membro.
+ * [Re-criptografia de proxy (proxy re-encryption)](https://en.wikipedia.org/wiki/Proxy_re-encryption): permite que o servidor cifre o conteúdo para novos beneficiários sem que tenha acesso ao texto não cifrado. O [gerenciador de lista de discussão SELS](http://sels.ncsa.illinois.edu/) usa OpenPGP para implementar um [sistema inteligente para o proxy de re-encriptação](http://spar.isi.jhu.edu/~mgreen/proxy.pdf).
+ * [Assinaturas em anel (ring signatures)](https://en.wikipedia.org/wiki/Ring_signature): permite que qualquer membro do grupo assine, sem que se possa saber qual membro fez a assinatura.
### Problema dos recursos
O problema:
-> Não existem protocolos abertos que permitam aos usuários compartilharem seguramente um recurso.
+> Não existem protocolos abertos que permitam aos usuários compartilharem um recurso de forma segura.
-Por exemplo, ao usar o chat seguro ou rede social segura federada, você precisa de alguma forma de ligação para uma mídia externa, como uma imagem, vídeo ou arquivo, que tenha as mesmas garantias de segurança que a própria mensagem. A incorporação deste tipo de recurso nas mensagens em si é proibitivamente ineficiente.
+Por exemplo, ao usar um bate-papo seguro ou rede social segura federada, você precisa de alguma forma de criar links para uma mídia externa, como uma imagem, vídeo ou arquivo, que tenha as mesmas garantias de segurança que a própria mensagem. A incorporação deste tipo de recurso nas mensagens em si é proibitivamente ineficiente.
Nós não temos uma proposta de como resolver este problema. Há um monte de grandes iniciativas que trabalham sob a bandeira da read-write-web, mas que não levam em conta a criptografia. De muitas maneiras, as soluções para o problema dos recursos são dependentes de soluções para o problema do grupo.
-Tal como acontece com o problema do grupo, a maior parte do progresso nesta área tem sido por pessoas que trabalham em sincronia de arquivos criptografados (por exemplo, estratégias como a Revogação Preguiçosa -- Lazy Revocation -- e Regressão chave -- Key Regression).
+Tal como acontece com o problema do grupo, a maior parte do progresso nesta área tem sido por pessoas que trabalham em sincronização de arquivos criptografados (por exemplo, estratégias como a Revogação Preguiçosa -- Lazy Revocation -- e Regressão de Chave -- Key Regression).
### Problema da disponibilidade
O problema:
-> Pessoas querem alternar suavemente entre dispositivos e restaurar seus dados se elas perderem um dispositivo, mas isso é bem difícil de se fazer com segurança.
+> As pessoas querem alternar suavemente entre dispositivos e restaurar seus dados se perderem um dispositivo, mas isso é bem difícil de se fazer com segurança.
+
+Os/as usuários/as atuais exigem a capacidade de acessar seus dados em múltiplos dispositivos e de terem em mente que dados não serão perdidos para sempre se perderem um dispositivo. No mundo do software livre, só o Firefox abordou este problema adequadamente e de forma segura (com o Firefox Sync).
+
+No LEAP, temos trabalhado para resolver o problema de disponibilidade com um sistema que chamamos de [Soledad](/soledad) (um acrônimo em inglês para "sincronização, entre dispositivos, de documentos criptografados localmente"). Soledad dá ao aplicativo cliente um banco de dados de documentos sincronizado, pesquisável e criptografado. Todos os dados são criptografados no lado do cliente, tanto quando ele é armazenado no dispositivo local quanto quando é sincronizado com a nuvem. Até onde sabemos, não há nada parecido com isso, seja no mundo do software livre ou comercial.
-Usuários de hoje exigem a capacidade de acessar seus dados em múltiplos dispositivos e de terem em mente que dados não serão perdidos para sempre se perderem um dispositivo. No mundo do software livre, só o Firefox abordou este problema adequadamente e de forma segura (com o Firefox Sync).
+Soledad tenta resolver o problema genérico da disponibilidade de dados, mas outras iniciativas tentaram abordar o problema mais específico das chaves privadas e da descoberta de chaves públicas. Estas iniciativas incluem:
-No LEAP, temos trabalhado para resolver o problema de disponibilidade com um sistema que chamamos de [Soledad](/soledad) (para sincronização de documentos criptografados localmente entre os dispositivos). Soledad dá ao aplicativo cliente um banco de dados de documentos sincronizáveis, pesquisáveis e criptografados. Todos os dados são criptografados no lado do cliente, tanto quando ele é armazenado no dispositivo local quanto quando sincronizado com a nuvem. Até onde sabemos, não há nada parecido com isso, seja no mundo do software livre ou comercial.
+* [O protocolo proposto por Ben Laurie para armazenamento de segredos na nuvem](http://www.links.org/files/nigori/nigori-protocol-01.html).
+* [Código para armazenamento de chaves na nuvem](https://github.com/mettle/nilcat), experimental e similar ao anterior.
+* [Comentários de Phillip Hallam-Baker sobre questões similares](http://tools.ietf.org/html/draft-hallambaker-prismproof-key-00).
### O problema da atualização
O problema:
-> Quase que universalmente, atualizações de software são feitas de maneiras convidativas para ataques e comprometimento de dispositivos.
+> Quase que universalmente, atualizações de software são feitas de maneiras que são convidativas a ataques e comprometimento de dispositivos.
-O triste estado das atualizações de segurança é especialmente problemático porque os ataques de atualização já podem ser comprados prontos por regimes repressivos. O problema de atualização de software é especialmente ruim em plataformas desktop. No caso aplicativos em HTML5 ou para dispositivos móveis, as vulnerabilidades não são tão terríveis, mas os problemas também são mais difíceis de corrigir.
+O triste estado das atualizações de segurança é especialmente problemático porque os ataques de atualização já podem ser comprados prontos por regimes repressores. O problema de atualização de software é especialmente ruim em plataformas desktop. No caso de aplicativos em HTML5 ou para dispositivos móveis, as vulnerabilidades não são tão terríveis, mas os problemas também são mais difíceis de corrigir.
-Para resolver o problema da atualização, o LEAP está adotando um sistema de atualização exclusivo chamado Thandy do projeto Tor. Thandy é complexo para administrar, mas é muito eficaz na prevenção de ataques de actualização conhecidos.
+Para resolver o problema da atualização, o LEAP está adotando um sistema de atualização exclusivo chamado Thandy do projeto Tor. Thandy é complexo para administrar, mas é muito eficaz na prevenção de ataques de atualização conhecidos.
-Thandy, e as respectivas [TUF](https://updateframework.com/), são projetados para dar conta das muitas [vulnerabilidades de segurança em sistemas de atualização de software](https://updateframework.com/projects/project/wiki/Docs/Security) existentes. Num exemplo, outros sistemas de atualização sofrem de uma incapacidade do cliente para confirmar que eles têm a cópia mais recente, abrindo assim uma enorme vulnerabilidade onde o atacante simplesmente espera por uma atualização de segurança, evita que o upgrade ocorra e lança um ataque para a exploração da vulnerabilidade que deveria ter sido apenas corrigida. Thandy/TUF fornecem um mecanismo único para a distribuição e verificação de atualizações de modo que nenhum dispositivo cliente irá instalar a atualização errada ou perca uma atualização sem saber.
+Thandy, e o projeto relacionado [TUF](https://updateframework.com/), são projetados para dar conta das muitas [vulnerabilidades de segurança em sistemas de atualização de software](https://updateframework.com/projects/project/wiki/Docs/Security) existentes. Num exemplo, outros sistemas de atualização sofrem de uma incapacidade do cliente de confirmar que possuem a cópia mais recente, abrindo assim uma enorme vulnerabilidade onde o atacante simplesmente espera por uma atualização de segurança, evita que o upgrade ocorra e lança um ataque para a exploração da vulnerabilidade que deveria ter acabado de ser corrigida. Thandy/TUF fornecem um mecanismo único para a distribuição e verificação de atualizações de modo que nenhum dispositivo cliente irá instalar a atualização errada ou perder uma atualização sem saber.
-Relacionado com o problema da atualização é o problema do backdoor: como você sabe que uma atualização não tem um backdoor adicionado pelos próprios desenvolvedores do software? Provavelmente, a melhor abordagem é aquela tomada pelo [Gitian](https://gitian.org/), que fornece um "processo de construção determinística para permitir que vários construtores criem binários idênticos". Esperamos adotar Gitian no futuro.
+Um problema relacionado com o problema da atualização é o problema do backdoor: como você sabe que uma atualização não tem um backdoor adicionado pelos próprios desenvolvedores do software? Provavelmente, a melhor abordagem é aquela tomada pelo [Gitian](https://gitian.org/), que fornece um "processo de construção determinística para permitir que vários construtores criem binários idênticos". Nós pretendemos adotar o Gitian no futuro.
diff --git a/docs/tech/secure-email/en.md b/docs/tech/secure-email/en.md
new file mode 100644
index 0000000..fe1f51f
--- /dev/null
+++ b/docs/tech/secure-email/en.md
@@ -0,0 +1,578 @@
+@title = "Secure Email Report"
+@summary = "A report on the state of the art in secure email projects"
+@toc = false
+
+There are an increasing number of projects working on next generation secure email or email-like communication. This is an initial draft report highlighting the projects and comparing the approaches. Please help us fill in the missing details and correct any inaccuracies. To contribute to this document, fork the repository found at https://github.com/OpenTechFund/secure-email and issue a pull request.
+
+Contents:
+
+1. [Common Problems](#common-problems)
+ 1. [Key Management](#key-management)
+ 1. [Metadata Protection](#metadata-protection)
+ 1. [Forward Secrecy](#forward-secrecy)
+ 1. [Data Availability](#data-availability)
+ 1. [Secure Authentication](#secure-authentication)
+1. [Web Mail](#web-mail)
+ 1. [Lavaboom](#lavaboom)
+ 1. [Mega](#mega)
+ 1. [PrivateSky](#privatesky)
+ 1. [Scramble](#scramble)
+ 1. [Startmail](#startmail)
+ 1. [Whiteout](#whiteout)
+1. [Browser Extensions](#browser-extensions)
+ 1. [Mailvelope](#mailvelope)
+1. [Mail Clients](#mail-clients)
+ 1. [Bitmail](#bitmail)
+ 1. [Mailpile](#mailpile)
+ 1. [Parley](#parley)
+1. [Self-Hosted Email](#self-hosted-email)
+ 1. [Dark Mail Alliance](#self-hosted-dark-mail)
+ 1. [FreedomBox](#freedombox)
+ 1. [Mailpile](#self-hosted-mailpile)
+ 1. [Mail-in-a-box](#mail-in-a-box)
+ 1. [kinko](#kinko)
+1. [Email Infrastructure](#email-infrastructure)
+ 1. [Dark Mail Alliance](#dark-mail-alliance)
+ 1. [LEAP Encryption Access Project](#leap)
+1. [Post-email alternatives](#post-email-alternatives)
+ 1. [Bitmessage](#bitmessage)
+ 1. [Bote mail](#bote-mail)
+ 1. [Cables](#cables)
+ 1. [Dark Mail Alliance](#p2p-dark-mail-alliance)
+ 1. [Enigmabox](#enigmabox)
+ 1. [FlowingMail](#flowingmail)
+ 1. [Goldbug](#goldbug)
+ 1. [Pond](#pond)
+1. [Related Works](#related-works)
+
+<a name="common-problems"></a>Common Problems
+===========================================================
+
+All of the technologies listed here face a common set of problems when trying to make email (or email-like communication) secure and easy to use. These problems are hard, and have defied easy solutions, because there are no quick technological fixes: at issue is the complex interaction between user experience, real world infrastructure, and security. Although no consensus has yet emerged on how best to tackle any of these problems, the diversity of projects listed in this report reflects a surge of interest in this area and an encouraging spirit of experimentation.
+
+<a name="key-management"></a>Key Management
+-----------------------------------------------------------
+
+All the projects in this report use public-key encryption to allow a user to send a confidential message to the intended recipient, and for the recipient to verify the authorship of the message. Unfortunately, public-key encryption is notoriously difficult to use properly, even for advanced users. The very concepts are confusing for most users: public key versus private key, key signing, key revocation, signing keys versus encryption keys, bit length, and so on.
+
+Traditionally, public key cryptography for email has relied on either the X.509 Certificate Authority (CA) system or a decentralized "Web of Trust" (WoT) for key validation (authenticating that a particular person owns a particular key). Recently, both schemes have come under intense criticism. Repeated security lapses at many of the Certificate Authorities have revealed serious flaws in the CA system. On the other hand, in an age where we better understand the power of social network analysis and the sensitivity of the social graph, the exposure of metadata by a "Web of Trust" is no longer acceptable from a security standpoint.
+
+This is where we are now: we have public key technology that is excessively difficult for the common user, and our only methods of key validation have fallen into disrepute. The projects listed here have plunged into this void, attempting to simplify the usage of public-key cryptography. These efforts have four elements:
+
+* Key discovery: There is no commonly used standard for discovering the public key attached to a particular email address. All the projects here that use OpenPGP intend to initially use, as a stop-gap measure, the OpenPGP keyservers for key discovery, although the keyserver infrastructure was not designed to be used in this way.
+* Key validation: If not Certificate Authorities or Web of Trust, what then? Nearly every project here uses Trust On First Use (TOFU) in one way or another. With TOFU, a key is assumed to be the right key the first time it is used. TOFU can work well for long term associations and for people who are not being targeted for attack, but its security relies on the security of the discovery transport and the application's ability to retain a memory of discovered keys. TOFU can break down in many real-world situations where a user might need to generate new keys or securely communicate with a new contact. The projects here are experimenting with TOFU in different ways, and these problems can likely be mitigated by combining TOFU with other measures.
+* Key availability: Almost every attempt to solve the key validation problem turns into a key availability problem, because once you have validated a public key, you need to make sure that this validation is available to the user on all the possible devices they might want to send or receive messages on.
+* Key revocation: What happens when a private key is lost, and a user wants to issue a new public key? None of the projects in this report have an answer for how to deal with this in a post-CA and post-WoT world.
+
+The projects that use a public key as a unique identifier do not have the key validation problem, because they do not need to try to bind a human memorable identifier to a long non-memorable public key: they simply enforce the use of the public key as the user's address. For example, rather than `alice@example.org` as the identifier, these systems might use `8b3b2213ff00e5fb684b003d005ed2fb`. In place of the key validation problem, this approach raises the key exchange problem: how do two parties initially exchange long public keys with one another? This approach is taken by all the P2P projects listed here (although there do exist some P2P applications that don't use public key identifiers).
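+
+To make the identifier-is-the-key approach concrete, such an address can simply be a truncated hash of the public key, roughly as in the sketch below (the hash, truncation length, and encoding are illustrative and not drawn from any particular project):
+
+    import hashlib
+
+    def key_identifier(public_key_bytes, length=16):
+        """Derive a compact identifier from a public key; anyone holding
+        the key can recompute the identifier and verify that it matches."""
+        return hashlib.sha256(public_key_bytes).hexdigest()[:2 * length]
+
+    # produces identifiers of the form shown above, e.g. '8b3b2213ff00e5fb...'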
+
+Some of the major experimental approaches to solving the problem of public key discovery and validation include:
+
+1. Inline: Many of the projects here plan to simply include the user's public key as an attachment to every outgoing email (or in a footer or SMTP header); a minimal sketch of this appears after the list.
+1. DNS: Key distributed via DNSSEC, where a service provider adds a DNS entry for each user containing the user's public key or fingerprint.
+1. Append-only log: Proposal to modify Certificate Transparency to handle user accounts, where audits are performed against append-only logs.
+1. Network perspective: Validation by key endorsement (third party signatures), with audits performed via network perspective.
+1. Introductions: Discovery and validation of keys through acquaintance introduction.
+1. Mobile: Although too lengthy to manually transcribe, an app on a mobile device can be used to easily exchange keys in person (for example, via a QR code or bluetooth connection).
+
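+To make the first strategy in the list above concrete, here is a minimal sketch of attaching an ASCII-armored public key to an outgoing message using Python's standard email library; the filename and the application/pgp-keys MIME type follow common OpenPGP practice, but none of the projects above mandate this exact form:
+
+    from email.message import EmailMessage
+
+    def attach_public_key(msg, armored_key):
+        """Attach the sender's ASCII-armored OpenPGP public key so the
+        recipient's client can discover it without a keyserver lookup."""
+        msg.add_attachment(armored_key.encode("ascii"),
+                           maintype="application",
+                           subtype="pgp-keys",
+                           filename="publickey.asc")
+        return msg
+
+    msg = EmailMessage()
+    msg["From"] = "alice@example.org"
+    msg["To"] = "bob@example.org"
+    msg["Subject"] = "hello"
+    msg.set_content("message body")
+    attach_public_key(msg, "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...")
+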
+<a name="metadata-protection"></a>Metadata Protection
+-----------------------------------------------------------
+
+Traditional schemes for secure email have left metadata exposed. We now know that metadata is often more sensitive than message content: metadata is structured data, easily stored forever, and subject to powerful techniques of social network analysis that can be incredibly revealing.
+
+Metadata protection, however, is **hard**. In order to protect metadata, the message routing protocol must hide the sender and recipient from all the intermediaries responsible for relaying the message. This is not possible with the traditional protocol for email transport, although it will probably be possible to piggyback additional (non-backward compatible) protocols on top of traditional email transport in order to achieve metadata protection.
+
+Alternately, some projects reject traditional email transport entirely. These decentralized peer-to-peer approaches to metadata protection generally fall into four camps: (1) directly relay the message from the sender's device to the recipient's device; (2) relay messages through a network of friends; (3) broadcast messages to everyone; (4) relay messages through an anonymization network such as Tor. The first two approaches protect metadata, but at the expense of increasing vulnerability to traffic analysis that could reveal the same metadata. The third solution faces serious problems of scalability. Pond uses the fourth method, discussed below.
+
+All schemes for metadata protection face the prospect of increasing Spam (since one of the primary methods used to prevent Spam is analysis of metadata). This is why some schemes with strong metadata protection make it impossible to send messages to, or receive messages from, anyone you are not already in contact with. This works brilliantly for reducing Spam, but is unlikely to be a viable long term strategy for entirely replacing the utility of email.
+
+<a name="forward-secrecy"></a>Forward Secrecy
+-----------------------------------------------------------
+
+Forward secrecy is a security property that prevents an attacker from saving messages today and then later decrypting these messages once they have captured the user's private key. Without forward secrecy, an attacker is more likely to be able to capture messages today and simply wait for computers to become powerful enough to crack the encryption by brute force. Traditional email encryption offers no forward secrecy.
+
+All methods for forward secrecy involve a process where two parties negotiate an ephemeral key that is used for a short period of time to secure their communication. In many cases, the ephemeral key is generated anew for every single message. Traditional schemes for forward secrecy are incompatible with the asynchronous nature of email communication, since with email you still need to be able to send someone a message even if they are not online, and ephemeral key generation requires a back-and-forth exchange between both parties.
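+
+As a rough illustration of the ephemeral-key step only (not the full protocol of OTR or any scheme mentioned in this report), the sketch below performs a single X25519 agreement using the Python `cryptography` package. Discarding the ephemeral private keys afterwards is what provides forward secrecy, and the need for both sides to contribute a fresh public key is why a live exchange is normally required:
+
+    from cryptography.hazmat.primitives import hashes
+    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
+    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
+
+    # Each party generates a key pair used for this exchange only.
+    alice_ephemeral = X25519PrivateKey.generate()
+    bob_ephemeral = X25519PrivateKey.generate()
+
+    # After swapping the ephemeral *public* keys, both sides derive the
+    # same short-lived message key, then throw the private keys away.
+    shared = alice_ephemeral.exchange(bob_ephemeral.public_key())
+    message_key = HKDF(algorithm=hashes.SHA256(), length=32,
+                       salt=None, info=b"example").derive(shared)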
+
+There are several new experimental (and tricky) protocols that attempt to achieve both forward secrecy and support for asynchronous communication, but none have yet emerged as a standard. These protocols either (1) require an initial bootstrap message that is not forward secret, (2) require an initial synchronous exchange to start the process, or (3) rely on a pool of pre-generated ephemeral key pairs that can be used on first contact. When the continually changing ephemeral key for a conversation is lost by either party, then the initialization stage is performed again.
+
+Another possible approach is to use traditional encryption with no support for forward secrecy but instead rely on a scheme for automatic key discovery and validation in order to frequently rotate keys. This way, a user could throw away their private key every few days, achieving a very crude form of forward secrecy.
+
+<a name="data-availability"></a>Data Availability
+-----------------------------------------------------------
+
+Users today demand data availability: they want to be able to access their messages and send messages from any device they choose, wherever they choose, and whenever they choose. Most importantly, they don't want the loss of any particular device to result in a loss of all their data. For insecure communication, achieving data availability is dead simple: store everything in the cloud. For secure communication, however, we have no proven solutions to this problem. As noted above, the key management problem is also really a data availability problem.
+
+Most of the email projects here have postponed dealing with the data availability problem. A few have used IMAP to synchronize data or developed their own secure synchronization protocol. Several of the email-like P2P approaches rely on a P2P network for data availability.
+
+<a name="secure-authentication"></a>Secure Authentication
+-----------------------------------------------------------
+
+For those projects that make use of a service provider, one of the key problems is how to authenticate securely with the service provider without revealing the password (since the password is probably also used to encrypt the private key and other secure storage, it is important that the service provider does not have cleartext access, as it would with typical password authentication schemes). The possible schemes include:
+
+* Separate passwords. The application can use one password for authentication and a separate password for securing secrets.
+* Pre-hash the password on the client before sending it to the server. This method can work, although it does not also authenticate the server (an impostor server can always reply with a success message), and is still vulnerable to brute force dictionary attacks.
+* Use Secure Remote Password (SRP), a type of cryptographic zero-knowledge proof designed for password authentication in which the client and server mutually authenticate. SRP has been around a while, and is fairly well analyzed, but it is still vulnerable to brute force dictionary attacks (albeit much less so than traditional password schemes).
+* Sign a challenge from the server with the user's private key. This has the advantage of being nearly impossible to attack by brute force, but is vulnerable to impostor servers and requires that the user's device has the private key.
+
+No consensus or standard has yet emerged, although SRP has been around a while.
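+
+As a rough sketch of the pre-hashing idea (the second option above), the client can stretch the passphrase once and split the result into an authentication token for the server and a storage key that never leaves the device; the scrypt parameters and labels here are illustrative, not taken from any project in this report:
+
+    import hashlib
+
+    def derive_secrets(passphrase, salt):
+        """Stretch the passphrase, then split it: the first half is sent to
+        the server for login, the second half encrypts local secrets."""
+        master = hashlib.scrypt(passphrase.encode(), salt=salt,
+                                n=2**14, r=8, p=1, dklen=64)
+        return master[:32], master[32:]   # (auth_token, storage_key)
+
+    # The server only ever sees auth_token; dictionary attacks remain
+    # possible, which is why schemes such as SRP are also considered.
+    auth_token, storage_key = derive_secrets("correct horse battery staple",
+                                             b"per-user-salt")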
+
+<a name="web-mail"></a>Web Mail
+===========================================================
+
+Most users are familiar with web-based email and the incredible convenience it offers: you can access your email from any device, and you don't need to worry about data synchronization. Developers of web-based email face several difficult challenges when attempting to make a truly secure application. These challenges can be overcome, but not easily.
+
+First, because the web application is loaded from the web server each time you use it, the service provider could target you with a version of the client that includes a backdoor. To overcome this vulnerability, it is possible to load the code for the web application from a third party. There are two ways of doing this:
+
+1. App Store: Most web browsers support special extensions in the form of "Browser Applications". These are loaded from some kind of app store and installed on the user's device. In this case, the third party that provides the application is the app store. Therefore, the user is then relying on the app store to furnish them with a secure version of the app. For example, this is the approach taken by [cryptocat](https://crypto.cat).
+2. Third Party: There are two advanced mechanisms to allow a web application to be loaded from one website and allow it to access data from another website. One is called CORS ([Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing)) and the other is HTML5's [window.postMessage command](https://developer.mozilla.org/en-US/docs/Web/API/window.postMessage). With either method, anyone can be the third party furnishing the application, or it can be self hosted. For example, this is the approach taken by [Unhosted](https://unhosted.org).
+
+Second, even if the application is loaded from a trusted third party, web browsers are not an ideal environment for sensitive data: there are many ways for an in-browser application to leak data and web browsers are notoriously prone to security holes (it is a very difficult problem to be able to run untrusted code locally in a secure sandbox). To their credit, the browser developers are often vigilant about fixing these holes (depending on who you ask), but the browser environment is far from a secure computing environment. It continues to be, however, the most convenient environment.
+
+Third, developers of web-based secure email face an additional challenge when dealing with offline data or data caching. Modern HTML5 apps typically store a lot of data locally on the user's device using the localStorage facility. Currently, however, no browser stores this data encrypted. A secure web-based email application must either choose to not support any local storage, or develop a scheme for individually encrypting each object put in localStorage, a process which is very inefficient. Even storing keys temporarily in short-lived session storage is problematic, since these can be easily read from disk later.
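+
+Such a per-object encryption scheme might look roughly like the following sketch (written in Python for brevity, with a dict standing in for the browser's localStorage and the key assumed to be derived from the user's passphrase):
+
+    import json, os
+    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
+
+    storage = {}                                  # stand-in for localStorage
+    storage_key = AESGCM.generate_key(bit_length=256)
+
+    def set_item(name, obj):
+        """Encrypt each object individually before it is persisted."""
+        nonce = os.urandom(12)
+        sealed = AESGCM(storage_key).encrypt(nonce, json.dumps(obj).encode(), None)
+        storage[name] = nonce + sealed
+
+    def get_item(name):
+        blob = storage[name]
+        plain = AESGCM(storage_key).decrypt(blob[:12], blob[12:], None)
+        return json.loads(plain)
+
+    set_item("draft:42", {"to": "bob@example.org", "body": "hello"})
+    assert get_item("draft:42")["to"] == "bob@example.org"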
+
+These challenges do not apply to downloaded mail clients that happen to use HTML5 as their interface (Mailpile, for example).
+
+<a name="lavaboom"></a>Lavaboom
+-----------------------------------------------------------
+
+[lavaboom.com](https://www.lavaboom.com)
+
+Lavaboom is a new web-based mail provider from Germany using client-side encryption in the browser. No further details are available at this time.
+
+Lavaboom's name is a tribute to the shuttered Lavabit service, although Lavaboom has no affiliation or people in common with Lavabit.
+
+<a name="mega"></a>Mega
+-----------------------------------------------------------
+
+[mega.co.nz](https://mega.co.nz)
+
+The relaunch of Mega has featured client-side encryption via a javascript application running in the browser. Mega has announced plans to extend their offerings to include email service with a similar design. No details are yet forthcoming. In interviews, Mega has said the javascript running in the browser will be open source, but the server component will be proprietary.
+
+<a name="privatesky"></a>PrivateSky
+-----------------------------------------------------------
+
+PrivateSky was a secure web-based email service that chose to shut down because their design was not compatible with UK law. Many in the press have [said GCHQ forced the closure](http://www.ibtimes.co.uk/articles/529392/20131211/gchq-forced-privatesky-secure-email-service-offline.htm), which the [company refutes](http://www.certivox.com/blog/bid/359788/The-real-story-on-the-PrivateSky-takedown).
+
+<a name="startmail"></a>StartMail
+-----------------------------------------------------------
+
+[startmail.com](http://startmail.com)
+
+The makers of the secure search engine [startpage.com](https://startpage.com) have announced they will be providing secure email service.
+
+Despite its tag line as the "world's most private email," StartMail is remarkably insecure. It offers regular IMAP service and a webmail interface that supports OpenPGP, but the user still must trust StartMail entirely. For example, when you authenticate, your password string is sent to StartMail, and new OpenPGP keypairs are generated on the server, not the client. The website also makes some dubious statements, such as claiming to be more secure because their TLS server certificate supports extended validation.
+
+Verdict: oil of snake
+
+<a name="scramble"></a>Scramble
+-----------------------------------------------------------
+
+[scramble.io](https://scramble.io)
+
+Scramble is an OpenPGP email application that can be loaded from a website (with plans to add app store support). Additionally, you can sign up for email service from scramble.io.
+
+**Keys:** Private keys are generated in the browser app, encrypted with the user's passphrase, and then stored on the server. The server never sees the user's passphrase (the password is hashed using scrypt before being sent to the server during account creation and authentication). The master storage secret (symmetric key) used to encrypt keys is stored in the browser's sessionStorage, which is erased when the user logs out. Keys are validated using notaries.
+
+**Infrastructure:** Scramble uses a system of network perspectives to discover and validate public keys. The client will come with a list of pre-blessed notaries that can be used to query for public keys. If the notaries agree, the client will consider the key to be validated.
+
+**Application:** Currently, Scramble is a traditional HTML5 javascript application loaded from the website. In the future, Scramble will also be an installable browser app.
+
+* Written in: Go, Javascript
+* Source code: https://github.com/dcposch/scramble
+* Design documentation: https://github.com/dcposch/scramble/wiki/Scramble-Protocol
+* License: LGPL
+* Platforms: Windows, Mac, Linux (with Android planned).
+
+<a name="whiteout"></a>Whiteout
+-----------------------------------------------------------
+
+[whiteout.io](https://whiteout.io)
+
+Whiteout is a commercial service featuring an HTML5-based OpenPGP email client that is loaded from the web.
+
+* Written in: Javascript
+* Source code: https://github.com/whiteout-io/mail-html5
+* License: proprietary, but the code is available for inspection.
+
+<a name="browser-extensions"></a>Browser Extensions
+===========================================================
+
+A browser extension modifies the behavior of the web browser (not to be confused with a browser application, which has far fewer permissions and is self-contained). Browser extensions are able to modify how the user interacts with a variety of websites. Browser extensions share many of the same advantages and disadvantages of [web mail approaches](#webmail).
+
+<a name="mailvelope"></a>Mailvelope
+-----------------------------------------------------------
+
+[mailvelope.com](http://mailvelope.com)
+
+Mailvelope is a browser extension that allows you to use OpenPGP email with traditional web-mail providers like Gmail, Yahoo, and Outlook.com.
+
+**Keys:** The private key is generated for you, password protected, and stored in the browser's local storage (along with public keys). In the future, the plan is to support automatic discovery and validation of public keys using OpenPGP keyservers and message footers.
+
+**Application:** When the extension detects that you have opened a web page from a supported web-mail provider such as Gmail, it offers to encrypt what you type in the compose window and to decrypt the messages you receive.
+
+**Limitations:** Because of an inherent limitation in the way Mailvelope can interface with web-mail, it is not able to send OpenPGP/MIME (although it can read it fine). As mentioned elsewhere, browser storage is not a particularly ideal place to store keys. When a web-mail provider changes its UI (or API, if it happens to have one), the extension must be updated to handle the new format.
+
+* Contact: info@mailvelope.com
+* Written in: Javascript
+* Source code: https://github.com/toberndo/mailvelope
+* Design documentation: http://www.mailvelope.com/help
+* License: AGPL
+* Platforms: Windows, Mac, Linux (with Android planned).
+
+<a name="mail-clients"></a>Mail Clients
+===========================================================
+
+An email client, or MUA (Mail User Agent), provides a user interface to access email from any service provider. Traditional examples of email clients include Thunderbird or Microsoft Outlook (although both of these applications include a lot of other functionality as well). Nearly all email clients communicate with the email service provider using IMAP or POP and SMTP, although some also support local mailboxes in mbox or Maildir format.
+
+There are two primary advantages to the mail client approach:
+
+1. Existing accounts: By using a custom secure mail client, a user can continue to use their existing email accounts.
+1. Tailored UI: A custom client has the potential to rethink the email user experience in order to better convey security related details to the user.
+
+The mail client approach, however, also has several disadvantages:
+
+1. Insecure service providers: A mail client cannot address many of the core problems with email security when used with a traditional email provider. For example, metadata will not be protected in storage or transit, and the provider cannot aid in key discovery or validation. Most importantly, many existing mail providers are highly vulnerable, since few rely on DNSSEC for their MX records or validate their StartTLS connections for mail relay (when they even bother to enable StartTLS). A traditional email provider also requires authentication via a password that the provider sees in cleartext and might record. Finally, most service providers retain significant personally identifiable information, such as the IP addresses of clients.
+1. Install a new app: As with many of the other approaches, the custom mail client approach requires that users download and install a specialized application on their device before they can use it.
+
+Ultimately, the level of email security that is possible with the custom mail client approach will always be limited. However, custom email clients may be an excellent strategy for gradually weaning users away from email and toward a different, more secure protocol. Most of the projects in this section see email support as a gateway to ease the transition to something that can replace email.
+
+<a name="bitmail"></a>Bitmail
+-----------------------------------------------------------
+
+[bitmail.sf.net](http://bitmail.sf.net)
+
+Bitmail is a desktop application that provides a user interface for traditional IMAP-based mail, but also supports a custom peer-to-peer protocol for relaying email through a network of friends. Bitmail will support both OpenPGP and S/MIME.
+
+**Keys:** Keys are validated using a shared secret or manual fingerprint verification. Public keys are discovered over the P2P network. Keys are stored locally in an encrypted database.
+
+**Routing:** Bitmail uses an opportunistic message distribution model where every message is sent to every neighbor. It is called "Echo" and it is very similar to the protocols used by Retroshare and Briar.
+
+**Application:** Bitmail uses the Qt library for cross platform UI.
+
+There are also plans to include a Bitmail MUA extension.
+
+* Written in: C
+* Source code: http://sourceforge.net/projects/bitmail/files
+* Design documentation: http://sourceforge.net/p/bitmail/code/HEAD/tree/branches/BitMail.06.2088_2013-11-03/BitMail/branches/BitMail/Documentation/
+* License: GPL v2
+* Platforms: Windows (with Mac and Linux planned).
+
+Note: it is unclear to me which of the features above are planned and which currently work.
+
+<a name="mailpile"></a>Mailpile
+-----------------------------------------------------------
+
+[mailpile.is](http://mailpile.is)
+
+Mailpile is an email client designed to quickly handle large amounts of email and also support user-friendly encryption. The initial focus is on email, with plans to eventually support post-email protocols like bitmessage, flowingmail, or darkmail. Also, the developers hope to add support for XMPP-based chat in the future. Since the Mozilla foundation has not committed the resources necessary to keep Thunderbird contemporary, the Mailpile initiative holds a lot of promise as a cross-platform mail client that seeks to redesign how we interact with email.
+
+**Keys:** Mailpile email encryption is based on OpenPGP (it uses your GPG keyring). Key discovery will be handled using OpenPGP keyservers and including public keys as attachments to outgoing email. Public keys are trusted on first use, with plans for validation via DANE and manual fingerprint verification (future support for a P2P protocol might include additional methods, such as Certificate Transparency or Short Authentication Strings). Currently, keys are not backed up.
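+
+As an aside, trust on first use is a simple policy to illustrate. The sketch below (not Mailpile's actual logic, which is more nuanced) just records the first fingerprint seen for an address and flags any later change.
+
+    import json, os
+
+    TOFU_STORE = "tofu.json"   # illustrative storage location
+
+    def check_key(address, fingerprint):
+        store = {}
+        if os.path.exists(TOFU_STORE):
+            with open(TOFU_STORE) as f:
+                store = json.load(f)
+        known = store.get(address)
+        if known is None:
+            store[address] = fingerprint           # first use: trust and record
+            with open(TOFU_STORE, "w") as f:
+                json.dump(store, f)
+            return True
+        return known == fingerprint                # later uses: must match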
+
+**Application:** The Mailpile UI is written using HTML5 and Javascript, running against a self-hosted Python application (which typically lives locally on the device, but might run on your own server).
+
+**Limitations:** Mailpile does not currently have a scheme for recovery if your device is destroyed or a method for securely synchronizing keys among devices. Although the search index is stored encrypted on disk (if the user already has GPG installed and a key pair generated), it is encrypted in a way that requires the index to be loaded entirely into memory. Mailpile is under very active development, so these and other issues may change in the near future.
+
+* Written in: Python, Javascript
+* Source code: http://github.com/pagekite/mailpile
+* Design documentation: http://github.com/pagekite/mailpile
+* License: AGPL & Apache
+* Platforms: Windows, Mac, Linux (with Android and iOS planned).
+* Contact: team@mailpile.is
+
+<a name="parley"></a>Parley
+-----------------------------------------------------------
+
+[parley.co](https://parley.co)
+
+Parley is a desktop mail client with a UI written using HTML5 and Javascript, with a local backend written in Python.
+
+**Keys:** Although Parley can be used with any service provider, the Parley servers are used to publish public keys and back up client-encrypted private keys. For key discovery, Parley uses a central repository and the OpenPGP keyservers. For key validation, Parley relies on trust on first use and Parley key endorsement.
+
+**Infrastructure:** All users of the Parley client also sign up for the Parley service, but they use their existing email account. The Parley server acts as a proxy that uses [context.io](http://context.io) for email storage (context.io is a commercial service that provides a REST API for a traditional IMAP account). The Parley server also handles key discovery, validation, and backup. Both the client and server are released as free software.
+
+**Application:** Parley is currently bundled into an executable using [Appcelerator](http://www.appcelerator.com/). The Parley client does not speak IMAP or SMTP directly. Rather, it uses the email REST API of context.io.
+
+**Limitations:** All user email is stored by context.io, albeit in OpenPGP format. Metadata is exposed to context.io, however (in addition to your service provider).
+
+* Written in: Python, Javascript
+* Source code: https://github.com/blackchair/parley
+* Design documentation: https://parley.co/#how-it-works
+* License: BSD
+* Platforms: Windows, Mac, Linux (with Android and iOS planned).
+
+<a name="self-hosted-email"></a>Self-Hosted Email
+===========================================================
+
+Traditionally, email is a federated protocol: when you send an email it travels from your computer, to the server of your email provider, to the server of the recipient's provider, and finally to the recipient's computer. The key idea with self-hosted email is to cut out the middleman and run your own email server.
+
+In the United States, much of the interest in self-hosted email is driven by the Supreme Court's current (and particularly odd) interpretation of the 4th amendment called the ["Third-Party Doctrine"](https://en.wikipedia.org/wiki/Third-Party_Doctrine). Essentially, you have much weaker privacy protections in the US if you entrust any of your data to a third party. Additionally, the Court has so far afforded much greater protections to items physically inside your home. "Aha!" say the hackers and the lawyers, "we will just put email in the home."
+
+Unfortunately, it is not so simple. There are some major challenges to putting email servers in everyone's home:
+
+* **Delegated reputation**: The current email infrastructure is essentially a system of delegated reputation. In order to be able to send mail to most providers and not have a large percentage of it marked as Spam, a service provider must gradually build up a good reputation. Users are able to send mail because their provider has cultivated this reputation and maintained it by closing abusive accounts. It is certainly possible to run an email provider with a single user, but it is much harder to build up a good reputation. Also, many email providers block all relay attempts from IP addresses that have been flagged as "home" addresses, on the (probable) assumption that the message is coming from a virus and not a legitimate email server.
+* **Servers are on a hostile network**: Because a server needs to have open ports that are publicly accessible from the internet at all times, running one is much trickier than running a simple desktop computer. It is much more critical to make sure security upgrades are applied in a timely manner, and that you are able to respond to external attacks, such as "Spam Bombs". Any publicly addressable IP that is put on the open internet will be continually probed for vulnerabilities. Self-hosting will probably work great for a protocol like Pond, where there are strict restrictions on who may deliver incoming messages. Email, however, is a protocol that is wide open and prone to abuse.
+* **Sysadmins are not robots**: No one has yet figured out how to make self-healing servers that don't require a skilled sysadmin to keep them healthy. Once someone does, a lot of sysadmins will be out of work, but they are presently not very worried. There are many things that commonly go wrong with servers, such as upgrades failing, drives filling up, daemons crashing, memory leaks, hardware failures, and so on.
+* **Does not address the important problems**: Moving the physical location of a device does nothing to solve the hard problems associated with easy-to-use email security (such as data availability and key validation). Some of the approaches to these problems rely on service provider infrastructure that would be infeasible to self host.
+* **DNS is hard**: One of the important security problems with traditional email is the vulnerability MX DNS records. Doing DNS correctly is hard, and not something that can be expected of the common user.
+
+Self-hosted email is an intriguing "legal hack", albeit one that faces many technical challenges.
+
+<a name="self-hosted-dark-mail"></a>Dark Mail Alliance
+-----------------------------------------------------------
+
+The Dark Mail Alliance has said they want to support self-hosting for the server component of the system. No details yet.
+
+<a name="freedombox"></a>FreedomBox
+-----------------------------------------------------------
+
+[freedomboxfoundation.org](https://freedomboxfoundation.org)
+
+From its early conception, part of FreedomBox was "email and telecommunications that protects privacy and resists eavesdropping". Email, however, is not currently being worked on as part of FreedomBox (as far as I can tell).
+
+<a name="self-hosted-mailpile"></a>Mailpile
+-----------------------------------------------------------
+
+Although Mailpile is primarily a mail client, the background Python component can read the Maildir format for email. This means you could install Mailpile on your own server running a Mail Transfer Agent (MTA) like postfix or qmail. You would then access your mail remotely by connecting to your server via a web browser.
+
+<a name="Mail-in-a-box"></a>Mail-in-a-box
+-----------------------------------------------------------
+
+<a href="https://github.com/JoshData/mailinabox">github.com/JoshData/mailinabox</a>
+
+Mail-in-a-box helps linux hobbyists and email developers set up self-hosted email. It will install and configure the necessary Debian packages required to turn a machine running Ubuntu into a self-hosted email server. It provides a fairly straightforward, standard email server with IMAP, SMTP, greylisting, DKIM and SPF. It also includes a command line tool for adding and removing accounts.
+
+**Advantages:** Something quick for anyone with some linux skill who wants to experiment with email.
+
+**Limitations:** Setting up an email server is the easy part, maintaining the service over time is the tricky part. Without any automation recipes using something like Puppet, Chef, Salt, or CFEngine, mail-in-a-box is unlikely to be useful to anyone but the curious hobbyist.
+
+* Written in: Bash
+* Source code: https://github.com/JoshData/mailinabox
+* License: CC0 1.0 Universal
+
+<a name="kinko"></a>kinko
+-----------------------------------------------------------
+
+[kinko](https://kinko.me) implements an encrypting and decrypting SMTP and IMAP proxy on ARM-class hardware, the kinko box. Emails are synced from the users' email accounts via IMAP to the box and are stored in plaintext in a secure storage area on the box. The kinko box also includes a webmail interface so that email can be used from the browser.
+
+Connections to the kinko box are secured by TLS using a private key known only to the box itself. Furthermore, the kinko box is tunnelled to a public internet location. Consequently, users can access secure email from everywhere, using IMAP-compatible email clients and/or browsers, including mobile clients.
+
+kinko uses GnuPG for encryption, with the addition of encrypting the email subject. Further additions should allow "Post-email alternatives" (a la bitmessage) to be used with the email clients that users already use today. Other privacy-related additions are planned as well.
+
+**Key discovery and validation:** Users can upload existing PGP keyrings. PGP keys are discovered via email
+addresses, email content, and PGP key servers. Keys are trusted on first use (but this policy can be changed
+to explicit fingerprint validation.)
+
+**Project status:** An alpha prototype exists. We are preparing for the release of a beta package in Q2/2014.
+
+**Languages:** The kinko base system is implemented in ruby and shell, with minor portions in native code.
+Applications can be implemented in more or less any language.
+
+**Webmail:** The currently included webmail application is roundcube webmail. That might change in the future.
+
+**Licenses:** All portions of the kinko system will be released under the AGPL license. (Included 3rd party
+applications will use their respective open source licenses). The hardware is open sourced as
+per [olimex](https://www.olimex.com/wiki/A10-OLinuXino-LIME).
+
+<a name="email-infrastructure"></a>Email Infrastructure
+===========================================================
+
+The "infrastructure" projects give a service provider the opportunity to offer secure email accounts to end-users. By modifying how both email clients and email servers work, these projects have the potential to deploy greater security measures than are possible with a client-only approach. For example:
+
+* Encrypted relay: A secure email provider is able to support, and enforce, encrypted transport when relaying mail to other providers. This is an important mechanism for preventing mass surveillance of metadata (which is otherwise not protected by OpenPGP client-side encryption of message contents).
+* Easier key management: A secure email provider can endorse the public keys of its users, and provide assistance to various schemes for automatic validation. Additionally, a secure email provider, coupled with a custom client, can make it easy to securely manage and back up the essential private keys which are otherwise cumbersome for most users to manage.
+* Invisible upgrade to better protocols: A secure email provider has the potential to support multiple protocols bound to a single user@domain address, allowing automatic and invisible upgrades to more secure post-email protocols when both parties detect the capability.
+* A return to federation: The recent concentration of email to a few giant providers greatly reduces the health and resiliency of email as an open protocol, since now only a few players essentially monopolize the medium. Projects that seek to make it easier to offer secure email as a service have the potential to reverse this trend.
+* Secure DNS: A secure provider can support DNSSEC and DANE, while most other email providers are unlikely to anytime soon. This is very important, because it is easy to hijack the MX records of a domain without DNSSEC.
+* Minimal data retention: A service provider that follows "best practices" will choose to retain less personally identifiable information on their users, such as their home IP addresses.
+
+The goal of both projects in this category is to build systems where the service provider is untrusted and cannot compromise the security of its users.
+
+Despite the potential of this approach, there are several unknown factors that might limit its appeal:
+
+* In order to benefit from a more secure provider, a user will need to switch their email account and email address, a very high barrier to adoption.
+* Where once there were many ISPs that offered email service, it is no longer clear if there is either the demand to sustain many email providers or the supply of providers interested in offering email as a service.
+* Users must download and install a custom application.
+
+<a name="dark-mail-alliance"></a>Dark Mail Alliance
+-----------------------------------------------------------
+
+[darkmail.info](https://darkmail.info)
+
+The Dark Mail Alliance will include both a client application and server software. The plan is to support traditional encrypted email (both OpenPGP and S/MIME), a new federated email-like protocol adapted from SilentCircle's instant message protocol (SCIMP), and a pure peer-to-peer messaging protocol. Both the client and server will be made available as free software.
+
+**Keys:** Key pairs will be generated on the user's device and uploaded to the service provider. [Certificate Transparency](http://certificate-transparency.org) will be used to automatically validate the service provider's endorsement of these public keys. Dark Mail additionally plans to support fingerprint confirmation, short authentication strings, and shared secret for manual key validation. Automatic discovery of public keys will happen using DNS, HTTPS, and via the messages themselves.
+
+**Routing:** The post-email messaging protocol promises to have forward secrecy and protection from metadata analysis (details have not yet been posted, and SCIMP does not currently support metadata protection). Dark Mail Alliance plans to additionally support pure peer-to-peer messaging using a key fingerprint as the user identifier.
+
+**Infrastructure:** Dark Mail plans to support three types of architectures: traditional client/server, self-hosted, and pure peer-to-peer. No details yet on how these will work.
+
+**Application:** The client application will work with any existing MUA by exposing a local IMAP/SMTP server that the MUA can connect to.
+
+**Limitations:** Dark Mail has not yet released any code or design documents. However, they certainly have the resources to carry out their plans.
+
+* Written in: C
+* Source code: none yet
+* Design documentation: none yet
+* License: planned to be OSI-compatible
+* Platforms: initially Android and iOS, followed by Windows, OS X, Linux, and Windows Phone.
+* Contact: press@darkmail.info
+
+<a name="leap"></a>LEAP Encryption Access Project
+-----------------------------------------------------------
+
+[leap.se](https://leap.se)
+
+LEAP includes both a client application and turn-key system to automate the process of running a secure service provider. Currently, this includes user registration and management, help tickets, billing, VPN service, and secure email service. The secure email service is based on OpenPGP.
+
+**Keys:** Key pairs are generated on the user's device. Keys, and all user data, are stored in a client-encrypted database that is synchronized among the user's devices and backed up to the service provider. Keys are automatically validated using a combination of provider endorsement and network perspective (coming soon). Keys are discovered via the OpenPGP keyservers, the OpenPGP header, email footers, and a custom HTTP-based discovery protocol.
+
+**Infrastructure:** LEAP follows a traditional federated client/server architecture. The client is designed to work with any LEAP-compatible service provider (with plans to support legacy IMAP providers in the future). For security reasons, users are encouraged to get the application from LEAP and not their service provider.
+
+**Application:** The client application works with any existing MUA by exposing a local IMAP/SMTP server that the MUA can connect to. There is a Thunderbird extension to automate configuration of the account in Thunderbird. The client application communicates with the service provider using a custom protocol for synchronizing encrypted databases. The application is a very small C program that launches the Python code. The user interface is written using Qt.
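+
+To illustrate what this local-bridge architecture implies, the sketch below uses Python's standard imaplib and smtplib to talk to a bridge listening on localhost. The port numbers and credentials are placeholders, not LEAP's documented defaults.
+
+    import imaplib, smtplib
+    from email.message import EmailMessage
+
+    imap = imaplib.IMAP4("127.0.0.1", 1984)        # local IMAP bridge (assumed port)
+    imap.login("user@example.org", "session-token")
+    imap.select("INBOX")
+    typ, unseen = imap.search(None, "UNSEEN")      # messages arrive already decrypted
+
+    msg = EmailMessage()
+    msg["From"] = "user@example.org"
+    msg["To"] = "friend@example.net"
+    msg["Subject"] = "hello"
+    msg.set_content("encrypted and relayed by the local bridge")
+
+    smtp = smtplib.SMTP("127.0.0.1", 2013)         # local SMTP bridge (assumed port)
+    smtp.send_message(msg)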
+
+**Limitations:** In the current implementation, the security properties of forward secrecy and metadata protection are not end-to-end. Instead, the client relies on the service provider to ensure these properties. This limitation is due to some inherent limitations in the existing protocols for secure email. As with many of the other projects, LEAP's plan is to invisibly upgrade to a post-email protocol when possible in order to overcome these limitations.
+
+* Written in: Python
+* Source code: https://leap.se/source
+* Design documentation: https://leap.se/docs
+* License: mostly GPL v3, some MIT and AGPL.
+
+<a name="post-email-alternatives"></a>Post-email alternatives
+===========================================================
+
+There are several projects to create alternatives to email that are more secure yet still email-like.
+
+These projects share some common advantages:
+
+1. **Trust no one:** These projects share an approach that treats the network, and all parties on the network, as potentially hostile and not to be trusted. With this approach, a user's security can only be betrayed if their own device is compromised or the software is flawed or tampered with, but the user is protected from attacks against any service provider (because there typically is not one).
+1. **Fingerprint as identifier:** All these projects also use the fingerprint of the user's public key as the unique routing identifier for a user, allowing for decentralized and unique names. This neatly solves the problem of validating public keys, because every identifier basically *is* a key, so there is no need to establish a mapping from an identifier to a key (a brief sketch follows this list).
+
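+The sketch below illustrates the "identifier *is* the key" idea: hash a public key and encode the digest as the address. It is not the exact format any one of these projects uses.
+
+    import base64, hashlib
+    from cryptography.hazmat.primitives import serialization
+    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
+
+    public_bytes = Ed25519PrivateKey.generate().public_key().public_bytes(
+        encoding=serialization.Encoding.Raw,
+        format=serialization.PublicFormat.Raw,
+    )
+    fingerprint = hashlib.sha256(public_bytes).digest()
+    address = base64.b32encode(fingerprint[:20]).decode().lower()
+    print(address)   # self-certifying, but impossible to remember
+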
+Except for Pond, all these alternatives take a pure peer-to-peer approach. As such, they face particular challenges:
+
+1. **The "Natural" Network**: Many advocates of peer-to-peer networking advance the notion that decentralized networks are the most efficient networks and are found everywhere in nature (in the neurons in our brain, in how mold grows, in how insects communicate, etc). This notion is only partially true. Networks are found in nature, but these network are not radically decentralized. Instead, natural networks tend to follow a power law distribution (aka "[scale free networks](https://en.wikipedia.org/wiki/Scale-free_network)"), where there is a high degree of partial centralization that balances "brokerage" (ability to communicate far in the network) with "closure" (ability to communicate close in the network). Thus, in practice, digital networks rely on "super hubs" that process most of the traffic. These hubs need to be maintained and hosted by someone, often at great expense (and making the network much more vulnerable to Sybil attacks).
+1. **The Internet:** Sadly, the physical internet infrastructure is actually very polycentric rather than decentralized (more akin to a tree than a spider's web). One reason for the rise of cloud computing is that resources are much cheaper near the core of the internet than near the periphery. Technical strategies that attempt to leverage the periphery will always be disadvantaged from an efficiency standpoint.
+1. **Traffic Analysis:** Most of the peer-to-peer approaches directly relay messages from the sender's device to the recipient's device, or route messages through the participants' contacts. Such an approach to message routing makes it potentially very easy for a network observer to map the network of associations, even if the message protocol otherwise offers very strong metadata protection.
+1. **Sybil Attacks:** By their nature, peer-to-peer networks do not have a method of blocking participation in the network. This makes them potentially very vulnerable to [Sybil attacks](https://en.wikipedia.org/wiki/Sybil_attack), where an attacker creates a very large number of fake participants in order to control the network or reveal the identity of other network participants.
+1. **Mobile:** Peer-to-peer networks are resource intensive, typically with every node in the network responsible for continually relaying traffic and keeping the network healthy. Unfortunately, this kind of thing is murder on the battery life of a mobile device, and requires a lot of extra network traffic.
+1. **Identifiers**: Using key fingerprints as unique identifiers has some advantages, but it also makes user identifiers impossible to remember. There is a lot of utility in the convenience of memorable username handles, as evidenced by the use of email addresses and twitter handles.
+1. **Data Availability**: Unless also paired with a cloud component, peer-to-peer networks have much lower data availability than other approaches. For example, it takes much longer to update message deliveries from a peer network than from a server, particularly when the device has been offline for a while. Also, if a device is lost or destroyed, generally the user loses all their data.
+
+Most of these challenges have possible technological solutions that might make peer-to-peer approaches the most attractive option in the long run. For example, researchers may discover ways to make P2P networks less battery intensive. For this reason, it is important that research continue in this area. However, [in the long run we are all dead](https://en.wikiquote.org/wiki/John_Maynard_Keynes) and peer-to-peer approaches face serious hurdles before they can achieve the kind of user experience demanded today.
+
+<a name="bitmessage"></a>Bitmessage
+-----------------------------------------------------------
+
+[Bitmessage](https://bitmessage.org)
+
+Bitmessage is a peer-to-peer email-like communication protocol. It is totally decentralized and places no trust in any organization for services or validation.
+
+Advantages:
+
+* resistant to metadata analysis
+* relatively easy to use
+* works and is actively used by many people.
+
+Disadvantages:
+
+* no forward secrecy
+* unsolved scaling issues: all messages are broadcast to everyone
+* because there is no forward secrecy, it is especially problematic that anyone can grab an encrypted copy of any message in the system. This means that if a private key is ever compromised, all of that user's past messages can be easily decrypted by anyone who has retained copies of them (which anyone using the system can do).
+* relies on proof of work for spam prevention, which is probably not actually that preventative (spammers often steal CPU anyway); a simplified sketch of this kind of scheme follows this list.
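+
+Here is a simplified hashcash-style proof of work, for illustration only (Bitmessage's actual construction differs in its hash and in how the target is computed): the sender grinds a nonce until the digest falls below a difficulty target, which costs CPU for every message sent but is cheap to verify.
+
+    import hashlib
+
+    def proof_of_work(message, difficulty_bits=20):
+        target = 2 ** (256 - difficulty_bits)
+        nonce = 0
+        while True:
+            digest = hashlib.sha256(nonce.to_bytes(8, "big") + message).digest()
+            if int.from_bytes(digest, "big") < target:
+                return nonce           # attached to the message; receivers re-hash once to check
+            nonce += 1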
+
+<a name="bote-mail"></a>Bote mail
+-----------------------------------------------------------
+
+[i2pbote.i2p.us](http://i2pbote.i2p.us) (or [i2pbote.i2p](http://i2pbote.i2p) if using i2p)
+
+Bote mail (aka [IMail](https://en.wikipedia.org/wiki/IMail)) is an email-like communication protocol that uses the anonymizing network I2P for transport. Bote mail stores messages in a global distributed hash table for up to 100 days, during which time the client has an opportunity to download and store the message.
+
+**Keys**: Bote mail uses public-key based addresses. You can create as many identities as you want, each identity corresponding to an ECDSA or NTRU key-pair.
+
+**Application**: Users interact with Bote mail through a webmail interface, although the client runs locally.
+
+* Written in: Java
+* License: GPLv3
+
+<a name="cables"></a>Cables
+-----------------------------------------------------------
+
+[github.com/mkdesu/cables](https://github.com/mkdesu/cables)
+
+* Written in: C, Bash
+* License: GPL v2
+
+<a name="p2p-dark-mail-alliance"></a>Dark Mail Alliance
+-----------------------------------------------------------
+
+The Dark Mail Alliance plans to incorporate traditional email, a federated email alternative, and a second email alternative that is pure peer-to-peer. Details are not yet forthcoming.
+
+<a name="enigmabox"></a>Enigmabox
+-----------------------------------------------------------
+
+[enigmabox.net](https://enigmabox.net)
+
+Enigmabox is a device that you install on your local network between your computer and the internet. It acts as a secure proxy, providing VPN and communication services analogous to email and VoIP. In order to communicate with another user, that user must also have an Enigmabox.
+
+Data is routed peer-to-peer directly from one enigmabox to another using cjdns, a system of virtual mesh networking in which IP addresses are derived from public keys. End-to-end encryption of messages is provided entirely by the cjdns transport layer.
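+
+The sketch below shows key-derived addressing in the spirit of cjdns (the real implementation differs in details, such as requiring addresses to fall within fc00::/8): the address is a hash of the node's public key, so it is self-certifying.
+
+    import hashlib, ipaddress, os
+
+    public_key = os.urandom(32)    # stand-in for a node's Curve25519 public key
+    digest = hashlib.sha512(hashlib.sha512(public_key).digest()).digest()
+    address = ipaddress.IPv6Address(digest[:16])
+    print(address)   # spoofing this address requires finding a key that hashes to it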
+
+With this scheme, messages are forward secret, but delivery is not entirely asynchronous: at some point, both the sender and recipient must have their enigmaboxes online at the same time.
+
+<a name="flowingmail"></a>FlowingMail
+-----------------------------------------------------------
+
+[flowingmail.com](http://flowingmail.com)
+
+P2P secure, encrypted email system.
+
+<a name="goldbug"></a>Goldbug
+-----------------------------------------------------------
+
+[goldbug.sf.net](http://goldbug.sf.net)
+
+* Written in: C++, Qt
+* License: BSD
+
+<a name="pond"></a>Pond
+-----------------------------------------------------------
+
+[pond.imperialviolet.org](https://pond.imperialviolet.org/)
+
+Pond is an email-like messaging application with several unique architectural and cryptographic features that make it stand out in the field.
+
+**Message Encryption**: Pond uses [Axolotl](https://github.com/trevp/axolotl/wiki) for asynchronous forward secret messages where the key is frequently ratcheted (akin to OTR, but more robust).
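+
+To illustrate the ratcheting idea (and only that; Axolotl combines a symmetric ratchet like the one below with a Diffie-Hellman ratchet and is considerably more involved), here is a toy hash ratchet: each message key is derived from a chain key, the chain key is then advanced, and the old one is discarded, so compromising the current state does not reveal keys for messages already sent.
+
+    import hmac, hashlib
+
+    def ratchet(chain_key):
+        message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
+        next_chain_key = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
+        return message_key, next_chain_key
+
+    chain = b"\x00" * 32             # illustrative initial shared secret
+    for i in range(3):
+        message_key, chain = ratchet(chain)
+        print(i, message_key.hex()[:16])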
+
+**Routing**: Pond uses a unique architecture where every user relies on a service provider for receiving messages, but sent messages are delivered directly to the recipient's server (over Tor). This allows for strong metadata protection, but does not suffer from the other problems that peer-to-peer systems typically do. In order to prevent excessive Spam under this scheme, Pond uses a clever system of group signatures to allow the server to check if a sender is authorized to deliver to a particular user without leaking any information to the server.
+
+**Keys**: Pond uses Panda, a system for secure peer validation using short authentication strings.
+
+Pond's advantages include:
+
+* Very high security: forward secrecy, metadata protection, resistant to traffic analysis.
+* Pond's hybrid federated and peer-to-peer approach is cool and holds a lot of promise.
+* Written in Go, and thus probably has many fewer security flaws than programs written in C or C++.
+* Pond is written by Adam Langley, an extremely well respected crypto-engineer.
+
+Pond's disadvantages include:
+
+* Uses unique identifiers that are not human-memorable, although this is not a necessary element of the design.
+* Requires that you set up contacts in advance before you can communicate with them (via a Short Authentication String or full public key exchange).
+* Pond is still very difficult to install and use.
+
+Pond is an exciting experiment in how you could build a very secure post-email protocol. Although Pond currently uses non-human-memorable identifiers, Pond could be easily modified to use traditional email username@domain.org identifiers (because it relies on service providers for message reception). The requirement in Pond that both parties pre-exchange keys could also be modified to allow users to set up addresses that could receive messages from anyone, albeit at the cost of likely Spam. Currently, Pond uses Tor to anonymize message routing, but the Tor network was designed for low-latency traffic. Pond could potentially use a more secure anonymization network designed for higher-latency asynchronous messages.
+
+Ultimately, Pond's unique design makes it a very strong candidate for incorporation into a messaging application that could automatically upgrade from email to Pond should it detect that both parties support it.
+
+* Written in: Go
+* Source code: https://github.com/agl/pond
+* License: BSD
+* Platforms: anything you can compile Go on (for command line interface) or anything you can compile Go + Gtk (for GUI interface).
+
+<a name="related-works"></a>Related Works
+===========================================================
+
+There are many technologies that don't belong in this document because (a) they are not trying to make encrypted email-like communication easier, (b) they use some kind of weird proprietary escrow system, or (c) we just don't know enough about them yet. Here is a place to store links to such projects.
+
+* [Virtru](https://www.virtru.com) has a secure email product that relies on a centralized key escrow. For details, see [here](http://www.theregister.co.uk/Print/2014/01/24/ex_nsa_cloud_guru_email_privacy_startup) and [here](https://www.virtru.com/how-virtru-works).
+* [OpenCom](http://opencom.io) is a secure email and email-like communication service in the planning stages.
+* [Ubiquitous Encrypted Email](https://github.com/tomrittervg/uee) is a protocol draft for standards that could lead to universal adoption of encrypted email.
+* [Redecentralize](https://github.com/redecentralize/alternative-internet) has a list of decentralized networks, such as Tor.
diff --git a/menu.txt b/menu.txt
index 52bff05..54f9c36 100644
--- a/menu.txt
+++ b/menu.txt
@@ -9,10 +9,13 @@ docs
hard-problems
limitations
routing
+ secure-email
design
overview
nicknym
soledad
+ tapicero
+ webapp
platform
quick-start
guide