Diffstat (limited to 'doc/troubleshooting')
-rw-r--r--doc/troubleshooting/en.haml3
-rw-r--r--doc/troubleshooting/known-issues.md115
-rw-r--r--doc/troubleshooting/tests.md70
-rw-r--r--doc/troubleshooting/vagrant.md45
-rw-r--r--doc/troubleshooting/where-to-look.md249
5 files changed, 482 insertions, 0 deletions
diff --git a/doc/troubleshooting/en.haml b/doc/troubleshooting/en.haml
new file mode 100644
index 00000000..f0f1359c
--- /dev/null
+++ b/doc/troubleshooting/en.haml
@@ -0,0 +1,3 @@
+- @title = "Troubleshooting"
+
+= child_summaries \ No newline at end of file
diff --git a/doc/troubleshooting/known-issues.md b/doc/troubleshooting/known-issues.md
new file mode 100644
index 00000000..4defc886
--- /dev/null
+++ b/doc/troubleshooting/known-issues.md
@@ -0,0 +1,115 @@
+@title = 'Leap Platform Release Notes'
+@nav_title = 'Known issues'
+@summary = 'Known issues in the Leap Platform.'
+@toc = true
+
+Here you can find documentation about known issues and potential workarounds in the current Leap Platform release.
+
+0.6.0
+==============
+
+Upgrading
+------------------
+
+Upgrade your leap_platform to 0.6 and make sure you have the latest leap_cli.
+
+**Update leap_platform:**
+
+ cd leap_platform
+ git pull
+ git checkout -b 0.6.0 0.6.0
+
+**Update leap_cli:**
+
+If it is installed as a gem from rubygems:
+
+ sudo gem update leap_cli
+
+If it is installed as a gem from source:
+
+ cd leap_cli
+ git pull
+ git checkout master
+ rake build
+ sudo rake install
+
+If it is run directly from source:
+
+ cd leap_cli
+ git pull
+ git checkout master
+
+To upgrade:
+
+ leap --version # must be at least 1.6.2
+ leap cert update
+ leap deploy
+ leap test
+
+If the tests fail, try deploying again. If a test fails because two tapicero daemons are running, you need to ssh into the server, kill all tapicero daemons manually, and then deploy again (the daemon from platform 0.5 would sometimes put its PID file in an odd place).
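+
+A minimal sketch of that manual cleanup (assuming the daemons show up as processes matching `tapicero`; adjust to what you actually find on the server):
+
+    leap ssh <nodename>
+    # on the node:
+    pkill -f tapicero     # kill all running tapicero daemons
+    # remove any stale tapicero PID file you find, then log out and deploy again:
+    exit
+    leap deploy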
+
+OpenVPN
+------------------
+
+On deployment to an openvpn node, if the following happens:
+
+ - err: /Stage[main]/Site_openvpn/Service[openvpn]/ensure: change from stopped to running failed: Could not start Service[openvpn]: Execution of '/etc/init.d/openvpn start' returned 1: at /srv/leap/puppet/modules/site_openvpn/manifests/init.pp:189
+
+this is likely the result of a kernel upgrade that happened during the deployment, which requires the machine to be restarted before this service can start. To confirm this, log in to the node (`leap ssh <nodename>`) and look at the end of `/var/log/daemon.log`:
+
+ # tail /var/log/daemon.log
+ Nov 22 19:04:15 snail ovpn-udp_config[16173]: ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such device (errno=19)
+ Nov 22 19:04:15 snail ovpn-udp_config[16173]: Exiting due to fatal error
+
+If you see this error, simply restart the node.
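+
+For example (assuming you can tolerate a short downtime on this node):
+
+    leap ssh <nodename>
+    reboot
+
+Once the node is back up, run `leap deploy` again so the service starts cleanly.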
+
+CouchDB
+---------------------
+
+At the moment, we strongly advise running only one bigcouch server, for stability reasons.
+
+With multiple couch nodes (not recommended at this time), couchdb syncing can break in some scenarios, such as when certain components are unavailable. When things are brought back to normal, the nodes will attempt to resync all their data shortly after restart, and can fail to complete this process because they run out of file descriptors. A symptom of this is that the webapp won't allow you to register or log in, and `/opt/bigcouch/var/log/bigcouch.log` grows huge with many errors that include (over multiple lines): `{error, emfile}}`. We have raised the file descriptor limits available to bigcouch to try to accommodate this situation, but if you still experience it, you may need to increase the ulimit values in `/etc/sv/bigcouch/run` and restart bigcouch while monitoring the open file descriptors. We hope that the newer couchdb in the next platform release will handle these resources better.
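+
+A sketch of what raising the limit might look like (assuming bigcouch is supervised by runit, as the `/etc/sv` path suggests; the value you need depends on your data set):
+
+    # on the couchdb node:
+    vi /etc/sv/bigcouch/run     # raise the existing "ulimit -n <N>" value
+    sv restart bigcouch         # restart bigcouch under runit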
+
+You can also see the number of file descriptors in use by doing:
+
+    # watch -n1 -d 'lsof -p $(pidof beam) | wc -l'
+
+The command `leap db destroy` will not automatically recreate the databases. You must run `leap deploy` afterwards for this.
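+
+So the full sequence to wipe and recreate the databases is:
+
+    leap db destroy
+    leap deploy     # recreates the databases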
+
+User setup and ssh
+------------------
+
+At the moment, it is only possible to add an admin who will have access to all LEAP servers (see: https://leap.se/code/issues/2280).
+
+The command `leap add-user --self` allows only one SSH key. If you want to specify more than one key for a user, you can do it manually:
+
+ users/userx/userx_ssh.pub
+ users/userx/otherkey_ssh.pub
+
+All keys matching 'userx/*_ssh.pub' will be used for that user.
+
+Deploying
+---------
+
+If you have any errors during a run, please try to deploy again, as this often resolves non-deterministic issues that were not uncovered in our testing. Please re-deploy with `leap -v2 deploy` to get more verbose logs, and capture the complete output to provide to us for debugging.
+
+If your Debian mirror fails during deployment, whether because of a network anomaly or because the mirror itself is out of date, then platform deployment will not succeed properly. Check that the mirror is up and try deploying again once the problem is resolved (see: https://leap.se/code/issues/1091).
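+
+A quick way to confirm that the mirror is the culprit is to run apt by hand on the node; failing mirror URLs will show up directly in the output:
+
+    leap ssh <nodename>
+    apt-get update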
+
+Deployment gives 'error: in `%`: too few arguments (ArgumentError)': this happens when you attempt to deploy before initializing the node. Initialize the node first, then deploy (see: https://leap.se/code/issues/2550).
+
+This release has no ability to custom-configure apt sources or proxies (see: https://leap.se/code/issues/1971).
+
+When running a deploy at verbosity level 2 or above, you will notice Puppet deprecation warnings. These are known, and we are working on fixing them.
+
+IPv6
+----
+
+As of this release, IPv6 is not supported by the VPN configuration. If IPv6 is detected on the client's network, it is blocked, and the client should fall back to IPv4. We plan to add IPv6 support in an upcoming release.
+
+
+Special Environments
+--------------------
+
+When deploying to OpenStack release "nova" or newer, you will need to do an initial deploy, run `leap facts update` once it has finished, and then deploy again (see: https://leap.se/code/issues/3020).
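+
+In other words, the sequence for such nodes is:
+
+    leap deploy
+    leap facts update
+    leap deploy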
+
+It is not possible to actually use the EIP openvpn server on vagrant nodes (see: https://leap.se/code/issues/2401)
diff --git a/doc/troubleshooting/tests.md b/doc/troubleshooting/tests.md
new file mode 100644
index 00000000..b85c19d2
--- /dev/null
+++ b/doc/troubleshooting/tests.md
@@ -0,0 +1,70 @@
+@title = 'Tests and Monitoring'
+@summary = 'Testing and monitoring your infrastructure.'
+@toc = true
+
+## Troubleshooting Tests
+
+At any time, you can run troubleshooting tests on the nodes of your provider infrastructure to check to see if things seem to be working correctly. If there is a problem, these tests should help you narrow down precisely where the problem is.
+
+To run tests on the nodes matching FILTER:
+
+ leap test run FILTER
+
+For example, you can test a single node (`leap test elephant`), a specific environment (`leap test development`), or any tag (`leap test soledad`).
+
+Alternatively, you can run tests on all nodes (probably only useful if you have pinned the environment):
+
+ leap test
+
+The tests that are performed are located in the platform under the `tests` directory.
+
+## Testing with the bitmask client
+
+Download the provider CA:
+
+ wget --no-check-certificate https://example.org/ca.crt -O /tmp/ca.crt
+
+Start bitmask:
+
+ bitmask --ca-cert-file /tmp/ca.crt
+
+## Testing Receiving Mail
+
+Use e.g. swaks to send a test mail
+
+ swaks -f noone@example.org -t testuser@example.org -s example.org
+
+and use your favorite mail client to examine your inbox.
+
+You can also use [offlineimap](http://offlineimap.org/) to fetch mails:
+
+ offlineimap -c vagrant/.offlineimaprc.example.org
+
+WARNING: Use offlineimap *only* for testing/debugging,
+because it will save the mails *decrypted* locally to
+your disk!
+
+## Monitoring
+
+In order to set up a monitoring node, you simply add a `monitor` service tag to the node configuration file. It can be combined with any other service, but we suggest adding it to the webapp node, as this node is already publicly accessible via HTTPS.
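+
+For example, a sketch of what the relevant part of a webapp node configuration might contain (the file name `nodes/webapp1.json`, the node name, and the IP address are hypothetical; other settings are elided):
+
+    {
+      "ip_address": "1.2.3.4",
+      "services": ["webapp", "monitor"]
+    }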
+
+After deploying, this node will regularly poll every node to ask for the status of various health checks. These health checks include the checks run with `leap test`, plus many others.
+
+We use [Nagios](http://www.nagios.org/) together with [Check MK agent](https://en.wikipedia.org/wiki/Check_MK) for running checks on remote hosts.
+
+One nagios installation will monitor all nodes in all your environments. You can log into the monitoring web interface via [https://DOMAIN/nagios3/](https://DOMAIN/nagios3/). The username is `nagiosadmin` and the password is found in the secrets.json file in your provider directory.
+Nagios will send out mails to the `contacts` address provided in `provider.json`.
+
+
+## Nagios Frontends
+
+There are other ways to check and get notified by Nagios besides regularly checking the Nagios web interface or reading email notifications. Check out the [Frontends (GUIs and CLIs)](http://exchange.nagios.org/directory/Addons/Frontends-%28GUIs-and-CLIs%29) page on the Nagios project website.
+A recommended status tray application is [Nagstamon](https://nagstamon.ifw-dresden.de/), which is available for Linux, Mac OS X, and Windows. It not only notifies you of host/service failures; you can also acknowledge or recheck them with it.
+
+### Log Monitoring
+
+At the moment, we use [check-mk-agent-logwatch](https://mathias-kettner.de/checkmk_check_logwatch.html) to search logs for irregularities.
+Logs are parsed for patterns using a blacklist, and matches are stored in `/var/lib/check_mk/logwatch/<Nodename>`.
+
+In order to "acknowledge" a log warning, you need to log in to the monitoring server and delete the corresponding file in `/var/lib/check_mk/logwatch/<Nodename>`. In the future, this should be possible via the nagios web interface.
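+
+A sketch of clearing all pending log warnings for one node this way (the node name is a placeholder):
+
+    # on the monitoring server:
+    rm /var/lib/check_mk/logwatch/node1.example.org/*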
+
diff --git a/doc/troubleshooting/vagrant.md b/doc/troubleshooting/vagrant.md
new file mode 100644
index 00000000..ad284161
--- /dev/null
+++ b/doc/troubleshooting/vagrant.md
@@ -0,0 +1,45 @@
+@title = 'LEAP Platform Vagrant testing'
+@nav_title = 'Vagrant Integration'
+@summary = 'Testing your provider with Vagrant'
+
+Setting up Vagrant for testing the platform
+=============================================
+
+There are two ways you can set up the LEAP platform using Vagrant.
+
+Using the Vagrantfile provided by Leap Platform
+-----------------------------------------------
+
+This is by far the easiest way. It will install a single-node mail server in the default
+configuration with one single command.
+
+Clone the platform with
+
+ git clone https://github.com/leapcode/leap_platform.git
+
+Start the vagrant box with
+
+ cd leap_platform
+ vagrant up
+
+Follow the printed instructions on how to configure your `/etc/hosts`
+in order to use the provider; an example entry is sketched below.
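+
+An `/etc/hosts` entry might look like the following. Treat the IP address and domain as placeholders, and use the values that the Vagrant output tells you:
+
+    10.5.5.44   example.org api.example.org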
+
+You can log in via ssh as the system user `vagrant`, using `vagrant` as the password as well.
+
+There are two users preconfigured:
+
+* `testuser` with password `hallo123`
+* `testadmin` with password `hallo123`
+
+
+Using the leap_cli Vagrant integration
+------------------------------------
+
+Install leap_cli and leap_platform on your host, configure a provider from scratch and use the `leap local` commands to manage your vagrant node(s).
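+
+A typical session might look like this (a sketch; `bumblebee` is a hypothetical node name, and the exact subcommands are covered in the documentation linked below):
+
+    leap local start bumblebee    # boot the vagrant node
+    leap deploy bumblebee         # deploy the platform to it
+    leap local stop bumblebee     # shut it down again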
+
+See https://leap.se/en/docs/platform/development for how to use the leap_cli vagrant
+integration, and https://leap.se/en/docs/platform/tutorials/single-node-email for how
+to set up a single-node mail server.
+
+
diff --git a/doc/troubleshooting/where-to-look.md b/doc/troubleshooting/where-to-look.md
new file mode 100644
index 00000000..fbd95931
--- /dev/null
+++ b/doc/troubleshooting/where-to-look.md
@@ -0,0 +1,249 @@
+@title = 'Where to look for errors'
+@nav_title = 'Where to look'
+@toc = true
+
+
+General
+=======
+
+* Please increase verbosity when debugging / filing issues in our issue tracker. You can do this by adding a verbosity flag to the `leap` command, e.g. `leap -v 2 deploy`.
+
+Webapp
+======
+
+Places to look for errors
+-------------------------
+
+* `/var/log/apache2/error.log`
+* `/srv/leap/webapp/log/production.log`
+* `/var/log/syslog` (watch out for stunnel issues)
+* `/var/log/leap/*`
+
+Is haproxy ok?
+---------------
+
+
+ curl -s -X GET "http://127.0.0.1:4096"
+
+Is couchdb accessible through stunnel?
+---------------------------------------
+
+* Depending on how many couch nodes you have, increase the port for every test
+ (see /etc/haproxy/haproxy.cfg for the server/port mapping):
+
+
+ curl -s -X GET "http://127.0.0.1:4000"
+ curl -s -X GET "http://127.0.0.1:4001"
+ ...
+
+
+Check couchdb ACL as admin
+--------------------------
+
+    mkdir -p /etc/couchdb
+ cat /srv/leap/webapp/config/couchdb.yml.admin # see username and password
+ echo "machine 127.0.0.1 login admin password <PASSWORD>" > /etc/couchdb/couchdb-admin.netrc
+ chmod 600 /etc/couchdb/couchdb-admin.netrc
+
+ curl -s --netrc-file /etc/couchdb/couchdb-admin.netrc -X GET "http://127.0.0.1:4096"
+ curl -s --netrc-file /etc/couchdb/couchdb-admin.netrc -X GET "http://127.0.0.1:4096/_all_dbs"
+
+Check couchdb ACL as unprivileged user
+---------------------------------------
+
+ cat /srv/leap/webapp/config/couchdb.yml # see username and password
+ echo "machine 127.0.0.1 login webapp password <PASSWORD>" > /etc/couchdb/couchdb-webapp.netrc
+ chmod 600 /etc/couchdb/couchdb-webapp.netrc
+
+ curl -s --netrc-file /etc/couchdb/couchdb-webapp.netrc -X GET "http://127.0.0.1:4096"
+ curl -s --netrc-file /etc/couchdb/couchdb-webapp.netrc -X GET "http://127.0.0.1:4096/_all_dbs"
+
+
+Check client config files
+-------------------------
+
+ https://example.net/provider.json
+ https://example.net/1/config/smtp-service.json
+ https://example.net/1/config/soledad-service.json
+ https://example.net/1/config/eip-service.json
+
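+You can fetch each of these with curl to verify that they are served correctly (`-k` skips certificate verification; drop it once the provider CA is installed locally):
+
+    curl -k https://example.net/provider.json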
+
+Soledad
+=======
+
+ /var/log/soledad.log
+
+
+Couchdb
+=======
+
+Places to look for errors
+-------------------------
+
+* `/opt/bigcouch/var/log/bigcouch.log`
+* `/var/log/syslog` (watch out for stunnel issues)
+
+
+
+Bigcouch membership
+-------------------
+
+* All nodes configured for the provider should appear here:
+
+<pre>
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc -X GET 'http://127.0.0.1:5986/nodes/_all_docs'
+</pre>
+
+* All configured nodes should show up under "cluster_nodes", and the ones online and communicating with each other should appear under "all_nodes". This example output shows the configured cluster nodes `couch1.bitmask.net` and `couch2.bitmask.net`, but `couch2.bitmask.net` is currently not accessible from `couch1.bitmask.net`.
+
+
+<pre>
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc 'http://127.0.0.1:5984/_membership'
+ {"all_nodes":["bigcouch@couch1.bitmask.net"],"cluster_nodes":["bigcouch@couch1.bitmask.net","bigcouch@couch2.bitmask.net"]}
+</pre>
+
+* Sometimes a `/etc/init.d/bigcouch restart` on all nodes is needed to register new nodes.
+
+Databases
+---------
+
+* The following output shows all necessary DBs that should be present. Note that the `user-0123456....` DBs are the data stores for particular users.
+
+<pre>
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc -X GET 'http://127.0.0.1:5984/_all_dbs'
+ ["customers","identities","sessions","shared","tickets","tokens","user-0","user-9d34680b01074c75c2ec58c7321f540c","user-9d34680b01074c75c2ec58c7325fb7ff","users"]
+</pre>
+
+
+
+
+Design Documents
+----------------
+
+* Is the `User` design document available?
+
+
+<pre>
+ curl -s --netrc-file /etc/couchdb/couchdb.netrc -X GET "http://127.0.0.1:5984/users/_design/User"
+</pre>
+
+Is the couchdb cluster backend accessible through stunnel?
+-------------------------------------------------------
+
+* Find out how many connections are set up for the couchdb cluster backend:
+
+<pre>
+ grep "accept = 127.0.0.1" /etc/stunnel/*
+</pre>
+
+
+* Now connect to each of those local endpoints to see if they are up. All these tests should return "localhost [127.0.0.1] 4000 (?) open":
+
+<pre>
+ nc -v 127.0.0.1 4000
+ nc -v 127.0.0.1 4001
+ ...
+</pre>
+
+
+MX
+==
+
+Places to look for errors
+-------------------------
+
+* `/var/log/mail.log`
+* `/var/log/leap_mx.log`
+* `/var/log/syslog` (watch out for stunnel issues)
+
+Is couchdb accessible through stunnel?
+---------------------------------------
+
+* Depending on how many couch nodes you have, increase the port for every test
+ (see /etc/haproxy/haproxy.cfg for the server/port mapping):
+
+
+ curl -s -X GET "http://127.0.0.1:4000"
+ curl -s -X GET "http://127.0.0.1:4001"
+ ...
+
+Query leap-mx
+-------------
+
+* for a user account
+
+
+<pre>
+ postmap -v -q "joe@dev.bitmask.net" tcp:localhost:2244
+ ...
+ postmap: dict_tcp_lookup: send: get joe@dev.bitmask.net
+ postmap: dict_tcp_lookup: recv: 200
+ ...
+</pre>
+
+* for a mail alias
+
+
+<pre>
+ postmap -v -q "joe@dev.bitmask.net" tcp:localhost:4242
+ ...
+ postmap: dict_tcp_lookup: send: get joe@dev.bitmask.net
+ postmap: dict_tcp_lookup: recv: 200 f01bc1c70de7d7d80bc1ad77d987e73a
+ postmap: dict_tcp_lookup: found: f01bc1c70de7d7d80bc1ad77d987e73a
+ f01bc1c70de7d7d80bc1ad77d987e73a
+ ...
+</pre>
+
+
+Check couchdb ACL as unprivileged user
+---------------------------------------
+
+
+
+ cat /etc/leap/mx.conf # see username and password
+ echo "machine 127.0.0.1 login leap_mx password <PASSWORD>" > /etc/couchdb/couchdb-leap_mx.netrc
+ chmod 600 /etc/couchdb/couchdb-leap_mx.netrc
+
+ curl -s --netrc-file /etc/couchdb/couchdb-leap_mx.netrc -X GET "http://127.0.0.1:4096/_all_dbs" # pick one "user-<hash>" db
+ curl -s --netrc-file /etc/couchdb/couchdb-leap_mx.netrc -X GET "http://127.0.0.1:4096/user-de9c77a3d7efbc779c6c20da88e8fb9c"
+
+
+* You may want to check multiple times, because 127.0.0.1:4096 is haproxy load-balancing across the different couchdb nodes; see the sketch below.
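+
+A quick way to hit the load balancer repeatedly, so that each backend couchdb node gets exercised:
+
+    for i in $(seq 1 5); do
+      curl -s --netrc-file /etc/couchdb/couchdb-leap_mx.netrc -X GET "http://127.0.0.1:4096"
+    done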
+
+
+Mailspool
+---------
+
+* Does any file stay in the leap_mx mailspool for longer than a few seconds?
+
+
+
+<pre>
+ ls -la /var/mail/vmail/Maildir/cur/
+</pre>
+
+* Do any mails stay in the postfix mailspool for longer than a few seconds?
+
+<pre>
+ mailq
+</pre>
+
+
+
+Testing mail delivery
+---------------------
+
+ swaks -f alice@example.org -t bob@example.net -s mx1.example.net --port 25
+ swaks -f varac@cdev.bitmask.net -t varac@cdev.bitmask.net -s chipmonk.cdev.bitmask.net --port 465 --tlsc
+ swaks -f alice@example.org -t bob@example.net -s mx1.example.net --port 587 --tls
+
+
+VPN
+===
+
+Places to look for errors
+-------------------------
+
+* `/var/log/syslog` (watch out for openvpn issues)
+
+