Let's Encrypt with Nginx

July 31, 2016

Let's Encrypt is a game changer for websites.

I thought it was a good idea when Let's Encrypt introduced the notion of a free certificate authority making SSL more accessible to the public in early 2015. But I didn't delve deeper, because I was already using sslmate to somewhat automate my certificate management. Then, as I was setting up a new domain, I noticed that Dreamhost was issuing free SSL certificates for any domain, and I thought, "WHAT?? I want that!"

Updating Nginx to use Let's Encrypt Certificates

Getting Let's Encrypt is fairly easy on a Debian/Ubuntu system:

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Get the certificate

sudo /opt/letsencrypt/letsencrypt-auto certonly --agree-tos --webroot -w /path/to/public/www/ -d example.com -d www.example.com

This will drop files into /etc/letsencrypt/live/yoursite.com.
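
If you are curious, that directory contains the standard Let's Encrypt files (yoursite.com is a placeholder):

ls /etc/letsencrypt/live/yoursite.com/
# cert.pem  chain.pem  fullchain.pem  privkey.pem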

After doing this, you will have to modify your nginx configuration. Add this to the server block where SSL is defined:

ssl_certificate /etc/letsencrypt/live/yoursite.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yoursite.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/yoursite.com/chain.pem;

Obviously, replace yoursite.com with, well, your domain name.

With a sudo service nginx reload, you are up and running on your new free Let's Encrypt SSL cert!
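
If you want to play it safe, test the configuration syntax first; nginx -t validates it without touching the running server:

sudo nginx -t && sudo service nginx reload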

Auto-renewal

A Let's Encrypt cert expires after 90 days. While they email you a reminder to renew, it is best to just automate the process.

You can set up a crontab for root like this:

22 4 * * 1 /opt/letsencrypt/letsencrypt-auto renew --quiet --post-hook "/usr/sbin/service nginx reload" >> /var/log/le-renew.log

This will run at the 22nd minute of the 4th hour on the first day of the week (so, once a week, at 04:22 every Monday). You can tweak it to your liking. The reload in --post-hook is skipped if no certificate was renewed. (Thanks for the update, Bjørn!) According to the certbot documentation, any certificate that expires in less than 30 days will be renewed. If you have more complex restart tasks, write a script and pass it as the --post-hook argument, as sketched below.
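
A hypothetical /usr/local/bin/le-reload.sh (the name and the extra service are placeholders of mine) might look like:

#!/bin/sh
# Reload every service that terminates TLS with this certificate.
# Certbot only runs the post-hook when a certificate was actually renewed.
/usr/sbin/service nginx reload
#/usr/sbin/service postfix reload   # uncomment if postfix shares the cert

Then point the crontab entry at it with --post-hook "/usr/local/bin/le-reload.sh".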

Game Changer

Gone are the days of expired certs, of unhappy customers seeing a big warning screen because a sysadmin forgot about the renewal date, and of paying for certificates! Wildcard certificates? Who needs 'em anymore? Just use the --expand flag with letsencrypt to add domains as you need them. Orgs like Comodo and Verisign are about to see a massive drop in SSL certificate income. We'll talk of the golden era when we actually paid $399 for an SSL cert and much of the web was unencrypted.
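
For example, a sketch assuming the same webroot setup as above (blog.example.com is just a stand-in for your newly added domain):

sudo /opt/letsencrypt/letsencrypt-auto certonly --agree-tos --webroot -w /path/to/public/www/ --expand -d example.com -d www.example.com -d blog.example.com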

The only remaining use I can think of for commercial CAs is their "Extended Validation" services for high-security firms like banks. Other than that, most any webapp can use this free, automated, open service to SECURE ALL THE THINGS.

GPG Key Management

August 03, 2014

GPG2 is a brilliant encryption tool, but so rarely used.

It's rarely used, mostly because it's difficult to get buy-in from all the people with whom you want to securely communicate.

But if you use it and are lucky enough to find peers who use it as well, it's a great boon for secure private communication and data storage.

Keep Your Master Key Safe

This is a key management technique I learned while working at UC Berkeley: Keep your master signing key away from your working keyring and use it only when you need it.

Operations that use your master key include signing someone else's key, adding subkeys, and performing revocations. Guard your master key!

Here's how:

If you only generated default keys, you must create a new signing subkey:

gpg --edit-key YOURMASTERKEYID
addkey

Choose the "RSA (sign only)" key type, choose 4096 bits, no expiry. When it's done, save the key:

save

Back up your keyring to an off-system location, like an encrypted USB drive.

Seriously, back it up: you will delete your master key in the steps below, so don't cry if you fail to back up your keyring, hastily execute the commands, and lose your key pair.

cp -R ~/.gnupg /Volumes/usbmedia/gnupg

Now do the following:

gpg --list-secret-keys
gpg --export-secret-subkeys SUBKEYID1! SUBKEYID2! > subkeys
# (NOTE: The exclamation marks ! are significant)
gpg --export YOURMASTERKEYID > pubkeys

At this point you have backups of your secret subkeys and your public key.

Remove your master key:

gpg --delete-secret-key YOURMASTERKEYID
gpg --import pubkeys subkeys
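
You can verify the master key is really gone; gpg marks a missing master secret key with a # after sec, something like:

gpg --list-secret-keys
# sec#  4096R/YOURMASTERKEYID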

Now you have a key pair you can use on multiple computers. If you later need to perform an action that requires your master key (e.g. signing someone's imported public key to certify trust), use the backup keyring:

gpg --homedir /path/to/backup --sign-key IMPORTEDKEYID

All other actions, like signing messages and encrypting, can use your master-keyless default keyring.

How This Helps

Your secret subkeys are always linked to your master key. If your master key is compromised, you not only lose the web of trust you've built with others, but you also damage the reputation of anyone who has already signed your key. By separating your master key from your signing and encryption subkeys, you keep more control over your key pair, with the master key safely offline.

Granted, if someone steals your working keyring, it does not stop that person from decrypting your past messages or forging signatures with those subkeys. However, a secure offline master key allows you to revoke compromised or stolen subkeys while retaining your web of trust.
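
As a sketch, revoking a compromised subkey from the offline backup keyring might look like this (key 1 selects the first subkey in gpg's --edit-key listing):

gpg --homedir /path/to/backup --edit-key YOURMASTERKEYID
key 1
revkey
save

Then export the updated public key and redistribute it (e.g. to a keyserver) so others see the revocation.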

SSL PFS on Nginx

Update: 3/10/2016: Cipher list matches recommendations from https://wiki.mozilla.org/Security/Server_Side_TLS.

Update: 5/29/2015: Modified cipher list for high security.

Update: 12/9/2014: RC4 has been identified by SSL Labs as a weak point in SSL implementations, so the example nginx configuration below now disables RC4 ciphers. With this updated configuration, the horribly outdated IE6 and IE8 on Windows XP will no longer work with your site.

Also, please upgrade to the latest OpenSSL to ensure that TLS POODLE is mitigated via TLS_FALLBACK_SCSV downgrade-attack prevention.


August 2, 2014

The configuration below will make your SSL web server much more robust. Some parts of it, such as the OCSP stapling directives, require nginx version 1.3.7+.

Straight to the point: Here is a nginx configuration snippet with HSTS, OCSP, and PFS ciphers:

server {

    listen 443;
    server_name www.yoursitename.com;
    ssl on;

    # HSTS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    # hide server version
    server_tokens off;

    ssl_certificate ssl-cert-with-ca-chain.crt;
    ssl_certificate_key yourserver.key;

    # OCSP stapling caches OCSP records from URL in ssl-cert-with-ca-chain
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate ssl-cert-with-ca-chain.crt;
    resolver 8.8.8.8 4.2.2.2;

    # Enables session resumption
    ssl_session_timeout 5m;
    ssl_session_cache shared:nginxSSL:10m;

    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    # ... the rest of your configuration

}

SSL Perfect Forward Secrecy (PFS)

The perfect forward secrecy cipher list defined in ssl_ciphers lets the browser and server generate session keys in a way that only the client and server can obtain them. If someone collects and saves your SSL traffic, he or she cannot later steal the server's private key and decrypt the past communications. Traffic sent in the past is still safe. (Note: man-in-the-middle attacks are still a threat if the private key is stolen.)

PFS future-proofs your SSL web server.

HTTP Strict Transport Security (HSTS)

add_header Strict-Transport-Security tells client browsers to use only HTTPS connections to your site. This, coupled with an nginx permanent redirect of all HTTP traffic to HTTPS, makes for an SSL server that is resistant to spoofing:

server {
    listen 80;

    location / {
        rewrite     ^(.*)   https://yoursitename.com$1 permanent;
    }

    # ...
}
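
To confirm the redirect works, a quick check with curl should show the 301 (output abbreviated):

curl -sI http://yoursitename.com/
# HTTP/1.1 301 Moved Permanently
# Location: https://yoursitename.com/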

Online Certificate Status Protocol Stapling (OCSP Stapling)

The ssl_stapling directive, with the three lines that follow it in the above config, enables server-side caching of OCSP information. It will not work if a trusted Certificate Authority has not signed your SSL certificate. OCSP stapling gives clients an alternative to pinging your CA for certificate status: the web server caches the response from the CA that issued the certificate and serves it with the handshake. To enable this in nginx, you must include the CA certificate chain in your certificate file.
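
Building that combined file is just concatenation, server certificate first, then the intermediate(s); the file names here are placeholders:

cat yourserver.crt intermediate-ca.crt > ssl-cert-with-ca-chain.crt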

Test your SSL

You can always see how secure your web site SSL implementation is by running the Qualys SSL Labs SSL Server Test.
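
You can also verify OCSP stapling from the command line; openssl's s_client requests a stapled response with -status (the first request after a reload may come back empty while nginx fetches the OCSP response in the background):

echo | openssl s_client -connect www.yoursitename.com:443 -status 2>/dev/null | grep 'OCSP Response Status'
# OCSP Response Status: successful (0x0)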


GNU Screen Status Bar

December 20, 2011

If you live in a terminal, you are likely using GNU Screen.

This is a pretty cool way to set a status bar at the bottom of your unix screen session. Add this to your .screenrc file (all one line):

[dennis@caffeinatedcode ~]% vim .screenrc

caption always "%{=b dw}:%{-b dw}:%{=b dk}[ %{-b dw}%{-b dg}$USER%{-b dw}@%{-b dg}%H %{=b dk}] [ %= %?%{-b dg}%-Lw%?%{+b dk}(%{+b dw}%n:%t%{+b dk})%?(%u)%?%{-b dw}%?%{-b dg}%+Lw%? %{=b dk}]%{-b dw}:%{+b dw}:"

Hofstadter's Law

October 15, 2010

It always takes longer than you expect, even when you take into account Hofstadter's Law.

-Douglas Hofstadter

Online Backups for the Truly Paranoid

November 28, 2009

I like paranoia in design. Well, I take that back. I don't like it when it inhibits programming experimentation and creativity, but I do like it when it comes to services, and most especially when it comes to backup.

I wanted to write about my experiences with consumer offsite backup services (e.g. Mozy, Carbonite, Jungle Disk) as well as the plain practice of having a redundant storage device onsite. But all that was side-tracked when I recently needed to quickly backup my servers, and discovered tarsnap.

Tarsnap was created by Dr. Colin Percival, the FreeBSD Security Officer. He also worked on the utilities portsnap and freebsd-update. All of these tools run from the command line and greatly simplify the maintenance (and now backup) of unix systems.

The things I like about tarsnap are encompassed in its listed design features:

  • It encrypts information before sending it to the Amazon Cloud (AWS), so if a person somehow gets access to the cloud servers, the information is unreadable. Even metadata is unreadable (filenames, sizes, names of the backups, etc.).
  • It's easy to learn and use, and quite scriptable.
  • It breaks your backup into variable-length blocks, and keeps track of these, so if another archive contains the same data, that same block does not get re-uploaded. As long as any backup references that piece of info, it'll remain stored and not be deleted. It's like storing incremental changes, but so much cooler.
  • It's quite cheap. Especially if used for server backups, which typically won't take terabytes of space. 300 picodollars per byte transferred ($0.30/GB), and 300 picodollars per byte per month stored (again, $0.30/GB-month).

Also, other than "security, flexibility, efficiency, utility", I personally liked that:

  • The client code is open to peer review.
  • It uses AWS! Geo-replication concerns are no longer a problem.
  • It runs on almost any OS, yes, even Cygwin.
  • You can secure the backup keys so that, if a person breaks into your system and starts deleting everything, they cannot also read or delete your backed up data. Even the backup key security is fascinating!
  • Not to mention the author was surprisingly responsive via email to some questions I had about the web-based reporting and command-line options.

The tool has basically addressed almost every concern I've had about backups.

Most early "backup service" providers would simply give you space at a cost, but had very little to say about their data loss/breach liability or who had access to their systems. Others would claim their service is "secure using 128-bit encryption" but that only meant they installed an SSL cert for transfer; backups were still unencrypted on disk. Then there are those who tout The Cloud, and how much safer it is, without a hint about data geo-redundancy (or if they have more than one data center).

But with tarsnap, I just install it, create a key, split that key into read, write, and delete keys (encrypting the read and delete keys), and with one command I'm securely backing up entire directories. Online. In the Amazon™ cloud.
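
The key setup, for example, looks something like this (the paths and email are placeholders from my own setup):

tarsnap-keygen --keyfile /usr/tarsnap.key --user you@example.com --machine yourserver
tarsnap-keymgmt --outkeyfile /usr/tarsnap-write.key -w /usr/tarsnap.key
tarsnap-keymgmt --outkeyfile /usr/tarsnap-readdelete.key --passphrased -r -d /usr/tarsnap.key

And the backup itself is that one command: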

tarsnap --keyfile /usr/tarsnap.key -c -f backup-2009-11-27 /usr/home /usr/local/etc /etc

How much easier could that be? And if your backups aren't gigabytes large, the small pre-payment of the online service could last a very long time.
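
Restoring is just as simple, as a sketch (archive name from the example above; tarsnap, like tar, strips the leading slash from stored paths):

tarsnap --keyfile /usr/tarsnap.key --list-archives
tarsnap --keyfile /usr/tarsnap.key -x -f backup-2009-11-27 usr/home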

My only concern is that the tarsnap server is, as of this writing, a single point of failure. We have no option of pointing our own tarsnap interface at our own personal AWS accounts. So 1) we have to trust that the data is indeed being sent to AWS by Dr. Percival (is that too paranoid?), and 2) we have to hope that the tarsnap server is fault tolerant and can be restored quickly. Granted, this problem exists for any online backup service unless you write your own. We depend on third-party uptime for any service, so it boils down to who has thought it through and addressed our backup concerns.

For now, I am glad "production" isn't the only place some important data lives. I am glad to not have to manually tar.gz files and move them to my workstation to be picked up by my desktop backup scheduler. With tarsnap, I was able to upgrade from FreeBSD 7.2 to 8.0-RELEASE and not worry (too much) about having to rebuild the server in case all failed. (I didn't need to.)

Online backup for the truly paranoid. And really, who backs stuff up if they aren't a little paranoid?
