Preventing Private Key Theft with a Reverse Proxy
The use of TLS, and therefore HTTPS, depends heavily on asymmetric cryptography and its public/private key pairs. Such a key pair, together with a CA-signed certificate, enables the browser to verify the identity of the server. Before the browser accepts the connection as authentic, the server has to prove possession of the private key associated with the public key in the certificate. Since the private key is supposed to remain private, no one can impersonate the server, unless they gain access to the private key, or obtain a freshly generated key pair with a mis-issued certificate. Depending on the chosen cipher suites, the public/private key pair may be used to guarantee confidentiality as well, making the private key even more sensitive.
When control over a private key is lost, the right course of action is to revoke the corresponding certificate with your CA, so that browsers no longer accept it as valid. Unfortunately, revocation is a step that is often omitted, or that requires the payment of an additional fee. To make matters worse, even when revocation is performed correctly, chances are that browsers don't check revocation information anyway, for performance reasons. Sadly, this is the current state of affairs, although the situation may improve in the future, as OCSP stapling becomes more widely deployed.
Isolating Private Keys with a Reverse Proxy
One straightforward way to avoid having to deal with the whole revocation mess is to invest more effort in theft prevention. One way to make theft of the private key more difficult is to keep it out of reach of the Web server running the potentially vulnerable Web site. The expensive, enterprise-grade way of doing this is a Hardware Security Module (HSM), a dedicated device into which you offload your keys. A cheaper, smaller-scale alternative for isolating your TLS keys is a TLS-terminating reverse proxy, which accepts incoming HTTPS connections and forwards the actual requests to the appropriate internal Web server. These internal servers can be physical machines in the DMZ, but the same approach applies in a virtualized environment, where one container acts as a reverse proxy and forwards requests to other containers.
The reverse proxy handles HTTPS connections with the browser, and forwards requests to the internal (virtualized) Web server
In this scenario, all sensitive TLS key material is stored on the reverse proxy, which only forwards requests and hosts no sites itself. The actual Web sites run on different server instances, in a different environment, where the sensitive TLS keys are not accessible. If one of these sites is compromised, the damage stays limited to that site, or to other sites running in the same environment, and the TLS keys remain safe.
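As a concrete sketch of such a setup (the certificate paths and the internal address 10.0.0.10 are assumptions for illustration, not part of the original configuration), an Nginx server block that terminates TLS and forwards requests to an internal server could look like this:

```nginx
server {
    listen 443 ssl;
    server_name www.websec.be;

    # The TLS key material lives only on this proxy, out of reach of the sites
    ssl_certificate     /etc/nginx/tls/www.websec.be.crt;
    ssl_certificate_key /etc/nginx/tls/www.websec.be.key;

    location / {
        # Forward the decrypted request to the internal Web server
        proxy_pass https://10.0.0.10;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The `X-Forwarded-Proto` header lets the internal server know that the original connection arrived over HTTPS, which helps when the site generates absolute URLs.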
If you take this approach, one aspect deserves some additional discussion: the communication channel between the reverse proxy and the internal Web servers. It is recommended to run this channel over TLS as well. A first reason is that some Web sites require explicit configuration of the base URL, and make connections to themselves to run certain diagnostics. Running the Web site over HTTPS on the public part of your network, but over HTTP in the private part, may cause interoperability issues. Second, it has become good practice to run internal connections over secure channels as well, even within your own private network. In practice, this means that you will also have to configure your internal servers to use HTTPS. Since they are shielded from external connections by the reverse proxy, you can easily use self-signed certificates for this purpose, and make your reverse proxy accept those certificates.
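As an illustration (the hostname internal-web and the file names are hypothetical), a self-signed certificate for an internal server can be generated with OpenSSL:

```shell
# Generate a 2048-bit RSA key and a self-signed certificate, valid for one
# year, for a hypothetical internal server named "internal-web".
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout internal-web.key \
  -out internal-web.crt \
  -days 365 \
  -subj "/CN=internal-web"

# Inspect the subject of the freshly generated certificate.
openssl x509 -in internal-web.crt -noout -subject
```

On the proxy side, Nginx can be instructed to accept this specific certificate with the `proxy_ssl_trusted_certificate` and `proxy_ssl_verify` directives.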
The Icing on the Cake with Let’s Encrypt
When you're using the freely available certificates from Let's Encrypt (LE), you can request them automatically using the LE client. The LE client identifies the server administrator by generating a public/private key pair, which is used to renew certificates once control over a domain has been asserted. Like the TLS public/private key pairs, the private key of the LE key pair is very sensitive, and should be protected against unauthorized access. This will not be the case if the LE key pair is stored on the Web server, alongside the potentially vulnerable Web sites.
By running a TLS-terminating reverse proxy, you are able to store your LE key pair on the reverse proxy, well out of reach of any compromised Web site running on an internal Web server. The only disadvantage of the reverse proxy setup is that you will need to explicitly configure the proxy to support the Let's Encrypt challenge mechanism, where a challenge is placed in the Web root of the domain. This is done by mapping the .well-known folder to the local filesystem, and redirecting all other requests to HTTPS, as shown in the Nginx example below.

```nginx
server {
    listen 80;
    server_name www.websec.be;

    location / {
        return 301 https://www.websec.be$request_uri;
    }

    location ^~ /.well-known {
        root /var/www/letsencrypt/;
    }
}
```
Supporting Let’s Encrypt’s webroot feature with a TLS-terminating reverse proxy
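With that mapping in place, a certificate can be requested from the proxy itself through Let's Encrypt's webroot mechanism. As a sketch, assuming certbot as the LE client (the webroot path and domain follow the Nginx example above):

```shell
# Ask certbot to place its challenge files under the mapped webroot folder
# and to obtain a certificate for the domain served by the proxy.
certbot certonly --webroot -w /var/www/letsencrypt -d www.websec.be
```

Running this from a cron job on the proxy keeps both the LE account key and the resulting TLS keys confined to the proxy machine.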
To summarize, it is good security practice to keep your TLS public/private key pair out of reach of a potentially vulnerable Web site. By keeping the sensitive, CA-signed TLS key material on a reverse proxy, you limit the consequences of a compromised Web site. This becomes even more relevant if you use Let's Encrypt, which uses a separate public/private key pair to associate domains with the server administrator.
If you have anything to add here, or if you have a different approach to protect your TLS key material, feel free to share in the comments below.
Philippe De Ryck