in defense of opportunistic encryption
I’ve always been a secret admirer (and occasional not so secret advocate) of opportunistic encryption. Sometimes less flatteringly called unauthenticated encryption. Or even less flatteringly “not encrypted”. I’ve slowly come around on the uselessness of unauthenticated encryption, but with the caveat that many times it’s not that bad. Here are a few notes on how I made self signed certs work for me. One could always go with one of those free certs, but seriously, fuck the CAbal.
The key word here is opportunity. Basically, it’s entirely optional but we’ll take it if we can get it. This generally means a blind key exchange, where we don’t check the identity of the other end. Self signed or otherwise unverified certs. Hence, unauthenticated.
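For concreteness, a blind key exchange looks something like this sketch using Python’s ssl module. The hostname is a placeholder, and the function name is my own:

```python
import socket
import ssl

# Opportunistic client, sketched: take encryption if we can get it,
# but check nothing about who we're talking to.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # don't match the cert against the hostname
ctx.verify_mode = ssl.CERT_NONE  # self signed or otherwise unverified? fine

def opportunistic_connect(host, port=443):
    # The wire gets encrypted either way; identity is anybody's guess.
    sock = socket.create_connection((host, port))
    return ctx.wrap_socket(sock, server_hostname=host)

if __name__ == "__main__":
    # "example.com" is a placeholder, not any particular site.
    with opportunistic_connect("example.com") as tls:
        print(tls.version(), tls.cipher()[0])
```

Those two context settings are the whole story: the handshake and the crypto are identical to a verified connection, we just never look at the cert.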
The classic argument for opportunistic encryption is that it’s a defense against a passive adversary. “Hey, we can encrypt this, so let’s do so and thwart those undersea cable tappers!” The problem here is we’re starting with what we can defend against, and then building our threat model in response. That’s entirely backwards. Why are we focusing on submarines when practically anybody off the street can tamper with the coffee shop wifi and become an active adversary?
The two weakest points (i.e. the things we should focus on first) are probably the local access point and our upstream ISP. If it’s public wifi, it may have been tampered with or a rogue station. There’s no way we can trust it not to mitm our connection. If it’s a home router running the factory image, it probably has a variety of intentional and unintentional backdoors as well. And our ISP, whoever our local telecom may be, is also likely to be pretty shady. If they can inject ads into a website, they can definitely mitm a connection.
I’ve come around and I agree that opportunistic encryption is, from this perspective, useless. We must assume our adversary has full control over our connection.
All this assumes that when the “DANGER DANGER unknown cert DANGER DANGER” popup dialog box shows up, we stick our heads in the sand and click accept. Let’s try a little harder. Let’s try to authenticate our certs. This isn’t as hard as it seems.
When I set up web mail (mostly to check email on my iPhone), I wanted it to be secure, but I also wanted to use my own cert. It helps to know exactly what iOS does here. Unverified certs by default result in a dialog asking what to do. Safari doesn’t provide a lot of info about unverified certs, but once you accept it, an exception is stored. That cert is now good for that site forever. If a different unverified cert comes along, it will be rejected (with the same prompt as before).
There’s our solution. We only need to visit the site once under controlled conditions to accept the cert. After that, we can be confident we’re not being attacked. I did this at home by setting up an ssh tunnel. (Another time I installed the same cert locally and played games with DNS.) The same technique works for pinning a cert in Firefox.
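The same trust-on-first-use idea fits in a few lines of Python. This is a sketch with names and a pin directory of my own invention, not anything iOS or Firefox actually does internally:

```python
import hashlib
import socket
import ssl
from pathlib import Path

def cert_fingerprint(host, port=443):
    # Fetch the server's cert without verifying it, and hash it.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check_pin(host, fingerprint, pin_dir=Path("pins")):
    # First visit (ideally under controlled conditions, e.g. through an
    # ssh tunnel): store the fingerprint. Every visit after: compare.
    pin_dir.mkdir(exist_ok=True)
    pin_file = pin_dir / host
    if not pin_file.exists():
        pin_file.write_text(fingerprint)
        return "pinned"
    return "ok" if pin_file.read_text() == fingerprint else "mismatch"
```

A "mismatch" is the interesting case: either the site rotated its cert, or somebody is in the middle. Same information the browser exception gives you, minus the scary dialog theater.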
All this isn’t really necessary. The correcter way to do this is to create your own CA cert and install that in the trusted list. I skipped doing it that way because it involves reading the manual. Either way, the point is that our self signed cert is now authenticated. We’re safe.
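In Python terms, the difference between the lazy way and the correcter way is one setting: whether the context is told to trust our cert. A sketch, where "mymail.pem" is a hypothetical copy of the server’s self signed cert (a self signed cert is its own issuer, so it can serve as its own trust root):

```python
import ssl

def authenticated_context(ca_file):
    # Instead of clicking through warnings, install the cert (or our own
    # CA cert) as a trust root. Now only that cert verifies; a mitm cert
    # gets rejected during the handshake, no dialog box required.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.load_verify_locations(cafile=ca_file)
    return ctx

# authenticated_context("mymail.pem") would then be used for every
# connection to the web mail host.
```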
Could we have skipped the previous step? What if we use a friend’s phone to check email? Should we accept whatever cert comes along?
Correct answer: No. There are no shortcuts if you want to be secure.
Heretical answer: Maybe. I’ll note that people engaging in mitm activities can’t easily tell if you’ve been to a site and pinned its cert or not. For example, essentially 100% of the traffic to my web mail is me, from previously authenticated devices. To mitm that connection, self signed though it appears to be, would be as dangerous (in the sense of revealing the adversary’s presence) as doing the same to gmail.
What’s interesting is that if the bad guys do intercept your first contact with a site, they have to keep intercepting all future contact between that device and that site, or they will be detected when the correct cert is revealed. Of course, if you are borrowing your friend’s device, that may never happen and you’ll never catch on. That’s the danger. I did originally say our threat model was full control of the connection. Full control over all connections (past, present, and future) is arguably a different threat model and implies a significantly more capable adversary, but nevertheless, let’s play it safe and assume that’s what we’re dealing with.
For a general purpose site, no, I’m sorry, this technique doesn’t work. I sometimes wonder if I could get away with switching to https and simply demanding that users accept whatever cert they see first and hope for the best. This is a pretty low value site, after all. But to encourage such behavior probably falls under the umbrella of terribly irresponsible. And sadly, the UI/UX for offline or out of band cert checking is so bad it’s useless. It’s so bad I’m practically convinced browser developers are paid to make sure you don’t trust anybody but the anointed CAs. But for the particular case of single user personal (owner operator) sites, it’s not much work at all to authenticate as above (hammering through the bad UI blindly because we know the cert is legit, not because we’re actually verifying that fact).
I’m not sure when it changed to https (good), but for a while the first thing you’d do when getting a new computer was download Firefox (or Mozilla) over http. Ironic that the highly discouraged “hope your first connection isn’t intercepted” technique was exactly the technique used to onboard you to the better CA system.
One unexpected benefit from even completely unauthenticated encryption is that the good guys (or at least, those trying to appear kinda good) don’t know that it’s unauthenticated, and therefore can’t mess with it. AT&T used to run some broken http proxy such that whenever I viewed certain websites, I’d be looking at proxy errors instead. Switching to https always solved that problem, because their dumb proxy wouldn’t touch it. Similarly for ad injecting proxies.
This isn’t at all a security benefit, but even the weakest level of encryption does keep the nominally good guys’ hands off our traffic. I’m still on the side of the fence that says when HTTP 2.0 rolls around, it’d be nice to see it always mandate encryption and simply indicate plaintext in the UI if it’s not authenticated. These are the kinds of mitm “attacks” that I’ve experienced first hand, so anything that prevents them is at the very least a quality of life improvement.
Long term, I’d love to see something like TACK to let others share in the self signed goodness. In the meantime, I’m kinda doing it as a one man show with my own devices.
There’s also that DANE thing which I thought was cool but in practice will likely be a superset of the current CA problems, with both systems operating concurrently. I think it’s dead. I hope it’s dead.
So far, this has focused almost exclusively on HTTP. Opportunistic encryption is actually in widespread use in SMTP via STARTTLS. Of course, since approximately 0% of mail servers verify certs (perhaps because in practice many certs used for mail servers don’t validate), there’s some question of how effective this really is.
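A rough sketch of that MTA behavior using Python’s smtplib (the hostname is a placeholder; real mail servers are their own beasts):

```python
import smtplib
import ssl

def lax_context():
    # Encrypt, verify nothing: approximately what most mail servers do.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def opportunistic_smtp(host, port=25):
    server = smtplib.SMTP(host, port)
    server.ehlo()
    if server.has_extn("starttls"):
        # Upgrade if offered; a cert that doesn't validate is no obstacle.
        server.starttls(context=lax_context())
        server.ehlo()  # re-EHLO over the now-encrypted channel
    return server
```

Swapping in ssl.create_default_context() is all it would take to verify, which is some measure of how little the lax behavior has to do with difficulty, and how much with all those non-validating certs already deployed.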
Trevor Perrin has posted some great thoughts on opportunistic encryption and authentication.