Survey is good. Thesis is strange.

February 14, 2012 21 comments

(UPDATE: I’ve written quite a bit more about this subject, in the wake of Nadia Heninger posting some fantastic data.)


Recently, Arjen Lenstra, James Hughes, et al. released some excellent survey work in their “Ron Was Wrong, Whit Was Right” paper regarding the quality of keys exposed on the global Internet. Rather than just assume proper generation of public key material, this survey looked at 11.7 million public keys in as much depth as possible.

The conclusion they reached was that RSA, because it has two secrets (the two primes, p and q), is “significantly riskier” than systems using “single-secrets” like (EC)DSA or ElGamal.

What?

Let me be clear. This is a mostly great paper, with lots of solid data on the state of key material on the Internet. We’re an industry that really operates without data, and with this work, we see (for example) that there’s not an obvious Debian-class time bomb floating around out there.

But there’s just no way we get from this survey work, to the thesis that surrounds it.

On the most basic level, risk in cryptography is utterly dominated, not by cipher selection, but by key management. The study found 12,720 crackable public keys. It also found approximately 2.94 million expired certificates. And while the study didn’t discuss the number of certificates that had no reason to be trusted in the first place (being self-signed), it did find 5.4M PGP keys.

The strength of your public key does not matter if nobody knows to demand it. What the data from this survey says, unambiguously, is that most keys on the Internet today have no provenance that can be trusted, not even through whatever value the CA system affords. Key Management — as Whit Diffie himself has said — is The Hard Problem now for cryptography.

Whether you use RSA or DSA or ECDSA, that differential risk is utterly dwarfed by our problems with key management.

Is all this risk out of scope? Given that public key cryptography is itself a key management technology for symmetric algorithms like AES or 3DES, and that the paper is specifically arguing for one technology over another, it’s hard to argue that. But suppose we were to say key management is orthogonal to cryptographic analysis, like buffer overflows or other implementation flaws.

This is a paper based on survey work, in which the empirically validated existence of an implementation flaw (12,720 crackable public keys) is being used to justify a design bias (don’t use a multi-secret algorithm). The argument is that multi-secret algorithms cause crackable public keys.
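For reference, the attack that makes those keys “crackable” boils down to shared prime factors: generate two RSA moduli with bad entropy, let them share a prime, and a single GCD recovers it. A toy sketch (absurdly small primes here; the survey ran batch-GCD over millions of real moduli):

from math import gcd

# Two moduli that accidentally share the prime p. Real keys are 1024+ bits;
# these are absurdly small, just to show the mechanics.
p, q1, q2 = 101, 103, 107
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)          # recovers the common prime directly
assert shared == p
print("n1 factors:", shared, n1 // shared)
print("n2 factors:", shared, n2 // shared)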

You don’t just get to cite implementation flaws when they’re convenient.

More importantly, correlation is not causation. That we see a small number of bad keys at the same time we see RSA does not mean the latter caused the former. Alas, this isn’t even correlation. Out of 6,185,372 X.509 certificates seen by the survey, 6,185,230 of them used RSA. That’s all of 142 certificates that didn’t. We clearly do not have a situation where the choice of RSA vs. otherwise is random. Indeed, cipher selection is driven by network effects, historical patents, and government regulation, not the roll of the dice even mere correlation would require.

Finally, if we were to say that a cipher that created 12,720 broken instances had to be suspect, could we ever touch ECDSA again? Sony’s PlayStation 3 ECDSA fixed-nonce bug hit millions of systems around the world. That’s the fault with this whole “multi-secret” complaint — every cipher has moving parts that can break down. “Nothing is foolproof because fools are so ingenious”, as they say. Even if one accepted the single-sample link between multi-secret and RSA, the purportedly safe “single-secret” systems have also failed in the past, in considerably larger numbers when you get down to it.

I don’t mean to be too hard on this paper, which again, has some excellent data and analysis inside. I’ve been strongly advocating for the collection of data in security, as I think we operate more on assumption and rumor than we’d like to admit. The flip side is that we must take care not to fit our data to those assumptions.


Apparently this paper escaped into the popular press.

I think the most important question to be asked is — if 0.2% of the global population of RSA moduli are insecure, what about those attached to unexpired certificates with legitimate issuers? Is that also 0.2%, or less?

(There’s no security differential if there was no security to begin with.)

Categories: Security

How The WPS Bug Came To Be, And How Ugly It Actually Is

January 26, 2012 6 comments

FINDING: WPS was designed to be secure against a malicious access point. It isn’t: A malicious AP recovers the entire PIN after only two protocol exchanges. This server-to-client attack is substantially easier than the client-to-server attack, with the important caveat that the latter does not require a client to be “lured”.

SUMMARY: Stefan Viehböck’s WPS vuln was one of the bigger flaws found in all of 2011, and should be understood as an attempt to improve the usability of a security technology in the face of severe and ongoing deficiencies in our ability to manage keys (something we’re fixing with DNSSEC). WPS seems to have started as a mechanism to integrate UI-less consumer electronics into the Wi-Fi fold; it grew to be a primary mechanism for PCs to link up. WPS attempted to use an online “proof of possession” process to mutually authenticate and securely configure clients for APs, including the provisioning of WEP/WPA/WPA2 secrets. In what appears to have been a mechanism to prevent malicious “evil twin” APs from extracting PINs from rerouted clients, the already small PIN was split in half. Stefan noted that this made brute force efforts much easier, as the first half could be guessed independently of the second half. While rate limiting at the AP server is an obvious defense, it turns out to be the only defense, as both not continuing the protocol, and providing a fake message, are trivially detectable signs that the half-PIN was guessed incorrectly. Meanwhile, providing the legitimate response even in the face of an incorrect PIN guess actually leaks enough data for the attacker to brute force the PIN. I further note that, even for the purposes of mutual authentication, the split-PIN is problematic. Servers can immediately offline brute force the first half of the PIN from the M4 message, and after resetting the protocol, they can generate the appropriate messages to allow them to offline brute force the second half of the PIN. To conclude, I discuss the vagaries of just how much needs to be patched, and thus how long we need to wait, before this vulnerability can actually be addressed server side. Finally, I note that we can directly trace the existence of this vulnerability to the unclear patent status of Secure Remote Passwords (SRP), which really is the canonical way to do Password Authenticated Key Exchange (PAKE).

Note: I really should be writing about DNSSEC right now. What’s interesting is, in a very weird way, I actually am.


SECTIONS:
Introduction
On Usability
Attacks, Online and Off
A Simple Plan
Rate Limiting: The Only Option?
It Gets (Unexpectedly) Worse
On Patching
Right Tech, Wrong Law


INTRODUCTION

Security is not usually a field with good news. Generally, all our discussions center around the latest compromise, the newest vulnerabilities, or even the old bugs that stubbornly refuse to disappear despite our “best practices”. I’m somewhat strange, in that I’m the ornery optimist that’s been convinced we can, must, and will (three different things) fix this broken and critical system we call the Internet.

That being said — fixing the Net isn’t going to be easy, and it’s important people understand why. And so, rather than jump right into DNSSEC, I’m going to write now about what it looks like when even an optimist gets backed into a corner.

Let’s talk about what I think has to be considered one of the biggest finds of 2011, if not the biggest: Stefan Viehböck’s discovery of fundamental weaknesses in Wi-Fi Protected Setup, or WPS. Largest number of affected servers, largest number of affected clients, out of some of the most battle-hardened engineers in all of technology. Also, hardest to patch bug in recent memory. Maybe ever. (Craig Heffner also found the bug, as did some uncountable number of unnamed parties over the years. None of us are likely to be the first to find a bug…we just have the choice to be the last. Craig did release the nicely polished Reaver for us though, which was greatly appreciated, and so I’m happy to cite his independent discovery.)

It wasn’t supposed to be this way. If any industry group had suffered the slings and arrows of the security community, it was the Wi-Fi Alliance. WEP was essentially a live-fire testing range for all the possible things that could go wrong when using RC4 as a stream cipher. It took a while, but we eventually ended up with the fairly reasonable WPA2 standard. Given either a password, or a certificate, WPA2 grants you a fairly solid encrypted link to a network.

That’s quite the given — for right there, staring us in the face once more, was the key management problem.

(Yes, the very same problem we ultimately require DNSSEC to solve. What follows is what happens when engineers don’t have the right tools to fix something, but try anyway. Please forgive me for not using the word passphrase; when I actually see such phrases in the wild we can talk.)

ON USABILITY

It should be an obvious statement: a system that is not usable is not actually used. But wireless encryption in general, and WPA2 in particular, actually has been achieving a remarkable level of adoption. I’ve already written about the deployment numbers; suffice it to say, this is one of the brighter spots in real world cryptography. (The only things larger are the telnet->ssh switch, and maybe the widespread adoption of SSL by Top 100 sites in the wake of Firesheep and state level packet chicanery.)

So, if this WPA2 technology is actually being used, something had to be making it usable. That something, it turned out, was an entirely separate configuration layer: WPS, for WiFi Protected Setup. WPS was pretty clearly originally designed as a mechanism for consumer electronic devices to link to access points. This makes a lot of sense — there’s not much in the way of user interface on these devices, but there’s always room to paste a sticker. Not to mention that the alternative, Bluetooth, is fairly complex and heavy, particularly if all you want is just IP to a box.

However, a good quarter to half of all the access points exposing encrypted access today expose WPS not for random consumer devices, but for PCs themselves. After all, home users rather obviously didn’t have an Enterprise Certificate Authority around to issue them a certificate, and those passwords that WPA2 wanted are (seriously, legitimately, empirically) hard to remember.

Not to mention, how does the user deal with WPA2 passwords during “onboarding”, meaning they just pulled the device out of the box?

WPS cleaned all this up. In its most common deployment concept, “Label”, it envisioned a second password. This one would be unique to the device and stickered to the back of it. A special exchange, independent of WPA2 or WPA or even WEP, would execute. This exchange would not only authenticate the client to the server, but also the server to the client. By this mechanism, the designers of WPS would defend against so-called “evil twin” attacks, in which a malicious access point present during initial setup would pair with the client instead of the real AP. The password would be simplified — a PIN, 8 digits, with 1 digit being a check so the user could be quickly informed they’d typed it wrong.
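For reference, that check digit is computed like so — a Python rendering of the checksum routine found in open WPS implementations such as wpa_supplicant:

def wps_pin_checksum(pin7):
    """Compute the eighth (check) digit for a 7-digit WPS PIN."""
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)   # alternate digits weighted 3...
        pin7 //= 10
        accum += pin7 % 10         # ...and 1
        pin7 //= 10
    return (10 - accum % 10) % 10

full_pin = 1234567 * 10 + wps_pin_checksum(1234567)   # 12345670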

ATTACKS, ONLINE AND OFF

How could this possibly be safe? Well, one of the finer points of password security is that there is a difference between “online” and “offline” attackers. An online attacker has to interact with the defender on each password attempt — say, entering a password into a web form. Even if a server doesn’t lock you out after a certain number of failures (itself a mechanism of attack) only so many password attempts per second can be reasonably attempted. Such limits do not apply to the offline attacker, who perhaps is staring at a website’s backend database filled with hashes of many users’ passwords. This attacker can attempt as many passwords as he has computational resources to evaluate.

So there does exist a game, in which the defender forces the attacker to interact with him every time a password is attempted. This game can be made quite slow, to the point where as long as the password itself isn’t something obvious (read: chosen by the user), the attacker’s not getting in anytime soon. That was the path WPS was on, and you can see what it looked like in the protocol diagram from the Microsoft document that Stefan Viehböck borrowed for his WPS paper.

There’s a lot going on here, because there’s a lot of different attacks this protocol is trying to defend against. It wants to suppress replay, it wants to mutually authenticate, it wants to create an encrypted channel between the two parties, and it really wants to be invulnerable to an offline attack (remember, there’s only 10^7, or around 23 bits of entropy in the seven digit password). All of these desires combine to make a spec that hides its fatal, and more importantly, unfixable weakness.

A SIMPLE PLAN

Let’s simplify. Assume for a moment that there is in fact a secure channel between the client and server. We’re also going to change the word “Registrar” to “Client” and “Enrollee” to “Server”. (Yes. This is backwards. I blame some EE somewhere.) The protocol now becomes something vaguely like:

1. Server -> Client:  Hash(Secret1 || PIN)         
2. Client -> Server:  Hash(Secret2 || PIN), Secret2
3. Server -> Client:  Secret1

The right mental image to have is one of “lockboxes”: Neither party wants to give the other the PIN, so they give each other hashes of the PIN combined with long secrets. Then they give each other the long secrets. Now, even without sending the PIN, they can each combine the secret they’ve received with the PIN they already know, see that the hash computed is equal to the hash sent, and thus know that the other has knowledge of the PIN. The transmission of Secrets “unlocks” the ability for the Hash to prove possession of the PIN.
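Here’s that lockbox flow as a runnable sketch. The hash construction is simplified to a bare SHA-256 over concatenated values; real WPS uses HMACs keyed to the session and covering more fields, but the shape is the same:

import hashlib, os

def lockbox(secret, pin):
    return hashlib.sha256(secret + pin).digest()

PIN = b"12345670"                    # known to both sides out of band

# 1. Server -> Client: Hash(Secret1 || PIN)
secret1 = os.urandom(16)
server_commit = lockbox(secret1, PIN)

# 2. Client -> Server: Hash(Secret2 || PIN), Secret2
secret2 = os.urandom(16)
client_commit = lockbox(secret2, PIN)
assert lockbox(secret2, PIN) == client_commit   # server: the client knows the PIN

# 3. Server -> Client: Secret1
assert lockbox(secret1, PIN) == server_commit   # client: the server knows the PIN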

Mutual authentication achieved, great job everyone.

Not quite. What Stefan figured out is that WPS doesn’t actually operate across the entire PIN directly; instead, it splits things up more like:

1. Server -> Client:  Hash(Secret1 || first_half_of_PIN)         
2. Client -> Server:  Hash(Secret2 || first_half_of_PIN), Secret2
3. Server -> Client:  Secret1

First_half_of_PIN is only four digits, ranging from 0000 to 9999. A client sending Hash(Secret2 || first_half_of_PIN) may actually have no idea what the real first half is. It might just be trying them all…but whenever it gets it right, the Server’s going to send the Client Secret1! It’s a PIN oracle! Whatever can the server do?

RATE LIMITING: THE ONLY OPTION?

The obvious answer is that the server could rate limit the client. That was the recommended path, but most APs didn’t implement that. (Well, didn’t implement it intentionally. Apparently it’s really easy to knock over wireless access points by just flooding this WPS endpoint. It’s amazing the degree to which global Internet connectivity depends on code sold at $40/box.)

What’s not obvious is the degree to which we really have no other degrees of freedom but rate limiting. My initial assumption, upon hearing of this bug, was that we could “fake” messages — that it would be possible to leave an attacker blind to whether they’d guessed half the PIN.

Thanks to mutual authentication, no such freedom exists. Here’s the corner we’re in:

If we do as the protocol specifies — refuse to send Secret1, because the client’s brute forcing us — then the “client” can differentiate a correct guess of the first four digits from an incorrect guess. That is the basic vulnerability.

If we lie — send, say, Secret_fake — ah, now the client simply compares Hash(Secret1 || first_half_of_PIN) from the first message with this new Hash(Secret_fake || first_half_of_PIN) and it’s clear the response is a lie and the first_half_of_PIN was incorrectly guessed.

And finally, if we tell the truth — if we send the client the real Secret1, despite their incorrect guess of first_half_of_PIN — then look what he’s sitting on: Hash(Secret1 || first_half_of_PIN) and Secret1. Now he can run an offline attack against first_half_of_PIN.

It takes very little time for a computer to run through 10,000 possibilities.
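Sketching that third case, with the same simplified hash construction as above — the leaked Secret1 turns the server’s own commitment into a 10,000-guess offline puzzle:

import hashlib, os

def lockbox(secret, half_pin):
    return hashlib.sha256(secret + half_pin.encode()).digest()

secret1 = os.urandom(16)
commitment = lockbox(secret1, "4821")     # the server's M3-style commitment

# The attacker now holds (commitment, secret1) and tries every possible half:
recovered = next(h for h in (f"{i:04d}" for i in range(10000))
                 if lockbox(secret1, h) == commitment)
print(recovered)                          # "4821", in well under a second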

So. Can’t shut up, can’t lie, and absolutely cannot tell the truth. Ouch.

IT GETS (UNEXPECTEDLY) WORSE

Aside from rate limiting, can we fix the protocol? It’s surprisingly difficult. Let’s remove a bit more of the simplification, and add second_half_of_PIN:

[M3] 1. Server -> Client:  Hash(Secret1 || first_half_of_PIN), 
                           Hash(Secret3 || second_half_of_PIN)         
[M4] 2. Client -> Server:  Hash(Secret2 || first_half_of_PIN),
                           Hash(Secret4 || second_half_of_PIN), Secret2
[M5] 3. Server -> Client:  Secret1
[M6] 4. Client -> Server:  Secret4
[M7] 5. Server -> Client:  Secret3

So, in this model, the server provides information about the entire PIN, but with Secret1 and Secret3 (really, E-S1 and E-S2) being 128 random bits, the PIN cannot be brute forced. Then the client provides the same mishmash of information about the PIN, now with Secret2 (R-S1) and Secret4 (R-S2). But immediately the client “unlocks” the first half of the PIN to the server, who replies by “unlocking” that same half. Only when the server “unlocks” the first half to the client, does the client “unlock” the second half of the PIN to the server and vice versa.

It’s always a bit tricky to divine what people were thinking when they made a protocol. Believe me, DNSSEC is way more useful than even its designers intended. But it’s pretty clear the designers were trying to make sure that a malicious server never got its hands on the full PIN from the client. That seems to be the thinking here — Secret4, unlocking second_half_of_PIN from the client to the server, is only sent if the server actually knew first_half_of_PIN back in that first (well, really M3) message.

For such a damaging flaw, we sure didn’t get much in return. The problem is, a malicious “evil twin” server is basically handed first_half_of_PIN right there in the second message, M4. He’s got Hash(Secret2 || first_half_of_PIN) and Secret2. That’s 10,000 computations away from first_half_of_PIN.

Technically, it’s too late! He can’t go back in time and fix M3 to now contain the correct first_half_of_PIN. But, in an error usually only seen among economists and freshman psychology students, there is an assumption that the protocol is “one-shot” — that the server can’t just tell the client that something went wrong, and that the client should then start from scratch.

So, in the “evil twin” attack, the attacker resets the session after brute forcing the correct first_half_of_PIN. This allows him to send a correct Hash(Secret1||first_half_of_PIN) in the next iteration of M3, which causes the Secret1 in M5 to be accepted, which causes Secret4 in M6 to be sent to the server thus unlocking second_half_of_PIN.
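Put together, the two-exchange evil twin attack looks roughly like this (same simplified hash construction as before; the real M3–M7 messages carry Diffie-Hellman keys and HMACs, but the shape of the attack is identical):

import hashlib, os

def lockbox(secret, half):
    return hashlib.sha256(secret + half.encode()).digest()

def brute(commitment, secret):
    return next(h for h in (f"{i:04d}" for i in range(10000))
                if lockbox(secret, h) == commitment)

class Client:
    def __init__(self, pin):
        self.half1, self.half2 = pin[:4], pin[4:]
    def m4(self, m3):
        self.m3 = m3
        self.s2, self.s4 = os.urandom(16), os.urandom(16)
        return lockbox(self.s2, self.half1), lockbox(self.s4, self.half2), self.s2
    def m6(self, secret1):
        # The client only releases Secret4 if the AP's M3 commitment checks out.
        assert lockbox(secret1, self.half1) == self.m3[0], "AP does not know half 1"
        return self.s4

client = Client("48211357")            # victim; the evil AP does not know this PIN

# Exchange 1: commit to garbage, harvest M4, brute force the first half offline.
h1_commit, h2_commit, secret2 = client.m4((os.urandom(32), os.urandom(32)))
half1 = brute(h1_commit, secret2)      # at most 10,000 tries
# The AP cannot pass the client's check this round, so it just resets the session.

# Exchange 2: commit to the now-known first half; the client verifies it,
# releases Secret4, and the second half falls to another offline brute force.
s1 = os.urandom(16)
h1_commit, h2_commit, secret2 = client.m4((lockbox(s1, half1), os.urandom(32)))
secret4 = client.m6(s1)
half2 = brute(h2_commit, secret4)

print("recovered PIN:", half1 + half2)  # 48211357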

Again, it’s possible I’m missing the “real reason” for splitting the PIN. But if this was it, not only did the split make it much easier for a malicious client to attack an AP server, but it also made it absolutely trivial for a malicious AP server to attack a client. There are rumors of this pattern elsewhere, and I’m now rather concerned.

(To be entirely honest, I was not intending to find another WPS attack. I was just trying to document Stefan’s work.) Ouch.

ON PATCHING

Fixing WPS is going to be a mess. There’s a pile of devices out there that simply expect to be able to execute the above protocol in order to discover their wireless configuration, including encryption keys. Even when a more advanced protocol becomes available, preventing the downgrade attack in which an attacker simply claims to be an unpatched client is brutal. Make no mistake, the patch treadmill developed over the last decade is impressive, and hundreds of millions of devices receive updates on a regular basis.

There’s billions that don’t. Those billions include $40 access points, mobile phones, even large numbers of desktops and laptops. When do you get to turn off an insecure but critical function required for basic connectivity?

Good question. Hard, hard answer. Aside from rate limiting defenses, this flaw is going to be around, and enabled by default, for quite some time. Years, at least.

To some extent, that’s the reality of security. It’s the long game, or none at all. Accepting this is a ray of hope. We can talk about problems that will take years to solve, because we know that’s the only scale in which change can occur. The Wi-Fi Alliance deserves credit for playing this game on the long scale. This is a lost battle, but at least this is a group of engineers that have been here before.

RIGHT TECH, WRONG LAW

Now, what should have been built? Once again, we should have used SRP, and didn’t. SRP is unambiguously the right way to solve the PAKE (Password Authenticated Key Exchange) problem, quite a bit more even than my gratuitous (if entertaining) work with Phidelius, which somewhat suffers from offline vulnerability. Our inability to migrate to SRP is usually ascribed to its unclear patent status; the fact that our legal system has been unable to declare SRP covered or not has pretty much led to the obviously correct technology not being used.

Somebody should start lobbying or something.

Categories: Security

Salt The Fries: Some Notes On Password Complexity

January 5, 2012 12 comments

Just a small post, because Rob Graham asked.

The question is: What is the differential complexity increase offered by salting hashes in a password database?

The short answer, in bits, is: the base 2 log of the number of accounts the attacker is interested in cracking.

Rob wanted me to explain this in a bit more depth, and so I’m happy to. In theory, the basic property of a cryptographic hash is that it’s infeasible to invert the hash back to the bitstream that forged it. This is used in password stores by taking the plaintext password provided by the user, hashing it, and comparing the hash from the database to the hash of the user-provided plaintext. If they match, the user is authenticated. Even if the database is lost, the attacker won’t be able to invert the database back to the passwords, and users are protected.

There are, of course, two attacks against this model. The first, surprisingly ignored, is that the plaintext password still passes through the web app front end before it is hash-compared against the value in the database. An attacker with sufficient access to dump the database often has sufficient access to get code running on either the server or client front ends (and you thought cross-site scripting was uninteresting!). Granted, this type of attack reduces exposed users from “everyone” to “everyone who logs in while the site is compromised”. That’d probably be more of an improvement if we believed attackers were only interested in instantaneous smash-and-grabs, and were unable to, or afraid of, persisting their threats for long periods of time.

Heh.

The more traditional threat against password hashes is offline brute forcing. While hashes can’t be inverted, the advent of GPUs means they can be tested at the rate of hundreds of millions to billions per second. So, instead of solving the hard problem of figuring out what input created the given output, just try all the inputs until one happens to equal your desire.

How long will this take? Suppose we can test one billion hashes per second. That’s approximately 2^30, so we can say we can handle 30 bits of entropy per second. If we’re testing eight character passwords with the character set A through Z, a through z, and 0 through 9, that’s (26+26+10)^8, or 218,340,105,584,896 attempts. That’s roughly 2^48, or 48 bits of entropy. We can handle only 30 per second, and 48 – 30 == 18, so it will take approximately 2^18 seconds to run through the entire hash space — that’s 262,144 seconds, or 72 hours.

(Yes, I’m rounding extensively. The goal is to have a general feel for complexity, not to get too focused on the precise arithmetic.)
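For the curious, the unrounded arithmetic, as a quick sketch:

from math import log2

keyspace = (26 + 26 + 10) ** 8      # eight characters of [A-Za-z0-9]
rate = 10 ** 9                      # one billion hashes per second

print(f"keyspace  ~ 2^{log2(keyspace):.1f}")           # ~ 2^47.6
print(f"rate      ~ 2^{log2(rate):.1f} per second")    # ~ 2^29.9
print(f"full sweep: {keyspace / rate / 3600:.0f} hours")
# ~61 hours exact; rounding both sides to powers of two (2^48 / 2^30 = 2^18
# seconds) gives the ~72 hours quoted above.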

There are two important oversimplifications going on here. First, passwords selected by humans, even those that use all-important punctuation, numbers, and other forms of l33tsp33k, do not utilize these hardening factors particularly randomly. XKCD has made fun of this in the past. One of the real consequences of all these password drops is that we’re getting hard data on the types of password patterns people actually use. So groups like TeamHashCat are selecting their billion hashes per second not in random order, but in rough order of the likelihood that a given input will actually be somebody’s password.

So how does salting play into this?

There’s a much older optimization for password cracking than selecting candidates based on complex models of what people use. An unsalted hash is the direct output of hash(password). If two people use the same password, they will have the same password hash. And thus, the work effort to crack passwords amortizes — the same operation that finds out whether Alice’s password is “abcd1234” also discloses whether Bob’s password is “abcd1234”. Salting changes this, by adding some sort of randomizer (possibly just “Alice” or “Bob”) to the hash function. That way, Alice’s password hash is the hash for “Aliceabcd1234”, while Bob’s password hash is the hash for “Bobabcd1234”. Cracking both Alice’s and Bob’s passwords now requires twice as much work.

That’s one extra bit.

And that then is how you back into the complexity increase of salting on password cracking. When you’re cracking an unsalted database of passwords, the more accounts you have, the more “bites at the apple” there are — each attempt may match any one of n hashes. Salting just removes the optimization. If you want to crack 256 accounts instead of 1, you need to work 256 times harder — that’s 2^8, or 8 bits. If you want to crack 1M accounts, you need to work a million times harder — that’s 2^20, or 20 bits.
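A toy demonstration of the amortization salting removes — one pass over a dictionary tests every unsalted account at once, while salted accounts each cost their own pass:

import hashlib

def H(data):
    return hashlib.sha256(data.encode()).hexdigest()

users = {"alice": "abcd1234", "bob": "abcd1234", "carol": "letmein"}
dictionary = ["password", "letmein", "abcd1234", "qwerty"]

# Unsalted: len(dictionary) hashes total, checked against every account.
unsalted = {name: H(pw) for name, pw in users.items()}
lookup = {H(guess): guess for guess in dictionary}
cracked = {name: lookup[h] for name, h in unsalted.items() if h in lookup}

# Salted (salt = username): len(dictionary) * len(users) hashes total.
salted = {name: H(name + pw) for name, pw in users.items()}
cracked_salted = {name: guess for name, h in salted.items()
                  for guess in dictionary if H(name + guess) == h}

print(cracked)          # same passwords recovered either way...
print(cracked_salted)   # ...the salted run just cost three times the hashing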

Note however that the question is not how many accounts there are in total, but how many accounts you’re interested in. If only 256 of the 1M accounts are interesting, salting only wins you 8 bits.

A bigger deal, I think, is memory hardness. Password hash functions can be, and often are, run in a loop, such that the hash function is actually run thousands of times in a serial fashion. This does win ~12 bits of security. However, those loops remain tractable on the hardware that gives attackers billions of hashing operations per second, i.e. GPUs. GPUs have an architectural limit, however — they can be run out of memory. Requiring each hash process to spend 256MB+ of RAM may practically eliminate GPUs from the hash-cracking game entirely, even if the number of interesting accounts is just one.

That’s not game-over for all attackers — million node botnets really do have a couple gigs of RAM available for each cracking process, after all — but it does reduce the population of attackers who get to play. That’s always nice. The best path for memory-hard hashes is Colin Percival’s scrypt, which I’ve been playing with lately inside of my cute (if ill-advised) Phidelius project. scrypt has an odd operational limitation (no libscrypt) but hopefully that can be resolved soon, and we can further refine best practices in this space.
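As an aside, here’s what a memory-hard hash looks like in practice. This uses Python’s hashlib.scrypt, which didn’t exist when this was written (it arrived in Python 3.6); the parameters are illustrative, chosen so each hash costs roughly 256MB of RAM:

import hashlib, os

# 128 * n * r bytes of memory per hash: 128 * 2**18 * 8 = 256MB.
N, R, P = 2**18, 8, 1

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P,
                            maxmem=512 * 1024 * 1024, dklen=32)
    return salt, digest

def verify(password, salt, digest):
    return hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P,
                          maxmem=512 * 1024 * 1024, dklen=32) == digest

salt, digest = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)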


Edit: Ah, some of my commenters have mentioned rainbow tables. These actually do change things slightly, though from an interesting direction. Unsalted password hashes are not merely shared across all accounts in a given database; they’re also shared across all accounts for that entire implementation, at least for each account someone is trying to crack. That means it’s possible to both pregenerate hashes (there’s no need to know a salt in advance, because there isn’t one) and to calculate more hashes for more time (since the work effort can be amortized not just cross-account, but cross-database).

It’s become a bit of a thing to just take password hashes and put them into Google. If they’re MD5, and they’re unsalted, a surprising amount of the time you do get an answer. Moral of the story: Salting does keep you out of a hash dataset that may be 24+ bits (16M+) “cheaper” (less expensive per account) to attack than just your own.

Peter Maxwell also noted (read the comments) that it’s important to, say, not have a 16 bit salt. Really? Do people do that?

Categories: Security

Phidelius: Constructing Asymmetric Keypairs From Mere Passwords For Fun and PAKE

January 3, 2012 1 comment

TL;DR: New toy.

$ phidelius
Phidelius 1.0: Entropy Spoofing Engine
Author: Dan Kaminsky / dan@doxpara.com
Description: This code replaces most sources of an application's entropy with
a psuedorandom stream seeded from a password, a file, or a generated
sequence. This causes most cryptographic key generators (ssh-keygen,
openssl, etc) to emit apparently strong keys with a presumably memorable
backdoor. For many protocols this creates PAKE (Password Authenticated
Key Exchange) semantics without the server updating any code or even
being aware of the password stored client side. However, the cost of
blinding the server is increased exposure to offline brute force
attacks by a MITM, a risk only partially mitigatable by time/memory
hard crack resistance.
Example: phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f id_dsa"


Passwords are a problem. They’re constantly being lost, forgotten, and stolen. Something like 50% of compromises are associated with their loss. Somewhere along the way, websites started thinking l33tsp33k was a security technology to be enforced upon users.

So what we’re about to talk about in this post — and, in fact, what’s in the code I’m about to finally drop — is by no means a good idea. It may perhaps be an interesting idea, however.

So! First discussed in my Black Ops of 2011 talk, I’m finally releasing Phidelius 1.0. Phidelius allows a client armed with nothing but a password to generate predictable RSA/DSA/ECC keypairs that can then be used, unmodified, against real world applications such as SSH, SSL, IPsec, PGP, and even BitCoin (though I haven’t quite figured out that particular invocation yet — it’s pretty cool, you could send money to the bearer of a photograph).

Now, why would you do this? There’s been a longstanding question as to how can a server support passwords, without necessarily learning those passwords. The standard exhortations against storing unhashed passwords mean nothing against this problem; even if the server is storing hashed values, it’s still receiving them in plain text. There are hash-based challenge response protocols (“please give me the password hashed with this random value”) but they require the server to then store the password (or a password equivalent) so they can recognize valid responses.

There’s been a third class of solutions, belonging to the PAKE (Password Authenticated Key Exchange) family. These solutions generally come down to “password authenticated Diffie-Hellman”. For various reasons, not least of which have been patents, these approaches haven’t gotten far. Phidelius basically “solves” PAKE, by totally cheating. Rather than authenticating an otherwise random exchange, the keypair itself is nothing but an encoding of the password. The server doesn’t even have to know. (If you are a dev, your ears just perked up. Solutions that require only one side to patch are infinitely easier to deploy.) How can this work?

Well, what was the one bug that affected asymmetric key generation universally, corrupting RSA, DSA, and ECC equally? Of course, the Debian Random Number Generator flaw. See, asymmetric keys of all sorts are all essentially built the same way. A pile of bits is drawn from some ostensibly random source (/dev/random, CryptGenRandom, etc). These bits are then accepted, massaged, or rejected, until they meet the required form for a keypair of the appropriate algorithm.

Under the Debian RNG bug, only a rather small subset of bit patterns could be drawn, so the same keys would be generated over and over. Phidelius just makes predictable key generation a feature: We know how to turn a password into a stream of random numbers. We call that “seeding a pseudorandom number generator”. So, we just make the output of a PRNG the input to /dev/random, /dev/urandom, a couple of OpenSSL functions, and some other miscellaneous noise, and — poof — same password in, same keypair out, as so:


# phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f $RANDOM -N ''"
...The key fingerprint is:
c4:77:27:85:17:a2:83:fb:46:6a:06:97:f8:48:36:06
# phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f $RANDOM -N ''"
The key fingerprint is:
c4:77:27:85:17:a2:83:fb:46:6a:06:97:f8:48:36:06
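The same idea, expressed directly against a crypto library rather than by spoofing an unmodified application’s entropy sources — a rough sketch assuming PyCryptodome is installed, with a scrypt-seeded counter-mode stream standing in for Phidelius’s PRNG:

import hashlib
from Crypto.PublicKey import RSA   # PyCryptodome

def password_randfunc(password, salt=b"phidelius-demo"):
    """Return a randfunc yielding a deterministic byte stream seeded from
    the password: scrypt-derived seed, expanded with SHA-256 in counter mode."""
    seed = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    counter, buf = 0, b""
    def randfunc(n):
        nonlocal counter, buf
        while len(buf) < n:
            buf += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        out, buf = buf[:n], buf[n:]
        return out
    return randfunc

# Same password in, same keypair out -- assuming key generation draws its
# randomness only from the supplied randfunc.
k1 = RSA.generate(2048, randfunc=password_randfunc(b"ax-op-nm-qw-yi"))
k2 = RSA.generate(2048, randfunc=password_randfunc(b"ax-op-nm-qw-yi"))
assert k1.n == k2.n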

Now, as I mentioned earlier, it’s not necessarily a particularly good idea to use this technique, and not just because passwords themselves are inherently messy. The biggest weakness of the Phidelius approach is that the public keys it generates are also backdoored with the password. The problem here is subtle. It’s not interesting that the server can brute force the public key back to the password. This is *always* the case, even in properly beautiful protocols like SRP. (In fact, Phidelius uses the PRNG from scrypt, a time and memory hard algorithm, to make brute forcing not only hard, but probably harder than any traditional PAKE algorithm can achieve.)

What’s interesting is that public keys are supposed to be public. So, most protocols expose them at least to MITM’s, and sometimes even publish the things. Yes, there’s quite a bit of effort put into suppressing brute force attacks, but this is a scheme that really threatens you with them.

It also doesn’t help that salting Phidelius-derived keys is pretty tricky. The fact that the exact same password creates the exact same keypair isn’t necessarily ideal. The best trick I’ve found is to embed parameters for the private key in the larger context afforded to most public keys, i.e. the certificate. You’d retrieve the public key, mix in the randomized salt and some useful parameters (like, say, options for scrypt), and reconstruct the private key. This requires quite a bit of infrastructure though.

And finally, of course, this is a rather fragile solution. The precise bit-for-bit use of entropy isn’t standardized; all bits are supposed to be equal. So you’re not just generating a key with ssh-keygen, you’re generating from ssh-keygen 5.3, possibly as emitted from a particular compiler. A proper implementation would have to standardize all this.

So, there are caveats. But this is interesting, and I look forward to seeing how people play with it.

(Oh, why Phidelius? It’s a spell from Harry Potter, which properly understood is the story of the epic consequences of losing one’s password.)


Edit: Forgot! Tom Ritter saw my talk at Black Hat, and posted that he’d actually worked out how to do this sort of password-to-keypair construction for PGP keys. It’s actually a bit tricky. You can see his code here.

Categories: Security

Crypto Interrupted: Data On The WPS WiFi Break

January 2, 2012 5 comments

TL;DR:  Went wardriving around Berlin. 26.3% of APs with crypto enabled exposed methods that imply vulnerability to the WPS design flaw.  A conservative extrapolation from WIGLE data suggests at least 4.1M vulnerable hosts.  Yikes.

=======

Fixing things is hard.  Fixing things in security, even more so.  (There’s a reasonable argument that the interlocking dependencies of security create a level of Hard that is mathematically representable.) And yet, WiFi was quietly but noticeably one of our industry’s better achievements.  Sure, WiFi’s original encryption algorithm — WEP — was essentially a demonstration of what not to do when deploying cryptographic primitives.  And indeed, due to various usability issues, it used to be rare to see encryption deployed at all.  But, over the last ten years, in a conversion we’ve notably not witnessed with SSL (at least not at nearly the same scale)…well, take a look:

In January 2002, almost 60% of AP’s seen worldwide by the WIGLE wireless mapping project had encryption disabled.  Ten years later, only 20% remained, and while WEP isn’t gone, it’s on track to be replaced by WPA/WPA2. You can explore the raw data here, and while there’s certainly room to argue about some of the numbers, it’s difficult to say things haven’t gotten better.

Which is what makes this WPS flaw found by Stefan Viehböck (and independently discovered by Craig Heffner) quite so tragic.

I’m not going to lie.  Trying to reverse engineer just how this bug came about has been something of a challenge.  There’s a desire to compete with Bluetooth for device enrollment, there’s some defense against evil twin / fake APs, there’s a historical similarity to the old LANMAN bug…I haven’t really figured it out yet.  At the end of the day though, the problem is that an attacker can determine that they’ve guessed the first four digits before working on the next three (and the last is a checksum), meaning 11,000 queries (5,500 on average) are enough to guess a PIN.

So, that’s the issue (and, likely, the fix — prevent the attacker from knowing they’re halfway there, by blinding the midpoint error).  But just like the telnet encryption bug, I think it’s important we get a handle on just how widespread this vulnerability is.  The WIGLE data doesn’t actually declare whether a given encryption-supporting node also supports WPS.  So…let’s go outside and find out.

Wardriving?  In 2012?  It’s more likely than you think.

Over an approximately 4 hour period, 3,738 AP’s were found in the Berlin area.  (I was in Berlin for CCC.)  2,758 of these AP’s had encryption enabled — 73%, a bit higher than WIGLE average.  535 of the 2,758 (19.39%) had the “Label” WPS method enabled, along with crypto.  These are almost certainly vulnerable.  Another 191 (6.9%) had the “Display” WPS method enabled.  We believe these to be vulnerable as well.  Finally, 611/2758 — 22% — had WPS enabled, with no methods declared.  We have no data on whether these devices are exposed.

As things stand, we see 26.3% of AP’s with encryption enabled, likely to be exposing the WPS vulnerability.

What does this mean, in terms of absolute numbers?  WIGLE’s seen about 26M networks with crypto enabled.  However, they don’t age out AP’s, meaning the fact that a network hasn’t been seen since 2003 doesn’t mean it isn’t represented in the above numbers.  Still, 60% of the networks they’ve ever seen, were first seen in the last two years.  If we just take 60% of the 26M networks — 15.6M — and we entirely ignore all the networks WIGLE was unable to identify the crypto for, and all the networks that declare WPS but don’t declare a type…

We end up with 4.1M networks vulnerable to the WPS design vulnerability around the globe.  That’s the conservative number, and it’s a pretty big deal.
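The arithmetic behind that conservative number, for reference:

# Conservative extrapolation from the Berlin sample to WIGLE's global numbers.
wigle_crypto_networks = 26_000_000       # networks WIGLE has seen with crypto on
recent_fraction = 0.60                   # first seen within the last two years
vulnerable_fraction = (535 + 191) / 2758 # Label + Display methods, Berlin sample

estimate = wigle_crypto_networks * recent_fraction * vulnerable_fraction
print(f"{vulnerable_fraction:.1%} of crypto-enabled APs -> ~{estimate / 1e6:.1f}M vulnerable")
# 26.3% of crypto-enabled APs -> ~4.1M vulnerable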

Note, of course, that whether a node is vulnerable is not a random function.  Like the UPNP issues I discussed in my 2011 Black Hat talk, this is an issue that may show up on every access point distributed by a particular ISP.  Given that there are entire countries with a single Internet provider, it is likely there are entire regions where every access point is now wide open.

Curious about your own networks?  Run:

iw dev wlan0 scan | grep "Config methods"

If you see anything with “Label” or “Display” you’ve got a problem.

===
Small update:

There’s code in one WPA supplicant implying that APs which do not expose a method may still support PINs:


/*
* In theory, this could also verify that attr.sel_reg_config_methods
* includes WPS_CONFIG_LABEL, WPS_CONFIG_DISPLAY, or WPS_CONFIG_KEYPAD,
* but some deployed AP implementations do not set Selected Registrar
* Config Methods attribute properly, so it is safer to just use
* Device Password ID here.
*/

I’m going to stick with the conservative 26.3%, rather than the larger 48.1% estimate, until someone actually confirms an effective attack against such an AP.

Categories: Security

From 0day to 0data: TelnetD

December 29, 2011 Leave a comment

Recently, it was found that BSD-derived Telnet implementations had a fairly straightforward vulnerability in their encryption handler. (Also, it was found that there was an encryption handler.) Telnet was the de facto standard protocol for remote administration of everything but Windows systems, so there’s been some curiosity in just how nasty this bug is operationally.

So, with the help of Fyodor and the NMAP scripting team, I set out to collect some data on just how widespread this bug is. First, I set out to find all hosts listening on tcp/23. Then, with Fyodor’s code, I checked for encryption support. Presence of support does not necessarily imply vulnerability, but absence does imply invulnerability.

The scanner’s pretty rough, and I’m doing some sampling here, but here’s the bottom line data:

Out of 156,113 confirmed telnet servers randomly distributed across the Internet, only 429 exposed encryption support. By far my largest source of noise is the estimate of just how many telnet servers (as opposed to just TCP listeners on 23, or transient services) exist across the entire Internet. My conservative numbers place that count at around 7.8M. This yields an estimate of ~21,546 (or 15,600 to 23,500, at 95% confidence — thanks, David Horn!) potentially vulnerable servers.

Patched servers will still look vulnerable unless they remove encryption support entirely, so this isn’t something we can watch get fixed (well, unless we’re willing to crash servers, which I’m not, even if it’d be safe because these things are launched from inetd).

Here is a command line that will scan your networks. You may need nmap from svn.


/usr/local/bin/nmap -p23 -PS23 --script telnet-encryption -T5 -oA telnet_logs -iL list_of_ips -v

So, great bug, just almost certainly not widely distributed. I’ll document the analysis process when I’m not at con.

Categories: Security

Exploring ReU: Rewriting URLs For Fun And XSRF/HTTPS

October 20, 2011 1 comment

So I’ve been musing lately about a scheme I call ReU. It would allow client side rewriting of URLs, to make the addition of XSRF tokens much easier. As a side effect, it should also make it much easier to upgrade the links on a site from http to https. Does it work? Is it a good idea? I’m not at all sure. But I think it’s worth talking about:

===

There is a consistent theme to both XSS and XSRF attacks: XS. The fact that foreign sites can access local resources, with all the credentials of the user, is a fundamental design issue. Random web sites on the Internet should not be able to retrieve Page 3 of 5 of a shopping cart in the same way they can retrieve this blog entry! The most common mechanism recommended to prevent this sort of cross-site chicanery is to use a XSRF token, like so:

http://foo.com/bar?session_id=12345

The token is randomized, so now links from the outside world fail despite having appropriate credentials in the cookie.

In some people’s minds, this is quite enough. It only needs to be possible to deploy secure solutions; cost of deployment is so irrelevant, it’s not even worth being aware of. The problem is, somebody’s got to fix this junk, and it turns out that retrofitting a site with new URLs is quite tricky. As Martin Johns wrote:

…all application local URLs have to be rewritten for using the randomizer object. While standard HTML forms and hyperlinks pose no special challenge, prior existing JavaScript may be harder to deal with. All JavaScript functions that assign values to document.location or open new windows have to be located and modified. Also all existing onclick and onsubmit events have to be rewritten. Furthermore, HTML code might include external referenced JavaScript libraries, which have to be processed as well. Because of these problems, a web application that is protected by such a solution has to be examined and tested thoroughly.

Put simply, web pages are pieced together from pieces developed across sprints, teams, companies, and languages. No one group knows the entire story — but if anyone screws up and leaves the token off, the site breaks.

The test load required to prevent those breakages from recurring is a huge part of why delivering secure solutions is such a slow and expensive process.

So I’ve been thinking: If upgrading the security of URLs is difficult server side, because the code there tends to be fragmented, could we do something on the client? Could the browser itself be used to add XSRF tokens to URLs in advance of their retrieval or navigation, in a way that basically came down to a single script embedded in the HEAD segment of an HTML file?

Would not such a scheme be useful not only for adding XSRF tokens, but also changing http links to https, paving the way for increasing the adoption of TLS?

The answer seems to be yes. A client side rewriter — ReU, as I call it — might very well be a small, constrained, but enormously useful request of the browser community. I see the code looking something vaguely like:

var session_token = "aaa241ee1298371";
var session_domain = "foo.com";

function analyzeURL(url, domnode, flags)
{
   var domain = getDomain(url);
   // only add the token (and force HTTPS) for local resources
   if (isInside(session_domain, domain)) {
      url = addParam(url, "token", session_token);
      url = forceHTTPS(url);
   }
   return url;
}

window.registerURLRewriter(analyzeURL);

Now, something that could totally kill this proposal, is if browsers that didn’t support the extension suddenly broke — or, worse, if users with new browsers couldn’t experience any of the security benefits of the extension, lest older users have their functionality broken. Happily, I think we can avoid this trap, by encoding in the stored cookie whether the user’s browser supports ReU. Remember, for the XSRF case we’re basically trying to figure out in what situation we want to ignore a browser’s request despite the presence of a valid cookie. So if the cookie reports to us that it came from a user without the ability to automatically add XSRF tokens, oh well. We let that subset through anyway.
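Server side, that fallback might look something like this — a sketch only, with made-up cookie and parameter names:

# Hypothetical server-side check for the graceful-degradation scheme described
# above. None of these names come from a real framework.
def request_allowed(cookie, params, expected_token):
    if not cookie.get("session_id"):
        return False                      # no credentials at all
    if cookie.get("reu_capable") == "1":
        # This browser declared (at login) that it rewrites URLs with ReU,
        # so every state-changing request must carry the XSRF token.
        return params.get("token") == expected_token
    # Legacy browser: accept the cookie alone, exactly as sites do today.
    return True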

As long as the bad guys don’t have the ability to force modern browsers to appear as insecure clients, we’re OK.

Now, there’s some history here. Jim Manico pointed me at CSRFGuard, a project from the OWASP guys. This also tries to “upgrade” URLs through client side inspection. The problem is that real world frameworks really do assemble URLs dynamically, or go through paths that CSRFGuard can’t interrupt. These aren’t obscure paths, either:

No known way to inject CSRF prevention tokens into setters of document.location (ex: document.location=”http://www.owasp.org”;)
Tokens are not injected into HTML elements dynamically generated using JavaScript “createElement” and “setAttribute”

Yeah, there’s a lot of setters of document.location in the field. Ultimately, what we’re asking for here is a new security boundary for browsers — a direct statement that, if the client’s going somewhere or grabbing something, this inspector is going to be able to take a look at it.

There’s also history with both the Referer and Origin headers, which I spoke about (along with the seeds of this research) in my Interpolique talk. The basic concept with each is that the navigation source of a request — say http://www.google.com — would identify itself in some manner in the HTTP header of the subsequent request. The problem for Referer is that there are just so many sources of requests, and it’s best effort whether they show up or not (let alone whether they show up correctly).

Origin was its own problem. Effectively, they added this entire use case where images and links on a shared forum wouldn’t get access to Origin headers. It was necessary for their design, because their security model was entirely static. But for everything that’s *not* an Internet forum, the security model is effectively random.

With ReU, you write the code (or at least import the library) that determines whether or not a token is applied. And as a bonus, you can quickly upgrade to HTTPS too!

There is at least one catastrophic error case that needs to be handled. Basically, those pop-under windows that advertisers use, are actually really useful for all sorts of interesting client side attacks. See, those pop-unders usually retain a connection to the window they came from, and (here’s the important part) can renavigate those windows on demand. So there’s a possible scenario in which you go to a malicious site, it spawns the pop-under, you go to another site, you log in, and then the malicious page drags your real logged in window to a malicious URL at the otherwise protected site.

It’s a navigation event, and so a naive implementation of ReU would fire, adding the token. Yikes.

The solution is straightforward — guarantee that, at least by default, any event that fired the rewriter came from a local source. But that brings up a much trickier question:

Do we try to stop clickjacking with this mechanism as well? IFrames are beautiful, but they are also the source of a tremendous number of events that come from rather untrusted parties. It’s one thing to request a URL rewriter that can special case external navigation events. It’s possibly another to get a default filter against events sourced from actions that really came from an IFrame parent.

It is very, very possible to make technically infeasible demands. People think this is OK, because it counts as “trying”. The problem is that “trying” can end up eating your entire budget for fixes. So I’m a little nervous about making ReU not fire when the event that triggered it came from the parent of an IFrame.

There are a few other things of note — this approach is rather bookmark friendly, both because the URLs still contain tokens, and because a special “bookmark handler” could be registered to make a long-lived bookmark of sorts. It’s also possible to allow other modulations for the XSRF token, which (as people note) does leak into Referer headers and the like. For example, HTTP headers could be added, or the cookie could be modified on a per request basis. Also, for navigations offsite, secrets could be removed from the referring URL before being handed to foreign sites — this could close a longstanding concern with the Referer header too.

Finally, for debuggability, it’d be necessary to have some way of inspecting the URL rewriting stack. It might be interesting to allow the stack to be locked, at least in order of when things got patched in. Worth thinking about.

In summary, there’s lots of quirks to noodle on — but I’m fairly convinced that we can give web developers a “one stop shop” for implementing a number of the behaviors we in the security community believe they should be exhibiting. We can make things easier, more comprehensive, less buggy. It’s worth exploring.

Categories: Security