Archive for the ‘Security’ Category

How The WPS Bug Came To Be, And How Ugly It Actually Is

January 26, 2012 6 comments

FINDING: WPS was designed to be secure against a malicious access point. It isn’t: a malicious AP recovers the entire PIN after only two protocol exchanges. This server-to-client attack is substantially easier than the client-to-server attack, with the important caveat that the latter does not require a client to be “lured”.

SUMMARY: Stefan Viehböck’s WPS vuln was one of the bigger flaws found in all of 2011, and should be understood as an attempt to improve the usability of a security technology in the face of severe and ongoing deficiencies in key management (something we’re fixing with DNSSEC). WPS seems to have started as a mechanism to integrate UI-less consumer electronics into the Wi-Fi fold; it grew to be a primary mechanism for PCs to link up. WPS attempted to use an online “proof of possession” process to mutually authenticate and securely configure clients for APs, including the provisioning of WEP/WPA/WPA2 secrets. In what appears to have been a mechanism to prevent malicious “evil twin” APs from extracting PINs from rerouted clients, the already small PIN was split in half. Stefan noted that this made brute-force efforts much easier, as the first half could be guessed independently of the second half. While rate limiting at the AP is an obvious defense, it turns out to be the only defense, as both refusing to continue the protocol and providing a fake message are trivially detectable signs that the half-PIN was guessed incorrectly. Meanwhile, providing the legitimate response even in the face of an incorrect PIN guess actually leaks enough data for the attacker to brute-force the PIN. I further note that, even for the purposes of mutual authentication, the split PIN is problematic. Servers can immediately offline brute-force the first half of the PIN from the M4 message, and after resetting the protocol, they can generate the appropriate messages to allow them to offline brute-force the second half of the PIN. To conclude, I discuss the vagaries of just how much needs to be patched, and thus how long we need to wait, before this vulnerability can actually be addressed server-side.
Finally, I note that we can directly trace the existence of this vulnerability to the unclear patent state of Secure Remote Passwords (SRP), which really is the canonical way to do Password Authenticated Key Exchange (PAKE).

Note: I really should be writing about DNSSEC right now. What’s interesting is, in a very weird way, I actually am.

On Usability
Attacks, Online and Off
A Simple Plan
Rate Limiting: The Only Option?
It Gets (Unexpectedly) Worse
On Patching
Right Tech, Wrong Law


Security is not usually a field with good news. Generally, all our discussions center around the latest compromise, the newest vulnerabilities, or even the old bugs that stubbornly refuse to disappear despite our “best practices”. I’m somewhat strange, in that I’m the ornery optimist who’s been convinced we can, must, and will (three different things) fix this broken and critical system we call the Internet.

That being said — fixing the Net isn’t going to be easy, and it’s important people understand why. And so, rather than jump right into DNSSEC, I’m going to write now about what it looks like when even an optimist gets backed into a corner.

Let’s talk about what I think has to be considered one of the biggest finds of 2011, if not the biggest: Stefan Viehböck’s discovery of fundamental weaknesses in Wi-Fi Protected Setup, or WPS. Largest number of affected servers, largest number of affected clients, from some of the most battle-hardened engineers in all of technology. Also, hardest to patch bug in recent memory. Maybe ever. (Craig Heffner also found the bug, as did some uncountable number of unnamed parties over the years. None of us are likely to be the first to find a bug…we just have the choice to be the last. Craig did release the nicely polished Reaver for us though, which was greatly appreciated, and so I’m happy to cite his independent discovery.)

It wasn’t supposed to be this way. If any industry group had suffered the slings and arrows of the security community, it was the Wi-Fi Alliance. WEP was essentially a live-fire testing range for all the possible things that could go wrong when using RC4 as a stream cipher. It took a while, but we eventually ended up with the fairly reasonable WPA2 standard. Given either a password, or a certificate, WPA2 grants you a fairly solid encrypted link to a network.

That’s quite the given — for right there, staring us in the face once more, was the key management problem.

(Yes, the very same problem we ultimately require DNSSEC to solve. What follows is what happens when engineers don’t have the right tools to fix something, but try anyway. Please forgive me for not using the word passphrase; when I actually see such phrases in the wild we can talk.)


It should be an obvious statement: a system that is not usable is not actually used. But wireless encryption in general, and WPA2 in particular, actually has been achieving a remarkable level of adoption. I’ve already written about the deployment numbers; suffice it to say, this is one of the brighter spots in real-world cryptography. (The only things larger are the telnet->ssh switch, and maybe the widespread adoption of SSL by Top 100 sites in the wake of Firesheep and state-level packet chicanery.)

So, if this WPA2 technology is actually being used, something had to be making it usable. That something, it turned out, was an entirely separate configuration layer: WPS, for WiFi Protected Setup. WPS was pretty clearly originally designed as a mechanism for consumer electronic devices to link to access points. This makes a lot of sense — there’s not much in the way of user interface on these devices, but there’s always room to paste a sticker. Not to mention that the alternative, Bluetooth, is fairly complex and heavy, particularly if all you want is just IP to a box.

However, a good quarter to half of all the access points exposing encrypted access today expose WPS not for random consumer devices, but for PCs themselves. After all, home users rather obviously didn’t have an Enterprise Certificate Authority around to issue them a certificate, and those passwords that WPA2 wanted are (seriously, legitimately, empirically) hard to remember.

Not to mention, how does the user deal with WPA2 passwords during “onboarding”, meaning they just pulled the device out of the box?

WPS cleaned all this up. In its most common deployment concept, “Label”, it envisioned a second password. This one would be unique to the device and stickered to the back of it. A special exchange, independent of WPA2 or WPA or even WEP, would execute. This exchange would not only authenticate the client to the server, but also the server to the client. By this mechanism, the designers of WPS would defend against so-called “evil twin” attacks, in which a malicious access point present during initial setup would pair with the client instead of the real AP. The password would be simplified — a PIN, 8 digits, with 1 digit being a check so the user could be quickly informed they’d typed it wrong.
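That check digit isn’t cryptographic, by the way — it’s a simple weighted checksum (this is the scheme the WPS spec defines, and the same one tools like Reaver implement). A minimal sketch:

```python
def wps_checksum(pin7: int) -> int:
    """Check digit for the seven-digit body of a WPS PIN.

    Digits are weighted 3,1,3,1,3,1,3 from the rightmost digit;
    the check digit brings the weighted sum to a multiple of 10.
    """
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)   # odd positions (from the right) weigh 3
        pin7 //= 10
        accum += pin7 % 10         # even positions weigh 1
        pin7 //= 10
    return (10 - accum % 10) % 10

# A full eight-digit PIN is the seven digits plus their check digit:
print(wps_checksum(1234567))  # -> 0, so 12345670 is a valid WPS PIN
```

Since the eighth digit is fully determined by the first seven, it contributes nothing to the search space — a point that matters later.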


How could this possibly be safe? Well, one of the finer points of password security is that there is a difference between “online” and “offline” attackers. An online attacker has to interact with the defender on each password attempt — say, entering a password into a web form. Even if a server doesn’t lock you out after a certain number of failures (itself a mechanism of attack) only so many password attempts per second can be reasonably attempted. Such limits do not apply to the offline attacker, who perhaps is staring at a website’s backend database filled with hashes of many users’ passwords. This attacker can attempt as many passwords as he has computational resources to evaluate.

So there does exist a game, in which the defender forces the attacker to interact with him every time a password is attempted. This game can be made quite slow, to the point where as long as the password itself isn’t something obvious (read: chosen by the user), the attacker’s not getting in anytime soon. That was the path WPS was on, and this is what it looked like, as per the Microsoft document that Stefan Viehböck borrowed from in his WPS paper:

There’s a lot going on here, because there are a lot of different attacks this protocol is trying to defend against. It wants to suppress replay, it wants to mutually authenticate, it wants to create an encrypted channel between the two parties, and it really wants to be invulnerable to an offline attack (remember, there’s only 10^7, or around 23 bits of entropy, in the seven-digit password). All of these desires combine to make a spec that hides its fatal and, more importantly, unfixable weakness.


Let’s simplify. Assume for a moment that there is in fact a secure channel between the client and server. We’re also going to change the word “Registrar” to “Client” and “Enrollee” to “Server”. (Yes. This is backwards. I blame some EE somewhere.) The protocol now becomes something vaguely like:

1. Server -> Client:  Hash(Secret1 || PIN)         
2. Client -> Server:  Hash(Secret2 || PIN), Secret2
3. Server -> Client:  Secret1

The right mental image to have is one of “lockboxes”: Neither party wants to give the other the PIN, so they give each other hashes of the PIN combined with long secrets. Then they give each other the long secrets. Now, even without sending the PINs, they can each combine the secrets they’ve received with the secret PINs they already know, see that the hashes computed are equal to the hashes sent, and thus know that the other has knowledge of the PIN. The transmission of Secrets “unlocks” the ability for the Hash to prove possession of the PIN.
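A toy version of the exchange (SHA-256 standing in for the spec’s actual hash construction, and ignoring the encrypted channel entirely):

```python
import hashlib, os

def H(*parts: bytes) -> bytes:
    """Toy stand-in for the protocol's hash commitment."""
    return hashlib.sha256(b"".join(parts)).digest()

def run_exchange(server_pin: bytes, client_pin: bytes) -> bool:
    """Returns True only if both sides prove knowledge of the same PIN."""
    # 1. Server -> Client: commitment to its PIN under a fresh secret
    secret1 = os.urandom(16)
    msg1 = H(secret1, server_pin)
    # 2. Client -> Server: its own commitment, plus its secret revealed
    secret2 = os.urandom(16)
    msg2_hash, msg2_secret = H(secret2, client_pin), secret2
    # Server checks that the client's commitment opens to the server's PIN
    if H(msg2_secret, server_pin) != msg2_hash:
        return False
    # 3. Server -> Client: reveals secret1; client checks the first commitment
    return H(secret1, client_pin) == msg1

print(run_exchange(b"1234567", b"1234567"))  # matching PINs authenticate: True
print(run_exchange(b"1234567", b"7654321"))  # mismatch is detected: False
```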

Mutual authentication achieved, great job everyone.

Not quite. What Stefan figured out is that WPS doesn’t actually operate across the entire PIN directly; instead, it splits things up more like:

1. Server -> Client:  Hash(Secret1 || first_half_of_PIN)         
2. Client -> Server:  Hash(Secret2 || first_half_of_PIN), Secret2
3. Server -> Client:  Secret1

First_half_of_PIN is only four digits, ranging from 0000 to 9999. A client sending Hash(Secret2 || first_half_of_PIN) may actually have no idea what the real first half is. It might just be trying them all…but whenever they get it right, Server’s going to send Client Secret1! It’s a PIN oracle! Whatever can the server do?


The obvious answer is that the server could rate limit the client. That was the recommended path, but most APs didn’t implement that. (Well, didn’t implement it intentionally. Apparently it’s really easy to knock over wireless access points by just flooding this WPS endpoint. It’s amazing the degree to which global Internet connectivity depends on code sold at $40/box.)

What’s not obvious is the degree to which we really have no other degrees of freedom but rate limiting. My initial assumption, upon hearing of this bug, was that we could “fake” messages — that it would be possible to leave an attacker blind to whether they’d guessed half the PIN.

Thanks to mutual authentication, no such freedom exists. Here’s the corner we’re in:

If we do as the protocol specifies — refuse to send Secret1, because the client’s brute forcing us — then the “client” can differentiate a correct guess of the first four digits from an incorrect guess. That is the basic vulnerability.

If we lie — send, say, Secret_fake — ah, now the client simply compares Hash(Secret1 || first_half_of_PIN) from the first message with this new Hash(Secret_fake || first_half_of_PIN) and it’s clear the response is a lie and the first_half_of_PIN was incorrectly guessed.

And finally, if we tell the truth — if we send the client the real Secret1, despite their incorrect guess of first_half_of_PIN — then look what he’s sitting on: Hash(Secret1 || first_half_of_PIN) and Secret1. Now he can run an offline attack against first_half_of_PIN.

It takes very little time for a computer to run through 10,000 possibilities.
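Here’s that offline step, sketched out (SHA-256 standing in for the spec’s hash, and the half-PIN 4821 made up for the example):

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# What a "truthful" server hands the attacker after a wrong guess:
secret1 = bytes(16)                # Secret1, revealed in message 3
commitment = H(secret1, b"4821")   # Hash(Secret1 || first_half) from message 1

def crack_half_pin(secret: bytes, commitment: bytes) -> str:
    """At most 10,000 hash operations -- milliseconds of work, offline."""
    for guess in range(10000):
        half = f"{guess:04d}".encode()
        if H(secret, half) == commitment:
            return half.decode()
    raise ValueError("no match")

print(crack_half_pin(secret1, commitment))  # -> 4821
```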

So. Can’t shut up, can’t lie, and absolutely cannot tell the truth. Ouch.


Aside from rate limiting, can we fix the protocol? It’s surprisingly difficult. Let’s remove a bit more of the simplification, and add second_half_of_PIN:

[M3] 1. Server -> Client:  Hash(Secret1 || first_half_of_PIN), 
                           Hash(Secret3 || second_half_of_PIN)         
[M4] 2. Client -> Server:  Hash(Secret2 || first_half_of_PIN),
                           Hash(Secret4 || second_half_of_PIN), Secret2
[M5] 3. Server -> Client:  Secret1
[M6] 4. Client -> Server:  Secret4
[M7] 5. Server -> Client:  Secret3

So, in this model, the server provides information about the entire PIN, but with Secret1 and Secret3 (really, E-S1 and E-S2) being 128 random bits, the PIN cannot be brute forced. Then the client provides the same mishmash of information about the PIN, now with Secret2 (R-S1) and Secret4 (R-S2). But immediately the client “unlocks” the first half of the PIN to the server, who replies by “unlocking” that same half. Only when the server “unlocks” the first half to the client, does the client “unlock” the second half of the PIN to the server and vice versa.

It’s always a bit tricky to divine what people were thinking when they made a protocol. Believe me, DNSSEC is way more useful than even its designers intended. But it’s pretty clear the designers were trying to make sure that a malicious server never got its hands on the full PIN from the client. That seems to be the thinking here — Secret4, unlocking second_half_of_PIN from the client to the server, is only sent if the server actually knew first_half_of_PIN back in that first (well, really M3) message.

For such a damaging flaw, we sure didn’t get much in return. Problem is, a malicious “evil twin” server is basically handed first_half_of_PIN right there in the second, M4 message. He’s got Hash(Secret2||first_half_of_PIN) and Secret2. That’s 10,000 computations away from first_half_of_PIN.

Technically, it’s too late! He can’t go back in time and fix M3 to now contain the correct first_half_of_PIN. But, in an error usually only seen among economists and freshman psychology students, there is an assumption that the protocol is “one-shot” — that the server can’t just tell the client that something went wrong, and that the client should then start from scratch.

So, in the “evil twin” attack, the attacker resets the session after brute forcing the correct first_half_of_PIN. This allows him to send a correct Hash(Secret1||first_half_of_PIN) in the next iteration of M3, which causes the Secret1 in M5 to be accepted, which causes Secret4 in M6 to be sent to the server thus unlocking second_half_of_PIN.
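The whole two-exchange attack can be simulated against the simplified model above (toy hashes, made-up PIN; the real exchange has more messages and an actual crypto channel):

```python
import hashlib, os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def brute4(commitment: bytes, secret: bytes) -> bytes:
    """Offline search of the 10,000 possible half-PINs."""
    for g in range(10000):
        half = f"{g:04d}".encode()
        if H(secret, half) == commitment:
            return half
    raise ValueError("no match")

class Client:
    """Honest client; knows the full PIN from the AP's sticker."""
    def __init__(self, pin: bytes):
        self.h1, self.h2 = pin[:4], pin[4:]
    def m4(self, m3):
        self.m3 = m3
        self.s2, self.s4 = os.urandom(16), os.urandom(16)
        return H(self.s2, self.h1), H(self.s4, self.h2), self.s2
    def m6(self, secret1: bytes) -> bytes:
        # Only release Secret4 if M5 opens the server's M3 commitment
        if H(secret1, self.h1) != self.m3[0]:
            raise ValueError("server failed to prove first half")
        return self.s4

def evil_twin(victim_factory) -> str:
    # Exchange 1: send garbage commitments, harvest M4, then "reset"
    c = victim_factory()
    s1 = os.urandom(16)
    h_half1, _, s2 = c.m4((H(s1, b"????"), H(os.urandom(16), b"????")))
    half1 = brute4(h_half1, s2)            # first half, offline
    # Exchange 2: commit honestly with the recovered first half
    c = victim_factory()
    s1 = os.urandom(16)
    _, h_half2, _ = c.m4((H(s1, half1), H(os.urandom(16), b"????")))
    s4 = c.m6(s1)                          # client unlocks Secret4 (M6)
    half2 = brute4(h_half2, s4)            # second half, offline
    return (half1 + half2).decode()

print(evil_twin(lambda: Client(b"12345670")))  # -> 12345670
```

Two exchanges, twenty thousand hashes, full PIN.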

Again, it’s possible I’m missing the “real reason” for splitting the PIN. But if this was it, not only did the split make it much easier for a malicious client to attack an AP server, but it also made it absolutely trivial for a malicious AP server to attack a client. There are rumors of this pattern elsewhere, and I’m now rather concerned.

(To be entirely honest, I was not intending to find another WPS attack. I was just trying to document Stefan’s work.) Ouch.


Fixing WPS is going to be a mess. There’s a pile of devices out there that simply expect to be able to execute the above protocol in order to discover their wireless configuration, including encryption keys. Even when a more advanced protocol becomes available, preventing the downgrade attack — in which an attacker simply claims to be an unpatched client — is brutal. Make no mistake, the patch treadmill developed over the last decade is impressive, and hundreds of millions of devices receive updates on a regular basis.

There’s billions that don’t. Those billions include $40 access points, mobile phones, even large numbers of desktops and laptops. When do you get to turn off an insecure but critical function required for basic connectivity?

Good question. Hard, hard answer. Aside from rate limiting defenses, this flaw is going to be around, and enabled by default, for quite some time. Years, at least.

To some extent, that’s the reality of security. It’s the long game, or none at all. Accepting this is a ray of hope. We can talk about problems that will take years to solve, because we know that’s the only scale in which change can occur. The Wi-Fi Alliance deserves credit for playing this game on the long scale. This is a lost battle, but at least this is a group of engineers that have been here before.


Now, what should have been built? Once again, we should have used SRP, and didn’t. SRP is unambiguously the right way to solve the PAKE (Password Authenticated Key Exchange) problem, quite a bit more so than even my gratuitous (if entertaining) work with Phidelius, which somewhat suffers from offline vulnerability. Our inability to migrate to SRP is usually ascribed to some unclear status relating to patents; the fact that our legal system has been unable to declare SRP covered or not has pretty much kept the obviously correct technology from being used.

Somebody should start lobbying or something.

Categories: Security

Salt The Fries: Some Notes On Password Complexity

January 5, 2012 12 comments

Just a small post, because Rob Graham asked.

The question is: What is the differential complexity increase offered by salting hashes in a password database?

The short answer, in bits, is: the base-2 log of the number of accounts the attacker is interested in cracking.

Rob wanted me to explain this in a bit more depth, and so I’m happy to. In theory, the basic property of a cryptographic hash is that it’s infeasible to invert the hash back to the bitstream that forged it. This is used in password stores by taking the plaintext password provided by the user, hashing it, and comparing the hash from the database to the hash of the user-provided plaintext. If they match, the user is authenticated. Even if the database is lost, the attacker won’t be able to invert the hashes back to the passwords, and users are protected.

There are, of course, two attacks against this model. The first, surprisingly ignored, is that the plaintext password still passes through the web app front end before it is hash-compared against the value in the database. An attacker with sufficient access to dump the database often has sufficient access to get code running on either the server or client front ends (and you thought cross-site scripting was uninteresting!). Granted, this type of attack reduces exposed users from “everyone” to “everyone who logs in while the site is compromised”. That’d be more of an improvement if we believed attackers were only interested in instantaneous smash-and-grabs, and were unable to, or afraid of, persisting their threats for long periods of time.


The more traditional threat against password hashes is offline brute forcing. While hashes can’t be inverted, the advent of GPUs means they can be tested at the rate of hundreds of millions to billions per second. So, instead of solving the hard problem of figuring out what input created the given output, just try all the inputs until one happens to equal your desire.

How long will this take? Suppose we can test one billion hashes per second. That’s approximately 2^30, so we can say we can handle 30 bits of entropy per second. If we’re testing eight character passwords with the character set A through Z, a through z, and 0 through 9, that’s (26+26+10)^8, or 218,340,105,584,896 attempts. That’s roughly 2^48, or 48 bits of entropy. We can handle only 30 per second, and 48 – 30 == 18, so it will take approximately 2^18 seconds to run through the entire hash space — that’s 262,144 seconds, or 72 hours.
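The arithmetic, for anyone who wants to check it:

```python
import math

charset = 26 + 26 + 10        # A-Z, a-z, 0-9
space = charset ** 8          # 218,340,105,584,896 candidate passwords
bits = math.log2(space)       # ~47.6, the "roughly 2^48" above
rate = 10 ** 9                # a billion hashes/second, ~2^30
hours = space / rate / 3600   # ~61 hours exact; rounding both sides to
                              # powers of two gives the 2^18 s ~ 72h figure
print(f"{bits:.1f} bits, ~{hours:.0f} hours to exhaust")
```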

(Yes, I’m rounding extensively. The goal is to have a general feel for complexity, not to get too focused on the precise arithmetic.)

There are two important oversimplifications going on here. First, passwords selected by humans, even those that use all-important punctuation, numbers, and other forms of l33tsp33k, do not employ these hardening factors particularly randomly. XKCD has made fun of this in the past. One of the real consequences of all these password dumps is that we’re getting hard data on the types of password patterns people actually use. So groups like TeamHashCat are ordering their billion hashes per second not randomly, but increasingly by the likelihood that a given input will actually be somebody’s password.

So how does salting play into this?

There’s a much older optimization for password cracking than selecting candidates based on complex models of what people use. An unsalted hash is the direct output of hash(password). If two people use the same password, they will have the same password hash. And thus, the work effort to crack passwords amortizes — the same operation that discloses whether Alice’s password is “abcd1234” also discloses whether Bob’s password is “abcd1234”. Salting changes this, by adding some sort of randomizer (possibly just “Alice” or “Bob”) to the hash function. That way, Alice’s password hash is the hash for “Aliceabcd1234”, while Bob’s is the hash for “Bobabcd1234”. To crack both Alice’s and Bob’s passwords requires twice as much work.

That’s one extra bit.

And that then is how you back into the complexity increase of salting on password cracking. When you’re cracking an unsalted database of passwords, the more accounts you have, the more “bites at the apple” there are — each attempt may match any one of n hashes. Salting just removes the optimization. If you want to crack 256 accounts instead of 1, you need to work 256 times harder — that’s 2^8, or 8 bits. If you want to crack 1M accounts, you need to work a million times harder — that’s 2^20, or 20 bits.
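A tiny demonstration of the amortization (plain SHA-256, with the username standing in as the salt, as in the Alice/Bob example above):

```python
import hashlib

def unsalted(pw: bytes) -> str:
    return hashlib.sha256(pw).hexdigest()

def salted(user: bytes, pw: bytes) -> str:
    return hashlib.sha256(user + pw).hexdigest()

db_unsalted = {u: unsalted(b"abcd1234") for u in (b"Alice", b"Bob")}
db_salted   = {u: salted(u, b"abcd1234") for u in (b"Alice", b"Bob")}

# One guess against the unsalted store checks every account at once:
guess = unsalted(b"abcd1234")
print(sorted(u for u, h in db_unsalted.items() if h == guess))  # both match

# Against the salted store, the same guess must be redone per account:
print(db_salted[b"Alice"] == db_salted[b"Bob"])  # False
```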

Note however that the question is not how many accounts there are in total, but how many accounts you’re interested in. If only 256 of the 1M accounts are interesting, salting only wins you 8 bits.

A bigger deal, I think, is memory hardness. Password hash functions can be, and often are, run in a loop, such that the hash function is actually run thousands of times serially. This does win ~12 bits of security. However, such loops are still viable within the practical means by which attackers are getting billions of hashing operations per second, i.e. GPUs. GPUs have an architectural limit, however — they can be run out of memory. Requiring each hash computation to consume 256MB+ of RAM may practically eliminate GPUs from the hash-cracking game entirely, even if the number of interesting accounts is just one.
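Python’s standard library actually exposes scrypt (via OpenSSL, 3.6+), which makes the memory knob easy to see; the parameters here are illustrative, not a recommendation:

```python
import hashlib, os

salt = os.urandom(16)
# scrypt's working set is roughly 128 * n * r bytes; n=2**14, r=8 costs
# about 16 MB per guess -- already painful for a GPU's per-thread memory.
key = hashlib.scrypt(b"abcd1234", salt=salt, n=2**14, r=8, p=1,
                     maxmem=64 * 1024 * 1024, dklen=32)
print(len(key))  # 32-byte derived key
```

Crank `n` up and the per-guess memory bill scales with it, which is exactly the property an iterated SHA loop lacks.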

That’s not game-over for all attackers — million node botnets really do have a couple gigs of RAM available for each cracking process, after all — but it does reduce the population of attackers who get to play. That’s always nice. The best path for memory-hard hashes is Colin Percival’s scrypt, which I’ve been playing with lately inside of my cute (if ill-advised) Phidelius project. scrypt has an odd operational limitation (no libscrypt) but hopefully that can be resolved soon, and we can further refine best practices in this space.

Edit: Ah, some of my commenters have mentioned rainbow tables. These actually do change things slightly, though from an interesting direction. Unsalted password hashes are not merely shared across all accounts in a given database; they’re also shared across every database built on that implementation. That means it’s possible both to pregenerate hashes (there’s no need to know a salt in advance, because there isn’t one) and to spend more time calculating more hashes (since the work effort can be amortized not just cross-account, but cross-database).

It’s become a bit of a thing to just take password hashes and put them into Google. If they’re MD5, and they’re unsalted, a surprising amount of the time you do get an answer. Moral of the story: Salting does keep you out of a hash dataset that may be 24+ bits (16M+ times) “cheaper” (less expensive per account) to attack than just your own.

Peter Maxwell also noted (read the comments) that it’s important to, say, not have a 16 bit salt. Really? Do people do that?

Categories: Security

Phidelius: Constructing Asymmetric Keypairs From Mere Passwords For Fun and PAKE

January 3, 2012 1 comment

TL;DR: New toy.

$ phidelius
Phidelius 1.0: Entropy Spoofing Engine
Author: Dan Kaminsky /
Description: This code replaces most sources of an application's entropy with
a psuedorandom stream seeded from a password, a file, or a generated
sequence. This causes most cryptographic key generators (ssh-keygen,
openssl, etc) to emit apparently strong keys with a presumably memorable
backdoor. For many protocols this creates PAKE (Password Authenticated
Key Exchange) semantics without the server updating any code or even
being aware of the password stored client side. However, the cost of
blinding the server is increased exposure to offline brute force
attacks by a MITM, a risk only partially mitigatable by time/memory
hard crack resistance.
Example: phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f id_dsa"

Passwords are a problem. They’re constantly being lost, forgotten, and stolen. Something like 50% of compromises are associated with their loss. Somewhere along the way, websites started thinking l33tsp33k was a security technology to be enforced upon users.

So what we’re about to talk about in this post — and, in fact, what’s in the code I’m about to finally drop — is by no means a good idea. It may perhaps be an interesting idea, however.

So! First discussed in my Black Ops of 2011 talk, I’m finally releasing Phidelius 1.0. Phidelius allows a client armed with nothing but a password to generate predictable RSA/DSA/ECC keypairs that can then be used, unmodified, against real world applications such as SSH, SSL, IPsec, PGP, and even BitCoin (though I haven’t quite figured out that particular invocation yet — it’s pretty cool, you could send money to the bearer of a photograph).

Now, why would you do this? There’s been a longstanding question as to how can a server support passwords, without necessarily learning those passwords. The standard exhortations against storing unhashed passwords mean nothing against this problem; even if the server is storing hashed values, it’s still receiving them in plain text. There are hash-based challenge response protocols (“please give me the password hashed with this random value”) but they require the server to then store the password (or a password equivalent) so they can recognize valid responses.

There’s been a third class of solutions, belonging to the PAKE (Password Authenticated Key Exchange) family. These solutions generally come down to “password-authenticated Diffie-Hellman”. For various reasons, not least of which have been patents, these approaches haven’t gotten far. Phidelius basically “solves” PAKE, by totally cheating. Rather than authenticating an otherwise random exchange, the keypair itself is nothing but an encoding of the password. The server doesn’t even have to know. (If you are a dev, your ears just perked up. Solutions that require only one side to patch are infinitely easier to deploy.) How can this work?

Well, what was the one bug that affected asymmetric key generation universally, corrupting RSA, DSA, and ECC equally? Of course, the Debian Random Number Generator flaw. See, asymmetric keys of all sorts are all essentially built the same way. A pile of bits is drawn from some ostensibly random source (/dev/random, CryptGenRandom, etc). These bits are then accepted, massaged, or rejected, until they meet the required form for a keypair of the appropriate algorithm.

Under the Debian RNG bug, only a rather small subset of bit patterns could be drawn, so the same keys would be generated over and over. Phidelius just makes predictable key generation a feature: We know how to turn a password into a stream of random numbers. We call that “seeding a pseudorandom number generator”. So, we just make the output of a PRNG the input to /dev/random, /dev/urandom, a couple of OpenSSL functions, and some other miscellaneous noise, and — poof — same password in, same keypair out, as so:

# phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f $RANDOM -N ''"
...The key fingerprint is:
# phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f $RANDOM -N ''"
The key fingerprint is:
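The core trick can be shown in miniature (this is a toy using a SHA-256 counter stream; Phidelius itself seeds from scrypt and interposes on /dev/random and the OpenSSL entropy calls):

```python
import hashlib

def entropy_stream(password: bytes):
    """Deterministic 'randomness': a hash-counter stream seeded by the
    password. Anything drawing entropy from this stream is repeatable."""
    counter = 0
    while True:
        block = hashlib.sha256(password + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def draw(password: bytes, n: int) -> bytes:
    """Draw n bytes of 'entropy' for the given password."""
    stream = entropy_stream(password)
    return bytes(next(stream) for _ in range(n))

# Same password in, same "key material" out -- a key generator fed from
# this stream would emit the same keypair every time:
print(draw(b"ax-op-nm-qw-yi", 32) == draw(b"ax-op-nm-qw-yi", 32))  # True
print(draw(b"ax-op-nm-qw-yi", 32) == draw(b"wrong-password", 32))  # False
```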

Now, as I mentioned earlier, it’s not necessarily a particularly good idea to use this technique, and not just because passwords themselves are inherently messy. The biggest weakness of the Phidelius approach is that the public keys it generates are also backdoored with the password. The problem here is subtle. It’s not interesting that the server can brute force the public key back to the password. This is *always* the case, even in properly beautiful protocols like SRP. (In fact, Phidelius uses the PRNG from scrypt, a time and memory hard algorithm, to make brute forcing not only hard, but probably harder than any traditional PAKE algorithm can achieve.)

What’s interesting is that public keys are supposed to be public. So, most protocols expose them at least to MITM’s, and sometimes even publish the things. Yes, there’s quite a bit of effort put into suppressing brute force attacks, but this is a scheme that really threatens you with them.

It also doesn’t help that salting Phidelius-derived keys is pretty tricky. The fact that the exact same password creates the exact same keypair isn’t necessarily ideal. The best trick I’ve found is to embed parameters for the private key in the larger context afforded to most public keys, i.e. the certificate. You’d retrieve the public key, mix in the randomized salt and some useful parameters (like, say, options for scrypt), and reconstruct the private key. This requires quite a bit of infrastructure though.

And finally, of course, this is a rather fragile solution. The precise bit-for-bit use of entropy isn’t standardized; all bits are supposed to be equal. So you’re not just generating a key with ssh-keygen, you’re generating from ssh-keygen 5.3, possibly as emitted from a particular compiler. A proper implementation would have to standardize all this.

So, there are caveats. But this is interesting, and I look forward to seeing how people play with it.

(Oh, why Phidelius? It’s a spell from Harry Potter, which properly understood is the story of the epic consequences of losing one’s password.)

Edit: Forgot! Tom Ritter saw my talk at Black Hat, and posted that he’d actually worked out how to do this sort of password-to-keypair construction for PGP keys. It’s actually a bit tricky. You can see his code here.

Categories: Security

Crypto Interrupted: Data On The WPS WiFi Break

January 2, 2012 5 comments

TL;DR: Went wardriving around Berlin. 26.3% of APs with crypto enabled exposed methods that imply vulnerability to the WPS design flaw. A conservative extrapolation from WIGLE data suggests at least 4.1M vulnerable hosts. Yikes.


Fixing things is hard.  Fixing things in security, even more so.  (There’s a reasonable argument that the interlocking dependencies of security create a level of Hard that is mathematically representable.) And yet, WiFi was quietly but noticeably one of our industry’s better achievements.  Sure, WiFi’s original encryption algorithm — WEP — was essentially a demonstration of what not to do when deploying cryptographic primitives.  And indeed, due to various usability issues, it used to be rare to see encryption deployed at all.  But, over the last ten years, in a conversion we’ve notably not witnessed with SSL (at least not at nearly the same scale)…well, take a look:

In January 2002, almost 60% of AP’s seen worldwide by the WIGLE wireless mapping project had encryption disabled.  Ten years later, only 20% remained, and while WEP isn’t gone, it’s on track to be replaced by WPA/WPA2. You can explore the raw data here, and while there’s certainly room to argue about some of the numbers, it’s difficult to say things haven’t gotten better.

Which is what makes this WPS flaw found by Stefan Viehböck (and independently discovered by Craig Heffner) quite so tragic.

I’m not going to lie.  Trying to reverse engineer just how this bug came about has been something of a challenge.  There’s a desire to compete with Bluetooth for device enrollment, there’s some defense against evil twin / fake APs, there’s a historical similarity to the old LANMAN bug…I haven’t really figured it out yet.  At the end of the day though, the problem is that an attacker can determine that they’ve guessed the first four digits before working on the next three (the last is a checksum), meaning at most 11,000 queries (5,500 on average) is enough to guess a PIN.
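To make that arithmetic concrete, here’s a sketch in Python. The checksum routine follows the algorithm in the WPS specification (the same one hostapd implements); the rest is just counting the collapsed search space.

```python
def wps_checksum(pin7: int) -> int:
    """Checksum digit for a 7-digit WPS PIN body, per the WPS spec."""
    accum, t = 0, pin7
    while t:
        accum += 3 * (t % 10)  # alternate digits weighted by 3
        t //= 10
        accum += t % 10        # remaining digits weighted by 1
        t //= 10
    return (10 - accum % 10) % 10

# An 8-digit PIN looks like 10**8 possibilities, but the checksum is public,
# leaving only 10**7 real PINs. The split-half confirmation collapses even that:
first_half  = 10**4   # digits 1-4, confirmed independently by the attacker
second_half = 10**3   # digits 5-7 (digit 8 is the checksum)
print(first_half + second_half)         # worst case: 11000 guesses
print((first_half + second_half) // 2)  # average: 5500
```

Without the split, an attacker would face 10,000,000 candidates; with it, eleven thousand.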

So, that’s the issue (and, likely, the fix — prevent the attacker from knowing they’re halfway there, by blinding the midpoint error).  But just like the telnet encryption bug, I think it’s important we get a handle on just how widespread this vulnerability is.  The WIGLE data doesn’t actually declare whether a given encryption-supporting node also supports WPS.  So…let’s go outside and find out.

Wardriving?  In 2012?  It’s more likely than you think.

Over an approximately 4 hour period, 3,738 AP’s were found in the Berlin area.  (I was in Berlin for CCC.)  2,758 of these AP’s had encryption enabled — 73%, a bit higher than WIGLE average.  535 of the 2,758 (19.39%) had the “Label” WPS method enabled, along with crypto.  These are almost certainly vulnerable.  Another 191 (6.9%) had the “Display” WPS method enabled.  We believe these to be vulnerable as well.  Finally, 611/2758 — 22% — had WPS enabled, with no methods declared.  We have no data on whether these devices are exposed.

As things stand, we see 26.3% of AP’s with encryption enabled, likely to be exposing the WPS vulnerability.

What does this mean, in terms of absolute numbers?  WIGLE’s seen about 26M networks with crypto enabled.  However, they don’t age out AP’s, meaning the fact that a network hasn’t been seen since 2003 doesn’t mean it isn’t represented in the above numbers.  Still, 60% of the networks they’ve ever seen were first seen in the last two years.  If we just take 60% of the 26M networks — 15.6M — and we entirely ignore all the networks WIGLE was unable to identify the crypto for, and all the networks that declare WPS but don’t declare a type…

We end up with 4.1M networks vulnerable to the WPS design vulnerability around the globe.  That’s the conservative number, and it’s a pretty big deal.
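The back-of-envelope math, in case you want to check it yourself (round figures from the text):

```python
wigle_crypto = 26_000_000   # networks WIGLE has ever seen with crypto enabled
recent       = 0.60         # fraction first seen in the last two years
vulnerable   = 0.263        # Label + Display methods, from the Berlin sample

print(round(wigle_crypto * recent * vulnerable / 1e6, 1))  # → 4.1 (million)
```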

Note, of course, that whether a node is vulnerable is not a random function.  Like the UPnP issues I discussed in my 2011 Black Hat talk, this is an issue that may show up on every access point distributed by a particular ISP.  Given that there are entire countries with a single Internet provider, it is likely there are entire regions where every access point is now wide open.

Curious about your own networks?  Run:

iw dev wlan0 scan | grep "Config methods"

If you see anything with “Label” or “Display” you’ve got a problem.

Small update:

There’s code in wpa_supplicant that implies that APs which do not expose a method may still support PINs:

* In theory, this could also verify that attr.sel_reg_config_methods
* but some deployed AP implementations do not set Selected Registrar
* Config Methods attribute properly, so it is safer to just use
* Device Password ID here.

I’m going to stick with the conservative 26.3%, rather than the larger 48.1% estimate, until someone actually confirms an effective attack against such an AP.

Categories: Security

From 0day to 0data: TelnetD

December 29, 2011 Leave a comment

Recently, it was found that BSD-derived Telnet implementations had a fairly straightforward vulnerability in their encryption handler. (Also, it was found that there was an encryption handler.) Telnet was the de facto standard protocol for remote administration of everything but Windows systems, so there’s been some curiosity in just how nasty this bug is operationally.

So, with the help of Fyodor and the NMAP scripting team, I set out to collect some data on just how widespread this bug is. First, I set out to find all hosts listening on tcp/23. Then, with Fyodor’s code, I checked for encryption support. Presence of support does not necessarily imply vulnerability, but absence does imply invulnerability.

The scanner’s pretty rough, and I’m doing some sampling here, but here’s the bottom line data:

Out of 156,113 confirmed telnet servers randomly distributed across the Internet, only 429 exposed encryption support. By far my largest source of noise is the estimate of just how many telnet servers (as opposed to just TCP listeners on 23, or transient services) exist across the entire Internet. My conservative numbers place that count at around 7.8M. This yields an estimate of ~21,546 potentially vulnerable servers, revised up from an initial ~13,785 (or 15,600 to 23,500, at 95% confidence — thanks, David Horn!).
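For the curious, here’s a crude version of that extrapolation, with a textbook normal-approximation interval on the sampled proportion. It treats the 7.8M denominator as exact, which the quoted interval sensibly does not, so the numbers won’t match it precisely:

```python
import math

n, k  = 156_113, 429        # sampled telnet servers; those exposing encryption
total = 7_800_000           # rough Internet-wide telnet server count (the noisy part)

p  = k / n
se = math.sqrt(p * (1 - p) / n)             # binomial standard error
point     = p * total                        # point estimate, ~21.4k
low, high = (p - 1.96 * se) * total, (p + 1.96 * se) * total
print(round(point), round(low), round(high))
```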

Patched servers will still look vulnerable unless they remove encryption support entirely, so this isn’t something we can watch get fixed (well, unless we’re willing to crash servers, which I’m not, even if it’d be safe because these things are launched from inetd).

Here is a command line that will scan your networks. You may need nmap from svn.

/usr/local/bin/nmap -p23 -PS23 --script telnet-encryption -T5 -oA telnet_logs -iL list_of_ips -v

So, great bug, just almost certainly not widely distributed. I’ll document the analysis process when I’m not at con.

Categories: Security

Exploring ReU: Rewriting URLs For Fun And XSRF/HTTPS

October 20, 2011 1 comment

So I’ve been musing lately about a scheme I call ReU. It would allow client side rewriting of URLs, to make the addition of XSRF tokens much easier. As a side effect, it should also make it much easier to upgrade the links on a site from http to https. Does it work? Is it a good idea? I’m not at all sure. But I think it’s worth talking about:


There is a consistent theme to both XSS and XSRF attacks: XS. The fact that foreign sites can access local resources, with all the credentials of the user, is a fundamental design issue. Random web sites on the Internet should not be able to retrieve Page 3 of 5 of a shopping cart in the same way they can retrieve this blog entry! The most common mechanism recommended to prevent this sort of cross-site chicanery is to use a XSRF token: a random value appended to local URLs, something like https://store.example.com/cart?page=3&token=f03b19c2.

The token is randomized, so now links from the outside world fail despite having appropriate credentials in the cookie.

In some people’s minds, this is quite enough. It only needs to be possible to deploy secure solutions; cost of deployment is so irrelevant, it’s not even worth being aware of. The problem is, somebody’s got to fix this junk, and it turns out that retrofitting a site with new URLs is quite tricky. As Martin Johns wrote:

…all application local URLs have to be rewritten for using the randomizer object. While standard HTML forms and hyperlinks pose no special challenge, prior existing JavaScript may be harder to deal with. All JavaScript functions that assign values to document.location or open new windows have to be located and modified. Also all existing onclick and onsubmit events have to be rewritten. Furthermore, HTML code might include external referenced JavaScript libraries, which have to be processed as well. Because of these problems, a web application that is protected by such a solution has to be examined and tested thoroughly.

Put simply, web pages are pieced together from pieces developed across sprints, teams, companies, and languages. No one group knows the entire story — but if anyone screws up and leaves the token off, the site breaks.

The test load required to prevent those breakages from recurring is a huge part of why delivering secure solutions is such a slow and expensive process.

So I’ve been thinking: If upgrading the security of URLs is difficult server side, because the code there tends to be fragmented, could we do something on the client? Could the browser itself be used to add XSRF tokens to URLs in advance of their retrieval or navigation, in a way that basically came down to a single script embedded in the HEAD segment of an HTML file?

Would not such a scheme be useful not only for adding XSRF tokens, but also changing http links to https, paving the way for increasing the adoption of TLS?

The answer seems to be yes. A client side rewriter — ReU, as I call it — might very well be a small, constrained, but enormously useful request of the browser community. I see the code looking something vaguely like:

var session_token = "aaa241ee1298371";
var session_domain = "";

function analyzeURL(url, domnode, flags) {
   var domain = getDomain(url);
   // only add token to local resources
   if (isInside(session_domain, domain)) {
      url = addParam(url, "token", session_token);
   }
   return url;
}

Now, something that could totally kill this proposal, is if browsers that didn’t support the extension suddenly broke — or, worse, if users with new browsers couldn’t experience any of the security benefits of the extension, lest older users have their functionality broken. Happily, I think we can avoid this trap, by encoding in the stored cookie whether the user’s browser supports ReU. Remember, for the XSRF case we’re basically trying to figure out in what situation we want to ignore a browser’s request despite the presence of a valid cookie. So if the cookie reports to us that it came from a user without the ability to automatically add XSRF tokens, oh well. We let that subset through anyway.

As long as the bad guys don’t have the ability to force modern browsers to appear as insecure clients, we’re OK.
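Here’s a hypothetical sketch of the server-side half of that decision. Everything in it (cookie names, the toy session store, keeping the capability flag in the session record rather than the cookie itself) is invented for illustration; the point is just the fallback logic:

```python
from dataclasses import dataclass, field

# Toy session store: one ReU-capable session, one legacy session.
SESSIONS = {
    "sid42": {"xsrf_token": "aaa241ee1298371", "reu_capable": True},
    "sid43": {"xsrf_token": "b7c90d2211fe02a", "reu_capable": False},
}

@dataclass
class Request:
    cookies: dict = field(default_factory=dict)
    args: dict = field(default_factory=dict)

def should_accept(req: Request) -> bool:
    session = SESSIONS.get(req.cookies.get("sid"))
    if session is None:
        return False                  # no credentials at all
    if not session["reu_capable"]:
        return True                   # legacy browser: the cookie alone has to suffice
    # ReU-capable browser: the client-side rewriter should have appended the token
    return req.args.get("token") == session["xsrf_token"]
```

A forged cross-site link against the ReU-capable user fails (no token on the URL), while the legacy user keeps working — exactly the “let that subset through anyway” compromise described above.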

Now, there’s some history here. Jim Manico pointed me at CSRFGuard, a project from the OWASP guys. This also tries to “upgrade” URLs through client side inspection. The problem is that real world frameworks really do assemble URLs dynamically, or go through paths that CSRFGuard can’t interrupt. These aren’t obscure paths, either:

No known way to inject CSRF prevention tokens into setters of document.location (ex: document.location="";)
Tokens are not injected into HTML elements dynamically generated using JavaScript "createElement" and "setAttribute"

Yeah, there’s a lot of setters of document.location in the field. Ultimately, what we’re asking for here is a new security boundary for browsers — a direct statement that, if the client’s going somewhere or grabbing something, this inspector is going to be able to take a look at it.

There’s also history with both the Referer and Origin headers, which I spoke about (along with the seeds of this research) in my Interpolique talk. The basic concept with each is that the navigation source of a request — say — would identify itself in some manner in the HTTP header of the subsequent request. The problem for Referer is that there are just so many sources of requests, and it’s best effort whether they show up or not (let alone whether they show up correctly).

Origin was its own problem. Effectively, they added this entire use case where images and links on a shared forum wouldn’t get access to Origin headers. It was necessary for their design, because their security model was entirely static. But for everything that’s *not* an Internet forum, the security model is effectively random.

With ReU, you write the code (or at least import the library) that determines whether or not a token is applied. And as a bonus, you can quickly upgrade to HTTPS too!

There is at least one catastrophic error case that needs to be handled. Basically, those pop-under windows that advertisers use, are actually really useful for all sorts of interesting client side attacks. See, those pop-unders usually retain a connection to the window they came from, and (here’s the important part) can renavigate those windows on demand. So there’s a possible scenario in which you go to a malicious site, it spawns the pop-under, you go to another site, you log in, and then the malicious page drags your real logged in window to a malicious URL at the otherwise protected site.

It’s a navigation event, and so a naive implementation of ReU would fire, adding the token. Yikes.

The solution is straightforward — guarantee that, at least by default, any event that fired the rewriter came from a local source. But that brings up a much trickier question:

Do we try to stop clickjacking with this mechanism as well? IFrames are beautiful, but they are also the source of a tremendous number of events that come from rather untrusted parties. It’s one thing to request a URL rewriter that can special case external navigation events. It’s possibly another to get a default filter against events sourced from actions that really came from an IFrame parent.

It is very, very possible to make technically infeasible demands. People think this is OK, because it counts as “trying”. The problem is that “trying” can end up eating your entire budget for fixes. So I’m a little nervous about making ReU not fire when the event that triggered it came from the parent of an IFrame.

There are a few other things of note — this approach is rather bookmark friendly, both because the URLs still contain tokens, and because a special “bookmark handler” could be registered to make a long-lived bookmark of sorts. It’s also possible to allow other modulations for the XSRF token, which (as people note) does leak into Referer headers and the like. For example, HTTP headers could be added, or the cookie could be modified on a per request basis. Also, for navigations offsite, secrets could be removed from the referring URL before being handed to foreign sites — this could close a longstanding concern with the Referer header too.

Finally, for debuggability, it’d be necessary to have some way of inspecting the URL rewriting stack. It might be interesting to allow the stack to be locked, at least in order of when things got patched in. Worth thinking about.

In summary, there’s lots of quirks to noodle on — but I’m fairly convinced that we can give web developers a “one stop shop” for implementing a number of the behaviors we in the security community believe they should be exhibiting. We can make things easier, more comprehensive, less buggy. It’s worth exploring.

Categories: Security

These Are Not The Certs You’re Looking For

August 31, 2011 16 comments

It began near Africa.

This is not, of course, the * certificate, seen in Iran, signed by DigiNotar, and announced on August 29th with some fanfare.

This certificate is for Facebook. It was seen in Syria, signed by Verisign (now Symantec), and was routed to me out of the blue by a small group of interesting people — including Stephan Urbach, “The Doctor”, and Meredith Patterson — on August 25th, four days prior.

It wasn’t the first time this year that we’d looked at a mysterious certificate found in the Syrian wild. In May, the Electronic Frontier Foundation reported that it “received several reports from Syrian users who spotted SSL errors when trying to access Facebook over HTTPS.” But this certificate would yield no errors, as it was legitimately signed by Verisign (now Symantec).

But Facebook didn’t use Verisign (now Symantec). From the States, I witnessed Facebook with a DigiCert cert. So too did Meredith in Belgium, and Stephan provided a third strike: Facebook’s CA was clearly DigiCert.

The SSL Observatory dataset, in all its thirty five gigabytes of glorious detail, agreed completely. With the exception of one Equifax certificate, Facebook was clearly identifiable by DigiCert alone.

And then Meredith found the final nail:

VeriSign intermediary certificate currently used for VeriSign Global certificates. Chained with VeriSign Class 3 Public Primary Certification Authority. End of use in May 2009.

So here we were, looking at a Facebook certificate that couldn’t be found anywhere on the net — by any of us, or by the SSL Observatory — signed by an authority that apparently was never used by the site, found in a country that had recently been found interfering with Facebook communication, using a private key that had been decommissioned for years. Could things get any sketchier?

Well, they could certainly get weirder. That certificate that was killed in 2009, was the certificate killed by us! That’s the MD2-based certificate that Meredith, myself, and the late Len Sassaman warned about in our Black Hat 2009 talk (Slides / Paper)! Somehow, the one cryptographic function that we actually killed before it could do any harm (at the CA, in the browser, even via RFC) was linked to a Facebook certificate issued on July 13th, 2011, two years after its death?

This wasn’t just an apparently false certificate. This was overwhelmingly, almost cartoonishly bogus. Something very strange was happening in Syria.

Facebook was testing some new certificates on one of their load balancers.


I’ve got a few friends at Facebook. So, with the consent of everyone else involved, I pinged them — “Likely Spoofed Facebook Cert in Syria”. Thought it would get their attention.

It did.

We’re safe.

Turns out we just ordered new (smaller) certs from Verisign and deployed them to a few test LBs on Monday night. We’re currently serving a mixture of digicert and verisign certificates.

One of our test LBs with the new cert: a.b.c.d:443

And thus, what could have been a flash of sound and fury, ultimately signifying nothing, was silenced. There was no attack. Facebook had the private key matching the cert — something not even Verisign would have had, in the case of an actual Syrian break.

In traditional security fashion, we dropped our failed investigation and made no public comment about it — perhaps it’d be a funny story for a future conference, but it was certainly nothing to discuss with urgency.

And then, four days later, the DigiNotar break of * went public. Totally unrelated, and obviously in progress while our false alarm was ringing. The fates are strange sometimes.


The only entity that can truly assert the cryptographic keys of Facebook, is Facebook. We, as outside observers with direct expertise, detected practically every possible sign that this particular Facebook cert was malicious. It wasn’t. It just wasn’t. That such a comically incorrect certificate might actually be real, has genuine design consequences.

It means you have to put an error screen, with bypass, up for the user. No matter what the outside observers say, they may very well be wrong. The user must be given an override. And the data on what users do, when given an SSL error screen, isn’t remotely subtle. Moxie Marlinspike, probably the smartest guy in the world right now on SSL issues, did a study a few years ago on how many Tor users — not even regular users, but Tor users, clearly concerned about their privacy and possessed with some advanced level of expertise — would notice SSL being disabled and refuse to browse their desired content.

Moxie didn’t find a single user who resisted disabled security. His tool, sslsniff, worked against 100% of the sample set.

To be fair, Moxie’s tool didn’t pop an error — it suppressed security entirely. And, who knows, maybe Tor users think they’ve “done enough” for security and just consider their security budget spent after that. Like most things in security, our data is awful.

But, you know, this UI data ain’t exactly encouraging.

Recently, Moxie’s been presenting some interesting technology, in the form of his Convergence platform. There’s a lot to be said about Convergence, which frankly, I don’t have time to write about right now. (Disclosure: I’m going to Burning Man in about twelve hours. Honestly, I was hoping to postpone this entire discussion until N00ter’s first batch of data was ready. Ah well.) But, to be clear, I was fooled, Meredith was fooled, Moxie would have been fooled…I mean, look at this certificate! How could it possibly be right?


It is true that the only entity that can truly assert being Facebook, is Facebook. Does that mean everybody should just host self-signed certificates, where they assert themselves directly? Well, no. It doesn’t scale, and just ends up repeatedly asking the user whether a key (first seen, or changed) should be trusted.

Whether a key should be trusted is a question to which users only reply yes. Nice for blaming users, I suppose, but it doesn’t move the ball.

What’s better is a way to go from users having to trust everybody, to users having to trust only a small subset of organizations with which they have a business relationship. Then, that subset of organizations can enter into business relationships with the hordes of entities that need their identities asserted…

…and we just reinvented Certificate Authorities.

There’s a lot to say about CA’s, but one thing my friend at Verisign (there were a number of friendly mails exchanged) did point out is that the Facebook certificate passed a check via OCSP — the Online Certificate Status Protocol. Designed because CRLs just do not scale, OCSP allows a browser to quickly determine whether a given certificate is valid or not.

(Not quickly enough — browsers only enforce OCSP checks under the rules of Extended Validation, and *maybe* not even then. I’ve heard rumors.)

CA’s are special, in that they have an actual business relationship with the named parties. For a brief moment, I was excited — perhaps a way for everybody to know if a particular cert was valid, was to ask the certificate authority that issued it! Maybe even we could do something short of the “Internet Death Penalty” for certificates — browsers could be forced to check OCSP before accepting any certificate from a questioned authority.

Alas, there was a problem — and not just “the only value people are adding is republishing the data from the CA”. No, this concept doesn’t work at all, because OCSP assumes a CA never loses control of its keys. I repeat, the system in place to handle a CA losing its keys, assumes the CA never loses the keys.

Look at this.

$ openssl ocsp -issuer vrsn_issuer_cert.pem -cert Cert.pem -url -resp_text
OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response
Version: 1 (0x0)
Responder Id: O = VeriSign Trust Network, OU = “VeriSign, Inc.”, OU = VeriSign International Server OCSP Responder – Class 3, OU = Terms of use at (c)03
Produced At: Aug 30 09:18:44 2011 GMT
Certificate ID:
Hash Algorithm: sha1
Issuer Name Hash: C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0
Issuer Key Hash: 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6
Serial Number: 092197E1C0E9CD03DA35243656108681
Cert Status: good
This Update: Aug 30 09:18:44 2011 GMT
Next Update: Sep 6 09:18:44 2011 GMT

Signature Algorithm: sha1WithRSAEncryption
Version: 3 (0x2)
Serial Number:
Signature Algorithm: sha1WithRSAEncryption
Issuer: O=VeriSign Trust Network, OU=VeriSign, Inc., OU=VeriSign International Server CA – Class 3, Ref. LIABILITY LTD.(c)97 VeriSign
Not Before: Aug 4 00:00:00 2011 GMT
Not After : Nov 2 23:59:59 2011 GMT
Subject: O=VeriSign Trust Network, OU=VeriSign, Inc., OU=VeriSign International Server OCSP Responder – Class 3, OU=Terms of use at (c)03
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public Key: (1024 bit)
Modulus (1024 bit):
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
X509v3 Certificate Policies:
Policy: 2.16.840.1.113733.
User Notice:
Organization: VeriSign, Inc.
Number: 1
Explicit Text: VeriSign’s CPS incorp. by reference liab. ltd. (c)97 VeriSign

X509v3 Extended Key Usage:
OCSP Signing
X509v3 Key Usage:
Digital Signature
OCSP No Check:

X509v3 Subject Alternative Name:
Signature Algorithm: sha1WithRSAEncryption
Cert.pem: good
This Update: Aug 30 09:18:44 2011 GMT
Next Update: Sep 6 09:18:44 2011 GMT

I’ve spent years saying X.509 is broken, but seriously, this is it for me. I copy this in full because the full reply needs to be parsed, and I don’t see this as dropping 0day because it’s very clearly an intentional part of the design. Here is what OCSP asserts:

Certificate ID:
Hash Algorithm: sha1
Issuer Name Hash: C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0
Issuer Key Hash: 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6
Serial Number: 092197E1C0E9CD03DA35243656108681
Cert Status: good

To translate: “A certificate with the serial number 092197E1C0E9CD03DA35243656108681, using sha1, signed by a name with the hash of C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0, using a key with the hash of 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6, is good.”
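A toy illustration of why that’s all an attacker needs. Real CertIDs hash DER-encoded structures (per the OCSP spec, RFC 2560), and the values below are stand-ins, but the fields that go into the identifier are exactly these three:

```python
import hashlib

def cert_id(issuer_name: bytes, issuer_key: bytes, serial: int):
    """Sketch of an OCSP CertID: issuer name hash, issuer key hash, serial."""
    return (hashlib.sha1(issuer_name).hexdigest(),
            hashlib.sha1(issuer_key).hexdigest(),
            serial)

issuer_name = b"O=VeriSign Trust Network, OU=..."   # stand-in bytes
issuer_key  = b"<issuer public key>"                # stand-in bytes
serial      = 0x092197E1C0E9CD03DA35243656108681

legit  = cert_id(issuer_name, issuer_key, serial)   # the real certificate
forged = cert_id(issuer_name, issuer_key, serial)   # same serial, any subject you like

# The subject name and subject public key never enter the computation,
# so the responder answers "good" for both.
print(legit == forged)  # → True
```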

To translate even further, please imagine a scene from one of my favorite shows, The IT Crowd.

Roy: Moss…did you…did you sign this certificate?
Moss: I did indeed sign a certificate…with that serial number.
Roy: Yes, but did you sign this particular certificate?
Moss: Yes, I did indeed sign a particular certificate…with that serial number.
Roy: Moss? Did YOU sign, with YOUR key, this particular certificate?
Moss: Roy. I DID sign, with MY key, a certificate….with that serial number.
Roy: Yes, but what if the certificate you signed isn’t the certificate that I have? What if your certificate is for Moe’s Doner Shack, and mine is for *
Moss: I would never sign two certificates with the same serial number.
Roy: What if you did?
Moss: I would never sign two certificates with the same serial number.
Roy: What if somebody else did, got your keys perhaps?
Moss: I would never sign two certificates with the same serial number.
Roy: Moss?
Moss: I would never sign two certificates with the same serial number.

Long and short of it is that OCSP doesn’t tell you if a certificate is good, it just tells you if the CA thinks there’s a good certificate out there with that serial number. An attacker who can control the serial number — either by compromising the infrastructure, or just getting the keys — wins.

(This is very similar to the issue Kevin S. McArthur (and Marsh Ray?) and I were musing about on Twitter a few months back during Comodogate, in which it became clear that CRL’s also only secure serial numbers. The difference is that CRL’s are an ancient technology that we knew didn’t work, while OCSP was designed with full awareness of threats and really went out of its way to avoid hashing the parts of the certificate that actually mean something, like Subject Name and whether the certificate is a God Mode Intermediate.)

Now, that’s not to say that another certificate validation endpoint couldn’t be exposed by the CAs, one that actually reported whether a particular certificate was valid. The problem is that CAs can lie — they can claim the existence of a business relationship, when none exists. The CA’s I deal with are actually quite upstanding and hard working — let me call out Comodo and say unequivocally that they took a lot of business risk for doing the right thing, and it couldn’t have been an easy decision to make. The call to push for a browser patch, and not to pretend that browser revocation checking was meaningful, was a pretty big deal. But with over 1500 known entities with the ability to declare certificates for everyone, the trust level is unfortunately degraded.

1500 is too big. We need fewer, and more importantly, we need to reward those that implement better products. We need a situation where there is at least a chain of business (or governmental — we’ll talk about this more next week) relationships between the identity we trust, and the entities that help us recognize that trust — and that those outside that trust (DigiNotar) can be as easily excluded as those inside the trust (Verisign/Symantec) are included.

DNSSEC doesn’t do everything. Specifically, it can’t assert brands, which are different things entirely from domain names. But the degree to which it ties *business relationships* to *key management* is unparalleled — there’s a reason X.509 certificates, for all the gunk they coded into Subject Names, ended up running straight to DNS names as “Common Names” the moment they needed to scale. That’s what every other application does, when it needs to work outside the org! From the registrar that inserts a domain into DNS, to the registry that hosts only the top portion of the namespace, to the one root which is guarded from so much technical and bureaucratic influence, DNSSEC aligns trust with extant relationships.

And, in no uncertain terms, when DNSSEC tells you a certificate is valid, it won’t just be talking about the serial number.

More later. For now, Burning Man. ISPs, a reminder…I’m serious, I genuinely would prefer N00ter not find anything problematic.


Curious how Facebook could have a certificate that chained to a dead cert signed with a broken hashing function? So was I, until I remembered — roots within browsers are not trusted because of their self signature (best I can tell, just a cheap trick to make ASN.1 parsers happy). They are trusted because the code is made to explicitly trust them. The public key in a certificate otherwise self-signed by MD2 is no less capable of signing SHA-1 blobs as well, which is what’s happening in the Facebook cert. If you want to see the MD2 kill actually breaking stuff, see this post by the brilliant Eric Lawrence.

It IS a little weird, but man, that’s ops. Ops is always a little weird.

Categories: Security
