
Primal Fear: Demuddling The Broken Moduli Bug

February 17, 2012 6 comments

There’s been a lot of talk about this supposed vulnerability in RSA, independently discovered by Arjen Lenstra and James P. Hughes et al, and Nadia Heninger et al. I wrote about the bug a few days ago, but that was before Heninger posted her data. Let’s talk about what’s one of the more interesting, if misunderstood, bugs in quite some time.


SUMMARY
INTRODUCTION
THE ATTACK
IT’S NOT ABOUT RON AND WHIT
WHO MATTERS
FAILURE TO ENROLL
THE ACTUAL THREAT
ACTIONABLE INTELLIGENCE
CONCLUSION


SUMMARY

  • The “weak RSA moduli” bug is found almost (and possibly entirely) exclusively within certificates that were already insecure (i.e. expired, or not signed by a valid CA).
  • This attack almost certainly affects not a single production website.
  • The attack utilizes a property of RSA whereby if half the private key material is shared between two public keys, the private key is leaked. Researchers scaled this method to cross-compare every RSA key on the Internet against every other RSA key on the Internet.
  • The flaw has nothing to do with RSA or “multi-secret” systems. The exact same broken random number generator would play just as much havoc, if not more, with “single-secret” algorithms such as ECDSA.
  • DSA, unlike RSA, leaks the private key with every signature under conditions of faulty entropy. That is arguably worse than RSA which leaks its private key only during generation, only if a similar device emits the same key, and only if the attacker finds both devices’ keys.
  • The first major finding is that most devices offer no crypto at all, and even when they do, the crypto is easily man-in-the-middled due to a presumption that nobody cares whether the right public key is in use.
  • Cost and deployment difficulty drive the non-deployment of cryptographic keys even while almost all systems acquire enough configuration for basic connectivity.
  • DNSSEC will dramatically reduce this cost, but can do nothing if devices themselves are generating poor key material and expecting DNSSEC to publish it.
  • The second major finding is that it is very likely that these findings are only the low hanging fruit of easily discoverable bad random number generation flaws in devices. It is specifically unlikely that only a third of one particular product had bad keys, and the rest managed to call quality entropy.
  • This is a particularly nice attack in that no knowledge of the underlying hardware or software architecture is required to extract the lost key material.
  • Recommendations:
    1. Don’t panic about websites. This has very little to absolutely nothing to do with them.
    2. When possible and justifiable, generate private key material outside your embedded devices, and push the keys into them. Have their surrounding certificates signed, if feasible.
    3. Audit smartcard keys.
    4. Stop buying or building CPUs without hardware random number generators.
    5. Revisit truerand, an entropy source that only requires two desynchronized clocks, possibly integrating it into OpenSSL and libc.
    6. When doing global sweeps of the net, be sure to validate that a specific population is affected by your attack before including it in the vulnerability set.
    7. Start seriously looking into DNSSEC. You are deploying a tremendous number of systems that nobody can authenticate.

INTRODUCTION
If there’s one thing to take away from this entire post, it’s the following line from Nadia Heninger’s writeup:

Only one of the factorable SSL keys was signed by a trusted certificate authority and it has already expired.

What this means, in a nutshell, is that there was never any security to be lost from crackable RSA keys; due to failures in key management, almost certainly all of the affected keys were vulnerable to being silently “swapped out” by a Man-In-The-Middle attacker. It isn’t merely the fact that all the headlines proclaiming “0.2% of websites using RSA are insecure” are straight up false, because the flaws are concentrated on devices. It’s also the case that the devices themselves were already using RSA insecurely to begin with, merely by being deployed outside a key management system.

If there’s a second thing to take away, it’s that this is important research with real actions that should be taken by the security community in response. There’s no question this research is pointing to things that are very wrong with the systems we depend on. Do not make the mistake of discounting this work as merely academic. We have a problem with random numbers, that is even larger than Lenstra and Hughes and Heninger are finding with their particular mechanism.


THE ATTACK

To grossly simplify, RSA works by:

1) Generating two random primes, p and q. These are private.
2) Multiplying p and q together into n. This becomes public, because figuring out the exact primes used to create n is Hard.
3) Content is signed or decrypted with p and q.
4) Content is verified or encrypted with n.

It’s obvious that if p is not random, q can be calculated: n/p==q, because p*q==n. What’s slightly less obvious, however, is that if either p or q is repeated across multiple n’s, then the repeated value can be trivially extracted using an ancient algorithm by Euclid called the Greatest Common Divisor test. The trick looks something like this (you can follow along with Python, if you like):

>>> from gmpy import *
>>> p=next_prime(rand("next"))
>>> q=next_prime(rand("next"))
>>> p
mpz(136093819)
>>> q
mpz(118190411)
>>> n=p*q
>>> n
mpz(16084984402169609)

Now we have two primes, multiplied into some larger n.

>>> q2=next_prime(rand("next"))
>>> n2=p*q2
>>> n2
mpz(289757928023349293)

Ah, now we have n2, which is the combination of a previously used p, and a newly generated q2. But look what an attacker with n and n2 can do:

>>> gcd(n,n2)
mpz(136093819)
>>> p
mpz(136093819)

Ta-da! p falls right out, and thus q as well (since again, n/p==q). So what these teams did was gcd every RSA key on the Internet, against every other RSA key on the Internet. This would of course be quite slow — six million keys times six million keys is thirty six trillion comparisons — and so they used a clever algorithm to scale the attack such that multiple keys could be simultaneously compared against one another. It’s not clear what Lenstra and Hughes used, but Heninger’s implementation leveraged some 2005 research from (who else?) Dan Bernstein.
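To make the all-pairs comparison tractable at Internet scale, the usual trick is Bernstein-style batch GCD: build a product tree of all the moduli, then push the full product back down a remainder tree. The sketch below is a minimal, hypothetical illustration of that idea in Python (it is not the researchers' actual code), reusing the two toy moduli from the transcript above; any output greater than 1 is a recovered prime.

from math import gcd, prod

def batch_gcd(moduli):
    # Product tree: leaves are the moduli, each parent is the product of its children.
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        level = tree[-1]
        tree.append([prod(level[i:i + 2]) for i in range(0, len(level), 2)])
    # Remainder tree: reduce the full product back down modulo each n^2.
    remainders = tree.pop()
    while tree:
        level = tree.pop()
        remainders = [remainders[i // 2] % (n * n) for i, n in enumerate(level)]
    # For each modulus n, gcd(n, (P mod n^2)/n) exposes any prime shared with another key.
    return [gcd(r // n, n) for r, n in zip(remainders, moduli)]

print(batch_gcd([16084984402169609, 289757928023349293]))
# -> [136093819, 136093819]: the shared prime p falls right out of both moduli.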

That is the math. What is the impact, upon computer security? What is the actionable intelligence we can derive from these findings?

Uncertified public keys or not, there’s a lot to worry about here. But it’s not at all where Lenstra and Hughes think.


IT’S NOT ABOUT RON AND WHIT

Lenstra and Hughes, in arguing that “Ron was Wrong, Whit was right”, declare:

Our conclusion is that the validity of the assumption is questionable and that generating keys in the real world for “multiple-secrets” cryptosystems such as RSA is significantly riskier than for “single-secret” ones such as ElGamal or (EC)DSA which are based on Diffie-Hellman.

The argument is essentially that, had those 12,000 keys been ECDSA instead of RSA, then they would have been secure. In my previous post on this subject (a post RSA Inc. itself is linking to!) I argued that this paper, while chock-full of useful and important data, failed to make the case that the blame could be laid at the feet of the “multiple-secret” RSA. Specifically:

  • Risk in cryptography is utterly dominated, not by cipher selection, but by key management. It does not matter the strength of your public key if nobody knows to demand it. Whatever the differential risk is that is introduced by the choice of RSA vs. ECDSA pales in comparison to whether there’s any reason to believe the public key in question is the actual key of the desired target. (As it turns out, few if any of the keys with broken moduli, had any reason to trust them in the first place.)
  • There aren’t enough samples of DSA in the same kind of deployments to compare failure rates from one asymmetric cipher to another.
  • If one is going to blame a cipher for implementation failures in the field, it’s hard to blame the maybe dozens of implementations of RSA that caused 12,000 failures, while ignoring the one implementation of ECDSA that broke millions upon millions of PlayStation 3s.

But it wasn’t until I read the (thoroughly excellent) blog post from Heninger that it became clear that, no, there’s no rational way this bug can be blamed on multi-secret systems in general or RSA in particular.

However, some implementations add additional randomness between generating the primes p and q, with the intention of increasing security:

prng.seed(seed)
p = prng.generate_random_prime()
prng.add_randomness(bits)
q = prng.generate_random_prime()
N = p*q

If the initial seed to the pseudorandom number generator is generated with low entropy, this could result in multiple devices generating different moduli which share the prime factor p and have different second factors q. Then both moduli can be easily factored by computing their GCD: p = gcd(N1, N2).

OpenSSL’s RSA key generation functions this way: each time random bits are produced from the entropy pool to generate the primes p and q, the current time in seconds is added to the entropy pool. Many, but not all, of the vulnerable keys were generated by OpenSSL and OpenSSH, which calls OpenSSL’s RSA key generation code.

Would the above code become secure, if the asymmetric cipher selected was the single-secret ECDSA instead of the multi-secret RSA?

prng.seed(seed);
ecdsa_private_key=prng.generate_random_integer();

No. All public keys emitted would be identical (as identical as they’d be with RSA, anyway). I suppose it’s possible we could see the following construction instead:

prng.seed(seed);
first_half=prng.generate_random_integer();
prng.add_randomness(bits);
second_half=prng.generate_random_integer();
ecdsa_private_key=concatenate(first_half, second_half);

…but I don’t think anyone would claim that ECDSA, DSA, or ElGamal is secure in the circumstance where half its key material has leaked. (Personal note: Prodded by Heninger, I’ve personally cracked a variant of RSA-1024 where all bits set to 1 were revealed. It wasn’t difficult — cryptosystems fail quickly when their fundamental assumptions are violated.)

Hughes has made the argument that RSA is unique here because the actions of someone else — publishing n composed of your p and his q — impact an innocent you. Alas, you’re no more innocent than he is; you’re republishing his p, and exposing his q2 as much as he’s exposing your q. Not to mention, in DSA, unlike RSA, you don’t need someone else’s help to expose your private key. A single signature, emitted without a strong RNG, is sufficient. As we saw from the last major RNG flaw, involving Debian:

Furthermore, all DSA keys ever used on affected Debian systems for signing or authentication purposes should be considered compromised; the Digital Signature Algorithm relies on a secret random value used during signature generation.
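To make that concrete: a DSA signature (r, s) over a message hash h satisfies s = k⁻¹(h + x·r) mod q, where k is the per-signature nonce. If k is ever predictable or reused, the private key x falls out by simple algebra. A minimal sketch of that recovery, with hypothetical variable names — this is just the textbook relation, not any particular implementation:

# Recover a DSA private key x from one signature (r, s) over hash h,
# given a known or guessed nonce k and the group order q:
#     s = k^-1 * (h + x*r) mod q   =>   x = (s*k - h) * r^-1 mod q
def recover_dsa_private_key(r, s, k, h, q):
    return ((s * k - h) * pow(r, -1, q)) % q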

The fact that Lenstra and Hughes found a different vulnerability rate for DSA keys than RSA keys comes from the radically different generator for the former — PGP keys as generated by GNU Privacy Guard or PGP itself. This is not a thing commonly executed on embedded hardware.

There is a case to be made for the superiority of ECDSA over RSA. Certainly this is the conclusion of various government entities. I don’t think anyone would be surprised if the latter was broken before the former. But this situation, this particular flaw, reflects nothing about any particular asymmetric cipher. Under conditions of broken random number generation, all ciphers are suspect.

Can we dismiss these findings entirely, then?

Let’s talk about that.


WHO MATTERS

The headlines have indeed been grating. “Only 99.8% of the world’s PKI uses secure randomness”, said TechSnap. “EFF: Tens of thousands of websites’ SSL ‘offers effectively no security’”, from Boing Boing. And, of course, from the New York Times:

The operators of large Web sites will need to make changes to ensure the security of their systems, the researchers said.

The potential danger of the flaw is that even though the number of users affected by the flaw may be small, confidence in the security of Web transactions is reduced, the authors said.

“Some people may say that 99.8 percent security is fine,” he added. That still means that approximately as many as two out of every thousand keys would not be secure.

The operators of large web sites will not need to make any changes, as they are excluded from the population that is vulnerable to this flaw. It is not two out of every thousand keys that is insecure. Out of those keys that had any hope of being secure to begin with — those keys that participate in the (flawed, but bear with me) Certificate Authority system — approximately none of these keys are threatened by this attack.

It is not two out of every 1000 already insecure keys that are insecure. It is 1000 of 1000. But that is not changed by the existence of broken RSA moduli.

To be clear, these numbers do not come from Lenstra and Hughes, who have cautiously refused to disclose the population of certificates, nor from the EFF, who have apparently removed those particular certificates from the SSL Observatory (interesting side project: Diff against the local copy). They come from Heninger’s research, which not only discloses who isn’t affected (“Don’t worry, the key for your bank’s web site is probably safe”), but analyzes the population that is:

Firewall product X:
52,055 hosts in our SSL dataset
6,730 share public keys
12,880 have factorable keys

Consumer-grade router Y:
26,952 hosts in our SSL dataset
9,345 share public keys
4,792 have factorable keys

Enterprise remote access solution Z:
1,648 hosts in our SSL dataset
24 share public keys
0 factorable

Finally, we get to why this research matters, and what actions should be taken in response to it. Why are large numbers of devices — of security devices, even! — not even pretending to successfully manage keys? And worse, why are the keys they do generate (even if nobody checks them) so insecure?


FAILURE TO ENROLL

Practically every device on a network is issued an IP address. Most of them also receive DNS names. Sometimes assignment is dynamic via DHCP, and sometimes (particularly on corporate/professionally managed networks) addressing is managed statically through various solutions, but basic addressing and connectivity across devices from multiple vendors is something of a solved problem.

Basic key management, by contrast, is a disaster.

You don’t have to like the Certificate Authority system. I certainly don’t; while I respect the companies involved, they’re pretty clearly working with a flawed technology. (I’ve been talking about this since 2009, see the slides here. Slide 22 talks about the very intermediate certificate issuance that people are now worrying about with respect to Trustwave and GeoTrust) However…

Out of hundreds of millions of servers on the Internet with globally routable IP addresses, only about six million present public keys to be authenticated against. And of those with keys, only a relatively small portion — maybe a million? — are actually enrolled into the global PKI managed by the Certificate Authorities.

The point is not that the CA’s don’t work. The point is that, for the most part, clients don’t care: nothing is checking the validity of device certificates in the first place. Most devices, even security devices, are popping these huge errors every time the user connects to their SSL ports. Because this is legitimate behavior — because there’s no reason to trust the provenance of a public key, right or wrong — users click through.

And why are so many devices, even in the rare instance that they have key material on their administrative interfaces, unenrolled in PKI? Because it requires this chain of cross-organizational interaction to be followed. The manufacturer of the device could generate a key at the factory, or on first boot, but they’re not the customer so they can’t have certificates signed in a customer’s namespace (which might change, anyway). Meanwhile, the customer, who can easily assign IP addresses and DNS names without constantly asking for anyone’s permission, has to integrate with an external entity, extract key material from the manufacturer’s generated content, provide it to that entity, integrate the response back into the device, pay the entity, and do it all over again within a year or three. Repeat for every single device.

Or, they could just not do all that. Much easier to simply not use the encrypted interface (it’s the Internal Network, after all), or to ignore the warning prompts when they come up.

One of the hardest things to convince people of in security is that it’s not enough to make security better. Better doesn’t mean anything; it’s a measure without a metric. No, what’s important is making security cheaper. How do we start actually delivering value, without imagining that customers have an infinite budget to react with? How do we even measure cost?

I can propose at least one metric: How many meetings need to be held, to deploy a particular technology? Money is one thing, but the sheer time required to get things done is a remarkably effective barrier to project completion. And the further away somebody is from a deploying organization — another group, another company, another company that needs to get paid, thus requiring a contract and Purchasing’s involvement — the worse things are.

Devices are just not being enrolled into the CA system. Heninger found 19,610 instances of a single firewall — a security device — and not one of those instances was even pretending to be secure. That’s a success rate not of 99.8%, but of 0.00%.

Whatever good we’ve done on production websites has been met with deafening silence across “firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones”. Believe me, this abject security failure cares not a whit about the relative merits of RSA vs. ECDSA. The key management penalty is just too damn high.

The solution, of course, is DNSSEC. It’s not entirely true that third parties aren’t involved with IP assignment; it’s just that once you’ve got your IP space, you don’t have to keep returning to the well. It’s the same with DNS — while yes, you need to maintain your registration for whatever.com, you don’t need to interact with your registrar every time you add or remove a host (unless you’d like to, anyway). In the future, you’ll register keys in your own DNS just as you register IP addresses. It will not be drama. It will not be a particularly big deal. It will be a simple, straightforward, and relatively inexpensive mechanism by which devices are brought into your network, and then if you so choose, recognized as yours worldwide.

This will be very exciting, and hopefully, quite boring.

It’s not quite so simple, though. DNSSEC offers a very nice model — one that will be completely vulnerable to the findings of Lenstra, Hughes, and Heninger, unless we start dealing with these badly generated private keys.


THE ACTUAL THREAT

DNSSEC will make key management scale. That is not actually a good thing, if the keys it’s spreading are in fact insecure! What we’re finding here is that a lot of small devices are generating keys insecurely. The biggest mistake we can make here is thinking that only the keys vulnerable to this particular attack, were badly generated.

There are many ways to fail at key generation that aren’t nearly as trivially discoverable as a massive group GCD. Heninger found that the firewall vendor (product X above) had over a third of their keys either crackable via GCD or repeated elsewhere (as you’d expect, if both p and q were emitted from identical entropy). One would have to be delusional to expect that the code is perfectly secure two thirds of the time, and only fails in this manner every once in a while. The primary lesson from this incident is that there’s a lot more where the Debian flaw came from. According to Heninger, much of the hardware they saw was actually running OpenSSL, pretty much the gold standard in available libraries for open cryptographic work.

It still failed.

This particular attack is nice, in that it’s independent of the particular vagaries of a device implementation. As long as any two nodes share either p or q, both p and q will be disclosed. But this is a canary in the coal mine — if this went wrong, so too must a lot more have. What I predict is that, when we crack devices open, we’re going to see mechanisms that rarely repeat, but still have an insufficient amount of entropy to “get the job done” — nodes seeding with MAC addresses (far fewer than 48 bits of entropy, when you think about it), the ever classic clock seeds, even fixed “entropic starting points” stored on the file system are going to be found.
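As a purely hypothetical illustration of why those seeds fail (this reflects no specific vendor's code): if a device seeds its generator with, say, the boot time in seconds, an attacker who can bound the key-generation window to a year faces only about 2^25 candidate seeds, each of which can simply be replayed until the regenerated key matches the one observed on the wire.

import random

def candidate_keys(first_boot, last_boot, keybits=1024):
    # Replay every plausible clock seed; random.Random stands in for whatever
    # deterministic PRNG the hypothetical device actually used.
    for seed in range(first_boot, last_boot + 1):
        rng = random.Random(seed)
        yield seed, rng.getrandbits(keybits)   # stand-in for "derive the key material"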

Tens of thousands of random keys being not quite so random is one thing. But that a generic mechanism found this many issues, strongly implies a device specific mechanism will be even more effective. We should have gone looking after the Debian bug. I’m glad Lenstra, Hughes, and Heninger et al. did.


ACTIONABLE INTELLIGENCE

So what to do? Here is my advice.

1) Obviously, don’t panic about any website being vulnerable. This is pretty much a web interface problem.

2) When possible and justifiable, generate private key material outside your embedded devices, and push the keys into them. Have their surrounding certificates signed, if feasible.

This is cryptographic heresy, of course. Private keys should not be moved, as bits are cheap. But, you know, some bits are cheaper than others, and I’ve got a lot more faith now in a PC generating key material at this point than some random box. (No way I would have argued this a few weeks ago. Like I said from the beginning, this is excellent work.) In particular, I’d regenerate keys used to authenticate clients to servers.

3) Audit smartcard keys.

The largest population of key material that Lenstra, Hughes, and Heninger could never have looked at is the public keys contained within the world’s smartcard population. Unless you’re the manufacturer, you have no way to “look inside” to see if the private keys are being reasonably generated. But you do have access to the public keys, and most likely you don’t have so many of them that the slow mechanism (gcd’ing each modulus against each other modulus) is infeasibly slow. Ping me if you’re interested in doing this.
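For a population of a few thousand smartcard moduli, that quadratic version is entirely adequate. A minimal sketch of the audit (all names illustrative):

from math import gcd
from itertools import combinations

def audit_moduli(moduli):
    hits = []
    for n1, n2 in combinations(moduli, 2):
        g = gcd(n1, n2)
        if 1 < g < n1:   # a nontrivial shared prime: both keypairs are compromised
            hits.append((n1, n2, g))
    return hits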

4) Stop buying or building CPUs without hardware random number generators.

Alright, CPU vendors. There’s not many of you, and it’s time for you to stop being so deterministic. By 2014, if your CPU does not support cryptographically strong random number generation, it really doesn’t belong in anything trying to be secure. Since at this point, that’s everything, please start providing a simple instruction that isn’t hidden in some random chipset somewhere for developers and kernels to use to seed proper entropy. It doesn’t even need to be fast.

I hear Intel is adding a Hardware RNG to “Ivy Bridge”. One would also be nice for Atom.

5) Revisit truerand, an entropy source that only requires two desynchronized clocks, possibly integrating it into OpenSSL and libc.

This is the most controversial advice I’ve ever given, and most of you have no idea what I’m talking about. From the code, written originally seventeen years ago by D. P. Mitchell and modified by the brilliant Matt Blaze (who’s probably going to kill me):

* Truerand is a dubious, unproven hack for generating “true” random
* numbers in software. It is at best a good “method of last resort”
* for generating key material in environments where there is no (or
* only an insufficient) source of better-understood randomness. It
* can also be used to augment unreliable randomness sources (such as
* input from a human operator).
*
* The basic idea behind truerand is that between clock “skew” and
* various hard-to-predict OS event arrivals, counting a tight loop
* will yield a little bit (maybe one bit or so) of “good” randomness
* per interval clock tick. This seems to work well in practice even
* on unloaded machines…

* Because the randomness source is not well-understood, I’ve made
* fairly conservative assumptions about how much randomness can be
* extracted in any given interval. Based on a cursory analysis of
* the BSD kernel, there seem to be about 100-200 bits of unexposed
* “state” that changes each time a system call occurs and that affect
* the exact handling of interrupt scheduling, plus a great deal of
* slower-changing but still hard-to-model state based on, e.g., the
* process table, the VM state, etc. There is no proof or guarantee
* that some of this state couldn’t be easily reconstructed, modeled
* or influenced by an attacker, however, so we keep a large margin
* for error. The truerand API assumes only 0.3 bits of entropy per
* interval interrupt, amortized over 24 intervals and whitened with
* SHA.

Disk seek time is often unavailable on embedded hardware, because there simply is no disk. Ambient network traffic rates are often also unavailable because the system is not yet on a production network. And human interval analysis (keystrokes, mouse clicks) may not be around, because there is no human. (Remember, strong entropy is required all the time, not just during key generation.)

What’s also not around on embedded hardware, however, is a hypervisor that’s doling out time slices. And consider: What makes humans an interesting thing to build randomness off of is that we’re operating on a completely different clock and frequency than a computer. We are a “slow oscillator” that chaotically interacts with a fast one.

Well, most every piece of hardware has at least two clocks, and they are not in sync: The CPU clock, and the Real Time Clock. You can add a third, in fact, if you include system memory. All three may be made “slow oscillators” relative to the rest, if only by running in loops. At the extreme scale, you’re going to find slight deviations that can only be explained not through deterministic processes but through the chaotic and limited tolerances of different clocks running at different frequencies and different temperatures.
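A toy sketch of that idea, under the same conservative assumptions the truerand comments describe: count a tight loop against an independent clock, assume well under a bit of real entropy per interval, and whiten the result. This is an illustration of the concept, not a vetted entropy source.

import time, hashlib

def jitter_sample(intervals=24, tick=0.01):
    counts = []
    for _ in range(intervals):
        deadline = time.time() + tick   # the "slow oscillator": wall-clock time
        spins = 0
        while time.time() < deadline:   # the "fast oscillator": a CPU-bound loop
            spins += 1
        counts.append(spins)
    # Whiten the raw counts; treat the output as something to stir into an
    # existing pool, never as a full 256 bits of entropy on its own.
    return hashlib.sha256(repr(counts).encode()).digest()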

On the one hand, I might be wrong. Perhaps this research path only ends in tears. On the other hand, it is the most persistent delusion in all of computing that there’s only one computer inside your computer, rather than a network of vaguely cooperative subsystems — each with their own clocks that are in no way synchronized against one another.

It wouldn’t be the first time asynchronous behavior made a system non-deterministic.

Ultimately, the truerand approach can’t make things worse. One of the very nice things about entropy sources is that you can stir all sorts of total crap in there, and as long as there’s at least 80 or so bits that are actually difficult to predict, you’ve won. I see no reason we shouldn’t investigate this path, and possibly integrate it as another entropy source to OpenSSL and maybe libc as well.

(Why am I not suggesting ‘switch your apps over to /dev/random’? Because if your developers did enough to break what is already enabled by default in OpenSSL, they did so because tests showed the box freezing up, and aren’t going to appreciate some security yutz telling them to ship something that locks on boot.)

6) When doing global sweeps of the net, be sure to validate that a specific population is affected by your attack before including it in the vulnerability set.

7) Start seriously looking into DNSSEC. You are deploying a tremendous number of systems that nobody can authenticate.


CONCLUSION

It’s 2012, and we’re still having problems with Random Number Generators. Crazy, but we’re still suffering (mightily) from SQL injection and that’s a “fixed problem” too. This research, even if accompanied by some flawed analysis, is truly important. There are more dragons down this path, and it’s going to be entertaining to slay them.

Categories: Security

Survey is good. Thesis is strange.

February 14, 2012 21 comments

(UPDATE: I’ve written quite a bit more about this subject, in the wake of Nadia Heninger posting some fantastic data on the subject.)


Recently, Arjen Lenstra and James Hughes et al released some excellent survey work in their “Ron Was Wrong, Whit Was Right” paper regarding the quality of keys exposed on the global Internet. Rather than just assume proper generation of public key material, this survey looked at 11.7 million public keys in as much depth as possible.

The conclusion they reached was that RSA, because it has two secrets (the two primes, p and q), is “significantly riskier” than systems using “single-secrets” like (EC)DSA or ElGamal.

What?

Let me be clear. This is a mostly great paper, with lots of solid data on the state of key material on the Internet. We’re an industry that really operates without data, and with this work, we see (for example) that there’s not an obvious Debian-class time bomb floating around out there.

But there’s just no way we get from this survey work, to the thesis that surrounds it.

On the most basic level, risk in cryptography is utterly dominated, not by cipher selection, but by key management. The study found 12,720 crackable public keys. It also found approximately 2.94 million expired certificates. And while the study didn’t discuss the number of certificates that had no reason to be trusted in the first place (being self signed), it did find 5.4M PGP keys.

It does not matter the strength of your public key if nobody knows to demand it. What the data from this survey says, unambiguously, is that most keys on the Internet today have no provenance that can be trusted, not even through whatever value the CA system affords. Key Management — as Whit Diffie himself has said — is The Hard Problem now for cryptography.

Whether you use RSA or DSA or ECDSA, that differential risk is utterly dwarfed by our problems with key management.

Is all this risk out of scope? Given that public key cryptography is itself a key management technology for symmetric algorithms like AES or 3DES, and that the paper is specifically arguing for one technology over another, it’s hard to argue that. But suppose we were to say key management is orthogonal to cryptographic analysis, like buffer overflows or other implementation flaws.

This is a paper based on survey work, in which the empirically validated existence of an implementation flaw (12,720 crackable public keys) is being used to justify a design bias (don’t use a multi-secret algorithm). The argument is that multi-secret algorithms cause crackable public keys.

You don’t just get to cite implementation flaws when they’re convenient.

More importantly, correlation is not causation. That we see a small number of bad keys at the same time we see RSA does not mean the latter caused the former. Alas, this isn’t even correlation. Out of 6,185,372 X.509 certificates seen by the survey, 6,185,230 of them used RSA. That’s all of 142 certificates that didn’t. We clearly do not have a situation where the choice of RSA vs. otherwise is random. Indeed, cipher selection is driven by network effects, historical patents, and government regulation, not the roll of the dice even mere correlation would require.

Finally, if we were to say that a cipher that created 12,720 broken instances had to be suspect, could we ever touch ECDSA again? Sony’s PlayStation 3 ECDSA fixed-nonce bug hit millions of systems around the world. Thus the fault with this whole “multi-secret” complaint — every cipher has moving parts that can break down. “Nothing is foolproof because fools are so ingenious”, as they say. Even if one accepted the single-sample link between multi-secret and RSA, the purportedly safe “single-secret” systems have also failed in the past, in quite larger numbers when you get down to it.

I don’t mean to be too hard on this paper, which again, has some excellent data and analysis inside. I’ve been strongly advocating for the collection of data in security, as I think we operate more on assumption and rumor than we’d like to admit. The flip side is that we must take care not to fit our data to those assumptions.


Apparently this paper escaped into the popular press.

I think the most important question to be asked is — if 0.2% of the global population of RSA moduli are insecure, what about those attached to unexpired certificates with legitimate issuers? Is that also 0.2%, or less?

(There’s no security differential if there was no security to begin with.)

Categories: Security

How The WPS Bug Came To Be, And How Ugly It Actually Is

January 26, 2012 6 comments

FINDING: WPS was designed to be secure against a malicious access point. It isn’t: A malicious AP recovers the entire PIN after only two protocol exchanges. This server to client attack is substantially easier than the client to server attack, with the important caveat that the latter does not require a client to be “lured”.

SUMMARY: Stefan Viehböck’s WPS vuln was one of the bigger flaws found in all of 2011, and should be understood as an attempt to improve the usability of a security technology in the face of severe and ongoing deficiencies in our ability to do key management (something we’re fixing with DNSSEC). WPS seems to have started as a mechanism to integrate UI-less consumer electronics into the Wi-Fi fold; it grew to be a primary mechanism for PCs to link up. WPS attempted to use an online “proof of possession” process to mutually authenticate and securely configure clients for APs, including the provisioning of WEP/WPA/WPA2 secrets. In what appears to have been a mechanism to prevent malicious “evil twin” APs from extracting PINs from rerouted clients, the already small PIN was split in half. Stefan noted that this made brute force efforts much easier, as the first half could be guessed independently of the second half. While rate limiting at the AP server is an obvious defense, it turns out to be the only defense, as both not continuing the protocol, and providing a fake message, are trivially detectable signs that the half-PIN was guessed incorrectly. Meanwhile, providing the legitimate response even in the face of an incorrect PIN guess actually leaks enough data for the attacker to brute force the PIN. I further note that, even for the purposes of mutual authentication, the split PIN is problematic. Servers can immediately offline brute force the first half of the PIN from the M4 message, and after resetting the protocol, they can generate the appropriate messages to allow them to offline brute force the second half of the PIN. To conclude, I discuss the vagaries of just how much needs to be patched, and thus how long we need to wait, before this vulnerability can actually be addressed server side. Finally, I note that we can directly trace the existence of this vulnerability to the unclear patent state of Secure Remote Passwords (SRP), which really is the canonical way to do Password Authenticated Key Exchange (PAKE).

Note: I really should be writing about DNSSEC right now. What’s interesting is, in a very weird way, I actually am.


SECTIONS:
Introduction
On Usability
Attacks, Online and Off
A Simple Plan
Rate Limiting: The Only Option?
It Gets (Unexpectedly) Worse
On Patching
Right Tech, Wrong Law


INTRODUCTION

Security is not usually a field with good news. Generally, all our discussions center around the latest compromise, the newest vulnerabilities, or even the old bugs that stubbornly refuse to disappear despite our “best practices”. I’m somewhat strange, in that I’m the ornery optimist that’s been convinced we can, must, and will (three different things) fix this broken and critical system we call the Internet.

That being said — fixing the Net isn’t going to be easy, and it’s important people understand why. And so, rather than jump right into DNSSEC, I’m going to write now about what it looks like when even an optimist gets backed into a corner.

Let’s talk about what I think has to be considered one of the biggest finds of 2011, if not the biggest: Stefan Viehböck’s discovery of fundamental weaknesses in Wi-Fi Protected Setup, or WPS. Largest number of affected servers, largest number of affected clients, out of some of the most battle-hardened engineers in all of technology. Also, hardest-to-patch bug in recent memory. Maybe ever. (Craig Heffner also found the bug, as did some uncountable number of unnamed parties over the years. None of us are likely to be the first to find a bug…we just have the choice to be the last. Craig did release the nicely polished Reaver for us though, which was greatly appreciated, and so I’m happy to cite his independent discovery.)

It wasn’t supposed to be this way. If any industry group had suffered the slings and arrows of the security community, it was the Wi-Fi Alliance. WEP was essentially a live-fire testing range for all the possible things that could go wrong when using RC4 as a stream cipher. It took a while, but we eventually ended up with the fairly reasonable WPA2 standard. Given either a password, or a certificate, WPA2 grants you a fairly solid encrypted link to a network.

That’s quite the given — for right there, staring us in the face once more, was the key management problem.

(Yes, the very same problem we ultimately require DNSSEC to solve. What follows is what happens when engineers don’t have the right tools to fix something, but try anyway. Please forgive me for not using the word passphrase; when I actually see such phrases in the wild we can talk.)

ON USABILITY

So it should be an obvious statement, but a system that is not usable, is not actually used. But wireless encryption in general, and WPA2 in particular, actually has been achieving a remarkable level of adoption. I’ve already written about the deployment numbers; suffice it to say, this is one of the brighter spots in real world cryptography. (The only things larger are the telnet->ssh switch, and maybe the widespread adoption of SSL by Top 100 sites in the wake of Firesheep and state level packet chicanery.)

So, if this WPA2 technology is actually being used, something had to be making it usable. That something, it turned out, was an entirely separate configuration layer: WPS, for WiFi Protected Setup. WPS was pretty clearly originally designed as a mechanism for consumer electronic devices to link to access points. This makes a lot of sense — there’s not much in the way of user interface on these devices, but there’s always room to paste a sticker. Not to mention that the alternative, Bluetooth, is fairly complex and heavy, particularly if all you want is just IP to a box.

However, a good quarter to half of all the access points exposing encrypted access today expose WPS not for random consumer devices, but for PCs themselves. After all, home users rather obviously didn’t have an Enterprise Certificate Authority around to issue them a certificate, and those passwords that WPA2 wanted are (seriously, legitimately, empirically) hard to remember.

Not to mention, how does the user deal with WPA2 passwords during “onboarding”, meaning they just pulled the device out of the box?

WPS cleaned all this up. In its most common deployment concept, “Label”, it envisioned a second password. This one would be unique to the device and stickered to the back of it. A special exchange, independent of WPA2 or WPA or even WEP, would execute. This exchange would not only authenticate the client to the server, but also the server to the client. By this mechanism, the designers of WPS would defend against so-called “evil twin” attacks, in which a malicious access point present during initial setup would pair with the client instead of the real AP. The password would be simplified — a PIN, 8 digits, with 1 digit being a check so the user could be quickly informed they’d typed it wrong.

ATTACKS, ONLINE AND OFF

How could this possibly be safe? Well, one of the finer points of password security is that there is a difference between “online” and “offline” attackers. An online attacker has to interact with the defender on each password attempt — say, entering a password into a web form. Even if a server doesn’t lock you out after a certain number of failures (itself a mechanism of attack) only so many password attempts per second can be reasonably attempted. Such limits do not apply to the offline attacker, who perhaps is staring at a website’s backend database filled with hashes of many users’ passwords. This attacker can attempt as many passwords as he has computational resources to evaluate.

So there does exist a game, in which the defender forces the attacker to interact with him every time a password is attempted. This game can be made quite slow, to the point where as long as the password itself isn’t something obvious (read: chosen by the user), the attacker’s not getting in anytime soon. That was the path WPS was on, and this is what it looked like, as per the Microsoft document that Stefan Viehböck borrowed from in his WPS paper:

There’s a lot going on here, because there’s a lot of different attacks this protocol is trying to defend against. It wants to suppress replay, it wants to mutually authenticate, it wants to create an encrypted channel between the two parties, and it really wants to be invulnerable to an offline attack (remember, there’s only 10^7, or around 23 bits of entropy in the seven digit password). All of these desires combine to make a spec that hides its fatal, and more importantly, unfixable weakness.

A SIMPLE PLAN

Let’s simplify. Assume for a moment that there is in fact a secure channel between the client and server. We’re also going to change the word “Registrar” to “Client” and “Enrollee” to “Server”. (Yes. This is backwards. I blame some EE somewhere.) The protocol now becomes something vaguely like:

1. Server -> Client:  Hash(Secret1 || PIN)         
2. Client -> Server:  Hash(Secret2 || PIN), Secret2
3. Server -> Client:  Secret1

The right mental image to have is one of “lockboxes”: Neither party wants to give the other the PIN, so they give each other hashes of the PIN combined with long secrets. Then they give each other the long secrets. Now, even without sending the PINs, they can each combine the secrets they’ve received with the secret PINs they already know, see that the hashes computed are equal to the hashes sent, and thus know that the other has knowledge of the PIN. The transmission of Secrets “unlocks” the ability for the Hash to prove possession of the PIN.

Mutual authentication achieved, great job everyone.

Not quite. What Stefan figured out is that WPS doesn’t actually operate across the entire PIN directly; instead, it splits things up more like:

1. Server -> Client:  Hash(Secret1 || first_half_of_PIN)         
2. Client -> Server:  Hash(Secret2 || first_half_of_PIN), Secret2
3. Server -> Client:  Secret1

First_half_of_PIN is only four digits, ranging from 0000 to 9999. A client sending Hash(Secret2 || first_half_of_PIN) may actually have no idea what the real first half is. It might just be trying them all…but whenever they get it right, the Server’s going to send the Client Secret1! It’s a PIN oracle! Whatever can the server do?

RATE LIMITING: THE ONLY OPTION?

The obvious answer is that the server could rate limit the client. That was the recommended path, but most APs didn’t implement that. (Well, didn’t implement it intentionally. Apparently it’s really easy to knock over wireless access points by just flooding this WPS endpoint. It’s amazing the degree to which global Internet connectivity depends on code sold at $40/box.)

What’s not obvious is the degree to which we really have no other degrees of freedom but rate limiting. My initial assumption, upon hearing of this bug, was that we could “fake” messages — that it would be possible to leave an attacker blind to whether they’d guessed half the PIN.

Thanks to mutual authentication, no such freedom exists. Here’s the corner we’re in:

If we do as the protocol specifies — refuse to send Secret1, because the client’s brute forcing us — then the “client” can differentiate a correct guess of the first four digits from an incorrect guess. That is the basic vulnerability.

If we lie — send, say, Secret_fake — ah, now the client simply compares Hash(Secret1 || first_half_of_PIN) from the first message with this new Hash(Secret_fake || first_half_of_PIN) and it’s clear the response is a lie and the first_half_of_PIN was incorrectly guessed.

And finally, if we tell the truth — if we send the client the real Secret1, despite their incorrect guess of first_half_of_PIN — then look what he’s sitting on: Hash(Secret1 || first_half_of_PIN) and Secret1. Now he can run an offline attack against first_half_of_PIN.

It takes very little time for a computer to run through 10,000 possibilities.
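Here's roughly what that search looks like. The real exchange commits with HMAC-SHA-256 over several protocol fields; this sketch collapses all of that into the simplified Hash(Secret || half_PIN) form used above, so it is illustrative only:

import hashlib

def crack_half_pin(commitment, secret1):
    # commitment is Hash(Secret1 || first_half_of_PIN) from M3;
    # secret1 is the real Secret1 the server "truthfully" revealed in M5.
    for guess in range(10000):
        half = f"{guess:04d}".encode()
        if hashlib.sha256(secret1 + half).digest() == commitment:
            return half.decode()
    return None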

So. Can’t shut up, can’t lie, and absolutely cannot tell the truth. Ouch.

IT GETS (UNEXPECTEDLY) WORSE

Aside from rate limiting, can we fix the protocol? It’s surprisingly difficult. Let’s remove a bit more of the simplification, and add second_half_of_PIN:

[M3] 1. Server -> Client:  Hash(Secret1 || first_half_of_PIN), 
                           Hash(Secret3 || second_half_of_PIN)         
[M4] 2. Client -> Server:  Hash(Secret2 || first_half_of_PIN),
                           Hash(Secret4 || second_half_of_PIN), Secret2
[M5] 3. Server -> Client:  Secret1
[M6] 4. Client -> Server:  Secret4
[M7] 5. Server -> Client:  Secret3

So, in this model, the server provides information about the entire PIN, but with Secret1 and Secret3 (really, E-S1 and E-S2) being 128 random bits, the PIN cannot be brute forced. Then the client provides the same mishmash of information about the PIN, now with Secret2 (R-S1) and Secret4 (R-S2). But immediately the client “unlocks” the first half of the PIN to the server, who replies by “unlocking” that same half. Only when the server “unlocks” the first half to the client, does the client “unlock” the second half of the PIN to the server and vice versa.

It’s always a bit tricky to divine what people were thinking when they made a protocol. Believe me, DNSSEC is way more useful than even its designers intended. But it’s pretty clear the designers were trying to make sure that a malicious server never got its hands on the full PIN from the client. That seems to be the thinking here — Secret4, unlocking second_half_of_PIN from the client to the server, is only sent if the server actually knew first_half_of_PIN back in that first (well, really M3) message.

For such a damaging flaw, we sure didn’t get much in return. Problem is, a malicious “evil twin” server is basically handed first_half_of_PIN right there in the second, M4 message. He’s got Hash(Secret2||first_half_of_PIN) and Secret2. That’s 10,000 computations away from first_half_of_PIN.

Technically, it’s too late! He can’t go back in time and fix M3 to now contain the correct first_half_of_PIN. But, in an error usually only seen among economists and freshman psychology students, there is an assumption that the protocol is “one-shot” — that the server can’t just tell the client that something went wrong, and that the client should then start from scratch.

So, in the “evil twin” attack, the attacker resets the session after brute forcing the correct first_half_of_PIN. This allows him to send a correct Hash(Secret1||first_half_of_PIN) in the next iteration of M3, which causes the Secret1 in M5 to be accepted, which causes Secret4 in M6 to be sent to the server thus unlocking second_half_of_PIN.

Again, it’s possible I’m missing the “real reason” for splitting the PIN. But if this was it, not only did the split make it much easier for a malicious client to attack an AP server, but it also made it absolutely trivial for a malicious AP server to attack a client. There are rumors of this pattern elsewhere, and I’m now rather concerned.

(To be entirely honest, I was not intending to find another WPS attack. I was just trying to document Stefan’s work.) Ouch.

ON PATCHING

Fixing WPS is going to be a mess. There’s a pile of devices out there that simply expect to be able to execute the above protocol in order to discover their wireless configuration, including encryption keys. Even when a more advanced protocol becomes available, preventing the downgrade attack in which an attacker simply claims to be an unpatched client is brutal. Make no mistake, the patch treadmill developed over the last decade is impressive, and hundreds of millions of devices receive updates on a regular basis.

There’s billions that don’t. Those billions include $40 access points, mobile phones, even large numbers of desktops and laptops. When do you get to turn off an insecure but critical function required for basic connectivity?

Good question. Hard, hard answer. Aside from rate limiting defenses, this flaw is going to be around, and enabled by default, for quite some time. Years, at least.

To some extent, that’s the reality of security. It’s the long game, or none at all. Accepting this is a ray of hope. We can talk about problems that will take years to solve, because we know that’s the only scale in which change can occur. The Wi-Fi Alliance deserves credit for playing this game on the long scale. This is a lost battle, but at least this is a group of engineers that have been here before.

RIGHT TECH, WRONG LAW

Now, what should have been built? Once again, we should have used SRP, and didn’t. SRP is unambiguously the right way to solve the PAKE (Password Authenticated Key Exchange) problem, quite a bit more even than my gratuitous (if entertaining) work with Phidelius, which somewhat suffers from offline vulnerability. Our inability to migrate to SRP is usually ascribed to some unclear status relating to patents; that our legal system has been unable to declare SRP covered or not pretty much led to the obviously correct technology not being used.

Somebody should start lobbying or something.

Categories: Security

Salt The Fries: Some Notes On Password Complexity

January 5, 2012 12 comments

Just a small post, because Rob Graham asked.

The question is: What is the differential complexity increase offered by salting hashes in a password database?

The short answer, in bits, is: the base-2 log of the number of accounts the attacker is interested in cracking.

Rob wanted me to explain this in a bit more depth, and so I’m happy to. In theory, the basic property of a cryptographic hash is that it’s infeasible to invert the hash back to the bitstream that forged it. This is used in password stores by taking the plaintext password provided by the user, hashing it, and comparing the hash from the database to the hash of the user provided plaintext. If they match, the user is authenticated. Even if the database is lost, the attacker won’t be able to invert the database back to the passwords, and users are protected.

There are, of course, two attacks against this model. The first, surprisingly ignored, is that the plaintext password still passes through the web app front end before it is hash-compared against the value in the database. An attacker with sufficient access to dump the database often has sufficient access to get code running on either the server or client front ends (and you thought cross-site scripting was uninteresting!). Granted, this type of attack reduces exposed users from “everyone” to “everyone who logs in while the site is compromised”. That’d probably be more of an improvement, if we believed attackers were only interested in instantaneous smash-and-grabs and were unable to, or afraid of, persisting their threats for long periods of time.

Heh.

The more traditional threat against password hashes is offline brute forcing. While hashes can’t be inverted, the advent of GPUs means they can be tested at the rate of hundreds of millions to billions per second. So, instead of solving the hard problem of figuring out what input created the given output, just try all the inputs until one happens to equal your desire.

How long will this take? Suppose we can test one billion hashes per second. That’s approximately 2^30, so we can say we can handle 30 bits of entropy per second. If we’re testing eight character passwords with the character set A through Z, a through z, and 0 through 9, that’s (26+26+10)^8, or 218,340,105,584,896 attempts. That’s roughly 2^48, or 48 bits of entropy. We can handle only 30 per second, and 48 – 30 == 18, so it will take approximately 2^18 seconds to run through the entire hash space — that’s 262,144 seconds, or 72 hours.

(Yes, I’m rounding extensively. The goal is to have a general feel for complexity, not to get too focused on the precise arithmetic.)
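For anyone who wants to redo the arithmetic, the whole estimate fits in a few lines (same rounding caveats as above):

from math import log2

keyspace = (26 + 26 + 10) ** 8    # eight characters of [A-Za-z0-9]
rate = 10 ** 9                    # ~2^30 guesses per second

print(round(log2(keyspace), 1))   # ~47.6 bits, rounded up to 2^48 in the text
print(keyspace / rate / 3600)     # ~61 hours; call it 72 once you round up to 2^48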

There are two important oversimplifications going on here. First, passwords selected by humans, even those that use that all-important punctuation, numbers, and other forms of l33tsp33k, do not utilize these hardening factors particularly randomly. XKCD has made fun of this in the past. One of the real consequences of all these password drops is that we’re getting hard data on the types of password patterns people actually use. So groups like TeamHashCat are selecting their billion-hashes-per-second not in random order, but increasingly close to the likelihood that a given input will actually be somebody’s password.

So how does salting play into this?

There’s a much older optimization for password cracking than selecting passwords based on complex models of what people use. An unsalted hash is the direct output of hash(password). If two people use the same password, they will have the same password hash. And thus, the work effort to crack passwords amortizes — the same operation to find out if Alice’s password is “abcd1234” also discloses whether Bob’s password is “abcd1234”. Salting changes this, by adding some sort of randomizer (possibly just “Alice” or “Bob”) to the hash function. That way, Alice’s password hash is the hash for “Aliceabcd1234”, while Bob’s password hash is the hash for “Bobabcd1234”. To crack both Alice and Bob’s passwords requires twice as much work.

That’s one extra bit.
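In code, the difference is nothing more than what goes into the hash function (SHA-256 and plain concatenation stand in here for whatever a real system actually uses):

import hashlib

def unsalted(password):
    return hashlib.sha256(password.encode()).hexdigest()

def salted(salt, password):
    return hashlib.sha256((salt + password).encode()).hexdigest()

# Alice and Bob share a password, so the unsalted hashes collide...
assert unsalted("abcd1234") == unsalted("abcd1234")
# ...but the salted ones differ, so each guess must be re-hashed per account.
assert salted("Alice", "abcd1234") != salted("Bob", "abcd1234")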

And that then is how you back into the complexity increase of salting on password cracking. When you’re cracking an unsalted database of passwords, the more accounts you have, the more “bites at the apple” there are — each attempt may match any one of n hashes. Salting just removes the optimization. If you want to crack 256 accounts instead of 1, you need to work 256 times harder — that’s 2^8, or 8 bits. If you want to crack 1M accounts, you need to work a million times harder — that’s 2^20, or 20 bits.

Note however that the question is not how many accounts there are in total, but how many accounts you’re interested in. If only 256 of the 1M accounts are interesting, salting only wins you 8 bits.

A bigger deal, I think, is memory hardness. Password hash functions can be, and often are, run in a loop, such that the hash function is actually run thousands of times in a serial fashion. This does win ~12 bits of security. However, the loops are still viable within the practical means by which attackers are getting billions of hashing operations per second, i.e. GPUs. GPUs have an architectural limit, however — they can be run out of memory. Requiring each hash process to spend 256MB+ of RAM may practically eliminate GPUs from the hash-cracking game entirely, even if the number of interesting accounts is just one.

That’s not game-over for all attackers — million node botnets really do have a couple gigs of RAM available for each cracking process, after all — but it does reduce the population of attackers who get to play. That’s always nice. The best path for memory-hard hashes is Colin Percival’s scrypt, which I’ve been playing with lately inside of my cute (if ill-advised) Phidelius project. scrypt has an odd operational limitation (no libscrypt) but hopefully that can be resolved soon, and we can further refine best practices in this space.
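This post predates it, but for reference, recent Python ships scrypt in the standard library (hashlib.scrypt, Python 3.6+ built against a suitable OpenSSL). A minimal usage sketch; the parameters below are illustrative, with n=2**18 and r=8 costing roughly 128·n·r bytes, i.e. about 256 MB per hash attempt:

import hashlib, os

salt = os.urandom(16)
key = hashlib.scrypt(b"correct horse battery staple", salt=salt,
                     n=2**18, r=8, p=1,
                     maxmem=400 * 1024 * 1024,   # allow the ~256 MB working set
                     dklen=32)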


Edit: Ah, some of my commenters have mentioned rainbow tables. These actually do change things slightly, though from an interesting direction. Unsalted password hashes are not merely shared across all accounts in a given database; they’re also shared across all accounts for that entire implementation, at least for each account someone is trying to crack. That means it’s possible to both pregenerate hashes (there’s no need to know a salt in advance, because there isn’t one) and to calculate more hashes for more time (since the work effort can be amortized not just cross-account, but cross-database).

It’s become a bit of a thing to just take password hashes and put them into Google. If they’re MD5, and they’re unsalted, a surprising amount of the time you do get an answer. Moral of the story: salting does keep you out of a hash dataset that may be 24+ bits (16M+) “cheaper” (less expensive per account) to attack than just your own.

Peter Maxwell also noted (read the comments) that it’s important to, say, not have a 16 bit salt. Really? Do people do that?

Categories: Security

Phidelius: Constructing Asymmetric Keypairs From Mere Passwords For Fun and PAKE

January 3, 2012 1 comment

TL;DR: New toy.

$ phidelius
Phidelius 1.0: Entropy Spoofing Engine
Author: Dan Kaminsky / dan@doxpara.com
Description: This code replaces most sources of an application's entropy with
a pseudorandom stream seeded from a password, a file, or a generated
sequence. This causes most cryptographic key generators (ssh-keygen,
openssl, etc) to emit apparently strong keys with a presumably memorable
backdoor. For many protocols this creates PAKE (Password Authenticated
Key Exchange) semantics without the server updating any code or even
being aware of the password stored client side. However, the cost of
blinding the server is increased exposure to offline brute force
attacks by a MITM, a risk only partially mitigatable by time/memory
hard crack resistance.
Example: phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f id_dsa"


Passwords are a problem. They’re constantly being lost, forgotten, and stolen. Something like 50% of compromises are associated with their loss. Somewhere along the way, websites started thinking l33tsp33k was a security technology to be enforced upon users.

So what we’re about to talk about in this post — and, in fact, what’s in the code I’m about to finally drop — is by no means a good idea. It may perhaps be an interesting idea, however.

So! First discussed in my Black Ops of 2011 talk, I’m finally releasing Phidelius 1.0. Phidelius allows a client armed with nothing but a password to generate predictable RSA/DSA/ECC keypairs that can then be used, unmodified, against real world applications such as SSH, SSL, IPsec, PGP, and even BitCoin (though I haven’t quite figured out that particular invocation yet — it’s pretty cool, you could send money to the bearer of a photograph).

Now, why would you do this? There’s been a longstanding question as to how a server can support passwords without necessarily learning those passwords. The standard exhortations against storing unhashed passwords mean nothing against this problem; even if the server is storing hashed values, it’s still receiving the passwords in plain text. There are hash-based challenge response protocols (“please give me the password hashed with this random value”) but they require the server to store the password (or a password equivalent) so it can recognize valid responses.

There’s been a third class of solutions, belonging to the PAKE (Password Authenticated Key Exchange) family. These solutions generally come down to “password authenticated Diffie-Hellman”. For various reasons, not least of which have been patents, these approaches haven’t gotten far. Phidelius basically “solves” PAKE, by totally cheating. Rather than authenticating an otherwise random exchange, the keypair itself is nothing but an encoding of the password. The server doesn’t even have to know. (If you are a dev, your ears just perked up. Solutions that require only one side to patch are infinitely easier to deploy.) How can this work?

Well, what was the one bug that affected asymmetric key generation universally, corrupting RSA, DSA, and ECC equally? Of course, the Debian Random Number Generator flaw. See, asymmetric keys of all sorts are all essentially built the same way. A pile of bits is drawn from some ostensibly random source (/dev/random, CryptGenRandom, etc). These bits are then accepted, massaged, or rejected, until they meet the required form for a keypair of the appropriate algorithm.

Under the Debian RNG bug, only a rather small subset of bit patterns could be drawn, so the same keys would be generated over and over. Phidelius just makes predictable key generation a feature: We know how to turn a password into a stream of random numbers. We call that “seeding a pseudorandom number generator”. So, we just make the output of a PRNG the input to /dev/random, /dev/urandom, a couple of OpenSSL functions, and some other miscellaneous noise, and — poof — same password in, same keypair out, as so:


# phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f $RANDOM -N ''"
...The key fingerprint is:
c4:77:27:85:17:a2:83:fb:46:6a:06:97:f8:48:36:06
# phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f $RANDOM -N ''"
The key fingerprint is:
c4:77:27:85:17:a2:83:fb:46:6a:06:97:f8:48:36:06
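
For the curious, here’s a minimal sketch of the core trick. It is not Phidelius itself, which hooks /dev/random and OpenSSL underneath unmodified binaries, but the same idea in miniature: stretch the password with scrypt, then expand it into an endless “entropy” stream for a key generator to consume. All names and parameters here are illustrative.

import hashlib

def password_entropy(password, salt=b"phidelius-demo"):
    # scrypt makes each password guess expensive before any bits come out.
    seed = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    counter = 0
    while True:
        # Expand the stretched seed into an unbounded pseudorandom stream.
        yield from hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1

# Same password in, same "random" bytes out -- so a key generator fed this
# stream instead of /dev/urandom emits the same keypair every time.
stream = password_entropy("ax-op-nm-qw-yi")
print(bytes(next(stream) for _ in range(16)).hex())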

Now, as I mentioned earlier, it’s not necessarily a particularly good idea to use this technique, and not just because passwords themselves are inherently messy. The biggest weakness of the Phidelius approach is that the public keys it generates are also backdoored with the password. The problem here is subtle. It’s not interesting that the server can brute force the public key back to the password. This is *always* the case, even in properly beautiful protocols like SRP. (In fact, Phidelius uses the PRNG from scrypt, a time and memory hard algorithm, to make brute forcing not only hard, but probably harder than any traditional PAKE algorithm can achieve.)

What’s interesting is that public keys are supposed to be public. So, most protocols expose them at least to MITM’s, and sometimes even publish the things. Yes, there’s quite a bit of effort put into suppressing brute force attacks, but this is a scheme that really threatens you with them.

It also doesn’t help that salting Phidelius-derived keys is pretty tricky. The fact that the exact same password creates the exact same keypair isn’t necessarily ideal. The best trick I’ve found is to embed parameters for the private key in the larger context afforded to most public keys, i.e. the certificate. You’d retrieve the public key, mix in the randomized salt and some useful parameters (like, say, options for scrypt), and reconstruct the private key. This requires quite a bit of infrastructure though.

And finally, of course, this is a rather fragile solution. The precise bit-for-bit use of entropy isn’t standardized; all bits are supposed to be equal. So you’re not just generating a key with ssh-keygen, you’re generating from ssh-keygen 5.3, possibly as emitted from a particular compiler. A proper implementation would have to standardize all this.

So, there are caveats. But this is interesting, and I look forward to seeing how people play with it.

(Oh, why Phidelius? It’s a spell from Harry Potter, which properly understood is the story of the epic consequences of losing one’s password.)


Edit: Forgot! Tom Ritter saw my talk at Black Hat, and posted that he’d actually worked out how to do this sort of password-to-keypair construction for PGP keys. It’s actually a bit tricky. You can see his code here.

Categories: Security

Crypto Interrupted: Data On The WPS WiFi Break

January 2, 2012 5 comments

TL;DR:  Went wardriving around Berlin. 26.3% of APs with crypto enabled exposed methods that imply vulnerability to the WPS design flaw.  A conservative extrapolation from WIGLE data suggests at least 4.1M vulnerable hosts.  Yikes.

=======

Fixing things is hard.  Fixing things in security, even more so.  (There’s a reasonable argument that the interlocking dependencies of security create a level of Hard that is mathematically representable.) And yet, WiFi was quietly but noticeably one of our industry’s better achievements.  Sure, WiFi’s original encryption algorithm — WEP — was essentially a demonstration of what not to do when deploying cryptographic primitives.  And indeed, due to various usability issues, it used to be rare to see encryption deployed at all.  But, over the last ten years, in a conversion we’ve notably not witnessed with SSL (at least not at nearly the same scale)…well, take a look:

In January 2002, almost 60% of AP’s seen worldwide by the WIGLE wireless mapping project had encryption disabled.  Ten years later, only 20% remained, and while WEP isn’t gone, it’s on track to be replaced by WPA/WPA2. You can explore the raw data here, and while there’s certainly room to argue about some of the numbers, it’s difficult to say things haven’t gotten better.

Which is what makes this WPS flaw found by Stefan Viehböck (and independently discovered by Craig Heffner) quite so tragic.

I’m not going to lie.  Trying to reverse engineer just how this bug came about is being something of a challenge.  There’s a desire to compete with Bluetooth for device enrollment, there’s some defense against evil twin / fake APs, there’s a historical similarity to the old LANMAN bug…I haven’t really figured it out yet.  At the end of the day though, the problem is that an attacker can determine that they’ve guessed the first four digits before working on the next three (and the last is a checksum), meaning 11000 queries (5500 on average) is enough to guess a PIN.
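
The arithmetic behind that, for anyone who wants to check it:

# 8-digit WPS PIN: the last digit is a checksum, so 10^7 real possibilities.
full_space = 10**7
# The protocol confirms the first four digits separately from the next three.
split_space = 10**4 + 10**3
print(full_space, split_space, split_space // 2)   # 10000000 11000 5500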

So, that’s the issue (and, likely, the fix — prevent the attacker from knowing they’re halfway there, by blinding the midpoint error).  But just like the telnet encryption bug, I think it’s important we get a handle on just how widespread this vulnerability is.  The WIGLE data doesn’t actually declare whether a given encryption-supporting node also supports WPS.  So…let’s go outside and find out.

Wardriving?  In 2012?  It’s more likely than you think.

Over an approximately 4 hour period, 3,738 AP’s were found in the Berlin area.  (I was in Berlin for CCC.)  2,758 of these AP’s had encryption enabled — 73%, a bit higher than WIGLE average.  535 of the 2,758 (19.39%) had the “Label” WPS method enabled, along with crypto.  These are almost certainly vulnerable.  Another 191 (6.9%) had the “Display” WPS method enabled.  We believe these to be vulnerable as well.  Finally, 611/2758 — 22% — had WPS enabled, with no methods declared.  We have no data on whether these devices are exposed.

As things stand, we see 26.3% of AP’s with encryption enabled, likely to be exposing the WPS vulnerability.

What does this mean, in terms of absolute numbers?  WIGLE’s seen about 26M networks with crypto enabled.  However, they don’t age out AP’s, meaning the fact that a network hasn’t been seen since 2003 doesn’t mean it isn’t represented in the above numbers.  Still, 60% of the networks they’ve ever seen, were first seen in the last two years.  If we just take 60% of the 26M networks — 15.6M — and we entirely ignore all the networks WIGLE was unable to identify the crypto for, and all the networks that declare WPS but don’t declare a type…

We end up with 4.1M networks vulnerable to the WPS design vulnerability around the globe.  That’s the conservative number, and it’s a pretty big deal.
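
For anyone who wants to reproduce that extrapolation:

# Conservative extrapolation from the numbers above.
crypto_networks = 26_000_000     # networks WIGLE has seen with crypto enabled
recent_fraction = 0.60           # fraction first seen in the last two years
vulnerable_rate = 0.263          # Label + Display methods, from the Berlin sample
print(round(crypto_networks * recent_fraction * vulnerable_rate))   # ~4,100,000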

Note, of course, that whether a node is vulnerable is not a random function.  Like the UPNP issues I discussed in my 2011 Black Hat talk, this is an issue that may show up on every access point distributed by a particular ISP.  Given that there are entire countries with a single Internet provider, it is likely there are entire regions where every access point is now wide open.

Curious about your own networks?  Run:

iw dev wlan0 scan | grep "Config methods"

If you see anything with “Label” or “Display” you’ve got a problem.

===
Small update:

There’s code in one WPA supplicant that implies that perhaps APs that do not expose a method still support PINs:


/*
* In theory, this could also verify that attr.sel_reg_config_methods
* includes WPS_CONFIG_LABEL, WPS_CONFIG_DISPLAY, or WPS_CONFIG_KEYPAD,
* but some deployed AP implementations do not set Selected Registrar
* Config Methods attribute properly, so it is safer to just use
* Device Password ID here.
*/

I’m going to stick with the conservative 26.3%, rather than the larger 48.1% estimate, until someone actually confirms an effective attack against such an AP.

Categories: Security

From 0day to 0data: TelnetD

December 29, 2011 Leave a comment

Recently, it was found that BSD-derived Telnet implementations had a fairly straightforward vulnerability in their encryption handler. (Also, it was found that there was an encryption handler.) Telnet was the de facto standard protocol for remote administration of everything but Windows systems, so there’s been some curiosity in just how nasty this bug is operationally.

So, with the help of Fyodor and the NMAP scripting team, I set out to collect some data on just how widespread this bug is. First, I set out to find all hosts listening on tcp/23. Then, with Fyodor’s code, I checked for encryption support. Presence of support does not necessarily imply vulnerability, but absence does imply invulnerability.

The scanner’s pretty rough, and I’m doing some sampling here, but here’s the bottom line data:

Out of 156,113 confirmed telnet servers randomly distributed across the Internet, only 429 exposed encryption support. By far my largest source of noise is the estimate of just how many telnet servers (as opposed to just TCP listeners on 23, or transient services) exist across the entire Internet. My conservative numbers place that count at around 7.8M. This yields an estimate of ~21,546 potentially vulnerable servers (correcting my earlier figure of ~13,785; 15,600 to 23,500 at 95% confidence — thanks, David Horn!).
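
The back-of-the-envelope behind that estimate, for anyone checking the math (the small difference from the figure above presumably comes from rounding in the inputs):

sampled = 156113                 # confirmed telnet servers in the sample
with_encryption = 429            # of those, exposing encryption support
telnetd_population = 7_800_000   # conservative estimate of telnet servers Internet-wide
print(round(with_encryption / sampled * telnetd_population))   # ~21,400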

Patched servers will still look vulnerable unless they remove encryption support entirely, so this isn’t something we can watch get fixed (well, unless we’re willing to crash servers, which I’m not, even if it’d be safe because these things are launched from inetd).

Here is a command line that will scan your networks. You may need nmap from svn.


/usr/local/bin/nmap -p23 -PS23 --script telnet-encryption -T5 -oA telnet_logs -iL list_of_ips -v

So, great bug, just almost certainly not widely distributed. I’ll document the analysis process when I’m not at con.

Categories: Security

Exploring ReU: Rewriting URLs For Fun And XSRF/HTTPS

October 20, 2011 1 comment

So I’ve been musing lately about a scheme I call ReU. It would allow client side rewriting of URLs, to make the addition of XSRF tokens much easier. As a side effect, it should also make it much easier to upgrade the links on a site from http to https. Does it work? Is it a good idea? I’m not at all sure. But I think it’s worth talking about:

===

There is a consistent theme to both XSS and XSRF attacks: XS. The fact that foreign sites can access local resources, with all the credentials of the user, is a fundamental design issue. Random web sites on the Internet should not be able to retrieve Page 3 of 5 of a shopping cart in the same way they can retrieve this blog entry! The most common mechanism recommended to prevent this sort of cross-site chicanery is to use a XSRF token, like so:

http://foo.com/bar?session_id=12345

The token is randomized, so now links from the outside world fail despite having appropriate credentials in the cookie.

In some people’s minds, this is quite enough. It only needs to be possible to deploy secure solutions; cost of deployment is so irrelevant, it’s not even worth being aware of. The problem is, somebody’s got to fix this junk, and it turns out that retrofitting a site with new URLs is quite tricky. As Martin Johns wrote:

…all application local URLs have to be rewritten for using the randomizer object. While standard HTML forms and hyperlinks pose no special challenge, prior existing JavaScript may be harder to deal with. All JavaScript functions that assign values to document.location or open new windows have to be located and modified. Also all existing onclick and onsubmit events have to be rewritten. Furthermore, HTML code might include external referenced JavaScript libraries, which have to be processed as well. Because of these problems, a web application that is protected by such a solution has to be examined and tested thoroughly.

Put simply, web pages are assembled from pieces developed across sprints, teams, companies, and languages. No one group knows the entire story — but if anyone screws up and leaves the token off, the site breaks.

The test load needed to prevent those breakages from recurring is a huge part of why delivering secure solutions is such a slow and expensive process.

So I’ve been thinking: If upgrading the security of URLs is difficult server side, because the code there tends to be fragmented, could we do something on the client? Could the browser itself be used to add XSRF tokens to URLs in advance of their retrieval or navigation, in a way that basically came down to a single script embedded in the HEAD segment of an HTML file?

Would not such a scheme be useful not only for adding XSRF tokens, but also changing http links to https, paving the way for increasing the adoption of TLS?

The answer seems to be yes. A client side rewriter — ReU, as I call it — might very well be a small, constrained, but enormously useful request of the browser community. I see the code looking vaguely like this:

var session_token = "aaa241ee1298371";
var session_domain = "foo.com";

function analyzeURL(url, domnode, flags)
{
   var domain = getDomain(url);
   // only add the token (and force HTTPS) for local resources
   if(isInside(session_domain, domain)) {
      addParam(url, "token", session_token);
      forceHTTPS(url);
   }
   return url;
}

window.registerURLRewriter(analyzeURL);

Now, something that could totally kill this proposal is if browsers that didn’t support the extension suddenly broke — or, worse, if users with new browsers couldn’t experience any of the security benefits of the extension, lest older users have their functionality broken. Happily, I think we can avoid this trap by encoding in the stored cookie whether the user’s browser supports ReU. Remember, for the XSRF case we’re basically trying to figure out in which situations we want to ignore a browser’s request despite the presence of a valid cookie. So if the cookie reports to us that it came from a user without the ability to automatically add XSRF tokens, oh well. We let that subset through anyway.
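
Here’s a minimal server-side sketch of that fallback logic (the cookie fields and helper name are hypothetical, not part of any actual proposal):

def request_allowed(session, request_token):
    # session is whatever the authenticated cookie decodes to server-side.
    if not session.get("authenticated"):
        return False
    if session.get("reu_capable"):                   # browser registered a URL rewriter
        return request_token == session.get("xsrf_token")
    return True                                      # legacy browser: let that subset through

print(request_allowed({"authenticated": True, "reu_capable": True, "xsrf_token": "t0k3n"}, "t0k3n"))  # True
print(request_allowed({"authenticated": True, "reu_capable": False}, None))                           # True (fallback)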

As long as the bad guys don’t have the ability to force modern browsers to appear as insecure clients, we’re OK.

Now, there’s some history here. Jim Manico pointed me at CSRFGuard, a project from the OWASP guys. This also tries to “upgrade” URLs through client side inspection. The problem is that real world frameworks really do assemble URLs dynamically, or go through paths that CSRFGuard can’t interrupt. These aren’t obscure paths, either:

No known way to inject CSRF prevention tokens into setters of document.location (ex: document.location=”http://www.owasp.org”;)
Tokens are not injected into HTML elements dynamically generated using JavaScript “createElement” and “setAttribute”

Yeah, there are a lot of setters of document.location in the field. Ultimately, what we’re asking for here is a new security boundary for browsers — a direct statement that, if the client’s going somewhere or grabbing something, this inspector is going to be able to take a look at it.

There’s also history with both the Referer and Origin headers, which I spoke about (along with the seeds of this research) in my Interpolique talk. The basic concept with each is that the navigation source of a request — say http://www.google.com — would identify itself in some manner in the HTTP header of the subsequent request. The problem for Referer is that there are just so many sources of requests, and it’s best effort whether they show up or not (let alone whether they show up correctly).

Origin was its own problem. Effectively, they added this entire use case where images and links on a shared forum wouldn’t get access to Origin headers. It was necessary for their design, because their security model was entirely static. But for everything that’s *not* an Internet forum, the security model is effectively random.

With ReU, you write the code (or at least import the library) that determines whether or not a token is applied. And as a bonus, you can quickly upgrade to HTTPS too!

There is at least one catastrophic error case that needs to be handled. Basically, those pop-under windows that advertisers use are actually really useful for all sorts of interesting client side attacks. See, those pop-unders usually retain a connection to the window they came from, and (here’s the important part) can renavigate those windows on demand. So there’s a possible scenario in which you go to a malicious site, it spawns the pop-under, you go to another site, you log in, and then the malicious page drags your real logged-in window to a malicious URL at the otherwise protected site.

It’s a navigation event, and so a naive implementation of ReU would fire, adding the token. Yikes.

The solution is straightforward — guarantee that, at least by default, any event that fired the rewriter came from a local source. But that brings up a much trickier question:

Do we try to stop clickjacking with this mechanism as well? IFrames are beautiful, but they are also the source of a tremendous number of events that come from rather untrusted parties. It’s one thing to request a URL rewriter that can special-case external navigation events. It’s possibly another to get a default filter against events sourced from actions that really came from an IFrame parent.

It is very, very possible to make technically infeasible demands. People think this is OK, because it counts as “trying”. The problem is that “trying” can end up eating your entire budget for fixes. So I’m a little nervous about making ReU not fire when the event that triggered it came from the parent of an IFrame.

There are a few other things of note — this approach is rather bookmark friendly, both because the URLs still contain tokens, and because a special “bookmark handler” could be registered to make a long-lived bookmark of sorts. It’s also possible to allow other modulations for the XSRF token, which (as people note) does leak into Referer headers and the like. For example, HTTP headers could be added, or the cookie could be modified on a per-request basis. Also, for navigations offsite, secrets could be removed from the referring URL before being handed to foreign sites — this could close a longstanding concern with the Referer header too.

Finally, for debuggability, it’d be necessary to have some way of inspecting the URL rewriting stack. It might be interesting to allow the stack to be locked, at least in order of when things got patched in. Worth thinking about.

In summary, there’s lots of quirks to noodle on — but I’m fairly convinced that we can give web developers a “one stop shop” for implementing a number of the behaviors we in the security community believe they should be exhibiting. We can make things easier, more comprehensive, less buggy. It’s worth exploring.

Categories: Security

These Are Not The Certs You’re Looking For

August 31, 2011 16 comments

It began near Africa.

This is not, of course, the *.google.com certificate, seen in Iran, signed by DigiNotar, and announced on August 29th with some fanfare.

This certificate is for Facebook. It was seen in Syria, signed by Verisign (now Symantec), and was routed to me out of the blue by a small group of interesting people — including Stephan Urbach, “The Doctor”, and Meredith Patterson — on August 25th, four days prior.

It wasn’t the first time this year that we’d looked at a mysterious certificate found in the Syrian wild. In May, the Electronic Frontier Foundation reported that it “received several reports from Syrian users who spotted SSL errors when trying to access Facebook over HTTPS.” But this certificate would yield no errors, as it was legitimately signed by Verisign (now Symantec).

But Facebook didn’t use Verisign (now Symantec). From the States, I witnessed Facebook with a DigiCert cert. So too did Meredith in Belgium, with a third strike from Stephan in Belgium: Facebook’s CA was clearly DigiCert.

The SSL Observatory dataset, in all its thirty-five gigabytes of glorious detail, agreed completely. With the exception of one Equifax certificate, Facebook was clearly identifiable by DigiCert alone.

And then Meredith found the final nail:

VeriSign intermediary certificate currently used for VeriSign Global certificates. Chained with VeriSign Class 3 Public Primary Certification Authority. End of use in May 2009.

So here we were, looking at a Facebook certificate that couldn’t be found anywhere on the net — by any of us, or by the SSL Observatory — signed by an authority that apparently was never used by the site, found in a country that had recently been found interfering with Facebook communication, using a private key that had been decommissioned for years. Could things get any sketchier?

Well, they could certainly get weirder. That certificate that was killed in 2009 was the certificate killed by us! That’s the MD2-based certificate that Meredith, the late Len Sassaman, and I warned about in our Black Hat 2009 talk (Slides / Paper)! Somehow, the one cryptographic function that we actually killed before it could do any harm (at the CA, in the browser, even via RFC) was linked to a Facebook certificate issued on July 13th, 2011, two years after its death?

This wasn’t just an apparently false certificate. This was overwhelmingly, almost cartoonishly bogus. Something very strange was happening in Syria.

Facebook was testing some new certificates on one of their load balancers.

WOT

I’ve got a few friends at Facebook. So, with the consent of everyone else involved, I pinged them — “Likely Spoofed Facebook Cert in Syria”. Thought it would get their attention.

It did.

We’re safe.

Turns out we just ordered new (smaller) certs from Verisign and deployed them to a few test LBs on Monday night. We’re currently serving a mixture of digicert and verisign certificates.

One of our test LBs with the new cert: a.b.c.d:443

And thus, what could have been a flash of sound and fury, ultimately signifying nothing, was silenced. There was no attack. Facebook had the private key matching the cert — something not even Verisign would have had, in the case of an actual Syrian break.

In traditional security fashion, we dropped our failed investigation and made no public comment about it — perhaps it’d be a funny story for a future conference, but it was certainly nothing to discuss with urgency.

And then, four days later, the DigiNotar break of *.google.com went public. Totally unrelated, and obviously in progress while our false alarm was ringing. The fates are strange sometimes.

ACTIONABLE INTELLIGENCE

The only entity that can truly assert the cryptographic keys of Facebook, is Facebook. We, as outside observers with direct expertise, detected practically every possible sign that this particular Facebook cert was malicious. It wasn’t. It just wasn’t. That such a comically incorrect certificate might actually be real, has genuine design consequences.

It means you have to put an error screen, with bypass, up for the user. No matter what the outside observers say, they may very well be wrong. The user must be given an override. And the data on what users do, when given an SSL error screen, isn’t remotely subtle. Moxie Marlinspike, probably the smartest guy in the world right now on SSL issues, did a study a few years ago on how many Tor users — not even regular users, but Tor users, clearly concerned about their privacy and possessed with some advanced level of expertise — would notice SSL being disabled and refuse to browse their desired content.

Moxie didn’t find a single user who resisted disabled security. His tool, sslsniff, worked against 100% of the sample set.

To be fair, Moxie’s tool didn’t pop an error — it suppressed security entirely. And, who knows, maybe Tor users think they’ve “done enough” for security and just consider their security budget spent after that. Like most things in security, our data is awful.

But, you know, this UI data ain’t exactly encouraging.

Recently, Moxie’s been presenting some interesting technology, in the form of his Convergence platform. There’s a lot to be said about Convergence, which frankly, I don’t have time to write about right now. (Disclosure: I’m going to Burning Man in about twelve hours. Honestly, I was hoping to postpone this entire discussion until N00ter’s first batch of data was ready. Ah well.) But, to be clear, I was fooled, Meredith was fooled, Moxie would have been fooled…I mean, look at this certificate! How could it possibly be right?

A SMALL TECHNICAL PREVIEW

It is true that the only entity that can truly assert being Facebook, is Facebook. Does that mean everybody should just host self-signed certificates, where they assert themselves directly? Well, no. It doesn’t scale, and just ends up repeatedly asking the user whether a key (first seen, or changed) should be trusted.

Whether a key should be trusted is a question to which users only ever answer yes. Nice for blaming users, I suppose, but it doesn’t move the ball.

What’s better is a way to go from users having to trust everybody, to users having to trust only a small subset of organizations with which they have a business relationship. Then, that subset of organizations can enter into business relationships with the hordes of entities that need their identities asserted…

…and we just reinvented Certificate Authorities.

There’s a lot to say about CA’s, but one thing my friend at Verisign (there were a number of friendly mails exchanged) did point out is that the Facebook certificate passed a check via OCSP — the Online Certificate Status Protocol. Designed because CRLs just do not scale, OCSP allows a browser to quickly determine whether a given certificate is valid or not.

(Not quickly enough — browsers only enforce OCSP checks under the rules of Extended Validation, and *maybe* not even then. I’ve heard rumors.)

CA’s are special, in that they have an actual business relationship with the named parties. For a brief moment, I was excited — perhaps a way for everybody to know if a particular cert was valid, was to ask the certificate authority that issued it! Maybe even we could do something short of the “Internet Death Penalty” for certificates — browsers could be forced to check OCSP before accepting any certificate from a questioned authority.

Alas, there was a problem — and not just “the only value people are adding is republishing the data from the CA”. No, this concept doesn’t work at all, because OCSP assumes a CA never loses control of its keys. I repeat, the system in place to handle a CA losing its keys, assumes the CA never loses the keys.

Look at this.

$ openssl ocsp -issuer vrsn_issuer_cert.pem -cert Cert.pem -url http://ocsp.verisign.com -resp_text
OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response
Version: 1 (0x0)
Responder Id: O = VeriSign Trust Network, OU = "VeriSign, Inc.", OU = VeriSign International Server OCSP Responder - Class 3, OU = Terms of use at http://www.verisign.com/rpa (c)03
Produced At: Aug 30 09:18:44 2011 GMT
Responses:
Certificate ID:
Hash Algorithm: sha1
Issuer Name Hash: C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0
Issuer Key Hash: 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6
Serial Number: 092197E1C0E9CD03DA35243656108681
Cert Status: good
This Update: Aug 30 09:18:44 2011 GMT
Next Update: Sep 6 09:18:44 2011 GMT

Signature Algorithm: sha1WithRSAEncryption
d0:01:89:71:4c:f9:81:e8:5f:bf:d0:cb:46:e5:ba:34:30:bf:
a0:3a:33:f9:7f:47:51:f8:9b:1d:7d:bc:bc:95:b8:6f:64:49:
e5:b4:34:d9:eb:75:ff:6a:a2:01:b4:0a:ae:d5:f0:1d:a2:bf:
56:45:f2:42:27:44:ef:9b:5f:0d:ba:47:60:ac:1b:eb:9a:45:
f8:c8:3d:27:ad:0e:cb:ae:2a:03:7c:95:40:2b:5f:5b:5a:be:
ad:ea:60:54:15:39:a2:07:37:ac:11:b0:a6:42:2e:03:14:7e:
ff:0e:7c:87:30:f7:63:01:4c:fa:e3:42:54:bc:e8:f3:81:4c:
ba:66
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
78:3a:4f:aa:ce:5f:ab:d6:8f:bd:60:9d:35:53:d4:e9
Signature Algorithm: sha1WithRSAEncryption
Issuer: O=VeriSign Trust Network, OU=VeriSign, Inc., OU=VeriSign International Server CA - Class 3, OU=www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign
Validity
Not Before: Aug 4 00:00:00 2011 GMT
Not After : Nov 2 23:59:59 2011 GMT
Subject: O=VeriSign Trust Network, OU=VeriSign, Inc., OU=VeriSign International Server OCSP Responder - Class 3, OU=Terms of use at http://www.verisign.com/rpa (c)03
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public Key: (1024 bit)
Modulus (1024 bit):
00:ef:a4:35:c9:29:e0:66:ca:2c:4e:77:95:55:55:
e8:60:46:5e:f3:8d:2c:c9:d2:83:01:f7:9c:2b:29:
b6:75:f2:e2:7e:45:f6:e9:29:ff:1a:54:d6:40:d2:
72:ef:78:12:7f:18:5c:c0:dc:ee:ac:c3:7c:97:9b:
fb:d5:b1:96:ab:07:bd:5e:e0:b0:de:0c:e9:03:ae:
c3:9f:36:dd:ad:74:97:8a:9e:81:4e:66:85:17:e6:
24:fa:a7:b6:ce:fd:56:f3:d4:c9:f2:36:3b:8d:dc:
fe:ae:0b:9c:25:23:06:8b:88:52:44:94:2c:4a:0c:
50:1a:c1:3f:ff:b3:17:c4:ab
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Certificate Policies:
Policy: 2.16.840.1.113733.1.7.23.3
CPS: https://www.verisign.com/CPS
User Notice:
Organization: VeriSign, Inc.
Number: 1
Explicit Text: VeriSign's CPS incorp. by reference liab. ltd. (c)97 VeriSign

X509v3 Extended Key Usage:
OCSP Signing
X509v3 Key Usage:
Digital Signature
OCSP No Check:

X509v3 Subject Alternative Name:
DirName:/CN=OCSP8-TGV7-123
Signature Algorithm: sha1WithRSAEncryption
52:c1:39:cf:65:8e:52:b1:a2:3c:38:dc:e2:d1:3e:bc:85:41:
de:c6:88:61:91:82:10:39:cb:63:62:8b:02:8c:7e:ce:dc:df:
4d:2f:44:d1:f0:89:e1:64:8d:5e:57:5b:a5:71:80:29:28:0b:
b9:c0:8b:68:1c:29:b6:5d:d7:99:d8:b8:e3:a3:11:61:8f:c6:
fb:f3:ad:3b:35:bb:fc:30:b9:0f:69:06:19:9f:9f:72:6d:a3:
f7:06:41:8f:8c:2a:26:94:ba:8e:cf:f6:76:75:b0:35:57:fb:
92:ac:79:03:91:17:24:46:09:88:ca:44:03:92:ed:e5:b8:76:
92:28
-----BEGIN CERTIFICATE-----
MIIEDDCCA3WgAwIBAgIQeDpPqs5fq9aPvWCdNVPU6TANBgkqhkiG9w0BAQUFADCB
ujEfMB0GA1UEChMWVmVyaVNpZ24gVHJ1c3QgTmV0d29yazEXMBUGA1UECxMOVmVy
aVNpZ24sIEluYy4xMzAxBgNVBAsTKlZlcmlTaWduIEludGVybmF0aW9uYWwgU2Vy
dmVyIENBIC0gQ2xhc3MgMzFJMEcGA1UECxNAd3d3LnZlcmlzaWduLmNvbS9DUFMg
SW5jb3JwLmJ5IFJlZi4gTElBQklMSVRZIExURC4oYyk5NyBWZXJpU2lnbjAeFw0x
MTA4MDQwMDAwMDBaFw0xMTExMDIyMzU5NTlaMIGwMR8wHQYDVQQKExZWZXJpU2ln
biBUcnVzdCBOZXR3b3JrMRcwFQYDVQQLEw5WZXJpU2lnbiwgSW5jLjE/MD0GA1UE
CxM2VmVyaVNpZ24gSW50ZXJuYXRpb25hbCBTZXJ2ZXIgT0NTUCBSZXNwb25kZXIg
LSBDbGFzcyAzMTMwMQYDVQQLEypUZXJtcyBvZiB1c2UgYXQgd3d3LnZlcmlzaWdu
LmNvbS9ycGEgKGMpMDMwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAO+kNckp
4GbKLE53lVVV6GBGXvONLMnSgwH3nCsptnXy4n5F9ukp/xpU1kDScu94En8YXMDc
7qzDfJeb+9WxlqsHvV7gsN4M6QOuw5823a10l4qegU5mhRfmJPqnts79VvPUyfI2
O43c/q4LnCUjBouIUkSULEoMUBrBP/+zF8SrAgMBAAGjggEZMIIBFTAJBgNVHRME
AjAAMIGsBgNVHSAEgaQwgaEwgZ4GC2CGSAGG+EUBBxcDMIGOMCgGCCsGAQUFBwIB
FhxodHRwczovL3d3dy52ZXJpc2lnbi5jb20vQ1BTMGIGCCsGAQUFBwICMFYwFRYO
VmVyaVNpZ24sIEluYy4wAwIBARo9VmVyaVNpZ24ncyBDUFMgaW5jb3JwLiBieSBy
ZWZlcmVuY2UgbGlhYi4gbHRkLiAoYyk5NyBWZXJpU2lnbjATBgNVHSUEDDAKBggr
BgEFBQcDCTALBgNVHQ8EBAMCB4AwDwYJKwYBBQUHMAEFBAIFADAmBgNVHREEHzAd
pBswGTEXMBUGA1UEAxMOT0NTUDgtVEdWNy0xMjMwDQYJKoZIhvcNAQEFBQADgYEA
UsE5z2WOUrGiPDjc4tE+vIVB3saIYZGCEDnLY2KLAox+ztzfTS9E0fCJ4WSNXldb
pXGAKSgLucCLaBwptl3Xmdi446MRYY/G+/OtOzW7/DC5D2kGGZ+fcm2j9wZBj4wq
JpS6js/2dnWwNVf7kqx5A5EXJEYJiMpEA5Lt5bh2kig=
-----END CERTIFICATE-----
Cert.pem: good
This Update: Aug 30 09:18:44 2011 GMT
Next Update: Sep 6 09:18:44 2011 GMT

I’ve spent years saying X.509 is broken, but seriously, this is it for me. I copy this in full because the full reply needs to be parsed, and I don’t see this as dropping 0day because it’s very clearly an intentional part of the design. Here is what OCSP asserts:

Certificate ID:
Hash Algorithm: sha1
Issuer Name Hash: C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0
Issuer Key Hash: 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6
Serial Number: 092197E1C0E9CD03DA35243656108681
Cert Status: good

To translate: “A certificate with the serial number 092197E1C0E9CD03DA35243656108681, using sha1, signed by a name with the hash of C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0, using a key with the hash of 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6, is good.”

To translate even further, please imagine a scene from one of my favorite shows, The IT Crowd.


Roy: Moss…did you…did you sign this certificate?
Moss: I did indeed sign a certificate…with that serial number.
Roy: Yes, but did you sign this particular certificate?
Moss: Yes, I did indeed sign a particular certificate…with that serial number.
Roy: Moss? Did YOU sign, with YOUR key, this particular certificate?
Moss: Roy. I DID sign, with MY key, a certificate….with that serial number.
Roy: Yes, but what if the certificate you signed isn’t the certificate that I have? What if your certificate is for Moe’s Doner Shack, and mine is for *.google.com?
Moss: I would never sign two certificates with the same serial number.
Roy: What if you did?
Moss: I would never sign two certificates with the same serial number.
Roy: What if somebody else did, got your keys perhaps?
Moss: I would never sign two certificates with the same serial number.
Roy: Moss?
Moss: I would never sign two certificates with the same serial number.

Long and short of it is that OCSP doesn’t tell you if a certificate is good, it just tells you if the CA thinks there’s a good certificate out there with that serial number. An attacker who can control the serial number — either by compromising the infrastructure, or just getting the keys — wins.
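
To see why, here’s a minimal sketch of what an OCSP CertID actually binds, per RFC 2560: a hash of the issuer’s name, a hash of the issuer’s key, and the serial number. Nothing from the certificate being “validated” (not its subject, not its public key, not its extensions) ever enters the computation. The inputs below are placeholders, not real DER.

import hashlib

def ocsp_cert_id(issuer_name_der, issuer_key_der, serial_number):
    return (
        hashlib.sha1(issuer_name_der).hexdigest(),   # issuerNameHash
        hashlib.sha1(issuer_key_der).hexdigest(),    # issuerKeyHash
        serial_number,
    )

# Two completely different certificates -- Moe's Doner Shack and *.google.com,
# say -- yield the same CertID if an attacker can arrange for them to share an
# issuer and a serial number. OCSP cannot tell them apart.
legit = ocsp_cert_id(b"issuer-name", b"issuer-key", 0x092197E1C0E9CD03DA35243656108681)
forged = ocsp_cert_id(b"issuer-name", b"issuer-key", 0x092197E1C0E9CD03DA35243656108681)
print(legit == forged)   # True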

(This is very similar to the issue Kevin S. McArthur (and Marsh Ray?) and I were musing about on Twitter a few months back during Comodogate, in which it became clear that CRL’s also only secure serial numbers. The difference is that CRL’s are an ancient technology that we knew didn’t work, while OCSP was designed with full awareness of threats and really went out of its way to avoid hashing the parts of the certificate that actually mean something, like Subject Name and whether the certificate is a God Mode Intermediate.)

Now, that’s not to say that another certificate validation endpoint couldn’t be exposed by the CAs, one that actually reported whether a particular certificate was valid. The problem is that CAs can lie — they can claim the existence of a business relationship, when none exists. The CA’s I deal with are actually quite upstanding and hard working — let me call out Comodo and say unequivocally that they took a lot of business risk for doing the right thing, and it couldn’t have been an easy decision to make. The call to push for a browser patch, and not to pretend that browser revocation checking was meaningful, was a pretty big deal. But with over 1500 known entities with the ability to declare certificates for everyone, the trust level is unfortunately degraded.

1500 is too big. We need fewer, and more importantly, we need to reward those that implement better products. We need a situation where there is at least a chain of business (or governmental — we’ll talk about this more next week) relationships between the identity we trust, and the entities that help us recognize that trust — and that those outside that trust (DigiNotar) can be as easily excluded as those inside the trust (Verisign/Symantec) are included.

DNSSEC doesn’t do everything. Specifically, it can’t assert brands, which are different things entirely from domain names. But the degree to which it ties *business relationships* to *key management* is unparalleled — there’s a reason X.509 certificates, for all the gunk they coded into Subject Names, ended up running straight to DNS names as “Common Names” the moment they needed to scale. That’s what every other application does, when it needs to work outside the org! From the registrar that inserts a domain into DNS, to the registry that hosts only the top portion of the namespace, to the one root which is guarded from so much technical and bureaucratic influence, DNSSEC aligns trust with extant relationships.

And, in no uncertain terms, when DNSSEC tells you a certificate is valid, it won’t just be talking about the serial number.

More later. For now, Burning Man. ISPs, a reminder…I’m serious, I genuinely would prefer N00ter not find anything problematic.

A SMALL EPILOGUE

Curious how Facebook could have a certificate that chained to a dead cert signed with a broken hashing function? So was I, until I remembered — roots within browsers are not trusted because of their self signature (best I can tell, just a cheap trick to make ASN.1 parsers happy). They are trusted because the code is made to explicitly trust them. The public key in a certificate otherwise self-signed by MD2 is no less capable of signing SHA-1 blobs as well, which is what’s happening in the Facebook cert. If you want to see the MD2 kill actually breaking stuff, see this post by the brilliant Eric Lawrence.

It IS a little weird, but man, that’s ops. Ops is always a little weird.

Categories: Security

Black Ops of TCP/IP 2011

August 5, 2011 4 comments

This year’s Black Hat and Defcon slides!

Man, it’s nice to be playing with packets again!

People seem to be rather excited (Forbes, Dark Reading, Search Security) about the Neutrality Router I’ve been working on. It’s called N00ter, and in a nutshell, it normalizes your link such that any differences in performance can’t be coming from different servers taking different routes, and have to instead be caused at the ISP. Here’s a summary of what I posted to Slashdot, explaining more succinctly what N00ter is up to.

Say Google is 50ms slower than Bing. Is this because of the ISP, or the routers and myriad server and path differentials between the ISP and Google, vs. the ISP and Bing? Can’t tell, it’s all conflated. We have to normalize the connection between the two sites, to measure if the ISP is using policy to alter QoS. Here’s how we do this with n00ter.

Start with a VPN that creates an encrypted link from a Client to a broker/concentrator. An IP at the Broker talks plaintext with Google and Bing, who reply to the Broker. The Broker now encrypts the traffic back to the Client.

Policy can’t differentiate Bing traffic from Google traffic, it’s all encrypted.

Now, let’s change things up — let’s have the Broker push the response traffic from Google and Bing completely in the open. In fact, let’s have it go so far as to spoof traffic from the original sources, making it look like there isn’t even a Broker in place. There are just nice clean streams from Google and Bing.

If traffic from the same host, being sent over the same network path, but looking like Google, arrives faster (or slower) than traffic that looks like it came from Bing, then there’s policy differentiating Google from Bing.

Now, what if the policy is only applied to full flows, and not half flows? Well, in this case, we have one session that’s a straight normal download from Bing. Then we have another, where the entire client->server path is tunneled as before, but the Broker immediately emits the tunneled packets to Bing *spoofing the Client’s IP address*. So basically we’re now comparing the speed of a full legitimate flow to Bing with a half flow. If QoS differs — as it would if policy is only applied to full flows — then once again the policy is detected.

I call this client->server spoofing mode Roto-N00ter.

There’s more tricks, but this is what N00ter’s up to in a nutshell. It should work for anything IP based — if you want to know if XBox360 traffic routes faster than PS3 traffic, this’ll tell you.
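
A minimal sketch of the core comparison (not N00ter itself, just the measurement logic with made-up numbers): same path, same Broker, traffic dressed up as two different sources, and a check for whether the latency distributions differ.

from statistics import mean

def policy_detected(latencies_as_google_ms, latencies_as_bing_ms, threshold_ms=5.0):
    # If the only difference between the two flows is what they look like,
    # a significant gap in latency has to come from ISP policy.
    return abs(mean(latencies_as_google_ms) - mean(latencies_as_bing_ms)) > threshold_ms

print(policy_detected([48, 51, 50, 49], [93, 95, 97, 94]))   # True: something is differentiating
print(policy_detected([48, 51, 50, 49], [49, 52, 50, 51]))   # False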

Also, I’ve been doing some interesting things with BitCoin. (Len, we’ll miss you.) A few weeks ago, I gave a talk at Toorcon Seattle on the subject. Here are those slides as well.

Where’s the code? Well, two things are slowing down Paketto Keiretsu 3.0 (do people even remember scanrand?). First, I could use a release manager. I swear, packaging stuff up for release is actually harder in many ways than writing the code! I do admit to knowing TCP rather better than Autoconf.

Secondly, and I know this is going to sound strange — I’m really not out to bust anyone with N00ter. Yes, I know it’s inevitable. But if a noxious filter is quietly removed with the knowledge that its presence is going to be provable in a court of law, well, all’s well that ends well, right?

So, give me a week or two. I have to get back from Germany anyway (the Black Ops talk will indeed be presented in a nuke hardened air bunker, overlooking MiG’s on the front lawn. LOL.)

Categories: Security