Phidelius: Constructing Asymmetric Keypairs From Mere Passwords For Fun and PAKE

January 3, 2012 1 comment

TL;DR: New toy.

$ phidelius
Phidelius 1.0: Entropy Spoofing Engine
Author: Dan Kaminsky / dan@doxpara.com
Description: This code replaces most sources of an application's entropy with
a pseudorandom stream seeded from a password, a file, or a generated
sequence. This causes most cryptographic key generators (ssh-keygen,
openssl, etc) to emit apparently strong keys with a presumably memorable
backdoor. For many protocols this creates PAKE (Password Authenticated
Key Exchange) semantics without the server updating any code or even
being aware of the password stored client side. However, the cost of
blinding the server is increased exposure to offline brute force
attacks by a MITM, a risk only partially mitigatable by time/memory
hard crack resistance.
Example: phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f id_dsa"


Passwords are a problem. They’re constantly being lost, forgotten, and stolen. Something like 50% of compromises are associated with their loss. Somewhere along the way, websites started thinking l33tsp33k was a security technology to be enforced upon users.

So what we’re about to talk about in this post — and, in fact, what’s in the code I’m about to finally drop — is by no means a good idea. It may perhaps be an interesting idea, however.

So! First discussed in my Black Ops of 2011 talk, I’m finally releasing Phidelius 1.0. Phidelius allows a client armed with nothing but a password to generate predictable RSA/DSA/ECC keypairs that can then be used, unmodified, against real world applications such as SSH, SSL, IPsec, PGP, and even BitCoin (though I haven’t quite figured out that particular invocation yet — it’s pretty cool, you could send money to the bearer of a photograph).

Now, why would you do this? There’s been a longstanding question as to how a server can support passwords without necessarily learning those passwords. The standard exhortations against storing unhashed passwords mean nothing against this problem; even if the server is storing hashed values, it’s still receiving them in plain text. There are hash-based challenge-response protocols (“please give me the password hashed with this random value”), but they require the server to store the password (or a password equivalent) so it can recognize valid responses.

There’s been a third class of solutions, belonging to the PAKE (Password Authenticated Key Exchange) family. These solutions generally come down to “password-authenticated Diffie-Hellman”. For various reasons, not least of which have been patents, these approaches haven’t gotten far. Phidelius basically “solves” PAKE, by totally cheating. Rather than authenticating an otherwise random exchange, the keypair itself is nothing but an encoding of the password. The server doesn’t even have to know. (If you are a dev, your ears just perked up. Solutions that require only one side to patch are infinitely easier to deploy.) How can this work?

Well, what was the one bug that affected asymmetric key generation universally, corrupting RSA, DSA, and ECC equally? Of course, the Debian Random Number Generator flaw. See, asymmetric keys of all sorts are all essentially built the same way. A pile of bits is drawn from some ostensibly random source (/dev/random, CryptGenRandom, etc). These bits are then accepted, massaged, or rejected, until they meet the required form for a keypair of the appropriate algorithm.

Under the Debian RNG bug, only a rather small subset of bit patterns could be drawn, so the same keys would be generated over and over. Phidelius just makes predictable key generation a feature: we know how to turn a password into a stream of random numbers. We call that “seeding a pseudorandom number generator”. So, we just make the output of a PRNG the input to /dev/random, /dev/urandom, a couple of OpenSSL functions, and some other miscellaneous noise, and — poof — same password in, same keypair out, like so:


# phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f $RANDOM -N ''"
...The key fingerprint is:
c4:77:27:85:17:a2:83:fb:46:6a:06:97:f8:48:36:06
# phidelius -p "ax-op-nm-qw-yi" -e "ssh-keygen -f $RANDOM -N ''"
The key fingerprint is:
c4:77:27:85:17:a2:83:fb:46:6a:06:97:f8:48:36:06
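
To make the mechanics concrete, here’s a minimal sketch of the core trick: stretch a password into a seed, then run a hash in counter mode to get an endless deterministic “entropy” stream. The KDF, salt, and counter construction below are illustrative assumptions, not Phidelius’s actual internals; the real tool interposes on the application’s entropy sources and uses the scrypt PRNG.

import hashlib

def password_stream(password):
    # Stretch the password into a seed. The KDF and iteration count are
    # illustrative; Phidelius itself uses the scrypt PRNG to make brute
    # forcing expensive.
    seed = hashlib.pbkdf2_hmac("sha256", password.encode(), b"demo-salt", 100000)
    # SHA-256 in counter mode over the seed: a simple deterministic
    # stream. Same password in, same "random" bytes out, forever.
    counter = 0
    while True:
        yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1

# Two runs with the same password yield identical "entropy", which is
# why a key generator fed this stream instead of /dev/random emits the
# same keypair every time.
a = password_stream("ax-op-nm-qw-yi")
b = password_stream("ax-op-nm-qw-yi")
assert next(a) == next(b)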

Now, as I mentioned earlier, it’s not necessarily a particularly good idea to use this technique, and not just because passwords themselves are inherently messy. The biggest weakness of the Phidelius approach is that the public keys it generates are also backdoored with the password. The problem here is subtle. It’s not interesting that the server can brute force the public key back to the password. This is *always* the case, even in properly beautiful protocols like SRP. (In fact, Phidelius uses the PRNG from scrypt, a time and memory hard algorithm, to make brute forcing not only hard, but probably harder than any traditional PAKE algorithm can achieve.)

What’s interesting is that public keys are supposed to be public. So, most protocols expose them at least to MITMs, and sometimes even publish the things. Yes, there’s quite a bit of effort put into suppressing brute force attacks, but this is a scheme that genuinely exposes you to them.

It also doesn’t help that salting Phidelius-derived keys is pretty tricky. The fact that the exact same password creates the exact same keypair isn’t necessarily ideal. The best trick I’ve found is to embed parameters for the private key in the larger context afforded to most public keys, i.e. the certificate. You’d retrieve the public key, mix in the randomized salt and some useful parameters (like, say, options for scrypt), and reconstruct the private key. This requires quite a bit of infrastructure though.
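
In that world, regenerating the seed for the private key might look something like this sketch, with the salt and cost parameters pulled out of the certificate (all names here are hypothetical):

import hashlib

def seed_from_password(password: bytes, cert_salt: bytes,
                       n: int = 2**14, r: int = 8, p: int = 1) -> bytes:
    # The salt and the scrypt cost parameters (n, r, p) travel in the
    # public certificate; only the password stays in the user's head.
    # Same password plus same certificate parameters yields the same
    # seed, from which the private key can be regenerated on demand.
    return hashlib.scrypt(password, salt=cert_salt, n=n, r=r, p=p, dklen=64)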

And finally, of course, this is a rather fragile solution. The precise bit-for-bit use of entropy isn’t standardized; all bits are supposed to be equal. So you’re not just generating a key with ssh-keygen, you’re generating from ssh-keygen 5.3, possibly as emitted from a particular compiler. A proper implementation would have to standardize all this.

So, there are caveats. But this is interesting, and I look forward to seeing how people play with it.

(Oh, why Phidelius? It’s a spell from Harry Potter, which properly understood is the story of the epic consequences of losing one’s password.)


Edit: Forgot! Tom Ritter saw my talk at Black Hat, and posted that he’d actually worked out how to do this sort of password-to-keypair construction for PGP keys. It’s actually a bit tricky. You can see his code here.

Categories: Security

Crypto Interrupted: Data On The WPS WiFi Break

January 2, 2012 5 comments

TL;DR:  Went wardriving around Berlin. 26.3% of APs with crypto enabled exposed methods that imply vulnerability to the WPS design flaw.  A conservative extrapolation from WIGLE data suggests at least 4.1M vulnerable hosts.  Yikes.

=======

Fixing things is hard.  Fixing things in security, even more so.  (There’s a reasonable argument that the interlocking dependencies of security create a level of Hard that is mathematically representable.) And yet, WiFi was quietly but noticeably one of our industry’s better achievements.  Sure, WiFi’s original encryption algorithm — WEP — was essentially a demonstration of what not to do when deploying cryptographic primitives.  And indeed, due to various usability issues, it used to be rare to see encryption deployed at all.  But, over the last ten years, in a conversion we’ve notably not witnessed with SSL (at least not at nearly the same scale)…well, take a look:

In January 2002, almost 60% of AP’s seen worldwide by the WIGLE wireless mapping project had encryption disabled.  Ten years later, only 20% remained, and while WEP isn’t gone, it’s on track to be replaced by WPA/WPA2. You can explore the raw data here, and while there’s certainly room to argue about some of the numbers, it’s difficult to say things haven’t gotten better.

Which is what makes this WPS flaw found by Stefan Viehböck (and independently discovered by Craig Heffner) quite so tragic.

I’m not going to lie.  Trying to reverse engineer just how this bug came about has been something of a challenge.  There’s a desire to compete with Bluetooth for device enrollment, there’s some defense against evil twin / fake APs, there’s a historical similarity to the old LANMAN bug…I haven’t really figured it out yet.  At the end of the day though, the problem is that an attacker can determine that they’ve guessed the first four digits before working on the next three (and the last is a checksum), meaning 11,000 queries (5,500 on average) are enough to guess a PIN.
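
The arithmetic is worth spelling out. The checksum below is the standard WPS PIN checksum; the search-space math is the whole bug:

def wps_checksum(pin7):
    # Standard WPS checksum over the first seven digits: alternating
    # digits weighted 3 and 1, summed mod 10.
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)
        pin7 //= 10
        accum += pin7 % 10
        pin7 //= 10
    return (10 - accum % 10) % 10

# An 8-digit PIN looks like 10^8 guesses. But the protocol leaks whether
# the first four digits were right, and the eighth is a checksum:
first_half = 10**4    # guess digits 1-4 independently
second_half = 10**3   # then digits 5-7; digit 8 is computed
print(first_half + second_half)   # 11000 worst case, ~5500 on average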

So, that’s the issue (and, likely, the fix — prevent the attacker from knowing they’re halfway there, by blinding the midpoint error).  But just like the telnet encryption bug, I think it’s important we get a handle on just how widespread this vulnerability is.  The WIGLE data doesn’t actually declare whether a given encryption-supporting node also supports WPS.  So…let’s go outside and find out.

Wardriving?  In 2012?  It’s more likely than you think.

Over an approximately 4-hour period, 3,738 AP’s were found in the Berlin area.  (I was in Berlin for CCC.)  2,758 of these AP’s had encryption enabled — 73%, a bit higher than the WIGLE average.  535 of the 2,758 (19.39%) had the “Label” WPS method enabled, along with crypto.  These are almost certainly vulnerable.  Another 191 (6.9%) had the “Display” WPS method enabled.  We believe these to be vulnerable as well.  Finally, 611/2,758 — 22% — had WPS enabled, with no methods declared.  We have no data on whether these devices are exposed.

As things stand, we see 26.3% of AP’s with encryption enabled likely to be exposing the WPS vulnerability.

What does this mean, in terms of absolute numbers?  WIGLE’s seen about 26M networks with crypto enabled.  However, they don’t age out AP’s, meaning the fact that a network hasn’t been seen since 2003 doesn’t mean it isn’t represented in the above numbers.  Still, 60% of the networks they’ve ever seen, were first seen in the last two years.  If we just take 60% of the 26M networks — 15.6M — and we entirely ignore all the networks WIGLE was unable to identify the crypto for, and all the networks that declare WPS but don’t declare a type…

We end up with 4.1M networks vulnerable to the WPS design vulnerability around the globe.  That’s the conservative number, and it’s a pretty big deal.
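
The back-of-envelope math, for the skeptical:

# Berlin sample
label = 535          # "Label" method: almost certainly vulnerable
display = 191        # "Display" method: believed vulnerable
crypto = 2758        # APs seen with encryption enabled
likely_vulnerable = (label + display) / crypto   # ~0.263

# WIGLE extrapolation
wigle_crypto = 26e6            # networks ever seen with crypto enabled
recent = 0.60 * wigle_crypto   # first seen in the last two years: 15.6M
print(int(recent * likely_vulnerable))   # ~4.1M networks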

Note, of course, that whether a node is vulnerable is not a random function.  Like the UPNP issues I discussed in my 2011 Black Hat talk, this is an issue that may show up on every access point distributed by a particular ISP.  Given that there are entire countries with a single Internet provider, it is likely there are entire regions where every access point is now wide open.

Curious about your own networks?  Run:

iw dev wlan0 scan | grep "Config methods"

If you see anything with “Label” or “Display” you’ve got a problem.

===
Small update:

There’s code in one WPA supplicant that implies that APs which do not expose a method may still support PINs:


/*
* In theory, this could also verify that attr.sel_reg_config_methods
* includes WPS_CONFIG_LABEL, WPS_CONFIG_DISPLAY, or WPS_CONFIG_KEYPAD,
* but some deployed AP implementations do not set Selected Registrar
* Config Methods attribute properly, so it is safer to just use
* Device Password ID here.
*/

I’m going to stick with the conservative 26.3%, rather than the larger 48.1% estimate, until someone actually confirms an effective attack against such an AP.

Categories: Security

From 0day to 0data: TelnetD

December 29, 2011 Leave a comment

Recently, it was found that BSD-derived Telnet implementations had a fairly straightforward vulnerability in their encryption handler. (Also, it was found that there was an encryption handler.) Telnet was the de facto standard protocol for remote administration of everything but Windows systems, so there’s been some curiosity in just how nasty this bug is operationally.

So, with the help of Fyodor and the NMAP scripting team, I set out to collect some data on just how widespread this bug is. First, I set out to find all hosts listening on tcp/23. Then, with Fyodor’s code, I checked for encryption support. Presence of support does not necessarily imply vulnerability, but absence does imply invulnerability.

The scanner’s pretty rough, and I’m doing some sampling here, but here’s the bottom line data:

Out of 156,113 confirmed telnet servers randomly distributed across the Internet, only 429 exposed encryption support. By far my largest source of noise is the estimate of just how many telnet servers (as opposed to just TCP listeners on 23, or transient services) exist across the entire Internet. My conservative numbers place that count at around 7.8M. This yields an estimate of ~21,546 (or 15,600 to 23,500, at 95% confidence — thanks, David Horn!) potentially vulnerable servers.
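
The point estimate is simple proportional scaling; a quick sketch (the inputs here are rounded, so it lands a touch under the figure above):

sampled = 156113    # confirmed telnet servers scanned
with_enc = 429      # of those, exposing encryption support
total = 7.8e6       # conservative Internet-wide telnet server count

rate = with_enc / sampled    # ~0.27% expose encryption support
print(round(rate * total))   # ~21,435 potentially vulnerable servers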

Patched servers will still look vulnerable unless they remove encryption support entirely, so this isn’t something we can watch get fixed (well, unless we’re willing to crash servers, which I’m not, even if it’d be safe because these things are launched from inetd).

Here is a command line that will scan your networks. You may need nmap from svn.


/usr/local/bin/nmap -p23 -PS23 --script telnet-encryption -T5 -oA telnet_logs -iL list_of_ips -v

So, great bug, just almost certainly not widely distributed. I’ll document the analysis process when I’m not at con.

Categories: Security

Exploring ReU: Rewriting URLs For Fun And XSRF/HTTPS

October 20, 2011 1 comment

So I’ve been musing lately about a scheme I call ReU. It would allow client side rewriting of URLs, to make the addition of XSRF tokens much easier. As a side effect, it should also make it much easier to upgrade the links on a site from http to https. Does it work? Is it a good idea? I’m not at all sure. But I think it’s worth talking about:

===

There is a consistent theme to both XSS and XSRF attacks: XS. The fact that foreign sites can access local resources, with all the credentials of the user, is a fundamental design issue. Random web sites on the Internet should not be able to retrieve Page 3 of 5 of a shopping cart in the same way they can retrieve this blog entry! The most common mechanism recommended to prevent this sort of cross-site chicanery is to use a XSRF token, like so:

http://foo.com/bar?session_id=12345

The token is randomized, so now links from the outside world fail despite having appropriate credentials in the cookie.

In some people’s minds, this is quite enough. It only needs to be possible to deploy secure solutions; cost of deployment is so irrelevant, it’s not even worth being aware of. The problem is, somebody’s got to fix this junk, and it turns out that retrofitting a site with new URLs is quite tricky. As Martin Johns wrote:

…all application local URLs have to be rewritten for using the randomizer object. While standard HTML forms and hyperlinks pose no special challenge, prior existing JavaScript may be harder to deal with. All JavaScript functions that assign values to document.location or open new windows have to be located and modified. Also all existing onclick and onsubmit events have to be rewritten. Furthermore, HTML code might include external referenced JavaScript libraries, which have to be processed as well. Because of these problems, a web application that is protected by such a solution has to be examined and tested thoroughly.

Put simply, web pages are pieced together from pieces developed across sprints, teams, companies, and languages. No one group knows the entire story — but if anyone screws up and leaves the token off, the site breaks.

The test load to prevent those breakages from recurring is a huge part of why delivering secure solutions is such a slow and expensive process.

So I’ve been thinking: If upgrading the security of URLs is difficult server side, because the code there tends to be fragmented, could we do something on the client? Could the browser itself be used to add XSRF tokens to URLs in advance of their retrieval or navigation, in a way that basically came down to a single script embedded in the HEAD segment of an HTML file?

Would not such a scheme be useful not only for adding XSRF tokens, but also changing http links to https, paving the way for increasing the adoption of TLS?

The answer seems to be yes. A client side rewriter — ReU, as I call it — might very well be a small, constrained, but enormously useful request of the browser community. I see the code looking something vaguely like:

var session_token = "aaa241ee1298371";
var session_domain="foo.com";

function analyzeURL(url, domnode, flags)
{
   var domain = getDomain(url);
   // only add the token and force HTTPS for local resources
   if (isInside(session_domain, domain)) {
      url = addParam(url, "token", session_token);
      url = forceHTTPS(url);
   }
   return url;
}

window.registerURLRewriter(analyzeURL);

Now, something that could totally kill this proposal is if browsers that didn’t support the extension suddenly broke — or, worse, if users with new browsers couldn’t experience any of the security benefits of the extension, lest older users have their functionality broken. Happily, I think we can avoid this trap, by encoding in the stored cookie whether the user’s browser supports ReU. Remember, for the XSRF case we’re basically trying to figure out in what situation we want to ignore a browser’s request despite the presence of a valid cookie. So if the cookie reports to us that it came from a user without the ability to automatically add XSRF tokens, oh well. We let that subset through anyway.

As long as the bad guys don’t have the ability to force modern browsers to appear as insecure clients, we’re OK.
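
Server-side, the decision logic is tiny. A sketch, with hypothetical cookie and parameter names:

import hmac

def request_allowed(cookie, params, expected_token):
    # At login, the server notes in the cookie whether the browser
    # registered a ReU rewriter. Legacy browsers are let through on the
    # cookie alone; capable browsers must present the token every time.
    if not cookie.get("reu_capable"):
        return True
    return hmac.compare_digest(params.get("token", ""), expected_token)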

Now, there’s some history here. Jim Manico pointed me at CSRFGuard, a project from the OWASP guys. This also tries to “upgrade” URLs through client side inspection. The problem is that real world frameworks really do assemble URLs dynamically, or go through paths that CSRFGuard can’t interrupt. These aren’t obscure paths, either:

No known way to inject CSRF prevention tokens into setters of document.location (ex: document.location=”http://www.owasp.org”;)
Tokens are not injected into HTML elements dynamically generated using JavaScript “createElement” and “setAttribute”

Yeah, there’s a lot of setters of document.location in the field. Ultimately, what we’re asking for here is a new security boundary for browsers — a direct statement that, if the client’s going somewhere or grabbing something, this inspector is going to be able to take a look at it.

There’s also history with both the Referer and Origin headers, which I spoke about (along with the seeds of this research) in my Interpolique talk. The basic concept with each is that the navigation source of a request — say http://www.google.com — would identify itself in some manner in the HTTP header of the subsequent request. The problem for Referer is that there are just so many sources of requests, and it’s best effort whether they show up or not (let alone whether they show up correctly).

Origin was its own problem. Effectively, they added this entire use case where images and links on a shared forum wouldn’t get access to Origin headers. It was necessary for their design, because their security model was entirely static. But for everything that’s *not* an Internet forum, the security model is effectively random.

With ReU, you write the code (or at least import the library) that determines whether or not a token is applied. And as a bonus, you can quickly upgrade to HTTPS too!

There is at least one catastrophic error case that needs to be handled. Basically, those pop-under windows that advertisers use, are actually really useful for all sorts of interesting client side attacks. See, those pop-unders usually retain a connection to the window they came from, and (here’s the important part) can renavigate those windows on demand. So there’s a possible scenario in which you go to a malicious site, it spawns the pop-under, you go to another site, you log in, and then the malicious page drags your real logged in window to a malicious URL at the otherwise protected site.

It’s a navigation event, and so a naive implementation of ReU would fire, adding the token. Yikes.

The solution is straightforward — guarantee that, at least by default, any event that fired the rewriter came from a local source. But that brings up a much trickier question:

Do we try to stop clickjacking with this mechanism as well? IFrames are beautiful, but they are also the source of a tremendous number of events that comes from rather untrusted parties. It’s one thing to request a URL rewriter that can special case external navigation events. It’s possibly another to get a default filter against events sourced from actions that really came from an IFrame parent.

It is very, very possible to make technically infeasible demands. People think this is OK, because it counts as “trying”. The problem is that “trying” can end up eating your entire budget for fixes. So I’m a little nervous about making ReU not fire when the event that triggered it came from the parent of an IFrame.

There are a few other things of note — this approach is rather bookmark friendly, both because the URLs still contain tokens, and because a special “bookmark handler” could be registered to make a long-lived bookmark of sorts. It’s also possible to allow other modulations for the XSRF token, which (as people note) does leak into Referer headers and the like. For example, HTTP headers could be added, or the cookie could be modified on a per request basis. Also, for navigations offsite, secrets could be removed from the referring URL before being handed to foreign sites — this could close a longstanding concern with the Referer header too.

Finally, for debuggability, it’d be necessary to have some way of inspecting the URL rewriting stack. It might be interesting to allow the stack to be locked, at least in order of when things got patched in. Worth thinking about.

In summary, there’s lots of quirks to noodle on — but I’m fairly convinced that we can give web developers a “one stop shop” for implementing a number of the behaviors we in the security community believe they should be exhibiting. We can make things easier, more comprehensive, less buggy. It’s worth exploring.

Categories: Security

These Are Not The Certs You’re Looking For

August 31, 2011 16 comments

It began near Africa.

This is not, of course, the *.google.com certificate, seen in Iran, signed by DigiNotar, and announced on August 29th with some fanfare.

This certificate is for Facebook. It was seen in Syria, signed by Verisign (now Symantec), and was routed to me out of the blue by a small group of interesting people — including Stephan Urbach, “The Doctor”, and Meredith Patterson — on August 25th, four days prior.

It wasn’t the first time this year that we’d looked at a mysterious certificate found in the Syrian wild. In May, the Electronic Frontier Foundation reported that it “received several reports from Syrian users who spotted SSL errors when trying to access Facebook over HTTPS.” But this certificate would yield no errors, as it was legitimately signed by Verisign (now Symantec).

But Facebook didn’t use Verisign (now Symantec). From the States, I witnessed Facebook with a DigiCert cert. So too did Meredith in Belgium, with a third strike from Stephan in Belgium: Facebook’s CA was clearly DigiCert.

The SSL Observatory dataset, in all its thirty five gigabytes of glorious detail, agreed completely. With the exception of one Equifax certificate, Facebook was clearly identifiable by DigiCert alone.

And then Meredith found the final nail:

VeriSign intermediary certificate currently used for VeriSign Global certificates. Chained with VeriSign Class 3 Public Primary Certification Authority. End of use in May 2009.

So here we were, looking at a Facebook certificate that couldn’t be found anywhere on the net — by any of us, or by the SSL Observatory — signed by an authority that apparently was never used by the site, found in a country that had recently been found interfering with Facebook communication, using a private key that had been decommissioned for years. Could things get any sketchier?

Well, they could certainly get weirder. That certificate that was killed in 2009, was the certificate killed by us! That’s the MD2-based certificate that Meredith, myself, and the late Len Sassaman warned about in our Black Hat 2009 talk (Slides / Paper)! Somehow, the one cryptographic function that we actually killed before it could do any harm (at the CA, in the browser, even via RFC) was linked to a Facebook certificate issued on July 13th, 2011, two years after its death?

This wasn’t just an apparently false certificate. This was overwhelmingly, almost cartoonishly bogus. Something very strange was happening in Syria.

Facebook was testing some new certificates on one of their load balancers.

WOT

I’ve got a few friends at Facebook. So, with the consent of everyone else involved, I pinged them — “Likely Spoofed Facebook Cert in Syria”. Thought it would get their attention.

It did.

We’re safe.

Turns out we just ordered new (smaller) certs from Verisign and deployed them to a few test LBs on Monday night. We’re currently serving a mixture of digicert and verisign certificates.

One of our test LBs with the new cert: a.b.c.d:443

And thus, what could have been a flash of sound and fury, ultimately signifying nothing, was silenced. There was no attack. Facebook had the private key matching the cert — something not even Verisign would have had, in the case of an actual Syrian break.

In traditional security fashion, we dropped our failed investigation and made no public comment about it — perhaps it’d be a funny story for a future conference, but it was certainly nothing to discuss with urgency.

And then, four days later, the DigiNotar break of *.google.com went public. Totally unrelated, and obviously in progress while our false alarm was ringing. The fates are strange sometimes.

ACTIONABLE INTELLIGENCE

The only entity that can truly assert the cryptographic keys of Facebook, is Facebook. We, as outside observers with direct expertise, detected practically every possible sign that this particular Facebook cert was malicious. It wasn’t. It just wasn’t. That such a comically incorrect certificate might actually be real, has genuine design consequences.

It means you have to put an error screen, with bypass, up for the user. No matter what the outside observers say, they may very well be wrong. The user must be given an override. And the data on what users do, when given an SSL error screen, isn’t remotely subtle. Moxie Marlinspike, probably the smartest guy in the world right now on SSL issues, did a study a few years ago on how many Tor users — not even regular users, but Tor users, clearly concerned about their privacy and possessed with some advanced level of expertise — would notice SSL being disabled and refuse to browse their desired content.

Moxie didn’t find a single user who resisted disabled security. His tool, sslsniff, worked against 100% of the sample set.

To be fair, Moxie’s tool didn’t pop an error — it suppressed security entirely. And, who knows, maybe Tor users think they’ve “done enough” for security and just consider their security budget spent after that. Like most things in security, our data is awful.

But, you know, this UI data ain’t exactly encouraging.

Recently, Moxie’s been presenting some interesting technology, in the form of his Convergence platform. There’s a lot to be said about Convergence, which frankly, I don’t have time to write about right now. (Disclosure: I’m going to Burning Man in about twelve hours. Honestly, I was hoping to postpone this entire discussion until N00ter’s first batch of data was ready. Ah well.) But, to be clear, I was fooled, Meredith was fooled, Moxie would have been fooled…I mean, look at this certificate! How could it possibly be right?

A SMALL TECHNICAL PREVIEW

It is true that the only entity that can truly assert being Facebook, is Facebook. Does that mean everybody should just host self-signed certificates, where they assert themselves directly? Well, no. It doesn’t scale, and just ends up repeatedly asking the user whether a key (first seen, or changed) should be trusted.

Whether a key should be trusted is a question to which users only ever reply yes. Nice for blaming users, I suppose, but it doesn’t move the ball.

What’s better is a way to go from users having to trust everybody, to users having to trust only a small subset of organizations with which they have a business relationship. Then, that subset of organizations can enter into business relationships with the hordes of entities that need their identities asserted…

…and we just reinvented Certificate Authorities.

There’s a lot to say about CA’s, but one thing my friend at Verisign (there were a number of friendly mails exchanged) did point out is that the Facebook certificate passed a check via OCSP — the Online Certificate Status Protocol. Designed because CRLs just do not scale, OCSP allows a browser to quickly determine whether a given certificate is valid or not.

(Not quickly enough — browsers only enforce OCSP checks under the rules of Extended Validation, and *maybe* not even then. I’ve heard rumors.)

CA’s are special, in that they have an actual business relationship with the named parties. For a brief moment, I was excited — perhaps a way for everybody to know if a particular cert was valid, was to ask the certificate authority that issued it! Maybe even we could do something short of the “Internet Death Penalty” for certificates — browsers could be forced to check OCSP before accepting any certificate from a questioned authority.

Alas, there was a problem — and not just “the only value people are adding is republishing the data from the CA”. No, this concept doesn’t work at all, because OCSP assumes a CA never loses control of its keys. I repeat, the system in place to handle a CA losing its keys, assumes the CA never loses the keys.

Look at this.

$ openssl ocsp -issuer vrsn_issuer_cert.pem -cert Cert.pem -url http://ocsp.verisign.com -resp_text
OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response
Version: 1 (0x0)
Responder Id: O = VeriSign Trust Network, OU = "VeriSign, Inc.", OU = VeriSign International Server OCSP Responder - Class 3, OU = Terms of use at http://www.verisign.com/rpa (c)03
Produced At: Aug 30 09:18:44 2011 GMT
Responses:
Certificate ID:
Hash Algorithm: sha1
Issuer Name Hash: C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0
Issuer Key Hash: 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6
Serial Number: 092197E1C0E9CD03DA35243656108681
Cert Status: good
This Update: Aug 30 09:18:44 2011 GMT
Next Update: Sep 6 09:18:44 2011 GMT

Signature Algorithm: sha1WithRSAEncryption
d0:01:89:71:4c:f9:81:e8:5f:bf:d0:cb:46:e5:ba:34:30:bf:
a0:3a:33:f9:7f:47:51:f8:9b:1d:7d:bc:bc:95:b8:6f:64:49:
e5:b4:34:d9:eb:75:ff:6a:a2:01:b4:0a:ae:d5:f0:1d:a2:bf:
56:45:f2:42:27:44:ef:9b:5f:0d:ba:47:60:ac:1b:eb:9a:45:
f8:c8:3d:27:ad:0e:cb:ae:2a:03:7c:95:40:2b:5f:5b:5a:be:
ad:ea:60:54:15:39:a2:07:37:ac:11:b0:a6:42:2e:03:14:7e:
ff:0e:7c:87:30:f7:63:01:4c:fa:e3:42:54:bc:e8:f3:81:4c:
ba:66
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
78:3a:4f:aa:ce:5f:ab:d6:8f:bd:60:9d:35:53:d4:e9
Signature Algorithm: sha1WithRSAEncryption
Issuer: O=VeriSign Trust Network, OU=VeriSign, Inc., OU=VeriSign International Server CA - Class 3, OU=www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign
Validity
Not Before: Aug 4 00:00:00 2011 GMT
Not After : Nov 2 23:59:59 2011 GMT
Subject: O=VeriSign Trust Network, OU=VeriSign, Inc., OU=VeriSign International Server OCSP Responder - Class 3, OU=Terms of use at http://www.verisign.com/rpa (c)03
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public Key: (1024 bit)
Modulus (1024 bit):
00:ef:a4:35:c9:29:e0:66:ca:2c:4e:77:95:55:55:
e8:60:46:5e:f3:8d:2c:c9:d2:83:01:f7:9c:2b:29:
b6:75:f2:e2:7e:45:f6:e9:29:ff:1a:54:d6:40:d2:
72:ef:78:12:7f:18:5c:c0:dc:ee:ac:c3:7c:97:9b:
fb:d5:b1:96:ab:07:bd:5e:e0:b0:de:0c:e9:03:ae:
c3:9f:36:dd:ad:74:97:8a:9e:81:4e:66:85:17:e6:
24:fa:a7:b6:ce:fd:56:f3:d4:c9:f2:36:3b:8d:dc:
fe:ae:0b:9c:25:23:06:8b:88:52:44:94:2c:4a:0c:
50:1a:c1:3f:ff:b3:17:c4:ab
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Certificate Policies:
Policy: 2.16.840.1.113733.1.7.23.3
CPS: https://www.verisign.com/CPS
User Notice:
Organization: VeriSign, Inc.
Number: 1
Explicit Text: VeriSign's CPS incorp. by reference liab. ltd. (c)97 VeriSign

X509v3 Extended Key Usage:
OCSP Signing
X509v3 Key Usage:
Digital Signature
OCSP No Check:

X509v3 Subject Alternative Name:
DirName:/CN=OCSP8-TGV7-123
Signature Algorithm: sha1WithRSAEncryption
52:c1:39:cf:65:8e:52:b1:a2:3c:38:dc:e2:d1:3e:bc:85:41:
de:c6:88:61:91:82:10:39:cb:63:62:8b:02:8c:7e:ce:dc:df:
4d:2f:44:d1:f0:89:e1:64:8d:5e:57:5b:a5:71:80:29:28:0b:
b9:c0:8b:68:1c:29:b6:5d:d7:99:d8:b8:e3:a3:11:61:8f:c6:
fb:f3:ad:3b:35:bb:fc:30:b9:0f:69:06:19:9f:9f:72:6d:a3:
f7:06:41:8f:8c:2a:26:94:ba:8e:cf:f6:76:75:b0:35:57:fb:
92:ac:79:03:91:17:24:46:09:88:ca:44:03:92:ed:e5:b8:76:
92:28
-----BEGIN CERTIFICATE-----
MIIEDDCCA3WgAwIBAgIQeDpPqs5fq9aPvWCdNVPU6TANBgkqhkiG9w0BAQUFADCB
ujEfMB0GA1UEChMWVmVyaVNpZ24gVHJ1c3QgTmV0d29yazEXMBUGA1UECxMOVmVy
aVNpZ24sIEluYy4xMzAxBgNVBAsTKlZlcmlTaWduIEludGVybmF0aW9uYWwgU2Vy
dmVyIENBIC0gQ2xhc3MgMzFJMEcGA1UECxNAd3d3LnZlcmlzaWduLmNvbS9DUFMg
SW5jb3JwLmJ5IFJlZi4gTElBQklMSVRZIExURC4oYyk5NyBWZXJpU2lnbjAeFw0x
MTA4MDQwMDAwMDBaFw0xMTExMDIyMzU5NTlaMIGwMR8wHQYDVQQKExZWZXJpU2ln
biBUcnVzdCBOZXR3b3JrMRcwFQYDVQQLEw5WZXJpU2lnbiwgSW5jLjE/MD0GA1UE
CxM2VmVyaVNpZ24gSW50ZXJuYXRpb25hbCBTZXJ2ZXIgT0NTUCBSZXNwb25kZXIg
LSBDbGFzcyAzMTMwMQYDVQQLEypUZXJtcyBvZiB1c2UgYXQgd3d3LnZlcmlzaWdu
LmNvbS9ycGEgKGMpMDMwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAO+kNckp
4GbKLE53lVVV6GBGXvONLMnSgwH3nCsptnXy4n5F9ukp/xpU1kDScu94En8YXMDc
7qzDfJeb+9WxlqsHvV7gsN4M6QOuw5823a10l4qegU5mhRfmJPqnts79VvPUyfI2
O43c/q4LnCUjBouIUkSULEoMUBrBP/+zF8SrAgMBAAGjggEZMIIBFTAJBgNVHRME
AjAAMIGsBgNVHSAEgaQwgaEwgZ4GC2CGSAGG+EUBBxcDMIGOMCgGCCsGAQUFBwIB
FhxodHRwczovL3d3dy52ZXJpc2lnbi5jb20vQ1BTMGIGCCsGAQUFBwICMFYwFRYO
VmVyaVNpZ24sIEluYy4wAwIBARo9VmVyaVNpZ24ncyBDUFMgaW5jb3JwLiBieSBy
ZWZlcmVuY2UgbGlhYi4gbHRkLiAoYyk5NyBWZXJpU2lnbjATBgNVHSUEDDAKBggr
BgEFBQcDCTALBgNVHQ8EBAMCB4AwDwYJKwYBBQUHMAEFBAIFADAmBgNVHREEHzAd
pBswGTEXMBUGA1UEAxMOT0NTUDgtVEdWNy0xMjMwDQYJKoZIhvcNAQEFBQADgYEA
UsE5z2WOUrGiPDjc4tE+vIVB3saIYZGCEDnLY2KLAox+ztzfTS9E0fCJ4WSNXldb
pXGAKSgLucCLaBwptl3Xmdi446MRYY/G+/OtOzW7/DC5D2kGGZ+fcm2j9wZBj4wq
JpS6js/2dnWwNVf7kqx5A5EXJEYJiMpEA5Lt5bh2kig=
-----END CERTIFICATE-----
Cert.pem: good
This Update: Aug 30 09:18:44 2011 GMT
Next Update: Sep 6 09:18:44 2011 GMT

I’ve spent years saying X.509 is broken, but seriously, this is it for me. I copy this in full because the full reply needs to be parsed, and I don’t see this as dropping 0day because it’s very clearly an intentional part of the design. Here is what OCSP asserts:

Certificate ID:
Hash Algorithm: sha1
Issuer Name Hash: C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0
Issuer Key Hash: 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6
Serial Number: 092197E1C0E9CD03DA35243656108681
Cert Status: good

To translate: “A certificate with the serial number 092197E1C0E9CD03DA35243656108681, using sha1, signed by a name with the hash of C0FE0278FC99188891B3F212E9C7E1B21AB7BFC0, using a key with the hash of 0DFC1DF0A9E0F01CE7F2B213177E6F8D157CD4F6, is good.”

To translate even further, please imagine a scene from one of my favorite shows, The IT Crowd.


Roy: Moss…did you…did you sign this certificate?
Moss: I did indeed sign a certificate…with that serial number.
Roy: Yes, but did you sign this particular certificate?
Moss: Yes, I did indeed sign a particular certificate…with that serial number.
Roy: Moss? Did YOU sign, with YOUR key, this particular certificate?
Moss: Roy. I DID sign, with MY key, a certificate….with that serial number.
Roy: Yes, but what if the certificate you signed isn’t the certificate that I have? What if your certificate is for Moe’s Doner Shack, and mine is for *.google.com?
Moss: I would never sign two certificates with the same serial number.
Roy: What if you did?
Moss: I would never sign two certificates with the same serial number.
Roy: What if somebody else did, got your keys perhaps?
Moss: I would never sign two certificates with the same serial number.
Roy: Moss?
Moss: I would never sign two certificates with the same serial number.

Long and short of it is that OCSP doesn’t tell you if a certificate is good, it just tells you if the CA thinks there’s a good certificate out there with that serial number. An attacker who can control the serial number — either by compromising the infrastructure, or just getting the keys — wins.
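
You can watch this happen with a recent version of the Python cryptography package: build an OCSP request for any certificate and look at what actually identifies it. Nothing about the subject or its key survives, just the issuer hashes and the serial. A minimal sketch, assuming cert.pem and issuer.pem on disk:

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.x509 import ocsp

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()

# The entire identity of the certificate, as far as OCSP is concerned:
print(req.issuer_name_hash.hex())   # hash of the issuer's name
print(req.issuer_key_hash.hex())    # hash of the issuer's key
print(req.serial_number)            # ...and the serial. That's it.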

(This is very similar to the issue Kevin S. McArthur (and Marsh Ray?) and I were musing about on Twitter a few months back during Comodogate, in which it became clear that CRL’s also only secure serial numbers. The difference is that CRL’s are an ancient technology that we knew didn’t work, while OCSP was designed with full awareness of threats and really went out of its way to avoid hashing the parts of the certificate that actually mean something, like Subject Name and whether the certificate is a God Mode Intermediate.)

Now, that’s not to say that another certificate validation endpoint couldn’t be exposed by the CAs, one that actually reported whether a particular certificate was valid. The problem is that CAs can lie — they can claim the existence of a business relationship, when none exists. The CA’s I deal with are actually quite upstanding and hard working — let me call out Comodo and say unequivocally that they took a lot of business risk for doing the right thing, and it couldn’t have been an easy decision to make. The call to push for a browser patch, and not to pretend that browser revocation checking was meaningful, was a pretty big deal. But with over 1500 known entities with the ability to declare certificates for everyone, the trust level is unfortunately degraded.

1500 is too big. We need fewer, and more importantly, we need to reward those that implement better products. We need a situation where there is at least a chain of business (or governmental — we’ll talk about this more next week) relationships between the identity we trust, and the entities that help us recognize that trust — and that those outside that trust (DigiNotar) can be as easily excluded as those inside the trust (Verisign/Symantec) are included.

DNSSEC doesn’t do everything. Specifically, it can’t assert brands, which are different things entirely from domain names. But the degree to which it ties *business relationships* to *key management* is unparalleled — there’s a reason X.509 certificates, for all the gunk they coded into Subject Names, ended up running straight to DNS names as “Common Names” the moment they needed to scale. That’s what every other application does, when it needs to work outside the org! From the registrar that inserts a domain into DNS, to the registry that hosts only the top portion of the namespace, to the one root which is guarded from so much technical and bureaucratic influence, DNSSEC aligns trust with extant relationships.

And, in no uncertain terms, when DNSSEC tells you a certificate is valid, it won’t just be talking about the serial number.

More later. For now, Burning Man. ISPs, a reminder…I’m serious, I genuinely would prefer N00ter not find anything problematic.

A SMALL EPILOGUE

Curious how Facebook could have a certificate that chained to a dead cert signed with a broken hashing function? So was I, until I remembered — roots within browsers are not trusted because of their self signature (best I can tell, just a cheap trick to make ASN.1 parsers happy). They are trusted because the code is made to explicitly trust them. The public key in a certificate otherwise self-signed by MD2 is no less capable of signing SHA-1 blobs as well, which is what’s happening in the Facebook cert. If you want to see the MD2 kill actually breaking stuff, see this post by the brilliant Eric Lawrence.

It IS a little weird, but man, that’s ops. Ops is always a little weird.

Categories: Security

Black Ops of TCP/IP 2011

August 5, 2011 4 comments

This year’s Black Hat and Defcon slides!

Man, it’s nice to be playing with packets again!

People seem to be rather excited (Forbes, Dark Reading, Search Security) about the Neutrality Router I’ve been working on. It’s called N00ter, and in a nutshell, it normalizes your link such that any differences in performance can’t be coming from different servers taking different routes, and have to instead be caused at the ISP. Here’s a summary of what I posted to Slashdot, explaining more succinctly what N00ter is up to.

Say Google is 50ms slower than Bing. Is this because of the ISP, or the routers and myriad server and path differentials between the ISP and Google, vs. the ISP and Bing? Can’t tell, it’s all conflated. We have to normalize the connection between the two sites, to measure if the ISP is using policy to alter QoS. Here’s how we do this with n00ter.

Start with a VPN that creates an encrypted link from a Client to a broker/concentrator. An IP at the Broker talks plaintext with Google and Bing, which reply to the Broker. The Broker then encrypts the traffic back to the Client.

Policy can’t differentiate Bing traffic from Google traffic, it’s all encrypted.

Now, let’s change things up — let’s have the Broker push the response traffic from Google and Bing completely in the open. In fact, let’s have it go so far as to spoof traffic from the original sources, making it look like there isn’t even a Broker in place. There are just nice clean streams from Google and Bing.

If traffic from the same host, being sent over the same network path, but looking like Google, arrives faster (or slower) than traffic that looks like it came from Bing, then there’s policy differentiating Google from Bing.

Now, what if the policy is only applied to full flows, and not half flows? Well, in this case, we have one session that’s a straight normal download from Bing. Then we have another, where the entire client->server path is tunneled as before, but the Broker immediately emits the tunneled packets to Bing *spoofing the Client’s IP address*. So basically we’re now comparing the speed of a full legitimate flow to Bing with a half flow. If QoS differs — as it would, if policy is only applied to full flows — then once again the policy is detected.

I call this client->server spoofing mode Roto-N00ter.

There’s more tricks, but this is what N00ter’s up to in a nutshell. It should work for anything IP based — if you want to know if XBox360 traffic routes faster than PS3 traffic, this’ll tell you.
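
For flavor, here’s roughly what the Broker’s spoofed half-flow looks like at the packet level, sketched with scapy. The addresses, ports, and sequence numbers are placeholders; the real N00ter obviously has to track full TCP state for the tunneled flow:

from scapy.all import IP, TCP, Raw, send   # spoofing requires root

def emit_spoofed(payload, src_ip, dst_ip, sport, dport, seq, ack):
    # Replay a response received over the VPN tunnel, but stamped with
    # the original server's address. To the ISP's middleboxes it looks
    # like a clean stream straight from Google or Bing.
    pkt = (IP(src=src_ip, dst=dst_ip)
           / TCP(sport=sport, dport=dport, flags="PA", seq=seq, ack=ack)
           / Raw(payload))
    send(pkt, verbose=False)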

Also, I’ve been doing some interesting things with BitCoin. (Len, we’ll miss you.) A few weeks ago, I gave a talk at Toorcon Seattle on the subject. Here are those slides as well.

Where’s the code? Well, two things are slowing down Paketto Keiretsu 3.0 (do people even remember scanrand?). First, I could use a release manager. I swear, packing stuff up for release is actually harder in many ways than writing the code! I do admit to knowing TCP rather better than Autoconf.

Secondly, and I know this is going to sound strange — I’m really not out to bust anyone with N00ter. Yes, I know it’s inevitable. But if a noxious filter is quietly removed with the knowledge that its presence is going to be provable in a court of law, well, all’s well that ends well, right?

So, give me a week or two. I have to get back from Germany anyway (the Black Ops talk will indeed be presented in a nuke hardened air bunker, overlooking MiG’s on the front lawn. LOL.)

Categories: Security

New York Times coming out against DNS filtering

This is rather cool: The New York Times is citing our paper and agreeing that filtering DNS for intellectual property violations would be both ineffective and counterproductive.

Some of the remedies are problematic. A group of Internet safety experts cautioned that the procedure to redirect Internet traffic from offending Web sites would mimic what hackers do when they take over a domain. If it occurred on a large enough scale it could impair efforts to enhance the safety of the domain name system.

This kind of blocking is unlikely to be very effective. Users could reach offending Web sites simply by writing the numerical I.P. address in the navigator box, rather than the URL. The Web sites could distribute free plug-ins to translate addresses into numbers automatically.

The bill before the Senate is an important step toward making piracy less profitable. But it shouldn’t pass as is. If protecting intellectual property is important, so is protecting the Internet from overzealous enforcement.

Categories: Security