Defense and Offense For HTML5

(Hehe.  This is a fun post.  Hopefully it annoys the W3C and Security communities equally.)

Just to go on the record: I’m just not all that concerned with new security threats from HTML5. Frankly, minus WebGL, HTML5 in 2010 gives us roughly the functionality that Flash gave us in 2004, but with oddly less polish.  (I’m not kidding:  They’re still trying to figure out how to correctly implement full screen video.)

From a security perspective, that means whatever is scary about HTML5 has already been in place, in 99% of browsers, for half a decade.  Not that that means we’ve attacked, or fixed it all — we in the open security community aren’t exactly known for our speed in finding issues.  But I can’t say I’ve seen much in HTML5 that’s made me nervous, at least relative to problems that already exist in HTML4 and Flash (to say nothing of Java).

What does however bother me is the lack of security technology in HTML5.  The performance guys got goodies.  The artists got goodies.  Even the database guys got goodies — they’re getting SQLite in every browser!  Meanwhile, Cross Site Scripting and Cross Site Request Forgery attacks remain endemic to the web, and fixing them remains embarrassingly expensive.

The real problem with HTML5 is not what risks it adds.  It’s what risks it doesn’t even attempt to remove.  I don’t think we can even blame W3C for this — who, in the security community, has been willing to spend the years necessary to push a spec through?  Thus far, not enough of us.

In the long run, if we’re going to actually fix things, this is something we’re going to have to do more of.  I of course have my own thoughts on what to do about these endemic attacks — I think we need better session management — but actual participation is probably more important than a random slide deck and some code.

(Slight edit:  Yes, the browser manufacturers have been playing around with some technologies to address these issues — Microsoft, with their Anti-XSS Filter, and Mozilla, with Content Security Policies.  Is either technology on the larger standardization path?  Not as far as I can tell.)
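For reference, CSP is delivered as a response header that whitelists where content may load from. The sketch below uses the directive syntax CSP eventually standardized on (not Mozilla's original X-Content-Security-Policy draft syntax), and the CDN hostname is purely illustrative:

```python
# Sketch: emit a CSP response header that only allows scripts and other
# resources from the page's own origin -- which neuters an injected
# <script> from an XSS payload, though it doesn't fix the injection itself.
def csp_header(default_src="'self'", script_src=None):
    """Build a (header-name, header-value) pair for a simple CSP."""
    directives = [f"default-src {default_src}"]
    if script_src:
        directives.append(f"script-src {script_src}")
    return ("Content-Security-Policy", "; ".join(directives))

name, value = csp_header(script_src="'self' https://cdn.example.com")
# value: "default-src 'self'; script-src 'self' https://cdn.example.com"
```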

Interestingly, I just watched a talk at Black Hat Abu Dhabi about a particular realm of failure in Web Session Management:  Robert “RSnake” Hansen and Josh Sokol’s “HTTPS Can Byte Me”.  Their point is that the HTTP version of a site actually has quite a bit of control over the credentials presented to the HTTPS version of that site — and that this control, while not overwhelming, is a lot more powerful, and troubling, than expected.  The two attacks I liked best:

1) If you have a site with a wildcard certificate, and that site has an XSS attack reachable irrespective of the Host header, then an attacker with MITM capabilities can read content from anything valid under the wildcard.  Put another way, if you have supersecure.mozilla.org and *.mozilla.org certs, and the *.mozilla.org site gets hit, supersecure.mozilla.org is somewhat exposed.  (I’m using the example of mozilla.org because apparently this actually affected them.)
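To make the wildcard exposure concrete, here is a minimal sketch of the hostname-matching rule browsers apply to wildcard certificates (simplified from RFC 2818; the hostnames are the ones from the example above). Any single-label host under the wildcard is vouched for by the same certificate, which is why an XSS on one covered host can be leveraged against another:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified RFC 2818-style matching: '*' covers exactly one
    leftmost DNS label (no partial-label or multi-label wildcards)."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

# The *.mozilla.org cert vouches for the sensitive host too, so to a
# MITM attacker who controls the Host header, an XSS anywhere under the
# wildcard is an XSS "as" supersecure.mozilla.org.
assert wildcard_matches("*.mozilla.org", "supersecure.mozilla.org")
assert wildcard_matches("*.mozilla.org", "anything.mozilla.org")
assert not wildcard_matches("*.mozilla.org", "a.b.mozilla.org")  # two labels deep
```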

2) Since HTTP sites can write cookies that will be reflected to HTTPS endpoints, and since cookies can be tied to certain paths, and since servers will puke if given overly long cookies, an HTTP attacker can “turn off” portions of the HTTPS namespace by throwing in enormous cookies.  Cute.
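A rough sketch of the cookie-stuffing attack, with illustrative numbers (the 8190-byte limit is stock Apache's LimitRequestFieldSize default; the cookie names, path, and sizes are made up): the HTTP attacker plants a few large cookies scoped to a path, and the browser then sends a Cookie header on HTTPS requests to that path that is big enough for the server to reject the request outright.

```python
# Hypothetical sketch: an attacker on the HTTP side plants huge cookies
# scoped to /account; the browser reflects them to HTTPS requests for
# /account, whose Cookie header now blows past a typical server limit.
SERVER_HEADER_LIMIT = 8190  # bytes; a common default (e.g. stock Apache)

def plant_bomb_cookies(path: str, count: int = 3, size: int = 4000):
    """Return the Set-Cookie lines the HTTP attacker would emit."""
    return [f"bomb{i}={'x' * size}; Path={path}" for i in range(count)]

def cookie_header_size(set_cookies) -> int:
    """Size of the Cookie header the browser sends back for that path."""
    pairs = [sc.split(";")[0] for sc in set_cookies]
    return len("Cookie: " + "; ".join(pairs))

bombs = plant_bomb_cookies("/account")
# Requests to /account now exceed the header limit and get rejected,
# effectively disabling that slice of the HTTPS namespace.
assert cookie_header_size(bombs) > SERVER_HEADER_LIMIT
```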

Ultimately, these findings have increased my belief that we need the ability to mark sites as SSL-only, so they simply don’t have an HTTP endpoint to corrupt.  The melange of technologies, from HTTPS Everywhere, to Strict-Transport-Security, to the as-yet unspecified DNSSEC Strict Transport markings, become ever more important.
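Strict-Transport-Security is the most concrete of those options today; a minimal sketch of emitting and parsing the header (the one-year max-age is an illustrative value):

```python
# Sketch of the Strict-Transport-Security header, which tells the
# browser to refuse plain-HTTP access to the site -- so there is no
# HTTP endpoint left to corrupt.
def hsts_header(max_age: int = 31536000, include_subdomains: bool = True) -> str:
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return f"Strict-Transport-Security: {value}"

def parse_hsts(header_value: str) -> dict:
    """Parse the header value back into a policy dict."""
    policy = {"includeSubDomains": False}
    for d in (p.strip() for p in header_value.split(";")):
        if d.startswith("max-age="):
            policy["max_age"] = int(d[len("max-age="):])
        elif d.lower() == "includesubdomains":
            policy["includeSubDomains"] = True
    return policy

hdr = hsts_header()
# hdr: "Strict-Transport-Security: max-age=31536000; includeSubDomains"
policy = parse_hsts(hdr.split(": ", 1)[1])
assert policy == {"includeSubDomains": True, "max_age": 31536000}
```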

Categories: Security
  1. December 13, 2010 at 9:59 pm

    Your points are well-taken. FWIW, Mozilla has proposed CSP for standardization at W3C, and I really hope we’ll get around to proposing a charter around that to the membership in the not too distant future. Meanwhile, volunteers to help with that work are more than welcome.

  2. December 13, 2010 at 10:12 pm

    So I’m not a huge fan of CSP (I’m not convinced it lowers the cost of writing secure code) but I’m a huge fan of Mozilla working to get a security technology standardized outside of a single browser. I mean, we’re decently good at point fixes, but it’s good seeing some experience in getting something more structural.

  3. Lennie
    December 16, 2010 at 11:53 am

    I’ve been thinking about SPDY ( http://www.chromium.org/spdy/spdy-whitepaper ) a lot lately.

    Also I don’t think any browser vendor will enable SPDY for HTTP, because there are probably a lot of proxies out there which block CONNECT for port 80 and only allow 443. So when it is ready, it will use TLS.

    While the current spec doesn’t talk about TLS, it is definitely the goal of the creators to use it. I think it is just because that is how specs need to be written.

    So we have a more secure protocol which not only speeds up the loading of webpages over HTTPS but is also faster than plain-text HTTP.

    This is a great way to get web developers excited about adopting HTTPS instead of HTTP, and every time web developers get involved it creates a lot more hype, which is closely followed by much speedier adoption.

    So in theory SPDY could be a great idea. :-)

    What do you think ?

    • December 16, 2010 at 1:32 pm

      I don’t know a lot about SPDY itself, but if you look at XMLHttpRequest, we’ve basically moved the game of building new protocols into JavaScript. HTTP on the wire is not sending prebuilt pages, or even dynamically generated pages, like it used to. Everything is in pieces.

      I actually do expect SPDY over TLS to be the only way SPDY can work, since HTTP proxies are in front of about 20% of the net. I’m a little worried about what site administrators will think about the loss of their proxies, but the bottom line is that if a site doesn’t want to be proxied, it can always force cache misses.

      HTTPS adoption is really dependent on all the domain dependencies getting valid certs, which is actually a tricky thing. DNSSEC should help.

      • Leen
        December 16, 2010 at 4:22 pm

        Well, SPDY is ‘just’ multiplexing of HTTP streams over TCP, or SSL over TCP, with some header compression and some other similar tricks.

        And Google has been working on adding extra tricks to TLS to get the connection to start sending data sooner (False Start, etc.)

        SPDY is trying to reduce the problem of TCP slow-start, because HTTP can only handle one request at a time over a TCP connection, and usually the requests/responses are too short to really get going.

        Browsers do open several connections (up to 6 per domain; it used to be just 2 in IE6 and IE7), but that only helps with the parallel-requests problem and obviously adds extra overhead.
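A back-of-the-envelope sketch of why multiplexing helps here, under the simplifying assumption that every fetch costs one round trip and each connection carries one request at a time (the RTT value and resource count are illustrative):

```python
import math

RTT = 0.1          # illustrative round-trip time, in seconds
N_RESOURCES = 30   # illustrative number of small resources on a page

def one_at_a_time(n: int, connections: int, rtt: float = RTT) -> float:
    """HTTP-style: one request in flight per connection, so the page
    pays ceil(n / connections) sequential round trips of waiting."""
    return math.ceil(n / connections) * rtt

def multiplexed(n: int, rtt: float = RTT) -> float:
    """SPDY-style: all requests issued at once on a single stream,
    so the waiting collapses to roughly one round trip."""
    return rtt

assert one_at_a_time(N_RESOURCES, 2) == 15 * RTT  # IE6/IE7-era: 2 connections
assert one_at_a_time(N_RESOURCES, 6) == 5 * RTT   # modern: 6 per domain
assert multiplexed(N_RESOURCES) == RTT            # multiplexed stream
```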

        I guess SNI for TLS (and maybe StartSSL, for price) also helps solve the problems with HTTPS adoption.

        But Windows XP still has about 50% of the desktop share, although I don’t know how many of them are IE or Safari users.
