And, we’re back

December 13, 2010 Leave a comment

It has come to my attention that there are forms of writing that do not fit into 140 characters.

I intend to explore them.

This should be interesting.  For the last two years, I’ve been studying DNSSEC — what it is, what it can be, how to make it useful.  Much of what I’ve learned is dotted across the slide decks cited on the right of this blog.

Slides are not papers.  And there’s quite a bit that hasn’t made it into the decks at all.

So I’m going to do something I hope you’ll find useful:  I’m going to blog, in occasionally excruciating detail, the design decisions behind my use of DNSSEC in the Phreebird Suite.  I’m just one engineer, and I won’t always be right.  But hopefully these writings will shed some light on why I switched sides, from someone who thought DNSSEC was too much work for not enough benefit, to somebody who’s decided this is a major shift in the Internet’s ability to deliver secure, scalable, affordable systems.

It is not enough to deliver security.  We must, as a community, learn how to deliver security cheaper.  Like it or not, we compete with insecure, and insecure is winning.


Well, you all said you wanted me to work on something that had nothing to do with DNS ;)  Muahhaa.

Categories: Security

Defense and Offense For HTML5

November 1, 2010 5 comments

(Hehe.  This is a fun post.  Hopefully it annoys the W3C and Security communities equally.)

Just to go on the record: I’m just not all that concerned with new security threats from HTML5. Frankly, minus WebGL, HTML5 in 2010 gives us roughly the functionality that Flash gave us in 2004, but with oddly less polish.  (I’m not kidding:  They’re still trying to figure out how to correctly implement full screen video.)

From a security perspective, that means whatever is scary about HTML5 has already been in place, in 99% of browsers, for half a decade.  Not that that means we’ve attacked, or fixed it all — we in the open security community aren’t exactly known for our speed in finding issues.  But I can’t say I’ve seen much in HTML5 that’s made me nervous, at least relative to problems that already exist in HTML4 and Flash (to say nothing of Java).

What does however bother me is the lack of security technology in HTML5.  The performance guys got goodies.  The artists got goodies.  Even the database guys got goodies — they’re getting SQLite in every browser!  Meanwhile, Cross Site Scripting and Cross Site Request Forgery attacks remain endemic to the web, and fixing them remains embarrassingly expensive.
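A concrete sense of “embarrassingly expensive”:  the standard CSRF defense today is a per-session secret token that every state-changing form and handler has to carry and verify.  Here’s a minimal sketch of that status quo in Python — generic plumbing, not any HTML5 mechanism and not my own proposal:

# The status-quo CSRF defense: a secret token derived from the session,
# embedded in every form, verified on every state-changing request.  The
# cost is that every form and every handler in the app needs this plumbing.
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)   # per-deployment secret (illustrative)

def csrf_token(session_id):
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id, submitted_token):
    return hmac.compare_digest(csrf_token(session_id), submitted_token)

# Every form render:  <input type="hidden" name="csrf" value="...csrf_token(sid)...">
# Every POST handler: if not check_csrf(sid, form["csrf"]): reject the request

Multiply that by every form, every template, and every legacy endpoint, and the expense becomes obvious.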

The real problem with HTML5 is not what risks it adds.  It’s what risks it doesn’t even attempt to remove.  I don’t think we can even blame W3C for this — who, in the security community, has been willing to spend the years necessary to push a spec through?  Thus far, not enough of us.

In the long run, if we’re going to actually fix things, this is something we’re going to have to do more of.  I of course have my own thoughts on what to do about these endemic attacks — I think we need better session management — but actual participation is probably more important than a random slide deck and some code.

(Slight edit:  Yes, the browser manufacturers have been playing around with some technologies to address these issues — Microsoft, with their Anti-XSS Filter, and Mozilla, with Content Security Policies.  Is either technology on the larger standardization path?  Not as far as I can tell.)

Interestingly, I just watched a talk at Black Hat Abu Dhabi about a particular realm of failure in Web Session Management:  Robert “RSnake” Hansen and Josh Sokol’s “HTTPS Can Byte Me”.  Their point is that the HTTP version of a site actually has quite a bit of control over the credentials presented to the HTTPS version of a site — and that this control, while not overwhelming, is a lot more powerful, and troubling, than expected.  The two attacks I liked best:

1) If you have a site with a wildcard certificate, and that site has an XSS attack reachable irrespective of Host header, then an attacker w/ MITM capabilities can read content from anything valid under the wildcard.  Put another way, if you have certs for both the bare domain and *.domain, and any site under the wildcard gets hit, the bare domain is somewhat exposed.  (The example given in the talk apparently affected a real site.)

2) Since HTTP sites can write cookies that will be reflected to HTTPS endpoints, and since cookies can be tied to certain paths, and since servers will puke if given overly long cookies, an HTTP attacker can “turn off” portions of the HTTPS namespace by throwing in enormous cookies.  Cute.
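To make that second attack concrete, here’s a rough Python sketch of the kind of Set-Cookie headers an HTTP man-in-the-middle could inject.  The path, the sizes, and the cookie names are illustrative guesses, not details from the talk:

# Rough sketch of the "turn off an HTTPS path with oversized cookies" idea.
# Real servers differ in how large a Cookie header they'll accept (often ~8KB);
# the sizes and the /account/ path below are purely illustrative.

def oversized_cookie_headers(path, total_bytes=16384, chunk=4000):
    """Build Set-Cookie headers that, once replayed by the browser to the
    HTTPS site, push its Cookie request header past the server's limit."""
    headers, i = [], 0
    while total_bytes > 0:
        size = min(chunk, total_bytes)
        headers.append("Set-Cookie: junk%d=%s; Path=%s" % (i, "A" * size, path))
        total_bytes -= size
        i += 1
    return headers

# An HTTP man-in-the-middle attaches these to any plain-HTTP response for the
# victim domain; the browser then sends the bloated Cookie header along with
# requests to the matching HTTPS path, which the server rejects with an error.
for h in oversized_cookie_headers("/account/"):
    print(h[:60] + "...")

The effect is a denial of service scoped to whatever HTTPS paths the attacker picks, lasting until the cookies expire or get cleared.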

Ultimately, these findings have increased my belief that we need the ability to mark sites as SSL-only, so they simply don’t have an HTTP endpoint to corrupt.  The melange of technologies, from HTTPS Everywhere, to Strict-Transport-Security, to the as-yet unspecified DNSSEC Strict Transport markings, become ever more important.
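For reference, Strict-Transport-Security is just a response header the HTTPS site sends.  A minimal sketch using Python’s standard-library HTTP server — the port and max-age are arbitrary, and in real life the header only counts when delivered over HTTPS:

# Minimal sketch of emitting the Strict-Transport-Security header.
# In production this header must arrive over HTTPS to take effect;
# the plain HTTPServer and port here are just for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # For the next year, the browser should refuse to talk to this host
        # (and its subdomains) over plain HTTP at all.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"HTTPS only, please.\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()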

Categories: Security

Regarding DLL Hijacking

August 27, 2010 Leave a comment

Been recovering some old writings that never got posted.  Back in August, we were talking a lot on Full Disclosure about DLL hijacking.  The question of what represents a bug is actually more complex than people realize, so I weighed in.  I’m responding to Matt, who’s just posted another attack, this one an AutoRun variant.

Sure, but you have the same problem most of the other DLL hijacking exploits have:

There’s no security boundary that says that what appears to be a .ppt actually is one.  The attacker controls both the icon and the filename.  That’s the case from a network share, that’s the case from a USB mount, that’s just the way it is.

Here’s the larger picture:

These are unusually interesting findings.  On the one hand, you have arbitrary code execution, generally the gold standard for a vulnerability.  On the other hand, operating systems *by definition* execute arbitrary code.  The question is whether they’re supposed to execute code in this particular context.

The answer is not actually obvious.

The specific boundary at play seems to be the document boundary.  There are very popular file formats that we’d like to be able to read and view, without running any embedded code inside — powerpoint files, for instance.  In the proposed scenario, the user has double clicked a file from a remote share.

Calc pops up.  Clearly a vuln, right?

Here’s the problem:  How could this boundary ever be maintained?  The user has no actual way of knowing that the file he’s clicking on is, in fact, a PowerPoint document in the first place.  It could just as easily be a .exe, faking its icon.  And every major operating system has been hiding file extensions for years.

Worse, even if extensions weren’t hidden, it’s not like applications have any sort of “safety contract” when executed from the desktop.  Some formats have very clear mandates, inherited from being trafficked via email.  Others are outright scripting languages and are fundamentally designed to execute arbitrary code.

The web has developed a very clear security model:  If you can parse it zero click from a web site, it’s required to stay within the browser sandbox.  The desktop has no such model — some formats are ‘safe’, others aren’t.

There was some talk about whether PSD was “vulnerable” to this flaw.  The complexity comes from the fact that, for all we know, opening a PSD file executes arbitrary scripts by design.  Why shouldn’t it?  Not everything is a web browser.

So, it’s not that this is a weak bug or a massive bug.  It’s a characteristic that has managed to make the otherwise unambiguous proof of concept — popping calc — ambiguously problematic.  That’s actually impressive, if a bit meta.

We’ll probably see some unambiguous attacks pop up, but they haven’t yet.

They still haven’t, though I don’t expect this state to last forever.

Categories: Security


May 21, 2010 2 comments

Well, I didn’t get to make it to SIGGRAPH this year — but, as always, Ke-Sen Huang’s SIGGRAPH Paper Archive has us covered.  Here are some papers that caught my eye:

The Frankencamera: An Experimental Platform for Computational Photography

Is there any realm of technology that moves faster than digital imaging?  Consumer cameras are on a six month product cycle, and the level of imaging we’re getting out of professional gear nowadays absolutely (and finally) blows away what we were able to do with film.  And yet, there is so much more possible.  Much of the intelligence inside consumer and professional cameras is locked away.  While there are some efforts at exposing scriptability (see CHDK for Canon consumer/prosumer cams), the really low level stuff remains just out of reach.

That’s changing.  A group out of Stanford is creating the Frankencamera — a completely hackable, to the microsecond scale, base for what’s being referred to as Computational Photography.  Put simply, CCDs are not film.  They don’t require a shutter to cycle, their pixels do not need to be read simultaneously, and since images are actually fundamentally quite redundant (thus their compressibility), a lot can be done by merging the output of several frames and running extensive computations on them, etc.  Some of the better things out of the computational photography realm:

Getting these mechanisms out of the lab, and into your hands, is going to require a better platform.  Well, it’s coming :)
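As a toy illustration of the “merge several frames” point — nothing to do with the Frankencamera’s actual pipeline — here’s the simplest computational photography trick there is, stacking a burst of noisy exposures:

# Toy illustration of computational photography's simplest trick:
# averaging a burst of aligned frames to beat down sensor noise.
# The frames here are synthetic; a real pipeline would also align them.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "scene": a smooth gradient, 100x100 pixels.
scene = np.linspace(0, 1, 100 * 100).reshape(100, 100)

# A burst of 16 noisy exposures of the same scene.
burst = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(16)]

single = burst[0]
stacked = np.mean(burst, axis=0)   # noise drops roughly as 1/sqrt(N)

print("noise (one frame):  %.3f" % np.std(single - scene))
print("noise (16 stacked): %.3f" % np.std(stacked - scene))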

Parametric Reshaping of Human Bodies in Images

Electrical Engineering has Alternating Current (AC) and Direct Current (DC).  Computer Security has Code Review (human-driven analysis of software, seeking faults) and Fuzzing (machine-driven randomized bashing of software, also seeking faults).  And Computer Graphics has Polygons (shapes with textures and lighting) and Images (flat collections of pixels, collected from real world examples).  To various degrees of fidelity, anything can be built with either approach, but the best things often happen at the intersections of each tech tree.

In graphics, a remarkable amount of interesting work is happening when real world footage of humans is merged with an underlying polygonal awareness of what’s happening in three dimensional space.  Specifically, we’re getting the ability to accurately and trivially alter images of human bodies.  In the video above, and in MovieReshape: Tracking and Reshaping of Humans in Videos from SIGGRAPH Asia 2010, complex body characteristics such as height and musculature are being programmatically altered to a level of fidelity at which the human visual system sees everything as normal.

That is actually somewhat surprising.  After all, if there’s anything we’re supposed to be able to see small changes in, it’s people.  I guess in the era of Benjamin Button (or, heck, normal people photoshopping their social networking photos) nothing should be surprising, but a slider for “taller” wasn’t expected.

Unstructured Video-Based Rendering: Interactive Exploration of Casually Captured Videos

In The Beginning, we thought it was all about capturing the data.  So we did so, to the tune of petabytes.

Then we realized it was actually about exploring the data — condensing millions of data points down to something actionable.

Am I talking about what you’re thinking about?  Yes.  Because this precise pattern has shown up everywhere, as search, security, business, and science itself find themselves absolutely flooded with data, but without nearly as much knowledge of what to do with it all.  We’re doing OK, but there’s room for improvement, especially for some of the richer data types — like video.

In this paper, the authors extend the sort of image correlation research we’ve seen in Photosynth to video streams, allowing multiple streams to be cross-referenced in time and space. It’s well known that, after the Oklahoma City bombing, federal agents combed through thousands of CCTV tapes, using the explosion as a sync pulse and tracking the one van that had the attackers.  One wonders the degree to which that will become automatable, in the presence of this sort of code.
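A crude sketch of the “sync pulse” idea (emphatically not the paper’s method):  cross-correlate the per-frame brightness of two clips, and a shared bright event like an explosion lines them up in time.

# Crude sketch: sync two video streams on a shared bright event by
# cross-correlating their per-frame mean brightness.  Real systems also
# register the streams spatially; this is one-dimensional and synthetic.
import numpy as np

def offset_frames(brightness_a, brightness_b):
    """How many frames later stream B sees the same event as stream A."""
    a = brightness_a - np.mean(brightness_a)
    b = brightness_b - np.mean(brightness_b)
    corr = np.correlate(a, b, mode="full")
    return (len(b) - 1) - np.argmax(corr)

# Synthetic example: the same flash hits frame 40 of A and frame 65 of B.
a = np.zeros(200); a[40] = 1.0
b = np.zeros(200); b[65] = 1.0
print(offset_frames(a, b))   # -> 25 frames: B is 25 frames behind A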

Note:  There’s actually a demo, and it’s 4GB of data!

Second note:  Another fascinating piece of image correlation, relevant to both unstructured imagery and the unification of imagery and polygons, is here:  Building Rome On A Cloudless Day.

Dynamic Video Narratives

In terms of “user interfaces that would make my life easier”, I must say, something that turns video seeking from a “sequence of tiny freeze frames” (at best) or “just a flat bar I can click on” (at worst) into “a jigsaw puzzle of useful images” would be mighty nice.

(See also:  Video Tapestries)

Structure Based ASCII Art

You know, some people ask me why I pay attention to pretty pictures, when I should be out trying to save the world or something.

Well, let’s be honest.  A given piece of code may or may not be secure.  But the world’s most optimized algorithm for turning Anime into ASCII is without question totally awesome.
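For contrast, here’s the classic tone-based approach — map each cell’s brightness to a character — which is exactly the baseline the structure-based paper leaves in the dust.  (Pillow is assumed for image loading, and “frame.png” is a placeholder filename.)

# The naive, tone-based ASCII art baseline: average brightness per cell,
# mapped onto a character ramp.  The structure-based SIGGRAPH method matches
# line structure instead of tone and looks vastly better; this is just the
# thing it improves upon.
from PIL import Image   # Pillow, assumed available

RAMP = "@%#*+=-:. "      # dark -> light

def ascii_art(path, cols=80):
    img = Image.open(path).convert("L")                # grayscale
    w, h = img.size
    cell = max(1, w // cols)
    img = img.resize((cols, max(1, h // (cell * 2))))  # characters are tall
    px = img.load()
    lines = []
    for y in range(img.size[1]):
        row = ""
        for x in range(img.size[0]):
            # brightness 0..255 -> index into the ramp
            row += RAMP[px[x, y] * (len(RAMP) - 1) // 255]
        lines.append(row)
    return "\n".join(lines)

if __name__ == "__main__":
    print(ascii_art("frame.png"))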

Categories: Imagery

Predictions for 2010

December 19, 2009 Leave a comment

In December of 2009, Bill Brenner from CSO Magazine asked me what to expect from 2010.  I’d actually never done one of these “predict the future” writeups before, but I took a shot at it.  Bill ended up posting the more CxO-oriented of these.  Here’s (roughly) the full set:

1. Economics will cause a few members of the “Old and Hoary Prediction Club” to finally come true.

One of the defining laws of security in general can be thought of as:  “What could possibly go wrong is much more than what actually does.”  Most bad things aren’t absent because they’re prevented.  They’re absent because “the bad guys” simply do not choose to do them.  But this is truly the first major economic recession of the Information age, and whatever the numbers say, a *lot* of people are struggling.  That’s motive.  People who struggle get creative, by which I mean “start doing creative and profitable things they heard about”.  Not everything that’s been predicted in previous years will come true, but at the end of 2010, look back to predictions for 2007, 2008, and 2009.  Some of the wrong ones will have happened.  One in particular is number two on this list:

2. Cyber extortion will finally enter the public consciousness.

There’s no good data on — wait, this is security.  There’s not much in the way of good data on *anything*.  But, facing a credible threat to a downtime sensitive, computer driven infrastructure, extortion demands do in fact get paid.  Sometimes the system is large, like a public utility or a manufacturing facility.  Sometimes it’s not exactly on the side of angels, like an online gaming establishment.  And sometimes it’s just some random mom and pop, or even citizen, being told to spend $50 if they ever want to see their documents again.  There’s no good way of knowing how big a problem this has been, but this may be the year that extortion, like credit card fraud and even more like identity theft, becomes part of the national conversation.  Expect stock filings to start having to disclose unexpected expenses, some truly ill-advised marketing campaigns by security vendors, backlash (not entirely undeserved) that the threat is being wildly overblown, and continuing aggression towards the channels by which the extortionists get paid.

3. Prosecution for cybercrime will begin in earnest, starting with the sloppy rich.

Albert Gonzalez, one of the hackers behind the Heartland and 7-Eleven attacks, reportedly spent over $75,000 on a birthday party.  The worst of the worst are known to have far lower profiles, but what that party should tell you is that there’s a lot of low hanging fruit for law enforcement to scoop up, with some serious ill-gotten gains to recover.

As a corollary to this, the international jurisdiction problem that has stymied prosecutions in the past will be dealt with — possibly by agreement, maybe by treaty, and maybe even (if you will forgive me speaking about things I truly know little about) by the application of anti-terrorist compacts to cybercriminal activity.  Put simply, there aren’t enough terrorists, and the ones there are, are political hot potatoes of the first order.  The cybercriminals will have far less baggage.

4. Data sharing will struggle, but will actually begin — driven by compliance requirements and the push for “public/private partnership”

One of the great challenges of security is operationalization:  Sure, there are small cadres of attackers and defenders who know how this field works, but spreading them thin across the industry as consultants doesn’t scale.  To make a real difference, the knowledge of a few must be pushed into process for many.  Existing efforts along these lines — involving compliance regimes — have actually achieved more penetration than they’re given credit for.  People are actually doing what the rules say.  But the rules, without naming names, can leave something to be desired.  In the face of deep compromises of fully compliant systems, the standards bodies will lay the blame on insufficient data sharing between victims.  Right or wrong (and, given the terrible state of data in security, more the former than the latter), this will lead to American legislation centered on funding a law enforcement clearing house for, and a yearly report on, attacks seen against American cyber assets.  Compliance standards, by and large, will compel participation in this regime.  (There’s a small chance this effort will be fast-tracked by end of 2010, but only in the face of a front-page-news scale attack — if I were to predict an example, electronic interference with military-related logistics within a civilian supplier.  Otherwise, it will be a program that is well underway, but not operational by end of year.)  Europe will follow.

5. Ineffective security technologies will finally get called out as such, but not without cost

Many cyber defense technologies do not work.  Specifically, given a large sample of environments with the defense, and a large sample without, differences in infection rates between the former and the latter will not in fact be statistically significant.  Some that do work, only work through the multicultural effect:  The defense simply doesn’t have enough market share to spawn evolution in attackers.  Figuring out what doesn’t work, what won’t work in the long term, and what’s a genuine defensible security boundary will become a major driver for the next generation of compliance standards.  Given the money at stake, expect this process to be brutal and politicized.
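“Not statistically significant” has a concrete meaning here.  A back-of-the-envelope two-proportion z-test, with infection counts invented purely for illustration, looks like this:

# Back-of-the-envelope check: do environments with a given defense actually
# get infected less often than environments without it?  Two-proportion
# z-test; the counts below are invented for illustration only.
from math import sqrt, erfc

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for H0: both groups share the same infection rate."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled rate
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return erfc(abs(z) / sqrt(2))

# 1,000 hosts with the defense, 1,000 without (hypothetical numbers):
print(two_proportion_p(48, 1000, 55, 1000))   # ~0.48 — no detectable difference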

6. The Cloud will get worse before it gets better.  But it will get better.

The Cloud is *going to win*.  I don’t know how else to say this:  It’s faster.  It’s better.  It’s cheaper.  But there are security issues, and they’re not simply the sort of problems that can be worked out by taking a CIO out to golf and promising everything’s going to be OK.  Genuine, technical security faults in cloud technology will garner a huge amount of attention.  It may appear to some that all is lost.  But the faults will be addressed, because existing investments are so very high.  And anyway, it’s not like the status quo is anything to be proud of.

7. DNSSEC will continue its inexorable progress towards replacing X.509.

According to Verizon Business, 61% of compromises can be traced back not to vulnerabilities, but to failures in authentication.  Technically impressive attacks are fun and all, but no passwords, bad passwords, default passwords, and shared passwords are the bread and butter of real world exploitation.  (That, and SQL injection.)  PKI was supposed to eliminate this password problem, ages ago, but it didn’t exactly work out.  We built PKI on X.509 — but X.509-based PKI was obviously a scalability non-starter in 2002.  That it’s still the best we have going into 2010 is an embarrassment.  2010 will see major DNS TLDs — and yes, I predict .com will get something up early, alongside the July 2010 signing of the root — spin up DNSSEC operations.  And then…no, the Internet will not become safe overnight, and there will be some snarking about “well, what now?”.

By end of year, we’ll see what:  A stream of security products, previously unable to scale due to their dependency on X.509, will transition their trust systems over to DNSSEC.  And then the race in 2011 will be for much larger suppliers to adapt to the architectural shift their scrappier competitors have shown to be viable.

That being said, there’s at least a 25% chance politics will scuttle all of this, and we’ll be stuck with our auth-driven 61% (which I don’t see getting better anytime soon, compliance or not).

8. “Personalized Prices” — price discrimination via identity-discovery technology already deployed for targeted advertising — will become a fundamental battleground in the privacy wars.

Let’s be honest:  The Web knows who you are.  For over a decade, identity discovery technologies have been deployed on major websites simply to better target advertising.  Beyond banner ads, major e-commerce sites like Amazon have discovered that the right product shown to the right user at the right time will significantly improve sales.  But in 2010, expect to see discovery of something much deeper:  A major e-commerce site will be discovered to have significantly altered prices based on prior, disturbingly detailed knowledge of the particular user browsing their site.  Put simply, different users have different sensitivity to prices, but traditional retail has always had too high friction to exploit this efficiently — a price tag is a price tag, for everybody.  But online, everybody can technically receive a “personalized price”, tuned precisely to the likelihood that they’ll buy.
There have been some rumblings in this direction already — witness the recent claims (apparently inaccurate) regarding a retailer raising their prices in the presence of Bing’s Cashback scheme — but nothing compared to what will be unambiguously proven in 2010.  Small guys will use IP Geolocation and ZIP Code databases to apply a “rich buyer” tax (and perhaps a “high fraud rate” tax) to incoming purchases, but at least one large organization will be found to have thrown serious quantitative analysis against databases of buyer names, addresses, credit scores, average bank balances, previous shopping history (long term and recent), and presence or absence of comparison shopping *for that particular product*.  Even social network data may get involved — if one of your friends just bought a Nikon camera, yours may become more expensive since it’s statistically likely you’ve received a word of mouth vouch.  If your family is having a wedding (an inelastic demand if there ever was one), your plane tickets could become much more expensive.
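To be concrete about the mechanism — every signal, weight, and cap below is invented, and no real retailer’s model is being described — the “quant” side can be as banal as a scoring function:

# Toy sketch of a "personalized price": a base price nudged by signals the
# site already holds about the buyer.  All signals and weights are made up.
def personalized_price(base_price, buyer):
    multiplier = 1.0
    if buyer.get("rich_zip"):           multiplier += 0.05
    if buyer.get("no_comparison_shop"): multiplier += 0.08
    if buyer.get("friend_bought_item"): multiplier += 0.03  # word-of-mouth vouch
    if buyer.get("inelastic_demand"):   multiplier += 0.15  # e.g. wedding travel
    if buyer.get("via_tor"):            multiplier -= 0.02  # unknown buyer, hedge down
    return round(base_price * min(multiplier, 1.25), 2)

print(personalized_price(400.00, {"rich_zip": True, "inelastic_demand": True}))
# -> 480.0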

The discovery of Personalized Pricing in 2010 will be fascinating to watch.  The geeks will immediately start pushing privacy enhancing technologies like Tor, which will suddenly become the cheapest way to shop online — at least, when they successfully block identification, which they won’t always or even often.  The marketers, having invested heavily in this technology (and having made ridiculous amounts of money with it), will push an enormous PR campaign, arguing that “Personalized Prices” (get used to it, it’s going to be given a cutesy name like this) mean better deals for consumers.  (Indeed, businesses have long since operated under this reality, though some troubling variants re: internal email hacking may be discovered.  However, businesses have an entire negotiation class to deal with this reality.  Consumers do not.)

Where it will all go wrong is in two places.  First, the quants will ultimately have charged protected classes more in some non-zero number of instances.  Maybe it’s females, maybe it’s African Americans, maybe it’s just residents of a ZIP code that has been historically “redlined”.  The political hay to be made from this, especially going into the American November 2010 elections, will be extensive.  Second, it will be realized that e-commerce sites have much more interesting data to steal than even credit card numbers.  Significant personal information will show up on Wikileaks, and the question will become:

How the heck did all this data get collected in the first place?

Some will have been provided in previous transactions, though not necessarily with that particular company.  But a disturbing amount will have been pulled from deep packet inspection engines at ISPs — and what won’t have come from them, will be sourced to toolbars pushed into people’s browsers.  This will change the nature of the Net Neutrality debate entirely…

Categories: Security

A Marathon, Not A Sprint

April 2, 2009 2 comments

Our scanners improve!  A new nmap Beta 7 is out (Windows, OSX, Source), with slightly more accurate scan logic, courtesy of research by Renaud Deraison of Nessus.  This tends to find a few extra boxes per network, so it’s definitely worth grabbing.  It doesn’t take too much work to spin up another scan, and heh, it’s an opportunity to play with ndiff, the nmap diffing engine.

McAfee also has a really nice Windows based scanner out the door — check it out.

Of course, you may be thinking:  The world didn’t come to an end.  Clearly, this whole thing was just a Y2K hypefest.  I’m sorry the bad guys aren’t quite the eschatologists some people would like them to be, but somebody’s been investing extraordinary amounts of resources making a worm very difficult to kill.  It’s not like there was a contingent of rogue coders, sitting around figuring out where they could put two-character date fields after January 1st, 2001.  There’s a bad guy out there, and while we shouldn’t panic, we shouldn’t quite ignore the situation either.  Botnets — even much smaller botnets than would have otherwise have been created, thanks to rapid patching and automatic updates by Microsoft — are big business.  As my friend Jason Larsen says, it’s not about ownage, it’s about continued ownage.

What to do?  That’s what makes these scanners nice.  They represent clean, actionable, operationally viable guidance for IT staff that aren’t exactly bursting at the seams with free time.  I continue to be appreciative of all the developers who worked this last weekend to push Tillmann and Felix’s code into their products.  It goes a long way towards moving us closer to less fear, more certainty, and no doubt.

Categories: Security

More Conficker Toolage

March 30, 2009 2 comments

There be news!

First off, the Honeynet Project has released Know Your Enemy:  Containing Conficker.  Tillmann and Felix have done a tremendous job of analyzing Conficker, and their paper makes for a great read.  Plus, you get to find out exactly what Conficker is doing differently on the anonymous network surface.

Secondly, nmap has released 4.85Beta5 (Windows, OSX, Source), with the Conficker detection logic.  Cool, no more hacking up Beta4.

Please, be careful to actually use --script-args=safe=1, like so:

nmap -PN -d -p 445 --script=smb-check-vulns --script-args=safe=1

Third, I’ve rebuilt the py2exe versions of Tillmann and Felix’s scs code, now with Core’s impacket library safely embedded.  See here.  Source code, of course, is at their site.

Finally, commercial scanning products continue to make steady progress protecting their customers, with nCircle and Qualys coming online.

There’s something else cool, but I’m pretty exhausted.  More in the morn.

Categories: Security
