Archive for the ‘Security’ Category

Bloody Cert Certified

April 12, 2014 7 comments

Oh, Information Disclosure vulnerabilities.  Truly the Rodney Dangerfield of vulns: people never quite know what their impact is going to be.  With Memory Corruption, we’ve basically accepted that a sufficiently skilled attacker always has enough degrees of freedom to at least unreliably achieve arbitrary code execution (and from there, by the way, to leak arbitrary information like private keys).  With Information Disclosure, even Neel Mehta, the straight up finder of Heartbleed, had his doubts that private keys could actually be extracted.

So, can Heartbleed leak private keys in the real world or not?  The best way to resolve this discussion is PoC||GTFO (Proof of Concept, or you can figure out the rest).  CloudFlare threw up a challenge page to steal their key.  It would appear Fedor Indutny and Ilkka Mattila have successfully done just that.  Now, what’s kind of neat is that because this is a crypto challenge, Fedor can actually prove he pulled this off, in a way everyone else can verify but nobody else can duplicate.

I’m not sure if key theft has already occurred, but I think this fairly conclusively establishes that key theft is coming.  Let’s do a walk-through:

First, we retrieve the certificate from http://www.cloudflarechallenge.com.

$ echo "" | openssl s_client -connect www.cloudflarechallenge.com:443 -showcerts | openssl x509 > cloudflare.pem
depth=4 C = SE, O = AddTrust AB, OU = AddTrust External TTP Network, CN = AddTrust External CA Root
verify error:num=19:self signed certificate in certificate chain
verify return:0
DONE

This gives us the certificate, with lots of metadata about the RSA key.  We just want the key.

$ openssl x509 -pubkey -noout -in cloudflare.pem > cloudflare_pubkey.pem

Now, we take the message posted by Fedor, and Base64 decode it:

$ wget https://gist.githubusercontent.com/indutny/a11c2568533abcf8b9a1/raw/1d35c1670cb74262ee1cc0c1ae46285a91959b3f/1.bash
--2014-04-11 18:14:48-- https://gist.githubusercontent.com/indutny/a11c2568533abcf8b9a1/raw/1d35c1670cb74262ee1cc0c1ae46285a91959b3f/1.bash
Resolving gist.githubusercontent.com... 192.30.252.159
Connecting to gist.githubusercontent.com|192.30.252.159|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: `1.bash'

[ <=> ] 456 --.-K/s in 0s

2014-04-11 18:14:49 (990 KB/s) - `1.bash' saved [456]

$ cat 1.bash
> echo "Proof I have your key. fedor@indutny.com" | openssl sha1 -sign key.pem -sha1 | openssl enc -base64
aKmd7OEXbTvlh6CQNczA+XYVsru4b4t1RMcHCqvjQOW3sxrLFX0laO47fks1G90c
6CtcQwYO06uVxwa9XAr8TAXbmZZj+kGoSNkRzpaNeeqqNkcvVUEv5dV5wy4Q2Mfr
aL6YQSVCfbGGlU0SwBYKdI3cssfPPiClM+i9psi655zDYcgYDdZIvl3/XpGBHeKu
CH9gmt9R9ey448IdZ2uCF+oX+KAogcl9AgpX7GPmJNXZWN13ASCjIFxPdQUguj5p
Z1e8sv6DmenKABaWGmMtQDf/JAzHDCl8CaYlReXQW1wja3I4crD2tDqxq6+Hbajn
SOUFr2XOSpuj5wGxlo20KA==

$ cat 1.bash | grep -v fedor | openssl enc -d -base64 > fedor_signed_proof.bin

So, what do we have?  A message, a public key, and a signature linking that specific message to that specific public key.  At least, we’re supposed to.  Does OpenSSL agree?

$ echo "Proof I have your key. fedor@indutny.com" | openssl dgst -verify cloudflare_pubkey.pem -signature fedor_signed_proof.bin -sha1
Verified OK
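
(If you’d rather not take the openssl binary’s word for it, the check is easy to reproduce elsewhere.  Here’s a rough equivalent using a recent pyca/cryptography, assuming the cloudflare_pubkey.pem and fedor_signed_proof.bin files written above; note that echo appends a trailing newline to the signed message, and that openssl dgst -sign produces a PKCS#1 v1.5 signature over the SHA-1 digest.)

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

# The public key we extracted from CloudFlare's certificate above.
pubkey = load_pem_public_key(open("cloudflare_pubkey.pem", "rb").read())

# echo appends a newline, so the signed message ends in "\n".
message = b"Proof I have your key. fedor@indutny.com\n"
signature = open("fedor_signed_proof.bin", "rb").read()

try:
    pubkey.verify(signature, message, padding.PKCS1v15(), hashes.SHA1())
    print("Verified OK")
except InvalidSignature:
    print("Verification Failure")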

Ah, but what if we tried this for some other RSA public key, like the one from Google?

$ echo "" | openssl s_client -connect www.google.com:443 -showcerts | openssl x509 > google.pem
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
verify error:num=20:unable to get local issuer certificate
verify return:0
DONE

$ openssl x509 -pubkey -noout -in google.pem > google_pubkey.pem

$ echo "Proof I have your key. fedor@indutny.com" | openssl dgst -verify google_pubkey.pem -signature fedor_signed_proof.bin -sha1
Verification Failure

Or what if I changed things around so it looked like it was me who cracked the code?

$ echo "Proof I have your key. dan@whiteops.com" | openssl dgst -verify cloudflare_pubkey.pem -signature fedor_signed_proof.bin -sha1
Verification Failure

Nope, it’s Fedor or bust :)  Note that it’s a common mistake, when testing cryptographic systems, to not test the failure modes.  Why was “Verified OK” important?  Because “Verification Failure” happened when our expectations weren’t met.

There is, of course, one mode I haven’t brought up.  Fedor could have used some other exploit to retrieve the key.  He claims not to have, and I believe him.  But that this is a failure mode is also part of the point — there have been lots of bugs that affect something that ultimately grants access to SSL private keys.  The world didn’t end then and it’s not ending now.

I am satisfied that the burden of proof for Heartbleed leaking  private keys has been met, and I’m sufficiently convinced that “noisy but turnkey solutions” for key extraction will be in the field in the coming weeks (it’s only been ~3 days since Neel told everyone not to panic).

Been getting emails asking for what the appropriate response to Heartbleed is.  My advice is pretty exclusively for system administrators and CISOs, and is something akin to “Patch immediately, particularly the systems exposed to the outside world, and don’t just worry about HTTP.  Find anything moving SSL, particularly your SSL VPNs, prioritizing on open inbound, any TCP port.  Cycle your certs if you have them, you’re going to lose them, you may have already, we don’t know.  But patch, even if there’s self signed certs, this is a generic Information Leakage in all sorts of apps.  If there is no patch and probably won’t ever be, look at putting a TLS proxy in front of the endpoint.  Pretty sure stunnel4 can do this for you.”
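
For that last bit of advice, stunnel4 really is the tool I’d reach for.  Purely to illustrate the shape of the fix, here’s a toy TLS-terminating forwarder in Python; the listening port, backend address, and cert/key filenames are placeholders, and this is a sketch of the architecture, not a production proxy:

import socket
import ssl
import threading

LISTEN_ADDR = ("0.0.0.0", 8443)      # TLS side exposed to clients (placeholder)
BACKEND_ADDR = ("127.0.0.1", 8080)   # the unpatchable plaintext service (placeholder)

def pump(src, dst):
    # Shovel bytes one way until the source closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def handle(tls_client):
    backend = socket.create_connection(BACKEND_ADDR)
    threading.Thread(target=pump, args=(tls_client, backend), daemon=True).start()
    pump(backend, tls_client)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("proxy_cert.pem", "proxy_key.pem")  # placeholder filenames

with socket.create_server(LISTEN_ADDR) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            try:
                conn, _ = tls_listener.accept()   # TLS handshake happens here
            except ssl.SSLError:
                continue                          # a client that failed the handshake
            threading.Thread(target=handle, args=(conn,), daemon=True).start()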

QUICK EDIT:  One final note – the bug was written at the end of 2011, so devices that are older than that and have not been patched at all remain invulnerable (at least to Heartbleed).  The guidance is really to identify systems that are exposing OpenSSL 1.0.1 SSL servers (and eventually clients) w/ heartbeat support, on top of any protocol, and get ‘em patched up.

Yes, I’m trying to get focus off of users and passwords and even certificates and onto stemming the bleeding, particularly against non-obvious endpoints.  For some reason a surprising number of people think SSL is only for HTTP and browsers.  No, it’s 2014.  A lot of protocol implementations have basically evolved to use this one environment middleboxes can’t mess with.

A lot of corporate product implementations, by the way.  This has nothing to do with Open Source, except to the degree that Open Source code is just basic Critical Infrastructure now.

Categories: Security

Be Still My Breaking Heart

April 10, 2014 35 comments

Abstract:  Heartbleed wasn’t fun.  It represents us moving from “attacks could happen” to “attacks have happened”, and that’s not necessarily a good thing.  The larger takeaway actually isn’t “This wouldn’t have happened if we didn’t add Ping”, the takeaway is “We can’t even add Ping, how the heck are we going to fix everything else?”.  The answer is that we need to take Matthew Green’s advice, start getting serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicating genuine resources to supporting that code.  It took three years to find Heartbleed.  We have to move towards a model of No More Accidental Finds.

======================

You know, I’d hoped I’d be able to avoid a long form writeup on Heartbleed.  Such things are not meant to be.  I’m going to leave many of the gory technical details to others, but there’s a few angles that haven’t been addressed and really need to be.  So, let’s talk.  What to make of all this noise?

First off, there’s been a subtle shift in the risk calculus around security vulnerabilities.  Before, we used to say:  “A flaw has been discovered.  Fix it, before it’s too late.”  In the case of Heartbleed, the presumption is that it’s already too late, that all information that could be extracted, has been extracted, and that pretty much everyone needs to execute emergency remediation procedures.

It’s a significant change, to assume the worst has already occurred.

It always seems like a good idea in security to emphasize prudence over accuracy, possible risk over evidence of actual attack.  And frankly this policy has been run by the privacy community for some time now.  Is this a positive shift?  It certainly allows an answer to the question for your average consumer, “What am I supposed to do in response to this Internet ending bug?”  “Well, presume all your passwords leaked and change them!”

I worry, and not merely because “You can’t be too careful” has not at all been an entirely pleasant policy in the real world.  We have lots of bugs in software.  Shall we presume every browser flaw not only needs to be patched, but has already been exploited worldwide, and you should wipe your machine any time one is discovered?  This OpenSSL flaw is pernicious, sure.  We’ve had big flaws before, ones that didn’t just provide read access to remote memory either.  Why the freak out here?

Because we expected better, here, of all places.

There’s been quite a bit of talk, about how we never should have been exposed to Heartbleed at all, because TLS heartbeats aren’t all that important a feature anyway.  Yes, it’s 2014, and the security community is complaining about Ping again.  This is of course pretty rich, given that it seems half of us just spent the last few days pinging the entire Internet to see who’s still exposed to this particular flaw.  We in security sort of have blinders on, in that if the feature isn’t immediately and obviously useful to us, we don’t see the point.

In general, you don’t want to see a protocol designed by the security community.  It won’t do much.  In return (with the notable and very appreciated exception of Dan Bernstein), the security community doesn’t want to design you a protocol.  It’s pretty miserable work.  Thanks to what I’ll charitably describe as “unbound creativity”, the fast, dumb, and unmodifying design of the Internet has given way to a hodgepodge of proxies and routers and “smart” middleboxes that do who knows what.  Protocol design is miserable; nothing is elegant.  Anyone who’s spent a day or two trying to make P2P VoIP work on the modern Internet discovers very quickly why Skype was worth billions.  It worked, no matter what.

Anyway, in an alternate universe TLS heartbeats (with full ping functionality) are a beloved security feature of the protocol as they’re the key to constant time, constant bandwidth tunneling of data over TLS without horrifying application layer hacks.  As is, they’re tremendously useful for keeping sessions alive, a thing I’d expect hackers with even a mild amount of experience with remote shells to appreciate.  The Internet is moving to long lived sessions, as all Gmail users can attest to.  KeepAlives keep long lived things working.  SSH has been supporting protocol layer KeepAlives forever (ServerAliveInterval on the client, ClientAliveInterval on the server).

The takeaway here is not “If only we hadn’t added ping, this wouldn’t have happened.”  The true lesson is, “If only we hadn’t added anything at all, this wouldn’t have happened.”  In other words, if we can’t even securely implement Ping, how could we ever demand “important” changes?  Those changes tend to be much more fiddly, much more complicated, much riskier.  But if we can’t even securely add this line of code (the one-line bounds check the Heartbeat code needed, straight from the patch):

if (1 + 2 + payload + 16 > s->s3->rrec.length)
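
For anyone who hasn’t stared at that patch: here’s a toy model of what goes wrong without the check.  It’s Python rather than C, the “process memory” is pretend, and it has nothing to do with OpenSSL’s actual code; it just shows the shape of the over-read:

# Toy model only; this is not OpenSSL's code, just the shape of the bug.
process_memory = bytearray(
    b"...unrelated session state...HELLO...-----BEGIN RSA PRIVATE KEY-----..."
)

def naive_heartbeat_echo(memory, payload_offset, claimed_length):
    # The missing check: claimed_length comes from the attacker, and nothing
    # compares it to the number of payload bytes that actually arrived.
    return bytes(memory[payload_offset : payload_offset + claimed_length])

# The peer really sent five payload bytes ("HELLO") but claimed it sent 40:
offset = process_memory.find(b"HELLO")
print(naive_heartbeat_echo(process_memory, offset, 40))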

I know Neel Mehta.  I really like Neel Mehta.  It shouldn’t take absolute heroism, one of the smartest guys in our community, and three years for somebody to notice a flaw when there’s a straight up length field in the patch.  And that, I think, is a major and unspoken component of the panic around Heartbleed.  The OpenSSL dev shouldn’t have written this (on New Year’s Eve, at 1AM apparently).  His coauthors and release engineers shouldn’t have let it through.  The distros should have noticed.  Somebody should have been watching the till, at least this one particular till, and it seems nobody was.

Nobody publicly, anyway.

If we’re going to fix the Internet, if we’re going to really change things, we’re going to need the freedom to do a lot more dramatic changes than just Ping over TLS.  We have to be able to manage more; we’re failing at less.

There’s a lot of rigamarole around defense in depth, other languages that OpenSSL could be written in, “provable software”, etc.  Everyone, myself included, has some toy that would have fixed this.  But you know, word from the Wall Street Journal is that there have been all of $841 in donations to the OpenSSL project to address this matter.  We are building the most important technologies for the global economy on shockingly underfunded infrastructure.  We are truly living through Code in the Age of Cholera.

Professor Matthew Green of Johns Hopkins University recently commented that he’s been running around telling the world for some time that OpenSSL is Critical Infrastructure.  He’s right.  He really is.  The conclusion is resisted strongly, because you cannot imagine the regulatory hassles normally involved with traditionally being deemed Critical Infrastructure.  A world where SSL stacks have to be audited and operated against such standards is a world that doesn’t run SSL stacks at all.

And so, finally, we end up with what to learn from Heartbleed.  First, we need a new model of Critical Infrastructure protection, one that dedicates real financial resources to the safety and stability of the code our global economy depends on – without attempting to regulate that code to death.  And second, we need to actually identify that code.

When I said that we expected better of OpenSSL, it’s not merely that there’s some sense that security-driven code should be of higher quality.  (OpenSSL is legendary for being considered a mess, internally.)  It’s that the number of systems that depend on it, and then expose that dependency to the outside world, is considerable.  This is security’s largest contributed dependency, but it’s not necessarily the software ecosystem’s largest dependency.  Many, maybe even more, systems depend on web servers like Apache, nginx, and IIS.  We fear vulnerabilities in libz significantly more than in libbz2, and in libbz2 more than in libxz, because more servers will decompress untrusted gzip than bzip2, and more bzip2 than xz.  Vulnerabilities are not always in obvious places – people underestimate just how exposed things like libxml and libcurl and libjpeg are.  And as HD Moore showed me some time ago, the embedded space is its own universe of pain, with 90’s bugs covering entire countries.

If we accept that a software dependency becomes Critical Infrastructure at some level of economic dependency, the game becomes identifying those dependencies, and delivering direct technical and even financial support.  What are the one million most important lines of code that are reachable by attackers, and least covered by defenders?  (The browsers, for example, are very reachable by attackers but actually defended pretty zealously – FFMPEG public is not FFMPEG in Chrome.)

Note that not all code, even in the same project, is equally exposed.  It’s tempting to say it’s a needle in a haystack.  But I promise you this: if anybody patches Linux/net/ipv4/tcp_input.c (which handles inbound network traffic for Linux), a hundred alerts fire, and many of them go to individuals nobody would call friendly.  One guy, one night, patched OpenSSL.  Not enough defenders noticed, and it took Neel Mehta to do something.

We fix that, or this happens again.  And again.  And again.

No more accidental finds.  The stakes are just too high.

Categories: Security

On Brinksmanship

January 18, 2013 6 comments

Reality is what refuses to go away when you stop believing in it.

The reality – the ground truth – is that Aaron Swartz is dead.

Now what.

Brinksmanship is a terrible game that all too many systems evolve towards.  The suicide of Aaron Swartz is an awful outcome, an unfair outcome, a radically out of proportion outcome.  As in all negotiations to the brink, it represents a scenario in which all parties lose.

Aaron Swartz lost.  He paid with his life.  This is no victory for Carmen Ortiz, or Steve Heymann, or JSTOR, MIT, the United States Government, or society in general.  In brinksmanship, everybody loses.

Suicide is a horrendous act and an even worse threat.  But let us not pretend that a set of charges covering the majority of Aaron’s productive years is not also fundamentally noxious, with ultimately a deeply similar outcome.  Carmen Ortiz (and, presumably, Steve Heymann) are almost certainly telling the truth when they say they had no intention of demanding thirty years of imprisonment from Aaron.  This did not stop them from, in fact, demanding thirty years of imprisonment from Aaron.

Brinksmanship.  It’s just negotiation.  Nothing personal.

Let’s return to ground truth.  MIT was a mostly open network, and the content “stolen” by Aaron was itself mostly open.  You can make whatever legalistic argument you like; the reality is there simply wasn’t much offense taken to Aaron’s actions.  He wasn’t stealing credit card numbers, he wasn’t reading personal or professional emails, he wasn’t extracting design documents or military secrets.  These were academic papers he was ‘liberating’.

What he was, was easy to find.

I have been saying, for some time now, that we have three problems in computer security.  First, we can’t authenticate.  Second, we can’t write secure code.  Third, we can’t bust the bad guys.  What we’ve experienced here, is a failure of the third category.  Computer crime exists.  Somebody caused a huge amount of damage – and made a lot of money – with a Java exploit, and is going to get away with it.  That’s hard to accept.  Some of our rage from this ground truth is sublimated by blaming Oracle.  But some of it turns into pressure on prosecutors, to find somebody, anybody, who can be made an example of.

There are two arguments to be made now.  Perhaps prosecution by example is immoral – people should only be punished for their own crimes.  In that case, these crimes just weren’t offensive enough for the resources proposed (prison isn’t free for society).  Or perhaps prosecution by example is how the system works, don’t be naïve – well then.

Aaron Swartz’s antics were absolutely annoying to somebody at MIT and somebody at JSTOR.  (Apparently someone at PACER as well.)  That’s not good, but that’s not enough.  Nobody who we actually have significant consensus for prosecuting, models himself after Aaron Swartz and thinks “Man, if they go after him, they might go after me”.

The hard truth is that this should have gone away, quietly, ages ago.  Aaron should have received a restraining order to avoid MIT, or perhaps some sort of fine.  Instead, we have a death.  There will be consequences to that – should or should not doesn’t exist here, it is simply a statement of fact.  Reality is what refuses to go away, and this is the path by which brinksmanship is disincentivized.

My take on the situation is that we need a higher class of computer crime prosecution.  We, the computer community in general, must engage at a higher level – both in terms of legislation that captures our mores (and funds actual investigations – those things ain’t free!), and operational support that can provide a critical check on who is or isn’t punished for their deeds.  Aaron’s law is an excellent start, and I support it strongly, but it excludes faux law rather than including reasoned policy.  We can do more.  I will do more.

The status quo is not sustainable, and has cost us a good friend.  It’s so out of control, so desperate to find somebody – anybody! – to take the fall for unpunished computer crime, that it’s almost entirely become about the raw mechanics of being able to locate and arrest the individual instead of about their actual actions.

Aaron Swartz should be alive today.  Carmen Ortiz and Steve Heymann should have been prosecuting somebody else.  They certainly should not have been applying a 60x multiple between the amount of time they wanted, and the degree of threat they were issuing.  The system, in all of its brinksmanship, has failed.  It falls on us, all of us, to fix it.

Categories: Security

Actionable Intelligence: The Mouse That Squeaked

December 14, 2012 3 comments

[Obligatory disclosures -- I've consulted for Microsoft, and had been doing some research on Mouse events myself.]

So one of the more important aspects of security reporting is what I’ve been calling Actionable Intelligence. Specifically, when discussing a bug — and there are many, far more than are ever discovered let alone disclosed — we have to ask:

What can an attacker do today, that he couldn’t do yesterday, for what class attacker, to what class victim?

Spider.IO, a fraud analytics company, recently disclosed that under Internet Explorer attackers can capture mouse movement events from outside an open window. What is the Actionable Intelligence here? It’s moderately tempting to reply: We have a profound new source of modern art.

[Image: a mouse movement trace (credit: Anatoly Zenkov’s IOGraph tool)]

I exaggerate, but not much. The simple truth is that there are simply not many situations where mouse movements are security sensitive. Keyboard events, of course, would be a different story — but mouse? As more than a few people have noted, they’d be more than happy to publish their full movement history for the past few years.

It is interesting to discuss the case of the “virtual keyboard”. There has been a movement (thankfully rare) to force credential input via mouse instead of keyboard, to stymie keyloggers. This presupposes a class of attacker that has access to keyboard events, but not mouse movements or screen content. No such class actually exists; the technique was never protecting much of anything in the first place. It’s just pain-in-the-butt-as-a-feature.  More precisely, it’s another example of Rick Wash’s profoundly interesting turn of phrase, Folk Security. Put simply, there is a belief that if something is hard for a legitimate user, it’s even harder for the hacker. Karmic security is (unfortunately) not a property of the universe.

(What about the attacker with an inline keylogger? Not only does he have physical access, he’s not actually constrained to just emulating a keyboard. He’s on the USB bus, he has many more interesting devices to spoof.)

That’s not to say spider.io has not found a bug. Mouse events should only come from the web frame over which a script has dominion, in much the same way CNN should not be receiving Image Load events from a tab open to Yahoo. But the story of the last decade is that bugs are not actually rare, and that from time to time issues will be found in everything. We don’t need to have an outright panic when a small leak is found. The truth is, every remote code execution vulnerability can also capture full screen mouse motion. Every universal cross site scripting attack (in which CNN can inject code into a frame owned by Yahoo) can do similar, though perhaps only against other browser windows.

I would like to live in a world where this sort of very limited overextension of the web security model warrants a strong reaction. It is in fact nice that we do live in a world where browsers effectively expose the most nuanced and well developed (if by fire) security model in all of software. Where else is the proper scope of mouse events even a comprehensible discussion?

(Note that it’s a meaningless concept to say that mouse events within the frame shouldn’t be capturable. Being able to “hover” on items is a core user interface element, particularly for the highly dynamic UI’s that Canvas and WebGL enable. The depth of damage one would have to inflict on the browser usability model, to ‘secure’ activity in what’s actually the legitimate realm of a page, would be profound. When suggesting defenses, one must consider whether the changes required to make them reparable under actual assault ruin the thing being defended in the first place. We can’t go off destroying villages in order to save them.)

So, in summary: Sure, there’s a bug here with these mouse events. I expect it will be fixed, like tens of thousands of others. But it’s not particularly significant.  What can an attacker do today, that he couldn’t do yesterday?  Not much, to not many.  Spider.io’s up to interesting stuff, but not really this.

Categories: Security

DakaRand 1.0: Revisiting Clock Drift For Entropy Generation

August 15, 2012 21 comments

“The generation of random numbers is too important to be left to chance.”
– Robert R. Coveyou

“One out of 200 RSA keys in the field was badly generated as a result of standard dogma.  There’s a chance this might fail less.”
–Me

[Note:  There are times I write things with CIO's in mind.  This is not one of those times.]

So, I’ve been playing with userspace random number generation, as per Matt Blaze and D.P. Mitchell’s TrueRand from 1996.  (Important:  Matt Blaze has essentially disowned this approach, and seems to be honestly horrified that I’m revisiting it.)  The basic concept is that any system with two clocks has a hardware random number generator, since clocks jitter relative to one another based on physical properties, particularly when one is operating on a slow scale (like, say, a human hitting a keyboard) while another is operating on a fast scale (like a CPU counter cycling at nanosecond speeds).  Different tolerances on clocks mean more opportunities for unmodelable noise to enter the system.  And since the core lie of your computer is that it’s just one computer, as opposed to a small network of independent nodes running on their own time, there should be no shortage of bits to mine.

At least, that’s the theory.
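
To make the two-clocks idea concrete, here’s a toy illustration in Python.  This is emphatically not DakaRand itself (DakaRand is C, has nine distinct generators, and conditions everything before output); it just counts how many loop iterations the CPU manages while the OS’s monotonic clock advances by a fixed interval, and peeks at the low bit of that count:

import time

def jitter_sample(interval_s=0.001):
    # Count loop iterations while the OS monotonic clock advances ~1ms.
    # The count depends on CPU frequency scaling, cache state, scheduling,
    # interrupts... and its low-order bits jitter unpredictably.
    deadline = time.monotonic() + interval_s
    count = 0
    while time.monotonic() < deadline:
        count += 1
    return count

def raw_bits(n=64):
    # n raw, unconditioned bits of clock-drift jitter.  "Raw" is the key
    # word: nothing here debiases, mixes, or stretches them.
    return [jitter_sample() & 1 for _ in range(n)]

print("".join(str(b) for b in raw_bits()))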

As announced at Defcon 20 / Black Hat, here’s DakaRand 1.0.  Let me be the first to say, I don’t know that this works.  Let me also be the first to say, I don’t know that it doesn’t.  DakaRand is a collection of modes that tries to convert the difference between clocks into enough entropy that, whether or not it survives academic attack, it would certainly force me (as an actual guy who breaks stuff) to go attack something else.

A proper post on DakaRand is reserved, I think, for when we have some idea that it actually works.  Details can be seen in the slides for the aforementioned talk; what I’d like to focus on now is recommendations for trying to break this code.  The short version:

1) Download DakaRand, untar, and run "sh build.sh".
2) Run dakarand -v -d out.bin -m [0-8]
3) Predict out.bin, bit for bit, in less than 2^128 work effort, on practically any platform you desire with almost any level of active manipulation you wish to insert.

The slightly longer version:

  1. DakaRand essentially tries to force the attacker into having no better attack than brute force, and then tries to make that work effort at least 2^128.  As such, the code is split into generators that acquire bits, and then a masking sequence of SHA-256, Scrypt, and AES-256-CTR that expands those bits into however much is requested.  (In the wake of Argyros and Kiayias’s excellent and underreported “I Forgot Your Password:  Randomness Attacks Against PHP Applications”, I think it’s time to deprecate all RNGs with invertible output.  At the point you’re asking whether an RNG should be predictable based on its history, you’ve already lost.)  The upshot of this is that the actual target for a break is not the direct output of DakaRand, but the input to the masking sequence.  Your goal is to show that you can predict this particular stream, with perfect accuracy, at less than 2^128 work effort.  Unless you think you can glean interesting information from the masking sequence (in which case, you have more interesting things to attack than my RNG), you’re stuck trying to design a model of the underlying clock jitter.
  2. There are nine generators in this initial release of DakaRand.  Seriously, they can’t all work.
  3. You control the platform.  Seriously — embedded, desktop, server, VM, whatever — it’s fair game.  About the only constraint that I’ll add is that the device has to be powerful enough to run Linux.  Microcontrollers are about the only things in the world that do play the nanosecond accuracy game, so I’m much less confident against those.  But, against anything ARM or larger, real time operation is simply not a thing you get for free, and even when you pay dearly for it you’re still operating within tolerances far larger than DakaRand needs to mine a bit.  (Systems that are basically cycle-for-cycle emulators don’t count.  Put Bochs and your favorite ICE away.  Nice try though!)
  4. You seriously control the platform.  I’ve got no problem with you remotely spiking the CPU to 100%, sending arbitrary network traffic at whatever times you like, and so on.  The one constraint is that you can’t already have root — so, no physical access, and no injecting yourself into my gathering process.  It’s something of a special case if you’ve got non-root local code execution.  I’d be interested in such a break, but multitenancy is a lie and there are just so many interprocess leaks (like this if-it’s-so-obvious-why-didn’t-you-do-it example of cross-VM communication).
  5. Virtual machines get special rules:  You’re allowed to suspend/restore right up to the execution of DakaRand.  That is the point of atomicity.
  6. The code’s a bit hinky, what with globals and a horde of dependencies.  If you’d like to test on a platform that you just can’t get DakaRand to build on, that makes things more interesting, not less.  Email me.
  7. All data generated is mixed into the hash, but bits are “counted” when Von Neumann debiasing works.  Basically, generators return integers between 0 and 2^32-1.  Every integer is mixed into the keying hash (thus, you having to predict out.bin bit for bit).  However, each integer is also measured for the number of 1’s it contains.  An even number yields a 0; an odd number, a 1.  Bits are only counted when two sequential numbers have either a 10 or a 01, and as long as fewer than 256 bits have been counted, the generator will continue to be called.  So your attack needs to model the absolute integers returned (which isn’t so bad) and the number of generator calls it takes for a Von Neumann transition to occur and whether the transition is a 01 or a 10 (since I put that value into the hash too).  (A rough sketch of this parity-and-transition accounting appears just after this list.)
  8. I’ve got a default “gap” between generator probes of just 1000us — a millisecond.  This is probably not enough for all platforms — my assumption is that, if anything has to change, this has to become somewhat dynamic.
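
As promised in item 7, here’s a rough Python sketch of the debiasing idea.  It’s classic Von Neumann debiasing over per-sample parity bits, an illustration of the concept rather than DakaRand’s exact accounting (which, as described above, also feeds every raw integer and every transition into the keying hash):

def parity(x):
    # Parity of the set bits in a 32-bit generator output: even -> 0, odd -> 1.
    return bin(x & 0xFFFFFFFF).count("1") & 1

def von_neumann(bits):
    # Classic Von Neumann debiasing: look at non-overlapping pairs, emit 0 for
    # (0,1) and 1 for (1,0), and throw away (0,0) and (1,1).
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

samples = [0xDEADBEEF, 0x12345678, 0xCAFEBABE, 0x0BADF00D]  # stand-ins for generator output
print(von_neumann([parity(s) for s in samples]))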

Have fun!  Remember, “it might fail somehow somewhere” just got trumped by “it actually did fail all over the place”, so how about we investigate a thing or two that we’re not so sure in advance will actually work?

(Side note:  Couple other projects in this space:  Twuewand, from Ryan Finnie, has the chutzpah to be pure Perl.  And of course, Timer Entropyd, from Folkert Van Heusden.  Also, my recommendation to kernel developers is to do what I’m hearing they’re up to anyway, which is to monitor all the interrupts that hit the system on a nanosecond timescale.  Yep, that’s probably more than enough.)

Categories: Security

Black Ops 2012

August 6, 2012 3 comments

Here are my slides from Black Hat and Defcon for 2012.  Pile of interesting heresies — should make for interesting discussion.  Here’s what we’ve got:

1) Generic timing attack defense through network interface jitter
2) Revisiting Random Number Generation through clock drift
3) Suppressing injection attacks by altering variable scope and per-character taint
4) Deployable mechanisms for detecting censorship, content alteration, and certificate replacement
5) Stateless TCP w/ payload retrieval

I hate saying “code to be released shortly”, but I want to post the slides and the code’s pretty hairy.  Email me if you want to test anything, particularly if you’d like to try to break this stuff or wrap it up for release.  I’ll also be at Toorcamp, if you want to chat there.

Categories: Security

RDP and the Critical Server Attack Surface

March 18, 2012 18 comments

MS12-020, a use-after-free discovered by Luigi Auriemma, is roiling the Information Security community something fierce. That’s somewhat to be expected — this is a genuinely nasty bug. But if there’s one thing that’s not acceptable, it’s the victim shaming.

As people who know me, know well, nothing really gets my hackles up like blaming the victims of the cybersecurity crisis. “Who could possibly be so stupid as to put RDP on the open Internet”, they ask. Well, here’s some actual data:

On 16-Mar-2012, I initiated a scan across approximately 8.3% of the Internet (300M IPs were probed; the scan is ongoing). 415K of ~300M IP addresses showed evidence of speaking the RDP protocol (about twice as many had listeners on 3389/tcp — always be sure to speak a bit of the protocol before citing connectivity!)

Extrapolating from this sample (415K hits across an 8.3% slice, so roughly 415K / 0.083), we can see that there are approximately five million RDP endpoints on the Internet today.

Now, some subset of these endpoints are patched, and some (very small) subset of these endpoints aren’t actually the Microsoft Terminal Services code at all. But it’s pretty clear that, yes, RDP is actually an enormously deployed service, across most networks in the world (21767 of 57344 /16’s, at 8.3% coverage).

There’s something larger going on, and it’s the relevance of a bug on what can be possibly called the Critical Server Attack Surface. Not all bugs are equally dangerous because not all code is equally deployed. Some flaws are simply more accessible than others, and RDP — as the primary mechanism by which Windows systems are remotely administered — is a lot more accessible than a lot of people were aware of.

I think, if I had to enumerate the CSAS for the global Internet, it would look something like (in no particular order — thanks Chris Eng, The Grugq!):

  • HTTP (Apache, Apache2, IIS, maybe nginx)
  • Web Languages (ASPX, ASP, PHP, maybe some parts of Perl/Python. Maybe rails.)
  • TCP/IP (Windows 2000/2003/2008 Server, Linux, FreeBSD, IOS)
  • XML (libxml, MSXML3/4/6)
  • SSL (OpenSSL, CryptoAPI, maybe NSS)
  • SSH (OpenSSH, maybe dropbear)
  • telnet (bsd telnet, linux telnet) (not enough endpoints)
  • RDP (Terminal Services)
  • DNS (BIND9, maybe BIND8, MSDNS, Unbound, NSD)
  • SNMP
  • SMTP (Sendmail, Postfix, Exchange, Qmail, whatever GMail/Hotmail/Yahoo run)

I haven’t exactly figured out where the line is — certainly we might want to consider web frameworks like Drupal and WordPress, FTP daemons, printers, VoIP endpoints and SMB daemons, not to mention the bouillabaisse that is the pitiful state of VPNs all the way into 2012 — but the combination of “unfirewalled from the Internet” and “more than 1M endpoints” is a decent rule of thumb. Where people are maybe blind is in the dramatic underestimation of just how many Microsoft shops there really are out there.

We actually had the opposite situation a little while ago, with this widely discussed bug in telnet. Telnet was the old way Unix servers were maintained on the Internet. SSH rather completely supplanted it, however — the actual data revealed only 180K servers left, with but 20K even potentially vulnerable.

RDP’s just on a different scale.

I’ve got more to say about this, but for now it’s important to get these numbers out there. There’s a very good chance that your network is exposing some RDP surface. If you have any sort of crisis response policy, and you aren’t completely sure you’re safe from the RDP vulnerability, I advise you to invoke it as soon as possible.

(Disclosure: I do some work with Microsoft from time to time. This is obviously being written in my personal context.)

Categories: Security

Open For Review: Web Sites That Accept Security Research

February 26, 2012 11 comments

So one of the core aspects of my mostly-kidding-but-no-really White Hat Hacker Flowchart is that, if the target is a web page, and it’s not running on your server, you kind of need permission to actively probe for vulnerabilities.

Luckily, there are actually a decent number of sites that provide this permission.

Paypal
Facebook
37 Signals
Salesforce
Microsoft
Google
Twitter
Mozilla
UPDATE 1:
eBay
Adobe
UPDATE 2, courtesy of Neal Poole:
Reddit (this is particularly awesome)
GitHub
UPDATE 3:
Constant Contact

One could make the argument that you can detect who in the marketplace has a crack security team, by who’s willing and able to commit the resources for an open vulnerability review policy.

Some smaller sites have also jumped on board (mostly absorbing and reiterating Salesforce’s policy — cool!):

Zeggio
Simplify, LLC
Team Unify
Skoodat
Relaso
Modus CSR
CloudNetz
UPDATE 2:
EMPTrust
Apriva

There are some interesting implications to all of this, but for now let’s just get the list out there. Feel free to post more in the comments!

Categories: Security

White Hat Hacker Flowchart

February 20, 2012 4 comments

For your consideration. Rather obviously not to be taken as legal advice. Seems to be roughly what’s evolved over the last decade.

Categories: lulz, Security

Primal Fear: Demuddling The Broken Moduli Bug

February 17, 2012 6 comments

There’s been a lot of talk about this supposed vulnerability in RSA, independently discovered by Arjen Lenstra and James P. Hughes et al., and Nadia Heninger et al. I wrote about the bug a few days ago, but that was before Heninger posted her data. Let’s talk about what’s one of the more interesting, if misunderstood, bugs in quite some time.


SUMMARY
INTRODUCTION
THE ATTACK
IT’S NOT ABOUT RON AND WHIT
WHO MATTERS
FAILURE TO ENROLL
THE ACTUAL THREAT
ACTIONABLE INTELLIGENCE
CONCLUSION


SUMMARY

  • The “weak RSA moduli” bug is found almost exclusively (and possibly entirely) within certificates that were already insecure (i.e. expired, or not signed by a valid CA).
  • This attack almost certainly affects not a single production website.
  • The attack utilizes a property of RSA whereby if half the private key material is shared between two public keys, the private key is leaked. Researchers scaled this method to cross-compare every RSA key on the Internet against every other RSA key on the Internet.
  • The flaw has nothing to do with RSA or “multi-secret” systems. The exact same broken random number generator would play just as much havoc, if not more, with “single-secret” algorithms such as ECDSA.
  • DSA, unlike RSA, leaks the private key with every signature under conditions of faulty entropy. That is arguably worse than RSA which leaks its private key only during generation, only if a similar device emits the same key, and only if the attacker finds both devices’ keys.
  • The first major finding is that most devices offer no crypto at all, and even when they do, the crypto is easily man-in-the-middled due to a presumption that nobody cares whether the right public key is in use.
  • Cost and deployment difficulty drive the non-deployment of cryptographic keys even while almost all systems acquire enough configuration for basic connectivity.
  • DNSSEC will dramatically reduce this cost, but can do nothing if devices themselves are generating poor key material and expecting DNSSEC to publish it.
  • The second major finding is that it is very likely that these findings are only the low hanging fruit of easily discoverable bad random number generation flaws in devices. It is specifically unlikely that only a third of one particular product had bad keys, and the rest managed to call quality entropy.
  • This is a particularly nice attack in that no knowledge of the underlying hardware or software architecture is required to extract the lost key material.
  • Recommendations:
    1. Don’t panic about websites. This has very little to absolutely nothing to do with them.
    2. When possible and justifiable, generate private key material outside your embedded devices, and push the keys into them. Have their surrounding certificates signed, if feasible.
    3. Audit smartcard keys.
    4. Stop buying or building CPUs without hardware random number generators.
    5. Revisit truerand, an entropy source that only requires two desynchronized clocks, possibly integrating it into OpenSSL and libc.
    6. When doing global sweeps of the net, be sure to validate that a specific population is affected by your attack before including it in the vulnerability set.
    7. Start seriously looking into DNSSEC. You are deploying a tremendous number of systems that nobody can authenticate.

INTRODUCTION
If there’s one thing to take away from this entire post, it’s the following line from Nadia Heninger’s writeup:

Only one of the factorable SSL keys was signed by a trusted certificate authority and it has already expired.

What this means, in a nutshell, is that there was never any security to be lost from crackable RSA keys; due to failures in key management, almost certainly all of the affected keys were vulnerable to being silently “swapped out” by a Man-In-The-Middle attacker. It isn’t merely the fact that all the headlines proclaiming “0.2% of websites using RSA are insecure” are straight up false, because the flaws are concentrated on devices. It’s also the case that the devices themselves were already using RSA insecurely to begin with, merely by being deployed outside a key management system.

If there’s a second thing to take away, it’s that this is important research with real actions that should be taken by the security community in response. There’s no question this research is pointing to things that are very wrong with the systems we depend on. Do not make the mistake of discounting this work as merely academic. We have a problem with random numbers, that is even larger than Lenstra and Hughes and Heninger are finding with their particular mechanism.


THE ATTACK

To grossly simplify, RSA works by:

1) Generating two random primes, p and q. These are private.
2) Multiplying p and q together into n. This becomes public, because figuring out the exact primes used to create n is Hard.
3) Content is signed or decrypted with p and q.
4) Content is verified or encrypted with n.

It’s obvious that if p is not random, q can be calculated: n/p==q, because p*q==n. What’s slightly less obvious, however, is that if either p or q is repeated across multiple n’s, then the repeated value can be trivially extracted using an ancient algorithm by Euclid called the Greatest Common Divisor (GCD) test. The trick looks something like this (you can follow along with Python, if you like):

>>> from gmpy import *
>>> p=next_prime(rand("next"))
>>> q=next_prime(rand("next"))
>>> p
mpz(136093819)
>>> q
mpz(118190411)
>>> n=p*q
>>> n
mpz(16084984402169609)

Now we have two primes, multiplied into some larger n.

>>> q2=next_prime(rand("next"))
>>> n2=p*q2
>>> n2
mpz(289757928023349293)

Ah, now we have n2, which is the combination of a previously used p, and a newly generated q2. But look what an attacker with n and n2 can do:

>>> gcd(n,n2)
mpz(136093819)
>>> p
mpz(136093819)

Ta-da! p falls right out, and thus q as well (since again, n/p==q). So what these teams did was gcd every RSA key on the Internet, against every other RSA key on the Internet. This would of course be quite slow — six million keys times six million keys is thirty six trillion comparisons — and so they used a clever algorithm to scale the attack such that multiple keys could be simultaneously compared against one another. It’s not clear what Lenstra and Hughes used, but Heninger’s implementation leveraged some 2005 research from (who else?) Dan Bernstein.
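
To make that concrete, here’s the naive quadratic version of the attack in a few lines of Python, reusing the toy numbers from the session above.  It’s only a sketch of the core idea; the real work used a product-tree/remainder-tree batch GCD to get through millions of keys:

from math import gcd

def find_shared_primes(moduli):
    # The naive O(n^2) version: compare every modulus against every other one
    # and report any nontrivial common factor.  A shared factor is p; n/p then
    # recovers each victim's q, and both private keys fall out.
    hits = []
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if g != 1:
                hits.append((i, j, g))
    return hits

# Continuing the toy numbers from above: n and n2 share the prime p.
print(find_shared_primes([16084984402169609, 289757928023349293]))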

That is the math. What is the impact, upon computer security? What is the actionable intelligence we can derive from these findings?

Uncertified public keys or not, there’s a lot to worry about here. But it’s not at all where Lenstra and Hughes think.


IT’S NOT ABOUT RON AND WHIT

Lenstra and Hughes, in arguing that “Ron was Wrong, Whit was right”, declare:

Our conclusion is that the validity of the assumption is questionable and that generating keys in the real world for “multiple-secrets” cryptosystems such as RSA is significantly riskier than for “single-secret” ones such as ElGamal or (EC)DSA which are based on Diffie-Hellman.

The argument is essentially that, had those 12,000 keys been ECDSA instead of RSA, then they would have been secure. In my previous post on this subject (a post RSA Inc. itself is linking to!) I argued that this paper, while chock-full of useful and important data, failed to make the case that the blame could be laid at the feet of the “multiple-secret” RSA. Specifically:

  • Risk in cryptography is utterly dominated, not by cipher selection, but by key management. It does not matter the strength of your public key if nobody knows to demand it. Whatever the differential risk is that is introduced by the choice of RSA vs. ECDSA pales in comparison to whether there’s any reason to believe the public key in question is the actual key of the desired target. (As it turns out, few if any of the keys with broken moduli, had any reason to trust them in the first place.)
  • There aren’t enough samples of DSA in the same kind of deployments to compare failure rates from one asymmetric cipher to another.
  • If one is going to blame a cipher for implementation failures in the field, it’s hard to blame the maybe dozens of implementations of RSA that caused 12,000 failures, while ignoring the one implementation of ECDSA that broke millions upon millions of PlayStation 3’s.

But it wasn’t until I read the (thoroughly excellent) blog post from Heninger that it became clear that, no, there’s no rational way this bug can be blamed on multi-secret systems in general or RSA in particular.

However, some implementations add additional randomness between generating the primes p and q, with the intention of increasing security:

prng.seed(seed)
p = prng.generate_random_prime()
prng.add_randomness(bits)
q = prng.generate_random_prime()
N = p*q

If the initial seed to the pseudorandom number generator is generated with low entropy, this could result in multiple devices generating different moduli which share the prime factor p and have different second factors q. Then both moduli can be easily factored by computing their GCD: p = gcd(N1, N2).

OpenSSL’s RSA key generation functions this way: each time random bits are produced from the entropy pool to generate the primes p and q, the current time in seconds is added to the entropy pool. Many, but not all, of the vulnerable keys were generated by OpenSSL and OpenSSH, which calls OpenSSL’s RSA key generation code.

Would the above code become secure, if the asymmetric cipher selected was the single-secret ECDSA instead of the multi-secret RSA?

prng.seed(seed);
ecdsa_private_key=prng.generate_random_integer();

No. All public keys emitted would be identical (as identical as they’d be with RSA, anyway). I suppose it’s possible we could see the following construction instead:

prng.seed(seed);
first_half=prng.generate_random_integer();
prng.add_randomness(bits);
second_half=prng.generate_random_integer();
ecdsa_private_key=concatenate(first_half, second_half);

…but I don’t think anyone would claim that ECDSA, DSA, or ElGamal is secure in the circumstance where half its key material has leaked. (Personal note: Prodded by Heninger, I’ve personally cracked a variant of RSA-1024 where all bits set to 1 were revealed. It wasn’t difficult — cryptosystems fail quickly when their fundamental assumptions are violated.)

Hughes has made the argument that RSA is unique here because the actions of someone else — publishing n composed of your p and his q — impact an innocent you. Alas, you’re no more innocent than he is; you’re republishing his p, and exposing his q2 as much as he’s exposing your q. Not to mention, in DSA, unlike RSA, you don’t need someone else’s help to expose your private key. A single signature, emitted without a strong RNG, is sufficient. As we saw from the last major RNG flaw, involving Debian:

Furthermore, all DSA keys ever used on affected Debian systems for signing or authentication purposes should be considered compromised; the Digital Signature Algorithm relies on a secret random value used during signature generation.

The fact that Lenstra and Hughes found a different vulnerability rate for DSA keys than RSA keys comes from the radically different generator for the former — PGP keys as generated by GNU Privacy Guard or PGP itself. This is not a thing commonly executed on embedded hardware.

There is a case to be made for the superiority of ECDSA over RSA. Certainly this is the conclusion of various government entities. I don’t think anyone would be surprised if the latter was broken before the former. But this situation, this particular flaw, reflects nothing about any particular asymmetric cipher. Under conditions of broken random number generation, all ciphers are suspect.

Can we dismiss these findings entirely, then?

Lets talk about that.


WHO MATTERS

The headlines have indeed been grating. “Only 99.8% of the world’s PKI uses secure randomness”, said TechSnap. “EFF: Tens of thousands of websites’ SSL ‘offers effectively no security’”, from Boing Boing. And, of course, from the New York Times:

The operators of large Web sites will need to make changes to ensure the security of their systems, the researchers said.

The potential danger of the flaw is that even though the number of users affected by the flaw may be small, confidence in the security of Web transactions is reduced, the authors said.

“Some people may say that 99.8 percent security is fine,” he added. That still means that approximately as many as two out of every thousand keys would not be secure.

The operators of large web sites will not need to make any changes, as they are excluded from the population that is vulnerable to this flaw. It is not two out of every thousand keys that is insecure. Out of those keys that had any hope of being secure to begin with — those keys that participate in the (flawed, but bear with me) Certificate Authority system — approximately none of these keys are threatened by this attack.

It is not two out of every 1000 already insecure keys that are insecure. It is 1000 of 1000. But that is not changed by the existence of broken RSA moduli.

To be clear, these numbers do not come from Lenstra and Hughes, who have cautiously refused to disclose the population of certificates, nor from the EFF, who have apparently removed those particular certificates from the SSL Observatory (interesting side project: Diff against the local copy). They come from Heninger’s research, which not only discloses who isn’t affected (“Don’t worry, the key for your bank’s web site is probably safe”), but analyzes the population that is:

Firewall product X:
52,055 hosts in our SSL dataset
6,730 share public keys
12,880 have factorable keys

Consumer-grade router Y:
26,952 hosts in our SSL dataset
9,345 share public keys
4,792 have factorable keys

Enterprise remote access solution Z:
1,648 hosts in our SSL dataset
24 share public keys
0 factorable

Finally, we get to why this research matters, and what actions should be taken in response to it. Why are large numbers of devices — of security devices, even! — not even pretending to successfully manage keys? And worse, why are the keys they do generate (even if nobody checks them) so insecure?


FAILURE TO ENROLL

Practically every device on a network is issued an IP address. Most of them also receive DNS names. Sometimes assignment is dynamic via DHCP, and sometimes (particularly on corporate/professionally managed networks) addressing is managed statically through various solutions, but basic addressing and connectivity across devices from multiple vendors is something of a solved problem.

Basic key management, by contrast, is a disaster.

You don’t have to like the Certificate Authority system. I certainly don’t; while I respect the companies involved, they’re pretty clearly working with a flawed technology. (I’ve been talking about this since 2009, see the slides here. Slide 22 talks about the very intermediate certificate issuance that people are now worrying about with respect to Trustwave and GeoTrust) However…

Out of hundreds of millions of servers on the Internet with globally routable IP addresses, only about six million present public keys to be authenticated against. And of those with keys, only a relatively small portion — maybe a million? — are actually enrolled into the global PKI managed by the Certificate Authorities.

The point is not that the CA’s don’t work. The point is that, for the most part, clients don’t care, nothing is checking the validity of device certificates in the first place. Most devices, even security devices, are popping these huge errors every time the user connects to their SSL ports. Because this is legitimate behavior — because there’s no reason to trust the provenance of a public key, right or wrong — users click through.

And why are so many devices, even in the rare instance that they have key material on their administrative interfaces, unenrolled in PKI? Because it requires this chain of cross organizational interaction to be followed. The manufacturer of the device could generate a key at the factory, or on first boot, but they’re not the customer so they can’t have certificates signed in a customers namespace (which might change, anyway). Meanwhile, the customer, who can easily assign IP addresses and DNS names without constantly asking for anyone’s permission, has to integrate with an external entity, extract key material from the manufacturer’s generated content, provide it to that entity, integrate the response back into the device, pay the entity, and do it all over again within a year or three. Repeat for every single device.

Or, they could just not do all that. Much easier to simply not use the encrypted interface (it’s the Internal Network, after all), or to ignore the warning prompts when they come up.

One of the hardest things to convince people of in security, is that it’s not enough to make security better. Better doesn’t mean anything, it’s a measure without a metric. No, what’s important is making security cheaper. How do we start actually delivering value, without imagining that customers have an infinite budget to react with? How do we even measure cost?

I can propose at least one metric: How many meetings need to be held, to deploy a particular technology? Money is one thing, but the sheer time required to get things done is a remarkably effective barrier to project completion. And the further away somebody is from a deploying organization — another group, another company, another company that needs to get paid thus requiring a contract and Purchasing’s involvement — the worse things are.

Devices are just not being enrolled into the CA system. Heninger found 19,610 instances of a single firewall — a security device — and not one of those instances was even pretending to be secure. That’s a success rate not of 99.8%, but of 0.00%.

Whatever good we’ve done on production websites has been met with deafening silence across “firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones”. Believe me, this abject security failure cares not a whit about the relative merits of RSA vs. ECDSA. The key management penalty is just too damn high.

The solution, of course, is DNSSEC. It’s not entirely true that third parties aren’t involved with IP assignment; it’s just that once you’ve got your IP space, you don’t have to keep returning to the well. It’s the same with DNS — while yes, you need to maintain your registration for whatever.com, you don’t need to interact with your registrar every time you add or remove a host (unless you’d like to, anyway). In the future, you’ll register keys in your own DNS just as you register IP addresses. It will not be drama. It will not be a particularly big deal. It will be a simple, straightforward, and relatively inexpensive mechanism by which devices are brought into your network, and then if you so choose, recognized as yours worldwide.

This will be very exciting, and hopefully, quite boring.

It’s not quite so simple, though. DNSSEC offers a very nice model — one that will be completely vulnerable to the findings of Lenstra, Hughes, and Heninger, unless we start dealing with these badly generated private keys.


THE ACTUAL THREAT

DNSSEC will make key management scale. That is not actually a good thing, if the keys it’s spreading are in fact insecure! What we’re finding here is that a lot of small devices are generating keys insecurely. The biggest mistake we can make here is thinking that only the keys vulnerable to this particular attack, were badly generated.

There are many ways to fail at key generation that aren't nearly as trivially discoverable as a massive group GCD. Heninger found that Firewall Vendor Z had over a third of their keys either crackable via GCD or repeated elsewhere (as you'd expect, if both p and q were emitted from identical entropy). One would have to be delusional to expect that the code is perfectly secure two thirds of the time and only fails in this manner every once in a while. The primary lesson from this incident is that there's a lot more where the Debian flaw came from. According to Heninger, much of the hardware they saw was actually running OpenSSL, pretty much the gold standard among openly available cryptographic libraries.

It still failed.

This particular attack is nice, in that it's independent of the particular vagaries of a device implementation. As long as any two nodes share either p or q, both p and q will be disclosed: the gcd of the two moduli is the shared prime, and dividing each modulus by that prime yields its other factor. But this is a canary in the coal mine — if this went wrong, so too must a lot more have. What I predict is that, when we crack devices open, we're going to see mechanisms that rarely repeat, but still have an insufficient amount of entropy to get the job done: nodes seeding with MAC addresses (far fewer than 48 bits of entropy, when you think about it), the ever-classic clock seeds, even fixed "entropic starting points" stored on the file system.
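To make the failure mode concrete, here is a deliberately broken sketch of my own (no vendor's actual code): two "devices" that seed their generator with nothing but a MAC-style identifier and a coarse boot time. If the inputs match exactly, the keys repeat outright; if the streams diverge partway through generation, the keys end up sharing a single prime, which is precisely what the group GCD catches.

#include <stdio.h>
#include <stdlib.h>

/* Deliberately bad key-candidate generation, for illustration only:
 * the "entropy" is a MAC-style identifier XORed with a coarse timestamp.
 * Two devices from the same production run, booted in the same second,
 * produce identical "random" streams, and therefore identical primes. */
static unsigned int bad_seed(unsigned long mac_low, unsigned long boot_time) {
    return (unsigned int)(mac_low ^ boot_time);   /* tiny, guessable state */
}

int main(void) {
    unsigned long mac  = 0x0000c0ffee01UL;  /* fictional vendor MAC */
    unsigned long boot = 1330000000UL;      /* same coarse boot timestamp */

    srand(bad_seed(mac, boot));
    unsigned int a1 = (unsigned int)rand(), a2 = (unsigned int)rand();
    printf("device A candidate: %08x%08x\n", a1, a2);

    srand(bad_seed(mac, boot));             /* second device, same inputs */
    unsigned int b1 = (unsigned int)rand(), b2 = (unsigned int)rand();
    printf("device B candidate: %08x%08x\n", b1, b2);
    return 0;
}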

Tens of thousands of random keys being not quite so random is one thing. But the fact that a generic mechanism found this many issues strongly implies that a device-specific mechanism would be even more effective. We should have gone looking after the Debian bug. I'm glad Lenstra, Hughes, and Heninger et al. did.


ACTIONABLE INTELLIGENCE

So what to do? Here is my advice.

1) Obviously, don't panic about any production website being vulnerable. This is pretty much a problem with the administrative web interfaces of embedded devices.

2) When possible and justifiable, generate private key material outside your embedded devices, and push the keys into them. Have the surrounding certificates signed, if feasible.

This is cryptographic heresy, of course. Private keys are not supposed to be moved; bits are cheap, so you generate them where you will use them. But, you know, some bits are cheaper than others, and at this point I've got a lot more faith in a PC generating key material than in some random box. (No way I would have argued this a few weeks ago. Like I said from the beginning, this is excellent work.) In particular, I'd regenerate keys used to authenticate clients to servers.
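For the PC-side step, something along these lines is enough (a minimal sketch against OpenSSL's 1.0-era RSA API; the 2048-bit size and the output filename are placeholders, and you'd move the resulting PEM to the device over whatever management channel it offers):

#include <stdio.h>
#include <openssl/bn.h>
#include <openssl/pem.h>
#include <openssl/rsa.h>

/* Generate an RSA key on a workstation, then push it into the device.
 * Build with: gcc keygen.c -lcrypto */
int main(void) {
    RSA *rsa = RSA_new();
    BIGNUM *e = BN_new();
    BN_set_word(e, RSA_F4);                      /* public exponent 65537 */

    if (!RSA_generate_key_ex(rsa, 2048, e, NULL)) {
        fprintf(stderr, "key generation failed\n");
        return 1;
    }

    FILE *fp = fopen("device_key.pem", "w");     /* illustrative filename */
    if (!fp) { perror("fopen"); return 1; }
    PEM_write_RSAPrivateKey(fp, rsa, NULL, NULL, 0, NULL, NULL);
    fclose(fp);

    BN_free(e);
    RSA_free(rsa);
    return 0;
}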

3) Audit smartcard keys.

The largest population of key material that Lenstra, Hughes, and Heninger could never have looked at is the set of public keys contained within the world's smartcards. Unless you're the manufacturer, you have no way to "look inside" to see whether the private keys are being reasonably generated. But you do have access to the public keys, and most likely you don't have so many of them that the slow mechanism (gcd'ing each modulus against every other modulus) becomes infeasible. Ping me if you're interested in doing this.
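That slow pairwise check looks roughly like this (a sketch using GMP, with three toy moduli I invented for illustration; a real audit would load the hex moduli exported from the cards, and a large population would justify Bernstein-style batch gcd rather than an O(n^2) loop):

#include <stdio.h>
#include <gmp.h>

/* Pairwise audit sketch: gcd every modulus against every other modulus.
 * Any nontrivial gcd is a shared prime, and both moduli then factor
 * immediately, since the other factor is just n divided by that prime.
 * Build with: gcc audit.c -lgmp */
int main(void) {
    mpz_t n[3], g, p, q;
    mpz_inits(g, p, q, NULL);

    /* Pretend these came off three different smartcards; the first two
     * "cards" happened to draw the same prime (7919) from a bad RNG. */
    mpz_init_set_ui(n[0], 7919); mpz_mul_ui(n[0], n[0], 8147);
    mpz_init_set_ui(n[1], 7919); mpz_mul_ui(n[1], n[1], 8053);
    mpz_init_set_ui(n[2], 8117); mpz_mul_ui(n[2], n[2], 8123);

    for (int i = 0; i < 3; i++) {
        for (int j = i + 1; j < 3; j++) {
            mpz_gcd(g, n[i], n[j]);
            if (mpz_cmp_ui(g, 1) > 0) {          /* shared prime found */
                mpz_divexact(p, n[i], g);        /* other factor of n[i] */
                mpz_divexact(q, n[j], g);        /* other factor of n[j] */
                gmp_printf("moduli %d and %d share %Zd; cofactors %Zd and %Zd\n",
                           i, j, g, p, q);
            }
        }
    }
    return 0;
}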

4) Stop buying or building CPUs without hardware random number generators.

Alright, CPU vendors. There aren't many of you, and it's time for you to stop being so deterministic. By 2014, if your CPU does not support cryptographically strong random number generation, it really doesn't belong in anything trying to be secure. Since at this point that's everything, please start providing a simple instruction, not something hidden away in some random chipset, that developers and kernels can use to seed proper entropy. It doesn't even need to be fast.

I hear Intel is adding a Hardware RNG to “Ivy Bridge”. One would also be nice for Atom.
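For what it's worth, here's roughly what that looks like from userland, assuming a toolchain new enough to expose Intel's RDRAND intrinsic (immintrin.h, built with -mrdrnd, on Ivy Bridge or later). The retry loop is there because the instruction can transiently report that no random data is available:

#include <stdio.h>
#include <immintrin.h>

/* Pull 32 random bits straight from the CPU's hardware RNG. */
int main(void) {
    unsigned int r;
    int tries = 10;

    while (tries--) {
        if (_rdrand32_step(&r)) {    /* returns 1 on success, 0 on failure */
            printf("hardware random word: %08x\n", r);
            return 0;
        }
    }
    fprintf(stderr, "RDRAND unavailable or temporarily exhausted\n");
    return 1;
}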

5) Revisit truerand, an entropy source that only requires two desynchronized clocks, possibly integrating it into OpenSSL and libc.

This is the most controversial advice I’ve ever given, and most of you have no idea what I’m talking about. From the code, written originally seventeen years ago by D. P. Mitchell and modified by the brilliant Matt Blaze (who’s probably going to kill me):

* Truerand is a dubious, unproven hack for generating “true” random
* numbers in software. It is at best a good “method of last resort”
* for generating key material in environments where there is no (or
* only an insufficient) source of better-understood randomness. It
* can also be used to augment unreliable randomness sources (such as
* input from a human operator).
*
* The basic idea behind truerand is that between clock “skew” and
* various hard-to-predict OS event arrivals, counting a tight loop
* will yield a little bit (maybe one bit or so) of “good” randomness
* per interval clock tick. This seems to work well in practice even
* on unloaded machines…

* Because the randomness source is not well-understood, I’ve made
* fairly conservative assumptions about how much randomness can be
* extracted in any given interval. Based on a cursory analysis of
* the BSD kernel, there seem to be about 100-200 bits of unexposed
* “state” that changes each time a system call occurs and that affect
* the exact handling of interrupt scheduling, plus a great deal of
* slower-changing but still hard-to-model state based on, e.g., the
* process table, the VM state, etc. There is no proof or guarantee
* that some of this state couldn’t be easily reconstructed, modeled
* or influenced by an attacker, however, so we keep a large margin
* for error. The truerand API assumes only 0.3 bits of entropy per
* interval interrupt, amortized over 24 intervals and whitened with
* SHA.

Disk seek time is often unavailable on embedded hardware, because there simply is no disk. Ambient network traffic rates are often also unavailable because the system is not yet on a production network. And human interval analysis (keystrokes, mouse clicks) may not be around, because there is no human. (Remember, strong entropy is required all the time, not just during key generation.)

What's also not around on embedded hardware, however, is a hypervisor doling out time slices. And consider: what makes humans an interesting source of randomness is that we operate on a completely different clock and frequency than a computer. We are a "slow oscillator" that chaotically interacts with a fast one.

Well, most every piece of hardware has at least two clocks, and they are not in sync: the CPU clock and the Real Time Clock. You can add a third, in fact, if you include system memory. Any of them can be made a "slow oscillator" relative to the others, if only by running in loops. At the extreme scale, you're going to find slight deviations that are explained not by deterministic processes, but by the chaotic and limited tolerances of different clocks running at different frequencies and different temperatures.
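As a toy illustration of the two-clock idea (my own sketch, not Mitchell and Blaze's code, and far too slow and under-analyzed for real use): count how fast a polling loop spins across one tick of the wall clock, and keep only the lowest bit of each count. The jitter between the two oscillators is what you're harvesting; anything serious would whiten the output through SHA and credit far less than one bit per sample, exactly as truerand does.

#include <stdio.h>
#include <time.h>

/* Count how many times a polling loop iterates during one tick of the
 * wall clock, and keep only the least significant bit of the count.
 * Painfully slow here, since time() ticks once per second; truerand
 * proper uses sub-second interval timers for the same trick. */
static int one_bit(void) {
    volatile unsigned long spins = 0;
    time_t start = time(NULL);

    while (time(NULL) == start)      /* align with the next tick boundary */
        spins++;
    start = time(NULL);
    spins = 0;
    while (time(NULL) == start)      /* count spins across one full tick */
        spins++;

    return (int)(spins & 1);
}

int main(void) {
    unsigned char byte = 0;
    for (int i = 0; i < 8; i++)
        byte = (unsigned char)((byte << 1) | one_bit());
    printf("one jittery byte: %02x\n", byte);
    return 0;
}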

On the one hand, I might be wrong. Perhaps this research path only ends in tears. On the other hand, it is the most persistent delusion in all of computing that there’s only one computer inside your computer, rather than a network of vaguely cooperative subsystems — each with their own clocks that are in no way synchronized against one another.

It wouldn’t be the first time asynchronous behavior made a system non-deterministic.

Ultimately, the truerand approach can't make things worse. One of the very nice things about entropy sources is that you can stir all sorts of total crap in there, and as long as there are at least 80 or so bits that are actually difficult to predict, you've won. I see no reason we shouldn't investigate this path, and possibly integrate it as another entropy source into OpenSSL, and maybe libc as well.
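Mechanically, stirring such a source into OpenSSL's pool is the easy part; the hard part is how much entropy you dare to claim for it. A hedged sketch, assuming you've gathered jitter bytes along the lines above:

#include <openssl/rand.h>

/* Mix locally gathered jitter into OpenSSL's entropy pool. The third
 * argument is the entropy estimate in bytes; credit far less than you
 * collected, in the spirit of truerand's conservative accounting. */
void stir_jitter(const unsigned char *buf, int len) {
    RAND_add(buf, len, len / 8.0);   /* claim only about one bit per byte */
}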

(Why am I not suggesting "switch your apps over to /dev/random"? Because if your developers did enough to break what is already enabled by default in OpenSSL, they did so because tests showed the box freezing up, and they aren't going to appreciate some security yutz telling them to ship something that locks on boot.)

6) When doing global sweeps of the net, be sure to validate that a specific population is affected by your attack before including it in the vulnerability set.

7) Start seriously looking into DNSSEC. You are deploying a tremendous number of systems that nobody can authenticate.


CONCLUSION

It’s 2012, and we’re still having problems with Random Number Generators. Crazy, but we’re still suffering (mightily) from SQL injection and that’s a “fixed problem” too. This research, even if accompanied by some flawed analysis, is truly important. There are more dragons down this path, and it’s going to be entertaining to slay them.

Categories: Security