DakaRand 1.0: Revisiting Clock Drift For Entropy Generation

August 15, 2012

“The generation of random numbers is too important to be left to chance.”
Robert R. Coveyou

“One out of 200 RSA keys in the field were badly generated as a result of standard dogma.  There’s a chance this might fail less.”

[Note:  There are times I write things with CIOs in mind.  This is not one of those times.]

So, I’ve been playing with userspace random number generation, as per Matt Blaze and D.P. Mitchell’s TrueRand from 1996.  (Important:  Matt Blaze has essentially disowned this approach, and seems to be honestly horrified that I’m revisiting it.)  The basic concept is that any system with two clocks has a hardware random number generator, since clocks jitter relative to one another based on physical properties, particularly when one is operating on a slow scale (like, say, a human hitting a keyboard) while another is operating on a fast scale (like a CPU counter cycling at nanosecond speeds).  Different tolerances on clocks mean more opportunities for unmodelable noise to enter the system.  And since the core lie of your computer is that it’s just one computer, as opposed to a small network of independent nodes running on their own time, there should be no shortage of bits to mine.

At least, that’s the theory.
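A minimal sketch of the idea (my illustration, not DakaRand’s actual code): let the OS scheduler stand in for the slow clock and the CPU’s high-resolution counter for the fast one, and mine the jitter between them.

```python
import time

# A "slow" clock (the OS scheduler honoring a sleep) measured by a "fast"
# clock (the CPU's nanosecond counter). The low bits of the difference
# jitter with scheduling noise, interrupts, cache state, and so on.
def jitter_bit():
    start = time.perf_counter_ns()
    time.sleep(0.001)                  # ask the slow clock for ~1ms
    elapsed = time.perf_counter_ns() - start
    return elapsed & 1                 # keep only the noisiest bit

bits = [jitter_bit() for _ in range(32)]
```

Whether those low bits are genuinely unpredictable to an attacker is exactly the open question of this post.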

As announced at Defcon 20 / Black Hat, here’s DakaRand 1.0.  Let me be the first to say, I don’t know that this works.  Let me also be the first to say, I don’t know that it doesn’t.  DakaRand is a collection of modes that tries to convert the difference between clocks into enough entropy that, whether or not it survives academic attack, it would certainly force me (as an actual guy who breaks stuff) to go attack something else.

A proper post on DakaRand is reserved, I think, for when we have some idea that it actually works.  Details can be seen in the slides for the aforementioned talk; what I’d like to focus on now is recommendations for trying to break this code.  The short version:

1) Download DakaRand, untar, and run “sh build.sh”.
2) Run dakarand -v -d out.bin -m [0-8]
3) Predict out.bin, bit for bit, in less than 2^128 work effort, on practically any platform you desire with almost any level of active manipulation you wish to insert.

The slightly longer version:

  1. DakaRand essentially tries to force the attacker into having no better attack than brute force, and then tries to make that work effort at least 2^128.  As such, the code is split into generators that acquire bits, and then a masking sequence of SHA-256, Scrypt, and AES-256-CTR that expands those bits into however much is requested.  (In the wake of Argyros and Kiayias’s excellent and underreported “I Forgot Your Password:  Randomness Attacks Against PHP Applications”, I think it’s time to deprecate all RNGs with invertible output.  At the point you’re asking whether an RNG should be predictable based on its history, you’ve already lost.)  The upshot of this is that the actual target for a break is not the direct output of DakaRand, but the input to the masking sequence.  Your goal is to show that you can predict this particular stream, with perfect accuracy, at less than 2^128 work effort.  Unless you think you can glean interesting information from the masking sequence (in which case, you have more interesting things to attack than my RNG), you’re stuck trying to design a model of the underlying clock jitter.
  2. There are nine generators in this initial release of DakaRand.  Seriously, they can’t all work.
  3. You control the platform.  Seriously — embedded, desktop, server, VM, whatever — it’s fair game.  About the only constraint that I’ll add is that the device has to be powerful enough to run Linux.  Microcontrollers are about the only things in the world that do play the nanosecond accuracy game, so I’m much less confident against those.  But, against anything ARM or larger, real time operation is simply not a thing you get for free, and even when you pay dearly for it you’re still operating within tolerances far larger than DakaRand needs to mine a bit.  (Systems that are basically cycle-for-cycle emulators don’t count.  Put Bochs and your favorite ICE away.  Nice try though!)
  4. You seriously control the platform.  I’ve got no problem with you remotely spiking the CPU to 100%, sending arbitrary network traffic at whatever times you like, and so on.  The one constraint is that you can’t already have root — so, no physical access, and no injecting yourself into my gathering process.  It’s something of a special case if you’ve got non-root local code execution.  I’d be interested in such a break, but multitenancy is a lie and there are just so many interprocess leaks (like this if-it’s-so-obvious-why-didn’t-you-do-it example of cross-VM communication).
  5. Virtual machines get special rules:  You’re allowed to suspend/restore right up to the execution of DakaRand.  That is the point of atomicity.
  6. The code’s a bit hinky, what with globals and a horde of dependencies.  If you’d like to test on a platform that you just can’t get DakaRand to build on, that makes things more interesting, not less.  Email me.
  7. All data generated is mixed into the hash, but bits are “counted” only when Von Neumann debiasing works.  Basically, generators return integers between 0 and 2^32-1.  Every integer is mixed into the keying hash (thus, you having to predict out.bin bit for bit).  However, each integer is also measured for the number of 1’s it contains.  An even number yields a 0; an odd number, a 1.  Bits are only counted when two sequential numbers yield either a 10 or a 01, and as long as fewer than 256 bits have been counted, the generator will continue to be called.  So your attack needs to model the absolute integers returned (which isn’t so bad), the number of generator calls it takes for a Von Neumann transition to occur, and whether the transition is a 01 or a 10 (since I put that value into the hash too).
  8. I’ve got a default “gap” between generator probes of just 1000us — a millisecond.  This is probably not enough for all platforms — my assumption is that, if anything has to change, it’s that this gap will need to become somewhat dynamic.

Have fun!  Remember, “it might fail somehow somewhere” just got trumped by “it actually did fail all over the place”, so how about we investigate a thing or two that we’re not so sure in advance will actually work?

(Side note:  A couple of other projects in this space:  Twuewand, from Ryan Finnie, has the chutzpah to be pure Perl.  And of course, Timer Entropyd, from Folkert Van Heusden.  Also, my recommendation to kernel developers is to do what I’m hearing they’re up to anyway, which is to monitor all the interrupts that hit the system on a nanosecond timescale.  Yep, that’s probably more than enough.)

Categories: Security

Black Ops 2012

August 6, 2012

Here are my slides from Black Hat and Defcon for 2012.  A pile of interesting heresies — should make for some lively discussion.  Here’s what we’ve got:

1) Generic timing attack defense through network interface jitter
2) Revisiting Random Number Generation through clock drift
3) Suppressing injection attacks by altering variable scope and per-character taint
4) Deployable mechanisms for detecting censorship, content alteration, and certificate replacement
5) Stateless TCP w/ payload retrieval

I hate saying “code to be released shortly”, but I want to post the slides and the code’s pretty hairy.  Email me if you want to test anything, particularly if you’d like to try to break this stuff or wrap it up for release.  I’ll also be at Toorcamp, if you want to chat there.

Categories: Security

RDP and the Critical Server Attack Surface

March 18, 2012

MS12-020, a use-after-free discovered by Luigi Auriemma, is roiling the Information Security community something fierce. That’s somewhat to be expected — this is a genuinely nasty bug. But if there’s one thing that’s not acceptable, it’s the victim shaming.

As people who know me, know well, nothing really gets my hackles up like blaming the victims of the cybersecurity crisis. “Who could possibly be so stupid as to put RDP on the open Internet”, they ask. Well, here’s some actual data:

On 16-Mar-2012, I initiated a scan across approximately 8.3% of the Internet (300M IPs were probed; the scan is ongoing). 415K of ~300M IP addresses showed evidence of speaking the RDP protocol (about twice as many had listeners on 3389/tcp — always be sure to speak a bit of the protocol before citing connectivity!)

Extrapolating from this sample, we can see that there are approximately five million RDP endpoints on the Internet today.
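The extrapolation is straightforward arithmetic, using the scan figures from the paragraph above:

```python
# 415K hosts spoke RDP in a scan covering ~8.3% of the IPv4 space.
rdp_speakers = 415_000
coverage = 0.083

estimated_total = rdp_speakers / coverage
print(f"~{estimated_total / 1e6:.1f} million RDP endpoints")  # ~5.0 million
```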

Now, some subset of these endpoints are patched, and some (very small) subset of these endpoints aren’t actually the Microsoft Terminal Services code at all. But it’s pretty clear that, yes, RDP is actually an enormously deployed service, across most networks in the world (21767 of 57344 /16’s, at 8.3% coverage).

There’s something larger going on here, and it’s the relevance of a bug to what might be called the Critical Server Attack Surface. Not all bugs are equally dangerous because not all code is equally deployed. Some flaws are simply more accessible than others, and RDP — as the primary mechanism by which Windows systems are remotely administered — is far more accessible than most people realized.

I think, if I had to enumerate the CSAS for the global Internet, it would look something like (in no particular order — thanks Chris Eng, The Grugq!):

  • HTTP (Apache, Apache2, IIS, maybe nginx)
  • Web Languages (ASPX, ASP, PHP, maybe some parts of Perl/Python. Maybe rails.)
  • TCP/IP (Windows 2000/2003/2008 Server, Linux, FreeBSD, IOS)
  • XML (libxml, MSXML3/4/6)
  • SSL (OpenSSL, CryptoAPI, maybe NSS)
  • SSH (OpenSSH, maybe dropbear)
  • telnet (bsd telnet, linux telnet) (not enough endpoints)
  • RDP (Terminal Services)
  • DNS (BIND9, maybe BIND8, MSDNS, Unbound, NSD)
  • SNMP
  • SMTP (Sendmail, Postfix, Exchange, Qmail, whatever GMail/Hotmail/Yahoo run)

I haven’t exactly figured out where the line is — certainly we might want to consider web frameworks like Drupal and WordPress, FTP daemons, printers, VoIP endpoints and SMB daemons, not to mention the bouillabaisse that is the pitiful state of VPNs all the way into 2012 — but the combination of “unfirewalled from the Internet” and “more than 1M endpoints” is a decent rule of thumb. Where people are perhaps blind is in the dramatic underestimation of just how many Microsoft shops there really are out there.

We actually had the opposite situation a little while ago, with this widely discussed bug in telnet. Telnet was the old way Unix servers were maintained on the Internet. SSH rather completely supplanted it, however — the actual data revealed only 180K servers left, with but 20K even potentially vulnerable.

RDP’s just on a different scale.

I’ve got more to say about this, but for now it’s important to get these numbers out there. There’s a very good chance that your network is exposing some RDP surface. If you have any sort of crisis response policy, and you aren’t completely sure you’re safe from the RDP vulnerability, I advise you to invoke it as soon as possible.

(Disclosure: I do some work with Microsoft from time to time. This is obviously being written in my personal context.)

Categories: Security

Open For Review: Web Sites That Accept Security Research

February 26, 2012

So one of the core aspects of my mostly-kidding-but-no-really White Hat Hacker Flowchart is that, if the target is a web page, and it’s not running on your server, you kind of need permission to actively probe for vulnerabilities.

Luckily, there are actually a decent number of sites that provide this permission.

37 Signals
UPDATE 2, courtesy of Neal Poole:
Reddit (this is particularly awesome)
Constant Contact

One could make the argument that you can detect who in the marketplace has a crack security team, by who’s willing and able to commit the resources for an open vulnerability review policy.

Some smaller sites have also jumped on board (mostly absorbing and reiterating Salesforce’s policy — cool!):

Simplify, LLC
Team Unify
Modus CSR

There are some interesting implications to all of this, but for now let’s just get the list out there. Feel free to post more in the comments!

Categories: Security

White Hat Hacker Flowchart

February 20, 2012

For your consideration. Rather obviously not to be taken as legal advice. Seems to be roughly what’s evolved over the last decade.

Categories: lulz, Security

#Nerdflix. Because LOL

February 19, 2012

So, one of my goals for 2012 has been to write quite a bit more — more code, and more long form analysis. So far, so good.

Doesn’t mean it’s all going to be about security, or even that it’s all going to be all that serious. My whole #Protolol thing worked out pretty amusingly…and so, here’s #nerdflix.

LOL. Some highlights. I can’t even imagine the horrible punnery I’m going to wake up to.

UPDATE 1: Apparently at some point #nerdflix was trending worldwide? It’s certainly taken San Francisco by storm ;)

Anyway, here’s more.

Update 2: Also, Germany. This is not surprising.

@dakami: Memory Resident Evil
@dakami: Driving Miss Daisy Chain (h/t @wadebaker)
@dakami: Raiders of the Lost ARP (h/t @quadling)
@dakami: –wx—— (h/t @ennor)
@AtheistHacker: A Beautiful BIND
@RySub Desperate house wifi’s
@JGoldOrlando: no SALT
@johngineer: Ohm Alone
@kickfroggy: I Now Pronounce You PCI Compliant
@kickfroggy: There’s Something About Mary’s Code
@KrisPaget: Full Metal Packet
@l0qii: Beverly Hills Pcap
@marshray: Batman Returns-into-libc
@mratzlaff: Extremely pwned and Incredibly hosed
@otanistudio: ~/alone
@otanistudio: HTML5 Gordon
@plaverty24: Born on December 31, 1969
@redtwitdown: The Kaminsky Code (cc @dakami)
@wadebaker: Gone with the rm Memorable line: “frankly Scarlet, -rf”
@Walshman23: Girl, Interrupt-driven
@jadedsecurity: Apache Down
@chenb0x: The Netbook
@manzuik: Whitehats Can’t Code.
@afabmedia: Magnum, 3.1415926535897932384626433832795028841971693993751058209
@felipeLvalero: RT @Mackaber: RT @Voulnet: Despicable WindowsME / Lol
@0x17h: Gone with the Windows
@vogon: @dakami The SHA-1 Redemption
@vogon: @dakami 1U Over the Cuckoo’s Nest
@ThemsonMester: Oceans 802.11
@enriquereyes: 500 days of evaluation period
@a_r_w: The TEMPEST (staring Van Eck)
@marshray: The Bitcoin Miner’s Daughter
@gbrunkhorst: The Little Endian in the Cupboard
@bm_: RT @_jtmelton: @dakami Lost in Machine Translation
@kickfroggy: Paul Blart: Mall CISSP << lol
@0x17h: The Deprecated
@0x17h: All About Eve Online
@afabmedia: So I Married a VAX Murderer
@infojeff: Pink Floyd The Wall of Sheep
@jdunck: Referer never dies /cc @kevinmarks
@kickfroggy: Fast Times at 802.3bg
@kickfroggy: How to Train Your User
@liambeeton: Firewall Club
@longobord: Schindler’s Linked List
@mfukar: reinterpret_cast Away
@obscuresec: Inglorious Hackers
@obscuresec: Toy Story 2++ (should be Toy Story ++2, actually. –Dan)
@pageman: The preg_replace ment Killers
@rattis: hackers on a plane (come one, someone had to do it). (HOAP, is a real thing).
@RySub: Puss in boot sector
@someara: @dakami Oh Backup Where Art Thou
@theharmonyguy: @dakami XSS in the City, with sequel XSS in the City: alert(2)
@Twirrim: 12 Angry Sysadmins
@redtwitdown: @Twirrim Saving Private Ryan [XX...................] 5.7% 16h47m remaining.
@0x17h: Twelve Code Monkeys
@davehull: APT Pupil
@Mackaber: The Angry Birds
@spridel11: $ whoami Legend
@Pickering: The King’s Speech Recognition Software
@BrightApollo: Saving Private Keys
@fkorling: Apocalype System.currentTimeMillis()
@OhAxi: Teenage Mutant Logo Turtles
@andreasudo: Planet of the APIs
@Ryallcowling: #00FF00 Lantern
@Pickering: The FIOS Connection
@andreasudo: Citizen DANE
@courtneyBolton: Gone with the Meme
@danvesma: Sense and adSensibility
@fkorling: In Her Majesty’s SQL Server
@ThemsonMester: To Kill an AWKing Bird books to movie…
@eikonoklastic: I Know What You Did Last Summer of Code
@mfukar: Port Knocked Up
@mfukar: Fantastic ARCFOUR
@ennor: Top GNU
@SecurityHumor: Two Macs One CUPS
@Madrox: Harry Potter and the Order Of Operations
@hmier: The Manchurian release candidate
@rogueclown: Kernighan and Ritchie do America
@alanpdx: Death Race Condition 2000
@hmier: A beautiful BIND
@Tylos: The Ring 0
@Tylos: ulimitless
@0x17h: The Hunchback of Notre DOM
@afabmedia: Information Technology Came from Outer Space
@Voulnet: BackTrack Mountain
@scottymuse: Sopadish
@davidgropper: The Girl With The Snapdragon Tattoo (cc: @0x17h)
@liambeeton: Independence 0day
@johngineer: 2 Fast 2 Fourier
@l0qii: elements[4]
@Slanderous23: Beauty and the BSD
@dakami: The A* Team (h/t @GKokoris)
@GKokoris: Jurassic PARC
@drb0n3z: James and the Giant Peach Fuzzer
@Mackaber: Indiana Jones and the Last Cross-site scripting
@artisan002: DIMM and DIMMer
@yawnbox: Stateless in Seattle
@addelindh: Low Orbit Ion Canonball Run
@buckstwits: Sudo The Right Thing
@ma1: A Phisher Called from Rwanda
@speno: Enemy Minecraft
@Rzieher: Sheldon Island
@Rzieher: The Men Who Stare at Goatse
@damphi: The good, the bad and the code you inherited from your predecessor
@damphi: Python on a plane
@agnat: 007 – License to … read, write and execute
@b4seb4nd: A League of Their Chown
@damphi: Deep Packet Inspector Gadget
@damphi: RoboCopy
@hvcco: Terminate and stay resident evil
@Nightwolf42: rm -rf / now
@0x17h: How to Train Your Dragon NaturallySpeaking
@FnordZilla: The Breakpoint Club
@0x17h: Paypal It Forward
@bocki_nbg: monty.py
@ArchangelAmael: Guess Who’s coming to Audit
@aymro: Google Intentions 2.0
@alech: When Alice Met Bob.
@stronate: We Need to Talk About Kelvin
@ArchangelAmael: The Birth of an Exploit
@meznak: vi for vendetta
@j4rkiLL: Alien vs. Administrator
@ArchangelAmael: The Grapes of $PATH
@artisan002: Two and a Half Men in the Middle
@angbor3D: Wall-E The Garbagecollector
@SeveQ: The Hills have iPhones
@ThemsonMester: Zack and Miri Configure, Make, Make Install a Porno
@artisan002: Dirty Twisted Pair
@swiftruss: Cowboys and Alienware
@mechtroid: Crank IPv4: Chev must surf the internet constantly. If he doesn’t find something funny every minute, he goes into cardiac arrest.
@eikonoklastic: How Stella Got Her Algorithm Back
@Nightwolf42: It’s a wonderful runtime
@KlaasJohansen: The Big Kernel Lock Theory #ReplaceFilmTitlesWithKernel
@Megaglest: Four web browsers and an Internet
@j0sema: LDAP Confidential
@FnordZilla: Clock Generator Orange
@TeethGrind3r: Crouching Tigerteam, Hidden Night Dragon
@_7and6_: “We can’t stop here, this is bash-country!” – Fear and Loathing in /usr/bin #nerdflix
@TeethGrind3r: Citizen Cain & Abel
@damphi: Dick Tracert
@_7and6_: Sex, Lies and Avi-Files
@yooogan: Watchdogmen
@stevelord: No country for old code
@_c_v_e_n: The Seven Layer Itch
@0x17h: Annie Hall-effect sensor
@apilosov: Da Vixie Code
@ArchangelAmael: The Magnificent Seven; layers of the OSI model
@xomexx: One overflow the cuckoo’s nest
@krizzzn: SELECT American FROM London WHERE wolf = 1
@GoddamnGeek: @dakami EIP Man
@0x17h: Malcolm X11
@imlxgr: The Bucket Sort
@ArchangelAmael: Who >iFramed< Roger Rabbit
@Meihrenfacht: @wlet @BeerweasleDev Lost in compilation ROFL
@ex1up: momopeche: Weekend At Bernoulli’s
@SubwayDealer: SHA Wars
@Voulnet: Batman BEGINS; DECLARES;
@Voulnet: netcat in the Hat
@fmiffal: grep -c “Monte Cristo”
@ira_victor: Monty Python and The Advanced Persistent Grail
@HamdanD: Sharepoint must die
@mrdaiber: wget -shorty / i See Deadpeople / wag the .doc / ctrl-s ‘lastdance.txt’
@HeshamAboElMagd: Pulp Function
@EmadMokhtar: RT @Voulnet: this.IsSparta();
@Ahmad_Hadeed: The Matrix[][]
@juphoff: Tim Cook, the Thief, His Wife & Her Lover.
@Voulnet: VBScript She Wrote
@old_man_mose: Indiana Jones and the Mountain of Dew
@Ibrahimism: How I hacked your mother
@f70r1: Grave of the Firefoxes
@otac0nFluX: the /x41 team
@mrdaiber: Alice in WLANd
@jochenWolters: Family GUI
@saschaleib: 0x20, the final frontier #NotAMovieTitle
@MagnusRevang: Enemy of the Statemachine
@f70r1: American History XML – an experienced coder wants to prevent his young colleague from taking the same wrong path that he did.
@cmhscout: Rear Windows 3.1
@imlxgr: 2038
@yeahyeahyens: dude, where’s my cdr?
@deepsec: Game of Inodes.
@lnxkid: The XORcist
@f70r1: The Old Man and the C
@webtonull: Ah, forgot “NoSQL for old men”
@mrngm: gcc -Wall -E
@mrdaiber: my big fat32 Greek wedding
@mrdaiber: revenge of the sithadmin
@kinesias: Mozilla (by Roland Emmerich)
@bluehavana: Sk!diocracy
@Politik2_0: 9 1/2 leaks
@SwieSchnubb: The Hurt Logger
@brx0: I dream of JNI
@mbeison: Murder on the Object Orient Express
@robotviki: Das BOOTP
@0xerror: How I Met Your Motherboard
[redacted]: Break Point
@Bashar3A: She’s Out Of My Scope
@JarrarM: AJAX Shrugged
@brx0: Spongebob O(N^2)Pants

Categories: lulz

Primal Fear: Demuddling The Broken Moduli Bug

February 17, 2012

There’s been a lot of talk about this supposed vulnerability in RSA, independently discovered by Arjen Lenstra and James P. Hughes et al, and Nadia Heninger et al. I wrote about the bug a few days ago, but that was before Heninger posted her data. Let’s talk about what’s one of the more interesting, if misunderstood, bugs in quite some time.



  • The “weak RSA moduli” bug is almost exclusively (and possibly entirely) found within certificates that were already insecure (i.e. expired, or not signed by a valid CA).
  • This attack almost certainly affects not a single production website.
  • The attack utilizes a property of RSA whereby if half the private key material is shared between two public keys, the private key is leaked. Researchers scaled this method to cross-compare every RSA key on the Internet against every other RSA key on the Internet.
  • The flaw has nothing to do with RSA or “multi-secret” systems. The exact same broken random number generator would play just as much havoc, if not more, with “single-secret” algorithms such as ECDSA.
  • DSA, unlike RSA, leaks the private key with every signature under conditions of faulty entropy. That is arguably worse than RSA which leaks its private key only during generation, only if a similar device emits the same key, and only if the attacker finds both devices’ keys.
  • The first major finding is that most devices offer no crypto at all, and even when they do, the crypto is easily man-in-the-middled due to a presumption that nobody cares whether the right public key is in use.
  • Cost and deployment difficulty drive the non-deployment of cryptographic keys even while almost all systems acquire enough configuration for basic connectivity.
  • DNSSEC will dramatically reduce this cost, but can do nothing if devices themselves are generating poor key material and expecting DNSSEC to publish it.
  • The second major finding is that it is very likely that these findings are only the low hanging fruit of easily discoverable bad random number generation flaws in devices. It is specifically unlikely that only a third of one particular product had bad keys, and the rest managed to call quality entropy.
  • This is a particularly nice attack in that no knowledge of the underlying hardware or software architecture is required to extract the lost key material.
  • Recommendations:
    1. Don’t panic about websites. This has very little to absolutely nothing to do with them.
    2. When possible and justifiable, generate private key material outside your embedded devices, and push the keys into them. Have their surrounding certificates signed, if feasible.
    3. Audit smartcard keys.
    4. Stop buying or building CPUs without hardware random number generators.
    5. Revisit truerand, an entropy source that only requires two desynchronized clocks, possibly integrating it into OpenSSL and libc.
    6. When doing global sweeps of the net, be sure to validate that a specific population is affected by your attack before including it in the vulnerability set.
    7. Start seriously looking into DNSSEC. You are deploying a tremendous number of systems that nobody can authenticate.

If there’s one thing to take away from this entire post, it’s the following line from Nadia Heninger’s writeup:

Only one of the factorable SSL keys was signed by a trusted certificate authority and it has already expired.

What this means, in a nutshell, is that there was never any security to be lost from crackable RSA keys; due to failures in key management, almost certainly all of the affected keys were vulnerable to being silently “swapped out” by a Man-In-The-Middle attacker. It isn’t merely the fact that all the headlines proclaiming “0.2% of websites using RSA are insecure” are straight up false, because the flaws are concentrated on devices. It’s also the case that the devices themselves were already using RSA insecurely to begin with, merely by being deployed outside a key management system.

If there’s a second thing to take away, it’s that this is important research with real actions that should be taken by the security community in response. There’s no question this research is pointing to things that are very wrong with the systems we depend on. Do not make the mistake of discounting this work as merely academic. We have a problem with random numbers, that is even larger than Lenstra and Hughes and Heninger are finding with their particular mechanism.


To grossly simplify, RSA works by:

1) Generating two random primes, p and q. These are private.
2) Multiplying p and q together into n. This becomes public, because figuring out the exact primes used to create n is Hard.
3) Content is signed or decrypted with p and q.
4) Content is verified or encrypted with n.

It’s obvious that if p is not random, q can be calculated: n/p==q, because p*q==n. What’s slightly less obvious, however, is that if either p or q is repeated across multiple n’s, then the repeated value can be trivially extracted using Euclid’s ancient Greatest Common Divisor (GCD) algorithm. The trick looks something like this (you can follow along with Python, if you like):

>>> from gmpy import *
>>> p=next_prime(rand("next"))
>>> q=next_prime(rand("next"))
>>> p
>>> q
>>> n=p*q
>>> n

Now we have two primes, multiplied into some larger n.

>>> q2=next_prime(rand("next"))
>>> n2=p*q2
>>> n2

Ah, now we have n2, which is the combination of a previously used p, and a newly generated q2. But look what an attacker with n and n2 can do:

>>> gcd(n,n2)
>>> p

Ta-da! p falls right out, and thus q as well (since again, n/p==q). So what these teams did was gcd every RSA key on the Internet, against every other RSA key on the Internet. This would of course be quite slow — six million keys times six million keys is thirty six trillion comparisons — and so they used a clever algorithm to scale the attack such that multiple keys could be simultaneously compared against one another. It’s not clear what Lenstra and Hughes used, but Heninger’s implementation leveraged some 2005 research from (who else?) Dan Bernstein.
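With small stand-in primes, the core of the attack fits in a few lines of stdlib Python (toy numbers for speed; the real scans used Bernstein-style batch GCD rather than this pairwise version):

```python
from math import gcd

# Two "RSA moduli" whose generators shared the prime p -- the situation
# a broken RNG produces in the field. (Small primes, purely illustrative.)
p, q1, q2 = 1000003, 1000033, 1000037
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)     # Euclid does all the work
assert shared == p
assert n1 // shared == q1 and n2 // shared == q2
```

Factoring either modulus alone is hard; given both, the shared factor falls out in microseconds.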

That is the math. What is the impact, upon computer security? What is the actionable intelligence we can derive from these findings?

Uncertified public keys or not, there’s a lot to worry about here. But it’s not at all where Lenstra and Hughes think.


Lenstra and Hughes, in arguing that “Ron was Wrong, Whit was right”, declare:

Our conclusion is that the validity of the assumption is questionable and that generating keys in the real world for “multiple-secrets” cryptosystems such as RSA is significantly riskier than for “single-secret” ones such as ElGamal or (EC)DSA which are based on Diffie-Hellman.

The argument is essentially that, had those 12,000 keys been ECDSA instead of RSA, then they would have been secure. In my previous post on this subject (a post RSA Inc. itself is linking to!) I argued that this paper, while chock-full of useful and important data, failed to make the case that the blame could be laid at the feet of the “multiple-secret” RSA. Specifically:

  • Risk in cryptography is utterly dominated, not by cipher selection, but by key management. It does not matter the strength of your public key if nobody knows to demand it. Whatever the differential risk is that is introduced by the choice of RSA vs. ECDSA pales in comparison to whether there’s any reason to believe the public key in question is the actual key of the desired target. (As it turns out, few if any of the keys with broken moduli, had any reason to trust them in the first place.)
  • There aren’t enough samples of DSA in the same kind of deployments to compare failure rates from one asymmetric cipher to another.
  • If one is going to blame a cipher for implementation failures in the field, it’s hard to blame the maybe dozens of implementations of RSA that caused 12,000 failures, while ignoring the one implementation of ECDSA that broke millions upon millions of PlayStation 3s.

But it wasn’t until I read the (thoroughly excellent) blog post from Heninger that it became clear that, no, there’s no rational way this bug can be blamed on multi-secret systems in general or RSA in particular.

However, some implementations add additional randomness between generating the primes p and q, with the intention of increasing security:

p = prng.generate_random_prime()
q = prng.generate_random_prime()
N = p*q

If the initial seed to the pseudorandom number generator is generated with low entropy, this could result in multiple devices generating different moduli which share the prime factor p and have different second factors q. Then both moduli can be easily factored by computing their GCD: p = gcd(N1, N2).

OpenSSL’s RSA key generation functions this way: each time random bits are produced from the entropy pool to generate the primes p and q, the current time in seconds is added to the entropy pool. Many, but not all, of the vulnerable keys were generated by OpenSSL and OpenSSH, which calls OpenSSL’s RSA key generation code.

Would the above code become secure, if the asymmetric cipher selected was the single-secret ECDSA instead of the multi-secret RSA?


No. All public keys emitted would be identical (as identical as they’d be with RSA, anyway). I suppose it’s possible we could see the following construction instead:

ecdsa_private_key=concatenate(first_half, second_half);

…but I don’t think anyone would claim that ECDSA, DSA, or ElGamal is secure in the circumstance where half its key material has leaked. (Personal note: Prodded by Heninger, I’ve personally cracked a variant of RSA-1024 where all bits set to 1 were revealed. It wasn’t difficult — cryptosystems fail quickly when their fundamental assumptions are violated.)

Hughes has made the argument that RSA is unique here because the actions of someone else — publishing n composed of your p and his q — impact an innocent you. Alas, you’re no more innocent than he is; you’re republishing his p, and exposing his q2 as much as he’s exposing your q. Not to mention, in DSA, unlike RSA, you don’t need someone else’s help to expose your private key. A single signature, emitted without a strong RNG, is sufficient. As we saw from the last major RNG flaw, involving Debian:

Furthermore, all DSA keys ever used on affected Debian systems for signing or authentication purposes should be considered compromised; the Digital Signature Algorithm relies on a secret random value used during signature generation.
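The algebra behind that Debian advisory is worth seeing once. Under the DSA signing equation s = k⁻¹(h + xr) mod q, a known (or repeated, and hence solvable) nonce k hands over the private key x. A toy sketch with stand-in numbers (not real DSA parameters):

```python
q = 2**31 - 1        # a prime, standing in for the DSA subgroup order
x = 123456789        # the private key an attacker wants
k = 987654321        # the per-signature nonce -- here, known to the attacker
h = 555555555        # hash of the signed message, reduced mod q

r = pow(7, k, q)                        # stand-in for (g^k mod p) mod q
s = pow(k, -1, q) * (h + x * r) % q     # the DSA signing equation

# Rearranging: s = k^-1 (h + x r)  =>  x = (s k - h) r^-1  (mod q)
recovered = (s * k - h) * pow(r, -1, q) % q
assert recovered == x
```

One signature, one bad nonce, and the key is gone; RSA at least requires a second colliding device before its key leaks.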

The fact that Lenstra and Hughes found a different vulnerability rate for DSA keys than RSA keys comes from the radically different generator for the former — PGP keys as generated by GNU Privacy Guard or PGP itself. This is not a thing commonly executed on embedded hardware.

There is a case to be made for the superiority of ECDSA over RSA. Certainly this is the conclusion of various government entities. I don’t think anyone would be surprised if the latter was broken before the former. But this situation, this particular flaw, reflects nothing about any particular asymmetric cipher. Under conditions of broken random number generation, all ciphers are suspect.

Can we dismiss these findings entirely, then?

Let’s talk about that.


The headlines have indeed been grating. “Only 99.8% of the world’s PKI uses secure randomness”, said TechSnap. “EFF: Tens of thousands of websites’ SSL ‘offers effectively no security’”, from Boing Boing. And, of course, from the New York Times:

The operators of large Web sites will need to make changes to ensure the security of their systems, the researchers said.

The potential danger of the flaw is that even though the number of users affected by the flaw may be small, confidence in the security of Web transactions is reduced, the authors said.

“Some people may say that 99.8 percent security is fine,” he added. That still means that approximately as many as two out of every thousand keys would not be secure.

The operators of large web sites will not need to make any changes, as they are excluded from the population that is vulnerable to this flaw. It is not two out of every thousand keys that are insecure. Out of those keys that had any hope of being secure to begin with — those keys that participate in the (flawed, but bear with me) Certificate Authority system — approximately none are threatened by this attack.

It is not two out of every 1000 already insecure keys that are insecure. It is 1000 of 1000. But that is not changed by the existence of broken RSA moduli.

To be clear, these numbers do not come from Lenstra and Hughes, who have cautiously refused to disclose the population of certificates, nor from the EFF, who have apparently removed those particular certificates from the SSL Observatory (interesting side project: Diff against the local copy). They come from Heninger’s research, which not only discloses who isn’t affected (“Don’t worry, the key for your bank’s web site is probably safe”), but analyzes the population that is:

Firewall product X:
52,055 hosts in our SSL dataset
6,730 share public keys
12,880 have factorable keys

Consumer-grade router Y:
26,952 hosts in our SSL dataset
9,345 share public keys
4,792 have factorable keys

Enterprise remote access solution Z:
1,648 hosts in our SSL dataset
24 share public keys
0 factorable

Finally, we get to why this research matters, and what actions should be taken in response to it. Why are large numbers of devices — of security devices, even! — not even pretending to successfully manage keys? And worse, why are the keys they do generate (even if nobody checks them) so insecure?


Practically every device on a network is issued an IP address. Most of them also receive DNS names. Sometimes assignment is dynamic via DHCP, and sometimes (particularly on corporate/professionally managed networks) addressing is managed statically through various solutions, but basic addressing and connectivity across devices from multiple vendors is something of a solved problem.

Basic key management, by contrast, is a disaster.

You don’t have to like the Certificate Authority system. I certainly don’t; while I respect the companies involved, they’re pretty clearly working with a flawed technology. (I’ve been talking about this since 2009; see the slides here. Slide 22 talks about the very intermediate certificate issuance that people are now worrying about with respect to Trustwave and GeoTrust.) However…

Out of hundreds of millions of servers on the Internet with globally routable IP addresses, only about six million present public keys to be authenticated against. And of those with keys, only a relatively small portion — maybe a million? — are actually enrolled into the global PKI managed by the Certificate Authorities.

The point is not that the CAs don’t work. The point is that, for the most part, clients don’t care; nothing is checking the validity of device certificates in the first place. Most devices, even security devices, are popping these huge errors every time the user connects to their SSL ports. Because this is legitimate behavior — because there’s no reason to trust the provenance of a public key, right or wrong — users click through.

And why are so many devices, even in the rare instance that they have key material on their administrative interfaces, unenrolled in PKI? Because it requires this chain of cross-organizational interaction to be followed. The manufacturer of the device could generate a key at the factory, or on first boot, but they’re not the customer, so they can’t have certificates signed in a customer’s namespace (which might change, anyway). Meanwhile, the customer, who can easily assign IP addresses and DNS names without constantly asking for anyone’s permission, has to integrate with an external entity, extract key material from the manufacturer’s generated content, provide it to that entity, integrate the response back into the device, pay the entity, and do it all over again within a year or three. Repeat for every single device.

Or, they could just not do all that. Much easier to simply not use the encrypted interface (it’s the Internal Network, after all), or to ignore the warning prompts when they come up.

One of the hardest things to convince people of in security is that it’s not enough to make security better. Better doesn’t mean anything; it’s a measure without a metric. No, what’s important is making security cheaper. How do we start actually delivering value, without imagining that customers have an infinite budget to react with? How do we even measure cost?

I can propose at least one metric: How many meetings need to be held to deploy a particular technology? Money is one thing, but the sheer time required to get things done is a remarkably effective barrier to project completion. And the further away somebody is from a deploying organization — another group, another company, another company that needs to get paid, thus requiring a contract and Purchasing’s involvement — the worse things are.

Devices are just not being enrolled into the CA system. Heninger found 19,610 instances of a single firewall — a security device — and not one of those instances was even pretending to be secure. That’s a success rate not of 99.8%, but of 0.00%.

Whatever good we’ve done on production websites has been met with deafening silence across, “firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones”. Believe me, this abject security failure cares not a whit about the relative merits of RSA vs. ECDSA. The key management penalty is just too damn high.

The solution, of course, is DNSSEC. It’s not entirely true that third parties aren’t involved with IP assignment; it’s just that once you’ve got your IP space, you don’t have to keep returning to the well. It’s the same with DNS — while yes, you need to maintain your registration for whatever.com, you don’t need to interact with your registrar every time you add or remove a host (unless you’d like to, anyway). In the future, you’ll register keys in your own DNS just as you register IP addresses. It will not be drama. It will not be a particularly big deal. It will be a simple, straightforward, and relatively inexpensive mechanism by which devices are brought into your network, and then if you so choose, recognized as yours worldwide.
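To sketch what that looks like with the freshly minted DANE TLSA record type (RFC 6698) — the hostname and digest here are hypothetical — enrolling a device becomes one more line in a zone you already control:

```
; Hypothetical zone fragment for example.com, itself signed with DNSSEC.
; "3 0 1" = self-issued end-entity certificate, matched by SHA-256 of the cert.
_443._tcp.fw01.example.com. IN TLSA 3 0 1 ( <sha256-of-the-device-certificate> )
```

No external signing party, no annual renewal contract — just a record in your own namespace.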

This will be very exciting, and hopefully, quite boring.

It’s not quite so simple, though. DNSSEC offers a very nice model — one that will be completely vulnerable to the findings of Lenstra, Hughes, and Heninger, unless we start dealing with these badly generated private keys.


DNSSEC will make key management scale. That is not actually a good thing if the keys it’s spreading are in fact insecure! What we’re finding here is that a lot of small devices are generating keys insecurely. The biggest mistake we can make here is thinking that only the keys vulnerable to this particular attack were badly generated.

There are many ways to fail at key generation that aren’t nearly as trivially discoverable as a massive group GCD. Heninger found that Firewall Vendor X had over a third of their keys either crackable via GCD or repeated elsewhere (as you’d expect, if both p and q were emitted from identical entropy). One would have to be delusional to expect that the code is perfectly secure two thirds of the time, and only fails in this manner every once in a while. The primary lesson from this incident is that there’s a lot more where the Debian flaw came from. According to Heninger, much of the hardware they saw was actually running OpenSSL, pretty much the gold standard in available libraries for open cryptographic work.

It still failed.

This particular attack is nice, in that it’s independent of the particular vagaries of a device implementation. As long as any two nodes share either p or q, both p and q will be disclosed. But this is a canary in the coal mine — if this went wrong, so too must a lot more have. What I predict is that, when we crack devices open, we’re going to see mechanisms that rarely repeat, but still have an insufficient amount of entropy to “get the job done” — nodes seeding with MAC addresses (far fewer than 48 bits of entropy, when you think about it), the ever classic clock seeds, even fixed “entropic starting points” stored on the file system are going to be found.
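The disclosure itself is a one-liner — a toy sketch, with small primes standing in for 512-bit ones (the arithmetic is identical at real key sizes):

```python
from math import gcd

# Toy-sized primes stand in for 512-bit ones; the arithmetic is identical.
p = 10007                   # the prime both devices' RNGs agreed on
q1, q2 = 10009, 10037       # the primes that differed
n1, n2 = p * q1, p * q2     # the two published moduli

shared = gcd(n1, n2)        # cheap, even at real key sizes
assert shared == p
assert (n1 // shared, n2 // shared) == (q1, q2)  # both private keys, fully disclosed
```

One shared prime, one GCD, and both parties’ private keys fall out by simple division.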

Tens of thousands of random keys being not quite so random is one thing. But that a generic mechanism found this many issues strongly implies a device-specific mechanism will be even more effective. We should have gone looking after the Debian bug. I’m glad Lenstra, Hughes, and Heninger et al. did.


So what to do? Here is my advice.

1) Obviously, don’t panic about any production website being vulnerable. This is pretty much a device web interface problem.

2) When possible and justifiable, generate private key material outside your embedded devices, and push the keys into them. Have their surrounding certificates signed, if feasible.

This is cryptographic heresy, of course. Private keys should not be moved, as bits are cheap. But, you know, some bits are cheaper than others, and at this point I’ve got a lot more faith in a PC generating key material than in some random box. (No way I would have argued this a few weeks ago. Like I said from the beginning, this is excellent work.) In particular, I’d regenerate keys used to authenticate clients to servers.
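A sketch of what “generate outside, push in” might look like with stock OpenSSL — the filenames and the CN are hypothetical, and the final push step is whatever your device’s import mechanism happens to be:

```shell
# Generate the device's key where entropy is plentiful: on a real PC.
openssl genrsa -out device.key 2048

# Build a CSR for the device's name; have it signed if feasible.
openssl req -new -key device.key \
    -subj "/CN=fw01.internal.example.net" \
    -out device.csr

# Sanity-check the key before pushing it to the appliance.
openssl rsa -in device.key -check -noout
```

Then import key and certificate through the device’s own management interface, rather than trusting its first-boot RNG.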

3) Audit smartcard keys.

The largest population of key material that Lenstra, Hughes, and Heninger could never have looked at is the public keys contained within the world’s smartcard population. Unless you’re the manufacturer, you have no way to “look inside” to see if the private keys are being reasonably generated. But you do have access to the public keys, and most likely you don’t have so many of them that the slow mechanism (gcd’ing each modulus against each other modulus) is infeasible. Ping me if you’re interested in doing this.
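The slow mechanism is little more than a double loop — a sketch, with toy moduli standing in for the smartcards’ real public keys (Heninger et al. used a far faster batch-GCD tree to survey millions):

```python
from itertools import combinations
from math import gcd

def audit_moduli(moduli):
    """Return (i, j, shared_prime) for every pair of moduli sharing a factor."""
    hits = []
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = gcd(n1, n2)
        if 1 < g < n1:      # a shared prime: both keys are now factored
            hits.append((i, j, g))
    return hits

# Toy moduli; the first two were built from a repeated prime, 10007.
moduli = [10007 * 10009, 10007 * 10037, 10039 * 10061]
assert audit_moduli(moduli) == [(0, 1, 10007)]
```

Quadratic in the number of keys, but for a few thousand smartcards that’s perfectly tolerable.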

4) Stop buying or building CPUs without hardware random number generators.

Alright, CPU vendors. There’s not many of you, and it’s time for you to stop being so deterministic. By 2014, if your CPU does not support cryptographically strong random number generation, it really doesn’t belong in anything trying to be secure. Since at this point, that’s everything, please start providing a simple instruction that isn’t hidden in some random chipset somewhere for developers and kernels to use to seed proper entropy. It doesn’t even need to be fast.

I hear Intel is adding a Hardware RNG to “Ivy Bridge”. One would also be nice for Atom.

5) Revisit truerand, an entropy source that only requires two desynchronized clocks, possibly integrating it into OpenSSL and libc.

This is the most controversial advice I’ve ever given, and most of you have no idea what I’m talking about. From the code, written originally seventeen years ago by D. P. Mitchell and modified by the brilliant Matt Blaze (who’s probably going to kill me):

* Truerand is a dubious, unproven hack for generating “true” random
* numbers in software. It is at best a good “method of last resort”
* for generating key material in environments where there is no (or
* only an insufficient) source of better-understood randomness. It
* can also be used to augment unreliable randomness sources (such as
* input from a human operator).
* The basic idea behind truerand is that between clock “skew” and
* various hard-to-predict OS event arrivals, counting a tight loop
* will yield a little bit (maybe one bit or so) of “good” randomness
* per interval clock tick. This seems to work well in practice even
* on unloaded machines…

* Because the randomness source is not well-understood, I’ve made
* fairly conservative assumptions about how much randomness can be
* extracted in any given interval. Based on a cursory analysis of
* the BSD kernel, there seem to be about 100-200 bits of unexposed
* “state” that changes each time a system call occurs and that affect
* the exact handling of interrupt scheduling, plus a great deal of
* slower-changing but still hard-to-model state based on, e.g., the
* process table, the VM state, etc. There is no proof or guarantee
* that some of this state couldn’t be easily reconstructed, modeled
* or influenced by an attacker, however, so we keep a large margin
* for error. The truerand API assumes only 0.3 bits of entropy per
* interval interrupt, amortized over 24 intervals and whitened with
* SHA.

Disk seek time is often unavailable on embedded hardware, because there simply is no disk. Ambient network traffic rates are often also unavailable because the system is not yet on a production network. And human interval analysis (keystrokes, mouse clicks) may not be around, because there is no human. (Remember, strong entropy is required all the time, not just during key generation.)

What’s also not around on embedded hardware, however, is a hypervisor that’s doling out time slices. And consider: What makes humans an interesting thing to build randomness off of is that we’re operating on a completely different clock and frequency than a computer. We are a “slow oscillator” that chaotically interacts with a fast one.

Well, most every piece of hardware has at least two clocks, and they are not in sync: The CPU clock, and the Real Time Clock. You can add a third, in fact, if you include system memory. All three may be made “slow oscillators” relative to the rest, if only by running in loops. At the extreme scale, you’re going to find slight deviations that can only be explained not through deterministic processes but through the chaotic and limited tolerances of different clocks running at different frequencies and different temperatures.
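A minimal sketch of the two-clock idea — this is toy Python illustrating the principle, not DakaRand’s actual code (which is C, and far more conservative): count a tight loop against a coarser timer, keep only the least significant bit of each count, and whiten with SHA, as truerand does.

```python
import hashlib
import time

def jitter_bits(n_bits: int) -> bytes:
    """Harvest n_bits (a multiple of 8) of loop-count jitter, whitened with SHA-256."""
    bits = []
    for _ in range(n_bits):
        deadline = time.time() + 0.001   # the "slow" oscillator: wall-clock time
        count = 0
        while time.time() < deadline:    # the "fast" oscillator: a tight loop
            count += 1
        bits.append(count & 1)           # keep only the least-modelable bit
    raw = bytearray()
    for j in range(0, n_bits, 8):
        byte = 0
        for b in bits[j:j + 8]:
            byte = (byte << 1) | b
        raw.append(byte)
    return hashlib.sha256(bytes(raw)).digest()  # conservative whitening

seed = jitter_bits(256)  # roughly a quarter second of harvesting
assert len(seed) == 32
```

Whether each interval really yields a full bit of entropy is exactly the open question; a real implementation would assume far less per interval, as Blaze’s comments above insist.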

On the one hand, I might be wrong. Perhaps this research path only ends in tears. On the other hand, it is the most persistent delusion in all of computing that there’s only one computer inside your computer, rather than a network of vaguely cooperative subsystems — each with their own clocks that are in no way synchronized against one another.

It wouldn’t be the first time asynchronous behavior made a system non-deterministic.

Ultimately, the truerand approach can’t make things worse. One of the very nice things about entropy sources is that you can stir all sorts of total crap in there, and as long as there’s at least 80 or so bits that are actually difficult to predict, you’ve won. I see no reason we shouldn’t investigate this path, and possibly integrate it as another entropy source to OpenSSL and maybe libc as well.
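The “stir in total crap” property is worth a sketch of its own — the sources below are hypothetical placeholders, with SHA-256 standing in for a real extractor:

```python
import hashlib

class EntropyPool:
    """Hash-chained pool: output is unpredictable if ANY stirred source was."""
    def __init__(self):
        self._state = b"\x00" * 32

    def stir(self, contribution: bytes) -> None:
        # Chaining through SHA-256 means a garbage source cannot cancel out
        # a good one that was already absorbed.
        self._state = hashlib.sha256(self._state + contribution).digest()

    def extract(self) -> bytes:
        # Domain-separate output from state, so extraction doesn't expose the pool.
        return hashlib.sha256(b"output" + self._state).digest()

pool = EntropyPool()
pool.stir(b"\x00\x1a\x2b\x3c\x4d\x5e")  # MAC address: nearly constant across a product line
pool.stir(b"1344998375")                # boot-time clock: guessable
pool.stir(b"<two-clock jitter here>")   # the one hopefully-unpredictable source
key_material = pool.extract()
assert len(key_material) == 32
```

An attacker who can model the first two sources still has to predict the third; the weak contributions cost nothing.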

(Why am I not suggesting ‘switch your apps over to /dev/random’? Because if your developers did enough to break what is already enabled by default in OpenSSL, they did so because tests showed the box freezing up, and they aren’t going to appreciate some security yutz telling them to ship something that locks on boot.)

6) When doing global sweeps of the net, be sure to validate that a specific population is affected by your attack before including it in the vulnerability set.

7) Start seriously looking into DNSSEC. You are deploying a tremendous number of systems that nobody can authenticate.


It’s 2012, and we’re still having problems with Random Number Generators. Crazy, but we’re still suffering (mightily) from SQL injection and that’s a “fixed problem” too. This research, even if accompanied by some flawed analysis, is truly important. There are more dragons down this path, and it’s going to be entertaining to slay them.

