
On The RSA SecurID Compromise

TL;DR: Authentication failures are getting us owned. Our standard technology for auth, passwords, fail repeatedly — but their raw simplicity compared to competing solutions drives their continued use. SecurID is the most successful post-password technology, with over 40 million deployed devices. It achieved its success by emulating passwords as closely as possible. This involved generating key material at RSA’s factory, a necessary step that nonetheless created the circumstances that allowed a third party compromise at RSA to affect customers like Lockheed Martin. There are three attack vectors that these hackers are now able to leverage — phishing attacks, compromised nodes, and physical monitoring of traveling workers. SecurID users however are at no greater risk of sharing passwords, a significant source of compromise. I recommend replacing devices in an orderly fashion, possibly while increasing the rotation rate of PINs. I dismiss concerns about source compromise on the grounds that both hardware and software are readily reversed, and anyway we didn’t change operational behavior when Windows or IOS source leaked. RSA’s communications leave a lot to be desired, and even though third party security dependencies are tolerable, it’s unclear (even with claims of a “shiny new HSM”) whether this particular third party security dependency should be tolerated.


Table Of Contents:

INTRODUCTION
MINIMIZING CHANGE
MANDATING RECOVERY
ACTIONABLE INTELLIGENCE
RAMPANT SPECULATION
STRANGE BEHAVIOR
ARE THINGS BETTER?

INTRODUCTION
There’s an old story, that when famed bank robber Willie Sutton was asked why he robbed banks, he replied: “Because that’s where the money is.” Why would hackers hit RSA Security’s SecurID authenticators?

Because that’s where the money is.

The failure of authentication technologies is one of the three great issues driving the cyber security crisis (the other two being broken code and invulnerable attackers). The data isn't subtle: according to Verizon Business' Data Breach Investigations Report, over half of all compromised records were lost to attacks in which a credential breach occurred. Realistically, that means passwords failed us in some way yet again.

No passwords.
Default passwords.
Shared passwords.
Stolen passwords.

By almost any metric, passwords are a failed authentication technology. But there’s one metric in which they win.

Passwords are really simple to implement — and through their simplicity, they’re incredibly robust to real world circumstances. Those silly little strings of text can be entered by users on any device, they can be tunneled inside practically any protocol across effectively any organizational boundary, and even a barely conscious programmer can compare string equality from scratch with no API calls required.
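That simplicity is worth seeing concretely. Here's a minimal sketch of a server-side check, slightly hardened with a salted hash and a constant-time compare; every name here is mine, not from any particular system:

```python
import hashlib
import hmac

def check_password(stored_hash: bytes, salt: bytes, attempt: str) -> bool:
    # Hash the attempt with the stored salt, then compare in constant time.
    digest = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(stored_hash, digest)

salt = b"per-user-random-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
print(check_password(stored, salt, "hunter2"))  # True
print(check_password(stored, salt, "wrong"))    # False
```

Even the "proper" version is a dozen lines with no new hardware, no new protocol, and no new user behavior. That's the bar every replacement technology has to clear.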

Password handling can be more complex, but it doesn't have to be. The same can't be said of almost any of the technologies that were pegged to replace passwords, like X.509 certificates or biometrics.

It is a hard lesson, but ops — operations — is really the decider of what technologies live or die, because they’re the guys who put their jobs on the line with every deployment. Ops loves simplicity. While “secure in the absence of an attacker” is a joke, “functional in the presence of a legitimate user” is serious business. If you can’t work when you must, you won’t last long enough to not work when you must not.

So passwords, with all their failures, persist in all but the most extreme scenarios. There’s however been one technology that’s been able to get some real world wins against password deployments: RSA SecurIDs. To really understand their recent fall, it’s important to understand the terms of their success. I will attempt to explain that here.


MINIMIZING CHANGE
Over forty million RSA SecurID tokens have been sold. By a wide margin, they are the most successful post-password technology of all time. What are they?

Password generators.

Every thirty seconds, RSA SecurID tokens generate a fresh new code for the user to identify himself with. They do require a user to maintain possession of the item, much as he might retain a post-it note with a hard to remember code. And they do require the server to eventually query a backend server for the correctness of the password, much as it might validate against a centralized password repository.

Everything between the user and the server, however, stays the same. The same keys are used to enter the same character sets into the same devices, which use the same forms and communication protocols to move a simple string from client A to server B.

Ops loves things that don’t change.

It gets better, actually. Before, passwords needed to be cycled every so often. Now, because these devices are "self-cycling", the rigamarole of managing password rotation across a large number of users can be dispensed with (or kept, if desired, by rotating PINs).

Ops loves the opportunity to do less work.

When things get interesting is when you ask: How do these precious new devices get configured? Passwords are fairly straightforward to provision — give the user something temporary and hideous, then pretend they’ll generate something similar under goofy “l33tspeak” constraints. Providing the user with a physical object is in fact slightly more complex, but not unmanageably so. It’s not like new users don’t need to be provisioned with other physical objects, like computers and badges. No, the question is how much work needs to go into configuring the device for a user.

There are no buttons on a SecurID token. That doesn't just mean fewer things for a user to screw up and call help desk over, annoying Ops. It also means less work for Ops to do. Tapping configuration data into a hundred stupid little tokens, or worse, trying to explain to a user how to tap a key in, is expensive at best and miserable at worst. So here's what RSA hit on:

Just put a random key, called a seed, on each token at the factory.
Have the device combine the seed, with time, to create the randomized token output.
Link each seed to a serial number.
Print the serial number on the device.
Send Ops the seeds for the serial numbers in their possession.

Now, all Ops needs to know is that a particular user has the device with a particular serial number, and they can tie the account to the token in possession. It’s as low configuration as it comes.
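The whole scheme can be sketched in a few lines. This is emphatically not RSA's actual algorithm (which is proprietary; current tokens are AES-based); it's just an illustration of the seed-plus-time model, with every name and constant invented here:

```python
import hashlib
import hmac
import struct

def token_code(seed: bytes, t: int, step: int = 30, digits: int = 6) -> str:
    # Combine the factory seed with the current 30-second time window.
    window = t // step
    mac = hmac.new(seed, struct.pack(">Q", window), hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10**digits).zfill(digits)

# Factory side: each serial number is linked to a seed, and Ops
# receives the mapping for the tokens in their possession.
seeds = {"000123456789": bytes(range(16))}

code = token_code(seeds["000123456789"], t=1_700_000_000)
print(len(code))  # 6
```

Note what the server needs to verify a code: the seed. Whoever holds the seed can compute every code the token will ever display.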

There are three interesting quirks of this deployment model:

First, it’s not perfect. SecurID tokens are still significantly more complicated to deploy than passwords, even independent of the cost. That they’re the most successful post-password technology does not make them a universal success, by any stretch of the imagination.

Second, the resale value of the SecurID system is impacted — anyone who owned your system previously owns your systems today, because they have the seeds to your authenticators and you can't change them. This is not a bug (if you're RSA), it's a feature.

Third, and more interestingly, users of SecurID tokens don’t actually generate their own credentials. A major portion, the seed for each token, comes from RSA itself. What’s more, these seeds — or a “Master Key” able to regenerate them — were stored at RSA. It is clear that the seeds were lost to hackers unknown.

It seems obvious that storing the seeds to customer devices was a bug. I argue it was a feature as well.


MANDATING RECOVERY
I fly, a lot. Maybe too much. For me, the mark of a good airline is not how it functions when things go right. It's what they do when they don't.

There’s far more room for variation when things go wrong.

Servers die. It happens. There's not always a backup. That happens too. The loss of an authentication service is about as significant an event as can be, because (by definition) an outage is created across the entire company (or at least across entire userbases).

Is it rougher when a SecurID server dies, than when a password server dies?

Yes, I argue that it is. For while users have within their own memories backups of their passwords, they have no knowledge of the seeds contained within their tokens. They’ve got a token, it’s got a serial number, they’ve got work to do. The clock is ticking. But while replacing passwords is a matter of communication, replacing tokens is a matter of shipping physical items — in quantities large enough that nobody would have them simply lying around.

Ops hates anything that threatens a multi-day outage. That sort of stuff gets you fired.

And so RSA retained the ability to regenerate seed files for all their customers. Perhaps they did this by storing truly randomly generated secrets. Most likely though they combined a master secret with a serial number, creating a unique value that could nonetheless be restored later.
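That hypothesized scheme takes only a few lines to sketch. The master secret, serial format, and choice of KDF below are all my invention; RSA has never published how seeds were actually produced:

```python
import hashlib
import hmac

MASTER_SECRET = b"long-term-secret-held-at-the-factory"  # hypothetical

def derive_seed(serial: str) -> bytes:
    # A keyed derivation: the same serial always regenerates the same
    # seed, so recovery needs nothing but the master secret and the
    # number printed on the token. It's also why a leak of the master
    # secret compromises every token ever shipped.
    return hmac.new(MASTER_SECRET, serial.encode(), hashlib.sha256).digest()[:16]

print(len(derive_seed("000123456789")))  # 16
```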

The ability to recover from a failure has been a thorn in the side of many security systems, particularly Full Disk Encryption products. It’s much easier to have no capacity to escrow keys, or to survive bad blocks. But then nobody deploys your code.

Over the many years that SecurID has been a going concern, these sorts of recoveries were undoubtedly far more common than deep compromises of RSA’s deepest secrets.

Things change. RSA finally got broken into, because that’s where the money was. And we’re seeing the consequences of the break being admitted at Lockheed Martin, who recently cycled the credentials of every user in their entire organization.

That ain’t cheap.


ACTIONABLE INTELLIGENCE
It’s important to understand how we got here. There have been some fairly serious accusations of incompetence, and I think they reflect a simple lack of understanding regarding why the SecurID system was built as it was. But where do we go from here? That requires actionable intelligence, which I define as knowing:

What can an attacker do today, that he couldn’t do yesterday, for what class attacker, to what class customer?

The answer seems to be:

An attacker with a username, a PIN, and at least one output from a SecurID token (but possibly a few more), can extend their limited access window indefinitely through the generation of arbitrary token outputs. The attacker must be linked to the hackers who stole the seed material from RSA. The customer must have deployed RSA SecurID.
This extends into three attack scenarios:

1) Phishing. Attackers have learned they can impersonate sites users are willing to log into, and can sometimes get them to provide credentials willingly. (Google recently accused a Chinese group of doing just this to a number of high profile targets in the US.) Before, the loss of a single SecurID login would only allow a one time access to the phished user’s network. Now the attacker can impersonate the user indefinitely.

2) Keylogging. It does happen that attackers acquire remote access to a user's machine, and are able to monitor its resources and learn any entered passwords. Before, for the attacker to leverage the user's access to a trusted internal network, he'd have to wait for the legitimate user to log in, and then run alongside him, either from the same machine or from a foreign one. This could increase the likelihood of discovery. Now the attacker can run an attack at the time, and from the machine, of his choosing.

3) Physical monitoring. Historically, information security professionals have mostly ignored the class of attacks known as TEMPEST, in which electromagnetic emissions from computing devices leak valuable information. But in the years since TEMPEST attacks were discovered, other classes of attacks against keyboards, as complicated as lasers bounced off the back of a screen and as simple as telephoto lenses, have proven effective. Before, an attacker who managed to pick off a single login had to use their findings within minutes for them to be useful. Now, the attacker can break in at their leisure. Note that there's also some risk from attackers who merely pick up the serial number from the back of a token, if those attackers can learn or guess usernames and PINs.

There is one attack scenario that is not impacted: Many compromises are linked to shared credentials, i.e. one user giving their password to another willingly, or posting their credentials to a shared server. SecurID does still prevent password sharing of this type, since users aren’t in the class of attacker that has the seeds.

What to do in defense? My personal recommendation would be to implement an orderly replacement of tokens, unless there was some evidence of active attack in which case I’d say to replace tokens aggressively. In the meantime, rotating PINs on an accelerated basis — and looking for attempted logins that try to recycle old PINs — might be a decent short term mitigation.
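Server-side, that last mitigation is cheap to implement. A sketch, with every data structure and name here purely illustrative:

```python
# After a forced PIN rotation, a login attempt pairing a valid token
# code with a *retired* PIN is a strong hint that someone is replaying
# stolen credentials.
retired_pins = {"alice": {"4821"}, "bob": {"9377"}}
current_pins = {"alice": "7302", "bob": "1146"}

def classify_login(user: str, pin: str) -> str:
    if pin == current_pins.get(user):
        return "ok"
    if pin in retired_pins.get(user, set()):
        return "alert"  # retired PIN replayed: likely an attacker with old data
    return "reject"

print(classify_login("alice", "7302"))  # ok
print(classify_login("alice", "4821"))  # alert
print(classify_login("alice", "0000"))  # reject
```

The point isn't the code, it's the signal: a correct token output paired with a stale PIN shouldn't happen for a legitimate user, so it's worth alarming on.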

But is that all that went wrong?


RAMPANT SPECULATION
The RSA “situation” has been going on for a couple months now, with no shortage of rumors swirling about what was lost, and no real guidance from RSA on the risk to their customers (at least none outside of NDA). Two other possible scenarios should be discussed.

The first is that it’s possible RSA lost their source code. Nobody cares. We didn’t stop using Windows or IOS when their source code leaked, and we wouldn’t stop using SecurID now even if they went Open Source tomorrow. The reality is that any truly interesting hardware has long since been dunked in sulfuric acid and analyzed from photographs, and any such software has been torn apart in IDA Pro. The fact that attackers had to break in to get the seed(s) reflects the fact that there wasn’t any magic backdoor to be found in what was already released.

The second scenario, brought up by the brilliant Mike Ossman, is that perhaps RSA employees had some crazy theoretical attack they’d discussed among themselves, but had never gone public with. Maybe this attack only applied to older SecurIDs, which had not been replaced because nobody outside had learned of the internal discovery. Under the unbound possibilities of this attack, who knows, maybe that occurred. It’s viable enough to cite in passing here, but there’s no evidence for it one way or another.

But whose fault is that?


STRANGE BEHAVIOR
It’s impossible to discuss the RSA situation without calling them out for the lack of actionable intelligence provided by the company itself. Effectively, the company went into full panic mode, then delivered guidance that amounted to tying one’s shoelaces.

That was a surprise. Ops hates surprises, especially when they concern whether critical operational components can be trusted or not.

It was doubly bizarre, given that by all signs half the security community guessed the precise impact of the attack within days of the original announcement. With the exception of the need to support recovery operations, most of the above analysis has been discussed ad nauseam.

But confirmations, and hard details, have only been delivered by RSA under NDA.

Why? A moderately cynical bit of speculation might be that RSA could only replace so many tokens at once, and they were afraid they'd simply get swamped if everybody decided their second factor authenticator had been fully broken. Only now, after several months, are they confident they can handle any and all replacement requests. In the wake of the Lockheed Martin incident, they've publicly announced they will indeed effect replacements upon request.

Whatever the cause, there are other tokens besides RSA SecurID, and I expect their sales teams to exploit this situation extensively. (A filing to the SEC that there would be no financial impact from this hack seemed…premature…given the lack of precedent either regarding the scale of attack or the behavior towards the existing customer base.)

Realistically, customers sticking with RSA should probably seek contractual terms that prevent a repeat of this scenario. That means two things. First, actual customers should obligate RSA to tell them of their exposure to events that may affect them. This is actually quite normal — it shows up in HIPAA arrangements all the time, for example.

Second, customers should discover the precise nature of their continued exposure to seed loss at RSA HQ, even after their replacement tokens arrive.


ARE THINGS BETTER?
Jeff Moss recently asked the following question: “When I heard RSA had a shiney new half million dollar HSM to store seed files I wondered where had they been stored before?”

HSMs are rare, so likely on a stock server that got owned.

I think the more interesting question is, what exactly does the HSM do, that is different than what was done before? HSMs generally use asymmetric cryptography to sign or decrypt information. From my understanding, there’s no asymmetric crypto in any SecurID token — it’s all symmetric crypto and hash functions. The best case scenario, then, would be the HSM receiving a serial number, internally combining it with a “master seed”, then returning the token seed to the factory.

That's cool, but it doesn't exactly stop a long term attack. Is there rate limiting? Is there alarming on recovery of older serial numbers? Is this in fact a custom HSM backend? If it is, is the new code in fact running inside the physically protected core?
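For illustration, the sort of gating being asked about might look like the following; the thresholds and interface are entirely invented:

```python
from collections import deque

class RecoveryGate:
    """Rate-limits seed-recovery requests in front of the HSM."""

    def __init__(self, max_per_hour: int = 5):
        self.max_per_hour = max_per_hour
        self.times = deque()

    def allow(self, serial: str, now: float) -> bool:
        # Drop requests older than an hour, then check the budget.
        while self.times and now - self.times[0] > 3600:
            self.times.popleft()
        if len(self.times) >= self.max_per_hour:
            return False  # burst of recoveries: alarm, page a human
        self.times.append(now)
        return True

gate = RecoveryGate(max_per_hour=2)
print(gate.allow("SN-1", now=0.0))     # True
print(gate.allow("SN-2", now=10.0))    # True
print(gate.allow("SN-3", now=20.0))    # False
print(gate.allow("SN-4", now=4000.0))  # True (old requests aged out)
```

Twenty lines of policy in front of the HSM would matter more against a patient attacker than the HSM's price tag does. Whether anything like it exists is exactly what RSA hasn't said.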

It is also possible the HSM stores a private key that is able to decrypt randomly generated keys, encrypted to a long term public key. In this case, the HSM could be kept offline, only to be brought up in the occasional case of a necessary recovery.

We don’t really know, and ultimately we should.

See, here's the thing. Some people are angry that vulnerabilities in a third party can affect their organization. The reality is that this is a ship that has long since sailed. Even if you ignore all the supply chain threats in the underlying hardware within an organization, there's an entire world of software that's constantly being updated and automatically shipped to high percentages of clients and servers. There is no question that the future of software is, and must be, dynamic updates — if for no other reason than that static defenses are hopelessly vulnerable to dynamic threats. The reality is that more and more of these updates are being left in the hands of the original vendors.

Compromise of these third party update chains is actually much more exploitable than even the RSA hit. At least that required partial credentials from the users to be exploited.

But the fact remains, while these other third parties might get targeted and might have their implicit trust exploited, RSA did. The attackers have not, and realistically will not get caught. They know RSA’s network. Can they come back and do it again?

It is one thing to choose to accept the third party risk from RSA’s authenticators. It’s another to have no choice. It’s not really clear whether RSA’s “shiny new HSMs” make the new SecurIDs more secure than the old ones. RSA has smart people, and it’s likely there’s a concrete improvement. But we need a little more than likeliness here.

In the presence of the HSM, the question should be asked: What are attackers unable to do today, that they could do yesterday, for the attackers who broke into RSA before, to RSA now?

Categories: Security
  1. June 9, 2011 at 5:20 pm

    “‘where had they been stored before?’ HSM’s are rare, so likely on a stock server that got owned.”

    Given the parent company, I wonder if it wasn’t a cloud-based server?

    Ossman’s theory that there might have been an internal weakness is, I think, right and wrong. The older generation of SecurID used a proprietary algorithm that was, in fact, broken by cryptanalysis. RSA says they now “use AES”, hopefully a straightforward application of it. We can assume this one is much better. The seeds could have been generated insecurely, but that’s baseless speculation and not necessary to explain the observed events.

    “A filing to the SEC that there would be no financial impact from this hack seemed…premature” In terms of lost sales, or even product-recall replacement costs, SecurID is barely a blip in the revenue of EMC. Which to me raises the question of whether or not a security company can really do anything else as a primary business.

    The serial numbers are a bit of a detour, I think. Given two 6-digit codes (an attacker could probably get these the same way they got the password) from a token, an attacker with a database of all existing keys could search it and identify the token in a very short period of time. From what I hear the serial number looks like it’s only 32 bits long.

    Full Disclosure: I work on a product (PhoneFactor) that provides a competing service. I think I’m commenting here mainly because I like thinking and talking about authentication.

    – Marsh

    • June 9, 2011 at 5:28 pm

      Marsh–

      This is an old, old product, and my experience is that most acquisitions don’t actually change the underlying technological infrastructure.

      It’s an interesting point you make about the SEC filings. In the context of EMC, it is indeed likely that this break is irrelevant. The entire line of business could fall and it wouldn’t really matter.

The serial numbers are actually less valuable than the other attack vectors, because they don't come with a username or PIN. The real question is how much entropy is there in the master secret. You can extract seeds from the hardware, after all.

  2. June 9, 2011 at 9:48 pm

    “This is an old, old product, and my experience is that most acquisitions don’t actually change the underlying technological infrastructure.”

    That’s my experience too, but in this case they have a battery in them with a max 5-year support life. 🙂

Here's an analysis of the old algorithm from 2003: http://eprint.iacr.org/2003/205. It mentions the recommendation to update to the new AES-based algorithm even back then.

    I hear there are software implementations as well (aka “soft tokens”) which are said to use the same algorithm.

  3. June 10, 2011 at 1:59 pm

    In the past, if an attacker had captured login credentials with SecurID details, that window of exploit was pretty small, as you mention above. But if you have the seed mappings in hand from an old attack (potentially years ago), you can probably turn a dead authentication back into a live one. Sort of like a zombie auth! Granted, the username and PIN would have to remain unchanged.

    So even old data is nearly as at risk as newly acquired data, for attackers linked to the RSA intrusion.

  4. June 10, 2011 at 2:00 pm

    Wow, my bad. “But if you have the seed mappings *and your old pilfered data* in hand from an old attack…”

  5. June 12, 2011 at 5:52 pm

I recently ran across a device called Yubikey where you can seed the device with your own secret key (it comes pre-programmed with one, just like SecurID) if you desire, so you don't have to worry about the security of third parties:

    http://blog.nominet.org.uk/tech/2009/03/18/

    One major design difference is that it is USB powered, and not operated on batteries, so it cannot have a time/clock in it. To enforce the “one-time” part in OTP they use a bunch of counters instead, and the authenticating server keeps track of them so ‘old’ values can’t be re-used—only ones going forward in time/count value.

The devices can also be reprogrammed to use the HOTP algorithm instead of Yubico's 'proprietary' one:

    http://en.wikipedia.org/wiki/HOTP

    I haven’t tried it myself, but I’m curious to know what the opinion of the security wizards is on it.

  6. December 14, 2011 at 9:57 pm

    “In the past, if an attacker had captured login credentials with SecurID details, that window of exploit was pretty small, as you mention above. But if you have the seed mappings in hand from an old attack (potentially years ago), you can probably turn a dead authentication back into a live one. Sort of like a zombie auth! Granted, the username and PIN would have to remain unchanged.

    So even old data is nearly as at risk as newly acquired data, for attackers linked to the RSA intrusion.”

This is very true; a situation like this happened recently at a hospital in Brooklyn (don't want to name drop), where an attacker was using a zombie login to rip information off of patient medical records and, in turn, selling it for profit to would-be identity thieves. The hospital almost fired its entire night shift, but thankfully found out who the leak was.
