Archive

Archive for the ‘Security’ Category

Not Safe For Not Working On

September 3, 2014 1 comment

There’s an old Soviet saying:

If you think it, don’t say it.
If you say it, don’t write it.
If you write it, don’t be surprised.

It’s not a pleasant way to live.  The coiner of this quote was not celebrating his oppression.  The power of the Western brand has long been associated with its outright rejection of this sort of thought policing.

In the wake of a truly profound compromise of sensitive photographs of celebrities, those of us in Information Security find ourselves called upon to answer what this all means – to average citizens, to celebrities, to a global economy that has found itself transformed in ways not necessarily planned.  Let’s talk about some things we don’t normally discuss.

Victim Shaming Goes Exponential

Dumdum?  Really?

We shouldn’t entirely be surprised.  Victim shaming is par for the course in Infosec, and more than Infosec, for uncomfortably similar reasons.  When social norms are violated, a society that cannot punish the transgressor will punish the victim.  If the victim is not made a monster, their unpunished victimization today could become our unpunished victimization tomorrow.  And that’s such a dark and scary conclusion that it’s really quite tempting to say –

No, it’s OK.  Only these celebrities got hacked, not me, because they were so stupid they took sexy photos.  It attracted the hackers.

As if the hackers knew there had to be such photos in the first place, and only stole the photos.  As if we don’t all have private lives, with sensitive bits, that could be or already have been acquired by parties unknown.  We’ve all got something to lose.  And frankly, it may already be lost.

You Don’t Necessarily Know When You’ve Been Hit, Let Alone What’s Gone

There’s a peculiar property of much criminality in the real world:  You notice.  A burgled home is missing things, an assaulted body hurts.  These crimes still occur, but we can start responding to them immediately.  If there’s one thing to take away from this compromise, it’s that when it comes to information theft you might find out quickly, or you may never find out at all.  Consider this anonymous post, forwarded by @OneTrueDoxbin to Daniel Wolf (better known as the surprisingly insightful @SwiftOnSecurity):

[screenshot: anontheory]

It is a matter of indisputable fact that “darknet” information trading networks exist.  People collect stamps, after all, and there’s way rarer stuff floating around out there than postal artifacts.  This sort of very personal imagery is the tip of a very large and occasionally deeply creepy iceberg.  The most interesting aspects of Daniel Wolf’s research have centered on the exceptions – yes, he found, a significant majority of the files came from iPhones, implicating some aspect of the Apple iCloud infrastructure.  But not all – there’s some JVC camcorder data here, a Motorola RAZR EXIF extension there – and there’s a directory structure that no structured database would produce, but that a disorganized human whose photo count doesn’t number in the billions would absolutely use.  The exceptions show more of a network and less of a lone operator.

The key element of a darknet is, of course, staying dark.  It’s hard to do that if you’re taunting your victims, and so generally they don’t.  Some of the images Daniel found in his research went back years.  A corollary of not discovering one attack is not detecting many, extending over many victims and coming from multiple attackers.

Of course, darknets have operational security risks, same as anyone, and eventually someone might come in to game the gamers.  From someone who claims to be the original leaker:

“People wanted shit for free. Sure, I got $120 with my bitcoin address, but when you consider how much time was put into acquiring this stuff (i’m not the hacker, just a collector), and the money (I paid a lot via bitcoin as well to get certain sets when this stuff was being privately traded Friday/Saturday) I really didn’t get close to what I was hoping.”

Real?  Fake?  Can’t really know.  Pretty risky, trying to draw together a narrative less than a hundred hours since the initial compromise was detected.  It’s the Internet, people lie, even more so anonymously.  It fits with my personal theory that the person who acquired these images isn’t necessarily the person who’s distributing them (I figured hacker-on-hacker theft), but heh.  It’s amazingly easy to have your suspicions confirmed.

One reporter asked me how it was possible that J.P. Morgan could immediately diagnose and correct their extended infection, while Apple couldn’t instantaneously provide similar answers.  As I told them, J.P. Morgan knew without question they were hit, and had the luxury of deciding their disclosure schedule (with some constraints); this particular case simply showed up on 4Chan during Labor Day Weekend, when presumably half the people who would investigate were digging their way back from Burning Man.  Internal discoveries and external surprises just follow different schedules.

I’ve personally been skeptical that an account brute forcing bug that happened to be discovered around now was also used for this particular attack.  There’s only so many days in the year, and sometimes multiple events happen close in time just randomly.  As it happens, Apple has confirmed at least some of these celebrity raids have come via account attacks, but whether brute forcing was required hasn’t been disclosed.  It does seem that this exploit has been used in the field since at least May, however, lending some credibility.

We have, at best, a partial explanation.  Much as we desperately would like this to be a single, isolated event, with a nice, well defined and well funded defender who can make sure this never happens again – that’s just not likely to be the case.  We’re going to learn a lot more about how this happened, and in response, there will be improvements.  But celebrity (and otherwise) photo exploitation will not be found to be an isolated attack and it won’t be addressed or ended with a spot fix to password brute forcing.

So there’s a long investigation ahead, quite a bit longer than a single press cycle.

Implications For Cloud Providers

Are we actually stuck right now at another password debate?  Passwords have failed us yet again, let’s have that tired conversation once more?  Sam Biddle, who otherwise did some pretty good research in this post, did have one somewhat amusing paragraph:

To fix this, Apple could have simply forced everyone to use two-factor verification for their accounts. It’s easy, and would have probably prevented all of this.

Probably the only time “simply”, “easy”, and “two-factor verification” have ever been seen in quite such proximity, outside of marketing materials anyway.  There’s a reason we use that broken old disco tech.

Still, we have to do better.  So-called “online brute-forcing” – where you don’t have a database of passwords you want to crack, but instead have to interact with a server that does – is a far slower, and far noisier affair.

But noise doesn’t matter if nobody is listening.  Authentication systems could probably do more to detect brute force attacks across large numbers of accounts.  And given the wide variety of systems that interface with backend password stores, it’s foolish to expect them all to implement rate limiting correctly.  Limits need to exist as close as possible to the actual store, independent of access method.
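A minimal sketch of what store-side limiting could look like, in TypeScript, with every name invented; real deployments would also track failures per source address and watch for anomalies across many accounts, not just within one:

// Hypothetical sketch: throttle password checks at the store itself, so every
// access method (web, mail, API, ...) funnels through the same counter.
const MAX_FAILURES = 10;                  // per account, per window
const WINDOW_MS = 60 * 60 * 1000;         // one hour

const failures = new Map<string, { count: number; windowStart: number }>();

function checkPassword(account: string, guess: string,
                       verify: (g: string) => boolean): boolean {
  const now = Date.now();
  const entry = failures.get(account) ?? { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;                      // window expired, start counting fresh
    entry.windowStart = now;
  }
  if (entry.count >= MAX_FAILURES) {
    failures.set(account, entry);
    return false;                         // locked: don't even consult the store
  }
  const ok = verify(guess);
  if (!ok) entry.count += 1;
  failures.set(account, entry);
  return ok;
}

The point isn’t the bookkeeping, it’s where the bookkeeping lives.  Put it below the store’s API and no frontend can forget to enforce it.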

Sam’s particularly right about the need to get past personal entropy.  Security questions are time bombs in a way even passwords aren’t.  In a world of tradeoffs, I’m beginning to wonder if voice prints across sentences aren’t better than personal information widely shared.  Yes, I’m aware of the downsides, but look at the competition.

[image: iamgroot]

OK.  It’s time to ban Password1.  Many users like predictable passwords.  Few users like their data being compromised.  Which wins?  Presently, the former.  Perhaps this is the moment to shift that balance.  Service providers (cloud and otherwise) are burying their heads in the sand and going with password policies that can only be called inertial.   Defenders are using simple rules like “doesn’t have an uppercase letter” and “not enough punctuation” to block passwords while attackers are just straight up analyzing password dumps and figuring out the most likely passwords to attempt in any scenario.  Attackers are just way ahead.  That has to change.  Defenders have password dumps too now.  It’s time we start outright blocking passwords common enough that they can be online brute forced, and it’s time we admit we know what they are.
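Here’s a sketch of dump-driven blocking, in TypeScript; the file of common passwords is an assumption, distilled from whatever public dumps a defender trusts:

// Hypothetical sketch: refuse any password common enough to show up in an
// online brute force run. "common-passwords.txt" is an assumed file,
// one leaked password per line.
import { readFileSync } from "fs";

const tooCommon = new Set(
  readFileSync("common-passwords.txt", "utf8")
    .split("\n")
    .map((line) => line.trim().toLowerCase())
    .filter((line) => line.length > 0)
);

function acceptablePassword(candidate: string): boolean {
  // Trailing digits and punctuation don't buy real entropy: to an attacker
  // replaying dump statistics, "Password1!" is still "password".
  const normalized = candidate.toLowerCase().replace(/[0-9!@#$%^&*.]+$/, "");
  return !tooCommon.has(candidate.toLowerCase()) && !tooCommon.has(normalized);
}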

We’re not quite ready to start generating passwords for users, and post-password initiatives like FIDO are still some of the hardest things we’re working on in all of computer engineering.  But there’s some low hanging fruit, and it’s probably time to start addressing it.

And we can’t avoid the elephant in the room any longer.

In Which The Nerd Has To Talk About Sex Because Everyone Else Won’t

It’s not all victim shaming. At least some of the reaction to this leak of celebrity nudity can only be described as bewilderment.  Why are people taking these photos in the first place?  Even with the occasional lack of judgment… there’s a sense of surprise.  Is everybody taking and sending sexy photos?

No.  Just everyone who went through puberty owning a cell phone.

I’m joking, of course.  There are also a number of people who grew up before cell phones who nonetheless discovered that a technology able to move audio, still images, and videos across the world in an instant could be a potent enabler of Flirting-At-A-Distance.  This tends to reduce distance, increasing…happiness.

Every generation thinks it invents sex, but to a remarkable degree generations don’t talk to each other about what they’ve created.  It’s rarely the same, and though older generations can (and do) try, there is nothing in all of creation humans take less advice about than mating rituals.

So, yeah.  People use communication technologies for sexy times.  Deal with it.

Interestingly, to a very limited extent, web browsers actually do.  You may have noticed that each browser has Porn Mode.  Oh, sure, that particular name never makes it through Corporate Branding.  It gets renamed “InPrivate Browsing” or “Incognito Mode” or “The I’m Busy I’ll Be Out In A Minute Window”.  The actual feature descriptions are generally hilarious parallel constructions about wanting to use a friend’s web browser to buy them a gift, but not having the nature of the gift show up in their browser cache.  But we know the score and so do browser developers, who know the market share they’d lose if they didn’t support safer consumption of pornography (at least in the sense that certain sites don’t show up on highly visible “popular tabs” pages during important presentations).

I say all this because of a tweet that is accurate, and needs to be understood outside the context of shaming the victim:

Technology can do great, wonderful, and repeatedly demanded things, and still have a dark side.  That’s not limited to sexy comms.  That applies to the cloud itself.

True story:  A friend of mine and I are at the airport in Abu Dhabi a few years back.  We get out of the taxi, she drops her cell phone straight into a twenty foot deep storm drain.  She starts panicking:  She can’t leave her phone.  She can’t lose her phone.  She’s got pictures of her kids on that phone that she just can’t lose.  “No problem, I get police” says the taxi driver, who proceeds to drive off, with all our stuff.

We’re ten thousand miles away from home, and our flight’s coming.  Five minutes go by. Ten.  Fifteen…and the taxi driver returns, with a police officer, who rounds up some airport workers who agree to help us retrieve the phone (which, despite being submerged for twenty minutes, continued to work just fine).

Probably the best customer service I’ve ever received while traveling, but let me tell you, I’d have rather just told my friend to pull the photos off the cloud.

The reality is that cell phone designers have heard for years what a painful experience it is to lose data, and have prioritized the seamless recovery of those bits as best they can.  It’s awful to lose your photos, your emails, your contacts.  No, really, major life disruption.  Lot of demand to fix that.  But in all things there are engineering tradeoffs, and data that is stored in more than one location can be stolen from more than one location.  98% of the time, that’s OK, you really don’t want to lose photos of your loved ones.  You don’t care if the pics are stolen, you’re just going to post them on Facebook anyway.

2% of the time, those pictures weren’t for Facebook.  2% of the time, no it’s cool, those can go away, you can always take more selfies.  Way better to lose the photos than see them all over the Internet.  Terrible choice to have to make, but not generally a hard decision.

So the game becomes, separate the 98% from the 2%.

[image: 3_inbox_screen_new-bgg]

So, actual concrete advice.  Just like browsers have porn mode for the personal consumption of private imagery, cell phones have applications that are significantly less likely to lead to anyone but your special friends seeing your special bits.  I personally advise Wickr, an instant messaging firm that develops secure software for iPhone and Android.  What’s important about Wickr here isn’t just the deep crypto they’ve implemented, though that’s useful too.  What’s important in this context is that with this code there are just far fewer places to steal your data from.  Photos and other content sent in Wickr don’t get backed up to your desktop, don’t get saved in any cloud, and by default get removed from your friend’s phone after an amount of time you control.  Wickr is of course not the only company supporting what’s called “ephemeral messaging”; SnapChat also dramatically reduces the exposure of your private imagery (with the caveat that, unlike Wickr, SnapChat itself gets an unencrypted copy of your imagery and messaging, so you have to hope they’re not saving anything.  Better for national intelligence services, worse for you).

Sure, you can hunt down settings that reduce your exposure to specific cloud services.  You’ve still got to worry about desktop backups, and whatever your desktop is backing up to.  But really, if you can keep your 2% far away from your 98%, you’ll be better off.

In the long run, I think the standard photo applications will get a UI refresh, to allow “sensitive photographs” (presumably of surprise birthday parties in planning) to be locked to the device, allowed through SMS only in directly authenticated circumstances, and not backed up.  Something like a lock icon on the camera screen.  I don’t think the feature request could have happened before this huge leak.  But maybe now it’s obvious just how many people require it.


Categories: Security

Never Let A Good Crisis Go To Waste

(This post is something of a follow-up to a previous post on browser security, but that went long enough that I decided to split the posts.)

I’ll leave most of what’s been said about Heartbleed to others.  In particular, I enjoyed Adam Cecchetti’s Heartbleed: A Beat In Time presentation.  But, a few people have been poking me for comments in response to recent events.  A little while ago, I called for a couple responses to Heartbleed:

  1. Accepting that some software has become Critical Infrastructure
  2. Supporting that software, with both money and talent
  3. Identifying that software — let’s find the most important million lines of code, the ones where a bug would be most dangerous, and actively monitor them

Thanks to the Linux Foundation and a host of well respected companies…this is actually happening.  Whoa.  That was fast.  I’m not even remotely taking credit — this has been a consensus, a long time brewing.  As Kennedy opined, “Victory has a thousand fathers.”  Right after the announcement, I was asked what I thought about this.  Here’s my take:

The Linux Foundation has announced their Core Infrastructure project, with early supporters including Amazon, Cisco, Dell, Facebook, Fujitsu, Google, IBM, Intel, Microsoft, NetApp, Qualcomm, Rackspace, and VMWare.  $100K from each, for three years, to identify and support the Open Source projects we depend on.

This is fantastic.

The Internet was not our first attempt at building the Internet.  Many other systems, from Minitel to AOL, came first.  What we call the Internet today was just the first time a global computer telecom infrastructure really took hold, pushing data and global communications onto every desk, into every home, onto mobile phones the world over.  Open Source software played, and continues to play, a tremendous role in the continuing success of the Internet; the platform worked because people could connect without hindrance from a gatekeeper.

It’s hard for some people to believe, but the phone company used to own your telephone.  Also, long distance calls used to be an event, because they were so very expensive.

We do live in a different world than when the fundamental technologies of the Internet were created, and even when they were deployed.  Bad actors abound, and the question is:  How do we respond?  What do we do, to make the Internet safe, without threatening its character?  It’s a baby and the bathwater scenario.

I am profoundly grateful to see the Core Infrastructure project pledging real money and real resources to Open Source projects.  There are important things to note here.  First, yes, Core Infrastructure is a great name that avoids the baggage of Critical Infrastructure while expressing the importance of attention.  Second, we are seeing consensus that we must have the conversation about exactly what it is we depend on, if only to direct funds to appropriate projects.  Third, there’s an actual stable commitment of money, critical if there’s to be full time engineers hired to protect this infrastructure.

The Core Infrastructure project is not the only effort going on to repair or support Open Source.  The OpenBSD team has started a major fork of OpenSSL, called LibreSSL.  It will take some time to see what that effort will yield, but everyone’s hopeful.  What’s key here is that we are seeing consensus that we can and should do more, one that really does stay within the patterns that helped all these companies become the multi-billion dollar concerns they are now.  The Internet grew their markets, and in many cases, created them.  This isn’t charity.  It’s just very wise business.

In summary, I’m happy.  This is what I had hoped to see when I wrote about Heartbleed, and I am impressed to see industry stepping up so quickly to fill the need to identify and secure Core Infrastructure.

Making the Internet a safer place is not going to be easy.  But I’m seeing a world with a Core Infrastructure Project, the Internet Bug Bounty, the Bluehat Prize (which never gets enough attention), and even real discussion around when the US Government should disclose vulnerabilities.

That’s…not the world I’m used to.  If this world also has Stephen Colbert cracking wise about IE vulns…OK.

(Quick, vaguely subversive thought:  We always talk about transparency in things like National Security Letters and wiretaps.  What about vulnerability reports that lead to fixes?  Government is not monolithic.  Maybe transparency can highlight groups that defend the foundations our new economies are built upon, wherever they happen to be.  Doing this without creating some really perverse incentives would be…interesting.)

Categories: Security

Of Expectations And Rewards

So some people want me to say a few more things about Heartbleed.

Meanwhile, this happened.

[image: colbert]


My immediate reaction, of course, was that there was no way IE shipped that early.  Windows 95 didn’t even have TCP/IP, the core protocols of the Internet, enabled by default!  Colbert had no idea what he’s Tolkien about.

Nope.  Turns out, the Colbert Report crew did their homework.   But, wait.  What?  In what universe has yet another browser bug become fodder for late night?

Hacking has become a spectator sport.  Really can’t say it’s surprising — how many people play sports every day?  Now how many people look at a big glowing rectangle?

We make the glowing box do some very strange things.

I mentioned earlier that one of the things that made Heartbleed so painful, is that it was a bug where we least expected one to be.  That is not the situation with browsers.  Despite genuinely herculean efforts, any security professional worth their salt completely expects web browser vulnerabilities to be found, and exploited, from time to time. The simple explanation is that web browsers expose a tremendous amount of attack surface to relatively anonymous attackers.

Let’s get a bit beyond the simple explanation.

It’s important to realize that web browsers, in general, are particularly vulnerable creations.  Despite the three major platforms (IE, Firefox, Chrome) being developed essentially independently, they’re all implementing the same specifications, leading to something akin to convergent evolution:  A slow language (like HTML, CSS, or JavaScript) plays puppeteer to a fast language’s object model (C/C++) via some sort of formalized translation layer (IDL for COM/XPCOM/WebkitIDL).  Take a look at this analysis of the gory details of browser internals.  See how often they just refer to browsers, in general?
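To make the convergence concrete, here’s roughly what that translation layer looks like: a simplified, WebIDL-style fragment (illustrative, not copied from any one engine), from which the C++ bindings that JavaScript actually calls get generated:

// Simplified WebIDL-style interface: the slow language sees attributes,
// the generated C++ bindings do the actual work.
interface HTMLImageElement : HTMLElement {
  attribute DOMString src;                 // script assigns a string; C++ starts a fetch
  readonly attribute unsigned long naturalWidth;
};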

There’s a reason we have very different codebases, but very similar bugs.

So why all the noise?  Why now?  In this particular case, this is the first bug after Microsoft’s genuinely unprecedented campaign to announce the end of XP support.  The masses were marketed to,  and basically told in no uncertain terms “It’s time to upgrade, the next bug is going to burn you if you’re still on XP.”  Well, here’s the next bug, and there’s Microsoft keeping their word.

(Update:  MS is issuing an XP patch after all.  Actual attacks in the field generally do trump everything else.  Really, the story of XP is tragic.  Microsoft finally makes the first consumer OS that doesn’t crash when you look at it funny…and then it crashes when I look at it funny.  Goalposts with rockets on em…)

This is also the first bug after Heartbleed, whose response somehow metastasized into the world being told to freak out and change all their passwords.

Neither of these events have anything to do with the quality of the browser itself, but they’re certainly driving the noise.  Yes, IE’s got some somewhat unique issues.  Back in the day, when Microsoft argued that they couldn’t remove IE from Windows, it was too integrated — yep, pretty much.  Internet Explorer is basically Windows: The Remix feat. The Internet.  Which makes sense, because at the end of the day browsers have long been the new operating systems.  So of course, Microsoft would basically have lots of stuff for the browser lying about.  None of that stuff was ever designed to be executed by untrusted parties, so when it ended up rigged up for remote scripting…sometimes just through the magic of COM…bad things happened.

Of course, I said somewhat unique.  Firefox and its predecessors implemented various amounts of themselves not at the C++ layer, but in JavaScript itself via a language called XUL and various interesting things reachable via the Components object.  Leaks into this trusted code have happened, with unpleasant side effects.

Where all the browsers uniformly struggle, though, is with object memory management.  Everything on a web page takes memory, and there’s only so much to go around.  Eventually, you’ve got to clear out some old objects you’re not viewing any more, to make room for more pictures of cats.  So, when can you do that?  What it comes down to is that JavaScript is really flexible, while C++ is really fast.  We bridge the two to get the best of both worlds — the flexibility (and security!) of the former, the speed of the latter.  What happens when they disagree?  What happens when the language exposed to the developer (for various values of “developer”) still thinks there’s a picture somewhere, while C++ (yes, I know, the heap allocator, I’m trying to simplify this) has long since destroyed that picture and reused that particular memory for a video file?
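The shape of the bug, sketched in page JavaScript (hypothetical; a correct engine keeps the backing object alive as long as any script handle exists):

// Use-after-free *pattern*, not a live exploit: script keeps a handle while
// the engine's C++ side, in a buggy browser, frees the backing object.
var img = document.createElement("img");
document.body.appendChild(img);

var handle = img;                  // JavaScript still remembers the picture...
document.body.removeChild(img);
img = null;

// ...so if C++ freed and reused that memory anyway, this write lands
// somewhere it absolutely should not:
handle.src = "cats.png";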

Boom.  And this is pretty common.  As the great poet of our modern era, The Grugq wrote recently:

At this point, you may be feeling rather sad about the state of security in general.  The three major browsers — IE, Firefox, and Chrome — are some of the most well funded and deeply audited ongoing development efforts in the world.  If they all fail, in similar ways even, what hope do we have?

There is hope on the horizon (and through this path, we finally get back to Heartbleed).  While the browsers remain imperfect, that they work at all — let alone with ever increasing performance and usability — is nothing short of miraculous.  There is literally nothing else where it’s conceivable that you’d just wander around, executing random chunks of code from random suppliers, and not get compromised instantaneously.  (Sandboxes are things children walk into and out of with relative ease.  I’ve always wondered why we called them that.)  And there is motion towards making useful languages that are both memory safe and fast enough to do the heavy lifting browsers require — Rust being the prime example.

It’s not enough to be secure.  Hardest lesson anyone in security can ever learn.  Some never do.

Interesting things happen as JavaScript becomes a fast language — particularly the hyper-optimizable subset of JavaScript known as asm.js.  Ultimately, most virtual machines are like most sandboxes.  Not the JS VMs — there’s been an ongoing battle to lock those down for a decade.  A VM with less to attack, that can still leverage all the optimization knowledge being absorbed into LLVM, is Interesting.
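A minimal asm.js module, for flavor; the “use asm” prologue plus the |0 coercions are what tell the VM every value is a 32-bit integer, safe to compile ahead of time:

function AddModule(stdlib, foreign, heap) {
  "use asm";
  function add(x, y) {
    x = x | 0;                  // coerce to int32
    y = y | 0;
    return (x + y) | 0;
  }
  return { add: add };
}
var add = AddModule(window, {}, new ArrayBuffer(0x10000)).add;  // add(2, 3) === 5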

There’s a crazy project to try to run all of Webkit (the engine in Chrome and Safari, more or less) inside of JavaScript, including JavaScript itself.  Yo dawg.

There’s hope, but it comes at some price.  We don’t know what solutions will actually work, and we shouldn’t assume any of them will ever reach 100% security.  We absolutely should not assume we knew how to do security decades ago, and just “forgot” or got lazy or whatnot.  For a while it seemed like the answer was obviously Java, or C#/.NET.  Useful languages for many problem sets, sure.  But Microsoft once tried to rewrite chunks of Windows in .NET. It did not go well.  To this day, “Longhorn” will bring chills to old-school MS’ers (“Rosebud…”), and there is genuine excitement around the .NET to C++ compiler.

There are things that looked like they worked in the past, but it was a mirage.  There are things that were a mirage in the past, but technology or other factors have changed.  (Anyone remember DHTML?  Great idea, but it took a few generations before client side interactivity really became a thing.)

Who knows.  Maybe Google will someday make a sandbox that impresses even Pinkie Pie.  And perhaps I have something up my sleeve…

But expecting no bugs is like expecting no crime, nobody to die in the ER, no cars to crash, no businesses to fail.  It’s not just unreasonable.  It’s also kind of awkward to see it become a spectator sport.

(Splitting the Heartbleed commentary to a second post.)

Categories: Security

Bloody Cert Certified

April 12, 2014 7 comments

Oh, Information Disclosure vulnerabilities.  Truly the Rodney Dangerfield of vulns, people never quite know what their impact is going to be.  With Memory Corruption, we’ve basically accepted that a sufficiently skilled attacker always has enough degrees of freedom to at least unreliably achieve arbitrary code execution (and from there, by the way, to leak arbitrary information like private keys).  With Information Disclosure, even the straight up finder of Heartbleed has his doubts:

So, can Heartbleed leak private keys in the real world or not?  The best way to resolve this discussion is PoC||GTFO (Proof of Concept, or you can figure out the rest).  CloudFlare threw up a challenge page to steal their key.  It would appear Fedor Indutny and Illkka Mattila have successfully done just that.  Now, what’s kind of neat is that because this is a crypto challenge, Fedor can actually prove he pulled this off, in a way everyone else can verify but nobody else can duplicate.

I’m not sure if key theft has already occurred, but I think this fairly conclusively establishes key theft is coming.  Let’s do a walkthrough:

First, we retrieve the certificate from http://www.cloudflarechallenge.com.

$ echo "" | openssl s_client -connect www.cloudflarechallenge.com:443 -showcerts | openssl x509 > cloudflare.pem
depth=4 C = SE, O = AddTrust AB, OU = AddTrust External TTP Network, CN = AddTrust External CA Root
verify error:num=19:self signed certificate in certificate chain
verify return:0
DONE

This gives us the certificate, with lots of metadata about the RSA key.  We just want the key.

$ openssl x509 -pubkey -noout -in cloudflare.pem > cloudflare_pubkey.pem

Now, we take the message posted by Fedor, and Base64 decode it:

$ wget https://gist.githubusercontent.com/indutny/a11c2568533abcf8b9a1/raw/1d35c1670cb74262ee1cc0c1ae46285a91959b3f/1.bash
--2014-04-11 18:14:48--  https://gist.githubusercontent.com/indutny/a11c2568533abcf8b9a1/raw/1d35c1670cb74262ee1cc0c1ae46285a91959b3f/1.bash
Resolving gist.githubusercontent.com... 192.30.252.159
Connecting to gist.githubusercontent.com|192.30.252.159|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: `1.bash'

[ <=> ] 456 --.-K/s in 0s

2014-04-11 18:14:49 (990 KB/s) - `1.bash' saved [456]

$ cat 1.bash
> echo "Proof I have your key. fedor@indutny.com" | openssl sha1 -sign key.pem -sha1 | openssl enc -base64
aKmd7OEXbTvlh6CQNczA+XYVsru4b4t1RMcHCqvjQOW3sxrLFX0laO47fks1G90c
6CtcQwYO06uVxwa9XAr8TAXbmZZj+kGoSNkRzpaNeeqqNkcvVUEv5dV5wy4Q2Mfr
aL6YQSVCfbGGlU0SwBYKdI3cssfPPiClM+i9psi655zDYcgYDdZIvl3/XpGBHeKu
CH9gmt9R9ey448IdZ2uCF+oX+KAogcl9AgpX7GPmJNXZWN13ASCjIFxPdQUguj5p
Z1e8sv6DmenKABaWGmMtQDf/JAzHDCl8CaYlReXQW1wja3I4crD2tDqxq6+Hbajn
SOUFr2XOSpuj5wGxlo20KA==

$ cat 1.bash | grep -v fedor | openssl enc -d -base64 > fedor_signed_proof.bin

So, what do we have?  A message, a public key, and a signature linking that specific message to that specific public key.  At least, we’re supposed to.  Does OpenSSL agree?

$ echo "Proof I have your key. fedor@indutny.com" | openssl dgst -verify cloudflare_pubkey.pem -signature fedor_signed_proof.bin -sha1
Verified OK

Ah, but what if we tried this for some other RSA public key, like the one from Google?

$ echo "" | openssl s_client -connect www.google.com:443 -showcerts | openssl x509 > google.pem
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
verify error:num=20:unable to get local issuer certificate
verify return:0
DONE

$ openssl x509 -pubkey -noout -in google.pem > google_pubkey.pem

$ echo "Proof I have your key. fedor@indutny.com" | openssl dgst -verify google_pubkey.pem -signature fedor_signed_proof.bin -sha1
Verification Failure

Or what if I changed things around so it looked like it was me who cracked the code?

$ echo "Proof I have your key. dan@whiteops.com" | openssl dgst -verify cloudflare_pubkey.pem -signature fedor_signed_proof.bin -sha1
Verification Failure

Nope, it’s Fedor or bust :)  Note that it’s a common mistake, when testing cryptographic systems, to not test the failure modes.  Why was “Verified OK” important?  Because “Verification Failure” happened when our expectations weren’t met.

There is, of course, one mode I haven’t brought up.  Fedor could have used some other exploit to retrieve the key.  He claims not to have, and I believe him.  But that this is a failure mode is also part of the point — there have been lots of bugs that affect something that ultimately grants access to SSL private keys.  The world didn’t end then and it’s not ending now.

I am satisfied that the burden of proof for Heartbleed leaking  private keys has been met, and I’m sufficiently convinced that “noisy but turnkey solutions” for key extraction will be in the field in the coming weeks (it’s only been ~3 days since Neel told everyone not to panic).

Been getting emails asking for what the appropriate response to Heartbleed is.  My advice is pretty exclusively for system administrators and CISOs, and is something akin to “Patch immediately, particularly the systems exposed to the outside world, and don’t just worry about HTTP.  Find anything moving SSL, particularly your SSL VPNs, prioritizing on open inbound, any TCP port.  Cycle your certs if you have them, you’re going to lose them, you may have already, we don’t know.  But patch, even if there’s self signed certs, this is a generic Information Leakage in all sorts of apps.  If there is no patch and probably won’t ever be, look at putting a TLS proxy in front of the endpoint.  Pretty sure stunnel4 can do this for you.”
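That proxy is pleasantly little config.  A sketch of the stunnel approach, with every port, path, and service name a placeholder; a patched stunnel terminates TLS and hands cleartext to the unpatchable service on localhost:

; Hypothetical stunnel config: patched TLS front, unpatchable backend.
[legacy-tls-front]
accept  = 0.0.0.0:443
connect = 127.0.0.1:8443
cert    = /etc/stunnel/server-cert.pem
key     = /etc/stunnel/server-key.pem

Clients keep speaking TLS to the same port; only the stunnel process needs the fixed OpenSSL.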

QUICK EDIT:  One final note — the bug was written at the end of 2011, so devices that are older than that and have not been patched at all remain invulnerable (at least to Heartbleed).  The guidance is really to identify systems that are exposing OpenSSL 1.0.1 SSL servers (and eventually clients) with heartbeat support, on top of any protocol, and get ’em patched up.
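One quick, imperfect triage trick: ask an endpoint whether it even advertises the heartbeat extension, since no heartbeat support means no Heartbleed.  A match doesn’t prove vulnerability; it just earns the endpoint a closer look:

$ echo "" | openssl s_client -connect example.com:443 -tlsextdebug 2>&1 | grep -i heartbeat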

Yes, I’m trying to get focus off of users and passwords and even certificates and onto stemming the bleeding, particularly against non-obvious endpoints.  For some reason a surprising number of people think SSL is only for HTTP and browsers.  No, it’s 2014.  A lot of protocol implementations have basically evolved to use this one environment middleboxes can’t mess with.

A lot of corporate product implementations, by the way.  This has nothing to do with Open Source, except to the degree that Open Source code is just basic Critical Infrastructure now.

Categories: Security

Be Still My Breaking Heart

April 10, 2014 36 comments

Abstract:  Heartbleed wasn’t fun.  It represents us moving from “attacks could happen” to “attacks have happened”, and that’s not necessarily a good thing.  The larger takeaway actually isn’t “This wouldn’t have happened if we didn’t add Ping”, the takeaway is “We can’t even add Ping, how the heck are we going to fix everything else?”.  The answer is that we need to take Matthew Green’s advice, start getting serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicating genuine resources to supporting that code.  It took three years to find Heartbleed.  We have to move towards a model of No More Accidental Finds.

======================

You know, I’d hoped I’d be able to avoid a long form writeup on Heartbleed.  Such things are not meant to be.  I’m going to leave many of the gory technical details to others, but there’s a few angles that haven’t been addressed and really need to be.  So, let’s talk.  What to make of all this noise?

First off, there’s been a subtle shift in the risk calculus around security vulnerabilities.  Before, we used to say:  “A flaw has been discovered.  Fix it, before it’s too late.”  In the case of Heartbleed, the presumption is that it’s already too late, that all information that could be extracted, has been extracted, and that pretty much everyone needs to execute emergency remediation procedures.

It’s a significant change, to assume the worst has already occurred.

It always seems like a good idea in security to emphasize prudence over accuracy, possible risk over evidence of actual attack.  And frankly this policy has been run by the privacy community for some time now.  Is this a positive shift?  It certainly allows an answer to the question for your average consumer, “What am I supposed to do in response to this Internet ending bug?”  “Well, presume all your passwords leaked and change them!”

I worry, and not merely because “You can’t be too careful” has not at all been an entirely pleasant policy in the real world.  We have lots of bugs in software.  Shall we presume every browser flaw not only needs to be patched, but has already been exploited globally worldwide, and you should wipe your machine any time one is discovered?  This OpenSSL flaw is pernicious, sure.  We’ve had big flaws before, ones that didn’t just provide read access to remote memory either.  Why the freak out here?

Because we expected better, here, of all places.

There’s been quite a bit of talk, about how we never should have been exposed to Heartbleed at all, because TLS heartbeats aren’t all that important a feature anyway.  Yes, it’s 2014, and the security community is complaining about Ping again.  This is of course pretty rich, given that it seems half of us just spent the last few days pinging the entire Internet to see who’s still exposed to this particular flaw.  We in security sort of have blinders on, in that if the feature isn’t immediately and obviously useful to us, we don’t see the point.

In general, you don’t want to see a protocol designed by the security community.  It won’t do much.  In return (with the notable and very appreciated exception of Dan Bernstein), the security community doesn’t want to design you a protocol.  It’s pretty miserable work.  Thanks to what I’ll charitably describe as “unbound creativity”, the fast and dumb and unmodifying design of the Internet has given way to a hodgepodge of proxies and routers and “smart” middleboxes that do who knows what.  Protocol design is miserable, nothing is elegant.  Anyone who’s spent a day or two trying to make P2P VoIP work on the modern Internet discovers very quickly why Skype was worth billions.  It worked, no matter what.

Anyway, in an alternate universe TLS heartbeats (with full ping functionality) are a beloved security feature of the protocol as they’re the key to constant time, constant bandwidth tunneling of data over TLS without horrifying application layer hacks.  As is, they’re tremendously useful for keeping sessions alive, a thing I’d expect hackers with even a mild amount of experience with remote shells to appreciate.  The Internet is moving to long lived sessions, as all Gmail users can attest to.  KeepAlives keep long lived things working.  SSH has been supporting protocol layer KeepAlives forever, as can be seen:
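Client-side, for instance, OpenSSH exposes the knobs in ssh_config:

# ssh_config: keepalives sent inside the encrypted channel
Host *
    ServerAliveInterval 60     # probe after 60 seconds of silence
    ServerAliveCountMax 3      # hang up after 3 unanswered probes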

The takeaway here is not “If only we hadn’t added ping, this wouldn’t have happened.”  The true lesson is, “If only we hadn’t added anything at all, this wouldn’t have happened.”  In other words, if we can’t even securely implement Ping, how could we ever demand “important” changes?  Those changes tend to be much more fiddly, much more complicated, much riskier.  But if we can’t even securely add this line of code:

if (1 + 2 + payload + 16 > s->s3->rrec.length)
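(That one line is the entire fix: one type byte, two length bytes, the claimed payload, and sixteen bytes of padding must all fit inside the record actually received.)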

I know Neel Mehta.  I really like Neel Mehta.  It shouldn’t take absolute heroism, one of the smartest guys in our community, and three years for somebody to notice a flaw when there’s a straight up length field in the patch.  And that, I think, is a major and unspoken component of the panic around Heartbleed.  The OpenSSL dev shouldn’t have written this (on New Year’s Eve, at 1AM, apparently).  His coauthors and release engineers shouldn’t have let it through.  The distros should have noticed.  Somebody should have been watching the till, at least this one particular till, and it seems nobody was.

Nobody publicly, anyway.

If we’re going to fix the Internet, if we’re going to really change things, we’re going to need the freedom to do a lot more dramatic changes than just Ping over TLS.  We have to be able to manage more; we’re failing at less.

There’s a lot of rigamarole around defense in depth, other languages that OpenSSL could be written in, “provable software”, etc.  Everyone, myself included, has some toy that would have fixed this.  But you know, word from the Wall Street Journal is that there have been all of $841 in donations to the OpenSSL project to address this matter.  We are building the most important technologies for the global economy on shockingly underfunded infrastructure.  We are truly living through Code in the Age of Cholera.

Professor Matthew Green of Johns Hopkins University recently commented that he’s been running around telling the world for some time that OpenSSL is Critical Infrastructure.  He’s right.  He really is.  The conclusion is resisted strongly, because you cannot imagine the regulatory hassles normally involved with traditionally being deemed Critical Infrastructure.  A world where SSL stacks have to be audited and operated against such standards is a world that doesn’t run SSL stacks at all.

And so, finally, we end up with what to learn from Heartbleed.  First, we need a new model of Critical Infrastructure protection, one that dedicates real financial resources to the safety and stability of the code our global economy depends on – without attempting to regulate that code to death.  And second, we need to actually identify that code.

When I said that we expected better of OpenSSL, it’s not merely that there’s some sense that security-driven code should be of higher quality.  (OpenSSL is legendary for being considered a mess, internally.)  It’s that the number of systems that depend on it, and then expose that dependency to the outside world, is considerable.  This is security’s largest contributed dependency, but it’s not necessarily the software ecosystem’s largest dependency.  Many, maybe even more, systems depend on web servers like Apache, nginx, and IIS.  We fear vulnerabilities significantly more in libz than libbz2 than libxz, because more servers will decompress untrusted gzip over bzip2 over xz.  Vulnerabilities are not always in obvious places – people underestimate just how exposed things like libxml and libcurl and libjpeg are.  And as HD Moore showed me some time ago, the embedded space is its own universe of pain, with 90’s bugs covering entire countries.

If we accept that a software dependency becomes Critical Infrastructure at some level of economic dependency, the game becomes identifying those dependencies, and delivering direct technical and even financial support.  What are the one million most important lines of code that are reachable by attackers, and least covered by defenders?  (The browsers, for example, are very reachable by attackers but actually defended pretty zealously – FFMPEG public is not FFMPEG in Chrome.)

Note that not all code, even in the same project, is equally exposed.  It’s tempting to say it’s a needle in a haystack.  But I promise you this:  Anybody patches Linux/net/ipv4/tcp_input.c (which handles inbound TCP for Linux), a hundred alerts fire, and not all of them go to individuals anyone would call friendly.  One guy, one night, patched OpenSSL.  Not enough defenders noticed, and it took Neel Mehta to do something.

We fix that, or this happens again.  And again.  And again.

No more accidental finds.  The stakes are just too high.

Categories: Security

On Brinksmanship

January 18, 2013 6 comments

Reality is what refuses to go away when you stop believing in it.

The reality – the ground truth – is that Aaron Swartz is dead.

Now what.

Brinksmanship is a terrible game, that all too many systems evolve towards.  The suicide of Aaron Swartz is an awful outcome, an unfair outcome, a radically out of proportion outcome.   As in all negotiations to the brink, it represents a scenario in which all parties lose.

Aaron Swartz lost.  He paid with his life.  This is no victory for Carmen Ortiz, or Steve Heymann, or JSTOR, MIT, the United States Government, or society in general.  In brinksmanship, everybody loses.

Suicide is a horrendous act and an even worse threat.  But let us not pretend that a set of charges covering the majority of Aaron’s productive years is not also fundamentally noxious, with ultimately a deeply similar outcome.  Carmen Ortiz (and, presumably, Steve Heymann) are almost certainly telling the truth when they say they had no intention of demanding thirty years of imprisonment from Aaron.  This did not stop them from, in fact, demanding thirty years of imprisonment from Aaron.

Brinksmanship.  It’s just negotiation.  Nothing personal.

Let’s return to ground truth.  MIT was a mostly open network, and the content “stolen” by Aaron was itself mostly open.  You can make whatever legalistic argument you like; the reality is there simply wasn’t much offense taken to Aaron’s actions.  He wasn’t stealing credit card numbers, he wasn’t reading personal or professional emails, he wasn’t extracting design documents or military secrets.  These were academic papers he was ‘liberating’.

What he was, was easy to find.

I have been saying, for some time now, that we have three problems in computer security.  First, we can’t authenticate.  Second, we can’t write secure code.  Third, we can’t bust the bad guys.  What we’ve experienced here, is a failure of the third category.  Computer crime exists.  Somebody caused a huge amount of damage – and made a lot of money – with a Java exploit, and is going to get away with it.  That’s hard to accept.  Some of our rage from this ground truth is sublimated by blaming Oracle.  But some of it turns into pressure on prosecutors, to find somebody, anybody, who can be made an example of.

There are two arguments to be made now.  Perhaps prosecution by example is immoral – people should only be punished for their own crimes.  In that case, these crimes just weren’t offensive enough for the resources proposed (prison isn’t free for society).  Or perhaps prosecution by example is how the system works, don’t be naïve – well then.

Aaron Swartz’s antics were absolutely annoying to somebody at MIT and somebody at JSTOR.  (Apparently someone at PACER as well.)  That’s not good, but that’s not enough.  Nobody who we actually have significant consensus for prosecuting, models himself after Aaron Swartz and thinks “Man, if they go after him, they might go after me”.

The hard truth is that this should have gone away, quietly, ages ago.  Aaron should have received a restraining order to avoid MIT, or perhaps some sort of fine.  Instead, we have a death.  There will be consequences to that – should or should not doesn’t exist here, it is simply a statement of fact.  Reality is what refuses to go away, and this is the path by which brinksmanship is disincentivized.

My take on the situation is that we need a higher class of computer crime prosecution.  We, the computer community in general, must engage at a higher level – both in terms of legislation that captures our mores (and funds actual investigations – those things ain’t free!), and operational support that can provide a critical check on who is or isn’t punished for their deeds.  Aaron’s law is an excellent start, and I support it strongly, but it excludes faux law rather than including reasoned policy.  We can do more.  I will do more.

The status quo is not sustainable, and has cost us a good friend.  It’s so out of control, so desperate to find somebody – anybody! – to take the fall for unpunished computer crime, that it’s almost entirely become about the raw mechanics of being able to locate and arrest the individual instead of about their actual actions.

Aaron Swartz should be alive today.  Carmen Ortiz and Steve Heymann should have been prosecuting somebody else.  They certainly should not have been applying a 60x multiple between the amount of time they wanted, and the degree of threat they were issuing.  The system, in all of its brinksmanship, has failed.  It falls on us, all of us, to fix it.

Categories: Security

Actionable Intelligence: The Mouse That Squeaked

December 14, 2012 3 comments

[Obligatory disclosures — I’ve consulted for Microsoft, and had been doing some research on Mouse events myself.]

So one of the more important aspects of security reporting is what I’ve been calling Actionable Intelligence. Specifically, when discussing a bug — and there are many, far more than are ever discovered let alone disclosed — we have to ask:

What can an attacker do today, that he couldn’t do yesterday, for what class attacker, to what class victim?

Spider.IO, a fraud analytics company, recently disclosed that under Internet Explorer attackers can capture mouse movement events from outside an open window. What is the Actionable Intelligence here? It’s moderately tempting to reply: We have a profound new source of modern art.

[image: psmap]
(Credit: Anatoly Zenkov’s IOGraph tool)

I exaggerate, but not much. The simple truth is that there are simply not many situations where mouse movements are security sensitive. Keyboard events, of course, would be a different story — but mouse? As more than a few people have noted, they’d be more than happy to publish their full movement history for the past few years.

It is interesting to discuss the case of the “virtual keyboard”. There has been a movement (thankfully rare) to force credential input via mouse instead of keyboard, to stymie keyloggers. This presupposes a class of attacker that has access to keyboard events, but not mouse movements or screen content. No such class actually exists; the technique was never protecting much of anything in the first place. It’s just pain-in-the-butt-as-a-feature.  More precisely, it’s another example of Rick Wash’s profoundly interesting turn of phrase, Folk Security. Put simply, there is a belief that if something is hard for a legitimate user, it’s even harder for the hacker. Karmic security is (unfortunately) not a property of the universe.

(What about the attacker with an inline keylogger? Not only does he have physical access, he’s not actually constrained to just emulating a keyboard. He’s on the USB bus, he has many more interesting devices to spoof.)

That’s not to say spider.io has not found a bug. Mouse events should only come from the web frame for which script has dominion over, in much the same way CNN should not be receiving Image Load events from a tab open to Yahoo. But the story of the last decade is that bugs are not actually rare, and that from time to time issues will be found in everything. We don’t need to have an outright panic when a small leak is found. The truth is, every remote code execution vulnerability can also capture full screen mouse motion. Every universal cross site scripting attack (in which CNN can inject code into a frame owned by Yahoo) can do similar, though perhaps only against other browser windows.
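The pattern spider.io described fits in a few lines of page script; a sketch using the old IE event model, with the logging helper invented:

// Sketch: in a vulnerable browser, coordinates keep updating even when the
// cursor is outside the page's own window.
document.attachEvent("onmousemove", function () {
  var e = window.event;
  recordMovement(e.screenX, e.screenY);   // hypothetical logging endpoint
});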

I would like to live in a world where this sort of very limited overextension of the web security model warrants a strong reaction.  It is in fact nice that we do live in a world where browsers effectively expose the most nuanced and well developed (if by fire) security model in all of software.  Where else is the proper scope of mouse events even a comprehensible discussion?

(Note that it’s a meaningless concept to say that mouse events within the frame shouldn’t be capturable.  Being able to “hover” on items is a core user interface element, particularly for the highly dynamic UI’s that Canvas and WebGL enable.  The depth of damage one would have to inflict on the browser usability model, to ‘secure’ activity in what’s actually the legitimate realm of a page, would be profound.  When suggesting defenses, one must consider whether the changes required to make them reparable under actual assault ruin the thing being defended in the first place.  We can’t go off destroying villages in order to save them.)

So, in summary: Sure, there’s a bug here with these mouse events. I expect it will be fixed, like tens of thousands of others. But it’s not particularly significant.  What can an attacker do today, that he couldn’t do yesterday?  Not much, to not many.  Spider.io’s up to interesting stuff, but not really this.

Categories: Security