The DNSSEC Diaries, Ch. 5: Implicit Policies, Explicit Public Keys

December 20, 2010

Well, we’ve got something that works.  So, of course we have to muck with it 🙂

The immediate architectural question is whether we should support the storage of full keying data in DNS.  See, right now, we’re just storing the hash of keying data — a nice, fixed-size blob that can fit into a text record without much fuss.  There’s a fundamental assumption with this approach:  Any protocol we happen to use will negotiate a public key (presently inside a certificate) at the application layer.  DNSSEC only needs to be used to validate the data received at that layer.

That happens to be true for most existing protocols, because most existing protocols were designed to interface with X.509.  The classic model is to connect, receive a certificate, and then interrogate that certificate against various requirements like:

  • Is the subject name correct?
  • Do I trust this particular Certificate Authority?
  • Has this certificate expired?
  • Has this certificate been revoked?
  • Are there any policies embedded in this certificate that mean I should reject it despite everything else being in order?

Shockingly, this required a decent amount of expertise to be able to handle correctly.  So most programmers don’t — the amount of software out there that doesn’t even make sure it’s encrypting to the right identity is basically ‘almost everything that isn’t a browser’.  The blame isn’t only on the APIs — certificates are really expensive to deploy organizationally, so a lot of companies just throw up their hands and do what they can.  See:  The Little Black Box project, which archives fixed private keys shipped on devices.

In the new model, all the above requirements effectively get subsumed at the DNSSEC layer.  If the TXT KEY1 record resolves at all, that means:

  • The name was correct
  • The one root — the DNS root — has validated the record
  • The record hasn’t expired
  • Since the record is short lived, it doesn’t need to be revoked.  Revocation basically happens all the time, by default
  • Policies in the TXT KEY1 (Strict-Transport-Security, Secure Renegotiation) are available for review

In other words, most of the complexity around certificate validation disappears; if DNSSEC reports the name was resolved securely, you’re done.  (The cost is that, unlike with certificates, you need to have network access to validate the name.)

So, you end up with an interesting question:  Why should a protocol even push a certificate at the application layer?  Practically all the interesting semantics that were supposed to be extracted from the certificate are sort of inherited for free from DNS, now that it’s been enhanced with DNSSEC.  So, at least in the common protocol case, why parse a cert?  Why expose oneself to ASN.1, to manually checking policy fields, to all that rigamarole?  In fact, why suffer the application layer burden of moving the cert blob itself?

It’s a legitimate question, and it doesn’t just affect new protocol work.  Google has been investing a tremendous amount in TLS to try to reduce the number of round trips the protocol requires.  See, the more round trips between client and server, the slower the protocol has to be.  If TLS requires the certificate before initiating an encrypted communication, that’s another round trip.

If DNS can remove that round trip, TLS gets faster.  Huzzah!

(Yes, DNS has its own round trips — but DNS, unlike HTTPS or even HTTP, has a site-wide caching layer in front of effectively all users.  It was the failure to support this caching layer that ultimately made DNSCurve not work.)

It’s not just TLS.  Most crypto protocols could be simplified fairly significantly if they had a secure DNSSEC endpoint from which to request key material.  If we’re to win the war on passwords, if we’re to get globalized IPsec security in the cloud, if we’re to get secure email, we desperately need simplification.  Requiring certificates only when necessary, as opposed to any time a public key is needed, is a pretty big win from where I stand.

There are also very nice things that happen to IPsec, if key material can be directly extracted.  There’s a lot of complexity in IPsec key exchange, and DNSSEC can potentially drop this entire portion of IPsec’s complexity load.  That’s very nice.

Thus, the first thing I’ll probably be implementing for Phreebird Suite 1.03 is support for inserting full key material into the DNS, instead of just a hash.  This might look something like: IN TXT “v=key1 pka=rsa e=65537 m=ANknyBHye+RFyUa2Y3WDsXd+F0GJgDjxRSegPNnoqABL2

Not sure if this is the precise format I’ll implement — there are pros and cons to Base64, and I need to do my homework on PKCS1.5 type issues — but I’m convinced that being able to store the raw public key in DNS is useful.  I’ll come closer to a decision as I get closer to writing this code.  (Thanks to John Gilmore, who convinced me that something like this was necessary and helpful.)

What I’m pretty sure about, however, is that I don’t ever want to put full certificates into the DNS.  This has been tried — in fact, there’s an RRTYPE for it: CERT.  A number of implementations have actually gone with CERT, only to abandon the format after a while.  The things are just too big.

So, does that mean DNSSEC can only bootstrap a small amount of trusted data?  No.  Let’s talk in the next post about embedding large form data.

Categories: Security

The DNSSEC Diaries, Ch. 4: A Schema For TXT

December 19, 2010

Right.  Enough theory.  Let’s get down to bits and bytes.  As of 1.02, the basic name/key mapping looks roughly like: IN TXT “v=key1 ha=sha1 h=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15”

Let’s break this down:

  • We’re attaching the key record directly to the name itself
  • v=key1:  The subtype of this TXT record is key, version 1.
  • ha=sha1:  The hash algorithm used on the attached data is SHA-1.
  • h=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15:  When connecting to the name, a certificate will be presented.  The hash of this certificate will be f1d2d2f924e986ac86fdf7b36c94bcdf32beec15, as per the hash algorithm declared in ha.

Yes, it really can be this easy.  One of the expensive aspects of X.509 was learning how to generate and maintain certificates.  Doing the above is much simpler.
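If you want to see just how little code this takes, here’s a sketch in Python (my function names, not actual Phreebird code):

```python
import hashlib

def parse_key1(txt):
    # Space-delimited flat key=value pairs, e.g.
    # "v=key1 ha=sha1 h=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15"
    fields = dict(pair.split("=", 1) for pair in txt.split())
    if fields.get("v") != "key1":
        raise ValueError("not a key1 record")
    return fields

def cert_matches(fields, cert_der):
    # Hash the delivered certificate with the declared algorithm,
    # then compare against the h field.
    algo = fields.get("ha", "sha1")
    return hashlib.new(algo, cert_der).hexdigest() == fields.get("h")
```

That’s the whole grammar: split on spaces, split on the first equals sign, done.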

Of course, that’s not allowed to be the end of the story.  What else might we put in there?

(Later, we’ll discuss the finer points of this particular grammar, effectively a space delimited flat key-value pair architecture.   There are other approaches, and I could be convinced to shift to them.  But this is what we’re using now, mainly for parity with most everything else using TXT.)

The power of DNSSEC is the ability to bootstrap trust.  There’s a very interesting technology out there called Strict-Transport-Security, that’s slowly being integrated into each browser.  Strict-Transport-Security, or STS, enforces the use of TLS when accessing web sites.  In effect, it turns off insecure HTTP.

In an era where hijacking HTTP sessions is as easy as burning sheep, this is a big deal.

STS has a problem — right now, use of it requires something of a “leap of faith” — the first time a site is connected to, there is no STS bit cached yet, so nothing protects that initial connection. And there’s a second problem — once you’ve cached a value for STS, what if you’re wrong?  As my friend Damon Cortesi pointed out:

Just spent the past hour debugging what turned out to be HSTS – This is why developers hate security!

Why should it be any harder to find out whether to use TLS, than it is to discover the IP to connect to?  Wouldn’t it be nice if you could ask via DNSSEC?

It would.  Thus, though Phreebird (Phreeload, really) isn’t really in a position to enforce STS, not being in the HTTP pipeline, KEY1 is explicitly supporting sts=1 as a way of securely expressing TLS only.  The idea has come up before, in Barth and Jackson’s ForceHTTPS, and it’s a good one.  We basically end up with the following in DNS: IN TXT “v=key1 ha=sha1 h=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15 sts=1”

Is this enough?  I’m not sure.

Another thing I’m supporting is sn=1 — Marsh Ray did some pretty severe things against TLS a while back, and fixing them required a breaking change to the TLS protocol.  Right now, there’s no safe way to know if the host you’re communicating with has Marsh et al’s fix.  We can put a bit into this record saying Secure Renegotiation is supported.

This does mean, by the way, that we’re mixing identity and policy.  I’m not convinced that’s a bad thing.  I am convinced that I don’t want to run a new query for each individual bit of policy that’s attached to a name.  It’s not necessarily slow — when you query TXT, you get all TXT records, as opposed to having to burn an RRTYPE on each policy bit — but it invites confusion attacks where it’s unclear which bits of policy apply to which representation of identity.  That, I am scared of.

But neither of the above tricks actually addresses key retrieval itself.  One case that I am intentionally supporting right now is that of the server farm which contains too many individual keys to place in DNS in advance.  By standard security theory, bits are cheap, so if you have a hundred TLS accelerators, you should never move one private key into all of them.  Instead, you should make a hundred different keys, and sign them all with the same Certificate Authority.  Well, a hundred private keys means a hundred public keys, and that’s too much for us.  Instead, I support something called Livehashing.  If “lh=1” shows up in the TXT KEY1 record, and the hash of the certificate is unrecognized, I’ll do a second lookup — this one, actually containing the hash of the witnessed cert.  So, for example, if I connect to a name, get a certificate with the hash of e242ed3bffccdf271b7fbaf34ed72d089537b42f, and see: IN TXT “v=key1 ha=sha1 lh=1”

I will do a second lookup, against a name derived from that hash.

At present, if I get any secure record back from this name, I’ll assume that means the name and hash are acceptably linked.  A future release will have a better mechanism for confirmation.
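A sketch of that second lookup’s name construction, in Python. Note that the hash-as-leftmost-label layout below is just my illustration; the precise format is still in flux:

```python
import hashlib

def livehash_lookup_name(name, cert_der, algo="sha1"):
    # If lh=1 and the witnessed cert's hash isn't recognized, build
    # the name for the second lookup. The layout (hex hash prepended
    # as a label) is illustrative only, not a settled wire format.
    digest = hashlib.new(algo, cert_der).hexdigest()
    return digest + "." + name
```

The point is only that the second query is derived deterministically from the witnessed cert, so the server farm can publish records on demand.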

Though this method adds a round trip, it significantly improves deployability for large sites.  That matters.  It also only adds that round trip in circumstances where the large site would otherwise be unable to deploy the technology, rather than slowing everyone down for the benefit of a few.  That matters even more.

It also causes issues with CNAMEs and Wildcards, as discussed in Ch. 1.  I think I know how to address that, though.

Another area of significant controversy is precisely what is stored in the DNS.  There’s basically the choice of the certificate vs. the public key, and whether the data is stored hashed or not.  Right now, I have an option called hr, for Hash Range.  If hr=cert, or is unset, then the thing to hash is the entire certificate.  If hr=pubkey, then the data to be hashed is simply the public key of the endpoint.

What’s the difference?  Ah, welcome to the weeds.  If one hashes the certificate, then the data asserted by DNSSEC includes all the policies embedded in X.509.  Important: That doesn’t mean you can necessarily trust said policies — in fact, there are policies that we know we can’t trust when exclusively asserted by DNSSEC, like EV — but the semantics are at least there to continue supporting them.

However, if one hashes the public key, that means an implementation doesn’t need to support X.509, or even ASN.1 at all.  That’s potentially a significant reduction in complexity.
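In code, the split is tiny; the real difference is who has to speak ASN.1. A sketch (the SubjectPublicKeyInfo bytes are assumed to have been extracted elsewhere, since pulling them out of a cert is exactly the X.509 work hr=pubkey lets a verifier skip):

```python
import hashlib

def hash_target(cert_der, spki_der, hr="cert"):
    # hr=cert (or unset): hash the whole certificate, embedded
    #   X.509 policies and all.
    # hr=pubkey: hash only the public key bytes, so the verifier
    #   never has to parse X.509 itself.
    data = spki_der if hr == "pubkey" else cert_der
    return hashlib.sha1(data).hexdigest()
```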

So, that’s what we have now as optional settings:  sts=[0|1], sn=[0|1], lh=[0|1], and hr=[cert|pubkey].

What’s coming soon, or at least being considered?  We’ll talk about that in the next post.

Categories: Security

The DNSSEC Diaries, Ch. 3: Contradictions

December 18, 2010

“The server at this name will have the following public key” is not, seemingly, a particularly complicated message to express.

It can become one, though. Should the public key be encapsulated by an X.509 certificate? Or should it be expressed directly? Perhaps we should instead insert a hash, to save space (a principle we’ve reluctantly abandoned with DNSSEC, but bear with me). Do we hash the key or the cert? Which algorithms do we support?

And that’s just semantics. What of syntax?  Do we encode our desires with strings? Or magic numbers?

And just how do we manage upgrades over time?

So there are engineering challenges. And, as it turns out, when I say there are no right answers, that means there are no right answers. There are only “ways we did things in the 90’s”, or “ways we did things in the 00’s”, and a battle over which disasters we’d prefer not to repeat.

Finally, as if this discussion wasn’t meta enough — the lack of dead bodies in computer “engineering” is the thing that puts quotes around computer “engineering”.  The pain of failed IT projects, even to the tune of billions of dollars, strangely drives far less change than the dead bodies, past and future, that haunt all other engineers.  (Did you know you can’t even call yourself an engineer in Canada without a license?  There, you’re an Engineer.  Because you might kill people.)

So, how do we go about building things correctly?  There are some things we know.  We know that there is a fundamental tension between feature count and complexity.  The more features a system has, the more uses (and users) it can support.  But as features pile on, complexity — at its core, a measure of how many things an implementor must get right, in order for the system to run correctly — increases, with subtle but terrifying side effects.

Past a certain level of complexity, a system cannot be secured, because no individual person can wrap their head around the full set of possibilities for the entire system.

There is another tension:  Getting things right the first time, versus adapting a protocol in response to user experience.  Put simply, we know we won’t get things right the first time.  As the quote goes, “plan to write it twice.  You will, anyway.”  But we also know, in adapting implementations to discoveries from the field, that we experience enormous complexity penalties from migration costs — penalties that, at the extreme, are high enough that we can’t actually use the new features we so expensively designed.

X.509v3 has some important new stuff — Name Constraints, for example.  Will any CA sell you a certificate that allows you to use Name Constraints?  No, not even one.

So, there’s a bit of a mess.  We know that no security protocol, no matter how well designed, will entirely survive contact with the enemy.  But we know we’re going to pay dearly for having to change our plans.

I truly suspect that security is more vulnerable to Wicked Problems than most anything else in computer engineering.  Ah well.  Didn’t sign up for the easy stuff.

So, to try to cut this Gordian knot, I’m going to follow the classical approach:  Release early, release often.

By definition, each protocol revision will be wrong.  But we should get feedback, a little at a time.  (One nice thing about BSD licensing, of course, is that at minimum, this code should provide a good foundation for what may end up being a radically different proposal.)  Over time, we’ll see what we all converge on — and hopefully standardize something like it.

This is not a strategy I’ve invented.  It’s effectively where all the actually successful Internet standards have come from.

What might be a little different is that I’m effectively trying to liveblog the process.  Let’s see how that goes.

Next up — rubber meets the road.

Categories: Security

The Optical Illusion That’s So Good, It Even Fools DanKam

December 17, 2010

We’ll return to the DNSSEC Diaries soon, but I wanted to talk about a particularly interesting optical illusion first, due to its surprising interactions with DanKam.

Recently on Reddit, I saw the following rather epic optical illusion (originally from Akiyoshi Kitaoka):

What’s the big deal, you say?  The blue and green spirals are actually the same color.  Don’t believe me?  Sure, I could pull out the eyedropper and say that the greenish/blue color is always 0 Red, 255 Green, 150 Blue.  But nobody can be told about the green/blue color.  They have to see it, for themselves:

(Image c/o pbjtime00 of Reddit.)

So what we see above is the same greenish blue region, touching the green on top and the blue on bottom — with no particular seams, implying (correctly) that there is no actual difference in color and what is detected is merely an illusion.

And the story would end there, if not for DanKam.  DanKam’s an augmented reality filter for the color blind, designed to make clear what colors are what.  Presumably, DanKam should see right through this illusion.  After all, whatever neurological bugs cause the confusion effect in the brain, certainly were not implemented in my code.

Hmm.  If anything, the illusion has grown stronger.  We might even be tricking the color blind now!  Now we have an interesting question:  Is the filter magnifying the effect, making identical colors seem even more different?  Or is it, in fact, succumbing to the effect, confusing the blues and greens just like the human visual system is?

DanKam has an alternate mode that can differentiate the two.  HueWindow mode lets the user select only a small “slice” of the color spectrum — just the blues, for example, or just the greens.  It’s basically the “if all else fails” mode of DanKam.  Let’s see what HueWindow shows us:


There’s no question:  DanKam’s confusing the greens and blues, just like those of us armed with brains.  What’s going on?

According to most discussion in the Perception world, the source of the green vs. blue confusion involves complex surround effects, with pattern completion.  For example, this discussion from The Bad Astronomer:

The orange stripes go through the “green” spiral but not the “blue” one. So without us even knowing it, our brains compare that spiral to the orange stripes, forcing it to think the spiral is green. The magenta stripes make the other part of the spiral look blue, even though they are exactly the same color. If you still don’t believe me, concentrate on the edges of the colored spirals. Where the green hits the magenta it looks bluer to me, and where the blue hits the orange it looks greener. Amazing.

The overall pattern is a spiral shape because our brain likes to fill in missing bits to a pattern. Even though the stripes are not the same color all the way around the spiral, the overlapping spirals makes our brain think they are. The very fact that you have to examine the picture closely to figure out any of this at all shows just how easily we can be fooled.

See, that looks all nice and such, but DanKam’s getting the exact same effect and believe me, I did not reimplement the brain!  Now, it’s certainly possible that DanKam and the human brain are finding multiple paths to the same failure.  Such things happen.  But bug compatibility is a special and precious thing from where I come from, as it usually implies similarity in the underlying design.  I’m not sure what’s going on in the brain.  But I know exactly what’s going on with DanKam:

There is not a one-to-one mapping between pixels on the screen and pixels in the camera.  So multiple pixels on screen are contributing to each pixel DanKam is filtering.  The multiple pixels are being averaged together, and thus orange (605nm) + turquoise (495nm) is averaged to green (550nm)  while magenta (~420nm) + turquoise (495nm)  is averaged to blue (457nm).
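You can watch the mixing happen with nothing fancier than integer averaging. The turquoise below is the illusion’s actual (0, 255, 150); the orange and magenta triples are my approximations:

```python
def average(p1, p2):
    # What a camera does when two screen pixels land on one sensor
    # pixel: average them, channel by channel.
    return tuple((a + b) // 2 for a, b in zip(p1, p2))

turquoise = (0, 255, 150)    # the illusion's green/blue color
orange    = (255, 128, 0)    # approximate
magenta   = (255, 0, 255)    # approximate

print(average(orange, turquoise))   # green channel dominates: reads as green
print(average(magenta, turquoise))  # blue channel dominates: reads as blue
```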

There is no question that this is what is happening with DanKam.  Is it just as simple with the brain?  Well, let’s do this:  Take the 512×512 spiral above, resize it down to 64×64 pixels, and then zoom it back up to 512×512.  Do we see the colors we’d expect?

Indeed!  Perhaps the brightness is a little less than expected, but without question, the difference between the green and blue spirals is plain as day!  More importantly, perceived hues are being recovered fairly accurately!  And the illusion-breaking works on Akiyoshi Kitaoka’s other attacks:


So, perhaps a number of illusions are working not because of complex analysis, but because of simple downsampling.

Could it be that simple?

No, of course not, the first rule of the brain is it’s always at least a little more complicated than you think, and probably more 🙂  You might notice that while we’re recovering the color, or chroma, relatively accurately, we’re losing detail in perceived luminance.  Basically, we’re losing edges.

It’s almost like the visual system sees dark vs. light at a different resolution than one color vs. another.

This, of course, is no new discovery.  Since the early days of color television, color information has been sent with lower detail than the black and white it augmented.  And all effective compressed image formats tend to operate in something called YUV, which splits the normal Red, Green, and Blue into Black vs. White, Red vs. Green, and Orange vs. Blue (which happen to be the signals sent over the optic nerve).  Once this is done, the Black and White channels are transmitted at full size while the differential color channels are halved or even quartered in detail.  Why do this?  Because the human visual system doesn’t really notice.  (The modes are called 4:2:2 or 4:1:1, if you’re curious.)
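Here’s a toy version of that chroma subsampling in Python, with BT.601-style conversion coefficients; the pairwise averaging is roughly what 4:2:2 does along a scanline:

```python
def rgb_to_yuv(r, g, b):
    # BT.601 luma plus two color-difference (chroma) channels
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.492 * (b - y), 0.877 * (r - y)

def subsample_row(pixels):
    # Keep luma (Y) at full resolution; average chroma (U, V)
    # over each pair of neighboring pixels.
    yuv = [rgb_to_yuv(*p) for p in pixels]
    out = []
    for i in range(0, len(yuv) - 1, 2):
        (y0, u0, v0), (y1, u1, v1) = yuv[i], yuv[i + 1]
        u, v = (u0 + u1) / 2, (v0 + v1) / 2
        out += [(y0, u, v), (y1, u, v)]
    return out
```

Neighboring pixels end up sharing chroma while keeping their own luma, which is exactly the kind of place where an adversarially chosen pattern can shove a hue around.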

So, my theory is that these color artifacts aren’t the result of some complex analysis with pattern matching, but rather the normal downsampling of chroma that occurs in the visual system.  Usually, such downsampling doesn’t cause such dramatic shifts in color, but of course the purpose of an optical illusion is to exploit the corner cases of what we do or do not see.


One of my color blind test subjects had this to say:

“Put a red Coke can, and a green Sprite can, right in front of me, and I can tell they’re different colors.  But send me across the room, and I have no idea.”

Size matters to many color blind viewers.  One thing you’ll see test subjects do is pick up objects, put them really close to their face, scan them back and forth…there’s a real desire to cover as much of the visual field as possible with whatever needs to be seen.  One gets the sense that, at least for some, they see Red vs. Green at very low resolution indeed.


Just because chroma is downsampled, with hilarious side effects, doesn’t mean running an attack in luminance won’t work.  Consider the following images, where the grey spade seems to not always be the same grey spade.  On the left are Kitaoka’s original illusions.  On the right is what happens when you blur things up.  Suddenly, the computer sees (roughly, with some artifacts) the same shades we do.  Interesting…


I’ve thought that DanKam works because it quantizes hues to a “one true hue”.  But it is just as possible that DanKam is working because it’s creating large regions with exactly the same hue, meaning there’s less distortion during the zoom down…

Somebody should look more into this link between visual cortex size and optical illusions.  Perhaps visual cortex size directly controls the resolution onto which colors and other elements are mapped?  I remember a thing called topographic mapping in the visual system, in which images seen were actually projected onto a substrate of nerves in an accurate, x/y mapping.  Perhaps the larger the cortex, the larger the mapping, and thus the stranger the effects?

I have to say, it’d be amusing if DanKam actually ended up really informing us re: how the visual system works.  I’m just some hacker playing with pixels here… 😉

Categories: Imagery

DanKam: Augmented Reality For Color Blindness

December 15, 2010

[And, we’re in Forbes!]
[DanKam vs. A Really Epic Optical Illusion]

So, for the past year or so, I’ve had a secret side project.

Technically, that shouldn’t be surprising.  This is security.  Everybody’s got a secret side project.

Except my side project has had nothing to do with DNS, or the web, or TCP/IP.  In fact, this project has nothing to do with security at all.

Instead, I’ve been working on correcting color blindness.  Check it out:

These are the Ishihara test plates.  You might be familiar with them.

If you can read the numbers on the left, you’re not color blind.

You can almost certainly read the numbers on the right.  That’s because DanKam has changed the colors into something that’s easier for normal viewers to read, but actually possible for the color blind to read as well.  The goggles, they do something!

Welcome to DanKam, a $3 app being released today on iPhone and Android (ISSUES WITH CHECKOUT RESOLVED!  THIS CODE IS LIVE!).  DanKam is an augmented reality application, designed to apply one of several unique and configurable filters to images and video such that colors — and differences between colors — are more visible to the color blind.

[EDIT:  Some reviews.  Wow.  Wow.

@waxpancake: Dan Kaminsky made an augmented-reality iPhone app for the colorblind. And it *works*.

Oh, and I used it today in the real world. It was amazing! I was at Target with my girlfriend and saw a blue plaid shirt that I liked. She asked me what color it was so I pulled up DanKam and said “purple.” I actually could see the real color, through my iPhone! Thanks so much, I know I’ve said it 100x already, but I can’t say it another 100x and feel like I’ve said it enough. Really amazing job you did.
–Blake Norwood

I went to a website that had about 6 of those charts and screamed the number off every one of those discs! I was at work so it was a little awkward.
–JJ Balla, Reddit

Thank you so much for this app. It’s like an early Christmas present! I, too, am a color blind Designer/Webmaster type with long-ago-dashed pilot dreams. I saw the story on Boing Boing, and immediately downloaded the app. My rods and cones are high-fiving each other. I ran into the other room to show a fellow designer, who just happened to be wearing the same “I heart Color” t-shirt that you wore for the Forbes photo. How coincidental was that? Anyway, THANKS for the vision! Major kudos to you…
–Patrick Gerrity

Just found this on BoingBoing, fired up iTunes, installed and tried it. Perfectly done!
That will save me a lot of trouble with those goddamn multicolor LEDs! Blinking green means this, flashing red means that… Being color blind, it is all the same. But now (having DanKam in my pocket), I have a secret weapon and might become the master of my gadgetry again…

Can’t thank you enough…!

Holy shit. It works.

It WORKS. Not perfectly for my eyes, but still, it works pretty damned well.

I need a version of this filter I can install on my monitors and laptop, asap. And my tv. Wow.

That is so cool. Thanks for sharing this. Wish I could favorite it 1000 more times.
–zarq on Metafilter

As a colorblind person I just downloaded this app. How can I f*cking nominate you to the Nobel Prize committee? I am literally, almost in tears writing this, I CAN FRICKIN’ SEE PROPER COLORS!!!! PLEASE KEEP UP THIS WORK! AND CONTINUE TO REFINE THIS APP!
–Aaron Smith]

At this point, you’re probably asking:

  1. Why is some hacker working on vision?
  2. How could this possibly work?

Well, “why” has a fairly epic answer.

So, we’re in Taiwan, building something…amusing…when we go out to see the Star Trek movie.  Afterwards, one of the engineers mentions he’s color blind.  I say to him, these fateful words:

“What did you think about the green girl?”

He responds, with shock:  “There was a green girl???  I thought she was just tan!”


But I was not yet done with him.  See, before I got into security, I was a total graphics nerd.  (See:  The Imagery archive on this website.)  Graphics nerds know about things like colorspaces. Your monitor projects RGB — Red, Green, Blue.  Printouts reflect CMYK — Cyan, Magenta, Yellow, blacK.  And then there’s the very nice colorspace, YUV — Black vs. White, Orange vs. Blue, and Red vs. Green.  YUV is actually how signals are sent back to the brain through the optic nerve.  More importantly, YUV theoretically channelizes the precise area in which the color blind are deficient:  Red vs. Green.

Supposedly, especially according to web sites that purport to simulate color blindness for online content, simply setting the Red vs. Green channel to a flat 50% grey would simulate nicely the effects of color blindness.

Alas, H.L. Mencken:  “For every complex problem there is an answer that is clear, simple, and wrong.”  My color blind friend took one look at the above simulation and said:

“Heh, that’s not right.  Everything’s muddy.  And the jacket’s no longer red!”

A color blind person telling me about red?  That would be like a deaf person describing the finer points of a violin vs. a cello. I was flabbergasted (a word not said nearly enough, I’ll have you know).

Ah well.  As I keep telling myself — I love being wrong.  It means the world is more interesting than I thought it was.

There’s actually a lot of color blind people — about 10% of the population.  And they aren’t all guys, either — about 20% of the color blind are female (it totally runs in families too, as I discovered during testing).  But most color blind people are neither monochromats (seeing everything in black and white) nor dichromats (seeing only the difference between orange and blue).  No, the vast majority of color blind people are in fact what are known as anomalous trichromats.  They still have three photoreceptors, but the ‘green’ receptor is shifted a bit towards red.  The effect is subtle:  Certain reds might look like they were green, and certain greens might look like they were red.

Thus the question:  Was it possible to convert all reds to a one true red, and all greens to a one true green?

The answer:  Yes, given an appropriate colorspace.

HSV, for Hue, Saturation, Value, is fairly obscure.  It basically refers to a pattern of computing a “true color” (the hue), the relative proportion of that color to every other color (saturation), and the overall difference from darkness for the color (value).  I started running experiments where I’d leave Saturation and Value alone, and merely quantize Hue.
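The experiment itself fits in a few lines of Python, via the standard colorsys module (the bin count of six is just an illustrative choice):

```python
import colorsys

def quantize_hue(r, g, b, bins=6):
    # Snap hue to the nearest of `bins` "true hues"; leave
    # saturation and value exactly as they were.
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    h = (round(h * bins) / bins) % 1.0
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)
```

An ambiguous orangish red like (255, 40, 0) snaps to a pure red (255, 0, 0), while (40, 255, 0) snaps to a pure green: every red becomes the one true red, every green the one true green.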

This actually worked well — see the image at the top of the post!  DanKam actually has quite a few features; here’s some tips for getting the most out of it:

  1. Suppose you’re in a dark environment, or even one with tinted lights.  You’ll notice a major tint to what’s coming in on the camera — this is not the fault of DanKam, it’s just something your eyes are filtering!  There are two fixes.  The golden fix is to create your own white-balanced light.  But the other fix is to try to fix the white balance in software.  Depending on the environment, either can work.
  2. Use Source to see the Ishihara plates (to validate that the code works for you), or to turn on Color Wheel to see how the filters actually work
  3. Feel free to move the slider left and right, to “tweak” the values for your own personal eyeballs.
  4. There are multiple filters — try ’em!  HueWindow is sort of the brute forcer, it’ll only show you one color at a time.
  5. Size matters in color blindness — so feel free to put the phone near something, pause it, and then bring the phone close.  Also, double clicking anywhere on the screen will zoom.
  6. The gear in the upper left hand corner is the Advanced gear.  It allows all sorts of internal variables to be tweaked and optimized.  They allowed at least one female blue/green color blind person to see correctly for the first time in her life, so these are special.

In terms of use cases — matching clothes, correctly parsing status lights on gadgets, and managing parking structures are all possibilities.  In the long run, helping pilots and truckers and even SCADA engineers might be nice.  There’s a lot of systems with warning lights, and they aren’t always obvious to the color blind.

Really though, not being color blind myself, I can't imagine how this technology will be used.  I'm pretty sure it won't be used to break the Internet though, and for once, that's fine by me.

Ultimately, why do I think DanKam is working?  The basic theory runs like so:

  1. The visual system is trying to assign one of a small number of hues to every surface
  2. Color blindness, as a shift from the green receptor towards red, is confusing this assignment
  3. It is possible to emit a “cleaner signal”, such that even colorblind viewers can see colors, and the differences between colors, accurately.
  4. It has nothing to do with DNS  (I kid!  I kid!  But no really.  Nothing.)

If there’s interest, I’ll write up another post containing some unexpected evidence for the above theory.

Categories: Imagery

The DNSSEC Diaries, Ch. 2: For Better or Worse, How The TXT Was Won

December 14, 2010 10 comments

So, I think it’s fairly elegant and straightforward to put a key into DNS like so: IN TXT “v=key1 ha=sha1 h=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15”

On connecting to the site over TLS, a certificate will be delivered.  Traditionally, this certificate would be interrogated to make sure it was signed by a trusted certificate authority.  In the new model:

  • If DNSSEC says the above TXT record chains to the (one) DNS root, and
  • The subtype of the TXT record is key1, and
  • The SHA-1 hash of the delivered certificate is precisely f1d2d2f924e986ac86fdf7b36c94bcdf32beec15, and
  • TLS is happy with the public key in the certificate

…then the connection is considered authenticated, whether or not a CA ultimately signed the certificate.  That’s it.  Move on with life.  Cheap.
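As a sketch, the entire client-side check fits in a few lines.  This assumes the TXT string has already been DNSSEC-validated and that the certificate's DER bytes are in hand; the field names follow the v=key1 format above:

```python
import hashlib

def parse_key1(txt):
    """Split a 'v=key1 ha=sha1 h=...' TXT string into key=value fields."""
    fields = dict(f.split("=", 1) for f in txt.split() if "=" in f)
    return fields if fields.get("v") == "key1" else None

def certificate_matches(txt, cert_der):
    """True if the delivered certificate hashes to the value pinned in DNS."""
    fields = parse_key1(txt)
    if not fields or fields.get("ha") != "sha1":
        return False
    return hashlib.sha1(cert_der).hexdigest() == fields.get("h")
```

Note there's no CA check, no revocation check, no expiry check in sight — all of that lives in the DNSSEC validation that happened before this code runs.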

Can it really be this easy?  I’ll be honest, some people aren’t the biggest fans of this design.  Implementers like it.  Ops guys like it.  I like it.

Some old school DNS people do not like it.  They don't want to see everything shoved under a single record type (TXT or otherwise), and they don't like the idea of a seemingly unstructured textual field containing anything important.  Heck, they even wrote an Informational RFC on the subject — RFC 5507.  The core arguments in that document:

  • No semantics to prevent collision with other use
  • Space considerations in the DNS message

It’s really not a great RFC, but these aren’t meaningless concerns.  However, the heart of engineering is understanding not just why systems succeed, but why they fail.  Lets talk about why, despite the above, I’m convinced public keys in DNS should be encoded as TXT records.

The primary reason is that the story of TXT is the story of real world systems trying non-TXT, and, well, running away screaming. In the last ten years, four major projects have all gone TXT:

  1. SPF (Sender Policy Framework)
  2. DKIM (DomainKeys Identified Mail)
  3. GPG's PKA (PGP Key Retrieval)
  4. FreeS/WAN's IPsec (X-IPsec-Server)

They have company.  From a sample of DNS traces, looking at the count of unique types per unique name:

Fraction	Count	Type
0.40	143453512	A
0.31	110097927	NS
0.13	48033288	CNAME
0.07	23989120	MX
0.04	12841932	RRSIG
0.02	8314220	PTR
0.02	7665020	TXT
0.01	2575267	NSEC
0.00	1557321	NSEC3
0.00	563675	SRV
0.00	276820	AAAA
0.00	53855	NULL
0.00	44411	SPF
0.00	24308	DS
0.00	3190	DNSKEY
0.00	1498	RP
0.00	1216	HINFO
0.00	1118	DLV
0.00	687	DNAME
0.00	557	LOC
0.00	545	514
0.00	491	NAPTR

Excluding DNSSEC, which is its own universe, and AAAA, which is a trivial blob of bits (a 128 bit IPv6 address), TXT is about three orders of magnitude more popular than all RR types invented in the last ten years combined.  Data encoded in TXT is almost as popular as Reverse DNS!

And it’s not like projects didn’t have the opportunity to use custom RRTYPEs.

Consider:  For half of these packages (GPG and FreeS/WAN), RFCs already existed to store their data, but (and this is important) either the developers found the RFCs too difficult to implement, or users found the code too tricky to deploy.  In FreeS/WAN's case, this was so true that:

As of FreeS/WAN 2.01, OE uses DNS TXT resource records (RRs) only (rather than TXT with KEY). This change causes a “flag day”.

Wow.  That is fairly tremendous.  Flag days — the complete shift of a system from a broken design to a hopefully better one — are rare to the point of being almost unheard of in networking design.  They are enormously painful, and only executed when the developer believes their product is absolutely doomed to failure otherwise.

There’s a lot of talk about how custom RR’s aren’t that bad.  Talk is cheap.  Evidence that a design choice forced a flag day is rare, precious, and foreboding.

The reality — the real precedent, far larger than the world of DNS or even security — is that for operational and developmental reasons, developers have abandoned binary formats. XML, and even its lighter cousin JSON, are not the smallest possible way to encode a message the world has ever known.  It is possible to save more space.

Sorry, X.509.  We've abandoned ASN.1, even if bit for bit it's more space efficient.  We abandoned it for a reason.  The programming community is simply done with the sort of bit-fiddly constructs that were popular in the '90s, and that most DNS RRs are constructed from, even if they are easy to parse in C.

Binary parsers (outside of media formats) are dead.  Frankly, as the number one source of exploitable crashes in most software, I’m not going to miss them.  Heck, we’re not even using TCP ports anymore — we have port 80, and we subtype the heck out of it at the URL layer (much as TXT is subtyped with v=key1).

It doesn’t help that, most of the time, DNS knows what fields it’s going to have to store, and identifies them positionally.  Contrast this with the coming reality of storing keys in DNS, where over time, we’re absolutely going to be adding more flags and bits.  That binary format will not be pretty.

Now, I have no interest in putting XML or even JSON into DNS (though the latter wouldn’t be the worst thing ever).  But what we’ve seen is reasonable consensus regarding what should go into TXT records:

  • SPF:  “v=spf1 a -all”
  • GPG: “v=pka1;fpr=[#1];uri=[#2]”
  • FreeS/WAN: “X-IPsec-Server(10)= AQMM…3s1Q==”

Identical?  No.  But straight key value pairs, and only one protocol that doesn’t begin with a magic cookie?  That’s doable.  That’s easy.

That’s cheap to deploy.  And cheap to deploy matters.

Now, here’s an interesting question:

“Why not use a custom RRTYPE, but fill it with key value pairs, or even JSON?  Why overload TXT?”

It’s a reasonable question.  The biggest source of content expansion comes from the fact that now irrelevant records need to be sent, if there are multiple applications sitting on TXT.  Why not have multiple TXTs?

Perhaps, in the future, we could move in this direction.  Nowhere does it say that the same record can't show up with multiple types.  (SPF tried this, though without much success.)  But, today, a big part of what makes non-TXT expensive to deploy is that name servers traditionally needed to be rebuilt to support a new record type.  After all, the server wasn't just hosting records, it was compiling them into a binary format.

(Those who are IT administrators, but not DNS administrators, are all facepalming).

Eventually, RFC 3597 was written to get around this, allowing zone files to contain blobs like this:

sshfp  TYPE44  \# 22 01 01 c691e90714a1629d167de8e5ee0021f12a7eaa1e

To be gentle, zone files filled with hex and length fields, requiring record compilers to update, aren't exactly operations' idea of a good, debuggable time.  This is expensive.
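For what it's worth, decoding the generic \# form is purely mechanical — which is exactly the problem: machines handle it fine, but humans debugging a zone are left eyeballing hex.  A sketch:

```python
def decode_generic_rdata(text):
    """Decode generic RDATA of the form '\\# <length> <hex bytes...>'."""
    parts = text.split()
    if parts[0] != "\\#":
        raise ValueError("not generic RDATA syntax")
    expected_len = int(parts[1])
    data = bytes.fromhex("".join(parts[2:]))
    if len(data) != expected_len:
        raise ValueError("declared length does not match data")
    return data
```

Feeding it the SSHFP example above yields 22 opaque bytes: an algorithm octet, a fingerprint-type octet, and a 20-byte SHA-1 fingerprint — none of which is visible in the zone file itself.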

Of course, a lot of people aren’t directly editing zone files anymore.  They’re operating through web interfaces.  Through these interfaces, either you’re putting in a resource type the server knows of, you’re putting text into a TXT record, or…you’re putting in nothing.

It’s possible to demand that web interfaces around the Internet be updated to support your new record type.  But there is no more expensive ask than that which demands a UI shift.  A hundred backends can be altered for each frontend that needs to be poked at.  The backends support TXT.  They don’t support anything else, and that fact isn’t going to change anytime soon.

And then there are the firewalls — the same firewalls which, like it or not, have made HTTP on 80/tcp the Universal Tunneling Protocol.  Quoting the above RFC 5507 (which observes the following, and then just seems to shrug):

…Firewalls have dropped queries or responses with Resource Record Types that are unknown to the firewall.  This is, for example, one of the reasons the ENUM standard reuses the NAPTR Resource Record, a decision that today might have gone to creating a new Resource Record Type instead.

There’s a reason all those packages abandoned non-TXT.  Whatever downsides there are to using TXT, and I do admit there are some, those who have chosen an architecturally pure custom RRTYPE have failed, while those who’ve “temporarily” went TXT have prospered.

Next up, let's talk about the challenges of putting bootstrap content into the DNS itself.

Categories: Security

The DNSSEC Diaries, Ch. 1: Names And Places

December 13, 2010 Leave a comment

DNSSEC gives developers the ability to authenticate small bits of trusted data in a namespace that transcends organizational boundaries.

That’s great, because application developers have a real problem authenticating across organizational boundaries. PKI based on X.509 was supposed to fix this, but a couple billion dollars in failed deployments later, it’s become painfully clear:  X.509 is really quite expensive and painful.

But developers don’t have a problem doing DNS lookups — complex as distributed databases are, DNS operations coalesce into a small bit of code that’s completely abstracted away.  If developers could authenticate peers, as easily as they could look up an IP address — well, we’d see a lot more authentication.

And a lot fewer password prompts.

But — to be plainly honest — DNSSEC was not originally designed for user applications to validate just any small bits of trusted data.  The original vision was somewhat constrained to validating IP addresses being sought after by a special subset of systems known as Name Servers.

That’s OK.  Believe me, Tim Berners-Lee had no idea when he originally designed much of the web, that practically every page delivered would be dynamically generated from some managed language like Java, C#, or PHP, or would come from complex, worldwide load-balanced Content Distribution Networks.  But there was nothing in the protocol that required pages be static files on a hard drive somewhere.  And look what you could do, if you just dreamed a little bigger…

So, let's say we agree that DNSSEC should be used to push more than just IP addresses to name servers.  There are a number of questions to be asked, regarding how this would work — questions I'll address in further DNSSEC Diaries — but chief among them:

What exactly are we going to put into DNS?

Note, I said DNS, not DNSSEC.  DNSSEC is really just DNS with signatures, making it a more conservative design than people realize.  You put whatever you want into your DNS.  It's just that DNSSEC validates that it's you who put it there.

At this point, a lot of people have recognized that the first thing we want to migrate into DNS are public keys, or at least hashes of them.  (If you need help understanding what a public key is:  Think of it as your face — easy for you to validate, hard for anyone else to clone.  Putting a public key in DNS and securing it with DNSSEC is then just like getting a passport:  Take a photo, put your name next to it, and then overlay an anti-counterfeiting hologram to link the two.)

Sounds pretty nice — but what does it look like on the wire?  There’s actually quite a few options here.  There are two major degrees of freedom when designing what’s known as the schema for trusted key information in DNS:

  1. Where will the public key data reside?
  2. What will the public key data look like?

There are no right answers here, only engineering tradeoffs.  Quite a few teams are working on balancing the various engineering constraints involved, to make the best possible product — the KeyAssure group, for example.  What I'd like to do is explain my thinking behind what's presently implemented in Phreebird Suite 1.02.  It's my engineering philosophy that standards design works much better when people can understand why others built things a certain way, compare working code, and see what actually works well operationally in the field.

Let's tackle the first question:  Where will trusted data reside?  Given a domain, there are in fact three places trusted information can be stored:

  1. Direct:
  2. Indirect:
  3. Indirect with Marker Label:

It also turns out that there is something called an RRTYPE, or a Resource Record Type.  When you query for a name, you can say that you’re interested in an IPv4 address, or an IPv6 address, or any of a number of other things.  There are three competitors for what RRTYPE to query:

  1. CERT — an already specified, somewhat sub-typable record type
  2. TXT — a “blank” RRTYPE designed for textual data
  3. Custom — a new, unassigned RRTYPE, designed from scratch to contain certificate data
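The "question" side of DNS really is just a name plus a 16-bit type code — TXT is type 16, CERT is type 37 — so switching RRTYPEs changes only two bytes on the wire.  A minimal wire-format sketch (the transaction ID here is a fixed placeholder; real resolvers randomize it):

```python
import struct

def build_query(name, qtype, qclass=1):
    """Build a bare-bones DNS query packet: header, one question, no extras."""
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero octet
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

TXT, CERT = 16, 37
```

The holy war, in other words, is entirely about what those two bytes should be and what the answer's RDATA should contain — the query machinery is identical either way.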

As you can see, even before we’ve gotten around to what the DNS answer looks like, specifying the DNS question is enough to start a holy war.  That’s OK — holy wars are oddly common in DNS.  Even simple things like returning different IP addresses depending on load or geographic location of the user are controversial!  But we’ve got to start somewhere.

For Phreebird Suite 1.02, I’ve presently chosen:  Direct naming (whenever possible, which won’t be always), with TXT encoding.

Let's start by discussing why Direct naming.

Direct naming maintains maximum compatibility with DNS redirection and wildcards.  Most of the time, DNS is explained as a simple chain, i.e.:

  1. = [root] -> com’s name server ->’s name server ->

Often however, a CNAME is involved.  A CNAME, or Canonical Name, lets DNS say "the answer to the name you resolved is actually the answer to this entirely different name".  CNAME redirection is quite common, especially on larger sites.  For example:

  1. = [root] -> com’s name server ->’s name server -> CNAME to
  2. = [root] -> com’s name server -> Edgesuite’s name server -> CNAME to
  3. = [root] -> com’s name server -> Akamai’s name server ->

And thus, through twelve or so lookups, the name finally resolves.

(Performance isn’t as bad as the above implies, mainly because so many of the links are easily cached.  Akamai basically invented CDNs, so they wouldn’t be doing this if it wasn’t speeding things up.)

Now, here’s where things get interesting.  Suppose we want to express, in DNS, that the public key for has a hash of 50ac1234.  With Direct naming, all we have to do is put this information in, and we’re done.  A lookup against will transition through the existing edgesuite and a16669 CNAMEs and resolve perfectly.

To put it another way — when CBS CNAMEs to Edgesuite or Akamai, it doesn’t need to think about what particular IP address Akamai will choose to route traffic to.  It also doesn’t need to think about who else is being routed to that particular domain within Akamai.  It just forwards the name, and Akamai figures out the IP.

Now, Akamai can figure out the public key as well.  It’s really quite elegant, way easier than what we have to do with X.509 today.
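To make the point concrete, here's a toy in-memory resolver — every name and value below is a hypothetical stand-in for the CDN chain described above, and 50ac1234 is the example hash from the text — showing that a TXT lookup rides exactly the same CNAMEs an A lookup does:

```python
# Hypothetical zone data: a site CNAMEs to a CDN name, and the CDN
# publishes both the IP address and the key hash at the terminal name.
RECORDS = {
    ("www.example.com", "CNAME"): "a1.g.cdn.example.net",
    ("a1.g.cdn.example.net", "A"): "192.0.2.10",
    ("a1.g.cdn.example.net", "TXT"): "v=key1 ha=sha1 h=50ac1234",
}

def resolve(name, rrtype, max_hops=8):
    """Follow CNAMEs until a record of the requested type is found."""
    for _ in range(max_hops):
        if (name, "CNAME") in RECORDS:
            name = RECORDS[(name, "CNAME")]
        else:
            return RECORDS[(name, rrtype)]
    raise RuntimeError("too many CNAME hops")
```

The site only had to publish one CNAME; the CDN controls both the IP and the key, and both answers fall out of the same redirection.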

By contrast, Indirect naming would require new CNAMEs to be negotiated for each prefixed name.  Can you imagine how many meetings this would take?  This is not operationally viable.

And this only covers the possible.  Suppose for a moment that a wildcard record exists.  With Direct naming, it's trivial to have this wildcard CNAME to a single Akamai address, with its particular public key.  But Indirect naming is difficult to impossible to support, since it requires a _keyhash.* wildcard — a style of wildcard not supported by any present name server, and extremely difficult to implement within the confines of most DNSSEC implementations.

So, as much as I like the named types we get from prefixes, they would cost us too much of DNS’s real world expressiveness.  I do still have a use for them, though — I’ll describe it later.

Next post:  Why TXT records, instead of CERT or something custom.

Categories: Security