The DNSSEC Diaries, Ch. 3: Contradictions
“The server at http://www.foo.com will have the following public key” is not, seemingly, a particularly complicated message to express.
It can become one, though. Should the public key be encapsulated by an X.509 certificate? Or should it be expressed directly? Perhaps we should instead insert a hash, to save space (a principle we’ve reluctantly abandoned with DNSSEC, but bear with me). Do we hash the key or the cert? Which algorithms do we support?
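To make the "hash the key or hash the cert?" fork concrete, here's a minimal sketch. The byte strings are stand-ins I've made up for a DER-encoded public key and a certificate wrapping it, not real encodings; the point is only that the two choices yield incompatible identifiers.

```python
import hashlib

# Hypothetical stand-ins: these bytes are NOT a real key or certificate,
# just placeholders for a DER-encoded public key and the X.509 cert
# that would wrap it.
public_key_der = b"\x30\x82\x01\x0a...stand-in key bytes"
certificate_der = b"\x30\x82\x03\x21" + public_key_der + b"...stand-in cert bytes"

key_digest = hashlib.sha256(public_key_der).hexdigest()
cert_digest = hashlib.sha256(certificate_der).hexdigest()

# Same underlying key, two different digests. A validator pinning one
# form cannot verify the other, so the choice has to be made -- and
# signaled -- up front, per supported hash algorithm.
print(key_digest == cert_digest)  # False
```

Notice that the decision multiplies: every supported hash algorithm doubles the number of forms a record might take, and every form is something an implementor must get exactly right.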
And that’s just semantics. What of syntax? Do we encode our desires with strings? Or magic numbers?
And just how do we manage upgrades over time?
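The syntax and upgrade questions can be sketched too. This is a hypothetical record format of my own invention, not any real DNSSEC RR: a leading version number so future revisions can change everything else, a magic number for the algorithm, then the key data. The algorithm registry and field layout are assumptions for illustration only.

```python
# Hypothetical algorithm registry (magic numbers -> names); not a real
# IANA registry, just an illustration of the strings-vs-numbers choice.
ALGORITHMS = {1: "RSA-SHA256", 2: "ECDSA-P256"}

def parse_record(record: str) -> dict:
    """Parse a made-up 'version algorithm keydata' record."""
    version, algorithm, key_data = record.split(" ", 2)
    if int(version) != 1:
        # Unknown future version: refuse loudly rather than misparse.
        # This one field is what buys us the ability to upgrade later.
        raise ValueError(f"unsupported version {version}")
    return {
        "algorithm": ALGORITHMS.get(int(algorithm), "unknown"),
        "key": key_data,
    }

print(parse_record("1 2 MFkwEwYHKoZI..."))
# {'algorithm': 'ECDSA-P256', 'key': 'MFkwEwYHKoZI...'}
```

Magic numbers keep records small and parsing trivial; strings are self-describing but bloat every record and invite case-sensitivity bugs. Neither answer is right; each is a disaster we've had before.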
So there are engineering challenges. And, as it turns out, when I say there are no right answers, that means there are no right answers. There are only “ways we did things in the '90s”, or “ways we did things in the '00s”, and a battle over which disasters we’d prefer not to repeat.
Finally, as if this discussion wasn’t meta enough — the lack of dead bodies in computer “engineering” is the thing that puts quotes around computer “engineering”. The pain of failed IT projects, even to the tune of billions of dollars, strangely drives far less change than the dead bodies, past and future, that haunt all other engineers. (Did you know you can’t even call yourself an engineer in Canada without a license? There, you’re an Engineer. Because you might kill people.)
So, how do we go about building things correctly? There are some things we know. We know that there is a fundamental tension between feature count and complexity. The more features a system has, the more uses (and users) it can support. But as features pile on, complexity — at its core, a measure of how many things an implementor must get right, in order for the system to run correctly — increases, with subtle but terrifying side effects.
Past a certain level of complexity, a system cannot be secured, because no individual person can wrap their head around the full set of possibilities for the entire system.
There is another tension: Getting things right the first time, versus adapting a protocol in response to user experience. Put simply, we know we won’t get things right the first time. As the quote goes, “plan to write it twice. You will, anyway.” But we also know, in adapting implementations to discoveries from the field, that we experience enormous complexity penalties from migration costs — penalties that, at the extreme, are high enough that we can’t actually use the new features we so expensively designed.
X.509v3 has some important new stuff — Name Constraints, for example. Will any CA sell you a certificate that allows you to use Name Constraints? No, not even one.
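For reference, here's roughly what a Name Constraints extension looks like in an OpenSSL v3 extensions config — this is a sketch of standard x509v3_config syntax, with a hypothetical domain, not a fragment from any CA's actual practice:

```
# Hypothetical intermediate-CA extension section: this CA may only
# issue certificates under example.com, and never under .mil.
[ constrained_ca ]
basicConstraints = critical, CA:TRUE
nameConstraints  = critical, permitted;DNS:.example.com, excluded;DNS:.mil
```

A constrained intermediate like this would let an organization run its own CA without being able to impersonate the rest of the Internet. The feature has existed on paper since the '90s; in practice, it went unused.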
So, there’s a bit of a mess. We know that no security protocol, no matter how well designed, will entirely survive contact with the enemy. But we know we’re going to pay dearly for having to change our plans.
I truly suspect that security is more vulnerable to Wicked Problems than most anything else in computer engineering. Ah well. Didn’t sign up for the easy stuff.
So, to try to cut this Gordian knot, I’m going to follow the classical approach: Release early, release often.
By definition, each protocol revision will be wrong. But we should get feedback, a little at a time. (One nice thing about BSD licensing, of course, is that at minimum, this code should provide a good foundation for what may end up being a radically different proposal.) Over time, we’ll see what we all converge on — and hopefully standardize something like it.
This is not a strategy I’ve invented. It’s effectively where all the actually successful Internet standards have come from.
What might be a little different is that I’m effectively trying to liveblog the process. Let’s see how that goes.
Next up — rubber meets the road.