
Towards The Next DNS Fix

Ultimately, I can’t complain at all about armchair engineering.  The whole point of Source Port Randomization as an interim fix was to get things to the point where we could all have the big messy discussion about what to do now, without being illuminated by the actively burning state of the DNS infrastructure.

Now.  When it comes to fixing DNS, we have to operate under the same constraint as when we suggest fixes to web browsers.  Just as you’re not allowed to break the web, you’re not allowed to break DNS.  There are indeed many things we could do to make the web a safer place, “if only a bunch of people would re-code their web sites”.  That is, unfortunately, a naive approach that doesn’t actually lead to things getting any safer.  If nobody will deploy the fix, it’s just as if the fix didn’t happen.

We needed this DNS fix to happen.

As I’ve said a couple of times, Dan Bernstein was right.  Source Port Randomization (SPR) is not perfect — I’m pretty embarrassed that we didn’t recognize how common interactions with firewalls would be — but it’s a remarkably flexible and thorough improvement to the status quo.  When I said in my talk that there’s fifteen ways around the TTL, I wasn’t kidding.  From magic query types that are uncached by a recursive server, to nonexistent query types that are ignored by an authoritative server, there may not be a TTL to override.  Or perhaps the attacker actually provides records for 1.google.com, 2.google.com, 3.google.com, and so on.  In other words, the attacker might not even try to overwrite the NS for a domain — he may just want to get a domain in.  How would this be useful?  Consider the web security model, and Mike Perry’s research on cookies.  1.google.com will collect the cookie for Google just fine.

Or perhaps, as in the case of Google Analytics, Facebook, and most large CDN-hosted sites, the actual TTL to override needs to be small, for reliability and scaling purposes.

In all of these situations, Source Port Randomization — a solution forged in 1999, long before we recognized all these problematic variant attacks — poses a significant barrier to attack.  It’s not a panacea, but it was never said to be one.  The hope, and it’s not an unreasonable one, is that it’s a lot easier for secondary defenses to detect and correct for a flood of billions of packets than for a couple of thousand.  SPR’s purpose was to provide a safer environment for an active discussion that would hopefully yield better fixes.  And that’s what it’s doing!

So, let’s finally start talking about the better fixes that are emerging.  Specifically, the problem is — how do we stop the blind attacker who’s willing to send us four billion packets in order to pollute a name?  Four major strategies are, at least from what I’ve seen, making real strides towards a better fix.

1) DNSSEC. Say what you will about the perceived technical and political impossibility of this actually happening, but wow, there’s been progress these last few weeks:  Besides lots of excited chatter that the roots are finally going to get signed, .GOV seems to be throwing some pretty serious resources at making DNSSEC happen. I’m neutral thus far on all the post-SPR solutions, and I’m really, aggressively neutral on DNSSEC.  There’s no harder task in all of IT than building a PKI, and the inescapable reality is that DNSSEC is a new identity infrastructure on the order of X.509.  It does solve the problems though, at least for the authoritative servers that opt into it, and the side benefits of having the system fixed in this particular way are rather compelling.
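
Not to take a side on deployment, but to ground what “solving the problems” means mechanically, here’s a sketch of one link of the chain in Python with dnspython.  The zone and server address are placeholders of mine, and a real validator walks DS/DNSKEY pairs all the way down from the (hopefully soon-signed) root:

```python
import dns.dnssec, dns.message, dns.name, dns.query, dns.rdataclass, dns.rdatatype

zone = dns.name.from_text("example.com.")   # placeholder zone, not from the post
ns_addr = "192.0.2.1"                       # placeholder authoritative server

# Ask the zone's server for its DNSKEY rrset, with DNSSEC records included
req = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
resp = dns.query.udp(req, ns_addr, timeout=5)

# Pull the DNSKEY rrset and the RRSIG that covers it out of the answer section
keys = resp.find_rrset(resp.answer, zone, dns.rdataclass.IN, dns.rdatatype.DNSKEY)
sigs = resp.find_rrset(resp.answer, zone, dns.rdataclass.IN, dns.rdatatype.RRSIG,
                       dns.rdatatype.DNSKEY)

# Raises dns.dnssec.ValidationFailure if the signature does not verify.
# Trusting "keys" itself is the PKI problem: it takes a DS record in the
# parent zone, or a manually configured trust anchor.
dns.dnssec.validate(keys, sigs, {zone: keys})
```

The crypto call is the easy part; establishing who vouches for those keys, all the way up, is the PKI problem.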

2) Layered Point Fixes. This is the approach Nominum is taking:  Basically, they’re bundling every point fix they can, and actively positioning themselves with their customers so that, as new bypasses are discovered, they can react quickly.  For example, when Nominum receives a packet with an incorrect TXID, they switch to TCP for that particular query.  This constrains an attacker in two ways:  First, the attacker must force as many lookups as there are fake responses.  In other words, instead of being able to send 99.8% fake responses for each forced request, the attacker must send 50% requests, 50% responses.  Second, the attacker is constrained to the rate at which Nominum will actually send queries to a particular domain.
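
To make the mechanism concrete, here’s a minimal sketch of that TXID-mismatch-to-TCP fallback in Python with dnspython.  This is my illustration of the idea, not Nominum’s code:

```python
import dns.message, dns.query

def resolve_with_tcp_fallback(name, server):
    q = dns.message.make_query(name, "A")
    try:
        # dns.query.udp raises BadResponse when the reply's transaction
        # ID doesn't match the query: the spoof signal described above
        return dns.query.udp(q, server, timeout=2)
    except dns.query.BadResponse:
        # Suspected spoof: retry this one query over TCP.  A blind
        # attacker can't complete the handshake, so the answer is clean.
        return dns.query.tcp(q, server, timeout=2)
```

The per-query scope matters: only the query under suspicion pays the TCP tax, which sidesteps the mass-migration-to-TCP nervousness that comes up below.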

That alone is not enough.  A slightly less efficient attack does not a fix make.  And so they port randomize.  But that, too, is not enough — at least not for the long term.  And so they’re systematically building filters that attempt to detect as many weird variants as possible, and to address them on an attack-by-attack basis.

It’s certainly my preference to have a comprehensive fix.  But, pragmatically, I can’t deny that Nominum’s approach is yielding an increasingly hard target.

3) Attack Mode.  I’ll admit, this one appeals to me — that’s a change; I used to be a pretty staunch opponent, as I expect many people still are.  But bear with me for a second.  Probably the most consistent signal of a blind cache poisoning attack is a spike in the number of responses received per second with an incorrect TXID (and, if you’re monitoring the network, an incorrect destination port).  Even with a fully non-responsive upstream name server, this signal still survives, as the attacker needs to guess transaction IDs and ports, and will for a very long time guess wrong.  This appears to hold true for all variants, known and even suspected.  Now, the concept of the SPR interim defense is that the brute force will either go too slowly to be relevant for an attacker, or fast enough that the raw traffic levels will be noticed by even trivial network monitoring.

We can do better monitoring of DNS traffic with an IDS rather than just a traffic monitor, but you know who’s in a really good position to notice this attack?  The name server itself.  There’s no reason, inside the name server, that we can’t adapt to the attack — and change our posture to compensate.

Imagine for a moment that we monitored the absolute number of packets received with at least the wrong TXID.  (Depending on how we manage sockets, we might not see all the packets with the wrong source port.  We may not need to, or if we do, we can do so fairly trivially with libpcap filtering for source port 53.)  Assuming we were indeed receiving too many packets with the wrong transaction ID, we could deem ourselves…under attack.  What now?
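
As a sketch of that monitoring, in Python with scapy (it needs root to sniff), where the “outstanding” table is my stand-in for the resolver’s real list of in-flight queries:

```python
from scapy.all import sniff, UDP, DNS

outstanding = {}   # (source port, TXID) -> in-flight query; fed by the
                   # resolver's send path (assumed, not implemented here)
wrong_txid = 0     # the raw signal: responses that match nothing we asked

def inspect(pkt):
    global wrong_txid
    if DNS in pkt and pkt[DNS].qr == 1:               # it's a DNS response
        if (pkt[UDP].dport, pkt[DNS].id) not in outstanding:
            wrong_txid += 1                           # spoof candidate

# the libpcap filter for source port 53, as described above
sniff(filter="udp and src port 53", prn=inspect, store=0)
```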

I’ll tell you what we probably shouldn’t do:  Rate limit, either for all IP addresses, or for those that are specifically being spoofed.  (Remember, DNS servers enforce source address on incoming packets so they can correctly calculate bailiwicks — whether a particular server is allowed to speak for a given name in the first place.)  The problem with rate limiting is that, while it works very well to slow an attacker down, it also provides an attacker with a very consistent way to implement targeted denial of service attacks against DNS infrastructure.  Just flood bad replies, and the real reply will consistently get dropped.

A lot of security people are willing to tolerate DoS in preference to data corruption.  On one level, yes, it’s true:  I’d rather have no service than corrupted service.  On the other, no service is in and of itself bad for business.  A trivial DoS that takes out Google for an ISP is more than just a problem — it’s a deployment blocker.

Again.  If nobody deploys your fix, it’s like you didn’t even write it.

That being said, DNS is a cruel mistress.  Due to the chained nature of DNS, reliable DoS attacks actually enable data corruption, by allowing an attacker to break the chain.  This has already been shown to cause headaches when an IPS blocks traffic to an authoritative server (mentioned earlier, and described in depth in my 2005 Black Ops talk).  But there are also implications for DNS clients, who will themselves now end up with nothing in their cache, because a rate-limited server couldn’t collect the data in the first place.

So, we shouldn’t drop traffic.  What can we do?  Perhaps, switch to TCP during the attack?  We know Nominum does this, at least on a per-query basis, when it detects an attack against that particular query.  So there’s some precedent.  But the resistance and nervousness around anything that allows you to force large numbers of servers to switch to TCP, for any reason, is significant.  It’s also impossible to ignore that a decent portion of recursive name servers cannot get 53/tcp out of their network, and that there are even a good number of authoritative name servers that refuse to serve their DNS records over TCP.

There’s much less fear around debouncing — at least, well-scoped debouncing.  This is just the technical way of saying:  if you’re not sure about something, look it up twice.  You do need to make sure you get the same answer back both times — or else an attacker just forces you to debounce, and hopes he gets his contrary answer in both times.  And there remain interesting questions about what to do when the answers legitimately differ, because they come from a CDN that shuffles responses on a per-response basis for load balancing.  What now?  I’d like to avoid TCP, and triple and quadruple querying is only a little more likely to yield multiple replies that agree.  One option is to make use of a trick thought up by this neat new nameserver Paul Vixie showed me — I can’t find it right now, but I’ll put a link up once I do.  The idea he had was to wait around a few hundred milliseconds, seeing if a real server would show up with another reply.  If so, there’s an attack.  Now, when he did this, he was doing it all the time, so it was killing performance on DNS for all users of the protocol (again, deployment blocker).  But we’d only be doing this in attack mode.
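
Here’s a minimal sketch of the debounce itself, in Python with dnspython (the function is mine).  Conveniently, rrset comparison ignores record ordering, so a CDN that merely shuffles the same records still agrees; it’s the CDN that rotates different records per response that stays the open problem:

```python
import dns.message, dns.query

def debounced_lookup(name, server):
    def ask():
        q = dns.message.make_query(name, "A")
        return dns.query.udp(q, server, timeout=2)

    first, second = ask(), ask()
    if first.answer == second.answer:    # order-insensitive rrset compare
        return first
    # Disagreement: either an attacker got one answer in, or the
    # authority legitimately varies its responses.  Treat as suspect.
    raise ValueError("debounce disagreement for %s" % name)
```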

Yes, I think Akamai would accept slightly slower DNS resolution during an active attack against their particular names, on the particular name server that’s being attacked.

There is one funny variant we’d need to handle, if we were to depend on the real name server exposing the fake reply.  What if the real name server is non-responsive, for whatever reason?  I think the answer here is to handle situations where no answer comes back, by then and only then refusing to accept any packets from that IP address for ten seconds.  In other words, if a query fails, and nobody replies successfully, blackhole that server for ten seconds.  Legitimate servers have an easy way around this DoS — actually respond to that first query — so I think it’s the one DoS I can accept.

One matter that hadn’t really come up was scope.  There are three scopes we can defend against:  Per-query, per-NS, and global.  In other words, we can apply attack mode logic, whatever it may be, to one specific query, all queries to a name server that we see under attack, or all queries in the world.  My suspicion is that unless we actively detect attacks against just an absurd number of name servers (in other words, if the absolute number of incorrect TXIDs is not accounted for by any particular NS, thus meaning an attacker who doesn’t care which names he poisons as long as he gets someone), then per-NS scope is good.

I don’t like per-query, due to variants that it’s just not going to cover.  There’s some controversy here too, though:  “query-fate-sharing” scares people a little.

So, in summary, all this ends up collapsing to some variant of:

– Monitor the absolute rate of packets received with the wrong TXID, and possibly the wrong port.  (BIND already does this — check the stats code.)
– If the rate of packets exceeds some threshold — possibly dynamically set by the number of outstanding queries per second — start tracking which IPs are “sending” packets with the wrong TXID/Port.
– If there are too many NS’s to track, go into global attack mode.  Otherwise, go into per-NS attack mode for those NS’s, for ten seconds.  Hold this attack mode open as long as the spawning incorrect TXID/Port behavior continues, plus twenty seconds.  (This prevents twiddling attack mode on and off really fast, which defeats the purpose.)
– During attack mode, debounce within the scope of that attack mode.  If two answers are received that disagree, issue a single query, and make sure one and only one reply comes back.  If no replies come back, suppress queries to that address for some small number of seconds.

The actual thresholds and constants would need to be figured out, but that’s roughly something I’m liking right now.  Sure, it looks complicated, but amusingly it’s still the simplest of the solutions listed thus far!
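
Concretely, something like this sketch, in Python, with every constant a placeholder, per the caveat above:

```python
import time
from collections import defaultdict, deque

THRESHOLD   = 100    # wrong-TXID packets/sec before we call it an attack
MAX_TRACKED = 1000   # more suspect NS's than this -> global attack mode
HOLD_OPEN   = 20.0   # seconds to hold attack mode after bad packets stop
BLACKHOLE   = 10.0   # seconds to suppress a server that answers nothing

class AttackMode:
    def __init__(self):
        self.bad = defaultdict(deque)   # ns_ip -> timestamps of bad packets
        self.per_ns_until = {}          # ns_ip -> attack-mode expiry
        self.global_until = 0.0
        self.blackholed = {}            # ns_ip -> query-suppression expiry

    def record_wrong_txid(self, ns_ip):
        now, window = time.time(), self.bad[ns_ip]
        window.append(now)
        while window and now - window[0] > 1.0:   # one-second sliding window
            window.popleft()
        if len(window) > THRESHOLD:
            if len(self.bad) > MAX_TRACKED:       # attacker spraying NS's
                self.global_until = now + HOLD_OPEN
            else:
                self.per_ns_until[ns_ip] = now + HOLD_OPEN

    def should_debounce(self, ns_ip):
        now = time.time()
        return now < self.global_until or now < self.per_ns_until.get(ns_ip, 0)

    def note_no_reply(self, ns_ip):
        # Query failed and nobody replied: blackhole the server briefly.
        self.blackholed[ns_ip] = time.time() + BLACKHOLE
```

Queries to a blackholed server just fail fast until the timer expires; a legitimate server escapes by simply answering its first query, per the above.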

4) Case Sensitive DNS Responses (or ‘0x20’). This is David Dagon’s concept, and it’s interesting.  The trick is that DNS ignores case (www.foo.com is wWw.FOO.coM) but preserves case (if you ask for wWw.fOO.coM, you’ll get back wWw.fOO.coM).  So if we want more bits of randomness — if we want to get past 4 billion packets into more-packets-than-have-ever-been-sent-in-history — maybe we can use this trait.  As mentioned earlier, the problem with 0x20 is that an attacker can select names that don’t have enough case-sensitive characters to add entropy.  Specifically, you can have numbers in a DNS name!  And so, when an attacker forces lookups for:

1.a11111111
1.a11111112
1.a11111113
1.a11111114
1.a11111115

0x20 can only provide one additional bit of entropy — and it’s not clear that the one ‘a’ is even required (it’s there to deal with the complaint ‘well, we’ll just detect completely numeric domains’).  And since all the above names have to be queried against the root servers, whoever corrupts those names gets to include whatever extra records he wants, because they’re all in bailiwick.  This is the exact problem that DNSSEC has — securing www.foo.com doesn’t just require securing foo.com, you also have to secure com and the roots themselves.  (XQID thought they got around this.  So close, but no.  I’ll post why later — this post is about fixes.)

Bottom line, 0x20 can’t secure the roots when there aren’t enough characters to add sufficient entropy.
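
A toy illustration of both the encoding and the entropy math, in Python (my sketch, not Dagon’s implementation):

```python
import random

def encode_0x20(name):
    # Randomize the case of each letter on the way out; the authority
    # echoes case back, so each letter is one bit the attacker must guess
    return "".join(c.upper() if c.isalpha() and random.random() < 0.5
                   else c.lower() for c in name)

def entropy_bits(name):
    return sum(c.isalpha() for c in name)

print(encode_0x20("www.example.com"))   # e.g. wWw.ExAmPLe.cOm
print(entropy_bits("www.example.com"))  # 13 bits, on top of TXID and port
print(entropy_bits("1.a11111111"))      # 1 bit: the failure case above
```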

That being said, almost all real-world names do have enough characters in them to add lots of entropy.  In fact, of all the non-DNSSEC solutions, 0x20 is the one that can not only work for the common case, but survive without source port randomization.  (The attack mode above just doesn’t work well enough when the attacker has a 1/65K chance of winning.)  It does need some coverage in those synthetic cases where there isn’t enough entropy, or even in the real-world cases of very short domains (ibm.com, for example).

Well, we have an entire debouncing framework described for Attack Mode.  Could we debounce when we don’t get enough entropy from the name?  Or perhaps we do so only when we detect 0x20 under attack, or when it’s deployed on a network that, from either the authoritative or recursive side, canonicalizes away the case variation?

I’m not sure what the exact fix looks like.  But what’s clear to me now is what I was pretty sure of back in March:  The real fix, the comprehensive fix, is not going to be trivial.  It may be DNSSEC, it may not be, but it’s not going to be a one-character call-it-a-day point fix.  Say what you will about Source Port Randomization — conceptually, it’s several orders of magnitude cleaner than everything that’s yielding fruit now.  Dan Bernstein’s solution is good.  Doing better — by crypto, by filtering, by defending ourselves, or by another entropy source — will be hard.

Not impossible, but not the sort of thing 16 engineers in a room could pragmatically hope to accomplish.

  1. Steve Rhoton
    August 29, 2008 at 3:27 pm

    > It may be DNSSEC, it may not be, but it’s not going to be a one-character call-it-a-day point fix.

    Dan,
    Why the fixation on the ‘one character’ fix? It almost sounds like a straw-man argument because you can’t argue against the merits of the actual patch. Could we discuss the patch on the merits without the snark?

  2. August 29, 2008 at 3:52 pm

    Steve,

    I spent the entire previous blog post talking about the issues with that particular patch.

    The point I’m making is, having seen a fair number of things suggested, what seems like it’s going to work is not going to be as simple and straightforward as SPR. The four solutions that are looking interesting — DNSSEC, Layered Point Fixes, Attack Mode, and 0x20 — all require chaining together multiple fixes to achieve comprehensive coverage. (OK, technically DNSSEC is one “fix”, but the thing is enormous.)

    What I was trying to do was make it clear that we’re not going to see the sort of elegance we had in DJB’s fix, for what comes next. I do apologize if that came off a little snarky. Gabriel’s approach, suitably tuned, may actually be useful. But his attitude that we can just blame the people running the authoritative servers for the way they configured their NS records — nope, that sort of attitude prevents your patch from being deployed. And then why did you work so hard to build it?

  3. Steve Rhoton
    August 29, 2008 at 6:30 pm

    Dan,

    I finally finished reading your latest blog, and it was highly informative. Excellent information, all around.

    We all know that DNSSEC can fix all of this, but I don’t think I need to point out that the vast majority of us will have DNSSEC implemented around the same time that we have IPv6 completely rolled out and/or we all have cheap, highly efficient photovoltaic cells on our roofs providing most of our electricity and hot water.

    That said, I think for a lot of us DNS admins out here, there’s been a lot of waiting and very little info out there about what’s in the works to fix the flaw. As far as most of us were concerned, we were stuck in a waiting game – chewing on the implementation realities of DNSSEC and waiting for the community to put out something that helps us sleep at night better than the SPR patch (and maybe some uRPF) does.

    In the absence of true community fixes, I applaud Gabriel’s simple (and easily reversible) approach that:

    – Was done in the right way to ask the community for input

    – May help while we wait for some of the fixes you mention in your latest blog to come to be

    I agree that this patch may have received more attention than it should have, but you seemed to attack the patch less on its technical merits and more on (1) your perceptions of the author’s attitude and/or (2) the author even attempting to find a simple fix to a complicated flaw.

    In fact, I’m glad we can finally get to a real technical discussion about all of this, because, to be completely honest, I felt like it was your attitude against Gabriel’s patch that detracted from the points I thought you were trying to make in your original blog. I think everyone following this issue understands that you are a resident expert on this technology and vulnerability, and that you have the greater good (i.e. of DNS as a holistic, distributed system) in mind.

    That all said, can we have your input on both the positive technical aspects and negative aspects of the one character patch? Also, should we take this technical discussion back to the BIND list?

  4. D2
    August 29, 2008 at 8:39 pm

    I’m afraid it’s always a computational-cost or time trade-off. Essentially a Penny Black-style challenge or LaBrea tarpitting is required.

    What *DO* we own re: logical assets? RRs!!!!!

    Could we ask the remote server somehow to look up a temp record in our own domain, generate one (as we own our own servers, hopefully), and then wait for the lookup from the remote NETBLOCK? Kinda like SMTP authentication for websites and mailing lists with an OTP. One would have to be in-path to know the variable or understand some characteristic of the remote network/domain.

    Let’s use the attack in reverse to secure the attack? Use the attack to secure ourselves as we can generate an arbitrary RR in our own domain e.g. our resolver talks to our domain NS and tells it to inject a local variable/record… think of it as a magic number, assume the attacker is not in-path.. then force the remote domain to ask our domain about it, before they give us the original new query for a host… haven’t totally thought this through fully 🙂 might be worse re: BW 😦

    Or maybe debounce, but record “IP TTL” tolerance. Sure, IP TTLs can change with backbone routing updates, but that’s less likely in the course of lookups for random/new hosts not actually in the local cache already.

    Firewalls (or any policy enforcement point) invalidate host-based rate limiting somewhat.

  5. D2
    August 29, 2008 at 8:48 pm

    The above comment by myself would be aptly summed up as a tunnelled type of CHAP for DNS.. and based upon DNS server function separation I know it’s improbable, but hey, it’s an idea!

  6. Paul Wouters
    August 29, 2008 at 10:42 pm

    You wrote:

    This is the exact problem that DNSSEC has — securing www.foo.com doesn’t just require securing foo.com, you also have to secure com and the roots themselves. (XQID thought they got around this. So close, but no. I’ll post why later — this post is about fixes.)

    That is of course not true. You can have as many secure entry points as you wish, and if the number of these becomes unmanageable, you can use a DLV.

    Specifically, you can secure YOUR OWN sub.com domain fine, without .com having been signed. And you can even tell other people that you are signed by adding a DLV record to your own zone and submitting it to a DLV registry.

  7. August 29, 2008 at 11:55 pm

    Steve–

    More details later, but saying TTL overwrite is unnecessary is foolhardy. If you look at CNAME overwrite, it leads to an immediate requery, which is actually a safe thing to do. We might be able to do the same thing in the NS realm.

    It is not at all impossible that Gabriel’s suggestion is a useful, if small, component of the larger fix. What is absolutely impossible is that Gabriel’s suggestion *is* the fix — that ultimately his was the fix for the root bug. It’s just not that simple. What we really need are comprehensive fixes that handle all scenarios, not just the few we wish we had to worry about.

    If you don’t fix Google Analytics, you’ve done nothing. Bottom line. Low TTL records must be safe, and high TTL records must be replaceable in bailiwick. That’s the technical judgement.

  8. August 30, 2008 at 11:05 am

    Paul,

    So, which DLV roots will everyone trust? Seriously. Which DLV root can I add doxpara.com to, that everyone will now magically do DNSSEC for? How will that DLV root authenticate people who wish to claim a particular name?

    There’s nothing harder in IT than trying to create a PKI. DLV is a way of building a PKI when some rather important members of that PKI aren’t playing along — but let’s not pretend the problems go away just because some new people are looking to solve them.

    Let me be clear. I will be nothing but impressed to watch DNSSEC finally happen. There are some major, possibly even critical problems that DNSSEC offers the opportunity to solve. But “you can tell other people” is not even approaching the realm of a scalable solution. We could solve all these cache poisoning problems by going back to hosts files too. I suspect, however, there might be some real-world problems with that.

  9. antibozo
    August 30, 2008 at 8:05 pm

    So here’s a question. With source port randomization, at least as implemented in BIND, what prevents the following attack?

    Attacker sets up a nameserver for example.com that receives queries but sends no responses, with an appropriate delegation. Attacker then asks the victim nameserver ~2^16 distinct queries under example.com in a short time frame. This effectively consumes the entire source port range. The example.com nameserver observes the source port for each inbound recursive query from the victim nameserver; the one remaining source port that was not observed is the only port left to the victim nameserver for its next query. Attacker then asks victim nameserver for the domain to be poisoned, and forges the poison response targeting the remaining port.

    I’m presenting an idealized attack here for the purpose of illustration. The principle is to starve the entropy included in source port randomization by consuming the majority of available ports with outstanding queries. The attack would only be effective against a nameserver that allows close to 2^16 outstanding queries; a nameserver that could have only 1024 outstanding requests, for example, would be DoSed before it would lose significant entropy. The outstanding requests could be distributed among many domains and distinct nameservers if necessary, as long as the nameservers coordinate. The attack could be partly mitigated on nameservers that allow ~2^16 outstanding requests if the victim nameserver can reuse randomly chosen, already open sockets for multiple successive requests once ~2^16 ports are bound. The efficacy of this attack depends on the precise algorithm by which the victim uses new ports, and what limits are enforced.

    I have to wonder how this has been addressed in, say, BIND, and this seems like as good a place to ask as any. Where has this scenario been discussed? I’ve done some Googling but haven’t come up with anything.

  10. antibozo
    August 30, 2008 at 8:13 pm

    Dan,

    Your response to Paul is limited to the DLV scenario. Non-root trust anchors can always be maintained explicitly in resolvers, and this is useful in the near term for the limited number of TLDs that are doing DNSSEC. It’s just a pain in the butt, since resolver operators have to keep the trust anchors up to date by hand (although RFC4986 presents a framework of requirements for an automated system to help solve this problem).

    I’m not claiming that manual trust anchors solve as large a problem as DLV does, but it’s certainly not to be dismissed as a medium-term workaround for DNSSEC operations all under a given TLD, e.g. .gov.

  11. Jesse Cantu
    August 30, 2008 at 10:56 pm

    Perhaps that is because PKI is broke?

    Additionally, perhaps the verification model should be managed by, or have an oversight agency in, each of the world’s governments, as opposed to the private sector?

    What is DNSSEC, though? Just one more band-aid for a system that was never designed or meant to function in the form we force it to on a daily basis. An Internet re-write is the only plausible fix for this problem, and assuredly someone will miss one more thing there that will be capable of bringing it to its knees.

  12. Robert Carnegie
    September 2, 2008 at 8:13 am

    0x20 will work better universally if TLDs are made longer. For instance, not .com but .commercial; not .uk but .united-kingdom. But… that’s only good when the request does include a top-level domain. And of course it breaks the Internet, but I think the world would be content to type out “ibm.commercial” as a site address and not be ripped off. (I mean, unexpectedly.)

    Incidentally, what about international character set names? For instance Chinese or Cyrillic? I haven’t kept up on this field but I gather it’s quite piecemeal, so presumably vulnerable all over. (However, it mostly applies to the people whom Westerners are scared -of-.)

  13. ac_wingless
    September 3, 2008 at 9:24 am

    Greetings
    I feel a little embarrassed speaking of things I just barely started to understand today, thanks to the explanation for dummies (http://www.unixwiz.net/techtips/iguide-kaminsky-dns-vuln.html linked in your previous post), but I thought I’d throw this idea out.
    In the list of mitigation solutions or fixes that you make, you mention performance issues for the servers, and the deployment problems that would likely be the logical consequence.
    I can’t help but think that the DNS lookup algorithm as explained in the link above is a pretty straightforward piece of software (there may be more to it, but there have to be engineers capable of writing this at MS, or for Linux, MacOS…), which could be strengthened via an attack mode. The performance penalty could be paid by the end-users, i.e. the clients. In normal situations, the local cache would behave just the same as the nameserver caches, and probably reduce bandwidth, since little outbound DNS-related traffic would even exit the PC (at least you save the round-trip to your local nameserver). In attack mode, the world’s PCs and their massively underused power would spread the load and be able to implement very robust (even if costly) defenses along the ideas you mention.
    What prevents for example MS from deploying via Vista/XP updates a client DNS lookup, entirely cached on the PC, implementing the whole DNS lookup algorithm? Do you have to be particularly trusted to “talk” to the root servers?
    Cell phones’ power is also increasing spectacularly, so eventually that scheme would be scalable to a sizeable chunk of web clients.
    What am I getting wrong?

  14. September 3, 2008 at 11:57 am

    The problem with eliminating the DNS servers is that now DNS load explodes — instead of having 30K systems behind a single NS, you have 30K NS’s. The DNS does not have spare capacity to handle that.

  15. September 3, 2008 at 11:59 am

    Robert–

    AFAIK, UTF-8 isn’t supported in DNS; we have Punycode for that. Punycode will work with 0x20 just fine.
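
    A quick illustration (Python; my example, not part of any spec): the IDNA codec renders a Unicode label as plain ASCII, which 0x20 can then case-randomize like any other name:

    ```python
    # Punycode output is ASCII and still mostly letters
    print("münchen".encode("idna"))   # b'xn--mnchen-3ya'
    ```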

  16. September 5, 2008 at 7:42 am

    I think a lot of things need to be done, especially on the consumer side. People need to turn off unneeded services to prevent attacks entering through those unneeded services. The problem with Windows XP Professional is that it is just too easy to break into, and all the external security in the world will not safeguard you if one does not have internal safety. I applaud Chris Quirke, Most Valuable Professional from southern Africa, for realizing this much better than people in the States. People in the States rush to adopt new technology before it even cools, and we Americans need to relax and chill out and realize that sometimes the older technology is just better and more secure in this day and age.

    I am not limiting myself to computers but talking about all kinds of information, and I bet many of you do not know that VoIP is not really all that secure or safe — but hey, why not use it, because our company and/or government can save money. Heck, people discuss confidential information over cell phones that, with the proper hardware and software at your disposal, can be retrieved out of thin air. You think you are all so safe and secure with cordless phones; well, guess what, you are not, because people can block that frequency and interrupt your phone call. A corded land line phone with the proper filters installed is fairly safe and secure.

    You think email is safe, especially when both ends are not encrypted with at least 3DES (168 bit) or better encryption? The industry sadly enough is stuck at using only 128 bit RC4 encryption and seems to think that it is good enough because Microsoft promotes it as being good enough. Well, with Mozilla Firefox, users can get 3DES/168 bit cipher strength, and even 256 bit AES encryption strength if the website supports it, like, let us say, reporting an incident to us-cert.gov through the Department of Homeland Security’s website. In addition, people need to block 3rd party cookies and customize their internet settings. Do you really want less privileged zones to navigate to a more privileged zone? Well, change it, because this is a default setting in Internet Explorer. Mozilla Firefox has changed the scenario, and smart users will use Mozilla Firefox as well as other safeguards like SpywareBlaster in their arsenal in order to protect themselves.

    If you are really good and knowledgeable, then, as Steve Riley from MSFT mentions, you can set up honey pots to trap people who are trying to hack your computer, and try to lure people in as I have done recently — but these hackers were too stupid and so I shut them out. I am looking to trap individuals from countries like China, Russia, and even here in the good old U.S.A. that are trying to hack individuals, and I want people that are smart, quick, and know their stuff, but ever since the September 2007 incident now all these people just ignore me. LOL!! DNS security will happen, but it will just take time, money and patience.

  17. Lennie
    September 15, 2008 at 3:11 pm

    Maybe I missed something, but if you get 2 answers for one question from one nameserver and you go into attack mode, I presume you are going to ask a different nameserver for that domain?

    __

    On a completely other note, going back to hosts-files might not be so far off if we don’t deploy IPv6, because if we’re gonna stay with IPv4, admins will keep the IPs they have for as long as they can. 😉
