
On The Flip Side

What was once possible via 32,769 packets is still possible via between 134,217,728 and 4,294,967,296 packets.  Yep.  We’ve been saying that for a while now.  So has PowerDNS.  So has DJBDNS.  There’s nothing specific to BIND here, though I think most people understand that.
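
Out of curiosity, here’s the arithmetic behind those figures.  It’s just a sketch, and the usable-port counts are my assumption about where the low end of that range comes from:

```python
# Back-of-the-envelope entropy math behind the figures above.
# With only a 16-bit transaction ID, an off-path attacker has to cover
# 2**16 = 65,536 possibilities, on average about half of them before winning.
txid_space = 2**16                 # 16-bit transaction ID

# Source port randomization multiplies that by the number of usable ports.
# Real resolvers can't draw from all 65,536 ports (reserved ranges, NAT
# devices, etc.), so assume the multiplier lands somewhere between roughly
# 2**11 and 2**16 usable ports.
min_ports = 2**11                  # pessimistic: ~2,048 usable ports
max_ports = 2**16                  # optimistic: the full port range

print(txid_space * min_ports)      # 134,217,728   (2**27)
print(txid_space * max_ports)      # 4,294,967,296 (2**32)
```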

What’s going on here is a simple question:  Which would you rather build secondary layers of defense against?  Thousands of packets?  Or billions of packets?

Look.  We were looking at an attack before the patch that took ten seconds and was relatively invisible.  Four billion packets are many things, but subtle is not one of them.

So there’s a reason you’re not hearing anyone saying, “don’t patch.”  And there’s a reason we’ve been telling everyone this is just a stopgap — that we’re still in trouble with DNS, just less.  But in business, we choose our risks every single day.  That’s why it’s called risk management, not risk elimination. 

And for the most part, people seem to get it.  Even the story I was somewhat worried about when I’d heard about it — John Markoff’s piece in the New York Times — is a remarkably fair treatment of the issue.  Back in March, we needed to come up with a solution to this problem that could viably protect as many people as possible in a short period of time.  DNSSEC has been in progress for nine years.  Asking people to deploy it over the course of a month would not have been a pragmatic approach.

DNSSec may be the long-term fix.  It certainly was not the short-term fix.

But can we do better than source port randomization?  Possibly.  Now come the wild arguments about what we should really do to fix this issue.  That’s fine by me.  That was the idea.  But nobody should ever think that they have the One True Fix.  I’ve been out here arguing against the 65536-to-1 lottery that fixed source port DNS is.  That’s not to say I haven’t been analyzing all the other designs too — but I have to prioritize the things that are in the field, putting customers at risk.  I think I’ve argued pretty persuasively that there’s no way 65536-to-1 can ever again offer a sufficient level of security.  I don’t know if anyone disagrees with that.

I’m looking forward to better options than source port randomization, even if it means we’ll accept the occasional Gig-E local LAN desktop flood attack hitting our 131M-to-1 to 2B-to-1 mitigations. 

It’s the call DJB made all those years ago.  It was the right call to make.

Now, let’s talk about how entertaining a problem this is going to be to solve, to any degree past what DJB accomplished back in 1999.  Note, I’m not saying any final solution won’t use elements of everything that’s about to follow.  I’m just saying there are awesomely nasty attacks against everything, and people shouldn’t presume I or others won’t poke sucking chest wounds into seemingly elegant solutions.

First, the universal constraint on every solution is that it must cover the root servers and the TLD’s, because they’re almost always a better target for poisoning, since their position higher in the DNS hierarchy allows them to pollute any name below them.  In other words, you can totally opt foo.com into whatever security system you like, but unless A.GTLD-SERVERS.NET (the server for com) is itself secure — and unless the root servers that tell you where A.GTLD-SERVERS.NET is are also included in the solution — there’s no effective security whatsoever.  DNSSec, minus the DLV hack, suffers this specific issue, and so does everything else.  You either need to be backward compatible all the way up the hierarchy, a trait that port randomization and some other solutions have, or you need to push code to them.

It’s not an impossible proposition to get the root and TLD servers to modify their infrastructure.  But that’s not the sort of thing you can make happen via a secret meeting at Microsoft 🙂  It’s a definite negative if they have to change anything.

One solution I’ve sort of liked, believe it or not, is the 0x20 proposal.  Basically, this idea from (I think) David Dagon and Paul Vixie notices that DNS is case insensitive (lower case is equal to upper case) but case preserving (a response contains the same caps as a request).  So, we can get extra bits of entropy by asking for wWw.DOXpaRA.cOM, and ignoring replies containing www.DOXPARA.com, WWW.doxpara.COM, etc.  0x20 has some notable corner cases, though — the shorter the domain, the less security can be guaranteed (there’s a small sketch of the entropy math after the list below).  This is particularly problematic if you’re attacking the roots or TLD’s — especially considering the ability to have almost 100% numeric names, like:

1.a11111111
1.a11111112
1.a11111113
1.a11111114
1.a11111115
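
To make that corner case concrete, here’s a minimal sketch of the 0x20 idea.  This is my own illustration, not Dagon and Vixie’s code, but it shows where the entropy comes from and how little of it names like those above can carry:

```python
import random

def encode_0x20(qname):
    """Randomize the case of each letter in a query name (the 0x20 trick)."""
    return "".join(
        c.upper() if c.isalpha() and random.getrandbits(1) else c.lower()
        for c in qname
    )

def reply_matches(sent, received):
    """Accept a reply only if it echoes the exact case pattern we sent."""
    return sent == received   # case-sensitive comparison is the whole point

def extra_bits(qname):
    """Only alphabetic characters carry a bit of 0x20 entropy."""
    return sum(1 for c in qname if c.isalpha())

print(encode_0x20("www.doxpara.com"))   # something like wWw.DOXpaRA.cOM
print(extra_bits("www.doxpara.com"))    # 13 extra bits
print(extra_bits("1.a11111111"))        # 1 extra bit, which is nearly useless
```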

Another path that’s been suggested is to double query — “debounce”, as one engineer put it.  Debouncing is similar to the “run all DNS traffic over TCP” approach — seems good, up until the moment you realize you’d kill DNS dead.  There’s a certain amount of spare capacity in the DNS system — but it is finite, and it is stressed.  There is absolutely not enough to handle a 100% increase in traffic over the course of a month.
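
Just to show where the doubling comes from, here’s a rough sketch of a debounced lookup.  The resolve_once() helper is hypothetical, standing in for a real resolver’s query path, and note that (as a commenter points out below) round-robin answers complicate the naive comparison:

```python
def debounced_resolve(qname, resolve_once):
    """Issue the same query twice, independently, and only trust agreement.

    Each call to resolve_once() is assumed to use a fresh transaction ID and
    a fresh source port.  The cost is right there in the code: every lookup
    becomes two lookups on the wire.
    """
    first = resolve_once(qname)
    second = resolve_once(qname)
    if first == second:
        return first
    # Naive exact-match comparison; round-robin answers would trip this.
    raise ValueError("answers for %s disagree, possible spoofing" % qname)
```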

Now being raised is the possibility of “attack modes” — large-scale state transitions during periods where the server recognizes (it’s not exactly subtle) that it’s under attack.  These have a lot of potential, except for the reality that they create something of a super amplification attack:  An attacker invests a small number of packets to push every name server on the net into attack mode, and the DNS infrastructure implodes.

Those wouldn’t be very good headlines.

That’s not to say there aren’t more targeted mitigations — this is the sort of work Nominum has been doing with their server, to attempt to prevent individual targeted names from being poisoned.  The problem here is that as soon as the attack mode isn’t global, it becomes interesting for the attacker to repeatedly migrate out of the small range that’s in “lockdown” and into a new target range.

And there are so many ranges.  There are dozens of variations on the attacks I’m presenting here.  For just one example, I may have shown off how to attack www.foo.com via 83.foo.com, but that doesn’t mean it’s not useful to just attack 83.foo.com.  The web security model, to varying degrees, trusts arbitrary subdomains as elements of their parents.  This was ultimately how I was able to rickroll the Internet at Toorcon.

All the rate-limiting approaches have issues with attacks outside the limited range — with the added bonus that somebody’s not getting a reply, a nasty trait that makes attacks on downstream consumers of the data much more feasible.

Eventually, people realize we could use a better source of entropy — perhaps a prefix on each query name (XQID) or an extra RR (resource record) containing a cookie.  Now we need cooperation from the authoritative side of the DNS house.  This is tricky, precisely because while it’s one thing to be proud of “+50% of the net has patched”, it’s quite another to say “well you can reach half the domains out there…”  The solutions I’ve seen do all have a story for backwards compatibility, storing/caching whether a given name server does or doesn’t support their particular variant.
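
Here’s a rough sketch of what that backwards-compatibility caching tends to look like.  The query_with_cookie() and query_plain() helpers are hypothetical stand-ins for whichever extra-entropy variant you prefer, and note that the fallback path is exactly the downgrade problem described next:

```python
import time

SUPPORT_CACHE = {}        # nameserver IP -> (supports_extension, expires_at)
CACHE_TTL = 24 * 3600     # re-probe each authoritative server once a day

def resolve(qname, ns_ip, query_with_cookie, query_plain):
    """Try the extra-entropy variant, remember who speaks it, fall back otherwise."""
    supported, expires_at = SUPPORT_CACHE.get(ns_ip, (None, 0.0))
    if supported is None or time.time() > expires_at:
        try:
            answer = query_with_cookie(qname, ns_ip)    # probe the secure variant
            SUPPORT_CACHE[ns_ip] = (True, time.time() + CACHE_TTL)
            return answer
        except Exception:                               # timeout or garbled reply
            SUPPORT_CACHE[ns_ip] = (False, time.time() + CACHE_TTL)
    if SUPPORT_CACHE[ns_ip][0]:
        return query_with_cookie(qname, ns_ip)
    # Legacy fallback: an attacker who can suppress the probe lands us here,
    # which is the downgrade issue the next paragraph is about.
    return query_plain(qname, ns_ip)
```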

But again, if the root servers, or the com servers, are not signed up for this system, there’s no incremental benefit to it:  The attacker can prevent your nice and secure name server from ever being used by other resolvers in the first place.

And so, we end up at cryptography.  DNSSEC is one approach.  I don’t think I need to go into the pragmatic, political issues that have held it back.  From an engineering standpoint, though, if we don’t have the headroom for TCP, do we really have the headroom for any cryptography?  Maybe.  DNSSEC is not the only possible trick either.  Link-based crypto, either via DTLS (keyed via the existing PKI, using the NS name as the Subject) or some TKEY/TSIG dance, could also work.
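
To be clear about what “link-based crypto” buys you, here’s a toy sketch of the TSIG-style idea: each resolver/authority link shares a key, and replies carry a MAC an off-path attacker can’t forge.  This is the concept only, not the real TSIG wire format or the TKEY key exchange:

```python
import hashlib
import hmac
import os

# Per-link shared secret; in the TSIG world this is what TKEY would negotiate.
link_key = os.urandom(32)

def sign_reply(message: bytes) -> bytes:
    """MAC the reply under the link key."""
    return hmac.new(link_key, message, hashlib.sha256).digest()

def verify_reply(message: bytes, mac: bytes) -> bool:
    """A spoofer without the key cannot produce a valid MAC, no matter how
    many transaction IDs or source ports they guess."""
    return hmac.compare_digest(sign_reply(message), mac)

reply = b"www.doxpara.com. 300 IN A 192.0.2.10"
mac = sign_reply(reply)
assert verify_reply(reply, mac)                # the legitimate reply passes
assert not verify_reply(b"spoofed data", mac)  # a forged reply fails
```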

So, there’s lots of options.  Lots, and lots, and lots of options.  But, throughout this entire process of analysis, one thing was very clear:

DJB was right.  Almost every attack we find is strongly mitigated by source port randomization.  Mitigated, not eliminated, but mitigated just the same.  He may not have known exactly how to break BIND or MSDNS in 10 seconds in the real world — frankly, if he had, he’d have told us.  But he knew there had to be a way, just as Hans Dobbertin knew in 1996 that eventually somebody was going to break MD5.  When Wang finally came out with her MD5 collisions in 2004, it wasn’t a surprise — MD5 had been federally decertified for years.  But it was still a pretty big deal, since, certifications aside, MD5 is everywhere.

Fixed source port DNS was everywhere.  Less so now.  I’m indescribably amazed and honored by that.  That’s a lot of hours by a lot of IT guys we’re looking at here.  I’m sure there are some pretty happy pizza shops right now.  But let’s be clear — there are bad guys in the field, and they are using this attack in interesting ways.  People who are patched are much, much safer than people who are not.

Finally, as important a question as “how should DNS really be fixed” is, I think the real question of the day is “why does DNS matter so much?”  From Halvar Flake’s first post — “What, doesn’t everyone assume their gateway is owned, and thus use SSL/SSH?” — the underlying instability is the continuing assumption that there is a difference between networks that are hostile and networks that are safe.  DNS is a great way to exploit that delusion — especially behind firewalls — but SNMPv3 and BGP both enable all the attacks I’ve found here.  Even if we go from 32 bits of entropy to 128 bits — even if we deploy DNSSec — we’re still going to deliver email insecurely.  We’re still going to have an almost entirely unauthenticated web.  We’re still going to ignore SSL certificate errors, and we’re still going to have application after application that can’t autoupdate securely.

That, at the end of the day, is a far larger problem than this particular DNS issue.

Categories: Security
  1. Ian
    August 9, 2008 at 3:43 pm

    I think the bottom line is that port randomisation didn’t need a protocol change. To change the protocol in secret… erm, I’m not overly familiar with the IETF process, but I am sure modifying protocols in secret is not what happens. Having said that, I am sure a larger Transaction ID with some kind of fallback mode would work when a sensible response wasn’t received: the DNS resolver makes a request using a large transaction ID; if no sensible response arrives within x milliseconds (long enough for a “real” server to win the response “race”), it retries with the familiar 16 bits + source port randomisation. That’s my 2 pence worth.

  2. August 9, 2008 at 6:07 pm

    Thanks for a good post and for mentioning my XQID as one of the possible solutions.
    It is correct that XQID needs cooperation from the authoritative side, but it does not depend on the root servers being “signed up” – that’s the real beauty of XQID. It can skip that whole root updating process (which DNSSEC hasn’t achieved in 10 years).
    Details at http://www.jhsoft.com/dns-xqid.htm

  3. August 9, 2008 at 7:23 pm

    Totally agreed with your final statement. I myself pass invalid SSL certificates, not from ignorance but more from the point that it’s simply an IRC server.
    I encourage everyone to run a packet sniffer on their computer and then tell me that their applications take security and confidentiality seriously.

    also
    ohohohoho, rickroll’d the internets.

  4. Diego
    August 10, 2008 at 1:40 am

    The solution I found is not to use DNS at all.

    I made a little app that listens locally (like a bouncer) and resolves domain names into IP addresses only when the IPs are not reachable; then it checks whether the IP is trustable (doing a whois over the IP) and, if it is, writes the domain/IP down to the hosts file.

    The app doesn’t work 100%, but for my bank, PayPal, eBay, Google and those domains it works.

  5. Pepito Grillo
    August 10, 2008 at 3:30 am

    Thanks for such an excellent post.

  6. Secuman
    August 10, 2008 at 4:19 am

    Do you mean we need an upheaval of the current internet infrastructure? When, how, and who would do it?

  7. Leoš Bitto
    August 10, 2008 at 3:07 pm

    Now that we all know how the attacks against DNS servers work, I thought about an additional defense strategy which the DNS servers could implement. Please let me know what you think – did I get it wrong? If a DNS server receives an extra RR in the reply, always look it up in the cache first. If it is not there, or if it is there with the same value as received, process it the usual way. The change would come only when the cache contains a different value from what we just received. In this case start being suspicious and ask again – send the same request once more, with a new TRXID and from a different source port. Trust the new value only if the second reply contains the same data – only then would the cache entry be allowed to be overwritten. I hope that this could be a nice tradeoff – it could significantly decrease the probability of a successful DNS cache poisoning attack while not increasing the DNS traffic too much. Those extra cache lookups would need additional processing power, but let’s hope that this could be solved just with better hardware – that would be the price for the increased security.

  8. August 10, 2008 at 3:18 pm

    “Another path that’s suggested is to double query”

    Won’t round-robin responses sort of mess this up? Do you keep making requests until you get two responses which are the same? You _could_ speed this up by first making two simultaneous (two threads) requests; then if they don’t match, make four simultaneous (four threads) requests; then if you still don’t have a match, make eight simultaneous requests. That will give you 14 responses; there should be a match in there somewhere, and if not, go to 16, etc.

  9. Ian
    August 10, 2008 at 4:06 pm

    One other thing about the Zbr work is that one of the ways of initiating this attack was getting the browser or some other local program to perform lots of similar lookups (“a.google.com;b.google.com;”). So if the Zbr attack takes ten hours, I tend not to take that long to read a web page… I’d probably surf off somewhere else by then. :o)

  10. Peter
    August 10, 2008 at 5:28 pm

    As another added way to stop the attacker from trying to guess the source port that’s open for a reply by flooding the server, would detecting & blocking the IP that’s flooding UDP ports for a random amount of time before it’s allowed again work?

  11. August 10, 2008 at 5:48 pm

    I’ve just started a quick look into DNS and I note that RFC 5001 neatly side-stepped the issue of identity, which is ironic given what it is set up to do (see 3.1).

    Any fix has to ensure that the current system is able to cope with load and be backwards compatible.

    All responses have a TTL, so possibly a “fix” would be to ensure that an exploit would take longer than the TTL to employ.

    There are several areas in DNS that could be messed around with to add additional randomness (entropy). I’ve no doubt that fields for future use exist and could be employed for this (eg Z)

    This is an engineering problem, and rather than hand-wringing, experts should be re-reading the standards and coming up with solutions.

    On the other hand it clearly illustrates a real flaw in the interweb – who actually owns this problem? Please – someone answer that question.

  12. August 12, 2008 at 9:06 am

    very good thanks

  13. August 12, 2008 at 2:33 pm

    Not sure if you have seen this:
    http://tservice.net.ru/~s0mbre/blog/devel/networking/dns/2008_08_08.html

    But your comment “First, the universal constraint on every solution is that it must cover the root servers, and the TLD’s” brings up a great point, then I would assume you would champion (not having ulterior motives): Local root servers everywhere. Having your dns cache use a local root server eliminates vast areas of exploitable territory. If we could agree on an out-of-band root list update method that was secure (gpg, etc), it would go even further towards reducing exploitability of dns. Using tinydns (DJBDNS) for a local root server for every caching server is very resource efficient (I run them together on 384 MB virtual machines). It reduces your traffic to servers outside your organization and creates a single log which lists all your dns queries, I don’t know why more sys admins don’t use them.

    The problem is none of the root server entities would advocate such an approach because it would undercut their power.

    Truly attack resistant dns infrastructure requires a distributed root mechanism, I hope that people outside of the current power structure start making themselves heard and include this as a cornerstone of any future proposals to try to create an actually secure dns infrastructure rather than security theater and furthering the control and power consolidation in the hands of a few by giving them not only the roots of dns but also the CAs and root certs for any lame DNSSEC retread.

    PS – This is again an idea that DJB had way back when nobody cared.

  14. August 12, 2008 at 2:43 pm

    Sorry, I had a cut/paste error in my previous comment. Can you delete that one and post this one? Thanks.

    Your comment “First, the universal constraint on every solution is that it must cover the root servers, and the TLD’s” brings up a great point, the point being that I would hope you would champion (not having ulterior motives): Local root servers everywhere. Having your dns cache use a local root server eliminates vast areas of exploitable territory. If we could agree on an out-of-band root list update method that was secure (gpg, etc), it would go even further towards reducing exploitability of dns. Using tinydns (DJBDNS) for a local root server for every caching server is very resource efficient (I run them together on 384 MB virtual machines). It reduces your traffic to servers outside your organization and creates a single log which lists all your dns queries, I don’t know why more sys admins don’t use them.

    The problem is none of the root server entities would advocate such an approach because it would undercut their power.

    Truly attack resistant dns infrastructure requires a distributed root mechanism, I hope that people outside of the current power structure start making themselves heard and include this as a cornerstone of any future proposals to try to create an actually secure dns infrastructure rather than security theater and furthering the control and power consolidation in the hands of a few by giving them not only the roots of dns but also the CAs and root certs for any lame DNSSEC retread.

    PS – This is again an idea that DJB had way back when nobody cared. Link: http://cr.yp.to/dnsroot.html

  15. JR
    August 12, 2008 at 5:05 pm

    Can anyone else confirm that Microsoft’s patch for Windows 2000 doesn’t actually change anything?

    I’ve tested four Windows 2000 Professional machines patched with MS08-037, and have found that their DNS resolvers still issue UDP queries with sequentially-assigned port numbers (starting with 1026). In addition, after a reboot, transaction IDs always follow a predictable fixed sequence: 10624, 48664, 57674, 54828…

    It’s as though the machines were never patched at all — and they definitely are; the affected files all show the correct June 2008 time stamps. No firewall is installed that could be mangling the port numbers or IDs, and they aren’t going through any NAT.

    Meanwhile, patched Windows XP machines are issuing queries with randomized port numbers and IDs, as expected.

  16. Luke
    August 12, 2008 at 8:32 pm

    I didn’t understand all of that but I found it to be a very interesting and informative read. I sort of understand the technical stuff but not really. Thanks for making this DNS vulnerability issue a little easier to understand.

  17. Robert Carnegie
    August 13, 2008 at 3:28 am

    I think I’m an ordinary casual Internet user, and I rarely see an invalid security certificate – but maybe because I rarely use secured Web sites. If I used the Internet for business matters, I might see more – but having, say, a Web shop where browsers tell their user “Don’t go there!” can’t be good business.

    I recently read The Inquirer version http://www.theinquirer.net/gb/inquirer/news/2008/08/08/phishers-lazy-hackers of a report saying that illegal web sites that sell for instance stolen credit card numbers are not difficult to hack to steal the numbers again. Then again, I suppose you can just ask for one free sample credit card and then use it to buy half a dozen more, so who needs to hack?

  18. John Moore
    August 15, 2008 at 4:33 am

    Perhaps you are too buried in the details to see a simple solution. In addition to DNSSEC, one could have a virtual farm of root servers and introduce polling like the space shuttle’s control computers. When a query is submitted, 4 out of five root servers must agree on the output or answer. An attacker would have to poison enough servers to affect the results, but this could be mitigated by using different software on each root server. Of course, the weak link might be the polling software that analyzes the multiple answers and reports the consensus. I don’t know those details on how NASA does it, but you’d be building redundancy and reliability into the system and decreasing vulnerability. So make the root servers a heterogeneous cluster with multiple head nodes that have to come to a consensus. It’s a cheaper solution to add more servers than redesign all of your software from the ground up.

  19. John Moore
    August 15, 2008 at 4:47 am

    http://www.research.ibm.com/journal/rd/201/ibmrd2001E.pdf for the details on shuttle flight computers.

  20. August 15, 2008 at 12:03 pm

    Hmm……How about using NXDOMAIN to carry the bits back to the resolver?

    If I am a resolver, instead of asking one question (the one I want to know), I ask two questions, one that I want to know, and one that nobody knows, not even the authoritative DNS. Like perhaps asking for an A record made up of a long string of random bytes. Depending on the response, I can decide to trust or not trust the answer. Let’s say that I need to know an A record for www.example.com. First – I set qdcount=2, then in one request I ask example.com’s NS’s two questions:

    Question 1: www.example.com
    Question 2: longstringofrandombytes.example.com

    If I get a valid response for the first question and an NXDOMAIN response for the second, I know that I’ve got a reply from a valid ADNS rather than a spoofed reply. An attacker spoofing the valid ADNS will not know the longstringofrandombytes part of the second question and cannot reasonably guess that string. The real ADNS will reply to the second question with either an NXDOMAIN error or a valid IP. If I get the NXDOMAIN error for the same longstringofrandombytes.example.com that I asked for, I’m good to go.

    Not a solution, but maybe a temporary fix? Or am I just another crackpot with a bogus solution?

  21. Ian
    August 15, 2008 at 1:30 pm

    @ John.

    The root servers aren’t the problem. Plus, increasing DNS usage 5x might have an “interesting” effect!

  22. August 15, 2008 at 6:24 pm

    So djbdns/tinydns does suffer from this? Or does not?

    The first sentence leads me to believe it could be at risk. But I am running low on sleep and could be misinterpreting it…

  23. Lennie
    August 16, 2008 at 5:02 am

    “even if we deploy DNSSec — we’re still going to deliver email insecurely.”

    Well we have Opportunistic IPSec which depends on something like DNSSec to distribute public keys.

    I’m still hopeful if we have a good way to verify DNS-data (and update) we would use it to distribute public keys.

  24. August 16, 2008 at 5:45 pm

    It seems to me that the more pressing problem isn’t the number of bits that can be tested, it’s the acceptance of unrequested and unneeded additional records in the spoofed reply.

    Is there any reason why a nameserver should reply with new records for itself? Perhaps requesting DNS servers should discard additional information that they do not need. Meaning that if you request a non-existing domain, you would not be able to piggyback new authority information: the DNS server would say to itself “interesting, a free NS record, but I’ve already got that one, goodbye.”

  25. PaulL
    August 17, 2008 at 1:40 am

    Seems to me a two part solution is needed.

    Firstly, a solution that doesn’t require changes to existing software, and that gives us enough time to do something more complicated. Some combination of transaction id randomisation, source port randomisation, and the 0x20 mod would achieve this – logically we should do this.

    Next, something that takes a bit longer, but that will give us stronger security. All of these require some protocol change, but will give us more time in the bank (until the next issue):
    – DNSSEC. Problem is the processing overhead
    – debounce. Needs more hardware, but otherwise doubles the effective number of bits
    – increase the size of the transaction id. Easier than DNSSEC, but requires software change

    I suggest increasing the size of the transaction id. Make it a variable field, and have the DNS servers capable of replying to any inbound message between 16 and 2048 bits. No need to turn it on yet, but once the majority of the internet has upgraded (might be 3-5 years away), we just start using longer transaction ids. The source server could choose how long they would like. In some ways it would be nice if someone had thought of this before the latest round of patches – if we’d included this capability in the latest patches then we’d be most of the way there….

  26. irldexter
    August 20, 2008 at 2:10 am

    Why not record the IP TTL to the Auth NS for a domain from your querying host, and if any response comes from an external source that doesn’t match that hop count within some tolerance, discard it? Even with Anycast you’ll generally always get to your closest NS, and load-balanced servers will also have the same hop count. Kinda like wrapping context around normal traffic.

    Potentially also *Time* in the form of RTT as ‘too short’ rather than ‘too long’…

  27. August 20, 2008 at 6:40 pm

    In the website address I’ve included the URL for my blog post on this matter.

    In summary, one option is to use multiple IP addresses for further entropy. With smart firewall devices an ISP could use all IP addresses assigned to its customers for DNS recursion. A large ISP could use a block of 1024 IP addresses for this purpose. It would require only a minor change to BIND.

    Another option is to have the ISPs run DNS secondaries for the bank zones (which seem to be the main targets). We already have a number of secure ways of transferring the zone names and master IP addresses (described in my post).

  28. bd_
    August 20, 2008 at 7:27 pm

    One possibility that works /right now/ to increase the difficulty of the attack hugely is to use IPv6 with source address randomization.

    At least some of the root servers and GTLD servers have already deployed IPv6, and /96s are easy to come by. If each DNS request was sent with a random request ID, random source port, and random IPv6 address, that gives 64 bits of entropy, which is going to be a hell of a lot harder to brute force; and unlike DNSSEC and the like, it doesn’t need any particular cooperation on the authoritative side beyond ipv6 support.

    The downside is with most OS networking stacks, binding to an entire /96 is going to be difficult. You might need to connect the DNS server to something like linux’s tun/tap system and just route the entire /96 to its (userspace) IP stack…

  29. Lennie
    August 21, 2008 at 2:40 am

    @irldexter: if an internet route changes, it can easily add 5 extra hops.

  30. August 21, 2008 at 11:33 am

    thanks a lot

  31. August 27, 2008 at 6:59 am

    I agree with what you have said, but some (perceived) anonymity and some plain text is good, and what the internet is all about. Could you imagine every site moving to HTTPS? For starters, what is the point; who needs to read my blog through an encrypted channel?
    Onto DNS, kudos for the way you handled it, very commendable. I was thinking the other day of another way to fix the issue. Deploy a port-knocking technique on the reply based on the query, so that ports would have to be knocked in the correct order on the DNS server before it accepts back the lookup. Similar to the way a person gets into a safe: knowing the numbers isn’t good enough, you need to know the sequence. This would stop NAT being an issue, as the DNS server can make the request out on all ports, getting an auto map back on these ports. And it would be more secure, as the attacker would have to guess the right ports to knock on the way back, or read the request and then generate the reply and reply back; but if they can do that, they are already in the middle and it’s game over.
    What do you think?

