Well, I didn’t get to make it to SIGGRAPH this year — but, as always, Ke-Sen Huang’s SIGGRAPH Paper Archive has us covered. Here are some papers that caught my eye:
Is there any realm of technology that moves faster than digital imaging? Consumer cameras are on a six month product cycle, and the level of imaging we’re getting out of professional gear nowadays absolutely (and finally) blows away what we were able to do with film. And yet, there is so much more possible. Much of the intelligence inside consumer and professional cameras is locked away. While there are some efforts at exposing scriptability (see CHDK for Canon consumer/prosumer cams), the really low level stuff remains just out of reach.
That’s changing. A group out of Stanford is creating the Frankencamera — a completely hackable, down-to-the-microsecond base for what’s being referred to as Computational Photography. Put simply, CCDs are not film. They don’t require a shutter to cycle, their pixels do not need to be read simultaneously, and since images are fundamentally quite redundant (thus their compressibility), a lot can be done by merging the output of several frames and running extensive computations on them. Some of the better things out of the computational photography realm:
- Flutter Shutter Photography: If you ever wanted “Enhance” a la CSI, this is how we’re going to get it.
- Light Field Photography With A Handheld Camera: What if you didn’t need to focus? What if you could capture all depths at once?
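To make the “merge several frames” idea concrete, here’s a minimal sketch (my own illustration in plain Python, not from any of the papers) of why stacking aligned exposures beats a single one: averaging N frames cuts sensor noise by roughly the square root of N.

```python
import random
import statistics

def capture(scene, noise=8.0, rng=None):
    """Simulate one noisy exposure of a scene (one value per pixel)."""
    rng = rng or random
    return [p + rng.gauss(0, noise) for p in scene]

def merge(frames):
    """Average aligned frames pixel-by-pixel; noise falls roughly as 1/sqrt(N)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

rng = random.Random(42)
scene = [128.0] * 10000                  # the "true" pixel values
frames = [capture(scene, rng=rng) for _ in range(16)]

single_err = statistics.pstdev(p - 128.0 for p in frames[0])
merged_err = statistics.pstdev(p - 128.0 for p in merge(frames))
print(single_err, merged_err)  # merged error is roughly 4x smaller with 16 frames
```

Real computational photography pipelines have to align the frames first, of course — that’s much of the hard part.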
Getting these mechanisms out of the lab, and into your hands, is going to require a better platform. Well, it’s coming :)
Electrical Engineering has Alternating Current (AC) and Direct Current (DC). Computer Security has Code Review (human-driven analysis of software, seeking faults) and Fuzzing (machine-driven randomized bashing of software, also seeking faults). And Computer Graphics has Polygons (shapes with textures and lighting) and Images (flat collections of pixels, collected from real world example). To various degrees of fidelity, anything can be built with either approach, but the best things often happen at the intersections of the two tech trees.
In graphics, a remarkable amount of interesting work is happening when real world footage of humans is merged with an underlying polygonal awareness of what’s happening in three dimensional space. Specifically, we’re getting the ability to accurately and trivially alter images of human bodies. In the video above, and in MovieReshape: Tracking and Reshaping of Humans in Videos from SIGGRAPH Asia 2010, complex body characteristics such as height and musculature are being programmatically altered to a level of fidelity that the human visual system sees everything as normal.
That is actually somewhat surprising. After all, if there’s anything we’re supposed to be able to see small changes in, it’s people. I guess in the era of Benjamin Button (or, heck, normal people photoshopping their social networking photos) nothing should be surprising, but a slider for “taller” wasn’t expected.
In The Beginning, we thought it was all about capturing the data. So we did so, to the tune of petabytes.
Then we realized it was actually about exploring the data — condensing millions of data points down to something actionable.
Am I talking about what you’re thinking about? Yes. Because this precise pattern has shown up everywhere, as search, security, business, and science itself find themselves absolutely flooded with data, but not nearly as much knowledge of what to do with all that data. We’re doing OK, but there’s room for improvement, especially for some of the richer data types — like video.
In this paper, the authors extend the sort of image correlation research we’ve seen in Photosynth to video streams, allowing multiple streams to be cross-referenced in time and space. It’s well known that, after the Oklahoma City bombing, federal agents combed through thousands of CCTV tapes, using the explosion as a sync pulse and tracking the one van that had the attackers. One wonders the degree to which that will become automatable, in the presence of this sort of code.
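The “explosion as a sync pulse” trick is, at heart, just correlation. A toy sketch (my own illustration, not the paper’s method) that aligns two streams by their mean-brightness traces:

```python
def brightness_offset(a, b, max_lag):
    """Find the lag (in frames) that best aligns stream b to stream a
    by maximizing the dot product of their mean-brightness traces."""
    def score(lag):
        pairs = [(a[i], b[i - lag]) for i in range(len(a))
                 if 0 <= i - lag < len(b)]
        return sum(x * y for x, y in pairs)
    return max(range(-max_lag, max_lag + 1), key=score)

# Two "streams": flat brightness, with a spike (the shared event) at
# frame 30 in one and frame 37 in the other.
a = [1.0] * 100; a[30] = 50.0
b = [1.0] * 100; b[37] = 50.0
print(brightness_offset(a, b, max_lag=20))  # -7: the event shows up 7 frames later in b
```

The hard part in practice is spatial correlation — figuring out that two cameras are even looking at the same scene — which is where the Photosynth-style feature matching comes in.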
Note: There’s actually a demo, and it’s 4GB of data!
Second note: Another fascinating piece of image correlation, relevant to both unstructured imagery and the unification of imagery and polygons, is here: Building Rome On A Cloudless Day.
In terms of “user interfaces that would make my life easier”, I must say, something that turns video seeking from a “sequence of tiny freeze frames” (at best) or “just a flat bar I can click on” (at worst) into “a jigsaw puzzle of useful images” would be mighty nice.
(See also: Video Tapestries)
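For what it’s worth, a crude version of that “jigsaw puzzle of useful images” is just keyframe selection. A toy sketch (entirely mine, with made-up feature vectors standing in for thumbnails): greedily pick frames that look least like the frames already chosen.

```python
def keyframes(frames, k):
    """Greedy pick: k frames, each maximally different from those chosen so far.
    `frames` is a list of per-frame feature vectors (e.g., tiny thumbnails)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    chosen = [0]                       # always keep the opening frame
    while len(chosen) < k:
        best = max((i for i in range(len(frames)) if i not in chosen),
                   key=lambda i: min(dist(frames[i], frames[j]) for j in chosen))
        chosen.append(best)
    return sorted(chosen)

# Three "scenes" worth of frames: dark, mid, bright.
video = [[0, 0]] * 10 + [[5, 5]] * 10 + [[9, 9]] * 10
print(keyframes(video, 3))  # [0, 10, 20]: one frame per scene
```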
You know, some people ask me why I pay attention to pretty pictures, when I should be out trying to save the world or something.
Well, let’s be honest. A given piece of code may or may not be secure. But the world’s most optimized algorithm for turning Anime into ASCII is without question totally awesome.
In December of 2009, Bill Brenner from CSO Magazine asked me what to expect from 2010. I’d actually never done one of these “predict the future” writeups before, but I took a shot at it. Bill ended up posting the more CxO-friendly of these. Here’s (roughly) the full set:
1. Economics will cause a few members of the “Old and Hoary Prediction Club” to finally come true.
One of the defining laws of security in general can be thought of as: “What could possibly go wrong is much more than what actually does.” Most bad things do not happen because they’re prevented. They don’t happen because “the bad guys” simply do not choose to do them. But this is truly the first major economic recession of the Information age, and whatever the numbers say, a *lot* of people are struggling. That’s motive. People who struggle get creative, by which I mean “start doing creative and profitable things they heard about”. Not everything that’s been predicted in previous years will come true, but at the end of 2010, look back to predictions for 2007, 2008, and 2009. Some of the wrong ones will have happened. One in particular is number two on this list:
2. Cyber extortion will finally enter the public consciousness.
There’s no good data on — wait, this is security. There’s not much in the way of good data on *anything*. But, facing a credible threat to a downtime-sensitive, computer-driven infrastructure, extortion demands do in fact get paid. Sometimes the system is large, like a public utility or a manufacturing facility. Sometimes it’s not exactly on the side of angels, like an online gaming establishment. And sometimes it’s just some random mom and pop, or even citizen, being told to spend $50 if they ever want to see their documents again. There’s no good way of knowing how big a problem this has been, but this may be the year that extortion, like credit card fraud and even more like identity theft, becomes part of the national conversation. Expect stock filings to start having to disclose unexpected expenses, some truly ill-advised marketing campaigns by security vendors, backlash (not at all entirely undeserved) that the threat is being wildly overblown, and continuing aggression towards the channels by which the extortionists get paid.
3. Prosecution for cybercrime will begin in earnest, starting with the sloppy rich.
Albert Gonzalez, one of the hackers behind the Heartland and 7-Eleven attacks, reportedly spent over $75,000 on a birthday party. The worst of the worst are known to keep far lower profiles, but what that party should tell you is that there’s a lot of low hanging fruit for law enforcement to scoop up, with some serious ill-gotten gains to recover.
As a corollary to this, the international jurisdiction problem that has stymied prosecutions in the past will be dealt with — possibly by agreement, maybe by treaty, and maybe even (if you will forgive me speaking about things I truly know little about) the application of anti-terrorist compacts to cybercriminal activity. Put simply, there aren’t enough terrorists, and the ones there are, are political hot potatoes of the first order. The cybercriminals will have far less baggage.
4. Data sharing will struggle, but will actually begin — driven by compliance requirements and the push for “public/private partnership”
One of the great challenges of security is operationalization: Sure, there are small cadres of attackers and defenders who know how this field works, but spreading them thin across the industry as consultants doesn’t scale. To make a real difference, the knowledge of a few must be pushed into process for many. Existing efforts along these lines — involving compliance regimes — have actually achieved more penetration than they’re given credit for. People are actually doing what the rules say. But the rules, without naming names, can leave something to be desired. In the face of deep compromises of fully compliant systems, the standards bodies will lay the blame on insufficient data sharing between victims. Right or wrong (and, given the terrible state of data in security, more the former than the latter), this will lead to American legislation centered on funding an LE clearing house for, and a yearly report on, attacks seen against American cyber assets. Compliance standards, by and large, will compel participation in this regime. (There’s a small chance this effort will be fast-tracked by end of 2010, but only in the face of a front-page-news scale attack — if I were to predict an example, electronic interference with military-related logistics within a civilian supplier. Otherwise, it will be a program that is well underway, but not operational by end of year.) Europe will follow.
5. Ineffective security technologies will finally get called out as such, but not without cost
Many cyber defense technologies do not work. Specifically, given a large sample of environments with the defense, and a large sample without, differences in infection rates in the former and the latter will not in fact be statistically significant. Some that do work, only work through the multicultural effect: The defense simply doesn’t have enough market share to spawn evolution in attackers. Figuring out what doesn’t work, what won’t work in the long term, and what’s a genuine defensible security boundary will become a major driver for the next generation of compliance standards. Given the money at stake, expect this process to be brutal and politicized.
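“Statistically significant” here has a concrete meaning. A quick sketch (the infection counts are invented for illustration) of the comparison, using a standard two-proportion z-test in pure stdlib Python:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: the two infection rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 1000 networks with the defense deployed, 1000 without (hypothetical counts):
z_useless = two_proportion_z(80, 1000, 85, 1000)   # 8.0% vs 8.5% infected
z_working = two_proportion_z(30, 1000, 85, 1000)   # 3.0% vs 8.5% infected
print(round(z_useless, 2), round(z_working, 2))    # |z| > 1.96 is significant at the 5% level
```

The first defense fails the test — its 0.5-point improvement is indistinguishable from noise at this sample size — while the second clears the bar easily. Actually collecting those counts is, of course, the part the industry struggles with.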
6. The Cloud will get worse before it gets better. But it will get better.
The Cloud is *going to win*. I don’t know how else to say this: It’s faster. It’s better. It’s cheaper. But there are security issues, and they’re not simply the sort of problems that can be worked out by taking a CIO out to golf and promising everything’s going to be OK. Genuine, technical security faults in cloud technology will garner a huge amount of attention. It may appear to some that all is lost. But the faults will be addressed, because existing investments are so very high. And anyway, it’s not like the status quo is anything to be proud of.
7. DNSSEC will continue its inexorable progress towards replacing X.509.
According to Verizon Business, 61% of compromises can be traced back not to vulnerabilities, but to failures in authentication. Technically impressive attacks are fun and all, but no passwords, bad passwords, default passwords, and shared passwords are the bread and butter of real world exploitation. (That, and SQL injection.) PKI was supposed to eliminate this password problem, ages ago, but it didn’t exactly work out. We built PKI on X.509 — but X.509-based PKI was obviously a scalability non-starter in 2002. That it’s still the best we have going into 2010 is an embarrassment. 2010 will see major DNS TLDs — and yes, I predict .com will get something up early, alongside the July 2010 signing of the root — spin up DNSSEC operations. And then…no, the Internet will not become safe overnight, and there will be some snarking about “well, what now?”.
By end of year, we’ll see what: A stream of security products, previously unable to scale due to their dependency on X.509, will transition their trust systems over to DNSSEC. And then the race in 2011 will be for much larger suppliers to adapt to the architectural shift their scrappier competitors have shown to be viable.
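The architectural shift is from a forest of independent certificate authorities to a single hierarchical trust anchor. A heavily simplified sketch (sign() here is a stand-in hash, not real DNSSEC RRSIG/DS records, and the zone names are illustrative) of why one root anchor can validate a leaf key:

```python
import hashlib

def sign(key, data):
    """Stand-in for a real signature: a hash of key+data (illustration only)."""
    return hashlib.sha256(key + data).hexdigest()

# Each zone publishes its child's key, "signed" with its own key.
ROOT_KEY = b"root-anchor"
COM_KEY = b"com-key"
SITE_KEY = b"example.com-key"

chain = {
    ".": (COM_KEY, sign(ROOT_KEY, COM_KEY)),      # root vouches for com
    "com": (SITE_KEY, sign(COM_KEY, SITE_KEY)),   # com vouches for example.com
}

def validate(anchor):
    """Walk the delegation chain down from the single root trust anchor."""
    key = anchor
    for zone in (".", "com"):
        child_key, sig = chain[zone]
        if sign(key, child_key) != sig:
            return None
        key = child_key
    return key

print(validate(ROOT_KEY) == SITE_KEY)  # True: one anchor, hierarchical delegation
```

Contrast with X.509, where any of hundreds of roots can vouch for any name; here, only the parent zone can vouch for its child, and there is exactly one anchor to configure.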
That being said, there’s at least a 25% chance politics will scuttle all of this, and we’ll be stuck with our auth-driven 61% (which I don’t see getting better anytime soon, compliance or not).
8. “Personalized Prices” — price discrimination via identity-discovery technology already deployed for targeted advertising — will become a fundamental battleground in the privacy wars.
Let’s be honest: The Web knows who you are. For over a decade, identity discovery technologies have been deployed on major websites simply to better target advertising. Beyond banner ads, major e-commerce sites like Amazon have discovered that the right product shown to the right user at the right time will significantly improve sales. But in 2010, expect to see discovery of something much deeper: A major e-commerce site will be discovered to have significantly altered prices based on prior, disturbingly detailed knowledge of the particular user browsing their site. Put simply, different users have different sensitivity to prices, but traditional retail has always had too high friction to exploit this efficiently — a price tag is a price tag, for everybody. But online, everybody can technically receive a “personalized price”, tuned precisely to the likelihood that they’ll buy.
There have been some rumblings in this direction already — witness the recent claims (apparently inaccurate) regarding a retailer raising their prices in the presence of Bing’s Cashback scheme (http://bountii.com/blog/2009/11/23/negative-cashback-from-bing-cashback/) — but nothing compared to what will be unambiguously proven in 2010. Small guys will use IP Geolocation and ZIP Code databases to apply a “rich buyer” tax (and perhaps a “high fraud rate” tax) to incoming purchases, but at least one large organization will be found to have thrown serious quantitative analysis against databases of buyer names, addresses, credit scores, average bank balances, previous shopping history (long term and recent), and presence or absence of comparison shopping *for that particular product*. Even social network data may get involved — if one of your friends just bought a Nikon camera, yours may become more expensive since it’s statistically likely you’ve received a word of mouth vouch. If your family is having a wedding (an inelastic demand if there ever was one), your plane tickets could become much more expensive.
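To be clear about the mechanics, here’s a sketch of what such a pricing engine might look like. Every signal name and weight below is invented for illustration; no real retailer’s model is implied.

```python
def personalized_price(base, signals):
    """Toy illustration: nudge a base price by crude willingness-to-pay signals.
    All signal names and weights here are made up for the sketch."""
    weights = {
        "rich_zip": 0.10,             # IP geolocation / ZIP code lookup
        "friend_bought": 0.05,        # social-graph word-of-mouth signal
        "comparison_shopped": -0.08,  # price-sensitive: discount to close the sale
        "inelastic_event": 0.15,      # e.g., wedding travel
    }
    factor = 1.0 + sum(weights[s] for s in signals if s in weights)
    return round(base * factor, 2)

print(personalized_price(100.0, []))                            # 100.0
print(personalized_price(100.0, ["rich_zip", "friend_bought"])) # 115.0
print(personalized_price(100.0, ["comparison_shopped"]))        # 92.0
```

The unsettling part isn’t the arithmetic — it’s the signals, and how they were collected.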
The discovery of Personalized Pricing in 2010 will be fascinating to watch. The geeks will immediately start pushing privacy enhancing technologies like Tor, which will suddenly become the cheapest way to shop online — at least, when they successfully block identification, which they won’t always, or even often, manage. The marketers, having invested heavily in this technology (and having made ridiculous amounts of money with it), will push an enormous PR campaign, arguing that “Personalized Prices” (get used to it, it’s going to be given a cutesy name like this) mean better deals for consumers. (Indeed, businesses have long since operated under this reality, though some troubling variants re: internal email hacking may be discovered. However, businesses have an entire negotiation class to deal with this reality. Consumers do not.)
Where it will all go wrong is in two places. First, the quants will ultimately have charged protected classes more in some non-zero number of instances. Maybe it’s females, maybe it’s African Americans, maybe it’s just residents of a ZIP code that has been historically “redlined”. The political hay to be made from this, especially going into the American November 2010 elections, will be extensive. Second, it will be realized that e-commerce sites have much more interesting data to steal than even credit card numbers. Significant personal information will show up on Wikileaks, and the question will become:
How the heck did all this data get collected in the first place?
Some will have been provided in previous transactions, though not necessarily with that particular company. But a disturbing amount will have been pulled from deep packet inspection engines at ISPs — and what won’t have come from them, will be sourced to toolbars pushed into people’s browsers. This will change the nature of the Net Neutrality debate entirely…
Our scanners improve! A new nmap Beta 7 is out (Windows, OSX, Source), with slightly more accurate scan logic, thanks to research by Renaud Deraison of Nessus. This tends to find a few extra boxes per network, so it’s definitely worth grabbing. It doesn’t take too much work to spin up another scan, and heh, it’s an opportunity to play with ndiff, the nmap diffing engine.
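The kind of comparison ndiff automates is easy to picture: set differences over hosts and ports between two scan runs. A toy sketch of the idea (my own illustration, not ndiff’s actual implementation):

```python
def scan_diff(old, new):
    """Compare two scans, each a dict of host -> set of open ports."""
    hosts_up = set(new) - set(old)
    hosts_down = set(old) - set(new)
    port_changes = {h: (old[h] - new[h], new[h] - old[h])
                    for h in set(old) & set(new) if old[h] != new[h]}
    return hosts_up, hosts_down, port_changes

old = {"10.0.0.1": {22, 80}, "10.0.0.2": {445}}
new = {"10.0.0.1": {22, 80, 443}, "10.0.0.3": {445}}
up, down, changed = scan_diff(old, new)
print(up, down, changed)  # one host appeared, one vanished, one grew port 443
```

Run against yesterday’s baseline, that “new host with 445 open” line is exactly the kind of thing you want surfaced automatically.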
McAfee also has a really nice Windows based scanner out the door — check it out.
Of course, you may be thinking: The world didn’t come to an end. Clearly, this whole thing was just a Y2K hypefest. I’m sorry the bad guys aren’t quite the eschatologists some people would like them to be, but somebody’s been investing extraordinary amounts of resources making a worm very difficult to kill. It’s not like there was a contingent of rogue coders, sitting around figuring out where they could put two-digit date fields after January 1st, 2001. There’s a bad guy out there, and while we shouldn’t panic, we shouldn’t quite ignore the situation either. Botnets — even much smaller botnets than would have otherwise have been created, thanks to rapid patching and automatic updates by Microsoft — are big business. As my friend Jason Larsen says, it’s not about ownage, it’s about continued ownage.
What to do? That’s what makes these scanners nice. They represent clean, actionable, operationally viable guidance for IT staff that aren’t exactly bursting at the seams with free time. I continue to be appreciative of all the developers who worked this last weekend to push Tillmann and Felix’s code into their products. It goes a long way towards moving us closer to less fear, more certainty, and no doubt.
There be news!
First off, the Honeynet Project has released Know Your Enemy: Containing Conficker. Tillmann and Felix have done a tremendous job of analyzing Conficker, and their paper makes for a great read. Plus, you get to find out exactly what Conficker is doing differently on the anonymous network surface.
Please, be careful to actually use --script-args=safe=1, like so:
nmap -PN -d -p 445 --script=smb-check-vulns --script-args=safe=1 18.104.22.168
There’s something else cool, but I’m pretty exhausted. More in the morn.
A couple of useful things for IT admins out there. I’ve packaged up Werner and Leder’s PoC scanner via py2exe here. You can now simply run:
C:\Python26\scs\scs>scs.exe 126.96.36.199 188.8.131.52
Simple Conficker Scanner
scans selected network ranges for
Felix Leder, Tillmann Werner 2009
[WARNING] 22.214.171.124 seems to be infected by Conficker!
[WARNING] 184.108.40.206 seems to be infected by Conficker!
Note that scs.exe will also take an IP list, so if you can generate that in a fast sweeper, scs.exe will go much faster.
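If you need to generate such an IP list, a couple of lines of modern Python will do it (this uses the stdlib ipaddress module, which postdates the Python 2.6 shown in the transcript above; the network range is just an example):

```python
import ipaddress

# Write one IP per line for a /24, ready to feed to scs.exe (or any sweeper).
with open("targets.txt", "w") as f:
    for ip in ipaddress.ip_network("192.168.1.0/24").hosts():
        f.write(f"{ip}\n")
```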
Of course, what would be really helpful is getting nmap going — and the code is in fact in SVN! I believe the nmap guys are working on packages right now, but in the meantime, here’s what their dev says to do if you’re on Unix:
If you prefer to install it:
svn co --username=guest --password="" svn://svn.insecure.org/nmap
./configure && make
sudo make install
nmap -PN -d -p445 --script=smb-check-vulns --script-args=safe=1 <host>
If you don’t want to install it:
svn co --username=guest --password="" svn://svn.insecure.org/nmap
./configure && make
./nmap -PN -d -p445 --script=smb-check-vulns --script-args=safe=1 <host>
Building nmap is a little tricky on Windows apparently, but happily enough, this isn’t necessary. Follow these steps to get high speed scanning on Windows:
C:\Documents and Settings\dan>nmap -PN -d -p 445 --script=smb-check-vulns --script-args=safe=1 220.127.116.11
Winpcap present, dynamic linked to: WinPcap version 4.0.2 (packet.dll version 4.0.0.1040), based on libpcap version 0.9.5
Starting Nmap 4.85BETA4 ( http://nmap.org ) at 2009-03-30 08:42 Pacific Daylight Time
————— Timing report —————
hostgroups: min 1, max 100000
rtt-timeouts: init 1000, min 100, max 10000
max-scan-delay: TCP 1000, UDP 1000
parallelism: min 0, max 0
max-retries: 10, host-timeout: 0
min-rate: 0, max-rate: 0
mass_rdns: Using DNS server 18.104.22.168
Initiating Parallel DNS resolution of 1 host. at 08:42
mass_rdns: 0.38s 0/1 [#: 1, OK: 0, NX: 0, DR: 0, SF: 0, TR: 1]
Completed Parallel DNS resolution of 1 host. at 08:42, 0.36s elapsed
DNS resolution of 1 IPs took 0.38s. Mode: Async [#: 1, OK: 1, NX: 0, DR: 0, SF: 0, TR: 1, CN: 0]
Initiating SYN Stealth Scan at 08:42
Scanning 21Cust103.tnt1.kingston.on.da.uu.net (22.214.171.124) [1 port]
Packet capture filter (device eth0): dst host 192.168.1.103 and (icmp or ((tcp or udp) and (src host 126.96.36.199)))
Discovered open port 445/tcp on 188.8.131.52
Completed SYN Stealth Scan at 08:42, 0.75s elapsed (1 total ports)
Overall sending rates: 1.33 packets / s, 58.67 bytes / s.
NSE: Initiating script scanning.
NSE: Script scanning foo (184.108.40.206).
NSE: Initialized 1 rules
NSE: Matching rules.
NSE: Running scripts.
NSE: Runlevel: 2.000000
Initiating NSE at 08:42
Running 1 script threads:
NSE (1.438s): Starting smb-check-vulns against 220.127.116.11.
NSE: SMB: Extended login as \guest failed, but was given guest access (username may be wrong, or system may only allow guest)
NSE (6.001s): Finished smb-check-vulns against 18.104.22.168.
Completed NSE at 08:42, 4.58s elapsed
NSE: Script scanning completed.
Host foo (22.214.171.124) appears to be up … good.
Scanned at 2009-03-30 08:42:46 Pacific Daylight Time for 6s
Interesting ports on foo (126.96.36.199):
PORT STATE SERVICE REASON
445/tcp open microsoft-ds syn-ack
Host script results:
| MS08-067: NOT RUN
| Conficker: Likely INFECTED
|_ regsvc DoS: NOT RUN (add --script-args=unsafe=1 to run)
Final times for host: srtt: 360000 rttvar: 360000 to: 1800000
Read from C:\Program Files\Nmap: nmap-services.
Nmap done: 1 IP address (1 host up) scanned in 6.00 seconds
Raw packets sent: 1 (44B) | Rcvd: 1 (44B)
I’m sure there’s some way to use the NMAP GUI too — if someone wants to post a doc for that, I’ll link to it.
We may not know what the Conficker authors have in store for us on April 1st, but I doubt many network administrators want to find out. Maybe they don’t have to: I’ve been working with the Honeynet Project’s Tillmann Werner and Felix Leder, who have been digging into Conficker’s profile on the network. What we’ve found is pretty cool: Conficker actually changes what Windows looks like on the network, and this change can be detected remotely, anonymously, and very, very quickly. You can literally ask a server if it’s infected with Conficker, and it will tell you. Tillmann and Felix have their own proof of concept scanner, and with the help of Securosis’ Rich Mogull and the multivendor Conficker Working Group, enterprise-class scanners should already be out from Tenable (Nessus), McAfee/Foundstone, nmap, ncircle, and Qualys.
We figured this out on Friday, and got code put together for Monday. It’s been one heck of a weekend.
The technical details are not complicated — Conficker, in all its variants, makes NetpwPathCanonicalize() work quite a bit differently than either the unpatched or the patched MS08-067 version — but I’ll let Tillmann and Felix describe this in full in their “Know Your Enemy” paper, due out any day now with all sorts of interesting observations about this annoying piece of code. (We didn’t think it made sense to hold up the scanner while finishing up a few final edits on the paper.)
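To give a flavor of the detection logic without the MSRPC plumbing: the scanner pokes the remote canonicalization routine and classifies the host by which of three behaviors comes back. The response labels below are invented stand-ins for the real error codes; see Tillmann and Felix’s paper for the actual on-the-wire details.

```python
def classify(canonicalize):
    """Toy decision logic: probe how a host canonicalizes a crafted path.
    The three behaviors stand in for unpatched / patched / Conficker's
    in-memory patch, each of which answers the probe differently."""
    answer = canonicalize("\\..\\probe")
    return {
        "traversal-allowed": "unpatched (vulnerable to MS08-067)",
        "error-invalid-name": "patched",
        "error-conficker-filter": "likely INFECTED",
    }.get(answer, "unknown")

# Simulated hosts (a real scanner speaks MSRPC/SMB to get these answers):
print(classify(lambda p: "error-invalid-name"))       # patched
print(classify(lambda p: "error-conficker-filter"))   # likely INFECTED
```

The key property is that the probe is harmless and anonymous: no authentication, no exploitation, just a question the worm can’t help answering differently.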
So the general theme of my talks for the last year has been about the extraordinary damage that infrastructure attacks are capable of. I do believe it’s possible to bolster endpoint security — to achieve end-to-end trust — but it will take more than what we’re doing right now. It will take, somewhat to my surprise, DNSSEC.
It will also take treating infrastructure itself with more care, and more security due diligence, than we do today. Forget patching infrastructure. When my DNS bug hit, a remarkable number of sites found themselves struggling simply to identify the DNS servers they were dependent on. We can do better. We need better operational awareness of our infrastructure. And we need infrastructure, over time, to become a lot safer and easier to update. That means automatic update isn’t just for desktops anymore, that firmware patches need to have a much higher likelihood of not bricking the hardware, and possibly, that we need fewer builds with more testing for the new production environment that is increasingly under attack.
The reality is the bad guys are out there, and they’re learning. Just as attackers moved from servers to clients, some are moving from compromising a single client to compromising every client behind vulnerable infrastructure. Psyb0t, a worm that has been bouncing around since January, was recently found by DroneBL and reported on by Ryan Naraine. It targets home routers, and early estimates are that it has hit over 100K of them. Home routers are a wonderful, enabling technology for users, and even for security, they carried us through 2001-2004’s years of widespread server side vulnerabilities. So we shouldn’t be too down on them. But they do have vulnerabilities, and they are getting exposed.
This, of course, is something quite a few people have been talking about. CSRF — Cross Site Request Forgery — attacks have affected everyone from Linksys to Motorola to Siemens to Cisco. More problematically, the DNS Rebinding attacks discussed by myself, David Byrne, Dan Boneh/Adam Barth/Collin Jackson, and others in 2007 still affect home routers. And I’m not talking about Java and Flash sockets, like at this year’s CanSecWest talk. I’m talking about simply running rebinding against the browser itself, to make a remote website and a local router appear to be the same name, thus able to script against one another.
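The core of the rebinding trick is just a malicious authoritative DNS server. A toy sketch (illustration only, with example IPs): one hostname, TTL 0, answers alternating between the attacker’s public IP and the victim’s router, so a browser that trusts the *name* ends up scripting against both.

```python
import itertools

class RebindingResolver:
    """Toy attacker resolver: answers for one hostname alternate between
    the attacker's public IP and the victim's router, with TTL 0 so the
    browser keeps asking. Same-origin checks keyed on the name are fooled."""
    def __init__(self, public_ip, internal_ip):
        self.answers = itertools.cycle([public_ip, internal_ip])
    def resolve(self, name):
        return next(self.answers), 0   # (A record, TTL)

r = RebindingResolver("203.0.113.7", "192.168.1.1")
print([r.resolve("evil.example.com")[0] for _ in range(4)])
# ['203.0.113.7', '192.168.1.1', '203.0.113.7', '192.168.1.1']
```

First answer serves the attacker’s page; second answer points the same origin at the router, and the page’s JavaScript can now make requests to 192.168.1.1 as if it were the attacker’s own site.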
This should sound familiar, because this is what I discussed at RSA last year.
Yep, that still works, in all browsers. It has to. Moral of the video: please don’t have a default password on your home router — and, maybe, home routers can someday only allow default passwords to work for 10 minutes after power cycling.
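That “10 minutes after power cycling” idea is simple enough to sketch. This is purely hypothetical router logic (no vendor implements it this way, as far as I know): the factory default only works inside a grace window after boot, while a user-set password always works.

```python
import time

class RouterAuth:
    """Sketch of the proposed mitigation: the factory-default password is
    only accepted for 10 minutes after power-on; a user-set one always works."""
    GRACE = 600  # seconds

    def __init__(self, boot_time):
        self.boot_time = boot_time
        self.user_password = None

    def login(self, password, now=None):
        now = time.time() if now is None else now
        if self.user_password is not None:
            return password == self.user_password
        return password == "admin" and (now - self.boot_time) <= self.GRACE

router = RouterAuth(boot_time=0)
print(router.login("admin", now=60))     # True: within the grace window
print(router.login("admin", now=3600))   # False: window closed, a CSRF gets nothing
```

The point being that a drive-by CSRF or rebinding attack arrives at some random moment, almost never within minutes of a power cycle — so the default credential stops being a standing invitation.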
Why can’t this be fixed in the browser? Now that’s a fascinating question, which we’ll discuss in another blog post.