Wednesday, January 14, 2009

Builders, Breakers, and Malicious Hackers

There is a new meme in Web security that states we should focus the bulk of our attention on building secure software instead of breaking it. As Jeff Williams of Aspect Security says, we are not going to “hack our way secure.” For example, repeatedly crash-testing the same automobile without taking further action would not directly result in saving lives. However, what crashing cars and breaking software does provide is proof. Proof that seat belts and air bags are essential, just as integrating security controls within the software development life-cycle is. Without this proof, there would be no action because the costs could not be justified. At the same time, breaking things is much better at capturing headlines because its opposite, success in security, causes nothing unexpected to happen. “Nothing unexpected” obviously does not make for very interesting stories. In my view the Builder vs. Breaker meme isn’t necessarily wrong, only an oversimplification.

First, it's difficult, if not impossible, to future-proof software against attack techniques that don't yet exist. Secondly, very few organizations can justify expending resources to mitigate possible attacks they don't respect. This was true for buffer overflows. Known for over 20 years (thanks to breakers), they remain an issue despite all the attention they received during the last ten. The same is true for Cross-Site Scripting (XSS), SQL Injection, and Cross-Site Request Forgery (CSRF). These Web attacks have technically been “known” since the late 90s, again thanks to breakers, but only a handful of researchers picked up on their potential early on. Back then few, including infosec insiders and software developers, truly understood how severe the repercussions would be down the road. All three issues, and many others lying dormant, are either unknown, unrespected, or unjustified to resolve -- ultimately leading to a lack of action. Fast forward ten years and one hundred million websites, and the Web security problems have become beyond pervasive, yet lay predominantly unexploited until fairly recently.

In 2005, XSS first commanded respect when a not-so-malicious hacker released the first major Web worm upon MySpace, infecting over one million user profiles in under 24 hours. Now XSS attacks carrying JavaScript malware payloads, including extremely convincing Phishing w/ Superbait scams, are routine. SQL Injection, despite dozens of white papers and notable compromises, didn't receive mainstream attention until late 2007, when malicious hackers used it to infect millions of websites with browser-based exploit code. What about CSRF, the sleeping giant that is every bit as pervasive as XSS? Well, we are still waiting and have no idea when CSRF will be taken seriously. Bank on the day, in the not-too-distant future, when widespread exploitation occurs. Let's also not forget about Clickjacking, made famous by two breakers (Robert Hansen and yours truly) in 2008. Technically “known” years prior, Clickjacking, despite all the media attention, probably caused few builders to alter their design decisions. They will wait for malicious hackers to force them to take protective measures, likely sometime after another 100 million new websites are built.

In my opinion, we have too few skilled breakers to cover all the mature technologies, let alone the new ones. It took the exceptional work of clever breaker Dan Kaminsky, and the media, to expose the dangerous flaws within DNS and stimulate a concerted effort toward resolution. Also recently, Alexander Sotirov and a team of researchers exposed why Certificate Authorities should not be using MD5. They took a sophisticated attack technique known years prior and turned it into a reality, and in doing so justified action now rather than later. The information security landscape is littered with such examples. Now emerging technologies such as Google Android and Apple iPhone applications, Web widgets on Facebook/MySpace/etc., Web browser add-ons, Flash, Silverlight, and so on are popping up. Even with all the security resources invested by these companies, what expert or vendor in their right mind is going to claim security perfection? The reality is that as long as software bugs and design flaws exist, this is how builders, breakers, and malicious hackers will interoperate.

Builders build software, which gives breakers something to break. Breakers break software, a defensive sanity-checking process, and provide insights into what attacks are theoretically possible. Notice I said possible, NOT probable. A slight, but extremely important distinction. This is why conferences like Black Hat and Defcon exist: to expose forward-thinking people to the most cutting-edge issues that could possibly be used in the future, even without a guarantee of later exploitation. Then at some point malicious hackers hack said software, making what was previously possible probable. This probability justifies action to mitigate the issues, both immediately and proactively. I'm not saying this is the right way, the best way, or that we can't do better. I'm saying this is how security of all kinds tends to work. Clearly, if everyone knew back then what we know now about XSS, SQL Injection, CSRF, DNS, and MD5, we might have done something sooner. Hindsight is always 20/20. As it's been said, "life can only be understood backward, but it must be lived forward."

7 comments:

Rafal said...

@Jeremiah:
I think we had this conversation at OWASP '08 in NYC, didn't we? I've been saying for quite some time that there are too many people focusing their energies on *only* breaking things, then moving on. A perfect example is Kaminsky's DNS issue. It's been known forever, true, and because there were no "cool exploits" it was a rare bird among our security luminaries who took notice enough to clamor for remediation (DNSSEC?)... then the breakers moved on to the next sexy thing, the vuln-of-the-day, and stopped driving DNS to be fixed. I realize there are too many technologies to try to effectively analyze and offer up fixes for, and it's even tougher to push and justify those fixes - but this is our cross.

As security folk, it is our *duty* to not only feed the yin/yang of build/break but to effectively balance that out and drive breakage to fix-age... and only THEN move on to the next.

Jeremiah Grossman said...

@Rafal, we probably did. :)

It's VERY difficult though for a person to specialize in one area of discipline (builder vs. breaker), let alone both. One plays the role of building architect and the other demolisher. An extremely limited few may play in both worlds. Offhand, I can't think of a single person in the industry who has successfully made a reputation for themselves in both fields.

And count on breakers, DanK included, to grow tired of trying to motivate folks to do the right thing. Their time and patience are limited just like everyone else's. Motivation to act is primarily derived from the activities of malicious hackers.

Kevin Riggins said...

@Jeremiah

Extending your metaphor regarding wrecking cars even further: they don't stop wrecking cars when it is determined that air bags are effective risk control devices. They continue to wreck them to ensure that the air bags are installed correctly, that the crumple zones crumple as expected, etc.

Just because we have found a flaw or deficient practice does not mean the flaw or practice has been corrected.

-Kevin

MikeA said...

Sorry Jeremiah, but I don't agree fully with...

"First, it's difficult, if not impossible, to future-proof software against attack techniques that don't yet exist."

I believe that it's pretty easy to protect against *most* attacks that we don't know about just yet - it's called validation and compartmentalization. Pretty much all of the issues you mentioned would have been prevented if the app/site/system had only limited itself to data that a) it expected and b) it needed to operate. The only problem here is that programmers are lazy/overworked/busy and don't restrict these things enough (myself included - I'm not going to say that I'm a saint - I've seen enough of my own bad code!)
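For readers unfamiliar with the approach, the allowlist-style validation described above can be sketched in a few lines. This is a minimal, purely illustrative example (the field names and patterns are hypothetical, not from any particular application): only data that is expected (matches a strict format) and needed (appears in the allowlist) ever reaches the rest of the program.

```python
import re

# Hypothetical allowlist: the only fields the application needs,
# each with a strict pattern describing the only data it expects.
ALLOWED_FIELDS = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,20}$"),  # alphanumeric handle
    "age":      re.compile(r"^\d{1,3}$"),             # small integer
}

def validate(form: dict) -> dict:
    """Return only expected, well-formed fields; reject anything else."""
    clean = {}
    for name, pattern in ALLOWED_FIELDS.items():
        value = form.get(name, "")
        if not pattern.fullmatch(value):
            raise ValueError(f"rejected field: {name}")
        clean[name] = value
    return clean  # unexpected fields are silently dropped

# A classic injection string such as "alice' OR '1'='1" can never match
# the username pattern, so it is rejected before reaching any SQL query
# or HTML output.
```

The point of the sketch is that the allowlist rejects whole classes of then-unknown payloads (SQL injection strings, script tags) as a side effect of only accepting what the application actually needs, which is MikeA's argument about future-proofing against *most* unknown attacks.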

The "big" issues are often design issues - Dan's DNS bug, for example, or the MD5 CA certs. These wouldn't (necessarily) have been caught with the above techniques, as they are conceptual problems where we haven't thought things through quite well enough (and I agree with you on that part).

Funny enough, I wrote about something like this for the MSFT BlueHat Blog just the other day :)

http://blogs.technet.com/bluehat/archive/2009/01/12/learning-by-our-mistakes.aspx

Jeremiah Grossman said...

@Kevin, well said.

@MikeA, there is a lot of wisdom in your words. I had a tough time with that particular line and have difficulty articulating exactly what I mean. New types of bugs, as defined by Gary McGraw, are indeed few and far between. Design issues are a whole other matter.

Perhaps it's like Kevin just said above: even when security controls are put in place, we still need to crash test them just the same.

The main point I guess I was trying to make was that malicious hackers are really the ones that motivate action. The breakers serve as sort of an early warning.

MikeA said...

I certainly agree that we have to crash test, Jeremiah - software is a human process, and we sometimes (perhaps it's more frequent than that ;p) get it wrong. Just like sampling products off a production line for quality, we have to do the same with any software system - ensure the machine is cranking out the quality we want, and then tune.

I'm really starting to get fed up in a way, though, that "hackers" (malicious or otherwise) prompt this "action" you talk about :) I'm not saying that there will ever be perfect software, or that I'll never write another bug, but we know by *far* how to deal with a wide range of vulnerabilities, both current and potentially future ones, and in the larger world we simply don't mitigate them.

"Something" is broken - it might be the people, or the process of software development, but my money is on the fact that we won't see any improvement unless we a) make the technology (the tools that build software) more secure by default, and b) link all these factors together and work on them as a whole.

Jeremiah Grossman said...

@Mike, I think the fundamental problem we are dealing with is a business problem. One of alignment of interests, and the subject of an article I'm currently writing... as a taste:

"Those in the best position to provide security are not necessarily those who suffer the losses, and those who suffer the losses are often incapable of doing much to protect themselves."