Wednesday, May 14, 2008

Does secure software really matter?

If you ask the average expert what organizations should do about Web security, you'll almost universally hear what's become like a religious commandment: "Thou shalt add security as part of the application from the beginning. Blessed are those who develop secure code." Amen. I am a loyal follower of the security-in-the-SDLC church, and I'll humbly try to ensure my own code practices what I preach others should do. The problem is that code security by itself will NOT deliver us unto the pearly gates of Web security that many people wish for. There are other issues at play.

As an information security professional, my responsibility is helping organizations mitigate the risk of their website being compromised. If the process requires rewriting some insecure code, great, let's do it. That responsibility also means being open to solutions such as Web application firewalls, configuration hardening, patching, system decommissioning, obscurity, a lucky rabbit's foot, etc. Anything and everything should be used to our advantage, because the odds are stacked in the bad guys' favor. Lest we forget, the bad guys need only exploit a single weakness.

At WhiteHat we assist the effort by rapidly identifying Web application vulnerabilities and helping to get them fixed before attackers exploit them. We also invest significant R&D in analyzing website vulnerability data, matching it up to publicized incidents, measuring the benefits of various security strategies, and ascertaining which best practices provide the most bang for the buck in a given situation. Software security proves to be one of those things that's difficult to measure; however, there are a few things we do know for sure about it.

Important as it is, the SDLC process can't always take into consideration unknown attack techniques, current techniques we don't fully appreciate and therefore ignore, or the massive amount of old insecure code we already depend upon. Think 165 million websites, with mountains of new code being piled on top all the time. How do we defend our code against attacks that don't yet exist? And once a new technique is disclosed, it's obvious we can't instantaneously update all the world's Web-based code (far, far from it). As an industry we fail to recognize these SDLC limitations, as a result don't prepare for them, and inevitably pay a heavy price. Sin of omission.

Only a short time ago we didn't know that integer and heap overflows were exploitable and something to worry about. Code inspected and declared clean was all of a sudden vulnerable, even though not a single line had changed. The same happened in webappsec with Cross-Site Scripting (XSS), ignored for years until the bad guys loudly demonstrated its potential. The same is happening with Cross-Site Request Forgery (CSRF), HTTP Response Splitting, and hundreds of other attack variants. Now the vultures are circling null pointer attacks. Secure code is only secure, if there is such a thing, for a period of time impossible to predict. We can't future-proof our code, and I'll guarantee new attack techniques are on the way, with existing ones often becoming ever more powerful.
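To make that concrete, here's a minimal sketch (Python used purely for illustration; the function and parameter names are hypothetical) of the kind of code that passed review for years before reflected XSS was widely understood:

    import html

    def greeting_page_unsafe(name):
        # Considered "clean" before XSS was understood: user-supplied
        # input flows straight into the HTML response.
        return "<html><body>Hello, " + name + "!</body></html>"

    def greeting_page_safe(name):
        # The post-XSS fix: encode user input for the HTML context.
        return "<html><body>Hello, " + html.escape(name) + "!</body></html>"

Not a single line of the unsafe version changed when the attack technique was disclosed; only our understanding of what an attacker could do with it.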

On the horizon are clever and evilly lucrative uses for timing attacks, passive intelligence gathering, application DoS, CSRF, and several other rarely explored examples I plan to present at Black Hat USA (if accepted). And that's not to mention vulnerabilities that have nothing at all to do with the code: crossdomain.xml, Predictable Resource Location, Abuse of Functionality, and a dozen other issues. Lately I've also been noticing in our data a link between a website's security posture and when it was actually launched/built, one equally or more telling than the technology in use. Newer websites, developed after an attack class became mainstream, appear to stand a higher chance of being immune. If true, this would make a lot of sense to me, more so than developers suddenly having learned the virtue of input validation.

Secure coding best practices, even if implemented perfectly, mostly account only for the attack techniques we're currently aware of; once something new comes up, we have a big problem because of the scale of the Web. That's why XSS, SQL Injection, and CSRF are biting us in the ass so hard. For years we didn't fully understand what they could do, nor could we effectively get the message out to anyone who would care. Now significant portions of the Web are vulnerable, we just don't know exactly where, and even if we did, are we really going to go back line-by-line? We're in a spot where hundreds of thousands of pages are being infected with JavaScript malware. I don't expect this to end anytime soon; if anything it will get worse, because the bad guys have a lot of green field to work with.

My point is that we need to look at Web security in a new way and accept that code (and developers) will never be perfect, or even close to it. To compensate, we need solutions, including Web application firewalls (virtual patches), wrapped around our code to protect it. Some might call this approach a band-aid or a short-term solution. Whatever, I call it realistic. Just ask those who are actually responsible for securing a website and they'll tell you the same thing. We need nimble solutions/products/strategies that help us identify emerging threats, react to them faster, and adapt better to a constantly changing landscape. When a vulnerability or new attack class shows up, IT Security should have a fourth option for the business to consider, one that buys the developers time to fix the code (a rough sketch of option 4 follows the list):

1. Take the website off-line
2. Revert to older code (known to be secure)
3. Leave the known vulnerable code online
4. Vulnerability Mitigation (“virtual patch”)
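For the curious, here's roughly what option 4 amounts to. This is only a sketch (Python/WSGI chosen for illustration; a real deployment would use an actual WAF product, and the path, parameter, and rule below are hypothetical): a filter placed in front of the vulnerable application that rejects requests matching the known attack pattern until the code itself gets fixed.

    import re
    from urllib.parse import parse_qs

    # Hypothetical finding: SQL injection in the "id" parameter of
    # /account, which the application expects to be numeric.
    VULNERABLE_PATH = "/account"
    NUMERIC_ONLY = re.compile(r"^\d+$")

    def virtual_patch(app):
        # WSGI middleware that blocks exploit attempts against a known
        # vulnerability while the underlying code awaits a real fix.
        def guarded(environ, start_response):
            if environ.get("PATH_INFO") == VULNERABLE_PATH:
                params = parse_qs(environ.get("QUERY_STRING", ""))
                for value in params.get("id", []):
                    if not NUMERIC_ONLY.match(value):
                        start_response("403 Forbidden",
                                       [("Content-Type", "text/plain")])
                        return [b"Request blocked by virtual patch"]
            return app(environ, start_response)
        return guarded

The trade-off is exactly what the critics point out: the vulnerable code is still there, and the business is now depending on the rule being right.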

19 comments:

dre said...

lol. i've trolled enough today

Anonymous said...

Jeremiah, that was a long blog post and maybe I missed something in all those words, but your list at the end there seems to leave out the thing most people think of when they hear about a vulnerability in their code.

0. Fix the vulnerable code.

Jeremiah Grossman said...

Darn, there's some context that got lost in the ending, which I clearly need to rework if that question arises.

What the list is trying to articulate are the options available to IT Security between the point in time when a vulnerability is discovered and the organization fixes the code. In most instances IT Security themselves can't fix the code so that option is absent for them.

Does this make more sense?

dre said...

@ Jeremiah:

Sorry to chime in here so late.

Can't IT Security hire/retain an external organization to help fix the vulnerable code?

While I was at eBay (and while you were at Yahoo), I specifically recall the moment of realization that we couldn't fix our vulnerable code. We were looking for options. The first option we chose was to hire a third-party security review and consulting company to help us figure out what to do. They were extremely helpful.

This was around 2001 (when XSS was just hitting the scene and SQLi had been around for 3 years), but today there are many more third-party options available. For example, there is "Hackers for Charity"

http://ihackcharities.org

which will provide these kinds of services to those in need -- potentially for free.

"software security proves to be one of those things that's difficult to measure"

Microsoft and others appear to be doing fine with measurements and showing improvement with the Microsoft SDL and the Trustworthy Computing Initiative (Bill Gates' Memo). They do not stand alone.

What is it about software security that makes it so difficult to measure? Find a bug; fix a bug. This doesn't have to be complex. Developers and quality engineers have been doing this for 30+ years.
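Finding and fixing really can be that mechanical. As a minimal sketch (Python with sqlite3; the table and column names are made up), here's what fixing a typical SQLi bug at the source looks like:

    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable: attacker-controlled input is concatenated
        # directly into the SQL string.
        cursor = conn.execute(
            "SELECT id, email FROM users WHERE username = '" + username + "'")
        return cursor.fetchone()

    def find_user_safe(conn, username):
        # Fixed at the source: the driver binds the value, so input
        # can never be interpreted as SQL.
        cursor = conn.execute(
            "SELECT id, email FROM users WHERE username = ?", (username,))
        return cursor.fetchone()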

If the application is vulnerable and the fix must come in the code, then why bother messing around with web application firewalls or other misplaced options? Why not solve the problem at the source (and end this comment on a happy note with a pun)?

kuza55 said...

I think at some point you have to go back to the code no matter what. Take CSRF: you *could* use a WAF to patch it (e.g. OWASP's CSRFGuard), but at some point the performance hit becomes unworkable, or it becomes more expensive to run the WAFs than to get some devs to fix the code. So if you can fix the code (i.e. you have developers who aren't already buried under piles of vulnerabilities), then IMO it is almost always better to fix the code than to write a band-aid, because otherwise you have a dependency on the band-aid rather than safe code (see the token sketch at the end of this comment).

Having said that, I think your WAF solution is great since you can get things done quickly, and it works for apps whose code you cannot change, especially for sysadmins who want to secure their websites but can't force developers to do anything. So I'm not saying it's dead, I just think that fixing the code is still the best option. Even if your devs could be developing more software that is useful to the business instead of fixing issues, unless they get slapped about in a code/site review they'll keep writing bad code, which leaves you with a bigger problem.
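For reference, the kind of check that ultimately has to live in the app, whether a tool generates it or a dev writes it, looks something like this synchronizer-token sketch (Python; purely illustrative, not CSRFGuard's actual code):

    import hashlib
    import hmac
    import secrets

    SECRET_KEY = secrets.token_bytes(32)  # per-application secret (illustrative)

    def issue_token(session_id):
        # Bind the anti-CSRF token to the user's session so a forged
        # cross-site request can't supply a valid one.
        return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

    def verify_token(session_id, submitted):
        # Constant-time comparison of the token echoed back in the form
        # against the one derived from the session.
        return hmac.compare_digest(issue_token(session_id), submitted)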

Miguel Lourenco said...

Jeremiah,

SDLC processes can and do take into account unknown attack techniques. If you design your application securely, taking into account Saltzer and Schroeder's design principles, you can greatly limit or avoid the damage that unknown attack types might pose to your application. Secure coding best practices, even if implemented perfectly, can only account for attack techniques we're aware of. However, we should be aiming for correct and secure code, not just secure-at-the-moment code. Sure, we didn't know heap overflows were exploitable, but we knew that having an application crash because its heap got corrupted was a symptom of incorrect, unreliable code.

That said, I do agree with you that we won't have perfect code any time soon but we should continue to strive for better software and for technologies that are fundamentally more secure. A lot of the security problems we're stuck with are caused by the insecure design of the underlying technologies that applications depend on.

On a related note, check out Dan Bernstein's "Invulnerable software" presentation and related paper.

Arshan Dabirsiaghi said...

Developers can stop the OWASP Top 10 - if they do that, they're surely outrunning their friend (maybe not the bear).

Anonymous said...

Jeremiah - We need a Web Application Security Professional Certification exam. Do you foresee such a cert becoming available? I'm a web developer looking to get into the webappsec space.

Anonymous said...

Web application firewalls, while being a band-aid as you've stated, provide the flexibility to at least buy the developers some time to review and possibly rewrite their code to better withstand more recent patterns of attack. I also believe that "code security by itself will not deliver us unto the pearly gates of Web security that many people wish for" is a spot-on statement. There will never be 100% security in any application, for various reasons. I've said it many times before, but it's truly all about layering your security to prevent as many issues as possible, while remaining realistic enough to recognize that it would be virtually impossible to defend oneself (or one's application) from all forms of attack, both past and future.

Jeremiah Grossman said...

@kuza55, I think it's difficult to generalize exactly what solution is best in every instance. That decision will differ between organizations, and even between individual sites and vulnerabilities for that matter. Personally I try to provide IT Security with as many options as possible and an understanding of the pros/cons (like performance). Eventually the business is going to have to decide what to do (or not do) based upon the information at hand. Whether an organization actually goes and fixes its code may not be any of my business.

@Miguel Lourenco, I think we're probably 99% on the same page. I should probably also reword the "can't take into consideration" a bit, as technically it can, just not all the time, and in unknown proportions. For example, I don't recall reading any web development book or framework documentation from way back when that said to be careful of CSRF. The same can be said of crossdomain.xml and all the rest of these client-side issues websites are trying to compensate for.

That's really what I'm trying to figure out how to say. You could be doing everything "right", and then later something no one knew of takes you by surprise. On an individual scale that might not be so bad, but on an Internet-wide scale it makes for a monster to tackle. Either way, we should do everything we can to make sure our new code going in is as secure as possible. Over time this will pay dividends.

Thanks for the references. Going to do some study.

@Arshan, LOL. True indeed. :) The problem with the OWASP Top Ten, though, is that it exists purely because the problems it outlines got so bad that people finally noticed. It's not a proactive thing. CSRF only recently appeared in the 2007 version. Now we have to go back and refactor.

Jeremiah Grossman said...

@Ron, SANS and WASC are actually actively working on it. I wouldn't be surprised if something is offered this year. Presently not much exists, which I find to be a problem: hiring managers really have no baseline to go by, as they do in other areas.

Jeremiah Grossman said...

@Andrew - hallelujah brother! ;)

Arshan Dabirsiaghi said...

@Andrew, WAFs certainly could be used to "give developers time to fix the bug". Unfortunately, many companies look at it as an either/or situation, e.g., have a WAF or review the code (like in PCI).

Nobody's going to disagree that a WAF can really help an organization in many ways, it's just that they don't think in the same context we do.

Jeremiah Grossman said...

@Arshan, in the end you may be right, but it's hard to say right now whether that's the general view. Very few of the WAFs deployed (still a small number) are set up in blocking mode. Hopefully I'll help change that, but it'll be interesting to see what happens after that. I'll be tracking it.

dre said...

I've put some more thought into this, and I think it's socially irresponsible for Jeremiah to recommend WAFs as even a temporary solution to web application vulnerabilities (even when combined with an automated solution such as Sentinel's integration with F5).

When web application exploits become weaponized to the level of crimeware that some of the recent SQLi, file injection, and client-side exploits have been lately, we are in a situation where there are those that
1) have WAFs or a WAF plan
2) have nothing

For those where a WAF or other "virtual patch" is impossible, financially difficult, or otherwise infeasible, this becomes a serious problem. If they are using COTS web application software, third-party components, validators/encoders out of a framework, or outsourcing custom code to an external dev shop, this software is now integrated with their data, often customer data that has a trust relationship with the web application provider.

In other words, without software assurance there is no way to universally improve our situation. Some will survive, but many will fall. We need to cater to the lowest common denominator; we need to repair the "broken window" problem of the Internet (see: GEEKONOMICS).

It's also obvious to many that Jeremiah's approach only helps WhiteHatSec, F5, et al. The purpose of this marketing promotional material (i.e. "Does secure software really matter?") thinly veiled in blog posts by what used to be a renowned web application security expert is really an attempt to build momentum toward WhiteHatSec being integrated into F5 as a line-item feature, so that WhiteHatSec can be sold and its founders can have their pockets lined with cash. The most likely outcome of this situation is that we'll never hear from these experts again, and all the work they've done becomes lost to those who most need it.

If you look at the kinds of threats we face in 2008, traditional approaches to combating risk are a zero-sum game. The WAF is a traditional approach -- it's a solution looking for a problem.

The hard part about the Secure SDLC is that it's mostly about adding process and people -- something that traditional IT Security is not used to doing. However, as proven by Visible Ops Security and laid out in the tenets of The New School of Information Security: products fail without people and process behind them. Threats will go after new targets, assets that are not easily guarded and protected.

Blocking-mode WAFs work best when you've discovered that your applications are under attack with specific attacks that a WAF would block. This is typically only detectable with a WAF or APIDS. By the time you could detect the issues and respond to them appropriately, it's more likely that the code could have been reviewed and fixed -- and a new Secure SDLC process added to find and fix future (even new) kinds of vulnerabilities.

Bryan Sullivan said...

@andre, I'm not sure I agree that you could fix the inherent code problem in the same time it would take you to configure a WAF. I've been pretty vocal with my criticism of WAFs in the past, but I'm changing my position: I think WAFs can be valuable, especially as a first-response tool, but they must never be used as a replacement for secure code.

My biggest fear with WAFs is that they'll get put up as first-response tools, or as defense-in-depth measures, and then the actual underlying problems will never get fixed. "Why should we spend all this money to change the code? We're already secure!"

I've written a little more on this subject on my blog this morning.

Stephen Craig Evans said...

@dre:

It's not a 0% or 100% choice for many (and software security folks cannot yet dictate to companies what they have to do).

In Asia Pacific, most webappsec is at the nascent stage of pentesting; mostly it's being done because somebody somewhere from above is forcing the IT department to pentest the web apps they are hosting (I see this often in government agencies).

Say you're the IT department, also in charge of security, and a pentester finds vulnerabilities. You can't change the code in the foreseeable future because of one or more of the following: (1) the 3rd-party app company doesn't exist or doesn't have to support the app; (2) getting budget to modify the source code is a lengthy process; (3) modifying, testing, and certifying the code for deployment is a lengthy process; and/or (4) the web applications are handled by a completely different department (or agency) and there's lots of political friction between your department/agency and theirs.

But, as IT and in charge of the network, you can deploy a WAF, hire a consultant to write patches for some or most of the vulnerabilities, re-test, and CYA and keep your job until the app(s) get fixed.

An ideal solution? No.

A palatable solution? Barely.

Will the web site get hacked eventually? Maybe/maybe not.

Will the IT manager keep his job because he got the pentest done and he mitigated the vulnerabilities in the report to the best of his ability? Yes.

Arshan Dabirsiaghi said...

I was also a little disappointed with the title, and the content was less subtly WAF-driven than it could have been.

Anonymous said...

Great post