Friday, June 25, 2010

The Low Hanging Fruit scanner strategy can get you into trouble

Vulnerabilities identifiable in an automated fashion, such as with a scanner, can be loosely classified as “low-hanging fruit” (LHF) -- issues that are easy, fast, and likely for bad guys to uncover and exploit. Cross-Site Scripting, SQL Injection, Information Leakage, and so on are some of the most typical forms of website LHF. Some organizations approach their website vulnerability assessment (VA) by basing their programs around a scanner focused primarily on LHF. The belief is that by weeding out LHF vulnerabilities, break-ins become less likely as the organization’s security posture rises above the lowest common denominator. They realize they’re not going to get everything, but in return they expect this VA testing depth to be “better than nothing,” perhaps enough to meet PCI compliance requirements, and, very importantly, low-cost.

Unfortunately, things often don’t turn out that way. Due to shortcomings in Web application scanning technology, the LHF scanner strategy is highly unlikely to achieve the desired result. First of all, Web application scanners can help tremendously with locating LHF vulnerabilities, no question. However, scanners DO NOT identify all the LHF on a given website, not by a long shot. No doubt many of you have come across URLs laden with account_id parameters where all one needs to do is rotate the number up or down to access someone else’s bank/mail/cms account. What about admin=0 params begging for a true string? Money transfers with a negative value?

We could go on all day.
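
To make the account_id example concrete, here is a minimal sketch of how trivially that class of flaw is probed. Everything here is an assumption for illustration -- the URL template, the parameter name, and the "Account Balance" marker are made up, and a real assessment would only ever be run against accounts and systems you own or are authorized to test:

```python
# Hypothetical sketch of the "rotate the number up or down" probe
# described above. URL template, parameter name, and response marker
# are all illustrative assumptions, not a real target.

def candidate_urls(base, param, seen_id, window=3):
    """Given one observed id, generate the neighbouring ids an
    attacker would try first (skipping the id we already own)."""
    return [
        f"{base}?{param}={seen_id + offset}"
        for offset in range(-window, window + 1)
        if offset != 0
    ]

def looks_like_other_account(body, marker="Account Balance"):
    """Naive check: does a page fetched with OUR session appear to
    contain someone else's account data? The marker is an assumption;
    real checks diff responses across two controlled accounts."""
    return marker in body

# Logged in as account 1001, an attacker needs nothing but a browser:
# candidate_urls("https://example.com/statement", "account_id", 1001)
```

The point of the sketch is how little machinery is involved: no crawler, no injection engine, just arithmetic on a parameter -- which is exactly why a scanner tuned for injection-style LHF can sail right past it.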

The point is, these are not ninja-level AES crypto padding reverse-engineering attacks, nor are they edge cases. No scanner, no HTTP proxy, not even view-source is required to spot them. Just a Web browser. How much easier does it get!? Oh right, CSRF vulnerabilities. CSRF remains one of the most pervasive website vulnerabilities and also the easiest to find. That is, find by hand. Web application scanners have a REALLY hard time identifying CSRF issues (false-positive and false-negative city) without a lot of manual assistance.
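
A toy version of the heuristic a scanner is reduced to makes the false-positive/false-negative problem easy to see. This sketch (field-name patterns are assumptions) just flags POST forms carrying no hidden field that looks like an anti-CSRF token -- but a form can carry a token the server never validates, and a token-less form may not be state-changing at all, which is precisely the semantic judgment only a human can make:

```python
# Minimal sketch of a scanner-style CSRF heuristic: flag POST forms
# with no hidden input whose name resembles an anti-CSRF token.
# The name pattern is an assumption; presence of a token proves
# nothing about whether the server actually validates it.
from html.parser import HTMLParser
import re

TOKEN_PATTERN = re.compile(r"csrf|xsrf|token|nonce", re.I)

class FormScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_post_form = False
        self.has_token = False
        self.suspect_forms = 0  # POST forms with no token-like field

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and attrs.get("method", "get").lower() == "post":
            self.in_post_form = True
            self.has_token = False
        elif self.in_post_form and tag == "input":
            name = attrs.get("name") or ""
            if attrs.get("type") == "hidden" and TOKEN_PATTERN.search(name):
                self.has_token = True

    def handle_endtag(self, tag):
        if tag == "form" and self.in_post_form:
            if not self.has_token:
                self.suspect_forms += 1
            self.in_post_form = False

# s = FormScanner(); s.feed(page_html); s.suspect_forms
```

Pattern-matching on field names is about as far as automation gets here; deciding whether the flagged form actually changes state, and whether the token is enforced, is the manual assistance the post refers to.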

Secondly, the output of commercial and open source Web application scanners is routinely inconsistent from one product to the next. As can be seen in Larry Suto’s “Accuracy and Time Costs of Web Application Security Scanner [PDF]” study, products including Acunetix, AppScan, Hailstorm, etc. report different LHF vulnerabilities in type and degree -- even on the exact same website. A scanner may technically find more vulnerabilities than another, but this does not necessarily mean it found everything the others did. Also, it is not uncommon for the same scanner to produce varied results across successive scans of the same website. This lack of Web application scanner LHF comprehensiveness and consistency presents a hidden dilemma.
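
The "more findings is not a superset" point reduces to simple set arithmetic. A toy illustration, with hypothetical (URL, vulnerability-class) findings:

```python
# Toy illustration: scanner A reporting MORE findings than scanner B
# does not mean A found everything B did. Findings are hypothetical
# (url, vuln_class) pairs, not real scan output.
scan_a = {("/search", "XSS"), ("/login", "SQLi"), ("/profile", "XSS")}
scan_b = {("/search", "XSS"), ("/export", "SQLi")}

missed_by_a = scan_b - scan_a  # B found these, A did not
missed_by_b = scan_a - scan_b  # A found these, B did not

# len(scan_a) > len(scan_b), yet missed_by_a is still non-empty --
# whichever single scanner you standardize on, the other's leftovers
# remain on the site for an adversary to find.
```

That non-empty difference is the hidden dilemma in miniature: the adversary only needs the findings in whichever set you didn't run.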

Suppose your LHF strategy says run a scanner, find what you can, and fix whatever it finds. Great, but consider what happens if your daily LHF adversary runs a different scanner across your website than you did, which is entirely likely. Anyone may buy/download/pirate one of the many scanners available on the market. They might also do some manual hunting and pecking. Most importantly, though, the odds are favorable that they’ll uncover something your scanner missed. Scanners possess very different capabilities with respect to crawling, session-state management, types of injections, and vulnerability detection algorithms. Individual scans can be configured to run under different user accounts with varied privilege levels or instructed to fill forms in a slightly different way. All of this presents a slightly different attack surface to test, which results in missed vulnerabilities and general inconsistency.

While the LHF scanner strategy is in place, often none of this is readily apparent. That is, until someone from outside the organization, such as a business partner, customer, or security researcher, reports a missed vulnerability from a simple scan they ran. This makes it appear that you don’t test your websites well enough, which is at least slightly embarrassing, but usually nothing truly serious results. What really nails organizations in the end, and we’ve all likely seen it many times, is when a website is compromised and management/customers/media ask why. Saying “we are PCI compliant” is just asking for the rod. As is “we take security seriously, our top-of-the-line scanner didn’t find the vulnerability,” all while the website is defaced with profanity, infecting visitors with malware due to some mass SQL Injection payload, or being defrauded by a cyber criminal. Now the organization is sending out disclosure letters, figuring out who to blame or sue, and answering to an FTC investigation.

Maybe doing nothing was a better idea after all, because at least you didn’t pay good money to get hacked just the same. Anything worth doing, is worth doing right. Web application security included.


Dan said...

I hear you.

But I disagree that this is not "better than nothing".

Do scanners find everything? Nope.

Does running a scanner make you secure? Nope.

Do scanners really find the most dangerous flaws? Not usually.

But do they find flaws that should be fixed? Absolutely.

I don't know what your experience is, but mine is that there are still a lot of really basic XSS and SQLI issues out there.

Finding and fixing those, IMO, results in a bit more security. And maybe more importantly, finding and reporting these "LHF" type vulnerabilities can change developers and management's opinions about the need for more important things like training, SDL, white-box testing/analysis, etc.

I think there is a place for "better then nothing". Even if it is just to start the conversation and to refute the, "We've never been hacked so we must be secure" mindset.

Maybe the most important thing we can do is to clearly communicate the limitations of scanners and to make the connection that when an automated scanner can find numerous "LHF" flaws, there are probably bigger issues (i.e., where there is smoke there is fire - LHF are the smoke).

Jeremiah Grossman said...

@Dan, good point. Running simple scans and producing scary proof-of-concept exploits can wake up an otherwise complacent software development group or unbelieving management. At the same time, you run the risk of those same people basing their actions, whatever they may be, solely on counteracting the smoke test. Bad bad.

Have you seen a good approach to prevent this outcome?

Dan said...

>Have you seen a good approach to prevent this outcome?

No, sorry. I haven't, at least not universally.

I try to show more impact than just the normal alert boxes, etc. Sometimes this makes it click for some people.

Taking some extra time and tracing an individual flaw all the way back to requirements also sometimes helps.

A big part of the problem, IMO, might be the general consulting model, where you come in, do your testing, write and present a report, then leave. Reports are probably, and improperly, expected to be comprehensive, and our real value would probably be better realized after the report is delivered, not before, as we are organized now.

AnonSecurityGuy said...

I find it interesting to contrast this blog entry about web scanners with the previous blog entry about WAFs.

Apparently, WAFs are good: they're cheap and find low-hanging fruit, and that's good because many organizations have limited budgets, and hey, you gotta do the best you can with the budget you've got. However, apparently, web scanners are bad: yes, they're cheap and find low-hanging fruit, but anything worth doing is worth doing well -- i.e., anything worth doing is worth spending a lot on. If your budget is limited, tough noogies.

I'm not clear on how these two contrasting viewpoints are supposed to be reconciled. Was the contradiction intentional?

Jeremiah Grossman said...

@AnonSecurityGuy: great question, very glad you asked. If it's confusing to one person, that contrast is probably confusing to another. Furthermore, I'm certainly not beyond occasions of cognitive dissonance, but I don't think that's the case here.

Let me start by saying that I do not believe WAFs or scanners as technology classes are categorically "good," "bad," or somewhere in between. Perhaps I should have made that more clear in the posts. Both scanners and WAFs can be used in good and bad ways.

If the goal of a security program is to not get hacked, or reduce the likelihood and impact of when it may happen... then:

"Good" is when scanners are used as part of a vulnerability assessment program (+ process & people) that routinely measures relative "hackability" against threat-agents of a given skill-set. "Bad" is basing a VA program solely on scanner output, fixing whatever LHF it finds, and thinking the result will make the organization even 5 minutes more secure.

"Good" is when WAFs are used as part of a comprehensive operational website security program -- where incoming attacks can be monitored/blocked and some outstanding vulnerabilities can be virtually patched quickly while code fixes are developed. "Bad" is when a WAF is deployed without proper maintenance and configuration, again like the scanner, thinking this replaces the need for an SDL and makes you even 5 minutes more secure.

Hopefully that reconciles things a bit. Obviously these are imperfect technologies that attempt to address an extremely complex problem: Web application security. For my part: find your websites, value them, assign your tolerance for risk against attackers with a given skill-set, measure your vulnerability to them, and seek mitigation and remediation options. Do this often enough and the data for what really does make an organization more secure as part of the SDL becomes readily apparent.

Evan Oslick said...

Saying that "good" is a WAF/scanner in a full program (people + processes) should be obvious to anyone in the industry.

That includes requirements reviews (along with business understanding), architecture reviews, code reviews, manual/automated tests, and even WAFs (which, I have to admit, I am torn on valuing as more than a "virtual patch").

The value from scanners comes from being able to quickly identify something of value. Say you find a bunch of XSS or SQLi issues -- then your process needs to indicate: our validation routines need improvement.

The main issue that I have with WAFs is their reactive nature. If you aren't being attacked, then they show nothing and everything is deemed OK. They don't have the opportunity to drive improvement.

When a corporation has a small contingent of App Security guys and needs justification for spending the money, sometimes it's better to use the "LHF" to get there.

I've been in organizations that have used both mechanisms (one had a scanner, one had a WAF). They both have their downsides, but given my preference, I'd take the scanner.

I understand the risk of not getting a patch or being alerted -- but given that WAFs can be bypassed as well, at least I hope the scanner can give a better feel for data moving around the site.

kingthorin said...

"Anything worth doing, is worth doing right. Web application security included."

Agreed. The question is, what's the real answer? Obviously doing nothing isn't really an answer or solution to the problem.

Even with manual testing, if we test an app you may find something I don't, and Dan might find something neither of us finds. Either of you may try attacks or bypasses I wouldn't because of our different backgrounds/experiences/knowledge.

While companies obviously want to pay for experts, the FACT is there is almost always someone who has different experiences, more time, or greater incentive to find a vulnerability or exploit.
(This shouldn't keep companies from hiring consultants, but it needs to be accepted as a FACT within the industry.)

In the end, the solution has many parts; including, but likely not limited to:
- Use multiple tools.
- Do manual testing.
- Test and re-test regularly.
- Pay for insurance (because even doing the previous 3 things there will always be a "what if").