Wednesday, August 19, 2009

Website VA Vendor Comparison Chart

Update 09.03.2009: "Production-Safe Website Scanning Questionnaire" posted to add context to the chart and ensuing discussion. Also, new vendors have been added to the sheet.

Update 08.24.2009: Billy Hoffman (HP) and I have been having some email dialog about the production-safe heading. Clearly this is a contentious issue. Scanning coverage and depth are directly tied to the risk of production safety, and every vendor has a slightly different approach to how they address the concerns. Basically, I asked that if vendors make a production-safe claim, they provide some reasonable verbiage/explanation for how they do so -- no assumption of production safety will be made. Billy publicly posted how HP does so (complete with the highlights of our dialog) and got a check mark. Simple. Still, for the immediate future I'm going to eliminate the heading from the chart until I can draft up a decent set of criteria that will make things more clear. This of course will be open to public scrutiny. In the meantime, if any vendors want to post links about how they achieve "production-safe," they should feel free to do so.

As you can imagine, I spend a good portion of my time keeping a close watch on the movements of the website vulnerability assessment market. Part of that requires identifying the different players: who is really offering what (versus what they say they do), how they do it, how well, and for how much. Most of the time that is easier said than done, given the vague marketing literature to parse, and the work is never really "done." Every once in a while I post a chart listing the notable SaaS/Cloud/OnDemand/Product vendors and how some of their key features compare, not so much in degree, but at least in kind. If anything is missing or incorrect, which something probably is, please comment and I'll be happy to update.





24 comments:

Unknown said...

Nice chart. I'm wondering how you define "production safe?"

Short of a scanner that somehow does an automated backup of all server-side resources, does an audit, and then auto-restores, I don't see how anyone can guarantee a scan is 100% safe. It seems to me that "production safe" is pretty gray rather than a clear-cut "this is safe and this is not." Do you disagree?


If you do agree, and "production safe" is just shades of gray, I would say most dynamic scanners can be production safe. You can configure most dynamic scanners to not submit forms or make POSTs. That is a degree of "production safe" at the expense of coverage.
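
To make that trade-off concrete, here is a minimal sketch of what a conservative scan profile might look like. The field names are hypothetical, not any particular scanner's configuration format:

    # Hypothetical scan profiles illustrating the safety-versus-coverage trade-off.
    from dataclasses import dataclass

    @dataclass
    class ScanProfile:
        submit_forms: bool            # form submissions exercise server-side writes
        follow_links: bool            # GET-only crawling is usually lower risk
        active_payloads: bool         # executable SQLi/XSS strings vs. benign probes
        max_concurrent_requests: int  # load placed on the target application

    # Safer but shallower: crawl and fingerprint only, never submit forms.
    production_conservative = ScanProfile(
        submit_forms=False,
        follow_links=True,
        active_payloads=False,
        max_concurrent_requests=2,
    )

    # Deeper coverage, higher risk of disrupting a live application.
    full_coverage = ScanProfile(
        submit_forms=True,
        follow_links=True,
        active_payloads=True,
        max_concurrent_requests=20,
    )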

Thoughts?

Jeremiah Grossman said...

@Billy, thank you, and yes the headings, including "Production-safe," probably need additional clarity. Of course no one can guarantee being production-safe, but there are several things that can be done to reduce the possibility of disruption in the vast majority of cases (shades of gray).

Adjusting scan speed, limiting simultaneous threads, and ensuring the tests themselves do not have active payloads are the most common. Also very important is having a person mark forms as safe for testing. I'd argue though that these steps are a function of people/process and not a feature of the scanner itself. Hence, no check mark.

If the offering does provide that level of configuration as standard, then I'd say it can lay claim to being reasonably production-safe (with a check mark).

Hope that explanation clears things up.

Ron Gula said...

Hey there,

Great chart!

We've added a lot of web application testing functions to Nessus, as well as direct SQL auditing of databases and web configuration auditing of web servers. I'd love to see Nessus included on a chart like this.

A PDF of how to do this sort of testing and what sort of tests are supported is located here:

http://www.nessus.org/whitepapers/NessusWebAppTesting.pdf

Jeremiah Grossman said...

@Ron, right you are, I should have added Nessus. After going through all the literature, I've updated the chart.

Anonymous said...

Aren't you missing VxClass / BinNavi from zynamics.com?

Zacharias said...

Hello,

A quick question regarding the SaaS services and the "Business Logic Flaws in Custom Web Applications" section.

I would assume that all - if not most - of the SaaS offerings have a person behind a keyboard at some point, the only requirement for testing business logic flaws to date. How come only WhiteHat Sentinel and Cenzic ClickToSecure have a check mark in this category? Is it something that is explicitly there - and missing from the others - in their offerings? Is there some other rationale behind it?

Disclaimer: I haven't read any of the "players'" service offerings.

Best Regards,
./Z

Jeremiah Grossman said...

@Zacharias, that heading does deserve more clarification. Billy Hoffman and I have been discussing behind the scenes exactly how to do that fairly.

"I would assume that all - if not most - of the SaaS offerings have a person behind a keyboard at some point, the only requirement for testing business logic flaws to date."

This is not the case. Some SaaS providers offer it, some don't, and some charge extra for the service. It is very confusing and hard to find out.

To keep production scans safe, at a minimum you'd want a person adjusting scan speed, configuring simultaneous threads, and ensuring the tests themselves do not have executable (XSS, SQLi, etc.) payloads. Also very important is having a person mark forms/links as safe for testing.

To the best of my knowledge/research, WhiteHat Sentinel and ClickToSecure provide this as standard, while the others do not. Hence the differing check marks. Should the vendors like to correct me and describe how they ensure production-safety... I'm all ears.

Hope this clarifies.
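
For illustration only, here is a rough sketch of those safeguards in code; the names and structure are entirely hypothetical and do not describe any vendor's implementation:

    # Hypothetical sketch: throttled scanning with benign payloads and a
    # human-approved form allowlist. Not any real scanner's API.
    import time

    class SafeScanPolicy:
        def __init__(self, requests_per_second, max_threads, form_allowlist):
            self.delay = 1.0 / requests_per_second  # throttle scan speed
            self.max_threads = max_threads          # cap simultaneous connections
            self.form_allowlist = form_allowlist    # forms a person marked safe to test

        def may_test_form(self, form_id):
            # Only submit forms that a human has explicitly approved.
            return form_id in self.form_allowlist

    def benign_probe_for(test_name):
        # Detection strings that do not execute (no live XSS/SQLi payloads).
        return f"wh-probe-{test_name}"

    def plan_scan(targets, policy):
        planned = []
        for url, form_id, test_name in targets:
            if form_id and not policy.may_test_form(form_id):
                continue                                    # skip unapproved forms
            planned.append((url, benign_probe_for(test_name)))
            time.sleep(policy.delay)                        # rate-limit requests
        return planned  # a real scanner would send these requests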

lennykaufman said...

jg: integrated network + webapp scanners not being production safe? we have our benefits and challenges, but you misrepresented that one. i'll look for an edit to your table.

i'd also like to see network scanning and db scanning pulled into separate columns, but we can discuss that in another thread.

Ken J said...

Where's Burp Scanner? I have used five of those listed in the past, and none of them were as good as Burp.

Jon Zucker said...

Hey Jeremiah,

Good overview of the Web App scanner offerings out there.

Two small points about Cenzic Hailstorm:

1. The product tests production apps using the VMware integration.

2. Hailstorm has a number of policies (we call them SmartAttacks) that detect business/application logic exploits.

thx

Jeremiah Grossman said...

@lennykaufman, please see my comment to Zacharias for how I'm currently viewing "production-safe." Would welcome the feedback. Also, if there is an integrated network + webapp scanner you are aware of that is production-safe... please let me know why you think so. Even better if you can point me to where the vendor makes the claim.

@Ken J, oh yes BURP! Dang, major oversight. Would someone mind helping me fill out the appropriate boxes?

@Jon Zucker, eh? #1 - is that actually scanning production then? Not sure I completely understand how it is being set up.

#2 - To earn a check mark for business logic flaw testing you have to assure/claim a solid level of comprehensiveness. Saying Hailstorm finds/checks for just a handful of bizlogic flaws, and that this is enough to earn a check mark on those grounds, would be very misleading. So, what is the claim?

PortSwigger said...

Hey Jer,

If you want to add Burp, I'd suggest:

- Dynamic
- Product
- Biz logic: no
- Web vulns: yes
- Web server: no
- Network: no
- Custom auth support: yes
- HTML workflow: yes (user-driven)
- Form depth: indefinite
- Prod safe: is anything really?

Cheers

Jeremiah Grossman said...

@PortSwigger Thanks! Added.

Jon Zucker said...

No problem… A quick description of the VMware integration:

Using Hailstorm and VMware's Lab Manager or VirtualCenter, users can initiate a clone of their production environment and then have Hailstorm assess that cloned version of their production app.

For more details, go here:
http://www.cenzic.com/technology/testing-apps/
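
As a rough sketch of that clone-then-scan sequence, using purely hypothetical helper objects (these are not the actual VMware Lab Manager or Hailstorm APIs):

    # Illustrative outline only; none of these method names correspond to
    # real VMware or Hailstorm calls.
    def clone_then_scan(production_app, lab_manager, scanner):
        clone = lab_manager.clone_environment(production_app)  # snapshot of production
        try:
            clone.start()
            # Deep, higher-risk tests run against the clone, never the live site.
            findings = scanner.assess(clone.base_url)
            return findings  # findings still map back to the same application code
        finally:
            lab_manager.destroy(clone)  # tear the clone down when the scan finishes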

Jeremiah Grossman said...

@Jon Zucker, by that explanation one could only conclude that Hailstorm is NOT production-safe. Otherwise, there would be no need to "clone" production into a VM'ed environment to scan.

lennykaufman said...

@jg: to address your comments about the integrated scanning vendors, I will reference Rapid7 specifically (since I work there). Scan configuration capabilities and templates are standard shipping features of NeXpose. I understand your criteria and how they are particularly tied to your company's approach, but NeXpose scans thousands of web applications in conjunction with enterprise-wide infrastructure every single day, on internal networks and in external scans including PCI ASV activities. These scans do not affect the performance or availability of *production* customer systems. If they did, we wouldn't be in business, and NeXpose is not alone in that. I used to work for another top-tier scanning vendor, so I'm not blindly advocating my company's solution here.

The network scanners were designed to scan *production* systems, and those of us considered top-tier vendors scan millions of IPs every single day. We have not decided to abandon our core value proposition of being production safe for a subset of the technologies that we scan. If you feel that your product+service offers an additional assurance of safety for production systems, that's cool ... no issue with you having that opinion. If you feel that the threshold establishing "production safe" lies in criteria tied to your business model rather than any demonstrable impact to the performance or availability of thousands of web applications running on *production* systems, that is something I feel I have to call you on.

You will not see an integrated scanning vendor make an explicit claim about being production safe, because that would be akin to a bottled water company making an explicit claim about being safe for human consumption in response to Gerber claiming that only liquid administered from a bottle can truly be considered safe for human consumption. Top-tier scanning vendors will not take this bait, although I don't blame you for throwing it in the water. :)

"Production safe" is ultimately tied to "risk to production systems", which can only be measured by impact and likelihood. We can debate impact of payload until the end of time, but the results of thousands of daily scans against production systems cannot be ignored in an accurate assessment of likelihood. If your theory is that integrated scanners represent a real likelihood of impact to production systems, thousands of daily production scans refute your theory.

If you're positioning your solution as the Volvo of web app scanning, that's a good business approach. Lots of people like Volvos and they're quality vehicles. It is misleading, however, to suggest that Porsche, BMW, and Mercedes are somehow not safe vehicles as a result.

I would recommend calling out specific columns for "active payloads" and "scan tuning capabilities" if that is what you're really talking about and save the "production safe" column for distinguishing the rest of us from systems that have demonstrated a propensity for knocking over boxes, apps, or services. Otherwise it diminishes the validity of otherwise excellent industry/solution comparison research.

Jeremiah Grossman said...

@lennykaufman Your sensitivity to the lack of a production-safe label is understandable. Remember, the chart is a comparison of vendor functionality claims combined with some reasonable explanation for how each is achieved. To make your case, you've basically said "trust me": Rapid7 should be assumed production-safe because it is still in business. Security experts may trust, but they also verify. Assumption is not good enough to earn a check mark, which is what makes the chart of value.

To use your analogies, automobile manufacturers must publish safety ratings. The food industry has standards for preparation and/or mandated ingredient lists. Both industries have government oversight (FDA / DOT). So again, we don't assume blind trust. While the VA industry has no formal safety standards or governmental involvement, vendors should at least be somewhat transparent in how they go about scanning production systems without causing harm. IMHO anyway, and you may not agree.

What we CAN assume is that the more thoroughly a production system is scanned, the more risk of disruption is assumed, because more (potentially dangerous) functionality is exercised. This much is well-known and not tied to our business model; we just happen to address the concerns in our own way. You are being asked to describe no more than that. So your comments lead me to conclude one of the following.

#1 Claims about the history of production safety are untrue. But I'm not inclined to believe you are in the habit of going around spreading falsehoods.

#2 Rapid7's testing depth and comprehensiveness is very low. This would not be uncommon among integrated network scanning vendors designed to checkbox PCI-ASV activities.

#3 Rapid7 actually does have some secret sauce to somehow automatically and comprehensively process multi-step form workflows, detect out-of-band links, uncover inter-website dependencies/relationships, safely deploy executable payloads, scan while maintaining login state, etc. But since you won't speak to that openly, you can kind of see where this leaves us.

Rodrigo Sp0oKeR Montoro said...

Hi Jeremiah

Why not N-Stalker? =)

http://www.nstalker.com

Nice work.

Thiago Zaninotti said...

Hi Jeremiah,

You have also missed N-Stalker in your comparison chart.

Our latest version (2009) will do dynamic analysis and will cover the OWASP Top 10. As a tool, we probably fall into "depth is configured by user."

I have seen some of your research that does not include N-Stalker -- we have been doing Web security assessments since 2000. If you need more information, let me know (thiago nstalker com).

Jeremiah Grossman said...

@Rodrigo just didn't come to mind, but glad you pointed it out.

@Thiago can you help me fill in the details so I can plug them into the chart just like the rest? Thanks.

Unknown said...

Sure Jeremiah... Do you want to send me the details? (thiago at nstalker)

Jeremiah Grossman said...

@tmzani, just post them in the comments here like they'd show up in the chart. PortSwigger did that above for reference.

Thiago Zaninotti said...

@jeremiah

N-Stalker

- Dynamic
- Product
- Biz logic: user customization required
- Web vulns: yes
- Web server: yes
- Network: no
- Custom auth support: yes
- HTML workflow: yes
- Form depth: requires configuration

SaaS Services said...

Thanks for sharing a nice article on the Website VA Vendor Comparison Chart. Nowadays SaaS demand is increasing, and hopefully this will improve work efficiency in the IT industry.