Monday, August 31, 2009

Production-Safe Website Scanning Questionnaire

Hackers target websites. Why? Because that’s where the money is. Organizations should at least know as much as the bad guys do, and that means hacking themselves before someone less desirable does it first. It also means scanning “production” websites, which carries a certain risk. Scanners can and will disrupt normal website operations unless proper precautions are taken. In theory, production systems are identical to preproduction, which would seem to alleviate the need to test them. In theory, there is no difference between theory and practice. But, in practice, there is.

Even for websites with the most stringent change control processes, experience shows that identical production and preproduction deployments are extremely rare. It is incredibly common to find hidden files and directories containing source code and logs, mismatched security configurations, infrastructure differences, and more, each impacting the website’s true security posture. Also, for those websites required to maintain PCI-DSS 6.6 compliance, the standard mandates scanning publicly facing websites. If scanning production websites for vulnerabilities is important to you, and it should be, then production-safety is likely equally important.

The more thorough a website vulnerability scan is, the greater the risk of disruption due to exercising potentially sensitive functionality. For example, an authenticated scanner is capable of testing more areas of an application’s attack surface than one that is unauthenticated. The same is true of a scanner custom-configured to process multi-form work flows (e.g., an online shopping cart). Furthermore, scanners testing for most of the ~26 known website vulnerability classes similarly increase the odds of causing damage. “Damage” may be as minor as flooding customer support with error email all the way up to a denial-of-service condition.

Clearly production-safe scanning is a legitimate concern. Below is a questionnaire about what organizations ought to know, so they can better understand the risks of production scanning and mitigate them accordingly. Please feel free to use this document to probe vendors about how their offerings ensure production-safe website scanning while achieving required levels of testing coverage and depth. As a guide, I’ve supplied the answers that apply to WhiteHat Sentinels beneath each question. Vendors may choose to do the same in the comments below, on their sites and blogs, or of course privately by customer request.

1) How is the scanner tuned, manually or dynamically, so as not to exhaust website resources, which could lead to a Denial-of-Service?
Scanners must share resources with website visitors. Open connections, bandwidth, memory, and disk space usage by the scanner can seriously impact operations. High-end scanners can easily generate a load equivalent to a single user, or even up to a hundred or more, unintentionally causing serious events like DoS when resources are exhausted. Each website’s infrastructure should be considered unique in load capacity. Scanning processes should be adjusted accordingly.

WhiteHat Sentinel
WhiteHat Sentinel scans consume the load of a single user. They are single-threaded, never exceeding a user-defined number of requests per second, and generally do not download static content (e.g., images), thereby reducing bandwidth consumption. WhiteHat Sentinel also monitors the performance of the website itself. If performance degrades for any reason, scan speed slows down gracefully. If a website appears to be failing to respond or is incapable of creating new authentication sessions, in most cases Sentinel will stop testing and wait until adequate performance returns before resuming.
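
To make the idea concrete, here is a generic sketch of rate-limited, adaptive scanning. It is not Sentinel's actual implementation; the threshold values and the use of the requests library are assumptions for illustration only.

```python
import time
import requests  # widely used third-party HTTP client, chosen here for brevity

MAX_RPS = 5               # user-defined ceiling on requests per second
SLOW_RESPONSE_SECS = 2.0  # hypothetical threshold for "degraded" performance
PAUSE_SECS = 60           # how long to wait if the site appears to be failing

def scan(urls):
    """Single-threaded scan loop that throttles itself and backs off gracefully."""
    min_interval = 1.0 / MAX_RPS
    for url in urls:
        start = time.time()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            # Site is not responding: stop testing and wait before resuming.
            time.sleep(PAUSE_SECS)
            continue
        elapsed = time.time() - start

        # ... inspect resp here for vulnerability signatures ...

        if elapsed > SLOW_RESPONSE_SECS:
            # Performance is degrading: slow the scan down gracefully.
            time.sleep(elapsed)
        # Never exceed the configured request rate.
        time.sleep(max(0.0, min_interval - elapsed))
```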


2) How are multi-form application flows marked as safe-for-testing and/or removed from testing?
Depending on the website, complex application functionality can only be located by filling out multi-form work flows with valid data. Insurance applications, bill payment, password recovery, purchase processes, and more are prime examples. Some application flows are more sensitive than others, including those with a direct monetary cost when activated. Automatically scanning these areas can have consequences.

WhiteHat Sentinel
WhiteHat Sentinel does NOT automatically fill out or test HTML forms. In our experience, doing so is extremely dangerous without significant preexisting knowledge about the website. Each HTML form discovered during crawling, including multi-form process flows, is custom-configured by our Operations Team with valid data. The Operations Team also marks individual forms as being safe-for-testing. Those HTML forms that cannot be tested safely are either tested manually or not at all.
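
As a rough illustration of that kind of per-form configuration (the structure and field names below are hypothetical, not Sentinel's actual format), a scanner might only act on forms a human has explicitly marked safe:

```python
# Hypothetical per-form configuration: each form discovered during crawling gets
# valid sample data and an explicit safe-for-testing decision made by a person.
FORM_CONFIG = [
    {
        "action": "/password/recover",
        "fields": {"email": "scanner-test@example.com"},
        "safe_for_testing": True,
    },
    {
        "action": "/checkout/submit",            # direct monetary cost if activated
        "fields": {"card_number": "4111111111111111"},
        "safe_for_testing": False,                # test manually or not at all
    },
]

def forms_to_test(config):
    """Only forms explicitly marked safe are handed to the automated scanner."""
    return [form for form in config if form["safe_for_testing"]]
```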


3) Does the scanner send executable attack payloads? If so, how are such tests made safe?
To identify vulnerabilities, scanners may inject executable, even malicious, payloads. For SQL Injection, testing may include executing back-end system commands, eliciting error messages, and/or retrieving or modifying data -- each potentially impacting or halting database performance. Another example is found when scanning for Cross-Site Scripting. Scanners may submit browser-interpretable payloads (i.e., live HTML/JavaScript code) that could be returned in the website’s code in an unknown number of locations. Should these payloads be encountered by Web visitors, they could easily interfere with or break the user experience entirely.

WhiteHat Sentinel
Sentinel, by default, performs the majority of its tests using proprietary pseudo-code. This enables the scanner to identify vulnerabilities without the payload being interpreted by parsers within the application, which helps ensure that no errant process execution occurs that could negatively impact production software. Also, Sentinel does not perform requests that are non-idempotent, involve write activity, or take potentially destructive actions without explicit authorization from either a security engineer or the asset owner.
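
As a generic example of the inert-payload idea (not WhiteHat's proprietary pseudo-code; the URL, parameter, and marker format are assumptions), a reflection check can submit a harmless unique token instead of live script:

```python
import uuid
import requests  # third-party HTTP client, assumed for brevity

def check_reflection(url, param):
    """Probe for unencoded reflection without sending executable HTML/JavaScript.

    If the marker comes back with its angle brackets intact, the parameter is
    likely reflected without output encoding -- evidence of XSS exposure without
    ever placing live script into the application's responses.
    """
    marker = "xzq" + uuid.uuid4().hex[:8]   # harmless, unique, easy to spot
    probe = "<" + marker + ">"              # nothing a browser would execute
    resp = requests.get(url, params={param: probe}, timeout=10)
    return probe in resp.text
```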


4) How are links (URLs) that point to sensitive application functionality marked safe-for-testing and/or removed from testing?
According to the RFC specification, HTTP GET requests (i.e., from hyperlinks) should be treated as idempotent by applications. That is, no significant action other than data retrieval should be taken, even across multiple link clicks. In practice though, many links (URLs) discovered during (authenticated) crawls can indeed delete, modify, and submit data, potentially causing disruption much like non-idempotent POST requests.
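
A hypothetical example of the problem (sketched with Flask purely for illustration; the route and data store are made up): a handler like this destroys data on a plain GET, so an authenticated crawler that merely follows the link deletes the record.

```python
from flask import Flask, request

app = Flask(__name__)
messages = {42: "hello"}   # stand-in data store for illustration

# Anti-pattern: a state-changing action reachable from an ordinary hyperlink.
# Any crawler that follows /messages/delete?id=42 silently removes the record,
# even though GET is supposed to be safe to repeat.
@app.route("/messages/delete")
def delete_message():
    message_id = int(request.args.get("id", 0))
    messages.pop(message_id, None)
    return "deleted"
```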

WhiteHat Sentinel
As part of the WhiteHat Sentinel assessment process, customers may alert us to functionality that may execute dangerous non-idempotent requests. When such areas are identified, they can be ruled out of the scanning process and tested manually. Also, authenticated scans are restricted to special test accounts, so any potential negative impact is limited to those areas and does not extend to other users.
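
A minimal sketch of that kind of exclusion (the URL patterns and account names are assumptions, not Sentinel's configuration syntax):

```python
import re

# Hypothetical customer-supplied rules: anything matching these patterns performs
# a dangerous non-idempotent action and is removed from automated scanning.
EXCLUDED_URL_PATTERNS = [
    re.compile(r"/admin/.*/delete"),
    re.compile(r"/billing/charge"),
]

# Authenticated scans are confined to dedicated test accounts.
TEST_ACCOUNTS = {"sentinel-test-1", "sentinel-test-2"}

def allowed_to_scan(url, account):
    """A URL is scanned only from a test account and only if no exclusion matches."""
    if account not in TEST_ACCOUNTS:
        return False
    return not any(pattern.search(url) for pattern in EXCLUDED_URL_PATTERNS)
```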

10 comments:

Sven Türpe said...

Shameless self-promotion: we look at production-safety from a testing point of view in our paper: Testing Production Systems Safely: Common Precautions in Penetration Testing. Scanning is just one testing technique, or really a set of techniques. The issue of production-safety however arises whenever one touches production systems for testing. We therefore tried to generalize our discussion of risks and mitigation options.

Unknown said...

Do actual attackers take such precautions?

Sven Türpe said...

Does it matter whether they do? The point of vulnerability assessment, security testing, or whatever you like to call it, is not to simulate attacks. The point is to execute testing procedures that tell you where and how a system is vulnerable to attacks. Of course, any limitation that you impose on the set of tests allowed bears the risk of missing something. But this risk is inherent in any type of testing, be it only due to time limits.

Jeremiah Grossman said...

@Jackson brings up a good point: the bad guys probably don't care a whole lot if they damage a website. And it is also true that the more restrictions you put on the "good guy," the less accurate the vulnerability assessment. For most organizations, though, that difference is acceptable when trying to find exploitable conditions in a "safe" fashion. Anyone can cause a DoS.

Unknown said...

If it misses something, why should we adhere to it? Clients should, by all means, deploy a test environment for us where we can run full simulations like real attackers.

Jeremiah Grossman said...

They can, and do from time to time, in order to test in a destructive fashion without fear of business disruption.

Unknown said...

For large sites whose source code is updated regularly, they can sync source updates to a testing lab for automated scanners/pentesters.
No need to fear disruption.

Nitin Reddy Katkam said...

From the perspective of a software developer who writes code and maintains deployments to a production environment, I would say we shouldn't be testing on production servers. Getting testing data off the production servers isn't something the maintenance staff would want to do and the business users wouldn't be very happy if you told them that the statistics reported by your application are inflated figures.

Most development teams have either a dedicated QC environment, or a development environment that can be used for the test.

Unknown said...

That's what I mean, Nitin.

Production web sites are used for various business purposes, such as SEO and page-visit analysis, on a monthly/yearly basis.

If a vulnerability scanner messes up, then all those stats become unreliable.

James Dorrian said...

The only issue we've had with scanning in production is the large number of exception notification emails (via Microsoft Enterprise Library) that we were previously unaccustomed to seeing.