Sunday, February 22, 2009

SQL Injection, eye of the storm

Originally published by Security Horizon in the Winter 2009 edition of Security Journal.

In 2008 SQL Injection became the leading method of malware distribution, infecting millions of Web pages and foisting browser-based exploits upon unsuspecting visitors. The ramifications to online businesses include data loss, PCI fines, downtime, recovery costs, brand damage, and revenue decline when search engines blacklist them. According to WhiteHat Security, 16 percent of websites are vulnerable to SQL Injection. This is likely under-reported given that the statistics are largely based on top-tier Web properties that employ a website vulnerability management solution to identify the problem. The majority of websites do not, and as such may be completely unaware of the extent of the issue. In addition, some recommended security best practices have ironically benefited malicious hackers. Websense now reports that "60 percent of the top 100 most popular Web sites have either hosted or been involved in malicious activity in the first half of 2008." Let’s examine the forces that have aligned to create the storm that allows SQL Injection to thrive.

Any custom Web application that lacks proper input validation, fails to use parameterized SQL statements, and/or creates dynamic SQL with user-supplied data potentially leaves itself open to SQL Injection attacks -- unauthorized commands passed to back-end databases. When Rain Forest Puppy first described SQL Injection ten years ago, on Christmas Day 1998, it was a targeted one-off attack capable of exploiting only a single website at a time. Custom Web applications contain custom vulnerabilities and require custom exploits. Successfully extracting data out of an unfamiliar database is different in each instance and greatly aided by error messages revealing snippets of server-side code.
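
To make the distinction concrete, here is a minimal Java/JDBC sketch contrasting the two approaches. The table and column names (users, name, password) are illustrative assumptions rather than anything from a real application.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class LoginDao {

        // VULNERABLE: user-supplied data is concatenated into the SQL string, so
        // input such as  ' OR '1'='1  rewrites the query logic.
        public boolean isValidUserUnsafe(Connection conn, String user, String pass)
                throws SQLException {
            String sql = "SELECT id FROM users WHERE name = '" + user
                       + "' AND password = '" + pass + "'";
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                return rs.next();
            }
        }

        // SAFER: a parameterized statement keeps data out of the SQL grammar; the
        // driver binds the values, so quotes in the input remain literal text.
        public boolean isValidUserSafe(Connection conn, String user, String pass)
                throws SQLException {
            String sql = "SELECT id FROM users WHERE name = ? AND password = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, user);
                ps.setString(2, pass);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }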

To solve the SQL Injection problem, preferably in the code, first we must identify what is broken. The easiest method to date has been remote black-box testing: submitting meta-characters (single quotes and semicolons) into Web applications. If the website returns a recognizable response, such as an ODBC error message, there is a high probability that a weakness exists. Comprehensive security testing, typically aided by black-box vulnerability scanners, performs the same procedure on every application input point (URL query parameters, POST data, cookies, etc.) and is repeated with each code update. This software security testing process is also now one of the assessment options mandated by PCI-DSS section 6.6.
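
As a rough illustration of that black-box procedure, the Java sketch below sends a single quote into one hypothetical parameter and looks for a telltale database error in the response. The URL, parameter, and error strings are assumptions for the example; a real scanner covers every input point and far more error signatures.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ErrorBasedProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical input point; %27 is a URL-encoded single quote.
            String probe = "http://test.example.com/product?id=42%27";

            HttpClient client = HttpClient.newHttpClient();
            HttpResponse<String> resp = client.send(
                    HttpRequest.newBuilder(URI.create(probe)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            // A recognizable database error in the response suggests (but does not
            // prove) that the parameter reaches a SQL statement unsanitized.
            String body = resp.body().toLowerCase();
            if (body.contains("odbc") || body.contains("sql syntax")
                    || body.contains("unclosed quotation mark")) {
                System.out.println("Possible SQL Injection: " + probe);
            }
        }
    }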

In the wake of highly publicized compromises like the 2005 incident at CardSystems, a back-end credit card transaction processor, in which millions of stolen credit card numbers fell into the wrong hands, website owners were strongly encouraged to update their Web application code and suppress error messages to defend against SQL Injection attacks. Many implemented solely the latter since it only required a simple configuration change, hindering the bad guy’s ability to identify SQL Injection vulnerabilities. Because the vulnerabilities could no longer be found easily, this perhaps incorrectly trained developers to believe that security through obscurity was enough. Widespread attacks were not seen as prevalent enough to justify a serious software security investment. Despite cutting-edge Blind SQL Injection research helping to improve black-box testing (a conceptual sketch of the boolean-based technique follows the list below), error message suppression contributed to three very important side effects:
  1. Black-box vulnerability scanner false-positive and false-negative rates skyrocketed.
  2. SQL Injection became significantly harder to identify, but ironically not to exploit.
  3. Extracting data out of a database became far more laborious than injecting data in.
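
The boolean-based blind technique mentioned above can be sketched roughly as follows: with error messages suppressed, the tester compares responses to a logically true and a logically false condition and infers the vulnerability from behavioral differences rather than error text. The URL and parameter are again assumptions used only for illustration.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class BlindProbe {

        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        // Fetch the page for a given value of the hypothetical "id" parameter.
        static String fetch(String idValue) throws Exception {
            String url = "http://test.example.com/item?id="
                       + URLEncoder.encode(idValue, StandardCharsets.UTF_8);
            return CLIENT.send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                               HttpResponse.BodyHandlers.ofString()).body();
        }

        public static void main(String[] args) throws Exception {
            // No error message needed: if a logically-true condition leaves the page
            // unchanged while a logically-false one alters it, the database is very
            // likely evaluating the injected expression.
            String baseline  = fetch("5");
            String whenTrue  = fetch("5 AND 1=1");
            String whenFalse = fetch("5 AND 1=2");

            boolean likelyInjectable =
                    whenTrue.equals(baseline) && !whenFalse.equals(baseline);
            System.out.println("Likely blind SQL Injection: " + likelyInjectable);
        }
    }
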
In 2007, cyber criminals in China created a new technique capable of generically exploiting SQL Injection vulnerabilities without reliance on error messages. Perpetrators inserted malicious JavaScript IFRAMEs (pointing to malware servers) into back-end databases, as opposed to pulling data out, and used the capability to exploit unpatched Web browsers and plug-ins in record numbers. "The scale of this global criminal operation has reached such proportions that Sophos discovers one new infected webpage every 4.5 seconds – 24 hours a day, 365 days a year." No longer limited to one-off hacks, SQL Injection quickly became a method of choice to target anything and everything on the Web. According to Websense, “75 percent of Web sites with malicious code are legitimate sites that have been compromised.” Furthermore, attacks have been widely successful without even needing to authenticate, which means any website functionality beyond a login screen represents a further attack target. Worse still, security pros are unable to leverage these techniques to improve SQL Injection vulnerability detection because doing so risks adversely affecting the website.

For example, when vulnerability assessments are conducted on production systems, a cardinal rule must be followed: “Do no harm.” This often requires that testing rates be limited (X number of requests per second), testing windows be respected (usually during off-peak hours), and tests be nondestructive. It should go without saying that we do not want to crash websites or fill databases with undesirable content. Malicious hackers have no such restrictions; they may test however they want, whenever they want, for as long as they want. The other disadvantage for website defenders is that they must be 100% accurate at finding and fixing every issue all the time, while the attacker need only exploit a single missed issue. This is an unfortunate but inescapable reality of Web security, and it is why frequent and comprehensive testing is vital.

Source code review (aka white-box testing) is the other option for locating SQL Injection vulnerabilities and is often able to peer deeper into the problem than black-box testing. Of course, you must have access to the source code. Before considering this as a scalable solution, begin by asking yourself whether executive management would allocate enough resources to perform source code reviews on every website every time it changes. Thinking globally, let’s consider that there are over 186 million websites. While not all are “important,” if 16% had just a single vulnerability, as previously cited, that means a staggering 30 million issues are in circulation. Would it be reasonable to project that finding (not fixing) each unique issue through white-box testing, even assisted by automation, would require $100 in personnel and technology costs? If so, we are talking about at least a $3 billion price tag simply to locate SQL Injection -- to say nothing of other, more prevalent issues such as Cross-Site Scripting and Cross-Site Request Forgery that will be left undiscovered.

Secure software education is another valuable long-term strategy that will help prevent another 30 million vulnerabilities from being added to the pile over the next 15 years, but it will not provide a short-term fix to the problem. Currently, there are roughly 17 million developers worldwide who are not educated on the basic concepts of secure coding that could help them tackle SQL Injection and other issues. Would $500 per developer be an acceptable rate for professional training (commercial two-day classes typically start at $1,000 and up)? If so, the market must be prepared to make an $8.5 billion investment and then wait for everyone to come up to speed. Obviously the private sector is not going to shoulder such a financial burden alone, not in any economy, let alone in a recession. The fact is that education costs must be shared amongst colleges, enterprises, vendors, and developers themselves, or offset through materials made freely available by organizations such as OWASP and WASC.

The climate for SQL Injection vulnerabilities has all the makings of a perfect storm, one we are already experiencing. The issue is extremely dangerous, incredibly pervasive, difficult to identify, easy to exploit, and expensive to fix. Over the last 15 years organizations have accumulated a lot of Web security debt, which eclipses our currently estimated spending of ~$300 million, combining outlays for scanning tools, professional services, training, and Web application firewalls. Perhaps we should ask president-elect Obama for a Web security bailout. Not likely. The fact is that not every organization will invest adequately to protect itself, at least not overnight. Those who do not will undoubtedly become the low-hanging fruit bad guys target first. The smart money says vast numbers of compromises will continue throughout 2009.

For those wishing to do all they can to prevent compromises, the answer is adopting a holistic approach to overall Web security, including SQL Injection. While fully articulating the details of each solution is beyond the scope of this article, it is worth highlighting several of the most important and why they are good ideas.
  • Security throughout the Software Development Life-Cycle, because an ounce of prevention is worth a pound of cure.
  • Education, teach a man to fish.
  • Vulnerability Assessment, because you cannot secure what you cannot measure.
  • Web Application Firewalls, because software is not and will never be perfect.
  • Web Browser security, because users must be able to protect themselves against hostile websites.
This combined strategy will prevent us from having the same conversation about Cross-Site Scripting and Cross-Site Request Forgery in the near future.

12 comments:

dre said...

Great article, with some very poignant topics and insights. I hope this sends a message out to the SQLi-happy pen-testing companies out there that cling to their remote execution and database dumping "showy" kung-fu.

However, I also think the industry is growing tired of a focus against code review. Lighten it up a little bit. Everyone knows that web application security scanners are the worst way to find SQLi bugs.

Also: you cling to old terminology such as white-box and black-box. Of course it is possible to do both (i.e. hybrid), and white-box does not have to mean static analysis (partially or fully automated through a security review tool) and/or manual code review.

Regarding "white-box testing", you said, "Before considering this as a scalable solution begin by asking yourself if executive management would allocate enough resources to perform source code reviews on every website every time they change". This is incorrect. There are many different kinds of white-box analysis methods, not just the popular kind that you specifically refer to (i.e. I think you're talking about Fortify or Ounce used by lead-developers and/or security "experts").

The best thing about white-box analysis is that it solves *exactly* what you claim is needed: "To solve the SQL Injection problem, preferably in the code, first we must identify what is broken". When using code comprehension, dynamic analysis (no, not the black-box functional testing kind that web application security scanners do, I'm talking about walking the code paths with the binaries, bytecode, and/or source code), code coverage (e.g. Fortify PTA), security review tools, manual code review, unit testing, integration unit testing, component testing, design review, checking acceptance testing against requirements, and about a thousand other different methods...

Almost all of the techniques in the above paragraph get to the root-cause and source of the SQLi problem. Web application security scanners, by themselves, never do. They can't. The reality is that web application security scanners alone, even when backed by human experts such as the people who wrote the tools in the first place, require more regression testing than any white-box method. WAF requires more regression testing than any white-box method. They never solve the root-cause, and thus are always going to create unknown situations.

While static analysis tools have a notoriously high false-positive rate -- they only have it once (i.e. the first time it is thoroughly run, which is normally the only time you need to run one). WAF and web application security scanners share a continuously high rate of false-positives and false-negatives that cannot be significantly reduced through time and effort. Nobody is ever going to get better than 65% coverage using these methods, and the typical rate today is less than 10% 1sigma and 30% 4sigma.

Your last paragraph speaks volumes. I guess the only part where we differ in opinion is that you think Vulnerability Assessment for web applications is the WhiteHatSec method, and I think the same Vulnerability Assessment is everything *but* the WhiteHatSec method. Also, the WhiteHatSec choice of Web Application Firewalls is incorrect -- instead of F5, Cisco, Citrix, and Imperva -- WAF operators should be looking into Microsoft AntiCSRF / AntiXSS SRE, GDS-Security SPF, Java Enterprise HDIV, Mod-Security using Lua (and not huge regex blacklists), etc.

While I understand that some recommendations that I propose may not be fully PCI-DSS requirement 6.6 compliant -- I do understand that there are plenty of alternate and compensating controls around every PCI-DSS requirement.

Anonymous said...

Great summary of where SQL injection has evolved to. But I agree with Andre that the code-review bashing is really overdone.

First of all, getting access to the code is generally no harder than gaining access to test systems on an internal network. You need a process and some supporting technology, but we do it all the time with critical applications from the largest enterprises. (If you're reviewing your own code this is even less of a problem).

Second, the math is wildly in favor of code review. To test an application requires fuzzing potentially thousands of parameters, detecting success is spotty at best, and there are false alarms to deal with. In a code review, you can find and verify all the actual queries (generally not that many) quickly and with damn near perfect accuracy.

I'll personally take the John Henry challenge on any application any time to prove that manual human code review is faster and more accurate than any scanning approach. It's even more efficient when applied to an entire application portfolio.

The financial argument you set out against code review is also specious. Using numbers from the entire Internet is pretty misleading. If you used scanning as the basis for finding SQL injection across the entire Internet, it would cost more, and would be far less accurate.

To address your argument directly, verifying that a 500,000 LOC application doesn't have SQL injection with manual code review can cost as little as 10 minutes if it uses parameterized queries properly everywhere. That's 100% coverage with a false alarm rate of exactly zero. If they've used dynamic queries, it can take up to a few minutes for each query in the application (i.e. not very long).

The cost to do the same thing with scanning is dramatically higher. Tailoring the scanner to the application takes time. Running the scan takes time (even if automated). And verifying that the findings are real (aka eliminating the false alarms) takes time (a lot of time). Note that the best way to eliminate false alarms is to look at the code, so what have you gained? Also, if you have to prove the vulnerability with an exploit, crafting it blind can take hours or days, but with the code it is trivial. And of course there are the false negatives to consider.

But the real problem is that the vast majority of the cost comes from the "fixing" part. That's why giving developers excellent findings that indicate the exact line of code and the exact remedy tailored to the code (not a generic recommendation) is so important. Fixing the problem takes much longer if all you give developers is a broken URL and parameters and not a specific line of code and custom recommendation.

There are other problems in application security where static analysis, penetration testing, and scanning are the most efficient approach. But if you want to find SQL injection, there's no question that getting into the code is just the right approach.

Let me finish by zooming out even a bit more. ALL of this is just about *finding* SQL injection. Finding vulnerabilities is just a small part of a responsible application security program. The whole point of it is to show you where you made mistakes earlier in the process. If you really want to eliminate SQL injection from your enterprise, set a standard requiring the use of parameterized queries, train developers to follow it, use code review process to verify that they did it, and manage the process with some metrics. It takes about a year and you'll never have to deal with it again.

I agree with the rest of your post about the size of the challenges we face, and the need for a comprehensive approach to application security.

Arshan Dabirsiaghi said...

This seems to be a good place to let people know about a new tool we just released: Scrubbr - a standalone GUI tool for helping you find stored XSS attacks in your database.

It's not perfect, but it's light years beyond:

> select * from foo where bar like '%<script%'

Anyway, you can read about it on the link above or download it directly from the Google Code page.

Jeremiah Grossman said...

Hey Jeff, Alright! Now we’re getting somewhere!

My intention was not to bash any VA approach, but instead to place the monetary costs into context. If a firm can afford a full source review enterprise-wide and perceives adequate value, then by all means it should. What I don’t believe is that any VA solution can be generalized as the “best” approach in all cases. It must come down to what works best in a given situation, or one risks arguing ideology rather than logic. Also, from my experience, access to source often is difficult to obtain -- maybe not for all, I don’t know about most, but certainly for more than a trivial number.

Accuracy is not exactly a cut and dry subject
I submit code reviews may claim better accuracy on specific types of vulnerabilities (ie. SQL Injection). At the same time, would you agree code reviews often lack context of true exploitability, which is a key part of risk management? For example, a source code review might uncover a code level issue, but one not externally exploitable (or easily) for a dozen or more reasons. And if the issue is not exploitable then it may/should be prioritized differently. Secondly, not all high severity and easy to exploit vulnerabilities (ie. Predictable Resource Location) can be found in the Web application code. Agree? Black box can provide organizations with these data points.

Faster perhaps, but I said “scalable”
I wouldn’t bet on who would win the “John Henry challenge” for Web applications of a given website. I would though if the race was on something close to the scale of the Internet, or maybe just the top 1 percent of sites (impossible I know). I don’t believe you are suggesting there is a sufficient supply of qualified source code reviewers available to cover all the Web code we’ve spent the last 15 years building. I’d also claim black boxers could identify an organization’s websites and at least scan/assess them with some level of comprehensiveness many orders of magnitude faster than white boxers could obtain the code and do the same by their standards. I have statistics to back up my time estimates; I’d be very much interested in learning yours if amicable.

The Cost of Apples vs. Oranges
Sure, reviewing properly coded source can be a low-cost breeze, but that’s not the challenge that lies in front of us. It’s the millions of websites that are not coded properly, that do not religiously use parameterized queries, which were the crux of my guerilla math estimates. Surely these would take much longer to analyze line by line. I’d be interested in getting your estimates of time/cost for X LoC for spaghetti ASP or whatever else you feel comfortable sharing.

But how much faster to fix?
Your suggestion is that it’s much harder and takes longer to fix a vulnerability with black-box results (over white-box) because the results lack the specific line of code. You might be right, but I’m thinking that if there is a difference, the delta is rather small. I don’t know if anyone has even anecdotal evidence of whether or not this is true, so I believe it’s hard for either of us to argue. More discussion is warranted.

On the strategy front, we have a lot more to talk about. I’m gathering my thoughts now and hope to share them with you soon.

Jaydeep Dave said...

Extracting data from an SQL injection is not laborious work. It's a matter of time.

I would like to prove it.

jaydipdave@gmail.com

dre said...

@ Jeremiah:

Accuracy is not exactly a cut and dry subject
I submit code reviews may claim better accuracy on specific types of vulnerabilities (ie. SQL Injection). At the same time, would you agree code reviews often lack context of true exploitability, which is a key part of risk management? For example, a source code review might uncover a code level issue, but one not externally exploitable (or easily) for a dozen or more reasons. And if the issue is not exploitable then it may/should be prioritized differently


I do not think that code reviews lack context of true exploitability, especially in the case of SQLi. Also, exploitability is often not part of a proper risk analysis -- it doesn't "have to be". There are many metrics to look at, and exploitability factors are usually weighted much lower than other factors, and it's often perfectly ok if they are missing. Prioritization of any software defect fix may or may not take impact into account. The result is that you have some bugs, and the goal is to fix all of them in the shortest amount of time with the fewest resources. Show me code that is obviously exploitable, and I'll show you code that is obviously not exploitable. Which would any development organization rather spend their time on?

Your perspective is from the classic penetrate-and-patch world. Web applications don't live in this world. There is no need to penetrate, and patches don't exist. You don't patch web applications; you re-code them. Better yet, you refactor them. Fixing every SQLi in modern dev shops could be as simple as using a template, a code generator, or some metaprogramming concepts. What? You don't have a modern dev shop that already uses prepared statements for performance reasons? I guarantee you'll be spending less money in the long run making your dev shop more agile (to use the wrong word for what I actually mean) than spending more money on security products/services, especially security products/services which require costly "proof of exploitability and impact analysis". If an organization wants to start making spending cuts on its security, start with proving the right things (good code; modern dev shops; performance-enhanced / scalable query language; modern risk analysis) and avoid proving the wrong things (we are exploitable; people can steal information from us in this exact way, and therefore, now that somebody knows exactly how to do it, it will actually probably happen, since we also don't know how to fix it).

Faster perhaps, but I said “scalable”
I wouldn’t bet on who would win the “John Henry challenge” for Web applications of a given website. I would though if the race was on something close to the scale of the Internet, or maybe just the top 1 percent of sites (impossible I know). I don’t believe you are suggesting there is a sufficient supply of qualified source code reviewers available to cover all the Web code we’ve spent the last 15 years building. I’d also claim black boxers could identify an organization’s websites and at least scan/assess them with some level of comprehensiveness many orders of magnitude faster than white boxers could obtain the code and do the same by their standards. I have statistics to back up my time estimates; I’d be very much interested in learning yours if amicable


There are exactly enough code reviewers available to cover all of the web code we've spent the last 15 years building. In fact, most of them wrote that web code, or were responsible for signing off on it before it went to production. It's the developers!

Companies like WhiteHatSec (SaaS vendors) and the people that work for these companies are often blinded by the fact that a majority of the rest of the world uses published open-source components to build their applications -- and that they actually want to continue to do so. Am I declaring PaaS DOA? No, but certainly the Google or Force API would be useless without an open and well-documented API, only proving my point further.

There is no such thing as a closed-source application. Somebody wrote it. Somebody has access to a repo somewhere. Most of the build, however, is probably not in that same repo. It's probably third-party components downloaded directly from the Internet (or copied out of a book). Again, welcome to the differences between the old world (penetrate-and-patch) and the new world (refactoring web applications).

As a final point, you have entirely missed Jeff's and my points about the need to re-run web application security scanners (or other manual black-box functional testing) over and over again. Regression testing cannot be improved with functional testing. It is what it is. In the case of a well-designed web application, preferably using managed code, it is quite possible to eliminate regression testing and replace it with continuous-prevention development. It's called "Red-Green-Refactor", and all you need to do is write a test to assert the defect's fix. Now you're finding even more defects that are related or similar to the original defect.
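
As a concrete sketch of "write a test to assert the defect's fix": below is a minimal JUnit 5 example of such a continuous-prevention test, assuming JUnit 5 and an in-memory H2 database as test dependencies (both library choices are assumptions for illustration). Written against the old string-concatenation query it fails; once the lookup is parameterized it passes and stays in the build so the defect cannot silently return.

    import static org.junit.jupiter.api.Assertions.assertFalse;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import org.junit.jupiter.api.Test;

    // Continuous-prevention regression test (red-green-refactor style).
    class SqliRegressionTest {

        // Parameterized lookup under test (in a real project this lives in the DAO).
        static boolean userExists(Connection conn, String name) throws Exception {
            try (PreparedStatement ps =
                     conn.prepareStatement("SELECT 1 FROM users WHERE name = ?")) {
                ps.setString(1, name);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }

        @Test
        void hostileInputMustNotWidenTheQuery() throws Exception {
            // In-memory H2 database used purely as a test fixture (assumed dependency).
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE users (name VARCHAR(64))");
                    st.execute("INSERT INTO users VALUES ('alice')");
                }
                // The classic tautology payload must be treated as a literal name
                // that matches nothing, not as SQL that makes the WHERE clause true.
                assertFalse(userExists(conn, "' OR '1'='1"));
            }
        }
    }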

The best part about SDLC improvements is that there is no one answer like continuous-prevention development (although this is my favorite since it seems to apply to a large majority of shops out there today). A dev shop doesn't have to be using TDD to gain the advantages because there's probably another way to do it in a different paradigm. Not using Agile/XP? Well, even Clean Room methodology seems to provide layers to solve this problem. Not using MDA/MDE with code generation in your OO language? Certainly, DbC is still available, it's almost the defining characteristic of OO development.

So, yes -- a web application security scanner running in default mode, and a 17-year-old with Burp Suite (aka the WhiteHatSec model), is going to be a lot faster than a thirtysomething with O2 and SharpDevelop. However, the thirtysomething reviewer is going to be able to explain to developers how to do what he/she does, come up with coding standards to prove un-exploitability, and build an SDLC to remove the need for long-duration and antiquated regressions. The scanner + 17-year-old are going to be able to run that test again. And again. And again. And fix nothing.

Even in the "Sean Fay / Geoff Morrison" Pepsi Challenge (N.B. I think BlackHat calls this "Iron Chef"), the judges claimed that Pepsi tasted better than Coke, even though Coke sells more / wins. Wait. I got that backwards. Fortify does sell more than WhiteHatSec. And it does taste better.

Surely these would take much longer to analyze line by line. I’d be interested in getting your estimates of time/cost for X LoC for spaghetti ASP or whatever else you feel comfortable sharing

This is why I always separate the words "Black-Box" from "Dynamic Analysis". They are not the same. Dynamic Analysis is White-Box, not Black-Box. Nobody understands this except for me and basically all of the people who invented and have been using these words outside of the security community and Gartner for almost 50 years.

Nobody in their right mind analyzes code "line by line" unless it's less than 1 kloc. Usually if you can read code, then you also know how to code. Therefore, you also understand the power of automation and recursion. You may also understand the use of a debugger, possibly even code coverage tools. If I had "tens of klocs" of Classic ASP code to review, you can bet that I'd be using RTTI or falling back to something else from the type system to help me find bugs, let alone simple/stupid security bugs like SQLi.

This is the same approach as using a SAST tool with managed code. It's why commercial SAST vendors don't support Python or Ruby (or other dynamic languages that don't have a type system). The reason that AST models of data flow work so well on managed code is because of that type system, but it also helps that the bytecode (see again: I did not say source code) is available to fall back on. In other words, dynamic languages like Groovy that produce Java bytecode may be great candidates for SAST review (although certainly this isn't perfect today just like reviewing Spring with these tools is not perfect today).

But how much faster to fix?
Your suggestion is that it’s much harder and takes longer to fix a vulnerability with black-box results (over white-box) because the results lack the specific line of code. You might be right, but I’m thinking that if there is a difference, the delta is rather small. I don’t know if anyone has even anecdotal evidence of whether or not this is true, so I believe it’s hard for either of us to argue. More discussion is warranted


Jeremiah said it best:
Secondly, not all high severity and easy to exploit vulnerabilities (ie. Predictable Resource Location) can be found in the Web application code. Agree? Black box can provide organizations with these data points

This is exactly correct. Users of both SAST/DAST have discovered that DAST is superior at finding issues like Directory/Path Traversal, Forced Browsing, PRL's, etc. They have also found that SAST and/or manual review are better at finding everything else, especially SQLi and XSS.

See also: every organization or individual that has spent time with both SAST and DAST. Is that enough anecdotal evidence?

Erwin Geirnaert said...

The comments started off great, but in the end somebody started bashing a commercial company, and I don't agree there.

If we lived in a static environment, I would agree that a code review with tools like static analysis, combined with coding guidelines, security frameworks like OWASP ESAPI, and so on, will result in a secure application, but you will still need verification of the secure deployment in the web app space.

This can only be verified with security testing, not with code review or audits of the environment. I have seen enough cases of secure applications using strong authentication being deployed in a default way, reducing the security level to zero.

Also, back to the original article: organizations that are into application security and request a code review or security test are already aware of SQLi or XSS and will have developed something to protect against these vulnerabilities, even if it is just "good enough".

Anonymous said...

Cool. I love this discussion.

My intention was not to bash any VA approach, but instead to place the monetary costs into context. If a firm can afford a full source review enterprise-wide and perceives adequate value, then by all means it should. What I don’t believe is that any VA solution can be generalized as the “best” approach in all cases. It must come down to what works best in a given situation, or one risks arguing ideology rather than logic.

Wait a sec, I thought your article was arguing for black box and against code review. I've *always* (since 1999) argued for using a combination of the most cost-effective approaches for verifying security in an application, including automated tools, manual security testing, manual code review, and architecture review. When you set them up as alternatives, it sounds to me like arguing that a hammer is better than a screwdriver.

Also, from my experience, access to source often is difficult to obtain -- maybe not for all, I don't know about most, but certainly for more than a trivial number.

We review applications totalling millions of lines of code every month. Access to code is much less an issue once people see how much better the results are for the same cost. (We charge the same whether we get access to the source code or not - we just do a better job if the code is provided). Having the source code available makes the verification *faster*.

Accuracy is not exactly a cut and dry subject
I submit code reviews may claim better accuracy on specific types of vulnerabilities (ie. SQL Injection). At the same time, would you agree code reviews often lack context of true exploitability, which is a key part of risk management? For example, a source code review might uncover a code level issue, but one not externally exploitable (or easily) for a dozen or more reasons. And if the issue is not exploitable then it may/should be prioritized differently. Secondly, not all high severity and easy to exploit vulnerabilities (ie. Predictable Resource Location) can be found in the Web application code. Agree? Black box can provide organizations with these data points.


Code review has a *different* context than a black box review. Actually there is far more useful context in the code than in the black box perspective (not that black box doesn't have some of its own useful context). But the lack of context is *why* we call it "black" box.

I believe that the focus on exploitability leads organizations to make very bad security decisions. It sends the message that any programming practices are okay, as long as they happen to be protected by a random alignment of network, firewall, and server configuration. By prioritizing this way, you will always be chasing vulnerabilities and never stamping them out of your organization permanently.

To illustrate, in a recent review, we identified nine bad programming patterns that led to several thousand XSS problems in the code. Some of these may not be exploitable, although I suspect with enough work we would be able to exploit most. (Note that commercial scanning tools found a *very* small percentage of these problems). But ALL of that exploitability analysis effort is wasted. Putting in a strong output encoding solution like ESAPI is faster and provides far more assurance.
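
For readers who have not used it, the kind of centralized output encoding described here looks roughly like the following with the OWASP ESAPI Java encoder; the surrounding class and method names are made up for illustration, and ESAPI is assumed to be on the classpath and configured.

    import org.owasp.esapi.ESAPI;

    public class ProfileView {

        // Encode user data for the HTML context before echoing it into the page,
        // so any markup in the data (e.g. <script>) is rendered as inert text.
        static String renderDisplayName(String userSuppliedName) {
            return "<span class=\"name\">"
                 + ESAPI.encoder().encodeForHTML(userSuppliedName)
                 + "</span>";
        }
    }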

By the way, of course "predictable resource location" issues can be found statically. A "code" review should include both the source code baseline AND the entire web application as deployed. This allows you to see any misdeployed resources statically. Again, you chose a bad example. It's way easier than scanning for them blindly. Why poke blindly for issues that you can just verify easily if you're inside?

Faster perhaps, but I said “scalable”
I wouldn’t bet on who would win the “John Henry challenge” for Web applications of a given website. I would though if the race was on something close to the scale of the Internet, or maybe just the top 1 percent of sites (impossible I know). I don’t believe you are suggesting there is a sufficient supply of qualified source code reviewers available to cover all the Web code we’ve spent the last 15 years building.


How does the Internet-scale of the problem help your argument? Using a poor approach doesn't somehow get more scalable when you apply it to a problem of staggering size. The only path to fixing application security at Internet scale is to get software organizations to adopt a responsible application security program (foundation, implementation, verification, and management). I believe we're reaching the point where "hey, you've got a problem" approaches like black-box scanning need to step aside for "here's how to fix it" solutions. Chasing vulnerabilities by scanning just gets you farther and farther behind. (BTW, I like the term "security backlog" a lot - very Agile).

I’d also claim black boxers could identify an organization’s websites and at least scan/assess them with some level of comprehensiveness many orders of magnitude faster than white boxers could obtain the code and do the same by their standards. I have statistics to back up my time estimates; I’d be very much interested in learning yours if amicable.

Could you explain what you mean by "some level of comprehensiveness"? Would you be willing to say which of the Application Security Verification Standard (ASVS) requirements you cover? We typically do Level 2 and Level 3 reviews, and occasionally a Level 4 for truly critical applications. Are you including the time it takes to eliminate false positives, evaluate the likelihood and impact, and provide detailed remediation guidance? In any case, whatever level of rigor you choose, it's more cost-effective to have the code available than not.

The Cost of Apples vs. Oranges
Sure, reviewing properly coded source can be a low-cost breeze, but that’s not the challenge that lies in front of us. It’s the millions of websites that are not coded properly, that do not religiously use parameterized queries, which were the crux of my guerilla math estimates. Surely these would take much longer to analyze line by line. I’d be interested in getting your estimates of time/cost for X LoC for spaghetti ASP or whatever else you feel comfortable sharing.


Using SLOC is the wrong metric. Verification work doesn't scale linearly with SLOC because the security controls aren't randomly sprinkled across the baseline. For each of the major security areas, we verify:
1) that a security control is present
2) that the security control works properly
3) that the security control is used where it needs to be
4) that the security control is configured properly

By its nature, scanning is a weak form of verification. Negative signatures match on problems, but you have no idea how much coverage you're really getting. The positive verification approach (as suggested in ASVS) is the way to gain assurance. As I said before, the hybrid approach is the most cost-effective way to verify.

I have the easy side of this argument because I'm arguing that you should have both apples and oranges. We use the most cost-effective technique to verify the security of applications. When it's possible to automate, great (not that often really). Most often, we do some dynamic testing to see how the application behaves and then verify the controls in the code. Every application is different and what is the most cost-effective technique on one may not be for the next.

But how much faster to fix?
Your suggestion is that it’s much harder and takes longer to fix a vulnerability with black-box results (over white-box) because the results lack the specific line of code. You might be right, but I’m thinking that if there is a difference, the delta is rather small. I don’t know if anyone has even anecdotal evidence of whether or not this is true, so I believe it’s hard for either of us to argue. More discussion is warranted.


Seriously, we do this all the time. Understanding the relationship between the URL space and the software baseline is time consuming. You can push this cost down onto the developers to artificially lower the cost associated with scanning, but it's misleading. And developers make way more mistakes trying to implement weak (black box only) findings. When we come in after organizations have "fixed" black-box findings we see all kinds of crazy one-off hacks put in place. They make the scanner happy, but the software is still horribly vulnerable.

On the strategy front, we have a lot more to talk about. I’m gathering my thoughts now and hope to share them with you soon.

Looking forward to it!

Jeremiah Grossman said...

@Jeff Williams, I think it's time to let the discussion rest a bit. I certainly learned a lot about where you are coming from. I should still answer your questions about our relative ASVS coverage.

My statement, "some level of comprehensiveness" was only meant to recognize that shades of testing comprehensiveness exist within the black-box testing world.

If I understand ASVS correctly, we're typically either 1A and/or 2A depending on the level of WhiteHat Sentinel service.

As for the human time to set up, configure, scan, assess, and verify results on a per-website basis: somewhere between 4 and 24 man-hours per year for ~weekly assessment coverage. A portion of that time is customer-support related. We maintain a high degree of quality in the results (for black boxers, of course :) as we constantly improve technology efficiency.

It's best to think of our process sort of like a NASCAR pit crew, everyone has a specific task they are responsible for on a collection of websites.

yi li said...

Jeremiah:
I would appreciate it if you could answer a few questions about SQL injection:
1. If I use PreparedStatement in the right way (binding), do I still need to perform input validation in order to defend against SQL injection?
The post here seems to say I need to do both -- is my understanding correct?
http://sylvanvonstuppe.blogspot.com/2009/02/dealing-with-sql-injection-part-i.html
2. How do I perform parameterized input for a stored procedure?

yi li said...

Just one more question:
I need to create an input field that allows the user to enter any URL link. Since SQL injection can be conducted through a URL, is it possible to allow any URL and still defend against SQL injection? If so, how?
Thanks a lot.

Jeremiah Grossman said...

@Yi,

#1 I'm not familiar with all languages and frameworks, but yes, the proper use of parameterized SQL will defend against SQLi. However, input validation is also HIGHLY recommended as well -- not only to prevent SQLi and other Web attacks, but also to help produce quality code. Parameterized SQL and input validation should complement, never replace, each other.

#2 I'm not certain, but I suspect it depends on the language and framework.
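
For what it's worth, in Java/JDBC a stored procedure call can be parameterized with CallableStatement, much like a PreparedStatement. The procedure and column names below are assumptions for illustration, and note that binding at the call site does not help if the procedure itself builds dynamic SQL internally.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class OrderDao {

        // The ? placeholder is bound by the driver, never concatenated into the call.
        public void printOrders(Connection conn, String customerId) throws SQLException {
            try (CallableStatement cs =
                     conn.prepareCall("{call get_orders_by_customer(?)}")) {
                cs.setString(1, customerId);
                try (ResultSet rs = cs.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("order_id"));
                    }
                }
            }
        }
    }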

#3 When taking in the user-supplied data, perform a format check to make sure it is what you expect -- that is, that it conforms to proper URL structure, length, and character set. If you then place that data into a DB, use parameterized SQL to do so.
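
A rough Java sketch of #3 (the table name and length limit are assumptions for illustration): validate the format first, then still bind the value when storing it.

    import java.net.URI;
    import java.net.URISyntaxException;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class LinkService {

        // Format check: must parse as a URL, use http/https, and stay within a
        // sane length. Anything else is rejected before it reaches the database.
        static boolean looksLikeHttpUrl(String value) {
            if (value == null || value.length() > 2048) {
                return false;
            }
            try {
                URI uri = new URI(value);
                String scheme = uri.getScheme();
                return uri.getHost() != null
                    && ("http".equalsIgnoreCase(scheme) || "https".equalsIgnoreCase(scheme));
            } catch (URISyntaxException e) {
                return false;
            }
        }

        // Even a validated URL is stored with a bound parameter, so a quote or
        // semicolon inside it can never change the INSERT statement itself.
        static void saveLink(Connection conn, String url) throws SQLException {
            if (!looksLikeHttpUrl(url)) {
                throw new IllegalArgumentException("not a valid http(s) URL");
            }
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO links (url) VALUES (?)")) {
                ps.setString(1, url);
                ps.executeUpdate();
            }
        }
    }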

Hope this helps.