Comments on Jeremiah Grossman: SQL Injection, eye of the storm

Jeremiah Grossman (2009-07-31T20:05:09-07:00)

@Yi,

#1 I'm not familiar with all languages and frameworks, but yes, the proper use of parameterized SQL will defend against SQLi. However, input validation is also HIGHLY recommended, not only to prevent SQLi and other Web attacks, but also to help produce quality code. Parameterized SQL and input validation should never replace each other.

#2 I'm not certain, but I suspect it depends on the language and framework.

#3 When taking in user-supplied data, perform a format check to make sure it is what you expect; that is, that it conforms to proper URL structure, length, and character sets. If you then place that data into a DB, use parameterized SQL to do so.

Hope this helps.
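[Editor's note: a minimal sketch of the advice above, a format check first, then a parameterized query, using Python's sqlite3 for illustration. The URL pattern and the `links` table are assumptions for the example, not anything from the thread.]

```python
import re
import sqlite3

# Assumed allow-list format check: http(s) scheme, bounded length,
# restricted character set. Tighten or loosen for your own needs.
URL_PATTERN = re.compile(r"^https?://[A-Za-z0-9._~:/?#@!$&'()*+,;=%-]{1,2048}$")

def is_valid_url(url: str) -> bool:
    """Reject anything that does not look like a well-formed http(s) URL."""
    return bool(URL_PATTERN.match(url))

def save_link(conn: sqlite3.Connection, url: str) -> None:
    """Validate first, then insert with a bound parameter.
    Neither step replaces the other."""
    if not is_valid_url(url):
        raise ValueError("rejected: not a well-formed URL")
    # The ? placeholder keeps the data out of the SQL grammar entirely.
    conn.execute("INSERT INTO links (url) VALUES (?)", (url,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE links (url TEXT)")
save_link(conn, "https://example.com/page?id=1")
```

Note that a URL smuggling a quote, such as `https://example.com/a'--`, still passes the format check as data; it is the bound parameter, not the validation, that keeps it out of the SQL statement.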
yi li (2009-07-30T15:05:19-07:00)

Just one more question: I need to create an input field that allows a user to enter any URL link. SQL injection can be conducted through a URL, so is it possible to allow any URL and still defend against SQL injection? If so, how? Thanks a lot.
yi li (2009-07-30T15:02:28-07:00)

Jeremy: I would appreciate it if you could answer a few questions about SQL injection:
1. If I use PreparedStatement in the right way (binding), do I still need to perform input validation in order to defend against SQL injection? The post here seems to say I need to do both; is my understanding correct? http://sylvanvonstuppe.blogspot.com/2009/02/dealing-with-sql-injection-part-i.html
2. How do I perform parameterized input for a stored procedure?

Jeremiah Grossman (2009-02-23T15:21:00-08:00)

@Jeff Williams, I think it's time to let the discussion rest a bit. I certainly learned a lot about where you are coming from. I should still answer your questions about our relative ASVS coverage.

My statement, "some level of comprehensiveness," was only meant to recognize that shades of testing comprehensiveness exist within the black-box testing world.

If I understand ASVS correctly, we're typically either 1A and/or 2A, depending on the level of WhiteHat Sentinel service.

As for human time to set up, configure, scan, assess, and verify results on a per-website basis: somewhere between 4 and 24 man-hours per year for roughly weekly assessment coverage. A portion of that time is customer-support related. We maintain a high degree of quality in our results (for black boxers, of course :) while constantly improving technology efficiency.

It's best to think of our process sort of like a NASCAR pit crew: everyone has a specific task they are responsible for across a collection of websites.
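[Editor's note: on yi li's question #2 above, the exact API depends on the driver, but the pattern is the same as for PreparedStatement: pass the procedure name and the values separately. A hedged sketch using a stand-in cursor, since SQLite has no stored procedures; the `find_user` procedure and `RecordingCursor` are illustrative assumptions.]

```python
class RecordingCursor:
    """Stand-in for a DB-API cursor; it records how it was invoked.
    With a real driver (e.g. mysql.connector or psycopg2) you would
    call cursor.callproc(...) on a live connection the same way."""
    def __init__(self):
        self.calls = []

    def callproc(self, procname, params=()):
        # A real driver ships procname and params separately, so the
        # parameter values never become part of the SQL text itself.
        self.calls.append((procname, tuple(params)))

def find_user(cursor, email):
    # Even a hostile value rides along as a bound parameter, not as SQL.
    cursor.callproc("find_user", (email,))

cur = RecordingCursor()
find_user(cur, "x' OR '1'='1")
```

The design point is the same one made throughout this thread: the attack string is handed to the driver as a value, never concatenated into a statement.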
Anonymous (2009-02-23T11:19:00-08:00)

Cool. I love this discussion.

<I>My intention was not to bash any VA approach, but instead place the monetary costs into context. If a firm can afford a full source review enterprise-wide, and perceives adequate value, then by all means they should. What I don't believe is that any VA solution can be generalized as the "best" approach in all cases. It must come down to what works best in a given situation, or one risks arguing ideology rather than logic.</I>

Wait a sec, I thought your article was arguing for black box and against code review. I've *always* (since 1999) argued for using a combination of the most cost-effective approaches for verifying security in an application, including automated tools, manual security testing, manual code review, and architecture review. When you set them up as alternatives, it sounds to me like arguing that a hammer is better than a screwdriver.

<I>Also, from my experience, access to source is often difficult to obtain; maybe not for all, don't know about most, but certainly more than a trivial sum.</I>

We review applications totalling millions of lines of code every month. Access to code is much less of an issue once people see how much better the results are for the same cost. (We charge the same whether we get access to the source code or not; we just do a better job if the code is provided.) Having the source code available makes the verification *faster*.

<I><B>Accuracy is not exactly a cut-and-dried subject</B>
I submit code reviews may claim better accuracy on specific types of vulnerabilities (e.g. SQL Injection). At the same time, would you agree code reviews often lack the context of true exploitability, which is a key part of risk management? For example, a source code review might uncover a code-level issue, but one not externally exploitable (or not easily) for a dozen or more reasons. And if the issue is not exploitable, then it may/should be prioritized differently. Secondly, not all high-severity and easy-to-exploit vulnerabilities (e.g. Predictable Resource Location) can be found in the Web application code. Agree? Black box can provide organizations with these data points.</I>

Code review has a *different* context than a black-box review. Actually, there is far more useful context in the code than in the black-box perspective (not that black box doesn't have some useful context of its own). But the lack of context is *why* we call it "black" box.

I believe that the focus on exploitability leads organizations to make very bad security decisions. It sends the message that any programming practices are okay, as long as they happen to be protected by a random alignment of network, firewall, and server configuration. By prioritizing this way, you will always be chasing vulnerabilities and never stamping them out of your organization permanently.

To illustrate: in a recent review, we identified nine bad programming patterns that led to several thousand XSS problems in the code. Some of these may not be exploitable, although I suspect with enough work we would be able to exploit most. (Note that commercial scanning tools found a *very* small percentage of these problems.) But ALL of that exploitability-analysis effort is wasted. Putting in a strong output-encoding solution like ESAPI is faster and provides far more assurance.

By the way, of course "predictable resource location" issues can be found statically. A "code" review should include both the source code baseline AND the entire web application as deployed. This allows you to see any misdeployed resources statically. Again, you chose a bad example; it's way easier than scanning for them blindly. Why poke blindly for issues that you can just verify easily if you're inside?

<I><B>Faster perhaps, but I said "scalable"</B>
I wouldn't bet on who would win the "John Henry challenge" for the Web applications of a given website. I would, though, if the race were on something close to the scale of the Internet, or maybe just the top 1 percent of sites (impossible, I know). I don't believe you are suggesting there is a sufficient supply of qualified source code reviewers available to cover all the Web code we've spent the last 15 years building.</I>

How does the Internet scale of the problem help your argument? Using a poor approach doesn't somehow get more scalable when you apply it to a staggeringly sized problem. The only path to fixing application security at Internet scale is to get software organizations to adopt a responsible application security program (foundation, implementation, verification, and management). I believe we're moving out of the time of "hey, you've got a problem" approaches like black-box scanning; they need to step aside for "here's how to fix it" solutions. Chasing vulnerabilities by scanning just gets you farther and farther behind. (BTW, I like the term "security backlog" a lot; very Agile.)

<I>I'd also claim black boxers could identify an organization's websites and at least scan/assess them with some level of comprehensiveness many orders of magnitude faster than white boxers could obtain the code and do the same by their standards. I have statistics to back up my time estimates; I'd be very much interested in learning yours if amicable.</I>

Could you explain what you mean by "some level of comprehensiveness"? Would you be willing to say which of the Application Security Verification Standard (ASVS) requirements you cover? We typically do Level 2 and Level 3 reviews, and occasionally a Level 4 for truly critical applications. Are you including the time it takes to eliminate false positives, evaluate the likelihood and impact, and provide detailed remediation guidance? In any case, whatever level of rigor you choose, it's more cost-effective to have the code available than not.

<I><B>The Cost of Apples vs. Oranges</B>
Sure, reviewing properly coded source can be a low-cost breeze, but that's not the challenge that lies in front of us. It's the millions of websites that are not coded properly, that do not religiously use parameterized queries, which were the crux of my guerrilla-math estimates. Surely these would take much longer to analyze line by line. I'd be interested in getting your estimates of time/cost for X LoC of spaghetti ASP or whatever else you feel comfortable sharing.</I>

Using SLOC is the wrong metric. Verification work doesn't scale linearly with SLOC because the security controls aren't randomly sprinkled across the baseline. For each of the major security areas, we verify:
 1) that a security control is present
 2) that the security control works properly
 3) that the security control is used where it needs to be
 4) that the security control is configured properly

By its nature, scanning is a weak form of verification. Negative signatures match on problems, but you have no idea how much coverage you're really getting. The positive verification approach (as suggested in ASVS) is the way to gain assurance. As I said before, the hybrid approach is the most cost-effective way to verify.

I have the easy side of this argument because I'm arguing that you should have both apples and oranges. We use the most cost-effective technique to verify the security of applications. When it's possible to automate, great (not that often, really). Most often, we do some dynamic testing to see how the application behaves and then verify the controls in the code. Every application is different, and the most cost-effective technique on one may not be for the next.

<I><B>But how much faster to fix?</B>
Your suggestion is that it's much harder and takes longer to fix a vulnerability with black-box results (over white-box) because the results lack the specific line of code. You might be right, but I'm thinking that if there is a difference, the delta is rather small. I don't know if anyone has even anecdotal evidence of whether or not this is true, so I believe it's hard for either of us to argue. More discussion is warranted.</I>

Seriously, we do this all the time. Understanding the relationship between the URL space and the software baseline is time-consuming. You can push this cost down onto the developers to artificially lower the cost associated with scanning, but it's misleading. And developers make way more mistakes trying to implement weak (black-box only) findings. When we come in after organizations have "fixed" black-box findings, we see all kinds of crazy one-off hacks put in place. They make the scanner happy, but the software is still horribly vulnerable.

<I>On the strategy front, we have a lot more to talk about. I'm gathering my thoughts now and hope to share them with you soon.</I>

Looking forward to it!

Erwin Geirnaert (2009-02-23T10:31:00-08:00)

The comments started off great, but in the end somebody started bashing a commercial company, and I don't agree there.

If we live in a <B>static environment</B> I agree that a code review with tools like static analysis combined with coding guidelines, security frameworks like OWASP ESAPI, ...
will result in a secure application, but you will still need to verify the secure deployment in the web-app space.

This can only be verified with security testing, not with code review or audits of the environment. I have seen enough cases of secure applications using strong authentication being deployed in a default way, reducing the security level to zero.

Also, back to the original article: organizations that are into application security and request a code review or security test are already aware of SQLi or XSS and will have developed something to protect against these vulnerabilities, even if just "good enough".

dre (2009-02-23T03:19:00-08:00)

@ Jeremiah:

<I><B>Accuracy is not exactly a cut-and-dried subject</B>
I submit code reviews may claim better accuracy on specific types of vulnerabilities (e.g. SQL Injection). At the same time, would you agree code reviews often lack the context of true exploitability, which is a key part of risk management? For example, a source code review might uncover a code-level issue, but one not externally exploitable (or not easily) for a dozen or more reasons. And if the issue is not exploitable, then it may/should be prioritized differently.</I>

I do not think that code reviews lack the context of true exploitability, especially in the case of SQLi. Also, exploitability is often not part of a proper risk analysis; it doesn't "have to be". There are many metrics to look at, exploitability factors are usually weighted much lower than other factors, and it's often perfectly OK if they are missing. Prioritization of any software-defect fix may or may not factor impact into the result. The result is that you have some bugs, and the goal is to fix all of them in the shortest amount of time with the fewest resources. Show me code that is obviously exploitable, and I'll show you code that is obviously not exploitable. Which would any development organization rather spend its time on?

Your perspective is from the classic penetrate-and-patch world. Web applications don't live in this world. There is no need to penetrate, and patches don't exist. You don't patch web applications; you re-code them. Better yet, you refactor them. Fixing every SQLi in modern dev shops could be as simple as using a template, a code generator, or some metaprogramming concepts. What? You don't have a modern dev shop that already uses prepared statements for performance reasons? I guarantee you'll be spending less money in the long run making your dev shop more agile (to use the wrong word for what I actually mean) than spending more money on security products/services, especially security products/services which require costly "proof of exploitability and impact analysis". If an organization wants to start making spending cuts on its security, start by proving the right things (good code; modern dev shops; a performance-enhanced, scalable query language; modern risk analysis) and avoid proving the wrong things (we are exploitable; people can steal information from us in this exact way, and therefore, now that somebody knows exactly how to do it, it will actually probably happen, since we also don't know how to fix it).

<I><B>Faster perhaps, but I said "scalable"</B>
I wouldn't bet on who would win the "John Henry challenge" for the Web applications of a given website. I would, though, if the race were on something close to the scale of the Internet, or maybe just the top 1 percent of sites (impossible, I know). I don't believe you are suggesting there is a sufficient supply of qualified source code reviewers available to cover all the Web code we've spent the last 15 years building. I'd also claim black boxers could identify an organization's websites and at least scan/assess them with some level of comprehensiveness many orders of magnitude faster than white boxers could obtain the code and do the same by their standards. I have statistics to back up my time estimates; I'd be very much interested in learning yours if amicable.</I>

There are exactly enough code reviewers available to cover all of the web code we've spent the last 15 years building. In fact, most of them wrote that web code, or were responsible for signing off on it before it went to production. It's the developers!

Companies like WhiteHatSec (SaaS vendors) and the people who work for these companies are often blinded to the fact that a majority of the rest of the world uses published open-source components to build their applications, and that they actually want to continue to do so. Am I declaring PaaS DOA? No, but certainly the Google or Force API would be useless without an open and well-documented API, which only proves my point further.

There is no such thing as a closed-source application. Somebody wrote it. Somebody has access to a repo somewhere. Most of the build, however, is probably not in that same repo. It's probably third-party components downloaded directly from the Internet (or copied out of a book). Again, welcome to the differences between the old world (penetrate-and-patch) and the new world (refactoring web applications).

As a final point, you have entirely missed Jeff's and my points about the need to re-run web application security scanners (or other manual black-box functional testing) over and over again. Regression testing cannot be improved with functional testing; it is what it is. In the case of a well-designed web application, preferably using managed code, it is quite possible to eliminate regression testing and replace it with continuous-prevention development. It's called "Red-Green-Refactor", and all you need to do is write a test that asserts the defect's fix. Then you start finding even more defects that are related or similar to the original defect.

The best part about SDLC improvements is that there is no one answer like continuous-prevention development (although this is my favorite, since it seems to apply to a large majority of shops out there today). A dev shop doesn't have to be using TDD to gain the advantages, because there's probably another way to do it in a different paradigm. Not using Agile/XP? Well, even Cleanroom methodology seems to provide layers that solve this problem. Not using MDA/MDE with code generation in your OO language? Certainly DbC is still available; it's almost the defining characteristic of OO development.

So, yes: a web application security scanner running in default mode and a 17-year-old with Burp Suite (aka the WhiteHatSec model) are going to be a lot faster than a thirtysomething with O2 and SharpDevelop. However, the thirtysomething reviewer is going to be able to explain to developers how to do what he/she does, come up with coding standards to prove un-exploitability, and build an SDLC that removes the need for long-duration and antiquated regressions. The scanner + 17-year-old are going to be able to run that test again. And again. And again. And fix nothing.

Even in the "Sean Fay / Geoff Morrison" Pepsi Challenge (N.B. I think BlackHat calls this "Iron Chef"), the judges claimed that Pepsi tasted better than Coke, even though Coke sells more / wins. Wait. I got that backwards. Fortify does sell more than WhiteHatSec. And it does taste better.

<I>Surely these would take much longer to analyze line by line. I'd be interested in getting your estimates of time/cost for X LoC of spaghetti ASP or whatever else you feel comfortable sharing.</I>

This is why I always separate the words "Black-Box" from "Dynamic Analysis". They are not the same. Dynamic Analysis is White-Box, not Black-Box. Nobody understands this except for me and basically all of the people who invented and have been using these words outside of the security community and Gartner for almost 50 years.

Nobody in their right mind analyzes code "line by line" unless it's less than 1 kloc. Usually, if you can read code, then you also know how to code. Therefore, you also understand the power of automation and recursion. You may also understand the use of a debugger, possibly even code-coverage tools. If I had "tens of klocs" of Classic ASP code to review, you can bet that I'd be using RTTI or fall back on something else from the type system to help me find bugs, let alone simple/stupid security bugs like SQLi.

This is the same approach as using a SAST tool with managed code. It's why commercial SAST vendors don't support Python or Ruby (or other dynamic languages that don't have a static type system). The reason that AST models of data flow work so well on managed code is that type system, but it also helps that the bytecode (see again: I did not say source code) is available to fall back on. In other words, dynamic languages like Groovy that produce Java bytecode may be great candidates for SAST review (although certainly this isn't perfect today, just as reviewing Spring with these tools is not perfect today).

<I><B>But how much faster to fix?</B>
Your suggestion is that it's much harder and takes longer to fix a vulnerability with black-box results (over white-box) because the results lack the specific line of code. You might be right, but I'm thinking that if there is a difference, the delta is rather small. I don't know if anyone has even anecdotal evidence of whether or not this is true, so I believe it's hard for either of us to argue. More discussion is warranted.</I>

Jeremiah said it best:
<I>Secondly, not all high-severity and easy-to-exploit vulnerabilities (e.g. Predictable Resource Location) can be found in the Web application code. Agree? Black box can provide organizations with these data points.</I>

This is exactly correct. Users of both SAST/DAST have discovered that DAST is superior at finding issues like Directory/Path Traversal, Forced Browsing, PRLs, etc. They have also found that SAST and/or manual review are better at finding everything else, especially SQLi and XSS.

See also: every organization or individual that has spent time with both SAST and DAST. Is that enough anecdotal evidence?

Jaydeep Dave (2009-02-22T21:50:00-08:00)

Extracting data from an SQL injection is not laborious work; it's a matter of time.

Would like to prove it.

jaydipdave@gmail.com

Jeremiah Grossman (2009-02-22T21:24:00-08:00)

Hey Jeff, alright! Now we're getting somewhere!

My intention was not to bash any VA approach, but instead place the monetary costs into context. If a firm can afford a full source review enterprise-wide, and perceives adequate value, then by all means it should. What I don't believe is that any VA solution can be generalized as the "best" approach in all cases. It must come down to what works best in a given situation, or one risks arguing ideology rather than logic.
Also, from my experience, access to source is often difficult to obtain; maybe not for all, don't know about most, but certainly more than a trivial sum.

<B>Accuracy is not exactly a cut-and-dried subject</B>
I submit code reviews may claim better accuracy on specific types of vulnerabilities (e.g. SQL Injection). At the same time, would you agree code reviews often lack the context of true exploitability, which is a key part of risk management? For example, a source code review might uncover a code-level issue, but one not externally exploitable (or not easily) for a dozen or more reasons. And if the issue is not exploitable, then it may/should be prioritized differently. Secondly, not all high-severity and easy-to-exploit vulnerabilities (e.g. Predictable Resource Location) can be found in the Web application code. Agree? Black box can provide organizations with these data points.

<B>Faster perhaps, but I said "scalable"</B>
I wouldn't bet on who would win the "John Henry challenge" for the Web applications of a given website. I would, though, if the race were on something close to the scale of the Internet, or maybe just the top 1 percent of sites (impossible, I know). I don't believe you are suggesting there is a sufficient supply of qualified source code reviewers available to cover all the Web code we've spent the last 15 years building. I'd also claim black boxers could identify an organization's websites and at least scan/assess them with some level of comprehensiveness many orders of magnitude faster than white boxers could obtain the code and do the same by their standards. I have statistics to back up my time estimates; I'd be very much interested in learning yours if amicable.

<B>The Cost of Apples vs. Oranges</B>
Sure, reviewing properly coded source can be a low-cost breeze, but that's not the challenge that lies in front of us. It's the millions of websites that are not coded properly, that do not religiously use parameterized queries, which were the crux of my guerrilla-math estimates. Surely these would take much longer to analyze line by line. I'd be interested in getting your estimates of time/cost for X LoC of spaghetti ASP or whatever else you feel comfortable sharing.

<B>But how much faster to fix?</B>
Your suggestion is that it's much harder and takes longer to fix a vulnerability with black-box results (over white-box) because the results lack the specific line of code. You might be right, but I'm thinking that if there is a difference, the delta is rather small. I don't know if anyone has even anecdotal evidence of whether or not this is true, so I believe it's hard for either of us to argue. More discussion is warranted.

On the strategy front, we have a lot more to talk about. I'm gathering my thoughts now and hope to share them with you soon.

Arshan Dabirsiaghi (2009-02-22T20:41:00-08:00)

This seems to be a good place to let people know about a new tool we just released: <A HREF="https://www.owasp.org/index.php/Category:OWASP_Scrubbr" REL="nofollow">Scrubbr</A>, a standalone GUI tool for helping you find stored XSS attacks in your database.

It's not perfect, but it's light years beyond:

> select * from foo where bar like '%<script%'

Anyway, you can read about it at the link above or download it directly from <A HREF="http://code.google.com/p/owaspscrubbr/downloads/list" REL="nofollow">the Google Code page</A>.
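[Editor's note: a rough sketch of the kind of naive database sweep Arshan's one-liner represents: walk every text column and flag values containing script markup. The `comments` table is invented for illustration, and a real scrubber needs far smarter detection than a substring match, which is exactly the point of the comment above.]

```python
import sqlite3

def find_stored_xss(conn):
    """Naive sweep: report (table, rowid, value) for any value that
    contains a script tag. Substring matching misses encoded payloads;
    that is the weakness a real tool must address."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # Table and column names come from the schema itself;
        # only the search pattern is passed as a bound parameter.
        cols = [r[1] for r in conn.execute(f'PRAGMA table_info("{table}")')]
        for col in cols:
            rows = conn.execute(
                f'SELECT rowid, "{col}" FROM "{table}" '
                f'WHERE "{col}" LIKE ?', ("%<script%",))
            hits.extend((table, rowid, value) for rowid, value in rows)
    return hits

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")
conn.execute("INSERT INTO comments VALUES ('hello world')")
conn.execute("INSERT INTO comments VALUES ('<script>alert(1)</script>')")
```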
summary of where SQL injection has evolved t...Great summary of where SQL injection has evolved to. But I agree with Andre that the code-review bashing is really overdone.<BR/><BR/>First of all, getting access to the code is generally no harder than gaining access to test systems on an internal network. You need a process and some supporting technology, but we do it all the time with critical applications from the largest enterprises. (If you're reviewing your own code this is even less of a problem).<BR/><BR/>Second, the math is wildly in favor of code review. To test an application requires fuzzing potentially thousands of parameters, detecting success is spotty at best, and there are false alarms to deal with. In a code review, you can find and verify all the actual queries (generally not that many) quickly and with damn near perfect accuracy.<BR/><BR/>I'll personally take the John Henry challenge on any application any time to prove that manual human code review is faster and more accurate than any scanning approach. It's even more efficient when applied to an entire application portfolio.<BR/><BR/>The financial argument you set out against code review is also specious. Using numbers from the entire Internet is pretty misleading. If you used scanning as the basis finding SQL injection in the entire Internet, it would cost more, and would be far less accurate.<BR/><BR/>To address your argument directly, verifying that a 500,000 LOc application doesn't have SQL injection with manual code review can cost a little as 10 minutes if it uses parameterized queries properly everwhere. That's 100% coverage with a false alarm rate of exactly zero. If they've used dynamic queries, it can take up to a few minutes for each query in the application (i.e. not very long).<BR/><BR/>The cost to do the same thing with scanning is dramatically higher. Tailoring the scanner to the application takes time. Running the scan takes time (even if automated). 
And verifying that the findings are real (aka eliminating the false alarms) takes time (a lot of time). Note that the best way to eliminate false alarms is to look at the code, so what have you gained. Also, if you have to prove the vulnerability with an exploit, crafting it blind can take hours or days, but with the code it is trivial. And of course there are the missed negatives to consider.<BR/><BR/>But the real problem is that the vast majority of the cost comes from the "fixing" part. That's why giving developers excellent findings that indicate the exact line of code and the exact remedy tailored to the code (not a generic recommendation) is so important. Fixing the problem takes much longer if all you give developers is a broken URL and parameters and not a specific line of code and custom recommendation.<BR/><BR/>There are other problems in application security where static analysis, penetration testing, and scanning are the most efficient approach. But if you want to find SQL injection, there's no question that getting into the code just the right approach.<BR/><BR/>Let me finish by zooming out even a bit more. ALL of this is just about *finding* SQL injection. Finding vulnerabilities is just a small part of a responsible application security program. The whole point of it is to show you where you made mistakes earlier in the process. If you really want to eliminate SQL injection from your enterprise, set a standard requiring the use of parameterized queries, train developers to follow it, use code review process to verify that they did it, and manage the process with some metrics. 
It takes about a year and you'll never have to deal with it again.<BR/><BR/>I agree with the rest of your post about the size of the challenges we face, and the need for a comprehensive approach to application security.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-13756280.post-80798602283058434592009-02-22T15:17:00.000-08:002009-02-22T15:17:00.000-08:00Great article, with some very poignant topics and insights. I hope this sends a message out to the SQLi-happy pen-testing companies out there that cling to their remote execution and database dumping "showy" kung-fu.<BR/><BR/>However, I also think the industry is growing tired of a focus against code review. Lighten it up a little bit. Everyone knows that web application security scanners are the worst way to find SQLi bugs.<BR/><BR/>Also: you cling to old terminology such as white-box and black-box. Of course it is possible to do both (i.e. hybrid), and white-box does not have to mean static analysis; it can be partially or fully automated through a security review tool, and/or manual code review.<BR/><BR/>Regarding "white-box testing", you said, "Before considering this as a scalable solution begin by asking yourself if executive management would allocate enough resources to perform source code reviews on every website every time they change". This is incorrect. There are many different kinds of white-box analysis methods, not just the popular kind that you specifically refer to (i.e. I think you're talking about Fortify or Ounce used by lead developers and/or security "experts").<BR/><BR/>The best thing about white-box analysis is that it solves *exactly* what you claim is needed: "To solve the SQL Injection problem, preferably in the code, first we must identify what is broken". 
When using code comprehension, dynamic analysis (no, not the black-box functional testing kind that web application security scanners do, I'm talking about walking the code paths with the binaries, bytecode, and/or source code), code coverage (e.g. Fortify PTA), security review tools, manual code review, unit testing, integration unit testing, component testing, design review, checking acceptance testing against requirements, and about a thousand other different methods...<BR/><BR/>Almost all of the techniques in the above paragraph get to the root cause and source of the SQLi problem. Web application security scanners, by themselves, never do. They can't. The reality is that web application security scanners alone, even when backed by human experts such as the people who wrote the tools in the first place, require more regression testing than any white-box method. WAF requires more regression testing than any white-box method. They never solve the root cause, and thus are always going to create unknown situations.<BR/><BR/>While static analysis tools have a notoriously high false-positive rate, they only have it once (i.e. the first time the tool is thoroughly run, which is normally the only time you need to run one). WAF and web application security scanners share a continuously high rate of false positives and false negatives that cannot be significantly reduced through time and effort. Nobody is ever going to get better than 65% coverage using these methods, and the typical rate today is less than 10% 1sigma and 30% 4sigma.<BR/><BR/>Your last paragraph speaks volumes. I guess the only part where we differ in opinion is that you think Vulnerability Assessment for web applications is the WhiteHatSec method, and I think the same Vulnerability Assessment is everything *but* the WhiteHatSec method. 
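To illustrate the false-negative problem with bolt-on filtering (a toy Python sketch; the blacklist regex is an assumption for the example, though real WAF signature sets share the same structural weakness):

```python
import re
from urllib.parse import unquote

# Toy blacklist filter of the "huge regex blacklists" variety.
# It inspects the raw request string for SQLi signatures.
blacklist = re.compile(r"('|--|;|\bOR\b)", re.IGNORECASE)

attack  = "' OR '1'='1"
evasion = "%27 %4FR %271%27=%271"   # the same payload, URL-encoded

print(bool(blacklist.search(attack)))    # True: the raw payload is caught
print(bool(blacklist.search(evasion)))   # False: the encoded one sails through
print(unquote(evasion) == attack)        # True: yet it decodes to the same attack
```

The filter can always be tightened for the last bypass someone found, but only a root-cause fix in the code ends that regression cycle.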
Also, the WhiteHatSec choice of Web Application Firewalls is incorrect -- instead of F5, Cisco, Citrix, and Imperva -- WAF operators should be looking into Microsoft AntiCSRF / AntiXSS SRE, GDS-Security SPF, Java Enterprise HDIV, Mod-Security using Lua (and not huge regex blacklists), etc.<BR/><BR/>While some of the recommendations I propose may not be fully PCI-DSS requirement 6.6 compliant, I do understand that there are plenty of alternate and compensating controls around every PCI-DSS requirement.drehttps://www.blogger.com/profile/17414510788948258195noreply@blogger.com