Monday, January 26, 2009

Best-Practices are partly responsible for SQL Injection woes

Security Horizon invited me to contribute an article for their free Winter 2009 edition of Security Journal. I took the opportunity to discuss several very important aspects of SQL Injection that are not well understood: why certain best-practices may have contributed to the ongoing problem, how black and white box vulnerability testing is impacted, why the good guys are at a substantial disadvantage to the bad guys, how the problem could potentially be solved and what it might cost us, and so on. Especially timely material considering the ongoing exploitation. Enjoy!

SQL Injection, Eye of the Storm
In 2008 SQL Injection became the leading method of malware distribution, infecting millions of Web pages and foisting browser-based exploits upon unsuspecting visitors. The ramifications to online businesses include data loss, PCI fines, downtime, recovery costs, brand damage, and revenue decline when search engines blacklist them. According to WhiteHat Security [1], 16 percent of websites are vulnerable to SQL Injection. This is likely under-reported, given that the statistics are largely based on top-tier Web properties that employ a website vulnerability management solution to identify the problem. The majority of websites do not, and as such may be completely unaware of the extent of the issue. In addition, some recommended security best-practices have ironically benefited malicious hackers. Websense now reports that "60 percent of the top 100 most popular Web sites have either hosted or been involved in malicious activity in the first half of 2008." Let’s examine the forces that have aligned to create the storm that allows SQL Injection to thrive.
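
To ground the discussion, here is a minimal, self-contained sketch (Python with the standard-library sqlite3 module; the table, data, and function names are purely illustrative) contrasting the string-concatenation flaw behind these mass infections with the parameterized fix:

    import sqlite3

    # Illustrative schema and data; any DB-API driver behaves the same way.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    def login_vulnerable(name, password):
        # The classic flaw: user input concatenated straight into the SQL text.
        # A password of  ' OR '1'='1  rewrites the WHERE clause into a tautology.
        query = ("SELECT name FROM users WHERE name = '" + name +
                 "' AND password = '" + password + "'")
        return conn.execute(query).fetchall()

    def login_safe(name, password):
        # Parameterized query: values travel separately from the SQL text,
        # so input can never alter the statement's structure.
        return conn.execute(
            "SELECT name FROM users WHERE name = ? AND password = ?",
            (name, password),
        ).fetchall()

    print(login_vulnerable("alice", "' OR '1'='1"))  # [('alice',)] -- bypassed
    print(login_safe("alice", "' OR '1'='1"))        # [] -- blocked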

3 comments:

Anonymous said...

Finding SQL injection dynamically always struck me as a little foolish. It's orders of magnitude faster and more accurate to look for it in the code. Getting the code isn't that hard if you explain the benefit. But whatever your math says for the cost of code review (and static analysis), the dynamic approach is radically more expensive. It's very possible to stamp out SQL injection in an organization, and it does take education, lifecycle, and a good set of security controls. But I don't see the role for dynamic scanning or testing here.
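
To illustrate what "look for it in the code" means at its simplest, here is a toy sketch (Python; the regex is illustrative -- real static analyzers trace data flow rather than grepping lines) of the pattern a source review keys on:

    import re
    import sys
    from pathlib import Path

    # SQL verbs inside a quoted string that is immediately concatenated
    # with something else: the telltale shape of hand-built queries.
    SUSPICIOUS = re.compile(
        r"""["'](?:SELECT|INSERT|UPDATE|DELETE)\b[^"']*["']\s*\+""",
        re.IGNORECASE,
    )

    for path in sys.argv[1:]:
        text = Path(path).read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if SUSPICIOUS.search(line):
                print(f"{path}:{lineno}: possible concatenated SQL: {line.strip()}")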

Anonymous said...

@Jeff,

I completely agree with you long-term... dynamic testing will not solve the problem for any organization. However, in my experience, a single dynamic black or white box test is often the initial catalyst that opens the eyes of organizations that have no idea of the impact. As a first step, it's relatively low-cost compared to having the flaw discovered by an outside malicious entity. Next steps are training, lifecycle management, and periodic code reviews.

Anonymous said...

This is one of the few cases where I would strongly advocate ongoing black and white box analysis as equally valuable. (By white box I mean source & permissions auditing -- DB permission auditing can save you a lot of headache and heartache if you make mistakes in your SQL code.)

1. There is nothing foolish about black-box scanning/testing for verification of defects.

Defect detection is about throwing the bird into the running turbine engine of the jet and measuring what happens. I sure like the fact they measure this on running jets, vs. having a strong faith in the blueprints or materials.

2. There's nothing foolish about black-box discovery of SQLi. In fact, this is exactly what the black-hats are doing, for many of the same reasons that one should test for it black-box.

It is, mathematically speaking, not the most ideal way to discover this defect per-query. It *is* mathematically speaking the most ideal way to discover this defect per-Internet.

It's a matter of scope. :)
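
For the curious, the crudest form of that black-box probing looks something like this (a sketch in Python using the third-party requests library; the target URL, parameter name, and error signatures are hypothetical samples):

    import requests  # third-party: pip install requests

    # A tiny sample of real database error strings that leak into responses.
    ERROR_SIGNATURES = [
        "you have an error in your sql syntax",         # MySQL
        "unclosed quotation mark after the character",  # SQL Server
        "syntax error at or near",                      # PostgreSQL
    ]

    def probe(url, param):
        # Compare a benign request against one carrying a lone single quote.
        baseline = requests.get(url, params={param: "1"}, timeout=10)
        tainted = requests.get(url, params={param: "1'"}, timeout=10)
        body = tainted.text.lower()
        if any(sig in body for sig in ERROR_SIGNATURES):
            return "likely injectable: database error leaked into the response"
        if tainted.status_code != baseline.status_code:
            return "suspicious: a lone quote changed the response status"
        return "no obvious signal (blind techniques would be the next step)"

    print(probe("http://testsite.example/item.php", "id"))  # hypothetical target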

3. Agreed that per-application SQLi is orders of magnitude faster to find in source. In fact, someone skilled can do it by hand (sans automation) faster and more accurately than most automation of any kind -- source, binary, dynamic injection, etc. (though I have not used the newest iterations of Fortify/Ounce/etc.).

This is one of the few syntax areas where I really preferred to go to source as a consultant, onsite, looking at a few applications. If they have a clearly defined or abstracted data-access layer, this is easy once you get access to the code.

The desktop "scanners" were barely capable of finding SQLi low-hanging fruit only a year ago, so I completely empathize with frustrations about using them.


4. I have not seen organizations stamp out SQLi, though I can theorize it. I've certainly seen large strides in squashing it, through all the things you [Jeff] mention and through modern frameworks/abstraction layers that try to keep you from shooting yourself in the foot.

5. The role of dynamic testing & verification for SQLi, even for organizations that think they have squashed it in source, is two-fold:

5.1 Scope
5.2 Due-diligence

It is hard to review the SQL code of dozens to hundreds of applications at a time, especially with a spaghetti mess of SQL queries spread across different layers. This mess often includes third-party software, deprecated software, and stuff written by the New Guy.

While all SQLi syntax issues may be solvable in source in all three of those situations, the third-party and deprecated software often does not make business dollars & sense to address in this manner.

And in the latter case -- the reality is that the New Guy will make mistakes, and most organizations want to verify on a regular basis. You can do cool, trendy pair programming or peer code review, or make the new guy verify his queries against all your prebaked unit tests, and all of that is *A Good Thing* (depending, I guess, on your organization). But the Friendly New Guys often have a way of borking up code in one or two little places that aren't obvious, and may be many audits away from being spotted.
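
A minimal sketch of what those prebaked unit tests might look like (Python's unittest against an in-memory sqlite3 database; the schema and get_user() are stand-ins for your own data-access layer):

    import sqlite3
    import unittest

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    def get_user(name):
        # Parameterized on purpose; point the test at the New Guy's query instead.
        return conn.execute(
            "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

    # Canonical payloads: each should match zero rows if the query is sound.
    PAYLOADS = ["' OR '1'='1", "admin'--", "'; DROP TABLE users; --"]

    class SqlInjectionRegression(unittest.TestCase):
        def test_payloads_match_nothing(self):
            for payload in PAYLOADS:
                self.assertEqual(get_user(payload), [],
                                 "payload matched a row: " + payload)

    if __name__ == "__main__":
        unittest.main()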

Here dynamic testing is also of great value, providing verification on running production software -- finding these little borkages that the bad guys are going to find, blindly and en masse, with their bots today.

If you had to pick *only one*:

I'd be hard pressed, without better numbers, to give a statistically proven "pick one".

If I only had one application and full control of the code -- I'd say "go to source".

If I had many applications, and did not have full control of all the code, I would have a hard time trusting *only* source.

Everyone in the world is getting free unauthenticated black box SQLi pen-tests now, so I think most enterprises are getting an education in the importance of doing *something*. I think they will start to figure out what works for them in the next year or two, and we will see which approaches win out.

ciao

-ae
