Wednesday, December 22, 2010

Which mountain would you rather climb?

Some Web application vulnerability scanners, whether dynamic or static analysis, are designed for comprehensiveness over accuracy. For others, the exact opposite is true. The tradeoff is that as the number of "checks" a scanner attempts increases, the number of findings, false-positives, scan times, site impact, and required man-hour investment all grow exponentially. To allow users to choose their preferred spot between those two points, comprehensiveness and accuracy, most scanners offer a configuration dial typically referred to as a "policy." Policies essentially ask, "What do you want to check for?" Whichever direction the comprehensiveness dial is turned will have a profound effect on the workload required to analyze the results. Only this subject isn't discussed much.

Before going further we need to define a few terms. A "finding" is something reported that's of particular interest. It may be a vulnerability, the lack of a "best-practice" control, or perhaps just something weird warranting further investigation. Within those findings there are sure to be "false-positives" (FP) and "duplicates" (DUP). A false-positive is a vulnerability that's reported but really isn't one, for any of a variety of reasons. Duplicates are when the same real vulnerability is reported multiple times. "False-negatives" (FN), which reside outside the findings pool, are real vulnerabilities, carrying true organizational risk, that the scanner failed to identify for whatever reason.
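To make these terms concrete, here is a minimal triage sketch in Python. The findings and their "status" labels are entirely hypothetical, the sort of labels a human reviewer assigns during validation rather than anything a scanner emits on its own; the point is simply how the FP/DUP noise rate discussed below gets computed.

    # Hypothetical post-scan triage: bucket findings and compute the noise rate.
    findings = [
        {"id": 1, "class": "SQL Injection", "status": "real"},
        {"id": 2, "class": "Cross-Site Scripting", "status": "false-positive"},
        {"id": 3, "class": "Cross-Site Scripting", "status": "duplicate"},
        {"id": 4, "class": "Cross-Site Scripting", "status": "real"},
        {"id": 5, "class": "SQL Injection", "status": "false-positive"},
    ]

    total = len(findings)
    real = sum(1 for f in findings if f["status"] == "real")
    noise = total - real  # false-positives plus duplicates

    print(f"{total} findings, {real} real, {noise / total:.0%} FP/DUP rate")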

Let's say the website owner wants a "comprehensive" scan, a scan that will attempt to identify just about everything modern-day automation is capable of checking for. In this use-case it is not uncommon for scanners to generate literally thousands, often tens or hundreds of thousands, of findings that need to be validated in order to isolate the ~10% that's real (yes, a 90% FP/DUP rate). For some, spending many, many hours vetting is acceptable. For others, not so much. That's why the larger product vendors all have substantial consulting divisions to handle deployment and integration post-purchase. Website owners can also opt for a more accurate (point-and-shoot) style of scan, where comprehensiveness may be cut down by say half, but thousands of findings become a highly accurate few hundred or few dozen, decreasing the validation workload to something manageable.

At this point it is important to note, as illustrated in the diagram, that even today's top-of-the-line Web application vulnerability scanners can only reliably test for roughly half of the known classes of Web application attack. These are the technical vulnerability (aka syntax-related) classes, including SQL Injection, Cross-Site Scripting, Content Spoofing, and so on. This holds true even when the scanner is well configured (logged in and forms filled out). Covering the other half, the business logic flaws (aka semantic-related) such as Insufficient Authentication, Insufficient Authorization, Cross-Site Request Forgery, etc., requires some level of human analysis.

With respect to scanner output, an organization's tolerance for false-negatives and false-positives, and the personnel resources it can invest, should dictate the type of product or scan configuration selected. The choice becomes a delicate balancing act. Dial scanner comprehensiveness up too high and you get buried in a tsunami of findings. What good is comprehensiveness if you can't find the things that are truly important? On the other hand, dialing down the noise too far reduces the number of vulnerabilities identified (and hopefully fixed) to the point where there's marginal risk reduction, because the bad guys could easily find one that was missed. The answer is somewhere in the middle, and it's a question of risk management.

About 20 km west of Mount Everest (29,029 ft. ASL) is a peak called Cho Oyu (26,906 ft. ASL), the 6th highest mountain in the world. The difference between the two is only about 2,000 ft. For some mountain climbers the physical difficulty, risk of incident, and monetary expense of the last 2,000 ft necessary to summit Everest is just not worth it. For others, it makes all the difference in the world. So, just like scanner selection, an individual decision must be made. Of course the vendor in me says just use WhiteHat Sentinel and we'll give you a lift to the top of whichever mountain you'd like. :)
Vendors take note: Historically, whenever I've discussed scanners and scanner performance, the comments have typically been superficial marketing BS with no willingness to supply evidence to back up the claims. As always I encourage open discourse, but respectfully, if you make claims about your product's performance, and I sincerely hope you do, please be ready to do so with data. Without data, as Jack Daniel has concisely stated, we'll assume you are bluffing, guessing, or lying.

Tuesday, December 21, 2010

Bug Bounty Programs Come to Website Security: What do they mean?

Recently I tweeted a passing thought: "I wonder if the final stage of maturity for website vulnerability management is offering a bug bounty program." This was stimulated by the news that Mozilla became the second company, following Google, to provide monetary rewards to security researchers who find and privately report website vulnerabilities. Only last year this idea would have been considered crazy. Sure, other organizations including Microsoft, Facebook, and PayPal already gladly accept third-party vulnerability disclosures without threatening legal action, but it's the financial compensation part that sets Google and Mozilla apart.

I'm sure others in the community are asking, as I am, whether website vulnerability bug bounty programs are a good idea to begin with, and whether such programs are an anomaly or the start of a 2011 trend.

If we posed the first question to bug hunting masters Charlie Miller, Alex Sotirov, and Dino Dai Zovi, there is no question how they'd answer: "No More Free Bugs." Not that all researchers must subscribe to this philosophy, it's a personal choice, but there certainly shouldn't be a stigma attached to those who do. The thing is, the bugs these gentlemen generally focus on reside in desktop-based software developed by large ISVs. Software that can be readily tested in the safe confines of one's own computer, where permission is not strictly required. Website vulnerabilities are, in a word, different.

Website vulnerabilities reside in the midst of a live online business, on someone else's network, where penetration-testing without permission is illegal and the results of which may cause degraded performance and downtime. Not that legalities ever really got in the way of a free pen-test; see the thousands of public cross-site scripting disclosures on XSSed.com. Still, while bug bounty programs can indeed be a good idea for a certain class of website owner, I think everyone would recommend thoughtful consideration before opening up the hack-me-for-cash flood gates.

What's most interesting to me is understanding why Google and Mozilla themselves believe they need a bug bounty program in the first place. It's not as though Google and Mozilla don't invest in application security or would depend on such an initiative. In fact, from my personal interactions, their level of application security awareness is top notch and their practices are among the most mature across the Web. They invest in source code reviews, security QA testing, penetration tests / scans conducted by insiders and third parties, developer training, standardized development constructs, threat modeling, and a collection of other Software Security Assurance (SSA) related activities. Activities most organizations are still coming up to speed on.

So Google and Mozilla have already done essentially everything our industry "recommends." Yet, as the multitude of negative headlines and unpaid vulnerability disclosures historically show, issues are still found by outsiders with annoying regularity. Personally I think that's where the motivation for a bug bounty program comes from.

Google and Mozilla probably view their bounty programs as a way to remove additional missed bugs from the vulnerability pool, remediate them in a manageable way, and foster community good will, all for the low, low price of a few hundred to a few thousand bucks each. Check it out. In the first two months of Google's program, it looks like they've paid out a few tens of thousands of dollars to three dozen or so researchers. Said another way, the PR benefit is that perhaps three dozen user-confidence-shaking news stories DIDN'T get published. All in all, for that price, suddenly the idea of paying "the hackers" doesn't sound as crazy.

It should be made crystal-clear that bug bounty programs are in no way a replacement for any part of an SSA or an SDL program, rather they are complementary and an opportunity to facilitate improvement. Also, bug bounty programs are not for everybody, and probably not even for most. Only those organizations that truly have their application security act together should even consider offering such a program.

For example, the organization should already have reasonably bug-free websites or they won't be offering attractively priced bounties for long. Budgets would run out fast and they'd be forced to suspend the program, which would be quite embarrassing. The organization must also have a strong process in place to receive, validate, respond to, act upon, and pay out for submissions. Next, as Mike Bailey, a self-proclaimed Narcissistic Vulnerability Pimp, elegantly stated, a "bounty program also involves an implicit commitment to fix bugs quickly." That's right, no sitting on bugs for a "reasonable" amount of time -- like months to a year or more. Finally, the organization will require a super-stable infrastructure capable of enduring sustained attack by hundreds or perhaps thousands of entities.

In my humble opinion, if an organization has all of this in place, then I'm confident in saying there is a correlation between bug bounty programs and website vulnerability management / SSA maturity. (Credit to Gunnar Peterson for the graphic.)

Jeff Moss, the man behind Black Hat and Defcon, recently encouraged Microsoft, a firm long opposed to paying for bugs, to offer a bounty program: "I think it is time Microsoft seriously consider a bug bounty program. They advanced the SDL, it is time for them to advance bounties." I've suggested the very same to Microsoft in person on more than one occasion. Veracode lists it as a 2011 infosec prediction. Everyone I know of has received a response similar to the following:

"We do not believe that offering compensation for vulnerability information is the best way we can help protect our customers." - Dave Forstrom, group manager of Microsoft Trustworthy Computing.

And there you have it. Is the website vulnerability bounty program phenomenon the start of a trend? Who can really say? Only time will tell.

Monday, December 20, 2010

Sandboxing: Welcome to the Dawn of the Two-Exploit Era

Exploitation of just ONE software vulnerability is typically all that separates the bad guys from compromising an entire machine. The more complicated the code, the larger the attack surface, and the more popular the product, the greater the likelihood of that outcome. Operating systems, document readers, Web browsers and their plug-ins are on today's front lines. Visit a single infected Web page, open a malicious PDF or Word document, and bang -- game over. Too close for comfort if you ask me. Firewalls, IDS, anti-malware, and other products aren't much help. Fortunately, after two decades, I think the answer is finally upon us.

First, let’s have a look at the visionary of software security practicality that is Michael Howard as he characterizes the goal of Microsoft’s SDL, "Reduce the number of vulnerabilities and reduce the severity of the bugs you miss." Therein lies the rub. Perfectly secure code is a fantasy. We all know this, but we also know that what is missed is the problem we deal with most often, unpatched vulnerabilities and zero-days. Even welcome innovations such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) only seem to slow the inevitable, making exploitation somewhat harder, but not stopping it entirely. Unless the battlefield itself is changed, no matter what is tried, getting hacked will always come down to just one application vulnerability. ONE. That’s where sandboxes come in.

A sandbox is an isolated zone designed to run applications in a confined execution area where sensitive functions can be tightly controlled, if not outright prohibited. Any installation, modification, or deletion of files and/or system information is restricted. The Unix crowd will be familiar with chroot jails. This is the same basic concept. From a software security standpoint, sandboxes provide a much smaller code base to get right. Better yet, realizing the security benefits of sandboxes requires no decision-making on the user’s behalf. The protections are invisible.
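To make the chroot-jail comparison concrete, here is a minimal Python sketch of the basic confinement idea: fork a child, lock its filesystem view inside a throwaway directory, and drop its privileges before it does any risky work. It must be run as root, the jail path and the uid/gid are hypothetical, and real sandboxes such as Chrome's go much further (seccomp filters, broker processes, restricted tokens on Windows); this only illustrates the concept.

    import os

    JAIL = "/tmp/jail"            # hypothetical throwaway directory
    os.makedirs(JAIL, exist_ok=True)

    pid = os.fork()
    if pid == 0:                  # child: the "sandboxed" side
        os.chroot(JAIL)           # filesystem root is now the jail
        os.chdir("/")
        os.setgid(65534)          # drop to an unprivileged group ("nobody")
        os.setuid(65534)          # drop to an unprivileged user
        # ... do the risky parsing / rendering work here ...
        os._exit(0)
    else:                         # parent: the trusted side
        os.waitpid(pid, 0)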

Suppose you are tasked with securing a long-established and widely-used application with millions of lines of insanely complicated code that's deployed in a hostile environment. You know, like an operating system, document reader, Web browser or a plug-in. Each of these applications contains a complex supply chain of software, cross-pollinated code, and legacy components created long before security was a business requirement or anyone knew of today's classes of attack. Explicitly or intuitively you know vulnerabilities exist, and the development team is doing its best to eliminate them, but time and resources are scarce. In the meantime, the product must ship. What then do you do? Place the application in a sandbox to protect it when and if it comes under attack.

That's precisely what Google did with Chrome, and recently again with the Flash plug-in, and what Adobe did with their PDF Reader. The idea is that the attacker would first need to exploit the application itself, bypass whatever anti-exploitation defenses are in place, then escape the sandbox. That's at least two bugs to exploit rather than just one, and the second bug, escaping the sandbox, is obviously much harder than the first. In the case of Chrome, you must pop the WebKit HTML renderer or some other core browser component and then escape the encapsulating sandbox. The same with Adobe's PDF Reader: pop the parser, then escape the sandbox. Again, two bugs, not just one. To reiterate, this is not to say breaking out of a sandbox environment is impossible, as elegantly illustrated by Immunity's Cloudburst video demo.

I can easily see Microsoft and Mozilla following suit with their respective browsers and other desktop software. It would be very nice to see the sandboxing trend continue throughout 2011. Unfortunately though, sandboxing doesn’t do much to defend against SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, Clickjacking, and so on. But maybe if we get the desktop exploitation attacks off the table, perhaps then we can start to focus attention on the in-the-browser-walls attacks.

Thursday, December 16, 2010

Why Speed & Frequency of Software Security Testing Matter, A LOT

The length of time between when a developer writes a vulnerable piece of code and when the issue is reported by a software security testing process is vitally important. The more time in between, the more effort the development group must expend to fix the code. Therefore the speed and frequency of the testing process, whether dynamic scanning, binary analysis, pen-testing, static analysis, line-by-line source code review, etc., matter a great deal.

WhiteHat Sentinel is frequently deployed in the Software Development Life-cycle, mostly during the QA or User Acceptance Testing phases. From that experience we've noticed three distinct time intervals (1 week, 1 month, and 1 year) from when code is written to vulnerability identification, where the effort to fix is markedly different. Below is what we are seeing.

The following focuses solely on syntax vulnerabilities such as SQL Injection, Cross-Site Scripting, HTTP Response Splitting, and so on. Semantic issues, also known as Business Logic Flaws, cause a different environmental impact.

When vulnerability details are communicated within ______ of the code being written:

1 Week (Less than 1 hour fix)
The developer who introduced the vulnerability is the same developer who fixes the issue. Typically the effort required ranges from just minutes to an hour because the code is still fresh in the developer's mind and they are probably still working on that particular project. The code change impact on QA and regression is minimal given how new the code is to the overall system.

1 Month (1 - 3 hour fix)
The original developer who introduced the vulnerability may have moved on to another project. Peeling them off their current task exacts an opportunity cost. While the remediation effort might be only 1 - 3 hours of development time, usually an entire day of their productivity is lost as they must reset their environment, re-familiarize themselves with the code, find the location of the issue, and fix the flaw. The same effort would be necessary if another developer were tasked with the patch. If the vulnerability is serious, a production hot-fix might be necessary, requiring additional QA & regression resources.

1 Year (More than 10 hour fix)
The original developer who introduced the vulnerability is at least several projects away by now or completely unavailable. The codebase may have been transferred to a software maintenance group, which has less skill and less time to dedicate to "security." Being unfamiliar with the code, another developer will have to spend a lot of time hunting for the exact location and figuring out the preferred way to fix it, if one even exists. Ten or more developer hours is common. Then a significant amount of QA & regression will be necessary. Then, depending on the release cycle, deployment of said fix might have to wait until the next scheduled release, whenever that may be.

What's interesting is that the time and effort required to fix a vulnerability is subject not only to the class of attack itself, but to how long ago the piece of code was introduced. It seems logical that it would be, it's just a subject not usually discussed. Another observation is that the longer a vulnerability lies undiscovered, the more helpful it becomes to pinpoint the problematic line of code for the developer. This is especially true in the 1 year zone. Again, terribly logical.

Clearly then, during the SDL it's preferable to get software security test results back into developers' hands as fast as possible. So much so that testing comprehensiveness will happily be sacrificed if necessary to increase the speed and frequency of testing. Comprehensiveness is less attractive within the SDL when results only become available once per year, as in the annual consultant assessment model. Of course it'd be nice to have it all (speed, frequency, and comprehensiveness), but it'll cost you (Good, Fast, or Cheap - Pick Two). Accuracy is the real wild card though. Without it, the entire point of saving developers time is lost.

I also wanted to briefly touch on the differences between the act of "writing secure code" and "testing the security of code." I don't recall when or where, but Dinis Cruz, OWASP Board Member and visionary behind the O2 Platform, said something a while back that stuck with me. Dinis said developers need to be provided exactly the right security knowledge at exactly the time they need it. Asking developers to read and recall veritable mountains of defensive programming do's and don'ts as they carry out their day job isn't effective or scalable.

For example, it would be much better if, when a developer is interacting with a database, they are automatically reminded to use parameterized SQL statements. When handling user-supplied input, pop-ups immediately point to the proper data validation routines. Or, how about printing to screen? Warn the developer about the mandatory use of the context-aware output filtering method. This type of just-in-time guidance needs to be baked into their IDE, which is one of the OWASP O2 Platform's design objectives. "Writing secure code" using this approach would seem to be the future.
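To illustrate the first and last of those reminders, here is a minimal Python sketch using the standard-library sqlite3 and html modules; the table, column, and input values are made up. It contrasts string-built SQL with a parameterized statement, then encodes the value before it would be written into an HTML page.

    import sqlite3
    import html

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, bio TEXT)")

    user_input = "o'brien"  # hypothetical user-supplied value

    # Risky: building the query by concatenation invites SQL Injection.
    # query = "SELECT bio FROM users WHERE name = '" + user_input + "'"

    # Better: a parameterized statement; the driver handles the untrusted value.
    row = conn.execute("SELECT bio FROM users WHERE name = ?", (user_input,)).fetchone()

    # Before printing into an HTML response, encode for the output context.
    bio = row[0] if row else ""
    print(html.escape(bio))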

When it comes to testing, as you might imagine, WhiteHat constantly strives to improve the speed of our testing processes. You can see the trade-offs we make among speed, comprehensiveness, and cost in the different flavors of Sentinel offered. The edge we have on the competition, by nature of the SaaS model, is that we know precisely which of our tests are the most effective or most likely to hit on certain types of systems. Efficiency = Speed. We've been privately testing new service line prototypes with some customers to better meet their needs. Exciting announcements are on the horizon.

Wednesday, December 15, 2010

DO NOT Poke the Bear

ThreatPost was kind enough to allow me to guest post on their blog about some thoughts on the Gawker hack. A snippet is below, click through for the rest.


Lessons Learned From the Gawker Hack
"Everyone sounded the alarms at the
Gawker Media attack, which included a security breach of websites such as Gizmodo, Lifehacker, Kotaku, io9, and others. The numbers were impressive: 1.3 million user accounts exposed, 405 megabytes of source code lost, and perhaps more important to some, the identity of those leaving anonymous comments potentially revealed. For Gawker, there is a loss of trust that will be difficult to regain. Users are already clamoring for the ability to delete their accounts. And, on the technical side, all Gawker’s systems will need to painstakingly audited or rebuilt entirely from scratch to prevent the same thing from happening again. Happy Holidays indeed.So, what is to be learned from this perfect storm of bluster and bravado? Many lessons, most of them demonstrating what not to do.

1. First and foremost, DO NOT poke the bear. By taunting the hacker community, especially the vigilante types, Gawker made itself a target unnecessarily. Never claim to be "unhackable." The hackers outnumber you by several orders of magnitude, and they have more free time. Respect their capabilities. Not to mention the odds are always stacked against defenders. The attackers only have to find one little crack in the wall to bring the castle crumbling down."


....

Friday, December 10, 2010

Spoofing Google search history with CSRF

Let's assume, dear Web surfer, that I can get you to visit a Web page I control. Just like the page on my blog you're reading right now. Once you do, by nature of the way the Web works, near-complete control of your Web browser is transferred to me for as long as you are here. I can invisibly force your browser to initiate online bank wire transfers, post offensive message board comments, vote Julian Assange as Time's Person of the Year, upload illegal material, hack other websites, and essentially whatever else I can think up. Worse still, on the receiving end, all the logs will point back to you. Not me.

If you don’t believe me keep reading. I already made you search Google for something a little embarrassing. And no, this is not something anti-virus scanners can do anything about.

The technical term for this type of attack is Cross-Site Request Forgery (CSRF), and years back I called it the sleeping giant. If you happen to be one of the legions of Web developers who have never heard of CSRF, then chances are every feature of every website you've ever built is vulnerable. Millions of other websites out there are suffering from the same problem. With the same technology (HTML and JavaScript) that Web pages use to include images, audio, video, banners, trackers, counters, etc. from all over the Internet, any website owner can instruct a victim's browser to send arbitrary HTTP requests to any website of their choosing.

Generally, Web browsers generate two different types of HTTP requests, GET and POST. For the sake of demonstration we'll be focusing only on GET here; POSTs require a tiny bit more code. To have someone's browser send a particular GET request, like a Google search for example, is extremely simple.

1) Search Google for something like “Justin Bieber fan club” and copy the URL in the location bar.


2) Paste the Google search URL into an HTML IMG tag and zero out the height, width, and border to make it invisible.

<* IMG SRC="http://www.google.com/search?hl=en&q=Justin+Bieber+fan+club&btnG=Search&aq=f&aqi=&aql=&oq=&gs_rfai=" WIDTH="0" HEIGHT="0" BORDER="0" *>

3) Load this code into a Web page, like this one, and voila! When a Web surfer arrives, their browser will execute the code and perform the exact same search (see the HTTP request screen shot).

Obviously then, any website owner can make your browser search for anything on Google, anything at all. Keep in mind that if the victim is logged in, their session cookies will automatically be sent as well. This is a key point about CSRF attacks: forged HTTP requests are authenticated if the user had previously logged in to the target website.
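As for the POST variant mentioned earlier, the usual trick is a hidden, self-submitting form. Here is a minimal, hypothetical sketch, a small Python script that writes out the kind of page an attacker might host; the target URL and field names are made up for illustration.

    # Write out an attacker-hosted page containing a hidden, auto-submitting
    # form. The action URL and parameter names below are purely hypothetical.
    csrf_page = """
    <html><body>
      <form id="f" action="http://bank.example.com/transfer" method="POST">
        <input type="hidden" name="to" value="attacker">
        <input type="hidden" name="amount" value="1000">
      </form>
      <script>document.getElementById('f').submit();</script>
    </body></html>
    """

    with open("csrf_demo.html", "w") as fh:
        fh.write(csrf_page)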

If you happen to be logged-in to Google right now, go check your Web search history. Maybe you’ll see something in there you didn’t search for. It might look something like this... :)





Wednesday, December 08, 2010

Internet Explorer 9 ad blocking via "Tracking Protection" -- no means yes.

A year ago I began strongly encouraging Microsoft and Mozilla to include ad blocking technology in their respective Web browsers. And not just point to an extension like the awesomeness that is AdBlock Plus, but include it, or something exactly like it, as a core browser feature. For security, privacy, and competitive advantage reasons I believed this to be a really good idea. Still do, in fact.

In a blog post I argued that ad blocking technology would be a huge win for Microsoft (and users), in particular by retroactively integrating the feature into all versions of Internet Explorer (6, 7, and 8). Doing so would attract vast quantities of new users (ad blocking is, after all, the most popular extension type in ALL the browsers), and it would also give them an attractive differentiator that Google's business model would never tolerate matching.

Despite my best efforts, my arguments fell flat. During Blue Hat, Robert "RSnake" Hansen and I had very productive discussions with the IE and Mozilla security teams about their browser security plans for the future. Still, the idea of making ad blocking readily available to the end-user was met with much resistance. The fear was that such a feature would conflict with Microsoft's own ad serving business, or its advertising partners, and might also lead to potential anti-trust issues. Oh well, we thought. Nothing ventured, nothing gained. As any infosec pro can attest, we're used to being told "no" by the business. :)

Then something really interesting took place last week. The U.S. Federal Trade Commission (FTC) issued a report on protecting consumer privacy online, which recommended that Congress create new online Do-Not-Track legislation and that it be accompanied by a privacy policy feature-set baked into Web browsers. To oversimplify, browsers supporting Do-Not-Track would allow Web surfers to indicate, presumably through HTTP headers, that they opt out of being tracked. Unfortunately, user data would still get sent to the tracking company by nature of the way browsers work (third-party cookies, etc.), but tracking companies receiving the do-not-track header would be legally compelled to respect the user's choice.
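To picture what that might look like on the wire, here is a minimal Python sketch that attaches a hypothetical opt-out header to an ordinary request. The header name "DNT" and the URL are illustrative only; no standard header had been settled on at the time, and honoring it would be entirely up to the receiving tracker.

    import urllib.request

    # Hypothetical do-not-track opt-out expressed as a request header.
    req = urllib.request.Request(
        "http://www.example.com/",   # stand-in for a third-party tracker URL
        headers={"DNT": "1"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)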

Then the impossible happened! Only days after the report was released, Dean Hachamovitch, Corporate Vice President of Internet Explorer, published a lengthy and technically detailed blog post (it even had a video) describing their plans to include an implementation of Do-Not-Track in the yet-to-be-released Internet Explorer 9. Dean explains...

“We want to develop (as the recent FTC report put it) ‘more effective technologies for consumer control’ and make progress on the report’s recommendation of ‘a browser-based mechanism through which consumers could make persistent choices’ regarding tracking.”

The two primary components involved:
  1. IE9 will offer consumers a new opt-in mechanism (“Tracking Protection”) to identify and block many forms of undesired tracking.
  2. “Tracking Protection Lists” will enable consumers to control what third-party site content can track them when they’re online.
Ladies and gentlemen, this is all the framework necessary for ad blocking to function in IE9! User configurations will also be persistent across sessions, even when the browser is restarted, which is the opposite of how InPrivate mode behaves. This is huge! Sure, just like with AdBlock Plus, IE9 users will still need to procure a regularly updated Tracking Protection List in order to block all the known advertisers, but that problem is already long solved.

There is no way Microsoft slammed out the code from scratch in a few short days because the FTC made some recommendation. The IE team clearly saw ad blocking as a good idea despite what they told us before, and had ad blocking, errr, I mean Tracking Protection, ready to go. Only they might not have had the juice to include it because of the aforementioned roadblocks.

My speculation is, and I could be WAY off base here, that Microsoft knew generally what was in the FTC's report prior to release and was waiting for publication before announcing Tracking Protection in IE9. That way the IE team could be seen as simply supporting the FTC's opt-out recommendations to protect user privacy, and who could argue with that, while at the same time taking the opportunity to implement ad blocking functionality under the same banner. That way they could get around competing business interests and the watchful eye of anti-trust regulators. They could say they are "protecting the user" and "abiding by the FTC," which they are on both counts. If I'm right, it's an extremely brilliant move.

I think we're witnessing the beginning of a whole new chapter in the ongoing browser war. Now we must ask when, and if, Mozilla is going to add the functionality of their #1 extension natively into their browser. How can they not do so now? Can Firefox's market-share position afford to let Internet Explorer be more advanced in privacy protection features? We'll have to wait and see what they say or do. I'm hopeful they'll come around as Microsoft did. Even more interesting will be how Google reacts. AdBlock is their most popular add-on as well. The bottom line is that these are very good signs for everyone on the Web.

Friday, December 03, 2010

Google rewards the first set of researchers in their website bug bounty program

Early this year Google announced a bug bounty program for the Chromium browser, designed to encourage and reward security researchers for privately disclosing vulnerabilities they find. The program was well received by the community and, by the looks of the results, was nothing less than a success. At the time of the announcement I half-jokingly poked Google via Twitter to expand the program to include their websites (*.google.com). That way the Web hackers could get in on the action.

I guess Google was listening, or more specifically those managing the bug bounty program were, and kudos to them because they did exactly that! Starting last month, finding and (legally) disclosing a vulnerability in a Google domain nets you somewhere between $500 and $3,133.70. Over the last 30 days several members of our Threat Research Center (TRC) jumped into the action in their spare time.

Yesterday Google posted the first set of individuals who qualified for security rewards -- that is, who found serious website vulnerabilities. Of the three dozen people on the “Google Security Hall of Fame” list, five are from WhiteHat Security's TRC.
  • Justin Barron
  • Michael Cottingham
  • Phillip Purviance
  • Kyle Osborn
  • Matt Evans
This is rather remarkable, impressive even. Congratulations to those members of our team and to all the other researchers listed. Stellar work. You've made millions of Web users just a little bit safer online. And also a big thanks to Google for having the guts and foresight to offer such a program.

Thursday, December 02, 2010

Website Monocultures and Polycultures

Way back in 2003 a group of highly respected security pros released a controversial yet landmark paper, “CyberInsecurity: The Cost of Monopoly.” The text offered two lessons: software monocultures are dangerous, and Microsoft, the largest of monocultures, is the most dangerous. At the time this conclusion sparked an industry firestorm. More recently Bruce Schneier and Marcus Ranum faced off on the monoculture debate, which got me thinking about whether these ideas apply equally to website security. You see, monoculture and polyculture theories are generally based upon commonly understood network and host layer security behaviors, behaviors very different from those of website security.

Before diving in, let's first establish a baseline on the fundamental assumptions about software monocultures and polycultures. Monocultures, in which all systems are identical, are at elevated risk of systemic, widespread compromise because all nodes are vulnerable to the same attack. For example, one zero-day exploit (not necessarily required) has the capability of ripping through the entire ecosystem. The benefit of a monoculture, however, is that the consistency of all the connected nodes allows for easier management by IT. Manageability makes keeping patches up-to-date less difficult and by extension raises the bar against targeted attacks and random opportunistic worms.

In polycultures exactly the opposite is true. In the event of a worm outbreak, again possibly leveraging a zero-day vulnerability, a polyculture would be more resilient (survivable) by virtue of the diversity in the ecosystem. All nodes are not vulnerable to the same issue. The downside of course is the increased difficulty and cost of IT managing a polyculture, including keeping security patches up-to-date. Therefore targeted attacks and random opportunistic worms are more likely to succeed in a polyculture environment, but to a limited extent.

Tolerance for risk, and the threat agent one is most concerned about, targeted or random opportunistic, dictate whether a monoculture or polyculture environment is preferable. We can see where the authors of the aforementioned paper fell. And in 2003, the era of extremely large worm infections including SQL Slammer, Blaster, Nimda, Code Red, and so on, it is not hard to see why. Today I hazard to guess that most people in the infosec industry would still agree with their conclusion -- monocultures are dangerous, polycultures are more survivable. So when thinking in terms of networks comprised of Windows, Linux, BSD, OS X, and so on, all this sounds reasonable, but when the context is switched to websites, things get trickier.

First of all, it doesn't matter what OS is underneath when SQL Injection, CSRF, XSS, AuthN, and AuthZ attacks are employed against a target site. Second, save for mass SQL Injection worms that take advantage of one-off Web application flaws, website compromises are primarily targeted. That is, the attacker is to some degree sentient. Lastly, and this is where the monoculture vs. polyculture question comes in, in networks the attack surface consists of the many hosts, while in a website it is really the collection of discrete Web applications (or the inputs to those applications), applications written in one or more programming languages.

I'm suggesting that a "website monoculture" is one where all the Web applications are written in a single programming language and development framework. Pick Python, Perl, PHP, .NET, Java, Ruby, etc. But one and only one. Conversely, a "website polyculture" is one where there's some mix of languages and frameworks. Of course the manageability aspects of a multi-language website mirror those of a multi-OS network. Sites using a single consistent language are easier for an organization to code securely and keep secure. Here's where it gets interesting: I'm not sure how common a website monoculture really is on the Web.

Any website worth hacking is NOT a static network or host component, but a mass of Web applications that are updated with some frequency over years -- daily, weekly, monthly, etc. The functionality may be merged or migrated with other websites, built by different teams of developers, with each section of the code created using whatever programming language was popular at the time. A website is not AN application, but many hyperlinked together. That's why you'll often see websites using some classic ASP and .NET, Java mixed in with ColdFusion, perhaps Perl intermingled with PHP, and many other combos. Case in point: the WhiteHat Security 9th Website Security Statistics Report showed that websites exhibiting extensions from one language often had a substantial number of vulnerabilities associated with extensions from another.

It is also important to point out that when one host is compromised on a network, the attacker may attempt to exploit another flaw in another host and leapfrog across. This is a quite common scenario, because the first compromised machine is usually not enough to achieve the end goal. Websites, on the other hand, are different. One exploited bug in a website tends to give the attacker exactly what they wanted: server-side data or end-user compromise. It is rather uncommon to find it necessary to exploit one Web application flaw in order to then exploit another to achieve the goal. Just one XSS allows someone to hack all the users. One SQL Injection pilfers all of the data. So no real compartmentalization exists on a website, and therefore there's nothing to be gained security-wise, that I can see, in a polyculture website.

So if website attacks are generally targeted, again except for SQLi worms, and it's easier to secure code written all in the same language, then we should be advocating monoculture websites. Right? Which is exactly the opposite of how the community seems to want to treat networks. I just find that really interesting. What I'm working on now inside WhiteHat is trying to find statistical evidence of how, in real terms, the security posture of the average monoculture and polyculture compare. I'm guessing monoculture websites are noticeably more secure, that is, have fewer vulnerabilities. But what would your theory be?