Wednesday, February 03, 2010

The Web won’t be safe, let alone secure, unless we break it

There are several security issues affecting all major Web browsers that have remained unaddressed for years (probably because the bad guys haven’t leveraged them aggressively enough, but the potential is there). The problem is that the only known way to fix these issues adequately is to “break the Web” -- i.e. negatively impact the usability of a significant and unacceptable percentage of websites. Doing so is a nonstarter for any browser vendor looking to grow market share, so for most vendors the choice comes down to this: be less secure and adopted, rather than secure and obscure. This is a topic deserving of further exploration.

Web security can be divided into two parts, website security and Web browser security. Both are equally important. A website must be able to protect itself from a hostile browser, and a browser must be able to protect itself from a hostile website. If either of these assumptions fails, then there is a problem (the Web is not secure). Attacks targeting browsers, which will be the focus of this post, can be broadly categorized into three distinct vectors:

1) Attacks designed to escape the confines of the browser walls and execute within the desktop operating system below. This is primarily achieved by exploiting memory and file-handling implementation flaws.

2) Behavioral attacks that trick users into doing something harmful, such as downloading and installing malware or revealing sensitive information.

3) Attacks taking advantage of design flaws in the way the Web works. These attacks normally remain within the browser walls and use the victim’s browser as a launch platform for surreptitiously pilfering information from their session or the surrounding network.

After years of massive volumes of CVEs (the repository of published vulnerabilities), the browser vendor incumbents (Microsoft, Mozilla, Opera, Google, Apple) have made great strides in addressing vector #1, though some have more work to do than others. This is a good thing, as exploiting unpatched browsers is the primary method of malware propagation, such as the so-called drive-by-downloads, in which legitimate websites host malware that infects their visitors. Fortunately “fixing” #1 doesn’t require “breaking the Web,” only updating shoddy code and distributing updates.

Solving #2 is more psychological than technical in nature. The challenge is that people trust computer screens, believe what they see on the Web, and will install anything in order to watch the latest celebrity sex tape or open a personalized e-greeting sent by their “friend.” Attackers prey on this inherent trust, general good nature, and basic human instinct. In response, browsers have provided EV-SSL, anti-phishing toolbars, SSL warning dialogs, password managers, etc. These efforts make important security decisions more visible and harder to get wrong, or remove the decision altogether. Again, “fixing” these issues doesn’t require “breaking the Web,” only a more intuitive user-interface design.

Addressing #3, with roots dating back to the earliest days of the Web, is another matter entirely. Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), Clickjacking, CSS History Stealing, Intranet Hacking, etc. are all good examples. While these weren’t pressing issues before, they are trending in a dangerous direction. We’ve seen outbreaks of Twitter worms, XSS defacements of government websites, Facebook Clickjacking attacks, sites that disclose which porn sites people visit, several Intranet Hacking proof-of-concept tools, and so on.

Many, including myself, have asked the major browser vendors to address CSS History Stealing, a privacy violation in which a malicious website can tell whether you’ve visited a certain URL, by disabling access to key DOM APIs. They said doing so would break certain websites and upset Web developers. (Update: see Wladimir’s comment below for excellent insight into the true difficulty of solving this problem.)
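To make the attack concrete, here is a minimal sketch of how a page can probe history through the DOM (browser-only code; the red `:visited` color is an assumption for the example, and `looksVisited`/`probeHistory` are illustrative names, not any real library’s API):

```javascript
// Sketch of CSS history stealing. Assumes the page carries a stylesheet
// rule like `a:visited { color: rgb(255, 0, 0) }`.
function looksVisited(computedColor) {
  // Under the assumed rule, a visited link's computed color is red.
  return computedColor === 'rgb(255, 0, 0)';
}

function probeHistory(urls) {
  // Relies on exactly the DOM access the vendors were asked to restrict:
  // getComputedStyle() leaks whether the :visited selector matched.
  return urls.filter(function (url) {
    var link = document.createElement('a');
    link.href = url;
    document.body.appendChild(link);
    var color = getComputedStyle(link).color;
    document.body.removeChild(link);
    return looksVisited(color);
  });
}
```

As Wladimir’s comment below explains, blocking this one path is not enough, since nearly any layout-affecting style on `:visited` can be tested indirectly.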

To solve Intranet Hacking, the suggestion was made to deny websites served from a non-RFC 1918 (i.e., public) IP address the ability to passively instruct a browser to connect to RFC 1918 (private) IP addresses. The response was that doing so would break essential features like corporate Web proxy set-ups and add-ons like Google Desktop.
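The proposed policy is simple to sketch. The following is my own illustration of the check, not any browser’s actual code (RFC 1918 reserves 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16):

```javascript
// Is this dotted-quad IPv4 address in one of the RFC 1918 private ranges?
function isRfc1918(ip) {
  var parts = ip.split('.').map(Number);
  if (parts.length !== 4 ||
      parts.some(function (n) { return isNaN(n) || n < 0 || n > 255; })) {
    return false;
  }
  return parts[0] === 10 ||
         (parts[0] === 172 && parts[1] >= 16 && parts[1] <= 31) ||
         (parts[0] === 192 && parts[1] === 168);
}

// The suggested rule: a page served from a public address may not
// instruct the browser to fetch from a private one.
function requestAllowed(pageIp, targetIp) {
  return isRfc1918(pageIp) || !isRfc1918(targetIp);
}
```

Under this rule, a page at a public address like `203.0.113.5` would be refused a request to `192.168.1.1`, while intranet pages could still talk to each other, which is exactly the behavior that collides with proxy set-ups.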

Fixing Clickjacking would require changing the IFRAME implementation so that frames are either not transparent or not allowed at all. Doing so would undoubtedly cause major Web breakage, such as no banner advertising or Facebook-style application platforms. So instead we get the opt-in X-FRAME-OPTIONS header, which basically no one uses at the moment.
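The core of a Clickjacking page is nothing more than the victim site in a fully transparent IFRAME stacked over decoy content. A minimal sketch of both sides (markup, decoy text, and function names are all illustrative):

```javascript
// Attacker side: a visible decoy with the target framed invisibly on top,
// so the victim's click lands in the hidden frame.
function clickjackMarkup(targetUrl) {
  return '<button>Win a free prize!</button>' +
         '<iframe src="' + targetUrl + '"' +
         ' style="position:absolute; top:0; left:0;' +
         ' width:100%; height:100%; opacity:0; z-index:2;"></iframe>';
}

// Defender side: the opt-in response header. A compliant browser refuses
// to render the page inside a (cross-origin) frame.
function frameOptionsHeader(policy) { // 'DENY' or 'SAMEORIGIN'
  if (policy !== 'DENY' && policy !== 'SAMEORIGIN') {
    throw new Error('unsupported X-Frame-Options value');
  }
  return { 'X-Frame-Options': policy };
}
```

Note the asymmetry: the attacker needs only the transparent frame, while the defense works only if the framed site bothers to opt in.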

Maybe browser tab/session separation is in order. When logged in to a website in one tab, other tabs wouldn’t have session access, thereby limiting the damage XSS, CSRF, and Clickjacking could inflict. But this solution would probably annoy users and Web developers who really want persistent authentication. Oh, and we really need Web tracking cookies too. Gah!

So here we are, waiting for the other shoe to drop and for bad enough things to happen that we’ll finally have the juice required to fix these problems by default. The bigger problem is that when that time eventually comes, we might actually be forced to break the Web to secure it. In the meantime, the community has been lobbying hard for opt-in tools that the proactive crowd can use to protect themselves ahead of time. Fortunately, we are starting to see new technologies like XSSFilter, Content Security Policy, Strict Transport Security, and Origin headers come into view. Maybe this is the future and a look into the security proving ground for the changes we’ll need to make later.
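To make the opt-in nature of these technologies concrete, here is a sketch of the response headers a proactive site might send (the header names reflect the early, vendor-prefixed drafts circulating at the time, and the policy values are illustrative, not recommendations):

```javascript
// Illustrative opt-in security headers. Each one protects only the
// visitors whose browsers understand it; everyone else gets the old Web.
function optInSecurityHeaders() {
  return {
    // Early Content Security Policy draft: scripts from this origin only.
    'X-Content-Security-Policy': "allow 'self'",
    // Strict Transport Security: force HTTPS for roughly a year.
    'Strict-Transport-Security': 'max-age=31536000',
    // Clickjacking defense: refuse cross-origin framing.
    'X-Frame-Options': 'SAMEORIGIN'
  };
}
```

The pattern across all three is the same: security stays off by default to avoid breaking anyone, and each individual site must choose to turn it on.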

8 comments:

Wladimir Palant said...

While I agree with the essence of this blog post, you seem to underestimate what it takes to prevent the CSS history hack. I followed the discussion on Mozilla's side, and disallowing access to "key DOM APIs" wouldn't be sufficient - you have to disallow access to almost all of the DOM, since it can be used to indirectly test CSS styles. Clearly this won't work, which is why they addressed the problem from the other side - and now we have the layout.css.visited_links_enabled preference. Only nobody dares to switch it on by default, and that's where we get to the point of your post again.

Jeremiah Grossman said...

@Wladimir great insight, thank you; I have no doubt you are 100% correct about the difficulty. I updated the post to point to your words. Only when the problem gets bad enough will the trade-offs be worthwhile.

Jenna Culbertson said...

Good post. Thanks for the information.

Christian said...

I'm wondering if the HTML spec's "sandbox" attribute for iframes should also be included in your list of "fortunates" at the end?

Jeremiah Grossman said...

@Christian you are probably right, but I actually haven't spent enough time reading the new spec. I suspect it'll suffer from backwards compat issues where security for iframes would be default-off -- hence opt-in (which very few will)

web development said...

Thanks for sharing this post. I really enjoyed it.

mckt said...

I was in the middle of writing a rant on some parallel subjects when you posted this. A few days and some heavy editing later, here's my take on the matter:

Anonymous said...

Re. the CSS history hack: http://cssfingerprint.com :)