Monday, February 12, 2007

We need Web Application Firewalls to work

In the early 90's, network firewalls surfaced as the product everyone needed for defense against the dangers of being internet-connected. Host security purists countered that firewalls were unnecessary because everyone should simply patch and harden their hosts. Proponents rejoiced because firewalls made life easier: their networks were vast, diverse, and largely beyond their control. Many years later, almost every computer has a firewall in front of it (sometimes several) and some form of automated patching, so both solutions eventually won. History seems to be repeating itself with Web Application Firewalls (WAFs) and the "secure software" movement.

Generally, IT/Sec folks like the idea of WAFs, while secure-software purists argue for fixing the code, saying WAFs shouldn't be viewed as cure-alls. Fair enough, but in my opinion neither should secure software. The reality is that software has bugs and hence will have vulnerabilities. Modern development frameworks like ASP.NET, J2EE, and others have demonstrated big gains in software quality, but what about the vast majority of the world's 100+ million websites already riddled with vulnerabilities? Is anyone actually claiming we should go back and fix all that code? Fixing them one at a time would be like trying to drain the ocean with a teacup.

What happens today is that IT/Sec must compete for development resources against the revenue-generating features being pumped out every week. The same people, with responsibility but no authority, are also powerless to fix the issues the way they're used to with patches or firewall rules. In web application security, IT/Sec, who used to have control, assumes a subservient role to the development group, who are not security experts. Developers say they need to be convinced why fixing XSS and SQL Injection is important. The typical result of the exchange is perpetually insecure websites, because the interests of the two parties are not in alignment. We need something that gives developers time and gives IT/Sec control. That's where WAFs come in.

A good WAF is designed to block a lot of the most common and dangerous web application attacks (a rough sketch of the idea follows the list below). Why would anyone not want that? From what I've found, it's not that the objectors don't like what WAFs promise, it's that current products don't DO what they promise, or there is some set-up and ongoing management overhead involved, which are all completely valid concerns. Still, I think the web application security problem has simply gotten WAY too big to be fixable in the code without the help of WAFs. So two things need to happen:

1) WAFs are evolving technologies that MUST BE MADE TO WORK, or work a lot better, and we must see them work over and over again. Witnessing this will help build trust in these devices, which will lead to….

2) The web application security mindset maturing to the level of network security. No one views a Cisco PIX (firewall) as competing with BigFix (patch management) or as overlapping with Qualys (VA). The same should go for WAFs, security in the SDLC (frameworks), and web application vulnerability assessments, respectively.
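To make the first point concrete, here is a rough sketch (purely illustrative, not any particular product) of the kind of filtering a WAF does in front of an application: inspect each request and block the ones matching known-bad patterns such as reflected XSS or SQL injection probes. The signatures, port, and demo app are all assumptions for the example.

```python
import re
from urllib.parse import unquote_plus
from wsgiref.simple_server import make_server

# Naive signatures for illustration only; real WAF rule sets are far larger
# and smarter about encodings, anomalies, and false positives.
ATTACK_PATTERNS = [
    re.compile(r"<\s*script", re.I),                    # reflected XSS attempts
    re.compile(r"\bunion\b.+\bselect\b", re.I),         # UNION-based SQL injection
    re.compile(r"'\s*or\s*'?\d+'?\s*=\s*'?\d+", re.I),  # ' OR 1=1 style probes
]

def waf_middleware(app):
    """Wrap a WSGI app and reject requests that match a blocking rule."""
    def wrapped(environ, start_response):
        # Inspect the decoded query string before the app ever sees it.
        query = unquote_plus(environ.get("QUERY_STRING", ""))
        if any(p.search(query) for p in ATTACK_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by WAF rule\n"]
        return app(environ, start_response)
    return wrapped

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the protected application\n"]

if __name__ == "__main__":
    with make_server("127.0.0.1", 8000, waf_middleware(demo_app)) as srv:
        print("Try /?q=<script>alert(1)</script> to see a request get blocked")
        srv.serve_forever()
```

The point is not the three regexes; it's that the blocking happens in front of the application without touching its code, which is exactly the time-buying property argued for above.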

6 comments:

Anonymous said...

I don't like this. Firewalls didn't solve network security problems. They were not a panacea in the 1990's. They left behind a trail of guilt and betrayal.

What happened to `Intranet network/port scanning'? I thought you were the one bringing up these issues! This was your message to the community - and now you come out saying how great firewalls are?

Let me start with pivot bouncing, move on to firewall hole punching, and finish with a little blurb on the same-origin policy. Need I mention that intrusion detection/prevention doesn't work? Logs don't work, either!

The problems with network security in the 1990's that spawned the evolution and proliferation of firewalls were three attacks. Two of these were really seen as the same attack thanks to Mitnick and CERT, and the other wasn't really an attack at all:
1) SYN flooding
2) Berkeley r-services
3) Potential exhaustion of the IPv4 address space (a scaling problem that was alleviated by IPv6 but "band-aided" by CIDR in 1993 and more so later by NAT)

The problems with web security in the 2000's that spawned the evolution (but not yet proliferation) of WAF's are two types of attacks:
1) SYN flooding (well, ok - full connection scan flooding)
2) XSS and CSRF

Note the similarities between the same-origin policy and Berkeley r-services. They are the same.

The solution to the network security attacks was not firewalls. Firewalls served as a vehicle for both NAT and CIDR. By adding in this new "firewall" infrastructure, an organization could migrate their prefixes on their "classful" outside IP ranges and start using CIDR, and get unlimited IP addresses for their internal network (also wrongly seen as a security feature) with NAT. At the cost of sacrificing the end-to-end model of the Internet and demultiplexing everything to port 80.

In 1994 I started playing around with this neat program called PGP. This was around the same time that Mitnick hacked Netcom and started SYN flooding machines so he could take over their IPs to rsh into machines under the same domain policy control (this is where I see a particular relation to "same-origin"). It was also around the same time that people found out that by typing rlogin host -l -froot, they could get local or remote root without supplying any credentials.

Speaking of which, Solaris versions 10 and 11 were found yesterday to be vulnerable to the same flaw (except in telnet), 12 years after the rlogin -l -froot fiasco.

One year later, in 1995, I found out about this neat program called SSH that supported the same public-key crypto I liked in PGP. SSH replaced Berkeley r-services in my mind. Unfortunately, it's taking the rest of the world a very long time to keep up. Especially since they feel safe behind their FIREWALLS.

I also learned (especially hard in October of 1997 when my machine was rm'd) that passwords and keys don't work that great when somebody backdoors the client... even though I personally had been using OTP with OPIE and S/Key on that very machine. In other words, `you can't trust the client'. I also learned about privilege escalation and how all the sandboxes (sometimes read: firewalls) in the world, including grsecurity's chroot and FreeBSD's jail - can't and won't protect you from a determined adversary. However, I learned that if you trust a client by checking its signed-code (do not read that as hash; I did not say hash), as well as using an OTP to access the server - you're at least more than half-way ahead of your adversary.
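To make the OTP half of that concrete, here's a rough sketch of an HMAC-based one-time password in the spirit of RFC 4226. OPIE and S/Key actually use hash chains rather than HMAC counters, so treat this as an illustration of the general idea (the long-lived secret never crosses the wire) rather than a reimplementation of either:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # Counter is packed as an 8-byte big-endian value per the RFC.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    shared_secret = b"12345678901234567890"  # RFC 4226 test vector secret
    for c in range(3):
        print(c, hotp(shared_secret, c))  # expect 755224, 287082, 359152
```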

The most obvious problem was always the listener (i.e. the SSH server). Everyone knew that servers were hard to protect, so they were often fronted by small-but-secure server code such as that written by djb. SSH wasn't. Eventually, people started using portknocking or SPA to protect SSH servers and similar services that required being open to the world but had no qmail (alternative to sendmail) equivalent.
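For anyone who hasn't seen portknocking, the client side is roughly this sketch: hit a pre-agreed sequence of closed ports so that a daemon watching the firewall (knockd or similar, not shown here) briefly opens the real SSH port for your source address. The hostname and port sequence are made up; SPA schemes like fwknop replace the sequence with a single authenticated packet.

```python
import socket

KNOCK_SEQUENCE = [7000, 8000, 9000]   # assumed pre-shared sequence
TARGET_HOST = "ssh.example.com"       # hypothetical server

def knock(host: str, ports: list, timeout: float = 0.5) -> None:
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            # The connection is expected to fail; the server-side daemon
            # only needs to observe a SYN arriving at each port in order.
            s.connect((host, port))
        except OSError:
            pass
        finally:
            s.close()

if __name__ == "__main__":
    knock(TARGET_HOST, KNOCK_SEQUENCE)
    print("Knock sequence sent; ssh should be reachable for a few seconds")
```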

In network security, there shouldn't be any protocols over the wire that authenticate or provide a channel that is different than the mechanisms provided in classic implementations of SSH, IPSec, or SSL. Unauthenticated and cleartext protocols should be wrapped. Listeners should also be wrapped by safe-listeners. See IETF BCP 61.

In the 1990's wrappers existed in the form of netcat and later cryptcat, openssl, etc. Today we have really powerful plugboard proxies that can handle dirty workloads - no matter what the size. I'm not sure how Echo Mirage works for this, but the idea is solid.
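The wrapping idea in practice looks something like this minimal stunnel-style plugboard proxy: accept plaintext connections on localhost and relay them to a remote service over TLS. The hostnames and ports are placeholders, and a production proxy would need real error handling and certificate policy.

```python
import socket
import ssl
import threading

LISTEN_ADDR = ("127.0.0.1", 1100)            # local cleartext side
REMOTE_ADDR = ("mail.example.com", 995)      # hypothetical TLS-wrapped service

def pump(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until either side closes.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    ctx = ssl.create_default_context()
    remote = ctx.wrap_socket(socket.create_connection(REMOTE_ADDR),
                             server_hostname=REMOTE_ADDR[0])
    threading.Thread(target=pump, args=(client, remote), daemon=True).start()
    pump(remote, client)

if __name__ == "__main__":
    with socket.create_server(LISTEN_ADDR) as srv:
        print(f"Relaying {LISTEN_ADDR} -> TLS {REMOTE_ADDR}")
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

This is essentially what stunnel does in client mode; the application on the plaintext side never knows the channel was wrapped.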

If we can lock down web applications in a similar way (and I don't mean by running djb's publicfile instead of Apache, although this is not really that bad of an idea) then we move closer to our goal of `security' (whatever that means).

Speaking of definitions, I worry that the [risk/security/vulnerability] "assessment" word is also losing its meaning. For example, some consider S-OX guidelines to be an assessment (I prefer the word audit), while others think that audits are what you do to source-code.

First of all, "assessment" is too close to "compliance" and "audit" is too close to "test".

Let me be specific about `what works' in `security' "assessment". Generation fuzzing and code-covered fault-injection through a series of tests that explore the inner workings of each application *works*. Such activity exposes `security' bugs, also known as vulnerabilities, in software. When these bugs are fixed, you know you are secure from them. Risk management (note that I did not say patch management) encompasses the ability to look at risk as a model, apply threats to that model, assign vulnerabilities to threats (instead of categorizing or rating them), and verify that each patch/fix has been applied to the infrastructure it was meant to protect. Vulnerability management means knowing which vulnerabilities apply to which threat models, usually best done with a vulnerability database and notification services.
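By generation fuzzing I mean building test cases from a model of the format rather than blindly mutating captured traffic, so the inputs get past the first parse error. A toy sketch, where both the request model and the stand-in parser are invented for illustration:

```python
import random

def generate_http_request() -> bytes:
    # Build a request from structural pieces, deliberately pushing boundaries.
    method = random.choice([b"GET", b"POST", b"HEAD", b"A" * random.randint(1, 4096)])
    path = b"/" + random.choice([b"index.html", b"%00", b"..%2f" * 50, b"A" * 65535])
    version = random.choice([b"HTTP/1.1", b"HTTP/9.9", b""])
    header = b"X-Fuzz: " + bytes(random.randrange(256) for _ in range(random.randint(0, 1024)))
    return b" ".join([method, path, version]) + b"\r\n" + header + b"\r\n\r\n"

def target_parser(raw: bytes) -> None:
    # Placeholder for the code under test; a real harness would call the
    # application's request parser and watch for crashes, hangs, or leaks.
    raw.split(b"\r\n", 1)

if __name__ == "__main__":
    random.seed(1)
    for i in range(1000):
        case = generate_http_request()
        try:
            target_parser(case)
        except Exception as exc:              # any crash is a finding
            print(f"case {i}: {type(exc).__name__}: {exc!r}")
```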

I'd like to take a moment here to talk about scanning. Scanning looks for vulnerabilities and when it finds them - it reports them. Sometimes it gets false positives. Worse - it chases the elusive false negatives that it will never find. Finally, scanners present a problem to security because they are in fact huge piles of software themselves. That means that they also can have vulnerabilities! Imagine that - a security "product" with the potential for vulnerabilities!

An alternative to scanning for vulnerabilities would be to sign the code and verify signatures to revision levels (a simpler, less intrusive solution). This can be done on the infrastructure even if code is binary-only, but it can also be done in revision-control-systems and content-management-systems. This is actually the final missing piece in network security, and it's a part of vulnerability management. Note that incident response does use these methods and they work remarkably well.
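Mechanically, "verify signatures to revision levels" can be as simple as shipping a detached GnuPG signature with every release and refusing to deploy (or trust) an artifact that does not verify. A sketch with made-up file names, shelling out to a stock gpg binary:

```python
import subprocess
import sys

def verify_release(artifact: str, signature: str) -> bool:
    # `gpg --verify sig file` exits 0 only if the detached signature is good
    # and was made by a key already trusted in the verifier's keyring.
    result = subprocess.run(
        ["gpg", "--verify", signature, artifact],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    ok = verify_release("webapp-r1042.tar.gz", "webapp-r1042.tar.gz.asc")
    print("signature valid" if ok else "signature BAD - do not deploy")
    sys.exit(0 if ok else 1)
```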

Applying similar principles to web application security is probably our primary goal. SSL, secure cookies, POST's with nonces, safe-listeners (or even portknocking), OTP's, n-tier application-verification through cryptographic-checks, generation fuzzing, risk management, and vulnerability management are all good for web application security. Scanner-based assessments, compliance testing, source-code audits, "secure" frameworks, and WAF's make my list of `very distant last-places'. Should you still use these if they are so far off the target goal of security? Sure, if you have the time. But don't lose the big picture.
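"POST's with nonces" deserves a concrete picture, since it's the cheapest item on that list: the server hands each session an unpredictable token, embeds it in the form, and rejects any POST that doesn't echo it back. A framework-free sketch with session storage stubbed out as a dict:

```python
import hmac
import secrets

# In a real application the nonce would live in server-side session state.
_session_nonces = {}

def issue_nonce(session_id: str) -> str:
    nonce = secrets.token_urlsafe(32)
    _session_nonces[session_id] = nonce
    return nonce  # embed this in a hidden <input> in the form

def check_nonce(session_id: str, submitted: str) -> bool:
    expected = _session_nonces.get(session_id)
    # compare_digest avoids leaking the nonce through timing differences.
    return expected is not None and hmac.compare_digest(expected, submitted)

if __name__ == "__main__":
    sid = "abc123"
    token = issue_nonce(sid)
    print(check_nonce(sid, token))     # True  - legitimate form post
    print(check_nonce(sid, "forged"))  # False - cross-site request
```

The essential property is only that the attacker's page cannot read the token, which the same-origin policy already guarantees.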

txs said...

Holy cow ntp. I read half of your post and passed out due to exhaustion. Everything I did manage to get through was spot on tho. Great comments.

However, you are missing the big picture. What JG is trying to say is simply "respect the security onion". To quote Shrek: "Ogres (InfoSec) are like onions... they have LAYERS". There are layers and trade-offs in just about every area of InfoSec, and no one solution is going to solve every issue. There isn't a silver bullet.

With regard to WAFs vs. an appropriate SDLC and secure coding practices: we need BOTH. It just so happens that many times the WAF will be the quicker implementation and should thus come first. This is primarily due to limited resources and the size of the reward for the effort expended. low_effort + medium_security = implement NOW, when compared to medium_effort + high_security. (This ordering is debatable, of course.)

I'd rather put a WAF in place ASAP while I'm working on creating a proper SDLC on the backend simultaneously. Neither one can hurt as long as I have the resources to execute both. If it's one or the other... well, that's a different discussion altogether.

Cheers

txs

(txs ~a.t~ donkeyonawaffle ~d.o.t~ org)

Anonymous said...

Nearly every paragraph in that rant is grossly inaccurate. Just to pick one, take the bit about how signing code is an alternative to vulnerability scanning. Signing code will make it more secure? I wish I could remove security bugs from my code just by signing it, but my PGP key just isn't that magic.

Nonsense. Total nonsense.

Jeremiah Grossman said...

ntp, thanks for the well thought out post. I think RSnake agrees with you. :) What I also think is that we're talking about two different issues that might not have been 100% apparent in my post. One is the "right way" to do security, and the second is what we have to do now to buy time to get to that point. That's why firewalls came first.

For instance you said: "In network security, there shouldn't be any protocols over the wire that authenticate or provide a channel that is different than the mechanisms provided in classic implementations of SSH, IPSec, or SSL. Unauthenticated and cleartext protocols should be wrapped. Listeners should also be wrapped by safe-listeners. See IETF BCP 61."

All well and good, only in the 90's no one was ready and willing to do that. It took years before people got on board and started moving in that direction, and we're still not there yet. Whether you agree with it as an end-game solution or not, firewalls offered a faster, cheaper, and good-enough-for-now way to reduce risk. Moving on to web application security....

"Applying similar principles to web application security is probably our primary goal. SSL, secure cookies, POST's with nonces, safe-listeners (or even portknocking), OTP's, n-tier application-verification through cryptographic-checks, generation fuzzing, risk management, and vulnerability management are all good for web application security."

Again, good ideas, but even if we started now we're at least 5-10 years off, if not more. Businesses are simply not going to pay what it would take to realize that vision right now. So in my opinion, webappsec will follow the same path as network security. We need something good enough for right now. Web Application Firewalls will come first, and we'll slowly move towards what we'll all begin to see as the "right way".

Anonymous said...

@ Anonymous

...signing code is an alternative to vulnerability scanning. Signing code will make it more secure? I wish I could remove security bugs from my code just by signing it, but my PGP key just isn't that magic.

Sure, signing code at different stages of deployment can tell you which revision you have, and your vulnerability database can tell you which bugs were patched at that revision level.
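In other words, something like this (the revisions and issues are invented for illustration): once the signature tells you which revision is deployed, the database answers what is fixed and what is still exposed.

```python
# Toy mapping of code revisions to the issues each one fixed.
FIXES_BY_REVISION = {
    "r1040": ["XSS in search form"],
    "r1041": ["SQL injection in login"],
    "r1042": ["Session fixation on password reset"],
}

def open_issues(deployed_revision: str) -> list:
    # Everything fixed in revisions newer than the deployed one is still open.
    newer = [r for r in sorted(FIXES_BY_REVISION) if r > deployed_revision]
    return [issue for r in newer for issue in FIXES_BY_REVISION[r]]

if __name__ == "__main__":
    print(open_issues("r1040"))  # bugs fixed in r1041/r1042 remain exposed
```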

I made the comparison to the incident response industry sub-set, which uses NIST's NSRL. The NSRL is used in EnCase, the forensic investigation software. Possibly a bad comparison, but it's the idea I'm looking to promote.

@ Jeremiah

What I also think is that we're talking about two different issues ... One is the "right way" to do security and second is what we have to do now... firewalls came first... offered a faster, cheaper, and good enough for now way to reduce risk... if we started now we're at least 5-10 years off.

I don't want to talk about them as separate issues. I don't want people to think that there are any "free rides".

You are a web application security expert. You are not a risk assessment expert. You get paid to sell scanning services. Your peers get paid to sell WAF's. Enjoy your 5-10 years of paychecks, just so that somebody else can come in later to clean up the mess you made.

Businesses are simply not going to pay what it would take to realize that vision right now. So in my opinion, webappsec will follow the same path as network security. We need something good enough for right now.

Businesses will pay whatever it takes to stop getting FCC fines for losing personal data and to stop fraud (stolen money or private information). They only know which measures to put in place by getting a report from a risk assessor (or by following compliance guidelines).

Speaking of which, you're in luck - the PCI standard is adding a WAF requirement.

Anonymous said...

ntp's reply was mostly cruft to me, but one point I agree with: HTTP on port 80 has mutated from a known (and therefore verifiable) protocol into the modern-day equivalent of RPC.

Jeremiah,

I agree that application layer firewalling is the right thing, I'm just not sure that the current implementations of HTTP can actually be inspected any more. Sometimes I get nostalgic for the simpler days when putting up FWTK or Gauntlet at strategic points had a measurable effect on the misuse of network resources.