Wednesday, December 17, 2008

History Repeating Itself

“All of this has happened before, and all of this will happen again,” is a memorable quote from Battlestar Galactica (awesome show). Meaning, history tends to repeat itself in a prophetic sort of way. Having been involved in the evolution of Web application security for the better part of a decade, I couldn’t help but notice the strikingly similar path the field is taking to that of network security. Incidents (hacks) prompt technology research and over time drive business cost justification. Follow-on attacks, best practices, and regulation directly impact business models and the style of solution deployment.



To get a better visual comparison I created a timeline mapping key events. What’s interesting is that Web application security closely matches network security if you shift by eight years. We’ll see what the next couple of years have in store.


Network Security
(1988) The Morris Worm, the first computer worm distributed over the Internet, infects over 6,000 hosts. The incident prompts research into network firewall technology.

(1992) The first commercial packet-filter firewall (DEC SEAL) shipped, marketed as a perimeter security device able to thwart remote attacks targeting unpatched vulnerabilities, including those exploited by the Morris Worm.

(1994 - 1995) Given that firewalls were not widely deployed and their costs not yet justified, savvy network administrators sought tools to identify and patch vulnerabilities. ISS released Internet Scanner, a commercial network security scanner, and the Security Administrator Tool for Analyzing Networks (SATAN) was released for free.

(1996) Network security scanners revealed the need for more mature patch management products, as security updates were required frequently. PATCHLINK UPDATE, a commercial patch management product, was released as a solution to the problem.

(1996) The costs of commercial patch management software and the potential downtime held back technology adoption, and mature free patch management solutions were not yet available. In an environment where few systems were diligently patched, hackers successfully exploited large blocks of undefended networks.

(1997) Broadly targeted attacks highlighted the need for additional security controls. The free software firewall, Linux IPChains, became a viable alternative to commercial products. Many enterprises chose perimeter firewalls before deploying patch management solutions because they were often seen as a faster, simpler, and more cost-effective approach.

(1998) With the wide availability of network scanners (Nessus), increasing deployment of firewalls, and a proven need to defend against remote attacks - the environment created a need for high-end consultative network penetration testing services.

(1998) To ease the challenge of keeping up-to-date on security patches, Windows Update was first introduced with Windows 98.

(2001) Code Red and Code Red II infected hundreds of thousands of vulnerable Microsoft IIS servers within days of their release. The incident highlighted the need for increased adoption of enterprise patch management, if only on publicly available hosts.

(2002) Bill Gates: Trustworthy Computing Memo.

(2003) SQL Slammer and the Blaster worm demonstrated the porous state of network security by exploiting tens of thousands of unpatched hosts, even those located within the network perimeter. The incidents also highlighted the need for host-based firewall deployments and patch management for all hosts, public and private.

(2003) To keep pace with the increased frequency of remote attacks and patching requirements, adoption of in-house network vulnerability scanning programs increased to offset the prohibitive costs of consultant penetration tests.

(2004) Windows XP Service Pack 2 ships with Windows Firewall as a default security feature to protect unpatched hosts, which may or may not be protected by a perimeter firewall. Shortly thereafter firewalls become fairly ubiquitous for any Internet-connected host.

(2005) Network vulnerability scanning moved toward ubiquity, but the costs of software and management remained prohibitive. This, coupled with compliance requirements, led to the increased adoption of Managed Security Service and Software-as-a-Service providers (Qualys / ScanAlert) to achieve a lower total cost of ownership.

(2006) The PCI Security Standards Council was formed, uniting the payment brands’ disparate initiatives and enforcing requirements for vulnerability management, patch management, and firewall ubiquity.


Web Application Security
(1996) The PHF exploit, one of the more notorious CGI input validation vulnerabilities, was used to compromise untold numbers of Web servers. The incident, coupled with other published Web hacking techniques, prompted research into Web Application Firewall technology.

(1999) The first commercial Web Application Firewall (AppShield) shipped, marketed as a perimeter application security device able to thwart attacks targeting unmitigated vulnerabilities, including the PHF exploit.

(2000 - 2001) Given that Web Application Firewalls were not widely deployed and their costs not yet justified, security professionals sought tools to identify Web application vulnerabilities. Commercial Web application scanners (AppScan) became commercially available, as did open source tools such as Whisker.

(2001) Web application scanners, and published research, revealed the need for more secure Web application software. The Open Web Application Security Project (OWASP) was founded as a community effort to raise awareness of Web application security. - http://www.owasp.org/index.php/Main_Page

(2002) Broadly targeted Web application attacks highlighted the need for additional security controls. The free Web Application Firewall, ModSecurity, became a viable alternative to commercial products. Enterprises began choosing WAFs before secure software initiatives because they were often seen as a faster, simpler, and more cost-effective approach.
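By way of illustration (this sketch is mine, not part of the original timeline), the blacklist-versus-whitelist trade-off behind the "faster, simpler" appeal of WAFs can be shown with a pair of hypothetical ModSecurity-style rules. The URI, parameter name, and patterns are assumptions for the example, not rules from any real deployment:

```apache
# Hypothetical ModSecurity 2.x rules (illustrative patterns only).

# Blacklist style: deny any request whose arguments match a known-bad
# SQL injection pattern. Fast to deploy, but only blocks what it knows about.
SecRule ARGS "@rx (?i:union[\s/*]+select)" \
    "phase:2,deny,status:403,msg:'SQL injection attempt'"

# Whitelist style: for one assumed page, reject any 'id' parameter that is
# not purely numeric. Stronger, but requires knowing the application.
SecRule REQUEST_URI "@beginsWith /account" \
    "chain,phase:2,deny,status:403,msg:'Invalid id parameter'"
SecRule ARGS:id "!@rx ^[0-9]+$"
```

The blacklist rule is why WAFs looked like the quick first step; the whitelist rule hints at why tighter policies demand application-specific tuning.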

(2004) The number of attack types and esoteric naming conventions had become vast. The OWASP Top Ten was released to highlight and describe the most prevalent and critical Web application security vulnerabilities.

(2005) With the wide availability of Web application scanners and other free tools, increasing deployment of Web Application Firewalls, and a proven need to defend against remote Web application attacks, the environment created a need for high-end consultative Web application penetration testing services.

(2005) The Samy Worm, the first major XSS worm, infected over 1 million MySpace profiles in under 24 hours causing an outage on the largest social network. The incident highlighted the need for more secure Web application software.

(2007) Mass SQL Injection attacks began infecting website databases with browser-based malware exploits. Visitors to infected websites would have their machines compromised automatically. The incidents highlighted the need for more secure Web application software, with Web Application Firewalls used as stopgap measures.

(2008) The deadline for the Payment Card Industry’s Data Security Standard section 6.6, requiring credit card merchants to conduct code reviews or install Web Application Firewalls, expired. The requirement fueled demand for both solutions, commercial and open source.

(2008 - 2009) To keep pace with the increased frequency of remote attacks, the rate of Web application change, and the frequency of vulnerability testing required by PCI-DSS 6.6, adoption of in-house Web application vulnerability scanning programs increased to offset the prohibitive costs of consultant penetration tests.

Next major incident?

Broad SaaS Network Vulnerability Scanning adoption?

Broad WAF Adoption?

Broad SDL Adoption?

7 comments:

dre said...

*rolls*eyes*

Rafal Los said...

@Jeremiah:
Great time-line, made me think a little though (which is often quite dangerous this close to vacation); it took us *forever* to figure out network security was an "Internet-wide" problem. When I first started deploying firewalls and preaching about ACLs back in 1996/1997 I noticed that there were several steps people went through:
1. Dismissal
2. Apathy
3. Disbelief
4. Limited Comprehension
5. Acknowledgment
6. Standardization
7. Apathy

Ironically, but not surprisingly, I have a paper I was writing that's been back-burnered which details the 7 steps that have been the general populace's reaction to every security concern of the past, whether network security, system security, or now web application security. It may be time to dust off that notebook and finish that paper, huh?

Thanks for bringing this to front-of-mind... invaluable, and I'm sure it'll be referenced many, many times.

Jeremiah Grossman said...

Thanks Rafal, and yes, please go forth and publish cool stuff. It's interesting to see how events create the need for solutions, which in turn cause follow-on problems, highlighting the need for more. :)

Anonymous said...

I would not bet too much on the repeating nature of history. Usually it is much more complicated (and interesting) than pure repetition. Call it the professional scepticism of a historian.

However, I fell in love with the following quote by Mark Twain:

History doesn't repeat itself, but it does rhyme.

Other than that: nice blog post.

Personally, I believe that the trend goes in the direction of whitelisting on the WAF, and rather not in autolearning mode (or do you have a network firewall that does autolearning?). But maybe I am falling for the repetition idea myself here.

Jeremiah Grossman said...

@Christian, thanks for the feedback, and I did enjoy the quote. Got another one for you that seems particularly applicable...

“When you know nothing, permit-all is the only option. When you know something, default-permit is what you can and should do. When you know everything, default-deny becomes possible, and only then.”

- “Economics and Strategies of Data Security,” by Dr. Dan Geer.

I see WAFs going down this road: first permit-all, then default-permit (blacklist, with some whitelisting), and finally, if we are very lucky, full whitelisting on some applications.

test said...

Jeremiah,

This was an excellent post and I really enjoyed reviewing the timeline and recalling the projects that I was working on during each particular year.

I find it interesting that you categorized the Mass SQL Injection attacks under software security, as I would have likely categorized them as a network security event since the actual attacks are still carried out over the network.

Jeremiah Grossman said...

@inuk-x, thanks for the kind words. Some of the hacking events could have gone either way, as they reflected my bias. It was a fun bit of research to think back on where I was when X happened. :)