Thursday, June 26, 2008

Can WAFs protect against business logic flaws?

“Web Application Firewalls (WAF) are a total waste of time/money because they can’t protect against business logic flaws!” is a common theme among the few, but vocal, seriously anti-WAF zealots out there. While there is some truth to this, it’s also like saying car door locks are useless because criminals can break in by smashing the windows. Or car alarms are a waste because they don’t protect against carjacking. And steering wheel locks are lousy because the car is at risk from thieves with tow trucks. You see where I’m going with this. Every security measure has a particular purpose, limitation, and overall value to the owner considering what it is they’re protecting.

Sure, WAFs don’t defend against every logic flaw, or even every crazy form of SQLi or XSS. Just as white/black box scanners can’t identify every vulnerability, and neither can expert pen-testers or source code auditors. A/V products don’t red flag every piece of malware. Anti-spam misses some junk mail. Yet we still utilize these solutions anyway because their value outweighs their limitations. And of course WAFs don’t replace vulnerability assessment (VA) or secure coding practices, just as Nessus doesn’t compete with network firewalls or good segmentation practices. Therefore I recommend we ignore rash criticisms and keep an open mind about what WAFs can/can’t do, the value they may provide today, and consider how they may be improved – including being aided by VA intelligence (VA+WAF).

I’m going to keep my comments vendor agnostic. Perhaps some of the features described below are already included in some of the available WAF products. In fact I know they are, and I claim no novelty for any of these ideas (probably printed elsewhere), but I’ll leave it to the vendors to comment on their specific products’ capabilities. I think the readers here might be pleasantly surprised. My intent here is to explore a few of the more common business logic flaw examples we’ve all seen, assume we know their location (via VA), and attempt to hypothesize defense measures.

Business Logic Flaw examples

1) Rotating numbers in URLs: the classic case of Insufficient Authentication, Insufficient Authorization, and Insufficient Process Validation, where an attacker can gain access to data or functionality their user level should not allow. We’ve seen these issues countless times in order tracking systems, bank account screens, and even online voter registration. I see at least two possible ways to prevent these types of business logic flaws with a WAF.

URL encryption
The WAF inspects outbound Web page content, dynamically encrypts and replaces every URL pointing back to the website, then decrypts them on the way back in – completely transparent to the web server and application. For example:

<a href="http://website/app.cgi?foo=bar">action</a>

becomes…

<a href="http://website/06ad47d8e64bd28de537…">action</a>

or

<a href="http://website/app.cgi?foo=bar&t=1fad47d…">action</a>

URL encryption is powerful as it prevents URL parameter tampering and by extension protects against a wide range of attacks (XSS, SQLi, CSRF, etc.). No parameter tampering, no number rotation, no business logic flaw. Implementation is really tricky though, because the HTML parser has to be perfect; otherwise requests will be blocked when links are missed. Bookmarks and search engine indexing are also potentially disrupted. However, websites where most functionality is behind a login screen, such as banking sites, might not mind. It’s also possible these side effects could be reduced by focusing only on the URLs known to be vulnerable (via VA) instead of pursuing global enforcement. There is no need to encrypt URLs that aren’t vulnerable.
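To make the idea concrete, here’s a minimal sketch (my own, in Python, purely hypothetical and not any vendor’s implementation) of a WAF-side tokenizer that swaps real URLs for opaque tokens on the way out and only honors tokens it previously handed out:

import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # per-deployment WAF secret
token_table = {}                  # token -> real URL (per-session in practice)

def tokenize_url(real_url):
    # Called while rewriting outbound HTML: hide the real URL behind a token.
    token = hmac.new(SECRET, real_url.encode(), hashlib.sha256).hexdigest()
    token_table[token] = real_url
    return "http://website/" + token

def resolve_url(token):
    # Called on inbound requests: only URLs we handed out will resolve.
    # A miss means the user was never offered this URL -- deny.
    return token_table.get(token)

Note the token table would really need to be scoped per-session; otherwise a token handed to one user could be replayed by another.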

Session-State tracking
Users can be tracked from one page to the next, so it’s technically possible for a WAF to know where they are in a particular flow and where they should, or should not, be able to get to. If an attacker were to rotate a number in a URL, the WAF could determine whether they should have been able to get there (UI-wise) from where they are. If they shouldn’t be able to, deny! Or perhaps a more forgiving threshold is in order, so they may try 1, 2, or even 10 illegal URLs, but no more, because that would surely be abnormal behavior. Scalability is the biggest drawback here, as increasingly large state tables are required for tracking. However, if you know a particular URL or parameter name has a problem with number rotation, WAFs can again be configured to focus and enforce controls only there.
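A rough sketch of what that tracking might look like (again hypothetical, in-memory Python; a real WAF would need this state to scale across huge numbers of concurrent sessions):

allowed_next = {}  # session_id -> set of URLs linked from the last page served
strikes = {}       # session_id -> count of illegal URL attempts

def on_response(session_id, urls_in_page):
    # Remember every URL the user could legitimately reach from here.
    allowed_next[session_id] = set(urls_in_page)

def on_request(session_id, url, threshold=10):
    if url in allowed_next.get(session_id, set()):
        return "allow"
    strikes[session_id] = strikes.get(session_id, 0) + 1
    # Forgiving threshold: a few stray requests pass, sustained rotation doesn't.
    return "deny" if strikes[session_id] > threshold else "allow-but-log"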

2) Session hijacking by way of cookie tampering is another old school hack that has implications for Credential/Session Prediction, Insufficient Session Expiration, and Session Fixation. This issue doesn’t show up as much as it once did because most developers are using the native session handling APIs in their development frameworks as opposed to rolling their own. A very good thing.

Just like the previous example, we can utilize some good ol’ on-the-fly cookie encryption/decryption, which can easily be performed by a WAF. If an attacker is unable to modify their cookie to a valid value, and the WAF would know cryptographically, then session handling issues go away. You could even add httpOnly, secure, and non-persistent flags if you want. You’d still probably have issues with Insufficient Session Expiration or Session Fixation, but we’re getting somewhere. The only drawback I can think of is if JavaScript or some other client-side language needed to read/write the original cookie values.
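For the tamper-detection piece specifically, even simple signing gets you there. Here’s a minimal sketch (mine, not any product’s) of sealing cookie values at the WAF so modified cookies are cryptographically rejected on the way back in:

import hashlib
import hmac

SECRET = b"per-deployment WAF secret"

def seal(value):
    # Outbound: append a MAC so any client-side tampering is detectable.
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value + "|" + mac

def unseal(sealed):
    # Inbound: verify the MAC before the cookie ever reaches the app.
    value, _, mac = sealed.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None  # None -> reject

Full encryption additionally hides the value from the client; signing alone just makes it tamper-evident, which is enough to kill cookie tampering.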

3) WAFs could also potentially be used to stop login brute force attacks or Insufficient Anti-Automation by including CAPTCHAs on-the-fly at various choke points. Again, thresholds would be neat. We could explore other examples, but I think you get the idea and this post is long enough. Well at least I don't want to write anymore. :)

It’s important to understand that we’re at the very beginning of WAF deployments (and website VA for that matter), which is also why there is so little field experience posted anywhere. We need an open community dialog, independent of the PCI 6.6 mandate, so we can see where this technology can go and how it can be improved. My point is I don’t think WAFs will be able to solve all of our web application security problems, or even all business logic flaws, and I don’t know of anyone who does. It certainly would be nice though to see what WAFs can do or be made to do. We won’t know unless we keep an open mind and try.


"Any fool can criticize, condemn, and complain, and most fools do."
Benjamin Franklin

Tuesday, June 24, 2008

Microsoft announces Black Box, White Box, and WAF

Apparently the mass SQL Injection attacks have really woken people up and they’re probably flooding the MS blogs and inboxes with pleas for assistance. No doubt a lot of them use Twitter. :) Site owners are desperate to protect their old legacy ASP classic code. To help the situation Microsoft has just announced 3 free new toys specifically targeted at SQLi.

1) The Microsoft Source Code Analyzer for SQL Injection (MSCASI) is a static code analysis tool that identifies SQL Injection vulnerabilities in ASP code. In order to run MSCASI you will need source code access, and MSCASI will output areas vulnerable to SQL injection (i.e. the root cause and vulnerable path are identified).


Cool. If anyone wants to provide feedback on effectiveness, I'd really like to know!


2) Microsoft worked with the HP Web Security Research group to release the Scrawlr tool. The tool will crawl a website, simultaneously analyzing the parameters of each individual web page for SQL Injection vulnerabilities.

This is nice of HP to offer, but the product limitations seem somewhat onerous to me...

* Will only crawl up to 1,500 pages
* Does not support sites requiring authentication
* Does not perform blind SQL injection
* Cannot retrieve database contents
* Does not support JavaScript or Flash parsing
* Will not test forms for SQL injection (POST parameters)

Hmm, if MSCASI and Scrawlr are used at the same time, can we call this Hybrid Analysis? :)

3) In order to block and mitigate SQL injection attacks (while the root cause is being fixed), you can also deploy SQL filters using a new release of URLScan 3.0. This tool restricts the types of HTTP requests that Internet Information Services (IIS) will process. By blocking specific HTTP requests, UrlScan helps prevent potentially harmful requests from being executed on the server. It uses a set of keywords to block certain requests; if a bad request is detected, the filter drops it so it is never processed by the SQL backend.
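The flavor of the approach, sketched generically (this is Python pseudocode of a keyword blacklist, NOT URLScan’s actual configuration syntax):

from urllib.parse import unquote_plus

# Generic keyword blacklist in the spirit of URLScan-style SQLi filtering.
BLACKLIST = ["select", "insert", "update", "delete", "drop", "xp_", "--", ";"]

def should_block(query_string):
    decoded = unquote_plus(query_string).lower()
    return any(keyword in decoded for keyword in BLACKLIST)

# should_block("id=1;DROP+TABLE+users--") -> True
# should_block("id=42") -> False

The obvious catch with any such blacklist: legitimate requests containing “select” or a semicolon get blocked too, which is the classic negative security model trade-off.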

IIS’s equivalent to ModSecurity on Apache. Cool stuff; I first used it a LOONG time ago and no doubt solid improvements have been made since. From the description it appears to still be using a blacklist, negative security model approach to protection. How about that!? :) Looks like the only thing they left out is some kind of DB or system clean-up for those who have already suffered an incident. I’m hearing that the hacked count is up to 2 million sites now. Ouch.

Twitter users angry about SQL Injection hacks on their websites

The mass SQL injection attacks have impacted the lives of a lot of Twitter users out there. I did a search for “SQL Injection” and the results are page after page of misery, time wasted cleaning things up, and cursing up a storm. You can really feel their pain, and the worst is probably not yet over. Still gotta fix all that legacy code. Here are some of my favorite tweets…

shartley: Cleaning yet another SQL injection attack. I'm F'n sick of cleaning up after lazy programming that took place during my year away.

jamesTWIT: To the hacker who designed the SQL injection bot. I hope you die and not a fast death...something slow and painful. Like caught in a fire!

chadmonahan: Dearest SQL Injection people, I don't like you. Yours, CM

programwitch: F'n SQL injection hacks.

Anirask: Damnit. Our main website is down cause of SQL Injection attacks. You figure devs would sanitize their inputs against this shit..

Sunday, June 22, 2008

Made purple belt in Brazilian Jiu-Jitsu

This weekend is one to be remembered. Out of the blue (no pun intended) I made purple belt in BJJ (white - blue - purple - brown - black) and our GGAFL footy team won a regional qualifier for the 2008 USAFL National tournament (we’re trying to make Division 1). Woot! I love both these sports.

I’ve been training BJJ seriously hard over the last year and a half since receiving my blue, typically 4 days a week when not traveling, and visiting local academies across the nation when I am. Sprinkle in some running, weights, and footy, and my stamina/skill is somewhat reasonable. Now if I could just be more disciplined about my diet, 221 lbs should be in reach within the next 6-8 weeks for competition.

A dislocated shoulder about a month ago set things back and is taking forever to heal. Ugh! It’s extremely painful to push or pull when my arm is extended, so I have to keep it bent and use my elbow for everything. I also have to make sure no one grabs onto it, otherwise it’s tap city. And I hate tapping, the thing I failed to do and the cause of the injury (stupid). I’ve also noticed that when injured my technical game comes up a lot faster than it would have otherwise.

You see, if you can’t use much strength you are forced to rely more on technique and strategy. Speed, strength, and technique are basically the three ways to beat someone. At my size speed is unreliable. :) For me, if I stop training altogether to heal, or get frustrated, my game and stamina go down fast. Those in BJJ know nursing minor injuries is just part of the sport.

While I hadn’t really realized it, over the past couple of weeks I’ve been tapping everyone in the academy (blues and purples) one after the other. I guess it looked weird for me NOT to have a purple belt, even if I’m early, so here I am. A whole new level, the realm of leg locks, and the journey to brown.

Hoff, we’re gonna have some fun at Black Hat. :)

Friday, June 20, 2008

Day 1: Starting at the beginning

You’re hired on at a new company and placed in charge of securing their online business (websites). You know next to nothing about the technical details of the infrastructure, other than that they have no existing web/software security program and a significant portion of the organization’s revenues is generated through its websites.

What is the very first thing you do on day 1?

The idea here is to collect wisdom from the crowd (avoiding multiple choice), see if we can find recurring themes, order them accordingly, and then move on to the next step. And please don’t say pour a cup of coffee. :)

Top 5 questions to get webappsec threads spinning out of control

1. Should all Web traffic be SSL'ed or only username/passwords?
2. Black box vs. white box testing, which is better?
3. Ask anything about a WAF.
4. What are the best-practices for conducting password recovery?
5. Which certification should I get? CISSP?

Thursday, June 19, 2008

Ultimate Scanner Accuracy Test

Imperva issued a press release and a pair of blog posts describing their new SecureSphere (WAF) API functionality (OpenSphere). Cool stuff. Clearly from the text they’re excited by the VA+WAF concept, which is the same technology integration path WhiteHat Security completed with F5’s ASM and Breach’s open source ModSecurity. We’ve been getting email asking when we plan to integrate Sentinel with Imperva (and we likely will) because we’re the only VA scanning vendor not listed (Cenzic, HP, IBM, NTO) in their announcement, but I’ll get to that in a moment.

There is something else deep underneath that VA+WAF brings to the surface, something everyone finds important, and that is true scanner accuracy. For VA+WAF to work (block) in a production environment, scanner results MUST be EXTREMELY accurate. We’re talking sub-1% false positive and duplicate rates or things will fail (badly). And even then you still want to test virtual patches in alert mode first. Scanners cannot fudge these results, use noncommittal “potential vulnerability” reporting, or use clever marketing spin. You’ve got the goods or you don’t. It’s probably an interesting new way to review scanners.

For example, Kavado (now defunct) attempted VA+WAF in 2002-2003, and other vendors tried again in 2003-2004 using AVDL. Ultimately, all these attempts proved unsuccessful because the scanning products dumped hundreds or even thousands of false positives and duplicate vulnerabilities (still the case?) into the WAFs. Early implementations bogged down WAF performance or rendered the entire website inaccessible. At WhiteHat Security we feel we’ve overcome the accuracy challenge and have statistically gathered enough proof points to feel comfortable bringing the solution to market.

As far as our integration plans go and why we’re not listed: WhiteHat Sentinel features are almost entirely customer-demand driven, and WAF integration is no different. Many enterprises are considering their options (Cisco, Citrix, WebDefend, etc.) and then let us know which product they like best. Then we get the product in the lab, integrate the technology, test, test, test, and test again, then show it off to several people. If feedback is positive, then we announce. Perhaps other scanning vendors do it differently and don’t mind marketing vapor-ware. We prefer offering solutions that work first, and that might explain why we’re not on the list. :)

Wednesday, June 18, 2008

I've joined the Twitter-sphere

Don't know exactly why yet, but people say it's cool and worth a look. Seems like speed blogging to me, or maybe like a big chatroom. jeremiahg is the name if you care to follow.

OWASP/WASC Party - Black Hat 2008

Last year hundreds showed up to the OWASP/WASC Black Hat (2007) party (sponsored by Breach Security) for the free booze, food, and chance to geek out on webappsec. The 2008 party is going to be just as awesome, and who knows, maybe even better. If that’s possible. :) Due to the overly massive attendance last year (and long line), there will be no email RSVP list. You MUST show up at the Breach booth to get a wristband. My advice: don’t wait till the last moment. They’ll definitely run out because the Shadow Bar can only hold so many people. See you all there!

When: Wednesday, August 6, 7:30 PM – 9:30 PM
Where: Shadow Bar, Caesar's Palace, Las Vegas

Monday, June 16, 2008

Why most WAFs do not block

I found the reason elegantly described in “Economics and Strategies of Data Security", by Dr. Dan Geer.

“When you know nothing, permit-all is the only option. When you know something, default-permit is what you can and should do. When you know everything, default-deny becomes possible, and only then.”

To implement default-deny, Web Application Firewalls (WAF) must know everything about a website at all times, even when it changes. That means programmatically documenting every expected request method, URL, parameter name/value pair, cookie, process flow, etc., making default-permit deployments the rule rather than the exception. Some WAF policies though, like HTTP protocol validation, can run in default-deny mode; the rest, well, not so much. Which is why putting in point rules (virtual patches) to defend against known vulnerabilities tends to work well in lieu of pure positive security models.
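Geer’s point in code form: a toy sketch (hypothetical URL and parameter names, mine alone) of why default-deny means knowing everything:

import re

# To run default-deny, the WAF needs an entry like this for EVERY URL,
# kept current every time the site changes.
POSITIVE_MODEL = {
    "/app.cgi": {"method": "GET", "params": {"foo": r"^[a-z]{1,16}$"}},
}

def allowed(path, method, params):
    spec = POSITIVE_MODEL.get(path)
    if spec is None or method != spec["method"]:
        return False  # default-deny: anything unknown is blocked
    return all(name in spec["params"] and re.match(spec["params"][name], value)
               for name, value in params.items())

Maintaining that model by hand for a site of any size is exactly why default-permit plus targeted virtual patches is the common deployment.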

Developers don’t go to security conferences

RSnake and I were talking about this the other day as one of the problems in the webappsec industry. Over the years I’ve spoken at well over 100 events including ISSA, ISACA, Black Hat, AppSec, InfoSec, SANS, CSI, and probably more OWASP chapters than anyone. Saying 10% of infosec conference audiences are developers would be a generous estimate. The vast majority are part of IT Security in one form or another. For some reason, though, IT Security people expect developers to come to us to be shown the light; perhaps it should be the other way around. In fact, I’m sure of it.

Back in May Joe Walker and I co-presented at JavaOne (as we did the year before at the Ajax Experience). He plays the part of the well-known developer bringing in the well-known infosec guy. Over 1,000 people maxed out the ballroom, so clearly “security” is a draw at developer conferences. We covered XSS, CSRF, SQL Injection, and offered some live demos. We showed what an attacker could do to their beloved code should some basic precautions not be taken, and discussed the value of security inside the SDLC. Judging from the feedback, they loved it. Demos rule, no question. I think I reached more developers and made a more positive impact on that group that day than in all the shows since the Ajax Experience.

Here’s what I’ve learned: tailor your subject matter and talking points to your audience. OK, you already knew that. From my experience, IT Security is most interested in learning about the latest and greatest threats, statistical trends, and what their peers did to solve a particular problem. So that’s what I focus on during those events. Preaching security in the SDLC is just “OK”, but not terribly sexy to this audience. This is why you hardly see any code security talks at the infosec conferences, much to the dismay of Andrew van der Stock. Developers and developer conferences are very interested in “application security”, but it has to be presented in the proper context to keep it interesting. Getting a handle on the right flow, format, and level of sophistication is the hard part. Seems they want to cover more browser extension and Google Gears related stuff now.

Wednesday, June 11, 2008

AppSec Conference goes to India!

Dhruv Soi, Founder & Director, OWASP – Delhi, is organizing the AppSec India Conference 2008 (August 20th-21st). It’s exciting to see webappsec awareness take off continent after continent and country after country. India has some of the best experts in the industry local to the region, who are already on the schedule, including Shreeraj Shah (BlueInfy) and Saumil Shah (Net-Square). Organizers are also bringing in Arshan Dabirsiaghi, Dinis Cruz, and Nishchal Bhalla. Looks like the makings of a solid event and highly recommended if you are anywhere near the area. It’s sure to be unique. I would have liked to attend myself, but I’ll be vacationing immediately after Black Hat/DefCon. I’ll need the rest. :)

Thursday, June 05, 2008

Site Security Policy – open for comments

OK gang, this is one of those rare moments where feedback from the community will directly influence a security feature that’ll make a real difference. First, some background...

About 6 months ago Brandon Sterne left a cushy infosec position at eBay for Mozilla to solve an extremely important Web security problem he couldn’t solve while he was there. It’s the same exact problem a lot of major website properties have, including Google, Yahoo, MySpace, Microsoft, Facebook and so on: business requirements say users must be able to upload dynamic content (HTML/JavaScript) that will interact with other users. The other is including CDN content (advertising) supplied by multiple unknown upstream providers. We all know the damage this stuff can do when abused.

Unfortunately browsers lack any mechanism for a website to specify what its content should be able to do and where it’s supposed to originate. When accepting user-supplied dynamic content on a website, it’s all or nothing. Website owners need more granularity. This is where the idea of content-restrictions came from years ago, coined ironically by RSnake, who also worked for eBay years back. The idea never really got off the paper and into browser code despite a lot of experts, including myself, pleading for even a limited implementation. This is where Brandon comes in, along with the presentation on “Web Application Security and the Browser” he recently gave during Yahoo Security Week.

Brandon is in the process of creating Site Security Policy: a specification for people to comment on and a proof-of-concept extension for people to play around with. He’s got policy provisions worked in to help prevent XSS, CSRF, and even Intranet Hacking. Brandon even has some cool client-side IDS stuff worked in. The vision is to later formalize the specification through the W3C and integrate the feature natively into the browser once trouble spots are ironed out.
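To give a flavor of the concept (this is syntax I made up purely for illustration, NOT the actual Site Security Policy spec; read Brandon’s draft for the real provisions), imagine a site declaring something like:

# Hypothetical policy, invented for illustration only:
# scripts may load solely from the site itself and one approved ad host,
# and violations get reported back so the owner can tune the policy.
script-source: self https://ads.approved-host.example
report-uri: /policy-violation-reports

The power is that user-supplied HTML could then be rendered while injected script from anywhere else simply refuses to run.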

Comment away!

Wednesday, June 04, 2008

Summary: SANS WhatWorks in Web Application Security Summit 2008

All-around stellar event; everyone I talked to had a good time and was able to take something of value home with them. ~150 people attended on behalf of many different organizations, large and small, from banking, telecom, ecommerce, software development, healthcare, etc. The format favored enterprise speakers rather than experts, which made it less about the newest attacks/threats and more about how enterprises went about solving problem X. This was great because I don’t think we have to push as hard anymore to promote general webappsec awareness. In my opinion the early adopters are here and we should be supporting them to be mentors and evangelists. We need to continue facilitating knowledge exchange.

Judging from the feedback my keynote was well received (whew). I was a little nervous because the material was largely new and also because I touched on some, well, sensitive and deeply held beliefs about ways in which we approach web application security. I’ll post the slides and speech text as soon as I can. I also want to thank the SANS team, especially Carol Calhoun, for allowing myself and WASC to participate. The sponsors (Breach, Cenzic, Core, HP, WhiteHat) were very generous, paying for a lot of the food, drinks, and evening entertainment. Also, thank you to the attendees who really made it all possible.

Before I forget them, there were several interesting items I noted down:

1) There was a lot of talk about how to effectively communicate with upper management on web application security issues. If the security guy manically informs the CIO, “OMG, we got 30 XSS and 10 SQLi issues!”, chances are they’re not going to know what you are talking about or understand it well enough to make an informed business decision. However, if you are able to put vulnerability reports into a meaningful business context, you stand a better chance of influencing action in the right direction. For instance: we have 5 high severity issues that, if exploited, could lead to X dollars lost or X number of users impacted. I don’t think anyone REALLY had good answers here on specifics, but it’s clear something better is needed.

2) The verbiage some people used when asking for advice on how to get programmers to develop more secure code was a little concerning. They used terms such as “how do we strong arm them?”, “we need to beat them on the head”, and “can we force them in some way?”. Ed Pagget from First American keyed in on this negativity right away. Basically he said nobody, including developers, likes to be manipulated or otherwise forced to do something, and this is exactly the wrong approach. I agreed, as this approach could easily backfire. Security people must work to establish productive working relationships with the business and its developers or nothing will change. If we believe developers (people) generally want to do the right thing if only empowered to do so, then let’s empower them in the best way possible.

3) Several people said of the white and black box scanner tools (AppScan, WebInspect, Hailstorm) that when they integrated them into development, they disabled all but the most accurate rules. Apparently developers have a high tolerance for false negatives and a low tolerance for false positives, perhaps in contrast to security folks. I guess this makes sense when you have to get some form of reliable security testing into the SDLC that’s managed by developers. But I’m left wondering how much security has actually been gained as a result? How much harder does it make it for the bad guy to find the next vuln?

4) Several enterprises described their investment in security “champions” inside development groups as opposed to trying to tackle the entire group as a whole. For web security matters the developers can go to someone in the immediate vicinity to consult with and ask questions. This is actually quite clever as that person acts as a mentor for the rest. You are effectively training the trainer.

5) I thought I’d have to do more “selling” on the VA+WAF subject, but overall people seemed highly receptive to the notion. I even had a short discussion with Gary McGraw and figured I’d at least have to spar a bit with him. Instead he basically said it’s a security solution that has a specific time and place, just like everything else. Indeed. When exactly that is, is the question we’re all trying to figure out and get comfortable with. Still, as Sharon Besser from Imperva picked up on, our time-to-fix metrics are less than desirable. We can and need to do better.

6) Several people were asked for their thoughts on the Microsoft SDL and OWASP ESAPI. Consensus on the SDL: great for enterprise/desktop software, not so good for web application development; agile development sprint cycles are too fast for security to be built in the way it’s described. On ESAPI: good ideas and lots of potential, but difficult to fold into existing projects. Also, to be effective, the alternative APIs must be ripped out to prevent developers from rolling back to the less secure code they’re more used to working with.

Sentinel dynamically generates ModSecurity rules

Update 06.05.2008: Some people wanted to see the SQL Injection version. Screenshot added below.

Many of our customers have Apache installations, large and small, and prefer to go with a software WAF rather than a device. Reasons vary, but cost is always a concern and ModSecurity is free. :) You can pay Breach for rule support and commercial-grade management appliances if you need them; cool stuff. What customers asked for, though, is a path to go from Sentinel-identified vulnerabilities to custom ModSecurity rules (virtual patches) - VA+WAF.

Once Sentinel pinpoints the location of a vulnerability (class, URL, parameter), it’s verified by our operations team, and a dynamically generated ModSecurity rule becomes part of the web-based reporting (see screenshot). The custom rule is a blacklist meta-character match narrowly focused on the vulnerable URL, parameter value (multiple params are supported), and class of attack. A parameter known to be vulnerable to SQL Injection probably has no legitimate use-case for receiving quotes, semicolons, dashes, parens, etc. Similar can be said for XSS.
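To give a feel for what such a virtual patch can look like, here’s an illustrative ModSecurity 2.x rule I wrote against a hypothetical /app/orders.cgi URL and id parameter (this is my own sketch, not Sentinel’s actual generated output):

# Illustrative only: block SQLi meta-characters in one known-vulnerable
# parameter on one known-vulnerable URL (names are hypothetical).
SecRule REQUEST_URI "@beginsWith /app/orders.cgi" \
    "phase:2,chain,deny,status:403,log,msg:'Virtual patch: SQLi in id parameter'"
    SecRule ARGS:id "['\";()\-]"

Swap deny for pass while testing, which is exactly the rollout process described next.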

Sentinel users then copy/paste the custom rule into their Apache config. We recommend the rule first be tested in “pass” mode to ensure no false positives exist that might block legitimate traffic. Once confidence in the rule is established, which may take an hour to a couple of days, “deny” mode can then be used to block attacks. False positives are minimal because only identified attack traffic, where a vulnerability of that type is specifically known to exist in that location, is affected. Finally, the re-test button can test the single issue and, should everything work properly, the vulnerability status is automatically closed out.

Hopefully in the coming months I’ll get some good case studies going and statistics measuring how much the time-to-fix window shrinks.

What you need to know about HTTP Verb Tampering

Recently Arshan Dabirsiaghi, Director of Research at Aspect Security, published a white paper entitled “Bypassing URL Authentication and Authorization with HTTP Verb Tampering”. Initially there was a lot of confusion about what exactly was being explained or claimed: is it real? Is it novel? Is it dangerous? What is this? Most will get lost in the semantics of the debate and only care whether it impacts them in some way. So I hope to get to the relevant bits, borrow from Arian Evans’ summaries, and make things a bit easier to understand.

1) No one is claiming that HTTP verb (GET/POST/HEAD) manipulation is new. Manipulating the type of HTTP request a webapp expects to receive, such as changing GET to POST and POST to GET, has been done for years. Our websites should only accept the types of requests we expect to receive and no more. What is interesting here is when it can be used and for what purpose.

2) HTTP verb tampering is generally used in conjunction with syntactic (XSS, SQLi, etc.) and semantic (bypassing authentication/authorization controls) attacks as a way to bypass certain defense measures. Arshan’s work on implementation details focuses on the semantic version.

3) In syntactic attacks you can use verb manipulation to get malicious data (‘ DROP TABLE …’) into a session object where it might not have otherwise been allowed. i.e. query string parameters were sanity checked, but the attacker used POST, placing the data in the message body where it was overlooked by the application. This can lead to SQLi, XSS, and several other common technical vulnerabilities.
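As a concrete illustration (a hypothetical sketch using Python’s requests library against a made-up app.cgi endpoint), the same parameter simply moves from the validated location to the unvalidated one:

import requests

# The app validates 'foo' when it arrives in the query string...
requests.get("http://website/app.cgi", params={"foo": "bar"})

# ...but the attacker sends it as a POST body instead. If the framework
# merges query string and body parameters into one collection, the
# malicious value reaches the app having skipped the query-string checks.
requests.post("http://website/app.cgi", data={"foo": "' DROP TABLE users--"})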

4) To protect yourself from syntactic HTTP verb manipulation attacks, make sure you only read user-supplied data from where it’s expected to be received (query string or POST data), or sanity check both the same way if necessary. Also, only include the parameter names in the session object you expect to receive. Don’t allow attackers to add arbitrary name/value pairs.
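For example, in a hypothetical Flask handler (Flask used purely for illustration), the difference between reading from the expected source and reading from a merged collection is a single attribute:

from flask import Flask, request

app = Flask(__name__)

@app.route("/search", methods=["GET", "POST"])
def search():
    # Risky: request.values merges the query string and POST body, so a
    # value smuggled in via the "wrong" verb is silently accepted.
    q_merged = request.values.get("q")

    # Safer: read only from the location the app actually expects.
    q = request.args.get("q")  # query string only
    return "searched for " + (q or "")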

5) In semantic attacks verb manipulation can be used to bypass authorization/authentication protections for specific verbs and areas of the site. A config might say HTTP requests to the /admin/* directory using “GET” must have an “admin” session role. One would ASSUME any methods not listed (POST, HEAD, WHATEVER) in the config would automatically be placed in default-deny mode, but this is not necessarily the case. The RFCs say the HEAD verb is supposed to be treated exactly the same as GET, just without returning any response body. So it’s possible for an attacker to send a request to /admin/delete_user.cgi?id=1 with the HEAD verb and no authentication/authorization. They just wouldn’t get a response to the action, and certain frameworks are known to be vulnerable to similar attacks. Nasty stuff, and it commonly goes untested for.
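The classic example is a Java EE web.xml security constraint (paths and role names here are hypothetical); by listing verbs explicitly, every verb NOT listed, including HEAD, slips past the constraint entirely:

<!-- Only GET and POST to /admin/* require the admin role.
     A HEAD (or arbitrary) verb request matches no constraint,
     so it sails through unauthenticated. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Admin pages</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
</security-constraint>

Omitting the <http-method> elements entirely makes the constraint apply to every verb, which is the default-deny behavior the next point recommends.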

6) There are several things one can do to protect themselves; the most direct is ensuring ALL HTTP verbs are placed in default-deny mode unless otherwise specified. What you are looking for is consistency of authentication/authorization controls across the various methods you expect to receive. Additional details are implementation specific and can be found in the white paper or list chatter.

7) Scanning for syntactic issues is possible, but it can easily double or triple the number of requests that need to be sent, and effectiveness varies widely across the commercial vendor range. Scanning for semantic issues is going to be extremely hard and likely to remain a manual process for quite some time. It’s basically a business logic flaw: even though a scanner can technically manipulate the verb, it just doesn’t know what the outcome of a test means.

Overall good paper, I learned some good stuff about the particulars of certain implementations. Plus it sparked a lot of good debate. Hope this helps clear some things up.