Wednesday, June 13, 2007

Moving Forward: CSI Working Group on Web Security Research Law

The CSI Working Group on Web Security Research Law (web security researchers, computer crime law experts, and law enforcement officials) was formed in an effort to advance our collective understanding of website security vulnerability discovery and disclosure. The inaugural report explores all aspects of the debate, complete with case studies, and provides a solid resource for bringing the industry up to speed. As part of the working group, and having had time to contemplate what I’ve learned, the big question on my mind is "where do we go from here?"

If our goal is to…

1) Protect the security researcher
If a software vendor or website owner is knowingly or unknowingly putting consumers (or their data) at risk, security researchers make it known. A security researcher’s act of vulnerability discovery/disclosure, which may cross ethical and legal lines, in a sense serves as an industry watchdog. While many argue over the specifics, few say security researchers do not, on the whole, help the greater good. If we wish to continue having security researchers play this role as more software becomes web-based, we’ll need:

a) Clear guidance as to what actions are legal or illegal when looking for and disclosing website vulnerabilities.

Today’s climate of legal liability and criminal prosecution has already caused many experienced researchers to curtail website vulnerability discovery and disclosure (at least in the U.S.). Without guidance, those who will suffer the most, unfortunately, will be the newcomers to the information security field who don't know any better. Careers, or in some cases lives, will be seriously impacted before they’ve even begun. As for the rest of the people looking for vulnerabilities on websites, the bad guys, it’s a free-for-all either way.

b) Whistleblower protection
Even the most well-intended laws sometimes prevent people from serving the greater good. We’ve seen this happen in other areas, and it’s reaching the point where whistleblower protection may be required in the information security field as well, especially with more and more of our most sensitive information under the protection of others. People in a position to know should be able to come forward with at least some expectation of legal protection. Right now there is none.


2) Motivate organizations to better secure their websites

By some estimates, over 1 billion people are online with access to over 122 million websites (growing by nearly 4 million per month). The vast majority of the websites that are assessed for security have serious vulnerabilities, so it’s no surprise that the Web layer is the most commonly attacked spot: it represents the path of least resistance. With so much commerce being conducted on the Web, it should be in the best interest of website owners to protect the security and privacy of the consumer. The question we all ask is how to help make that happen.

a) Industry’s self-regulation - the carrot
Industries may self-regulate and reward website owners with perks for maintaining a high level of security for consumers. With PCI we’re beginning to see this trend. While the results will not be immediate, over time they will be measurable. Industries that fail to self-regulate online security will continue to suffer massive incidents, and if the problem gets bad enough, the risk of government-imposed regulation becomes a reality, as has already happened in many other industries.

b) Legal liability - the stick
The government may also decide consumers deserve to be compensated for breaches of their personal information. Personally, I find this route preferable to legislated compliance standards for security. Let the organizations involved properly balance their need for security with the potential for legal liability.

However, maybe within the next 3-5 years, as more incidents like TJX occur, we’ll end up with both remedies.

17 comments:

Anonymous said...

Hey Jeremiah,
Kind of an unrelated question: I've been playing around with the Safari browser for Windows. We run through a Squid proxy here, and every time I go to a site it prompts me to enter credentials for the proxy. If I enter anything, it crashes the browser. Would this just be a bug, or be classified as a DoS? I've tried fuzzing the user/pass parameters but I can't overwrite anything on the stack.

Thanks for the 'noob' question :p
-Eric

Jeremiah Grossman said...

Sounds like a bug, but I really don't know.
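
If you want to poke at it further, one way to separate Squid's behavior from the browser's is a throwaway "proxy" that does nothing but demand credentials. Below is a rough Python sketch, purely illustrative: the port, padding size, and headers are arbitrary placeholders for the test, not anything Safari-specific.

# Fake proxy: answer every request with 407 Proxy Authentication Required
# and an oversized realm, then log whatever the browser sends back.
import socket

LISTEN_PORT = 3128                # Squid's default port, arbitrary here
PADDING = "A" * 4096              # oversized realm value to stress the parser

RESPONSE = (
    "HTTP/1.0 407 Proxy Authentication Required\r\n"
    'Proxy-Authenticate: Basic realm="' + PADDING + '"\r\n'
    "Content-Length: 0\r\n"
    "Connection: close\r\n\r\n"
)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", LISTEN_PORT))
server.listen(5)
print("fake proxy listening on 127.0.0.1:%d" % LISTEN_PORT)

while True:
    conn, addr = server.accept()
    try:
        data = conn.recv(8192)            # browser's request (or auth retry)
        if data:
            print(data.splitlines()[0])   # request line; shows any retry
        conn.sendall(RESPONSE.encode("latin-1"))
    finally:
        conn.close()

Point the browser's HTTP proxy setting at 127.0.0.1:3128 and enter junk credentials. If it still dies, the retry request logged above might show whether the credentials are being mangled before they ever reach Squid.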

Anonymous said...

Asking Jeremiah a windows browser question is a bit like asking a fish about air pollution.

Jeremiah Grossman said...

HAHAHA, now THAT's funny! :)

Anonymous said...

hahaha

Anonymous said...

Sounds very good, Jeremiah. Webappsec disclosure is still a renegade business right now. It would be great to see some changes; I'm glad you're one of those who would like to change it and are proactively trying to accomplish that. Good work.

Ronald van den Heetkamp
0x000000.com

Jeremiah Grossman said...

Thanks Ronald, though I don't think it's proactive really, not much in the infosec industry is. We're also reacting to something, it seems. For my part, at least both sides of the conversation are now being heard by those who choose to listen. We're going to have to figure out a way to balance the legalities and ethics of pen-testing web-based software.

Silentz said...

Excellent piece Jeremiah, just a couple of points I wish to raise.

Quote:

"Conversely, the Web researcher, in most cases, must perform his activity on a server owned by someone else."

I know you (or whoever) said "in most cases," but I feel this is not true, at least not in my case or in the case of a few of my associates: I always download the app I am auditing to my local system and run my probes from there, and I have never once had to run them on another site.

Another thing: I saw a lot of legal material in there, including some about the laws here in the UK, but one law that would've been good for you guys to include is the petty law they introduced last year:

http://www.publications.parliament.uk/pa/cm200506/cmbills/119/06119.27-33.html#j383A

This basically says something along the lines of "If you write a whitepaper informing the general community about the types of attacks an attacker could use against you and your system... you could face 12 months in prison."

Pathetic really...

sperlis said...

Hello Jeremiah,
I'm very new to the Web-App Sec business, and as an outsider, the first thing that struck me is the need for legislation.
I'd like to offer a couple more points. The law needs to be comprehensive. Right now, it's the Researchers vs. The Corporations - "Clash of the Titans"-mode. There are a couple of other things to bring under the umbrella of the law, perhaps borrowing from other industries (I come from telecom, so I'll lean on them):
1) Do Not Call - telemarketers keep a Do Not Call list where you can sign up to not get disturbed. A central registrar could be made where each company's WebApp policy should be clearly stated.
2) "How is my Driving" - in Israel a company with a fleet of cars must keep a complaint hot-line, and each car must display the number. Force WebApp providers to do the same. Security warnings by researchers may be reported to a third party that could check the validity and report back (new industry in the making!).
3) Tapping standards - in telecom, standards for connecting tapping equipment exist (CALEA, for example): where and how a call may be recorded, and how to ensure the validity of the recorded data (anti-tamper measures). In WebApps, such standards may define which pen-tests may be safely done and which kind of "fingerprints" a researcher must leave.
4) Enabler Liability/Protection - access service providers such as coffee houses and airports might also face liability in the future for providing an untraceable, unidentifiable haven for malicious attackers. These attackers are the real reason why legitimate research sounds like a contradiction in terms to lawmakers and enforcers. A law enabling research must deal with these issues as well.

That's all I can think of right now, but I believe that a law must encompass a wider range of elements in order to obtain legitimacy and, most importantly, interest.
Aside from that - keep up the good blog, I'm learning a lot!

Benjamin Wright said...

Jeremiah: For responsible security professionals, a general way to avoid the appearance that their assertive actions are criminal is to notify people BEFORE they act. Before they act, professionals can explain the justification for their actions. Cuthbert and McCarty (the Report's two sad examples of convictions) acted surreptitiously. Their cases could well have turned out differently if, in advance, they had told the website owners (or hosters) what they intended to do and why.

-Ben Wright, Hack-igations.com

Jeremiah Grossman said...

Hi Ben, thank you for the comment, and I agree with you in practice. In today’s environment it’s best not to test any system you don’t own without consent. However, the key point of the CSI discussion was one step beyond that. With normal desktop software, researchers don’t need to ask for permission to find vulnerabilities. In many ways consumers are dependent upon that work because it helps ensure vendors are shipping “secure” software. In the case of Web-based software, however, which is where the software industry is heading, we’re losing the visibility researchers provide, for reasons we’re both familiar with. I think it’s unlikely that website owners will grant explicit permission to just anybody to test their security; that’s typically restricted to paid professionals. That also means we won’t see the unrestricted testing by the masses that desktop software gets.

Benjamin Wright said...

Hello, Jeremiah. This is a complex and important subject, and I salute you and the Working Group for taking it on. When I say a web researcher can change the complexion of his case by notifying people before he acts, I'm not necessarily saying he gets consent. Giving someone notice of what one plans to do is not the same as getting their consent.

One example: Suppose a respected group of researchers, acting with support of a responsible group like EFF, is concerned about a vulnerability common to major web sites. Imagine the group sends a message to the sites in question (and they publicize the message), saying a. we intend to inspect your site under these controlled parameters; b. we will publish our results; c. the reason we are doing this is to promote the public interest, consistent with the long-standing tradition of respected, independent security experts testing software applications; d. the identity and contact information for each one of us is XYZ.

If the group sends this notice and then acts, it does not have explicit consent from the web site owners.

However, advance notice like this makes the situation for these white hat researchers very different from the situation for Cuthbert and McCarty (the two examples of convictions in this area).

--Ben Wright, hack-igations.com

Jeremiah Grossman said...

Hi Ben,

I really liked the way you drafted this up. It makes some good points and presents the scenario in a way I hadn't seen before. The others in the CSI working group might benefit from this PoV. You mind if I post it to the group?

Benjamin Wright said...

Yes, Jeremiah, you are completely free to repost my words. The Report grabbed my attention when it came out in June. I am very interested in learning more about this field of law. Does the Working Group have a public blog?

--Ben Wright, hack-igations.com

Jeremiah Grossman said...

I posted your words to the group, full credit cited of course. At the moment there is no publicly available text from the group; I double-checked. We use grouphub to communicate, though I can see if we can open this up a bit.

Benjamin Wright said...

Jeremiah: I wish to offer another comment about the Report. I do this not to throw stones at the Working Group, for which I have much respect. Instead, I seek to foster public dialog about the important topic the Report addresses.

The Report says, "It is true that software security researchers can get tangled in legal snares if their research methods brazenly defy copyright law or the software vendor’s end-user licensing agreement. Yet those are not criminal offenses." Although I understand the educated spirit behind that statement, I'm uncomfortable with it. A EULA for a desktop application in fact might support a criminal prosecution. For example, the EULA for Adobe Reader says, "The structure, organization and code of the Software are the valuable trade secrets and confidential information of Adobe Systems Incorporated and its suppliers." Further, the Adobe EULA authorizes only limited uses of the software, and security testing is not one of them. California Penal Code Section 499c(b)(1) essentially says it is a crime to use a trade secret without authorization. I suspect a prosecutor could build a case that a tester who (with evil intent) probes deep into the Adobe code is using a trade secret without authority and therefore committing a crime under 499c.

Now, I don’t think California prosecutors will commonly go after responsible security researchers investigating Adobe Reader installed on their own machines. However, I argue that the primary things that protect responsible security researchers (whether they are inspecting applications installed on their own machines or Web 2.0 applications installed on other peoples' servers) are their intent, motives and methods and their carefulness to avoid hurting other people. I argue the protection for respected security researchers does not come so much from a legal distinction based on who owns the hardware on which the application being tested is installed.

I am interested to hear what you and others think ('cause I don't know everything!).

--Ben Wright, hack-igations.com