Comments on Jeremiah Grossman: Duplicates, Duplicates, and Duplicate Rates

Jeremiah Grossman (2007-11-20 15:08):

> I think there are some implied/misleading connotations when you say only 12%/9% of the results are "meaningful." All the results are "meaningful"...they are all accurate results and can be thought of as valid attack vectors.

Valid, sure; helpful to resolving the problem, I'd say no. And if it's not helpful, I have a hard time believing that it's meaningful.

> Since one vulnerability instance can have multiple vectors to access the vulnerability, there doesn't need to be a 1-to-1 correlation between vulnerabilities and attack vectors.

Agree, but if the customer intends to fix the code, someone is going to have to go through the results eventually and validate everything to isolate the unique vulnerability instances. And from that standpoint, duplicates waste just as much time as false positives do.

> Superfluous vectors are not meaningless...especially if you are protecting your webapp via a vector-centric mitigation approach (i.e. WAF).

Fair enough, but does that mean you are recommending customers pump 1,000 – 3,000 rules into their WAF of choice? If so, that still means they're going to have to go through all the results and separate out the falses anyway.

> I guess this questions what exactly webapp vulnerability scanners are measuring: the number of exploit vectors (i.e. unique public entry points that lead to a vulnerability), or the number of unique underlying vulnerabilities (taking into account the potential for multiple access vectors).
I've always been of the belief that scanners/VA are supposed to be measuring the security of a website, but that's just me. And if it's not secure, provide guidance as to what types of solutions might best improve the security posture.

> My opinion is that without access to source code and/or runtime instrumentation, it is extremely difficult to do the latter with a blackbox scanner approach. That's not to say blackbox scanners fail in that regard--it ultimately depends on the user's individual goals. Sure, those looking to fix code will likely want to know just the unique vulnerability instances;

Again, in that case a high duplicate rate gets in the way.

> but those looking to determine vulnerability exposure and/or don't have the capability to fix the code (particularly if it's a third-party supplied webapp) are more likely to want to know how many attack vectors there are into the webapp.

In this case, though, they have no choice. Perhaps this is where I'm confused. Is this a design choice of your product? Massive amounts of data that have to be whittled down, whether to fix the code or to place rules into a WAF?

> After all, unique vulnerability instances is a code-centric measurement by definition. Knowing a third-party supplied webapp only has one unique vulnerability instance doesn't make any one of the multiple associated exploit vectors any less potent or "meaningless."

But it certainly doesn't make it meaningful either.

Anonymous (2007-11-20 14:15):

I think there are some implied/misleading connotations when you say only 12%/9% of the results are "meaningful." All the results are "meaningful"...they are all accurate results and can be thought of as valid attack vectors.
Since one vulnerability instance can have multiple vectors to access the vulnerability, there doesn't need to be a 1-to-1 correlation between vulnerabilities and attack vectors. Superfluous vectors are not meaningless...especially if you are protecting your webapp via a vector-centric mitigation approach (i.e. WAF).

I guess this questions what exactly webapp vulnerability scanners are measuring: the number of exploit vectors (i.e. unique public entry points that lead to a vulnerability), or the number of unique underlying vulnerabilities (taking into account the potential for multiple access vectors). My opinion is that without access to source code and/or runtime instrumentation, it is extremely difficult to do the latter with a blackbox scanner approach. That's not to say blackbox scanners fail in that regard--it ultimately depends on the user's individual goals. Sure, those looking to fix code will likely want to know just the unique vulnerability instances; but those looking to determine vulnerability exposure and/or don't have the capability to fix the code (particularly if it's a third-party supplied webapp) are more likely to want to know how many attack vectors there are into the webapp. After all, unique vulnerability instances is a code-centric measurement by definition. Knowing a third-party supplied webapp only has one unique vulnerability instance doesn't make any one of the multiple associated exploit vectors any less potent or "meaningless."

Jeremiah Grossman (2007-11-19 17:21):

I have your IP address mapped through Google Analytics, which is wired to send me an alert any time you visit.
:)

Anonymous (2007-11-19 16:06):

Ha! I knew I should've disabled my Referer header ;-)

Actually, we have been working on reproducing Larry's work in the lab (with some cooperation with Jeff@SPI), so when results come in, I'll post them in my blog.

Jordan (2007-11-19 12:52):

I don't have the hard data handy from my review (magazine reviews are more fire-and-forget -- have to move on to the next, totally different topic before the previous one is even in print), but my recollection is that this is in line with what the majority of scanning products produced.

For some combinations of scanners/applications it was even worse. I specifically mentioned in the Cenzic review that I disliked their HARM score because it was too heavily weighted toward apps where the same vulnerability showed up on multiple pages. One app had a vulnerability that resulted in an astronomical HARM score because it appeared on every page and was counted each time, despite being one vulnerability in a commonly used function.

While it turned out there was a way to cap the impact of a single vuln toward the HARM metric, I think the point that the duplicates were in and of themselves an issue was something I overlooked.
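The two measurements debated throughout this thread, counting raw attack vectors versus collapsing them into unique vulnerability instances, and Jordan's point about capping a single vuln's contribution to a score, can be sketched as follows. This is a minimal hypothetical model: the findings tuples, the per-vector weight, and the cap are all invented for illustration, and it is not Cenzic's actual HARM formula.

```python
from collections import defaultdict

# Each finding is one attack vector reported by a scanner:
# (vulnerability class, page where it was found, underlying code location).
findings = [
    ("xss",  "/search", "render_header"),
    ("xss",  "/about",  "render_header"),  # same flaw, different page
    ("xss",  "/news",   "render_header"),  # same flaw, different page
    ("sqli", "/login",  "auth_query"),
]

def unique_instances(findings):
    """Group attack vectors by the underlying flaw (class + code location)."""
    groups = defaultdict(list)
    for vuln_class, page, location in findings:
        groups[(vuln_class, location)].append(page)
    return groups

def risk_score(findings, per_vector=10, per_vuln_cap=20):
    """Sum per-vector scores, but cap each unique flaw's contribution so
    one flaw repeated on every page cannot dominate the total."""
    total = 0
    for pages in unique_instances(findings).values():
        total += min(len(pages) * per_vector, per_vuln_cap)
    return total

groups = unique_instances(findings)
print(len(findings))         # 4 attack vectors
print(len(groups))           # 2 unique vulnerability instances
print(risk_score(findings))  # 30, not 40: the repeated XSS flaw is capped
```

The grouping key is the crux: a blackbox scanner only sees pages and parameters, so mapping vectors back to a shared code location (here given explicitly) is exactly what is hard without source access, which is the point made above.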