Asset Details
Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities
Journal Article
by Meneely, A., Osborne, J. A., Williams, L., Shin, Y.
in Analysis / Browsers (computer) / Case studies / Charge coupled devices / Complexity / Complexity theory / Computer programs / Developers / Digital Object Identifier / Fault diagnosis / Fault prediction / Inspection / Linux / Predictive models / Security / Software / Software engineering / Software metrics / Software quality / Software security / Source code / Studies / Vulnerability prediction / Web browsers
2011
Overview
Security inspection and testing require experts in security who think like an attacker. Security experts need to know code locations on which to focus their testing and inspection efforts. Since vulnerabilities are rare occurrences, locating vulnerable code locations can be a challenging task. We investigated whether software metrics obtained from source code and development history are discriminative and predictive of vulnerable code locations. If so, security experts can use this prediction to prioritize security inspection and testing efforts. The metrics we investigated fall into three categories: complexity, code churn, and developer activity metrics. We performed two empirical case studies on large, widely used open-source projects: the Mozilla Firefox web browser and the Red Hat Enterprise Linux kernel. The results indicate that 24 of the 28 metrics collected are discriminative of vulnerabilities for both projects. The models using all three types of metrics together predicted over 80 percent of the known vulnerable files with less than 25 percent false positives for both projects. Compared to a random selection of files for inspection and testing, these models would have reduced the number of files and the number of lines of code to inspect or test by over 71 and 28 percent, respectively, for both projects.
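The study builds its prediction models from per-file metrics mined from source code and version-control history. As a rough illustration only, and not the authors' implementation, the sketch below shows one way code churn could be collected from a git repository and combined with other per-file metrics in a simple classifier; the repository path, date cutoff, and the choice of scikit-learn logistic regression are assumptions made for this example.

    import subprocess
    from collections import defaultdict

    from sklearn.linear_model import LogisticRegression  # assumed classifier choice

    def churn_per_file(repo_path, since="2010-01-01"):
        """Sum lines added plus deleted per file using `git log --numstat`."""
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--numstat",
             "--pretty=format:", f"--since={since}"],
            capture_output=True, text=True, check=True,
        ).stdout
        churn = defaultdict(int)
        for line in log.splitlines():
            parts = line.split("\t")
            # numstat lines look like "<added>\t<deleted>\t<path>"; binary files use "-"
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                added, deleted, path = int(parts[0]), int(parts[1]), parts[2]
                churn[path] += added + deleted
        return churn

    def fit_vulnerability_model(metric_rows, labels):
        """Fit a simple classifier on per-file metric vectors
        (e.g. churn, complexity, developer count) against known-vulnerable labels."""
        model = LogisticRegression(max_iter=1000)
        model.fit(metric_rows, labels)
        return model

In practice, the per-file metric vectors would be joined with labels derived from known vulnerability fixes before fitting and evaluating such a model, which is the kind of prioritization the abstract describes.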