Catalogue Search | MBRL
Explore the vast range of titles available.
115,413 result(s) for "Software metrics"
Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes
by Quattlebaum, S., Olague, H.M., Etzkorn, L.H.
in Case studies, Computer industry, Computer programs
2007
Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.
Journal Article
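Two of the CK metrics named in the abstract above, Depth of Inheritance Tree (DIT) and Number of Children (NOC), can be sketched in a few lines. The toy class hierarchy below is illustrative and is not drawn from the paper's Rhino study:

```python
# Minimal sketch: computing the CK metrics DIT and NOC for Python classes
# via introspection. Class names here are made up for illustration.

class Node: pass
class Expression(Node): pass
class Literal(Expression): pass
class Identifier(Expression): pass

def dit(cls):
    """Depth of Inheritance Tree: longest inheritance path from cls to the root."""
    return max((dit(b) for b in cls.__bases__), default=-1) + 1

def noc(cls):
    """Number of Children: count of immediate subclasses."""
    return len(cls.__subclasses__())

print(dit(Literal))     # 3: Literal -> Expression -> Node -> object
print(noc(Expression))  # 2: Literal and Identifier
```

In the fault-proneness studies these per-class values become predictor variables in a statistical model; a production metrics tool would of course parse Java source rather than use Python introspection.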
SQMetrics: An Educational Software Quality Assessment Tool for Java
by Rigou, Maria, Tambouris, Efthimios, Margounakis, Dimitrios
in Automation, Empowerment, ISO standards
2023
Over the years, various software quality measurement models have been proposed and used in academia and the software industry to assess the quality of produced code and to obtain guidelines for its improvement. In this article, we describe the design and functionality of SQMetrics, a tool for calculating object-oriented quality metrics for projects written in Java. SQMetrics provides the convenience of measuring small code, mainly covering academic or research needs. In this context, the application can be used by students of software engineering courses to make measurements and comparisons in their projects and gradually increase their quality by improving the calculated metrics. Teachers, on the other hand, can use SQMetrics to evaluate students’ Java projects and grade them in proportion to their quality. The contribution of the proposed tool is three-fold, as it has been: (a) tested for its completeness and functionality by comparing it with widely known similar tools, (b) evaluated for its usability and value as a learning aid by students, and (c) statistically tested for its value as a teachers’ aid assisting in the evaluation of student projects. Our findings verify SQMetrics’ effectiveness in helping software engineering students learn critical concepts and improve the quality of their code, as well as in helping teachers assess the quality of students’ Java projects and make more informed grading decisions.
Journal Article
Reliability and validity in comparative studies of software prediction models
2005
Empirical studies on software prediction models do not converge with respect to the question "which prediction model is best?" The reason for this lack of convergence is poorly understood. In this simulation study, we have examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross validation. Typically, these empirical studies compare a machine learning model with a regression model. In our study, we use simulation and compare a machine learning and a regression model. The results suggest that it is the research procedure itself that is unreliable. This lack of reliability may strongly contribute to the lack of convergence. Our findings thus cast some doubt on the conclusions of any study of competing software prediction models that used this research procedure as a basis of model comparison. Thus, we need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.
Journal Article
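The research procedure criticized above can be sketched end to end: draw a sample, cross-validate two competing prediction models, and declare a "winner" per sample. All details below (sample size, noise level, the two models, mean absolute error as the accuracy indicator) are illustrative assumptions, not the paper's setup:

```python
# Sketch: compare two effort-prediction models (a linear fit vs. a
# median-based model) with leave-one-out cross validation on simulated
# samples, tallying which model "wins" on each sample.
import random

def simulate_sample(n, seed):
    rnd = random.Random(seed)
    data = []
    for _ in range(n):
        size = rnd.uniform(1, 100)              # "project size"
        effort = 3 * size + rnd.gauss(0, 100)   # noisy linear relationship
        data.append((size, effort))
    return data

def linear_fit(train):
    # least-squares slope through the origin
    num = sum(x * y for x, y in train)
    den = sum(x * x for x, _ in train)
    return lambda x, b=num / den: b * x

def median_model(train):
    ys = sorted(y for _, y in train)
    return lambda x, m=ys[len(ys) // 2]: m

def loo_mae(data, build):
    # leave-one-out cross validation with mean absolute error
    errs = []
    for i, (x, y) in enumerate(data):
        model = build(data[:i] + data[i + 1:])
        errs.append(abs(model(x) - y))
    return sum(errs) / len(errs)

wins = {"linear": 0, "median": 0}
for seed in range(20):
    sample = simulate_sample(12, seed)
    a, b = loo_mae(sample, linear_fit), loo_mae(sample, median_model)
    wins["linear" if a < b else "median"] += 1
print(wins)
```

Because each sample yields its own winner, the tally depends on which samples happen to be drawn, which is the paper's point about single-sample comparisons being unreliable.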
Metrics for Measuring the Quality of Modularization of Large-Scale Object-Oriented Software
2008
The metrics formulated to date for characterizing the modularization quality of object-oriented software have considered module and class to be synonymous concepts. But a typical class in object-oriented programming exists at too low a level of granularity in large object-oriented software consisting of millions of lines of code. A typical module (sometimes referred to as a superpackage) in a large object-oriented software system will consist of a large number of classes. Even when the access discipline encoded in each class makes for "clean" class-level partitioning of the code, the inter-module dependencies created by associational relationships, inheritance, and method invocations may still make it difficult to maintain and extend the software. The goal of this paper is to provide a set of metrics that characterize large object-oriented software systems with regard to such dependencies. Our metrics characterize the quality of modularization with respect to the APIs of the modules, on the one hand, and, on the other, with respect to such object-oriented inter-module dependencies as those caused by inheritance, associational relationships, state access violations, fragile base-class design, etc. Using a two-pronged approach, we validate the metrics by applying them to popular open-source software systems.
Journal Article
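One way to make the API-oriented idea above concrete is a metric for how many of a module's incoming calls go through its declared API classes rather than reaching into its internals. The module layout and the metric's exact form below are illustrative assumptions, not the paper's definitions:

```python
# Sketch: an "API purity" metric for modules -- the fraction of incoming
# cross-module calls that target a module's API classes instead of its
# internal classes. Module and class names are fabricated for illustration.

modules = {
    "parser":  {"api": {"Parser"},      "internal": {"Lexer", "Token"}},
    "runtime": {"api": {"Interpreter"}, "internal": {"Frame"}},
}
# (caller_module, callee_class) pairs observed in the code base
calls = [
    ("runtime", "Parser"),   # goes through parser's API: good
    ("runtime", "Lexer"),    # reaches into parser internals: bad
    ("parser",  "Interpreter"),
]

def api_purity(module):
    info = modules[module]
    incoming = [c for m, c in calls
                if m != module and c in info["api"] | info["internal"]]
    if not incoming:
        return 1.0  # no external callers: trivially clean
    return sum(c in info["api"] for c in incoming) / len(incoming)

print(api_purity("parser"))   # 0.5: one of two incoming calls uses the API
print(api_purity("runtime"))  # 1.0
```

A real implementation would extract the call pairs from bytecode or a static-analysis tool, but the metric itself reduces to this kind of ratio over a dependency graph.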
Functional Software Size Measurement Methodology with Effort Estimation and Performance Indication
2017
Presents a new, effective methodology in software size measurement.

Software size measurement is an extremely important and highly specialized aspect of the software life cycle. It is used for determining the effort and cost estimations for project planning purposes of a software project's execution, and/or for other costing, charging, and productivity analysis purposes. Many software projects exceed their allocated budget limits because the methodologies currently available lack accuracy.

The new software size measurement methodology presented in this book offers a complete procedure that overcomes the deficiencies of the current methodologies, allowing businesses to estimate the size and required effort correctly for all their software projects developed in high-level languages. The Functional Software Size Measurement Methodology with Effort Estimation and Performance Indication (FSSM) allows projects to be completed within the defined budget limits by obtaining accurate estimations. The methodology provides comprehensive and precise measurements of the complete software, whereby factual software size determination, development effort estimation, and performance indications are obtained. The approach is elaborate, effective, and accurate for software size measurement and development effort estimation, avoiding inaccurate project planning of software projects.

Key features:
- Pinpoints one of the major, originating root causes of erroneous planning by disclosing hidden errors made in software size measurement, and consequently in effort estimates and project planning
- All the major relevant and important aspects of software size measurement are taken into consideration and clearly presented to the reader

Functional Software Size Measurement Methodology with Effort Estimation and Performance Indication is a vital reference for software professionals and Master's-level students in software engineering.
State of the Art Software Development in the Automotive Industry and Analysis upon Applicability of Software Fault Prediction
2017
In recent years the amount of software within automobiles has grown to as much as 100 million LOC in modern premium vehicles. Virtually all innovations in automotive engineering in the last decade include software components. In parallel with this growth, testing becomes more vital. Automotive software development follows restrictive guidelines in terms of coding standards, language limitations, and processes. Testing is traditionally a core part of automotive development, but the rising number of features increases the time and money required to perform all tests, and repeating them multiple times due to programming errors might jeopardise a car's introduction to the market. Software fault prediction (SFP) is a new approach that forecasts bugs at commit time, guiding test engineers in defining testing hotspots. This work reports on the first successful application, using model-driven, code-generated automotive software as a case study, with a prediction success rate of up to 97% for bug- or fault-free commits. A compiled and published dataset is presented, along with an analysis of the software metrics used. Performance data achieved using different machine learning algorithms is given. An in-depth analysis of factors preventing CPFP is conducted. Further usage and practical application areas conclude the work.
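Commit-time fault prediction of the kind described above boils down to representing each commit by a few metrics and training a classifier on historical commits labelled buggy or clean. The features, toy data, and tiny logistic-regression model below are illustrative assumptions; the paper's own pipeline and its 97% figure are not reproduced here:

```python
# Sketch: a minimal commit-level fault predictor. Each commit is a tuple
# (lines_changed, files_touched, prior_churn); label 1 means it was buggy.
import math

history = [
    ((400, 9, 5), 1), ((350, 7, 4), 1), ((500, 12, 6), 1),
    ((10, 1, 0), 0),  ((25, 2, 1), 0),  ((40, 3, 0), 0),
]

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(data, lr=0.01, epochs=2000):
    # plain stochastic gradient descent for logistic regression
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            x = [v / 100 for v in x]            # crude feature scaling
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

w, b = train(history)

def predict(commit):
    x = [v / 100 for v in commit]
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5
```

A test engineer would rank incoming commits by the predicted probability and concentrate testing effort on the riskiest ones; the real study used richer metrics and stronger learners.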
Exclusive use and evaluation of inheritance metrics viability in software fault prediction—an experimental study
by Aziz, Syed Rashid, Nadeem, Aamer, Khan, Tamim Ahmed
in Algorithms and Analysis of Algorithms, Artificial Intelligence, Datasets
2021
Software Fault Prediction (SFP) assists in the identification of faulty classes, and software metrics provide a mechanism for this purpose. Among others, metrics addressing inheritance in Object-Oriented (OO) software are important, as these measure the depth, hierarchy, width, and overriding complexity of the software. In this paper, we evaluated the exclusive use and viability of inheritance metrics in SFP through experiments. We surveyed inheritance metrics whose data sets are publicly available and collected about 40 data sets containing inheritance metrics. We cleaned and filtered them, capturing nine inheritance metrics. After preprocessing, we divided the selected data sets into all possible combinations of inheritance metrics and merged similar metrics. We then formed 67 data sets containing only inheritance metrics with nominal binary class labels. We performed model building and validation using a Support Vector Machine (SVM). Results for Cross-Entropy, Accuracy, F-Measure, and AUC support the viability of inheritance metrics in software fault prediction. Furthermore, the ic, noc, and dit metrics help reduce the error entropy rate relative to the rest of the 67 feature sets.
Journal Article
An evaluation of the MOOD set of object-oriented software metrics
by Nithi, R.V., Harrison, R., Counsell, S.J.
in Application software, Applied sciences, Computer science; control theory; systems
1998
This paper describes the results of an investigation into a set of metrics for object-oriented design, called the MOOD metrics. The merits of each of the six MOOD metrics are discussed from a measurement theory viewpoint, taking into account the recognized object-oriented features which they were intended to measure: encapsulation, inheritance, coupling, and polymorphism. Empirical data, collected from three different application domains, is then analyzed using the MOOD metrics to support this theoretical validation. Results show that (with appropriate changes to remove existing problematic discontinuities) the metrics could be used to provide an overall assessment of a software system, which may be helpful to managers of software development projects. However, further empirical studies are needed before these results can be generalized.
Journal Article
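Two of the six MOOD metrics discussed above are the Method Hiding Factor (MHF) and Attribute Hiding Factor (AHF): the fraction of methods (attributes) hidden from other classes. In the sketch below, Python's leading-underscore convention stands in for real visibility modifiers, which is an assumption, since MOOD was defined for languages with enforced access control:

```python
# Sketch: computing MHF and AHF over a set of classes, treating a leading
# underscore as "hidden". The Account class is fabricated for illustration.

class Account:
    balance_limit = 1000        # visible attribute
    _ledger = []                # hidden attribute
    def deposit(self): pass     # visible method
    def _audit(self): pass      # hidden method
    def _log(self): pass        # hidden method

def mood_factors(classes):
    methods, attrs = [], []
    for cls in classes:
        for name, value in vars(cls).items():
            if name.startswith("__"):           # skip Python's own dunders
                continue
            (methods if callable(value) else attrs).append(name)
    mhf = sum(m.startswith("_") for m in methods) / len(methods)
    ahf = sum(a.startswith("_") for a in attrs) / len(attrs)
    return mhf, ahf

mhf, ahf = mood_factors([Account])
print(mhf)  # 2 of 3 methods hidden -> 0.666...
print(ahf)  # 1 of 2 attributes hidden -> 0.5
```

The paper's "problematic discontinuities" arise at the edges of ratios like these, e.g. when a system has no attributes at all and the denominator vanishes.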
A study on software fault prediction techniques
by Kumar, Sandeep, Rathore, Santosh S
in Classification, Computer aided software engineering, Data quality
2019
Software fault prediction aims to identify fault-prone software modules by using some underlying properties of the software project before the actual testing process begins. It helps in obtaining the desired software quality with optimized cost and effort. This paper first provides an overview of the software fault prediction process, then explores and discusses its different dimensions. The review aims to aid understanding of the various elements associated with the fault prediction process and to explore the issues involved in software fault prediction. We searched various digital libraries and identified all relevant papers published since 1993. The reviewed papers are grouped into three classes: software metrics, fault prediction techniques, and data quality issues. For each class, a taxonomic classification of the different techniques and our observations are presented, along with a summary in tabular form. The paper concludes with statistical analysis, observations, challenges, and future directions of software fault prediction.
Journal Article
Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities
2011
Security inspection and testing require experts in security who think like an attacker. Security experts need to know code locations on which to focus their testing and inspection efforts. Since vulnerabilities are rare occurrences, locating vulnerable code locations can be a challenging task. We investigated whether software metrics obtained from source code and development history are discriminative and predictive of vulnerable code locations. If so, security experts can use this prediction to prioritize security inspection and testing efforts. The metrics we investigated fall into three categories: complexity, code churn, and developer activity metrics. We performed two empirical case studies on large, widely used open-source projects: the Mozilla Firefox web browser and the Red Hat Enterprise Linux kernel. The results indicate that 24 of the 28 metrics collected are discriminative of vulnerabilities for both projects. The models using all three types of metrics together predicted over 80 percent of the known vulnerable files with less than 25 percent false positives for both projects. Compared to a random selection of files for inspection and testing, these models would have reduced the number of files and the number of lines of code to inspect or test by over 71 and 28 percent, respectively, for both projects.
Journal Article
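The "code churn" family of metrics used in the study above reduces to per-file totals of lines added and deleted across the development history. The commit log below is fabricated for illustration; in practice the entries would be parsed from version-control output such as `git log --numstat`:

```python
# Sketch: aggregating code-churn metrics per file from a change log.
from collections import defaultdict

# (file, lines_added, lines_deleted) for each commit touching that file
log = [
    ("netwerk/http.c", 120, 30),
    ("layout/frame.c", 10, 2),
    ("netwerk/http.c", 45, 60),
    ("netwerk/http.c", 5, 5),
]

def churn(entries):
    totals = defaultdict(lambda: {"added": 0, "deleted": 0, "commits": 0})
    for path, added, deleted in entries:
        t = totals[path]
        t["added"] += added
        t["deleted"] += deleted
        t["commits"] += 1
    return totals

t = churn(log)
print(t["netwerk/http.c"])  # {'added': 170, 'deleted': 95, 'commits': 3}
```

Files with high churn and many distinct committers were among the discriminative signals the study combined with complexity metrics to flag likely-vulnerable files.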