Search Results

129 result(s) for "object-oriented metrics"
Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes
Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.
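The modeling step this abstract describes (a binary logistic regression from class metrics to fault-proneness) can be sketched in a few lines of Python. The metric values, labels, and hyperparameters below are invented for illustration and are not taken from the Rhino dataset:

```python
import math

# Toy training data: per-class CK metric values (WMC, CBO) and a fault label.
# These numbers are illustrative, not drawn from the Rhino defect data.
classes = [
    # (WMC, CBO, faulty)
    (5, 2, 0), (7, 3, 0), (6, 1, 0), (8, 4, 0),
    (25, 12, 1), (30, 9, 1), (22, 14, 1), (28, 11, 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Binary logistic regression fitted by plain stochastic gradient descent.
w = [0.0, 0.0]   # one weight per metric
b = 0.0
lr = 0.01
for _ in range(2000):
    for wmc, cbo, y in classes:
        p = sigmoid(w[0] * wmc + w[1] * cbo + b)
        err = p - y                      # gradient of the log-loss wrt the logit
        w[0] -= lr * err * wmc
        w[1] -= lr * err * cbo
        b -= lr * err

def fault_proneness(wmc, cbo):
    """Predicted probability that a class with these metrics is fault-prone."""
    return sigmoid(w[0] * wmc + w[1] * cbo + b)
```

On this toy data a large, highly coupled class comes out with a high predicted fault-proneness and a small, loosely coupled class with a low one.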
Assessing the performance of object‐oriented LiDAR predictors for forest bird habitat suitability modeling
Habitat suitability models (HSMs) are widely used to plan actions for species of conservation interest. Models that will be turned into conservation actions need predictors that are both ecologically pertinent and fit managers' conceptual view of ecosystems. Remote sensing technologies such as light detection and ranging (LiDAR) can describe landscapes at high resolution over large spatial areas and have already given promising results for modeling forest species distributions. The point-cloud (PC) area-based LiDAR variables are often used as environmental variables in HSMs and have more recently been complemented by object-oriented (OO) metrics. However, the efficiency of each type of variable to capture structural information on forest bird habitat has not yet been compared. We tested two hypotheses: (1) the use of OO variables in HSMs will give similar performance as PC area-based models; and (2) OO variables will improve model robustness to LiDAR datasets acquired at different times for the same area. Using the case of a locally endangered forest bird, the capercaillie (Tetrao urogallus), model performance and predictions were compared between the two variable types. Models using OO variables showed slightly lower discriminatory performance than PC area-based models (average ΔAUC = -0.032 and -0.01 for females and males, respectively). OO-based models were as robust (absolute difference in Spearman rank correlation of predictions ≤ 0.21) or more robust than PC area-based models. In sum, LiDAR-derived PC area-based metrics and OO metrics showed similar performance for modeling the distribution of the capercaillie. We encourage the further exploration of OO metrics for creating reliable HSMs, and in particular testing whether they might help improve the scientist-stakeholder interface through better interpretability.
A hierarchical model for object-oriented design quality assessment
The paper describes an improved hierarchical model for the assessment of high-level design quality attributes in object-oriented designs. In this model, structural and behavioral design properties of classes, objects, and their relationships are evaluated using a suite of object-oriented design metrics. This model relates design properties such as encapsulation, modularity, coupling, and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information. The relationships, or links, from design properties to quality attributes are weighted in accordance with their influence and importance. The model is validated by comparing its results against empirical data and expert opinion on several large commercial object-oriented systems. A key attribute of the model is that it can be easily modified to include different relationships and weights, thus providing a practical quality assessment tool adaptable to a variety of demands.
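The weighted-link idea at the heart of this hierarchical model can be sketched minimally: each quality attribute is a weighted sum of design-property scores, and the weights can be re-tuned as the abstract emphasizes. The property scores and weights below are illustrative (the signs just follow the usual intuition that coupling hurts reusability), not the model's published values:

```python
# Design-property scores for one design (normalized, illustrative values).
design_properties = {
    "coupling": 0.6,
    "cohesion": 0.8,
    "messaging": 0.4,
    "design_size": 0.5,
}

# Weighted links from design properties to one quality attribute.
# The weights are illustrative and, per the paper, easily modified.
reusability_weights = {
    "coupling": -0.25,   # coupling hurts reusability
    "cohesion": 0.25,
    "messaging": 0.5,
    "design_size": 0.5,
}

def quality_attribute(properties, weights):
    """Weighted sum linking low-level design properties to a quality attribute."""
    return sum(weights[name] * value for name, value in properties.items())

reusability = quality_attribute(design_properties, reusability_weights)
```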
Optimized design refactoring (ODR): a generic framework for automated search-based refactoring to optimize object-oriented software architectures
Software design optimization (SDO) demands advanced abstract reasoning to define optimal design components’ structure and interactions. Modeling tools such as UML and MERISE, and to a degree, programming languages, are chiefly developed for lucid human–machine design dialogue. For effective automation of SDO, an abstract layer attuned to the machine’s computational prowess is crucial, allowing it to harness its swift calculation and inference in determining the best design. This paper contributes an innovative and universal framework for search-based software design refactoring with an emphasis on optimization. The framework accommodates 44% of Fowler’s cataloged refactorings. Owing to its adaptable and succinct structure, it integrates effortlessly with diverse optimization heuristics, eliminating the requirement for further adaptation. Distinctively, our framework offers an artifact representation that obviates the necessity for a separate solution representation; this unified dual-purpose representation not only streamlines the optimization process but also facilitates the computation of essential object-oriented metrics. This ensures a robust assessment of the optimized model through the construction of pertinent fitness functions. Moreover, the artifact representation supports parallel optimization processes and demonstrates commendable scalability with design expansion.
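Search-based refactoring of the kind this framework automates can be illustrated with a toy hill climb: candidate "move method" refactorings are generated at random and kept whenever an OO-metric fitness function (here, cross-class coupling) does not get worse. The design, call graph, and fitness function below are invented and far simpler than the paper's framework:

```python
import random

# Toy design: which class each method lives in, plus a call graph.
# Method and class names are made up for illustration.
methods = ["parse", "lex", "emit", "optimize"]
calls = [("parse", "lex"), ("parse", "emit"), ("emit", "optimize")]
assignment = {"parse": "A", "lex": "B", "emit": "B", "optimize": "A"}

def coupling(assign):
    """Fitness: number of calls crossing class boundaries (lower is better)."""
    return sum(1 for a, b in calls if assign[a] != assign[b])

# Plain hill climbing over 'move method' refactorings.
random.seed(0)
best = dict(assignment)
for _ in range(200):
    candidate = dict(best)
    m = random.choice(methods)
    candidate[m] = "A" if candidate[m] == "B" else "B"  # move to the other class
    if coupling(candidate) <= coupling(best):
        best = candidate
```

Any real SBSE setup would use a richer fitness function built from several OO metrics and a heuristic such as a genetic algorithm, but the accept-if-not-worse loop is the core of the search.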
The confounding effect of class size on the validity of object-oriented metrics
Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. We first investigated whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe that such an effect exists. We then describe a detailed empirical methodology for identifying those effects. Finally, we perform a study on a large C++ telecommunications framework to examine if size is really a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics. The dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies: the metrics that are expected to be validated are indeed associated with fault-proneness. After controlling for size, however, these associations are markedly weakened, which supports the conclusion that class size confounds the validity of object-oriented metrics.
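The confounding check this abstract describes can be sketched with a partial correlation: measure the metric-fault association, then recompute it while controlling for class size. The toy data below is deliberately constructed so that both the metric and the fault count track size (LOC); all names and values are illustrative:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def partial_corr(xs, ys, zs):
    """Correlation of x and y after controlling for the confounder z."""
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy data: a metric and fault counts that both grow with class size (LOC).
loc    = [100, 200, 300, 400, 500, 600]
metric = [3, 2, 7, 9, 8, 13]
faults = [3, 4, 5, 7, 10, 13]

raw = pearson(metric, faults)                 # looks like a strong association
controlled = partial_corr(metric, faults, loc)  # vanishes once size is held fixed
```

Here the raw metric-fault correlation is close to 0.9, yet the size-controlled partial correlation is essentially zero: exactly the confounding pattern the paper warns about.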
Techniques for Calculating Software Product Metrics Threshold Values: A Systematic Mapping Study
Several aspects of software product quality can be assessed and measured using product metrics. Without software metric threshold values, it is difficult to evaluate different aspects of quality. To this end, the interest in research studies that focus on identifying and deriving threshold values is growing, given the advantage of applying software metric threshold values to evaluate various software projects during their software development life cycle phases. The aim of this paper is to systematically investigate research on software metric threshold calculation techniques. In this study, electronic databases were systematically searched for relevant papers; 45 publications were selected based on inclusion/exclusion criteria, and research questions were answered. The results demonstrate the following important characteristics of studies: (a) both empirical and theoretical studies were conducted, a majority of which depend on empirical analysis; (b) the majority of papers apply statistical techniques to derive object-oriented metrics threshold values; (c) Chidamber and Kemerer (CK) metrics were studied in most of the papers, and are widely used to assess the quality of software systems; and (d) a considerable number of studies have not validated metric threshold values in terms of quality attributes. From both the academic and practitioner points of view, the results of this review present a catalog and body of knowledge on metric threshold calculation techniques. The results suggest new research directions, such as conducting mixed statistical and quality-focused studies, studying an extensive number of metrics and the interactions among them, studying more quality attributes, and considering multivariate threshold derivation.
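Two of the statistical threshold-derivation techniques such studies commonly catalog can be sketched directly with the standard library; the benchmark WMC values below are invented:

```python
import statistics

# Benchmark WMC values collected across many classes (illustrative numbers).
wmc_values = [3, 4, 4, 5, 5, 6, 6, 7, 8, 9,
              10, 11, 12, 14, 18, 25, 31, 40, 55, 70]

# Technique 1: mean plus one standard deviation, a common statistical rule.
mean_std_threshold = statistics.mean(wmc_values) + statistics.stdev(wmc_values)

# Technique 2: percentile-based, e.g. flag the top 10% of classes as "very high".
deciles = statistics.quantiles(wmc_values, n=10)  # nine decile cut points
percentile_90_threshold = deciles[-1]             # the 90th percentile
```

Classes whose WMC exceeds either threshold would be flagged for inspection; which derivation technique is appropriate, and whether the resulting threshold predicts a quality attribute, is precisely what the surveyed studies disagree on.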
An exploratory study for software change prediction in object-oriented systems using hybridized techniques
Variation in software requirements, technological upgrades and the occurrence of defects necessitate change in software for its effective use. Early detection of those classes of a software system which are prone to change is critical for software developers and project managers as it can aid in the efficient allocation of limited resources. Moreover, change prone classes should be efficiently restructured and designed to prevent the introduction of defects. Recently, the use of search-based techniques and their hybridized counterparts has been advocated in the field of software engineering predictive modeling, as these techniques help in the identification of optimal solutions for a specific problem by testing the goodness of a number of possible solutions. In this paper, we propose a novel approach for change prediction using search-based techniques and hybridized techniques. Further, we address the following issues: (i) low repeatability of empirical studies, (ii) limited use of statistical tests for comparing the effectiveness of models, and (iii) non-assessment of the trade-off between runtime and predictive performance of various techniques. This paper presents an empirical validation of search-based techniques and their hybridized versions, which yields unbiased, accurate and repeatable results. The study analyzes and compares the predictive performance of five search-based techniques, five hybridized techniques, four widely used machine learning techniques, and a statistical technique for predicting change prone classes in six application packages of a popular mobile operating system, Android. The results of the study advocate the use of hybridized techniques for developing models to identify change prone classes.
Finding Bad Code Smells with Neural Network Models
Code smell refers to any symptom introduced in the design or implementation phases in the source code of a program. Such a code smell can potentially cause deeper and more serious problems during software maintenance. The existing approaches to detect bad smells use detection rules or standards based on a combination of different object-oriented metrics. Although a variety of smell detection tools have been developed, they still have limitations and constraints in their capabilities. In this paper, a code smell detection system is presented with a neural network model that captures the relationship between bad smells and object-oriented metrics, taking a corpus of Java projects as the experimental dataset. The most well-known object-oriented metrics are considered to identify the presence of bad smells. The code smell detection system uses twenty Java projects shared by many users in GitHub repositories. The dataset of these Java projects is partitioned into mutually exclusive training and test sets. The training dataset is used to learn the network model that predicts smelly classes in this study. The optimized network model is then evaluated on the test dataset. The experimental results show that when the model is trained on a larger dataset, the prediction outcomes improve. In addition, the accuracy of the model increases when it is trained for more epochs and with more hidden layers.
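A minimal version of this approach, a small feed-forward network trained on OO metrics to flag smelly classes, can be sketched in pure Python. The architecture, metric values, and labels below are illustrative and far smaller than the paper's GitHub-based dataset:

```python
import math
import random

# Toy dataset: (normalized WMC, normalized CBO) -> smelly? (illustrative values)
data = [
    ((0.1, 0.2), 0.0), ((0.2, 0.1), 0.0), ((0.15, 0.25), 0.0),
    ((0.9, 0.8), 1.0), ((0.8, 0.95), 1.0), ((0.85, 0.7), 1.0),
]

# One hidden layer: 2 metric inputs -> 3 hidden units -> 1 output.
random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return h, sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 0.5
for _ in range(500):
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)      # gradient at the output unit
        for j in range(3):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])  # backprop to hidden unit j
            W2[j] -= lr * d_out * h[j]
            for i in range(2):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

loss_after = mse()
```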
A Comprehensive MCDM-Based Approach for Object-Oriented Metrics Selection Problems
Object-oriented programming (OOP) is prone to defects that negatively impact software quality. Detecting defects early in the development process is crucial for ensuring high-quality software, reducing maintenance costs, and increasing customer satisfaction. Several studies use object-oriented metrics to identify design flaws both at the model level and at the code level. Metrics provide a quantitative measure of code quality by analyzing specific aspects of the software, such as complexity, cohesion, coupling, and inheritance. By examining these metrics, developers can identify potential defects in OOP, such as design defects and code smells. Unfortunately, we cannot assess the quality of an object-oriented program by using a single metric. Identifying design-defect-metric-based rules in an object-oriented program can be challenging due to the number of metrics. In fact, it is difficult to determine which metrics are the most relevant for identifying design defects. Additionally, multiple thresholds for each metric indicate different levels of quality and make it harder to set clear and consistent rules. Hence, the problem of object-oriented metrics selection can be ascribed to a multi-criteria decision-making (MCDM) problem. Based on the experts’ judgement, we can identify the most appropriate metric for the detection of a specific defect. This paper presents our approach to reduce the number of metrics using one of the MCDM methods. Therefore, to identify the most important detection rules, we apply the fuzzy decision-making trial and evaluation laboratory (Fuzzy DEMATEL) method. We also classify the metrics into cause-and-effect groups. The results of our proposed approach, applied on four open-source projects, compared to our previously published results, confirm the efficiency of MCDM, and especially the Fuzzy DEMATEL method, in selecting the best rules to identify design flaws. We increased the defect detection accuracy by selecting rules containing important and interrelated metrics.
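The fuzzy machinery is beyond a short sketch, but crisp DEMATEL, which Fuzzy DEMATEL extends, fits in a few lines: normalize a direct-influence matrix among the metrics, sum its powers to obtain the total-relation matrix, then split metrics into cause and effect groups from row and column sums. The influence scores among the four metrics below are invented, not the paper's expert judgements:

```python
# Crisp DEMATEL sketch over four OO metrics (influence scores 0-4, illustrative).
metrics = ["WMC", "CBO", "LCOM", "DIT"]
direct = [
    [0, 3, 2, 1],
    [2, 0, 3, 1],
    [1, 2, 0, 1],
    [1, 1, 1, 0],
]

n = len(direct)
s = max(sum(row) for row in direct)           # normalization factor
D = [[v / s for v in row] for row in direct]  # normalized direct matrix

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Total-relation matrix T = D + D^2 + D^3 + ...; the geometric series
# converges because the normalized matrix has spectral radius below 1.
T = [[0.0] * n for _ in range(n)]
P = [row[:] for row in D]
for _ in range(100):
    T = [[T[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    P = matmul(P, D)

R = [sum(T[i][j] for j in range(n)) for i in range(n)]  # influence given
C = [sum(T[i][j] for i in range(n)) for j in range(n)]  # influence received

cause_group = [m for m, r, c in zip(metrics, R, C) if r - c > 0]
effect_group = [m for m, r, c in zip(metrics, R, C) if r - c <= 0]
```

Metrics in the cause group drive the others and are natural candidates to keep in detection rules; the fuzzy variant replaces the crisp scores with fuzzy numbers before the same matrix computation.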
Predicting maintenance performance using object-oriented design complexity metrics
The Object-Oriented (OO) paradigm has become increasingly popular in recent years. Researchers agree that, although maintenance may turn out to be easier for OO systems, it is unlikely that the maintenance burden will completely disappear. One approach to controlling software maintenance costs is the utilization of software metrics during the development phase, to help identify potential problem areas. Many new metrics have been proposed for OO systems, but only a few of them have been validated. The purpose of this research is to empirically explore the validation of three existing OO design complexity metrics and, specifically, to assess their ability to predict maintenance time. This research reports the results of validating three metrics, Interaction Level (IL), Interface Size (IS), and Operation Argument Complexity (OAC). A controlled experiment was conducted to investigate the effect of design complexity (as measured by the above metrics) on maintenance time. Each of the three metrics by itself was found to be useful in the experiment in predicting maintenance performance.