30 result(s) for "UML (Computer science) Evaluation."
Quantitative Analysis of Apache Storm Applications: The NewsAsset Case Study
The development of information systems today faces the era of Big Data: large volumes of information need to be processed in real time, for example for Facebook or Twitter analysis. This paper addresses the redesign of NewsAsset, a commercial product that helps journalists by providing services that analyze millions of media items from social networks in real time. Technologies like Apache Storm can help enormously in this context. We quantitatively analyzed the new design of NewsAsset to assess whether the introduction of Apache Storm can meet the demanding performance requirements of this media product. Our assessment approach, guided by the Unified Modeling Language (UML), reuses for performance analysis the software designs already produced during development. In addition, we extended UML into a domain-specific modeling language (DSML) for Apache Storm, thus creating a profile for Storm. We then transformed this DSML into a formalism suited to performance evaluation, namely stochastic Petri nets. The assessment ended with a software design that met the scalability requirements of NewsAsset.
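The final step described above, evaluating a stochastic Petri net, can be illustrated with a minimal sketch. The net below is a hypothetical two-stage pipeline, not the paper's actual NewsAsset model; transitions fire after exponentially distributed delays (race semantics) and throughput is estimated by discrete-event simulation.

```python
import random

# Hypothetical stochastic Petri net: places hold tokens, each enabled
# transition samples an exponential delay, and the fastest one fires.
PLACES = {"queued": 5, "processing": 0}
TRANSITIONS = {
    # name: (input places, output places, firing rate)
    "start":  ({"queued": 1}, {"processing": 1}, 2.0),
    "finish": ({"processing": 1}, {"queued": 1}, 1.0),
}

def enabled(marking, inputs):
    return all(marking[p] >= n for p, n in inputs.items())

def simulate(horizon=10_000.0, seed=42):
    random.seed(seed)
    marking = dict(PLACES)
    t, finishes = 0.0, 0
    while t < horizon:
        # Race semantics: sample a delay for every enabled transition.
        candidates = [
            (random.expovariate(rate), name)
            for name, (ins, outs, rate) in TRANSITIONS.items()
            if enabled(marking, ins)
        ]
        if not candidates:
            break  # dead marking, nothing can fire
        delay, name = min(candidates)
        t += delay
        ins, outs, _ = TRANSITIONS[name]
        for p, n in ins.items():
            marking[p] -= n
        for p, n in outs.items():
            marking[p] += n
        if name == "finish":
            finishes += 1
    return finishes / t  # observed throughput of "finish"

print(simulate())
```

With these rates the simulated throughput settles near 0.98 firings per time unit; a real assessment would derive the net structure and rates from the annotated UML design rather than hard-code them.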
VeriSIM: A model-based learning pedagogy for fostering software design evaluation skills in computer science undergraduates
Evaluating a software design is an important practice of expert software designers: they spend significant time evaluating their solution by developing an integrated mental model of the software design and the requirements. However, sufficient emphasis has not been given to the teaching and learning of evaluation practices in software design courses, and hence graduating students find it difficult to critically analyse an existing design and improve upon it. In this paper, we describe a model-based learning pedagogy for the teaching and learning of software design evaluation. Model-based learning has been used extensively in science education and entails helping students construct, refine, revise, evaluate, and validate scientific models. We argue that modelling practices in software design evaluation are analogous to these practices. We adapted the model-based learning paradigm and operationalised it as a technology-enhanced learning environment (TELE) for fostering software design evaluation skills in computer science undergraduates. We conducted a research study with 22 undergraduate students to explore how the TELE and its features help students effectively evaluate a given software design. Students attempted a pre-test and a post-test that asked them to identify defects in the design. We used content analysis to identify categories of defects from student responses in the pre-test and post-test. We also analysed student interaction logs and conducted focus-group interviews to identify how features of the TELE contributed to student learning. Findings from the study showed that students' understanding of evaluation improved, from merely adding new functionalities and requirements to a process that involves identifying alternate scenarios in the design that violate the given requirements. Students perceived that the pedagogical features of the TELE were useful in helping them effectively evaluate software designs.
Findings from the study provide evidence for model-based learning as an appropriate pedagogy for software design and open the space for researchers to investigate model-based learning in other aspects of software design, such as designs of different types and varying complexities.
From UML to Petri Nets: The PCM-Based Methodology
In this paper, we present an evaluation methodology to validate the performance of a UML model representing a software architecture. The proposed approach is based on open and well-known standards: UML for software modeling, and the OMG Profile for Schedulability, Performance, and Time Specification for the performance annotations in UML models. Such specifications are collected in an intermediate model, called the Performance Context Model (PCM). The intermediate model is translated into a performance model which is subsequently evaluated. The paper focuses on the mapping from the PCM to the performance domain. More specifically, we adopt Petri nets as the performance domain, specifying a mapping process based on a compositional approach that we have fully implemented in the ArgoPerformance tool. All the rules for deriving a Petri net from a PCM, and the performance measures assessable from the resulting net, are carefully detailed. To validate the proposed technique, we provide an in-depth analysis of a web application for music streaming.
A UML-Based Performance Evaluation of Real-Time Systems Using Timed Petri Nets
Performance is a critical non-functional parameter for real-time systems, and performance analysis is an important task that becomes even more challenging for complex real-time systems. Performance analysis is mostly performed after system development, but early-stage analysis and validation of performance using system models can improve system quality. In this paper, we present an early-stage automated performance evaluation methodology that analyses system performance using a UML sequence diagram model annotated with the Modeling and Analysis of Real-Time and Embedded systems (MARTE) profile. MARTE offers a performance domain sub-profile for representing the real-time system properties essential for performance evaluation. We propose a transformation technique and transformation rules to map the UML sequence diagram model into a Generalized Stochastic Timed Petri net model. All the transformation rules are implemented using a metamodel-based approach and the Atlas Transformation Language (ATL). A case study from the manufacturing domain, a Kanban system, is used to validate the proposed technique.
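The shape of such a model-to-model transformation can be sketched in a few lines. The rule below is a simplified illustration, not the paper's actual ATL rules: each MARTE-annotated message in a sequence diagram becomes a timed transition whose rate is the reciprocal of its mean host demand, chained by places that encode control flow.

```python
# Hypothetical MARTE-annotated sequence diagram: an ordered list of
# messages, each carrying a hostDemand (mean service time, in ms).
sequence = [
    {"name": "requestCard", "hostDemand": 2.0},
    {"name": "checkStock",  "hostDemand": 5.0},
    {"name": "dispatch",    "hostDemand": 1.5},
]

def to_gspn(messages):
    """Map each message to a timed transition; chain them with places.

    The rate of a timed transition is 1 / mean service time, the usual
    GSPN convention for exponentially timed activities.
    """
    net = {"places": ["p0"], "transitions": [], "arcs": []}
    for i, msg in enumerate(messages):
        t = {"name": f"t_{msg['name']}", "rate": 1.0 / msg["hostDemand"]}
        p_out = f"p{i + 1}"
        net["transitions"].append(t)
        net["places"].append(p_out)
        # input arc (place -> transition), output arc (transition -> place)
        net["arcs"] += [(f"p{i}", t["name"]), (t["name"], p_out)]
    return net

net = to_gspn(sequence)
print([t["name"] for t in net["transitions"]])
```

A metamodel-based implementation in ATL expresses the same mapping declaratively, with one rule per source element type instead of an explicit loop.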
Learning from Peer Mistakes: Collaborative UML-Based ITS with Peer Feedback Evaluation
Collaborative Intelligent Tutoring Systems (ITSs) use peer-tutor assessment to give feedback to students solving problems. Through this feedback, students reflect on their thinking and try to improve it when they encounter similar questions. The accuracy of the feedback given by peers is important because it helps students improve their learning skills: if the student acting as peer tutor is unclear about the topic, they will probably provide incorrect feedback. The few attempts in the literature to address this provide only limited support for improving the accuracy and relevancy of peer feedback. This paper presents a collaborative ITS for teaching the Unified Modeling Language (UML), designed so that it can detect erroneous feedback before it is delivered to the student. The evaluations conducted in this study indicate that receiving and sending incorrect feedback have a negative impact on students' learning skills. Furthermore, the results show that the experimental group with peer feedback evaluation achieved significant learning gains compared to the control group.
An experiment in model-driven conceptual database design
The article presents the results of an experiment we conducted with database professionals to evaluate an approach to the automatic design of an initial conceptual database model based on collaborative business process models. The source business process model is represented in BPMN, while the target conceptual model is represented as a UML class diagram. The results confirm those already obtained in a case-study-based evaluation, as well as those of an earlier controlled experiment conducted with undergraduate students. The evaluation indicates that the proposed approach and the implemented generator enable automatic generation of the target conceptual model with a high percentage of completeness and precision. The experiment also confirms that the automatically generated model can be used efficiently as a starting point for manual design of the target model, since it significantly shortens both the estimated effort and the actual time spent to obtain the target model, in contrast to manual design from scratch.
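Completeness and precision of a generated model can be measured as in standard information retrieval, by comparing the set of generated model elements against a reference model. The sketch below uses invented element sets for illustration, not the experiment's data.

```python
# Hypothetical element sets (class names) for a generated UML class
# diagram and the reference model it is compared against.
reference = {"Order", "Customer", "Product", "OrderLine", "Shipment"}
generated = {"Order", "Customer", "Product", "Invoice"}

correct = generated & reference
precision = len(correct) / len(generated)      # share of output that is right
completeness = len(correct) / len(reference)   # recall: share that was found

print(precision, completeness)  # 0.75 0.6
```

In the experiment's terms, a high score on both measures means the generator's output is close to the model a designer would build by hand, so little manual rework remains.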
Assessing and improving state-based class testing: a series of experiments
This work describes an empirical investigation of the cost-effectiveness of well-known state-based testing techniques for classes, or clusters of classes, that exhibit state-dependent behavior. This is practically relevant, as many object-oriented methodologies recommend modeling such components with statecharts, which can then be used as a basis for testing. Our results, based on a series of three experiments, show that in most cases state-based techniques are not likely to be sufficient by themselves to catch most of the faults present in the code. Though useful, they need to be complemented with black-box, functional testing. We focus here on a particular technique, Category Partition, as this is the most commonly used and referenced black-box, functional testing technique. Two different oracle strategies have been applied for checking the success of test cases: one is a very precise oracle checking the concrete state of objects, whereas the other is based on the notion of state invariant (abstract states). Results show a significant difference between them, both in terms of fault detection and cost. This is therefore an important choice, which should be driven by characteristics of the component under test such as its criticality, complexity, and test budget.
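The two oracle strategies can be contrasted on a toy stateful class. The bounded counter below is a hypothetical example, not one of the experiment's subjects: the precise oracle compares concrete attribute values, while the state-invariant oracle only checks which abstract state the object is in.

```python
class BoundedCounter:
    """Toy class under test with state-dependent behavior."""
    def __init__(self, limit):
        self.limit, self.value = limit, 0

    def inc(self):
        if self.value < self.limit:
            self.value += 1

def abstract_state(c):
    # State invariants partition concrete states into abstract ones,
    # as in a statechart: EMPTY, PARTIAL, FULL.
    if c.value == 0:
        return "EMPTY"
    return "FULL" if c.value == c.limit else "PARTIAL"

def concrete_oracle(c, expected_value):
    # Precise: checks the exact concrete state; costlier to specify.
    return c.value == expected_value

def invariant_oracle(c, expected_state):
    # Coarser: only checks the abstract state; cheaper, may miss faults
    # that stay inside the same abstract state.
    return abstract_state(c) == expected_state

c = BoundedCounter(limit=2)
c.inc()
print(concrete_oracle(c, 1), invariant_oracle(c, "PARTIAL"))  # True True
```

A fault that, say, incremented by two would be caught by the concrete oracle immediately but only by the invariant oracle once the abstract state changes, which is the kind of detection gap the experiments quantify.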
A Petri net tool for software performance estimation based on upper throughput bounds
Analysis of functional and non-functional properties (e.g., dependability, security, or performance) ensures that requirements are fulfilled during the design phase of software systems. However, the Unified Modelling Language (UML), the de facto industry standard for modelling software systems, is unsuitable for such analysis on its own, but can be tailored for specific analysis purposes through profiling. For instance, the MARTE profile enables annotating UML models with performance data, which can later be transformed into formal models (e.g., Petri nets or timed automata) for performance evaluation. Estimating performance (or throughput) in such models normally relies on an exhaustive exploration of the state space, which becomes unfeasible for large systems. To overcome this issue, upper throughput bounds are computed, which approximate the real system throughput with a good complexity-accuracy trade-off. This paper introduces a tool, named PeabraiN, that estimates the performance of software systems via their UML models. To do so, UML models are transformed into Petri nets, on which performance is estimated by computing upper throughput bounds. PeabraiN also computes other features of Petri nets, such as upper and lower marking bounds for places, and supports simulation using an approximate (continuous) method. We show the applicability of PeabraiN by evaluating the performance of a building's closed-circuit TV system.
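To see why bounds avoid state-space exploration, consider the classical result for marked graphs (a restricted Petri net class): the cycle time is at least the maximum, over circuits, of total transition delay divided by tokens in the circuit, so throughput is at most the minimum of tokens/delay over circuits. The sketch below uses a hypothetical net's circuits, not PeabraiN's actual algorithm, which handles more general net classes.

```python
# Hypothetical circuits of a marked graph, each given as
# (total mean delay of its transitions, tokens initially in the circuit).
circuits = [
    (4.0, 2),  # local bound: 2 / 4.0 = 0.5
    (3.0, 1),  # local bound: 1 / 3.0 ~ 0.333  -> the bottleneck
    (5.0, 4),  # local bound: 4 / 5.0 = 0.8
]

def upper_throughput_bound(circuits):
    # Steady-state throughput cannot exceed tokens/delay of the
    # slowest (bottleneck) circuit.
    return min(tokens / delay for delay, tokens in circuits)

print(upper_throughput_bound(circuits))
```

The bound is computed from the net's structure and initial marking alone, in time polynomial in the net size, which is what makes it tractable where exact state-space analysis is not.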
Evaluation of the Ontological Completeness and Clarity of Object-Oriented Conceptual Modelling Grammars
Several research studies have concluded that modelling grammars supporting the Object-Oriented (OO) methodology focus more on modelling system design and implementation phenomena than on real-world phenomena in IS users' domains. The purpose of this research study was therefore to evaluate the suitability of OO modelling grammars for conceptual modelling. Although the work focused on one widely used OO modelling grammar, the Unified Modelling Language (UML), the approach developed can be applied to any OO modelling grammar. The first phase of the study evaluated all UML constructs and identified a subset capable of representing real-world phenomena in user domains. The second phase was an empirical evaluation of the identified subset of UML constructs. The results of this empirical evaluation suggest that this subset, rather than the full set of UML constructs, is better suited for conceptual modelling.