Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
201,707 result(s) for "Software performance"
Business models in the software industry : the impact on firm and M&A performance
The relevance of software business models has increased tremendously in recent years. Markus Schief explores opportunities to improve the management of these models. Based on a conceptual framework of software business model characteristics, he conducts large empirical studies to examine the current state of business models in the software industry.
Exploring Performance Assurance Practices and Challenges in Agile Software Development: An Ethnographic Study
2022
Background: Agile principles play a pivotal role in modern software development. Unfortunately, the assessment of non-functional software properties, such as performance, can be challenging in Agile Software Development (ASD). The Agile mentality tends to favor functional development over non-functional quality assurance, and frequent code changes and software releases make classical performance assurance approaches impractical. Objective: This paper investigates the current practices, problems, and challenges of performance assurance in a real ASD context. To the best of our knowledge, this is the first empirical study that specifically investigates performance assurance in daily ASD work. Method: Through a six-month industry collaboration with a large software organization that adopts ASD, we investigated practical and management problems in handling performance assurance activities. The research followed an ethnographic approach, building knowledge from participatory observations, unstructured interviews, and reviews of documentation. Results: The study shows that the case organization still relies on a waterfall-like approach for performance assurance. Such an approach proved inadequate for ASD, leading to sub-optimal management of performance assessment activities. We distilled three key challenges in trying to improve the performance assurance process: (i) managing performance assessment activities, (ii) continuous performance assessment, and (iii) defining the performance assessment effort. Conclusions: The assessment of software performance in the context of ASD is still far from flawless. The lack of guidelines and well-established practices induces the adoption of approaches that can be obsolete and inadequate for ASD. Further research is needed to improve performance management in this context and to enable effective continuous performance assessment.
Journal Article
From UML to Petri Nets: The PCM-Based Methodology
by
Distefano, S
,
Puliafito, A
,
Scarpa, M
in
Annotations
,
Application software
,
Computer programs
2011
In this paper, we present an evaluation methodology to validate the performance of a UML model, representing a software architecture. The proposed approach is based on open and well-known standards: UML for software modeling and the OMG Profile for Schedulability, Performance, and Time Specification for the performance annotations into UML models. Such specifications are collected in an intermediate model, called the Performance Context Model (PCM). The intermediate model is translated into a performance model which is subsequently evaluated. The paper is focused on the mapping from the PCM to the performance domain. More specifically, we adopt Petri nets as the performance domain, specifying a mapping process based on a compositional approach we have entirely implemented in the ArgoPerformance tool. All of the rules to derive a Petri net from a PCM and the performance measures assessable from the former are carefully detailed. To validate the proposed technique, we provide an in-depth analysis of a web application for music streaming.
Journal Article
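As a rough illustration of the performance domain this methodology targets (not the PCM-to-Petri-net mapping rules defined in the paper or implemented in the ArgoPerformance tool), a place/transition net with token firing can be sketched in a few lines of Python; the toy music-streaming request flow below is invented for this example:

```python
# A minimal place/transition Petri net: places hold tokens, transitions fire
# when their input places contain enough tokens. The model is a toy request
# flow, not output of the PCM-based methodology or ArgoPerformance.
import random

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place name -> token count
        self.transitions = {}                 # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self):
        return [t for t, (ins, _) in self.transitions.items()
                if all(self.marking.get(p, 0) >= n for p, n in ins.items())]

    def fire(self, name):
        ins, outs = self.transitions[name]
        for p, n in ins.items():
            self.marking[p] -= n
        for p, n in outs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Five streaming requests compete for two server threads.
net = PetriNet({"requests": 5, "idle_threads": 2, "done": 0})
net.add_transition("start_service", {"requests": 1, "idle_threads": 1}, {"in_service": 1})
net.add_transition("end_service", {"in_service": 1}, {"idle_threads": 1, "done": 1})

while net.enabled():
    net.fire(random.choice(net.enabled()))
print(net.marking)   # every request eventually reaches the "done" place
```

Performance evaluation with Petri nets typically relies on stochastic variants in which transitions carry firing rates or delays; the sketch only captures the untimed token game such models build on.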
Broadened support for software and system model interchange
2019
Although sound performance analysis theories and techniques exist, they are not widely used because they require extensive expertise in performance modeling and measurement. The overall goal of our work is to make performance modeling more accessible by automating much of the modeling effort. We have proposed a model interoperability framework that enables performance models to be automatically exchanged among modeling (and other) tools. The core of the framework is a set of model interchange formats (MIF): a common representation for data required by performance modeling tools. Our previous research developed a representation for system performance models (PMIF) and another for software performance models (S-PMIF), both based on the Queueing Network Modeling (QNM) paradigm. In order to manage the research scope and focus on model interoperability issues, the initial MIFs were limited to QNMs that can be solved by efficient, exact solution algorithms. The overall model interoperability approach has now been demonstrated to be viable. This paper broadens the scope of PMIF and S-PMIF to represent models that can be solved with additional methods such as analytical approximations or simulation solutions. It presents the extensions considered, describes the extended meta-models, and provides verification with examples and a case study.
Journal Article
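To make the interchange idea concrete, the following toy sketch serializes a small queueing network model into a tool-neutral XML document. The element and attribute names are invented for this illustration and are not the actual PMIF or S-PMIF schema defined by the authors:

```python
# Toy illustration of a model interchange document: a queueing network model
# expressed in a neutral format that different solvers could import. The schema
# below is made up for this example; it is not PMIF or S-PMIF.
import xml.etree.ElementTree as ET

model = ET.Element("QueueingNetworkModel", name="web-tier")
ET.SubElement(model, "Workload", name="browse", type="closed",
              population="50", thinkTime="7.0")
for server, demand in [("cpu", "0.005"), ("disk", "0.020")]:
    node = ET.SubElement(model, "Server", name=server, scheduling="FCFS")
    ET.SubElement(node, "ServiceDemand", workload="browse", seconds=demand)

print(ET.tostring(model, encoding="unicode"))
```

Any solver that understands such a document, whether it applies an exact algorithm, an analytical approximation, or simulation, can evaluate the same model; that tool-to-tool exchange is the kind of interoperability the MIF framework is built to provide.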
Facilitating Performance Predictions Using Software Components
by
Happe, J
,
Koziolek, H
,
Reussner, R
in
Analysis
,
Architecture
,
component-based software architecture
2011
Component-based software engineering (CBSE) poses challenges for predicting and evaluating software performance but also offers several advantages. Software performance engineering can benefit from CBSE ideas and concepts. The MediaStore, a fictional system, demonstrates how to achieve compositional reasoning about software performance.
Journal Article
The Method of Layers
1995
Distributed applications are being developed that contain one or more layers of software servers. Software processes within such systems suffer contention delays both for shared hardware and at the software servers. The responsiveness of these systems is affected by the software design, the threading level and number of instances of software processes, and the allocation of processes to processors. The Method of Layers (MOL) is proposed to provide performance estimates for such systems. The MOL uses the mean value analysis (MVA) linearizer algorithm as a subprogram to assist in predicting model performance measures.
Journal Article
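The MOL's linearizer subprogram builds on Mean Value Analysis. As a minimal sketch of the underlying computation, exact single-class MVA for a closed queueing network is shown below with made-up service demands; this is the classic building block, not the layered MOL algorithm itself:

```python
# Exact Mean Value Analysis (MVA) for a closed, single-class queueing network
# of FCFS queueing stations. Service demands are illustrative, not from the paper.

def exact_mva(service_demands, n_customers):
    """service_demands: per-visit demand D_k (seconds) at each station;
    n_customers: closed population size N.
    Returns (throughput, per-station mean queue lengths)."""
    K = len(service_demands)
    queue = [0.0] * K                       # mean queue lengths at population n-1
    throughput = 0.0
    for n in range(1, n_customers + 1):
        # Residence time at station k: D_k * (1 + queue length seen on arrival)
        residence = [service_demands[k] * (1.0 + queue[k]) for k in range(K)]
        throughput = n / sum(residence)     # Little's law over the whole network
        queue = [throughput * r for r in residence]   # Little's law per station
    return throughput, queue

# Example: CPU, disk, and a software server with 10 concurrent users.
X, q = exact_mva([0.005, 0.030, 0.012], 10)
print(f"throughput = {X:.1f} req/s, queue lengths = {[round(v, 2) for v in q]}")
```

Layered techniques such as the MOL iterate analyses of this kind across software and hardware submodels until the contention estimates converge.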
Model-based performance prediction in software development: a survey
2004
Over the last decade, a lot of research has been directed toward integrating performance analysis into the software development process. Traditional software development methods focus on software correctness, introducing performance issues later in the development process. This approach does not take into account the fact that performance problems may require considerable changes in design, for example, at the software architecture level, or even worse at the requirement analysis level. Several approaches were proposed in order to address early software performance analysis. Although some of them have been successfully applied, we are still far from seeing performance analysis integrated into ordinary software development. In this paper, we present a comprehensive review of recent research in the field of model-based performance prediction at software development time in order to assess the maturity of the field and point out promising research directions.
Journal Article
Exploratory modeling of software performance and reliability over message-oriented middleware
by
Garita, César
,
Flores-González, Martín
,
Trejos-Zelaya, Ignacio
in
software reliability
,
software performance engineering
,
message
2020
Performance is an important quality attribute of a software system. Software performance engineering comprises the analysis, design, construction, measurement, and validation activities that address performance requirements throughout the software development process. In software systems that use message-based communication, performance depends largely on the message-oriented middleware (MOM). Software architects need to consider its organization, configuration, and use in order to predict the behavior of a system built on such a platform. Including a MOM in a software architecture requires knowing the impact of the messaging and of the underlying infrastructure; omitting the influence of the MOM would lead to erroneous predictions. This article explores that influence through component-based modeling and simulation using the Palladio Component Model (PCM) approach. In particular, an application modeled in PCM was adapted to include message-based communication. Simulations on the model, systematic measurements, and load tests on the application made it possible to determine how changes introduced into the model influence predictions of the application's performance and reliability. A bottleneck that negatively impacts the performance and reliability of the original system was identified. Introducing the MOM improved the system's reliability at the expense of its performance. Component-based performance simulation revealed significant differences with respect to the experiments based on load tests and measurements.
Journal Article
The role of modeling in the performance testing of e-commerce applications
2004
An e-commerce scalability case study is presented in which both traditional performance testing and performance modeling were used to help tune the application for high performance. This involved the creation of a system simulation model as well as the development of an approach for test case generation and execution. We describe our experience using a simulation model to help diagnose production system problems, and discuss ways that the effectiveness of performance testing efforts was improved by its use.
Journal Article
A configurable method for benchmarking scalability of cloud-native applications
2022
Cloud-native applications constitute a recent trend for designing large-scale software systems. However, even though several cloud-native tools and patterns have emerged to support scalability, there is no commonly accepted method to empirically benchmark their scalability. In this study, we present a benchmarking method allowing researchers and practitioners to conduct empirical scalability evaluations of cloud-native applications, frameworks, and deployment options. Our benchmarking method consists of scalability metrics, measurement methods, and an architecture for a scalability benchmarking tool, particularly suited for cloud-native applications. Following fundamental scalability definitions and established benchmarking best practices, we propose to quantify scalability by performing isolated experiments for different load and resource combinations, which assess whether specified service level objectives (SLOs) are achieved. To balance usability and reproducibility, our benchmarking method provides configuration options controlling the trade-off between overall execution time and statistical grounding. We perform an extensive experimental evaluation of our method's configuration options for the special case of event-driven microservices. For this purpose, we use benchmark implementations of the two stream processing frameworks Kafka Streams and Flink and run our experiments in two public clouds and one private cloud. We find that, independent of the cloud platform, it only takes a few repetitions (≤ 5) and short execution times (≤ 5 minutes) to assess whether SLOs are achieved. Combined with our findings from evaluating different search strategies, we conclude that our method makes it possible to benchmark scalability in reasonable time.
Journal Article
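The experiment structure described in the abstract, isolated runs over a grid of load intensities and resource amounts, each repeated a few times and judged against an SLO, can be sketched as follows. All function names, the synthetic latency curve, and the numbers are placeholders, not the authors' tooling (their benchmarks target Kafka Streams and Flink):

```python
# Minimal sketch of SLO-based scalability benchmarking: for each load level,
# find the smallest resource amount whose repeated, isolated runs meet the SLO.

REPETITIONS = 5        # the study reports that <= 5 repetitions usually suffice
SLO_P99_MS = 500       # example SLO: 99th-percentile latency below 500 ms

def measure_p99_latency_ms(load, instances):
    """Placeholder for one short (e.g. 5-minute) load-test run against a
    deployment with `instances` replicas; here a synthetic latency curve."""
    return 50 + 0.02 * load / instances

def slo_met(load, instances):
    """Run the isolated, repeated experiments for one load/resource combination."""
    return all(measure_p99_latency_ms(load, instances) <= SLO_P99_MS
               for _ in range(REPETITIONS))

def scalability_profile(loads, instance_counts):
    """For each load level, the smallest resource amount that meets the SLO."""
    return {load: next((n for n in sorted(instance_counts) if slo_met(load, n)), None)
            for load in loads}

print(scalability_profile(loads=[10_000, 50_000, 100_000],
                          instance_counts=[1, 2, 4, 8]))
# -> {10000: 1, 50000: 4, 100000: 8}: resource demand as a function of load
```

The resulting load-to-resource mapping is one way to express a scalability profile; plotting it shows how resource demand grows as the load increases.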