Catalogue Search | MBRL
Explore the vast range of titles available.
59,924 result(s) for "software product line"
Aspect-oriented, model-driven software product lines : the AMPLE way
"Software product lines provide a systematic means of managing variability in a suite of products. They have many benefits but there are three major barriers that can prevent them from reaching their full potential. First, there is the challenge of scale: a large number of variants may exist in a product line context and the number of interrelationships and dependencies can rise exponentially. Second, variations tend to be systemic by nature in that they affect the whole architecture of the software product line. Third, software product lines often serve different business contexts, each with its own intricacies and complexities. The AMPLE (http://www.ample-project.net/) approach tackles these three challenges by combining advances in aspect-oriented software development and model-driven engineering. The full suite of methods and tools that constitute this approach are discussed in detail in this edited volume and illustrated using three real-world industrial case studies" -- Provided by publisher.
Effects of variability in models: a family of experiments
by Berger, Thorsten; Mahmood, Wardah; Anjorin, Anthony
in Best practice; Computer models; Experiments
2022
The ever-growing need for customization creates a need to maintain software systems in many different variants. To avoid having to maintain different copies of the same model, developers of modeling languages and tools have recently started to provide implementation techniques for such variant-rich systems, notably variability mechanisms, which support implementing the differences between model variants. Available mechanisms follow either the annotative or the compositional paradigm, each of which has its own benefits and drawbacks. Currently, language and tool designers often select a variability mechanism based solely on intuition. A better empirical understanding of the comprehension of variability mechanisms would help them improve support for effective modeling. In this article, we present an empirical assessment of annotative and compositional variability mechanisms for three popular types of models. We report and discuss findings from a family of three experiments with 164 participants in total, in which we studied the impact of different variability mechanisms during model comprehension tasks. We experimented with three model types commonly found in modeling languages: class diagrams, state machine diagrams, and activity diagrams. We find that, in two out of three experiments, the annotative technique led to better developer performance. Use of the compositional mechanism correlated with impaired performance. For all three considered tasks, the annotative mechanism was preferred over the compositional one in all experiments. We present actionable recommendations concerning support for flexible, task-specific solutions, and the transfer of established best practices from the code domain to models.
Journal Article
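The annotative/compositional distinction the abstract studies can be illustrated with a minimal sketch. All names below (feature names, model elements, both derivation functions) are illustrative inventions, not the paper's tooling: an annotative "150% model" tags elements with presence conditions, while a compositional approach composes a base model with per-feature fragments.

```python
# Toy illustration of the two variability paradigms (hypothetical names).

# Annotative: one "150% model" whose elements carry presence conditions.
annotated_model = [
    ("state", "Idle", None),            # common to all variants
    ("state", "Charging", "battery"),   # present iff 'battery' is selected
    ("state", "Diagnostics", "premium"),
]

def derive_annotative(model, features):
    """Project the annotated model onto a feature selection."""
    return [(kind, name) for kind, name, cond in model
            if cond is None or cond in features]

# Compositional: a base model plus separate fragments composed on demand.
base = [("state", "Idle")]
fragments = {
    "battery": [("state", "Charging")],
    "premium": [("state", "Diagnostics")],
}

def derive_compositional(base, fragments, features):
    """Compose the base model with the fragments of selected features."""
    model = list(base)
    for f in features:
        model += fragments.get(f, [])
    return model

# Both paradigms derive the same variant for the same selection.
assert derive_annotative(annotated_model, {"battery"}) == \
       derive_compositional(base, fragments, ["battery"])
```

The comprehension trade-off the experiments probe is visible even here: the annotative model shows all variants in one place, while the compositional fragments isolate each feature's contribution.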
Tackling Combinatorial Explosion: A Study of Industrial Needs and Practices for Analyzing Highly Configurable Systems
by Berger, Thorsten; Nešić, Damir; Maro, Salome Honest
in Analysis; Highly configurable systems; Software product lines
2018
Highly configurable systems are complex pieces of software. To tackle this complexity, hundreds of dedicated analysis techniques have been conceived, many of which are able to analyze system properties for all possible system configurations, as opposed to traditional, single-system analyses. Unfortunately, it is largely unknown whether these techniques are adopted in practice, whether they address actual needs, or what strategies practitioners actually apply to analyze highly configurable systems. We present a study of analysis practices and needs in industry. It relied on a survey with 27 practitioners engineering highly configurable systems and follow-up interviews with 15 of them, covering 18 different companies from eight countries. We confirm that typical properties considered in the literature (e.g., reliability) are relevant, that consistency between variability models and artifacts is critical, but that the majority of analyses for specifications of configuration options (a.k.a., variability model analysis) are not perceived as needed. We identified rather pragmatic analysis strategies, including practices to avoid the need for analysis. For instance, testing with experience-based sampling is the most commonly applied strategy, while systematic sampling is rarely applicable. We discuss analyses that are missing and synthesize our insights into suggestions for future research.
Conference Proceeding
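The contrast the abstract draws between experience-based and systematic sampling can be sketched in code. This is my own toy illustration, not from the paper: the option names are invented, and the systematic strategy shown is greedy pairwise (2-wise) sampling, one common systematic scheme.

```python
# Illustrative contrast: experience-based vs. systematic pairwise sampling
# over boolean configuration options (all names are hypothetical).
from itertools import combinations, product

options = ["logging", "ssl", "cache"]

# Experience-based: testers hand-pick configurations they believe are risky.
experience_sample = [
    {"logging": True, "ssl": True, "cache": False},  # a "known-fragile" combo
]

def pairwise_sample(options):
    """Greedy 2-wise sampling: cover every value pair of every option pair
    with as few configurations as possible."""
    uncovered = {(a, va, b, vb)
                 for a, b in combinations(options, 2)
                 for va, vb in product([False, True], repeat=2)}
    sample = []
    while uncovered:
        best, best_cov = None, set()
        for values in product([False, True], repeat=len(options)):
            cfg = dict(zip(options, values))
            cov = {(a, cfg[a], b, cfg[b])
                   for a, b in combinations(options, 2)} & uncovered
            if len(cov) > len(best_cov):
                best, best_cov = cfg, cov
        sample.append(best)
        uncovered -= best_cov
    return sample
```

For three boolean options there are 2**3 = 8 full configurations, but pairwise coverage is reached with only a handful, which is why systematic sampling scales better when it is applicable at all.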
Uniform and scalable sampling of highly configurable systems
by Benavides, David; Galindo, José A.; Batory, Don
in Algorithms; Configurable programs; Embedded systems
2022
Many analyses on configurable software systems are intractable when confronted with colossal and highly constrained configuration spaces. These analyses could instead use statistical inference, where a tractable sample accurately predicts results for the entire space. To do so, the laws of statistical inference require each member of the population to be equally likely to be included in the sample, i.e., the sampling process needs to be “uniform”. SAT samplers have been developed to generate uniform random samples at a reasonable computational cost. However, there is a lack of experimental validation over colossal spaces to show whether the samplers indeed produce uniform samples or not. This paper (i) proposes a new sampler named BDDSampler, (ii) presents a new statistical test to verify sampler uniformity, and (iii) reports the evaluation of BDDSampler and five other state-of-the-art samplers: KUS, QuickSampler, Smarch, Spur, and Unigen2. Our experimental results show only BDDSampler satisfies both scalability and uniformity.
Journal Article
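The uniformity property the paper tests can be made concrete with a baseline sketch. This is not BDDSampler: it is a naive enumerate-then-choose sampler over a tiny hypothetical constrained space (feature names and the constraint are invented), which is exactly the approach that becomes intractable for the colossal spaces the paper targets.

```python
# Naive uniform sampler over a small constrained configuration space.
# Real samplers (BDD- or SAT-based) avoid enumerating the space.
import random
from itertools import product

features = ["a", "b", "c"]

def valid(cfg):
    # Hypothetical constraint: feature 'b' requires feature 'a'.
    return not (cfg["b"] and not cfg["a"])

space = [dict(zip(features, vs)) for vs in product([False, True], repeat=3)]
valid_space = [c for c in space if valid(c)]  # 6 of the 8 configurations

def uniform_sample(n):
    """Each valid configuration is equally likely -- the property the
    paper's statistical test checks for real samplers."""
    return [random.choice(valid_space) for _ in range(n)]

# Empirically check uniformity: each of the 6 valid configurations
# should appear roughly 10,000 times in 60,000 draws.
counts = {}
for cfg in uniform_sample(60_000):
    key = tuple(sorted(k for k, v in cfg.items() if v))
    counts[key] = counts.get(key, 0) + 1
```

A sampler that over-represents some configurations would skew any statistical inference built on the sample, which motivates the paper's dedicated uniformity test.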
Incremental software product line verification - A performance analysis with dead variable code
by Schmid, Klaus; Flöter, Moritz; Gerling, Lea
in Product lines; Software development; Software engineering
2022
Verification approaches for Software Product Lines (SPLs) aim at detecting variability-related defects and inconsistencies. In general, these analyses take a significant amount of time to provide complete results for an entire, complex SPL. If the SPL evolves, these results potentially become invalid, which requires a time-consuming re-verification of the entire SPL for each increment. However, in previous work we showed that variability-related changes occur rather infrequently and typically only affect small parts of an SPL. In this paper, we utilize this observation and present an incremental dead-variable-code analysis as an example of incremental SPL verification, which achieves significant performance improvements. It explicitly considers changes and partially updates its previous results by re-verifying changed artifacts only. We apply this approach to the Linux kernel, demonstrating that our fastest incremental strategy takes only 3.20 seconds or less for most of the changes, while the non-incremental approach takes a median of 1,020 seconds. We also discuss the impact of different variants of our strategy on the overall performance, providing insights into optimizations that are worthwhile.
Journal Article
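The incremental idea described above, reusing previous results and re-verifying changed artifacts only, can be sketched as a content-hash cache. This is a generic illustration, not the paper's tool: the `verify` stand-in and artifact names are hypothetical, and the real dead-variable-code analysis is far more involved.

```python
# Sketch of incremental verification: cache per-artifact results keyed by
# a content digest and re-run the (expensive) analysis only on changes.
import hashlib

def digest(text):
    return hashlib.sha256(text.encode()).hexdigest()

def verify(artifact_text):
    # Stand-in for an expensive dead-variable-code analysis:
    # pretend an artifact is defective iff it contains the marker "DEAD".
    return "DEAD" in artifact_text

cache = {}  # artifact name -> (content digest, cached result)

def incremental_verify(artifacts):
    """artifacts: dict of name -> source text. Returns name -> result,
    re-verifying only artifacts whose content changed since last run."""
    results = {}
    for name, text in artifacts.items():
        d = digest(text)
        if name in cache and cache[name][0] == d:
            results[name] = cache[name][1]   # unchanged: reuse result
        else:
            results[name] = verify(text)     # changed or new: re-verify
            cache[name] = (d, results[name])
    return results
```

Because most evolution steps touch few artifacts, almost every call hits the cache, which is the effect behind the seconds-versus-minutes gap the abstract reports.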
Open-source software product line extraction processes: the ArgoUML-SPL and Phaser cases
by Martinez, Jabier; Moreira, Rodrigo André Ferreira; Figueiredo, Eduardo
in Case studies; Empirical analysis; Feature extraction
2022
Software Product Lines (SPLs) are rarely developed from scratch. Commonly, they emerge from one product when there is a need to create tailored variants, or from existing variants created in an ad-hoc way once their separate maintenance and evolution become challenging. Despite the vast literature about re-engineering systems into SPLs and related technical approaches, there is a lack of detailed analysis of the process itself and the effort involved. In this paper, we provide and analyze empirical data on the extraction processes of two open-source case studies, namely ArgoUML and Phaser. Both cases emerged from the transition of a monolithic system into an SPL. The analysis relies on information mined from the version control history of their respective source-code repositories and on discussions with developers who took part in the process. Unlike previous works that focused mostly on the structural results of the final SPL, the contribution of this study is an in-depth characterization of the processes. With this work, we aim to provide a deeper understanding of the strategies for SPL extraction and their implications. Our results indicate that the source code changes can range from almost a fourth to over half of the total lines of code. Developers may or may not use branching strategies for feature extraction. Additionally, the problems faced during the extraction process may be due to lack of tool support, complexity in managing feature dependencies, and issues with feature constraints. We made the datasets and the analysis scripts of both case studies publicly available to be used as a baseline for extractive SPL adoption research and practice.
Journal Article
An Experimental Evaluation of Path-Based Product Line Integration Testing and Test Coverage Metrics
2023
Product line testing is significant because any faults in a product line platform can have widespread impacts on the multiple products configured from that platform. Due to the shared platform, certain testing can be repeatedly performed across different products, leading to unnecessary costs. To enhance quality and reduce costs in product line testing, it is essential to minimize redundant testing of the products in a product line. Because test coverage provides a way to explicitly state the extent to which a software item has been tested, a clear understanding of test coverage helps avoid unnecessary repetition of tests and ensures that testing efforts are focused on areas that require attention, ultimately leading to more efficient and effective product line testing. It is therefore necessary to define appropriate test coverage metrics for product line testing that enable testers to identify redundancies in their testing efforts. Path-based integration testing has been proven to be an effective approach to product line integration testing. This paper defines coverage metrics for path-based product line integration testing and demonstrates their effectiveness in preventing redundant testing between platform testing and testing for individual products, while also effectively detecting faults. The experiment results highlight the coverage metrics' effectiveness in avoiding redundant testing, reducing costs, and covering interfaces across different modules.
Journal Article
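A path-based coverage metric of the general kind the abstract describes can be sketched as the fraction of required inter-module paths a test suite actually exercises. This is my own illustrative definition, not the paper's metrics; the module names and paths are invented.

```python
# Illustrative path-based coverage metric (hypothetical, not the paper's):
# fraction of required module-interaction paths exercised by executed tests.
def path_coverage(required_paths, executed_paths):
    required = set(map(tuple, required_paths))
    covered = required & set(map(tuple, executed_paths))
    return len(covered) / len(required)

# Interaction paths through modules A -> B -> C of a hypothetical product.
required = [("A", "B"), ("A", "B", "C"), ("A", "C")]
executed = [("A", "B"), ("A", "C")]

coverage = path_coverage(required, executed)  # 2 of 3 paths covered
```

Tracking which platform-level paths are already covered is what lets per-product testing skip them, which is the redundancy-avoidance effect the paper measures.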
An adaptive IoT architecture using combination of concept-drift and dynamic software product line engineering
2021
In general, an IoT architecture has the capability to provide services that support decision making based on data read from sensors and other inputs for specific purposes, with the help of machine learning and big data technology. [...]a mechanism to identify the data behaviour should be developed to address the aforementioned problems. [...]we need to figure out how to automate [15] the software reconfiguration mechanism. [...]an adaptive architecture is important to include as a success factor.
Journal Article
Model-Driven and Software Product Line Engineering
by Royer, Jean-Claude; Arboleda, Hugo
in COMPUTERS; Model-driven software architecture; Software product line engineering
2013, 2012
Many approaches to creating Software Product Lines have emerged that are based on Model-Driven Engineering. This book introduces both Software Product Lines and Model-Driven Engineering, which have separate success stories in industry, and focuses on the practical combination of them. It describes the challenges and benefits of merging these two software development trends and provides the reader with a novel approach and practical mechanisms to improve software development productivity.
The book is aimed at engineers and students who wish to understand and apply software product lines and model-driven engineering in their activities today. The concepts and methods are illustrated with two product line examples: the classic smart-home systems and a collection manager information system.
Toward Compositional Software Product Lines
2010
Software product lines (SPLs) were introduced over the last two decades as a mechanism for dealing with the complexities of software systems' ever-increasing size by exploiting the commonalities among a company's different products or systems. By standardizing externally sourced software components and sharing the domain-specific software assets the company develops across different product teams, a company can significantly reduce its per-product R&D cost, which improves its competitive position. This can be achieved through a richer product portfolio, a harmonized look-and-feel across the product portfolio, or a significantly higher degree of customer configurability. Companies that successfully deploy SPL technology can achieve order-of-magnitude growth over a decade and reach major business milestones.
Journal Article