865,479 results for "SOFTWARE PRODUCTS"
Uniform and scalable sampling of highly configurable systems
Many analyses of configurable software systems are intractable when confronted with colossal, highly constrained configuration spaces. These analyses could instead use statistical inference, where a tractable sample accurately predicts results for the entire space. For that to work, the laws of statistical inference require each member of the population to be equally likely to be included in the sample; i.e., the sampling process needs to be "uniform". SAT samplers have been developed to generate uniform random samples at a reasonable computational cost. However, there is a lack of experimental validation over colossal spaces showing whether these samplers actually produce uniform samples. This paper (i) proposes a new sampler named BDDSampler, (ii) presents a new statistical test to verify sampler uniformity, and (iii) reports the evaluation of BDDSampler and five other state-of-the-art samplers: KUS, QuickSampler, Smarch, Spur, and Unigen2. Our experimental results show that only BDDSampler satisfies both scalability and uniformity.
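As a rough illustration of what a sampler uniformity test can look like, the sketch below draws repeatedly from a toy, fully enumerable configuration space and applies a chi-squared goodness-of-fit test. This is not the statistical test proposed in the paper (which must cope with colossal spaces that cannot be enumerated); the `population`, the two toy samplers, and the significance level are invented for illustration.

```python
# Illustrative uniformity check for a configuration sampler on a tiny,
# enumerable space. Hypothetical stand-in, not the paper's test.
from collections import Counter
import random

from scipy.stats import chisquare

def uniformity_check(sampler, population, n_draws=100_000, alpha=0.01):
    """Chi-squared test: do the sampler's draws match a uniform
    distribution over the enumerated population of valid configurations?"""
    counts = Counter(sampler() for _ in range(n_draws))
    observed = [counts.get(c, 0) for c in population]
    expected = [n_draws / len(population)] * len(population)
    stat, p_value = chisquare(observed, expected)
    return p_value >= alpha  # True: uniformity is not rejected

# Toy population: the 5 valid configurations of a small feature model.
population = ["A", "AB", "ABC", "AC", "B"]
uniform = lambda: random.choice(population)
biased  = lambda: random.choice(population + ["A"])  # over-samples "A"

print(uniformity_check(uniform, population))  # usually True
print(uniformity_check(biased, population))   # usually False
```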
Evaluating organizational characteristics complementary with enterprise software products
Implementing, deploying, and maintaining pre-configured enterprise software products is one of the key challenges managers must address to stay competitive in the never-ending search for better ways of conducting business. The literature identifies two general approaches managers can use for successful implementation, deployment, and maintenance of enterprise software products. The first is the internal redeployment of managerial practices already used to manage other areas of the enterprise. The second is the adoption of "world-wide" industry "best practices" that international vendors of enterprise software and their local representatives sell as part of their pre-configured software products. This paper presents a novel model that enables enterprises to systematically evaluate the fit between their specific organizational characteristics and those complementary with successful deployment of international pre-configured enterprise software products. The proposed model is tested through a comparison of two groups of enterprises drawn from the 1,000 largest enterprises in Slovenia: the first group invests mostly in local, the second mostly in international, enterprise software products. The paper finds that, on average, the two groups differ significantly and relevantly in 44% of the examined organizational characteristics. The model serves as a comprehensive organizational risk checklist for enterprises that are about to invest in enterprise software products.
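The abstract does not spell out the statistical procedure behind the 44% figure, but the kind of per-characteristic two-group comparison it implies might look like the sketch below. The survey scores, the characteristic name, and the choice of Welch's t-test are all assumptions, not the paper's actual method.

```python
# Hypothetical sketch: for each organizational characteristic, test whether
# enterprises investing in international vs. local enterprise software
# differ significantly. Data and test choice are invented.
from scipy.stats import ttest_ind

international = {"process_standardization": [4, 5, 4, 3, 5, 4]}
local         = {"process_standardization": [2, 3, 2, 3, 2, 4]}

significant = []
for name in international:
    stat, p = ttest_ind(international[name], local[name], equal_var=False)
    if p < 0.05:
        significant.append(name)

print(f"{len(significant)}/{len(international)} characteristics differ significantly")
```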
A comprehensive overview of software product management challenges
The principal focus of software product management is to ensure the economic success of the product: to prolong the product's life as much as possible with modest expenditures in order to maximize profits. Software product managers play an important role in the software development organization, being responsible for the strategy, business case, product roadmap, high-level requirements, product deployment (release management), and retirement plan. This article explores the problems that affect the software product management process, along with their perceived frequency and perceived severity. The data were collected through a systematic literature review (covering 5 main databases), interviews (10 software product managers from IT companies), and surveys (89 participants). In total, 95 software product management problems, assigned non-exclusively to 7 areas, were identified; these were narrowed down to the 27 problems mentioned in at least 3 interviews, which were then prioritized by perceived frequency and perceived severity. The problems perceived as the most frequent are: determining the true value of the product that the customer needs, frequent changes of strategy and priorities, technical debt, working in silos, and balancing reactive and proactive work. Some of the identified problems extend beyond the software product management process itself, but all of them affect the work of software product managers.
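One way to operationalize the prioritization described above is to rank problems by the product of their perceived frequency and perceived severity. The scores and the product-based formula in this sketch are invented for illustration; the paper reports rankings without prescribing this computation.

```python
# Illustrative ranking of SPM problems by perceived frequency x severity.
# Scores (1-5 scales) and the scoring formula are assumptions.
problems = {
    "Determining the true value the customer needs": (4.6, 4.2),
    "Frequent strategy and priority changes":        (4.4, 3.9),
    "Technical debt":                                (4.1, 4.0),
    "Working in silos":                              (3.9, 3.5),
}

ranked = sorted(problems.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (freq, sev) in ranked:
    print(f"{freq * sev:5.1f}  {name}")
```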
Effects of variability in models: a family of experiments
The ever-growing need for customization creates a need to maintain software systems in many different variants. To avoid maintaining separate copies of the same model, developers of modeling languages and tools have recently started to provide implementation techniques for such variant-rich systems, notably variability mechanisms, which support implementing the differences between model variants. Available mechanisms follow either the annotative or the compositional paradigm, each of which has distinct benefits and drawbacks. Currently, language and tool designers often select a variability mechanism based solely on intuition. A better empirical understanding of how variability mechanisms are comprehended would help them improve support for effective modeling. In this article, we present an empirical assessment of annotative and compositional variability mechanisms for three popular types of models. We report and discuss findings from a family of three experiments with 164 participants in total, in which we studied the impact of different variability mechanisms on model comprehension tasks. We experimented with three model types commonly found in modeling languages: class diagrams, state machine diagrams, and activity diagrams. We find that, in two out of three experiments, the annotative technique led to better developer performance, while use of the compositional mechanism correlated with impaired performance. Across all experiments and all three tasks, participants preferred the annotative mechanism over the compositional one. We present actionable recommendations concerning support for flexible, task-specific solutions and the transfer of established best practices from the code domain to models.
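To make the two paradigms concrete, the sketch below contrasts them on a toy class-diagram-like model: the annotative approach keeps a single model whose elements carry presence conditions, while the compositional approach merges separately stored fragments into a base model. The element names and the list-based representation are invented; real mechanisms operate on actual modeling languages.

```python
# Annotative: one model, every element carries a presence condition.
annotated_model = [
    ("class Order", None),                 # always present
    ("class Invoice", "BILLING"),          # only in billing variants
    ("assoc Order->Invoice", "BILLING"),
]

def derive(model, features):
    """Keep elements whose presence condition is satisfied."""
    return [e for e, cond in model if cond is None or cond in features]

# Compositional: a base model plus separately stored fragments that a
# composition step merges in when the corresponding feature is selected.
base_model = ["class Order"]
fragments = {"BILLING": ["class Invoice", "assoc Order->Invoice"]}

def compose(base, fragments, features):
    result = list(base)
    for feature, fragment in fragments.items():
        if feature in features:
            result.extend(fragment)
    return result

# Both derivations yield the same billing variant of the toy model.
print(derive(annotated_model, {"BILLING"}))
print(compose(base_model, fragments, {"BILLING"}))
```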
Incremental software product line verification - A performance analysis with dead variable code
Verification approaches for Software Product Lines (SPLs) aim at detecting variability-related defects and inconsistencies. In general, these analyses take a significant amount of time to provide complete results for an entire, complex SPL. If the SPL evolves, these results potentially become invalid, requiring a time-consuming re-verification of the entire SPL for each increment. However, in previous work we showed that variability-related changes occur rather infrequently and typically affect only small parts of an SPL. In this paper, we exploit this observation and present an incremental dead variable code analysis as an example of incremental SPL verification that achieves significant performance improvements. It explicitly considers changes and partially updates its previous results by re-verifying changed artifacts only. We apply this approach to the Linux kernel, demonstrating that our fastest incremental strategy takes only 3.20 seconds or less for most changes, while the non-incremental approach takes a median of 1,020 seconds. We also discuss the impact of different variants of our strategy on the overall performance, providing insights into which optimizations are worthwhile.
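The core incremental idea, caching per-artifact results and re-verifying only what changed, can be sketched as follows. The `verify_artifact` stub and the hash-based change detection are illustrative stand-ins; the paper's analysis additionally tracks variability information (e.g., which configuration options an artifact depends on), which this sketch omits.

```python
# Minimal sketch of incremental verification: reuse cached results for
# artifacts whose content has not changed since the last run.
import hashlib

cache = {}  # path -> (content_hash, result)

def verify_artifact(path, content):
    # Placeholder for the real per-artifact analysis.
    return "dead code" not in content

def incremental_verify(artifacts):
    """artifacts: dict mapping path -> current content."""
    results = {}
    for path, content in artifacts.items():
        digest = hashlib.sha256(content.encode()).hexdigest()
        cached = cache.get(path)
        if cached and cached[0] == digest:
            results[path] = cached[1]        # unchanged: reuse old result
        else:
            result = verify_artifact(path, content)
            cache[path] = (digest, result)   # changed or new: re-verify
            results[path] = result
    return results
```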
Tackling Combinatorial Explosion: A Study of Industrial Needs and Practices for Analyzing Highly Configurable Systems
Highly configurable systems are complex pieces of software. To tackle this complexity, hundreds of dedicated analysis techniques have been conceived, many of which are able to analyze system properties for all possible system configurations, as opposed to traditional, single-system analyses. Unfortunately, it is largely unknown whether these techniques are adopted in practice, whether they address actual needs, or what strategies practitioners actually apply to analyze highly configurable systems. We present a study of analysis practices and needs in industry, based on a survey of 27 practitioners engineering highly configurable systems and follow-up interviews with 15 of them, covering 18 different companies from eight countries. We confirm that typical properties considered in the literature (e.g., reliability) are relevant and that consistency between variability models and artifacts is critical, but also that the majority of analyses of configuration-option specifications (a.k.a. variability model analysis) are not perceived as needed. We identified rather pragmatic analysis strategies, including practices that avoid the need for analysis altogether. For instance, testing with experience-based sampling is the most commonly applied strategy, while systematic sampling is rarely applicable. We discuss analyses that are missing and synthesize our insights into suggestions for future research.
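For contrast with experience-based sampling, the sketch below shows a naive greedy form of systematic 2-wise (pairwise) sampling over boolean options: it repeatedly picks the configuration covering the most still-uncovered option-value pairs. It ignores constraints between options and brute-forces all 2^n configurations, so it only works for a handful of options; handling constraints at realistic scale is precisely what makes systematic sampling rarely applicable in practice.

```python
# Toy greedy pairwise sampler over unconstrained boolean options.
# Exponential brute force: for illustration only.
from itertools import combinations, product

def pairwise_sample(options):
    uncovered = {
        (a, va, b, vb)
        for a, b in combinations(options, 2)
        for va, vb in product([False, True], repeat=2)
    }
    configs = []
    while uncovered:
        # Pick the configuration covering the most uncovered pairs.
        best = max(
            (dict(zip(options, vals)) for vals in product([False, True], repeat=len(options))),
            key=lambda c: sum((a, c[a], b, c[b]) in uncovered
                              for a, b in combinations(options, 2)),
        )
        configs.append(best)
        uncovered -= {(a, best[a], b, best[b]) for a, b in combinations(options, 2)}
    return configs

# A few configurations suffice to cover all pairs of three options.
print(pairwise_sample(["FEATURE_A", "FEATURE_B", "FEATURE_C"]))
```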
Software Product System Model: A Customer-Value Oriented, Adaptable, DevOps-Based Product Model
DevOps pipelines have brought notable advantages, such as fast and frequent software delivery, to software production paradigms, but dynamically handling the quality attributes desired by the customer within a DevOps pipeline remains a challenge. This work defines the design of a systems-thinking-inspired model, called the Software Product System Model (SPSM), which applies a customer-value-oriented, holistic approach to implementing quality requirements, and reports its application and evaluation in a large software house. Its main features include dynamic control of quality gates whose parameters are driven by customer requirements and feedback from surveys; all inputs are collected in a product backlog and fed forward to the quality gates along the DevOps pipeline. SPSM was successfully deployed in a large software house, extending a DevOps pipeline and improving customer-value-oriented key performance indicators for projects. In a 2-year case study, security and code quality were the main quality attributes, measured via security vulnerabilities and unit test coverage. By the end of 2020, the DevOps pipeline within SPSM had produced a 69.50% decrease in security vulnerabilities across all software products and a 29.43% increase in unit test coverage over the whole code base. At the end of 2020, the project completion ratio was 99.50% and the average Schedule Performance Index (SPI) across the 762 delivered projects was 99.78%. The flexibility of SPSM allowed the software house to adapt to changing customer expectations. A checklist is provided to support replication of the model's application.
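A minimal sketch of the dynamic quality-gate idea follows: thresholds arrive as parameters fed from the product backlog (customer requirements and survey feedback) rather than being hard-coded in the pipeline. The gate names, metrics, and threshold values below are invented for illustration and are not SPSM's actual parameters.

```python
# Sketch of customer-driven quality gates in a delivery pipeline.
from dataclasses import dataclass

@dataclass
class QualityGate:
    metric: str
    threshold: float
    higher_is_better: bool

    def passes(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold

# Gate parameters would be fed from the product backlog / customer surveys.
gates = [
    QualityGate("unit_test_coverage", 0.80, higher_is_better=True),
    QualityGate("open_security_vulnerabilities", 0, higher_is_better=False),
]

build_metrics = {"unit_test_coverage": 0.83, "open_security_vulnerabilities": 2}

failed = [g.metric for g in gates if not g.passes(build_metrics[g.metric])]
if failed:
    raise SystemExit(f"Quality gate(s) failed: {failed}")  # block the release
print("All quality gates passed; release may proceed.")
```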