Catalogue Search | MBRL
51 result(s) for "Chechik, Marsha"
Formal reasoning for analyzing goal models that evolve over time
2021
In early-phase requirements engineering, modeling stakeholder goals and intentions helps stakeholders understand the problem context and evaluate tradeoffs, by exploring possible “what if” questions. Prior research allows modelers to make evaluation assignments to desired goals and generate possible selections for task and dependency alternatives, but this treats models as static snapshots, where the evaluation of the fulfillment of an intention remains constant once it has been determined. Using these techniques, stakeholders are unable to reason about possible evolutions, leaving questions about project viability unanswered when the fulfillment of goals or availability of components is not guaranteed in the future. In this article, we formalize the Evolving Intentions framework for specifying, modeling, and reasoning about goals that change over time. Using the Tropos language, we specify a set of functions that define how intentions and relationships evolve, and use path-based analysis for asking a variety of “what if” questions about such changes. We illustrate the framework using the Bike Lanes example and prove correctness of the analysis. Finally, we demonstrate scalability and effectiveness, enabling stakeholders to explore model evolution.
Journal Article
Reconstructing the past: the case of the Spadina Expressway
2020
In order to build resilient systems that can be operational for a long time, it is important that analysts are able to model the evolution of the requirements of that system. The Evolving Intentions framework models how stakeholders’ goals change over time. In this work, our aim is to validate applicability and effectiveness of this technique on a substantial case. In the absence of ground truth about future evolutions, we used historical data and rational reconstruction to understand how a project evolved in the past. Seeking a well-documented project with varying stakeholder intentions over a substantial period of time, we selected requirements of the Toronto Spadina Expressway. In this paper, we report on the experience and the results of modeling this project over different time periods, which enabled us to assess the modeling and reasoning capabilities of the approach, its support for asking and answering ‘what if’ questions, and the maturity of the underlying tool support. We also demonstrate a novel process for creating time-based models through the construction and merging of scenarios.
Journal Article
Heterogeneous megamodel management using collection operators
2020
Model management techniques help tame the complexity caused by the many models used in large-scale software development; however, these techniques have focused on operators to manipulate individual models rather than entire collections of them. In this work, we begin to address this gap by adapting the widely used map, reduce and filter collection operators for collections of models represented by megamodels. Key parts of this adaptation include the special handling of relationships between models and the use of polymorphism to support heterogeneous model collections. We evaluate the complexity of our operators analytically and demonstrate their applicability on six diverse megamodel management scenarios. We describe our tool support for the approach and evaluate its scalability experimentally as well as its applicability on a practical application from the automotive domain.
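The adapted collection operators described above can be illustrated with a minimal sketch. This is a hypothetical, simplified rendering (the `Megamodel`, `mm_filter`, `mm_map`, and `mm_reduce` names are illustrative, not the paper's API); its one distinctive point is that filtering must also handle relationships between models, dropping any relationship whose endpoint models are removed.

```python
from dataclasses import dataclass, field
from functools import reduce

# Hypothetical minimal megamodel: a set of models plus named relationships.
@dataclass
class Model:
    name: str
    elements: int  # stand-in for model size

@dataclass
class Megamodel:
    models: list[Model]
    relations: list[tuple[str, str]] = field(default_factory=list)

def mm_filter(mm: Megamodel, pred) -> Megamodel:
    """Keep models satisfying pred; drop relationships with a missing endpoint."""
    kept = [m for m in mm.models if pred(m)]
    names = {m.name for m in kept}
    rels = [(s, t) for (s, t) in mm.relations if s in names and t in names]
    return Megamodel(kept, rels)

def mm_map(mm: Megamodel, fn) -> Megamodel:
    """Apply fn to every model, preserving relationships by name."""
    return Megamodel([fn(m) for m in mm.models], list(mm.relations))

def mm_reduce(mm: Megamodel, fn, init):
    """Fold over the model collection."""
    return reduce(fn, mm.models, init)

mm = Megamodel(
    [Model("requirements", 40), Model("design", 120), Model("tests", 75)],
    [("requirements", "design"), ("design", "tests")],
)
small = mm_filter(mm, lambda m: m.elements < 100)
total = mm_reduce(mm, lambda acc, m: acc + m.elements, 0)
print([m.name for m in small.models], small.relations, total)
```

Filtering out `design` removes both relationships, since each had `design` as an endpoint; heterogeneity would be handled by subclassing `Model` and dispatching polymorphically in `fn` and `pred`.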
Journal Article
Managing design-time uncertainty
by Famelis, Michalis; Chechik, Marsha
in Compilers; Computer Science; Information Systems Applications (incl. Internet)
2019
Managing design-time uncertainty, i.e., uncertainty that developers have about making design decisions, requires creation of "uncertainty-aware" software engineering methodologies. In this paper, we propose a methodological approach for managing uncertainty using partial models. To this end, we identify the stages in the lifecycle of uncertainty-related design decisions and characterize the tasks needed to manage it. We encode this information in the Design-Time Uncertainty Management (DeTUM) model. We then use the DeTUM model to create a coherent, tool-supported methodology centred around partial model management. We demonstrate the effectiveness and feasibility of our methodology through case studies.
Journal Article
The ForeMoSt approach to building valid model-based safety arguments
2023
Safety assurance cases (ACs) are structured arguments designed to comprehensively show that a system is safe. ACs are often model-based, meaning that a model of the system is a primary subject of the argument. ACs use reasoning steps called strategies to decompose high-level claims about system safety into refined subclaims that can be directly supported by evidence. Strategies are often informal and difficult to rigorously evaluate in practice, and consequently, AC arguments often contain reasoning errors. This has led to the deployment of unsafe systems and caused severe real-world consequences. These errors can be mitigated by formalizing and verifying AC strategies using formal methods; however, these techniques are difficult to use without formal-methods expertise. To mitigate potential challenges faced by engineers when developing and interpreting formal ACs, we present ForeMoSt, our tool-supported framework for rigorously validating AC strategies using the Lean theorem prover. The goal of the framework is to straddle the levels of abstraction used by the theorem prover and by software engineers. We use case studies from the literature to demonstrate that ForeMoSt is able to (i) augment and validate ACs from the research literature, (ii) support AC development for systems with large models, and (iii) support different model types.
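To give a flavour of what validating a strategy in Lean looks like, the following is a hypothetical toy sketch (not ForeMoSt's actual encoding): a decomposition strategy is sound exactly when the conjunction of its subclaims, together with a justification that composition preserves safety, entails the parent claim.

```lean
-- Hypothetical sketch: an AC decomposition strategy as a Lean theorem.
-- A and B are subclaims; `compose` justifies that together they entail
-- the parent safety claim. The strategy is valid iff this theorem checks.
theorem decompose {A B SystemSafe : Prop}
    (hA : A) (hB : B) (compose : A → B → SystemSafe) : SystemSafe :=
  compose hA hB
```

An informal strategy that silently drops a subclaim or assumes an unstated premise would fail to type-check, which is the kind of reasoning error such formalization catches.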
Journal Article
Cloned product variants: from ad-hoc to managed software product lines
2015
We focus on the problem of managing a collection of related software product variants realized via cloning. By analyzing three industrial case studies of organizations with cloned product lines, we conclude that an efficient management of clones relies on both refactoring cloned variants into a single-copy product line representation and improving development experience when maintaining existing clones. We propose a framework that consists of seven conceptual operators for cloned product line management and show that these operators are adequate to realize development activities we observed in the analyzed case studies. We discuss options for implementing the operators and benefits of the operator-based view.
Journal Article
Synthesis of Partial Behavior Models from Properties and Scenarios
2009
Synthesis of behavior models from software development artifacts such as scenario-based descriptions or requirements specifications helps reduce the effort of model construction. However, the models favored by existing synthesis approaches are not sufficiently expressive to describe both universal constraints provided by requirements and existential statements provided by scenarios. In this paper, we propose a novel synthesis technique that constructs behavior models in the form of modal transition systems (MTS) from a combination of safety properties and scenarios. MTSs distinguish required, possible, and proscribed behavior, and their elaboration not only guarantees the preservation of the properties and scenarios used for synthesis but also supports further elicitation of new requirements.
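The distinction an MTS draws can be shown with a hypothetical toy implementation (the class and method names below are illustrative, not from the paper): transitions are either required (must) or merely possible (may), and any behaviour outside the may-relation is proscribed.

```python
# Hypothetical toy modal transition system (MTS): "must" transitions are
# required, "may" transitions (a superset) are possible, and anything
# absent from "may" is proscribed.
class MTS:
    def __init__(self):
        self.must = {}  # (state, action) -> next state
        self.may = {}   # includes every must transition

    def add(self, s, a, t, required=False):
        self.may[(s, a)] = t
        if required:
            self.must[(s, a)] = t

    def classify(self, trace, start=0):
        """Return 'required', 'possible', or 'proscribed' for an action trace."""
        s, required = start, True
        for a in trace:
            if (s, a) not in self.may:
                return "proscribed"
            required = required and (s, a) in self.must
            s = self.may[(s, a)]
        return "required" if required else "possible"

m = MTS()
m.add(0, "request", 1, required=True)  # mandated by a safety property
m.add(1, "grant", 2, required=True)
m.add(1, "deny", 0)                    # allowed by a scenario, not mandated
print(m.classify(["request", "grant"]))  # -> required
print(m.classify(["request", "deny"]))   # -> possible
print(m.classify(["grant"]))             # -> proscribed
```

Elaboration then refines the model by promoting possible transitions to required or removing them, without ever violating the properties and scenarios used for synthesis.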
Journal Article
Precise semantic history slicing through dynamic delta refinement
by Gligoric, Milos; Rubin, Julia; Zhu, Chenguang
in Artificial Intelligence; Computer Science; Empirical analysis
2019
Semantic history slicing solves the problem of extracting changes related to a particular high-level functionality from software version histories. State-of-the-art techniques combine static program analysis and dynamic execution tracing to infer an over-approximated set of changes that can preserve the functional behaviors captured by a test suite. However, due to the conservative nature of such techniques, the sliced histories may contain irrelevant changes. In this paper, we propose a divide-and-conquer-style partitioning approach enhanced by dynamic delta refinement to produce much smaller semantic history slices. We utilize deltas in dynamic invariants generated from successive test executions to learn significance of changes with respect to the target functionality. Additionally, we introduce a file-level commit splitting technique for untangling unrelated changes introduced in a single commit. Empirical results indicate that these measurements accurately rank changes according to their relevance to the desired test behaviors and thus partition history slices in an efficient and effective manner.
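The core idea of shrinking a history slice while preserving test behaviour can be sketched as follows. This is a deliberately simplified greedy reduction, not the paper's partitioning-with-delta-refinement algorithm; `slice_history` and `passes` are hypothetical names.

```python
def slice_history(changes, passes):
    """Greedily drop each change whose removal keeps the test suite passing,
    returning a smaller subset that still preserves the target behaviour.
    (Simplified stand-in for divide-and-conquer slicing with delta refinement.)"""
    kept = list(changes)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]
        if passes(candidate):
            kept = candidate  # change i was irrelevant; discard it
        else:
            i += 1            # change i is needed; keep it
    return kept

# Toy history: the target functionality depends only on changes 2 and 5.
changes = list(range(1, 7))
def passes(subset):
    return 2 in subset and 5 in subset

sliced = slice_history(changes, passes)
print(sliced)  # -> [2, 5]
```

The dynamic-invariant deltas in the paper serve to rank changes by likely relevance, so the search tries to discard the least relevant ones first instead of scanning blindly as this sketch does.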
Journal Article
Applying declarative analysis to industrial automotive software product line models
by S, Ramesh; Shahin, Ramy; Toledo, Rafael
in Engineers; Product lines; Program verification (computers)
2023
Program analysis of automotive software has several unique challenges, including that the code base is ultra large, comprising over a hundred million lines of code running on a single vehicle; the code is structured as a software product line (SPL) for managing a family of related software products from a common set of artifacts; and the analysis results (despite being numerous and despite being variable) need to be presented to the engineer in a way that is manageable. In previous work, we reported on lifting declarative analyses to apply to a software product line, rather than to an individual product variant. This paper reports on milestone results from applying lifted declarative analyses (behaviour alteration, recursion analysis, simplifiable global variable analysis, and two of their variants) to automotive software product lines from General Motors and assessing the scalability of the analyses and the effectiveness of reporting to engineers conditional analysis results (i.e., results conditioned on SPL program variants). We also reflect on some of the lessons learned throughout this project.
Journal Article