Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
616 result(s) for "Runtime"
A taxonomy for classifying runtime verification tools
by Traytel, Dmitriy; Reger, Giles; Krstić, Srđan
in Classification; Computer Science; Software Engineering
2021
Over the last 20 years, runtime verification (RV) has grown into a diverse and active field, which has stimulated the development of numerous theoretical frameworks and practical tools. Many of the tools are at first sight very different and challenging to compare. Yet, there are similarities. In this work, we classify RV tools within a high-level taxonomy of concepts. We first present this taxonomy and discuss its different dimensions. Then, we survey the existing RV tools and, where possible with the support of tool authors, classify them according to the taxonomy. While the classification continually evolves, this article presents a snapshot with 60 state-of-the-art RV tools. We believe that this work is an important step in establishing a common terminology in RV and enabling a meaningful comparison of existing RV tools.
Journal Article
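By way of illustration, here is a minimal Python sketch of classifying tools along a few taxonomy-style dimensions; the dimensions and tool names below are made up and are not the article's actual taxonomy.

```python
# A minimal sketch (not the authors' actual taxonomy) of classifying RV tools
# along a few illustrative dimensions, using plain dataclasses.
from dataclasses import dataclass

@dataclass
class RVToolProfile:
    name: str
    specification_language: str   # e.g. "LTL", "stream-based", "regular expressions"
    monitoring_mode: str          # "online" or "offline"
    instrumentation: str          # e.g. "AspectJ", "manual logging"
    data_support: bool            # can the tool reason about data values?

def group_by(tools, dimension):
    """Group tool profiles by one taxonomy dimension."""
    groups = {}
    for tool in tools:
        groups.setdefault(getattr(tool, dimension), []).append(tool.name)
    return groups

tools = [
    RVToolProfile("MonitorA", "LTL", "online", "AspectJ", False),
    RVToolProfile("MonitorB", "stream-based", "offline", "manual logging", True),
]
print(group_by(tools, "monitoring_mode"))
```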
The (black) art of runtime evaluation: Are we comparing algorithms or implementations?
2017
Any paper proposing a new algorithm should come with an evaluation of efficiency and scalability (particularly when we are designing methods for “big data”). However, there are several (more or less serious) pitfalls in such evaluations. We would like to draw the community's attention to these pitfalls. We substantiate our points with extensive experiments, using clustering and outlier detection methods with and without index acceleration. We discuss what we can learn from evaluations, whether experiments are properly designed, and what kind of conclusions we should avoid. We close with some general recommendations but maintain that the design of fair and conclusive experiments will always remain a challenge for researchers and an integral part of the scientific endeavor.
Journal Article
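As an illustration of the kind of comparison being discussed, here is a minimal sketch of a repeated-measurement timing harness for an unindexed versus an index-accelerated lookup; the workload and both functions are placeholders, not the paper's experiments.

```python
# A minimal sketch of a (naively) fair runtime comparison: repeat each run,
# report the median, and keep the data identical across implementations.
# The two "implementations" below are placeholders for illustration only.
import bisect
import random
import statistics
import time

def linear_scan(data, target):
    return [i for i, x in enumerate(data) if x == target]

def sorted_index_lookup(sorted_data, target):
    lo = bisect.bisect_left(sorted_data, target)
    hi = bisect.bisect_right(sorted_data, target)
    return list(range(lo, hi))

def time_runs(fn, *args, repeats=7):
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

data = [random.randint(0, 1000) for _ in range(200_000)]
sorted_data = sorted(data)
print("linear scan: ", time_runs(linear_scan, data, 42))
print("index lookup:", time_runs(sorted_index_lookup, sorted_data, 42))
```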
Incremental execution of temporal graph queries over runtime models with history and its applications
by Ghahremani, Sona; Giese, Holger; Sakizloglou, Lucas
in Applications programs; Decision making; Graphical representations
2022
Modern software systems are intricate and operate in highly dynamic environments for which few assumptions can be made at design-time. This setting has sparked an interest in solutions that use a runtime model which reflects the system state and operational context to monitor and adapt the system in reaction to changes during its runtime. Few solutions focus on the evolution of the model over time, i.e., its history, although history is required for monitoring temporal behaviors and may enable more informed decision-making. One reason is that handling the history of a runtime model poses an important technical challenge, as it requires tracing a part of the model over multiple model snapshots in a timely manner. Additionally, the runtime setting calls for memory-efficient measures to store and check these snapshots. Following the common practice of representing a runtime model as a typed attributed graph, we introduce a language which supports the formulation of temporal graph queries, i.e., queries on the ordering and timing in which structural changes in the history of a runtime model occurred. We present a querying scheme for the execution of temporal graph queries over history-aware runtime models. Features such as temporal logic operators in queries, the incremental execution, the option to discard history that is no longer relevant to queries, and the in-memory storage of the model, distinguish our scheme from relevant solutions. By incorporating temporal operators, temporal graph queries can be used for runtime monitoring of temporal logic formulas. Building on this capability, we present an implementation of the scheme that is evaluated for runtime querying, monitoring, and adaptation scenarios from two application domains.
Journal Article
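A minimal sketch of the underlying idea of a temporal query over a history of model snapshots follows; the model, query, and deadline are hypothetical, and the paper's query language and incremental execution are not reproduced.

```python
# A minimal sketch (hypothetical model and query, not the paper's language) of a
# temporal query over timestamped snapshots of a runtime model: "every Task node
# is linked to a Monitor node within 5 time units of appearing".
from dataclasses import dataclass

@dataclass
class Snapshot:
    time: float
    nodes: dict      # node id -> node type
    edges: set       # set of (source id, target id) pairs

def tasks_monitored_within(snapshots, deadline=5.0):
    first_seen = {}
    satisfied = set()
    for snap in sorted(snapshots, key=lambda s: s.time):
        for node, kind in snap.nodes.items():
            if kind == "Task":
                first_seen.setdefault(node, snap.time)
        for src, dst in snap.edges:
            if snap.nodes.get(src) == "Task" and snap.nodes.get(dst) == "Monitor":
                if snap.time - first_seen.get(src, snap.time) <= deadline:
                    satisfied.add(src)
    return all(task in satisfied for task in first_seen)

history = [
    Snapshot(0.0, {"t1": "Task"}, set()),
    Snapshot(3.0, {"t1": "Task", "m1": "Monitor"}, {("t1", "m1")}),
]
print(tasks_monitored_within(history))  # True: t1 is monitored after 3 time units
```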
Reliable Task Management Based on a Smart Contract for Runtime Verification of Sensing and Actuating Tasks in IoT Environments
2020
With the gradual popularization of Internet-of-Things (IoT) applications and the development of wireless networking technologies, the use of heterogeneous devices and runtime verification of task fulfillment with different constraints are required in real-world IoT scenarios. As far as IoT systems are concerned, most of them are built on centralized architectures, which reveal various assailable points in data security and privacy threats. Hence, this paper aims to investigate these issues by delegating the responsibility of a verification monitor from a centralized architecture to a decentralized one using blockchain technology. We present a smart contract-based task management scheme to provide runtime verification of device behaviors and to allow trustworthy access control to these devices. The business logic of the proposed system is specified by the smart contract, which automates all time-consuming processes cryptographically and correctly. The usability of the proposed solution is further demonstrated by implementing a prototype application in which Hyperledger Fabric is utilized to implement the business logic for runtime verification and access control with one desktop and one Raspberry Pi. A comprehensive evaluation experiment is conducted, and the results indicate the effectiveness and efficiency of the proposed system.
Journal Article
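Here is a minimal, off-chain sketch of the kind of task-verification business logic such a scheme encodes in a smart contract; the data model and function names are hypothetical, and the paper's actual Hyperledger Fabric chaincode is not shown.

```python
# A minimal, chain-free sketch of runtime task verification: register a task with
# constraints, then verify a device's completion report against them.
# (Hypothetical data model; not the paper's Hyperledger Fabric chaincode.)
import time

ledger = {}  # stand-in for the world state: task id -> record

def register_task(task_id, device_id, deadline_s, expected_action):
    ledger[task_id] = {
        "device": device_id,
        "deadline": time.time() + deadline_s,
        "expected": expected_action,
        "status": "PENDING",
    }

def report_completion(task_id, device_id, action):
    task = ledger[task_id]
    if device_id != task["device"]:
        task["status"] = "REJECTED_UNAUTHORIZED"   # access control check
    elif time.time() > task["deadline"]:
        task["status"] = "FAILED_DEADLINE"         # timing constraint check
    elif action != task["expected"]:
        task["status"] = "FAILED_WRONG_ACTION"     # behavior check
    else:
        task["status"] = "VERIFIED"
    return task["status"]

register_task("task-1", "sensor-42", deadline_s=10, expected_action="read_temperature")
print(report_completion("task-1", "sensor-42", "read_temperature"))  # VERIFIED
```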
Comparison of Short-Read Sequence Aligners Indicates Strengths and Weaknesses for Biologists to Consider
by Cadle-Davidson, Lance; Musich, Ryan; Osier, Michael V.
in Accuracy; Airborne microorganisms; Alignment
2021
Aligning short-read sequences is the foundational step to most genomic and transcriptomic analyses, but not all tools perform equally, and choosing among the growing body of available tools can be daunting. Here, in order to increase awareness in the research community, we discuss the merits of common algorithms and programs in a way that should be approachable to biologists with limited experience in bioinformatics. We will only in passing consider the effects of data cleanup, a precursor analysis to most alignment tools, and no consideration will be given to downstream processing of the aligned fragments. To compare aligners [Bowtie2, Burrows Wheeler Aligner (BWA), HISAT2, MUMmer4, STAR, and TopHat2], an RNA-seq dataset was used containing data from 48 geographically distinct samples of the grapevine powdery mildew fungus Erysiphe necator. Based on alignment rate and gene coverage, all aligners performed well with the exception of TopHat2, which HISAT2 superseded. BWA perhaps had the best performance in these metrics, except for longer transcripts (>500 bp), for which HISAT2 and STAR performed well. HISAT2 was ~3-fold faster than the next fastest aligner in runtime, which we consider a secondary factor in most alignments. In the end, this direct comparison of commonly used aligners illustrates key considerations when choosing which tool to use for the specific sequencing data and objectives. No single tool meets all needs for every user, and there are many quality aligners available.
Journal Article
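A minimal sketch of computing one of the metrics compared here, the alignment rate, from an aligner's BAM output; it assumes the pysam package is installed, and the file name is a placeholder.

```python
# A minimal sketch of the alignment-rate metric over a BAM file produced by any
# of the aligners. Assumes pysam is installed; "aligned_reads.bam" is a placeholder.
import pysam

def alignment_rate(bam_path):
    total = 0
    aligned = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(until_eof=True):
            if read.is_secondary or read.is_supplementary:
                continue  # count each read once
            total += 1
            if not read.is_unmapped:
                aligned += 1
    return aligned / total if total else 0.0

print(f"alignment rate: {alignment_rate('aligned_reads.bam'):.2%}")
```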
An adaptive, provable correct simplex architecture
2025
Simplex architectures optimize performance and safety by switching between an advanced controller and a base controller. We propose an approach to synthesize the switching logic and extensions of the base controller in the Simplex architecture to achieve high performance and provable correctness for a rich class of temporal specifications by maximizing the time the advanced controller is active. We achieve provable correctness by performing static verification of the baseline controller. The result of this verification is a set of states that is proven to be safe, called the recoverable region. We employ proofs on demand to ensure that the base controller is safe in those states that are visited during runtime, which depends on the advanced controller. Verification of hybrid systems is often overly conservative, resulting in smaller recoverable regions that cause unnecessary switches to the baseline controller. To avoid these switches, we invoke targeted reachability queries to extend the recoverable region at runtime. In case the recoverable region cannot be extended using the baseline controller, we employ a repair procedure, which tries to synthesize a patch for the baseline controller and can further extend the recoverable region. Our offline and online verification relies upon reachability analysis, since it allows observation-based extension of the known recoverable region. We implemented our methodology on top of the state-of-the-art tool HyPro, which allowed us to automatically synthesize verified and performant Simplex architectures for advanced case studies, like safe autonomous driving on a race track.
Journal Article
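A minimal sketch of the basic Simplex switching idea described above follows; the toy controllers, dynamics, and recoverable region are placeholders, not the synthesized or verified artifacts from the paper.

```python
# A minimal sketch of Simplex switching: run the advanced controller as long as
# the predicted next state stays inside the known recoverable region, otherwise
# fall back to the base controller. Everything below is a toy placeholder.
def simplex_step(state, advanced, base, recoverable, predict):
    proposed = advanced(state)
    if predict(state, proposed) in recoverable:
        return proposed, "advanced"
    return base(state), "base"

# Toy 1-D example: keep the position within [-10, 10].
recoverable = range(-10, 11)
advanced = lambda x: 3                    # aggressive command
base = lambda x: -1 if x > 0 else 1       # conservative command toward the origin
predict = lambda x, u: x + u              # trivial one-step dynamics

state = 8
for _ in range(3):
    command, mode = simplex_step(state, advanced, base, recoverable, predict)
    state = predict(state, command)
    print(state, mode)
```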
Stream runtime verification of real-time event streams with the Striver language
2021
In this paper, we study the problem of runtime verification of real-time event streams; in particular, we propose a language to describe monitors for real-time event streams that can manipulate data from rich domains. We propose a solution based on stream runtime verification (SRV), where monitors are specified by describing how output streams of data are computed from input streams of data. SRV enables a clean separation between the temporal dependencies among incoming events and the concrete operations that are performed during the monitoring. Most SRV specification languages assume that all streams share a global synchronous clock and divide time into discrete instants. At each instant every input has a reading, and for every instant the monitor computes an output. In this paper, we generalize the time assumption to cover real-time event streams, but keep the explicit time offsets present in some synchronous SRV languages like Lola. The language we introduce, called Striver, shares with SRV the simplicity and economy of operators, and the separation between the reasoning about time and the computation of data values. The version of Striver in this paper allows expressing future and past dependencies. Striver is a general language that, for certain time domains, can express other real-time monitoring languages, like TeSSLa, and temporal logics, like STL. We show in this paper translations from other formalisms for (piecewise-constant) real-time signals and timed event streams. Finally, we report an empirical evaluation of an implementation of Striver.
Journal Article
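A minimal sketch of the stream runtime verification idea, with output values computed from timed input streams, is given below; this is plain Python rather than Striver syntax, and the streams are invented for illustration.

```python
# A minimal sketch of the SRV idea (not Striver syntax): input streams are
# sequences of (timestamp, value) events, and an output stream is computed from
# them. Here the output records, at every temperature event, whether the reading
# exceeded the most recently configured threshold.
def latest_before(stream, t):
    """Value of the last event in `stream` with timestamp <= t, or None."""
    value = None
    for ts, v in stream:
        if ts <= t:
            value = v
        else:
            break
    return value

def over_threshold(temperature, threshold):
    out = []
    for ts, temp in temperature:
        limit = latest_before(threshold, ts)
        out.append((ts, limit is not None and temp > limit))
    return out

temperature = [(0.5, 20.0), (1.7, 31.0), (4.2, 28.0)]   # timed input stream
threshold = [(0.0, 25.0), (3.0, 30.0)]                  # timed input stream
print(over_threshold(temperature, threshold))
# [(0.5, False), (1.7, True), (4.2, False)]
```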
An Efficient Backward/Forward Sweep Algorithm for Power Flow Analysis through a Novel Tree-Like Structure for Unbalanced Distribution Networks
by Petridis, Stefanos; Voutetakis, Spyros; Stergiopoulos, Fotis
in backward/forward sweep; breadth first search; data structures
2021
The increase of distributed energy resources (DERs) in low voltage (LV) distribution networks requires the ability to perform an accurate power flow analysis (PFA) in unbalanced systems. The characteristics of a well performing power flow algorithm are the production of accurate results, robustness and quick convergence. The current study proposes an improvement to an already used backward/forward sweep (BFS) power flow algorithm for unbalanced three-phase distribution networks. The proposed power flow algorithm can be implemented in large systems, producing accurate results in a small amount of time while using as few computational resources as possible. In this version of the algorithm, the network is represented in a tree-like structure, instead of an incidence matrix, avoiding redundant computations and the storing of unnecessary data. An implementation of the method was developed in the Python programming language and tested on 3 IEEE feeder test cases (the 4 bus feeder, the 13 bus feeder and the European Low Voltage test feeder), ranging from a low (4) to a very high (907) number of buses, while including a wide variety of components witnessed in LV distribution networks.
Journal Article
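A minimal single-phase sketch of a backward/forward sweep over a radial feeder stored as a tree follows; the 3-bus data is invented for illustration, and the paper's unbalanced three-phase formulation is not reproduced.

```python
# A minimal single-phase backward/forward sweep on a radial feeder stored as a
# parent/children tree. The feeder data is made up; the paper's unbalanced
# three-phase model is not shown.
import cmath

# Bus 0 is the slack bus; per bus: parent, branch impedance, complex load.
parent    = {1: 0, 2: 1}
impedance = {1: 0.02 + 0.04j, 2: 0.03 + 0.05j}      # per unit, illustrative
load      = {1: 0.5 + 0.2j, 2: 0.8 + 0.3j}           # per unit, illustrative
children  = {0: [1], 1: [2], 2: []}
order     = [0, 1, 2]                                # root-to-leaf (BFS) order

V = {bus: 1.0 + 0.0j for bus in order}               # flat start, slack held at 1.0

for _ in range(20):
    # Backward sweep: accumulate branch currents from the leaves toward the root.
    current = {bus: (load[bus] / V[bus]).conjugate() for bus in order if bus != 0}
    for bus in reversed(order):
        if bus == 0:
            continue
        for child in children[bus]:
            current[bus] += current[child]
    # Forward sweep: propagate voltage drops from the root toward the leaves.
    V_new = {0: V[0]}
    for bus in order[1:]:
        V_new[bus] = V_new[parent[bus]] - impedance[bus] * current[bus]
    converged = max(abs(V_new[b] - V[b]) for b in order) < 1e-8
    V = V_new
    if converged:
        break

for bus in order:
    print(bus, abs(V[bus]), cmath.phase(V[bus]))
```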
Fast Segmentation and Classification of Very High Resolution Remote Sensing Data Using SLIC Superpixels
2017
Speed and accuracy are important factors when dealing with time-constrained events for disaster, risk, and crisis-management support. Object-based image analysis can be a time-consuming task in extracting information from large images because most of the segmentation algorithms use the pixel grid for the initial object representation. It would be more natural and efficient to work with perceptually meaningful entities that are derived from pixels using a low-level grouping process (superpixels). Firstly, we tested a new workflow for image segmentation of remote sensing data, starting the multiresolution segmentation (MRS, using the ESP2 tool) from the superpixel level and aiming at reducing the amount of time needed to automatically partition relatively large datasets of very high resolution remote sensing data. Secondly, we examined whether a Random Forest classification based on an oversegmentation produced by the Simple Linear Iterative Clustering (SLIC) superpixel algorithm performs similarly, in terms of accuracy, to a traditional object-based classification. Tests were applied on QuickBird and WorldView-2 data with different extents, scene content complexities, and numbers of bands to assess how the computational time and classification accuracy are affected by these factors. The proposed segmentation approach is compared with the traditional one, starting the MRS from the pixel level, regarding geometric accuracy of the objects and the computational time. The computational time was reduced in all cases, the biggest improvement being from 5 h 35 min to 13 min for a WorldView-2 scene with eight bands and an extent of 12.2 million pixels, while the geometric accuracy is kept similar or slightly better. SLIC superpixel-based classification had similar or better overall accuracy values when compared to MRS-based classification, but the results were obtained faster and without the parameterization of the MRS. These two approaches have the potential to enhance the automation of big remote sensing data analysis and processing, especially when time is an important constraint.
Journal Article
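A minimal sketch of the superpixel-based classification step is shown below: SLIC superpixels as classification units, mean band values as features, and a Random Forest classifier. It assumes scikit-image and scikit-learn are installed and uses a sample image and made-up labels in place of real imagery and reference data.

```python
# A minimal sketch of SLIC superpixels feeding a Random Forest classifier.
# The sample image and the made-up training labels are placeholders for a real
# VHR scene and ground-truth reference data.
import numpy as np
from skimage import data
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

image = data.astronaut()                      # stand-in for a VHR satellite scene
segments = slic(image, n_segments=500, compactness=10)

# One feature vector per superpixel: the mean value of each band.
labels = np.unique(segments)
features = np.array([image[segments == s].mean(axis=0) for s in labels])

# Placeholder training labels ("bright" vs "dark"); real labels would come from
# ground-truth polygons overlaid on the superpixels.
training = (features.mean(axis=1) > 100).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, training)
predicted = clf.predict(features)
print(f"{len(labels)} superpixels, {predicted.sum()} assigned to class 1")
```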
First international Competition on Runtime Verification: rules, benchmarks, tools, and final results of CRV 2014
by Bonakdarpour, Borzoo; Klaedtke, Felix; Colombo, Christian
in Benchmarks; Competition; Computer Science
2019
The first international Competition on Runtime Verification (CRV) was held in September 2014, in Toronto, Canada, as a satellite event of the 14th international conference on Runtime Verification (RV’14). The event was organized in three tracks: (1) offline monitoring, (2) online monitoring of C programs, and (3) online monitoring of Java programs. In this paper, we report on the phases and rules, describe the participating teams and their submitted benchmarks, and present the full results as well as the lessons learned from the competition.
Journal Article