Catalogue Search | MBRL
Explore the vast range of titles available.
5,278 result(s) for "Systems software -- Reliability"
Software for dependable systems : sufficient evidence?
by Jackson, Daniel; National Research Council (U.S.). Committee on Certifiably Dependable Software Systems; Thomas, Martyn
in Computer software; Computer software -- Reliability; Reliability
2007
The focus of Software for Dependable Systems is a set of fundamental principles that underlie software system dependability and that suggest a different approach to the development and assessment of dependable software. Unfortunately, it is difficult to assess the dependability of software.
Fault Tolerant Systems
by Koren, Israel; Krishna, C. Mani
in Computer Architecture; Computer Hardware Engineering; Computer system failures
2007, 2010
There are many applications in which the reliability of the overall system must be far higher than the reliability of its individual components. In such cases, designers devise mechanisms and architectures that allow the system either to completely mask the effects of a component failure or to recover from it so quickly that the application is not seriously affected. This is the work of fault-tolerant designers, whose work is increasingly important and complex not only because of the growing number of “mission-critical” applications, but also because the diminishing reliability of hardware means that even systems for non-critical applications will need to be designed with fault tolerance in mind. Reflecting the real-world challenges faced by designers of these systems, this book addresses fault-tolerance design with a systems approach to both hardware and software. No other text on the market takes this approach or offers as comprehensive and up-to-date a treatment as the authors provide. Students, designers, and architects of high-performance processors will value this comprehensive overview of the field.
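A classic fault-masking mechanism of the kind described above is triple modular redundancy (TMR): three replicas compute the same result and a majority voter masks a single faulty output. A minimal sketch, with illustrative replica values not taken from the book:

```python
def tmr_vote(replica_outputs):
    """Majority-vote over three replica outputs, masking one faulty replica."""
    a, b, c = replica_outputs
    if a == b or a == c:
        return a
    if b == c:
        return b
    # All three replicas disagree: a single-fault assumption no longer holds.
    raise RuntimeError("uncorrectable: all replicas disagree")

# One replica returns a corrupted value; the voter masks it.
print(tmr_vote([42, 42, 7]))  # the faulty third replica is outvoted
```

The voter itself becomes a single point of failure, which is why practical designs often replicate the voter as well.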
Bugs in machine learning-based systems: a faultload benchmark
by Nikanjam, Amin; Khomh, Foutse; Jiang, Zhen Ming (Jack)
in Benchmarks; Machine learning; Reliability engineering
2023
The rapid escalation of Machine Learning (ML) applications across various domains has drawn more attention to the quality of ML components, and a growing set of techniques and tools aims to improve that quality and to integrate ML components into ML-based systems safely. Although most of these tools work with the lifecycle of bugs, there is no standard benchmark of bugs with which to assess their performance, compare them, and discuss their advantages and weaknesses. In this study, we first investigate the reproducibility and verifiability of bugs in ML-based systems and identify the most important factors for each. We then explore the challenges of generating a benchmark of bugs in ML-based software systems and provide a bug benchmark, defect4ML, that satisfies all the criteria of a standard benchmark: relevance, reproducibility, fairness, verifiability, and usability. This faultload benchmark contains 100 bugs reported by ML developers on GitHub and Stack Overflow for two of the most popular ML frameworks, TensorFlow and Keras. defect4ML also addresses important challenges in the Software Reliability Engineering of ML-based software systems: 1) fast-changing frameworks, by providing bugs for different framework versions; 2) code portability, by delivering similar bugs in different ML frameworks; 3) bug reproducibility, by providing fully reproducible bugs with complete information about the required dependencies and data; and 4) the lack of detailed information on bugs, by presenting links to the bugs' origins. defect4ML can be of interest to ML-based systems practitioners and researchers who wish to assess their testing tools and techniques.
Journal Article
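The benchmark described above pairs each bug with a pinned framework version, its dependencies, and a link to its origin. A minimal sketch of how such a faultload entry might be represented and filtered; the field names and records below are illustrative assumptions, not defect4ML's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class BugEntry:
    bug_id: str
    framework: str          # e.g. "TensorFlow" or "Keras"
    framework_version: str  # pinned version, for reproducibility
    source_url: str         # link to the bug's origin (GitHub / Stack Overflow)
    dependencies: list = field(default_factory=list)

def by_framework(entries, name):
    """Select the bugs for one framework, e.g. to test a Keras-specific tool."""
    return [e for e in entries if e.framework == name]

# Hypothetical records standing in for real benchmark entries.
entries = [
    BugEntry("ML-001", "Keras", "2.3.1", "https://github.com/..."),
    BugEntry("ML-002", "TensorFlow", "1.15.0", "https://stackoverflow.com/..."),
]
keras_bugs = by_framework(entries, "Keras")
```

Keeping the version and dependency list on each entry is what makes the "fast-changing frameworks" and "bug reproducibility" criteria checkable.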
A new framework of complex system reliability with imperfect maintenance policy
2022
The interactions and dependencies between software and hardware have often been neglected in modeling system reliability over the past few decades due to their mathematical complexity. However, many system failures arise from the interactions or simultaneous failures of software and hardware. This paper first proposes a new diagram for categorizing system-level failures and then incorporates that diagram into the development of a complex system reliability framework. System-level failures result from the software subsystem, the hardware subsystem, and the interactions between the two. The focus of this study is the interaction failures generated by the software and hardware subsystems acting together. In addition to the total hardware failures, software-induced hardware failures, and hardware-induced software failures introduced by Zhu and Pham (Mathematics 7(11):1049, 2019), we further introduce partial hardware failures, induced by hardware and by software respectively, to explicitly demonstrate the dependencies and interactions between software and hardware. A new complex system reliability framework is then developed from this system-level failure categorization using a Markov process. Furthermore, numerical examples illustrate the impact on system reliability of changes in the state transition parameters that model the interactions of the software and hardware subsystems. Finally, we study two maintenance policies for the proposed complex system reliability model.
Journal Article
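The framework above distinguishes software, hardware, and interaction failures and models transitions between states with a Markov process. A minimal sketch of the simplest such model, in which the operational state can be left via any of the three failure modes, each an absorbing state reached at a constant (exponential) rate; the rate values are illustrative assumptions, not figures from the paper:

```python
import math

def reliability(t, lam_sw, lam_hw, lam_int):
    """P(no failure by time t) for a Markov model whose operational state
    is left at constant rates lam_sw (software failure), lam_hw (hardware
    failure), and lam_int (software/hardware interaction failure), all
    absorbing. With competing exponential transitions, the sojourn time in
    the operational state is exponential with the summed rate."""
    return math.exp(-(lam_sw + lam_hw + lam_int) * t)

# Illustrative rates per hour; reliability over a 100 h mission.
r = reliability(100, lam_sw=1e-4, lam_hw=5e-5, lam_int=2e-5)
```

Richer versions of the model add non-absorbing degraded states (e.g. partial hardware failures) and repair transitions for the maintenance policies, which requires solving the full Chapman-Kolmogorov equations rather than this closed form.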
Developing Safety-Critical Software
2013, 2017
As the complexity and criticality of software increase and projects are pressed to develop software faster and more cheaply, it becomes even more important to ensure that software-intensive systems are reliable and safe. This book helps you develop, manage, and approve safety-critical software more efficiently and effectively. Although the focus is on aviation software and compliance with RTCA/DO-178C and its supplements, the principles also apply to other safety-critical software. Written by an international authority on the subject, this book brings you a wealth of best practices, real-world examples, and concrete recommendations.
How to treat uncertainties in life cycle assessment studies?
by Baustert, Paul; Othoniel, Benoit; Igos, Elorri
in Computer programs; Computer simulation; Context
2019
Purpose: The use of life cycle assessment (LCA) as a decision support tool can be hampered by the numerous uncertainties embedded in the calculation. The treatment of uncertainty is necessary to increase the reliability and credibility of LCA results. The objective is to provide an overview of the methods to identify, characterize, propagate (uncertainty analysis), understand the effects of (sensitivity analysis), and communicate uncertainty, in order to propose recommendations to a broad public of LCA practitioners.
Methods: This work was carried out via a literature review and an analysis of LCA tool functionalities. In order to facilitate the identification of uncertainty, its location within an LCA model was distinguished between quantity (any numerical data), model structure (relationships structure), and context (criteria chosen within the goal and scope of the study). The methods for uncertainty characterization, uncertainty analysis, and sensitivity analysis were classified according to the information provided, their implementation in LCA software, the time and effort required to apply them, and their reliability and validity. This review led to the definition of recommendations on three levels: basic (low effort with LCA software), intermediate (significant effort with LCA software), and advanced (significant effort with non-LCA software).
Results and discussion: For the basic recommendations, minimum and maximum values (quantity uncertainty) and alternative scenarios (model structure/context uncertainty) are defined for critical elements in order to estimate the range of results. Result sensitivity is analyzed via one-at-a-time variations (with realistic ranges of quantities) and scenario analyses. Uncertainty should be discussed at least qualitatively in a dedicated paragraph. For the intermediate level, the characterization can be refined with probability distributions and an expert review for scenario definition. Uncertainty analysis can then be performed with the Monte Carlo method for the different scenarios. Quantitative information should appear in inventory tables and result figures. Finally, advanced practitioners can screen uncertainty sources more exhaustively, include correlations, estimate model error with validation data, and perform Latin hypercube sampling and global sensitivity analysis.
Conclusions: Through this pedagogic review of the methods and practical recommendations, the authors aim to increase the knowledge of LCA practitioners related to uncertainty and to facilitate the application of treatment techniques. To continue in this direction, further research questions should be investigated (e.g., the implementation of fuzzy logic and model uncertainty characterization), and the developers of databases, LCIA methods, and software tools should invest effort in better implementing and treating uncertainty in LCA.
Journal Article
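The intermediate-level recommendation above — characterize quantity uncertainties as probability distributions and propagate them with the Monte Carlo method — can be sketched as follows. The one-step impact model and the distributions are illustrative assumptions, not from the review:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is repeatable

def impact(electricity_kwh, emission_factor):
    # Hypothetical one-step impact model: kg CO2-eq = kWh * emission factor.
    return electricity_kwh * emission_factor

samples = []
for _ in range(10_000):
    # Basic level: a min/max range for the quantity (uniform distribution);
    # intermediate level: a refined probability distribution (normal).
    kwh = random.uniform(90, 110)
    factor = random.normalvariate(0.5, 0.05)
    samples.append(impact(kwh, factor))

mean = statistics.mean(samples)
spread = statistics.stdev(samples)
# mean and spread summarize the propagated uncertainty of the LCA result.
```

The resulting distribution, rather than a single point value, is what the review recommends reporting in inventory tables and result figures.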
Salp Swarm Optimizer for Modeling Software Reliability Prediction Problems
by Othman, Zalinda; Abdullah, Salwani; Al-Betar, Mohammed Azmi
in Algorithms; Artificial Intelligence; Artificial neural networks
2021
In this paper, software effort prediction (SEP) and software test prediction (STP) (i.e., software reliability problems) are tackled by integrating the salp swarm algorithm (SSA) with a backpropagation neural network (BPNN). Software effort and test prediction problems are common in software engineering and arise when seeking to determine the actual software resources needed to develop a project. BPNN is the most popular prediction algorithm used in the literature, but its performance depends heavily on the initial parameter values, such as the weights and biases. The main objective of this paper is to integrate SSA with the BPNN to find the optimal weights for every training cycle and thereby improve prediction accuracy. The proposed method, abbreviated as SSA-BPNN, is tested on twelve SEP datasets and two STP datasets, all of which vary in complexity and size. The results obtained by SSA-BPNN are evaluated according to twelve performance measures: MSE, RMSE, RAE, RRSE, MAE, MRE, MMRE, MdMRE, VAF(%), R2(%), ED, and MD. First, the results obtained by BPNN with SSA (i.e., SSA-BPNN) and without SSA are compared; the evaluation indicates that SSA-BPNN performs better than BPNN on all datasets. In the comparative evaluation, the results of SSA-BPNN are compared against thirteen state-of-the-art methods on the same SEP and STP problem datasets, and the evaluation reveals that the proposed method outperforms the comparative methods on almost all datasets, both SEP and STP, for most performance measures. In conclusion, integrating SSA with BPNN is a very powerful approach for solving software reliability problems that can be used widely to yield accurate prediction results.
Journal Article
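The salp swarm algorithm described above can be sketched in simplified form: leaders explore around the best solution found so far with a step size that decays over iterations, while followers drift toward the salp ahead of them in the chain. The toy fitness function below stands in for the BPNN training error and is an illustrative assumption, not the paper's actual objective:

```python
import math
import random

random.seed(1)  # fixed seed so the run is repeatable

def ssa_minimize(fitness, dim=2, pop=20, iters=200, lb=-5.0, ub=5.0):
    """Simplified salp swarm search minimizing `fitness` over [lb, ub]^dim."""
    salps = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    best = min(salps, key=fitness)[:]
    for it in range(1, iters + 1):
        # c1 balances exploration vs. exploitation and decays with iterations.
        c1 = 2 * math.exp(-((4 * it / iters) ** 2))
        for i, s in enumerate(salps):
            if i < pop // 2:  # leaders: random steps around the food source (best)
                for d in range(dim):
                    c2, c3 = random.random(), random.random()
                    step = c1 * ((ub - lb) * c2 + lb)
                    s[d] = best[d] + step if c3 < 0.5 else best[d] - step
                    s[d] = min(max(s[d], lb), ub)  # clamp to the search bounds
            else:             # followers: move halfway toward the salp ahead
                for d in range(dim):
                    s[d] = (s[d] + salps[i - 1][d]) / 2
        cand = min(salps, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand[:]
    return best

# Toy stand-in for a network's training loss, with its minimum at (1, -2);
# in SSA-BPNN the candidate vector would instead hold the network weights.
loss = lambda w: (w[0] - 1) ** 2 + (w[1] + 2) ** 2
w = ssa_minimize(loss)
```

In the paper's setting, each candidate vector encodes the BPNN's weights and biases, and `fitness` is the network's error on the training data.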