Catalogue Search | MBRL
Explore the vast range of titles available.
18,118 result(s) for "Safety critical"
HACCP : a food industry briefing
by Mortimore, Sara; Wallace, Carol
in Hazard Analysis and Critical Control Point (Food safety system); Food adulteration and inspection; Food handling Safety measures
2015
"Readers of this accessible book - now in a revised and updated new edition - are taken on a conceptual journey which passes every milestone and important feature of the HACCP landscape at a pace which is comfortable and productive. The information and ideas contained in the book will enable food industry managers and executives to take their new-found knowledge into the workplace for use in the development and implementation of HACCP systems appropriate for their products and manufacturing processes. The material is structured so that the reader can quickly assimilate the essentials of the topic. Clearly presented, this HACCP briefing includes checklists, bullet points, flow charts, schematic diagrams for quick reference, and at the start of each section the authors have provided useful key-points summary boxes. HACCP: a Food Industry Briefing is an introductory-level text for readers who are unfamiliar with the subject, either because they have never come across it or because they need to be reminded. The book will also make a valuable addition to material used in staff training and is an excellent core text for HACCP courses" -- Provided by publisher.
How to certify machine learning based safety-critical systems? A systematic literature review
by Nikanjam, Amin; Khomh, Foutse; Mindom, Paulina Stevia Nouwou
in Artificial Intelligence; Certification; Computer Science
2022
Context: Machine Learning (ML) has been at the heart of many innovations over the past years. However, including it in so-called "safety-critical" systems such as automotive or aeronautic systems has proven to be very challenging, since the shift in paradigm that ML brings completely changes traditional certification approaches.
Objective: This paper aims to elucidate challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to tackle them, answering the question "How to Certify Machine Learning Based Safety-critical Systems?".
Method: We conducted a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering topics considered to be the main pillars of ML certification: Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning, and Direct Certification. We analyzed the main trends and problems of each sub-field and provided summaries of the papers extracted.
Results: The SLR results highlighted the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and types of ML models. They also emphasized the need to further develop connections between academia and industry to deepen the study of the domain, and illustrated the necessity of building connections between the above-mentioned pillars, which are for now mainly studied separately.
Conclusion: We highlighted current efforts deployed to enable the certification of ML-based software systems and discussed some future research directions.
Journal Article
A linear MPC with control barrier functions for differential drive robots
2024
The need for fully autonomous mobile robots has surged over the past decade, with the imperative of ensuring safe navigation in a dynamic setting emerging as a primary challenge impeding advancements in this domain. In this article, a Safety Critical Model Predictive Control based on Dynamic Feedback Linearization, tailored to differential drive robots with two wheels, is proposed to generate control signals that result in obstacle-free paths. A barrier function introduces a safety constraint into the optimization problem of the Model Predictive Control (MPC) to prevent collisions. Due to the intrinsic nonlinearities of differential drive robots, computational complexity arises when implementing a Nonlinear Model Predictive Control (NMPC). To facilitate real-time implementation of the optimization problem and to accommodate the underactuated nature of the robot, a combination of Linear Model Predictive Control (LMPC) and Dynamic Feedback Linearization (DFL) is proposed. The MPC problem is formulated on a linear equivalent model of the differential drive robot rendered by the DFL controller. The closed-loop stability and recursive feasibility of the proposed control design are analyzed. Numerical experiments illustrate the robustness and effectiveness of the proposed control synthesis in avoiding obstacles with respect to the benchmark of using Euclidean distance constraints.

A Safety Critical Model Predictive Control is proposed for autonomous navigation of differential drive robots. Leveraging a control-barrier function and the dynamic feedback linearization technique, we formulate the MPC problem as a quadratic program (QP) which can be solved efficiently, without any approximation in the description of the inherently nonlinear dynamics. Closed-loop stability, recursive feasibility, and computational complexity are analyzed rigorously.
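The barrier-function safety constraint described in this abstract can be illustrated with a minimal sketch. This is not the paper's LMPC/DFL formulation: it assumes a plain single-integrator model and a single circular obstacle, and solves the one-constraint control-barrier-function QP in closed form (a projection onto a half-space) rather than a full MPC horizon.

```python
import numpy as np

def cbf_safe_control(x, u_nom, x_obs, r=1.0, alpha=1.0):
    """Filter a nominal control through a control-barrier-function constraint.

    Toy single-integrator sketch (x_dot = u), NOT the paper's formulation:
    h(x) = ||x - x_obs||^2 - r^2 must satisfy h_dot >= -alpha * h, i.e.
    a @ u >= b with a = grad h. The QP  min ||u - u_nom||^2  subject to
    one half-space constraint has the closed-form projection below.
    """
    a = 2.0 * (x - x_obs)                     # gradient of the barrier h
    h = np.dot(x - x_obs, x - x_obs) - r**2   # barrier value (>0 means safe)
    b = -alpha * h                            # CBF inequality right-hand side
    if a @ u_nom >= b:
        return u_nom                          # nominal control is already safe
    return u_nom + (b - a @ u_nom) / (a @ a) * a  # minimal safe correction

# Robot at (2, 0) commanded straight toward an obstacle at the origin:
u_safe = cbf_safe_control(np.array([2.0, 0.0]), np.array([-1.0, 0.0]),
                          np.array([0.0, 0.0]), r=1.0, alpha=1.0)
```

The correction only activates when the nominal control would violate the barrier inequality, which mirrors how the CBF constraint tightens the MPC feasible set near the obstacle.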
Journal Article
Engineering problems in machine learning systems
by Nakae Toshihiro; Yasuoka Hirotoshi; Kuwajima Hiroshi
in Automation; Design specifications; Engineering
2020
Fatal accidents are a major issue hindering the wide acceptance of safety-critical systems that employ machine learning and deep learning models, such as automated driving vehicles. In order to use machine learning in a safety-critical system, it is necessary to demonstrate the safety and security of the system through engineering processes. However, thus far, no such widely accepted engineering concepts or frameworks have been established for these systems. The key to using a machine learning model in a deductively engineered system is decomposing the data-driven training of machine learning models into requirement, design, and verification, particularly for machine learning models used in safety-critical systems. Simultaneously, open problems and relevant technical fields are not organized in a manner that enables researchers to select a theme and work on it. In this study, we identify, classify, and explore the open problems in engineering (safety-critical) machine learning systems—that is, in terms of requirement, design, and verification of machine learning models and systems—as well as discuss related works and research directions, using automated driving vehicles as an example. Our results show that machine learning models are characterized by a lack of requirements specification, lack of design specification, lack of interpretability, and lack of robustness. We also perform a gap analysis on a conventional system quality standard SQuaRE with the characteristics of machine learning models to study quality models for machine learning systems. We find that a lack of requirements specification and lack of robustness have the greatest impact on conventional quality models.
Journal Article
Using Multimodal Large Language Models (MLLMs) for Automated Detection of Traffic Safety-Critical Events
by Abu Tami, Mohammad; Ashqar, Huthaifa I.; Elhenawy, Mohammed
in Accident prevention; Accuracy; Automation
2024
Traditional approaches to safety event analysis in autonomous systems have relied on complex machine and deep learning models and extensive datasets for high accuracy and reliability. However, the emergence of multimodal large language models (MLLMs) offers a novel approach by integrating textual, visual, and audio modalities. Our framework leverages the logical and visual reasoning power of MLLMs, directing their output through object-level question–answer (QA) prompts to ensure accurate, reliable, and actionable insights for investigating safety-critical event detection and analysis. By incorporating models like Gemini-Pro-Vision 1.5, we aim to automate safety-critical event detection and analysis while mitigating common issues such as hallucinations in MLLM outputs. The results demonstrate the framework's potential in different in-context learning (ICL) settings such as zero-shot and few-shot learning methods. Furthermore, we investigate other settings such as self-ensemble learning and a varying number of frames. The results show that a few-shot learning model consistently outperformed the other learning models, achieving the highest overall accuracy of about 79%. A comparative analysis with previous studies on visual reasoning revealed that earlier models showed moderate performance in driving safety tasks, while our proposed model significantly outperformed them. To the best of our knowledge, our proposed MLLM model stands out as the first of its kind capable of handling multiple tasks for each safety-critical event. It can identify risky scenarios, classify diverse scenes, determine car directions, categorize agents, and recommend the appropriate actions, setting a new standard in safety-critical event management. This study shows the significance of MLLMs in advancing the analysis of naturalistic driving videos to improve safety-critical event detection and understanding of the interactions in complex environments.
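The few-shot, object-level QA prompting this abstract describes can be sketched without any model API. The sketch below only shows how few-shot exemplars steer the answer format; the function name, the example scene descriptions, and the prompt wording are all hypothetical, and the real framework sends video frames to an MLLM such as Gemini-Pro-Vision 1.5 rather than text descriptions.

```python
def build_event_prompt(scene_desc, examples, question):
    """Assemble a few-shot, object-level question-answer prompt (hypothetical
    sketch of the prompting pattern, not the paper's actual prompt text).

    examples: list of (scene_description, (question, answer)) exemplars that
    demonstrate the expected answer format (in-context / few-shot learning).
    """
    parts = ["You are a traffic-safety analyst. Answer questions about the "
             "objects and agents in the scene."]
    for ex_scene, (ex_q, ex_a) in examples:   # few-shot exemplars first
        parts.append(f"Scene: {ex_scene}\nQ: {ex_q}\nA: {ex_a}")
    # The query scene ends with an open "A:" for the model to complete.
    parts.append(f"Scene: {scene_desc}\nQ: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_event_prompt(
    "ego vehicle approaching intersection, pedestrian crossing",
    [("cyclist swerves into ego lane",
      ("Is this safety-critical?", "Yes: risky scenario, agent=cyclist"))],
    "Is this safety-critical?",
)
```

In the zero-shot setting the `examples` list is simply empty, which is the main experimental contrast the abstract reports.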
Journal Article
Artificial intelligence in safety-critical systems: a systematic review
by Wang, Yue; Chung, Sai Ho
in Artificial intelligence; Artificial neural networks; Bayesian analysis
2022
Purpose: This study is a systematic literature review of the application of artificial intelligence (AI) in safety-critical systems. The authors aim to present the current application status according to different AI techniques and propose some research directions and insights to promote wider application.
Design/methodology/approach: A total of 92 articles were selected for this review through a systematic literature review along with a thematic analysis.
Findings: The literature is divided into three themes: interpretable methods, explaining model behavior, and reinforcement of safe learning. Among AI techniques, the most widely used are Bayesian networks (BNs) and deep neural networks. In addition, given the huge potential in this field, four future research directions are also proposed.
Practical implications: This study is of vital interest to industry practitioners and regulators in the safety-critical domain, as it provides a clear picture of the current status and points out that some AI techniques have great application potential. For those that are inherently appropriate for use in safety-critical systems, regulators can conduct in-depth studies to validate and encourage their use in industry.
Originality/value: This is the first review of the application of AI in safety-critical systems in the literature. It marks the first step toward advancing AI in the safety-critical domain, and has potential value in promoting the use of the term "safety-critical" and in reducing the fragmentation of the literature.
Journal Article
Safety and security risk assessment in cyber-physical systems
by Ding, Yulong; Lyu, Xiaorong; Yang, Shuang-Hua
in computer networks; Confidentiality; Convergence
2019
The term cyber-physical systems (CPS) refers to a new generation of systems with integrated computational and physical capabilities, realized through computation, communication, and control. In the past decades, related techniques for CPS have been well studied and developed, and are widely applied in fields such as industrial automation, smart transportation, aerospace, environmental monitoring, and smart grids. However, with the expansion of CPS complexity and the growing openness of these systems, most CPS have become not only safety-critical but also security-critical, since they deeply involve both physical objects and computer networks. In the last decade, it has no longer been rare to see safety incidents and security attacks happening in industry. Safety and security issues are increasingly converging on CPS, leading to new situations in which these two closely interdependent issues should now be considered together, rather than separately or in sequence. This paper reviews the existing approaches to risk assessment and management from the perspectives of safety, security, and their integration. The comparisons of these approaches are summarised with their pros and cons, before the technical gaps between the demands and the current state of safety and security in CPS are identified.
Journal Article
Risk-Aware Control: Integrating Worst-Case Conditional Value-At-Risk With Control Barrier Function
by Kishida, Masako
in conditional value-at-risk; control barrier function; safety-critical systems
2025
In safety-critical control systems such as autonomous vehicles and medical devices, managing the risk of rare but severe tail events under uncertainty is crucial. This paper addresses this challenge by proposing a risk-aware control framework that integrates the worst-case conditional value-at-risk (CVaR) with control barrier functions (CBFs). Specifically, we formulate risk-aware safety constraints based on the worst-case CVaR, and show that the resulting risk-aware controllers can be computed via quadratic programs (for half-space and polytopic safe sets) or a semidefinite program (for ellipsoidal safe sets). Numerical simulations on an inverted pendulum illustrate that the proposed approach ensures safety under various scenarios and significantly reduces safety-constraint violations compared to existing CBF approaches. Overall, we show that incorporating worst-case CVaR into CBF design offers a tractable solution for safety-critical applications under uncertainty.

Overview of the proposed risk-aware control: the approach combines control barrier functions with worst-case conditional value-at-risk to design optimization-based controllers for safety-critical systems under stochastic uncertainties.
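The half-space case the abstract mentions can be illustrated with a small sketch. This assumes the classical moment-based bound for worst-case CVaR of a linear function of an uncertain state with known mean and covariance (not the paper's exact derivation): for the safe set {x : a @ x <= b}, enforcing a tightened deterministic constraint with margin sqrt((1 - eps)/eps) * sqrt(a @ Sigma @ a) bounds the eps-tail risk over every distribution with those moments.

```python
import numpy as np

def wc_cvar_halfspace(a, mu, Sigma, eps=0.1):
    """Worst-case CVaR (level eps) of the constraint function a @ x for an
    uncertain state with mean mu and covariance Sigma, distribution otherwise
    unknown. Sketch based on the classical moment bound
        WC-CVaR = a @ mu + sqrt((1 - eps)/eps) * sqrt(a @ Sigma @ a);
    requiring this <= b turns the stochastic half-space safety constraint
    into a deterministic tightened one, as in the QP formulation described.
    """
    kappa = np.sqrt((1.0 - eps) / eps)          # risk-level multiplier
    return a @ mu + kappa * np.sqrt(a @ Sigma @ a)

# Unit-variance state at the origin, constraint direction a = (1, 0):
margin = wc_cvar_halfspace(np.array([1.0, 0.0]), np.zeros(2), np.eye(2), eps=0.2)
```

Smaller `eps` (less tolerated tail risk) inflates `kappa` and hence the tightening, which is the trade-off the inverted-pendulum experiments quantify.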
Journal Article
Safety-critical computer vision: an empirical survey of adversarial evasion attacks and defenses on computer vision systems
2023
Considering the growing prominence of production-level AI and the threat of adversarial attacks that can poison a machine learning model against a certain label, evade classification, or reveal sensitive data about the model and training data to an attacker, adversaries pose fundamental problems to machine learning systems. Furthermore, much research has focused on the inverse relationship between robustness and accuracy, raising problems for real-time and safety-critical systems, particularly since they are governed by legal constraints in which software changes must be explainable and every change must be thoroughly tested. While many defenses have been proposed, they are often computationally expensive and tend to reduce model accuracy. We have therefore conducted a large survey of attacks and defenses, and present a simple and practical framework for analyzing any machine-learning system from a safety-critical perspective, using adversarial noise to find an upper bound on the failure rate. Using this method, we conclude that all tested configurations of the ResNet architecture fail to meet any reasonable definition of 'safety-critical' when tested on even small-scale benchmark data. We examine state-of-the-art defenses and attacks against computer vision systems with a focus on safety-critical applications in autonomous driving, industrial control, and healthcare. By testing combinations of attacks and defenses, their efficacy, and their run-time requirements, we provide substantial empirical evidence that modern neural networks consistently fail to meet established safety-critical standards by a wide margin.
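The "adversarial noise as an upper bound on the failure rate" idea can be shown in miniature. This is a toy sketch, not the survey's framework: it runs a one-step FGSM-style attack (a signed-gradient perturbation on the logistic loss) against a hand-set logistic-regression model instead of the ResNets the paper evaluates, and reports the error rate under attack.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_failure_rate(w, X, y, eps):
    """Error rate of a logistic model under a one-step FGSM perturbation.

    Toy probe of the survey's idea (empirically bounding failure under
    adversarial noise); the real study targets ResNet image classifiers.
    FGSM: x_adv = x + eps * sign(d loss / d x), where for the logistic
    loss the input gradient is (sigmoid(w @ x) - y) * w.
    """
    grad = (sigmoid(X @ w) - y)[:, None] * w[None, :]  # per-sample input gradient
    X_adv = X + eps * np.sign(grad)                    # worst-case signed step
    preds = (sigmoid(X_adv @ w) > 0.5).astype(float)
    return float(np.mean(preds != y))                  # failure rate under attack

w = np.array([1.0, 0.0])                   # fixed toy decision boundary x1 = 0
X = np.array([[2.0, 0.0], [-2.0, 0.0]])    # two well-separated samples
y = np.array([1.0, 0.0])
```

Sweeping `eps` traces how quickly accuracy collapses as the noise budget grows, which is the kind of curve the survey uses to argue that tested models miss safety-critical thresholds.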
Journal Article
Safety-critical Optimization of Vehicle Parts
2026
In recent years, automotive weight reduction has attracted considerable attention due to its benefits in fuel consumption, emissions, material usage, and vehicle dynamics. For unsprung masses, these effects are particularly pronounced, directly influencing vehicle stability, maneuverability, and road safety. Conventional engineering optimization is typically based on static load cases; however, such "simple" optimization is insufficient for safety-critical components operating under real service conditions. In practice, automotive components are exposed to dynamically varying, stochastic loads originating from road excitation, and their failure is therefore predominantly governed by fatigue rather than static strength. Current engineering optimization tools do not yet enable direct optimization with respect to fatigue life. To address this limitation, a dynamic factor is introduced to represent time-dependent loading effects within the optimization framework. The optimization problem is reformulated with the explicit constraint that the original safety factor must not decrease, ensuring that the expected service life of the component is preserved. The results indicate that, although the achievable mass reduction is smaller than that obtained by purely static optimization, it remains significant while maintaining fatigue-related safety margins. The applied approach is restricted to geometry modifications compatible with conventional manufacturing, ensuring industrial relevance.
Journal Article