1,218 results for "Reliability (Engineering) Data processing."
The foundations of operational resilience - assessing the ability to operate in an anti-access/area denial (A2/AD) environment : the analytical framework, lexicon, and characteristics of the Operational Resilience Analysis Model (ORAM)
\"Although much work has been done considering the issue of airbase resilience especially in the Asia-Pacific region these studies have typically focused on a single aspect of the problem (such as hardening or runway repair) but have not considered the issues in total. There is a need to view the issue more holistically, especially given the strategic implications of U.S. power projection in anti-access/area denial (A2/AD) environments. The authors of this report developed a modeling framework and lexicon for conducting a detailed analysis of future Air Force operational resilience in an A2/AD environment; the analysis itself focused on different regions (Pacific, Southwest Asia, etc.) to bound the problem and identify a robust set of strategic assumptions and planning requirements. The study was set within the context of efforts to rebalance the joint force in the Asia-Pacific region. This report describes the Operational Resilience Analysis Model (ORAM) built for this effort, which was used to evaluate the impact of different courses of action from an operational standpoint. The authors explain the ORAM model, discuss the inputs that go into modeling Blue (friendly) and Red (enemy) capabilities, and illustrate the model using a simple notional case. They conclude with some suggestions for follow-on work to improve the functionality of ORAM and to address data uncertainties in the model\"--Publisher's website.
Delay Tolerant Networks
Delay Tolerant Networks (DTNs), which include terrestrial mobile networks, exotic media networks, ad hoc networks, and sensor networks, are becoming more important and may not be well served by the current end-to-end TCP/IP model. This book provides a self-contained, one-stop reference for researchers and practitioners who are looking toward the future of networking. The text presents a systematic exploration of DTN concepts, architectures, protocols, enabling technologies, and applications. It also discusses the various challenges associated with DTNs. The author includes a wealth of illustrative material, written in an accessible tone, for easy understanding of the topics covered in the book.
The Italian earthquake catalogue CPTI15
The parametric catalogue of Italian earthquakes CPTI15 (Catalogo Parametrico dei Terremoti Italiani) represents the latest in a 45-year-long tradition of earthquake catalogues for Italy, and a significant innovation with respect to its predecessors. CPTI15 combines all known information on significant Italian earthquakes of the period 1000–2017, balancing instrumental and macroseismic data. Although the compilation criteria are the same as in the previous CPTI11 version, released in 2012, the catalogue has been revised with respect to: the time coverage, extended to 2017; the associated macroseismic data, improved in quantity and quality; the considered instrumental data, new and/or updated; the energy thresholds, lowered to maximum or epicentral intensity 5 or magnitude 4.0 (instead of 5–6 and 4.5, respectively); the determination of parameters from macroseismic data, based on a new calibration; and the instrumental magnitudes, resulting from new sets of data and new conversion relationships to Mw. The catalogue considers and harmonizes data of different types and origins, both macroseismic and instrumental. For all earthquakes, the magnitude is given in terms of true or proxy moment magnitude (Mw), with the related uncertainty. The compilation procedure rigorously implements data and methods published in peer-reviewed journals. All data and methods are clearly indicated in the catalogue, in order to guarantee the maximum transparency of the compilation procedures. Compared to previous CPTI releases, the final CPTI15 catalogue shows a frequency–magnitude distribution coherent with current Italian instrumental catalogues, making it suitable for statistical analysis of the time-space properties of Italian seismicity.
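The abstract's closing claim, that the catalogue's frequency-magnitude distribution is coherent with instrumental catalogues, is the kind of check that is easy to script. Below is a minimal sketch (not the CPTI15 authors' procedure) that filters a magnitude list at the catalogue's Mw 4.0 threshold and estimates the Gutenberg-Richter b-value with Aki's maximum-likelihood formula; the synthetic magnitudes merely stand in for real catalogue data.

```python
# Sketch: frequency-magnitude check against the Gutenberg-Richter relation
# log10 N(>=M) = a - b*M. Synthetic magnitudes stand in for catalogue data.
import numpy as np

def frequency_magnitude(mags, m_min=4.0, bin_width=0.1):
    """Return magnitude bins and cumulative counts N(>= M)."""
    mags = np.asarray(mags)
    mags = mags[mags >= m_min]                      # apply the catalogue threshold
    bins = np.arange(m_min, mags.max() + bin_width, bin_width)
    counts = np.array([(mags >= m).sum() for m in bins])
    return bins, counts

def estimate_b_value(mags, m_min=4.0):
    """Maximum-likelihood b-value (Aki, 1965): b = log10(e) / (mean(M) - Mc)."""
    mags = np.asarray(mags)
    mags = mags[mags >= m_min]
    return np.log10(np.e) / (mags.mean() - m_min)

# Synthetic magnitudes drawn so that b ~ 1.0 by construction:
rng = np.random.default_rng(0)
synthetic_mw = 4.0 + rng.exponential(scale=1 / np.log(10), size=5000)
bins, counts = frequency_magnitude(synthetic_mw)
print(f"b-value ~ {estimate_b_value(synthetic_mw):.2f}")
print(f"N(Mw>=4.0)={counts[0]}, N(Mw>=5.0)={counts[np.searchsorted(bins, 5.0)]}")
```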
Surface crack detection using deep learning with shallow CNN architecture for enhanced computation
Surface cracks on concrete structures are a key indicator of structural safety and degradation. To ensure the structural health and reliability of buildings, frequent inspection and monitoring for surface cracks is important. Surface inspection conducted by humans is time-consuming and may produce inconsistent results owing to inspectors' varied empirical knowledge. In the field of structural health monitoring, visual inspection of surface cracks on civil structures using deep learning algorithms has gained considerable attention. However, these vision-based techniques require high-quality images as inputs and depend on high computational power for image classification. Thus, in this study, a shallow convolutional neural network (CNN)-based architecture for concrete surface crack detection is proposed. LeNet-5, a well-known CNN architecture, is optimized and trained for image classification using 40,000 images in the Middle East Technical University (METU) dataset. To achieve maximum accuracy for crack detection with minimum computation, the hyperparameters of the proposed model were optimized. The proposed model enables the deployment of deep learning algorithms on low-power computational devices for hassle-free monitoring of civil structures. The performance of the proposed model is compared with those of various pretrained deep learning models, such as VGG16, Inception, and ResNet. The proposed shallow CNN architecture achieves a maximum accuracy of 99.8% with minimal computation. Better hyperparameter optimization in a CNN architecture yields higher accuracy even with a shallow layer stack, reducing computational cost. The evaluation results support incorporating the proposed method into autonomous devices, such as unmanned aerial vehicles, for real-time inspection of surface cracks with minimum computation.
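To make the architecture concrete, here is a minimal Keras sketch of a LeNet-5-style shallow CNN for binary crack/no-crack classification, assuming the 227x227 RGB images of the METU dataset. The layer sizes and training settings are illustrative defaults, not the paper's tuned hyperparameters.

```python
# Sketch of a LeNet-5-style shallow CNN for binary crack classification.
from tensorflow.keras import layers, models

def build_shallow_cnn(input_shape=(227, 227, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(6, kernel_size=5, activation="relu"),   # LeNet-5 C1
        layers.MaxPooling2D(pool_size=2),                     # S2
        layers.Conv2D(16, kernel_size=5, activation="relu"),  # C3
        layers.MaxPooling2D(pool_size=2),                     # S4
        layers.Flatten(),
        layers.Dense(120, activation="relu"),                 # C5
        layers.Dense(84, activation="relu"),                  # F6
        layers.Dense(1, activation="sigmoid"),                # crack / no crack
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_shallow_cnn()
model.summary()  # a few hundred thousand parameters vs. millions in VGG16/ResNet
```

The parameter count printed by `model.summary()` illustrates the computational argument: a LeNet-5-scale stack is orders of magnitude smaller than VGG16 or ResNet, which is what makes deployment on low-power devices plausible.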
Reliability-Based Design for Strip-Footing Subjected to Inclined Loading Using Hybrid LSSVM ML Models
The bearing capacity of strip footings is significantly influenced by uncertainties related to the footing, soil conditions, and load inclination. Given the inherent unpredictability in footing design, the reliability-based design of geotechnical structures has garnered considerable interest in the research community. This paper presents a state-of-the-art probabilistic design for footings under inclined loading using the first-order reliability method (FORM) combined with a hybrid least squares support vector machine (LSSVM) learning approach. A comprehensive dataset comprising 920 samples from the literature, with the reduction factor (RF) as the output parameter, was utilized to simulate hybrid LSSVM models based on particle swarm optimization (PSO) and Harris hawks optimization (HHO). The input variables for predicting the bearing capacity include the load eccentricity-to-width ratio, embedment ratio, load inclination-to-friction angle, and load arrangement. The performance metrics indicate that among the three proposed machine learning models, the LSSVM-PSO model achieves the best predictive performance, with an R² of 0.991 and an RMSE of 0.051 during training and an R² of 0.962 and an RMSE of 0.109 during testing. The model’s performance was further evaluated via rank analysis, reliability analysis, regression plots, and uncertainty analysis. The reliability index (β) and corresponding probability of failure (POF) computed using FORM were compared with the actual values for both phases. The study concluded that the LSSVM-PSO is the most reliable method for reliability-based design, demonstrating superior performance and reliability. This hybrid approach offers a robust framework for addressing uncertainties in geotechnical engineering, enhancing the reliability and accuracy of footing design under inclined loading conditions.
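For readers unfamiliar with LSSVM, the sketch below shows the core of the method: unlike a standard SVM, least-squares SVM regression reduces training to solving a single linear system. The PSO/HHO metaheuristic tuning used in the paper is omitted; the kernel width and regularization constant are fixed, illustrative values, and the synthetic inputs merely stand in for the 920-sample reduction-factor dataset.

```python
# Minimal least-squares SVM (LSSVM) regression sketch with an RBF kernel.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # LSSVM dual: [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]           # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage with synthetic inputs standing in for the real dataset:
rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 4))        # four input features, as in the paper
y = np.sin(X @ np.array([1.0, 2.0, 0.5, 1.5]))
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, alpha, b, X[:3]))
```

In the paper's hybrid scheme, an optimizer such as PSO would search over (gamma, sigma) to minimize validation error; here they are simply fixed.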
Reliability of time-constrained multi-state network susceptible to correlated component faults
Correlation can seriously degrade reliability and capacity due to the simultaneous failure of multiple components, which lowers the probability that a system can execute its required functions with acceptable levels of confidence. The high cost of fault in time-critical systems necessitates methods to explicitly consider the influence of correlation on reliability. This paper constructs a network-structured model, namely time-constrained multi-state network (TCMSN), to investigate the capacity of a computer network. In the TCMSN, the physical lines comprising the edges of the computer network experience correlated faults. Our approach quantifies the probability that d units of data can be sent from source to sink in no more than T units of time. This probability that the computer network delivers a specified level of data before the deadline is referred to as the system reliability. Experimental results indicate that the negative influence of correlation on reliability could be significant, especially when the data amount is close to network bandwidth and the time constraint is tight. The modeling approach will subsequently promote design and optimization studies to mitigate the vulnerability of networks to correlated faults.
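The reliability measure defined here, the probability of delivering d data units within T time units, lends itself to a quick Monte Carlo illustration. The sketch below is not the paper's analytical method: it correlates line failures through a Gaussian copula on an invented single-edge topology, and shows reliability dropping as the correlation rises, in line with the abstract's conclusion.

```python
# Monte Carlo sketch: P(time to send d data units <= T) when the physical
# lines of an edge fail in a correlated way (Gaussian copula). All parameters
# are invented for the demo.
import numpy as np
from scipy.stats import norm

def simulate_reliability(d=100, T=10, p_fail=0.2, rho=0.5, n_lines=5,
                         line_capacity=4, n_trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # Correlated standard normals -> correlated Bernoulli line faults.
    cov = np.full((n_lines, n_lines), rho) + (1 - rho) * np.eye(n_lines)
    z = rng.multivariate_normal(np.zeros(n_lines), cov, size=n_trials)
    failed = z < norm.ppf(p_fail)            # marginal failure prob = p_fail
    capacity = line_capacity * (~failed).sum(axis=1)   # surviving bandwidth
    time_needed = np.where(capacity > 0,
                           np.ceil(d / np.maximum(capacity, 1)),
                           np.inf)
    return (time_needed <= T).mean()

for rho in (0.0, 0.3, 0.6, 0.9):
    print(f"rho={rho:.1f}  reliability ~ {simulate_reliability(rho=rho):.3f}")
```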
Fault Tolerant Systems
There are many applications in which the reliability of the overall system must be far higher than the reliability of its individual components. In such cases, designers devise mechanisms and architectures that allow the system either to completely mask the effects of a component failure or to recover from it so quickly that the application is not seriously affected. This is the work of fault-tolerant designers, and their work is increasingly important and complex, not only because of the increasing number of “mission critical” applications but also because the diminishing reliability of hardware means that even systems for non-critical applications will need to be designed with fault tolerance in mind. Reflecting the real-world challenges faced by designers of these systems, this book addresses fault-tolerance design with a systems approach to both hardware and software. No other text on the market takes this approach, nor offers the comprehensive and up-to-date treatment the authors provide. Students, designers, and architects of high-performance processors will value this comprehensive overview of the field.
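As a concrete instance of the fault masking mentioned above, the classic mechanism is triple modular redundancy (TMR): run three replicas of a computation and take the majority vote, so any single faulty result is outvoted. The toy sketch below is a generic illustration, not an example from the book.

```python
# Toy triple modular redundancy (TMR): three replicas plus a majority voter.
from collections import Counter

def tmr(replicas, *args):
    """Run three replicas and return the majority result."""
    results = [f(*args) for f in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica faulty")
    return value

def good(x):   return x * x
def faulty(x): return x * x + 1   # stuck-at-style fault injected for the demo

print(tmr([good, good, faulty], 7))   # -> 49; the single faulty replica is masked
```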
The Materials Data Facility: Data Services to Advance Materials Science Research
With increasingly strict data management requirements from funding agencies and institutions, expanding focus on the challenges of research replicability, and growing data sizes and heterogeneity, new data needs are emerging in the materials community. The Materials Data Facility (MDF) operates two cloud-hosted services, data publication and data discovery, with features to promote open data sharing and self-service data publication and curation, and to encourage data reuse, layered with powerful data discovery tools. The data publication service simplifies the process of copying data to a secure storage location, assigning data a citable persistent identifier, and recording custom (e.g., material, technique, or instrument specific) and automatically extracted metadata in a registry, while the data discovery service will provide advanced search capabilities (e.g., faceting, free text range querying, and full text search) against the registered data and metadata. The MDF services empower individual researchers, research projects, and institutions to (I) publish research datasets, regardless of size, from local storage, institutional data stores, or cloud storage, without involvement of third-party publishers; (II) build, share, and enforce extensible domain-specific custom metadata schemas; (III) interact with published data and metadata via representational state transfer (REST) application program interfaces (APIs) to facilitate automation, analysis, and feedback; and (IV) access a data discovery model that allows researchers to search, interrogate, and eventually build on existing published data. We describe MDF’s design, current status, and future plans.
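The REST-based workflow in point (III) can be pictured with a short client sketch. The base URL, endpoints, and JSON fields below are hypothetical placeholders for illustration only; they are not the actual MDF API, whose real interfaces should be taken from MDF's own documentation.

```python
# Hedged sketch of a publish-then-discover workflow over generic REST calls.
# Everything below the BASE line is HYPOTHETICAL, not the real MDF API.
import requests

BASE = "https://example.org/mdf"   # hypothetical service root

def publish_dataset(metadata: dict, token: str) -> str:
    """Register a dataset record and return its persistent identifier."""
    resp = requests.post(f"{BASE}/publish", json=metadata,
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()["identifier"]          # hypothetical response field

def search_datasets(query: str) -> list:
    """Free-text search over registered metadata."""
    resp = requests.get(f"{BASE}/search", params={"q": query})
    resp.raise_for_status()
    return resp.json()["results"]             # hypothetical response field

# Example usage (requires a live service, hence left commented out):
# pid = publish_dataset({"title": "XRD scans", "material": "TiO2"}, token="...")
# hits = search_datasets("TiO2")
```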
Mathematical Modelling of System Resilience
Almost all the systems in our world, including technical, social, economic, and environmental systems, are becoming interconnected and increasingly complex, and as such they are vulnerable to various risks. Because of this trend, resilience creation is becoming more important to system managers and decision makers as a way to ensure sustained performance. To ensure acceptable sustained performance under such interconnectedness and complexity, resilience creation with a systems approach is a requirement. Mathematical modeling is the most common approach to system resilience creation. Mathematical Modelling of System Resilience covers resilience creation for various system aspects, including a functional system of the supply chain and overall supply chain systems; various methodologies for modeling system resilience; a satellite-based approach for addressing climate-related risks; a repair-based approach for sustainable performance of an engineering system; and modeling measures of reliability for a vertical take-off and landing system. Each chapter contributes state-of-the-art research on the resilience-related topic it covers. Technical topics covered in the book include:
1. Supply chain risk, vulnerability, and disruptions
2. System resilience for containing failures and disruptions
3. Resiliency considering the frequency and intensity of disasters
4. Resilience performance index
5. Resiliency of electric traction systems
6. Degree of resilience
7. Satellite observation and hydrological risk
8. Latitude of resilience
9. On-line repair for resilience
10. Reliability design for a vertical take-off and landing prototype
Reliability evaluation method for squeeze casting process parameter data
To evaluate the reliability of squeeze casting (SC) process parameter data originating from different sources, and thereby guarantee the performance of data-driven manufacturing, this paper proposes a relative reliability evaluation method based on the status of the relevant data. The relative reliability of the data is defined. The value range of each attribute of the initial data set is obtained and divided into different (reliability) intervals or classes by Canopy K-means clustering and a locally linear embedding algorithm, according to the data values and engineering facts, and the status of each data cell is obtained by calculating the class to which its value belongs. The reliability of a data unit is obtained by calculating the probability of the data cell status. Furthermore, a reliability evaluation model for all the data is proposed. It integrates association rules and a Bayesian network to formulate the relationships between data units and obtain the reliability of each piece of data through reasoning. A sample dataset of SC process parameters collected from 107 studies was evaluated to prove the effectiveness of the proposed method, and its performance was assessed by comparing it with simulation results obtained using ProCAST and JMatPro. The results show that castings manufactured with more reliable data, as evaluated by the method, exhibit lower shrinkage porosity and better mechanical, thermal, and physical properties, demonstrating the method's effectiveness in assessing the reliability of different data for specific applications.
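The clustering-and-probability step at the heart of the method can be illustrated compactly. The sketch below is a simplification with made-up parameter values: it partitions one attribute into classes with plain K-means and scores each data cell by the empirical frequency of its class; the Canopy pre-clustering, locally linear embedding, association rules, and Bayesian-network reasoning of the full method are all omitted.

```python
# Simplified sketch: cluster one attribute's values into classes, then score
# each cell by the probability (empirical frequency) of the class it falls in.
import numpy as np
from sklearn.cluster import KMeans

def cell_reliability(values, k=3, seed=0):
    """Return per-cell relative reliability: frequency of each cell's class."""
    values = np.asarray(values, dtype=float).reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(values)
    freq = np.bincount(labels) / len(labels)   # P(class), estimated from the data
    return freq[labels]

# Toy usage: hypothetical pouring temperatures (deg C) from different sources;
# values landing in sparsely populated classes get lower relative reliability.
temps = [690, 695, 700, 702, 698, 850, 703, 699, 697, 701]
print(np.round(cell_reliability(temps), 2))   # the 850 outlier scores lowest
```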