18,959 results for "data fusion"
An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox
A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges: (1) feature extraction from various types of sensory data and (2) selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and these selections require extensive domain expertise and human labor. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and adaptively optimize a combination of different fusion levels to satisfy the requirements of any fault diagnosis task. The proposed method is tested on a planetary gearbox test rig. Handcrafted features, manually selected fusion levels, single-sensor data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method detects the conditions of the planetary gearbox effectively, with the best diagnosis accuracy among all comparative methods in the experiment.
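The abstract describes learning features from raw multi-sensor signals with a DCNN and adaptively weighting different fusion levels. The following is a minimal, hypothetical PyTorch sketch of that general idea (two sensor branches, a data-level path, and a learned blend between the two paths); the two-sensor setup, layer sizes, and fusion weighting are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the paper's architecture): two 1-D CNN branches learn
# features from raw signals of two sensors, and a learnable weight blends a
# data-level fusion path with a feature-level fusion path.
import torch
import torch.nn as nn

class AdaptiveFusionCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv1d(in_ch, 16, kernel_size=9, stride=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.sensor_a = branch(1)           # feature-level path, sensor A
        self.sensor_b = branch(1)           # feature-level path, sensor B
        self.stacked = branch(2)            # data-level path, raw signals stacked as channels
        self.head_feat = nn.Linear(64, n_classes)
        self.head_data = nn.Linear(32, n_classes)
        self.fusion_logit = nn.Parameter(torch.zeros(2))  # learned blend of the two paths

    def forward(self, xa, xb):              # xa, xb: (batch, 1, signal_len)
        feat = torch.cat([self.sensor_a(xa), self.sensor_b(xb)], dim=1)
        logits_feat = self.head_feat(feat)
        logits_data = self.head_data(self.stacked(torch.cat([xa, xb], dim=1)))
        w = torch.softmax(self.fusion_logit, dim=0)
        return w[0] * logits_feat + w[1] * logits_data

model = AdaptiveFusionCNN()
out = model(torch.randn(8, 1, 1024), torch.randn(8, 1, 1024))  # (8, 4) class logits
```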
Ecological forecasting and data assimilation in a data-rich era
Several forces are converging to transform ecological research and increase its emphasis on quantitative forecasting. These forces include (1) dramatically increased volumes of data from observational and experimental networks, (2) increases in computational power, (3) advances in ecological models and related statistical and optimization methodologies, and most importantly, (4) societal needs to develop better strategies for natural resource management in a world of ongoing global change. Traditionally, ecological forecasting has been based on process-oriented models, informed by data in largely ad hoc ways. Although most ecological models incorporate some representation of mechanistic processes, today's models are generally not adequate to quantify real-world dynamics and provide reliable forecasts with accompanying estimates of uncertainty. A key tool to improve ecological forecasting and estimates of uncertainty is data assimilation (DA), which uses data to inform initial conditions and model parameters, thereby constraining a model during simulation to yield results that approximate reality as closely as possible. This paper discusses the meaning and history of DA in ecological research and highlights its role in refining inference and generating forecasts. DA can advance ecological forecasting by (1) improving estimates of model parameters and state variables, (2) facilitating selection of alternative model structures, and (3) quantifying uncertainties arising from observations, models, and their interactions. However, DA may not improve forecasts when ecological processes are not well understood or never observed. Overall, we suggest that DA is a key technique for converting raw data into ecologically meaningful products, which is especially important in this era of dramatically increased availability of data from observational and experimental networks.
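The abstract explains data assimilation as using observations to constrain model states and parameters during simulation. As a generic, textbook-style illustration (not a method taken from the paper), the sketch below applies a scalar ensemble Kalman filter update to a toy logistic growth model; the process model, observation values, and error variances are invented for the example.

```python
# Generic sequential data assimilation sketch: forecast with a toy process
# model, then nudge the ensemble toward each observation (analysis step).
import numpy as np

rng = np.random.default_rng(0)

def logistic_step(x, r=0.8, K=100.0, dt=0.1):
    """One forward step of a toy logistic growth model (the 'process model')."""
    return x + r * x * (1.0 - x / K) * dt

def enkf_update(ensemble, y_obs, obs_var):
    """Scalar ensemble Kalman filter analysis step with a direct observation."""
    p = ensemble.var(ddof=1)                 # forecast error variance
    k = p / (p + obs_var)                    # Kalman gain
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.shape)
    return ensemble + k * (perturbed - ensemble)

ensemble = rng.normal(20.0, 5.0, size=50)    # initial state ensemble (e.g., biomass)
for y in [24.0, 27.5, 31.0]:                 # sequence of observations
    ensemble = logistic_step(ensemble)       # forecast step
    ensemble = enkf_update(ensemble, y, obs_var=4.0)  # analysis step
print(ensemble.mean(), ensemble.std())       # state estimate and its spread
```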
Enhancement of Detecting Permanent Water and Temporary Water in Flood Disasters by Fusing Sentinel-1 and Sentinel-2 Imagery Using Deep Learning Algorithms: Demonstration of Sen1Floods11 Benchmark Datasets
Identifying permanent water and temporary water in flood disasters efficiently has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, but estimating the water type in flood disaster events from post-flood remote sensing imagery alone remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms for improving flood detection, yet this field has only been studied initially due to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a multi-source data fusion driven flood inundation mapping approach by leveraging the large-scale, publicly available Sen1Floods11 dataset, consisting of roughly 4831 labelled Sentinel-1 SAR and Sentinel-2 optical images gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for surface water, permanent water, and temporary water identification, with all tasks sharing the same convolutional neural network architecture. We use focal loss to deal with the class (water/non-water) imbalance problem. Thorough ablation experiments and analysis confirm the effectiveness of the proposed designs. In comparison experiments, the proposed method is superior to other classical models. Our model achieves a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Floods11 test set. On the Sen1Floods11 Bolivia test set, our model also achieves very high mIoU (47.88%), IoU (76.74%), and OA (95.59%), showing good generalization ability.
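Two quantitative elements named in the abstract, focal loss for the water/non-water class imbalance and IoU/mIoU scoring, can be sketched in a few lines of NumPy. The alpha/gamma values and the tiny prediction arrays below are illustrative defaults, not the paper's settings.

```python
# Sketch of a binary focal loss and per-class IoU / mIoU; values are toy data.
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss; p = predicted water probability, y = {0, 1} label."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    a = np.where(y == 1, alpha, 1 - alpha)
    return np.mean(-a * (1 - pt) ** gamma * np.log(pt))

def iou(pred, target, cls):
    """Intersection over Union for one class label."""
    inter = np.logical_and(pred == cls, target == cls).sum()
    union = np.logical_or(pred == cls, target == cls).sum()
    return inter / union if union else np.nan

loss = focal_loss(np.array([0.9, 0.2, 0.7]), np.array([1, 0, 0]))
pred = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 0], [0, 1, 1]])
miou = np.nanmean([iou(pred, target, c) for c in (0, 1)])   # mean IoU over classes
```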
IoMT-Enabled Fusion-Based Model to Predict Posture for Smart Healthcare Systems
Smart healthcare applications depend on data from wearable sensors (WSs) mounted on a patient’s body for frequent monitoring information. Healthcare systems depend on multi-level data for detecting illnesses and consequently delivering correct diagnostic measures. Collecting WS data and integrating those data for diagnostic purposes is a difficult task. This paper proposes an Errorless Data Fusion (EDF) approach to increase posture recognition accuracy. The research is based on a case study in a health organization. With the rise of smart healthcare systems, WS data fusion requires careful attention to provide sensitive analysis of the recognized illness. The approach therefore depends on WS inputs and performs group analysis at a similar rate to improve diagnostic efficiency. Sensor breakdowns, the constant time factor, aggregation, and analysis results all cause errors, resulting in rejected or incorrect suggestions. This paper addresses the problem by using EDF, which relates to patient situational discovery through healthcare surveillance systems. Features of WS data are examined extensively using active and iterative learning to identify errors in specific postures. The technique improves posture detection accuracy, analysis duration, and error rate, regardless of user movements. Wearable devices play a critical role in the management and treatment of patients and can ensure that patients are provided with treatment tailored to their medical needs. This paper discusses the EDF technique for optimizing posture identification accuracy through multi-feature analysis. First, the patients’ walking patterns are tracked at various time intervals. The characteristics are then evaluated against the stored data using a random forest classifier.
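The abstract mentions evaluating wearable-sensor characteristics with a random forest classifier. The sketch below is a generic posture-classification skeleton in scikit-learn with synthetic per-window features and labels; it does not reproduce the EDF pipeline or its data.

```python
# Generic posture classification with a random forest over windowed wearable-
# sensor features; the features and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_windows = 600
# Per-window summary features from a body-worn accelerometer: e.g., mean, std,
# and range on each of the x/y/z axes (9 features per window).
X = rng.normal(size=(n_windows, 9))
y = rng.integers(0, 3, size=n_windows)       # 0 = sitting, 1 = standing, 2 = walking

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("posture accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```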
An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images
Due to technical limitations, it is impossible for current NDVI datasets to have high resolution in both the spatial and temporal dimensions. Therefore, several methods have been developed to produce NDVI time-series datasets with high spatial and temporal resolution, but they face limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, the NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to accurately and efficiently blend MODIS NDVI time-series data with multi-temporal Landsat TM/ETM+ images. The method first unmixes the NDVI temporal changes in the MODIS time series into different land cover types and then uses the unmixed NDVI temporal changes to predict a Landsat-like NDVI dataset. A test over a forest site shows the high accuracy (average difference: −0.0070; average absolute difference: 0.0228; and average absolute relative difference: 4.02%) and computational efficiency of NDVI-LMGM (31 seconds on a personal computer). Experiments over a more complex landscape and a longer time series demonstrate that NDVI-LMGM performs well in each stage of the vegetation growing season and is robust in regions with contrasting spatial and temporal variations. Comparisons between NDVI-LMGM and current methods (i.e., the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Enhanced STARFM (ESTARFM), and the Weighted Linear Model (WLM)) show that NDVI-LMGM is more accurate and efficient than these methods. The proposed method will benefit land surface process research, which requires dense NDVI time-series datasets with high spatial resolution.
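The unmixing step described in the abstract can be illustrated, under simplifying assumptions, as a least-squares problem: each coarse (MODIS-like) pixel's NDVI change is modeled as a fractional-cover-weighted sum of per-class NDVI changes, and the recovered per-class changes are added to a fine-resolution (Landsat-like) baseline. The fractions, class count, and noise level below are invented for the example and do not reproduce the NDVI-LMGM implementation.

```python
# Toy linear-unmixing sketch: solve per-class NDVI changes from coarse pixels,
# then apply them to fine-resolution pixels of known class.
import numpy as np

rng = np.random.default_rng(1)
n_coarse, n_classes = 400, 3

F = rng.dirichlet(np.ones(n_classes), size=n_coarse)    # class fractions per coarse pixel
true_delta = np.array([0.10, -0.05, 0.02])              # per-class NDVI change (unknown in practice)
coarse_change = F @ true_delta + rng.normal(0, 0.005, n_coarse)  # observed coarse NDVI change

# Unmix: least-squares estimate of the per-class NDVI temporal change.
delta_hat, *_ = np.linalg.lstsq(F, coarse_change, rcond=None)

# Predict fine-resolution NDVI at the later date: baseline + change of each pixel's class.
fine_baseline = np.array([0.55, 0.30, 0.70])             # example fine pixels
fine_classes = np.array([0, 1, 2])                       # their land cover classes
fine_predicted = fine_baseline + delta_hat[fine_classes]
```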
Activity Learning: Discovering, Recognizing, and Predicting Human Behavior from Sensor Data
Defines the notion of an activity model learned from sensor data and presents the key algorithms that form the core of the field. Activity Learning: Discovering, Recognizing, and Predicting Human Behavior from Sensor Data provides an in-depth look at computational approaches to activity learning from sensor data. Each chapter is constructed to provide practical, step-by-step information on how to analyze and process sensor data. The book discusses techniques for activity learning that include (1) discovering activity patterns that emerge from behavior-based sensor data, (2) recognizing occurrences of predefined or discovered activities in real time, and (3) predicting the occurrences of activities. The techniques covered can be applied to numerous fields, including security, telecommunications, healthcare, smart grids, and home automation. An online companion site enables readers to experiment with the techniques described in the book and to adapt or enhance them for their own use. With an emphasis on computational approaches, the book provides graduate students and researchers with an algorithmic perspective on activity learning.
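Sensor-based activity recognition of the kind the book covers typically begins by segmenting a sensor stream into overlapping windows and summarizing each window with simple features. The sketch below shows only that preprocessing step; the window length, overlap, and feature set are arbitrary choices for illustration, not taken from the book.

```python
# Sliding-window feature extraction over a 1-D sensor stream, producing one
# feature row per window for a downstream activity classifier.
import numpy as np

def window_features(stream, win=50, step=25):
    """Slice a stream into overlapping windows and summarize each window."""
    feats = []
    for start in range(0, len(stream) - win + 1, step):
        w = stream[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max(),
                      np.abs(np.diff(w)).mean()])   # mean absolute change
    return np.array(feats)

stream = np.sin(np.linspace(0, 40, 1000)) + np.random.default_rng(0).normal(0, 0.1, 1000)
X = window_features(stream)      # feature matrix, ready for a classifier
```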
Training Performance Indications for Amateur Athletes Based on Nutrition and Activity Lifelogs
To maintain and improve an amateur athlete’s fitness throughout training and to achieve peak performance in sports events, good nutrition and physical activity (both general and training-specific) must be considered important factors. In this context, “amateur athletes” refers to people who practice sports to protect their health from sickness and disease and to improve their ability to take part in amateur athletic events (e.g., marathons). Unlike professional athletes with personal trainer support, amateur athletes mostly rely on their own experience and intuition. Hence, amateur athletes need other ways to be supported in monitoring their activities and in receiving recommendations for executing them more efficiently. One solution for (self-)coaching amateur athletes is collecting lifelog data (i.e., daily data captured from different sources around a person) to understand how daily nutrition and physical activities impact their exercise outcomes. Unfortunately, not all factors in the lifelog data contribute to understanding the mutual impact of nutrition, physical activities, and exercise frequency on improving endurance, stamina, and weight loss. Hence, there is no guarantee that analyzing all data collected from people will produce good insights or a good model for predicting outcomes. Moreover, analyzing a rich and complicated dataset can consume substantial resources (e.g., computation, hardware, bandwidth) and is therefore ill-suited to deployment on IoT or personal devices. To meet this challenge, we propose a new method to (i) discover the optimal lifelog data that significantly reflect the relation between nutrition, physical activities, and training performance and (ii) construct an adaptive model that can predict performance for both large-scale and individual groups. Our method produces positive results with low MAE and MSE when tested on large-scale and individual datasets, and it also discovers interesting patterns and correlations among the data factors.
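The evaluation described in the abstract, selecting a compact subset of lifelog features and scoring a predictor with MAE and MSE, can be sketched generically as below. The synthetic data, the SelectKBest and gradient-boosting choices, and all parameter values are placeholders, not the authors' method.

```python
# Generic feature-selection + regression sketch scored with MAE and MSE.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))               # e.g., daily nutrition/activity features
y = X[:, 0] * 2.0 - X[:, 3] + rng.normal(0, 0.3, 300)   # synthetic performance score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
selector = SelectKBest(f_regression, k=5).fit(X_tr, y_tr)   # keep the 5 most informative features
model = GradientBoostingRegressor(random_state=0).fit(selector.transform(X_tr), y_tr)
pred = model.predict(selector.transform(X_te))
print("MAE:", mean_absolute_error(y_te, pred), "MSE:", mean_squared_error(y_te, pred))
```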