7,310 result(s) for "Normalization"
Recognizing and responding to normalization of deviance
"Contains guidelines to assist facilities in recognizing and addressing the phenomenon of normalization of deviation. Provides techniques for addressing normalized deviations and for eliminating waste in all manufacturing processes. Describes methods for identifying normalized deviation, as well as where to find deviations. Includes techniques to reduce both operational and organizational normalization of deviance. Market description: process safety professionals in all areas of manufacturing; process safety consultants; chemical engineering students; certified safety professionals." -- Provided by publisher.
Improving the data normalization method in the CORASO method
The CORASO (COmpromise Ranking from Alternative SOlutions) method is one of the more recent MCDM (Multi-Criteria Decision-Making) methods, developed in 2024. It is a straightforward method that has proven highly accurate in ranking alternatives. However, the method becomes unusable if a maximization-type criterion in the decision matrix has a maximum value of zero for some alternative, or if a minimization-type criterion has a value of zero for any alternative. This issue stems directly from the data normalization technique used within the CORASO method. This study was conducted to identify alternative data normalization methods that can be used with CORASO. Three techniques are investigated: the linear normalization method (the default in CORASO), the Weitendorf normalization method, and the vector normalization method. Each was combined with the CORASO method to solve three different decision-making problems, with varying numbers of alternatives to be ranked and different numbers and types of criteria. The ranking results obtained with CORASO under each of the three normalization methods were compared with the results from other established MCDM methods. The findings confirmed that both the Weitendorf and vector normalization methods are suitable for use with CORASO. These two methods were then applied in a specific case where the native CORASO normalization could not be used, and the results consistently demonstrated that they fully meet the requirements when integrated with the CORASO method.
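The three column-wise normalization schemes the abstract compares are standard in the MCDM literature; the sketch below uses their textbook forms (the exact variants used inside CORASO may differ). Note how the linear (default) form divides by extrema and so breaks down on zero values, which the Weitendorf and vector forms avoid in the benefit-criterion case.

```python
import numpy as np

def linear_norm(col, maximize=True):
    # Linear normalization (CORASO's default, textbook form): divide by the
    # column max for benefit criteria, or min/value for cost criteria.
    # Breaks when the max is 0 (benefit) or any value is 0 (cost).
    return col / col.max() if maximize else col.min() / col

def weitendorf_norm(col, maximize=True):
    # Weitendorf (max-min) normalization: rescale the column to [0, 1].
    rng = col.max() - col.min()
    return (col - col.min()) / rng if maximize else (col.max() - col) / rng

def vector_norm(col, maximize=True):
    # Vector normalization: divide by the Euclidean norm of the column.
    n = np.linalg.norm(col)
    return col / n if maximize else 1 - col / n
```

A benefit column containing a zero, e.g. `[0, 5, 10]`, is handled cleanly by both alternatives: Weitendorf maps it to `[0, 0.5, 1]`.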
A new complete color normalization method for H&E stained histopathological images
The popularity of digital histopathology is growing rapidly with the development of computer-aided disease diagnosis systems. However, color variations due to manual cell sectioning and stain concentration complicate digital pathological image analysis tasks such as histopathological image segmentation and classification. Normalization of these variations is therefore needed to obtain promising results. The proposed research introduces a reliable and robust new complete color normalization method that addresses the problems of color and stain variability. The method involves three phases: enhanced fuzzy illuminant normalization, fuzzy-based stain normalization, and modified spectral normalization. Extensive simulations are performed and validated on histopathological images. The presented algorithm outperforms existing conventional normalization methods by overcoming certain of their limitations and challenges; per the experimental quality metrics and comparative analysis, it performs efficiently and provides promising results.
A Reversible Automatic Selection Normalization (RASN) Deep Network for Predicting in the Smart Agriculture System
Owing to their nonlinear modeling capabilities, deep learning prediction networks have become widely used in smart agriculture. Because sensing data is noisy and highly nonlinear, improving their prediction performance remains an open problem. This paper proposes a Reversible Automatic Selection Normalization (RASN) network, which integrates a normalization layer and a renormalization layer to evaluate and select the normalization module of the prediction model. Prediction accuracy is improved effectively by scaling and translating the input with learnable parameters. Application results show that the model has good prediction ability and adaptability for the greenhouse in the Smart Agriculture System.
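The core mechanism the abstract describes, scale and translate the input with learnable parameters, then invert that transform on the model's output, can be sketched as a reversible normalization wrapper. The class and parameter names below are illustrative, not RASN's actual architecture, and the "learnable" parameters are shown as plain arrays rather than trained weights.

```python
import numpy as np

class ReversibleNorm:
    """Sketch of a reversible normalization layer: scale and shift the
    input with (learnable) parameters, then invert the transform on the
    model's output so predictions return to the original data scale."""

    def __init__(self, dim):
        self.gamma = np.ones(dim)   # learnable scale (here: untrained)
        self.beta = np.zeros(dim)   # learnable shift (here: untrained)
        self.mu = self.sigma = None

    def normalize(self, x):
        # Standardize per feature, then apply the learnable affine map.
        self.mu = x.mean(axis=0)
        self.sigma = x.std(axis=0) + 1e-8
        return self.gamma * (x - self.mu) / self.sigma + self.beta

    def renormalize(self, y):
        # Exact inverse of normalize(): undo the affine map, then rescale.
        return (y - self.beta) / self.gamma * self.sigma + self.mu
```

Because the transform is affine and its statistics are stored, `renormalize(normalize(x))` recovers `x` exactly, which is what makes the layer reversible.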
Testing relevant hypotheses in functional time series via self-normalization
We develop methodology for testing relevant hypotheses about functional time series in a tuning-free way. Instead of testing for exact equality, e.g. for the equality of two mean functions from two independent time series, we propose to test the null hypothesis of no relevant deviation. In the two-sample problem this means that an L²-distance between the two mean functions is smaller than a prespecified threshold. For such hypotheses self-normalization, which was introduced in 2010 by Shao, and Shao and Zhang, and is commonly used to avoid the estimation of nuisance parameters, is not directly applicable. We develop new self-normalized procedures for testing relevant hypotheses in the one-sample, two-sample and change point problems and investigate their asymptotic properties. Finite sample properties of the proposed tests are illustrated by means of a simulation study and data examples. Our main focus is on functional time series, but extensions to other settings are also briefly discussed.
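For readers unfamiliar with self-normalization, the classical Shao (2010)-style statistic for an exact mean hypothesis (the starting point the abstract says is not directly applicable to relevant hypotheses) can be sketched for a scalar series as follows; this is a textbook illustration, not the paper's functional procedure.

```python
import numpy as np

def self_normalized_stat(x, mu0=0.0):
    # Self-normalized statistic for H0: E[X_t] == mu0, scalar series.
    # The recursive partial sums S_t play the role of a long-run variance
    # estimate, so no bandwidth or other tuning parameter is needed.
    n = len(x)
    s = np.cumsum(x - mu0)                     # partial sums S_t
    t = np.arange(1, n + 1)
    v = np.sum((s - t * s[-1] / n) ** 2) / n**2  # self-normalizer V_n
    return s[-1] ** 2 / (n * v)                # n * (mean - mu0)^2 / V_n
```

Under the null the statistic converges to a pivotal (non-standard) limit distribution, so critical values can be tabulated once instead of estimating nuisance parameters.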
Cross-Subject EEG-Based Emotion Recognition Through Neural Networks With Stratified Normalization
Due to a large number of potential applications, a good deal of effort has recently been made toward creating machine learning models that can recognize evoked emotions from one's physiological recordings. In particular, researchers are investigating the use of EEG as a low-cost, non-invasive method. However, the poor homogeneity of EEG activity across participants hinders the implementation of such a system, requiring a time-consuming calibration stage. In this study, we introduce a new participant-based feature normalization method, named stratified normalization, for training deep neural networks in the task of cross-subject emotion classification from EEG signals. The new method is able to subtract inter-participant variability while maintaining the emotion information in the data. We carried out our analysis on the SEED dataset, which contains 62-channel EEG recordings collected from 15 participants watching film clips. Results demonstrate that networks trained with stratified normalization significantly outperformed standard training with batch normalization. In addition, the highest model performance was achieved when extracting EEG features with the multitaper method, reaching a classification accuracy of 91.6% for two emotion categories (positive and negative) and 79.6% for three (also neutral). This analysis provides us with great insight into the potential benefits that stratified normalization can have when developing any cross-subject model based on EEG.
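The idea of a participant-based normalization, removing inter-participant offsets while preserving within-participant variation, can be illustrated by z-scoring each participant's feature block with its own statistics. This is a simplified sketch of that principle; the paper's exact stratified-normalization scheme may differ in detail.

```python
import numpy as np

def stratified_normalize(features, participant_ids):
    # Illustrative per-participant normalization: standardize each
    # participant's rows with that participant's own mean and std,
    # removing inter-participant offsets while keeping the relative
    # (emotion-related) variation within each participant's recordings.
    out = np.empty(features.shape, dtype=float)
    for pid in np.unique(participant_ids):
        mask = participant_ids == pid
        block = features[mask]
        out[mask] = (block - block.mean(axis=0)) / (block.std(axis=0) + 1e-8)
    return out
```

After this step every participant's feature block has zero mean per feature, so a cross-subject classifier no longer sees participant-specific baselines.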
Group Normalization
Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems: BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch size, and its accuracy is stable across a wide range of batch sizes. On ResNet-50 trained on ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; at typical batch sizes, GN is comparable to BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO (https://github.com/facebookresearch/Detectron/blob/master/projects/GN), and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.
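The abstract's "few lines of code" claim is easy to verify: below is a NumPy rendition of Group Normalization for an (N, C, H, W) tensor (the paper's own snippet is in TensorFlow). Statistics are computed per sample and per channel group, never across the batch, which is why the result is independent of the batch size N.

```python
import numpy as np

def group_norm(x, num_groups, gamma=None, beta=None, eps=1e-5):
    # Group Normalization over an (N, C, H, W) tensor: split the C
    # channels into groups and normalize with per-(sample, group)
    # mean and variance, independent of the batch dimension N.
    n, c, h, w = x.shape
    g = num_groups
    xg = x.reshape(n, g, c // g, h, w)
    mean = xg.mean(axis=(2, 3, 4), keepdims=True)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    out = ((xg - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
    if gamma is not None:  # optional learnable per-channel affine, as in BN
        out = out * gamma.reshape(1, c, 1, 1) + beta.reshape(1, c, 1, 1)
    return out
```

Setting `num_groups=1` recovers Layer Normalization and `num_groups=C` recovers Instance Normalization, the two extremes between which GN interpolates.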