7,182 results for "NORMALIZATION"
Recognizing and responding to normalization of deviance
\"Contains guidelines to assist facilities in recognizing and addressing the phenomenon of normalization of deviation--Provides techniques for addressing normalized deviations and techniques to eliminate waste in all manufacturing processes--Describes methods for identifying normalized deviation as well as where to find deviations--Includes techniques to reduce operational normalization of deviance and to reduce organizational normalization of deviance; Market description: Process safety professionals in all areas of manufacturing; Process safety consultants; Chemical engineering students; Certified safety professionals\"-- Provided by publisher.
A new complete color normalization method for H&E stained histopathological images
The popularity of digital histopathology is growing rapidly in the development of computer-aided disease diagnosis systems. However, color variations due to manual cell sectioning and stain concentration make various digital pathological image analysis tasks, such as histopathological image segmentation and classification, challenging. Hence, these variations must be normalized to obtain reliable results. The proposed research introduces a reliable and robust new complete color normalization method that addresses the problems of color and stain variability. The method involves three phases, namely enhanced fuzzy illuminant normalization, fuzzy-based stain normalization, and modified spectral normalization. Extensive simulations are performed and validated on histopathological images. The presented algorithm outperforms existing conventional normalization methods by overcoming certain of their limitations and challenges. According to the experimental quality metrics and comparative analysis, the proposed algorithm performs efficiently and provides promising results.
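For orientation, the kind of classical baseline such methods improve on can be sketched in a few lines. The following is a rough Reinhard-style statistics-matching sketch, not the paper's fuzzy method; the function name and implementation choices are illustrative only.

```python
# Minimal sketch of a classical color-normalization baseline for H&E images:
# match the per-channel LAB mean/std of a source slide to a target slide.
# This is NOT the paper's fuzzy method; it only illustrates the task.
import numpy as np
from skimage import color

def reinhard_normalize(source_rgb: np.ndarray, target_rgb: np.ndarray) -> np.ndarray:
    """source_rgb, target_rgb: float RGB images in [0, 1], shape (H, W, 3)."""
    src = color.rgb2lab(source_rgb)
    tgt = color.rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        # Shift and rescale each channel so its statistics match the target.
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-8) * t_sd + t_mu
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```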
A Reversible Automatic Selection Normalization (RASN) Deep Network for Predicting in the Smart Agriculture System
Owing to their nonlinear modeling capabilities, deep learning prediction networks have become widely used in smart agriculture. However, because sensing data are noisy and highly nonlinear, improving prediction performance remains an open problem. This paper proposes a Reversible Automatic Selection Normalization (RASN) network, which integrates normalization and renormalization layers to evaluate and select the normalization module of the prediction model. Scaling and translating the input with learnable parameters effectively improves prediction accuracy. Application results show that the model has good prediction ability and adaptability for greenhouses in a smart agriculture system.
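The core normalize-then-denormalize idea can be sketched as follows: a generic reversible layer with learnable scale and shift, assuming (batch, time, features) input. This is a rough sketch of the general pattern, not the authors' exact RASN module.

```python
# Sketch of a reversible normalization layer for time-series prediction:
# normalize the input with learnable affine parameters on the way in,
# then invert the transform on the model's output. Illustrative only.
import torch
import torch.nn as nn

class ReversibleNorm(nn.Module):
    def __init__(self, num_features: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_features))   # learnable scale
        self.beta = nn.Parameter(torch.zeros(num_features))   # learnable shift

    def normalize(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); keep statistics for exact inversion.
        self.mu = x.mean(dim=1, keepdim=True)
        self.sigma = x.std(dim=1, keepdim=True) + self.eps
        return (x - self.mu) / self.sigma * self.gamma + self.beta

    def denormalize(self, y: torch.Tensor) -> torch.Tensor:
        # Invert the affine transform so predictions return to the data scale.
        return (y - self.beta) / self.gamma * self.sigma + self.mu
```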
Testing relevant hypotheses in functional time series via self-normalization
We develop methodology for testing relevant hypotheses about functional time series in a tuning-free way. Instead of testing for exact equality, e.g. the equality of two mean functions from two independent time series, we propose to test the null hypothesis of no relevant deviation. In the two-sample problem this means that the L²-distance between the two mean functions is smaller than a prespecified threshold. For such hypotheses, self-normalization, which was introduced in 2010 by Shao and by Shao and Zhang and is commonly used to avoid the estimation of nuisance parameters, is not directly applicable. We develop new self-normalized procedures for testing relevant hypotheses in the one-sample, two-sample, and change-point problems and investigate their asymptotic properties. Finite-sample properties of the proposed tests are illustrated by means of a simulation study and data examples. Our main focus is on functional time series, but extensions to other settings are also briefly discussed.
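In the two-sample case, the relevant hypotheses can be written as follows; this is a plausible formalization consistent with the abstract, with $\mu_1, \mu_2$ the two mean functions and $\Delta > 0$ the prespecified threshold.

```latex
% Relevant hypotheses for two mean functions \mu_1, \mu_2 and a
% user-chosen threshold \Delta > 0 (notation assumed here):
H_0 : d^2 = \int \bigl(\mu_1(t) - \mu_2(t)\bigr)^2 \, dt \le \Delta
\qquad \text{versus} \qquad
H_1 : d^2 > \Delta
```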
Cross-Subject EEG-Based Emotion Recognition Through Neural Networks With Stratified Normalization
Owing to the large number of potential applications, considerable effort has recently been devoted to creating machine learning models that can recognize evoked emotions from physiological recordings. In particular, researchers are investigating EEG as a low-cost, non-invasive modality. However, the poor homogeneity of EEG activity across participants hinders the implementation of such systems, requiring a time-consuming calibration stage. In this study, we introduce a new participant-based feature normalization method, named stratified normalization, for training deep neural networks on cross-subject emotion classification from EEG signals. The new method removes inter-participant variability while maintaining the emotion information in the data. We carried out our analysis on the SEED dataset, which contains 62-channel EEG recordings collected from 15 participants watching film clips. Results demonstrate that networks trained with stratified normalization significantly outperformed standard training with batch normalization. In addition, the highest model performance was achieved when extracting EEG features with the multitaper method, reaching a classification accuracy of 91.6% for two emotion categories (positive and negative) and 79.6% for three (positive, neutral, and negative). This analysis offers insight into the benefits that stratified normalization can bring to any cross-subject EEG-based model.
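One plausible reading of participant-based normalization is to z-score each participant's features using only that participant's own statistics, so inter-participant offsets are removed while within-participant (emotion-related) structure is preserved. The sketch below illustrates that idea and is not the authors' exact scheme.

```python
# Sketch of participant-wise ("stratified") feature normalization:
# standardize each participant's features with that participant's own
# mean and standard deviation. Illustrative, not the paper's exact method.
import numpy as np

def stratified_normalize(features: np.ndarray, participant_ids: np.ndarray) -> np.ndarray:
    """features: (n_samples, n_features); participant_ids: (n_samples,)."""
    out = np.empty_like(features, dtype=float)
    for pid in np.unique(participant_ids):
        mask = participant_ids == pid
        mu = features[mask].mean(axis=0)
        sd = features[mask].std(axis=0) + 1e-8  # guard against zero variance
        out[mask] = (features[mask] - mu) / sd
    return out
```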
Normalization in the simply typed λμ-calculus
In this paper, in connection with the program of extending the Curry–Howard isomorphism to classical logic, we study the $\lambda\mu$-calculus of Parigot, emphasizing the difference between the original version of Parigot and the version of de Groote in terms of normalization properties. In order to talk about a satisfactory representation of the integers, besides the usual $\beta$-, $\mu$-, and $\mu'$-reductions, we consider the $\lambda\mu$-calculus augmented with the reduction rules $\rho$, $\theta$, and $\varepsilon$. We show that we need all of these rules for this purpose. Then we prove that, with the syntax of Parigot, the calculus enjoys the strong normalization property even when we add the rules $\rho$, $\theta$, and $\varepsilon$, while the $\lambda\mu$-calculus presented with the more flexible de Groote-style syntax, in contrast, has only the weak normalization property. In particular, we present a normalization algorithm for the $\beta\mu\mu'\rho\theta\varepsilon$-reduction in the de Groote-style calculus.
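To make "normalization algorithm" concrete, here is a toy normal-order reducer for the plain untyped λ-calculus. The paper's algorithm covers the far richer $\beta\mu\mu'\rho\theta\varepsilon$-reduction; everything below is illustrative only.

```python
# Toy normalization algorithm: normal-order (leftmost-outermost)
# beta-reduction for the plain untyped lambda-calculus. Conveys only the
# basic shape of such a procedure, not the paper's lambda-mu reduction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Lam: param: str; body: "Term"
@dataclass(frozen=True)
class App: fun: "Term"; arg: "Term"

Term = Var | Lam | App

def subst(t: Term, name: str, value: Term) -> Term:
    # Naive substitution: assumes all bound variables are distinctly named,
    # so capture-avoidance is unnecessary (a common toy simplification).
    match t:
        case Var(n):    return value if n == name else t
        case Lam(p, b): return t if p == name else Lam(p, subst(b, name, value))
        case App(f, a): return App(subst(f, name, value), subst(a, name, value))

def step(t: Term) -> Term | None:
    """One normal-order reduction step, or None if t is in normal form."""
    match t:
        case App(Lam(p, b), a):           # contract the leftmost-outermost redex
            return subst(b, p, a)
        case App(f, a):
            sf = step(f)
            if sf is not None: return App(sf, a)
            sa = step(a)
            return App(f, sa) if sa is not None else None
        case Lam(p, b):
            sb = step(b)
            return Lam(p, sb) if sb is not None else None
        case _:
            return None

def normalize(t: Term, fuel: int = 1000) -> Term:
    # Iterate single steps; `fuel` guards against non-terminating terms.
    for _ in range(fuel):
        nxt = step(t)
        if nxt is None:
            return t
        t = nxt
    return t
```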
Group Normalization
Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems: BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch-statistics estimation. This limits BN's usage for training larger models and for transferring features to computer vision tasks such as detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes the mean and variance within each group for normalization. GN's computation is independent of batch size, and its accuracy is stable across a wide range of batch sizes. On ResNet-50 trained on ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; with typical batch sizes, GN is comparably good to BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation on COCO (https://github.com/facebookresearch/Detectron/blob/master/projects/GN) and for video classification on Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.
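The closing claim is easy to make concrete. Below is a rough NumPy sketch of GN for NCHW feature maps, a paraphrase of the well-known recipe rather than the paper's exact reference code.

```python
# Rough NumPy sketch of Group Normalization for NCHW feature maps:
# reshape channels into groups and normalize within each group, so the
# statistics are independent of the batch size.
import numpy as np

def group_norm(x: np.ndarray, gamma: np.ndarray, beta: np.ndarray,
               num_groups: int = 32, eps: float = 1e-5) -> np.ndarray:
    """x: (N, C, H, W); gamma, beta: per-channel scale/shift of shape (C,)."""
    n, c, h, w = x.shape
    x = x.reshape(n, num_groups, c // num_groups, h, w)
    # Mean and variance are computed per sample and per group, never
    # across the batch dimension.
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    x = x.reshape(n, c, h, w)
    return x * gamma.reshape(1, c, 1, 1) + beta.reshape(1, c, 1, 1)
```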
Are MCDA Methods Benchmarkable? A Comparative Study of TOPSIS, VIKOR, COPRAS, and PROMETHEE II Methods
Multi-Criteria Decision Analysis (MCDA) methods are successfully applied in different fields and disciplines. However, many studies raise the problem of selecting the proper methods and parameters for a given decision problem. This paper attempts to benchmark selected MCDA methods. To achieve that, a set of feasible MCDA methods was identified and, based on reference literature guidelines, a simulation experiment was planned. The formal foundations of the authors' approach provide a reference set of MCDA methods (the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), Complex Proportional Assessment (COPRAS), and PROMETHEE II: Preference Ranking Organization Method for Enrichment of Evaluations) along with similarity coefficients for comparing their results (Spearman correlation coefficients and the WS coefficient). This allowed the generation of a set of models differentiated by the number of attributes and decision variants, as well as a similarity study of the obtained ranking sets. As the authors aim to build a comprehensive benchmarking model, additional dimensions were taken into account during the simulation experiments: various weighting methods (with results obtained using the entropy and standard deviation methods) and varied techniques for normalizing the MCDA model input data. Comparative analyses showed in detail how particular parameter values influence the final form and similarity of the rankings obtained by the different MCDA methods.
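To make the benchmarked pipeline concrete, here is a compact sketch of one of the four methods, TOPSIS, with the weight vector passed in directly (the paper derives weights via the entropy and standard deviation methods).

```python
# Compact sketch of TOPSIS: vector-normalize the decision matrix, weight it,
# then rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """matrix: (alternatives, criteria); weights sum to 1; benefit[j] is
    True if criterion j is to be maximized. Returns closeness scores
    (higher = better)."""
    # Vector normalization of each criterion column, then weighting.
    v = matrix / np.linalg.norm(matrix, axis=0) * weights
    # Ideal and anti-ideal points depend on criterion direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)
```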
A review of convolutional neural networks in computer vision
With the rapid development of deep convolutional neural networks (CNNs), a series of exemplary advances have been made in several areas of computer vision, including image classification, semantic segmentation, object detection, and image super-resolution reconstruction. CNNs are well suited to learning and expressing features autonomously: feature extraction from raw input data can be realized by training CNN models matched to practical applications. Owing to the rapid progress of deep learning technology, CNN architectures are becoming increasingly complex and diverse, and they have gradually replaced traditional machine learning methods. This paper presents an elementary understanding of CNN components and their functions, including input layers, convolution layers, pooling layers, activation functions, batch normalization, dropout, fully connected layers, and output layers. On this basis, the paper gives a comprehensive overview of past and current research on the application of CNN models in computer vision, e.g., image classification, object detection, and video prediction. In addition, we summarize the challenges and solutions of deep CNNs, and future research directions are discussed.
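The components enumerated above can be wired into a minimal model in a few lines. The PyTorch sketch below assumes 3x32x32 input images and arbitrary layer sizes, purely for illustration.

```python
# Minimal network wiring together the components the review enumerates:
# convolution, batch normalization, activation, pooling, dropout, and a
# fully connected output layer. Assumes 3x32x32 inputs, 10 output classes.
import torch.nn as nn

tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution layer
    nn.BatchNorm2d(16),                          # batch normalization
    nn.ReLU(),                                   # activation function
    nn.MaxPool2d(2),                             # pooling layer: 32x32 -> 16x16
    nn.Flatten(),
    nn.Dropout(0.5),                             # dropout
    nn.Linear(16 * 16 * 16, 10),                 # fully connected output layer
)
```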
Multilingual and cross-domain temporal tagging
Extraction and normalization of temporal expressions from documents are important steps towards deep text understanding and a prerequisite for many NLP tasks such as information extraction, question answering, and document summarization. There are different ways to express (the same) temporal information in documents; however, once identified, temporal expressions can be normalized to a standard format, which allows temporal information to be used in a term- and language-independent way. In this paper, we describe the challenges of temporal tagging in different domains, give an overview of existing annotated corpora, and survey existing approaches to temporal tagging. Finally, we present our publicly available temporal tagger HeidelTime, which is easily extensible to further languages due to its strict separation of source code from language resources such as patterns and rules. We present a broad evaluation on multiple languages and domains, using existing corpora as well as a newly created corpus for a language/domain combination for which no annotated corpus had been available so far.
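As a toy illustration of rule-based extraction and normalization in the spirit described above: a single regex pattern maps one surface form to a standard ISO 8601-like value. Real taggers such as HeidelTime use large, language-specific rule sets; this pattern is purely illustrative.

```python
# Toy rule-based temporal tagger: extract "Month D, YYYY" mentions and
# normalize them to the standard YYYY-MM-DD format. Illustrative only.
import re

MONTHS = {"january": 1, "february": 2, "march": 3, "april": 4, "may": 5,
          "june": 6, "july": 7, "august": 8, "september": 9,
          "october": 10, "november": 11, "december": 12}
PATTERN = re.compile(r"\b(" + "|".join(MONTHS) + r")\s+(\d{1,2}),\s*(\d{4})\b",
                     re.IGNORECASE)

def normalize_dates(text: str) -> list[str]:
    """Return normalized values for every date mention found in `text`."""
    results = []
    for month, day, year in PATTERN.findall(text):
        results.append(f"{int(year):04d}-{MONTHS[month.lower()]:02d}-{int(day):02d}")
    return results

print(normalize_dates("The corpus was released on March 5, 2013."))
# ['2013-03-05']
```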