823 results for "Permissible error"
Justification of the use of a worn-out digital micrometer in repair production
Issues related to the evaluation of the results of direct multiple measurements with a worn digital micrometer are considered. It was determined that the total error of the device in service exceeds the limit of permissible error stated in its data sheet (passport) by a factor of 1.25 and is equal to 5 μm. However, for the inspection of parts in repair production, mandatory verification of measuring instruments is not required, and calibration is advisory in nature. Based on the obtained value, we conclude that the device can be used to measure the dimensions of parts with a tolerance of at least 15 µm.
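
The arithmetic behind this conclusion can be checked directly. A minimal sketch, assuming the common metrological rule of thumb that an instrument's total error should not exceed one third of the part tolerance (the rule itself is an assumption, not stated in the abstract); the numbers are taken from the abstract:

```python
# Numbers from the abstract: the in-service total error equals 5 um, which is
# 1.25 times the passport limit, so the passport limit is 5 / 1.25 = 4 um.
PASSPORT_LIMIT_UM = 4.0
EXCESS_FACTOR = 1.25

total_error_um = PASSPORT_LIMIT_UM * EXCESS_FACTOR  # 5.0 um, as reported

def min_measurable_tolerance(total_error_um: float, ratio: float = 3.0) -> float:
    """Smallest part tolerance the instrument can control, assuming the
    instrument error should be at most 1/ratio of the tolerance (the common
    1/3 rule; an assumption, not the paper's stated criterion)."""
    return total_error_um * ratio

print(f"total error: {total_error_um} um")
print(f"minimum tolerance: {min_measurable_tolerance(total_error_um)} um")  # 15 um
```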
On the validity of the effective field theory approach to SM precision tests
We discuss the conditions for an effective field theory (EFT) to give an adequate low-energy description of an underlying physics beyond the Standard Model (SM). Starting from the EFT in which the SM is extended by dimension-6 operators, experimental data can be used without further assumptions to measure (or set limits on) the EFT parameters. The interpretation of these results instead requires a set of broad assumptions (e.g. power-counting rules) about the UV dynamics. This allows one to establish, in a bottom-up approach, the validity range of the EFT description, and to assess the error associated with the truncation of the EFT series. We give a practical prescription for how experimental results could be reported, so that they admit a maximally broad range of theoretical interpretations. Namely, the experimental constraints on dimension-6 operators should be reported as functions of the kinematic variables that set the relevant energy scale of the studied process. This is especially important for hadron collider experiments, where collisions probe a wide range of energy scales.
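
To illustrate why reporting constraints per kinematic bin matters, here is a purely illustrative sketch (not the paper's prescription) of naive dimensional power counting: for a dimension-6 operator with Wilson coefficient c and cutoff Lambda, the leading effect scales as E^2/Lambda^2 and the neglected dimension-8 terms as E^4/Lambda^4, so the truncation error grows with the probed energy. All numbers and O(1) prefactors below are assumptions:

```python
# Illustrative power counting for EFT validity, not taken from the paper.

def dim6_correction(c: float, Lambda: float, E: float) -> float:
    """Leading relative correction from a dimension-6 operator, ~ c E^2/Lambda^2."""
    return c * (E / Lambda) ** 2

def truncation_uncertainty(c: float, Lambda: float, E: float) -> float:
    """Naive estimate of the neglected dimension-8 contribution, ~ (E/Lambda)^4.
    The O(1) prefactor is a power-counting assumption."""
    return abs(c) * (E / Lambda) ** 4

# Hypothetical energy bins in TeV, with c = 1 and Lambda = 3 TeV assumed.
for E in (0.25, 0.5, 1.0, 2.0):
    corr = dim6_correction(c=1.0, Lambda=3.0, E=E)
    err = truncation_uncertainty(c=1.0, Lambda=3.0, E=E)
    print(f"E = {E:4.2f} TeV: dim-6 shift {corr:.4f}, truncation error {err:.6f}")
```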
Variable selection with error control: another look at stability selection
Stability selection was recently introduced by Meinshausen and Bühlmann as a very general technique designed to improve the performance of a variable selection algorithm. It is based on aggregating the results of applying a selection procedure to subsamples of the data. We introduce a variant, called complementary pairs stability selection, and derive bounds both on the expected number of variables included by complementary pairs stability selection that have low selection probability under the original procedure, and on the expected number of high selection probability variables that are excluded. These results require no assumptions (e.g. exchangeability) on the underlying model or on the quality of the original selection procedure. Under reasonable shape restrictions, the bounds can be further tightened, yielding improved error control and therefore increasing the applicability of the methodology.
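
For concreteness, a minimal sketch of complementary pairs stability selection, assuming a scikit-learn Lasso as the base selection procedure; the number of splits, regularization strength, and selection threshold below are illustrative choices, not the paper's recommendations:

```python
import numpy as np
from sklearn.linear_model import Lasso

def cpss(X, y, n_splits=50, alpha=0.1, threshold=0.6, seed=0):
    """Complementary pairs stability selection: draw disjoint half-samples in
    pairs, run the base selector on each half, and keep variables whose
    selection frequency exceeds the threshold."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_splits):
        perm = rng.permutation(n)
        halves = (perm[: n // 2], perm[n // 2 : 2 * (n // 2)])
        for idx in halves:  # the two halves form one complementary pair
            coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
            counts += coef != 0
    freq = counts / (2 * n_splits)
    return np.flatnonzero(freq >= threshold)

# Toy usage: 3 informative variables out of 20.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(200)
print(cpss(X, y))
```

Drawing both halves of each split from a single permutation is what makes the pairs complementary, which is the structure the paper's error-control bounds exploit.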
The Open Quantum Materials Database (OQMD): assessing the accuracy of DFT formation energies
The Open Quantum Materials Database (OQMD) is a high-throughput database currently consisting of nearly 300,000 density functional theory (DFT) total energy calculations of compounds from the Inorganic Crystal Structure Database (ICSD) and decorations of commonly occurring crystal structures. To maximise the impact of these data, the entire database is being made available, without restrictions, at www.oqmd.org/download . In this paper, we outline the structure and contents of the database, and then use it to evaluate the accuracy of the calculations therein by comparing DFT predictions with experimental measurements for the stability of all elemental ground-state structures and 1,670 experimental formation energies of compounds. This represents the largest comparison between DFT and experimental formation energies to date. The apparent mean absolute error between experimental measurements and our calculations is 0.096 eV/atom. In order to estimate how much error to attribute to the DFT calculations, we also examine deviation between different experimental measurements themselves where multiple sources are available, and find a surprisingly large mean absolute error of 0.082 eV/atom. Hence, we suggest that a significant fraction of the error between DFT and experimental formation energies may be attributed to experimental uncertainties. Finally, we evaluate the stability of compounds in the OQMD (including compounds obtained from the ICSD as well as hypothetical structures), which allows us to predict the existence of ~3,200 new compounds that have not been experimentally characterised and uncover trends in material discovery, based on historical data available within the ICSD.

Materials database: A catalog for all

Researchers in the USA and Germany introduce a database of over 300,000 calculations detailing the electronic structure and stability of inorganic materials. Chris Wolverton and co-workers from Northwestern University and the Leibniz Institute for Information Infrastructure describe the structure of the Open Quantum Materials Database, a catalog storing information about the electronic properties of a significant fraction of the known crystalline solids determined using density functional theory calculations. Density functional theory is a powerful computational technique that uses quantum mechanics to determine the lowest energy state of the electrons travelling through a lattice of atoms. The researchers verified the accuracy of the calculations by comparing them to experimental results on 1,670 crystals. The database is freely available to scientists, enabling them to design and predict the properties of as yet unrealised materials.
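
The headline comparison reduces to a mean absolute error over paired formation energies. A minimal sketch, with made-up values standing in for the paper's 1,670 compound pairs:

```python
import numpy as np

# Hypothetical paired formation energies (eV/atom); the paper's actual
# dataset covers 1,670 compounds and yields an MAE of 0.096 eV/atom.
dft_ev_per_atom = np.array([-0.52, -1.10, -0.31, -2.05])
expt_ev_per_atom = np.array([-0.45, -1.22, -0.28, -1.93])

mae = np.mean(np.abs(dft_ev_per_atom - expt_ev_per_atom))
print(f"MAE = {mae:.3f} eV/atom")
```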
Elucidating interplay of speed and accuracy in biological error correction
One of the most fascinating features of biological systems is the ability to sustain high accuracy of all major cellular processes despite the stochastic nature of the underlying chemical processes. It is widely believed that such low error values are the result of the error-correcting mechanism known as kinetic proofreading. However, it is usually argued that enhancing the accuracy should result in slowing down the process, leading to the so-called speed-accuracy trade-off. We developed a discrete-state stochastic framework that allowed us to investigate the mechanisms of proofreading using the method of first-passage processes. With this framework, we simultaneously analyzed the speed and accuracy of two fundamental biological processes, DNA replication and tRNA selection during translation. The results indicate that these systems tend to optimize speed rather than accuracy, as long as the error level is tolerable. Interestingly, for these processes, certain kinetic parameters lie in a suboptimal region where perturbing them can improve both speed and accuracy. Additional constraints due to the energetic cost of proofreading also play a role in the error-correcting process. Our theoretical findings provide a microscopic picture of how complex biological processes are able to function so fast with high accuracy.
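
This is not the paper's first-passage framework, but the classic kinetic proofreading estimate it builds on can be sketched in a few lines: a single discrimination step gives an error fraction of roughly exp(-dG/kT), and each additional proofreading step can reuse the same free-energy discrimination, multiplying the error down at a cost in speed and energy. The numbers below are illustrative:

```python
import math

def error_fraction(dG_kT: float, proofreading_steps: int = 0) -> float:
    """Minimal (Hopfield-limit) error fraction with n proofreading steps,
    each reusing the same discrimination free energy dG (in units of kT)."""
    return math.exp(-dG_kT) ** (1 + proofreading_steps)

# Assumed discrimination of 5 kT; each proofreading step squares the factor.
for n in (0, 1, 2):
    print(f"{n} proofreading step(s): error ~ {error_fraction(5.0, n):.2e}")
```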
All-Sky Microwave Radiance Assimilation in NCEP’s GSI Analysis System
The capability of all-sky microwave radiance assimilation in the Gridpoint Statistical Interpolation (GSI) analysis system has been developed at the National Centers for Environmental Prediction (NCEP). This development effort required the adaptation of quality control, observation error assignment, bias correction, and background error covariance to all-sky conditions within the ensemble–variational (EnVar) framework. The assimilation of cloudy radiances from the Advanced Microwave Sounding Unit-A (AMSU-A) microwave radiometer for ocean fields of view (FOVs) is the primary emphasis of this study. In the original operational hybrid 3D EnVar Global Forecast System (GFS), the clear-sky approach for radiance data assimilation is applied. Changes to data thinning and quality control have allowed all-sky satellite radiances to be assimilated in the GSI. Along with the symmetric observation error assignment, additional situation-dependent observation error inflation is employed for all-sky conditions. Moreover, in addition to the current radiance bias correction, a new bias correction strategy has been applied to all-sky radiances. In this work, the static background error variance and the ensemble spread of cloud water are examined, and the levels of cloud variability from the ensemble forecast in single- and dual-resolution configurations are discussed. Overall, the all-sky approach provides more realistic simulated brightness temperatures and cloud water analysis increments, and improves analysis off the west coasts of the continents by reducing a known bias in stratus. An approximate 10% increase in the use of AMSU-A channels 1–5 and a 12% increase for channel 15 are also observed. The all-sky AMSU-A radiance assimilation became operational in the 4D EnVar GFS system upgrade of 12 May 2016.
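
The "symmetric observation error assignment" mentioned in the abstract is commonly implemented as a linear ramp in the symmetric cloud amount, the average of the observed and simulated cloud amounts, in the style of Geer and Bauer. A sketch under that assumption; the clear/cloudy bounds and error values below are illustrative, not GSI's operational settings:

```python
def symmetric_obs_error(c_obs: float, c_model: float,
                        err_clear: float = 2.0, err_cloudy: float = 20.0,
                        c_clear: float = 0.05, c_cloudy: float = 0.45) -> float:
    """Observation error (K) as a linear ramp in the symmetric cloud amount:
    small for clear scenes, large when either the observation or the model
    background is cloudy. All parameter values are illustrative assumptions."""
    c_sym = 0.5 * (c_obs + c_model)
    if c_sym <= c_clear:
        return err_clear
    if c_sym >= c_cloudy:
        return err_cloudy
    w = (c_sym - c_clear) / (c_cloudy - c_clear)
    return err_clear + w * (err_cloudy - err_clear)

print(symmetric_obs_error(0.0, 0.0))  # clear scene: small assigned error
print(symmetric_obs_error(0.6, 0.1))  # cloud mismatch: inflated error
```

Using the symmetric (averaged) cloud amount rather than the observed amount alone keeps the assigned error from being biased by which side, observation or background, happens to carry the cloud.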
First-order methods of smooth convex optimization with inexact oracle
We introduce the notion of inexact first-order oracle and analyze the behavior of several first-order methods of smooth convex optimization used with such an oracle. This notion of inexact oracle naturally appears in the context of smoothing techniques, Moreau–Yosida regularization, Augmented Lagrangians and many other situations. We derive complexity estimates for primal, dual and fast gradient methods, and study in particular their dependence on the accuracy of the oracle and the desired accuracy of the objective function. We observe that the superiority of fast gradient methods over the classical ones is no longer absolute when an inexact oracle is used. We prove that, contrary to simple gradient schemes, fast gradient methods must necessarily suffer from error accumulation. Finally, we show that the notion of inexact oracle allows the application of first-order methods of smooth convex optimization to solve non-smooth or weakly smooth convex problems.
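
The paper's qualitative finding, that fast gradient methods accumulate oracle error while simple gradient schemes do not, can be probed with a toy experiment. A sketch assuming a quadratic objective and a bounded random perturbation of the gradient as the inexact oracle; both modelling choices are illustrative, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n, delta = 1.0, 50, 1e-3            # smoothness, dimension, oracle error
A = np.diag(np.linspace(0.01, L, n))   # quadratic f(x) = 0.5 x^T A x

def grad(x):
    """Inexact oracle: exact gradient plus a perturbation of norm delta."""
    noise = rng.standard_normal(n)
    return A @ x + delta * noise / np.linalg.norm(noise)

xg = np.ones(n)                        # iterate for the plain gradient method
x = y = np.ones(n)                     # iterates for the fast gradient method
t = 1.0
for _ in range(2000):
    xg = xg - (1 / L) * grad(xg)       # gradient method
    x_new = y - (1 / L) * grad(y)      # fast (Nesterov/FISTA-style) method
    t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
    y = x_new + ((t - 1) / t_new) * (x_new - x)
    x, t = x_new, t_new

f = lambda z: 0.5 * z @ A @ z
print(f"gradient method:      f = {f(xg):.2e}")
print(f"fast gradient method: f = {f(x):.2e}")
```

Comparing the two stagnation levels as delta varies is one way to see the trade-off the paper quantifies: acceleration buys a faster transient at the price of greater sensitivity to oracle error.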
Gluon-induced Higgs-strahlung at next-to-leading order QCD
Gluon-induced contributions to the associated production of a Higgs and a Z boson are calculated with NLO accuracy in QCD. They constitute a significant contribution to the cross section for this process. The perturbative correction factor (K-factor) is calculated in the limit of an infinite top-quark mass and a vanishing bottom-quark mass. The qualitative similarity of the results to the well-known ones for the gluon-fusion process gg → H allows one to conclude that rescaling the LO prediction by this K-factor leads to a reliable NLO result and a realistic estimate of the error due to missing higher-order perturbative effects. We consider the total inclusive cross section as well as a scenario with a boosted Higgs boson, where the Higgs boson's transverse momentum is restricted to values p_T,H > 200 GeV. In both cases, we find large correction factors K ≈ 2 in most of the parameter space.
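
The rescaling prescription itself is one line of arithmetic. A sketch with placeholder numbers (only the K ≈ 2 ballpark comes from the abstract):

```python
def nlo_estimate(sigma_lo_fb: float, k_factor: float) -> float:
    """NLO cross-section estimate obtained by rescaling the LO prediction
    with the perturbative correction factor K = sigma_NLO / sigma_LO."""
    return k_factor * sigma_lo_fb

sigma_lo = 50.0   # fb, hypothetical LO gg -> HZ cross section
K = 2.0           # the abstract reports K ~ 2 over most of the parameter space
print(f"sigma_NLO ~ {nlo_estimate(sigma_lo, K):.1f} fb")
```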
A Multiscale Variational Data Assimilation Scheme: Formulation and Illustration
A multiscale data assimilation (MS-DA) scheme is formulated for fine-resolution models. A decomposition of the cost function is derived for a set of distinct spatial scales. The decomposed cost function allows the background error covariance to be estimated separately for the distinct spatial scales, and multiple decorrelation length scales to be explicitly incorporated in the background error covariance. MS-DA minimizes the partitioned cost functions sequentially from large to small scales. The background error covariance with multiple decorrelation length scales enhances the spreading of sparse observations and prevents fine structures in high-resolution observations from being overly smoothed. The decomposition of the cost function also provides an avenue for mitigating the effects of scale aliasing and representativeness errors that inherently exist in a multiscale system, thus further improving the effectiveness of the assimilation of high-resolution observations. A set of one-dimensional experiments is performed to examine the properties of the MS-DA scheme. Emphasis is placed on the assimilation of patchy high-resolution observations representing radar and satellite measurements, alongside sparse observations representing those from conventional in situ platforms. The results illustrate how MS-DA improves the effectiveness of the assimilation of both these types of observations simultaneously.
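
The large-to-small sequential idea can be sketched in one dimension: perform an analysis with a long-decorrelation background error covariance first, then analyze the residual misfit with a short-decorrelation covariance. Grid size, length scales, and error variances below are illustrative assumptions, and a simple optimal-interpolation solve stands in for the variational minimization:

```python
import numpy as np

def gaussian_B(n, length, var):
    """Background error covariance with Gaussian spatial correlations."""
    i = np.arange(n)
    return var * np.exp(-0.5 * ((i[:, None] - i[None, :]) / length) ** 2)

def analyze(xb, yo, H, B, R):
    """One analysis step (optimal interpolation, equivalent to 3D-Var here)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (yo - H @ xb)

n = 100
rng = np.random.default_rng(0)
x_grid = np.arange(n)
truth = np.sin(2 * np.pi * x_grid / n) + 0.3 * np.sin(2 * np.pi * 8 * x_grid / n)
obs_idx = np.arange(0, n, 5)                 # sparse observations
H = np.eye(n)[obs_idx]
yo = truth[obs_idx] + 0.05 * rng.standard_normal(len(obs_idx))
R = 0.05 ** 2 * np.eye(len(obs_idx))

xb = np.zeros(n)
xa_large = analyze(xb, yo, H, gaussian_B(n, length=20.0, var=1.0), R)  # large scales first
xa = analyze(xa_large, yo, H, gaussian_B(n, length=3.0, var=0.1), R)   # then small scales
print(f"RMSE background {np.sqrt(np.mean((xb - truth) ** 2)):.3f}, "
      f"analysis {np.sqrt(np.mean((xa - truth) ** 2)):.3f}")
```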
Reference Values for 30 Common Biochemistry Analytes Across 5 Different Analyzers in Neonates and Children 30 Days to 18 Years of Age
Age-specific reference intervals (RIs) have been developed for biochemistry analytes in children. However, the ability to interpret results from multiple laboratories for one individual is limited. This study reports a head-to-head comparison of reference values and age-specific RIs for 30 biochemistry analytes for children across 5 analyzer types. Blood was collected from healthy newborns and children 30 days to <18 years of age. Serum aliquots from the same individual were analyzed on 5 analyzer types. Differences in the mean reference values of the analytes by analyzer type were investigated using mixed-effect regression analysis and by comparing the maximum variation between analyzers with the analyte-specific allowable total error reported in the Westgard QC database. Quantile regression was used to estimate age-specific RIs, using powers of age selected by fractional polynomial regression for the mean, with modification by sex when appropriate. The variation of age-specific mean reference values between analyzer types was within allowable total error (Westgard QC) for most analytes, and common age-specific reference limits are reported as functions of age and/or sex. Analyzer-specific reference limits for all analytes on the 5 analyzer types are also reported as functions of age and/or sex. This study provides quantitative and qualitative measures of the extent to which results for individual children can or cannot be compared across analyzer types, and of the feasibility of RI harmonization. The reported equations enable incorporation of age-specific RIs into laboratory information systems, improving evidence-based clinical decisions in children.
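
A minimal sketch of the quantile-regression step, assuming statsmodels' QuantReg and a single assumed power of age (the paper selects the powers by fractional polynomial regression, which is not reproduced here); the data below are simulated, not study data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
age = rng.uniform(0.1, 18.0, 500)                           # years
analyte = 40 + 10 * np.sqrt(age) + rng.normal(0, 5, 500)    # hypothetical analyte

# Fit the 2.5th and 97.5th percentiles as functions of an assumed power of
# age (age^0.5); together they give an age-specific 95% reference interval.
X = sm.add_constant(np.sqrt(age))
lower = sm.QuantReg(analyte, X).fit(q=0.025).params
upper = sm.QuantReg(analyte, X).fit(q=0.975).params

for a in (1.0, 5.0, 15.0):
    lo = lower[0] + lower[1] * np.sqrt(a)
    hi = upper[0] + upper[1] * np.sqrt(a)
    print(f"age {a:4.1f} y: reference interval ~ [{lo:.1f}, {hi:.1f}]")
```

Expressing the limits as smooth functions of age, rather than as a table of age brackets, is what allows the equations to be loaded directly into a laboratory information system, as the abstract describes.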