20,213 results for "Consistency"
Proof of unitarity of multidimensional discrete Fourier transform
The multidimensional discrete Fourier transform (MD-DFT) plays an important role in a growing number of signal processing applications. Its applicability as a unitary transform between discrete periodic sequences defined on multidimensional lattices rests on the Hermitian orthogonality of the vectors defining the MD-DFT matrix. A proof of the consistency of the MD-DFT formulation was first provided by Bernardini and Manduchi in 1994 using the Smith normal form theorem for integer matrices. In this work, a new proof is provided based on the nullity of the cardinal function at the nonzero cardinal points.
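As a quick illustration of the unitarity property this abstract refers to, here is a minimal numpy sketch that checks F^H F = I for a normalized multidimensional DFT. It only covers the separable case of a rectangular lattice (built as a Kronecker product); the paper treats general multidimensional lattices via the Smith normal form, which this sketch does not attempt. The helper names `dft_matrix` and `md_dft_matrix` are hypothetical.

```python
import numpy as np

def dft_matrix(n: int) -> np.ndarray:
    """Unitary (normalized) n-point DFT matrix: F[j, k] = exp(-2*pi*i*j*k/n) / sqrt(n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

def md_dft_matrix(shape) -> np.ndarray:
    """Separable multidimensional DFT on a rectangular lattice, as a Kronecker product."""
    F = np.array([[1.0 + 0j]])
    for n in shape:
        F = np.kron(F, dft_matrix(n))
    return F

if __name__ == "__main__":
    F = md_dft_matrix((4, 6))              # 2-D DFT on a 4x6 rectangular lattice
    I = np.eye(F.shape[0])
    # Unitarity: the conjugate transpose is the inverse, i.e. F^H F = I.
    print(np.allclose(F.conj().T @ F, I))  # True
```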
Towards continuous consistency axiom
This paper shows, for the first time, that Kleinberg's (2002) self-contradictory axiomatic system for distance-based clustering fails in fixed-dimensional Euclidean space: one of its data-transforming axioms, the consistency axiom, reduces to the identity transformation because of its inherent limitations, and replacing it with inner-consistency or outer-consistency does not help when continuous data transformations are required. We therefore formulate a new, sound axiomatic framework for cluster analysis in fixed-dimensional Euclidean space, suitable for k-means-like algorithms. The framework incorporates a centric consistency axiom and a motion consistency axiom, which induce clustering-preserving transformations useful, for example, for deriving new labelled data sets for testing clustering procedures. It supports continuous data transformations, so labelled data with small perturbations can be derived. Unlike Kleinberg's consistency, the new axioms neither lead the data outside Euclidean space nor increase the data dimensionality. Our cluster-preserving transformations have linear complexity in both transformation and checking. In practice they are less restrictive and less rigid than Kleinberg's consistency, since they do not enforce an inter-cluster distance increase and an intra-cluster distance decrease when performing a clustering-preserving transformation.
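The abstract does not spell out the transformations themselves; as a rough, hypothetical sketch of a clustering-preserving transformation in the spirit of the centric consistency it describes, the following shrinks each cluster towards its centroid by a factor lam, producing a perturbed labelled data set in the same Euclidean space. The function name `shrink_towards_centroids` and the factor lam are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def shrink_towards_centroids(X: np.ndarray, labels: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Illustrative cluster-preserving transformation (assumed form, not taken from the paper):
    move every point towards its cluster centroid by a factor lam.
    0 < lam <= 1; lam = 1 is the identity, smaller lam tightens each cluster."""
    X_new = X.copy()
    for c in np.unique(labels):
        mask = labels == c
        centroid = X[mask].mean(axis=0)
        X_new[mask] = centroid + lam * (X[mask] - centroid)
    return X_new

# Example: derive a perturbed labelled data set that keeps the original clustering structure.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in ([0, 0], [4, 0], [0, 4])])
labels = np.repeat([0, 1, 2], 50)
X_shrunk = shrink_towards_centroids(X, labels, lam=0.8)
```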
Consistent Probabilistic Social Choice
Two fundamental axioms in social choice theory are consistency with respect to a variable electorate and consistency with respect to components of similar alternatives. In the context of traditional non-probabilistic social choice, these axioms are incompatible with each other. We show that in the context of probabilistic social choice, these axioms uniquely characterize a function proposed by Fishburn (1984). Fishburn's function returns so-called maximal lotteries, that is, lotteries that correspond to optimal mixed strategies in the symmetric zero-sum game induced by the pairwise majority margins. Maximal lotteries are guaranteed to exist due to von Neumann's Minimax Theorem, are almost always unique, and can be efficiently computed using linear programming.
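The abstract notes that maximal lotteries can be computed efficiently with linear programming. As a concrete illustration, here is a minimal scipy sketch that solves the symmetric zero-sum game induced by the pairwise majority margins; the LP encoding is the standard one for zero-sum games and is not taken from the paper, and the function name `maximal_lottery` is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def maximal_lottery(margins: np.ndarray) -> np.ndarray:
    """Compute a maximal lottery from a skew-symmetric matrix of pairwise majority margins,
    i.e. an optimal mixed strategy of the induced symmetric zero-sum game.
    margins[i, j] = (# voters preferring i to j) - (# voters preferring j to i)."""
    n = margins.shape[0]
    # Variables: p (lottery over the n alternatives) and v (game value).
    # maximize v  subject to  margins.T @ p >= v,  sum(p) = 1,  p >= 0.
    c = np.concatenate([np.zeros(n), [-1.0]])          # linprog minimizes, so minimize -v
    A_ub = np.hstack([-margins.T, np.ones((n, 1))])    # v - (M^T p)_j <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]

# 3-alternative Condorcet cycle: the unique maximal lottery is uniform (1/3, 1/3, 1/3).
M = np.array([[0, 1, -1],
              [-1, 0, 1],
              [1, -1, 0]])
print(np.round(maximal_lottery(M), 3))
```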
Research on the Trajectory Consistency based on Mahalanobis distance method
In the average trajectory consistency test, good density can coincide with poor consistency. This paper proposes using the Mahalanobis distance method to process the test data of two types of ammunition. First, paired values from a set of tests are obtained experimentally, and the data are converted into relative distances with the Mahalanobis distance method. The mean and variance of the distance data for the two types of ammunition are then calculated. In a simulation test, the Mahalanobis-processed data are checked for consistency between the two types of ammunition according to the trajectory consistency test method, and they perform better in the consistency test than the unprocessed data. Processing the test data with the Mahalanobis distance method thus improves the situation of good density but poor consistency in the consistency test, and provides technical support for the subsequent average trajectory consistency test.
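As a rough sketch of the kind of preprocessing this abstract describes, the following numpy snippet converts paired test values into Mahalanobis distances and reports their mean and variance for each type of ammunition. The data, the helper name `mahalanobis_distances`, and the use of the sample mean and covariance are assumptions for illustration; the paper's exact procedure is not given here.

```python
import numpy as np

def mahalanobis_distances(X: np.ndarray) -> np.ndarray:
    """Mahalanobis distance of every sample in X (rows) from the sample mean,
    using the sample covariance of X itself."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diffs = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))

# Hypothetical paired test data (e.g. range and deflection) for two kinds of ammunition.
rng = np.random.default_rng(1)
ammo_a = rng.normal([1000.0, 0.0], [5.0, 2.0], size=(30, 2))
ammo_b = rng.normal([1002.0, 0.5], [6.0, 2.5], size=(30, 2))

for name, data in [("A", ammo_a), ("B", ammo_b)]:
    d = mahalanobis_distances(data)
    # Mean and variance of the relative-distance data, as in the procedure described above.
    print(f"ammo {name}: mean distance = {d.mean():.3f}, variance = {d.var(ddof=1):.3f}")
```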
Convex Relaxation Methods for Community Detection
This paper surveys recent theoretical advances in convex optimization approaches for community detection. We introduce some important theoretical techniques and results for establishing the consistency of convex community detection under various statistical models. In particular, we discuss the basic techniques based on the primal and dual analysis. We also present results that demonstrate several distinctive advantages of convex community detection, including robustness against outlier nodes, consistency under weak assortativity, and adaptivity to heterogeneous degrees. This survey is not intended to be a complete overview of the vast literature on this fast-growing topic. Instead, we aim to provide a big picture of the remarkable recent development in this area and to make the survey accessible to a broad audience. We hope that this expository article can serve as an introductory guide for readers who are interested in using, designing, and analyzing convex relaxation methods in network analysis.
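For readers unfamiliar with the programs this survey analyzes, here is a minimal cvxpy sketch of one common semidefinite relaxation for two-community detection. It is included only as a concrete example of the technique's flavor: the function name `sdp_community_detection`, the penalty lam, and the eigenvector rounding step are illustrative choices, and the formulations studied in the paper may differ.

```python
import cvxpy as cp
import numpy as np

def sdp_community_detection(A: np.ndarray, lam: float) -> np.ndarray:
    """One standard SDP relaxation for two communities (illustrative, not the paper's exact program):
    maximize <A - lam*J, X> over symmetric X with X PSD and diag(X) = 1."""
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    J = np.ones((n, n))
    objective = cp.Maximize(cp.trace((A - lam * J) @ X))
    constraints = [X >> 0, cp.diag(X) == 1]
    cp.Problem(objective, constraints).solve()
    # Round: the sign pattern of the leading eigenvector of the optimal X gives the labels.
    vals, vecs = np.linalg.eigh(X.value)
    return np.sign(vecs[:, -1])

# Usage sketch: A is the 0/1 adjacency matrix of the network; a common heuristic ties
# lam to the overall edge density, e.g. lam = A.sum() / (n * (n - 1)).
```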
Consistency Indices in Analytic Hierarchy Process: A Review
The analytic hierarchy process (AHP) is a well-regarded and powerful method that combines mathematics and psychology for making and analysing complex decisions. This article presents a brief review of consistency measures for judgments in AHP. Judgments should not be random or illogical, and several researchers have developed different consistency measures to assess their rationality. The article summarises the consistency measures proposed so far in the literature and briefly describes the functional relationships established among the well-known consistency indices. Finally, some research directions that can help further develop and improve the performance of AHP are provided.
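The best-known of these measures is Saaty's consistency index and the associated consistency ratio. As a minimal numpy sketch (not drawn from this review), the snippet below computes CR = CI / RI with CI = (lambda_max - n)/(n - 1); the RI values are Saaty's commonly cited random indices, and the example comparison matrix is hypothetical.

```python
import numpy as np

# Saaty's random index (RI) values for matrix orders 1..10, as commonly tabulated.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(P: np.ndarray) -> float:
    """Saaty's consistency ratio CR = CI / RI for a pairwise comparison matrix P,
    where CI = (lambda_max - n) / (n - 1). CR <= 0.1 is the usual acceptability threshold."""
    n = P.shape[0]
    lambda_max = np.max(np.real(np.linalg.eigvals(P)))
    ci = (lambda_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

# Example 3x3 pairwise comparison matrix (reciprocal by construction).
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(f"CR = {consistency_ratio(P):.3f}")   # small value: judgments are acceptably consistent
```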
A Device for Test and Evaluation of Consistency of Power Battery Pack for Automotive applications
In this paper, the thermal consistency and electrochemical performance of batteries are considered comprehensively in order to improve testing and ensure the consistency of the power battery pack for automotive applications. At the same time, a safer and more efficient device is established for testing and evaluating battery consistency in automotive applications, enabling real-time monitoring, assessment, prediction, and control of that consistency.
Towards quasi-transverse momentum dependent PDFs computable on the lattice
Transverse momentum dependent parton distributions (TMDPDFs), which appear in factorized cross sections, involve infinite Wilson lines with edges on or close to the light-cone. Since these TMDPDFs are not directly calculable with a Euclidean path integral in lattice QCD, we study the construction of quasi-TMDPDFs with finite-length spacelike Wilson lines that are amenable to such calculations. We define an infrared consistency test to determine which quasi-TMDPDF definitions are related to the TMDPDF, by carrying out a one-loop study of infrared logarithms of transverse position b_T ∼ Λ_QCD⁻¹, which must agree between them. This agreement is a necessary condition for the two quantities to be related by perturbative matching. TMDPDFs necessarily involve combining a hadron matrix element, which nominally depends on a single light-cone direction, with soft matrix elements that necessarily depend on two light-cone directions. We show at one loop that the simplest definitions of the quasi hadron matrix element, the quasi soft matrix element, and the resulting quasi-TMDPDF all fail the infrared consistency test. Ratios of impact parameter quasi-TMDPDFs still provide nontrivial information about the TMDPDFs, and are more robust since the soft matrix elements cancel. We show at one loop that such quasi ratios can be matched to ratios of the corresponding TMDPDFs. We also introduce a modified "bent" quasi soft matrix element which yields a quasi-TMDPDF that passes the consistency test with the TMDPDF at one loop, and discuss potential issues at higher orders.
Height Aiding, C/N0 Weighting and Consistency Checking for GNSS NLOS and Multipath Mitigation in Urban Areas
Multiple global navigation satellite system (GNSS) constellations can dramatically improve the signal availability in dense urban environments. However, accuracy remains a challenge because buildings block, reflect and diffract the signals. This paper investigates three different techniques for mitigating the impact of non-line-of-sight (NLOS) reception and multipath interference on position accuracy without using additional hardware, testing them using data collected at multiple sites in central London. Aiding the position solution using a terrain height database was found to have the biggest impact, improving the horizontal accuracy by 35% and the vertical accuracy by a factor of 4. An 8% improvement in horizontal accuracy was also obtained from weighting the GNSS measurements in the position solution according to the carrier-power-to-noise-density ratio (C/N0). Consistency checking using a conventional sequential elimination technique was found to degrade horizontal positioning performance by 60% because it often eliminated the wrong measurements in cases when multiple signals were affected by NLOS reception or strong multipath interference. A new consistency checking method that compares subsets of measurements performed better, but was still equally likely to improve or degrade the accuracy. This was partly because removing a poor measurement can result in adverse signal geometry, degrading the position accuracy. Based on this, several ways of improving the reliability of consistency checking are proposed.
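To make the C/N0 weighting idea concrete, here is a minimal numpy sketch of one common C/N0-dependent variance model feeding a weighted least-squares position update. The function names `cn0_weights` and `weighted_least_squares` and the constants a and b are illustrative placeholders; this is not necessarily the model or the estimator used in the paper.

```python
import numpy as np

def cn0_weights(cn0_dbhz: np.ndarray, a: float = 1.0, b: float = 1e4) -> np.ndarray:
    """Illustrative C/N0-dependent weighting for GNSS pseudoranges (assumed model):
    measurement variance sigma^2 = a + b * 10**(-C/N0 / 10), weight = 1 / sigma^2.
    Low C/N0, typical of NLOS or multipath-affected signals, is down-weighted."""
    sigma2 = a + b * 10.0 ** (-cn0_dbhz / 10.0)
    return 1.0 / sigma2

def weighted_least_squares(H: np.ndarray, residuals: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One iteration of a weighted least-squares position update: solve (H^T W H) dx = H^T W r."""
    W = np.diag(w)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ residuals)

# Example: five satellites, one with a low C/N0 that is therefore strongly down-weighted.
cn0 = np.array([45.0, 44.0, 47.0, 43.0, 28.0])
print(np.round(cn0_weights(cn0), 4))
```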
Statistical Consistency and Asymptotic Normality for High-Dimensional Robust M-Estimators
We study theoretical properties of regularized robust M-estimators, applicable when data are drawn from a sparse high-dimensional linear model and contaminated by heavy-tailed distributions and/or outliers in the additive errors and covariates. We first establish a form of local statistical consistency for the penalized regression estimators under fairly mild conditions on the error distribution: When the derivative of the loss function is bounded and satisfies a local restricted curvature condition, all stationary points within a constant radius of the true regression vector converge at the minimax rate enjoyed by the Lasso with sub-Gaussian errors. When an appropriate nonconvex regularizer is used in place of an ℓ₁-penalty, we show that such stationary points are in fact unique and equal to the local oracle solution with the correct support; hence, results on asymptotic normality in the low-dimensional case carry over immediately to the high-dimensional setting. This has important implications for the efficiency of regularized nonconvex M-estimators when the errors are heavy-tailed. Our analysis of the local curvature of the loss function also has useful consequences for optimization when the robust regression function and/or regularizer is nonconvex and the objective function possesses stationary points outside the local region. We show that as long as a composite gradient descent algorithm is initialized within a constant radius of the true regression vector, successive iterates will converge at a linear rate to a stationary point within the local region. Furthermore, the global optimum of a convex regularized robust regression function may be used to obtain a suitable initialization. The result is a novel two-step procedure that uses a convex M-estimator to achieve consistency and a nonconvex M-estimator to increase efficiency. We conclude with simulation results that corroborate our theoretical findings.
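As a rough illustration of a composite gradient step of the kind this abstract mentions, here is a minimal proximal-gradient sketch for ℓ₁-penalized Huber regression. The step size, penalty level, Huber parameter, zero initialization, and synthetic data are placeholder choices; the paper's actual algorithm, nonconvex regularizers, and two-step initialization are not reproduced here.

```python
import numpy as np

def huber_grad(r: np.ndarray, delta: float) -> np.ndarray:
    """Derivative of the Huber loss (bounded, as required for the robustness results)."""
    return np.clip(r, -delta, delta)

def soft_threshold(z: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def composite_gradient_descent(X, y, lam, delta=1.345, step=None, n_iter=500):
    """l1-penalized Huber regression via composite (proximal) gradient descent.
    Illustrative sketch only: a simple zero initialization is used here, whereas the paper's
    guarantees assume initialization within a constant radius of the true regression vector."""
    n, p = X.shape
    if step is None:
        step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # inverse Lipschitz constant of the smooth part
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta, delta) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Sparse linear model with heavy-tailed (t-distributed) errors.
rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_t(df=2, size=n)
print(np.round(composite_gradient_descent(X, y, lam=0.1)[:5], 2))
```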