450 result(s) for "maximum likelihood distances"
Distance Measures of Polarimetric SAR Image Data: A Survey
Distance measures play a critical role in many applications of polarimetric synthetic aperture radar (PolSAR) image data. In recent decades, many distance measures have been developed for PolSAR data from different perspectives, but they have not been well analyzed and summarized. To support their use in algorithm design, this paper provides a systematic survey of these measures and analyzes their relations in detail. We divide them into five main categories (norm distances, geodesic distances, maximum likelihood (ML) distances, generalized likelihood ratio test (GLRT) distances, and stochastic distances) and two further categories (inter-patch distances and distances based on metric learning). We analyze the relations between the different measures and visualize them with graphs for clarity. We also discuss properties of the main distance measures and offer advice on choosing distances in algorithm design. This survey can serve as a reference for researchers in PolSAR image processing, analysis, and related fields.
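As a concrete instance of one surveyed family, a commonly used GLRT-type measure is the Bartlett distance between two sample covariance matrices. The sketch below (an illustration, not the survey's code) uses real 2x2 symmetric positive-definite matrices for simplicity; PolSAR data involves complex Hermitian covariance matrices.

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def bartlett_distance(A, B):
    """One common form of the GLRT-derived Bartlett distance between two
    sample covariance matrices (here real 2x2, symmetric positive definite):
        d(A, B) = 2*ln|(A+B)/2| - ln|A| - ln|B|.
    It is symmetric and zero iff A == B."""
    M = [[(A[i][j] + B[i][j]) / 2 for j in range(2)] for i in range(2)]
    return 2 * math.log(det2(M)) - math.log(det2(A)) - math.log(det2(B))

A = [[2.0, 0.3], [0.3, 1.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
print(bartlett_distance(A, A))      # 0.0
print(bartlett_distance(A, B) > 0)  # True
```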
Improving the Efficiency of Robust Estimators for the Generalized Linear Model
The distance-constrained maximum likelihood procedure (DCML) optimally combines a robust estimator with the maximum likelihood estimator so as to improve the robust estimator's small-sample efficiency while preserving a good level of robustness. It was introduced for the linear model and is extended here to the GLM. Monte Carlo experiments explore the performance of this extension in the Poisson regression case. Several published robust candidates for the DCML are compared; the modified conditional maximum likelihood estimator, started from a very robust minimum density power divergence estimator, is selected as the best candidate. It is shown empirically that the DCML markedly improves small-sample efficiency without loss of robustness. An example using real hospital length-of-stay data fitted by the negative binomial regression model is discussed.
Minimum Hellinger Distance Estimation for Finite Mixtures of Poisson Regression Models and Its Applications
Minimum Hellinger distance estimation (MHDE) has been shown to discount anomalous data points in a smooth manner while retaining first-order efficiency for a correctly specified model. An estimation approach based on MHDE is proposed for finite mixtures of Poisson regression models. Evidence from Monte Carlo simulations suggests that MHDE is a viable alternative to the maximum likelihood estimator when the mixture components are not well separated or the model parameters are near zero. Biometrical applications also illustrate the practical usefulness of the method.
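The paper's estimator targets finite mixtures of Poisson regressions; as a self-contained illustration of the underlying minimum-Hellinger idea, the sketch below fits only a plain Poisson mean by grid search. The data and grid are made up; note how a single gross outlier drags the MLE (the sample mean) but barely moves the MHDE.

```python
import math
from collections import Counter

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def hellinger_sq(data, lam, kmax=50):
    """Squared Hellinger distance between the empirical pmf of `data`
    and a Poisson(lam) pmf, with the sum truncated at kmax."""
    n = len(data)
    emp = Counter(data)
    return sum((math.sqrt(emp.get(k, 0) / n) - math.sqrt(poisson_pmf(k, lam))) ** 2
               for k in range(kmax + 1)) / 2

def mhde_poisson(data, grid):
    """Minimum Hellinger distance estimate of the Poisson mean (grid search)."""
    return min(grid, key=lambda lam: hellinger_sq(data, lam))

# A Poisson-like sample plus one gross outlier (40)
data = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2, 40]
grid = [0.1 * i for i in range(1, 100)]
lam_mhde = mhde_poisson(data, grid)
lam_mle = sum(data) / len(data)  # MLE = sample mean, pulled up by the outlier
print(lam_mhde, lam_mle)
```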
Prospects for Inferring Very Large Phylogenies by Using the Neighbor-Joining Method
Current efforts to reconstruct the tree of life and the histories of multigene families demand the inference of phylogenies consisting of thousands of gene sequences. For such large data sets, even a moderate exploration of the tree space needed to identify the optimal tree is virtually impossible. In these cases the neighbor-joining (NJ) method is frequently used because of its demonstrated accuracy for smaller data sets and its computational speed. As data sets grow, however, the fraction of the tree space examined by the NJ algorithm becomes minuscule. Here, we report computer simulation results on the accuracy of NJ trees for inferring very large phylogenies. First, we present a likelihood method for the simultaneous estimation of all pairwise distances using biologically realistic models of nucleotide substitution; use of this method corrects up to 60% of NJ tree errors. Our simulations show that the accuracy of NJ trees declines by only ≈5% when the number of sequences increases from 32 to 4,096 (128 times), even in the presence of extensive variation in evolutionary rate among lineages or significant biases in nucleotide composition and the transition/transversion ratio. These results encourage the use of complex models of nucleotide substitution for estimating evolutionary distances and suggest bright prospects for applying the NJ and related methods to inferring large phylogenies.
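The paper estimates pairwise distances by likelihood under realistic substitution models; the simplest member of that family is the Jukes-Cantor (JC69) ML distance, which corrects the raw proportion of differing sites for unobserved multiple substitutions. A minimal sketch with toy sequences:

```python
import math

def jc_distance(seq1, seq2):
    """Maximum-likelihood evolutionary distance between two aligned DNA
    sequences under the Jukes-Cantor (JC69) substitution model:
        d = -(3/4) * ln(1 - (4/3) * p),
    where p is the observed proportion of differing sites."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    if p >= 0.75:  # saturation: distance undefined
        return float("inf")
    if p == 0:
        return 0.0
    return -0.75 * math.log(1 - 4 * p / 3)

print(jc_distance("ACGTACGT", "ACGTACGT"))          # 0.0
print(jc_distance("ACGTACGT", "ACGTACGA") > 1 / 8)  # True: correction exceeds raw p
```

The correction always exceeds the raw mismatch fraction, which is what allows NJ to recover additive distances on deep trees.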
ON THE NONPARAMETRIC MAXIMUM LIKELIHOOD ESTIMATOR FOR GAUSSIAN LOCATION MIXTURE DENSITIES WITH APPLICATION TO GAUSSIAN DENOISING
We study the nonparametric maximum likelihood estimator (NPMLE) for estimating Gaussian location mixture densities in d dimensions from independent observations. Unlike the usual likelihood-based methods for fitting mixtures, NPMLEs are based on convex optimization. We prove finite-sample results on the Hellinger accuracy of every NPMLE. In particular, our results imply that every NPMLE achieves near-parametric risk (up to logarithmic multiplicative factors) when the true density is a discrete Gaussian mixture, without any prior information on the number of mixture components. NPMLEs can naturally be used to yield empirical Bayes estimates of the oracle Bayes estimator in the Gaussian denoising problem. We prove bounds on the accuracy of the empirical Bayes estimate as an approximation to the oracle Bayes estimator. Here our results imply that the empirical Bayes estimator performs at nearly the optimal level (up to logarithmic factors) for denoising in clustering situations without any prior knowledge of the number of clusters.
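The NPMLE is a convex optimization over mixing distributions; a common practical approximation (not necessarily the paper's algorithm) fixes the support on a grid and optimizes only the mixing weights, e.g. with EM-style multiplicative updates, which converge to the global optimum of this restricted convex problem. A sketch with made-up data, a made-up grid, and unit component variance:

```python
import math

def npmle_weights(obs, grid, iters=500):
    """Grid-based approximation to the Gaussian-location NPMLE: the mixing
    distribution is restricted to the fixed support `grid` and only the
    weights are optimized, via EM updates on the weights."""
    n, m = len(obs), len(grid)
    # density of obs[i] under a unit-variance Gaussian centred at grid[j]
    phi = [[math.exp(-0.5 * (x - a) ** 2) / math.sqrt(2 * math.pi)
            for a in grid] for x in obs]
    w = [1.0 / m] * m
    for _ in range(iters):
        new = [0.0] * m
        for i in range(n):
            mix = sum(w[j] * phi[i][j] for j in range(m))
            for j in range(m):
                new[j] += w[j] * phi[i][j] / mix  # posterior mass of atom j
        w = [v / n for v in new]
    return w

# Two well-separated clusters; the fitted weights concentrate near 0 and 5
obs = [-0.1, 0.2, 0.0, 4.9, 5.1, 5.0]
grid = [0.5 * j for j in range(-4, 15)]  # atoms at -2.0, -1.5, ..., 7.0
w = npmle_weights(obs, grid)
mean_est = sum(wj * a for wj, a in zip(w, grid))
print(round(mean_est, 2))
```

The per-observation posterior weights `w[j] * phi[i][j] / mix` are exactly the quantities the empirical Bayes denoiser averages to approximate the oracle Bayes estimate.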
MINIMAX RATES OF COMMUNITY DETECTION IN STOCHASTIC BLOCK MODELS
Network analysis has recently gained increasing attention in statistics, as well as in computer science, probability, and applied mathematics. Community detection for the stochastic block model (SBM) is probably the most studied topic in network analysis; many methodologies have been proposed, and some beautiful and significant phase-transition results have been obtained in various settings. In this paper, we provide a general minimax theory for community detection. It gives minimax rates of the mismatch ratio for a wide range of settings, including homogeneous and inhomogeneous SBMs, dense and sparse networks, and finite and growing numbers of communities. The minimax rates are exponential, unlike the polynomial rates commonly seen in the statistical literature. An immediate consequence of the result is a threshold phenomenon for strong consistency (exact recovery) as well as weak consistency (partial recovery). We obtain the upper bound by a range of penalized likelihood-type approaches. The lower bound is achieved by a novel reduction from the global mismatch ratio to a local clustering problem for one node through an exchangeability property.
Estimating distances via received signal strength and connectivity in wireless sensor networks
Distance estimation is vital for localization and many other applications in wireless sensor networks (WSNs). In particular, it is desirable to implement distance estimation, as well as localization, without special hardware in low-cost WSNs. To this end, both the received signal strength (RSS) based approach and the connectivity based approach have gained much attention. The RSS based approach is suitable for estimating short distances, whereas the connectivity based approach performs relatively well for long distances. Given the complementary features of the two approaches, we propose a fusion method based on the maximum likelihood estimator that estimates the distance between any pair of neighboring nodes in a WSN by efficiently fusing the RSS and local connectivity information. The method is derived under the practical log-normal shadowing model, and the associated Cramer-Rao lower bound (CRLB) is derived for performance analysis. Both simulations and experiments based on practical measurements demonstrate that the proposed method outperforms either single approach and approaches the CRLB.
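This is not the paper's fusion estimator, but its RSS half is standard: under the log-normal shadowing model the noise is Gaussian in dB, so the ML range estimate from a single RSS reading simply inverts the log-distance path-loss law. A sketch with illustrative parameter values (reference power, reference distance, and path-loss exponent are assumptions, not taken from the paper):

```python
def rss_distance(p_rx, p0=-40.0, d0=1.0, n=3.0):
    """ML range estimate from one RSS reading under log-normal shadowing.
    Model: p_rx = p0 - 10*n*log10(d/d0) + noise, where p0 is the mean RSS
    (dBm) at reference distance d0 (m) and n is the path-loss exponent.
    Inverting the deterministic part maximizes the Gaussian likelihood."""
    return d0 * 10 ** ((p0 - p_rx) / (10 * n))

print(rss_distance(-40.0))  # 1.0  (at the reference power we recover d0)
print(rss_distance(-70.0))  # 10.0 (30 dB below p0 with n=3 -> 10x d0)
```

The fusion method would combine this Gaussian RSS likelihood with the likelihood of the observed local connectivity, which is what helps at longer ranges where RSS alone is unreliable.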
Quantitative spatial distribution model of site-specific loess landslides on the Heifangtai terrace, China
Landslide disasters cause severe losses on the Loess Plateau of China. Although early warning systems and susceptibility mapping have mitigated this issue to some extent, most methods are qualitative or semi-quantitative at the site-specific scale. In this paper, a quantitative spatial distribution model is presented for site-specific loess landslide hazard assessment. Using multi-temporal remote sensing images and high-precision UAV point cloud data, a total of 98 loess landslides that have occurred since 2004 on the Heifangtai terrace were collected to establish a database of landslide volumes, dates, and retreat distances. Eleven loess landslides were selected to construct a numerical model for parameter back-analysis, and the accuracy of the simulation results was evaluated quantitatively by centroid distance and overlapping area. Landslide volumes and retreat-distance rates were fitted to determine the relationship between cracks and potential volume, and different volumes and parameters were combined to simulate the spatial distribution of potential loess landslides. The results reveal that landslide volumes mainly range between 1 × 10³ and 5 × 10⁵ m³, and the historical occurrence probability reaches 0.551. The optimal parameters are estimated by the maximum likelihood method to obtain a uniform-distribution parameter-value probability model, and the results show that the error of the estimated length for parameters within 0.05 of the optimum does not exceed 15%. In the selected slope-failure case, farmland near the toe of the slope is primarily exposed to hazards with probabilities greater than 0.7. This work provides a useful reference for local disaster reduction and a theoretical methodology for hazard assessment.
Pseudo-likelihood approach for parameter estimation in univariate normal mixture models
This article proposes a new approach to parameter estimation in univariate normal mixture distributions. The proposed method combines distance-based parameter estimation with maximum likelihood estimation and is therefore referred to as a pseudo-likelihood approach. In this approach, the weights of the mixture components are treated as functions of the mixture component means and standard deviations. The method has two main advantages over the traditional likelihood approach: (1) the pseudo-likelihood is always bounded, so a global maximum exists; (2) because the mixture weights are functions of the component means and standard deviations, the number of estimated parameters is reduced, which can matter in models with many mixture components. We present several simulation examples to demonstrate the behaviour of the proposed method in different situations in comparison with other parameter estimation methods. Interestingly, in some models where the components are difficult to separate and the number of components is large, the pseudo-likelihood method outperforms the maximum likelihood method even for large sample sizes.
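The boundedness claim addresses a classical defect of the ordinary normal-mixture likelihood: centring one component on a single data point and shrinking its standard deviation drives the likelihood to infinity, so no global MLE exists. A small numerical illustration of that defect (data and parameter values are made up; the paper's actual pseudo-likelihood construction avoids this):

```python
import math

def normal_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def mixture_loglik(data, w, mu1, s1, mu2, s2):
    """Ordinary log-likelihood of a two-component univariate normal mixture."""
    return sum(math.log(w * normal_pdf(x, mu1, s1)
                        + (1 - w) * normal_pdf(x, mu2, s2))
               for x in data)

data = [0.0, 0.5, 1.0, 5.0, 5.5, 6.0]
# Centre component 1 on the data point x = 0 and shrink its standard
# deviation: the log-likelihood grows without bound as s1 -> 0.
for s1 in (1.0, 1e-4, 1e-8):
    print(s1, round(mixture_loglik(data, 0.5, 0.0, s1, 3.0, 3.0), 2))
```

Because the pseudo-likelihood ties the weights to the means and standard deviations, this degenerate spike no longer pays off, which is why its global maximum exists.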