1,419 result(s) for "Nonparametric maximum likelihood"
ASYMPTOTIC NORMALITY OF NONPARAMETRIC M-ESTIMATORS WITH APPLICATIONS TO HYPOTHESIS TESTING FOR PANEL COUNT DATA
In semiparametric and nonparametric statistical inference, the asymptotic normality of estimators has been widely established when they are √n-consistent. In many applications, nonparametric estimators are not able to achieve this rate. We present a result on the asymptotic normality of nonparametric M-estimators that can be used when the rate of convergence of an estimator is n^{1/2} or slower. We apply this result to study the asymptotic distribution of sieve estimators of functionals of a mean function from a counting process, and to develop nonparametric tests for the problem of treatment comparison with panel count data. The test statistics are constructed with spline likelihood estimators instead of nonparametric likelihood estimators. The new tests have a more general and simpler structure and are easy to implement. Simulation studies show that the proposed tests perform well even for small sample sizes. We find that one of the new tests is powerful in all the situations considered and is thus robust. For illustration, a data analysis example is provided.
On a Problem of Robbins
An early example of a compound decision problem of Robbins (1951) is employed to illustrate some features of the development of empirical Bayes methods. Our primary objective is to draw attention to the constructive role that the nonparametric maximum likelihood estimator for mixture models introduced by Kiefer & Wolfowitz (1956) can play in these developments.
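The Kiefer & Wolfowitz (1956) NPMLE for mixture models mentioned above can be approximated by fixing a grid of candidate support points and running EM on the mixing weights. The sketch below is a minimal illustration under an assumed Gaussian location mixture with unit variance; the grid, simulated data, and iteration count are illustrative choices, not part of the cited work.

```python
import numpy as np
from scipy.stats import norm

def npmle_weights(x, grid, iters=200):
    """EM for the mixing weights of a Gaussian location mixture on a fixed
    grid: each x_i ~ sum_j w_j N(grid_j, 1), and we solve for w."""
    L = norm.pdf(x[:, None] - grid[None, :])   # n x m likelihood matrix
    w = np.full(len(grid), 1.0 / len(grid))    # uniform starting weights
    for _ in range(iters):
        post = L * w                           # unnormalised posteriors
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                  # M-step: average responsibilities
    return w

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
grid = np.linspace(-4, 4, 41)
w = npmle_weights(x, grid)
print(round(w.sum(), 6))  # mixing weights form a probability distribution
```

In practice the estimated weights concentrate on a few grid points near the true support, reflecting the discreteness of the NPMLE.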
Estimating Hidden Semi-Markov Chains From Discrete Sequences
This article addresses the estimation of hidden semi-Markov chains from nonstationary discrete sequences. Hidden semi-Markov chains are particularly useful to model the succession of homogeneous zones or segments along sequences. A discrete hidden semi-Markov chain is composed of a nonobservable state process, which is a semi-Markov chain, and a discrete output process. Hidden semi-Markov chains generalize hidden Markov chains and enable the modeling of various durational structures. From an algorithmic point of view, a new forward-backward algorithm is proposed whose complexity is similar to that of the Viterbi algorithm in terms of sequence length (quadratic in the worst case in time and linear in space). This opens the way to the maximum likelihood estimation of hidden semi-Markov chains from long sequences. This statistical modeling approach is illustrated by the analysis of branching and flowering patterns in plants.
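A hidden semi-Markov chain replaces the implicit geometric state durations of a hidden Markov chain with an explicit duration distribution. As a reference point for the model class (a naive recursion, not the article's efficient forward-backward algorithm), a forward pass can be sketched as follows; the matrices below and the assumption that the last segment ends exactly at the final observation are illustrative:

```python
import numpy as np

def hsmm_loglik(obs, pi, A, B, D):
    """Naive forward pass for an explicit-duration hidden semi-Markov chain.
    pi: initial state probs (K,); A: between-segment transition probs with
    zero diagonal (K, K); B: emission probs (K, V); D: duration probs
    (K, Dmax) with D[j, d-1] = P(stay exactly d steps in state j).
    Returns log P(obs), assuming the last segment ends at the final step."""
    T, K = len(obs), len(pi)
    Dmax = D.shape[1]
    # alpha[t, j] = P(obs[0..t], a segment in state j ends at time t)
    alpha = np.zeros((T, K))
    for t in range(T):
        for j in range(K):
            total = 0.0
            for d in range(1, min(t + 1, Dmax) + 1):
                seg = np.prod(B[j, obs[t - d + 1 : t + 1]])  # segment emission prob
                if t - d < 0:
                    entry = pi[j]                  # segment starts the sequence
                else:
                    entry = alpha[t - d] @ A[:, j] # previous segment ended at t-d
                total += seg * D[j, d - 1] * entry
            alpha[t, j] = total
    return float(np.log(alpha[-1].sum()))

obs = np.array([0, 1, 0])
pi = np.array([1.0])
A = np.array([[0.0]])             # no self-transitions between segments
B = np.array([[0.6, 0.4]])
D = np.array([[0.2, 0.3, 0.5]])   # durations 1, 2, 3
print(hsmm_loglik(obs, pi, A, B, D))  # single-segment case: log(0.6*0.4*0.6*0.5)
```

With a single state the only valid path is one segment covering the whole sequence, which makes the example easy to check by hand.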
New Multi-Sample Nonparametric Tests for Panel Count Data
This paper considers the problem of multi-sample nonparametric comparison of counting processes with panel count data, which arise naturally when recurrent events are considered. Such data frequently occur in medical follow-up studies and reliability experiments, for example. For the problem considered, we construct two new classes of nonparametric test statistics based on the accumulated weighted differences between the rates of increase of the estimated mean functions of the counting processes over observation times, wherein the nonparametric maximum likelihood approach is used to estimate the mean function instead of the nonparametric maximum pseudolikelihood. The asymptotic distributions of the proposed statistics are derived and their finite-sample properties are examined through Monte Carlo simulations. The simulation results show that the proposed methods work quite well and are more powerful than the existing test procedures. Two real data sets are analyzed and presented as illustrative examples.
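The class of statistics described above accumulates weighted differences between the increments of the estimated mean functions. A schematic version (with hypothetical mean-function estimates standing in for the NPMLE, and unit weights) looks like:

```python
import numpy as np

def weighted_diff_stat(mean1, mean2, weights):
    """Accumulate weighted differences between the increments (rates of
    increase) of two estimated mean functions at common observation times."""
    d1 = np.diff(mean1, prepend=0.0)   # increments of the group-1 mean function
    d2 = np.diff(mean2, prepend=0.0)   # increments of the group-2 mean function
    return float(np.sum(weights * (d1 - d2)))

m1 = np.array([0.5, 1.2, 2.0, 3.1])   # hypothetical estimated mean function, group 1
m2 = np.array([0.4, 1.0, 1.7, 2.5])   # hypothetical estimated mean function, group 2
print(weighted_diff_stat(m1, m2, np.ones(4)))  # unit weights telescope to m1[-1] - m2[-1]
```

Non-constant weights emphasise differences over particular parts of the observation period; the weight choice drives the power of the resulting test.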
Weighted NPMLE for the Subdistribution of a Competing Risk
Direct regression modeling of the subdistribution has become popular for analyzing data with multiple, competing event types. All general approaches so far rely on non-likelihood procedures and target covariate effects on the subdistribution. We introduce a novel weighted likelihood function that allows for a direct extension of the Fine-Gray model to a broad class of semiparametric regression models. The model accommodates time-dependent covariate effects on the subdistribution hazard. To motivate the proposed likelihood method, we derive standard nonparametric estimators and discuss a new interpretation based on pseudo risk sets. We establish consistency and asymptotic normality of the estimators and propose a sandwich estimator of the variance. In comprehensive simulation studies, we demonstrate the solid performance of weighted nonparametric maximum likelihood estimation in the presence of independent right censoring. We provide an application to a very large bone marrow transplant dataset, thereby illustrating its practical utility. Supplementary materials for this article are available online.
Analyzing Pharmacodynamic Count Data That Rapidly Decrease to Zero
We present a framework for maximum likelihood analysis on count observations that begin high and quickly drop to zero, for example, from hollow fiber drug comparison studies. This simulation study focuses on treating observed counts as Poisson or normally distributed for the purpose of estimating infection rebound after effective treatment. CFU profiles were simulated from inoculation to 96 h post‐treatment. The PK‐PD link was an Emax inhibitory model. Random parameters were pathogen growth and natural decay rates, drug concentration for half‐maximal effect, and drug pathogen kill rate. Other parameters, including PK, were fixed. Parameters were adjusted to attain 67% efficacy at 24 h. Random parameter values were optimized for profiles observed at 24, 48, 72, and 96 h assuming each of four probability assumptions: (1) all CFU measurements were Poisson distributed (truth); (2) CFU < 128 were Poisson, higher values were normally distributed; (3) all observations were normally distributed; and (4) observations were normally distributed but CFU < 10 were censored. CFU‐time profiles were re‐simulated using the optimized parameter densities. Rebound percentage (CFU ≥ 10 at 24 h post‐treatment) was best predicted using strategy 2, above. For limited periodically collected time series count data that quickly fall to 0, the true proportion reaching 0 (lack of rebound) was best modeled by assuming Poisson distribution at low counts. At higher counts (≥ 128), assuming normality is reasonable. Censoring observations leads to biased models.
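The study's assumption (2), Poisson below a threshold of 128 CFU and normal above it, corresponds to a hybrid log-likelihood. A minimal sketch of such a likelihood (the predicted means, residual standard deviation, and data below are hypothetical, not from the study):

```python
import math

def hybrid_loglik(counts, mu, sigma, threshold=128):
    """Log-likelihood treating low counts as Poisson and high counts as
    normal, mirroring assumption (2) above (threshold of 128 CFU)."""
    ll = 0.0
    for y, m in zip(counts, mu):
        if y < threshold:
            # Poisson log-pmf: y*log(m) - m - log(y!)
            ll += y * math.log(m) - m - math.lgamma(y + 1)
        else:
            # normal log-density with fixed residual sd sigma
            ll += -0.5 * math.log(2 * math.pi * sigma**2) \
                  - (y - m) ** 2 / (2 * sigma**2)
    return ll

# observing 0 counts under a Poisson mean of 1: log-likelihood is exactly -1
print(hybrid_loglik([0], [1.0], sigma=1.0))
```

The Poisson branch lets the likelihood place positive mass on exact zeros, which is what drives the better rebound predictions at low counts.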
M-quantile regression for multivariate longitudinal data with an application to the Millennium Cohort Study
Motivated by the analysis of data from the UK Millennium Cohort Study on emotional and behavioural disorders, we develop an M-quantile regression model for multivariate longitudinal responses. M-quantile regression is an appealing alternative to standard regression models; it combines features of quantile and expectile regression and it may produce a detailed picture of the conditional response variable distribution, while ensuring robustness to outlying data. As we deal with multivariate data, we need to specify what is meant by an M-quantile in this context, and how the structure of dependence between univariate profiles may be accounted for. Here, we consider univariate (conditional) M-quantile regression models with outcome-specific random effects. Dependence between outcomes is introduced by assuming that the random effects in the univariate models are dependent. The multivariate distribution of the random effects is left unspecified and estimated from the observed data. Adopting this approach, we are able to model dependence both within and between outcomes. We further discuss a suitable model parameterisation to account for potential endogeneity of the observed covariates. An extended EM algorithm is defined to derive estimates under a maximum likelihood approach.
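M-quantiles are typically defined through an asymmetrically weighted Huber loss, which interpolates between quantile behaviour (linear tails, robust) and expectile behaviour (quadratic core, efficient). A minimal sketch of that loss follows; the tuning constant c = 1.345 is a conventional robustness choice, an assumption here rather than a detail taken from the paper.

```python
import numpy as np

def mquantile_loss(r, tau=0.5, c=1.345):
    """Asymmetric Huber loss underlying M-quantile regression: quadratic
    near zero, linear in the tails, tilted by the quantile level tau."""
    r = np.asarray(r, dtype=float)
    huber = np.where(np.abs(r) <= c, 0.5 * r**2, c * np.abs(r) - 0.5 * c**2)
    tilt = np.where(r >= 0, tau, 1 - tau)   # over-predictions vs under-predictions
    return 2 * tilt * huber

# for tau > 0.5, positive residuals are penalised more than negative ones
print(mquantile_loss(1.0, tau=0.9), mquantile_loss(-1.0, tau=0.9))
```

At tau = 0.5 the loss is symmetric and the fit is an ordinary robust (Huber) regression; varying tau traces out the conditional distribution.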
Semiparametric regression analysis of length-biased interval-censored data
In a prevalent cohort design, subjects who have experienced an initial event but not the failure event are preferentially enrolled, and the observed failure times are often length-biased. Moreover, the prospective follow-up may not be continuously monitored, and the failure times are subject to interval censoring. We study non-parametric maximum likelihood estimation for the proportional hazards model with length-biased interval-censored data. Direct maximization of the likelihood function is intractable, so we develop a computationally simple and stable expectation-maximization algorithm by introducing two layers of data augmentation. We establish the strong consistency, asymptotic normality, and efficiency of the proposed estimator and provide an inferential procedure through profile likelihood. We assess the performance of the proposed methods through extensive simulations and apply them to the Massachusetts Health Care Panel Study.
ON PROFILE MM ALGORITHMS FOR GAMMA FRAILTY SURVIVAL MODELS
Gamma frailty survival models have been extensively used for the analysis of multivariate failure time data such as clustered failure times and recurrent events. Estimation and inference in these models often center on the nonparametric maximum likelihood method and its numerical implementation via the EM algorithm. Despite its success with incomplete-data problems, the algorithm may not fare well in high-dimensional situations. To address this problem, we propose a class of profile MM algorithms with good convergence properties. As a key step in constructing the minorizing functions, the high-dimensional objective function is decomposed into a sum of separable low-dimensional functions. This allows the algorithm to bypass the difficulty of inverting large matrices and facilitates its use in high-dimensional problems. Simulation studies show that the proposed algorithms perform well in various situations and converge reliably with practical sample sizes. The method is illustrated using data from a colorectal cancer study.
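The decomposition step described above can be illustrated on a toy problem: a coupled convex quadratic whose cross term is majorized by a separable bound, so each coordinate has a closed-form update and no matrix inversion is needed. Everything below is a schematic of the MM idea under that toy objective, not the authors' profile MM algorithm for gamma frailty models.

```python
def mm_minimize(x, y, iters=100):
    """Toy MM: minimize f(x, y) = x^2 + y^2 + x*y - 3x - 3y (minimizer (1, 1))
    by majorizing the coupling term x*y <= (t*x^2 + y^2/t)/2, with equality
    at t = y_k/x_k for positive iterates. The surrogate is separable, so the
    coordinate updates are closed-form and independent of each other."""
    for _ in range(iters):
        t = y / x            # tangency parameter at the current iterate
        x = 3 / (2 + t)      # argmin of x^2*(1 + t/2) - 3x
        y = 3 / (2 + 1 / t)  # argmin of y^2*(1 + 1/(2t)) - 3y
    return x, y

x, y = mm_minimize(2.0, 0.5)
print(round(x, 6), round(y, 6))  # converges to the minimizer (1, 1)
```

The same pattern scales: because the surrogate splits into one-dimensional pieces, the per-iteration cost grows linearly in dimension, which is the point of the separable decomposition.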