Catalogue Search | MBRL
142 result(s) for "self-consistent algorithm"
Distribution-free estimation with interval-censored contingent valuation data: troubles with Turnbull?
2007
Contingent valuation (CV) surveys frequently employ elicitation procedures that return interval-censored data on respondents’ willingness to pay (WTP). Almost without exception, CV practitioners have applied Turnbull’s self-consistent algorithm to such data in order to obtain nonparametric maximum likelihood (NPML) estimates of the WTP distribution. This paper documents two failings of Turnbull’s algorithm: (1) it may not converge to the NPML estimates, and (2) it may be very slow to converge. With regard to (1), we propose starting and stopping criteria for the algorithm that guarantee convergence to the NPML estimates. With regard to (2), we present a variety of alternative estimators and demonstrate, through Monte Carlo simulations, their performance advantages over Turnbull’s algorithm. Copyright Springer Science+Business Media, Inc. 2007
Journal Article
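Turnbull's self-consistent algorithm referred to in the abstract above is an EM-type fixed-point iteration over the probability masses placed on candidate support points. A minimal sketch follows; the function name, uniform starting values, and sup-norm stopping rule are illustrative choices for this sketch, not taken from the paper:

```python
import numpy as np

def turnbull_self_consistent(intervals, support, max_iter=1000, tol=1e-8):
    """Self-consistent (EM) iteration for interval-censored data.
    `intervals` is a list of (L, R) observation intervals (an exact
    observation is (x, x)); `support` lists candidate support points,
    which must cover every interval.  Returns the estimated probability
    mass at each support point."""
    n, m = len(intervals), len(support)
    # alpha[i, j] = 1 if support point j is compatible with observation i
    alpha = np.array([[1.0 if L <= s <= R else 0.0 for s in support]
                      for (L, R) in intervals])
    p = np.full(m, 1.0 / m)                   # uniform starting values
    for _ in range(max_iter):
        denom = alpha @ p                     # P(X in [L_i, R_i]) per observation
        weights = alpha * p / denom[:, None]  # E-step: posterior masses
        p_new = weights.mean(axis=0)          # M-step: average over observations
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p_new
```

As the abstract notes, this naive iteration is not guaranteed to reach the NPML estimates, which is precisely why the paper proposes starting and stopping criteria for it.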
On Consistency of the Self-Consistent Estimator of Survival Functions with Interval-Censored Data
by
Yu, Qiqing
,
Li, Linxiong
,
Wong, George Y. C.
in
case 2 interval-censored data
,
Censored data
,
Censorship
2000
The self-consistent estimator is commonly used for estimating a survival function with interval-censored data. Recent studies on interval censoring have focused on case 2 interval censoring, which does not involve exact observations, and double censoring, which involves only exact, right-censored or left-censored observations. In this paper, we consider an interval censoring scheme that involves exact, left-censored, right-censored and strictly interval-censored observations. Under this censoring scheme, we prove that the self-consistent estimator is strongly consistent under certain regularity conditions.
Journal Article
Non-parametric Hypothesis Testing and Confidence Intervals with Doubly Censored Data
2003
The non-parametric maximum likelihood estimator (NPMLE) of the distribution function with doubly censored data can be computed using the self-consistent algorithm (Turnbull, 1974). We extend the self-consistent algorithm to include a constraint on the NPMLE. We then show how to construct confidence intervals and test hypotheses based on the NPMLE via the empirical likelihood ratio. Finally, we present some numerical comparisons of the performance of the above method with another method that makes use of the influence functions.
Journal Article
Asymptotic Properties of Self-Consistent Estimators with Doubly-Censored Data
2001
The asymptotic properties of the self-consistent estimator (SCE) of a distribution function F of a random variable X with doubly-censored data have been examined by several authors under the assumption that X is observable everywhere in the interval [a, b], where a = inf{x : F(x) > 0} and b = sup{x : F(x) < 1}. Such an assumption rules out the situation that X is discrete and the situation that X is only observable in a nontrivial subinterval of [a, b]; however, this assumption is often not satisfied in practice. In this manuscript we establish strong uniform consistency, asymptotic normality and asymptotic efficiency of the SCE under a set of assumptions that allow both of these situations. Finally, we point out a gap in the proofs of the existing results in the literature due to the definition of the SCE.
Journal Article
Self-consistent determination of long-range electrostatics in neural network potentials
2022
Machine learning has the potential to revolutionize the field of molecular simulation through the development of efficient and accurate models of interatomic interactions. Neural networks can model interactions with the accuracy of quantum mechanics-based calculations, but with a fraction of the cost, enabling simulations of large systems over long timescales. However, implicit in the construction of neural network potentials is an assumption of locality, wherein atomic arrangements on the nanometer-scale are used to learn interatomic interactions. Because of this assumption, the resulting neural network models cannot describe long-range interactions that play critical roles in dielectric screening and chemical reactivity. Here, we address this issue by introducing the self-consistent field neural network — a general approach for learning the long-range response of molecular systems in neural network potentials that relies on a physically meaningful separation of the interatomic interactions — and demonstrate its utility by modeling liquid water with and without applied fields.
Machine learning-based neural network potentials often cannot describe long-range interactions. Here the authors present an approach for building neural network potentials that can describe the electronic and nuclear response of molecular systems to long-range electrostatics.
Journal Article
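The locality assumption discussed in the abstract above can be made concrete with a toy energy split: a learned short-range term plus explicit long-range Coulomb electrostatics evaluated on (predicted) partial charges. The sketch below is illustrative only; the names and units are my own, and the bare all-pairs Coulomb sum (no cutoff or Ewald treatment) is a simplification, not the paper's self-consistent field neural-network architecture:

```python
import numpy as np

def coulomb_energy(pos, q):
    """All-pairs Coulomb energy (arbitrary units) for point charges q
    at positions pos (n x 3 array)."""
    n = len(q)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            e += q[i] * q[j] / np.linalg.norm(pos[i] - pos[j])
    return e

def total_energy(pos, q, short_range_model):
    """Locality split: a learned short-range term (any callable taking
    positions) plus explicit long-range electrostatics on the charges."""
    return short_range_model(pos) + coulomb_energy(pos, q)
```

In this decomposition only the short-range term needs to be learned from nanometer-scale environments; the long-range response is handled by explicit physics.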
Machine learning electronic structure methods based on the one-electron reduced density matrix
by
Pavanello, Michele
,
Tuckerman, Mark E.
,
Paetow, Lukas
in
639/638/563
,
639/638/563/980
,
639/766/36/1122
2023
The theorems of density functional theory (DFT) establish bijective maps between the local external potential of a many-body system and its electron density, wavefunction and, therefore, one-particle reduced density matrix. Building on this foundation, we show that machine learning models based on the one-electron reduced density matrix can be used to generate surrogate electronic structure methods. We generate surrogates of local and hybrid DFT, Hartree-Fock and full configuration interaction theories for systems ranging from small molecules such as water to more complex compounds like benzene and propanol. The surrogate models use the one-electron reduced density matrix as the central quantity to be learned. From the predicted density matrices, we show that either standard quantum chemistry or a second machine-learning model can be used to compute molecular observables, energies, and atomic forces. The surrogate models can generate essentially anything that a standard electronic structure method can, ranging from band gaps and Kohn-Sham orbitals to energy-conserving ab-initio molecular dynamics simulations and infrared spectra, which account for anharmonicity and thermal effects, without the need to employ computationally expensive algorithms such as self-consistent field theory. The algorithms are packaged in an efficient and easy to use Python code, QMLearn, accessible on popular platforms.
Electronic structure methods are vital, yet they are often too computationally expensive. Here, the authors develop machine learned density matrices to fully represent electronic structures in a computationally cheap and accurate way.
Journal Article
Riemannian Newton Methods for Energy Minimization Problems of Kohn–Sham Type
by
Stykel, T.
,
Peterseim, D.
,
Altmann, R.
in
Algorithms
,
Computational Mathematics and Numerical Analysis
,
Eigenvalues
2024
This paper is devoted to the numerical solution of constrained energy minimization problems arising in computational physics and chemistry, such as the Gross–Pitaevskii and Kohn–Sham models. In particular, we introduce Riemannian Newton methods on the infinite-dimensional Stiefel and Grassmann manifolds. We study the geometry of these two manifolds and its impact on the Newton algorithms, and present expressions of the Riemannian Hessians in the infinite-dimensional setting, which are suitable for variational spatial discretizations. A series of numerical experiments illustrates the performance of the methods and demonstrates their superiority over other well-established schemes such as the self-consistent field iteration and gradient descent schemes.
Journal Article
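The gradient-descent baseline mentioned in the abstract above can be illustrated on a finite-dimensional toy problem: minimising trace(XᵀAX) over orthonormal n × k matrices X, whose minimisers span the k lowest eigenvectors of A. The sketch below uses tangent-space projection and a QR retraction; it is a linear toy under my own assumptions, not the paper's infinite-dimensional Newton method:

```python
import numpy as np

def stiefel_gradient_descent(A, k, steps=1000, lr=0.05, seed=0):
    """Projected gradient descent on the Stiefel manifold for
    min trace(X^T A X) subject to X^T X = I, where A is symmetric.
    Euclidean gradient, tangent-space projection, QR retraction."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))  # random orthonormal start
    for _ in range(steps):
        G = 2.0 * A @ X                   # Euclidean gradient of the energy
        PG = G - X @ (X.T @ G)            # project onto the tangent space at X
        X, _ = np.linalg.qr(X - lr * PG)  # retract the step back onto the manifold
    return X
```

For A = diag(1, 2, 3, 4) and k = 2 this converges to a basis of the two lowest eigenvectors, with energy trace(XᵀAX) ≈ 3; Newton-type methods accelerate exactly this kind of first-order iteration.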
The System of Self-Consistent Models: The Case of Henry’s Law Constants
by
Benfenati, Emilio
,
Leszczynska, Danuta
,
Toropova, Alla P.
in
Algorithms
,
Atmosphere
,
Climate change
2023
Data on Henry’s law constants make it possible to systematize the geochemical conditions affecting the state of the atmosphere and, consequently, triggering climate change. Henry’s law constants are needed to assess processes related to atmospheric contamination by pollutants, the most important of which are those capable of long-term transport over long distances, an ability closely related to the values of the constants. Chemical changes in gaseous mixtures affect the fate of atmospheric pollutants as well as ecology, climate, and human health. Since the number of organic compounds present in the atmosphere is extremely large, it is desirable to develop models suitable for predictions over the large pool of organic molecules that may be present in the atmosphere. Here, we report the development of such a model for Henry’s law constant predictions of 29,439 compounds using the CORAL software (2023). The statistical quality of the model is characterized by a coefficient of determination of about 0.81 (on average) for the training and validation sets.
Journal Article