25 results for "Mengfei Ran"
Optimal Estimation of Large Functional and Longitudinal Data by Using Functional Linear Mixed Model
The estimation of large functional and longitudinal data, which comprises estimating the mean function, estimating the covariance function, and predicting individual trajectories, is one of the most challenging problems in high-dimensional statistics. Functional Principal Components Analysis (FPCA) and the Functional Linear Mixed Model (FLMM) are the two major statistical tools for this task; however, the former suffers from a rapidly increasing computational burden, while the latter lacks clear asymptotic properties. In this paper, we propose a computationally efficient estimator of large functional and longitudinal data within the FLMM framework, in which all parameters can be estimated automatically. Under certain regularity assumptions, we prove that the mean function estimator and the individual trajectory predictor attain the minimax lower bounds over all nonparametric estimators. Through extensive simulations and real data analysis, we show that the new estimator outperforms traditional FPCA in mean function estimation, individual trajectory prediction, variance estimation, covariance function estimation, and computational efficiency.
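The mean-plus-random-effect structure behind FLMM-type estimation can be illustrated with a toy random-intercept model. This is a minimal sketch, not the authors' estimator: the mean function, the polynomial basis (standing in for the spline or eigenbasis expansions used in FLMM/FPCA), and the variance values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = lambda s: np.sin(2 * np.pi * s)       # illustrative true mean function
n, m = 50, 10                              # subjects, observations per subject
sig_b, sig_e = 0.5, 0.3                    # random-intercept and noise SDs

t = rng.uniform(0, 1, (n, m))
b = sig_b * rng.normal(size=(n, 1))        # subject-specific random intercepts
y = mu(t) + b + sig_e * rng.normal(size=(n, m))

# Estimate the mean function by pooled basis regression over all subjects
deg = 7                                    # polynomial basis size (with intercept)
B = np.vander(t.ravel(), deg)
coef, *_ = np.linalg.lstsq(B, y.ravel(), rcond=None)
mu_hat = lambda s: np.vander(np.atleast_1d(s), deg) @ coef

# Predict each individual trajectory: estimated mean + BLUP of the intercept,
# which shrinks the subject's mean residual toward zero
resid = y - (B @ coef).reshape(n, m)
shrink = sig_b**2 / (sig_b**2 + sig_e**2 / m)
b_hat = shrink * resid.mean(axis=1)
traj_pred = lambda i, s: mu_hat(s) + b_hat[i]
```

The shrinkage factor is the classical best-linear-unbiased-predictor weight for a random intercept; a full FLMM would additionally estimate the variance components and a smooth covariance function from the data.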
Grey modeling based on the transformation of Aarc cot x+B function
Purpose – This paper uses the proposed function transformation to make the original data series satisfy the properties of a reduced smooth ratio and a reduced stepwise ratio deviation, thereby improving the accuracy of the grey forecasting model. Design/methodology/approach – Based on the shape of the anti-cotangent function's graph, the theory of function transformations, and grey system modeling, the authors propose a grey model based on the transformation A arc cot x + B. Findings – The results on a practical example show that the proposed method improves both fitting effectiveness and forecasting accuracy. Practical implications – The proposed method can effectively improve forecasting accuracy for high-growth original data series (series whose derivative is not only greater than 1 but also increasing). Originality/value – The paper provides an effective function transformation that significantly reduces the smooth ratio and the stepwise ratio deviation.
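The effect of an A·arccot(x) + B transformation on a high-growth series can be sketched numerically. The series and the constants A and B below are hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

arccot = lambda v: np.pi / 2 - np.arctan(v)     # arccot on (0, inf)

x = np.array([2.0, 4.1, 8.3, 16.9, 34.2])       # hypothetical high-growth series
A, B = -2.0, 5.0                                # A < 0 keeps the series ascending
y = A * arccot(x) + B                           # transformed series, in (5 - pi, 5)

# Stepwise ratios x(k)/x(k-1); a smoother series has ratios closer to 1,
# which is what makes grey GM/DGM models fit well
r_x = x[1:] / x[:-1]
r_y = y[1:] / y[:-1]

# Forecasts made on the transformed scale map back via x = cot((y - B) / A),
# i.e. 1 / tan((y - B) / A)
back = 1.0 / np.tan((y - B) / A)
```

For this series the transformed stepwise ratios are far closer to 1 than the originals, which is the smoothing effect the abstract describes.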
Robust semiparametric modeling of mean and covariance in longitudinal data
Longitudinal data often suffer from heavy-tailed errors and outliers, which can significantly reduce efficiency and lead to invalid inferences. Robust techniques are essential, especially in joint mean-covariance modeling, as the estimation of the covariance matrix is more sensitive to heavy-tailed errors and outliers than the estimation of the mean. Motivated by the modified Cholesky decomposition of the covariance matrix, we propose a novel semiparametric method that uses robust techniques to simultaneously estimate the mean, autoregressive coefficients, and innovation variance. We provide a practical algorithm for this method and investigate the asymptotic properties of the mean and covariance estimators. Numerical simulations demonstrate that the proposed method is efficient and stable when the dataset is contaminated with outliers and heavy-tailed errors. The new robust technique yields statistically interpretable inferences in real data analysis, whereas traditional approaches fail to provide any acceptable inferences.
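The modified Cholesky decomposition that motivates the method can be sketched directly: it factors a covariance matrix S as T S Tᵀ = D, where the negated below-diagonal entries of the unit lower triangular T are autoregressive coefficients and D holds innovation variances. A minimal sketch with a hypothetical AR(1)-type covariance:

```python
import numpy as np

# Hypothetical AR(1)-type covariance for m repeated measurements
m, rho = 5, 0.6
S = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))

# Cholesky S = C C^T, then split C = L D^(1/2) with L unit lower triangular
C = np.linalg.cholesky(S)
d = np.diag(C) ** 2                  # innovation variances
L = C / np.sqrt(d)                   # divide each column by its scale
T = np.linalg.inv(L)                 # unit lower triangular, T S T^T = diag(d)

# Row j of T encodes y_j - sum_k phi_{jk} y_k: the negated below-diagonal
# entries of T are the autoregressive coefficients phi_{jk}
phi = -np.tril(T, k=-1)
```

For this Markov covariance each measurement regresses only on its immediate predecessor with coefficient rho, which the decomposition recovers exactly; robust joint mean-covariance methods estimate the phi's and d's from data instead.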
DGM model based on anti-cotangent function and its application
To improve the fitting accuracy of grey forecasting models, this paper uses function transformation theory and grey system modeling theory to build a DGM(1,1) model based on an anti-cotangent function transformation for non-negative ascending sequences, and a DGM(1,1) model based on a variable-speed translation anti-cotangent function transformation for non-negative oscillating sequences. For a non-negative ascending sequence, it is proved that the smooth ratio, stepwise ratio deviation, and stepwise ratio variance can all be reduced effectively by the transformation. The transformation methods are then used to set up models for ascending and oscillating sequences, and the calculated results on actual examples show that the proposed function transformations are effective. Keywords: Function Transformation; Stepwise Ratio Deviation; Smooth Ratio; Stepwise Ratio Variance
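The DGM(1,1) machinery the paper builds on can be sketched on its own: accumulate the series, fit the linear recursion x1(k+1) = b1·x1(k) + b2 by least squares, iterate it forward, and difference back. A minimal sketch (the test series is a hypothetical geometric sequence, which DGM(1,1) reproduces exactly):

```python
import numpy as np

def dgm11_fit_predict(x0, horizon=0):
    """Fit the discrete grey model DGM(1,1) and return the in-sample fit
    plus an optional extrapolation, on the original scale."""
    x1 = np.cumsum(x0)                                  # accumulated series
    # x1(k+1) = b1 * x1(k) + b2, estimated by least squares
    Xmat = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
    (b1, b2), *_ = np.linalg.lstsq(Xmat, x1[1:], rcond=None)
    n = len(x0) + horizon
    x1_hat = np.empty(n)
    x1_hat[0] = x0[0]
    for k in range(1, n):                               # iterate the recursion
        x1_hat[k] = b1 * x1_hat[k - 1] + b2
    return np.diff(x1_hat, prepend=0.0)                 # back to original scale

x0 = 3 * 1.2 ** np.arange(8)            # hypothetical geometric series
fit = dgm11_fit_predict(x0, horizon=2)  # 8 fitted values + 2 forecasts
```

The paper's contribution sits in front of this step: transforming a rough or oscillating sequence so that the accumulated series follows the linear recursion more closely.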
A Generalized Adaptive Joint Learning Framework for High-Dimensional Time-Varying Models
In modern biomedical and econometric studies, longitudinal processes are often characterized by complex time-varying associations and abrupt regime shifts that are shared across correlated outcomes. Standard functional data analysis (FDA) methods, which prioritize smoothness, often fail to capture these dynamic structural features, particularly in high-dimensional settings. This article introduces Adaptive Joint Learning (AJL), a hierarchical regularization framework designed to integrate functional variable selection with structural changepoint detection in multivariate time-varying coefficient models. Unlike standard simultaneous estimation approaches, we propose a theoretically grounded two-stage screening-and-refinement procedure. The first stage combines adaptive group-wise penalization with sure screening principles to robustly identify active predictors; the second applies a refined fused regularization step that borrows strength across multiple outcomes to detect local regime shifts. We provide a rigorous theoretical analysis of the estimator in the ultra-high-dimensional regime (p >> n). Crucially, we establish the sure screening consistency of the first stage, which serves as the foundation for proving that the refined estimator achieves the oracle property: it performs as well as if the true active set and changepoint locations were known a priori. A key theoretical contribution is the explicit handling of approximation bias via undersmoothing conditions to ensure valid asymptotic inference. The proposed method is validated through comprehensive simulations and an application to Sleep-EDF data, revealing novel dynamic patterns in physiological states.
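The first-stage idea, marginal (sure) screening before refinement, can be sketched in a few lines. This is a generic sure-independence-screening sketch on assumed Gaussian data, not the AJL procedure itself; the dimensions, the sparse support, and the cutoff d = n/log(n) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 1000                      # ultra-high-dimensional regime, p >> n
X = rng.normal(size=(n, p))
beta = np.zeros(p)
active = [3, 17, 256]                 # hypothetical sparse support
beta[active] = [2.0, -1.5, 1.5]
y = X @ beta + rng.normal(size=n)

# Stage 1 (screening): rank predictors by absolute marginal association and
# keep the top d; columns are on a common scale here, so no standardization
score = np.abs((X - X.mean(0)).T @ (y - y.mean()))
d = int(n / np.log(n))
keep = np.argsort(score)[::-1][:d]
# Stage 2 (refinement, e.g. penalized fitting) would run on X[:, keep] only
```

Sure screening consistency is exactly the guarantee that, with high probability, `keep` contains the true active set, so the refinement stage faces only a moderate-dimensional problem.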
Group-Sparse Smoothing for Longitudinal Models with Time-Varying Coefficients
Longitudinal data analysis is fundamental for understanding dynamic processes in biomedical and social sciences. Although varying coefficient models (VCMs) provide a flexible framework by allowing covariate effects to evolve over time, fitting all effects as time-varying may lead to overfitting, efficiency loss, and reduced interpretability when some effects are actually constant. In contrast, standard linear mixed models (LMMs) may suffer substantial bias when temporal heterogeneity is ignored. To address this issue, we propose TV-Select, a unified time-varying effect selection framework for structural identification that simultaneously selects relevant variables and determines whether their effects are constant or time-varying. The proposed method decomposes each coefficient function into a time-invariant mean component and a centered time-varying deviation, where the latter is approximated by B-splines. We then construct a doubly penalized objective function that combines a group Lasso penalty for structural sparsity with a roughness penalty for smoothness control. An efficient block coordinate descent algorithm is developed for computation. Under standard semiparametric regularity conditions, we establish selection consistency and oracle-type asymptotic properties, including asymptotic normality of the constant-effect component after correct structure recovery. Simulation studies and a real-data application show that TV-Select achieves more accurate structural recovery, smoother functional estimation, and better predictive performance than competing methods.
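The constant-plus-deviation decomposition can be illustrated with an unpenalized toy fit. This is a minimal sketch under assumed data: a polynomial basis stands in for the centered B-splines, and the group Lasso and roughness penalties of TV-Select are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
t = rng.uniform(0, 1, n)
x = rng.normal(size=n)
beta_fn = lambda s: 1.0 + np.sin(2 * np.pi * s)   # true time-varying effect
y = x * beta_fn(t) + 0.2 * rng.normal(size=n)

# Decompose beta(t) = c + deviation(t): centering the basis columns makes the
# constant component c identifiable as the average effect
raw = np.vander(t, 7)[:, :-1]        # t^6 ... t^1, no intercept column
ctr = raw.mean(axis=0)
Bc = raw - ctr                       # centered deviation basis

design = np.column_stack([x, x[:, None] * Bc])    # [constant | deviation]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
c_hat = coef[0]                      # time-invariant mean component

beta_hat = lambda s: c_hat + (np.vander(np.atleast_1d(s), 7)[:, :-1] - ctr) @ coef[1:]
```

In TV-Select, a group penalty on the deviation coefficients `coef[1:]` would shrink them to exactly zero when the effect is in fact constant, which is how the constant-versus-time-varying structure is selected.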
Adaptive Penalized Doubly Robust Regression for Longitudinal Data
Longitudinal data often involve heterogeneity, sparse signals, and contamination from response outliers or high-leverage observations, especially in biomedical science. Existing methods usually address only part of this problem, emphasizing either penalized mixed effects modeling without robustness or robust mixed effects estimation without high-dimensional variable selection. We propose a doubly adaptive robust regression (DAR-R) framework for longitudinal linear mixed effects models. It combines a robust pilot fit, doubly adaptive observation weights for residual outliers and leverage points, and folded concave penalization for fixed effect selection, together with weighted updates of random effects and variance components. We develop an iterative reweighting algorithm and establish estimation and prediction error bounds, support recovery consistency, and oracle-type asymptotic normality. Simulations show that DAR-R improves estimation accuracy, false-positive control, and covariance estimation under both vertical outliers and bad leverage contamination. In the TADPOLE/ADNI Alzheimer's disease application, DAR-R achieves accurate and stable prediction of ADAS13 while selecting clinically meaningful predictors with strong resampling stability.
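The iterative reweighting idea can be sketched with plain Huber-type residual weights standing in for DAR-R's doubly adaptive weights (no leverage term, no penalty, no random effects). The data, the contamination pattern, and the tuning constant 1.345 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.3 * rng.normal(size=n)
y[:10] += 8.0                                  # vertical (response) outliers

beta = np.linalg.lstsq(X, y, rcond=None)[0]    # non-robust pilot fit
for _ in range(25):
    r = y - X @ beta
    s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust MAD scale
    u = np.abs(r) / (s + 1e-12)
    w = np.where(u <= 1.345, 1.0, 1.345 / u)   # Huber residual weights
    # Weighted least squares update: (X^T W X) beta = X^T W y
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```

The contaminated observations end up with small weights, so the final estimate stays near the truth; DAR-R layers leverage-based downweighting, folded concave penalties, and mixed-model updates on top of this loop.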
Block Empirical Likelihood Inference for Longitudinal Generalized Partially Linear Single-Index Models
Generalized partially linear single-index models (GPLSIMs) provide a flexible and interpretable semiparametric framework for longitudinal outcomes by combining a low-dimensional parametric component with a nonparametric index component. For repeated measurements, valid inference is challenging because within-subject correlation induces nuisance parameters and variance estimation can be unstable in semiparametric settings. We propose a profile estimating-equation approach based on spline approximation of the unknown link function and construct a subject-level block empirical likelihood (BEL) for joint inference on the parametric coefficients and the single-index direction. The resulting BEL ratio statistic enjoys a Wilks-type chi-square limit, yielding likelihood-free confidence regions without explicit sandwich variance estimation. We also discuss practical implementation, including constrained optimization for the index parameter, working-correlation choices, and bootstrap-based confidence bands for the nonparametric component. Simulation studies and an application to the epilepsy longitudinal study illustrate the finite-sample performance.
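Empirical likelihood mechanics can be illustrated in the simplest scalar-mean case; the block (subject-level) version replaces individual observations with subject-level estimating-function sums. This is a generic EL sketch, not the paper's BEL: it solves the dual equation by bisection.

```python
import numpy as np

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (scalar case).
    Solves the dual equation sum z_i / (1 + lam * z_i) = 0 by bisection."""
    z = x - mu
    if z.max() <= 0 or z.min() >= 0:          # mu outside the convex hull
        return np.inf
    eps = 1e-8                                # keep 1 + lam*z_i > 0
    lo = -1.0 / z.max() + eps
    hi = -1.0 / z.min() - eps
    g = lambda lam: np.sum(z / (1.0 + lam * z))   # strictly decreasing in lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * z))
```

The statistic is zero at the sample mean and grows as the hypothesized mean moves away; the Wilks-type result the abstract cites says it is asymptotically chi-square, so confidence regions come from thresholding it with no variance estimate.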
Universal 2-Local Symmetry-Preserving Quantum Neural Networks for Fermionic Systems
Simulating quantum many-body systems represents a fundamental challenge where classical machine learning methods are severely bottlenecked by the exponential curse of dimensionality. Variational Quantum Algorithms (VQAs) offer a native paradigm to tackle this by optimizing parameterized unitary evolutions to find the ground states of problem Hamiltonians. However, the efficacy of these VQAs is deeply hindered by the challenge of balancing the preservation of critical physical symmetries with the strict constraints of hardware implementability. In this work, we address this dilemma by proposing a hardware-efficient, symmetry-preserving ansatz with complete theoretical guarantees for fermionic systems, termed the Hamming Weight Preserving (HWP) ansatz. We establish the necessary and sufficient conditions for 2-local HWP operators to achieve subspace universality, formally debunking the prevailing assumption that truncation-free simulation requires complex high-order interactions. Empirical validations corroborate our theoretical guarantees, showcasing the exact approximation of arbitrary unitary matrices within the HWP subspace. Crucially, we demonstrate the versatility of the proposed approach by deploying the exact same ansatz across distinct fermionic models, including diverse molecular electronic structures and the Fermi-Hubbard model. Our HWP ansatz consistently suppresses ground-state energy errors below \(1 \times 10^{-10}\) Ha, a level of precision that surpasses the stringent threshold of chemical accuracy by multiple orders of magnitude. This work establishes a complete, theoretically grounded 2-local framework for symmetry-preserving computation, offering a universal and hardware-efficient building block for advancing quantum machine learning and fermionic many-body simulations.
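What "Hamming weight preserving" means for a 2-local operator can be checked concretely: on two qubits, a Givens rotation mixing only |01⟩ and |10⟩ never moves amplitude between Hamming-weight sectors. A minimal numerical sketch (the angle is arbitrary; this is a textbook HWP building block, not the paper's full ansatz):

```python
import numpy as np

def hwp_givens(theta):
    """2-qubit Givens rotation acting on the span of |01> and |10> only."""
    c, s = np.cos(theta), np.sin(theta)
    U = np.eye(4, dtype=complex)     # basis order |00>, |01>, |10>, |11>
    U[1, 1], U[1, 2] = c, -s
    U[2, 1], U[2, 2] = s, c
    return U

U = hwp_givens(0.7)
weights = np.array([0, 1, 1, 2])     # Hamming weight of each basis state

# HWP <=> no matrix element connects basis states of different weight,
# so U is block-diagonal over the weight-0, weight-1, and weight-2 sectors
cross = weights[:, None] != weights[None, :]

psi = U @ np.array([0, 1, 0, 0], dtype=complex)   # U|01>: stays weight-1
```

Chaining such 2-local blocks keeps every state inside its particle-number sector, which is exactly the symmetry constraint the ansatz enforces; the paper's universality result concerns when these blocks generate every unitary on a fixed-weight subspace.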
Cadmium Induces Kidney Iron Deficiency and Chronic Kidney Injury by Interfering with the Iron Metabolism in Rats
Cadmium (Cd) is a common environmental pollutant and occupational toxicant that seriously affects various mammalian organs, especially the kidney. Iron is an essential trace element in the body, and disorders of iron metabolism are involved in the development of multiple pathological processes. Iron overload can induce a recently defined type of cell death, ferroptosis. However, whether iron metabolism is abnormal in Cd-induced nephrotoxicity, and what role ferroptosis plays in it, need to be further elucidated. Male Sprague Dawley rats were randomly assigned to three groups: a control group, a 50 mg/L CdCl2-treated group, and a 75 mg/L CdCl2-treated group, exposed via drinking water for 1 month and 6 months, respectively. The results showed that Cd could induce renal histopathological abnormalities and dysfunction, disrupt mitochondrial ultrastructure, and increase ROS and MDA content. Next, Cd exposure caused GSH/GPX4 axis blockade, increased FTH1 and COX2 expression, decreased ACSL4 expression, and significantly decreased the iron content in proximal tubular cells and kidney tissues. Further study showed that the expression of the iron absorption-related genes SLC11A2, CUBN, LRP2, SLC39A14, and SLC39A8 decreased in proximal tubular cells or kidneys after Cd exposure, while TFRC and the iron export-related gene SLC40A1 did not change significantly. Moreover, Cd exposure increased SLC11A2 expression and decreased SLC40A1 expression in the duodenum. Finally, NAC or Fer-1 partially alleviated Cd-induced proximal tubular cell damage, while DFO and Erastin further aggravated it. In conclusion, our results indicate that Cd causes iron deficiency and chronic kidney injury by interfering with iron metabolism rather than through typical ferroptosis. Our findings suggest that abnormal iron metabolism may contribute to Cd-induced nephrotoxicity, providing a novel approach to preventing kidney disease in clinical practice.