Catalogue Search | MBRL
994 result(s) for "Lu, Wenbin"
A Massive Data Framework for M-Estimators with Cubic-Rate
by Lu, Wenbin; Song, Rui; Shi, Chengchun
in Asymptotic properties; Computation; Computer simulation
2018
The divide-and-conquer method is a common strategy for handling massive data. In this article, we study the divide-and-conquer method for cubic-rate estimators under the massive data framework. We develop a general theory for establishing the asymptotic distribution of the aggregated M-estimators, using a weighted average with weights depending on the subgroup sample sizes. Under certain conditions on the growth rate of the number of subgroups, the resulting aggregated estimators are shown to have a faster convergence rate and an asymptotically normal distribution, which are more tractable in both computation and inference than the original M-estimators based on the pooled data. Our theory applies to a wide class of M-estimators with cube-root convergence rate, including the location estimator, the maximum score estimator, and the value search estimator. Empirical results from simulations and a real-data application also validate our theoretical findings. Supplementary materials for this article are available online.
Journal Article
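The weighted aggregation step described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's exact estimator: the choice of weights proportional to n_k^(2/3) (matching the cube-root rate) is an assumption made here for demonstration.

```python
import numpy as np

def aggregate_m_estimates(estimates, sizes, rate=2.0 / 3.0):
    """Weighted average of per-subgroup M-estimates.

    Each subgroup k contributes an estimate computed from n_k observations;
    the aggregate uses size-based weights (here, proportional to n_k**rate,
    an illustrative assumption).
    """
    w = np.asarray(sizes, dtype=float) ** rate
    w /= w.sum()  # normalize weights to sum to one
    return float(np.dot(w, np.asarray(estimates, dtype=float)))

# Toy usage: with equal subgroup sizes, the weights are equal and the
# aggregate reduces to the plain mean of the subgroup estimates.
theta = aggregate_m_estimates([1.0, 2.0, 3.0], [100, 100, 100])
```

With unequal sizes, larger subgroups pull the aggregate toward their estimates, which is the intuition behind size-dependent weighting.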
Change-Plane Analysis for Subgroup Detection and Sample Size Calculation
2017
We propose a systematic method for testing and identifying a subgroup with an enhanced treatment effect. We adopt a change-plane technique to first test the existence of a subgroup, and then identify the subgroup if the null hypothesis of nonexistence is rejected. A semiparametric model is considered for the response, with an unspecified baseline function and an interaction between a subgroup indicator and treatment. A doubly robust test statistic is constructed based on this model, and the asymptotic distributions of the test statistic under both the null and local alternative hypotheses are derived. Moreover, a sample size calculation method for subgroup detection is developed based on the proposed statistic. The finite-sample performance of the proposed test is evaluated via simulations. Finally, the proposed methods for subgroup identification and sample size calculation are applied to data from an AIDS study.
Journal Article
On estimation of optimal treatment regimes for maximizing t-year survival probability
by Jiang, Runchao; Lu, Wenbin; Song, Rui
in Acquired immune deficiency syndrome; acquired immunodeficiency syndrome; AIDS
2017
A treatment regime is a deterministic function that dictates personalized treatment based on patients’ individual prognostic information. There is increasing interest in finding optimal treatment regimes, which determine treatment at one or more treatment decision points to maximize expected long-term clinical outcomes, where larger outcomes are preferred. For chronic diseases such as cancer or human immunodeficiency virus infection, survival time is often the outcome of interest, and the goal is to select treatment to maximize survival probability. We propose two non-parametric estimators for the survival function of patients following a given treatment regime involving one or more decisions, i.e. the so-called value. On the basis of data from a clinical or observational study, we estimate an optimal regime by maximizing these estimators for the value over a prespecified class of regimes. Because the value function is very jagged, we introduce kernel smoothing within the estimator to improve performance. Asymptotic properties of the proposed estimators of value functions are established under suitable regularity conditions, and simulation studies evaluate the finite sample performance of the regime estimators. The methods are illustrated by application to data from an acquired immune deficiency syndrome clinical trial.
Journal Article
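The kernel-smoothing idea mentioned in the abstract above — the value function is jagged in the rule's threshold, so a hard indicator is replaced by a smooth surrogate — can be sketched as follows. The normal-CDF kernel and the bandwidth are illustrative assumptions, not the paper's exact choices.

```python
import math

def smooth_indicator(x, c, h=0.1):
    """Smooth surrogate for the hard rule 1{x > c}.

    Replaces the step function with the normal CDF Phi((x - c) / h), so a
    value estimate built from it is differentiable in the threshold c and
    easier to optimize. Smaller bandwidth h gives a sharper transition.
    """
    return 0.5 * (1.0 + math.erf((x - c) / (h * math.sqrt(2.0))))

# Toy usage: far above the threshold the surrogate is near 1, far below it
# is near 0, and exactly at the threshold it equals 0.5.
p = smooth_indicator(1.0, 0.0)
```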
Concordance-assisted learning for estimating optimal individualized treatment regimes
by Fan, Caiyun; Lu, Wenbin; Song, Rui
in Acquired immune deficiency syndrome; acquired immunodeficiency syndrome; AIDS
2017
We propose new concordance-assisted learning for estimating optimal individualized treatment regimes. We first introduce a type of concordance function for prescribing treatment and propose a robust rank regression method for estimating the concordance function. We then find treatment regimes, up to a threshold, that maximize the concordance function, named the prescriptive index. Finally, within the class of treatment regimes that maximize the concordance function, we find the optimal threshold to maximize the value function. We establish the rate of convergence and asymptotic normality of the proposed estimator for parameters in the prescriptive index. An induced smoothing method is developed to estimate the asymptotic variance of the estimator. We also establish the n^(1/3)-consistency of the estimated optimal threshold and its limiting distribution. In addition, a doubly robust estimator of parameters in the prescriptive index is developed under a class of monotonic index models. The practical use and effectiveness of the proposed methodology are demonstrated by simulation studies and an application to an acquired immune deficiency syndrome data set.
Journal Article
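A concordance-type criterion of the kind described above can be sketched as follows. This pairwise-agreement form, the linear prescriptive index, and the hypothetical benefit scores `d` are illustrative simplifications, not the paper's exact concordance function.

```python
import numpy as np

def concordance(X, d, beta):
    """Fraction of comparable pairs where a larger prescriptive index
    X @ beta goes with a larger (hypothetical) treatment benefit d.

    A pair (i, j) with d[i] > d[j] is comparable; it is concordant if the
    index also orders them the same way. Assumes at least one comparable pair.
    """
    idx = np.asarray(X) @ np.asarray(beta)
    n = len(d)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    agree = sum(bool(idx[i] > idx[j]) and (d[i] > d[j]) for i, j in pairs)
    comp = sum(d[i] > d[j] for i, j in pairs)
    return float(agree) / float(comp)

# Toy usage: an index perfectly aligned with the benefits scores 1.0.
c = concordance([[1.0], [2.0], [3.0]], [1.0, 2.0, 3.0], [1.0])
```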
Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects
by Song, Rui; Lu, Wenbin; Shi, Chengchun
in Asymptotic methods; Clinical research; Clinical trials
2018
A salient feature of data from clinical trials and medical studies is inhomogeneity. Patients differ not only in baseline characteristics, but also in how they respond to treatment. Optimal individualized treatment regimes are developed to select effective treatments based on patients' heterogeneity. However, the optimal treatment regime might also vary for patients across different subgroups. We mainly consider patients' heterogeneity caused by groupwise individualized treatment effects, assuming the same marginal treatment effects for all groups. We propose a new maximin projection learning method for estimating a single treatment decision rule that works reliably for a group of future patients from a possibly new subpopulation. Based on the estimated optimal treatment regimes for all subgroups, the proposed maximin treatment regime is obtained by solving a quadratically constrained linear programming problem, which can be computed efficiently by interior-point methods. Consistency and asymptotic normality of the estimator are established. Numerical examples demonstrate the reliability of the proposed methodology.
Journal Article
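The maximin idea described above — one decision rule that performs reliably across all subgroups — can be illustrated in two dimensions. This grid search over unit vectors is a toy stand-in for the paper's quadratically constrained linear program, and the subgroup rules `B` are hypothetical inputs.

```python
import numpy as np

def maximin_rule_2d(B, n_grid=3600):
    """Find a unit-norm direction beta maximizing the worst-case (over
    subgroups) inner product with each subgroup's estimated rule b_k.

    A brute-force 2-D illustration: enumerate unit vectors on a fine angular
    grid and pick the one whose minimum inner product is largest.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit vectors
    worst = dirs @ np.asarray(B).T        # inner products, shape (n_grid, K)
    best = np.argmax(worst.min(axis=1))   # maximize the minimum over subgroups
    return dirs[best]

# Toy usage: two orthogonal subgroup rules; the maximin compromise points
# midway between them (the 45-degree direction).
beta = maximin_rule_2d([[1.0, 0.0], [0.0, 1.0]])
```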
WCAY object detection of fractures for X-ray images of multiple sites
2024
The WCAY (weighted channel attention YOLO) model, which is meticulously crafted to identify fracture features across diverse X-ray image sites, is presented herein. This model integrates novel core operators and an innovative attention mechanism to enhance its efficacy. Initially, leveraging the benefits of dynamic snake convolution (DSConv), which is adept at capturing elongated tubular structural features, we introduce the DSC-C2f module to augment the model's fracture detection performance by replacing a portion of C2f. Subsequently, we integrate the newly proposed weighted channel attention (WCA) mechanism into the architecture to bolster feature fusion and improve fracture detection across various sites. Comparative experiments were conducted to evaluate the performance of several attention mechanisms. These enhancement strategies were validated through experimentation on public X-ray image datasets (FracAtlas and GRAZPEDWRI-DX). Multiple experimental comparisons substantiated the model's efficacy, demonstrating its superior accuracy and real-time detection capabilities. According to the experimental findings, on the FracAtlas dataset, our WCAY model exhibits a notable 8.8% improvement in mean average precision (mAP) over the original model. On the GRAZPEDWRI-DX dataset, the mAP reaches 64.4%, with a detection accuracy of 93.9% for the "fracture" category alone. Compared with other state-of-the-art object detection models, the proposed model represents a substantial improvement over the original algorithm. The code is publicly available at https://github.com/cccp421/Fracture-Detection-WCAY.
Journal Article
Variable selection for optimal treatment decision
by Zeng, Donglin; Lu, Wenbin; Zhang, Hao Helen
in Acquired immune deficiency syndrome; AIDS; Clinical research
2013
In decision-making on optimal treatment strategies, it is of great importance to identify variables that are involved in the decision rule, i.e. those interacting with the treatment. Effective variable selection helps to improve the prediction accuracy and enhance the interpretability of the decision rule. We propose a new penalized regression framework which can simultaneously estimate the optimal treatment strategy and identify important variables. The advantages of the new approach include: (i) it does not require the estimation of the baseline mean function of the response, which greatly improves the robustness of the estimator; (ii) the convenient loss-based framework makes it easier to adopt shrinkage methods for variable selection, which greatly facilitates implementation and statistical inference for the estimator. The new procedure can be easily implemented with existing state-of-the-art software packages such as LARS. Theoretical properties of the new estimator are studied. Its empirical performance is evaluated using simulation studies and further illustrated with an application to an AIDS clinical trial.
Journal Article
A novel m6A/m5C/m1A score signature to evaluate prognosis and its immunotherapy value in colon cancer patients
2023
Background
Colon cancer features strong heterogeneity and invasiveness, with high incidence and mortality rates. RNA modifications involving m6A, m5C, and m1A have recently been shown to play a vital part in tumorigenesis and immune cell infiltration. However, an integrated analysis across these RNA modifications in colon cancer has not been performed.
Methods
RNA-seq profiling, clinical data and mutation data were obtained from The Cancer Genome Atlas and Gene Expression Omnibus. We first explored the mutation status and expression levels of m6A/m5C/m1A regulators in colon cancer. Then, different m6A/m5C/m1A clusters and gene clusters were identified by consensus clustering analysis. We further constructed and validated a scoring system, which could be utilized to accurately assess the risk of individuals and guide personalized immunotherapy. Finally, m6A/m5C/m1A regulators were validated by immunohistochemical staining and RT-qPCR.
Results
In our study, three m6A/m5C/m1A clusters and gene clusters were identified. Most importantly, we constructed an m6A/m5C/m1A scoring system to assess the clinical risk of individuals. In addition, the prognostic value of the score was validated in three independent cohorts. Moreover, the immunophenoscore of the low m6A/m5C/m1A score group increased significantly with CTLA-4/PD-1 immunotherapy. Finally, we validated that the mRNA and protein expression of VIRMA and DNMT3B increased in colon cancer tissues.
Conclusions
We constructed and validated a stable and powerful m6A/m5C/m1A score signature to assess the survival outcomes and immune infiltration characteristics of colon cancer patients, which can further guide the optimization of personalized treatment and is valuable for clinical translation and implementation.
Journal Article
Adaptive Lasso for Cox's proportional hazards model
2007
We investigate the variable selection problem for Cox's proportional hazards model, and propose a unified model selection and estimation procedure with desired theoretical properties and computational convenience. The new method is based on a penalized log partial likelihood with the adaptively weighted L1 penalty on regression coefficients, providing what we call the adaptive Lasso estimator. The method incorporates different penalties for different coefficients: unimportant variables receive larger penalties than important ones, so that important variables tend to be retained in the selection process, whereas unimportant variables are more likely to be dropped. Theoretical properties, such as consistency and rate of convergence of the estimator, are studied. We also show that, with proper choice of regularization parameters, the proposed estimator has the oracle properties. The convex optimization nature of the method leads to an efficient algorithm. Both simulated and real examples show that the method performs competitively.
Journal Article
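The adaptively weighted L1 penalty described above can be sketched generically. The paper applies it to the Cox partial likelihood; this fragment only shows the weight construction w_j = 1/|beta_init_j|^gamma and the standard column-rescaling trick that reduces a weighted Lasso to a plain one, with `gamma` and `eps` as illustrative choices.

```python
import numpy as np

def adaptive_weights(beta_init, gamma=1.0, eps=1e-8):
    """Per-coefficient penalty weights from an initial estimate.

    Coefficients with large initial magnitude get small weights (penalized
    less, so important variables tend to be retained); coefficients near
    zero get large weights (penalized more, so they tend to be dropped).
    eps guards against division by an exactly zero initial estimate.
    """
    return 1.0 / (np.abs(np.asarray(beta_init, dtype=float)) + eps) ** gamma

def rescale_design(X, w):
    """Reduce the weighted L1 problem to a plain Lasso: divide column j of
    the design by w_j, fit an ordinary Lasso, then divide the fitted
    coefficients by w_j to map them back."""
    return np.asarray(X, dtype=float) / w  # broadcasts across columns

# Toy usage: a coefficient with large initial estimate (2.0) receives a
# smaller penalty weight than one near zero (0.5).
w = adaptive_weights(np.array([2.0, 0.5]))
```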
Testing and Estimation of Social Network Dependence With Time to Event Data
by Huang, Danyang; Lu, Wenbin; Song, Rui
in Algorithms; Applications and Case Studies; autocorrelation
2020
Nowadays, events spread rapidly through social networks. We are interested in whether people's responses to an event are affected by their friends' characteristics. For example, how soon will a person start playing a game given that his or her friends like it? Studying social network dependence is an emerging research area. In this work, we propose a novel latent spatial autocorrelation Cox model to study social network dependence with time-to-event data. The proposed model introduces a latent indicator to characterize whether a person's survival time might be affected by his or her friends' features. We first propose a score-type test for detecting the existence of social network dependence. If it exists, we further develop an EM-type algorithm to estimate the model parameters. The performance of the proposed test and estimators is illustrated by simulation studies and an application to a time-to-event dataset on playing a popular mobile game, from one of the largest online social network platforms.
Supplementary materials for this article, including a standardized description of the materials available for reproducing the work, are available as an online supplement.
Journal Article