Catalogue Search | MBRL
362 result(s) for "Recurrent event analysis"
A systematic comparison of recurrent event models for application to composite endpoints
by Kieser, Meinhard; Rauch, Geraldine; Ozga, Ann-Kathrin
in Algorithms; Analysis; Clinical trials
2018
Background
Many clinical trials focus on the comparison of the treatment effect between two or more groups with respect to a rarely occurring event. In this situation, showing a relevant effect with acceptable power requires the observation of a large number of patients over a long period of time. For feasibility reasons, it is therefore often considered to include several event types of interest, non-fatal or fatal, and to combine them within a composite endpoint. Commonly, a composite endpoint is analyzed with standard survival analysis techniques by assessing the time to the first occurring event. This approach neglects the fact that an individual may experience more than one event, which leads to a loss of information. As an alternative, composite endpoints can be analyzed with models for recurrent events. There exist a number of such models, e.g. regression models based on count data or Cox-based models such as the approaches of Andersen and Gill; Prentice, Williams and Peterson; or Wei, Lin and Weissfeld. Although some of these methods have already been compared in the literature, there exists no systematic investigation of the special requirements of composite endpoints.
Methods
In this work, a simulation-based comparison of recurrent event models applied to composite endpoints is provided for a range of realistic clinical trial scenarios.
Results
We demonstrate that the Andersen-Gill model and the Prentice-Williams-Peterson models show similar results under various data scenarios, whereas the Wei-Lin-Weissfeld model delivers effect estimators that can deviate considerably under commonly encountered data scenarios.
Conclusion
Based on the conducted simulation study, this paper helps to understand the pros and cons of the investigated methods in the context of composite endpoints and therefore provides recommendations for an adequate statistical analysis strategy and a meaningful interpretation of results.
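The distinction among these models is easiest to see in how they lay out the data. Below is a minimal sketch, with hypothetical event times, of the counting-process records used by the Andersen-Gill and PWP total-time models (total time scale) versus the PWP gap-time model (clock reset after each event):

```python
# Sketch: build counting-process records for one subject with hypothetical
# event times (days 30, 75) and censoring at day 100. Each record is
# (start, stop, event); AG and PWP total-time share the total time scale,
# while the PWP gap-time model restarts the clock after every event.

def counting_process(event_times, censor_time, gap_time=False):
    """Return (start, stop, event) intervals for one subject."""
    records, prev = [], 0
    for t in event_times:
        if gap_time:
            records.append((0, t - prev, 1))   # clock restarts at each event
        else:
            records.append((prev, t, 1))       # total (calendar) time scale
        prev = t
    # final censored interval after the last observed event
    if censor_time > prev:
        if gap_time:
            records.append((0, censor_time - prev, 0))
        else:
            records.append((prev, censor_time, 0))
    return records

total = counting_process([30, 75], 100)                 # AG / PWP total time
gaps = counting_process([30, 75], 100, gap_time=True)   # PWP gap time
print(total)  # [(0, 30, 1), (30, 75, 1), (75, 100, 0)]
print(gaps)   # [(0, 30, 1), (0, 45, 1), (0, 25, 0)]
```

Under the Wei-Lin-Weissfeld marginal approach, by contrast, each subject contributes a record for every event number counted from time zero, which is one reason its estimators can behave differently from the other two.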
Journal Article
Acceptance and engagement patterns of mobile-assisted language learning among non-conventional adult L2 learners: A survival analysis
by Hwang, Hyun-Bin; Tagarelli, Kaitlyn M.; Loewen, Shawn
in Acceptance; Adult learning; Attrition
2024
Research on mobile-assisted language learning (MALL) has revealed that high rates of attrition among users can undermine the potential benefits of this learning method. To explore this issue, we surveyed 3,670 adult MALL users based on the Unified Theory of Acceptance and Use of Technology (UTAUT) and also conducted an in-depth analysis of their historical app usage data. The results of hierarchical k-means cluster analysis and recurrent event survival analysis revealed three major findings. First, three distinct profiles of learners were characterized by different MALL acceptance and engagement experiences. Second, those with greater MALL acceptance displayed more intense, frequent, and durable app usage (behavioral engagement). Lastly, high levels of MALL acceptance were associated with more frequent pauses in app usage but also (a) longer active usage, (b) shorter breaks before returning to the app, and, ultimately, (c) fewer dropouts. We argue that persistence is a multidimensional process involving cyclical phases of engagement, disengagement, dormancy, and reengagement, with each aspect, like intensity, frequency, and duration, building up cumulatively over time. Implications for promoting persistent MALL engagement are discussed.
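The clustering step described above can be illustrated with a deliberately small, self-contained k-means sketch; the (acceptance, engagement) scores and initial centroids below are hypothetical, not the study's data:

```python
# Minimal k-means sketch (pure Python) of the kind of clustering used to
# profile learners by acceptance/engagement scores. Points and initial
# centroids are hypothetical and chosen so the result is deterministic.

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        groups = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            groups[d.index(min(d))].append(p)
        # update step: each centroid moves to the mean of its assigned points
        centroids = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids, groups

# toy (acceptance, engagement) scores for six hypothetical users
pts = [(1, 1), (1.2, 0.8), (0.9, 1.1), (5, 5), (5.2, 4.8), (4.9, 5.1)]
centers, clusters = kmeans(pts, centroids=[(0, 0), (6, 6)])
print(centers)  # two centroids, near (1.03, 0.97) and (5.03, 4.97)
```

Each cluster then corresponds to a learner profile; in the study, cluster membership fed into the recurrent event survival analysis of app usage.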
Journal Article
Choosing and Changing Course
2021
Much prior research has examined the individual-level, major-specific, and institutional correlates of college students' choice of major, as well as the variation in labor market outcomes associated with this important choice. Extant accounts, however, largely overlook the process by which individuals change their major throughout college. This study provides a comprehensive description of major switching, and considers its relevance to concerns about stratification in postsecondary education. Drawing on survey and transcript data from students at three large universities in the United States, I find that switching is widespread, and that many students change their majors multiple times. Students appear to change majors in an effort to better fit their interests and abilities, as students seek out majors that are generally less competitive and easier. Major change further contributes to gender segregation, particularly as women leave science, technology, engineering, and math (STEM) fields after initially selecting these at lower rates than men.
Journal Article
Thrombotic Prediction Model Based on Epigenetic Regulator Mutations in Essential Thrombocythemia Patients Using Survival Analysis in Recurrent Events
2024
Introduction
Essential thrombocythemia (ET) involves the proliferation of megakaryocytes and platelets and is associated with an increased risk of thrombosis. We aimed to evaluate thrombotic risks in patients with epigenetic regulator mutations and generate a model to predict thrombosis in ET.
Materials and Methods
This cohort study enrolled patients aged > 15 years diagnosed with ET at the Songklanakarind Hospital between January 2002 and December 2019. Twenty-five targeted gene mutations, including somatic driver mutations (JAK2, CALR, MPL), epigenetic regulator mutations (TET2, DNMT3A, IDH1, IDH2, ASXL1, EZH2, SF3B1, SRSF2), and other genes relevant to myeloid neoplasms, were identified using next-generation sequencing. Thrombotic events were confirmed based on clinical condition and imaging findings, and thrombotic risks were analyzed using five survival models with the recurrent event method.
Results
Ninety-six patients were enrolled, with a median follow-up of 6.91 years. Of these, 15 patients experienced a total of 17 arterial thrombotic events. Among somatic driver mutations, patients with JAK2 mutation had the highest frequency of thrombotic events (17.3%); among epigenetic regulator mutations, patients with IDH1 mutation had the highest frequency (100%). The 10-year thrombosis-free survival rate was 81.3% (95% confidence interval: 72.0-91.8%). IDH1 mutation was a significant factor for thrombotic risk in the multivariate analysis in all models. The Prentice, Williams, and Peterson (PWP) gap-time model was the most appropriate prediction model.
Conclusions
The PWP gap-time model was a good predictive model for thrombotic risk in patients with ET. IDH1 mutation was a significant risk factor for thrombosis; however, further studies with a larger sample size should confirm this finding and provide more insight.
Journal Article
Economic evaluation of the effect of needle and syringe programs on skin, soft tissue, and vascular infections in people who inject drugs: a microsimulation modelling approach
by El-Sheikh, Mariam; Panagiotoglou, Dimitra; Buckeridge, David L.
in Abscesses; Adult; Bacterial infections
2024
Background
Needle and syringe programs (NSP) are effective harm-reduction strategies against HIV and hepatitis C. Although skin, soft tissue, and vascular infections (SSTVI) are the most common morbidities in people who inject drugs (PWID), the extent to which NSP are clinically and cost-effective in relation to SSTVI in PWID remains unclear. The objective of this study was to model the clinical- and cost-effectiveness of NSP with respect to treatment of SSTVI in PWID.
Methods
We performed a model-based economic evaluation comparing a scenario with NSP to a scenario without NSP. We developed a microsimulation model to generate two cohorts of 100,000 individuals, one for each NSP scenario, and estimated quality-adjusted life-years (QALY) and costs (in 2022 Canadian dollars) over a 5-year time horizon, with costs and outcomes discounted at 1.5% per annum. To assess the clinical effectiveness of NSP, we conducted a survival analysis that accounted for the recurrent use of health care services for treating SSTVI and for SSTVI mortality in the presence of competing risks.
Results
The incremental cost-effectiveness ratio associated with NSP was $70,278 per QALY, with incremental cost and QALY gains corresponding to $1207 and 0.017 QALY, respectively. Under the scenario with NSP, there were 788 fewer SSTVI deaths per 100,000 PWID, corresponding to 24% lower relative hazard of mortality from SSTVI (hazard ratio [HR] = 0.76; 95% confidence interval [CI] = 0.72–0.80). Health service utilization over the 5-year period remained lower under the scenario with NSP (outpatient: 66,511 vs. 86,879; emergency department: 9920 vs. 12,922; inpatient: 4282 vs. 5596). Relatedly, having NSP was associated with a modest reduction in the relative hazard of recurrent outpatient visits (HR = 0.96; 95% CI = 0.95–0.97) for purulent SSTVI as well as outpatient (HR = 0.88; 95% CI = 0.87–0.88) and emergency department visits (HR = 0.98; 95% CI = 0.97–0.99) for non-purulent SSTVI.
Conclusions
Both the individuals and the healthcare system benefit from NSP through lower risk of SSTVI mortality and prevention of recurrent outpatient and emergency department visits to treat SSTVI. The microsimulation framework provides insights into clinical and economic implications of NSP, which can serve as valuable evidence that can aid decision-making in expansion of NSP services.
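The headline figure can be sanity-checked with the usual definition of the incremental cost-effectiveness ratio, ICER = ΔCost / ΔQALY; with the rounded values reported above, the ratio lands near the stated $70,278 per QALY:

```python
# Sanity check on the reported cost-effectiveness arithmetic:
# ICER = incremental cost / incremental QALYs. With the rounded figures
# reported ($1207 and 0.017 QALY), the ratio comes out near the stated
# $70,278/QALY; the residual gap is attributable to rounding.

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio in dollars per QALY gained."""
    return delta_cost / delta_qaly

print(round(icer(1207, 0.017)))  # 71000, close to the reported 70278
```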
Journal Article
Associated factors of pregnancy spacing among women of reproductive age Group in South of Iran: cross-sectional study
by Salarpour, Elaheh; Dehesh, Tania; Malekmohammadi, Neda
in Abortion; Birth control; Breast feeding
2020
Background
Optimal pregnancy spacing is an important issue in reproductive women’s health. Pregnancy spacing that is too short or too long leads to serious health, social, and economic problems, such as increases in maternal and infant mortality and morbidity and adverse pregnancy outcomes. The aim of this study was to assess the mean pregnancy spacing and its associated factors among women of reproductive age using recurrent event analysis.
Methods
The fertility histories of 1350 women aged 15–49 years were collected in this cross-sectional study. The women were selected through a multistage random sampling method from a list of clinics in 2018. Some predictors were collected from their records and others through face-to-face interviews. Recurrent event survival analysis was used to explore the effect of the predictors on pregnancy spacing. The R software was used for the analysis.
Results
Nine predictors had a significant effect on pregnancy spacing: the age of the mother at marriage, the mother’s BMI, contraception use, breastfeeding duration of the previous child, the education level of the husband, the sex preference of the mother, the presence of abortion or stillbirth in the preceding pregnancies, income sufficiency, and the mother’s awareness of the optimum pregnancy interval. The most influential predictors, contraception use (HR = 2.34, 95% CI = 1.23 to 2.76, P < 0.001) and income sufficiency (HR = 2.046, 95% CI = 1.61 to 3.02, P = 0.018), led to longer pregnancy spacing, while son preference of the mother (HR = 2.231, 95% CI = 1.24 to 2.81, P = 0.023) led to shorter pregnancy spacing.
Conclusion
Up-to-date contraception tools should be available to couples so that they can manage their pregnancy intervals. An unfavorable family economic situation leads to long pregnancy spacing. Despite the relative equality in the status of girls and boys in today’s societies, the desire for a son is still an important factor in shorter pregnancy spacing. The benefits of optimal pregnancy spacing should be more widely communicated.
Journal Article
Impact of continuity of care on preventable hospitalization of patients with type 2 diabetes
2016
Objective
To determine whether patients with greater continuity of care (COC) have fewer preventable hospitalizations.
Methods
We conducted a cohort study using a stratified random sample of Korean National Health Insurance enrollees from 2002 to 2010. The COC index was calculated for each year post-diagnosis based on ambulatory care visits. We performed a recurrent event survival analysis via Cox proportional hazard regression of preventable hospitalizations.
Participants
A total of 5163 patients newly diagnosed with type 2 diabetes mellitus in 2003-6 and receiving oral hypoglycemic medication.
Main outcome measure
Preventable hospitalization.
Results
Of 5163 eligible participants, 6.4% (n = 328) experienced a preventable hospitalization during the study period. The adjusted hazard ratio (HR) was 8.69 (95% CI, 2.62-28.83) for subjects with a COC score of 0.00-0.19, 7.03 (95% CI, 4.50-10.96) for those with a score of 0.20-0.39, 3.01 (95% CI, 2.06-4.40) for those with a score of 0.40-0.59, 4.42 (95% CI, 3.04-6.42) for those with a score of 0.60-0.79, and 5.82 (95% CI, 3.87-8.75) for those with a score of 0.80-0.99. The difference in the cumulative incidence of preventable hospitalizations between patients with COC scores of 0.00-0.19 and those with COC scores of 1.00 was the greatest, at 0.97 percentage points.
Conclusions
Greater COC was associated with fewer preventable hospitalizations in subjects with type 2 diabetes.
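The abstract does not define the COC index; a common choice, and a plausible reading here, is the Bice-Boxerman index, sketched below with hypothetical visit counts:

```python
# Hedged sketch: the widely used Bice-Boxerman continuity-of-care index is
#   COC = (sum_j n_j**2 - N) / (N * (N - 1)),
# where n_j is the number of visits to provider j and N the total number of
# visits. COC = 1 when all visits go to one provider, and it falls toward 0
# as visits scatter across many providers.

def bice_boxerman(visits_per_provider):
    n = sum(visits_per_provider)
    if n < 2:
        raise ValueError("COC needs at least two visits")
    return (sum(v * v for v in visits_per_provider) - n) / (n * (n - 1))

print(bice_boxerman([4]))     # 1.0  (perfect continuity: one provider)
print(bice_boxerman([2, 2]))  # 0.333... (visits split over two providers)
```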
Journal Article
Recurrent events analysis in the presence of time-dependent covariates and dependent censoring
by van der Laan, Mark J.; Butler, Steve; Miloslavsky, Maja
in Analytical estimating; Andersen-Gill multiplicative intensity model; Censored data
2004
Recurrent events models have received considerable attention recently. The majority of approaches show the consistency of parameter estimates under the assumption that censoring is independent of the recurrent event process of interest, conditional on the covariates included in the model. We provide an overview of available recurrent event analysis methods and present an inverse probability of censoring weighted estimator for the regression parameters in the Andersen-Gill model, which is commonly used for recurrent event analysis. This estimator remains consistent under informative censoring if the censoring mechanism is estimated consistently, and it generally improves on the naïve estimator for the Andersen-Gill model in the case of independent censoring. We illustrate the bias of ad hoc estimators in the presence of informative censoring with a simulation study and provide a data analysis of recurrent lung exacerbations in cystic fibrosis patients in which some patients are lost to follow-up.
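The weighting idea behind such an estimator can be sketched in a few lines: each at-risk record is reweighted by the inverse of the estimated probability of remaining uncensored at that time. The step-function censoring-survival estimate below is hypothetical:

```python
# Sketch of the inverse-probability-of-censoring-weighting (IPCW) idea:
# each at-risk record at time t gets weight 1 / S_c(t), where S_c(t) is the
# estimated probability of remaining uncensored at t. In practice S_c is
# estimated from the data, e.g. by a Kaplan-Meier fit on censoring times;
# here it is a hypothetical step function.

def ipcw_weights(times, censor_survival):
    """censor_survival maps a time to the estimated P(uncensored at that time)."""
    return [1.0 / censor_survival(t) for t in times]

def s_c(t):
    # hypothetical step-function estimate of the censoring survival curve
    return 1.0 if t < 10 else 0.8 if t < 20 else 0.5

print(ipcw_weights([5, 15, 25], s_c))  # [1.0, 1.25, 2.0]
```

Records observed at times where censoring is likely count more, compensating for the similar records that were censored away; this is what restores consistency when censoring is informative but its mechanism is modeled correctly.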
Journal Article
A Proposed Sentiment Analysis Deep Learning Algorithm for Analyzing COVID-19 Tweets
2021
With the rise in cases of COVID-19, enormous pressure was placed on every country to make arrangements to control the population and use the available resources appropriately. The swift rise in positive cases globally created panic, anxiety, and depression among people. The effect of this deadly disease was found to be directly proportional to the physical and mental health of the population. As of 28 October 2020, more than 40 million people had tested positive and more than 1 million deaths had been recorded. The most dominant tool that disturbed human life during this time was social media. Tweets regarding COVID-19, whether about the number of positive cases or deaths, induced a wave of fear and anxiety among people living in different parts of the world. Nobody can deny that social media is everywhere and everybody is connected with it directly or indirectly. This offers an opportunity for researchers and data scientists to access the data for academic and research use. Social media data contain much information relating to real-life events like COVID-19. In this paper, an analysis of Twitter data has been performed using the R programming language. We collected the Twitter data based on hashtag keywords, including COVID-19, coronavirus, deaths, new case, recovered. In this study, we designed an algorithm called Hybrid Heterogeneous Support Vector Machine (H-SVM), performed sentiment classification, and classified tweets into positive, negative, and neutral sentiment scores. We also compared the performance of the proposed algorithm with the Recurrent Neural Network (RNN) and Support Vector Machine (SVM) on parameters such as precision, recall, F1 score, and accuracy.
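As a point of orientation only, and not the authors' H-SVM (which is not specified here), the basic positive/negative/neutral labeling task can be sketched with a simple hypothetical keyword lexicon:

```python
# Generic lexicon-based sentiment sketch (NOT the H-SVM algorithm from the
# paper): score a tweet by counting hypothetical positive and negative
# keywords and map the net score to positive / negative / neutral.

POS = {"recovered", "hope", "safe", "improve"}
NEG = {"deaths", "fear", "anxiety", "panic"}

def sentiment(tweet):
    words = tweet.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("more recovered cases bring hope"))       # positive
print(sentiment("rising deaths fuel fear and anxiety"))   # negative
```

A learned classifier such as an SVM replaces the fixed lexicon with weights fitted to labeled tweets, but the input/output contract is the same.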
Journal Article
Cascaded LSTM recurrent neural network for automated sleep stage classification using single-channel EEG signals
by Acharya, U. Rajendra; Molinari, Filippo; Michielli, Nicola
in Adult; Algorithms; Classification
2019
Automated evaluation of a subject's neurocognitive performance (NCP) is a relevant topic in neurological and clinical studies. NCP represents the mental/cognitive human capacity to perform a specific task. It is difficult to develop study protocols in which the subject's NCP changes in a known, predictable way. Sleep is a time-varying NCP state and can therefore be used to develop novel NCP assessment techniques. Accurate analysis and interpretation of human sleep electroencephalographic (EEG) signals is needed for proper NCP assessment. In addition, sleep deprivation may cause prominent cognitive risks in performing many common activities such as driving or controlling a generic device; therefore, sleep scoring is a crucial part of the process. In the sleep cycle, the first stage of non-rapid eye movement (NREM) sleep, stage N1, is the transition between wakefulness and drowsiness and is therefore relevant for the study of NCP.
In this study, a novel cascaded recurrent neural network (RNN) architecture based on long short-term memory (LSTM) blocks is proposed for the automated scoring of sleep stages using single-channel EEG signals. Fifty-five time- and frequency-domain features were extracted from the EEG signals and fed to feature-reduction algorithms to select the most relevant ones. The selected features constituted the inputs to the LSTM networks. The cascaded architecture is composed of two LSTM RNNs: the first network performed 4-class classification (i.e. the five sleep stages with stages N1 and REM merged into a single stage) with a classification rate of 90.8%, and the second obtained a recognition performance of 83.6% for 2-class classification (i.e. N1 vs REM). The overall percentage of correct classification for the five sleep stages was 86.7%. The objective of this work is to improve classification performance for sleep stage N1, as a first step of NCP assessment, while at the same time obtaining satisfactory classification results for the other sleep stages.
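The routing logic of such a cascade can be sketched independently of any particular network; the stand-in classifiers and feature names below are hypothetical, not the paper's LSTMs:

```python
# Sketch of two-stage cascade routing: a first classifier assigns one of
# four merged classes (N1 and REM fused into one), and a second classifier
# separates N1 from REM only when the first stage lands in the merged class.
# Both classifiers here are hypothetical stand-ins keyed on toy features.

def cascade(features, stage1, stage2):
    label = stage1(features)        # 4-class stage: e.g. Wake, N1/REM, N2, N3
    if label == "N1/REM":
        label = stage2(features)    # binary stage: N1 vs REM
    return label

stage1 = lambda f: "N1/REM" if f["theta_power"] > 0.5 else "N2"
stage2 = lambda f: "REM" if f["eye_movement"] > 0.5 else "N1"

print(cascade({"theta_power": 0.9, "eye_movement": 0.8}, stage1, stage2))  # REM
print(cascade({"theta_power": 0.2, "eye_movement": 0.8}, stage1, stage2))  # N2
```

The design lets the second classifier specialize on the hardest boundary (N1 vs REM) while the first handles the easier coarse separation, mirroring the reported split of 90.8% (4-class) and 83.6% (binary) accuracies.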
• A novel cascaded RNN architecture with LSTM blocks is proposed for the automated scoring of sleep stages.
• A single-channel EEG based automated system.
• Use of a publicly available sleep-EDF database.
• The method outperforms the state-of-the-art methods in N1 stage detection.
• A first step toward automated assessment of neurocognitive performance.
Journal Article