233 result(s) for "Lee, Shing M."
Model calibration in the continual reassessment method
Background The continual reassessment method (CRM) is an adaptive, model-based design used to estimate the maximum tolerated dose in dose-finding clinical trials. One way to evaluate the sensitivity of a given CRM model, including the functional form of the dose-toxicity curve, the prior distribution on the model parameter, and the initial guesses of toxicity probability at each dose, is to use indifference intervals. While the indifference-interval technique provides a succinct summary of model sensitivity, there are infinitely many possible ways to specify the initial guesses of toxicity probability. In practice, these are generally specified by trial and error through extensive simulations. Methods By using indifference intervals, the initial guesses used in the CRM can be selected by specifying a range of acceptable toxicity probabilities in addition to the target probability of toxicity. An algorithm is proposed that obtains the indifference interval maximizing the average percentage of correct selection across a set of scenarios of true toxicity probabilities, providing a systematic approach for selecting initial guesses that is far less time-consuming than the trial-and-error method. The methods are compared in the context of two real CRM trials. Results For both trials, the initial guesses selected by the proposed algorithm had operating characteristics, as measured by percentage of correct selection, average absolute difference between the true probability of the dose selected and the target probability of toxicity, percentage treated at each dose, and overall percentage of toxicity, similar to those of the initial guesses used during the conduct of the trials, which had been obtained by trial and error through a time-consuming calibration process.
The average percentages of correct selection across the scenarios considered were 61.5% versus 62.0% in the lymphoma trial, and 62.9% versus 64.0% in the stroke trial, for the trial-and-error method and the proposed approach, respectively. Limitations We present detailed results only for the empiric dose-toxicity curve, although the proposed methods are applicable to other dose-toxicity models such as the logistic. Conclusions The proposed method provides a fast and systematic approach for selecting the initial guesses of toxicity probabilities used in the CRM that is competitive with those obtained by time-consuming trial and error, thus simplifying the model calibration process for the CRM.
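As an illustration of the one-parameter empiric CRM that this abstract builds on, the sketch below performs a Bayesian dose recommendation given a skeleton of initial toxicity guesses and accumulated toxicity data. The skeleton values, the normal prior variance of 1.34, and the grid-based posterior are illustrative assumptions, not the paper's calibrated choices:

```python
import numpy as np

def crm_recommend(skeleton, n_tox, n_pat, target=0.25, prior_var=1.34):
    """One-parameter empiric CRM: p_i(a) = skeleton_i ** exp(a), a ~ N(0, prior_var).

    skeleton: initial guesses of toxicity probability per dose (increasing).
    n_tox, n_pat: toxicities and patients observed at each dose so far.
    Returns the index of the recommended dose and the plug-in toxicity estimates.
    """
    skeleton = np.asarray(skeleton, dtype=float)
    n_tox, n_pat = np.asarray(n_tox), np.asarray(n_pat)
    a = np.linspace(-5.0, 5.0, 2001)                 # grid for the numerical posterior
    log_prior = -a**2 / (2.0 * prior_var)
    p = skeleton[:, None] ** np.exp(a)[None, :]      # toxicity prob at each dose/grid point
    loglik = (n_tox[:, None] * np.log(p)
              + (n_pat - n_tox)[:, None] * np.log1p(-p)).sum(axis=0)
    log_post = log_prior + loglik
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                               # uniform grid, so plain sums suffice
    a_hat = (a * post).sum()                         # posterior mean of a
    p_hat = skeleton ** np.exp(a_hat)                # plug-in toxicity estimates
    return int(np.argmin(np.abs(p_hat - target))), p_hat
```

For example, after observing toxicities in all three patients at the lowest dose, the plug-in curve shifts upward and the design recommends staying at that dose; the indifference-interval calibration the paper proposes is about choosing the skeleton fed into exactly this kind of update.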
Pragmatic, multicentre, factorial, randomised controlled trial of sepsis electronic prompting for timely intervention and care (SEPTIC trial): a protocol
Introduction Sepsis is a major cause of death both globally and in the United States. Early identification and treatment of sepsis are crucial for improving patient outcomes. International guidelines recommend hospital sepsis screening programmes, which are commonly implemented in the electronic health record (EHR) as an interruptive sepsis screening alert based on systemic inflammatory response syndrome (SIRS) criteria. Despite widespread use, it is unknown whether these sepsis screening and alert tools improve the delivery of high-quality sepsis care. Methods and analysis The Sepsis Electronic Prompting for Timely Intervention and Care (SEPTIC) master protocol will study two distinct populations in separate trials: emergency department (ED) patients (SEPTIC-ED) and inpatients (SEPTIC-IP). The SEPTIC trials are pragmatic, multicentre, blinded, randomised controlled trials, with equal allocation to compare four SIRS-based sepsis screening alert groups: no alerts (control), nurse alerts only, prescribing clinician alerts only, or nurse and prescribing clinician alerts. Randomisation will be at the patient level. SEPTIC will be performed at eight acute-care hospitals in the greater New York City area and enrol patients at least 18 years old. The primary outcome is the percentage of patients with completion of a modified Surviving Sepsis Campaign (SSC) hour-1 bundle within 3 hours of the first SIRS alert. Secondary outcomes include time from first alert to completion of a modified SSC hour-1 bundle, time from first alert to individual bundle component order and completion, intensive care unit (ICU) transfer, hospital discharge disposition, inpatient mortality at 90 days, positive blood cultures (bacteraemia), adverse antibiotic events, sepsis diagnoses and septic shock diagnoses. Ethics and dissemination Ethics approval was obtained from the Columbia University Institutional Review Board (IRB) serving as a single IRB.
Results will be disseminated in peer-reviewed journals, at scientific meetings, and via social media. Trial registration numbers ClinicalTrials.gov: NCT06117605 and NCT06117618.
A comparison of nurses’ and physicians’ perception of cancer treatment burden based on reported adverse events
Background Cancer treatments are associated with a multitude of adverse events (AEs). While both nurses and physicians are involved in patient care delivery and AE assessment, very few studies have examined the differences between nurses' and physicians' reporting and perception of AEs. An approach was recently proposed to assess treatment burden based on reported AEs from the physician's perspective. In this paper, we use this approach to evaluate nurses' perception of burden, and compare nurses' and physicians' assessments of the overall and relative burden of AEs. Methods AE records for 334 cancer patients from a randomized clinical trial conducted by the SWOG Cancer Research Network were evaluated by 14 nurses at Columbia University Medical Center. Two nurses were randomly selected to assign a burden score from 0 to 10 based on their impression of the global burden of the captured AEs. These nurses did not interact directly with the patients. Scores were compared to previously obtained physicians' scores using a paired t-test and the kappa statistic. Severity scores for individual AEs were obtained using mixed-effects models with the nurses' assessments, and were qualitatively compared to the physicians'. Results Given the same AEs, nurses' and physicians' perception of the burden of AEs differed. While nurses generally perceived the overall burden of AEs to be only slightly worse than physicians did (mean VAS score of 5.44 versus 5.14), there was poor agreement in the perception of AEs in the mild to severe range. The percent agreement for a moderate or worse AE was 64%, with a kappa of 0.34. Nurses also assigned higher severity scores than physicians to symptomatic AEs (p < 0.05), such as gastrointestinal (4.77 versus 4.14), hemorrhage (5.07 versus 4.14), and pain (5.17 versus 4.14). Conclusions These differences in the perception of the burden of AEs can lead to different treatment decisions and symptom management strategies.
Thus, having provider consistency, training, or a collaborative approach in follow-up care between nurses and physicians is important to ensure continuity in care delivery. Moreover, estimating overall burden from both physicians' and nurses' perspectives, and comparing them, may be useful for deciding when collaborations are warranted.
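The agreement statistic reported in this abstract, Cohen's kappa, corrects observed agreement for the agreement expected by chance. A minimal sketch for two raters' binary judgments (here, "moderate-or-worse AE" yes/no; the example ratings are invented for illustration, not trial data):

```python
import numpy as np

def cohens_kappa(x, y):
    """Cohen's kappa for two raters' binary ratings (1 = moderate-or-worse AE)."""
    x, y = np.asarray(x), np.asarray(y)
    po = np.mean(x == y)                                   # observed agreement
    pe = (np.mean(x) * np.mean(y)                          # chance agreement:
          + np.mean(1 - x) * np.mean(1 - y))               # both say 1, or both say 0
    return (po - pe) / (1 - pe)
```

With 64% raw agreement, a kappa of 0.34 (as reported) indicates that much of that agreement is what two raters with similar base rates would reach by chance, which is why the abstract describes agreement as poor.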
The Effects of Pulmonary Rehabilitation in the National Emphysema Treatment Trial
Pulmonary rehabilitation is an established treatment in patients with chronic lung disease but is not widely utilized. Most trials have been conducted in single centers. The National Emphysema Treatment Trial (NETT) provided an opportunity to evaluate pulmonary rehabilitation in a large cohort of patients who were treated in centers throughout the United States. Prospective observational study of cohort prior to randomization in a multicenter clinical trial. University-based clinical centers and community-based satellite pulmonary rehabilitation programs. A total of 1,218 patients with severe emphysema underwent pulmonary rehabilitation before and after randomization to lung volume reduction surgery (LVRS) or continued medical management. Rehabilitation was conducted at 17 NETT centers supplemented by 539 satellite centers. Lung function, exercise tolerance, dyspnea, and quality of life were evaluated at regular intervals. Significant (p < 0.001) improvements were observed consistently in exercise (cycle ergometry, 3.1 W; 6-min walk test distance, 76 feet), dyspnea (University of California, San Diego Shortness of Breath Questionnaire score, −3.2; Borg breathlessness score: breathing cycle, −0.8; 6-min walk, −0.5), and quality of life (St. George Respiratory Questionnaire score, −3.5; Quality of Well-Being Scale score, +0.035; Medical Outcomes Study 36-item short form score: physical health summary, +1.3; mental health summary, +2.0). Patients who had not undergone prior rehabilitation improved more than those who had. In multivariate models, only prior rehabilitation status predicted changes after rehabilitation. In 20% of patients, exercise level changed sufficiently after rehabilitation to alter the NETT subgroup predictive of outcome. Overall, changes after rehabilitation did not predict differential mortality or improvement in exercise (primary outcomes) by treatment group.
The NETT experience demonstrates the effectiveness of pulmonary rehabilitation in patients with severe emphysema who were treated in a national cross-section of programs. Pulmonary rehabilitation plays an important role in preparing and selecting patients for surgical interventions such as LVRS.
Obesity and survival in the neoadjuvant breast cancer setting: role of tumor subtype in an ethnically diverse population
Background Obesity may negatively affect survival in breast cancer (BC), but studies are conflicting, and associations may vary by tumor subtype and race/ethnicity group. Methods In a retrospective review, we identified 273 women with invasive BC administered Adriamycin/Taxane-based neoadjuvant chemotherapy from 2004 to 2016 with body mass index (BMI) data at diagnosis. Obesity was defined as BMI ≥30. Associations between obesity and event-free survival (EFS), using STEEP events, and overall survival (OS), using all-cause mortality, were assessed overall and stratified by tumor subtype [hormone receptor-positive (HR+)/HER2−, HER2+, and triple-negative breast cancer (TNBC)] in our diverse population. Results Median follow-up was 32.6 months (range 5.7–137.8 months). Overall, obesity was associated with worse EFS (HR 1.71, 95% CI 1.03–2.84, p = 0.04) and a trend towards worse OS (p = 0.13). In HR+/HER2− disease (n = 135), there was an interaction between obesity and hormonal therapy with respect to OS but not EFS. In those receiving tamoxifen (n = 33), obesity was associated with worse OS (HR 9.27, 95% CI 0.96–89.3, p = 0.05). In those receiving an aromatase inhibitor (n = 89), there was no association between obesity and OS. In TNBC (n = 44), obesity was associated with worse EFS (HR 2.62, 95% CI 1.03–6.66, p = 0.04) and a trend towards worse OS (p = 0.06). In HER2+ disease (n = 94), obesity was associated with a trend towards worse EFS (HR 3.37, 95% CI 0.97–11.72, p = 0.06) but not OS. Race/ethnicity was not associated with survival in any subtype, and there were no interactions with obesity on survival. Conclusions Obesity may negatively impact survival, with differences among tumor subtypes.
Lymphovascular invasion is an independent predictor of survival in breast cancer after neoadjuvant chemotherapy
Various prognostic indicators have been investigated in neoadjuvant chemotherapy (NAC)-treated invasive breast cancer (BC). Our study examines if lymphovascular invasion (LVI) is an independent predictor of survival in women receiving NAC. We performed a retrospective analysis in 166 women with operable invasive BC who underwent adriamycin- and taxane-based NAC between 2000 and 2013. The presence of LVI was noted in breast excisions following NAC. Associations between progression-free and overall survival and LVI and other clinicopathologic variables were assessed. Median follow-up was 31 months (range 1.4–153 months) with a total of 56 events and 24 deaths from any cause. LVI was found in 74 of 166 patients (45 %). In univariate analysis, the presence of LVI was associated with worse progression-free survival (HR 3.37, 95 % CI 1.87–6.06, p < 0.01) and overall survival (HR 4.35, 95 % CI 1.61–11.79, p < 0.01). In multivariate models adjusting for breast cancer subtype, LVI was significantly associated with a decrease in progression-free survival (HR 3.76, 95 % CI 2.07–6.83, p < 0.01) and overall survival (HR 5.70, 95 % CI 2.08–15.64, p < 0.01). When stratified by subtype, those with hormone receptor or HER2-positive BCs with no LVI had the most favorable progression-free and overall survival. Those with both LVI and triple-negative BC had the worst progression-free and overall survival. LVI is an important prognostic marker and is associated with worse clinical outcome in breast cancer patients receiving NAC.
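The progression-free and overall survival comparisons in this abstract rest on standard right-censored survival estimation. A minimal Kaplan-Meier sketch, using invented toy data (not the study's) and a simplified sequential treatment of tied event times:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve for right-censored follow-up.

    times: follow-up (e.g. months); events: 1 = progression/death, 0 = censored.
    Ties are processed one at a time, a simplification of the usual grouped form.
    Returns a list of (event_time, survival_probability) pairs.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:
            surv *= 1.0 - 1.0 / at_risk   # survival drops at each observed event
            curve.append((times[i], surv))
        at_risk -= 1                      # censored subjects also leave the risk set
    return curve
```

Curves like these, stratified by LVI status and subtype, are what the reported hazard ratios (e.g. HR 3.37 for progression-free survival) summarize; the Cox models used for the adjusted estimates are a further step beyond this sketch.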
Diffuse optical tomography breast imaging measurements are modifiable with pre-surgical targeted and endocrine therapies among women with early stage breast cancer
Purpose The diffuse optical tomography breast imaging system (DOTBIS) non-invasively measures tissue concentration of hemoglobin, which is a potential biomarker of short-term response to neoadjuvant chemotherapy. We evaluated whether DOTBIS-derived measurements are modifiable with targeted therapies, including AKT inhibition and endocrine therapy. Methods We conducted a proof-of-principle study in seven postmenopausal women with stage I-III breast cancer who were enrolled in pre-surgical studies of the AKT inhibitor MK-2206 (n = 4) or the aromatase inhibitors exemestane (n = 2) and letrozole (n = 1). We performed DOTBIS at baseline (before initiation of therapy) and post-therapy in the affected breast (tumor volume) and the contralateral, unaffected breast, and measured tissue concentrations (in μM) of total hemoglobin (ctTHb), oxyhemoglobin (ctO2Hb), and deoxyhemoglobin (ctHHb), as well as water fraction (%). Results We found consistent decreases in DOTBIS-measured hemoglobin concentrations in tumor volume, with median percent changes for ctTHb, ctHHb, ctO2Hb, and water fraction for the entire cohort of −27.1% (interquartile range [IQR] 37.5%), −49.8% (IQR 29.3%), −33.5% (IQR 47.4%), and −3.6% (IQR 10.6%), respectively. In the contralateral breast, median percent changes for ctTHb, ctHHb, ctO2Hb, and water fraction were +1.8% (IQR 26.7%), −8.6% (IQR 29.3%), +6.2% (IQR 29.5%), and +1.9% (IQR 30.7%), respectively. Conclusion We demonstrated that DOTBIS-derived measurements are modifiable with pre-surgical AKT inhibition and endocrine therapy, supporting further investigation of DOTBIS as a potential imaging assessment of response to neoadjuvant targeted therapies in early stage breast cancer.
Six-Minute Walk Distance in Chronic Obstructive Pulmonary Disease: Reproducibility and Effect of Walking Course Layout and Length
The 6-minute walk test is used in clinical practice and clinical trials of lung diseases; however, it is not clear whether replicate tests need to be performed to assess performance. Furthermore, little is known about the impact of walking course layout on test performance. We conducted 6-minute walks on 761 patients with severe emphysema (mean ± SD FEV1 % predicted = 26.3 ± 7.2) who were participants in the National Emphysema Treatment Trial. Four hundred seventy participants had repeated walks on a separate day. The second test was improved by an average of 7.0 ± 15.2% (66.1 ± 146 feet, p < 0.0001, by paired t-test), with an intraclass correlation coefficient of 0.88 between days. The course layout had an effect on the distance walked. Participants tested on continuous (circular or oval) courses had a 92.2-foot longer walking distance than those tested on straight (out and back) courses. Course length had no significant effect on walking distance. The training effect found in these patients with severe emphysema is less than in previous reports of patients with chronic obstructive pulmonary disease. Furthermore, the layout of the track may influence the 6-minute walk performance.
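The between-day reproducibility figure in this abstract is an intraclass correlation coefficient. A minimal sketch of a two-way, absolute-agreement ICC for two repeated measurements per subject (the ICC(2,1) form; the abstract does not specify which ICC variant was used, and the example distances are invented):

```python
import numpy as np

def icc_agreement(day1, day2):
    """Two-way absolute-agreement ICC(2,1) for two repeated measurements per subject."""
    x = np.column_stack([day1, day2]).astype(float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.var(x.mean(axis=1), ddof=1)    # between-subject mean square
    ms_cols = n * np.var(x.mean(axis=0), ddof=1)    # between-day mean square
    resid = (x - x.mean(axis=1, keepdims=True)
               - x.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

An ICC of 0.88, as reported, means between-subject differences in walk distance dwarf the day-to-day variation within a subject, even though the second walk was on average about 7% longer.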
Characterization of the likelihood continual reassessment method
This paper deals with the design of the likelihood continual reassessment method, which is an increasingly widely used model-based method for dose-finding studies. It is common to implement the method in a two-stage approach, whereby the model-based stage is activated after an initial sequence of patients has been treated. While this two-stage approach is practically appealing, it lacks a theoretical framework, and it is often unclear how the design components should be specified. This paper develops a general framework based on the coherence principle, from which we derive a design calibration process. A real clinical-trial example is used to demonstrate that the proposed process can be implemented in a timely and reproducible manner, while offering competitive operating characteristics. We explore the operating characteristics of different models within this framework and show the performance to be insensitive to the choice of dose-toxicity model.
The Citicoline Brain Injury Treatment (COBRIT) Trial: Design and Methods
Traumatic brain injury (TBI) is a major cause of death and disability. In the United States alone, approximately 1.4 million people sustain a TBI each year, of whom 50,000 die and over 200,000 are hospitalized. Despite numerous prior clinical trials, no standard pharmacotherapy for the treatment of TBI has been established. Citicoline, a naturally occurring endogenous compound, offers the potential of neuroprotection, neurorecovery, and neurofacilitation to enhance recovery after TBI. Citicoline has a favorable side-effect profile in humans, and several meta-analyses suggest a benefit of citicoline treatment in stroke and dementia. COBRIT is a randomized, double-blind, placebo-controlled, multi-center trial of the effects of 90 days of citicoline on functional outcome in patients with complicated mild, moderate, and severe TBI. In all, 1292 patients will be recruited over an estimated 32 months from eight clinical sites, with random assignment to citicoline (1000 mg twice a day) or placebo (twice a day), administered enterally or orally. Functional outcomes are assessed at 30, 90, and 180 days after the day of randomization. The primary outcome consists of a set of measures that will be analyzed as a composite measure using a global test procedure at 90 days. The measures comprise the following core battery: the California Verbal Learning Test II; the Controlled Oral Word Association Test; Digit Span; Extended Glasgow Outcome Scale; the Processing Speed Index; Stroop Test parts 1 and 2; and Trail Making Test parts A and B. Secondary outcomes include survival, toxicity, and rate of recovery.