Search Results
150 results for "Morris, Tim P"
The best horror of the year. Volume ten
A group of mountain climbers, caught in the dark, fights to survive their descent; An American band finds more than they bargained for in Mexico while scouting remote locations for a photo shoot; A young student's exploration into the origins of a mysterious song leads him on a winding, dangerous path through the US's deep south; A group of kids scaring each other with ghost stories discovers alarming consequences. The Best Horror of the Year showcases the previous year's best offerings in horror short fiction. This edition includes award-winning and critically acclaimed authors Mark Morris, Kaaron Warren, John Langan, Carole Johnstone, Brian Hodge, and others. For more than three decades, award-winning editor and anthologist Ellen Datlow has had her finger on the pulse of the latest and most terrifying in horror writing. Night Shade Books is proud to present the tenth volume in this annual series, a new collection of stories to keep you up at night.
Tuning multiple imputation by predictive mean matching and local residual draws
Background Multiple imputation is a commonly used method for handling incomplete covariates as it can provide valid inference when data are missing at random. This depends on being able to correctly specify the parametric model used to impute missing values, which may be difficult in many realistic settings. Imputation by predictive mean matching (PMM) borrows an observed value from a donor with a similar predictive mean; imputation by local residual draws (LRD) instead borrows the donor’s residual. Both methods relax some assumptions of parametric imputation, promising greater robustness when the imputation model is misspecified. Methods We review the development of PMM and LRD, outline the various forms available, and aim to clarify choices about how and when they should be used. We compare performance to fully parametric imputation in simulation studies, first when the imputation model is correctly specified and then when it is misspecified. Results In using PMM or LRD we strongly caution against using a single donor, the default value in some implementations, and instead advocate sampling from a pool of around 10 donors. We also clarify which matching metric is best. Among the current MI software there are several poor implementations. Conclusions PMM and LRD may have a role for imputing covariates (i) which are not strongly associated with outcome, and (ii) when the imputation model is thought to be slightly but not grossly misspecified. Researchers should focus their efforts on specifying the imputation model correctly, rather than expecting predictive mean matching or local residual draws to do the work.
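To make the predictive mean matching idea in this abstract concrete, here is a minimal sketch (not the authors' implementation) of a single PMM pass for one incomplete continuous variable: fit a regression on the complete cases, match each missing case to the ten observed cases with the closest predicted means, and borrow one donor's observed value at random. The function name and the use of numpy/scikit-learn are illustrative assumptions; a full multiple-imputation procedure would also draw the regression parameters and repeat the process across imputations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def pmm_impute(X, y, n_donors=10, rng=None):
    """Single-pass predictive mean matching for one incomplete variable y.

    X : complete covariate matrix, shape (n, p)
    y : outcome/covariate vector with np.nan where values are missing
    Returns a copy of y with missing entries filled by donor values.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float).copy()
    obs = ~np.isnan(y)

    # Fit the imputation model on the observed cases only.
    model = LinearRegression().fit(X[obs], y[obs])

    # Predictive means for observed cases (potential donors) and missing cases.
    mu_obs = model.predict(X[obs])
    mu_mis = model.predict(X[~obs])

    y_obs = y[obs]
    imputed = np.empty(mu_mis.shape[0])
    for i, m in enumerate(mu_mis):
        # Pool of the n_donors observed cases whose predicted means are closest.
        donors = np.argsort(np.abs(mu_obs - m))[:n_donors]
        # Borrow the *observed* value of one randomly chosen donor.
        imputed[i] = y_obs[rng.choice(donors)]

    y[~obs] = imputed
    return y
```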
Meta-analytical methods to identify who benefits most from treatments: daft, deluded, or deft approach?
Identifying which individuals benefit most from particular treatments or other interventions underpins so-called personalised or stratified medicine. However, single trials are typically underpowered for exploring whether participant characteristics, such as age or disease severity, determine an individual’s response to treatment. A meta-analysis of multiple trials, particularly one where individual participant data (IPD) are available, provides greater power to investigate interactions between participant characteristics (covariates) and treatment effects. We use a published IPD meta-analysis to illustrate three broad approaches used for testing such interactions. Based on another systematic review of recently published IPD meta-analyses, we also show that all three approaches can be applied to aggregate data as well as IPD. We also summarise which methods of analysing and presenting interactions are in current use, and describe their advantages and disadvantages. We recommend that testing for interactions using within-trials information alone (the deft approach) becomes standard practice, alongside graphical presentation that directly visualises this.
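As a rough illustration of the "deft" (within-trials) approach recommended above, the sketch below estimates the treatment-by-covariate interaction separately within each trial and then pools the within-trial estimates with a fixed-effect inverse-variance meta-analysis. It is a hypothetical example with assumed column names (trial, y, treat, x), not code from the paper.

```python
import numpy as np
import statsmodels.formula.api as smf

def deft_interaction(df, outcome="y", treat="treat", covariate="x", trial="trial"):
    """Pool within-trial treatment-by-covariate interactions (fixed-effect meta-analysis)."""
    estimates, variances = [], []
    for _, d in df.groupby(trial):
        # Interaction estimated using within-trial information only.
        fit = smf.ols(f"{outcome} ~ {treat} * {covariate}", data=d).fit()
        term = f"{treat}:{covariate}"
        estimates.append(fit.params[term])
        variances.append(fit.bse[term] ** 2)

    w = 1.0 / np.asarray(variances)  # inverse-variance weights
    pooled = np.sum(w * np.asarray(estimates)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se
```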
How are missing data in covariates handled in observational time-to-event studies in oncology? A systematic review
Background Missing data in covariates can result in biased estimates and loss of power to detect associations. They can also lead to other challenges in time-to-event analyses, including the handling of time-varying effects of covariates, selection of covariates and their flexible modelling. This review aims to describe how researchers approach time-to-event analyses with missing data. Methods Medline and Embase were searched for observational time-to-event studies in oncology published from January 2012 to January 2018. The review focused on proportional hazards models or extended Cox models. We investigated the extent and reporting of missing data and how it was addressed in the analysis. Covariate modelling and selection, and assessment of the proportional hazards assumption were also investigated, alongside the treatment of missing data in these procedures. Results 148 studies were included. The mean proportion of individuals with missingness in any covariate was 32%. 53% of studies used complete-case analysis, and 22% used multiple imputation. In total, 14% of studies stated an assumption concerning missing data and only 34% stated missingness as a limitation. The proportional hazards assumption was checked in 28% of studies, of which 17% did not state the assessment method. 58% of 144 multivariable models stated their covariate selection procedure, with use of a pre-selected set of covariates being the most popular, followed by stepwise methods and univariable analyses. Of 69 studies that included continuous covariates, 81% did not assess the appropriateness of the functional form. Conclusion While guidelines for handling missing data in epidemiological studies are in place, this review indicates that few studies report implementing those recommendations in practice. Although missing data are present in many studies, we found that few state clearly how they handled them or the assumptions they made. Easy-to-implement but potentially biased approaches such as complete-case analysis are most commonly used, despite relying on strong assumptions, even where more appropriate methods should be employed. Authors should be encouraged to follow existing guidelines to address missing data, and increased levels of expectation from journals and editors could be used to improve practice.
Planning a method for covariate adjustment in individually randomised trials: a practical guide
Background It has long been advised to account for baseline covariates in the analysis of confirmatory randomised trials, with the main statistical justifications being that this increases power and, when a randomisation scheme balances covariates, permits a valid estimate of experimental error. There are various methods available to account for covariates, but it is not clear how to choose among them. Methods Taking the perspective of writing a statistical analysis plan, we consider how to choose between the three most promising broad approaches: direct adjustment, standardisation and inverse-probability-of-treatment weighting. Results The three approaches are similar in being asymptotically efficient, in losing efficiency with mis-specified covariate functions, and in handling designed balance. If a marginal estimand is targeted (for example, a risk difference or survival difference), then direct adjustment should be avoided because it involves fitting non-standard models that are subject to convergence issues. Convergence is most likely with IPTW. Robust standard errors used by IPTW are anti-conservative at small sample sizes. All approaches can use similar methods to handle missing covariate data. With missing outcome data, each method has its own way to estimate a treatment effect in the all-randomised population. We illustrate some issues in a reanalysis of GetTested, a randomised trial designed to assess the effectiveness of an electronic sexually transmitted infection testing and results service. Conclusions No single approach is always best: the choice will depend on the trial context. We encourage trialists to consider all three methods more routinely.
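The difference between two of the approaches compared in this abstract, standardisation and inverse-probability-of-treatment weighting, can be sketched for a marginal risk difference as below. This is an illustrative sketch with hypothetical column names, not the paper's analysis code or the GetTested reanalysis; in practice, confidence intervals for either estimator would come from a bootstrap or a sandwich variance, which is where the small-sample anti-conservatism of robust standard errors noted above arises for IPTW.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def standardised_rd(df, covariates, treat="treat", outcome="y"):
    """Standardisation (g-computation): average predicted risks under each arm."""
    X = pd.concat([df[covariates], df[[treat]]], axis=1)
    outcome_model = LogisticRegression(max_iter=1000).fit(X, df[outcome])

    # Predict risk for everyone under treatment and under control, then average.
    X1, X0 = X.copy(), X.copy()
    X1[treat], X0[treat] = 1, 0
    return (outcome_model.predict_proba(X1)[:, 1].mean()
            - outcome_model.predict_proba(X0)[:, 1].mean())

def iptw_rd(df, covariates, treat="treat", outcome="y"):
    """IPTW: weight each arm by the inverse of the estimated propensity score."""
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat])
    ps = ps_model.predict_proba(df[covariates])[:, 1]

    t, y = df[treat].to_numpy(), df[outcome].to_numpy()
    risk1 = np.sum(t * y / ps) / np.sum(t / ps)
    risk0 = np.sum((1 - t) * y / (1 - ps)) / np.sum((1 - t) / (1 - ps))
    return risk1 - risk0
```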
Internet-accessed sexually transmitted infection (e-STI) testing and results service: A randomised, single-blind, controlled trial
Internet-accessed sexually transmitted infection testing (e-STI testing) is increasingly available as an alternative to testing in clinics. Typically, this testing modality enables users to order a test kit from a virtual service (via a website or app), collect their own samples, return test samples to a laboratory, and be notified of their results by short message service (SMS) or telephone. e-STI testing is assumed to increase access to testing in comparison with face-to-face services, but the evidence is unclear. We conducted a randomised controlled trial to assess the effectiveness of an e-STI testing and results service (chlamydia, gonorrhoea, HIV, and syphilis) on STI testing uptake and STI cases diagnosed. The study took place in the London boroughs of Lambeth and Southwark. Between 24 November 2014 and 31 August 2015, we recruited 2,072 participants, aged 16-30 years, who were resident in these boroughs, had at least 1 sexual partner in the last 12 months, stated willingness to take an STI test, and had access to the internet. Those unable to provide consent and unable to read English were excluded. Participants were randomly allocated to receive 1 text message with the web link of an e-STI testing and results service (intervention group) or to receive 1 text message with the web link of a bespoke website listing the locations, contact details, and websites of 7 local sexual health clinics (control group). Participants were free to use any other services or interventions during the study period. The primary outcomes were self-reported STI testing at 6 weeks, verified by patient record checks, and self-reported STI diagnosis at 6 weeks, verified by patient record checks. Secondary outcomes were the proportion of participants prescribed treatment for an STI, time from randomisation to completion of an STI test, and time from randomisation to treatment of an STI. Participants were sent a £10 cash incentive on submission of self-reported data. We completed all follow-up, including patient record checks, by 17 June 2016. Uptake of STI testing was increased in the intervention group at 6 weeks (50.0% versus 26.6%, relative risk [RR] 1.87, 95% CI 1.63 to 2.15, P < 0.001). The proportion of participants diagnosed was 2.8% in the intervention group versus 1.4% in the control group (RR 2.10, 95% CI 0.94 to 4.70, P = 0.079). No evidence of heterogeneity was observed for any of the pre-specified subgroup analyses. The proportion of participants treated was 1.1% in the intervention group versus 0.7% in the control group (RR 1.72, 95% CI 0.71 to 4.16, P = 0.231). Time to test was shorter in the intervention group compared to the control group (28.8 days versus 36.5 days, P < 0.001, test for difference in restricted mean survival time [RMST]), but no differences were observed for time to treatment (83.2 days versus 83.5 days, P = 0.51, test for difference in RMST). We were unable to recruit the planned 3,000 participants and therefore lacked power for the analyses of STI diagnoses and STI cases treated. The e-STI testing service increased uptake of STI testing for all groups, including high-risk groups. The intervention required people to attend clinic for treatment and did not reduce time to treatment. Service innovations to improve treatment rates for those diagnosed online are required and could include e-treatment and postal treatment services. e-STI testing services require long-term monitoring and evaluation. ISRCTN Registry ISRCTN13354298.
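The time-to-test comparison in this trial is summarised by restricted mean survival time (RMST), i.e. the area under the Kaplan-Meier curve up to a chosen horizon. Purely as a sketch of that quantity (not the trial's analysis code), RMST for one arm can be computed from follow-up times and event indicators as follows; the function name and inputs are hypothetical.

```python
import numpy as np

def km_rmst(time, event, tau):
    """Restricted mean survival time: area under the Kaplan-Meier curve up to tau."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)

    # Kaplan-Meier survival probabilities at the distinct event times.
    t_event = np.unique(time[event])
    surv, s = [], 1.0
    for t in t_event:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & event)
        s *= 1.0 - deaths / at_risk
        surv.append(s)

    # Integrate the survival step function from 0 to tau.
    grid = np.clip(np.concatenate(([0.0], t_event, [tau])), None, tau)
    steps = np.concatenate(([1.0], surv))  # S(t) between successive event times
    widths = np.diff(grid)
    return float(np.sum(steps[: len(widths)] * widths))
```

A between-arm difference would then be km_rmst(time_control, event_control, tau) minus km_rmst(time_intervention, event_intervention, tau) for a pre-specified horizon tau; the formal test reported in the abstract additionally requires a variance estimate for that difference.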
Ethnic Differences in the Prevalence of Type 2 Diabetes Diagnoses in the UK: Cross-Sectional Analysis of the Health Improvement Network Primary Care Database
Type 2 diabetes mellitus is associated with high levels of disease burden, including increased mortality risk and significant long-term morbidity. The prevalence of diabetes differs substantially among ethnic groups. We examined the prevalence of type 2 diabetes diagnoses in the UK primary care setting. We analysed data from 404,318 individuals in The Health Improvement Network database, aged 0-99 years and permanently registered with general practices in London. The association between ethnicity and the prevalence of type 2 diabetes diagnoses in 2013 was estimated using a logistic regression model, adjusting for the effects of age group, sex, and social deprivation. A multiple imputation approach utilising population-level information about ethnicity from the UK census was used for imputing missing data. Compared with those of White ethnicity (5.04%, 95% CI 4.95 to 5.13), the crude percentage prevalence of type 2 diabetes was higher in the Asian (7.69%, 95% CI 7.46 to 7.92) and Black (5.58%, 95% CI 5.35 to 5.81) ethnic groups, and lower in the Mixed/Other group (3.42%, 95% CI 3.19 to 3.66). After adjusting for differences in age group, sex, and social deprivation, all minority ethnic groups were more likely to have a diagnosis of type 2 diabetes compared with the White group (OR Asian versus White 2.36, 95% CI 2.26 to 2.47; OR Black versus White 1.65, 95% CI 1.56 to 1.73; OR Mixed/Other versus White 1.17, 95% CI 1.08 to 1.27). The prevalence of type 2 diabetes was higher in the Asian and Black ethnic groups compared with the White group. Accurate estimates of ethnic prevalence of type 2 diabetes based on large datasets are important for facilitating appropriate allocation of public health resources, and for allowing population-level research to be undertaken examining disease trajectories among minority ethnic groups, which might help reduce inequalities.
Estimands in published protocols of randomised trials: urgent improvement needed
Background An estimand is a precise description of the treatment effect to be estimated from a trial (the question) and is distinct from the methods of statistical analysis (how the question is to be answered). The potential use of estimands to improve trial research and reporting has been underpinned by the recent publication of the ICH E9(R1) Addendum on the use of estimands in clinical trials in 2019. We set out to assess how well estimands are described in published trial protocols. Methods We reviewed 50 trial protocols published in October 2020 in Trials and BMJ Open. For each protocol, we determined whether the estimand for the primary outcome was explicitly stated, not stated but inferable (i.e. could be constructed from the information given), or not inferable. Results None of the 50 trials explicitly described the estimand for the primary outcome, and in 74% of trials, it was impossible to infer the estimand from the information included in the protocol. The population attribute of the estimand could not be inferred in 36% of trials, the treatment condition attribute in 20%, the population-level summary measure in 34%, and the handling of intercurrent events in 60% (the strategy for handling non-adherence was not inferable in 32% of protocols, and the strategy for handling mortality was not inferable in 80% of the protocols for which it was applicable). Conversely, the outcome attribute was stated for all trials. In 28% of trials, three or more of the five estimand attributes could not be inferred. Conclusions The description of estimands in published trial protocols is poor, and in most trials, it is impossible to understand exactly what treatment effect is being estimated. Given the utility of estimands to improve clinical research and reporting, this urgently needs to change.
Prediction meets causal inference: the role of treatment in clinical prediction models
In this paper we study approaches for dealing with treatment when developing a clinical prediction model. Analogous to the estimand framework recently proposed by the European Medicines Agency for clinical trials, we propose a ‘predictimand’ framework of different questions that may be of interest when predicting risk in relation to treatment started after baseline. We provide a formal definition of the estimands matching these questions, give examples of settings in which each is useful and discuss appropriate estimators including their assumptions. We illustrate the impact of the predictimand choice in a dataset of patients with end-stage kidney disease. We argue that clearly defining the estimand is equally important in prediction research as in causal inference.
A four-step strategy for handling missing outcome data in randomised trials affected by a pandemic
Background The coronavirus pandemic (Covid-19) presents a variety of challenges for ongoing clinical trials, including an inevitably higher rate of missing outcome data, with new and non-standard reasons for missingness. International drug trial guidelines recommend trialists review plans for handling missing data in the conduct and statistical analysis, but clear recommendations are lacking. Methods We present a four-step strategy for handling missing outcome data in the analysis of randomised trials that are ongoing during a pandemic. We consider handling missing data arising due to (i) participant infection, (ii) treatment disruptions and (iii) loss to follow-up. We consider settings where treatment effects for a ‘pandemic-free world’ and for a ‘world including a pandemic’ are of interest. Results In any trial, investigators should: (1) clarify the treatment estimand of interest with respect to the occurrence of the pandemic; (2) establish what data are missing for the chosen estimand; (3) perform the primary analysis under the most plausible missing data assumptions; and (4) perform sensitivity analyses under alternative plausible assumptions. To obtain an estimate of the treatment effect in a ‘pandemic-free world’, participant data that are clinically affected by the pandemic (directly due to infection or indirectly via treatment disruptions) are not relevant and can be set to missing. For the primary analysis, a missing-at-random assumption that conditions on all observed data expected to be associated with both the outcome and missingness may be most plausible. For the treatment effect in the ‘world including a pandemic’, all participant data are relevant and should be included in the analysis. For the primary analysis, a missing-at-random assumption – potentially incorporating a pandemic time-period indicator and participant infection status – or a missing-not-at-random assumption of a poorer response may be most relevant, depending on the setting. In all scenarios, sensitivity analyses under credible missing-not-at-random assumptions should be used to evaluate the robustness of results. We highlight controlled multiple imputation as an accessible tool for conducting such sensitivity analyses. Conclusions Missing data problems will be exacerbated for trials active during the Covid-19 pandemic. This four-step strategy will facilitate clear thinking about the appropriate analysis for the questions of interest.
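Controlled multiple imputation, highlighted in the conclusions above, is often implemented as delta adjustment: impute under missing-at-random, shift the imputed values by a fixed offset delta (the missing-not-at-random assumption), and re-analyse over a range of delta values. The sketch below shows that idea for a continuous outcome using scikit-learn's IterativeImputer; it is a schematic, hypothetical example rather than the strategy's reference implementation, and in a trial the shift would typically be applied to one arm and the treatment effect re-estimated, with Rubin's rules used to combine estimates and variances across imputations.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def delta_adjusted_estimates(X, outcome_col, missing_mask, deltas, n_imputations=20):
    """Delta-adjusted multiple imputation for a continuous outcome.

    X            : array of covariates plus the outcome column (np.nan where missing)
    outcome_col  : index of the outcome column in X
    missing_mask : boolean vector, True where the outcome was originally missing
    deltas       : offsets added to MAR-imputed outcomes (0 reproduces the MAR analysis)
    Returns {delta: mean outcome averaged over imputations}; Rubin's rules would
    additionally be used to combine variances for inference.
    """
    results = {}
    for delta in deltas:
        means = []
        for m in range(n_imputations):
            imputer = IterativeImputer(sample_posterior=True, random_state=m)
            X_imp = imputer.fit_transform(X)
            # Shift only the values that were imputed, by delta (the MNAR assumption).
            X_imp[missing_mask, outcome_col] += delta
            means.append(X_imp[:, outcome_col].mean())
        results[delta] = float(np.mean(means))
    return results
```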