70 result(s) for "Ray, Surajit"
Correlation between pseudotyped virus and authentic virus neutralisation assays, a systematic review and meta-analysis of the literature
The virus neutralisation assay is a principal method to assess the efficacy of antibodies in blocking viral entry. Due to the biosafety handling requirements of viruses classified as hazard group 3 or 4, pseudotyped viruses can be used as a safer alternative. However, it is often queried how well results derived from pseudotyped viruses correlate with those from authentic virus. This systematic review and meta-analysis was designed to comprehensively evaluate the correlation between the two assays. Using PubMed and Google Scholar, reports that incorporated neutralisation assays with both pseudotyped and authentic viruses, and applied a mathematical formula to assess the relationship between the results, were selected for review. Our searches identified 67 reports, of which 22 underwent a three-level meta-analysis. The three-level meta-analysis revealed a high level of correlation between pseudotyped and authentic viruses when used in a neutralisation assay. Reports that were not included in the meta-analysis also showed a high degree of correlation, with the exception of lentiviral-based pseudotyped Ebola viruses. The pseudotyped viruses identified in this report can be used as a surrogate for authentic virus, though care must be taken in choosing which pseudotype core to use when generating new, uncharacterised pseudotyped viruses.
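The correlation-pooling step behind such a meta-analysis can be illustrated with a simple fixed-effect Fisher-z pooling of study-level correlations. This is a sketch only: the correlations and sample sizes below are invented, and the paper itself used a more sophisticated three-level model.

```python
import math

# Hypothetical study-level correlations (r) between pseudotyped and
# authentic virus neutralisation titres, with study sample sizes n.
studies = [(0.90, 24), (0.85, 40), (0.95, 15), (0.80, 60)]

# Fisher z transformation: z = atanh(r), with Var(z) ~ 1 / (n - 3),
# pooled by inverse-variance weighting (a fixed-effect simplification).
weighted = [(math.atanh(r), n - 3) for r, n in studies]
z_pooled = sum(z * w for z, w in weighted) / sum(w for _, w in weighted)
r_pooled = math.tanh(z_pooled)
print(f"pooled r = {r_pooled:.3f}")
```

The z transformation stabilises the variance of the correlation coefficient, so studies can be averaged on the z scale and the result transformed back.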
Spatial models with covariates improve estimates of peat depth in blanket peatlands
Peatlands are spatially heterogeneous ecosystems that develop due to a complex set of autogenic physical and biogeochemical processes and allogenic factors such as the climate and topography. They are significant stocks of global soil carbon, and therefore predicting the depth of peatlands is an important part of establishing an accurate assessment of their magnitude. Yet there have been few attempts to account for both internal and external processes when predicting the depth of peatlands. Using blanket peatlands in Great Britain as a case study, we compare linear and geostatistical (spatial) models and several sets of covariates applicable for peatlands around the world that have developed over hilly or undulating terrain. We hypothesized that the spatial model would act as a proxy for the autogenic processes in peatlands that can mediate the accumulation of peat on plateaus or shallow slopes. Our findings show that the spatial model performs better than the linear model in all cases: root mean square errors (RMSE) are lower, and 95% prediction intervals are narrower. In support of our hypothesis, the spatial model also better predicts the deeper areas of peat, and we show that its predictive performance in areas of deep peat depends on depth observations being spatially autocorrelated. Where they are not, the spatial model performs only slightly better than the linear model. As a result, we recommend that practitioners carrying out depth surveys fully account for the variation of topographic features at prediction locations, and that the sampling approach adopted enables observations to be spatially autocorrelated.
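The linear-versus-spatial comparison can be sketched on synthetic data. The code below is not the authors' geostatistical model: it stands in a crude inverse-distance interpolation of residuals for kriging, and the coordinates, covariate, and coefficients are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic blanket-peat survey: depth falls with slope (a covariate)
# and carries a smooth, spatially autocorrelated component.
n = 200
xy = rng.uniform(0, 10, size=(n, 2))          # survey coordinates
slope = rng.uniform(0, 15, size=n)            # terrain slope (degrees)
field = np.sin(xy[:, 0]) + np.cos(xy[:, 1])   # smooth spatial field
depth = 3.0 - 0.12 * slope + field + 0.1 * rng.normal(size=n)

train, test = np.arange(150), np.arange(150, n)

# Linear model: depth ~ slope, by ordinary least squares.
A = np.column_stack([np.ones(train.size), slope[train]])
beta, *_ = np.linalg.lstsq(A, depth[train], rcond=None)
lin_pred = beta[0] + beta[1] * slope[test]

# "Spatial" model: the same trend plus inverse-distance-weighted
# interpolation of the training residuals (a crude kriging stand-in).
resid = depth[train] - (beta[0] + beta[1] * slope[train])
d = np.linalg.norm(xy[test, None, :] - xy[None, train, :], axis=2)
w = 1.0 / (d ** 2 + 1e-6)
sp_pred = lin_pred + (w * resid).sum(axis=1) / w.sum(axis=1)

def rmse(pred):
    return float(np.sqrt(np.mean((depth[test] - pred) ** 2)))

print(f"linear RMSE:  {rmse(lin_pred):.3f}")
print(f"spatial RMSE: {rmse(sp_pred):.3f}")
```

Because the residuals of the covariate-only fit are spatially autocorrelated here, borrowing strength from nearby observations lowers the RMSE, mirroring the paper's finding.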
Comparison of CHOP-19 and CHOP-25 for treatment of peripheral nodal B-cell lymphoma in dogs: A European multicenter retrospective cohort study
Abstract Background Peripheral nodal B-cell lymphomas (PNBCL) represent the most common presentation of lymphomas in dogs. Multiagent CHOP (C = cyclophosphamide, H = hydroxydaunorubicin [Doxorubicin], O = Oncovin, P = prednisolone)-based chemotherapy protocols have been widely accepted as gold standard 1st-line treatment. CHOP-25 and CHOP-19 are most commonly prescribed but have never been directly compared. Objectives Our primary aim was to compare outcomes of dogs diagnosed with PNBCL, treated using a 1st-line CHOP-19 or CHOP-25 protocol. A secondary objective was to determine the impact of protocol-related variables on outcomes. Animals Five hundred two dogs from 16 European oncology referral centers. One hundred fifty-five dogs were treated with CHOP-19 and 347 dogs with CHOP-25. Methods Retrospective, multicentric cohort study of dogs diagnosed with PNBCL between 2014 and 2021. Results The 6-month, 1-year, and median progression-free survival (PFS) were 56.5% (95% confidence interval [CI], 49.2-65.0), 14.1% (95% CI, 9.4-21.0), and 196 days (95% CI, 176-233) with CHOP-19; and 56.4% (95% CI, 51.4-61.9), 17% (95% CI, 13.4-21.6), and 209 days (95% CI, 187-224) with CHOP-25. The 1-year, 2-year and median overall survival (OS) were 36.9% (95% CI, 29.7-46.0), 13.5% (95% CI, 8.6-21.1), and 302 days (95% CI, 249-338) with CHOP-19; and 42.8% (95% CI, 37.7-48.7), 15.4% (95% CI, 11.7-20.4), and 321 days (95% CI, 293-357) with CHOP-25. No significant difference in PFS and OS was found between the 2 protocols. Conclusions and Clinical Importance Our study confirmed similar outcomes for dogs with PNBCL treated with 1st-line CHOP-19 or CHOP-25. Both protocols therefore could be used as a standard of care in future trials.
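The progression-free survival figures above come from Kaplan-Meier estimation; a minimal, dependency-free version of the estimator, run on invented toy records rather than the study data, looks like this:

```python
# Kaplan-Meier estimate of median PFS on toy data (not the study data).
# Each tuple: (days to progression or censoring, 1 = event, 0 = censored).
records = [(90, 1), (120, 1), (150, 0), (196, 1), (210, 1),
           (230, 0), (260, 1), (300, 1), (330, 0), (400, 1)]

records.sort()
at_risk, surv = len(records), 1.0
median_pfs = None
for t, event in records:
    if event:
        # Each event scales survival by the fraction still progression-free.
        surv *= (at_risk - 1) / at_risk
        if median_pfs is None and surv <= 0.5:
            median_pfs = t
    at_risk -= 1          # censored dogs leave the risk set without an event
print(f"median PFS: {median_pfs} days")
```

Censored observations reduce the risk set without dropping the survival curve, which is why they cannot simply be excluded.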
A robust COVID-19 mortality prediction calculator based on Lymphocyte count, Urea, C-Reactive Protein, Age and Sex (LUCAS) with chest X-rays
There have been numerous risk tools developed to enable triaging of SARS-CoV-2 positive patients, with diverse levels of complexity. Here we present a simplified risk tool based on minimal parameters and chest X-ray (CXR) image data that predicts the survival of adult SARS-CoV-2 positive patients at hospital admission. We analysed the NCCID database of patient blood variables and CXR images from 19 hospitals across the UK using multivariable logistic regression. The initial dataset was non-randomly split into development and internal validation datasets with 1434 and 310 SARS-CoV-2 positive patients, respectively. External validation of the final model was conducted on 741 Accident and Emergency (A&E) admissions with suspected SARS-CoV-2 infection from a separate NHS Trust. The LUCAS mortality score included the five strongest predictors (Lymphocyte count, Urea, C-reactive protein, Age, Sex), which are available at any point of care with rapid turnaround of results. Our simple multivariable logistic model showed high discrimination for fatal outcome, with the area under the receiver operating characteristic curve (AUC-ROC) 0.765 (95% confidence interval (CI): 0.738–0.790) in the development cohort, 0.744 (CI: 0.673–0.808) in the internal validation cohort, and 0.752 (CI: 0.713–0.787) in the external validation cohort. The discriminatory power of LUCAS increased slightly when including the CXR image data. LUCAS can be used to obtain valid predictions of mortality within 60 days of a SARS-CoV-2 RT-PCR result, stratifying patients into low, moderate, high, or very high risk of fatality.
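A multivariable logistic model of this kind, plus the AUC-ROC used to judge it, can be sketched in a few lines. The data below are synthetic stand-ins, not the NCCID cohort, and the coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for five standardised predictors in the spirit of
# LUCAS (lymphocytes, urea, CRP, age, sex); NOT the NCCID cohort, and
# the "true" coefficients below are illustrative only.
n = 1000
X = rng.normal(size=(n, 5))
true_w = np.array([-1.0, 0.8, 0.9, 1.2, 0.3])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X @ true_w - 0.5)))).astype(float)

# Multivariable logistic regression fitted by plain gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / n
    b -= 0.1 * (p - y).mean()

# AUC-ROC: the probability that a random death outranks a random survivor.
score = X @ w + b
pos, neg = score[y == 1], score[y == 0]
auc = float((pos[:, None] > neg[None, :]).mean())
print(f"AUC-ROC: {auc:.3f}")
```

The pairwise-comparison formula for the AUC makes explicit why it measures discrimination rather than calibration.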
The SARS-CoV-2 Alpha variant was associated with increased clinical severity of COVID-19 in Scotland: A genomics-based retrospective cohort analysis
The SARS-CoV-2 Alpha variant was associated with increased transmission relative to other variants present at the time of its emergence, and several studies have shown an association between Alpha variant infection and increased hospitalisation and 28-day mortality. However, none has addressed the impact on maximum severity of illness in the general population, classified by the level of respiratory support required or by death, which is what we aimed to do here. In this retrospective multi-centre clinical cohort sub-study of the COG-UK consortium, 1475 samples from Scottish hospitalised and community cases collected between 1st November 2020 and 30th January 2021 were sequenced. We matched sequence data to clinical outcomes as the Alpha variant became dominant in Scotland and modelled the association between Alpha variant infection and severe disease using a 4-point scale of maximum severity by 28 days: 1. no respiratory support, 2. supplemental oxygen, 3. ventilation, and 4. death. Our cumulative generalised linear mixed model analyses found evidence (cumulative odds ratio: 1.40, 95% CI: 1.02, 1.93) of a positive association between increased clinical severity and lineage (Alpha variant versus pre-Alpha variants). The Alpha variant was associated with more severe clinical disease in the Scottish population than co-circulating lineages.
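The cumulative odds ratio reported above compares the odds of exceeding each cut-point of the severity scale. Setting aside the mixed-model machinery, the raw calculation on a hypothetical, invented 2 x 4 table of counts is:

```python
import numpy as np

# Hypothetical counts (NOT the COG-UK data) on the 4-point scale:
# 1 = no respiratory support, 2 = supplemental oxygen,
# 3 = ventilation, 4 = death.
alpha     = np.array([300, 120, 50, 30])   # Alpha variant cases
pre_alpha = np.array([400, 110, 40, 25])   # pre-Alpha lineage cases

def cumulative_odds_ratios(a, b):
    """Odds ratio of exceeding each cut-point, group a versus group b."""
    ors = []
    for cut in range(1, len(a)):
        a_hi, a_lo = a[cut:].sum(), a[:cut].sum()
        b_hi, b_lo = b[cut:].sum(), b[:cut].sum()
        ors.append((a_hi / a_lo) / (b_hi / b_lo))
    return ors

labels = [">= oxygen", ">= ventilation", "death"]
for label, ratio in zip(labels, cumulative_odds_ratios(alpha, pre_alpha)):
    print(f"OR {label}: {ratio:.2f}")
```

When the three cut-point odds ratios are similar, as they are for these made-up counts, the proportional-odds assumption behind a single cumulative odds ratio is plausible.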
Robustness of textural analysis features in quantitative 99mTc and 177Lu SPECT-CT phantom acquisitions
Background Textural analysis features in molecular imaging need to be robust under repeat measurement and independent of volume for optimum use in clinical studies. Recent EANM and SNMMI guidelines for radiomics provide advice on the potential use of phantoms to identify robust features (Hatt in EJNMMI, 2022). This study applies the suggested phantoms to SPECT quantification for two radionuclides, 99mTc and 177Lu. Methods Acquisitions were made with a uniform phantom to test volume dependency and with a customised ‘Revolver’ phantom, based on the PET phantom described in Hatt (EJNMMI, 2022) but with local adaptations for SPECT. Each phantom was filled separately with 99mTc and 177Lu. Sixty-seven textural analysis features were extracted and tested for robustness and volume dependency. Results Features showing high volume dependency or a high coefficient of variation (indicating poor repeatability) were removed from the list of features that may be suitable for use in clinical studies. After feature reduction, 39 features for 99mTc and 33 features for 177Lu remained. Conclusion The use of a uniform phantom to test volume dependency and a Revolver phantom to identify repeatable textural analysis features is possible for quantitative SPECT using 99mTc or 177Lu. Selection of such features is likely to be centre-dependent due to differences in camera performance as well as acquisition and reconstruction protocols.
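The repeatability filter described in the Results can be sketched as a coefficient-of-variation cut-off across repeat acquisitions. Feature names, values, and the 10% threshold below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feature table: each entry holds one texture feature's
# values over 10 repeat phantom acquisitions (names are illustrative).
repeats = {
    "glcm_entropy": rng.normal(5.0, 0.05, 10),   # stable feature
    "glrlm_sre":    rng.normal(0.8, 0.30, 10),   # poorly repeatable
    "ngtdm_coarse": rng.normal(2.0, 0.04, 10),   # stable feature
}

def coefficient_of_variation(values):
    return float(np.std(values) / np.mean(values))

# Keep only features whose CoV across repeats is below a cut-off
# (10% here; the guidelines leave the exact threshold to each centre).
COV_CUTOFF = 0.10
robust = [name for name, values in repeats.items()
          if coefficient_of_variation(values) < COV_CUTOFF]
print(robust)
```

A volume-dependency test would follow the same pattern, correlating each feature with sphere volume instead of computing spread across repeats.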
Top scoring pairs for feature selection in machine learning and applications to cancer outcome prediction
Background The widely used k top scoring pair (k-TSP) algorithm is a simple yet powerful parameter-free classifier. It owes its success in many cancer microarray datasets to an effective feature selection algorithm that is based on the relative expression ordering of gene pairs. However, its general robustness does not extend to some difficult datasets, such as those involving cancer outcome prediction, which may be due to the relatively simple voting scheme used by the classifier. We believe that the performance can be enhanced by separating its effective feature selection component and combining it with a powerful classifier such as the support vector machine (SVM). More generally, the top scoring pairs generated by the k-TSP ranking algorithm can be used as a dimensionally reduced subspace for other machine learning classifiers. Results We developed an approach integrating the k-TSP ranking algorithm (TSP) with other machine learning methods, allowing combination of the computationally efficient, multivariate feature ranking of k-TSP with multivariate classifiers such as SVM. We evaluated this hybrid scheme (k-TSP+SVM) in a range of simulated datasets with known data structures. As compared with other feature selection methods, such as a univariate method similar to Fisher's discriminant criterion (Fisher), or a recursive feature elimination embedded in SVM (RFE), TSP is increasingly more effective than the other two methods as the informative genes become progressively more correlated, which is demonstrated both in terms of classification performance and the ability to recover true informative genes. We also applied this hybrid scheme to four cancer prognosis datasets, in which k-TSP+SVM outperforms the k-TSP classifier in all datasets, and achieves either comparable or superior performance to that using SVM alone. In concurrence with what is observed in simulation, TSP appears to be a better feature selector than Fisher and RFE in some of the cancer datasets. Conclusions The k-TSP ranking algorithm can be used as a computationally efficient, multivariate filter method for feature selection in machine learning. SVM in combination with the k-TSP ranking algorithm outperforms k-TSP and SVM alone in simulated datasets and in some cancer prognosis datasets. Simulation studies suggest that as a feature selector, it is better tuned to certain data characteristics, i.e. correlations among informative genes, which makes it potentially interesting as an alternative feature ranking method in pathway analysis.
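The core of the k-TSP ranking, scoring each gene pair by how often its expression ordering flips between classes, is short enough to sketch directly. The toy data below are invented and this is not the paper's implementation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Toy expression matrix: 6 genes x 40 samples, two classes of 20.
# Genes 0 and 1 reverse their relative ordering between the classes.
n = 40
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(6, n))
X[0, y == 0] += 2.0   # gene 0 high in class 0
X[1, y == 1] += 2.0   # gene 1 high in class 1

def tsp_scores(X, y):
    """Score each gene pair (i, j) by |P(Xi < Xj | 0) - P(Xi < Xj | 1)|."""
    scores = {}
    for i, j in combinations(range(X.shape[0]), 2):
        p0 = np.mean(X[i, y == 0] < X[j, y == 0])
        p1 = np.mean(X[i, y == 1] < X[j, y == 1])
        scores[(i, j)] = abs(p0 - p1)
    return scores

scores = tsp_scores(X, y)
best = max(scores, key=scores.get)
print(best)
```

Because the score depends only on within-sample rank order, it is invariant to monotone normalisation of each array; the hybrid scheme would feed the top-ranked pairs into an SVM rather than vote on them directly.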
A Computational Framework to Emulate the Human Perspective in Flow Cytometric Data Analysis
In recent years, intense research efforts have focused on developing methods for automated flow cytometric data analysis. However, while designing such applications, little or no attention has been paid to the human perspective that is central to the manual gating process of identifying and characterizing cell populations. In particular, the assumption of many common techniques that cell populations can be modeled reliably with pre-specified distributions may not hold true in real-life samples, which can have populations of arbitrary shapes and considerable inter-sample variation. To address this, we developed a new framework, flowScape, for emulating certain key aspects of the human perspective in analyzing flow data, which we implemented in multiple steps. First, flowScape creates a mathematically rigorous map of the high-dimensional flow data landscape based on dense and sparse regions defined by relative concentrations of events around modes. In the second step, these modal clusters are connected with a global hierarchical structure. This representation allows flowScape to perform ridgeline analysis for both traversing the landscape and isolating cell populations at different levels of resolution. Finally, we extended manual gating with a new capacity for constructing templates that can identify target populations in terms of their relative parameters, as opposed to the more commonly used absolute or physical parameters. This allows flowScape to apply such templates in batch mode for detecting the corresponding populations in a flexible, sample-specific manner. We also demonstrated different applications of our framework to flow data analysis and showed its superiority over other analytical methods. The human perspective, built on intuition and experience, is a very important component of flow cytometric data analysis. By emulating some of its approaches and extending them with automation and rigor, flowScape provides a flexible and robust framework for computational cytomics.
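A one-dimensional cartoon of the mode-based landscape flowScape builds: estimate a kernel density over the events and take its local maxima as population modes. The populations and bandwidth below are invented, and real flow data are high-dimensional.

```python
import numpy as np

rng = np.random.default_rng(5)

# One flow parameter with two cell populations of different spreads
# (synthetic events; centres, spreads, and counts are invented).
events = np.concatenate([rng.normal(2.0, 0.3, 400),
                         rng.normal(5.0, 0.8, 600)])

# Gaussian kernel density estimate on a grid, then local maxima of the
# density taken as modes -- a 1-D cartoon of the modal landscape.
grid = np.linspace(0, 8, 400)
h = 0.3
dens = np.exp(-(grid[:, None] - events[None, :]) ** 2 / (2 * h * h)).mean(axis=1)
interior = dens[1:-1]
modes = grid[1:-1][(interior > dens[:-2]) & (interior > dens[2:])]
print(np.round(modes, 1))
```

Note that no parametric shape is imposed on either population; the modes fall out of the density itself, which is the point the abstract makes against pre-specified distributions.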
Model selection in high dimensions: a quadratic-risk-based approach
We propose a general class of risk measures which can be used for data-based evaluation of parametric models. The loss function is defined as the generalized quadratic distance between the true density and the model proposed. These distances are characterized by a simple quadratic form structure that is adaptable through the choice of a non-negative definite kernel and a bandwidth parameter. Using asymptotic results for the quadratic distances we build a quick-to-compute approximation for the risk function. Its derivation is analogous to the Akaike information criterion but, unlike the Akaike information criterion, the quadratic risk is a global comparison tool. The method does not require resampling, which is a great advantage when point estimators are expensive to compute. The method is illustrated by using the problem of selecting the number of components in a mixture model, where it is shown that, by using an appropriate kernel, the method is computationally straightforward in arbitrarily high data dimensions. In this same context it is shown that the method has some clear advantages over the Akaike information criterion and Bayesian information criterion.
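A kernel-based quadratic distance of the kind described can be sketched as follows, here used to compare one-component and two-component candidate models against bimodal data. This is a simplified sample-versus-sample estimate with an invented bandwidth, not the authors' risk approximation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Data drawn from a two-component Gaussian mixture.
data = np.concatenate([rng.normal(-2, 1, 250), rng.normal(2, 1, 250)])

def quad_distance(x, y, h=1.0):
    """Quadratic distance between two samples under a Gaussian kernel
    with bandwidth h (an MMD-style plug-in estimate)."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h * h))
    return float(k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean())

# Candidate 1: a single Gaussian matched to the data's first two moments.
m1 = rng.normal(data.mean(), data.std(), 500)
# Candidate 2: the (here known) two-component mixture.
m2 = np.concatenate([rng.normal(-2, 1, 250), rng.normal(2, 1, 250)])

d1, d2 = quad_distance(data, m1), quad_distance(data, m2)
print(f"1-component distance: {d1:.4f}")
print(f"2-component distance: {d2:.4f}")
```

The correctly specified two-component model sits closer to the data in quadratic distance, which is the behaviour the risk-based criterion exploits when selecting the number of mixture components.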
Functional factor analysis for periodic remote sensing data
We present a new approach to factor rotation for functional data. This is achieved by rotating the functional principal components toward a predefined space of periodic functions designed to decompose the total variation into components that are nearly-periodic and nearly-aperiodic with a predefined period. We show that the factor rotation can be obtained by calculation of canonical correlations between appropriate spaces, which makes the methodology computationally efficient. Moreover, we demonstrate that our proposed rotations provide stable and interpretable results in the presence of highly complex covariance. This work is motivated by the goal of finding interpretable sources of variability in gridded time series of vegetation index measurements obtained from remote sensing, and we demonstrate our methodology through an application of factor rotation of this data.