42,024 results for "Quality assurance."
EQ-5D and the EuroQol Group: Past, Present and Future
Over the period 1987–1991 an inter-disciplinary five-country group developed the EuroQol instrument, a five-dimensional, three-level generic measure subsequently termed the ‘EQ-5D’. It was designed to measure and value health status. The salient features of its development, consolidation, and expansion are discussed. Initial expansion came in particular in the form of new language versions. Their development raised translation and semantic issues, and experience with these fed into the design of two further instruments, the EQ-5D-5L and the youth version EQ-5D-Y. The expanded usage across clinical programmes, disease and condition areas, population surveys, patient-reported outcomes, and value sets is outlined. Valuation has remained of continuing relevance for the Group, as it has allowed its instruments to be used in the economic appraisal of health programmes and incorporated into health technology assessments. The future of the Group is considered in the context of: (1) its scientific strategy; (2) changes in the external environment affecting the demand for EQ-5D; and (3) a variety of issues it faces regarding the design of the instrument, its use in health technology assessment, and potential new uses for EQ-5D outside of clinical trials and technology appraisal.
The Marburg-Münster Affective Disorders Cohort Study (MACS): A quality assurance protocol for MR neuroimaging data
Large, longitudinal, multi-center MR neuroimaging studies require comprehensive quality assurance (QA) protocols for assessing the general quality of the compiled data, indicating potential malfunctions in the scanning equipment, and evaluating inter-site differences that need to be accounted for in subsequent analyses. We describe the implementation of a QA protocol for functional magnetic resonance imaging (fMRI) data based on the regular measurement of an MRI phantom and an extensive variety of currently published QA statistics. The protocol is implemented in the MACS (Marburg-Münster Affective Disorders Cohort Study, http://for2107.de/), a two-center research consortium studying the neurobiological foundations of affective disorders. Between February 2015 and October 2016, 1214 phantom measurements were acquired using a standard fMRI protocol. Using 444 healthy control subjects who were measured in the cohort between 2014 and 2016, we investigate the extent of between-site differences in contrast to the dependence on subject-specific covariates (age and sex) for structural MRI, fMRI, and diffusion tensor imaging (DTI) data. We show that most of the presented QA statistics differ markedly not only between the two scanners used for the cohort but also between experimental settings (e.g. hardware and software changes), demonstrate that some of these statistics depend on external variables (e.g. time of day, temperature), highlight their strong dependence on proper handling of the MRI phantom, and show how the use of a phantom holder may reduce this dependence. Site effects, however, exist not only for the phantom data but also for human MRI data. Using T1-weighted structural images, we show that total intracranial (TIV), grey matter (GMV), and white matter (WMV) volumes differ significantly between the MR scanners, with large effect sizes. Voxel-based morphometry (VBM) analyses show that these structural differences between scanners are most pronounced in the bilateral basal ganglia, thalamus, and posterior regions. Using DTI data, we also show that fractional anisotropy (FA) differs between sites in almost all regions assessed. When pooling data from multiple centers, our data show that it is necessary to account not only for inter-site differences but also for hardware and software changes of the scanning equipment. Moreover, the strong dependence of the QA statistics on reliable placement of the MRI phantom indicates that a phantom holder is recommended to reduce the variance of the QA statistics and thus increase the probability of detecting potential scanner malfunctions.
•Quality assurance (QA) protocol for large, longitudinal, multi-center MR neuroimaging studies.
•Dependence of QA statistics on MR scanner type, hardware and software changes, and external variables (e.g., time of day, temperature).
•Consequences of phantom data variations for human MRI data.
•Dependence of QA statistics on MR phantom placement.
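To make concrete the kind of statistic such a protocol tracks, the sketch below computes three commonly used phantom QA measures (mean ROI signal, signal-to-fluctuation-noise ratio, and percent drift) from a 4D phantom fMRI series. The file name, ROI size, and detrending order are illustrative assumptions; this is a generic sketch, not the MACS pipeline itself.

```python
# Sketch: basic fMRI phantom QA statistics (mean signal, SFNR, drift).
# Hypothetical inputs; not the MACS pipeline. Assumes a 4D NIfTI series.
import numpy as np
import nibabel as nib

def phantom_qa_stats(nifti_path, roi_halfwidth=10):
    img = nib.load(nifti_path)
    data = img.get_fdata()                      # shape: (x, y, z, t)
    x, y, z, t = data.shape
    # Central cuboid ROI, a common choice for a homogeneous phantom.
    roi = data[x//2 - roi_halfwidth : x//2 + roi_halfwidth,
               y//2 - roi_halfwidth : y//2 + roi_halfwidth,
               z//2 - 2 : z//2 + 2, :]
    ts = roi.reshape(-1, t).mean(axis=0)        # spatially averaged time series
    mean_signal = ts.mean()
    # Detrend with a 2nd-order polynomial before estimating fluctuation noise.
    frames = np.arange(t)
    coeffs = np.polyfit(frames, ts, 2)
    resid = ts - np.polyval(coeffs, frames)
    sfnr = mean_signal / resid.std()            # signal-to-fluctuation-noise ratio
    drift_pct = 100 * (np.polyval(coeffs, t - 1)
                       - np.polyval(coeffs, 0)) / mean_signal
    return {"mean_signal": mean_signal, "sfnr": sfnr, "drift_pct": drift_pct}

print(phantom_qa_stats("phantom_run.nii.gz"))   # hypothetical file name
```

Tracking such values per measurement over time is what lets a site spot scanner malfunctions or hardware/software changes as level shifts in the QA series.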
Quality assurance and quality control reporting in untargeted metabolic phenotyping: mQACC recommendations for analytical quality management
Background: Demonstrating that the data produced in metabolic phenotyping investigations (metabolomics/metabonomics) are of good quality is increasingly seen as a key factor in gaining acceptance for the results of such studies. The use of established quality control (QC) protocols, including appropriate QC samples, is an important and evolving aspect of this process. However, inadequate or incorrect reporting of the QA/QC procedures followed in the study may lead to misinterpretation or overemphasis of the findings and prevent future meta-analysis of the body of work. Objective: The aim of this guidance is to provide researchers with a framework that encourages them to describe quality assessment and quality control procedures and outcomes in mass spectrometry and nuclear magnetic resonance spectroscopy-based methods in untargeted metabolomics, with a focus on reporting on QC samples in sufficient detail for them to be understood, trusted and replicated. There is no intent to be prescriptive with regard to analytical best practices; rather, guidance for reporting QA/QC procedures is suggested. A template that can be completed as studies progress, to ensure that relevant data are collected, and further documents are provided as online resources. Key reporting practices: Multiple topics should be considered when reporting QA/QC protocols and outcomes for metabolic phenotyping data. Coverage should include the role(s), sources, types, preparation and uses of the QC materials and samples generally employed in the generation of metabolomic data. Details such as sample matrices and sample preparation, the use of test mixtures and system suitability tests, blanks and technique-specific factors are considered, and methods for reporting are discussed, including the importance of reporting the acceptance criteria for the QCs. To this end, the reporting of QC samples and results is considered at two levels of detail: “minimal” and “best reporting practice” levels.
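As an illustration of one widely reported acceptance check of this kind, the sketch below computes the per-feature relative standard deviation (RSD) across pooled-QC injections and flags features against a threshold. The 30% cutoff and the column layout are hypothetical illustration values, not figures mandated by the mQACC guidance.

```python
# Sketch: per-feature RSD across pooled QC injections, a QC acceptance
# check commonly reported in untargeted metabolomics. The 30% threshold
# and the data layout are hypothetical, not mQACC-mandated values.
import pandas as pd

def qc_rsd_report(intensities: pd.DataFrame, qc_columns: list,
                  rsd_threshold: float = 30.0) -> pd.DataFrame:
    """intensities: features as rows, samples as columns."""
    qc = intensities[qc_columns]
    rsd = 100.0 * qc.std(axis=1, ddof=1) / qc.mean(axis=1)
    report = pd.DataFrame({"rsd_pct": rsd, "passes": rsd <= rsd_threshold})
    return report.sort_values("rsd_pct", ascending=False)

# Hypothetical usage: three pooled-QC injections among study samples.
df = pd.DataFrame(
    {"QC1": [1000, 52], "QC2": [1080, 15], "QC3": [950, 90],
     "Sample1": [1200, 60]},
    index=["feature_A", "feature_B"])
print(qc_rsd_report(df, ["QC1", "QC2", "QC3"]))
```

Reporting the threshold actually applied, alongside the resulting pass/fail counts, is exactly the kind of acceptance-criteria detail the guidance asks authors to state explicitly.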
Reference materials for MS-based untargeted metabolomics and lipidomics: a review by the metabolomics quality assurance and quality control consortium (mQACC)
Introduction: The metabolomics quality assurance and quality control consortium (mQACC) is enabling the identification, development, prioritization, and promotion of suitable reference materials (RMs) to be used in quality assurance (QA) and quality control (QC) for untargeted metabolomics research. Objectives: This review aims to highlight current RMs and the methodologies used within the untargeted metabolomics and lipidomics communities to ensure standardization of results obtained from data analysis, interpretation, and cross-study and cross-laboratory comparisons. The essence of these aims is also applicable to other ‘omics areas that generate high-dimensional data. Results: The potential for game-changing biochemical discoveries through mass spectrometry-based (MS) untargeted metabolomics and lipidomics is predicated on the evolution of more confident qualitative (and eventually quantitative) results from research laboratories. RMs are thus critical QC tools for assuring standardization, comparability, repeatability and reproducibility in untargeted data analysis and interpretation, and for comparing data within and across studies and across multiple laboratories. Standard operating procedures (SOPs) that promote, describe and exemplify the use of RMs will also improve QC for the metabolomics and lipidomics communities. Conclusions: The application of the RMs described in this review may significantly improve data quality to support metabolomics and lipidomics research. The continued development and deployment of new RMs, together with interlaboratory studies and educational outreach and training, will further promote sound QA practices in the community.
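A minimal sketch of how a laboratory might screen its measured values against an RM's certified values follows. The metabolite names, certified values, and uncertainties are invented placeholders, and the |z| ≤ 2 screen is only one common convention, not a rule from this review.

```python
# Sketch: checking measured metabolite concentrations in a reference
# material against certified values via z-scores. Certified values and
# uncertainties below are invented placeholders, not real RM data.
certified = {  # metabolite: (certified value, standard uncertainty)
    "alanine": (265.0, 12.0),
    "glucose": (4560.0, 110.0),
}
measured = {"alanine": 251.0, "glucose": 4810.0}  # hypothetical lab results

for metabolite, (ref, u) in certified.items():
    z = (measured[metabolite] - ref) / u
    verdict = "OK" if abs(z) <= 2 else "investigate"  # |z| <= 2 as a common screen
    print(f"{metabolite}: z = {z:+.2f} ({verdict})")
```

In an interlaboratory setting, collecting these z-scores across sites is one simple way to quantify the cross-laboratory comparability the consortium is working toward.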
A predictive quality assurance model for patient-specific gamma passing rate of HyperArc-based stereotactic radiotherapy and radiosurgery of brain metastases
Objective: Measurement-based patient-specific quality assurance (PSQA) is an increasingly debated topic among medical physicists. Developments like online adaptive radiotherapy and same-day stereotactic treatments limit the time available for measurement-based PSQA. Herein, we develop a predictive machine learning model to supplement PSQA by predicting the gamma passing rate (GPR) per stereotactic arc. This streamlines PSQA, giving planners the insight to replan potentially sub-optimal plans and mitigate machine-time inefficiencies. Methods: 122 patients who had previously received HyperArc stereotactic radiosurgery/radiotherapy on a TrueBeam LINAC (Millennium 120 MLC, 6 MV FFF) were used to train a long short-term memory (LSTM) recurrent neural network to predict the GPR for a 2%/2 mm criterion. GPRs were discretized into three classes: Ideal (≥95%), Investigate [85%–95%), and Replan (<85%). In total, 468 VMAT arcs were used for this model, with a class distribution of 370 (Ideal), 65 (Investigate), and 33 (Replan). To counteract the imbalanced data, the minority classes were over-sampled using the synthetic minority over-sampling technique (SMOTE) to generate a balanced dataset. The LSTM model was trained in Python with an 80/20 stratified training-testing split. Individual class sensitivity and specificity were recorded following a one-versus-all method. The final model was deployed clinically through Eclipse Scripting. Results: The model demonstrated the following (sensitivity, specificity) on the testing data: Ideal (78.4%, 87.2%), Investigate (75.7%, 89.9%), and Replan (93.2%, 96.6%). The primary focus of this model is to identify failing beams and allow the planner to address them before running the PSQA; as such, the Replan class was the most important for evaluation. A sensitivity of 93.2% indicates that the model will identify 93.2% of HyperArc plans that need to be replanned, with very high certainty given the 96.6% specificity. Conclusions: The predictive GPR model developed in this research enables HyperArc planners to immediately assess the GPR for each stereotactic arc and preemptively replan potentially failing arcs to optimize PSQA machine time.
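The sketch below illustrates the general recipe the abstract describes: SMOTE over-sampling, a stratified 80/20 split, and a small LSTM classifier over per-arc sequences. The sequence length, per-control-point features, and hyperparameters are hypothetical placeholders; this is not the authors' published model.

```python
# Sketch of the general recipe: SMOTE over-sampling, stratified 80/20
# split, and a small LSTM classifier over per-arc control-point
# sequences. Shapes and hyperparameters are hypothetical assumptions.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from tensorflow import keras

SEQ_LEN, N_FEATURES, N_CLASSES = 178, 8, 3   # hypothetical: control points x features

# Placeholder data: one row per arc, flattened sequence features, and
# labels 0=Ideal, 1=Investigate, 2=Replan in the paper's proportions.
rng = np.random.default_rng(0)
X = rng.normal(size=(468, SEQ_LEN * N_FEATURES))
y = rng.choice(3, size=468, p=[370/468, 65/468, 33/468])

# SMOTE operates on 2-D arrays, so over-sample before reshaping to sequences.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.2, stratify=y_bal, random_state=0)

model = keras.Sequential([
    keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    keras.layers.LSTM(64),
    keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train.reshape(-1, SEQ_LEN, N_FEATURES), y_train,
          epochs=20, batch_size=32, verbose=0)

probs = model.predict(X_test.reshape(-1, SEQ_LEN, N_FEATURES))
print("predicted classes:", probs.argmax(axis=1)[:10])
```

One-versus-all sensitivity and specificity per class, as reported in the abstract, can then be computed from the predicted labels against the held-out test labels.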