7,996 results for "Computer Simulation - standards"
Virtual simulation with Sim&Size software for Pipeline Flex Embolization: evaluation of the technical and clinical impact
Introduction: During flow diversion, the choice of the length, diameter, and location of the deployed stent is critical for the success of the procedure. Sim&Size software, based on three-dimensional rotational angiography (3D-RA) acquisition, simulates the release of the stent, suggesting optimal sizing and displaying the degree of wall apposition. Objective: To demonstrate the technical and clinical impact of Sim&Size simulation during treatment with the Pipeline Flex Embolization Device. Methods: Consecutive patients who underwent aneurysm embolization with Pipeline at our department were retrospectively enrolled (January 2015–December 2017) and divided into two groups: treated with and without simulation. Through univariate and multivariate models, we evaluated: (1) rate of corrective intervention for non-optimal stent placement, (2) duration of intervention, (3) radiation dose, and (4) stent length. Results: 189 patients were analyzed, 95 (50.3%) without and 94 (49.7%) with software assistance. Age, sex, comorbidities, aneurysm characteristics, and operator experience were comparable between the two groups. Procedures performed with the software had a lower rate of corrective intervention (9% vs 20%, p=0.036), a shorter intervention duration (46 min vs 52 min, p=0.002), a lower median radiation dose (1150 mGy vs 1558 mGy, p<0.001), and a shorter stent length (14 mm vs 16 mm, p<0.001). Conclusions: In our experience, the use of virtual simulation during Pipeline treatment significantly reduced the need for corrective intervention, the procedural time, the radiation dose, and the length of the stent.
Comparative-Effectiveness of Simulation-Based Deliberate Practice Versus Self-Guided Practice on Resident Anesthesiologists’ Acquisition of Ultrasound-Guided Regional Anesthesia Skills
Background and Objectives: Simulation-based education strategies to teach regional anesthesia have been described, but their efficacy has largely been assumed. We designed this study to determine whether residents trained using the simulation-based strategy of deliberate practice show greater improvement in ultrasound-guided regional anesthesia (UGRA) skills than residents trained using self-guided practice in simulation. Methods: Anesthesiology residents new to UGRA were randomized to either simulation-based deliberate practice (intervention) or self-guided practice (control). Participants were recorded and assessed while performing simulated peripheral nerve blocks at baseline, immediately after the experimental condition, and 3 months after enrollment. Performance was scored from video by 2 blinded reviewers using a composite tool. The amount of time each participant spent in deliberate or self-guided practice was recorded. Results: Twenty-eight participants completed the study. Both groups improved from baseline immediately after the curriculum and 3 months after enrollment. There was no between-group difference in change of composite scores from baseline either immediately after the curriculum (P = 0.461) or at 3 months (P = 0.927). Subjects spent an average of 6.8 minutes in simulation practice in the control group versus 48.5 minutes in the intervention group (P < 0.001). Conclusions: In this comparative effectiveness study, there was no difference in acquisition and retention of UGRA skills between novice residents taught by simulation-based deliberate practice and those taught by self-guided practice. Both methods increased skill from baseline; however, self-guided practice required less time and fewer faculty resources.
Simulation-based training for burr hole surgery instrument recognition
Background: The use of simulation training in postgraduate medical education is an area of rapidly growing popularity and research. This study was designed to assess the impact of simulation training on instrument knowledge and recognition among neurosurgery residents. Methods: This was a randomized controlled trial of first-year residents from neurosurgery residency training programs across Canada. Eighteen neurosurgery trainees were recruited to test two simulation-based applications: PeriopSim™ Instrument Trainer and PeriopSim™ for Burr Hole Surgery. The intervention was game-based simulation training for learning neurosurgical instruments and applying this knowledge to identify the correct instruments during a simulated burr hole surgery procedure. Results: Participants showed significant overall improvement in total score (p < 0.0005), number of errors (p = 0.019), and time saved (p < 0.0005) over three testing sessions when using the PeriopSim™ Instrument Trainer. Participants demonstrated further performance improvements when using PeriopSim™ for Burr Hole Surgery. Conclusions: Neurosurgery residents' recognition and utilization of simulated surgical instruments improved significantly with repetition when using PeriopSim™ Instrument Trainer and PeriopSim™ for Burr Hole Surgery.
A randomized crossover trial examining low- versus high-fidelity simulation in basic laparoscopic skills training
Background: Previous randomized studies have compared high- versus low-fidelity laparoscopic simulators; however, no proficiency criteria were defined and results have been mixed. The purpose of this research was to determine whether there were any differences in the learning outcomes of participants who had trained to proficiency on low- or high-fidelity laparoscopic surgical simulators. Methods: We conducted a randomized, prospective crossover trial with participants recruited from New South Wales, Western Australia, and South Australia. Participants were randomized to high-fidelity (LapSim, Surgical Science) or low-fidelity (FLS, SAGES) laparoscopic simulators and trained to proficiency in a defined number of tasks. They then crossed over to the other simulator and were tested. The outcomes of interest were the crossover mean scores, the proportion of tasks passed, and the percentage of passes for the crossover simulator tasks. Results: Of the 228 participants recruited, 100 were randomized to LapSim and 128 to FLS. Mean crossover score increased from baseline for both simulators, but there was no significant difference between them (11.0% vs 11.9%). FLS-trained participants passed a significantly higher proportion of crossover tasks than LapSim-trained participants (0.26 vs 0.20, p = 0.016). A significantly higher percentage of FLS-trained participants passed intracorporeal knot tying than LapSim-trained participants (35% vs 8%, p < 0.001). Conclusion: Similar increases in participant scores from baseline show that training on either simulator type is beneficial. However, FLS-trained participants demonstrated a greater ability to translate their skills and successfully complete LapSim tasks; this ability to transfer skills to new settings suggests a benefit of the FLS simulator type compared with the LapSim.
Comparing Effectiveness of High-Fidelity Human Patient Simulation vs Case-Based Learning in Pharmacy Education
Objective. To determine whether human patient simulation (HPS) is superior to case-based learning (CBL) in teaching diabetic ketoacidosis (DKA) and thyroid storm (TS) to pharmacy students. Design. In this crossover, open-label, single-center, randomized controlled trial, final-year undergraduate pharmacy students enrolled in an applied therapeutics course were randomized to HPS or CBL groups. A pretest, posttest, knowledge retention test, and satisfaction survey were administered to students. Assessment. One hundred seventy-four students participated in this study. The effect sizes attributable to HPS were larger than those of CBL in both cases. HPS groups performed significantly better than CBL groups on the posttest and knowledge retention test for the TS case (p<0.05). Students expressed high levels of satisfaction with the HPS sessions. Conclusion. HPS was superior to CBL in teaching DKA and TS to final-year undergraduate pharmacy students.
Virtual Patient Simulations in Health Professions Education: Systematic Review and Meta-Analysis by the Digital Health Education Collaboration
Virtual patients are interactive digital simulations of clinical scenarios for the purpose of health professions education. There is no current collated evidence on the effectiveness of this form of education. The goal of this study was to evaluate the effectiveness of virtual patients compared with traditional education, blended with traditional education, compared with other types of digital education, and across design variants, in health professions education. The outcomes of interest were knowledge, skills, attitudes, and satisfaction. We performed a systematic review of the effectiveness of virtual patient simulations in pre- and postregistration health professions education following Cochrane methodology. We searched 7 databases from 1990 up to September 2018; no language restrictions were applied. We included randomized controlled trials and cluster randomized trials. We independently selected studies, extracted data, and assessed risk of bias, then compared the information in pairs. We contacted study authors for additional information where necessary. All pooled analyses were based on random-effects models. A total of 51 trials involving 4696 participants met our inclusion criteria. Of these, 25 studies compared virtual patients with traditional education, 11 investigated virtual patients as blended learning, 5 compared virtual patients with other forms of digital education, and 10 compared different design variants. The pooled analysis of studies comparing virtual patients with traditional education showed similar results for knowledge (standardized mean difference [SMD]=0.11, 95% CI -0.17 to 0.39, I²=74%, n=927) and favored virtual patients for skills (SMD=0.90, 95% CI 0.49 to 1.32, I²=88%, n=897). Studies measuring attitudes and satisfaction predominantly used surveys with item-by-item comparison. Trials comparing virtual patients with different forms of digital education and design variants were not numerous enough to give clear recommendations. Several methodological limitations in the included studies and heterogeneity contributed to a generally low quality of evidence. Low to modest and mixed evidence suggests that, compared with traditional education, virtual patients can more effectively improve skills and at least as effectively improve knowledge. The skills that improved were clinical reasoning, procedural skills, and a mix of procedural and team skills. We found evidence of effectiveness in both high-income and low- and middle-income countries, demonstrating the global applicability of virtual patients. Further research should explore the utility of different design variants of virtual patients.
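The pooled SMDs and I² values reported in this review come from inverse-variance random-effects models. As a minimal sketch of how such pooling works, the widely used DerSimonian–Laird estimator can be implemented on made-up study effects (the numbers below are illustrative, not the trial data from this review):

```python
import math

# Hypothetical per-study standardized mean differences and their variances.
studies = [(0.8, 0.04), (1.1, 0.09), (0.5, 0.02), (1.3, 0.12)]

def dersimonian_laird(effects):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator."""
    w = [1.0 / v for _, v in effects]          # fixed-effect inverse-variance weights
    y = [e for e, _ in effects]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max((q - df) / c, 0.0)              # between-study variance estimate
    w_star = [1.0 / (v + tau2) for _, v in effects]
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max((q - df) / q, 0.0) * 100 if q > 0 else 0.0  # I^2 heterogeneity (%)
    return pooled, se, i2

pooled, se, i2 = dersimonian_laird(studies)
print(pooled, se, i2)
```

The I² statistic printed here is the same heterogeneity measure quoted alongside the SMDs in the abstract above: the share of variability across studies attributable to between-study differences rather than chance.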
Ten simple rules for the computational modeling of behavioral data
Computational modeling of behavior has revolutionized psychology and neuroscience. By fitting models to experimental data we can probe the algorithms underlying behavior, find neural correlates of computational variables and better understand the effects of drugs, illness and interventions. But with great power comes great responsibility. Here, we offer ten simple rules to ensure that computational modeling is used with care and yields meaningful insights. In particular, we present a beginner-friendly, pragmatic and details-oriented introduction on how to relate models to data. What, exactly, can a model tell us about the mind? To answer this, we apply our rules to the simplest modeling techniques most accessible to beginning modelers and illustrate them with examples and code available online. However, most rules apply to more advanced techniques. Our hope is that by following our guidelines, researchers will avoid many pitfalls and unleash the power of computational modeling on their own data.
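The model-fitting workflow the rules cover can be illustrated with the simplest case the authors target: a Rescorla–Wagner learner on a two-armed bandit, with the learning rate recovered by maximum likelihood. This is only a sketch with assumed parameters (the bandit probabilities and softmax temperature below are made up, and a grid search stands in for a proper optimizer):

```python
import math
import random

random.seed(7)

TRUE_ALPHA = 0.3       # learning rate to recover
BETA = 5.0             # softmax inverse temperature (assumed known here)
N_TRIALS = 1000
REWARD_P = [0.2, 0.8]  # reward probability of each arm

def simulate(alpha):
    """Generate choices and rewards from a Rescorla-Wagner learner."""
    q = [0.0, 0.0]
    data = []
    for _ in range(N_TRIALS):
        p1 = 1.0 / (1.0 + math.exp(-BETA * (q[1] - q[0])))  # P(choose arm 1)
        c = 1 if random.random() < p1 else 0
        r = 1.0 if random.random() < REWARD_P[c] else 0.0
        data.append((c, r))
        q[c] += alpha * (r - q[c])  # prediction-error update
    return data

def neg_log_lik(alpha, data):
    """Negative log-likelihood of the observed choices for a given alpha."""
    q = [0.0, 0.0]
    nll = 0.0
    for c, r in data:
        p1 = 1.0 / (1.0 + math.exp(-BETA * (q[1] - q[0])))
        nll -= math.log((p1 if c == 1 else 1.0 - p1) + 1e-12)
        q[c] += alpha * (r - q[c])
    return nll

data = simulate(TRUE_ALPHA)
grid = [a / 100 for a in range(1, 100)]  # candidate learning rates
alpha_hat = min(grid, key=lambda a: neg_log_lik(a, data))
print(f"true alpha = {TRUE_ALPHA}, recovered alpha = {alpha_hat}")
```

Simulating with a known parameter and refitting, as done here, is itself one of the checks the rules recommend (parameter recovery) before trusting fits to real data.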
The effect of simulation-based training on initial performance of ultrasound-guided axillary brachial plexus blockade in a clinical setting – a pilot study
Background: In preparing novice anesthesiologists to perform their first ultrasound-guided axillary brachial plexus blockade, we hypothesized that virtual reality simulation-based training offers an additional learning benefit over standard training. We carried out pilot testing of this hypothesis using a prospective, single-blind, randomized controlled trial. Methods: We planned to recruit 20 anesthesiologists with no experience of performing ultrasound-guided regional anesthesia. Initial standardized training, reflecting current best available practice, was provided to all participating trainees. Trainees were randomized into one of two groups: (i) additional simulation-based training or (ii) no further training. On completion of their assigned training, trainees attempted their first ultrasound-guided axillary brachial plexus blockade. Two experts, blinded to the trainees' group allocation, assessed the performance of trainees using validated tools. Results: This study was discontinued following a planned interim analysis, having recruited 10 trainees, because it became clear that the functionality of the available simulator was insufficient to meet our training requirements. There was no statistically significant difference in clinical performance, assessed using the sum of a Global Rating Score and a checklist score, between simulation-trained [mean 32.9 (standard deviation 11.1)] and control trainees [31.5 (4.2)] (p = 0.885). Conclusions: We have described a methodology for assessing the effectiveness of a simulator, during its development, by means of a randomized controlled trial. We believe the lessons learned will be useful for future trials of learning efficacy associated with simulation-based training in procedural skills. Trial registration: ClinicalTrials.gov identifier NCT01965314. Registered October 17, 2013.
Establishing microbial composition measurement standards with reference frames
Differential abundance analysis is controversial throughout microbiome research. Gold standard approaches require laborious measurements of total microbial load, or absolute number of microorganisms, to accurately determine taxonomic shifts. Therefore, most studies rely on relative abundance data. Here, we demonstrate common pitfalls in comparing relative abundance across samples and identify two solutions that reveal microbial changes without the need to estimate total microbial load. We define the notion of “reference frames”, which provide deep intuition about the compositional nature of microbiome data. In an oral time series experiment, reference frames alleviate false positives and produce consistent results on both raw and cell-count normalized data. Furthermore, reference frames identify consistent, differentially abundant microbes previously undetected in two independent published datasets from subjects with atopic dermatitis. These methods allow reassessment of published relative abundance data to reveal reproducible microbial changes from standard sequencing output without the need for new assays.
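The core of the reference-frame idea is that log-ratios to a chosen reference taxon are invariant to the unknown total microbial load. A minimal sketch with made-up counts (an additive log-ratio, one simple instance of the broader reference-frame constructions the paper develops):

```python
import math

# Hypothetical counts for three taxa in two samples. Sample B was measured
# at twice the overall scale (deeper sequencing or higher total load), so
# raw proportions are not directly comparable between the samples.
sample_a = {"taxon1": 50, "taxon2": 30, "reference": 20}
sample_b = {"taxon1": 200, "taxon2": 60, "reference": 40}

def log_ratio_to_reference(counts, ref="reference"):
    # log(count_i / count_ref) is unchanged when every count in a sample is
    # rescaled by the same factor, so the unknown total load cancels out.
    return {t: math.log(c / counts[ref]) for t, c in counts.items() if t != ref}

lr_a = log_ratio_to_reference(sample_a)
lr_b = log_ratio_to_reference(sample_b)
shift = {t: lr_b[t] - lr_a[t] for t in lr_a}
print(shift)  # taxon1 doubled relative to the reference; taxon2 did not change
```

Comparing raw proportions here would wrongly suggest every taxon changed; relative to the reference frame, only taxon1 shifted.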
Why we need to abandon fixed cutoffs for goodness-of-fit indices: An extensive simulation and possible solutions
To evaluate model fit in confirmatory factor analysis, researchers compare goodness-of-fit indices (GOFs) against fixed cutoff values (e.g., CFI > .950) derived from simulation studies. Methodologists have cautioned that cutoffs for GOFs are only valid for settings similar to the simulation scenarios from which the cutoffs originated. Despite these warnings, fixed cutoffs for popular GOFs (i.e., χ², χ²/df, CFI, RMSEA, SRMR) continue to be widely used in applied research. We (1) argue that the practice of using fixed cutoffs needs to be abandoned and (2) review time-honored and emerging alternatives to fixed cutoffs. We first present the most in-depth simulation study to date on the sensitivity of GOFs to model misspecification (i.e., misspecified factor dimensionality and unmodeled cross-loadings) and their susceptibility to further data and analysis characteristics (i.e., estimator, number of indicators, number and distribution of response options, loading magnitude, sample size, and factor correlation). We included all characteristics identified as influential in previous studies. Our simulation enabled us to replicate well-known influences on GOFs and establish hitherto unknown or underappreciated ones. In particular, the magnitude of the factor correlation turned out to moderate the effects of several characteristics on GOFs. Second, to address these problems, we discuss several strategies for assessing model fit that take the dependency of GOFs on the modeling context into account. We highlight tailored (or “dynamic”) cutoffs as a way forward. We provide convenient tables with scenario-specific cutoffs as well as regression formulae to predict cutoffs tailored to the empirical setting of interest.
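The indices under discussion are simple functions of the model and baseline chi-square statistics. A sketch using one common formulation of CFI and RMSEA, on illustrative (made-up) fit statistics; the point is only to show why the same index value can mean different things in different settings, since both formulas depend on df and sample size:

```python
import math

# Illustrative (made-up) fit statistics: fitted-model chi-square and df,
# baseline (null) model chi-square and df, and sample size.
chi2_m, df_m = 120.0, 50
chi2_0, df_0 = 1500.0, 66
n = 400

def cfi(chi2_m, df_m, chi2_0, df_0):
    """Comparative Fit Index: 1 minus the ratio of noncentrality estimates."""
    d_m = max(chi2_m - df_m, 0.0)
    d_0 = max(chi2_0 - df_0, d_m, 0.0)
    return 1.0 - d_m / d_0 if d_0 > 0 else 1.0

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation (one common formulation)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(cfi(chi2_m, df_m, chi2_0, df_0), 3))  # vs. the fixed .950 cutoff
print(round(rmsea(chi2_m, df_m, n), 3))
```

With these illustrative numbers the model lands almost exactly on the conventional CFI cutoff, which is precisely the situation where the paper argues a fixed threshold is least informative.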