Catalogue Search | MBRL
Explore the vast range of titles available.
6 result(s) for "Muller, Léonore"
Feasibility of large-scale eOSCES: the simultaneous evaluation of 500 medical students during a mock examination
by Bouzid, Donia; Tran Dinh, Alexy; Peiffer Smadja, Nathan
in Catching, COVID-19, digital training
2022
The COVID-19 pandemic led health schools to cancel many on-site training sessions and exams. Teachers looked for the best option to carry out online OSCEs, and Zoom was a natural choice since many schools had already used it for educational purposes. Methods: We conducted a feasibility study during the 2020-2021 academic year, divided into six pilot phases and the large-scale eOSCEs on Zoom on June 30th, 2021. We developed a specific application allowing us to mass-create Zoom meetings and built an entire organization, including a technical support system (an SOS room and catching-up rooms) and teachers' training sessions. We assessed satisfaction via an online survey. Results: On June 30th, 531/794 fifth-year medical students (67%) participated in a large-scale mock exam distributed across 135 Zoom meeting rooms, with the mobilization of 298 teachers who participated in the Zoom meetings as standardized patients (N = 135, 45%) or examiners (N = 135, 45%), or as supervisors in the catching-up rooms (N = 16, 6%) or the SOS room (N = 12, 4%). In addition, 32/270 teachers (12%) experienced difficulties connecting to their Zoom meetings and sought the help of an SOS room member. Furthermore, 40/531 students (7%) were either late to their station or had technical difficulties, declared those issues online, and were welcomed in one of the catching-up rooms to perform their eOSCE stations. Additionally, 518/531 students (98%) completed the entire circuit of three stations, and 225/531 students (42%) answered the online survey. Among them, 194/225 (86%) found eOSCEs helpful for training and expressed their satisfaction with this experience. Conclusion: Organizing large-scale eOSCEs on Zoom is feasible with the appropriate tools. eOSCEs should be considered complementary to on-site OSCEs and a way to train medical students in telemedicine.
Journal Article
Measuring and correcting staff variability in large-scale OSCEs
by Ruszniewski, Philippe; Pellat, Anna; Mazar, Sophie
in Evaluation, Medical education, Medical students
2024
Data originated from 3 OSCEs undergone by 900-student classes of 5th- and 6th-year medical students at Université Paris Cité in the 2022-2023 academic year. Sessions had five stations each, and one of the three sessions was scored by consensus by two raters (rather than one). We report OSCEs' longitudinal consistency for one of the classes and staff-related and student variability by session. We also propose a statistical method to adjust for inter-rater variability by deriving a statistical random student effect that accounts for staff-related and station random effects. From the four sessions, a total of 16,910 station scores were collected from 2615 student sessions, with two of the sessions undergone by the same students, and 36, 36, 35 and 20 distinct staff teams in each station for each session. Scores had staff-related heterogeneity (p < 10⁻¹⁵), with staff-level standard errors approximately doubled compared to chance. With mixed models, staff-related heterogeneity explained respectively 11.4%, 11.6%, and 4.7% of station score variance (95% confidence intervals, 9.5-13.8, 9.7-14.1, and 3.9-5.8, respectively) with 1, 1 and 2 raters, suggesting a moderating effect of consensus grading. Student random effects explained a small proportion of variance, respectively 8.8%, 11.3%, and 9.6% (8.0-9.7, 10.3-12.4, and 8.7-10.5), and this low amount of signal resulted in student rankings being no more consistent over time with this metric than with average scores (p = 0.45). Staff variability impacts OSCE scores as much as student variability, and the former can be reduced with dual assessment or adjusted for with mixed models. Both are small compared to unmeasured sources of variability, making them difficult to capture consistently.
Journal Article
Measuring and correcting staff variability in large-scale OSCEs
by Bensaadi, Saja; Bouzid, Donia; Haviari, Skerdi
in Clinical Competence - standards, Down Syndrome, Education
2024
Context
Objective Structured Clinical Examinations (OSCEs) are an increasingly popular evaluation modality for medical students. While the face-to-face interaction allows for more in-depth assessment, it may cause standardization problems. Methods to quantify, limit or adjust for examiner effects are needed.
Methods
Data originated from 3 OSCEs undergone by 900-student classes of 5th- and 6th-year medical students at Université Paris Cité in the 2022-2023 academic year. Sessions had five stations each, and one of the three sessions was scored by consensus by two raters (rather than one). We report OSCEs' longitudinal consistency for one of the classes and staff-related and student variability by session. We also propose a statistical method to adjust for inter-rater variability by deriving a statistical random student effect that accounts for staff-related and station random effects.
Results
From the four sessions, a total of 16,910 station scores were collected from 2615 student sessions, with two of the sessions undergone by the same students, and 36, 36, 35 and 20 distinct staff teams in each station for each session. Scores had staff-related heterogeneity (p < 10⁻¹⁵), with staff-level standard errors approximately doubled compared to chance. With mixed models, staff-related heterogeneity explained respectively 11.4%, 11.6%, and 4.7% of station score variance (95% confidence intervals, 9.5-13.8, 9.7-14.1, and 3.9-5.8, respectively) with 1, 1 and 2 raters, suggesting a moderating effect of consensus grading. Student random effects explained a small proportion of variance, respectively 8.8%, 11.3%, and 9.6% (8.0-9.7, 10.3-12.4, and 8.7-10.5), and this low amount of signal resulted in student rankings being no more consistent over time with this metric than with average scores (p = 0.45).
Conclusion
Staff variability impacts OSCE scores as much as student variability, and the former can be reduced with dual assessment or adjusted for with mixed models. Both are small compared to unmeasured sources of variability, making them difficult to capture consistently.
Journal Article
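Both records above conclude that consensus (dual) grading moderates staff-related variance. The arithmetic behind that is simple: if two raters' effects are independent, each with variance σ², their average has variance Var((a + b)/2) = (σ² + σ²)/4 = σ²/2. A small numerical check of this halving, with arbitrary illustrative numbers:

```python
# Why dual assessment reduces staff variability: averaging two independent
# rater effects halves their variance contribution. Simulated illustration;
# the rater sd is arbitrary, not taken from the study.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
rater_sd = 4.0

one_rater = rng.normal(0, rater_sd, n)                      # single-rater effect
two_raters = (rng.normal(0, rater_sd, n)
              + rng.normal(0, rater_sd, n)) / 2             # consensus of two

var_single = one_rater.var()
var_dual = two_raters.var()
print(var_single, var_dual)  # dual is close to half the single-rater variance
```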
eOSCE stations live versus remote evaluation and scores variability
by Bouzid, Donia; Zucman, Noémie; Holleville, Mathilde
in Education, Evaluation, Humanities and Social Sciences
2022
We conducted large-scale eOSCEs at the medical school of the Université de Paris Cité in June 2021 and recorded all the students' performances, allowing a second evaluation. To assess the agreement in our context of multiple raters and students, we fitted a linear mixed model with student and rater as random effects and the score as an explained variable. One hundred seventy observations were analyzed for the first station after quality control. We retained 192 and 110 observations for the statistical analysis of the two other stations. The median score and interquartile range were 60 out of 100 (IQR 50-70), 60 out of 100 (IQR 54-70), and 53 out of 100 (IQR 45-62) for the three stations. The score variance proportions explained by the rater (ICC rater) were 23.0, 16.8, and 32.8%, respectively. Of the 31 raters, 18 (58%) were male. Scores did not differ significantly according to the gender of the rater (p = 0.96, 0.10, and 0.26, respectively). The two evaluations showed no systematic difference in scores (p = 0.92, 0.053, and 0.38, respectively). Our study suggests that remote evaluation is as reliable as live evaluation for eOSCEs.
Journal Article
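The ICC-rater values quoted above (23.0%, 16.8%, and 32.8%) are the share of score variance attributed to the rater random effect. As a self-contained illustration (simulated data, not the study's), the sketch below estimates the rater variance component of a fully crossed student × rater design by classical method-of-moments ANOVA and forms the corresponding ICC. The study itself fitted a linear mixed model, and the fully crossed design here is a simplifying assumption.

```python
# Method-of-moments estimate of the rater ICC in a crossed student x rater
# design: score_ij = mu + student_i + rater_j + error_ij.
# All standard deviations are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_raters = 200, 30
s = rng.normal(0, 3, n_students)[:, None]       # student effects (sd 3)
r = rng.normal(0, 2, n_raters)[None, :]         # rater effects (sd 2)
e = rng.normal(0, 4, (n_students, n_raters))    # residual noise (sd 4)
scores = 60 + s + r + e

# Two-way ANOVA mean squares (no replication)
grand = scores.mean()
ms_rater = n_students * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_raters - 1)
ms_student = n_raters * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_students - 1)
resid = (scores - scores.mean(axis=0, keepdims=True)
         - scores.mean(axis=1, keepdims=True) + grand)
ms_resid = (resid ** 2).sum() / ((n_students - 1) * (n_raters - 1))

# E[MS_rater] = sigma_e^2 + n_students * sigma_rater^2, etc.
var_rater = (ms_rater - ms_resid) / n_students
var_student = (ms_student - ms_resid) / n_raters
icc_rater = var_rater / (var_rater + var_student + ms_resid)
print(icc_rater)  # share of score variance attributable to the rater
```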
On‐Liquid Surface Synthesis of Crystalline 2D Polyimine Thin Films
On‐water surface synthesis has emerged as a powerful approach for constructing thin‐layer, crystalline 2D polyimines and their layer‐stacked covalent organic frameworks. This is achieved by directing monomer preorganization and subsequent 2D polymerization on the water surface. However, the poor compatibility of water with many organic monomers has limited the range of accessible 2D polyimine structures. Herein, the on‐liquid surface synthesis of crystalline 2D polyimine films from a water‐insoluble, C3‐symmetric monomer previously deemed incompatible with aqueous systems is reported. In situ grazing incidence X‐ray scattering reveals a stepwise evolution of monomer adsorption, preorganization, and 2D polymerization assisted by the fluorinated surfactant monolayer, leading to the formation of large‐area, face‐on‐oriented 2D polyimine films. Notably, a pronounced lattice expansion from 3.4 nm in the monomer assembly to 5.3 nm in the 2D polyimine framework is observed, highlighting the templating effect of the preorganized monomers in defining the final crystallinity. The representative 2DPI‐TCQ‐DHB is obtained as a free‐standing thin film with well‐defined hexagonal pores, mechanical robustness, and a negatively charged surface (zeta potential: −58.8 mV). Leveraging these structural characteristics, the 2DPI‐TCQ‐DHB films are integrated into osmotic power generators, achieving a power density of 16.0 W m⁻² by mixing artificial seawater and river water, surpassing most nanoporous 2D membranes.
Journal Article
GOAT: Deep learning-enhanced Generalized Organoid Annotation Tool
2022
Organoids have emerged as a powerful technology to investigate human development, model diseases, and discover drugs. However, analysis tools to rapidly and reproducibly quantify organoid parameters from microscopy images are lacking. We developed a deep-learning-based generalized organoid annotation tool (GOAT) using instance segmentation with pixel-level identification of organoids to quantify advanced organoid features. Using a multicentric dataset including multiple organoid systems (e.g. liver, intestine, tumor, lung), we demonstrate that the tool generalizes to annotate a diverse range of organoids generated in different laboratories, with high performance compared to previously published methods. In sum, GOAT provides fast and unbiased quantification of organoid experiments to accelerate organoid research and facilitates novel high-throughput applications.
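As a minimal sketch of the kind of per-organoid quantification that pixel-level segmentation enables (a toy binary mask with connected-component labeling, not GOAT's deep-learning pipeline), assuming `scipy` is available:

```python
# Label organoid instances in a toy binary mask and measure each object's
# pixel area. GOAT predicts such masks with instance segmentation; here the
# mask is hand-made so the downstream quantification step is runnable.
import numpy as np
from scipy import ndimage

mask = np.zeros((12, 12), dtype=bool)
mask[1:4, 1:4] = True     # "organoid" 1: 3x3 = 9 px
mask[6:11, 6:11] = True   # "organoid" 2: 5x5 = 25 px

labeled, n_organoids = ndimage.label(mask)   # connected components
areas = ndimage.sum_labels(mask, labeled, index=range(1, n_organoids + 1))
print(n_organoids, list(areas))  # 2 [9.0, 25.0]
```

In a real pipeline the mask would come from the model's per-pixel predictions, and the same labeling step would feed further shape features (perimeter, eccentricity, intensity statistics) per organoid.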