1,761 result(s) for "Assessment, Informal"
Scaffolding Students’ Writing Processes Through Dialogic Assessment
With dialogic writing assessment, teachers can scaffold students’ writing processes in ways that are flexible and responsive to students’ individual needs. Examples of teachers using this conference‐based method of classroom writing assessment illustrate how to practice assessment that is dynamic and relational rather than static and standardized, by allowing teachers to vary their support for student writers based on students’ unique needs. These examples also suggest that teachers’ epistemologies for writing instruction can influence how they practice dialogic writing assessment. The authors conclude with a discussion of how dynamic and responsive scaffolding can support an equity‐focused model for teaching academic writing and how teachers’ expertise may be a factor in how they apply dialogic writing assessment.
8257 Understanding the variations and barriers in investigation of early developmental impairment
Why did you do this work?
We aimed to understand the variation in practice and investigations, including genetic testing, for children with Early Developmental Impairment (EDI) in the East of England (EoE) and East Midlands (EM). We sought to identify barriers and propose potential solutions to standardise and improve these processes across the regions.
What did you do?
An initial scoping survey across EoE was followed by a systematic review of articles from 2013 to 2023, informing a proposed regional EDI investigation strategy. Working with stakeholders (Community Paediatricians and East Genomics) to establish its content, we distributed an electronic survey exploring attitudes towards current practices to paediatricians across EoE and EM.
What did you find?
Across EoE, 9 survey respondents identified 4 guidelines in use, with differing recommendations on testing in EDI. Three centres reported following no guideline. The literature review explored recommended investigations for EDI, revealing possible publication bias towards genetic testing, which was the focus of 16 of the 27 reviewed articles. We identified the highest overall diagnostic yield in whole exome and genome sequencing.
Qualitative data from the survey (83 respondents) indicated that 69% of clinicians performed genetic tests as a first-line investigation, most commonly microarray (85%). However, significant barriers were identified: 84% of respondents found it difficult to discuss genetic results with patients, and logistical issues such as sample collection and time for patient consent were highlighted. Potential solutions identified by respondents included the development of a regional EDI investigation guideline (92%), access to genomic practitioners to assist with consent and logistics (73%), establishment of regional neurodevelopmental genetics multidisciplinary teams (51%), and implementation of electronic test ordering systems (63%).
The majority did not routinely request neuroimaging for isolated EDI without other features (74%). While 97% expressed a desire for a regional framework to guide investigations, significant variation existed in current practices. The majority relied on clinical acumen or informal assessments (89%), with only 13% using formal tools such as the Bayley or Griffiths scales. Documentation of EDI severity was inconsistent: only 23% reported always documenting severity, with methods varying between overall ratings (36%) and domain-specific assessments (49%).
What does it mean?
Our findings highlight the challenges in overcoming variation in investigative approaches to EDI, in particular relating to genetic testing. The data underline the need for regional standardisation, increased education, and support systems for clinicians. To address these issues, we are collaborating with paediatricians and geneticists to develop a practical framework that will facilitate consistent and equitable investigations across the region served by East Genomics.
Acknowledgements
Ben Marlow, Consultant Community Paediatrician, East Suffolk and North Essex NHSFT
Kate Baker, Honorary Consultant Geneticist, Cambridge University Hospitals NHSFT
Gillian Mitchell, Consultant Community Paediatrician, Cambridgeshire Community Services NHSFT
Ian Kingsbury, NHS East Genomics
Gemma Chandratillake, NHS East Genomics
East of England and East Midlands geneticists and Community Paediatricians
Where the rubber meets the road — An integrative review of programmatic assessment in health care professions education
Introduction Programmatic assessment was introduced as an approach to designing assessment programmes with the aim of simultaneously optimizing the decision-making and learning functions of assessment. An integrative review was conducted to review and synthesize results from studies investigating programmatic assessment in health care professions education in practice. Methods The authors systematically searched PubMed, Web of Science, and ERIC to identify studies published since 2005 that reported empirical data on programmatic assessment. Characteristics of the included studies were extracted and synthesized using descriptive statistics and thematic analysis. Results Twenty-seven studies were included, which used quantitative methods (n = 10), qualitative methods (n = 12), or mixed methods (n = 5). Most studies were conducted in clinical settings (77.8%). Programmatic assessment was found to enable meaningful triangulation for robust decision-making and to act as a catalyst for learning. However, several problems were identified, including overload in assessment information and the associated workload, the counterproductive impact of using strict requirements and summative signals, lack of a shared understanding of the nature and purpose of programmatic assessment, and lack of supportive interpersonal relationships. Thematic analysis revealed that the successes and challenges of programmatic assessment were best understood through the interplay between the quantity and quality of assessment information, and the influence of social and personal aspects on assessment perceptions. Conclusion Although some of the evidence may seem compelling in supporting the effectiveness of programmatic assessment in practice, tensions will emerge when simultaneously stimulating the development of competencies and assessing their results. The identified factors and inferred strategies provide guidance for navigating these tensions.
Investigating the feasibility of using assessment and explanatory feedback in desktop virtual reality simulations
There is great potential in making assessment and learning complementary. In this study, we investigated the feasibility of developing a desktop virtual reality (VR) laboratory simulation on the topic of genetics, with integrated assessment using multiple-choice (MC) questions based on item response theory (IRT) and feedback based on the cognitive theory of multimedia learning. A pre-test/post-test design was used to investigate three research questions: (1) students' perceptions of assessment in the form of MC questions within the VR genetics simulation; (2) the fit of the MC questions to the assumptions of the partial credit model (PCM) within the framework of IRT; and (3) whether there was a significant increase in intrinsic motivation, self-efficacy, and transfer from pre- to post-test after using the VR genetics simulation as a classroom learning activity. The sample consisted of 208 undergraduate students taking a medical genetics course. The results showed that assessment items in the form of gamified multiple-choice questions were perceived by 97% of the students to lead to higher levels of understanding, and only 8% thought that they made the simulation more boring. Items within the simulation were found to fit the PCM, and the results showed a small significant increase in intrinsic motivation and self-efficacy and a large significant increase in transfer following the genetics simulation. It was possible to develop assessments for online educational material that retain the relevance and connectedness of informal assessment while simultaneously serving the communicative and credibility-based functions of formal assessment, a great challenge facing education today.
Informal Assessment of Preschool Children’s Concepts of Zero
There is growing interest in mathematics learning progressions in early childhood education. Counting is a skill usually developed early in life, and the application of the counting principles in early childhood typically entails counting objects. This poses challenges for learning about zero. Indeed, the word "zero" is seldom used in the context of early childhood education. Early childhood educators could purposefully introduce children to zero as a concept and facilitate children's understanding that zero is a number and more than just the absence of something. "Zero" is introduced in school, but little guidance is provided to teachers within the Australian Curriculum for Mathematics in the Foundation year. This study contributes to a small corpus of research that has investigated preschool children's understanding of the concept of zero. Unlike other studies, the method employed to elicit children's knowledge was informal, more similar to the educator-child conversations that occur within a play-based curriculum and contribute to formative assessment. Data are presented from 20 children, aged from three to five years, attending a regional early learning centre. Six children demonstrated familiarity with the symbol for zero ("0") and/or the concept that zero describes a numerical quantity. Asking a follow-up question encouraged children to share their thinking. The importance of early childhood educators purposefully supporting children's familiarity with both the word and the concept of zero is proposed.
An Experiment of AI-Based Assessment: Perspectives of Learning Preferences, Benefits, Intention, Technology Affinity, and Trust
The rising integration of AI-driven assessment in education holds promise, yet it is crucial to evaluate the correlation between trust in general AI tools, AI-based scoring systems, and future behavioral intention toward using these technologies. This study explores students’ perspectives on AI-assisted assessment in higher education. We constructed a comprehensive questionnaire supported by relevant studies. Several hypotheses grounded in the literature review were formulated. In an experimental setup, the students were tasked to read a designated chapter of a paper, answer an essay question about this chapter, and then have their answers evaluated by an AI-based essay grading tool. A comprehensive data analysis using Bayesian regression was carried out to test several hypotheses. The study finds that remote learners are more inclined to use AI-based educational tools. The students who believe that AI-based essay grading is less effective than teacher feedback have less trust in AI-based essay grading, whereas those who find it more effective perceive more benefit from it. In addition, students’ affinity for technology does not significantly impact trust or perceived benefits in AI-based essay grading.
Training academic staff for effective feedback in workplace-based assessment: a study in Bhutan
Introduction Feedback plays a critical role in competency-based education in both undergraduate and postgraduate medical education. This study explores the impact of a faculty development program on the feedback practices of residents and faculty of ENT and ER medicine at Khesar Gyalpo University of Medical Sciences of Bhutan (KGUMSB). Methods This mixed-methods study was conducted in two departments, with 14 faculty members participating. A questionnaire was used to obtain perceptions of feedback before and after a Faculty Development Training (FDT) on good feedback practices. Student's t-test was used to compare feedback perception at day 0 and at 6 months, and the responses were qualitatively analyzed using thematic analysis. Results (a) Quantitative: The confidence of faculty to provide feedback improved significantly after FDT compared with before FDT, and this improvement persisted at 6 months (p = 0.041 and p = 0.027, respectively). The overall perception of feedback as a tool changed significantly in a positive direction after FDT and at 6 months (p = 0.000). (b) Qualitative: Two thematic areas, process and teaching-learning, were analyzed. Faculty showed improved and more focused feedback after training, but signs of decline by 6 months highlighted the need for refresher training. Feedback initially improved for residents, as it became more constructive and useful, though by 6 months it showed potential for further refinement and consistency. Conclusions The findings from this study suggest that feedback may have excellent potential as a tool for enhanced student learning in workplace-based assessment (WPBA) encounters.
Planning for Guided Play: A Step-by-Step Guide
[...]the highly didactic... curriculum found in many kindergarten[s] . . . is unlikely to be engaging or meaningful for children; it is also unlikely to build the broad knowledge and vocabulary needed for reading comprehension... . Students viewed videos paired with a drawing opportunity, role-played with props, and drew inspiration from ocean books in the dancing area and from photographs in the art center. Given that the provided assessment for this content is a paper-based multiple-choice assessment comparing an octopus and a jellyfish, consider instead how you will collect observational data and measure students' progress toward the learning goal(s) through your observations of their play. What will make the lesson a playful learning experience is plenty of opportunities for students to engage actively with materials that spark curiosity and to explore through their choice of role playing, art, dance, or independent reading, in groups or on their own in different spaces, all of which result in organic conversation.
Conversation in Aphasia Across Communication Partners: Exploring Stability of Microlinguistic Measures and Communicative Success
Purpose The aim of this study was to determine if people with aphasia demonstrate differences in microlinguistic skills and communicative success in unstructured, nontherapeutic conversations with a home communication partner (Home-P) as compared to a speech-language pathologist communication partner (SLP-P). Method Eight persons with aphasia participated in 2 unstructured, nontherapeutic 15-minute conversations, 1 each with an unfamiliar SLP-P and a Home-P. Utterance-level analysis evaluated communicative success. Two narrow measures of lexical relevance and sentence frame were used to evaluate independent clauses. Two broad lexical and morphosyntactic measures were used to evaluate elliptical and dependent clauses and to evaluate independent clauses for errors beyond lexical relevance and sentence frame (such as phonological and morphosyntactic errors). Utterances were further evaluated for presence of behaviors indicating lexical retrieval difficulty (pauses, repetitions, and false starts) and for referential cohesion. Results No statistical differences occurred for communicative success or for any of the microlinguistic measures between the SLP-P and Home-P conversation conditions. Four measures (2 of lexical retrieval and 1 each of communicative success and grammaticality) showed high correlations across the 2 conversation samples. Individuals showed variation of no more than 10 percentage points between the 2 conversation conditions for 46 of 56 data points. Variation greater than 10 percentage points tended to occur for the measure of referential cohesion and primarily for 1 participant. Conclusions Preliminary findings suggest that these microlinguistic measures and communicative success have potential for reliable comparison across Home-P and SLP-P conversations, with the possible exception of referential cohesion. However, further research is needed with a larger, more diverse sample. 
These findings suggest implications for future assessment and treatment in both clinical and research contexts. Supplemental Material: https://doi.org/10.23641/asha.7616312.
Fostering international mentorship and collaborations: evaluation of the Global Bridges program for early-career researchers in health care sciences
The Strategic Research Area Health Care Science was funded by the Swedish government in 2008 to build strong research environments, and the assignment was given to Karolinska Institutet and Umeå University. A major initiative was the development of the Global Bridges program at Karolinska Institutet, which aimed to support junior researchers through mentoring and fostering international networks within health care sciences, as well as to provide opportunities for them to cultivate the long-lasting international collaborations essential for their careers. As participants, junior researchers were given the opportunity to invite an international scholar to Stockholm for a one-week intensive program consisting of seminars, individual mentoring sessions, and workshops. The Global Bridges program was organized six times between 2013 and 2022, with 48 junior researchers (94% women) at Karolinska Institutet and Umeå University and 37 international scholars (68% women) from higher education institutions in Africa, Asia, Australia, Europe, and North America. In this study, we used a mixed-method parallel design to evaluate whether the Global Bridges program had reached its intended objectives. A web survey was sent to all participating junior researchers and invited scholars and yielded response rates of 71% and 83%, respectively. The results indicated support for the academic development of junior researchers and that the individual components of the program were useful. Additionally, several collaborations had developed, resulting in peer-reviewed publications, conference presentations, and research projects. However, more support was deemed necessary to foster long-lasting collaborations among junior researcher–invited scholar dyads.