Search Results

97,035 results for "Response rates"
Nonresponse in Social Science Surveys
For many household surveys in the United States, response rates have been steadily declining for at least the past two decades. A similar decline in survey response can be observed in all wealthy countries. Efforts to raise response rates have used such strategies as monetary incentives or repeated attempts to contact sample members and obtain completed interviews, but these strategies increase the costs of surveys. This review addresses the core issues regarding survey nonresponse. It considers why response rates are declining and what that means for the accuracy of survey results. These trends are of particular concern for the social science community, which is heavily invested in obtaining information from household surveys. The evidence to date makes it apparent that current trends in nonresponse, if not arrested, threaten to undermine the potential of household surveys to elicit information that assists in understanding social and economic issues. The trends also threaten to weaken the validity of inferences drawn from estimates based on those surveys. High nonresponse rates create the potential for bias in estimates and affect survey design, data collection, estimation, and analysis. The survey community is painfully aware of these trends and has responded aggressively to these threats. The interview modes employed by surveys in the public and private sectors have proliferated as new technologies and methods have emerged and matured. To the traditional trio of mail, telephone, and face-to-face surveys have been added interactive voice response (IVR), audio computer-assisted self-interviewing (ACASI), web surveys, and a number of hybrid methods. Similarly, a growing research agenda has emerged in the past decade or so focused on seeking solutions to various aspects of the problem of survey nonresponse; the potential solutions that have been considered range from better training and deployment of interviewers to greater use of incentives, better use of the information gathered during data collection, and increased use of auxiliary information from other sources in survey design and data collection. Nonresponse in Social Science Surveys: A Research Agenda also documents the increased use of information collected in the survey process in nonresponse adjustment.
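A compact way to see why high nonresponse threatens accuracy is the standard deterministic decomposition, in which the bias of the respondent mean is the nonresponse rate times the gap between respondent and nonrespondent means. A minimal Python sketch with invented illustrative numbers (none of these figures come from the report):

    # Deterministic nonresponse bias decomposition (illustrative numbers only):
    #   bias(y_bar_r) = (n_nr / n) * (y_bar_r - y_bar_nr)
    n = 1000            # sampled households
    n_r = 600           # respondents, i.e. a 60% response rate
    n_nr = n - n_r      # nonrespondents

    y_bar_r = 52.0      # respondent mean of some survey variable (hypothetical)
    y_bar_nr = 44.0     # nonrespondent mean (hypothetical; normally unobserved)

    bias = (n_nr / n) * (y_bar_r - y_bar_nr)
    print(f"bias of the respondent mean: {bias:+.2f}")  # +3.20, estimate overshoots

The same arithmetic shows why a high nonresponse rate creates a risk, not a guarantee, of bias: if respondents and nonrespondents happen to have the same mean, the bias term vanishes regardless of the response rate.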
8435 Paper versus paperless: maximising timely education feedback achieved through paper forms rather than online equivalent
Why did you do this work?
The role of feedback in improving the quality of teaching in medical education is recognised by both educators and participants (Sterz et al 2016). Feedback plays an important role in improving knowledge and competence and helps learners reflect on their performance (Van de Ridder et al 2008; Eraut 2006). But which method is more effective at obtaining feedback responses: paper-based feedback forms or paperless electronic feedback forms ('e-feedback')?

What did you do?
We retrospectively analysed feedback from two medical education activities run by the same medical educator within the same paediatric hospital: first, a simulation workshop for paediatric medical procedures that used traditional paper-based written feedback forms; second, an online course on paediatric simulation design and debrief that collected feedback through e-feedback. Both activities used Likert-scale questionnaire feedback forms. The primary objective was to compare quantitative response rates between the two activities while also analysing other variables, such as time from the educational activity to response.

What did you find?
We analysed feedback received from 380 attendees across a range of professional groups, including paediatric doctors, nurses/allied health professionals and students. Of 125 total attendees at the online simulation course, we received 60 e-feedback responses, a response rate of 48%. This compared with a response rate of 96% when using paper-based forms, with feedback received from 244 of the 255 participants in our programme of medical simulation workshops. The mean time to response to our request for e-feedback was 3 days. Time to response was highly variable, with the latest feedback received 24 days post-course. Paper-based responses were collected at course culmination, so feedback was instantly available for analysis. The use of paper-based written feedback forms rather than e-feedback was therefore associated with a two-fold higher response rate; paper forms also reduced both the average time to response and the longest time to receipt of feedback.

Abstract 8435 Figure 1

What does it mean?
Our findings contrast with those of previous authors, who found e-feedback to yield higher response rates than paper-based forms (Onimowo et al 2020). However, the continued use of paper forms has cost and environmental implications, as well as an impact on administrative workload. For e-feedback completion, incentives are recognised to improve response rates (Natesan et al 2023), and no such incentives were offered to our cohort.

References
Sterz et al. The effect of written standardized feedback on the structure and quality of surgical lectures: a prospective cohort study. BMC Medical Education. 2016.
Van de Ridder et al. What is feedback in clinical education? Med Educ. 2008.
Eraut M. Feedback. Learn Health Soc Care. 2006.
Onimowo et al. Use of quick response (QR) codes to achieve timely feedback in clinical simulation settings. BMJ Simul Technol Enhanc Learn. 2020.
Natesan et al. Feedback in medical education: an evidence-based guide to best practices from the council of residency directors in emergency medicine. West J Emerg Med. 2023.
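For readers who want to check the headline comparison, the two response rates reduce to simple proportions of the counts reported above; a quick sketch (counts from the abstract, variable names ours):

    # Response rates from the counts reported in the abstract
    paper_responses, paper_attendees = 244, 255
    e_responses, e_attendees = 60, 125

    paper_rate = paper_responses / paper_attendees   # ~0.96
    e_rate = e_responses / e_attendees               # 0.48
    print(f"paper: {paper_rate:.0%}, e-feedback: {e_rate:.0%}, "
          f"ratio: {paper_rate / e_rate:.1f}x")      # the two-fold difference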
How Important are High Response Rates for College Surveys?
Surveys play an important role in understanding the higher education landscape. About 60 percent of the published research in major higher education journals utilized survey data (Pike, 2007). Institutions also commonly use surveys to assess student outcomes and evaluate programs, instructors, and even cafeteria food. However, declining survey participation rates threaten this source of vital information and its perceived utility. Survey researchers across a number of social science disciplines in America and abroad have witnessed a gradual decrease in survey participation over time (Brick & Williams, 2013; National Research Council, 2013). Higher education researchers have not been immune from this trend; Dey (1997) long ago highlighted the steep decline in response rates in the American Council on Education and Cooperative Institutional Research Program (CIRP) senior follow-up surveys, from 60 percent in the 1960s to 21 percent in 1991. Survey researchers have long assumed that the best way to obtain unbiased estimates is to achieve a high response rate. For this reason, the literature on survey methods is rife with best practices and suggestions for improving survey response rates (e.g., American Association for Public Opinion Research, n.d.; Dillman, 2000; Heberlein & Baumgartner, 1978). These methods can be costly or require significant time and effort from survey researchers, and may be infeasible for postsecondary institutions given the increasing fiscal pressures placed upon them. However, many survey researchers have begun to question the widely held assumption that low response rates produce biased results (Curtin, Presser, & Singer, 2000; Groves, 2006; Keeter, Miller, Kohut, Groves, & Presser, 2000; Massey & Tourangeau, 2013; Peytchev, 2013). This study investigates this assumption with college student assessment data. It utilizes data from hundreds of samples of first-year and senior students with relatively high response rates, collected using a common assessment instrument with a standardized administration protocol. It investigates how population estimates would have changed if researchers had put forth less effort when collecting data and achieved lower response rates and respondent counts. Given the prevalence of survey data in higher education research and assessment efforts, it is imperative to better understand the relationship between response rates and data quality.
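The abstract does not spell out the authors' simulation procedure, but the basic idea, estimating what would have happened with less data-collection effort, can be sketched by keeping only the earliest responders and recomputing the estimate. A hypothetical Python sketch (data layout and numbers are our assumptions, not the study's):

    import random

    rng = random.Random(0)

    # Hypothetical data: (response_day, item_score) for 700 respondents out of
    # 1,000 invited students, i.e. a 70% response rate.
    respondents = sorted(
        ((rng.randint(1, 30), rng.gauss(3.0, 0.8)) for _ in range(700)),
        key=lambda r: r[0],  # earliest responders first
    )

    def mean_at_rate(data, invited, target_rate):
        """Item mean if data collection had stopped once the target
        response rate was reached (only the earliest responders kept)."""
        keep = int(target_rate * invited)
        kept = [score for _, score in data[:keep]]
        return sum(kept) / len(kept)

    print(f"estimate at 70% response rate: {mean_at_rate(respondents, 1000, 0.70):.3f}")
    print(f"estimate at 30% response rate: {mean_at_rate(respondents, 1000, 0.30):.3f}")

If late responders resemble early ones, the two estimates coincide and the extra effort bought little; a gap between them is the signature of nonresponse bias.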
6013 Assessing patient safety culture in a Welsh neonatal unit
Objectives
In recent years, safety in neonatal units has been highlighted and considered a high priority on the healthcare quality agenda. Safety culture is believed to contribute to, and greatly influence, the quality and safety of patient care in the neonatal unit. Assessing safety culture is the first step toward improving patient safety. This study aimed to provide a baseline assessment of the patient safety culture in a neonatal unit in Wales, to provide an overview of factors that contribute to patient safety, to identify strengths and weaknesses, and to use the information to plan for improvement.

Methods
All staff working in the neonatal unit at Prince Charles Hospital were invited to participate in a web-based Survey on Patient Safety Culture (SOPS®) Hospital Survey version 2.0. The study period was from June 7 to 17, 2023. Positive response rates for 10 patient safety culture composite measures and 2 outcomes (overall patient safety rating and the number of safety events reported in the past year) were analysed. Composite measures with a positive rate of 75% or more are identified as strengths, measures with 50–75% have potential for improvement, and those with less than 50% are weaknesses.

Results
100 staff members were invited to participate in the study. 37 surveys were completed, giving a response rate of 37%. 32% of participants were neonatal nurses and 46% were doctors. 49% of respondents reported no patient safety event in the past year, while 30% reported 1–2 events. The composite measures with the highest positive rates were 'Teamwork' at 91%, 'Supervisor, manager, or clinical leader support for patient safety' at 81%, and 'Organisational learning – continuous improvement' at 76%. The remaining 7 composite measures' positive rates ranged from 62% to 69%, identifying potential areas for improvement.

Conclusion
This patient safety culture study has identified strengths and areas for improvement. The next step is to use this information to develop an action plan to improve patient safety in our neonatal unit.
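The cut-offs in the abstract amount to a three-way classification rule for each composite measure; a minimal sketch (scores from the abstract; the function is our rendering of the rule, not part of the SOPS instrument):

    def classify(positive_rate):
        """Apply the abstract's cut-offs to a composite measure's positive rate."""
        if positive_rate >= 0.75:
            return "strength"
        if positive_rate >= 0.50:
            return "potential for improvement"
        return "weakness"

    composites = {
        "Teamwork": 0.91,
        "Supervisor, manager, or clinical leader support": 0.81,
        "Organisational learning - continuous improvement": 0.76,
    }
    for name, rate in composites.items():
        print(f"{name}: {rate:.0%} -> {classify(rate)}")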
The impact of artificial intelligence on clinical education: perceptions of postgraduate trainee doctors in London (UK) and recommendations for trainers
Background: Artificial intelligence (AI) technologies are increasingly used in clinical practice. Although there is robust evidence that AI innovations can improve patient care, reduce clinicians' workload and increase efficiency, their impact on medical training and education remains unclear. Methods: A survey of trainee doctors' perceived impact of AI technologies on clinical training and education was conducted at UK NHS postgraduate centers in London between October and December 2020. Impact assessment mirrored domains in training curricula such as 'clinical judgement', 'practical skills' and 'research and quality improvement skills'. The significance of differences in Likert-type data was analysed using Fisher's exact test. Response variations between clinical specialities were analysed using k-modes clustering. Free-text responses were analysed by thematic analysis. Results: Two hundred ten doctors responded to the survey (response rate 72%). The majority (58%) perceived an overall positive impact of AI technologies on their training and education. Respondents agreed that AI would reduce clinical workload (62%) and improve research and audit training (68%). Trainees were skeptical that it would improve clinical judgement (46% agree, p = 0.12) and practical skills training (32% agree, p < 0.01). The majority reported insufficient AI training in their current curricula (92%) and supported having more formal AI training (81%). Conclusions: Trainee doctors have an overall positive perception of AI technologies' impact on clinical training. There is optimism that it will improve 'research and quality improvement' skills and facilitate 'curriculum mapping'. There is skepticism that it may reduce educational opportunities to develop 'clinical judgement' and 'practical skills'. Medical educators should be mindful that these domains are protected as AI develops. We recommend that 'Applied AI' topics are formalized in curricula and that digital technologies are leveraged to deliver clinical education.
Do low survey response rates bias results? Evidence from Japan
In developed countries, response rates have dropped to such low levels that many in the population field question whether the data can provide unbiased results. The paper uses three Japanese surveys conducted in the 2000s to ask whether low survey response rates bias results. A secondary objective is to bring results reported in the survey response literature to the attention of the demographic research community. Using a longitudinal survey as well as paradata from a cross-sectional survey, a variety of statistical techniques (chi square, analysis of variance (ANOVA), logistic regression, ordered probit or ordinary least squares regression (OLS), as appropriate) are used to examine response-rate bias. Evidence of response-rate bias is found for the univariate distributions of some demographic characteristics, behaviors, and attitudinal items. But when examining relationships between variables in a multivariate analysis, controlling for a variety of background variables, for most dependent variables the authors do not find evidence of bias from low response rates.
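The abstract names the techniques but not the model specifications. As an illustration of the logistic-regression check it describes, predicting who responds from background variables available for the whole sample, here is a hedged sketch on synthetic data (variable names and effect sizes are invented; statsmodels is assumed to be installed):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000

    # Synthetic frame data: age and urban residence are known for everyone
    # invited; response is made mildly age-dependent to build in a bias.
    age = rng.uniform(20, 70, n)
    urban = rng.integers(0, 2, n)
    p_respond = 1 / (1 + np.exp(-(-1.0 + 0.02 * age)))
    responded = rng.binomial(1, p_respond)

    X = sm.add_constant(np.column_stack([age, urban]))
    fit = sm.Logit(responded, X).fit(disp=0)
    print(fit.summary(xname=["const", "age", "urban"]))
    # A significant coefficient on a background variable means respondents
    # differ systematically from nonrespondents on that variable.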
6302 From feedback to action: a quality improvement journey in parent experience
Objectives
(1) Increase the uptake of a feedback questionnaire for parents whose baby has been discharged from a Neonatal Intensive Care Unit. (2) Improve the delivery of Neonatal service information to parents whose baby has been admitted to the Neonatal Unit.

Methods
The uptake of a feedback questionnaire for parents whose baby had been discharged from the Neonatal Intensive Care Unit was found to be low. We explored this by clarifying the group of parents offered the questionnaire, identifying barriers to questionnaire completion, and reviewing when the questionnaire was being offered.

Response rate improvement activities included:
  • Increasing awareness by joining nursing huddles, liaising with senior nurses and unit managers, and sharing information by email.
  • Optimising questionnaire access by using QR codes and offering it 2–3 days prior to discharge.
  • Creating posters publicising the feedback questionnaire and including access QR codes (figure 1).
  • Adding PDFs of each poster to the parent information folders at each cot space.
  • Exploring the use of business cards with questionnaire details and QR codes.

Information sharing was enhanced by:
  • A new parent yellow folder for each cot space with QR codes (figure 2) to access specific information relevant to parents.
  • A poster (figure 3) for the delivery suite and postnatal wards signposting parents to a video tour of both of our Neonatal Units.

Data were collected from quarterly feedback reviews of the Meridian database to assess response rates and the proportion of parents satisfied with the information given to them. Admissions, discharge destinations and mortality data were collected from the Leicester Neonatal Service BadgerNet.

Abstract 6302 Figures 1–3

Results
We found that discharges to transitional care/the postnatal ward outnumbered those from the intensive care unit/special care (figure 4). Babies who stayed in the unit for less than 24 hours and those discharged to transitional care/the postnatal ward were not given the questionnaires (figure 5). Response rates, computed with the babies who stayed in the unit for more than 24 hours as the denominator, increased with every intervention we made (figure 6). The percentage of respondent parents who were satisfied with the information given to them also improved with our interventions (figure 7).

Abstract 6302 Figures 4–7

Conclusion
Parents of babies discharged from the neonatal units to postnatal wards or transitional care, and those admitted to the unit for less than 24 hours, should be targeted for feedback. Response rates and parent information scores show steady improvement with our initiatives.
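One methodological detail worth making explicit is the denominator correction: response rates were computed against babies who stayed more than 24 hours (those actually offered the questionnaire) rather than all discharges. A tiny sketch with invented counts (the abstract reports trends, not raw numbers):

    # Response rate with a corrected denominator: only babies who stayed on
    # the unit for more than 24 hours were actually offered the questionnaire.
    responses, all_discharges, eligible = 18, 120, 60   # invented counts

    naive_rate = responses / all_discharges    # denominator: every discharge
    corrected_rate = responses / eligible      # denominator: offered the form
    print(f"naive: {naive_rate:.0%}, corrected: {corrected_rate:.0%}")  # 15% vs 30%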
Patient satisfaction and survey response in 717 hospital surveys in Switzerland: a cross-sectional study
Background: The association between patient satisfaction and survey response is only partly understood. In this study, we describe the association between average satisfaction and survey response rate across hospital surveys, and model the association between satisfaction and propensity to respond for individual patients. Methods: Secondary analysis of patient responses (166,014 respondents) and of average satisfaction scores and response rates obtained in 717 annual patient satisfaction surveys conducted between 2011 and 2015 at 164 Swiss hospitals. The satisfaction score was the average of 5 items scored between 0 and 10. The association between satisfaction and response propensity in individuals was modeled as the function that best predicted the observed response rates across surveys. Results: Among the 717 surveys, response rates ranged from 16.1% to 80.0% (pooled average 49.8%), and average satisfaction scores ranged from 8.36 to 9.79 (pooled mean 9.15). At the survey level, the mean satisfaction score and response rate were correlated (r = 0.61). This correlation held for all subgroups of surveys except the 5 large university hospitals. The estimated individual response propensity function was "J-shaped": the probability of responding was lowest (around 20%) for satisfaction scores between 3 and 7, increased sharply to about 70% for those maximally satisfied, and increased slightly for the least satisfied. Average satisfaction scores projected for 100% participation were lower than observed average scores. Conclusions: The most satisfied patients were the most likely to participate in a post-hospitalization satisfaction survey. This tendency produces an upward bias in observed satisfaction scores, and a positive correlation between average satisfaction and response rate across surveys.
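The projection to 100% participation can be read as inverse-propensity reweighting: each observed score counts 1/p(s) times, where p(s) is the estimated probability that a patient with satisfaction s responds. A Python sketch with a stylised J-shaped propensity (the step function below is our illustration, not the authors' fitted curve):

    import numpy as np

    rng = np.random.default_rng(1)

    def propensity(score):
        """Stylised J-shape: ~20% response in the middle of the scale,
        ~70% for the maximally satisfied, slightly higher at the bottom."""
        return np.where(score >= 9.5, 0.70,
               np.where(score <= 1.0, 0.30, 0.20))

    true_scores = np.clip(rng.normal(8.0, 2.0, 50_000), 0, 10)  # hypothetical
    observed = true_scores[rng.random(true_scores.size) < propensity(true_scores)]

    # The observed mean is biased upward; weighting by 1/p(s) projects the
    # mean the survey would show at 100% participation.
    weights = 1 / propensity(observed)
    print(f"observed mean:   {observed.mean():.2f}")
    print(f"projected mean:  {np.average(observed, weights=weights):.2f}")
    print(f"true mean:       {true_scores.mean():.2f}")

Because the most satisfied respond most often, the observed mean sits above both the reweighted and the true mean, which is exactly the upward bias the study describes.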
Do Politicians Outside the United States Also Think Voters Are More Conservative than They Really Are?
In an influential recent study, Broockman and Skovron (2018) found that American politicians consistently overestimate the conservativeness of their constituents on a host of issues. Whether this conservative bias in politicians’ perceptions of public opinion is a uniquely American phenomenon is an open question with broad implications for the quality and nature of democratic representation. We investigate it in four democracies: Belgium, Canada, Germany, and Switzerland. Despite these countries having political systems that differ greatly, we document a strong and persistent conservative bias held by a majority of the 866 representatives interviewed. Our findings highlight the conservative bias in elites’ perception of public opinion as a widespread regularity and point toward a pressing need for further research on its sources and impacts.