147,227 result(s) for "Research Responses"
Structural Topic Models for Open-Ended Survey Responses
Collection and especially analysis of open-ended survey responses are relatively rare in the discipline and, when conducted, are almost exclusively done through human coding. We present an alternative, semiautomated approach, the structural topic model (STM) (Roberts, Stewart, and Airoldi 2013; Roberts et al. 2013), that draws on recent developments in machine-learning-based analysis of textual data. A crucial contribution of the method is that it incorporates information about the document, such as the author's gender, political affiliation, and treatment assignment (if an experimental study). This article focuses on how the STM is helpful for survey researchers and experimentalists. The STM makes analyzing open-ended responses easier, more revealing, and capable of being used to estimate treatment effects. We illustrate these innovations with analysis of text from surveys and experiments.
Identifying Research Trends and Gaps in the Context of COVID-19
The COVID-19 pandemic has affected the world in different ways. Not only are people's lives and livelihoods affected, but the virus has also affected people's lifestyles. In the research sector, there have been significant changes, and new research is emerging rapidly in the related fields of virology and epidemiology. Similar trends were observed after the Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) and Middle East Respiratory Syndrome Coronavirus (MERS-CoV) episodes of 2003 and 2012, respectively. Analyzing 20 years of published scientific papers, this article points out the highlights of coronavirus-related research. Significant progress is observed in past research related to virology, epidemiology, and infectious diseases, among others. However, in research linked to public health, its governance, technology, and risk communication, there appear to be gaps. Although the World Health Organization (WHO) global research road map has identified social science-related research as a priority area, more focus needs to be given in the coming years to multi-, cross-, and trans-disciplinary research related to public health and disaster risk reduction.
Effects of Questionnaire Length on Participation and Indicators of Response Quality in a Web Survey
This paper investigates how expected and actual questionnaire length affects cooperation rates and a variety of indicators of data quality in web surveys. We hypothesized that the expected length of a web-based questionnaire is negatively related to the initial willingness to participate. Moreover, the serial position of questions was predicted to influence four indicators of data quality. We hypothesized that questions asked later in a web-based questionnaire will, compared to those asked earlier, be associated with (a) shorter response times, (b) higher item-nonresponse rates, (c) shorter answers to open-ended questions, and (d) less variability to items arranged in grids. To test these assumptions, we manipulated the stated length (10, 20, and 30 minutes) and the position of questions in an online questionnaire consisting of randomly ordered blocks of thematically related questions. As expected, the longer the stated length, the fewer respondents started and completed the questionnaire. In addition, answers to questions positioned later in the questionnaire were faster, shorter, and more uniform than answers to questions positioned near the beginning.
Do low survey response rates bias results? Evidence from Japan
In developed countries, response rates have dropped to such low levels that many in the population field question whether the data can provide unbiased results. This paper uses three Japanese surveys conducted in the 2000s to ask whether low survey response rates bias results. A secondary objective is to bring results reported in the survey response literature to the attention of the demographic research community. Using a longitudinal survey as well as paradata from a cross-sectional survey, a variety of statistical techniques (chi-square tests, analysis of variance (ANOVA), logistic regression, and ordered probit or ordinary least squares (OLS) regression, as appropriate) are used to examine response-rate bias. Evidence of response-rate bias is found for the univariate distributions of some demographic characteristics, behaviors, and attitudinal items. But when examining relationships between variables in a multivariate analysis, controlling for a variety of background variables, for most dependent variables the authors do not find evidence of bias from low response rates.
Improving Response to Web and Mixed-Mode Surveys
We conducted two experiments designed to evaluate several strategies for improving response to Web and Web/mail mixed-mode surveys. Our goal was to determine the best ways to maximize Web response rates in a highly Internet-literate population with full Internet access. We find that providing a simultaneous choice of response modes does not improve response rates (compared to only providing a mail response option). However, offering the different response modes sequentially, in which Web is offered first and a mail follow-up option is used in the final contact, improves Web response rates and is overall equivalent to using only mail. We also show that utilizing a combination of both postal and email contacts and delivering a token cash incentive in advance are both useful methods for improving Web response rates. These experiments illustrate that although different implementation strategies are viable, the most effective strategy is the combined use of multiple response-inducing techniques.
Uncovering the Origins of the Gender Gap in Political Ambition
Based on survey responses from a national random sample of nearly 4,000 high school and college students, we uncover a dramatic gender gap in political ambition. This finding serves as striking evidence that the gap is present well before women and men enter the professions from which most candidates emerge. We then use political socialization—which we gauge through a myriad of socializing agents and early life experiences—as a lens through which to explain the individual-level differences we uncover. Our analysis reveals that parental encouragement, politicized educational and peer experiences, participation in competitive activities, and a sense of self-confidence propel young people's interest in running for office. But on each of these dimensions, women, particularly once they are in college, are at a disadvantage. By identifying when and why gender differences in interest in running for office materialize, we begin to uncover the origins of the gender gap in political ambition. Taken together, our results suggest that concerns about substantive and symbolic representation will likely persist.
Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples
This study assessed the accuracy of telephone and Internet surveys of probability samples and Internet surveys of non-probability samples of American adults by comparing aggregate survey results against benchmarks. The probability sample surveys were consistently more accurate than the non-probability sample surveys, even after post-stratification with demographics. The non-probability sample survey measurements were much more variable in their accuracy, both across measures within a single survey and across surveys with a single measure. Post-stratification improved the overall accuracy of some of the non-probability sample surveys but decreased the overall accuracy of others.
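The post-stratification adjustment this abstract refers to can be sketched numerically: each demographic cell is reweighted so the sample's composition matches a population benchmark. All group labels and numbers below are illustrative assumptions, not figures from the study:

```python
# Hedged sketch of post-stratification weighting.
# Every value here is a made-up illustration, not data from the study.

# Population shares for a single demographic (e.g., age group), from a benchmark
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Unweighted shares observed in a (non-probability) sample
sample = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}

# Post-stratification weight for each cell: population share / sample share
weights = {g: population[g] / sample[g] for g in population}

# A weighted estimate of some survey measure, given hypothetical cell means:
# overrepresented cells are downweighted, underrepresented cells upweighted.
cell_means = {"18-34": 0.62, "35-54": 0.55, "55+": 0.48}
weighted_estimate = sum(population[g] * cell_means[g] for g in population)
```

As the abstract notes, this adjustment only corrects for the variables used in the weighting; it can improve accuracy on some measures while worsening it on others.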
Research Synthesis: The Practice of Cognitive Interviewing
Cognitive interviewing has emerged as one of the more prominent methods for identifying and correcting problems with survey questions. We define cognitive interviewing as the administration of draft survey questions while collecting additional verbal information about the survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information that its author intends. But beyond this general categorization, cognitive interviewing potentially includes a variety of activities that may be based on different assumptions about the type of data that are being collected and the role of the interviewer in that process. This synthesis reviews the range of current cognitive interviewing practices, focusing on three considerations: (1) what are the dominant paradigms of cognitive interviewing—what is produced under each, and what are their apparent advantages; (2) what key decisions about cognitive interview study design need to be made once the general approach is selected (e.g., who should be interviewed, how many interviews should be conducted, and how should probes be selected), and what bases exist for making these decisions; and (3) how cognitive interviewing data should be evaluated, and what standards of evidence exist for making questionnaire design decisions based on study findings. In considering these issues, we highlight where standards for best practices are not clearly defined, and suggest broad areas worthy of additional methodological research.
Gender discrimination among women healthcare workers during the COVID-19 pandemic: Findings from a mixed methods study
Gender discrimination among women healthcare workers (HCWs) negatively impacts job satisfaction, mental health, and career development; however, few studies have explored how experiences of gender discrimination change during times of health system strain. Thus, we conducted a survey study to characterize gender discrimination during a time of significant health system strain, i.e., the COVID-19 pandemic. We used a convenience sampling approach by inviting department chairs of academic medical centers in the United States to forward our online survey to their staff in January 2021. The survey included one item assessing frequency of gender discrimination, and an open-ended question asking respondents to detail experiences of discrimination. The survey also included questions about social and work stressors, such as needing additional childcare support. We used ordinal logistic regression models to identify predictors of gender discrimination, and grounded theory to characterize themes that emerged from open-ended responses. Among our sample of 716 women (mean age = 37.63 years, SD = 10.97), 521 (72.80%) were White, 102 (14.20%) Asian, 69 (9.60%) Black, 53 (7.40%) Latina, and 11 (1.50%) identified as another race.
In an adjusted model that included demographic characteristics and social and work stressors as covariates, significant predictors of higher gender discrimination included younger age (OR = 0.98, 95% CI = 0.96, 0.99); greater support needs (OR = 1.26, 95% CI = 1.09, 1.47); lower team cohesion (OR = 0.94, 95% CI = 0.91, 0.97); greater racial discrimination (OR = 1.07, 95% CI = 1.05, 1.09); identifying as a physician (OR = 6.59, 95% CI = 3.95, 11.01), physician-in-training (i.e., residents and fellows; OR = 3.85, 95% CI = 2.27, 6.52), or non-clinical worker (e.g., administrative assistants; OR = 3.08, 95% CI = 1.60, 5.90), compared with nurses; and reporting the need for a lot more childcare support (OR = 1.84, 95% CI = 1.15, 2.97), compared with reporting no childcare support need. In their open-ended responses, women HCWs described seven themes: 1) belittlement by colleagues, 2) gendered workload distributions, 3) unequal opportunities for professional advancement, 4) expectations for communication, 5) objectification, 6) expectations of motherhood, and 7) mistreatment by patients. Our study underscores the severity of gender discrimination among women HCWs. Hospital systems should prioritize gender equity programs that improve workplace climate during and outside of times of health system strain.
Computing Response Metrics for Online Panels
As more researchers use online panels for studies, the need for standardized rates to evaluate these studies becomes paramount. There are currently many different ways and conflicting terminology used to compute various metrics for online panels. This paper discusses the sparse literature on how to compute response, refusal, and other rates and proposes a set of formulas and a standardized terminology that can be used to calculate and interpret these metrics for online panel studies. A description of and distinction between probability-based and volunteer opt-in panels is made since not all metrics apply to both types. A review of the existing discussion and recommendations, mostly from international organizations, is presented for background and context. In order to propose response and other metrics, the different stages involved in building an online panel are delineated. Metrics associated with these stages contribute to cumulative response rate formulas that can be used to evaluate studies using online probability-based panels. (Only completion rates can be calculated with opt-in panels.) We conclude with a discussion of the meaning of the different metrics proposed and what we think should be reported for which type of panel.
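The cumulative response rate idea described in this abstract — chaining the rates from each panel-building stage into one overall figure — can be sketched as a simple product of stage-level rates. The stage names and values below are illustrative assumptions, not the paper's exact formulas:

```python
# Hedged sketch: a cumulative response rate for a probability-based online
# panel, computed as the product of stage-level rates. The stage names and
# all values are illustrative assumptions, not the paper's proposed formulas.

stage_rates = {
    "recruitment_rate": 0.15,  # share of sampled units recruited to the panel
    "profile_rate": 0.80,      # share of recruits completing the profile survey
    "retention_rate": 0.70,    # share still active panelists at study time
    "completion_rate": 0.60,   # share of invited panelists completing the study
}

# Each stage conditions on the previous one, so the rates multiply.
cumulative_response_rate = 1.0
for rate in stage_rates.values():
    cumulative_response_rate *= rate

# For volunteer opt-in panels there is no probability sample at recruitment,
# so (as the abstract notes) only the completion rate can be reported.
opt_in_metric = stage_rates["completion_rate"]
```

This illustrates why cumulative response rates for probability-based panels are far lower than any single stage's rate, and why the abstract restricts opt-in panels to completion rates alone.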