29,000 results for "Survey response"
Nonresponse in Social Science Surveys
For many household surveys in the United States, response rates have been steadily declining for at least the past two decades. A similar decline in survey response can be observed in all wealthy countries. Efforts to raise response rates have used such strategies as monetary incentives or repeated attempts to contact sample members and obtain completed interviews, but these strategies increase the costs of surveys. This review addresses the core issues regarding survey nonresponse. It considers why response rates are declining and what that means for the accuracy of survey results. These trends are of particular concern for the social science community, which is heavily invested in obtaining information from household surveys. The evidence to date makes it apparent that current trends in nonresponse, if not arrested, threaten to undermine the potential of household surveys to elicit information that assists in understanding social and economic issues. The trends also threaten to weaken the validity of inferences drawn from estimates based on those surveys. High nonresponse rates create the potential for bias in estimates and affect survey design, data collection, estimation, and analysis. The survey community is painfully aware of these trends and has responded aggressively to these threats. The interview modes employed by surveys in the public and private sectors have proliferated as new technologies and methods have emerged and matured. To the traditional trio of mail, telephone, and face-to-face surveys have been added interactive voice response (IVR), audio computer-assisted self-interviewing (ACASI), web surveys, and a number of hybrid methods.
Similarly, a growing research agenda has emerged in the past decade or so focused on seeking solutions to various aspects of the problem of survey nonresponse; the potential solutions that have been considered range from better training and deployment of interviewers to greater use of incentives, better use of information collected during data collection, and increased use of auxiliary information from other sources in survey design and data collection. Nonresponse in Social Science Surveys: A Research Agenda also documents the increased use of information collected in the survey process in nonresponse adjustment.
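The bias mechanism underlying these concerns has a simple deterministic form often used in the nonresponse literature: the bias of the respondent mean is the nonrespondent share times the gap between respondent and nonrespondent means. A minimal sketch (function and variable names are illustrative, not from the book):

```python
def nonresponse_bias(resp_mean, nonresp_mean, response_rate):
    """Deterministic nonresponse-bias decomposition:

        bias(respondent mean) = (1 - response_rate)
                                * (resp_mean - nonresp_mean)

    The respondent mean is unbiased only when nonrespondents look like
    respondents on the survey variable, or when everyone responds."""
    return (1.0 - response_rate) * (resp_mean - nonresp_mean)

# Example: 60% respond; respondents average 5.0, nonrespondents 3.0.
# Full-population mean = 0.6 * 5.0 + 0.4 * 3.0 = 4.2, so the respondent
# mean of 5.0 overstates it by 0.8.
```

Note that a high response rate shrinks the first factor but does nothing about the second, which is why boosting response alone need not remove bias.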
Patient satisfaction and survey response in 717 hospital surveys in Switzerland: a cross-sectional study
Background The association between patient satisfaction and survey response is only partly understood. In this study, we describe the association between average satisfaction and survey response rate across hospital surveys, and model the association between satisfaction and propensity to respond for individual patients. Methods Secondary analysis of patient responses (166,014 respondents) and of average satisfaction scores and response rates obtained in 717 annual patient satisfaction surveys conducted between 2011 and 2015 at 164 Swiss hospitals. The satisfaction score was the average of 5 items scored between 0 and 10. The association between satisfaction and response propensity in individuals was modeled as the function that predicted best the observed response rates across surveys. Results Among the 717 surveys, response rates ranged from 16.1 to 80.0% (pooled average 49.8%), and average satisfaction scores ranged from 8.36 to 9.79 (pooled mean 9.15). At the survey level, the mean satisfaction score and response rate were correlated (r = 0.61). This correlation held for all subgroups of surveys, except for the 5 large university hospitals. The estimated individual response propensity function was "J-shaped": the probability of responding was lowest (around 20%) for satisfaction scores between 3 and 7, increased sharply to about 70% for those maximally satisfied, and increased slightly for the least satisfied. Average satisfaction scores projected for 100% participation were lower than observed average scores. Conclusions The most satisfied patients were the most likely to participate in a post-hospitalization satisfaction survey. This tendency produces an upward bias in observed satisfaction scores, and a positive correlation between average satisfaction and response rate across surveys.
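The upward bias implied by a J-shaped propensity can be reproduced in a few lines of simulation. The sketch below uses made-up propensities and a made-up score distribution (not the study's estimates) to show how oversampling the most satisfied inflates the observed mean:

```python
import random

def response_prob(score):
    """Stylized 'J-shaped' response propensity: lowest for middling
    satisfaction, highest for the most satisfied, slightly raised for
    the least satisfied. Values are made up for illustration."""
    if score >= 9:
        return 0.70
    if score <= 2:
        return 0.30
    return 0.20

rng = random.Random(42)
# Made-up satisfaction distribution, skewed high as in real patient data.
population = rng.choices([1, 5, 8, 9, 10],
                         weights=[5, 10, 20, 30, 35],
                         k=100_000)
respondents = [s for s in population if rng.random() < response_prob(s)]

true_mean = sum(population) / len(population)
observed_mean = sum(respondents) / len(respondents)
# The observed mean exceeds the true mean because the most satisfied
# patients respond far more often: the upward bias described above.
```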
Do low survey response rates bias results? Evidence from Japan
In developed countries, response rates have dropped to such low levels that many in the population field question whether the data can provide unbiased results. The paper uses three Japanese surveys conducted in the 2000s to ask whether low survey response rates bias results. A secondary objective is to bring results reported in the survey response literature to the attention of the demographic research community. Using a longitudinal survey as well as paradata from a cross-sectional survey, a variety of statistical techniques (chi-square, analysis of variance (ANOVA), logistic regression, ordered probit or ordinary least squares (OLS) regression, as appropriate) are used to examine response-rate bias. Evidence of response-rate bias is found for the univariate distributions of some demographic characteristics, behaviors, and attitudinal items. But when examining relationships between variables in a multivariate analysis, controlling for a variety of background variables, for most dependent variables the authors do not find evidence of bias from low response rates.
Improving Survey Response: Lessons Learned from the European Social Survey
High response rates have traditionally been considered one of the main indicators of survey quality. Obtaining high response rates is sometimes difficult and expensive, but clearly plays a beneficial role in terms of improving data quality. It is becoming increasingly clear, however, that simply boosting response to achieve a higher response rate will not in itself eradicate nonresponse bias. In this book the authors argue that high response rates should not be seen as a goal in themselves, but rather as part of an overall survey quality strategy based on random probability sampling and aimed at minimising nonresponse bias. Key features of Improving Survey Response:
  • A detailed coverage of nonresponse issues, including a unique examination of cross-national survey nonresponse processes and outcomes.
  • A discussion of the potential causes of nonresponse and practical strategies to combat it.
  • A detailed examination of the impact of nonresponse and of techniques for adjusting for it once it has occurred.
  • Examples of best practices and experiments drawn from 25 European countries.
  • Supplementary European Social Survey (ESS) websites, containing materials for the measurement and analysis of nonresponse based on detailed country-level response process datasets.
The book is designed to help survey researchers and those commissioning surveys by explaining how to prioritise the reduction of nonresponse bias rather than focusing on increasing the overall response rate. It shows substantive researchers how nonresponse can impact on substantive outcomes.
State-Dependence Effects in Surveys
In recent years academic research has focused on understanding and modeling the survey response process. This paper examines an understudied systematic response tendency in surveys: the extent to which observed responses are subject to state dependence, i.e., response carryover from one item to another independent of specific item content. We develop a statistical model that simultaneously accounts for state dependence, item content, and scale usage heterogeneity. The paper explores how state dependence varies by response category, item characteristics, item sequence, respondent characteristics, and whether it becomes stronger as the survey progresses. Two empirical applications provide evidence of substantial and significant state dependence. We find that the degree of state dependence depends on item characteristics and item sequence, and it varies across individuals and countries. The article demonstrates that ignoring state dependence may affect reliability and predictive validity, and it provides recommendations for survey researchers.
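Response carryover of this kind is easy to mimic in simulation. Below is a toy generator, not the paper's model (which also accounts for item content and scale-usage heterogeneity); all parameter names are illustrative:

```python
import random

def simulate_responses(n_items, categories, carryover, rng):
    """Generate one respondent's answers with simple state dependence:
    with probability `carryover` the previous answer is repeated
    regardless of item content; otherwise an answer is drawn uniformly
    at random (a content-free baseline)."""
    answers = [rng.choice(categories)]
    for _ in range(n_items - 1):
        if rng.random() < carryover:
            answers.append(answers[-1])  # carryover from the prior item
        else:
            answers.append(rng.choice(categories))
    return answers

# With carryover = 0 answers are independent; as carryover grows, runs
# of identical answers get longer, mimicking state-dependent responding.
```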
The Relationship between Daily Behavior Changes and Vaccine Attitudes at the Early Stage of the COVID-19 Pandemic among Japanese People from Different Demographics: A Retrospective and Exploratory Examination Using a Free-Response Survey
This study investigated how the daily behaviors of Japanese people changed during the early stages of the COVID-19 pandemic and whether the change was mediated by demographics. It also examined whether the magnitude of behavior change in a demographic group is related to that group's attitudes towards the COVID-19 vaccine. A total of 301 Japanese people responded to an online survey in February 2021, in which they first wrote down some activities they frequently performed before the virus outbreak and then wrote about activities in their current life. In total, 1858 answers were gathered for 'before' and 1668 for 'after', and they were grouped into 19 behavior categories. Overall, behaviors such as traveling, eating out, and shopping were described much less frequently in the 'after' condition, while housework, food delivery, and pandemic prevention were mentioned more. However, the change pattern was significantly influenced by the demographics of age, gender, having children or not, and household income. In particular, women, younger generations, and people without children showed the greatest extent of behavior change compared with the other demographic cohorts. These groups have been reported to be vaccine-hesitant in the literature. This study suggests that individuals with hesitant attitudes towards vaccines are more willing to change their behaviors to control viral transmission.
Values, Framing, and Citizens' Thoughts about Policy Issues: Effects on Content and Quantity
This study examines how frames invoking a core value shape the content and quantity of citizens' thoughts about a policy issue. An experimental study showed that exposure to a pro-school voucher equality frame increased the probability that participants would invoke equality in their open-ended survey responses. Exposure to an anti-school voucher equality frame produced the same effect, as did exposure to both frames. At the same time, participants who received either frame or both frames provided fewer open-ended responses. Thus, the frames appeared to focus participants' thoughts on one value while reducing the overall extent to which they thought about the issue. In broader terms, value framing may have implications for the nature and quality of public deliberation about policy issues, a point that scholars should keep in mind when considering how to define and study framing effects.
Non-Response in Student Surveys: The Role of Demographics, Engagement and Personality
What causes a student to participate in a survey? This paper looks at participation across multiple surveys to understand survey non-response; by using multiple surveys we minimize the impact of survey salience. Students at a selective liberal arts college were administered four different surveys throughout the 2002-2003 academic year, and we use the number of surveys participated in to understand how student characteristics such as demographics, engagement and Holland personality type affect cooperation. We find that survey respondents are more likely to be female and socially engaged, less likely to be on financial aid, more likely to be an investigative personality type and less likely to be an enterprising personality type.
Does more balanced survey response imply less non-response bias?
Recently, various indicators have been proposed as indirect measures of nonresponse error in surveys. They employ auxiliary variables, external to the survey, to detect non-representative or unbalanced response. A class of designs known as adaptive survey designs maximizes these indicators by applying different treatments to different subgroups. The natural question is whether the decrease in non-response bias that is caused by adaptive survey designs could also be achieved by non-response adjustment methods. We discuss this question and provide theoretical and empirical considerations, supported by a range of household and business surveys. We find evidence that more balanced response coincides with less non-response bias, even after adjustment.
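One widely used indicator of this kind is the representativeness (R-) indicator in the style of Schouten and colleagues, which summarizes how much estimated response propensities vary across sample members. A minimal sketch (the propensity values are illustrative; in practice propensities are estimated from auxiliary variables):

```python
import statistics

def r_indicator(propensities):
    """Representativeness (R-) indicator:

        R = 1 - 2 * SD(response propensities)

    R = 1 means everyone is equally likely to respond (fully balanced
    response); lower values mean a less balanced response."""
    return 1.0 - 2.0 * statistics.pstdev(propensities)

# Perfectly balanced response:
# r_indicator([0.5, 0.5, 0.5, 0.5]) -> 1.0
# Unbalanced response (some groups respond far more than others):
# r_indicator([0.2, 0.8]) -> 0.4
```

An adaptive design in this framing targets its treatments at low-propensity subgroups so that the spread of propensities, and hence the indicator's penalty term, shrinks.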
Can incentives improve survey data quality in developing countries?
We report results of an experiment designed to assess whether the payment of contingent incentives to respondents in Karnataka, India, impacts the quality of survey data. Of 2276 households sampled at the city block level, 934 were randomly assigned to receive a small one-time payment at the time of the survey, whereas the remaining households did not receive this incentive. We analyse the effects of incentives across a range of questions that are common in survey research in less developed countries. Our study suggests that incentives reduced unit non-response. Conditional on participation, we also find little impact of incentives on a broad range of sociodemographic, behavioural and attitudinal questions. In contrast, we consistently find that households that received incentives reported substantially lower consumption and income levels and fewer assets. Given random assignment and very high response rates, the most plausible interpretation of this finding is that incentivizing respondents in this setting may increase their motivation to present themselves as more needy, whether to justify the current payment or to increase the chance of receiving resources in the future. Therefore, despite early indications that contingent incentives may raise response rates, the net effect on data quality must be carefully considered.
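In a randomized design like this, the effect of incentives on unit nonresponse is typically assessed by comparing response rates between the two arms. A self-contained two-proportion z-test sketch (the counts in the example are hypothetical, not the study's):

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test for a difference in response
    rates between two randomized arms (x = respondents, n = assigned)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p_value

# Hypothetical counts: 90/100 respond with an incentive versus 70/100
# without; the gap is large relative to its standard error.
z, p_value = two_proportion_z(90, 100, 70, 100)
```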