Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
137,185 result(s) for "survey method"
Misresponse to Reversed and Negated Items in Surveys: A Review
2012
There are important advantages to including reversed items in questionnaires (e.g., control of acquiescence, disruption of nonsubstantive responding, better coverage of the domain of content of a construct), but reversed items can also lead to measurement problems (e.g., low measure reliability, complex factor structures). The authors advocate the continued use of reversed items in measurement instruments but also argue that they should be used with caution. To help researchers improve their scale construction practices, the authors provide a comprehensive review of the literature on reversed and negated items and offer recommendations about their use in questionnaires. The theoretical discussion is supplemented with data on 1330 items from measurement scales that have appeared in Journal of Marketing Research and Journal of Consumer Research.
Journal Article
Population Survey Features and Response Rates: A Randomized Experiment
2016
Objectives. To study the effects of several survey features on response rates in a general population health survey. Methods. In 2012 and 2013, 8000 households in British Columbia, Canada, were randomly allocated to 1 of 7 survey variants, each containing a different combination of survey features. Features compared included administration modes (paper vs online), prepaid incentive ($2 coin vs none), lottery incentive (instant vs end-of-study), questionnaire length (10 minutes vs 30 minutes), and sampling frame (InfoCanada vs Canada Post). Results. The overall response rate across the 7 groups was 27.9% (range = 17.1–43.4). All survey features except the sampling frame were associated with statistically significant differences in response rates. The survey mode elicited the largest effect on the odds of response (odds ratio [OR] = 2.04; 95% confidence interval [CI] = 1.61, 2.59), whereas the sampling frame showed the least effect (OR = 1.14; 95% CI = 0.98, 1.34). The highest response was achieved by mailing a short paper survey with a prepaid incentive. Conclusions. In a mailed general population health survey in Canada, a 40% to 50% response rate can be expected. Questionnaire administration mode, survey length, and type of incentive affect response rates.
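To make the odds-ratio comparison above concrete, here is a minimal Python sketch; the counts are hypothetical (chosen only to roughly echo the reported response-rate range) and this is not the study's analysis code.

```python
# A minimal sketch (hypothetical counts, not the study's data): odds ratio and
# Wald 95% confidence interval for responding in survey variant A vs. variant B.
import math

def odds_ratio_ci(resp_a, n_a, resp_b, n_b, z=1.96):
    a, b = resp_a, n_a - resp_a          # variant A: responders, non-responders
    c, d = resp_b, n_b - resp_b          # variant B: responders, non-responders
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# e.g. short paper survey with a prepaid incentive vs. long online survey
print(odds_ratio_ci(resp_a=495, n_a=1140, resp_b=195, n_b=1140))
```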
Journal Article
A methodological framework for model selection in interrupted time series studies
2018
Interrupted time series (ITS) is a powerful and increasingly popular design for evaluating public health and health service interventions. The design involves analyzing trends in the outcome of interest and estimating the change in trend following an intervention relative to the counterfactual (the expected ongoing trend if the intervention had not occurred). There are two key components to modeling this effect: first, defining the counterfactual; second, defining the type of effect that the intervention is expected to have on the outcome, known as the impact model. The counterfactual is defined by extrapolating the underlying trends observed before the intervention to the postintervention period. In doing this, authors must consider the preintervention period that will be included, any time-varying confounders, whether trends may vary within different subgroups of the population and whether trends are linear or nonlinear. Defining the impact model involves specifying the parameters that model the intervention, including for instance whether to allow for an abrupt level change or a gradual slope change, whether to allow for a lag before any effect on the outcome, whether to allow a transition period during which the intervention is being implemented, and whether a ceiling or floor effect might be expected. Inappropriate model specification can bias the results of an ITS analysis and using a model that is not closely tailored to the intervention or testing multiple models increases the risk of false positives being detected. It is important that authors use substantive knowledge to customize their ITS model a priori to the intervention and outcome under study. Where there is uncertainty in model specification, authors should consider using separate data sources to define the intervention, running limited sensitivity analyses or undertaking initial exploratory studies.
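For readers unfamiliar with how an ITS impact model is typically parameterized, the sketch below shows the common segmented-regression form in Python, using simulated monthly data with a single abrupt level change plus slope change at the intervention and no seasonality or autocorrelation adjustment; it illustrates the general form only, not the authors' framework.

```python
# A minimal segmented-regression sketch for an interrupted time series (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, change_point = 48, 24                                      # 48 months, intervention at month 24
df = pd.DataFrame({"time": np.arange(n)})
df["post"] = (df["time"] >= change_point).astype(int)         # abrupt level change indicator
df["time_since"] = np.maximum(0, df["time"] - change_point)   # time since intervention (slope change)
df["rate"] = (50 - 0.2 * df["time"]                           # pre-intervention trend
              - 5 * df["post"] - 0.5 * df["time_since"]       # simulated intervention effect
              + rng.normal(0, 2, n))

# Coefficient on `post` = immediate level change; on `time_since` = change in trend.
# The counterfactual is the pre-intervention trend (intercept + time) extrapolated forward.
model = smf.ols("rate ~ time + post + time_since", data=df).fit()
print(model.params)
```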
Journal Article
What did you really earn last year?
2019
The paper analyses the sources of income measurement error in surveys with a unique data set. We use the Austrian 2008–2011 waves of the European Union ‘Statistics on income and living conditions’ survey which provide individual information on wages, pensions and unemployment benefits from survey interviews and officially linked administrative records. Thus, we do not have to fall back on complex two-sample matching procedures like related studies. We empirically investigate four sources of measurement error, namely social desirability, sociodemographic characteristics of the respondent, the survey design and the presence of learning effects. We find strong evidence for a social desirability bias in income reporting, whereas the presence of learning effects is mixed and depends on the type of income under consideration. An Owen value decomposition reveals that social desirability is a major explanation of misreporting in wages and pensions, whereas sociodemographic characteristics are most relevant for mismatches in unemployment benefits.
Journal Article
Short assessment of the Big Five: robust across survey methods except telephone interviewing
2011
We examined measurement invariance and age-related robustness of a short 15-item Big Five Inventory (BFI–S) of personality dimensions, which is well suited for applications in large-scale multidisciplinary surveys. The BFI–S was assessed in three different interviewing conditions: computer-assisted or paper-assisted face-to-face interviewing, computer-assisted telephone interviewing, and a self-administered questionnaire. Randomized probability samples from a large-scale German panel survey and a related probability telephone study were used in order to test method effects on self-report measures of personality characteristics across early, middle, and late adulthood. Exploratory structural equation modeling was used in order to test for measurement invariance of the five-factor model of personality trait domains across different assessment methods. For the short inventory, findings suggest strong robustness of self-report measures of personality dimensions among young and middle-aged adults. In old age, telephone interviewing was associated with greater distortions in reliable personality assessment. It is concluded that the greater mental workload of telephone interviewing limits the reliability of self-report personality assessment. Face-to-face surveys and self-administered questionnaire completion are clearly better suited than phone surveys when personality traits in age-heterogeneous samples are assessed.
Journal Article
In Pursuit of Balance: Randomization in Practice in Development Field Experiments
2009
We present new evidence on the randomization methods used in existing experiments, and new simulations comparing these methods. We find that many papers do not describe the randomization in detail, implying that better reporting is needed. Our simulations suggest that in samples of 300 or more, the different methods perform similarly. However, for very persistent outcome variables, and in smaller samples, pair-wise matching and stratification perform best and appear to dominate the rerandomization methods commonly used in practice. The simulations also point to specific recommendations for which variables to balance on, and for which controls to include in the ex post analysis.
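As one concrete reading of the stratified randomization the paper compares, here is a minimal Python sketch that assigns treatment within strata defined by a single baseline covariate; the data and variable names are illustrative assumptions, not drawn from the paper.

```python
# A minimal sketch of stratified randomization (illustrative data, not from the paper).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "id": range(300),
    "region": rng.choice(["north", "south", "east"], size=300),  # stratifying covariate
})

def assign_within_strata(frame, strata_col, rng):
    out = []
    for _, grp in frame.groupby(strata_col):
        # build a near-balanced vector of arms, then shuffle it within the stratum
        arm = np.array(["treat", "control"] * (len(grp) // 2 + 1))[: len(grp)]
        rng.shuffle(arm)
        out.append(grp.assign(arm=arm))
    return pd.concat(out).sort_index()

df = assign_within_strata(df, "region", rng)
print(df.groupby(["region", "arm"]).size())   # roughly equal arms in each stratum
```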
Journal Article
Survey administration and participation: a randomized trial in a panel population health survey
2025
Background
The choice of survey administration features in population health surveys may influence participation and the generalizability of results. This randomized study aimed to investigate whether three digital letters (denoted single-mode administration) lead to participation similar to that achieved with two digital and three physical letters mailed with a shorter interval between reminders (denoted sequential mixed-mode administration).
Methods
In total, 9,489 individuals who participated in The Danish Capital Region Health Survey in 2017 were randomized to re-invitation in 2021 (≥ 20 years) by either single-mode with three digital letters (N = 4,745) or sequential mixed-mode survey administration with two digital and then three physical letters (N = 4,744). To investigate the influence of survey administration on representativeness, the two groups were compared regarding sociodemographic characteristics of participants (age, sex, country of origin, education, labor market attachment). Generalized linear models were used to estimate absolute and relative differences between the two administration groups in participation rates (overall and the increase after reminders). It was also investigated whether sociodemographic factors moderated the association between administration group and participation.
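As a simplified stand-in for the generalized linear models mentioned above, the sketch below computes an absolute difference (risk difference) and a relative difference (risk ratio) in participation rates with Wald-type 95% intervals; the counts are hypothetical, chosen only to roughly match the reported 78% and 61% rates, and this is not the study's analysis code.

```python
# A minimal sketch: absolute and relative differences in participation rates
# between two administration groups, with Wald-type 95% confidence intervals.
import math

def compare_rates(resp_a, n_a, resp_b, n_b, z=1.96):
    p_a, p_b = resp_a / n_a, resp_b / n_b
    diff = p_a - p_b                                        # absolute (risk) difference
    se_diff = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    ratio = p_a / p_b                                       # relative difference (risk ratio)
    se_log_ratio = math.sqrt((1 - p_a) / resp_a + (1 - p_b) / resp_b)
    return {
        "diff":  (diff, diff - z * se_diff, diff + z * se_diff),
        "ratio": (ratio,
                  math.exp(math.log(ratio) - z * se_log_ratio),
                  math.exp(math.log(ratio) + z * se_log_ratio)),
    }

# Hypothetical counts roughly matching the reported 78% vs. 61% participation
print(compare_rates(3700, 4744, 2894, 4745))
```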
Results
At the end of follow-up, the participation rate was significantly higher in the sequential mixed-mode group, which received five letters (78%), than in the single-mode group, which received three letters (61%), primarily due to a greater increase in participation after switching to physical administration and an increase after the two additional reminders in the mixed-mode group. Overall, individuals who decided to participate in the two groups were comparable in all sociodemographic factors, yet older participants appeared to benefit more from switching to physical administration and younger participants from additional reminders.
Conclusions
Depending on the target population, sequential mixed-mode survey administration and/or multiple reminders could be considered to increase participation; however, it does not necessarily improve the sociodemographic representativeness of participants.
Journal Article
Owl Pellet Content Analysis Proves an Effective Technique to Monitor a Population of Threatened Julia Creek Dunnarts (Sminthopsis douglasi) Throughout a Native Rodent Plague
by Baker, Andrew M.; Charley, Cameron L.; Gray, Emma L.
in Biodiversity; Community structure; Conservation Ecology
2025
Logistical, environmental and temporal considerations can limit the effectiveness of long-term live trapping for small mammals in remote environments. Owl pellet content analysis offers a low-cost, non-invasive alternative to live trapping, as it is generally reflective of prey abundance within the broader small mammal community. One species to which this detection technique could be readily applied is the threatened Australian dasyurid, the Julia Creek dunnart, Sminthopsis douglasi. Most population information is outdated, and the species is notoriously difficult to monitor. Here, we aimed to monitor S. douglasi and other small terrestrial vertebrates over time and in relation to a native long-haired rat (Rattus villosissimus) plague, assessing their occurrence as dietary items in eastern barn owl (Tyto javanica delicatula) pellets collected at Toorak, north-west Queensland, Australia. A total of 1007 individual vertebrates were identified from 706 barn owl pellets spanning 3 present-day collections (2023–2024), with further analysis incorporating a prior published historical dataset (1994–2001, 210 pellets). We demonstrated a shift in Toorak small mammal community structure both over time and in response to an active R. villosissimus plague. Despite declines across present-day pellet collections, S. douglasi was always detected in high abundance, peaking at 30.75% of all individuals. Cumulative probability of detection indicated that analysis of owl pellets was highly effective at detecting S. douglasi (within 20 pellets) despite the ongoing rodent plague, which has undermined the effectiveness of parallel live trapping efforts across the region. Owl pellet analysis is thus an effective methodology for rapidly assessing S. douglasi populations and should be incorporated into monitoring regimes for S. douglasi and other small mammal species.
Figure: Julia Creek dunnart (Sminthopsis douglasi) craniodental material from loose eastern barn owl (Tyto javanica delicatula) pellets collected from Toorak, Queensland, Australia. Photo: Cameron L. Charley.
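As a back-of-the-envelope illustration of "cumulative probability of detection within 20 pellets", the sketch below assumes a fixed, independent per-pellet detection probability (a hypothetical value, not estimated from the study) and shows how detection probability accumulates with the number of pellets examined.

```python
# A minimal sketch (hypothetical per-pellet probability p; pellets treated as
# independent): probability of detecting the species at least once in n pellets.
p = 0.18   # assumed probability that a single pellet contains S. douglasi remains
for n in (5, 10, 20, 40):
    cumulative = 1 - (1 - p) ** n
    print(f"within {n:>2} pellets: {cumulative:.2f}")
```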
Journal Article
A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors
2011
Background
Surveys of doctors are an important data collection method in health services research. Ways to improve response rates and to minimise survey response bias and item non-response within a given budget have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors.
Methods
A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost.
Results
The online mode had a response rate of 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to the sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than the simultaneous mixed mode and 22% lower than the sequential mixed mode. The cost of the sequential mixed mode was 14% lower than the simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias.
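To illustrate the kind of cost-effectiveness comparison the trial reports, the sketch below computes cost per completed response from the reported response rates and purely hypothetical per-invitation unit costs; the cost figures are illustrative assumptions, not the trial's actual budget.

```python
# A minimal sketch: cost per completed response under assumed unit costs (AUD).
modes = {
    # mode: (sample size, response rate, assumed cost per invited doctor)
    "online":             (902, 0.1295, 12.0),
    "simultaneous mixed": (900, 0.197,  22.0),
    "sequential mixed":   (900, 0.207,  18.0),
}

for name, (n, rate, unit_cost) in modes.items():
    responses = n * rate
    total_cost = n * unit_cost
    print(f"{name:>20}: {responses:5.0f} responses, "
          f"{total_cost:8.0f} total, {total_cost / responses:6.1f} per response")
```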
Conclusions
Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population.
Journal Article
Improving surveys with paradata
by Kreuter, Frauke
in Education; Social sciences; Social sciences -- Research -- Statistical methods
2013
Explore the practices and cutting-edge research on the new and exciting topic of paradata
Paradata are measurements related to the process of collecting survey data.
Improving Surveys with Paradata: Analytic Uses of Process Information is the most accessible and comprehensive contribution to this up-and-coming area in survey methodology.
Featuring contributions from leading experts in the field, Improving Surveys with Paradata: Analytic Uses of Process Information introduces and reviews issues involved in the collection and analysis of paradata. The book presents readers with an overview of the indispensable techniques and new, innovative research on improving survey quality and total survey error. Along with several case studies, topics include:
* Using paradata to monitor fieldwork activity in face-to-face, telephone, and web surveys
* Guiding intervention decisions during data collection
* Analysis of measurement, nonresponse, and coverage error via paradata
Providing a practical, encompassing guide to the subject of paradata, the book is aimed at both producers and users of survey data. Improving Surveys with Paradata: Analytic Uses of Process Information also serves as an excellent resource for courses on data collection, survey methodology, and nonresponse and measurement error.