67 results for "Shlomo, Natalie"
Confidentiality and Differential Privacy in the Dissemination of Frequency Tables
For decades, national statistical agencies and other data custodians have been publishing frequency tables based on census, survey and administrative data. In order to protect the confidentiality of individuals represented in the data, tables based on original data are modified before release. Recently, in response to user demand for more flexible and responsive table publication services, frequency table publication schemes have been augmented with on-line table generating servers such as the US Census Bureau FactFinder and the Australian Bureau of Statistics (ABS) TableBuilder. These systems allow users to build their own custom tables, and make use of automated perturbation routines to protect confidentiality. Motivated by the growing popularity of table generating servers, in this paper we study confidentiality protection for perturbed frequency tables, including the trade-off with analytical utility, focusing on a version of the ABS TableBuilder as a concrete example of a data release mechanism, and examining its properties. Confidentiality protection is assessed in terms of the differential privacy standard, and this paper can be used as a practical introduction to differential privacy, to calculations related to its application, to the relationship between confidentiality protection and utility and to confidentiality in general.
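As a concrete (and deliberately simplified) illustration of the kind of mechanism the paper studies, the sketch below perturbs a frequency table with Laplace noise, the textbook ε-differentially-private mechanism. The ABS TableBuilder itself uses a different, bounded perturbation scheme, so this is a stand-in for exposition, not its actual algorithm.

```python
import math
import random

def laplace_noise(scale, rng):
    """One draw from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def perturb_table(counts, epsilon, rng=None):
    """Release an epsilon-DP version of a frequency table.

    Adding or removing one individual changes exactly one cell count by 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon gives epsilon-DP;
    rounding and clamping at zero are post-processing steps, which preserve
    the differential privacy guarantee.
    """
    rng = rng or random.Random(0)
    scale = 1.0 / epsilon
    return [max(0, round(c + laplace_noise(scale, rng))) for c in counts]

released = perturb_table([120, 35, 7, 0], epsilon=1.0)
print(released)  # noisy, non-negative integer counts
```

Smaller ε means stronger privacy but noisier cells, which is exactly the confidentiality/utility trade-off the paper examines.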
End-of-life care and achieving preferences for place of death in England: Results of a population-based survey using the VOICES-SF questionnaire
Background/aim: Health policy places emphasis on enabling patients to die in their place of choice, and increasing the proportion of home deaths. In this article, we seek to explore reported preferences for place of death and experiences of care in a population-based sample of deaths from all causes. Design: Self-completion post-bereavement survey. Setting/Participants: Census of deaths registered in two health districts between October 2009 and April 2010. Views of Informal Carers – Evaluation of Services Short Form was sent to each informant (n = 1422; usually bereaved relative) 6–12 months post-bereavement. Results: Response was 33%. In all, 35.7% of respondents reported that the deceased said where they wanted to die, and 49.3% of these were reported to achieve this. Whilst 73.9% of those who were reported to have a preference cited home as the preferred place, only 13.3% of the sample died at home. Cancer patients were more likely to be reported to achieve preferences than patients with other conditions (p < .01). Being reported to have a record of preferences for place of death increased the likelihood of dying at home (odds ratio = 22.10). When rating care in the last 2 days, respondents were more likely to rate ‘excellent’ or ‘good’ for nursing care (p < .01), relief of pain (p < .01) and other symptoms (p < .01), emotional support (p < .01) and privacy of patient’s environment (p < .01) if their relative died in their preferred place. Conclusions: More work is needed to encourage people to talk about their preferences at the end of life: this should not be restricted to those known to be dying. Increasing knowledge and achievement of preferences for place of death may also improve end-of-life care.
Participant recruitment in sensitive surveys: a comparative trial of ‘opt in’ versus ‘opt out’ approaches
Background Although in health services survey research we strive for a high response rate, this must be balanced against the need to recruit participants ethically and considerately, particularly in surveys with a sensitive nature. In survey research there are no established recommendations to guide recruitment approach and an ‘opt-in’ system that requires potential participants to request a copy of the questionnaire by returning a reply slip is frequently adopted. However, in observational research the risk to participants is lower than in clinical research and so some surveys have used an ‘opt-out’ system. The effect of this approach on response and distress is unknown. We sought to investigate this in a survey of end of life care completed by bereaved relatives. Methods Out of a sample of 1422 bereaved relatives we assigned potential participants to one of two study groups: an ‘opt in’ group (n = 711) where a letter of invitation was issued with a reply slip to request a copy of the questionnaire; or an ‘opt out’ group (n = 711) where the survey questionnaire was provided alongside the invitation letter. We assessed response and distress between groups. Results From a sample of 1422, 473 participants returned questionnaires. Response was higher in the ‘opt out’ group than in the ‘opt in’ group (40% compared to 26.4%: χ² = 29.79, p-value < .01), there were no differences in distress or complaints about the survey between groups, and assignment to the ‘opt out’ group was an independent predictor of response (OR = 1.84, 95% CI: 1.45-2.34). Moreover, the ‘opt in’ group were more likely to decline to participate (χ² = 28.60, p-value < .01) and there was a difference in the pattern of questionnaire responses between study groups.
Conclusion Given that the ‘opt out’ method of recruitment is associated with a higher response than the ‘opt in’ method, seems to have no impact on complaints or distress about the survey, and there are differences in the patterns of responses between groups, the ‘opt out’ method could be recommended as the most efficient way to recruit into surveys, even in those with a sensitive nature.
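The headline comparison is a Pearson chi-squared test on a 2×2 table of response by study arm. The sketch below reproduces it approximately; the cell counts (284 ≈ 40% of 711 opt-out responders, 188 ≈ 26.4% of 711 opt-in responders) are reconstructed from the rounded percentages in the abstract, not figures taken from the paper, so the statistic differs slightly from the reported 29.79.

```python
def chi2_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table
    [[a, b], [c, d]]: (responded, did not respond) per study arm."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

opt_out = (284, 711 - 284)  # reconstructed: ~40% of 711 responded
opt_in = (188, 711 - 188)   # reconstructed: ~26.4% of 711 responded
print(round(chi2_2x2([opt_out, opt_in]), 2))  # → 29.23, near the reported 29.79
```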
A new approach to assess the normalization of differential rates of protest participation
Research that compares those who do and do not participate in protest over time suggests that protesters are becoming increasingly similar to the non-protesting population. Using a protest survey that includes the frequency of protest participation, we consider the extent to which those who protest to different degrees are similar to non-protesters. Selection bias in non-probability protest survey data is compensated for by combining the data with random reference samples from the European Social Survey under a quasi-randomisation approach. We test hypotheses on the normalization of protesters and compare two methods for compensating for selection bias: a proportional weighting method and a propensity score adjustment method. The propensity score adjustment method is more effective in mitigating selection bias by balancing on variables that explain the selection and outcome, and enables the comparison of groups of protesters to non-protesters. We find that protesters become increasingly differentiated from non-protesters according to their left-wing self-placement and education as their extent of protest participation increases.
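The core idea behind propensity-based adjustment of a non-probability sample can be sketched with inverse-propensity weighting: units that were unlikely to self-select get larger weights. The toy population, group structure, and selection rates below are invented for illustration (and the propensities are treated as known, whereas the paper estimates them from the combined protest and reference samples).

```python
import random

def ipw_mean(values, propensities):
    """Hajek-style inverse-propensity-weighted mean: each self-selected
    unit is weighted by 1/p, its probability of entering the sample."""
    weights = [1.0 / p for p in propensities]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Toy illustration: two groups; group 1 protests more (5 events vs 1)
# and is three times as likely to join the volunteer survey.
rng = random.Random(1)
pop = [(1, 5) if rng.random() < 0.5 else (0, 1) for _ in range(100_000)]
true_mean = sum(v for _, v in pop) / len(pop)

sample = [(g, v) for g, v in pop if rng.random() < (0.3 if g else 0.1)]
naive = sum(v for _, v in sample) / len(sample)
adjusted = ipw_mean([v for g, v in sample],
                    [0.3 if g else 0.1 for g, v in sample])

print(round(true_mean, 2), round(naive, 2), round(adjusted, 2))
```

The naive sample mean is pulled toward the over-represented group (~4 here against a true mean of ~3), while the weighted estimate recovers the population value.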
Selecting Adaptive Survey Design Strata with Partial R-indicators
Recent survey literature shows an increasing interest in survey designs that adapt data collection to characteristics of the survey target population. Given a specified quality objective function, the designs attempt to find an optimal balance between quality and costs. Finding the optimal balance may not be straightforward as corresponding optimisation problems are often highly non-linear and non-convex. In this paper, we discuss how to choose strata in such designs and how to allocate these strata in a sequential design with two phases. We use partial R-indicators to build profiles of the data units where more or less attention is required in the data collection. In allocating cases, we look at two extremes: surveys that are run only once, or infrequent, and surveys that are run continuously. We demonstrate the impact of the sample size in a simulation study and provide an application to a real survey, the Dutch Crime Victimisation Survey.
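A minimal sketch of the quantities involved, under the usual definitions from the R-indicator literature: the overall R-indicator is R = 1 − 2·S(ρ), where S(ρ) is the standard deviation of estimated response propensities, and an unconditional partial R-indicator for a stratifying variable is the square root of the between-stratum variance of the propensities. The numbers are invented, and in practice the propensities are themselves model estimates.

```python
import math

def r_indicator(propensities):
    """Overall representativity indicator R = 1 - 2*S(rho).
    R = 1 means every unit is equally likely to respond."""
    n = len(propensities)
    mean = sum(propensities) / n
    var = sum((p - mean) ** 2 for p in propensities) / (n - 1)
    return 1.0 - 2.0 * math.sqrt(var)

def unconditional_partial_r(propensities, strata):
    """Variable-level unconditional partial R-indicator: the square root
    of the between-stratum variance of propensities. Large values flag a
    variable whose strata differ most in response, i.e. candidates for
    targeted data-collection effort in an adaptive design."""
    n = len(propensities)
    mean = sum(propensities) / n
    groups = {}
    for p, s in zip(propensities, strata):
        groups.setdefault(s, []).append(p)
    between = sum(len(g) * (sum(g) / len(g) - mean) ** 2
                  for g in groups.values()) / n
    return math.sqrt(between)

rho = [0.8, 0.8, 0.4, 0.4]                 # assumed propensities
print(r_indicator(rho))                    # < 1: response is unbalanced
print(unconditional_partial_r(rho, ["urban", "urban", "rural", "rural"]))
```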
A Probabilistic Procedure for Anonymisation, for Assessing the Risk of Re-identification and for the Analysis of Perturbed Data Sets
The requirement to anonymise data sets that are to be released for secondary analysis should be balanced by the need to allow their analysis to provide efficient and consistent parameter estimates. The proposal in this article is to integrate the process of anonymisation and data analysis. The first stage uses the addition of random noise with known distributional properties to some or all variables in a released (already pseudonymised) data set, in which the values of some identifying and sensitive variables for data subjects of interest are also available to an external ‘attacker’ who wishes to identify those data subjects in order to interrogate their records in the data set. The second stage of the analysis consists of specifying the model of interest so that parameter estimation accounts for the added noise. Where the characteristics of the noise are made available to the analyst by the data provider, we propose a new method that allows a valid analysis. This is formally a measurement error model and we describe a Bayesian MCMC algorithm that recovers consistent estimates of the true model parameters. A new method for handling categorical data is presented. The article shows how an appropriate noise distribution can be determined.
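The paper's analysis stage uses a Bayesian MCMC measurement-error model; the sketch below shows the same principle with a much simpler moment-based correction for a single regression slope, assuming (as the article requires) that the data provider discloses the noise variance. All data here are simulated.

```python
import random

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(2)
n, sigma_noise2 = 50_000, 4.0
x = [rng.gauss(0, 2) for _ in range(n)]               # true predictor, var 4
y = [1.5 * xi + rng.gauss(0, 1) for xi in x]          # true slope 1.5
x_released = [xi + rng.gauss(0, sigma_noise2 ** 0.5) for xi in x]  # anonymised

naive = ols_slope(x_released, y)  # attenuated toward 0 by the added noise
# Correction: var(X) = var(X_released) - sigma_noise2, where sigma_noise2
# is the disclosed noise variance, so divide out the reliability ratio.
mr = sum(x_released) / n
var_rel = sum((v - mr) ** 2 for v in x_released) / (n - 1)
corrected = naive * var_rel / (var_rel - sigma_noise2)
print(round(naive, 2), round(corrected, 2))  # ~0.75 vs ~1.5
```

The naive slope is biased by roughly the reliability ratio 4/(4+4) = 0.5; the corrected estimator is consistent for the true slope, which is the property the article's integrated approach aims to guarantee more generally.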
Using Response Propensity Models to Improve the Quality of Response Data in Longitudinal Studies
We review two approaches for improving the response in longitudinal (birth cohort) studies based on response propensity models: strategies for sample maintenance in longitudinal studies and improving the representativeness of the respondents over time through interventions. Based on estimated response propensities, we examine the effectiveness of different re-issuing strategies using Representativity Indicators (R-indicators). We also combine information from the Receiver Operating Characteristic (ROC) curve with a cost function to determine an optimal cut point for the propensity not to respond in order to target interventions efficiently at cases least likely to respond. We use the first four waves of the UK Millennium Cohort Study to illustrate these methods. Our results suggest that it is worth re-issuing to the field nonresponding cases from previous waves although re-issuing refusals might not be the best use of resources. Adapting the sample to target subgroups for re-issuing from wave to wave will improve the representativeness of response. However, in situations where discrimination between respondents and nonrespondents is not strong, it is doubtful whether specific interventions to reduce nonresponse will be cost effective.
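The ROC-plus-cost-function idea of choosing a cut point for the nonresponse propensity can be sketched as a small search over candidate thresholds. The cost structure and data below are assumed for illustration: a fixed cost per targeted case versus a loss per nonrespondent left untargeted.

```python
def optimal_cutoff(propensities, outcomes, cost_target, cost_miss):
    """Choose the nonresponse-propensity cut point minimising total cost.

    Every case at or above the cut point is targeted for an intervention
    (cost_target each); every actual nonrespondent below it is missed
    (cost_miss each). Returns (minimal cost, cut point).
    """
    best = (float("inf"), None)
    for t in sorted(set(propensities)) + [1.1]:  # 1.1 = target nobody
        targeted = sum(1 for p in propensities if p >= t)
        missed = sum(1 for p, nr in zip(propensities, outcomes)
                     if p < t and nr)
        cost = targeted * cost_target + missed * cost_miss
        best = min(best, (cost, t))
    return best

# Assumed toy data: estimated propensity *not* to respond, and whether
# the case actually failed to respond.
p_nonresponse = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
nonrespondent = [1, 1, 0, 1, 0, 0]
print(optimal_cutoff(p_nonresponse, nonrespondent,
                     cost_target=1.0, cost_miss=5.0))  # → (4.0, 0.3)
```

Raising `cost_miss` relative to `cost_target` pushes the optimal cut point down, targeting more cases, which mirrors the trade-off the cost function formalises.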
Multivariate Small Area Estimation of Multidimensional Latent Economic Well-being Indicators
Factor analysis models are used in data dimensionality reduction problems where the variability among observed variables can be described through a smaller number of unobserved latent variables. This approach is often used to estimate the multidimensionality of well-being. We employ factor analysis models and use multivariate empirical best linear unbiased predictor (EBLUP) under a unit-level small area estimation approach to predict a vector of means of factor scores representing well-being for small areas. We compare this approach with the standard approach whereby we use small area estimation (univariate and multivariate) to estimate a dashboard of EBLUPs of the means of the original variables and then averaged. Our simulation study shows that the use of factor scores provides estimates with lower variability than weighted and simple averages of standardised multivariate EBLUPs and univariate EBLUPs. Moreover, we find that when the correlation in the observed data is taken into account before small area estimates are computed, multivariate modelling does not provide large improvements in the precision of the estimates over the univariate modelling. We close with an application using the European Union Statistics on Income and Living Conditions data.
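At its core, a unit-level EBLUP for a small-area mean is a shrinkage estimator: a weighted combination of the area's direct sample mean and a regression-synthetic prediction. The toy sketch below shows only that shrinkage step, with the variance components taken as given (in practice they are estimated, e.g. by REML), and omits the multivariate/factor-score machinery the paper builds on top.

```python
def eblup_area_mean(y_bar, n, x_beta, sigma2_u, sigma2_e):
    """EBLUP of a small-area mean under a nested-error unit-level model.

    gamma weighs the direct estimate against the synthetic regression
    prediction: sigma2_u is the between-area variance, sigma2_e the
    within-area (unit-level) variance, n the area sample size.
    """
    gamma = sigma2_u / (sigma2_u + sigma2_e / n)
    return gamma * y_bar + (1 - gamma) * x_beta

# A tiny area (n=2) is shrunk strongly toward the synthetic value 6.0;
# a well-sampled area (n=200) mostly keeps its direct estimate 10.0.
print(eblup_area_mean(10.0, 2, 6.0, sigma2_u=1.0, sigma2_e=4.0))    # ~7.33
print(eblup_area_mean(10.0, 200, 6.0, sigma2_u=1.0, sigma2_e=4.0))  # ~9.92
```

The paper's contribution is to apply this kind of predictor to factor scores summarising several well-being variables at once, rather than to each observed variable separately.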
Assessing Identification Risk in Survey Microdata Using Log-Linear Models
This article considers the assessment of the risk of identification of respondents in survey microdata, in the context of applications at the United Kingdom (UK) Office for National Statistics (ONS). The threat comes from the matching of categorical "key" variables between microdata records and external data sources and from the use of log-linear models to facilitate matching. While the potential use of such statistical models is well established in the literature, little consideration has been given to model specification or to the sensitivity of risk assessment to this specification. In numerical work not reported here, we have found that standard techniques for selecting log-linear models, such as chi-squared goodness-of-fit tests, provide little guidance regarding the accuracy of risk estimation for the very sparse tables generated by typical applications at ONS, for example, tables with millions of cells formed by cross-classifying six key variables, with sample sizes of 10 or 100,000. In this article we develop new criteria for assessing the specification of a log-linear model in relation to the accuracy of risk estimates. We find that, within a class of "reasonable" models, risk estimates tend to decrease as the complexity of the model increases. We develop criteria that detect "underfitting" (associated with overestimation of the risk). The criteria may also reveal "overfitting" (associated with underestimation) although not so clearly, so we suggest employing a forward model selection approach. Our criteria turn out to be related to established methods of testing for overdispersion in Poisson log-linear models. We show how our approach may be used for both file-level and record-level measures of risk. We evaluate the proposed procedures using samples drawn from the 2001 UK Census where the true risks can be determined and show that a forward selection approach leads to good risk estimates. There are several "good" models between which our approach provides little discrimination. The risk estimates are found to be stable across these models, implying a form of robustness. We also apply our approach to a large survey dataset. There is no indication that increasing the sample size necessarily leads to the selection of a more complex model. The risk estimates for this application display more variation but suggest a suitable upper bound.
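One widely used risk measure in this Poisson log-linear setting can be sketched directly: if a cell's population count is Poisson with rate λ and the sample is drawn at rate π, the unobserved remainder is Poisson(λ(1−π)), so a sample-unique record is population-unique with probability exp(−λ(1−π)). The fitted rates and sampling fraction below are assumed values, and this is a simplified stand-in for the article's full risk machinery.

```python
import math

def record_level_risk(sample_count, lam, pi):
    """P(population unique | sample unique) under a Poisson model.

    Population count F_k ~ Poisson(lam); with Bernoulli sampling at rate
    pi, the unsampled remainder F_k - f_k ~ Poisson(lam * (1 - pi)),
    independent of f_k. A sample unique is population unique exactly
    when that remainder is zero.
    """
    if sample_count != 1:
        return 0.0
    return math.exp(-lam * (1.0 - pi))

def file_level_risk(sample_counts, lams, pi):
    """Expected number of sample uniques that are population uniques."""
    return sum(record_level_risk(f, l, pi)
               for f, l in zip(sample_counts, lams))

# Assumed fitted rates from a log-linear model; sampling fraction 1%.
print(record_level_risk(1, lam=0.5, pi=0.01))   # ~0.61: likely unique
print(record_level_risk(1, lam=50.0, pi=0.01))  # ~0: many similar records
```

The article's point is that these risk estimates inherit their accuracy from the fitted rates λ, which is why the choice of log-linear model (and the under/overfitting criteria) matters.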