388 results for "Mullainathan, Sendhil"
Machine Learning: An Applied Econometric Approach
Machines are increasingly doing “intelligent” things. Face recognition algorithms use a large dataset of photos labeled as having a face or not to estimate a function that predicts the presence y of a face from pixels x. This similarity to econometrics raises questions: How do these new empirical tools fit with what we know? As empirical economists, how can we use them? We present a way of thinking about machine learning that gives it its own place in the econometric toolbox. Machine learning not only provides new tools, it solves a different problem. Specifically, machine learning revolves around the problem of prediction, while many economic applications revolve around parameter estimation. So applying machine learning to economics requires finding relevant tasks. Machine learning algorithms are now technically easy to use: you can download convenient packages in R or Python. This also raises the risk that the algorithms are applied naively or their output is misinterpreted. We hope to make them conceptually easier to use by providing a crisper understanding of how these algorithms work, where they excel, and where they can stumble—and thus where they can be most usefully applied.
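The prediction-versus-estimation distinction the abstract draws can be sketched in a few lines. This is an illustrative toy, not the paper's code: the data are synthetic, and the pairing of OLS against a cross-validated lasso (via scikit-learn, one of the convenient Python packages the abstract alludes to) is my own choice of example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -2.0, 0.5]        # only a few covariates truly matter
y = X @ beta + rng.normal(size=n)

# Econometric focus: the coefficient estimates beta-hat and their interpretation.
ols = LinearRegression().fit(X, y)

# Machine-learning focus: out-of-sample prediction y-hat. LassoCV tunes its
# penalty by cross-validation to minimize prediction error, not to keep the
# coefficient estimates unbiased.
lasso = LassoCV(cv=5).fit(X, y)

X_new = rng.normal(size=(200, p))
y_new = X_new @ beta + rng.normal(size=200)
print("OLS holdout R^2:  ", round(ols.score(X_new, y_new), 3))
print("Lasso holdout R^2:", round(lasso.score(X_new, y_new), 3))
```

Both models predict well here; the difference is in what each procedure is optimized for, which is exactly the "different problem" the abstract identifies.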
Dissecting racial bias in an algorithm used to manage the health of populations
Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
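The calibration comparison described above (hold the algorithm's score fixed and compare health across groups) can be sketched with synthetic data. Everything here is an illustrative assumption: the two groups, the illness counts, and the cost proxy are invented to mimic the mechanism the abstract describes, and are not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
is_b = rng.random(n) < 0.5               # two synthetic groups, A and B
illness = rng.poisson(3, size=n) + is_b  # group B is sicker on average...
cost = illness * 100 - is_b * 80         # ...but generates less cost per unit of illness

# The "algorithm" scores patients by cost and flags the top decile for extra help.
flagged = cost >= np.quantile(cost, 0.9)

mean_a = illness[flagged & ~is_b].mean()
mean_b = illness[flagged & is_b].mean()
print(f"Mean illness among flagged patients: A={mean_a:.2f}, B={mean_b:.2f}")
```

Because the score targets cost rather than illness, group B patients must be sicker than group A patients to receive the same score, reproducing in miniature the bias pattern the abstract reports.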
Getting to the Top of Mind: How Reminders Increase Saving
We provide evidence from field experiments with three different banks that reminder messages increase commitment attainment for clients who recently opened commitment savings accounts. Messages that mention both savings goals and financial incentives are particularly effective, whereas other content variations such as gain versus loss framing do not have significantly different effects. Nor do we find evidence that receiving additional late reminders has an additive effect. These empirical results do not map neatly into existing models, so we provide a simple model where limited attention to exceptional expenses can generate undersaving that is in turn mitigated by reminders. Data, as supplemental material, are available at http://dx.doi.org/10.1287/mnsc.2015.2296. This paper was accepted by Teck-Hua Ho, behavioral economics.
Human Decisions and Machine Predictions
Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those whom judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals.
An algorithmic approach to reducing unexplained pain disparities in underserved populations
Underserved populations experience higher levels of pain. These disparities persist even after controlling for the objective severity of diseases like osteoarthritis, as graded by human physicians using medical images, raising the possibility that underserved patients’ pain stems from factors external to the knee, such as stress. Here we use a deep learning approach to measure the severity of osteoarthritis, by using knee X-rays to predict patients’ experienced pain. We show that this approach dramatically reduces unexplained racial disparities in pain. Relative to standard measures of severity graded by radiologists, which accounted for only 9% (95% confidence interval (CI), 3–16%) of racial disparities in pain, algorithmic predictions accounted for 43% of disparities, or 4.7× more (95% CI, 3.2–11.8×), with similar results for lower-income and less-educated patients. This suggests that much of underserved patients’ pain stems from factors within the knee not reflected in standard radiographic measures of severity. We show that the algorithm’s ability to reduce unexplained disparities is rooted in the racial and socioeconomic diversity of the training set. Because algorithmic severity measures better capture underserved patients’ pain, and severity measures influence treatment decisions, algorithmic predictions could potentially redress disparities in access to treatments like arthroplasty. An algorithmic, machine-learning approach to measuring severe pain from osteoarthritis, applied to X-ray images of knees, suggests that reported disparities in knee pain in underserved populations can be reduced relative to the use of standard radiographic measures of disease severity.
Poverty Impedes Cognitive Function
The poor often behave in less capable ways, which can further perpetuate poverty. We hypothesize that poverty directly impedes cognitive function and present two studies that test this hypothesis. First, we experimentally induced thoughts about finances and found that this reduces cognitive performance among poor but not in well-off participants. Second, we examined the cognitive function of farmers over the planting cycle. We found that the same farmer shows diminished cognitive performance before harvest, when poor, as compared with after harvest, when rich. This cannot be explained by differences in time available, nutrition, or work effort. Nor can it be explained with stress: Although farmers do show more stress before harvest, that does not account for diminished cognitive performance. Instead, it appears that poverty itself reduces cognitive capacity. We suggest that this is because poverty-related concerns consume mental resources, leaving less for other tasks. These data provide a previously unexamined perspective and help explain a spectrum of behaviors among the poor. We discuss some implications for poverty policy.
The Psychological Lives of the Poor
All individuals rely on a fundamental set of mental capacities and functions, or bandwidth, in their economic and non-economic lives. Yet, many factors associated with poverty, such as malnutrition, alcohol consumption, or sleep deprivation, may tax this capacity. Previous research has demonstrated that such taxes often significantly alter judgments, preferences, and decision-making. A more suggestive but growing body of evidence points toward potential effects on productivity and utility. Considering the lives of the poor through the lens of bandwidth may improve our understanding of potential causes and consequences of poverty.
Behavior and Energy Policy
Investment in scalable, non–price-based behavioral interventions and research may prove valuable in improving energy efficiency. Many countries devote substantial public resources to research and development (R&D) for energy-efficient technologies. Energy efficiency, however, depends on both these technologies and the choices of the user. Policies to affect these choices focus on price changes (e.g., subsidies for energy-efficient goods) and information disclosure (e.g., mandated energy-use labels on appliances and autos). We argue that a broader approach is merited, one that draws on insights from the behavioral sciences. Just as we use R&D to develop “hard science” into useful technological solutions, a similar process can be used to develop basic behavioral science into large-scale business and policy innovations. Cost-effectiveness can be rigorously measured using scientific field-testing. Recent examples of scaling behaviorally informed R&D into large energy conservation programs suggest that this could have very high returns.
Predictive modeling of U.S. health care spending in late life
In the United States, one-quarter of Medicare spending occurs in the last 12 months of life, which is commonly seen as evidence of waste. Einav et al. used predictive modeling to reassess this interpretation. From detailed Medicare claims data, the extent to which spending is concentrated not just on those who die, but on those who are expected to die, can be estimated. Most deaths are unpredictable; hence, focusing on end-of-life spending does not necessarily identify “wasteful” spending. Science, this issue p. 1462. The United States spends a lot on people in the last year of life, but those at high risk of death account for relatively little spending. That one-quarter of Medicare spending in the United States occurs in the last year of life is commonly interpreted as waste. But this interpretation presumes knowledge of who will die and when. Here we analyze how spending is distributed by predicted mortality, based on a machine-learning model of annual mortality risk built using Medicare claims. Death is highly unpredictable. Less than 5% of spending is accounted for by individuals with predicted mortality above 50%. The simple fact that we spend more on the sick—both on those who recover and those who die—accounts for 30 to 50% of the concentration of spending on the dead. Our results suggest that spending on the ex post dead does not necessarily mean that we spend on the ex ante “hopeless.”
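The decomposition the abstract reports (how much spending falls on people with high predicted mortality) can be sketched as follows. The risk and spending distributions here are synthetic assumptions chosen only to mimic the qualitative pattern that death is hard to predict; no Medicare data are involved.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# Predicted annual mortality risk: heavily skewed toward zero, because most
# deaths are unpredictable and very few people look "hopeless" ex ante.
risk = rng.beta(0.5, 8, size=n)
# Spending rises with sickness, and hence with risk, for survivors and
# decedents alike.
spending = rng.gamma(shape=2.0, scale=1 + 20 * risk)

high_risk = risk > 0.5
share = spending[high_risk].sum() / spending.sum()
print(f"Share of spending on predicted mortality > 50%: {share:.1%}")
```

With a risk distribution concentrated near zero, the share of spending attributable to ex ante high-mortality individuals is small, which is the paper's central point.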
Prediction Policy Problems
Most empirical policy work focuses on causal inference. We argue an important class of policy problems does not require causal inference but instead requires predictive inference. Solving these “prediction policy problems” requires more than simple regression techniques, since these are tuned to generating unbiased estimates of coefficients rather than minimizing prediction error. We argue that new developments in the field of “machine learning” are particularly useful for addressing these prediction problems. We use an example from health policy to illustrate the large potential social welfare gains from improved prediction.
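The point about regression tuning can be made concrete: a penalized regression deliberately biases its coefficients toward zero in exchange for lower prediction error. The setup below is a generic illustration with synthetic data; ridge regression via scikit-learn is my choice of example, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(3)
n, p = 60, 50                          # few observations, many covariates
X = rng.normal(size=(n, p))
beta = rng.normal(size=p) * 0.3
y = X @ beta + rng.normal(size=n)

ols = LinearRegression().fit(X, y)     # unbiased, but very high variance here
ridge = Ridge(alpha=10.0).fit(X, y)    # shrinks coefficients toward zero

X_test = rng.normal(size=(1000, p))
y_test = X_test @ beta + rng.normal(size=1000)

def mse(model):
    return float(np.mean((model.predict(X_test) - y_test) ** 2))

print("OLS test MSE:  ", round(mse(ols), 2))
print("Ridge test MSE:", round(mse(ridge), 2))
```

The ridge coefficients are smaller in norm than the OLS ones (they are biased), yet in this many-covariates regime the ridge fit predicts better out of sample: the right tool depends on whether the policy problem needs coefficients or predictions.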