728 result(s) for "Social indicators Mathematical models."
Outnumbered : from Facebook and Google to fake news and filter-bubbles - the algorithms that control our lives
"In this book, David Sumpter takes an algorithm-strewn journey to the dark side of mathematics. He investigates the equations that analyse us, influence us and will (maybe) become like us, answering questions such as: Are Google algorithms racist and sexist? Why do election predictions fail so drastically? What does the future hold as we relinquish our decision-making to machines? Featuring interviews with those working at the cutting edge of algorithm research, along with a healthy dose of mathematical self-experiment, Outnumbered will explain how mathematics and statistics work in the real world, and what we should and shouldn't worry about." -- from book cover
Transition Modeling and Econometric Convergence Tests
A new panel data model is proposed to represent the behavior of economies in transition, allowing for a wide range of possible time paths and individual heterogeneity. The model has both common and individual specific components, and is formulated as a nonlinear time varying factor model. When applied to a micro panel, the decomposition provides flexibility in idiosyncratic behavior over time and across section, while retaining some commonality across the panel by means of an unknown common growth component. This commonality means that when the heterogeneous time varying idiosyncratic components converge over time to a constant, a form of panel convergence holds, analogous to the concept of conditional sigma convergence. The paper provides a framework of asymptotic representations for the factor components that enables the development of econometric procedures of estimation and testing. In particular, a simple regression based convergence test is developed, whose asymptotic properties are analyzed under both null and local alternatives, and a new method of clustering panels into club convergence groups is constructed. These econometric methods are applied to analyze convergence in cost of living indices among 19 U.S. metropolitan cities.
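The regression-based convergence test described in this abstract can be illustrated with a minimal sketch in the spirit of a log t regression on the cross-sectional variance of relative transition paths. The trimming fraction, index conventions, and panel construction below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def log_t_test(X, r=0.3):
    """Hedged sketch of a log t convergence test. X is a (T, N) panel
    (e.g. log cost-of-living indices for N cities over T periods).
    Returns the regression slope and its t-statistic; a strongly
    negative t-statistic is evidence against panel convergence."""
    T, N = X.shape
    # relative transition paths h_it = N * X_it / sum_i X_it
    h = N * X / X.sum(axis=1, keepdims=True)
    # cross-sectional variance of the transition paths at each t
    H = ((h - 1.0) ** 2).mean(axis=1)
    start = int(np.floor(r * T))          # discard early sample fraction r
    t_idx = np.arange(start, T)
    y = np.log(H[start - 1] / H[t_idx]) - 2.0 * np.log(np.log(t_idx + 1.0))
    x = np.log(t_idx + 1.0)
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    s2 = resid @ resid / (len(y) - 2)
    se = np.sqrt(s2 * np.linalg.inv(A.T @ A)[1, 1])
    return beta[1], beta[1] / se
```

On a panel whose idiosyncratic components decay toward a common path the slope is positive; on a panel with permanent heterogeneity it is negative.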
DISAGREEMENT AMONG FORECASTERS IN G7 COUNTRIES
We investigate determinants of disagreement—cross-sectional dispersion of individual forecasts—about key economic indicators. Disagreement about economic activity, in particular about GDP growth, has a distinct dynamic from disagreement about prices: inflation and interest rates. Disagreement about GDP growth intensifies strongly during recessions. Disagreement about prices rises with their level, declines under independent central banks, and both its level and its sensitivity to macroeconomic variables are larger in countries where central banks became independent only around the mid-1990s. Our findings suggest that credible monetary policy contributes to anchoring of expectations about inflation and interest rates. Disagreement for both groups of indicators increases with uncertainty about the actual series.
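The disagreement measure defined in this abstract, cross-sectional dispersion of individual forecasts, is straightforward to compute. This sketch uses the cross-sectional standard deviation, one common operationalization (the paper may use a different dispersion statistic):

```python
import numpy as np

def disagreement(forecasts):
    """Disagreement as cross-sectional dispersion: the standard deviation
    across forecasters in each period. `forecasts` has shape
    (periods, forecasters), e.g. GDP growth forecasts for G7 countries."""
    f = np.asarray(forecasts, dtype=float)
    return f.std(axis=1, ddof=1)
```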
Measurement and Meaning in Information Systems and Organizational Research: Methodological and Philosophical Foundations
Despite renewed interest and many advances in methodology in recent years, information systems and organizational researchers face confusing and inconsistent guidance on how to choose amongst, implement, and interpret findings from the use of different measurement procedures. In this article, the related topics of measurement and construct validity are summarized and discussed, with particular focus on formative and reflective indicators and common method bias, and, where relevant, a number of allied issues are considered. The perspective taken is an eclectic and holistic one and attempts to address conceptual and philosophical essentials, raise salient questions, and pose plausible solutions to critical measurement dilemmas occurring in the managerial, behavioral, and social sciences.
An aggregate quantity framework for measuring and decomposing productivity change
Total factor productivity (TFP) can be defined as the ratio of an aggregate output to an aggregate input. This definition naturally leads to TFP indexes that can be expressed as the ratio of an output quantity index to an input quantity index. If the aggregator functions satisfy certain regularity properties then these TFP indexes are said to be multiplicatively complete. This paper formally defines what is meant by completeness and reveals that (1) the class of multiplicatively complete TFP indexes includes Laspeyres, Paasche, Fisher, Törnqvist and Hicks-Moorsteen indexes, (2) the popular Malmquist TFP index of Caves et al. (Econometrica 50(6): 1393-1414, 1982a) is incomplete, implying it cannot always be interpreted as a measure of productivity change, (3) all multiplicatively complete TFP indexes can be exhaustively decomposed into measures of technical change and efficiency change, and (4) the efficiency change component can be further decomposed into measures of technical, mix and scale efficiency change. Artificial data are used to illustrate the decomposition of Hicks-Moorsteen and Fisher TFP indexes.
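A multiplicatively complete TFP index of the kind listed in this abstract can be illustrated with the Fisher case: the Fisher output quantity index divided by the Fisher input quantity index, where each Fisher index is the geometric mean of its Laspeyres and Paasche counterparts. A minimal sketch with illustrative data shapes (not the paper's artificial data):

```python
import numpy as np

def fisher_quantity_index(p0, q0, p1, q1):
    """Fisher ideal quantity index: geometric mean of the Laspeyres index
    (base-period prices p0) and the Paasche index (current prices p1)."""
    p0, q0, p1, q1 = map(np.asarray, (p0, q0, p1, q1))
    laspeyres = (p0 @ q1) / (p0 @ q0)
    paasche = (p1 @ q1) / (p1 @ q0)
    return np.sqrt(laspeyres * paasche)

def fisher_tfp_index(out_p0, out_q0, out_p1, out_q1,
                     in_w0, in_x0, in_w1, in_x1):
    """Multiplicatively complete Fisher TFP index: aggregate output
    quantity index divided by aggregate input quantity index."""
    return (fisher_quantity_index(out_p0, out_q0, out_p1, out_q1) /
            fisher_quantity_index(in_w0, in_x0, in_w1, in_x1))
```

With a single output doubling at constant prices and inputs unchanged, the index equals 2, i.e. productivity doubled.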
Urban scaling and its deviations: revealing the structure of wealth, innovation and crime across cities
With urban population increasing dramatically worldwide, cities are playing an increasingly critical role in human societies and the sustainability of the planet. An obstacle to effective policy is the lack of meaningful urban metrics based on a quantitative understanding of cities. Typically, linear per capita indicators are used to characterize and rank cities. However, these implicitly ignore the fundamental role of nonlinear agglomeration integral to the life history of cities. As such, per capita indicators conflate general nonlinear effects, common to all cities, with local dynamics, specific to each city, failing to provide direct measures of the impact of local events and policy. Agglomeration nonlinearities are explicitly manifested by the superlinear power law scaling of most urban socioeconomic indicators with population size, all with similar exponents (≈1.15). As a result, larger cities are disproportionately the centers of innovation, wealth and crime, all to approximately the same degree. We use these general urban laws to develop new urban metrics that disentangle dynamics at different scales and provide true measures of local urban performance. New rankings of cities and a novel and simpler perspective on urban systems emerge. We find that local urban dynamics display long-term memory, so cities under- or outperforming their size expectation maintain that (dis)advantage for decades. Spatiotemporal correlation analyses reveal a novel functional taxonomy of U.S. metropolitan areas that is generally not organized geographically but based instead on common local economic models, innovation strategies and patterns of crime.
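The scale-adjusted metrics described in this abstract boil down to fitting the power law Y = Y0 * N^beta in log-log space and ranking cities by their residuals, i.e. their performance relative to the size expectation. A minimal sketch using OLS (the paper's estimation details may differ):

```python
import numpy as np

def scale_adjusted_indicator(population, indicator):
    """Fit log(Y) = log(Y0) + beta * log(N) by least squares and return
    the estimated exponent beta plus each city's scale-adjusted residual
    xi_i = log(Y_i) - (log(Y0) + beta * log(N_i)). Positive residuals
    mark cities outperforming their size expectation."""
    logN = np.log(np.asarray(population, dtype=float))
    logY = np.log(np.asarray(indicator, dtype=float))
    A = np.column_stack([np.ones_like(logN), logN])
    (logY0, beta), *_ = np.linalg.lstsq(A, logY, rcond=None)
    residuals = logY - (logY0 + beta * logN)
    return beta, residuals
```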
A global Malmquist-Luenberger productivity index
This paper introduces an alternative environmentally sensitive productivity growth index, which is circular and free from the infeasibility problem. In doing so, we integrated the concept of the global production possibility set and the directional distance function. Like the conventional Malmquist-Luenberger productivity index, it can also be decomposed into sources of productivity growth. The suggested index is employed in analyzing 26 OECD countries for the period 1990-2003. We also employed the conventional Malmquist-Luenberger productivity index, the global Malmquist productivity index and the conventional Malmquist productivity index for comparative purposes in this empirical investigation.
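At the formula level, the global index evaluates both periods against a single global technology, which is what makes it circular and immune to infeasibility. A hedged sketch assuming the directional distance function values have already been computed (the DEA linear programs that produce them are omitted, and the decomposition into sources of growth is not shown):

```python
def ml_productivity_index(d_t, d_t1):
    """Given directional distance function values measured against one
    global technology for periods t and t+1, the global
    Malmquist-Luenberger index is (1 + d_t) / (1 + d_t1); values above 1
    indicate environmentally sensitive productivity growth."""
    return (1.0 + d_t) / (1.0 + d_t1)
```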
Modeling future spread of infections via mobile geolocation data and population dynamics. An application to COVID-19 in Brazil
Mobile geolocation data is a valuable asset in the assessment of movement patterns of a population. Once a highly contagious disease takes hold in a location, movement patterns help predict its potential spatial spread, so mobile data becomes a crucial tool for epidemic models. In this work, based on millions of anonymized mobile visit records in Brazil, we investigate the most probable spreading patterns of COVID-19 within states of Brazil. The study is intended to help public administrators in action plans and resource allocation, whilst studying how mobile geolocation data may be employed as a measure of population mobility during an epidemic. This study focuses on the states of São Paulo and Rio de Janeiro during March 2020, when the disease first started to spread in these states. Metapopulation models of disease spread were simulated to evaluate the risk of infection of each city within the states, ranking cities by how long the disease is expected to take to reach them. We observed that, although the highest-risk regions are those closest to the capital cities, where the outbreak started, some cities in the countryside are also at great risk. The mathematical framework developed in this paper is quite general and may be applied to locations around the world to evaluate the risk of infection by diseases, especially COVID-19, when geolocation data is available.
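The metapopulation approach described in this abstract can be sketched in miniature: per-city SIR compartments coupled by a mobility matrix, with cities ranked by when infection first reaches them. Everything below (parameter values, the mobility matrix, the arrival threshold) is an illustrative assumption, not the paper's calibration:

```python
import numpy as np

def metapopulation_sir(S, I, R, mobility, beta=0.3, gamma=0.1, steps=100):
    """Discrete-time metapopulation SIR sketch. `mobility` is a
    row-stochastic matrix (in practice estimated from geolocation data)
    that redistributes people between cities each step. Returns the step
    at which each city's infected count first exceeds 1 (np.inf if the
    disease never reaches it within `steps`)."""
    S, I, R = (np.asarray(v, dtype=float) for v in (S, I, R))
    M = np.asarray(mobility, dtype=float)
    arrival = np.where(I > 0, 0.0, np.inf)
    for t in range(1, steps + 1):
        N = S + I + R
        new_inf = beta * S * I / np.maximum(N, 1e-12)  # local transmission
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        S, I, R = M.T @ S, M.T @ I, M.T @ R            # mobility mixing
        arrival = np.where((I > 1.0) & np.isinf(arrival), t, arrival)
    return arrival
```

On a three-city chain seeded in the first city, arrival times increase with distance along the chain, reproducing the "risk ranking" idea.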
The COVID-19 Pandemic Vulnerability Index (PVI) Dashboard: Monitoring County-Level Vulnerability Using Visualization, Statistical Modeling, and Machine Learning
Methods: The current PVI model (version 11.2.1) integrates multiple data streams into an overall score derived from 12 key indicators (well-established general vulnerability factors for public health, plus emerging factors relevant to the pandemic) distributed across four domains: current infection rates, baseline population concentration, current interventions, and health and environmental vulnerabilities. Data sources include the Social Vulnerability Index (SVI) of the Centers for Disease Control and Prevention (CDC) for emergency response and hazard mitigation planning (Horney et al. 2017), testing rates from the COVID Tracking Project (Atlantic Monthly Group 2020), social distancing metrics from mobile device data (https://www.unacast.com/covid19/social-distancing-scoreboard), and dynamic measures of disease spread and case numbers (https://usafacts.org/issues/coronavirus/).
Acknowledgments: We thank the information technology and web services staff at the National Institute of Environmental Health Sciences (NIEHS)/National Institutes of Health (NIH) for their help and support, as well as J.K. Cetina and D.J. Reif for their useful technical input and advice.
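The aggregation behind a composite index of this kind can be sketched as min-max scaling of each indicator across counties followed by a weighted average. The actual PVI uses the ToxPi framework, so this is only the basic composite-scoring idea, with hypothetical data and equal default weights:

```python
import numpy as np

def composite_vulnerability(indicators, weights=None):
    """Composite-index sketch: min-max scale each indicator column so
    that higher means more vulnerable, then take a weighted average.
    `indicators` has shape (counties, indicators). Not the ToxPi model,
    just the basic aggregation idea."""
    X = np.asarray(indicators, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    scaled = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    k = X.shape[1]
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, float)
    return scaled @ (w / w.sum())
```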
Realtime nowcasting with a Bayesian mixed frequency model with stochastic volatility
The paper develops a method for producing current quarter forecasts of gross domestic product growth with a (possibly large) range of available within-the-quarter monthly observations of economic indicators, such as employment and industrial production, and financial indicators, such as stock prices and interest rates. In light of existing evidence of time variation in the variances of shocks to gross domestic product, we consider versions of the model with both constant variances and stochastic volatility. We use Bayesian methods to estimate the model, to facilitate providing shrinkage on the (possibly large) set of model parameters and conveniently generate predictive densities. We provide results on the accuracy of nowcasts of realtime gross domestic product growth in the USA from 1985 through 2011. In terms of point forecasts, our proposal improves significantly on auto-regressive models and performs comparably with survey forecasts. In addition, it provides reliable density forecasts, for which the stochastic volatility specification is quite useful.
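A drastically simplified sketch of the shrinkage component only: ridge regression of GDP growth on within-quarter indicator readings is the posterior mean under a Gaussian prior on the coefficients. It omits the mixed-frequency machinery, stochastic volatility, and density forecasts that the paper's full model provides:

```python
import numpy as np

def shrinkage_nowcast(X, y, x_now, lam=1.0):
    """Ridge (Bayesian shrinkage) nowcast sketch. X: past quarters x
    monthly-indicator readings; y: past GDP growth; x_now: the current
    quarter's readings; lam: prior precision (larger = more shrinkage).
    Returns the point nowcast for the current quarter."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xb = np.column_stack([np.ones(len(X)), X])
    k = Xb.shape[1]
    P = lam * np.eye(k)
    P[0, 0] = 0.0                         # leave the intercept unshrunk
    beta = np.linalg.solve(Xb.T @ Xb + P, Xb.T @ y)   # posterior mean
    return np.concatenate([[1.0], np.asarray(x_now, dtype=float)]) @ beta
```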