4,297 result(s) for "Behavioral/Experimental Economics"
Multiple hypothesis testing in experimental economics
The analysis of data from experiments in economics routinely involves testing multiple null hypotheses simultaneously. These different null hypotheses arise naturally in this setting for at least three different reasons: when there are multiple outcomes of interest and it is desired to determine on which of these outcomes a treatment has an effect; when the effect of a treatment may be heterogeneous in that it varies across subgroups defined by observed characteristics and it is desired to determine for which of these subgroups a treatment has an effect; and finally when there are multiple treatments of interest and it is desired to determine which treatments have an effect relative to either the control or relative to each of the other treatments. In this paper, we provide a bootstrap-based procedure for testing these null hypotheses simultaneously using experimental data in which simple random sampling is used to assign treatment status to units. Using the general results in Romano and Wolf (Ann Stat 38:598–633, 2010), we show under weak assumptions that our procedure (1) asymptotically controls the familywise error rate—the probability of one or more false rejections—and (2) is asymptotically balanced in that the marginal probability of rejecting any true null hypothesis is approximately equal in large samples. Importantly, by incorporating information about dependence ignored in classical multiple testing procedures, such as the Bonferroni and Holm corrections, our procedure has much greater ability to detect truly false null hypotheses. In the presence of multiple treatments, we additionally show how to exploit logical restrictions across null hypotheses to further improve power. We illustrate our methodology by revisiting the study by Karlan and List (Am Econ Rev 97(5):1774–1793, 2007) of why people give to charitable causes.
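The classical Holm step-down correction that the paper's bootstrap procedure improves upon can be sketched in a few lines (a minimal stdlib-Python illustration, not the paper's method; the function and variable names are ours):

```python
def holm_reject(p_values, alpha=0.05):
    """Holm step-down procedure: controls the familywise error rate.

    Returns a list of booleans (True = reject) aligned with p_values.
    Unlike the paper's bootstrap procedure, Holm ignores dependence
    between the test statistics, which costs power.
    """
    m = len(p_values)
    # Visit hypotheses in ascending order of p-value.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Step-down threshold: alpha / (m - rank).
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one hypothesis survives, stop rejecting
    return reject
```

With four p-values (0.01, 0.04, 0.03, 0.005) at alpha = 0.05, the two smallest clear their step-down thresholds and the other two do not.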
Optimal Abatement Technology Licensing in a Dynamic Transboundary Pollution Game: Fixed Fee Versus Royalty
Transboundary pollution poses a major threat to the environment and human health. An effective approach to addressing this problem is the adoption of long-term abatement technology; however, many developing regions lack such technologies, which can be acquired by licensing from developed regions. This study formulates a differential game model of transboundary pollution between two asymmetric regions, one of which possesses advanced abatement technology that reduces abatement costs and licenses this technology to the other region through royalty or fixed-fee licensing. We characterize the equilibrium decisions of the regions and find that fixed-fee licensing is superior to royalty licensing from the viewpoint of both regions: under fixed-fee licensing, the regions gain greater incremental revenues and incur less environmental damage. Subsequently, we analyze the steady-state equilibrium behaviors and the effects of the model parameters on licensing performance. The analysis indicates that a myopic view leads the regions to maximize short-term revenue, resulting in an increase in the total pollution stock. Moreover, a high level of abatement technology or a high emission tax prompts the licensee region to choose the fixed-fee approach, which is beneficial both economically and environmentally for both regions.
Conducting interactive experiments online
Online labor markets provide new opportunities for behavioral research, but conducting economic experiments online raises important methodological challenges. This particularly holds for interactive designs. In this paper, we provide a methodological discussion of the similarities and differences between interactive experiments conducted in the laboratory and online. To this end, we conduct a repeated public goods experiment with and without punishment using samples from the laboratory and the online platform Amazon Mechanical Turk. We chose to replicate this experiment because it is long and logistically complex. It therefore provides a good case study for discussing the methodological and practical challenges of online interactive experimentation. We find that basic behavioral patterns of cooperation and punishment in the laboratory are replicable online. The most important challenge of online interactive experiments is participant dropout. We discuss measures for reducing dropout and show that, for our case study, dropouts are exogenous to the experiment. We conclude that data quality for interactive experiments via the Internet is adequate and reliable, making online interactive experimentation a potentially valuable complement to laboratory studies.
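The payoff structure of a linear public goods game with punishment, like the one replicated here, is standard and easy to state in code; the endowment, multiplier, and punishment technology below are illustrative assumptions, not the paper's parameters:

```python
def public_goods_payoffs(contributions, endowment=20, mpcr=0.5):
    """Linear public goods game: every token contributed goes into a
    common pot whose return is shared equally. Free-riding is
    individually optimal when mpcr < 1, while full contribution is
    socially optimal when n * mpcr > 1."""
    pot = sum(contributions)
    return [endowment - c + mpcr * pot for c in contributions]

def apply_punishment(payoffs, points_assigned, cost=1, impact=3):
    """Punishment stage: assigning a point costs the punisher `cost`
    and reduces the target's payoff by `impact`.
    points_assigned[i][j] = points player i assigns to player j."""
    n = len(payoffs)
    out = list(payoffs)
    for i in range(n):
        for j in range(n):
            out[i] -= cost * points_assigned[i][j]    # cost to punisher
            out[j] -= impact * points_assigned[i][j]  # damage to target
    return out
```

With three full contributors and one free rider, the free rider earns the most before punishment, which is exactly the tension that the punishment stage is designed to address.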
Extracting Appropriate Nodal Marginal Prices for All Types of Committed Reserve
This paper proposes a framework to extract appropriate locational marginal prices for each type of committed reserve (up- and down-going reserves on both the generation and demand sides). The proposed reserve pricing scheme accounts for the lost opportunity of selling convertible products (energy and reserve). Fair prices can be obtained for capacity reserves under this framework, since it assigns the same prices to the same services provided at the same location. The scheme gives all market participants the appropriate signals to modify their offers according to the system operator's requirements. The pricing problem is decomposed into separate hourly sub-problems subject to the bounding constraints. To show the effectiveness of the proposed algorithm, it is applied to the IEEE Reliability Test System and the results are discussed.
Poverty and economic decision making: a review of scarcity theory
Poverty is associated with a wide range of counterproductive economic behaviors. Scarcity theory proposes that poverty itself induces a scarcity mindset, which subsequently forces the poor into suboptimal decisions and behaviors. The purpose of our work is to provide an integrated, up-to-date, critical review of this theory. To this end, we reviewed the empirical evidence for three fundamental propositions: (1) Poverty leads to attentional focus and neglect, causing overborrowing; (2) poverty induces trade-off thinking, resulting in more consistent consumption decisions; and (3) poverty reduces mental bandwidth and subsequently increases time discounting and risk aversion. Our findings indicate that the current literature predominantly confirms the first and second propositions, although methodological issues prevent a firm conclusion. Evidence for the third proposition was not conclusive. Additionally, we evaluated the overall status of scarcity theory. Although the theory provides an original, coherent, and parsimonious explanation for the relationship between financial scarcity and economic decision making, it does not fully accord with the data and lacks some precision. We conclude that further theoretical and empirical work is needed to build a stronger theory.
Explainable Machine Learning in Credit Risk Management
The paper proposes an explainable Artificial Intelligence model that can be used in credit risk management and, in particular, in measuring the risks that arise when credit is borrowed through peer-to-peer lending platforms. The model applies correlation networks to Shapley values so that Artificial Intelligence predictions are grouped according to the similarity of the underlying explanations. The empirical analysis of 15,000 small and medium companies asking for credit reveals that both risky and non-risky borrowers can be grouped according to a set of similar financial characteristics, which can be employed to explain their credit score and, therefore, to predict their future behaviour.
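For a linear scoring model, Shapley values have a closed form (each weight times the feature's deviation from the baseline), which makes the grouping idea easy to sketch; the weights, data points, and the use of cosine similarity below are hypothetical illustrations, not the paper's specification:

```python
def shapley_linear(weights, x, baseline):
    """For a linear model f(x) = sum_i w_i * x_i, the Shapley value of
    feature i relative to a baseline point is w_i * (x_i - baseline_i)."""
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

def cosine(u, v):
    """Cosine similarity between two explanation vectors; borrowers
    with similar vectors can then be grouped, e.g. by thresholding
    pairwise similarity in a correlation network."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)
```

Two borrowers whose explanation vectors point in the same direction (similar drivers of their score) get cosine similarity near 1 and would land in the same group.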
Bankruptcy Prediction using the XGBoost Algorithm and Variable Importance Feature Engineering
The emergence of big data, information technology, and social media provides an enormous amount of information about firms’ current financial health. When facing this abundance of data, decision makers must identify the crucial information to build an effective and operative prediction model with high-quality estimated output. Feature selection techniques can be used to select significant variables without lowering the classification performance. In addition, one of the main goals of bankruptcy prediction is to identify the model specification with the strongest explanatory power. Building on this premise, an improved XGBoost algorithm based on feature importance selection (FS-XGBoost) is proposed. FS-XGBoost is compared with seven machine learning algorithms based on three well-known feature selection methods that are frequently used in bankruptcy prediction: stepwise discriminant analysis, stepwise logistic regression, and partial least squares discriminant analysis (PLS-DA). Our experimental results confirm that FS-XGBoost provides more accurate predictions, outperforming traditional feature selection methods.
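The selection step of such a pipeline can be sketched generically: score each feature's importance, keep the top k, and refit on the reduced set. As a self-contained stand-in for a fitted XGBoost model's importances, this sketch scores features by absolute correlation with the label; the data and function names are assumptions for illustration:

```python
def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_top_k(columns, labels, k):
    """Rank feature columns by |correlation| with the label and return
    the indices of the k highest-scoring ones. In FS-XGBoost the score
    would instead come from the fitted model's feature importances."""
    scores = [(abs(pearson(col, labels)), idx)
              for idx, col in enumerate(columns)]
    scores.sort(reverse=True)
    return sorted(idx for _, idx in scores[:k])
```

A second model would then be trained only on the selected columns, which is the "select, then refit" structure the paper builds on.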
A survey of experimental research on contests, all-pay auctions and tournaments
Many economic, political and social environments can be described as contests in which agents exert costly effort while competing over the distribution of a scarce resource. These environments have been studied using Tullock contests, all-pay auctions and rank-order tournaments. This survey provides a comprehensive review of experimental research on these three canonical contests. First, we review studies investigating the basic structure of contests, including the number of players and prizes, spillovers and externalities, heterogeneity, risk and incomplete information. Second, we discuss dynamic contests and multi-battle contests. Then we review studies examining sabotage, feedback, bias, collusion, alliances, group contests and gender, as well as field experiments. Finally, we discuss applications of contests and suggest directions for future research.
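In the canonical symmetric Tullock contest with n risk-neutral players, a prize V and linear effort cost, the unique Nash equilibrium effort is x* = (n - 1)V / n^2; a quick numeric best-response check confirms the closed form (stdlib Python, illustrative values):

```python
def tullock_payoff(x_i, x_others, prize):
    """Expected payoff: win probability x_i / (total effort) times the
    prize, minus the linear cost of one's own effort x_i."""
    total = x_i + sum(x_others)
    win_prob = x_i / total if total > 0 else 1 / (1 + len(x_others))
    return win_prob * prize - x_i

def symmetric_equilibrium(n, prize):
    """Closed-form symmetric equilibrium effort x* = (n - 1) * V / n**2."""
    return (n - 1) * prize / n ** 2

# Best-response check: holding the rivals at x*, no unilateral
# deviation on a coarse grid should beat playing x* yourself.
n, V = 4, 100.0
x_star = symmetric_equilibrium(n, V)                     # 18.75
eq_payoff = tullock_payoff(x_star, [x_star] * (n - 1), V)
assert all(tullock_payoff(d, [x_star] * (n - 1), V) <= eq_payoff + 1e-9
           for d in [x * 0.5 for x in range(0, 100)])
```

Note how much rent the contest dissipates: total equilibrium effort is n * x* = (n - 1)V / n, i.e. 75 of the 100-unit prize in this example.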
Forecasting of Real GDP Growth Using Machine Learning Models: Gradient Boosting and Random Forest Approach
This paper presents a method for creating machine learning models, specifically a gradient boosting model and a random forest model, to forecast real GDP growth. This study focuses on the real GDP growth of Japan and produces forecasts for the years from 2001 to 2018. The forecasts by the International Monetary Fund and Bank of Japan are used as benchmarks. To improve out-of-sample prediction, cross-validation is used to choose the optimal hyperparameters. The accuracy of the forecast is measured by mean absolute percentage error and root mean squared error. The results of this paper show that for the 2001–2018 period, the forecasts by the gradient boosting model and random forest model are more accurate than the benchmark forecasts. Between the gradient boosting and random forest models, the gradient boosting model turns out to be more accurate. This study encourages increasing the use of machine learning models in macroeconomic forecasting.
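The two accuracy measures used in the comparison are simple to state in code (a stdlib sketch; the sample values below are made up):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent. Undefined when any
    actual value is zero, which matters for growth-rate series."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error: penalizes large misses more than MAPE."""
    return (sum((a - f) ** 2
                for a, f in zip(actual, forecast)) / len(actual)) ** 0.5
```

Because RMSE squares the errors, a model with a few large misses can rank well on MAPE yet poorly on RMSE, which is why reporting both is informative.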
Reinforcement Learning in Economics and Finance
Reinforcement learning algorithms describe how an agent can learn an optimal action policy in a sequential decision process through repeated experience. In a given environment, the agent's policy yields running and terminal rewards. As in online learning, the agent learns sequentially. As in multi-armed bandit problems, when the agent picks an action it cannot observe ex post the rewards that other action choices would have induced. In reinforcement learning, however, actions have further consequences: they influence not only rewards but also future states of the world. The goal of reinforcement learning is to find an optimal policy, a mapping from states of the world to actions that maximizes cumulative reward, which is a long-term objective. Exploring might be suboptimal over a short horizon but can lead to optimal long-term outcomes. Many optimal control problems, popular in economics for more than forty years, can be expressed in the reinforcement learning framework, and recent advances in computational science, provided in particular by deep learning algorithms, can be used by economists to solve complex behavioral problems. In this article, we present a state-of-the-art review of reinforcement learning techniques and their applications in economics, game theory, operations research and finance.
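The tabular Q-learning update Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)) is a canonical instance of this framework; a minimal sketch on a hypothetical two-state problem where the patient action forgoes a small immediate reward for a larger delayed one (all names and parameter values are illustrative):

```python
import random

def q_learning(step, n_states, n_actions, episodes=2000,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning. `step(s, a)` returns (reward, next_state, done).
    Actions influence both rewards and future states, which is what
    separates reinforcement learning from bandit problems."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration: short-term suboptimal moves
            # can pay off in the long run.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            r, s2, done = step(s, a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

# Hypothetical chain: in state 0, action 0 takes a small immediate
# reward and ends; action 1 earns nothing now but reaches state 1,
# where a large terminal reward awaits.
def step(s, a):
    if s == 0:
        return (1.0, 0, True) if a == 0 else (0.0, 1, False)
    return (10.0, 1, True)

q = q_learning(step, n_states=2, n_actions=2)
assert q[0][1] > q[0][0]  # the patient action is learned to be better
```

The learned value of the patient action approaches gamma * 10 = 9, against 1 for the myopic action, so the greedy policy ends up choosing delayed gratification.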