61 result(s) for "Guerci, Eric"
Meaningful learning in weighted voting games: an experiment
By employing binary committee choice problems, this paper investigates how varying or eliminating feedback about payoffs affects: (1) subjects’ learning about the underlying relationship between their nominal voting weights and their expected payoffs in weighted voting games; (2) the transfer of acquired learning from one committee choice problem to a similar but different problem. In the experiment, subjects choose to join one of two committees (weighted voting games) and obtain a payoff stochastically determined by a voting theory. We found that: (i) subjects learned to choose the committee that generates a higher expected payoff even without feedback about the payoffs they received; (ii) there was statistically significant evidence of “meaningful learning” (transfer of learning) only for the treatment with no payoff-related feedback. This finding calls for re-thinking existing models of learning to incorporate some type of introspection.
A methodological note on a weighted voting experiment
We conducted a sensitivity analysis of the results of weighted voting experiments by varying two features of the experimental protocol of Montero et al. (Soc Choice Welf 30(1):69-87, 2008): (1) the way in which the roles of subjects are reassigned in each round [random role (RR) vs. fixed role (FR)] and (2) the number of proposals that subjects can simultaneously approve [multiple approval (MA) vs. single approval (SA)]. The differences in these protocols affected the relative frequencies of minimum winning coalitions (MWCs) as well as how negotiations proceeded. Under the protocol with FR and SA, compared with the protocol with RR and MA, 3-player MWCs were observed more frequently, negotiations were much longer, subjects made fewer mistakes, and proposal-objection dynamics were observed more frequently.
Une nouvelle approche expérimentale pour tester les modèles quantiques de l’erreur de conjonction [A new experimental approach to test quantum-like models for the conjunction fallacy]
In classical probability theory, the probability of the conjunction of two events cannot exceed the probability of either event alone. Yet agents do not always judge this way empirically: this is the traditional conjunction fallacy. One of the currently promising accounts of this paradox relies on so-called quantum-like models, which have been developed from mathematical tools used in quantum theory. But are these models empirically adequate? Which versions of these models can be used? In particular, can the simplest versions, the non-degenerate ones, be sufficient? We propose an original experimental protocol to test quantum-like models of the conjunction fallacy in the laboratory. The results we obtain suggest that the non-degenerate models are not empirically adequate, and that future research on quantum-like models should turn to degenerate ones. JEL classification: C60, C91, D03.
Quantum-like models cannot account for the conjunction fallacy
Human agents sometimes judge that a conjunction of two terms is more probable than one of the terms alone, in contradiction with the rules of classical probability: this is the conjunction fallacy. One of the most discussed accounts of this fallacy is currently the quantum-like explanation, which relies on models exploiting the mathematics of quantum mechanics. The aim of this paper is to investigate the empirical adequacy of major quantum-like models that represent beliefs with quantum states. We first argue that they can be tested in three different ways, in a question-order-effect configuration that differs from the traditional conjunction fallacy experiment. We then carry out our proposed experiment, with varied methodologies from experimental economics. The experimental results we obtain are at odds with the predictions of the quantum-like models. This strongly suggests that this quantum-like account of the conjunction fallacy fails. Possible future research paths are discussed.
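The classical constraint at stake in these conjunction-fallacy studies can be illustrated with a minimal check. This is an illustrative sketch, not the paper's method; the probability judgments below are hypothetical placeholders in the spirit of the classic "Linda" experiments:

```python
# Classical probability requires P(A and B) <= min(P(A), P(B)).
# A conjunction-fallacy judgment violates this inequality.

def violates_conjunction_rule(p_a: float, p_b: float, p_a_and_b: float) -> bool:
    """True if the judged probabilities break the classical conjunction rule."""
    return p_a_and_b > min(p_a, p_b)

# Hypothetical judgments: the conjunction ("feminist bank teller") is
# rated more likely than one of its conjuncts ("bank teller").
judged = {"bank teller": 0.2, "feminist": 0.8, "feminist bank teller": 0.35}
print(violates_conjunction_rule(judged["bank teller"],
                                judged["feminist"],
                                judged["feminist bank teller"]))  # True
```

Quantum-like models aim to reproduce such violations by representing beliefs as quantum states, where the order in which questions are evaluated matters.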
Recent advances in financial networks and agent-based model validation
We introduce the papers appearing in the special issue of this journal associated with WEHIA 2015. The papers in the issue deal with two growing fields in the literature inspired by the complexity-based approach to economic analysis. The first group of contributions develops network models of financial systems and shows how these models can shed light on relevant issues that emerged in the aftermath of the last financial crisis. The second group of contributions deals with the validation of agent-based models. Agent-based models have proven extremely useful for accounting for key features of economic dynamics that are usually neglected by more standard models. At the same time, they have been criticized for the lack of adequate validation against empirical data. The works in this issue propose useful techniques to validate agent-based models, thus contributing to the wider diffusion of these models in the economic discipline.
The triple-store experiment: a first simultaneous test of classical and quantum probabilities in choice over menus
Recently, quantum probability theory has started to be actively used in studies of human decision-making, in particular for the resolution of paradoxes (such as the Allais, Ellsberg, and Machina paradoxes). Previous studies were based on a cognitive metaphor of the quantum double-slit experiment, the basic quantum interference experiment. In this paper, we report on an economics experiment based on a triple-slit experiment design, where the slits are menus of alternatives from which one can choose. The test of nonclassicality is based on the Sorkin equality (which was only recently tested in quantum physics). Each alternative is a voucher to buy products in one or more stores. The alternatives are obtained from all disjunctions including one, two, or three stores. The participants have to reveal the amount for which they are willing to sell the chosen voucher. Interference terms are computed by comparing the willingness to sell a voucher built as a disjunction of stores with the willingness to sell the vouchers corresponding to the singleton stores. These willingness-to-sell amounts are used to estimate probabilities and to test both the law of total probability and the Born rule. The results reject neither classical nor quantum probability. We discuss this initial experiment and our results and provide guidelines for future studies.
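The interference terms behind a Sorkin-equality test can be sketched numerically. This is an illustrative sketch, not the paper's estimation procedure: the store labels and probability estimates are hypothetical, and dyadic values are used so the arithmetic is exact in floating point:

```python
# Interference terms for a Sorkin-equality test over menus (sets) of stores.
# p maps each menu (a frozenset of store labels) to an estimated probability.

def i2(p, a, b):
    """Second-order interference: zero for any classical (additive) probability."""
    return p[a | b] - p[a] - p[b]

def i3(p, a, b, c):
    """Third-order (Sorkin) term: zero under the Born rule as well."""
    return (p[a | b | c]
            - p[a | b] - p[a | c] - p[b | c]
            + p[a] + p[b] + p[c])

A, B, C = frozenset({"A"}), frozenset({"B"}), frozenset({"C"})
# A perfectly additive set of estimates (classical benchmark):
p = {A: 0.125, B: 0.25, C: 0.5,
     A | B: 0.375, A | C: 0.625, B | C: 0.75,
     A | B | C: 0.875}
print(i2(p, A, B), i3(p, A, B, C))  # 0.0 0.0
```

A classical model predicts both terms are zero; a quantum-like (Born-rule) model allows a nonzero second-order term but still predicts a zero third-order term, which is exactly what the Sorkin equality tests.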
A New Experimental Approach to Test Quantum-like Models for the Conjunction Fallacy
In classical probability theory, the probability of the conjunction of two events cannot exceed the probability of either event alone. Yet agents do not always judge this way empirically: this is the traditional conjunction fallacy. One of the currently promising accounts of this paradox relies on so-called quantum-like models, which have been developed from mathematical tools used in quantum theory. But are these models empirically adequate? Which versions of these models can be used? In particular, can the simplest versions, the non-degenerate ones, be sufficient? We propose an original experimental protocol to test quantum-like models of the conjunction fallacy in the laboratory. The results we obtain suggest that the non-degenerate models are not empirically adequate, and that future research on quantum-like models should turn to degenerate ones. JEL classification: C60, C91, D03.
A dual-process memory account of how to make an evaluation from complex and complete information
Individuals are required to cope with uncertain, dispersed, incomplete, and incompatible sources of information in real life. We devised an experiment to reveal empirical “anomalies” in the process of acquisition, elaboration, and retrieval of economics-related information. Our results support the existence of a dual process in memory, as posited by fuzzy-trace theory: acquisition of information leads to the formation of a gist representation which may be incompatible with the exact verbatim information stored in memory. We gave participants complex and complete information and then measured their cognitive ability. We conclude that individuals used their gist representation rather than appropriately processing verbatim information to make an evaluation. Finally, we provide evidence that subjects with low cognitive abilities tend to demonstrate this specific behavior more often. JEL codes: C91, D83, D89.
A Dual-Process Memory Account of How to Make an Evaluation from Complex and Complete Information
Individuals are required to cope with uncertain, dispersed, incomplete, and incompatible sources of information in real life. We devised an experiment to reveal empirical “anomalies” in the process of acquisition, elaboration, and retrieval of economics-related information. Our results support the existence of a dual process in memory, as posited by fuzzy-trace theory: acquisition of information leads to the formation of a gist representation which may be incompatible with the exact verbatim information stored in memory. We gave participants complex and complete information and then measured their cognitive ability. We conclude that individuals used their gist representation rather than appropriately processing verbatim information to make an evaluation. Finally, we provide evidence that subjects with low cognitive abilities tend to demonstrate this specific behavior more often. JEL codes: C91, D83, D89.