463 result(s) for "Modellbildung"
Experimental validation of the diffusion model based on a slow response time paradigm
The diffusion model (Ratcliff, Psychol Rev 85(2):59–108, 1978) is a stochastic model that is applied to response time (RT) data from binary decision tasks. The model is often used to disentangle different cognitive processes. The validity of the diffusion model parameters has, however, rarely been examined. Only a few experimental paradigms have been analyzed, and those have been restricted to fast response time paradigms. This is attributable to a recommendation stated repeatedly in the diffusion model literature to restrict applications to fast RT paradigms (more specifically, to tasks with mean RTs below 1.5 s per trial). We conducted experimental validation studies in which we challenged the necessity of this restriction. We used a binary task that features RTs of several seconds per trial and experimentally examined the convergent and discriminant validity of the four main diffusion model parameters. More precisely, in three experiments, we selectively manipulated these parameters, using a difficulty manipulation (drift rate), speed-accuracy instructions (threshold separation), a more complex motoric task (non-decision time), and an asymmetric payoff matrix (starting point). The results were similar to the findings from experimental validation studies based on fast RT paradigms. Thus, our experiments support the validity of the parameters of the diffusion model and speak in favor of an extension of the model to paradigms based on slower RTs.
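The four parameters manipulated in these experiments map onto a simple forward simulation. A minimal sketch (illustrative parameter values, not the authors' code) of a single Wiener diffusion trial via Euler discretization:

```python
import numpy as np

def simulate_diffusion(v, a, z, t0, dt=0.001, sigma=1.0, max_t=10.0, rng=None):
    """Simulate one trial of a Wiener diffusion process.

    v  : drift rate (manipulated via task difficulty)
    a  : threshold separation (speed-accuracy instructions)
    z  : relative starting point in (0, 1); 0.5 = unbiased (payoff asymmetry)
    t0 : non-decision time (encoding and motor demands)

    Returns (response, rt): response 1 for the upper boundary, 0 for the lower.
    """
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0                  # absolute evidence position, decision time
    while 0.0 < x < a and t < max_t:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t0 + t

rng = np.random.default_rng(1)
trials = [simulate_diffusion(v=1.5, a=1.0, z=0.5, t0=0.3, rng=rng) for _ in range(200)]
accuracy = sum(resp for resp, _ in trials) / len(trials)
```

Fitting the model to data would additionally require a likelihood or approximation method; this sketch only illustrates how the four parameters shape simulated responses and RTs.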
Generalization guides human exploration in vast decision spaces
From foraging for food to learning complex games, many aspects of human behaviour can be framed as a search problem with a vast space of possible actions. Under finite search horizons, optimal solutions are generally unobtainable. Yet, how do humans navigate vast problem spaces, which require intelligent exploration of unobserved actions? Using various bandit tasks with up to 121 arms, we study how humans search for rewards under limited search horizons, in which the spatial correlation of rewards (in both generated and natural environments) provides traction for generalization. Across various probabilistic and heuristic models, we find evidence that Gaussian process function learning—combined with an optimistic upper confidence bound sampling strategy—provides a robust account of how people use generalization to guide search. Our modelling results and parameter estimates are recoverable and can be used to simulate human-like performance, providing insights about human behaviour in complex environments. When searching for rewards in complex, unfamiliar environments, it is often impossible to explore all options. Wu et al. show how a combination of generalization and optimistic sampling guides efficient human exploration in complex environments.
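The core mechanism—Gaussian process generalization plus upper confidence bound (UCB) sampling—can be sketched in a few lines of NumPy. This is a generic 1-D illustration with unit prior variance and made-up observations, not the authors' model code:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    """Squared-exponential kernel encoding spatial correlation of rewards."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * length_scale ** 2))

def gp_posterior(X_obs, y_obs, X_grid, length_scale=1.0, noise=1e-4):
    """GP posterior mean and standard deviation over a grid of options
    (prior variance fixed at 1)."""
    K = rbf_kernel(X_obs, X_obs, length_scale) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_grid, X_obs, length_scale)
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ y_obs
    var = 1.0 - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def ucb_choice(mu, sd, beta=1.0):
    """Optimistic sampling: pick the option maximising mean + beta * sd."""
    return int(np.argmax(mu + beta * sd))

X_grid = np.linspace(0, 10, 121)        # a 121-armed spatially correlated bandit
X_obs = np.array([2.0, 5.0])            # two hypothetical observed arms
y_obs = np.array([0.1, 0.8])            # their observed rewards
mu, sd = gp_posterior(X_obs, y_obs, X_grid)
next_arm = ucb_choice(mu, sd, beta=0.5)
```

Generalization enters through the kernel: arms near a high observed reward inherit a high posterior mean, while unexplored regions retain high uncertainty and are favoured by the optimistic UCB term.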
Random Effects Multinomial Processing Tree Models: A Maximum Likelihood Approach
The present article proposes and evaluates marginal maximum likelihood (ML) estimation methods for hierarchical multinomial processing tree (MPT) models with random and fixed effects. We assume that an identifiable MPT model with S parameters holds for each participant. Of these S parameters, R parameters are assumed to vary randomly between participants, and the remaining S - R parameters are assumed to be fixed. We also propose an extended version of the model that includes effects of covariates on MPT model parameters. Because the likelihood functions of both versions of the model are too complex to be tractable, we propose three numerical methods to approximate the integrals that occur in the likelihood function, namely, the Laplace approximation (LA), adaptive Gauss–Hermite quadrature (AGHQ), and Quasi Monte Carlo (QMC) integration. We compare these three methods in a simulation study and show that AGHQ performs well in terms of both bias and coverage rate. QMC also performs well but the number of responses per participant must be sufficiently large. In contrast, LA fails quite often due to undefined standard errors. We also suggest ML-based methods to test the goodness of fit and to compare models taking model complexity into account. The article closes with an illustrative empirical application and an outlook on possible extensions and future applications of the proposed ML approach.
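The quadrature idea behind AGHQ can be illustrated with plain (non-adaptive) Gauss–Hermite integration of a random-effects marginal: integrate a participant-level probability over a normal distribution of a latent parameter. The example model (a logit-normal parameter) is hypothetical, chosen only to show the change of variables:

```python
import numpy as np

def gh_marginal(f, mu, sigma, n_nodes=20):
    """Approximate E[f(theta)] with theta ~ N(mu, sigma^2) by Gauss-Hermite
    quadrature (the non-adaptive core of AGHQ): substitute
    theta = mu + sqrt(2) * sigma * x so the Gaussian weight matches exp(-x^2)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    theta = mu + np.sqrt(2.0) * sigma * nodes
    return (weights * f(theta)).sum() / np.sqrt(np.pi)

# Hypothetical MPT-style random effect: the logit of a process probability
# varies normally across participants; the marginal success probability is
# an intractable integral approximated by the quadrature rule.
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
p_marginal = gh_marginal(sigmoid, mu=0.5, sigma=1.0)
```

The adaptive variant recentres and rescales the nodes at the posterior mode of each participant's random effect, which is what keeps the node count manageable for higher-dimensional R.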
Identity domains capture individual differences from across the behavioral repertoire
Personality traits can offer considerable insight into the biological basis of individual differences. However, existing approaches toward understanding personality across species rely on subjective criteria and limited sets of behavioral readouts, which result in noisy and often inconsistent outcomes. Here we introduce a mathematical framework for describing individual differences along dimensions with maximum consistency and discriminative power. We validate this framework in mice, using data from a system for high-throughput longitudinal monitoring of group-housed male mice that yields a variety of readouts from across the behavioral repertoire of individual animals. We demonstrate a set of stable traits that capture variability in behavior and gene expression in the brain, allowing for better-informed mechanistic investigations into the biology of individual differences.
Conflict resolution in the Eriksen flanker task: Similarities and differences to the Simon task
In the Eriksen flanker task, as well as in the Simon task, irrelevant activation produces a response conflict that has to be resolved by mental control mechanisms. Despite these similarities, however, the tasks differ with respect to their delta functions, which express how the congruency effects develop with response time. The slope of the delta function is mostly positive for the flanker task, but negative for the Simon task. Much effort has been spent to explain this difference and to investigate whether it results from task-specific control. A prominent account is that the temporal overlap between irrelevant and relevant response activation is larger in the flanker task than in the Simon task. To test this hypothesis, we increased the temporal distance in a flanker task by presenting the flankers ahead of the target. This not only produced negatively sloped delta functions but also caused reversed congruency effects. We also conducted a Simon-task experiment in which we varied the proportion of congruent stimuli. As a result, the delta function was negatively sloped only if the proportion was low. These results demonstrate that a long temporal distance is necessary but not sufficient for observing negatively sloped delta functions. Finally, we modeled the data with drift-diffusion models. Together, our results show that differently sloped delta functions can be produced with both tasks. They further indicate that activation suppression is an important control mechanism that can be adapted rather flexibly to the control demands.
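A delta function is computed by comparing congruent and incongruent RT distributions at matched quantiles. A minimal sketch with synthetic flanker-like data (the data-generating choices here are purely illustrative):

```python
import numpy as np

def delta_plot(rt_congruent, rt_incongruent, quantiles=(.1, .3, .5, .7, .9)):
    """Delta function: congruency effect (incongruent minus congruent RT)
    at matched quantiles, against the mean of the two quantiles."""
    q = np.asarray(quantiles)
    qc = np.quantile(rt_congruent, q)
    qi = np.quantile(rt_incongruent, q)
    return (qc + qi) / 2.0, qi - qc       # x-positions, congruency effects

rng = np.random.default_rng(42)
# Hypothetical data in which the congruency effect grows with RT,
# producing the positively sloped delta function typical of flanker tasks.
rt_con = rng.gamma(shape=4.0, scale=0.1, size=5000) + 0.3
rt_inc = 1.1 * rt_con + 0.02
x, delta = delta_plot(rt_con, rt_inc)
```

A negatively sloped delta function, as in the Simon task, would show the largest effects at the fastest quantiles instead.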
OutbreakFlow: Model-based Bayesian inference of disease outbreak dynamics with invertible neural networks and its application to the COVID-19 pandemics in Germany
Mathematical models in epidemiology are an indispensable tool to determine the dynamics and important characteristics of infectious diseases. Apart from their scientific merit, these models are often used to inform political decisions and interventional measures during an ongoing outbreak. However, reliably inferring the epidemic dynamics by connecting complex models to real data is still hard and requires either laborious manual parameter fitting or expensive optimization methods which have to be repeated from scratch for every application of a given model. In this work, we address this problem with a novel combination of epidemiological modeling with specialized neural networks. Our approach entails two computational phases: In an initial training phase, a mathematical model describing the epidemic is used as a coach for a neural network, which acquires global knowledge about the full range of possible disease dynamics. In the subsequent inference phase, the trained neural network processes the observed data of an actual outbreak and infers the parameters of the model in order to realistically reproduce the observed dynamics and reliably predict future progression. With its flexible framework, our simulation-based approach is applicable to a variety of epidemiological models. Moreover, since our method is fully Bayesian, it is designed to incorporate all available prior knowledge about plausible parameter values and returns complete joint posterior distributions over these parameters. Application of our method to the early COVID-19 outbreak phase in Germany demonstrates that we are able to obtain reliable probabilistic estimates for important disease characteristics, such as generation time, fraction of undetected infections, likelihood of transmission before symptom onset, and reporting delays using a very moderate amount of real-world observations.
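The "coach" in the training phase is an epidemiological forward simulator: parameters are drawn from the prior, the simulator produces synthetic outbreak curves, and the network learns the inverse mapping. A deliberately simplified stand-in simulator (a deterministic SIR model with made-up parameter values, far coarser than the paper's model) looks like this:

```python
import numpy as np

def sir_simulate(beta, gamma, n_days=60, N=83_000_000, I0=100):
    """Deterministic daily-step SIR forward model: the kind of simulator
    that generates training pairs (parameters, outbreak curve) for
    simulation-based inference. Parameter values are hypothetical."""
    S, I, R = N - I0, I0, 0
    new_cases = []
    for _ in range(n_days):
        inf = beta * S * I / N      # new infections per day
        rec = gamma * I             # recoveries per day
        S, I, R = S - inf, I + inf - rec, R + rec
        new_cases.append(inf)
    return np.array(new_cases)

curve = sir_simulate(beta=0.4, gamma=0.2)   # R0 = beta / gamma = 2
```

In the amortized scheme the expensive part (simulating and training) is done once; inference for a new observed curve is then a single forward pass through the trained invertible network.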
Sequential sampling models with variable boundaries and non-normal noise: A comparison of six models
One of the most prominent response-time models in cognitive psychology is the diffusion model, which assumes that decision-making is based on a continuous evidence accumulation described by a Wiener diffusion process. In the present paper, we examine two basic assumptions of standard diffusion model analyses. Firstly, we address the question of whether participants adjust their decision thresholds during the decision process. Secondly, we investigate whether so-called Lévy-flights that allow for random jumps in the decision process account better for experimental data than do diffusion models. Specifically, we compare the fit of six different versions of accumulator models to data from four conditions of a number-letter classification task. The experiment comprised a simple single-stimulus task and a more difficult multiple-stimulus task that were both administered under speed versus accuracy conditions. Across the four experimental conditions, we found little evidence for a collapsing of decision boundaries. However, our results suggest that the Lévy-flight model with heavy-tailed noise distributions (i.e., allowing for jumps in the accumulation process) fits data better than the Wiener diffusion model.
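The contrast between the Wiener and Lévy-flight accumulators comes down to the increment distribution. A minimal sketch (illustrative parameters; Cauchy noise is used here as the simplest heavy-tailed alpha-stable case, not the authors' exact specification):

```python
import numpy as np

def simulate_accumulator(v, a, heavy_tailed=False, dt=0.001, max_t=10.0, rng=None):
    """Evidence accumulation toward boundaries 0 and a (unbiased start).

    Gaussian increments give the standard Wiener diffusion; Cauchy
    increments (alpha-stable with alpha = 1) give a Levy-flight
    accumulator whose heavy tails permit sudden jumps. Increment scale
    follows the stable scaling dt ** (1 / alpha).
    """
    rng = rng or np.random.default_rng()
    x, t = a / 2.0, 0.0
    while 0.0 < x < a and t < max_t:
        if heavy_tailed:
            x += v * dt + dt * rng.standard_cauchy()           # alpha = 1
        else:
            x += v * dt + np.sqrt(dt) * rng.standard_normal()  # alpha = 2
        t += dt
    return (1 if x >= a else 0), t

rng = np.random.default_rng(7)
wiener = [simulate_accumulator(v=2.0, a=1.0, rng=rng) for _ in range(200)]
levy = [simulate_accumulator(v=2.0, a=1.0, heavy_tailed=True, rng=rng) for _ in range(50)]
accuracy_wiener = sum(r for r, _ in wiener) / len(wiener)
```

Collapsing boundaries, the other assumption examined in the paper, would replace the constant a with a decreasing function of t inside the loop.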
Meta-heuristics in short scale construction
The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, has caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA-compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
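The GA approach treats item selection as a combinatorial search over k-item subsets. A toy sketch (not the authors' implementation: the fitness here is plain Cronbach's alpha on synthetic covariances, and the GA uses only elitism and single-item mutation):

```python
import numpy as np

def cronbach_alpha(cov, idx):
    """Cronbach's alpha of the subscale formed by the items in idx."""
    C = cov[np.ix_(idx, idx)]
    k = len(idx)
    return k / (k - 1) * (1.0 - np.trace(C) / C.sum())

def ga_short_scale(item_cov, k, n_gen=200, pop_size=60, rng=None):
    """Toy genetic algorithm: evolve k-item subsets, keep the fitter half
    each generation, refill with single-item mutations of the survivors."""
    rng = rng or np.random.default_rng(0)
    n = item_cov.shape[0]
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda ind: cronbach_alpha(item_cov, ind), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent.copy()
            # swap one item for an item not yet in the subset
            child[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(n), child))
            children.append(child)
        pop = survivors + children
    best = max(pop, key=lambda ind: cronbach_alpha(item_cov, ind))
    return np.sort(best), cronbach_alpha(item_cov, best)

# Hypothetical item covariances: items 0-4 form a coherent scale, 5-9 are noise.
cov = np.zeros((10, 10))
cov[:5, :5] = 0.6
np.fill_diagonal(cov, 1.0)
best_items, best_alpha = ga_short_scale(cov, k=5)
```

A realistic fitness function would combine several of the criteria listed in the abstract (structure, reliability, discrimination, validity) rather than reliability alone; ACO differs mainly in how candidate subsets are proposed.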
Delta plots for conflict tasks: An activation-suppression race model
We describe a mathematically simple yet precise model of activation suppression that can explain the negative-going delta plots often observed in standard Simon tasks. The model postulates a race between the identification of the relevant stimulus attribute and the suppression of irrelevant location-based activation, with the irrelevant activation only having an effect if the irrelevant activation is still present at the moment when central processing of the relevant attribute starts. The model can be fitted by maximum likelihood to observed distributions of RTs in congruent and incongruent trials, and it provides good fits to two previously-reported data sets with plausible parameter values. R and MATLAB software for use with the model is provided.
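The race idea can be sketched directly: identification of the relevant attribute races against suppression of the irrelevant location-based activation, and the congruency effect applies only when the activation outlives the race. This toy version uses hypothetical Gaussian finishing times and is not the authors' R/MATLAB implementation:

```python
import numpy as np

def race_trial(congruent, rng, mu_id=0.40, mu_sup=0.35, sd=0.08,
               effect=0.06, base=0.30):
    """One trial of a simple activation-suppression race (hypothetical
    parameter values, in seconds): the irrelevant activation shifts the
    response time only if it has not yet been suppressed when central
    processing of the relevant attribute starts."""
    t_id = rng.normal(mu_id, sd)      # identification of relevant attribute
    t_sup = rng.normal(mu_sup, sd)    # suppression of irrelevant activation
    rt = base + max(t_id, 0.05)       # guard against negative draws
    if t_id < t_sup:                  # irrelevant activation still present
        rt += -effect if congruent else effect
    return rt

rng = np.random.default_rng(5)
mean_con = np.mean([race_trial(True, rng) for _ in range(5000)])
mean_inc = np.mean([race_trial(False, rng) for _ in range(5000)])
```

Because fast identification times are the ones most likely to beat suppression, the effect concentrates in fast trials, which is what produces the negative-going delta plots the model is built to explain.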
Grouped feature importance and combined features effect plot
Interpretable machine learning has become a very active area of research due to the rising popularity of machine learning algorithms and their inherently challenging interpretability. Most work in this area has been focused on the interpretation of single features in a model. However, for researchers and practitioners, it is often equally important to quantify the importance or visualize the effect of feature groups. To address this research gap, we provide a comprehensive overview of how existing model-agnostic techniques can be defined for feature groups to assess the grouped feature importance, focusing on permutation-based, refitting, and Shapley-based methods. We also introduce an importance-based sequential procedure that identifies a stable and well-performing combination of features in the grouped feature space. Furthermore, we introduce the combined features effect plot, which is a technique to visualize the effect of a group of features based on a sparse, interpretable linear combination of features. We used simulation studies and real data examples to analyze, compare, and discuss these methods.
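The permutation-based variant of grouped feature importance generalizes single-feature permutation importance by shuffling all columns of a group jointly, so within-group dependencies are preserved. A minimal sketch on synthetic data (the model, metric, and group names are illustrative):

```python
import numpy as np

def grouped_permutation_importance(model, X, y, groups, metric, n_repeats=10, rng=None):
    """Grouped permutation feature importance: permute all columns of a
    group with one shared row permutation and record the performance drop."""
    rng = rng or np.random.default_rng(0)
    baseline = metric(y, model(X))
    importances = {}
    for name, cols in groups.items():
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(len(X))
            Xp[:, cols] = X[perm][:, cols]     # permute the group jointly
            drops.append(baseline - metric(y, model(Xp)))
        importances[name] = float(np.mean(drops))
    return importances

# Hypothetical linear setting: only the first feature group carries signal.
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 4))
y = X[:, 0] + X[:, 1]
model = lambda X: X[:, 0] + X[:, 1]
r2 = lambda y, yhat: 1 - np.mean((y - yhat) ** 2) / np.var(y)
imp = grouped_permutation_importance(model, X, y,
                                     {"signal": [0, 1], "noise": [2, 3]}, r2)
```

The refitting and Shapley-based definitions discussed in the paper answer subtly different questions (importance after re-training versus fair attribution), so the permutation sketch above should be read as one of several grouped-importance notions.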