51 results for "Reynaud-Bouret, Patricia"
Lasso and probabilistic inequalities for multivariate point processes
Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select coefficients, we propose an adaptive ℓ₁-penalization methodology, where the data-driven weights of the penalty are derived from new Bernstein-type inequalities for martingales. Oracle inequalities are established under assumptions on the Gram matrix of the dictionary. Nonasymptotic probabilistic results for multivariate Hawkes processes are proven, which allow us to check these assumptions by considering general dictionaries based on histograms, Fourier or wavelet bases. Motivated by problems of neuronal activity inference, we finally carry out a simulation study for multivariate Hawkes processes and compare our methodology with the adaptive Lasso procedure proposed by Zou (J. Amer. Statist. Assoc. 101 (2006) 1418-1429). We observe excellent behavior of our procedure. We rely on theoretical results for the essential question of tuning our methodology. Unlike the adaptive Lasso of Zou, our tuning procedure is proven to be robust with respect to all the parameters of the problem, revealing its potential for concrete purposes, in particular in neuroscience.
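As a rough illustration of the weighted ℓ₁ penalty at the heart of this approach, here is a minimal coordinate-descent sketch for an ordinary regression stand-in. The fixed weights, and the data (`X`, `y`), are hypothetical; the paper's contribution is precisely to derive data-driven weights from martingale inequalities, which is not reproduced here.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, weights, n_iter=200):
    """Coordinate descent for
    min_a ||y - X a||^2 / (2n) + sum_j weights[j] * |a_j|."""
    n, p = X.shape
    a = np.zeros(p)
    col_norms = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ a + X[:, j] * a[j]
            rho = X[:, j] @ r / n
            a[j] = soft_threshold(rho, weights[j]) / col_norms[j]
    return a

# toy sparse problem: only the first two coefficients are nonzero
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
a_true = np.zeros(10)
a_true[:2] = [3.0, -2.0]
y = X @ a_true + 0.1 * rng.standard_normal(200)
a_hat = weighted_lasso(X, y, weights=np.full(10, 0.5))
```

Larger weights on a coordinate shrink it more aggressively toward zero, which is how coordinate-specific, data-driven weights can adapt the penalty to the local noise level.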
Investigating interactions between types of order in categorization
This study simultaneously manipulates within-category (rule-based vs. similarity-based), between-category (blocked vs. interleaved), and across-blocks (constant vs. variable) orders to investigate how different types of presentation order interact with one another. With regard to within-category orders, stimuli were presented either in a “rule plus exceptions” fashion (in the rule-based order) or by maximizing the similarity between contiguous examples (in the similarity-based order). As for the between-category manipulation, categories were either blocked (in the blocked order) or alternated (in the interleaved order). Finally, the sequence of stimuli was either repeated (in the constant order) or varied (in the variable order) across blocks. This research offers a novel approach through both an individual and concurrent analysis of the studied factors, with the investigation of across-blocks manipulations being unprecedented. We found a significant interaction between within-category and across-blocks orders, as well as between between-category and across-blocks orders. In particular, the combination similarity-based + variable orders was the most detrimental, whereas the combination blocked + constant was the most beneficial. We also found a main effect of across-blocks manipulation, with faster learning in the constant order as compared to the variable one. With regard to the classification of novel stimuli, learners in the rule-based and interleaved orders showed generalization patterns that were more consistent with a specific rule-based strategy, as compared to learners in the similarity-based and blocked orders, respectively. This study shows that different types of order can interact in a subtle fashion and thus should not be considered in isolation.
Strategy inference during learning via cognitive activity-based credit assignment models
We develop a method for selecting meaningful learning strategies based solely on the behavioral data of a single individual in a learning experiment. We use simple Activity-Credit Assignment algorithms to model the different strategies and couple them with a novel hold-out statistical selection method. Application on rat behavioral data in a continuous T-maze task reveals a particular learning strategy that consists in chunking the paths used by the animal. Neuronal data collected in the dorsomedial striatum confirm this strategy.
Adaptive estimation of the intensity of inhomogeneous Poisson processes via concentration inequalities
In this paper, we establish oracle inequalities for penalized projection estimators of the intensity of an inhomogeneous Poisson process. We then study the adaptive properties of penalized projection estimators. We first provide lower bounds for the minimax risk over various sets of smoothness for the intensity, and then prove that our estimators achieve these lower bounds up to constants. The crucial tools for obtaining the oracle inequalities are new concentration inequalities for suprema of integral functionals of Poisson processes, which are analogous to Talagrand's inequalities for empirical processes.
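A toy version of a penalized projection estimator: project the Poisson intensity onto histograms with D equal bins and pick D by a penalized least-squares contrast. The penalty shape `c * (n / T) * D` below is only an ad hoc stand-in for the calibrated penalties derived in the paper, and the step intensity is made up.

```python
import numpy as np

def histogram_intensity(points, T, D):
    """Least-squares projection estimator of a Poisson intensity on [0, T]
    onto piecewise-constant functions over D equal bins."""
    counts, _ = np.histogram(points, bins=D, range=(0.0, T))
    return counts / (T / D)

def select_bins(points, T, candidates, c=2.0):
    """Penalized model selection over the number of bins D.
    The empirical contrast of the histogram estimator is
    -sum_k N_k^2 / width; the penalty c * (n / T) * D is a stand-in."""
    n = len(points)
    best_D, best_crit = None, np.inf
    for D in candidates:
        counts, _ = np.histogram(points, bins=D, range=(0.0, T))
        width = T / D
        crit = -np.sum(counts.astype(float) ** 2) / width + c * (n / T) * D
        if crit < best_crit:
            best_D, best_crit = D, crit
    return best_D

# inhomogeneous Poisson process: rate 1 on [0, 25), rate 20 on [25, 50)
rng = np.random.default_rng(0)
T = 50.0
left = rng.uniform(0.0, T / 2, rng.poisson(1.0 * T / 2))
right = rng.uniform(T / 2, T, rng.poisson(20.0 * T / 2))
points = np.concatenate([left, right])
best_D = select_bins(points, T, candidates=[1, 2, 4, 8])
est = histogram_intensity(points, T, best_D)
```

For a genuinely inhomogeneous intensity like this one, the criterion should refuse the single flat bin and select a finer partition.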
Adaptive estimation for Hawkes processes; application to genome analysis
The aim of this paper is to provide a new method for the detection of either favored or avoided distances between genomic events along DNA sequences. These events are modeled by a Hawkes process. The biological problem is complex enough to require a nonasymptotic penalized model selection approach. We provide a theoretical penalty that satisfies an oracle inequality even for quite complex families of models. The resulting estimator is shown to be adaptive minimax for Hölderian functions with regularity in (1/2, 1]; these aspects had not yet been studied for Hawkes processes. Moreover, we introduce an efficient strategy, named Islands, which is not classically used in model selection but happens to be particularly relevant to the biological question we want to answer. Since a multiplicative constant in the theoretical penalty is not computable in practice, we provide extensive simulations to find a data-driven calibration of this constant. The results obtained on real genomic data are consistent with biological knowledge and even refine it.
Goodness-of-Fit Tests and Nonparametric Adaptive Estimation for Spike Train Analysis
When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to check whether the observed process is a Poisson process, for instance, or whether it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323–330, 1984; Brown et al. in Neural Comput. 14(2):325–346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785, 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in sometimes has very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model, which have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of these methods on simulated data. We also provide a complete analysis with these tools of single-unit activity recorded on a monkey during a sensory-motor task. Electronic supplementary material: the online version of this article (doi:10.1186/2190-8567-4-3) contains supplementary material.
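The plug-in-free baseline is easy to state: conditionally on their number, the points of a homogeneous Poisson process on [0, T] are i.i.d. uniform, so the homogeneous Poisson hypothesis can be checked with a Kolmogorov–Smirnov test of uniformity. A minimal numpy sketch (the paper's actual contribution, the subsampling correction for estimated models, is not reproduced here; the samples are made up):

```python
import numpy as np

def ks_statistic(points, T):
    """One-sample Kolmogorov-Smirnov statistic against the uniform law on
    [0, T]. Conditionally on their number, homogeneous Poisson points are
    i.i.d. uniform, so this tests the homogeneous Poisson hypothesis."""
    u = np.sort(np.asarray(points)) / T
    n = len(u)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - u), np.max(u - (i - 1) / n))

def rejects_at_5pct(points, T):
    """Asymptotic 5% Kolmogorov critical value: sqrt(n) * D > 1.358."""
    return np.sqrt(len(points)) * ks_statistic(points, T) > 1.358

rng = np.random.default_rng(1)
T = 10.0
good = rng.uniform(0.0, T, 400)              # homogeneous, given N = 400
bad = T * rng.uniform(0.0, 1.0, 400) ** 2    # points piled up near 0
```

For a non-Poisson model, one would first time-rescale the points through the estimated compensator before applying this test; that estimation step is exactly the plug-in whose distortions the article analyzes.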
Nonparametric estimation of the division rate of a size-structured population
We consider the problem of estimating the division rate of a size-structured population in a nonparametric setting. The size of the system evolves according to a transport-fragmentation equation: each individual grows with a given transport rate and splits into two offspring of the same size, following a binary fragmentation process with an unknown division rate that depends on its size. In contrast to a deterministic inverse problem approach, as in [B. Perthame and J. P. Zubelli, Inverse Problems, 23 (2007), pp. 1037-1052; M. Doumic, B. Perthame, and J. Zubelli, Inverse Problems, 25 (2009), pp. 1-22], in this paper we take the perspective of statistical inference: our data consist of a large sample of the sizes of individuals, observed when the evolution of the system is close to its time-asymptotic behavior, so that it can be related to the eigenproblem of the considered transport-fragmentation equation. By statistically estimating each term of the eigenvalue problem and by suitably inverting a certain linear operator, we are able to construct a more realistic estimator of the division rate that achieves the same optimal error bound as in related deterministic inverse problems. Our procedure relies on kernel methods with automatic bandwidth selection. It is inspired by model selection and recent results of Goldenshluger and Lepski [A. Goldenshluger and O. Lepski, arXiv:0904.1950, 2009; arXiv:1009.1016, 2010].
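The kernel ingredient can be illustrated with a Gaussian kernel density estimator whose bandwidth is chosen automatically. Least-squares cross-validation is used below as a simpler stand-in for the Goldenshluger–Lepski rule cited in the abstract; the sample and the candidate bandwidths are hypothetical.

```python
import numpy as np

def kde(x_grid, sample, h):
    """Gaussian kernel density estimate with bandwidth h."""
    z = (x_grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def lscv_bandwidth(sample, grid, candidates):
    """Least-squares cross-validation: minimize an unbiased estimate of
    the integrated squared error, int f_h^2 - (2/n) sum_i f_h^{(-i)}(X_i)."""
    n = len(sample)
    dx = grid[1] - grid[0]
    best_h, best_score = None, np.inf
    for h in candidates:
        f = kde(grid, sample, h)
        k = (np.exp(-0.5 * ((sample[:, None] - sample[None, :]) / h) ** 2)
             / (h * np.sqrt(2 * np.pi)))
        # leave-one-out estimate at each sample point
        loo = (k.sum(axis=1) - k.diagonal()) / (n - 1)
        score = np.sum(f ** 2) * dx - 2.0 * loo.mean()
        if score < best_score:
            best_h, best_score = h, score
    return best_h

rng = np.random.default_rng(0)
sample = rng.standard_normal(500)
grid = np.linspace(-5.0, 5.0, 1001)
h = lscv_bandwidth(sample, grid, candidates=[0.05, 0.2, 0.5, 1.5])
f_hat = kde(grid, sample, h)
```

For 500 standard normal draws, the asymptotically optimal bandwidth is around 0.3, so the selection should land on one of the middle candidates rather than the severely under- or over-smoothed extremes.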
How to fit transfer models to learning data: a segmentation/clustering approach
Although transfer models are limited in their ability to evolve over time and to account for a wide range of processes, they have repeatedly been shown to be useful for testing categorization theories and predicting participants’ generalization performance. In this study, we propose a statistical framework that allows transfer models to be applied to category learning data. Our framework uses a segmentation/clustering technique specifically tailored to category learning data. We applied this technique to a well-known transfer model, the Generalized Context Model, in three novel experiments that manipulated ordinal effects in category learning. The difference in performance across the three contexts, as well as the benefit of the rule-based order observed in two out of three experiments, was mostly detected by the segmentation/clustering method. Furthermore, the analysis of the segmentation/clustering outputs using the backward learning curve revealed that participants’ performance suddenly improved, suggesting the detection of a “eureka” moment. Our adjusted segmentation/clustering framework allows transfer models to fit learning data while capturing relevant patterns.
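The Generalized Context Model's choice rule is compact enough to sketch: a probe is classified by summing its similarity, an exponential decay in distance, to the stored exemplars of each category. A minimal version with made-up stimuli and a hypothetical sensitivity parameter `c`:

```python
import numpy as np

def gcm_probabilities(probe, exemplars, labels, c=1.0):
    """Generalized Context Model choice rule: evidence for a category is
    the summed similarity exp(-c * distance) between the probe and that
    category's exemplars, normalized into choice probabilities."""
    exemplars = np.asarray(exemplars, dtype=float)
    labels = np.asarray(labels)
    sims = np.exp(-c * np.linalg.norm(exemplars - np.asarray(probe, dtype=float),
                                      axis=1))
    cats = sorted(set(labels.tolist()))
    evidence = np.array([sims[labels == k].sum() for k in cats])
    return dict(zip(cats, evidence / evidence.sum()))

# two well-separated categories in a 2-D stimulus space
X = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]]
y = ["A", "A", "B", "B"]
p = gcm_probabilities([0.0, 0.5], X, y, c=1.0)
```

Fitting the model to learning data, as the study does, amounts to estimating parameters like `c` separately on the segments identified by the segmentation/clustering step.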
Continuous testing for Poisson process intensities
We propose a continuous testing framework to test the intensities of Poisson processes that allows a rigorous definition of the complete testing procedure, from an infinite number of hypotheses to joint error rates. Our work extends procedures based on scanning windows by controlling the familywise error rate and the false discovery rate in a non-asymptotic manner and in a continuous way. We introduce the p-value process on which the decision rule is based. Our method is applied in neuroscience via the standard homogeneity and two-sample tests.
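A discretized caricature of the scanning-window idea (the paper works with a genuinely continuous family of windows and also controls the false discovery rate, neither of which is attempted here): slide a window along [0, T], compute a one-sided Poisson p-value for an excess of points under a globally estimated rate, and apply a Bonferroni familywise correction over the windows. The window length, the step, and the plug-in of the global rate are all simplifying assumptions.

```python
import numpy as np
from math import exp, lgamma, log

def poisson_sf(k, m):
    """P(Poisson(m) >= k), via the complementary sum of pmf terms
    (log-gamma for numerical stability)."""
    if k <= 0:
        return 1.0
    return max(0.0, 1.0 - sum(exp(i * log(m) - m - lgamma(i + 1))
                              for i in range(k)))

def scan_homogeneity(points, T, h, step):
    """For each window [a, a+h), a p-value for 'too many points' under a
    homogeneous Poisson null with the globally estimated rate, then a
    Bonferroni adjustment over all scanned windows."""
    points = np.asarray(points)
    rate = len(points) / T
    starts = np.arange(0.0, T - h + 1e-9, step)
    raw = np.array([poisson_sf(int(((points >= a) & (points < a + h)).sum()),
                               rate * h)
                    for a in starts])
    return starts, np.minimum(raw * len(starts), 1.0)

rng = np.random.default_rng(2)
T = 20.0
null_pts = rng.uniform(0.0, T, 100)                       # homogeneous
burst_pts = np.concatenate([null_pts, rng.uniform(5.0, 7.0, 40)])
_, adj_null = scan_homogeneity(null_pts, T, h=2.0, step=1.0)
_, adj_burst = scan_homogeneity(burst_pts, T, h=2.0, step=1.0)
```

On homogeneous data the adjusted p-values should stay away from zero, while the injected burst in [5, 7) should produce a clearly significant window despite the multiplicity correction.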