Catalogue Search | MBRL
17 result(s) for "Lao, Junpeng"
PyMC: a modern, and comprehensive probabilistic programming framework in Python
by Martin, Osvaldo A.; Andreani, Virgile; Carroll, Colin
in Bayesian statistics; Data Science; Differential equations
2023
PyMC is a probabilistic programming library for Python that provides tools for constructing and fitting Bayesian models. It offers an intuitive, readable syntax that is close to the natural syntax statisticians use to describe models. PyMC leverages the symbolic computation library PyTensor, allowing it to be compiled into a variety of computational backends, such as C, JAX, and Numba, which in turn offer access to different computational architectures including CPU, GPU, and TPU. Being a general modeling framework, PyMC supports a variety of models including generalized hierarchical linear regression and classification, time series, ordinary differential equations (ODEs), and non-parametric models such as Gaussian processes (GPs). We demonstrate PyMC’s versatility and ease of use with examples spanning a range of common statistical models. Additionally, we discuss the positive role of PyMC in the development of the open-source ecosystem for probabilistic programming.
Journal Article
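As a quick illustration of the readable model syntax described in the PyMC abstract above, here is a minimal sketch of a hierarchical linear regression. The data, priors, and variable names are invented for illustration and are not taken from the paper.

```python
# Minimal PyMC sketch of a hierarchical (partially pooled) linear regression.
# All data and priors below are made up for illustration.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_groups, n_obs = 3, 120
group = rng.integers(0, n_groups, size=n_obs)      # group index per observation
x = rng.normal(size=n_obs)
y = 1.5 * x + np.array([0.0, 0.5, -0.5])[group] + rng.normal(0.0, 0.5, size=n_obs)

with pm.Model() as model:
    # Group-level intercepts drawn from a shared prior (partial pooling).
    mu_a = pm.Normal("mu_a", 0.0, 1.0)
    sigma_a = pm.HalfNormal("sigma_a", 1.0)
    a = pm.Normal("a", mu_a, sigma_a, shape=n_groups)
    b = pm.Normal("b", 0.0, 1.0)                    # shared slope
    sigma = pm.HalfNormal("sigma", 1.0)             # observation noise
    pm.Normal("obs", a[group] + b * x, sigma, observed=y)
    idata = pm.sample()                             # NUTS sampling via the compiled backend
```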
Developing attentional control in naturalistic dynamic road crossing situations
by de Lissa, Peter; Jean-Charles, Geraldine; Nicholls, Victoria I.
in 631/378/2649/1310; 631/378/2649/1723; Adolescent
2019
In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we set out to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic, and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and in children aged between 5 and 15 while they watched road traffic videos containing a range of traffic densities, with or without pedestrians. 5–10 year old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children looked mainly at the vehicles’ appearing point, an optimal location for sampling diagnostic information for the task. In contrast, 5–10 y/os looked more at socially relevant stimuli and attended to moving vehicles further down the trajectory when the traffic density was high. Critically, 5–10 y/o children also made an increased number of crossing decisions compared with 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road crossing task.
Journal Article
Quantifying Facial Expression Intensity and Signal Use in Deaf Signers
by Stoll, Chloé; Pascalis, Olivier; Richoz, Anne-Raphaëlle
in Adolescent; Adult; Deafness - psychology
2019
We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
Journal Article
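To make the threshold-estimation idea in the abstract above concrete, the sketch below fits a Bayesian logistic psychometric function relating recognition accuracy to the fraction of signal shown. It is a hedged illustration with made-up data and priors, not the authors' model.

```python
# Illustrative Bayesian psychometric fit: P(correct) as a function of signal level.
import numpy as np
import pymc as pm

# Made-up trials: proportion of image signal shown and whether the expression
# was recognized (illustrative data only, not the study's).
rng = np.random.default_rng(1)
signal = np.repeat(np.linspace(0.1, 1.0, 10), 20)
correct = (rng.random(signal.size) < signal).astype(int)

with pm.Model():
    threshold = pm.Beta("threshold", 2.0, 2.0)       # signal level at the curve's midpoint
    slope = pm.HalfNormal("slope", 10.0)
    p = pm.Deterministic("p", pm.math.sigmoid(slope * (signal - threshold)))
    pm.Bernoulli("obs", p=p, observed=correct)
    idata = pm.sample()                              # posterior over threshold and slope
```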
Mapping female bodily features of attractiveness
by Bovet, Jeanne; Bartholomée, Océane; Raymond, Michel
in 631/181/2470; 631/477/2811; Attraction
2016
“Beauty is bought by judgment of the eye” (Shakespeare, Love’s Labour’s Lost), but the bodily features governing this critical biological choice are still debated. Eye movement studies have demonstrated that males sample coarse body regions expanding from the face, the breasts and the midriff, while making female attractiveness judgements with natural vision. However, the visual system ubiquitously extracts diagnostic extra-foveal information in natural conditions, so the visual information actually used by men is still unknown. We thus used a parametric gaze-contingent design while males rated the attractiveness of female front- and back-view bodies. Males used extra-foveal information when available. Critically, when bodily features were only visible through restricted apertures, fixations strongly shifted to the hips, potentially to extract hip width and curvature, then to the breasts and face. Our hierarchical mapping suggests that the visual system primarily uses hip information to compute the waist-to-hip ratio and the body mass index, the crucial factors in determining sexual attractiveness and mate selection.
Journal Article
Fear boosts the early neural coding of faces
by Degosciu, Sarah B A; Richoz, Anne-Raphaëlle; Viggiano, Maria Pia
in Adaptation; Adult; Electroencephalography
2017
The rapid extraction of facial identity and emotional expressions is critical for adapted social interactions. These biologically relevant abilities have been associated with early neural responses on the face sensitive N170 component. However, whether all facial expressions uniformly modulate the N170, and whether this effect occurs only when emotion categorization is task-relevant, is still unclear. To clarify this issue, we recorded high-resolution electrophysiological signals while 22 observers perceived the six basic expressions plus neutral. We used a repetition suppression paradigm, with an adaptor followed by a target face displaying the same identity and expression (trials of interest). We also included catch trials to which participants had to react, by varying identity (identity-task), expression (expression-task) or both (dual-task) on the target face. We extracted single-trial Repetition Suppression (stRS) responses using a data-driven spatiotemporal approach with a robust hierarchical linear model to isolate adaptation effects on the trials of interest. Regardless of the task, fear was the only expression modulating the N170, eliciting the strongest stRS responses. This observation was corroborated by distinct behavioral performance during the catch trials for this facial expression. Altogether, our data reinforce the view that fear elicits distinct neural processes in the brain, enhancing attention and facilitating the early coding of faces.
Journal Article
Face Recognition is Shaped by the Use of Sign Language
2018
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers. In the face categorization task, the three groups performed similarly in terms of both response time and accuracy. However, in the face recognition task, signers (both deaf and hearing) were slower than hearing non-signers to accurately recognize faces, but had a higher accuracy rate. We conclude that sign language experience, but not deafness, drives a speed–accuracy trade-off in face recognition (but not face categorization). This suggests strategic differences in the processing of facial identity for individuals who use a sign language, regardless of their hearing status.
Journal Article
Tracking the temporal dynamics of cultural perceptual diversity in visual information processing
2014
Human perception and cognitive processing are not universal. Culture and experience markedly modulate visual information sampling in humans. Cross-cultural studies comparing Western Caucasians (WCs) and East Asians (EAs) have shown cultural differences in behaviour and neural activity with regard to perception and cognition. In particular, a number of studies suggest a local perceptual bias for Westerners (WCs) and a global bias for Easterners (EAs): WCs perceive most efficiently the salient information in the focal object, whereas EAs are biased toward the information in the background. Such visual processing biases have been observed in a wide range of tasks and stimuli. However, the underlying neural mechanisms of such perceptual tunings, especially the temporal dynamics of the coding of different information, have yet to be clarified. Here, in the first two experiments I focus on the perceptual function of the diverse eye movement strategies of WCs and EAs. Human observers engage in different eye movement strategies to gather facial information: WCs preferentially fixate on the eyes and mouth, whereas EAs allocate their gaze relatively more on the center of the face. By employing a fixational eye movement paradigm in Study 1 and electroencephalographic (EEG) recording in Study 2, the results confirm the cultural differences in spatial-frequency information tuning and suggest different perceptual functions of the preferred eye movement pattern as a function of culture. The third study makes use of EEG adaptation and hierarchical visual stimuli to assess cultural tuning in global/local processing. Cultural diversity driven by selective attention is revealed at the early sensory stage. Together, these results show the temporal dynamics of cultural perceptual diversity: cultural distinctions in the early time course are driven by selective attention to global information in EAs, whereas late effects are modulated by detailed processing of local information in WC observers.
Dissertation
MADNESS Deblender: Maximum A posteriori with Deep NEural networks for Source Separation
by Biswas, Biswajit; the LSST Dark Energy Science Collaboration; Guinot, Axel
in Algorithms; Artificial neural networks; Blending effects
2025
Due to the unprecedented depth of the upcoming ground-based Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory, approximately two-thirds of the galaxies are likely to be affected by blending - the overlap of physically separated galaxies in images. Thus, extracting reliable shapes and photometry from individual objects will be limited by our ability to correct for blending and control any residual systematic effect. Deblending algorithms tackle this issue by reconstructing the isolated components from a blended scene, but the most commonly used algorithms often fail to model complex realistic galaxy morphologies. As part of an effort to address this major challenge, we present MADNESS, which takes a data-driven approach and combines pixel-level multi-band information to learn complex priors for obtaining the maximum a posteriori solution of deblending. MADNESS is based on deep neural network architectures such as variational auto-encoders and normalizing flows. The variational auto-encoder reduces the high-dimensional pixel space into a lower-dimensional space, while the normalizing flow models a data-driven prior in this latent space. Using a simulated test dataset with galaxy models for a 10-year LSST survey and a galaxy density ranging from 48 to 80 galaxies per arcmin², we characterize the aperture-photometry g-r color, structural similarity index, and pixel cosine similarity of the galaxies reconstructed by MADNESS. We compare our results against state-of-the-art deblenders including scarlet. With the r-band of LSST as an example, we show that MADNESS performs better than scarlet in all the metrics. For instance, the average absolute value of relative flux residual in the r-band for MADNESS is approximately 29% lower than that of scarlet. The code is publicly available on GitHub.
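As a toy illustration of the MAP-in-latent-space idea the MADNESS abstract describes, the sketch below optimizes latent vectors so that the sum of decoded components matches a blended image under a latent-space prior. The decoder is an untrained placeholder and the prior is a standard normal standing in for the trained VAE decoder and normalizing-flow prior; nothing here is the MADNESS code itself.

```python
# Toy maximum-a-posteriori deblending in a latent space (illustrative only).
import torch

latent_dim, n_pixels, n_sources = 8, 64, 2

# Placeholder "decoder": maps one latent vector to one single-galaxy image.
decoder = torch.nn.Sequential(torch.nn.Linear(latent_dim, n_pixels), torch.nn.Softplus())

def prior_log_prob(z):
    # Stand-in for the normalizing flow's log-density (standard normal here).
    return -0.5 * (z ** 2).sum()

blend = torch.rand(n_pixels)                        # fake observed blended scene
sigma = 0.1                                         # assumed pixel noise level
z = torch.zeros(n_sources, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    scene = decoder(z).sum(dim=0)                   # sum of the deblended components
    log_lik = -0.5 * ((blend - scene) ** 2).sum() / sigma ** 2
    loss = -(log_lik + prior_log_prob(z))           # negative log-posterior
    loss.backward()
    opt.step()

deblended = decoder(z).detach()                     # MAP estimates of the individual components
```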
Neural Representations of Faces are Tuned to Eye Movements
2018
Eye movements provide a functional signature of how human vision is achieved. Many recent studies have reported idiosyncratic visual sampling strategies during face recognition. Whether these inter-individual differences are mirrored by idiosyncratic neural responses has not been investigated yet. Here, we tracked observers’ eye movements during face recognition; additionally, we obtained an objective index of neural face discrimination through EEG that was recorded while subjects fixated different facial information.
Across all observers, we found that those facial features that were fixated longer during face recognition elicited stronger neural face discrimination responses. This relationship occurred independently of inter-individual differences in fixation biases. Our data show that eye movements play a functional role during face processing by providing the neural system with information that is diagnostic to a specific observer. The effective processing of face identity involves idiosyncratic, rather than universal representations.