16 result(s) for "Temporal sampling framework"
Reduced phase locking to slow amplitude modulation in adults with dyslexia: An MEG study
Perception of speech at multiple temporal scales is important for the efficient extraction of meaningful phonological elements. Individuals with developmental dyslexia have difficulty in the accurate neural representation of phonological aspects of speech, across languages. Recently, it was proposed that these difficulties might arise in part because of impaired phase locking to the slower modulations in the speech signal (<10 Hz), which would affect syllabic parsing and segmentation of the speech stream (the “temporal sampling” hypothesis, Goswami, 2011). Here we measured MEG responses to different rates of amplitude-modulated white noise in adults with and without dyslexia. In line with the temporal sampling hypothesis, different patterns of phase locking to amplitude modulation at the delta rate of 2 Hz were found when comparing participants with dyslexia to typically-reading participants. Typical readers exhibited better phase locking to slow modulations in right auditory cortex, whereas adults with dyslexia showed more bilateral phase locking. The results suggest that oscillatory phase locking mechanisms for slower temporal modulations are atypical in developmental dyslexia.
► MEG response to amplitude modulation at 2 and 4 Hz rates shows two cortical sources.
► MEG response to amplitude modulation at 10 and 20 Hz rates has one cortical source.
► Phase locking at 2 Hz shows strong right auditory cortex activation in controls.
► Phase locking at 2 Hz shows bilateral auditory cortex activation in dyslexics.
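A minimal sketch of how inter-trial phase locking at a 2 Hz modulation rate might be quantified. This is illustrative numpy code, not the authors' MEG pipeline; the simulated trials, sampling rate, and epoch length are assumptions.

```python
import numpy as np

def phase_locking_value(trials, fs, target_hz):
    """Inter-trial phase locking at a single frequency.

    trials: array (n_trials, n_samples), e.g. stimulus-locked source waveforms.
    Returns |mean over trials of exp(i * phase)|, a value in [0, 1].
    """
    n_trials, n_samples = trials.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - target_hz))      # FFT bin nearest the target rate
    spectra = np.fft.rfft(trials, axis=1)[:, bin_idx]
    phases = np.angle(spectra)
    return np.abs(np.mean(np.exp(1j * phases)))

# Illustrative use: 50 simulated 2-s trials sampled at 300 Hz with a 2 Hz component
rng = np.random.default_rng(0)
t = np.arange(600) / 300.0
trials = np.sin(2 * np.pi * 2.0 * t + 0.3) + rng.normal(0.0, 1.0, (50, 600))
print(phase_locking_value(trials, fs=300.0, target_hz=2.0))
```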
Language Acquisition in the Longitudinal Cambridge UK BabyRhythm Cohort
The Cambridge UK BabyRhythm project is a study of 122 infants as they age from 2 to 30 months, investigating cortical tracking and sensorimotor synchronisation to acoustic and visual rhythm in relation to language acquisition. As there are few standardised language tasks appropriate for this age range, the BabyRhythm project adapted a range of parent-report and infant-led experimental measures that could be used within a home testing environment. Here we present a rich description of infant performance on tasks intended to sample 5 linguistic domains: semantics, phonology, grammar, rhythmic timing and gesture. For each task we describe infant performance (mean, median, range), and we also report performance by sex (N female = 57) and by monolingual (N = 91) versus multilingual (N = 31) home environments. We report relations between measures. We share our unique longitudinal database (all data available on OSF), and ‘lessons learned’ on adapting language assessments for very young children. Critically, we identify the language tasks that will be utilised in our longitudinal brain-behaviour analyses, providing the benchmark upon which future neural and behavioural markers will be measured.
Is It About Speech or About Prediction? Testing Between Two Accounts of the Rhythm–Reading Link
Background/Objectives: The mechanisms underlying the positive association between reading and rhythmic skills remain unclear. Our goal was to systematically test between two major explanations: the Temporal Sampling Framework (TSF), which highlights the relation between rhythm and speech encoding, and a competing explanation based on rhythm’s role in enhancing prediction within visual and auditory sequences. Methods: We compared beat versus duration perception for their associations with encoding and sequence learning (prediction-related) tasks, using both visual and auditory sequences. We also compared these associations for Portuguese vs. Greek participants, since Portuguese stress-timed rhythm is more compatible with music-like beats lasting around 500 ms, in contrast to the syllable-timed rhythm of Greek. If rhythm acts via speech encoding, its effects should be more salient in Portuguese. Results: Consistent with the TSF’s predictions, we found a significant association between beat perception and auditory encoding in Portuguese but not in Greek participants. Correlations between time perception and sequence learning in both modalities were either null or insufficiently supported in both groups. Conclusions: Altogether, the evidence supported the TSF-related predictions over the Rhythm-as-Predictor (RaP) hypothesis.
Improved estimation of macroevolutionary rates from fossil data using a Bayesian framework
The estimation of origination and extinction rates and their temporal variation is central to understanding diversity patterns and the evolutionary history of clades. The fossil record provides the only direct evidence of extinction and biodiversity changes through time and has long been used to infer the dynamics of diversity changes in deep time. The software PyRate implements a Bayesian framework to analyze fossil occurrence data to estimate the rates of preservation, origination, and extinction while incorporating several sources of uncertainty. Building upon this framework, we present a suite of methodological advances including more complex and realistic models of preservation and the first likelihood-based test to compare the fit across different models. Further, we develop a new reversible jump Markov chain Monte Carlo algorithm to estimate origination and extinction rates and their temporal variation, which provides more reliable results and includes an explicit estimation of the number and temporal placement of statistically significant rate changes. Finally, we implement a new C++ library that speeds up the analyses by orders of magnitude, therefore facilitating the application of the PyRate methods to large data sets. We demonstrate the new functionalities through extensive simulations and with the analysis of a large data set of Cenozoic marine mammals. We compare our analytical framework against two widely used alternative methods to infer origination and extinction rates, revealing that PyRate decisively outperforms them across a range of simulated data sets. Our analyses indicate that explicit statistical model testing, which is often neglected in fossil-based macroevolutionary analyses, is crucial to obtain accurate and robust results.
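As a rough illustration of the kind of likelihood such Bayesian frameworks build on (not PyRate itself), the sketch below computes a grid posterior for a single constant extinction rate from toy lineage durations, treating extant lineages as right-censored. All values are made up.

```python
import numpy as np

# Toy lineage durations (Myr); True marks extinct lineages, False marks
# still-extant (right-censored) lineages. Values are illustrative only.
durations = np.array([3.2, 1.1, 7.5, 0.8, 4.9, 2.3, 6.1, 5.4])
extinct   = np.array([True, True, True, True, False, True, False, True])

def log_likelihood(mu):
    """Exponential-durations likelihood: extinct lineages contribute the density,
    extant ones only the survival term (right censoring)."""
    ll = np.where(extinct, np.log(mu) - mu * durations, -mu * durations)
    return ll.sum()

# Grid posterior with an Exponential(1) prior on the extinction rate mu
grid = np.linspace(0.01, 2.0, 400)
dx = grid[1] - grid[0]
log_post = np.array([log_likelihood(m) for m in grid]) - grid   # log prior = -mu (+ const)
post = np.exp(log_post - log_post.max())
post /= post.sum() * dx
print("posterior mean extinction rate:", (grid * post).sum() * dx)
```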
Characterizing spatio-temporal variation in survival and recruitment with integrated population models
Efforts to understand population dynamics and identify high-quality habitat require information about spatial variation in demographic parameters. However, estimating demographic parameters typically requires labor-intensive capture–recapture methods that are difficult to implement over large spatial extents. Spatially explicit integrated population models (IPMs) provide a solution by accommodating spatial capture–recapture (SCR) data collected at a small number of sites with survey data that may be collected over a much larger extent. We extended the spatial IPM framework to include a spatio-temporal point process model for recruitment, and we applied the model to 4 yr of SCR and distance-sampling data on Canada Warblers (Cardellina canadensis) near the southern extent of the species' breeding range in North Carolina, USA, where climate change is predicted to cause population declines and distributional shifts toward higher elevations. To characterize spatial variation in demographic parameters over the climate gradient in our study area, we modeled density, survival, and per capita recruitment as functions of elevation. We used a male-only model because males comprised >90% of our point-count detections. Apparent survival was low but increased with elevation, from 0.040 (95% credible interval [CI]: 0.0032–0.12) at 900 m to 0.29 (95% CI: 0.16–0.42) at 1,500 m. Recruitment was not strongly associated with elevation, yet density varied greatly, from <0.03 males ha⁻¹ below 1,000 m to >0.2 males ha⁻¹ above 1,400 m. Point estimates of population growth rate were <1 at all elevations, but 95% CIs included 1. Additional research is needed to assess the possibility of a long-term decline and to examine the effects of abiotic variables and biotic interactions on the demographic parameters influencing the species' distribution. The modeling framework developed here provides a platform for addressing these issues and advancing knowledge about spatial demography and population dynamics.
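The elevation effect on survival can be pictured with a logit-linear model of the sort used in such IPMs. The coefficients below are illustrative assumptions chosen only to mimic the reported direction of the effect, not estimates from the study.

```python
import numpy as np

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Logit-linear survival model: logit(phi) = b0 + b1 * z(elevation).
# Coefficients and elevations are illustrative, not values from the paper.
elev = np.array([900.0, 1200.0, 1500.0])      # metres
z = (elev - 1200.0) / 200.0                   # standardized elevation
b0, b1 = -1.8, 0.9
phi = inv_logit(b0 + b1 * z)
for e, p in zip(elev, phi):
    print(f"apparent survival at {e:.0f} m: {p:.2f}")
```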
Coordinated Static-Dynamic Framework for Hydropower Load Dispatch: Scenario Library-Based Design and Adaptive Compensation Mechanisms
Against the backdrop of surging dynamic regulation demands in new power systems, hydropower stations urgently need to enhance their intra-day real-time load regulation capabilities to address multi-scale fluctuations. However, current methods have critical limitations: equal-load distribution ignores unit efficiency variations, wasting water resources; intelligent algorithms lack computational speed for high-frequency dispatch; and pre-defined scheme libraries exhibit sparse solution spaces and poor dynamic adaptability, hindering practical application. To address these limitations, a constrained Latin hypercube sampling method constructs a vibration-constrained static scenario library, ensuring complete hydraulic-electrical coupling coverage through 4D (head/load/status/scheme) storage. For dynamic regulation, minute-level contingencies are resolved via temporal-feature-enhanced decision trees (TF-EDT), while hourly head fluctuations are managed by a linearized head-flow compensation model (HFLC) to replace nonlinear iterations. Case studies demonstrate a 92% scheme-matching rate, 76% fewer shutdowns, and 15-second fault response times, achieving simultaneous improvements in rapidity (+89%), efficiency (+4.7%), and stability under high renewable penetration.
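A hedged sketch of the static scenario-library idea: Latin hypercube sampling over an operating space, with rejection of points falling inside a hypothetical vibration zone. The bounds, the two-dimensional head/load space, and the restricted band are assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube over a 2-D (head, load) operating space.
sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=2000)
pts = qmc.scale(unit, l_bounds=[40.0, 50.0], u_bounds=[80.0, 300.0])   # head (m), load (MW)

def outside_vibration_zone(head, load):
    # Hypothetical restricted band: mid-range load combined with low head
    in_zone = (load > 135.0) & (load < 180.0) & (head < 60.0)
    return ~in_zone

head, load = pts[:, 0], pts[:, 1]
keep = outside_vibration_zone(head, load)
scenario_library = pts[keep]
print(f"{scenario_library.shape[0]} feasible scenarios out of {pts.shape[0]} samples")
```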
Spatiotemporal analysis of lake chlorophyll-a with combined in situ and satellite data
We estimated chlorophyll-a (Chl-a) concentration using various combinations of routine sampling, automatic station measurements, and MERIS satellite images. Our study site was the northern part of the large, shallow, mesotrophic Lake Pyhäjärvi located in southwestern Finland. Various combinations of measurements were interpolated spatiotemporally using a data fusion system (DFS) based on an ensemble Kalman filter and smoother algorithms. The estimated concentrations together with corresponding 68% confidence intervals are presented as time series at routine sampling and automated stations, as maps and as mean values over the EU Water Framework Directive monitoring period, to evaluate the efficiency of various monitoring methods. The mean Chl-a calculated with DFS in June–September was 6.5–7.5 µg/l, depending on the observations used as input. At the routine monitoring station where grab samples were used, the average uncertainty (standard deviation, SD) decreased from 2.7 to 1.6 µg/l when EO data were also included in the estimation. At the automatic station, located 0.9 km from the routine monitoring site, the SD was 0.7 µg/l. The SD of spatial mean concentration decreased from 6.7 to 2.9 µg/l when satellite observations were included in June–September, in addition to in situ monitoring data. This demonstrates the high value of the information derived from satellite observations. The conclusion is that the confidence of Chl-a monitoring could be increased by deploying spatially extensive measurements in the form of satellite imaging or transects conducted with flow-through sensors installed on a boat and spatiotemporal interpolation of the multisource data.
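A minimal, scalar version of the ensemble Kalman filter analysis step that underlies such data fusion: a forecast ensemble of Chl-a is updated with one satellite observation. The numbers are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Forecast ensemble of Chl-a (ug/l) and a single satellite-derived observation
forecast = rng.normal(7.0, 2.7, size=100)     # ensemble spread plays the role of prior uncertainty
y_obs, obs_sd = 6.2, 1.0                      # observed Chl-a and its error SD

P_f = forecast.var(ddof=1)                    # forecast error variance
K = P_f / (P_f + obs_sd**2)                   # Kalman gain (scalar observation operator H = 1)
perturbed_obs = y_obs + rng.normal(0.0, obs_sd, size=forecast.size)
analysis = forecast + K * (perturbed_obs - forecast)

print(f"mean: {forecast.mean():.2f} -> {analysis.mean():.2f} ug/l")
print(f"SD:   {forecast.std(ddof=1):.2f} -> {analysis.std(ddof=1):.2f} ug/l")
```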
Estimation of the water quality of a large urbanized river as defined by the European WFD: what is the optimal sampling frequency?
Assessment of the quality of freshwater bodies is essential to determine the impact of human activities on water resources. The water quality status is estimated by comparing indicators with standard thresholds. Indicators are usually statistical criteria that are calculated on discrete measurements of water quality variables. If the time step of the measured time series is not sufficient to fully capture the variable’s variability, the deduced indicator may not reflect the system’s functioning. The goal of the present work is to assess, through a hydro-biogeochemical modeling approach, the optimal sampling frequency for an accurate estimation of 6 water quality indicators defined by the European Water Framework Directive (WFD) in a large human-impacted river, which receives large urban effluents (the Seine River across the Paris urban area). The optimal frequency depends on the sampling location and on the monitored variable. For fast-varying compounds that originate from urban effluents, such as PO₄³⁻, NH₄⁺ and NO₂⁻, a sampling time step of one week or less is necessary. To be able to reflect the highly transient character of bloom events, chl a concentrations also require a short monitoring time step. On the contrary, for variables that exhibit high seasonal variability, such as NO₃⁻ and O₂, monthly sampling can be sufficient for an accurate estimation of WFD indicators in locations far enough from major effluents. Integrative water quality variables, such as O₂, can be highly sensitive to hydrological conditions. It would therefore be relevant to assess the quality of water bodies at a seasonal scale rather than at annual or pluri-annual scales. This study points out the possibility of developing smarter monitoring systems by coupling time-adaptive automated monitoring networks with modeling tools used as spatio-temporal interpolators.
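The sub-sampling logic can be sketched as follows: take a high-frequency concentration series, thin it at different sampling intervals, and measure how far the resulting indicator drifts from the full-series value. The synthetic daily series and the choice of the mean as indicator are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily series mixing a seasonal cycle with fast effluent-like pulses
days = np.arange(365)
seasonal = 0.3 * np.sin(2 * np.pi * days / 365.0)
pulses = (rng.random(365) < 0.05) * rng.exponential(0.8, 365)
conc = 0.5 + seasonal + pulses                      # e.g. a phosphate-like variable, mg/l
true_mean = conc.mean()

for step in (7, 14, 30):                            # weekly, fortnightly, monthly sampling
    errors = [abs(conc[start::step].mean() - true_mean) for start in range(step)]
    print(f"sampling every {step:2d} days: mean abs error {np.mean(errors):.3f} mg/l")
```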
Detecting dominant changes in irregularly sampled multivariate water quality data sets
Time series of groundwater and stream water quality often exhibit substantial temporal and spatial variability, whereas typical existing monitoring data sets, e.g. from environmental agencies, are usually characterized by relatively low sampling frequency and irregular sampling in space and/or time. This complicates the differentiation between anthropogenic influence and natural variability as well as the detection of changes in water quality which indicate changes in single drivers. We suggest the new term “dominant changes” for changes in multivariate water quality data which concern (1) multiple variables, (2) multiple sites and (3) long-term patterns and present an exploratory framework for the detection of such dominant changes in data sets with irregular sampling in space and time. Firstly, a non-linear dimension-reduction technique was used to summarize the dominant spatiotemporal dynamics in the multivariate water quality data set in a few components. Those were used to derive hypotheses on the dominant drivers influencing water quality. Secondly, different sampling sites were compared with respect to median component values. Thirdly, time series of the components at single sites were analysed for long-term patterns. We tested the approach with a joint stream water and groundwater quality data set consisting of 1572 samples, each comprising sixteen variables, sampled with a spatially and temporally irregular sampling scheme at 29 sites in northeast Germany from 1998 to 2009. The first four components were interpreted as (1) an agriculturally induced enhancement of the natural background level of solute concentration, (2) a redox sequence from reducing conditions in deep groundwater to post-oxic conditions in shallow groundwater and oxic conditions in stream water, (3) a mixing ratio of deep and shallow groundwater to the streamflow and (4) sporadic events of slurry application in the agricultural practice. Dominant changes were observed for the first two components. The changing intensity of the first component was interpreted as response to the temporal variability of the thickness of the unsaturated zone. A steady increase in the second component at most stream water sites pointed towards progressing depletion of the denitrification capacity of the deep aquifer.
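A stand-in sketch of the first analysis step: standardize a samples-by-variables water quality matrix and summarize it with a non-linear dimension reduction. Isomap from scikit-learn is used here only as an example technique, and the data are synthetic; neither is necessarily the study's choice.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic multivariate water quality matrix driven by two hidden "drivers"
n_samples, n_vars = 300, 16
latent = rng.normal(size=(n_samples, 2))
loadings = rng.normal(size=(2, n_vars))
X = latent @ loadings + 0.3 * rng.normal(size=(n_samples, n_vars))

X_std = StandardScaler().fit_transform(X)
components = Isomap(n_neighbors=10, n_components=4).fit_transform(X_std)
print(components.shape)   # (300, 4): component values per sample, which could then be
                          # compared across sites and inspected for long-term patterns
```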
Benchmarking inference methods for water quality monitoring and status classification
River water quality monitoring at limited temporal resolution can lead to imprecise and inaccurate classification of physicochemical status due to sampling error. Bayesian inference allows for the quantification of this uncertainty, which can assist decision-making. However, implicit assumptions of Bayesian methods can cause further uncertainty in the uncertainty quantification, so-called second-order uncertainty. In this study, and for the first time, we rigorously assessed this second-order uncertainty for inference of common water quality statistics (mean and 95th percentile) based on sub-sampling high-frequency (hourly) total reactive phosphorus (TRP) concentration data from three watersheds. The statistics were inferred with the low-resolution sub-samples using the Bayesian lognormal distribution and bootstrap, frequentist t test, and face-value approach and were compared with those of the high-frequency data as benchmarks. The t test exhibited a high risk of bias in estimating the water quality statistics of interest and corresponding physicochemical status (up to 99% of sub-samples). The Bayesian lognormal model provided a good fit to the high-frequency TRP concentration data and the least biased classification of physicochemical status (< 5% of sub-samples). Our results suggest wide applicability of Bayesian inference for water quality status classification, a new approach for regulatory practice that provides uncertainty information about water quality monitoring and regulatory classification with reduced bias compared to frequentist approaches. Furthermore, the study elucidates sizeable second-order uncertainty due to the choice of statistical model, which could be quantified based on the high-frequency data.
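A simplified sketch of the benchmarking setup: an hourly series serves as the benchmark, a sparse sub-sample is drawn, and the mean and 95th percentile are estimated under a plug-in lognormal model versus the face-value empirical approach. This omits the Bayesian posterior and bootstrap used in the paper, and the synthetic data and monthly sampling scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hourly "high-frequency" TRP concentrations (mg/l) used as the benchmark
hourly = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=24 * 365)
true_mean, true_p95 = hourly.mean(), np.percentile(hourly, 95)

sub = hourly[::24 * 30]                       # roughly monthly grab samples
log_sub = np.log(sub)
mu, sigma = log_sub.mean(), log_sub.std(ddof=1)

lognormal_mean = np.exp(mu + sigma**2 / 2)    # mean of a lognormal distribution
lognormal_p95 = np.exp(mu + 1.645 * sigma)    # 95th percentile (z_0.95 ≈ 1.645)

print(f"benchmark   mean={true_mean:.3f}  P95={true_p95:.3f}")
print(f"lognormal   mean={lognormal_mean:.3f}  P95={lognormal_p95:.3f}")
print(f"face value  mean={sub.mean():.3f}  P95={np.percentile(sub, 95):.3f}")
```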