Search Results

60,089 results for "stochastic model"
Local lockdowns outperform global lockdown on the far side of the COVID-19 epidemic curve
In the late stages of an epidemic, infections are often sporadic and geographically distributed. Spatially structured stochastic models can capture these important features of disease dynamics, thereby allowing a broader exploration of interventions. Here we develop a stochastic model of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission among an interconnected group of population centers representing counties, municipalities, and districts (collectively, “counties”). The model is parameterized with demographic, epidemiological, testing, and travel data from Ontario, Canada. We explore the effects of different control strategies after the epidemic curve has been flattened. We compare a local strategy of reopening (and reclosing, as needed) schools and workplaces county by county, according to triggers for county-specific infection prevalence, to a global strategy of province-wide reopening and reclosing, according to triggers for province-wide infection prevalence. For trigger levels that result in the same number of COVID-19 cases between the two strategies, the local strategy causes significantly fewer person-days of closure, even under high intercounty travel scenarios. However, both cases and person-days lost to closure rise when county triggers are not coordinated and when testing rates vary among counties. Finally, we show that local strategies can also do better in the early epidemic stage, but only if testing rates are high and the trigger prevalence is low. Our results suggest that pandemic planning for the far side of the COVID-19 epidemic curve should consider local strategies for reopening and reclosing.
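To make the trigger mechanism concrete, here is a minimal sketch of prevalence-triggered closing and reopening in a two-patch stochastic SIR model. It is not the authors' Ontario-parameterized model: the patch sizes, rates, travel fraction, and trigger thresholds below are illustrative assumptions only.

```python
# Minimal sketch of a trigger-based reopening policy in a stochastic
# metapopulation SIR model. NOT the authors' Ontario model: all parameter
# values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = np.array([100_000, 100_000])          # patch populations (assumed)
beta_open, beta_closed = 0.30, 0.12       # transmission rates (assumed)
gamma, travel = 0.10, 0.01                # recovery rate, inter-patch mixing
trigger_close, trigger_open = 5e-4, 1e-4  # prevalence triggers (assumed)

S, I = N - np.array([20, 0]), np.array([20, 0], float)
closed = np.array([False, False])
person_days_closed = 0

for day in range(300):
    prev = I / N
    # close a patch above the upper trigger, reopen below the lower one
    closed = np.where(prev > trigger_close, True,
                      np.where(prev < trigger_open, False, closed))
    person_days_closed += int(N[closed].sum())
    beta = np.where(closed, beta_closed, beta_open)
    # force of infection includes a small contribution from the other patch
    lam = beta * ((1 - travel) * I / N + travel * I[::-1] / N[::-1])
    new_inf = rng.binomial(S.astype(int), 1 - np.exp(-lam))
    new_rec = rng.binomial(I.astype(int), 1 - np.exp(-gamma))
    S, I = S - new_inf, I + new_inf - new_rec

print(person_days_closed)
```

Running the same loop with a single province-wide trigger (one shared `closed` flag) and comparing `person_days_closed` at matched case counts is the essence of the local-versus-global comparison the paper carries out at realistic scale.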
RATE-OPTIMAL GRAPHON ESTIMATION
Network analysis is becoming one of the most active research areas in statistics. Significant advances have been made recently on developing theories, methodologies and algorithms for analyzing networks. However, there has been little fundamental study on optimal estimation. In this paper, we establish the optimal rate of convergence for graphon estimation. For the stochastic block model with k clusters, we show that the optimal rate under the mean squared error is n⁻¹ log k + k²/n². The minimax upper bound improves the existing results in the literature through a technique of solving a quadratic equation. When k ≤ √(n log n), as the number of clusters k grows, the minimax rate grows slowly, with only a logarithmic order n⁻¹ log k. A key step in establishing the lower bound is to construct a novel subset of the parameter space and then apply Fano's lemma, from which we see a clear distinction between the nonparametric graphon estimation problem and classical nonparametric regression, due to the lack of identifiability of the order of nodes in exchangeable random graph models. As an immediate application, we consider nonparametric graphon estimation in a Hölder class with smoothness α. When the smoothness α ≥ 1, the optimal rate of convergence is n⁻¹ log n, independent of α, while for α ∈ (0, 1), the rate is n^{−2α/(α+1)}, which is, to our surprise, identical to the classical nonparametric rate.
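The rates quoted in the abstract, written out in one display (the normalization of the loss over the n² entries follows the abstract's mean-squared-error setup):

```latex
% Minimax rate for graphon estimation under MSE, stochastic block model
% with k clusters:
\[
  \inf_{\hat f}\,\sup_{f}\;
  \mathbb{E}\,\frac{1}{n^2}\sum_{i,j}\bigl(\hat f_{ij}-f_{ij}\bigr)^2
  \;\asymp\; \frac{\log k}{n} + \frac{k^2}{n^2},
\]
% and for a H\"older class with smoothness $\alpha$:
\[
  \text{rate} \;\asymp\;
  \begin{cases}
    n^{-1}\log n, & \alpha \ge 1,\\[2pt]
    n^{-2\alpha/(\alpha+1)}, & \alpha \in (0,1).
  \end{cases}
\]
```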
Stochastic Geometry for Wireless Networks
Covering point process theory, random geometric graphs and coverage processes, this rigorous introduction to stochastic geometry will enable you to obtain powerful, general estimates and bounds of wireless network performance and make good design choices for future wireless architectures and protocols that efficiently manage interference effects. Practical engineering applications are integrated with mathematical theory, with an understanding of probability the only prerequisite. At the same time, stochastic geometry is connected to percolation theory and the theory of random geometric graphs and accompanied by a brief introduction to the R statistical computing language. Combining theory and hands-on analytical techniques with practical examples and exercises, this is a comprehensive guide to the spatial stochastic models essential for modelling and analysis of wireless network performance.
A New Unbiased Stochastic Derivative Estimator for Discontinuous Sample Performances with Structural Parameters
In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2) the likelihood ratio (LR) method, and (3) the weak derivative method, to a setting where they did not previously apply. Examples in probability constraints, control charts, and financial derivatives demonstrate the broad applicability of the proposed framework. The new estimator preserves the single-run efficiency of the classic IPA-LR estimators in applications, which is substantiated by numerical experiments. The online appendix is available at https://doi.org/10.1287/opre.2017.1674.
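A small sketch of the contrast the paper builds on, using an example of my own choosing rather than one from the paper: for a discontinuous sample performance such as 1{X > c} with X ~ N(θ, 1), the pathwise (IPA) derivative is zero almost everywhere and hence biased, while the likelihood ratio estimator remains unbiased.

```python
# Why LR-type estimators handle discontinuous payoffs where plain IPA fails.
# Target: d/dtheta E[1{X > c}], X ~ N(theta, 1). Exact answer: phi(c - theta).
# LR estimator: 1{X > c} * d/dtheta log f(X; theta) = 1{X > c} * (X - theta).
import numpy as np
from math import exp, pi, sqrt

theta, c, n = 0.5, 1.0, 1_000_000
rng = np.random.default_rng(1)
x = rng.normal(theta, 1.0, n)

lr_est = np.mean((x > c) * (x - theta))            # LR / score-function
ipa_est = 0.0                                      # pathwise derivative is 0 a.e.
exact = exp(-(c - theta) ** 2 / 2) / sqrt(2 * pi)  # phi(c - theta)

print(f"LR: {lr_est:.4f}  IPA: {ipa_est:.4f}  exact: {exact:.4f}")
```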
A Stochastic Optimization Model for Designing Last Mile Relief Networks
In this study, we introduce a distribution network design problem that determines the locations and capacities of the relief distribution points in the last mile network, while considering demand- and network-related uncertainties in the post-disaster environment. The problem addresses critical concerns of relief organizations in designing last mile networks, namely providing accessible and equitable service to beneficiaries. We focus on two types of supply allocation policies and propose a hybrid version that considers their different implications for equity and accessibility. Then, we develop a two-stage stochastic programming model that incorporates the hybrid allocation policy and achieves high levels of accessibility and equity simultaneously. We devise a branch-and-cut algorithm based on Benders decomposition to solve large problem instances in reasonable time, and conduct a numerical study to demonstrate the computational effectiveness of the solution method. We also illustrate the application of our model in a case study based on real-world data from the 2011 Van earthquake in Turkey.
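For reference, the generic two-stage stochastic program with recourse that models of this kind instantiate (the textbook template, not the paper's specific last-mile formulation):

```latex
% Generic two-stage stochastic program with recourse: first-stage decisions x
% (e.g. locations, capacities) are fixed before the uncertainty xi is
% revealed; second-stage decisions y (e.g. allocations) adapt per scenario.
\[
  \min_{x \in X}\; c^{\top}x + \mathbb{E}_{\xi}\bigl[Q(x,\xi)\bigr],
  \qquad
  Q(x,\xi) = \min_{y \ge 0}\;\bigl\{\, q(\xi)^{\top}y \;:\;
             W y = h(\xi) - T(\xi)\,x \,\bigr\}.
\]
```

Benders decomposition exploits exactly this structure: the expected recourse cost is approximated by cuts generated from the scenario subproblems Q(x, ξ), which is what makes branch-and-cut tractable for large instances.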
ANALYSIS OF STOCHASTIC PROCESS TO MODEL SAFETY RISK IN CONSTRUCTION INDUSTRY
Many factors can lead to construction safety accidents, and the patterns that emerge under their influence are statistical and random in nature. To reveal these random patterns and to study probabilistic prediction methods for construction safety accidents, this paper draws on stochastic process theory and uses the general stochastic process, the Markov process and the normal process to model the risk-accident process. First, in the analysis based on the general stochastic process, the probability of an accident occurring within a period of time is calculated. Then, the Markov property of the construction safety risk evolution process is illustrated, and an analytical expression for the probability density function of the first-passage time of the Markov risk-accident process is derived in order to calculate the construction safety probability. In the analysis based on the normal process, construction safety probability formulas are derived for a stationary normal risk process and for a non-stationary normal risk process with zero mean. Finally, the number of accidents that may occur on a construction site within a period is studied macroscopically using a Poisson process, and the probability distributions of the time interval between adjacent accidents and of the time of the nth accident are derived. The results provide a useful reference for the prediction and management of construction accidents.
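The standard Poisson-process facts the final part of the abstract relies on, for an accident rate λ, are:

```latex
% Count of accidents in (0, t], gap between adjacent accidents, and the time
% S_n of the n-th accident for a homogeneous Poisson process of rate lambda:
\[
  P\{N(t)=k\} = \frac{(\lambda t)^k}{k!}\,e^{-\lambda t}, \qquad
  T_i \sim \mathrm{Exp}(\lambda), \qquad
  S_n = \sum_{i=1}^{n} T_i \sim \mathrm{Gamma}(n,\lambda).
\]
```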
Learning action-oriented models through active inference
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model needs to account for. Unfortunately, this approach can cause models to prematurely converge to sub-optimal solutions, through a process we refer to as a bad-bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-oriented and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment. Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample-efficient learning algorithms.
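A toy sketch of the balancing idea, far simpler than the paper's active inference agents and entirely my own construction: an agent scores each action as expected reward plus a weighted expected information gain, so purely goal-directed behaviour (w = 0) and purely epistemic behaviour (large w) fall out as special cases.

```python
# Toy illustration of balancing goal-directed and epistemic value (a stand-in
# for expected-free-energy-style action selection, NOT the paper's model):
# a two-armed Bernoulli bandit with a discretized posterior per arm.
import numpy as np

grid = np.linspace(0.001, 0.999, 999)        # discretized parameter space

def expected_info_gain(post):
    """Mutual information between the next outcome and the parameter."""
    p1 = np.sum(post * grid)                 # predictive P(reward)
    ig = 0.0
    for y, py in ((1, p1), (0, 1 - p1)):
        lik = grid if y == 1 else 1 - grid
        upd = post * lik
        upd /= upd.sum()
        ig += py * np.sum(upd * np.log(upd / post))   # KL(updated || current)
    return ig

rng = np.random.default_rng(2)
true_rates, w = [0.6, 0.4], 5.0              # hidden truth, epistemic weight
posts = [np.full_like(grid, 1 / len(grid)) for _ in true_rates]

for t in range(200):
    # goal-directed term (expected reward) + epistemic term (info gain)
    scores = [np.sum(p * grid) + w * expected_info_gain(p) for p in posts]
    a = int(np.argmax(scores))
    y = rng.random() < true_rates[a]
    lik = grid if y else 1 - grid
    posts[a] = posts[a] * lik
    posts[a] /= posts[a].sum()

print([float(np.sum(p * grid)) for p in posts])  # posterior mean reward rates
```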
Stochastic modeling of triple-frequency BeiDou signals: estimation, assessment and impact analysis
Stochastic models are important in global navigation satellite system (GNSS) estimation problems: reliable ambiguity resolution and precise positioning can only be achieved with a suitable stochastic model. The BeiDou system has received increasing research attention, but so far only with empirical stochastic models carried over from experience with GPS. In this paper, we systematically study the estimation, assessment and impact of a triple-frequency BeiDou stochastic model. In our estimation problem, a single-difference, geometry-free functional model is used to extract pure random noise. A sophisticated structure for the unknown variance matrix is designed, allowing the estimation of satellite-specific variances, cross-correlations between two arbitrary frequencies, and the time correlations of phase and code observations per frequency. In assessing the stochastic models, six data sets with four brands of BeiDou receivers on short and zero-length baselines are processed and the results compared. In the impact analysis, the performance of integer ambiguity resolution and positioning is numerically demonstrated using a realistic stochastic model. The results from ultrashort (shorter than 10 m) and zero-length baselines indicate that BeiDou stochastic models are affected by both the observations and the receiver brand. The observation variances are modeled by an elevation-dependent function, but the modeling errors for geostationary earth orbit (GEO) satellites are larger than for inclined geosynchronous orbit (IGSO) and medium earth orbit (MEO) satellites. The stochastic model is governed both by the internal errors of the receiver and by external errors at the site, and different receivers differ in their ability to resist external errors. A realistic stochastic model is very important for achieving ambiguity resolution with a high success rate and a low false-alarm rate, and for determining realistic variances for position estimates. To the best of our knowledge, this paper is the first comprehensive study of stochastic models used specifically with BeiDou data.
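As a point of reference, here is a minimal sketch of the standard elevation-dependent variance function mentioned in the abstract and the least-squares weights it induces. The coefficients are placeholders, and the paper's estimated model is far richer (satellite-specific variances, cross- and time-correlations), none of which this simple form captures.

```python
# Standard elevation-dependent variance model sigma^2(E) = a^2 + b^2/sin^2(E)
# and the diagonal weight matrix for a least-squares adjustment.
# Coefficients a, b are placeholder values, not estimates from the paper.
import numpy as np

a, b = 0.003, 0.003             # zenith and elevation-scaling terms in metres (assumed)

def phase_variance(elev_deg):
    e = np.radians(elev_deg)
    return a**2 + b**2 / np.sin(e)**2

elevations = np.array([15.0, 30.0, 60.0, 85.0])  # satellite elevations (deg)
var = phase_variance(elevations)
W = np.diag(1.0 / var)          # higher weight for high-elevation satellites

print(np.sqrt(var) * 1000)      # per-satellite std dev in millimetres
```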
Immersion mode heterogeneous ice nucleation by an illite rich powder representative of atmospheric mineral dust
Atmospheric dust rich in illite is transported globally from arid regions and impacts cloud properties through the nucleation of ice. We present measurements of ice nucleation in water droplets containing known quantities of an illite rich powder under atmospherically relevant conditions. The illite rich powder used here, NX illite, has a similar mineralogical composition to atmospheric mineral dust sampled in remote locations, i.e. dust which has been subject to long range transport, cloud processing and sedimentation. Arizona Test Dust, which is used in other ice nucleation studies as a model atmospheric dust, has a significantly different mineralogical composition, and we suggest that NX illite is a better surrogate of natural atmospheric dust. Using optical microscopy, heterogeneous nucleation in the immersion mode by NX illite was observed to occur dominantly between 246 K and the homogeneous freezing limit. In general, higher freezing temperatures were observed when larger surface areas of NX illite were present within the drops. Homogeneous nucleation was observed to occur in droplets containing low surface areas of NX illite. We show that NX illite exhibits strong particle-to-particle variability in ice nucleating ability, with ~1 in 10⁵ particles dominating ice nucleation when high surface areas were present. In fact, this work suggests that the bulk of atmospheric mineral dust particles may be less efficient at nucleating ice than assumed in current model parameterisations. For droplets containing ≤2 × 10⁻⁶ cm² of NX illite, freezing temperatures did not noticeably change when the cooling rate was varied by an order of magnitude. The data obtained during cooling experiments (surface area ≤2 × 10⁻⁶ cm²) are shown to be inconsistent with the single-component stochastic model, but are well described by the singular model, ns(T) = exp(6.53043 × 10⁴ − 8.2153088 × 10²T + 3.446885376T² − 4.822268 × 10⁻³T³) for 236.2 K ≤ T ≤ 247.5 K. However, droplets continued to freeze when the temperature was held constant, which is inconsistent with the time-independent singular model. We show that this apparent discrepancy can be resolved using a multiple-component stochastic model in which it is assumed that there are many types of nucleation sites, each with a unique temperature-dependent nucleation coefficient. Cooling-rate independence can be achieved with this time-dependent model if the nucleation rate coefficients increase very rapidly with decreasing temperature, thus reconciling our measurement of nucleation at constant temperature with the cooling-rate independence.
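The quoted singular-model fit can be evaluated directly. A short sketch follows, assuming ns is in cm⁻² to match the cm² surface areas given in the abstract, and using the standard singular-model relation f = 1 − exp(−ns(T)·A) for the frozen fraction of droplets each carrying surface area A:

```python
# Evaluating the singular-model fit quoted in the abstract and the frozen
# fraction it implies, f = 1 - exp(-ns(T) * A). Unit of ns assumed to be
# cm^-2 (to match the cm^2 surface areas in the abstract).
import numpy as np

def n_s(T):
    """Ice-active site density (cm^-2) from the abstract's polynomial fit,
    valid for 236.2 K <= T <= 247.5 K."""
    return np.exp(6.53043e4 - 8.2153088e2 * T + 3.446885376 * T**2
                  - 4.822268e-3 * T**3)

A = 2e-6                                   # NX illite surface area per droplet, cm^2
T = np.array([246.0, 242.0, 238.0])        # temperatures in K
frac_frozen = 1.0 - np.exp(-n_s(T) * A)
print(frac_frozen)                         # rises steeply as T decreases
```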
How to Efficiently Determine the Range Precision of 3D Terrestrial Laser Scanners
As laser scanning technology has improved considerably in recent years, terrestrial laser scanners (TLS) have become popular devices for surveying tasks with high accuracy demands, such as deformation analyses. For this reason, finding a stochastic model for TLS measurements is very important in order to obtain statistically reliable results. The measurement accuracy of laser scanners, especially of their rangefinders, is strongly dependent on the scanning conditions, such as the scan configuration, the object surface geometry and the object reflectivity. This study demonstrates a way to determine the intensity-dependent range precision of 3D points for terrestrial laser scanners that measure in 3D mode, using the range residuals, in the laser beam direction, of a best-fit plane. The method does not require special targets or surfaces aligned perpendicular to the scanner, which allows a much quicker and easier determination of the stochastic properties of the rangefinder. Furthermore, the different intensity types, raw and scaled intensities, are investigated, since some manufacturers only provide scaled intensities. It is demonstrated that the intensity function can be derived from raw intensity values as described in the literature, and likewise, in a restricted measurement volume, from scaled intensity values if the raw intensities are not available.
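A minimal sketch of the residual computation the method rests on, run here on synthetic data: fit a best plane to the scanned points, then express each point's misfit as a range residual along the laser beam direction (scanner assumed at the origin). Binning the spread of these residuals by intensity is where an intensity-dependent precision model would then be fitted.

```python
# Best-fit plane via SVD, then range residuals along each beam direction.
# Synthetic planar patch with assumed 2 mm range noise stands in for a scan.
import numpy as np

rng = np.random.default_rng(3)

npts = 2000
pts = np.column_stack([rng.uniform(-1, 1, npts),
                       rng.uniform(-1, 1, npts),
                       np.full(npts, 5.0)])          # plane ~5 m from scanner
beams = pts / np.linalg.norm(pts, axis=1, keepdims=True)
pts = pts + beams * rng.normal(0.0, 0.002, npts)[:, None]  # noise along beam

# best-fit plane n.x = d: normal = direction of least variance
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
n = vt[-1]
d = n @ centroid

# range residual: signed distance to the plane measured along each beam
res = (d - pts @ n) / (beams @ n)
print(f"range precision estimate: {res.std(ddof=1) * 1000:.2f} mm")
```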