Search Results

Filters: Discipline, Is Peer Reviewed, Item Type, Subject, Year (From / To), More Filters
25 results for "inferential optimization"
Gaussian Markov Random Fields for Discrete Optimization via Simulation: Framework and Algorithms
This paper lays the foundation for employing Gaussian Markov random fields (GMRFs) for discrete decision-variable optimization via simulation; that is, optimizing the performance of a simulated system. Gaussian processes have gained popularity for inferential optimization, which iteratively updates a model of the simulated solutions and selects the next solution to simulate by relying on statistical inference from that model. We show that, for a discrete problem, GMRFs, a type of Gaussian process defined on a graph, provide better inference on the remaining optimality gap than the typical choice of continuous Gaussian process and thereby enable the algorithm to search efficiently and stop correctly when the remaining optimality gap is below a predefined threshold. We also introduce the concept of multiresolution GMRFs for large-scale problems, with which GMRFs of different resolutions interact to efficiently focus the search on promising regions of solutions. We consider optimizing the expected value of some performance measure of a dynamic stochastic simulation with a statistical guarantee for optimality when the decision variables are discrete, in particular, integer-ordered; the number of feasible solutions is large; and the model execution is too slow to simulate even a substantial fraction of them. Our goal is to create algorithms that stop searching when they can provide inference about the remaining optimality gap that is similar to the correct-selection guarantee of ranking and selection, which simulates all solutions. Further, our algorithm remains competitive with fixed-budget algorithms that search efficiently but do not provide such inference. To accomplish this, we learn and exploit spatial relationships among the decision variables and objective function values using a Gaussian Markov random field (GMRF). Gaussian random fields on continuous domains are already used in deterministic and stochastic optimization because they facilitate the computation of measures, such as expected improvement, that balance exploration and exploitation. We show that GMRFs are particularly well suited to the discrete decision-variable problem, from both a modeling and a computational perspective. Specifically, GMRFs permit the definition of a sensible neighborhood structure, and they are defined by their precision matrices, which can be constructed to be sparse. Using this framework, we create both single- and multiresolution algorithms, prove the asymptotic convergence of both, and evaluate their finite-time performance empirically. The e-companion is available at https://doi.org/10.1287/opre.2018.1778.
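A minimal sketch of the kind of computation the abstract describes, assuming a toy 1-D lattice of integer solutions, a first-order GMRF prior with a sparse precision matrix, and a handful of already-simulated solutions; all names and values below are illustrative, not taken from the paper:

```python
# Toy GMRF on a 1-D lattice of feasible integer solutions. Conditioning on a few
# simulated solutions gives a posterior mean/variance for the rest, which an
# expected-improvement-style score can use to pick the next solution to simulate.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu
from scipy.stats import norm

n = 101                      # feasible solutions x = 0, ..., 100 (assumed toy problem)
theta = 0.45                 # neighbour weight; keeps the precision matrix diagonally dominant
tau = 1.0                    # conditional precision scale

# Sparse precision matrix Q of a first-order GMRF on the lattice.
Q = tau * diags([-theta * np.ones(n - 1), np.ones(n), -theta * np.ones(n - 1)],
                offsets=[-1, 0, 1], format="csc")

# Pretend we have simulated a few solutions and observed noisy objective values.
observed_idx = np.array([5, 30, 60, 90])
observed_val = np.array([3.2, 1.1, 2.4, 4.0])
free_idx = np.setdiff1d(np.arange(n), observed_idx)

Q_ff = Q[free_idx, :][:, free_idx].tocsc()
Q_fo = Q[free_idx, :][:, observed_idx]

# Standard GMRF conditioning with a zero prior mean:
# mean = -Q_ff^{-1} Q_fo x_obs, covariance = Q_ff^{-1}.
lu = splu(Q_ff)
mu_free = -lu.solve(np.asarray(Q_fo @ observed_val))
var_free = np.diag(lu.solve(np.eye(len(free_idx))))   # dense only because the toy problem is tiny

# Expected-improvement-style score relative to the best simulated value (minimization).
best = observed_val.min()
sd = np.sqrt(np.maximum(var_free, 1e-12))
z = (best - mu_free) / sd
ei = (best - mu_free) * norm.cdf(z) + sd * norm.pdf(z)
print("next solution to simulate:", free_idx[np.argmax(ei)])
```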
Systems biology informed deep learning for inferring parameters and hidden dynamics
Mathematical models of biological reactions at the system level lead to a set of ordinary differential equations with many unknown parameters that need to be inferred using relatively few experimental measurements. Having a reliable and robust algorithm for parameter inference and prediction of the hidden dynamics has been one of the core subjects in systems biology, and is the focus of this study. We have developed a new systems-biology-informed deep learning algorithm that incorporates the system of ordinary differential equations into the neural networks. Enforcing these equations effectively adds constraints to the optimization procedure, which manifest themselves as an imposed structure on the observational data. Using a few scattered and noisy measurements, we are able to infer the dynamics of unobserved species, external forcing, and the unknown model parameters. We have successfully tested the algorithm on three different benchmark problems.
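A hedged sketch of the general idea (not the authors' code): a small network maps time to the system state, the ODE residual is added to the data-fitting loss, and an unknown rate constant is inferred jointly. The decay ODE dx/dt = -k*x, the network size, and all other values are illustrative assumptions.

```python
# ODE-informed training: data loss on a few noisy measurements plus an ODE
# residual loss at collocation points, with the rate constant k learned jointly.
import torch

torch.manual_seed(0)
k_true = 1.5
t_data = torch.linspace(0.0, 2.0, 8).reshape(-1, 1)
x_data = torch.exp(-k_true * t_data) + 0.01 * torch.randn_like(t_data)  # noisy observations

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
log_k = torch.nn.Parameter(torch.tensor(0.0))        # unknown parameter, k = exp(log_k) > 0
optimizer = torch.optim.Adam(list(net.parameters()) + [log_k], lr=1e-3)

t_coll = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)  # collocation points

for step in range(5000):
    optimizer.zero_grad()
    # Data loss: match the few noisy measurements.
    loss_data = torch.mean((net(t_data) - x_data) ** 2)
    # ODE residual loss: dx/dt + k*x should vanish at the collocation points.
    x_coll = net(t_coll)
    dx_dt = torch.autograd.grad(x_coll, t_coll, torch.ones_like(x_coll), create_graph=True)[0]
    loss_ode = torch.mean((dx_dt + torch.exp(log_k) * x_coll) ** 2)
    (loss_data + loss_ode).backward()
    optimizer.step()

print("inferred k (approx.):", torch.exp(log_k).item())
```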
Imperfect Bayesian inference in visual perception
Optimal Bayesian models have been highly successful in describing human performance on perceptual decision-making tasks, such as cue combination and visual search. However, recent studies have argued that these models are often overly flexible and therefore lack explanatory power. Moreover, there are indications that neural computation is inherently imprecise, which makes it implausible that humans would perform optimally on any non-trivial task. Here, we reconsider human performance on a visual-search task by using an approach that constrains model flexibility and tests for computational imperfections. Subjects performed a target detection task in which targets and distractors were tilted ellipses with orientations drawn from Gaussian distributions with different means. We varied the amount of overlap between these distributions to create multiple levels of external uncertainty. We also varied the level of sensory noise by testing subjects under both short and unlimited display times. On average, empirical performance, measured as d', fell 18.1% short of optimal performance. We found no evidence that the magnitude of this suboptimality was affected by the level of internal or external uncertainty. The data were well accounted for by a Bayesian model with imperfections in its computations. This "imperfect Bayesian" model convincingly outperformed the "flawless Bayesian" model as well as all ten heuristic models that we tested. These results suggest that perception is founded on Bayesian principles, but with suboptimalities in the implementation of these principles. The view of perception as imperfect Bayesian inference can provide a middle ground between traditional Bayesian and anti-Bayesian views.
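For reference, a small sketch of the performance measure mentioned above: the signal-detection sensitivity index d', computed from hit and false-alarm rates. The rates used here are invented for illustration and are not data from the study.

```python
# d' from signal detection theory: z(hit rate) - z(false-alarm rate).
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = Phi^{-1}(H) - Phi^{-1}(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

d_empirical = d_prime(0.80, 0.25)   # hypothetical subject
d_optimal = d_prime(0.88, 0.18)     # hypothetical ideal Bayesian observer
shortfall = 100 * (1 - d_empirical / d_optimal)
print(f"empirical d' = {d_empirical:.2f}, optimal d' = {d_optimal:.2f}, "
      f"shortfall = {shortfall:.1f}%")
```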
Geometry-Inference Based Clustering Heuristic: New k-means Metric for Gaussian Data and Experimental Proof of Concept
K-means is one of the most widely used algorithms for data clustering; a number of metrics are coupled to k-means to reach reasonable levels of cluster compactness and separation. In addition, efficient assignment of data to their clusters is conditioned on an a priori selection of the optimal number of clusters, which in fact constitutes a crucial step of this process. The present work proposes a new clustering metric/heuristic that takes into account both the dispersion and the statistical characteristics of the data to be clustered; a Geometry-Inference based Clustering (GIC) heuristic is derived for selecting the optimal number of clusters for k-means clustering. The conceptual approach proposed herein introduces the 'initial speed rate' as the main random variable to be studied statistically, and the corresponding histograms are fitted with a set of classical probability distributions. For Gaussian datasets, the estimated parameters of these distributions were found to be two-stage linear in the number of clusters k, with the optimal k* matching the intersection of the two linear stages. Normal and exponential distribution parameters proved more accurate than those of other distributions, with excellent chi-squared (χ2) goodness-of-fit results. Furthermore, the GIC algorithm is fully quantitative, so no qualitative or visual analysis is required. In contrast, the straightforward application of the GIC heuristic to non-Gaussian datasets resulted in weak clustering performance; an enhanced version of the GIC technique is therefore under development, using the notion of a geometrical data skeleton in 2D and higher-dimensional spaces.
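An illustrative sketch only: the paper's own statistic (distribution parameters of the 'initial speed rate') is not reproduced here; instead, a generic per-k statistic (k-means inertia) stands in to show the two-segment linear fit and intersection mechanics on Gaussian toy data.

```python
# Fit two straight lines to a per-k statistic and take the breakpoint with the
# smallest total squared error as the estimated optimal number of clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=4, random_state=0)   # Gaussian toy data
ks = np.arange(1, 11)
stat = np.array([KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                 for k in ks])

best_k, best_err = None, np.inf
for b in range(2, len(ks) - 2):          # keep at least 3 points on each side
    left_res = np.polyfit(ks[:b + 1], stat[:b + 1], 1, full=True)[1][0]
    right_res = np.polyfit(ks[b:], stat[b:], 1, full=True)[1][0]
    if left_res + right_res < best_err:
        best_err, best_k = left_res + right_res, ks[b]
print("estimated optimal number of clusters:", best_k)
```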
Prediction of Academic Performance Applying NNs: A Focus on Statistical Feature-Shedding and Lifestyle
Automation has made it possible to collect and preserve students' data, and modern data science enthusiastically mines this data to predict performance, to the interest of both tutors and tutees. Academic excellence results from a complex set of criteria originating in psychology, habits and, according to this study, lifestyle and preferences, which makes machine learning well suited to classifying academic soundness. In this paper, data were collected, with consent, by surveying computer science majors at Ahsanullah University in Bangladesh. Visually aided exploratory analysis revealed interesting propensities as features, whose significance was further substantiated by inferential chi-squared (χ2) independence tests and independent-samples t-tests for categorical and continuous variables, respectively, on median/mode-imputed data. An initially relaxed p-value threshold retained all exploratorily analyzed features, but gradually tightening it exposed the most powerful features by fitting neural networks of decreasing complexity, i.e., with 24, 20, and finally 12 hidden neurons. Statistical inference helped shed weak features prior to training, thereby saving the time and the generally large computational power needed to train expensive predictive models. The k-fold cross-validated, hyperparameter-tuned models performed robustly, with average accuracies between 90% and 96% and an average F1-score of 89.21% on the optimal model; the incremental improvement across models was confirmed by ANOVA.
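A hedged sketch of the feature-shedding workflow described above, on synthetic data: chi-squared independence tests for categorical features, independent-samples t-tests for continuous ones, and a small neural network trained only on the features that pass a p-value threshold. The dataset, feature names, and the 0.05 threshold are assumptions for illustration.

```python
# Shed statistically weak features before training, then cross-validate a small MLP.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "study_hours": rng.normal(3, 1, n),          # continuous
    "sleep_hours": rng.normal(7, 1, n),          # continuous
    "lives_on_campus": rng.integers(0, 2, n),    # categorical
    "plays_sports": rng.integers(0, 2, n),       # categorical
})
y = (df["study_hours"] + 0.5 * df["lives_on_campus"] + rng.normal(0, 1, n) > 3.2).astype(int)

alpha, kept = 0.05, []
for col in ["study_hours", "sleep_hours"]:                 # continuous -> t-test
    if ttest_ind(df[col][y == 1], df[col][y == 0]).pvalue < alpha:
        kept.append(col)
for col in ["lives_on_campus", "plays_sports"]:            # categorical -> chi-squared
    if chi2_contingency(pd.crosstab(df[col], y))[1] < alpha:
        kept.append(col)

model = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0)
scores = cross_val_score(model, df[kept], y, cv=5)
print("kept features:", kept, "| mean CV accuracy:", scores.mean().round(3))
```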
Simulation and the Asymptotics of Optimization Estimators
A general central limit theorem is proved for estimators defined by minimization of the length of a vector-valued, random criterion function. No smoothness assumptions are imposed on the criterion function, in order that the results might apply to a broad class of simulation estimators. Complete analyses of two simulation estimators, one introduced by Pakes and the other by McFadden, illustrate the application of the general theorems.
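In generic notation (an illustrative paraphrase, not the paper's own statement), the setup is an estimator that minimizes the norm of a vector-valued random criterion, together with a central limit result:

```latex
% Schematic setup: \hat{\theta}_n minimizes the norm (length) of a vector-valued
% random criterion G_n; under suitable conditions a central limit theorem holds.
\hat{\theta}_n \in \arg\min_{\theta \in \Theta} \bigl\lVert G_n(\theta) \bigr\rVert,
\qquad
\sqrt{n}\,\bigl(\hat{\theta}_n - \theta_0\bigr) \xrightarrow{d} \mathcal{N}(0, \Sigma),
```

where G_n may be non-smooth, which is what allows the result to cover simulation-based estimators.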
Two-step procedure for data-based modeling for inferential control applications
A two‐step procedure for building an inferential control model, which uses both historical operation data and plant test data, is proposed. Motivation for using the two types of data is given, and a systematic way to combine them in the model‐identification step is proposed. Some potential problems associated with the procedure in practice and their solutions are discussed. The efficacy of the procedure is demonstrated in a case study involving a multicomponent distillation column simulated in HYSYS.
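An illustrative sketch only; the paper's actual combination scheme is not reproduced here. It shows one generic way to use both data sources: a soft-sensor model is first identified on historical operation data, then re-identified on the pooled data with extra weight on the plant test data. All data and the weight w are assumptions.

```python
# Two-step identification of a linear inferential (soft-sensor) model.
import numpy as np

rng = np.random.default_rng(1)
true_coef = np.array([2.0, -1.0, 0.5])

# Step 1: abundant but less informative historical operation data.
X_hist = rng.normal(size=(300, 3)) * 0.2            # narrow operating range
y_hist = X_hist @ true_coef + 0.05 * rng.normal(size=300)

# Step 2: a short, deliberately exciting plant test.
X_test = rng.normal(size=(30, 3))                   # wider excitation
y_test = X_test @ true_coef + 0.05 * rng.normal(size=30)

def wls(X, y, weights):
    """Weighted least-squares estimate of the model coefficients."""
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

coef_hist = wls(X_hist, y_hist, np.ones(len(y_hist)))          # step-1 model
w = 5.0                                                         # assumed emphasis on test data
X_all = np.vstack([X_hist, X_test])
y_all = np.concatenate([y_hist, y_test])
weights = np.concatenate([np.ones(len(y_hist)), w * np.ones(len(y_test))])
coef_final = wls(X_all, y_all, weights)                         # step-2 combined model
print("historical-only coefficients:", np.round(coef_hist, 2))
print("combined two-step coefficients:", np.round(coef_final, 2))
```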
Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization
We consider optimization problems with a cost function consisting of a large number of component functions, such as minimize $\sum_{i=1}^{m} f_i(x)$ subject to $x \in X$, where $f_i \colon \mathbb{R}^n \mapsto \mathbb{R}$, $i = 1, \dots, m$, are real-valued functions and $X$ is a closed convex set. We focus on the case where the number of components $m$ is very large, and there is an incentive to use incremental methods that operate on a single component $f_i$ at each iteration, rather than on the entire cost function. If each incremental iteration tends to make reasonable progress in some "average" sense, then, depending
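A minimal sketch of an incremental gradient iteration for this problem, touching one component f_i per step and projecting back onto X. The quadratic components, the box constraint, and the step-size schedule are illustrative choices, not the chapter's specific algorithms.

```python
# Incremental gradient method: x <- P_X(x - alpha_k * grad f_{i_k}(x)),
# cycling through the components f_i(x) = 0.5*(a_i^T x - b_i)^2 with X a box.
import numpy as np

rng = np.random.default_rng(0)
m, d = 500, 5
A = rng.normal(size=(m, d))
b = A @ np.ones(d) + 0.1 * rng.normal(size=m)      # synthetic targets
lower, upper = -2.0, 2.0                           # the box constraint X

x = np.zeros(d)
for k in range(20 * m):                            # 20 passes over the components
    i = k % m                                      # cyclic order; a random order also works
    grad_i = (A[i] @ x - b[i]) * A[i]              # gradient of the single component f_i
    alpha = 0.002 / (k // m + 1)                   # diminishing step size, small for this toy
    x = np.clip(x - alpha * grad_i, lower, upper)  # gradient step, then projection onto X
print("estimate:", np.round(x, 3))
```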
The Returns to Schooling: A Selectivity Bias Approach with a Continuous Choice Variable
The essence of selection bias is that we do not observe nonoptimal choices. This applies whether the choice variable is discrete or continuous. This paper extends the selection bias methodology to the case where the choice variable is continuous and the choice set is ordered. The leading practical application of this analysis is the schooling choice problem. Schooling is treated as a continuous choice variable and selectivity corrected rates of return are estimated. The findings suggest selectivity is of considerable importance and support the comparative advantage hypothesis of Willis and Rosen [18].