Catalogue Search | MBRL
86,864 result(s) for "Optimization theory"
Novel grey wolf optimizer based parameters selection for GARCH and ARIMA models for stock price prediction
2024
Stock price data often exhibit nonlinear patterns and dynamics. Parameter selection in generalized autoregressive conditional heteroskedasticity (GARCH) and autoregressive integrated moving average (ARIMA) models is challenging due to stock price volatility. Most studies have examined manual parameter selection for GARCH and ARIMA models; such procedures are time-consuming and based on trial and error. To overcome this, we consider the grey wolf optimizer (GWO), a popular method for parameter optimization, to find the optimal parameters of GARCH and ARIMA models. The novel GWO-based parameter selection approach aims to improve stock price prediction accuracy by optimizing the parameters of the ARIMA and GARCH models. The hierarchical structure of GWO comprises four distinct categories: alpha (α), beta (β), delta (δ) and omega (ω). The predatory conduct of wolves primarily encompasses pursuing and closing in on the prey, tracing its movements, and ultimately attacking it. In the proposed context, attacking the prey corresponds to selecting the best parameters for the GARCH and ARIMA models. The GWO algorithm iteratively updates the positions of the wolves to provide potential solutions in the search space of the GARCH and ARIMA models. The proposed model is evaluated using root mean squared error (RMSE), mean squared error (MSE), and mean absolute error (MAE). GWO-based parameter selection improves model performance by 5% to 8% compared with traditional GARCH and ARIMA models.
Journal Article
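The position-update scheme the abstract describes (wolves encircling prey, guided by the alpha, beta and delta leaders) follows the standard GWO formulation. A minimal sketch, minimizing a toy sphere objective rather than the paper's GARCH/ARIMA criteria:

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimal Grey Wolf Optimizer: wolves move toward the three best
    solutions found so far (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        scores = np.array([fitness(x) for x in X])
        order = np.argsort(scores)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 - 2.0 * t / iters          # encircling coefficient decays to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])   # distance to this leader
                new += leader - A * D           # leader-guided candidate
            X[i] = np.clip(new / 3.0, lb, ub)   # average of the three moves
    scores = np.array([fitness(x) for x in X])
    return X[np.argmin(scores)], scores.min()

best_x, best_f = gwo(lambda x: np.sum(x**2), dim=2)
```

For the paper's setting, `fitness` would instead fit a GARCH or ARIMA model for a candidate (with continuous wolf positions rounded to integer model orders) and return a forecast-error or information-criterion score; that wrapper is not shown here.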
An evolutionary decomposition-based multi-objective feature selection for multi-label classification
by Asilian Bidgoli, Azam; Ebrahimpour-Komleh, Hossein; Rahnamayan, Shahryar
in Accuracy; Algorithms; Analysis
2020
Data classification is a fundamental task in data mining. Within this field, the classification of multi-labeled data has received considerable attention in recent years. In such problems, each data entity can simultaneously belong to several categories. Multi-label classification is important because of many recent real-world applications in which each entity has more than one label. Feature selection plays an important role in improving the performance of multi-label classification: it identifies and removes irrelevant and redundant features that unnecessarily increase the dimensionality of the search space. However, classification may fail if the number of relevant features is reduced too far. Thus, minimizing the number of features and maximizing the classification accuracy are two desirable but conflicting objectives in multi-label feature selection. In this article, we introduce a multi-objective optimization algorithm customized for selecting the features of multi-label data. The proposed algorithm is an enhanced variant of a decomposition-based multi-objective optimization approach, in which the multi-label feature selection problem is divided into single-objective subproblems that can be solved simultaneously using an evolutionary algorithm. This approach accelerates the optimization process and finds more diverse feature subsets. The proposed method benefits from a local search operator to find better solutions for each subproblem. We also define a pool of genetic operators to generate new feature subsets from the old generation. To evaluate the proposed algorithm, we compare it with two other multi-objective feature selection approaches on eight real-world benchmark datasets commonly used for multi-label classification. Multi-objective evaluation measures such as the hypervolume indicator and set coverage show an improvement in the results obtained by the proposed method. Moreover, the proposed method achieves better classification accuracy with fewer features than state-of-the-art methods.
Journal Article
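The decomposition idea, scalarizing the two conflicting objectives (classification error and number of features) into several single-objective subproblems solved side by side, can be sketched as below. This is an illustrative toy (weighted-sum scalarization, bit-flip local search, nearest-centroid accuracy on synthetic data), not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 6 features, only the first two are informative.
n, d = 200, 6
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def accuracy(mask):
    """Nearest-centroid accuracy using only the selected features."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred.astype(int) == y).mean()

# Decomposition: each weight vector defines one scalar subproblem
# g(mask | w) = w0 * (1 - accuracy) + w1 * (fraction of features kept).
weights = [(1 - t, t) for t in np.linspace(0.05, 0.95, 5)]
pop = rng.integers(0, 2, size=(len(weights), d))   # one mask per subproblem

for _ in range(100):
    for i, (w0, w1) in enumerate(weights):
        child = pop[i].copy()
        child[rng.integers(d)] ^= 1                # bit-flip local move
        g = lambda m: w0 * (1 - accuracy(m)) + w1 * m.mean()
        if g(child) <= g(pop[i]):                  # keep the better mask
            pop[i] = child

front = {tuple(m): accuracy(m) for m in pop}       # trade-off solutions
```

Each subproblem's final mask approximates one point on the accuracy-versus-size trade-off front; the paper's method adds genetic operators and Tchebycheff-style decomposition on real multi-label data.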
Application of particle swarm optimization in optimal placement of tsunami sensors
by Ferrolino, Angelie; Lope, Jose Ernie; Magdalena, Ikha
in Algorithms; Early warning systems; Earthquakes
2020
Rapid detection and early warning systems are of crucial significance in tsunami risk reduction. Several tsunami observation networks have been deployed in tsunamigenic regions to enable an effective local response, but guidance on where to station these sensors is limited. In this article, we address the problem of placing tsunami sensors so as to minimize tsunami detection time. We use the solutions of the 2D nonlinear shallow water equations to compute the wave travel time. The optimization problem is solved with the particle swarm optimization algorithm. We apply our model to a simple test problem with varying depths, and we use the proposed method to determine sensor placement for early tsunami detection in the Cotabato Trench, Philippines.
Journal Article
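The optimization loop itself is standard PSO; a minimal sketch follows. The cost function here is a hypothetical stand-in (worst-case distance to a set of made-up source points), whereas the paper derives travel times from the 2D nonlinear shallow water equations:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, lb=0.0, ub=10.0, seed=0):
    """Minimal global-best particle swarm optimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical proxy for detection time: worst-case distance from one sensor
# to any of three candidate tsunami sources on a 10 x 10 domain.
sources = np.array([[2.0, 3.0], [8.0, 7.0], [5.0, 1.0]])
worst_detection = lambda s: np.max(np.linalg.norm(sources - s, axis=1))
best_site, best_t = pso(worst_detection, dim=2)
```

With a physical travel-time field in place of `worst_detection`, the same loop searches over candidate sensor coordinates directly.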
Adaptive divergence for rapid adversarial optimization
by Gaintseva, Tatiana; Borisyak, Maxim; Ustyuzhanin, Andrey
in Acceleration; Adversarial optimization; Algorithms
2020
Adversarial Optimization provides a reliable, practical way to match two implicitly defined distributions, one of which is typically represented by a sample of real data and the other by a parameterized generator. Matching is achieved by minimizing a divergence between these distributions, and estimating the divergence involves a secondary optimization task that typically requires training a model to discriminate between the distributions. The choice of that model involves a trade-off: high-capacity models provide good estimations of the divergence but generally require large sample sizes to be properly trained, whereas low-capacity models tend to require fewer samples for training yet might provide biased estimations. The computational costs of Adversarial Optimization become significant when sampling from the generator is expensive; one practical example of such a setting is fine-tuning the parameters of complex computer simulations. In this work, we introduce a novel family of divergences that enables faster optimization convergence, measured by the number of samples drawn from the generator. Varying the capacity of the underlying discriminator model during optimization leads to a significant speed-up. The proposed divergence family uses low-capacity models to compare distant distributions (typically at early optimization steps), with the capacity gradually growing as the distributions become closer to each other; this allows a significant acceleration of the initial stages of optimization. The acceleration was demonstrated on two fine-tuning problems involving the Pythia event generator and two of the most popular black-box optimization algorithms: Bayesian Optimization and Variational Optimization. Experiments show that, given the same budget, adaptive divergences yield results up to an order of magnitude closer to the optimum than the Jensen-Shannon divergence. While we consider physics-related simulations, adaptive divergences can be applied to any stochastic simulation.
Journal Article
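The capacity trade-off the abstract describes can be illustrated with a toy classifier-based divergence estimate, where the bin count of a histogram discriminator plays the role of model capacity. This is an illustrative stand-in, not the paper's construction:

```python
import numpy as np

def jsd_estimate(xp, xq, n_bins):
    """Jensen-Shannon divergence estimate from a histogram 'discriminator'
    whose resolution (n_bins) acts as its capacity."""
    lo = min(xp.min(), xq.min())
    hi = max(xp.max(), xq.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(xp, edges)
    q, _ = np.histogram(xq, edges)
    p = (p + 1e-3) / (p + 1e-3).sum()      # smoothed bin probabilities
    q = (q + 1e-3) / (q + 1e-3).sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 2000)

# Early in optimization the generator is far off: a coarse (low-capacity)
# discriminator already separates the samples.
far_gen = rng.normal(4.0, 1.0, 2000)
d_far = jsd_estimate(real, far_gen, n_bins=4)

# Later the generator is close: a finer (high-capacity) discriminator is
# needed to resolve the remaining discrepancy.
near_gen = rng.normal(0.3, 1.0, 2000)
d_near = jsd_estimate(real, near_gen, n_bins=64)
```

Growing the capacity only as the distributions approach each other mirrors the adaptive-divergence schedule: cheap, low-variance estimates early on, sharper estimates once they matter.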
On Sudakov’s type decomposition of transference plans with norm costs
2018
We consider the original strategy proposed by Sudakov for solving the Monge transportation problem with norm cost. In this paper we show how the difficulties of that strategy can be overcome, and that the original idea of Sudakov can be successfully implemented. The results yield a complete characterization of the Kantorovich optimal transportation problem, whose straightforward corollary is the solution of the Monge problem in each set of the decomposition. The analysis requires four main steps.
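For context, the two problems named in the abstract have the following standard formulations with norm cost (textbook statements, not reproduced from the paper): the Monge problem optimizes over transport maps, while the Kantorovich relaxation optimizes over transference plans.

```latex
% Monge problem: optimize over maps T pushing \mu forward to \nu
\min_{T \,:\, T_{\#}\mu = \nu} \int |T(x) - x| \, d\mu(x)

% Kantorovich relaxation: optimize over plans \pi with marginals \mu and \nu
\min_{\pi \in \Pi(\mu, \nu)} \int |x - y| \, d\pi(x, y)
```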
A Comprehensive Review of Swarm Optimization Algorithms
2015
Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared comprehensively through experiments conducted on thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine significant differences in performance. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered.
Journal Article
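The winning algorithm in the survey, DE, mutates candidates with scaled difference vectors. A minimal DE/rand/1/bin sketch on the Rastrigin function, one of the standard multimodal benchmarks (an illustrative sketch, not the paper's experimental code):

```python
import numpy as np

def differential_evolution(f, dim, pop_size=30, iters=200,
                           F=0.8, CR=0.9, lb=-5.12, ub=5.12, seed=0):
    """Minimal DE/rand/1/bin: difference-vector mutation, binomial
    crossover, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lb, ub)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # ensure one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                    # greedy replacement
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

# Rastrigin benchmark: global minimum 0 at the origin.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
best_x, best_f = differential_evolution(rastrigin, dim=5)
```

The benchmark studies the survey summarizes run many such functions over repeated trials and apply statistical tests to the resulting score distributions.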
Robust supply chain network design: an optimization model with real world application
by Fahimnia, Behnam; Jabbarzadeh, Armin; Zokaee, Shiva
in Business and Management; Business logistics; Combinatorics
2017
This paper presents a robust optimization model for the design of a supply chain facing uncertainty in demand, supply capacity and major cost data, including transportation and shortage cost parameters. We first present a base model that determines the strategic 'location' and tactical 'allocation' decisions for a deterministic four-tier supply chain. The model is then extended to incorporate uncertainty in key input parameters using a robust optimization approach that can overcome the limitations of scenario-based solution methods in a tractable way, i.e. without excessive changes in the complexity of the underlying base deterministic model. The application of the approach is investigated in an actual case study where real data is utilized to design a bread supply chain network. Numerical results obtained from model implementation and sensitivity analysis experiments yield important managerial insights and practical implications.
Journal Article
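The robust counterpart idea, optimizing against the worst case within an uncertainty set rather than enumerating demand scenarios, can be illustrated with a deliberately tiny location decision (hypothetical numbers, not from the bread case study):

```python
import numpy as np

# Open one of two candidate plants to serve a region whose demand is only
# known to lie in an interval (box uncertainty): demand in [80, 120].
demand_nominal, demand_dev = 100.0, 20.0
unit_transport = np.array([2.0, 3.0])     # per-unit cost from each plant
capacity = np.array([90.0, 130.0])        # plant capacities
shortage_penalty = 10.0                   # per unit of unmet demand

def worst_case_cost(plant):
    """Cost under the adversarial (maximum) demand realization."""
    d = demand_nominal + demand_dev
    served = min(d, capacity[plant])
    return unit_transport[plant] * served + shortage_penalty * (d - served)

# Robust choice: minimize the worst-case cost over the uncertainty set.
robust_choice = int(np.argmin([worst_case_cost(p) for p in (0, 1)]))
```

Under nominal demand the cheap, small plant 0 wins (cost 280 vs 300), but the robust criterion picks the larger plant 1 (worst-case 360 vs 480) because plant 0's shortage penalty dominates at peak demand; the paper's model makes the analogous worst-case trade-off inside a full location-allocation program.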
CO₂ fertilization of terrestrial photosynthesis inferred from site to global scales
by Keenan, Trevor F.; Chen, Chi; Prentice, I. Colin
in Annual variations; Atmospheric models; BASIC BIOLOGICAL SCIENCES
2022
Global photosynthesis is increasing with elevated atmospheric CO₂ concentrations, a response known as the CO₂ fertilization effect (CFE), but the key processes of CFE are not constrained and therefore remain uncertain. Here, we quantify CFE by combining observations from a globally distributed network of eddy covariance measurements with an analytical framework based on three well-established photosynthetic optimization theories. We report a strong enhancement of photosynthesis across the observational network (9.1 gC m−2 year−2) and show that the CFE is responsible for 44% of the gross primary production (GPP) enhancement since the 2000s, with additional contributions primarily from warming (28%). Soil moisture and specific humidity are the two largest contributors to GPP interannual variation through their influences on plant hydraulics. Applying our framework to satellite observations and meteorological reanalysis data, we diagnose a global CO₂-induced GPP trend of 4.4 gC m−2 year−2, which is at least one-third stronger than the median trends of 13 dynamic global vegetation models and eight satellite-derived GPP products, mainly because of their differences in the magnitude of CFE in evergreen broadleaf forests. These results highlight the critical role that CFE has played in the global carbon cycle in recent decades.
Journal Article
Optimization algorithms on matrix manifolds
2008
Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction, illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary for algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
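The eigenspace problems mentioned at the end give the simplest illustration of the book's recipe (Euclidean gradient, projection onto the tangent space, retraction back onto the manifold). A minimal sketch of Riemannian steepest descent on the unit sphere for the Rayleigh quotient, not taken from the book's own code:

```python
import numpy as np

def rayleigh_descent(A, iters=500, step=0.1, seed=0):
    """Riemannian steepest descent on the unit sphere for f(x) = x^T A x,
    whose minimizer is the eigenvector of A's smallest eigenvalue."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        egrad = 2 * A @ x                    # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x      # project onto tangent space at x
        x = x - step * rgrad                 # step in the tangent direction
        x /= np.linalg.norm(x)               # retraction back to the sphere
    return x

A = np.diag([5.0, 3.0, 1.0])
x = rayleigh_descent(A)
# x converges (up to sign) to the smallest-eigenvalue eigenvector
```

The projection-then-retract pattern is exactly what the book generalizes: on richer manifolds (Stiefel, Grassmann) only the tangent projection and retraction formulas change, while the descent logic stays the same.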