Catalogue Search | MBRL
5,006 result(s) for "Parameter selection"
Combinatorial MAB-Based Joint Channel and Spreading Factor Selection for LoRa Devices
2023
Long-Range (LoRa) devices have been deployed in many Internet of Things (IoT) applications because they can communicate over long distances with low power consumption. The scalability and communication performance of LoRa systems depend heavily on the spreading factor (SF) and channel allocations. In particular, the SF must be set appropriately according to the distance between the LoRa device and the gateway, since signal reception sensitivity and bit rate both depend on the SF and are in a trade-off relationship. Given the recent surge in the number of LoRa devices, the scalability of LoRa systems is also greatly affected by the channels the devices use for communication. Our previous study demonstrated that lightweight, decentralized, learning-based joint channel and SF-selection methods can make appropriate decisions with low computational complexity and power consumption. However, the effect of device location on communication performance in a practical, larger-scale LoRa system has not been studied. Hence, to clarify this effect, in this paper we implemented and evaluated the learning-based joint channel and SF-selection methods in a practical LoRa system. In these methods, the channel and SF are decided based only on ACKnowledgment (ACK) information. The learning methods evaluated in this paper are the Tug-of-War dynamics, Upper Confidence Bound 1 (UCB1), and ϵ-greedy algorithms. Moreover, to account for the interdependence of channel and SF, we propose a combinatorial multi-armed bandit-based joint channel and SF-selection method, in which each combination of channel and SF is treated as an arm. By contrast, in the independent methods evaluated in our previous work, the SF and channel are treated as independent arms.
From the experimental results, we draw the following conclusions. First, the combinatorial methods achieve a higher frame success rate (FSR) and fairness than the independent methods. Second, the FSR can be improved by joint channel and SF selection compared with SF selection alone. Finally, channel and SF selection depends to a great extent on the location of the devices.
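The decision loop behind the combinatorial approach — treating each (channel, SF) combination as a bandit arm and rewarding an arm only when an ACK is received — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the channel/SF pairs and ACK probabilities are hypothetical, and ϵ-greedy stands in for the full set of evaluated algorithms (Tug-of-War, UCB1, ϵ-greedy).

```python
import random

def epsilon_greedy_joint_selection(ack_prob, n_rounds=5000, epsilon=0.1, seed=0):
    """Combinatorial epsilon-greedy: each arm is a (channel, SF) pair,
    rewarded 1 when an ACK is received and 0 otherwise."""
    rng = random.Random(seed)
    arms = list(ack_prob)                    # e.g. [(channel, sf), ...]
    counts = {a: 0 for a in arms}
    values = {a: 0.0 for a in arms}          # running mean ACK rate per arm
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.choice(arms)           # explore a random combination
        else:
            arm = max(arms, key=lambda a: values[a])  # exploit best estimate
        # Simulate the ACK outcome of one transmission on this (channel, SF).
        reward = 1.0 if rng.random() < ack_prob[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return max(arms, key=lambda a: values[a])

# Hypothetical ACK probabilities for 2 channels x 2 spreading factors.
probs = {(0, 7): 0.9, (0, 12): 0.5, (1, 7): 0.3, (1, 12): 0.6}
```

Setting arms to pairs (rather than learning channel and SF separately) is what lets the learner capture interactions between the two choices, at the cost of a larger arm set.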
Journal Article
Choosing Mutation and Crossover Ratios for Genetic Algorithms—A Review with a New Dynamic Approach
by
Abunawas, Eman
,
Hassanat, Ahmad
,
Alkafaween, Esra’a
in
Artificial intelligence
,
Chromosomes
,
Crossovers
2019
The genetic algorithm (GA) is an artificial intelligence search method, under the umbrella of evolutionary computing, that mimics evolution and natural selection. It is an efficient tool for solving optimization problems. Coordinating the GA's parameters is vital for a successful search; such parameters include the mutation and crossover rates as well as the population size. Each GA operator has a distinct influence, and the impact of these operators depends on their probabilities, so it is difficult to predefine specific ratios for each parameter, particularly for the mutation and crossover operators. This paper reviews various methods for choosing mutation and crossover ratios in GAs. We then define new deterministic control approaches for the crossover and mutation rates, namely Dynamic Decreasing of High Mutation / Increasing of Low Crossover (DHM/ILC) and Increasing of Low Mutation / Decreasing of High Crossover (ILM/DHC). The dynamic nature of the proposed methods allows the crossover and mutation ratios to change linearly as the search progresses: DHM/ILC starts with a 100% mutation ratio and a 0% crossover ratio; the mutation ratio then decreases while the crossover ratio increases, so that by the end of the search the ratios are 0% for mutation and 100% for crossover. ILM/DHC works the same way in reverse. The proposed approaches were compared with two predefined parameter-tuning methods, namely fifty-fifty crossover/mutation ratios and the most common approach of static ratios, such as a 0.03 mutation rate and a 0.9 crossover rate. The experiments were conducted on ten Traveling Salesman Problem (TSP) instances.
The experiments showed the effectiveness of the proposed DHM/ILC with small population sizes, while the proposed ILM/DHC was more effective with large population sizes. In fact, both proposed dynamic methods outperformed the predefined methods in most of the cases tested.
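The linear schedules described in the abstract can be written down directly. A minimal sketch, not the authors' code; the only inputs assumed are the current generation number and the total generation budget.

```python
def dhm_ilc_rates(gen, max_gen):
    """DHM/ILC schedule: the mutation rate decreases linearly from 1.0 to 0.0
    while the crossover rate increases from 0.0 to 1.0 over the run."""
    frac = gen / max_gen
    return 1.0 - frac, frac   # (mutation_rate, crossover_rate)

def ilm_dhc_rates(gen, max_gen):
    """ILM/DHC is the mirror image: mutation grows while crossover shrinks."""
    frac = gen / max_gen
    return frac, 1.0 - frac
```

At mid-run both schedules pass through equal 50/50 rates, which is exactly the fifty-fifty predefined baseline the paper compares against.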
Journal Article
Uninformative Parameters and Model Selection Using Akaike's Information Criterion
2010
As use of Akaike's Information Criterion (AIC) for model selection has become increasingly common, so has a mistake involving interpretation of models that are within 2 AIC units (ΔAIC ≤ 2) of the top-supported model. Such models are <2 ΔAIC units because the penalty for one additional parameter is +2 AIC units, but model deviance is not reduced by an amount sufficient to overcome the 2-unit penalty and, hence, the additional parameter provides no net reduction in AIC. Simply put, the uninformative parameter does not explain enough variation to justify its inclusion in the model and it should not be interpreted as having any ecological effect. Models with uninformative parameters are frequently presented as being competitive in the Journal of Wildlife Management, including 72% of all AIC-based papers in 2008, and authors and readers need to be more aware of this problem and take appropriate steps to eliminate misinterpretation. I reviewed 5 potential solutions to this problem: 1) report all models but ignore or dismiss those with uninformative parameters, 2) use model averaging to ameliorate the effect of uninformative parameters, 3) use 95% confidence intervals to identify uninformative parameters, 4) perform all-possible subsets regression and use weight-of-evidence approaches to discriminate useful from uninformative parameters, or 5) adopt a methodological approach that allows models containing uninformative parameters to be culled from reported model sets. The first approach is preferable for small sets of a priori models, whereas the last 2 approaches should be used for large model sets or exploratory modeling.
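The arithmetic behind the ΔAIC ≤ 2 pitfall is easy to make concrete. The deviance values below are hypothetical; the point is only that a one-parameter extension that reduces deviance by less than 2 units always lands within 2 AIC units of the base model.

```python
def aic(deviance, k):
    """Akaike's Information Criterion: model deviance plus a penalty of
    2 units per estimated parameter."""
    return deviance + 2 * k

# Hypothetical nested pair: the extra parameter cuts deviance by only 0.5,
# far less than its 2-unit penalty, yet delta-AIC stays under 2.
base = aic(deviance=100.0, k=3)        # 106.0
extended = aic(deviance=99.5, k=4)     # 107.5
delta = extended - base                # 1.5: "competitive" by the <=2 rule,
                                       # but the added parameter is uninformative
```

Any ΔAIC between 0 and 2 obtained this way reflects only the penalty structure, not evidence for an ecological effect of the added parameter.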
Journal Article
SPARSE MODELS AND METHODS FOR OPTIMAL INSTRUMENTS WITH AN APPLICATION TO EMINENT DOMAIN
2012
We develop results for the use of Lasso and post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well-approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic "beta-min" conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso-based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post-Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non-Gaussian, heteroscedastic disturbances that uses a data-weighted 𝓁₁-penalty function.
By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for the resulting Lasso and post-Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data-driven method for choosing the penalty level that must be specified in obtaining Lasso and post-Lasso estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
Journal Article
Selecting single cell clustering parameter values using subsampling-based robustness metrics
by
Levine, Ariel J.
,
Patterson-Cross, Ryan B.
,
Menon, Vilas
in
Algorithms
,
Analysis
,
Benchmarking
2021
Background
Generating and analysing single-cell data has become a widespread approach to examine tissue heterogeneity, and numerous algorithms exist for clustering these datasets to identify putative cell types with shared transcriptomic signatures. However, many of these clustering workflows rely on user-tuned parameter values, tailored to each dataset, to identify a set of biologically relevant clusters. Whereas users often develop their own intuition as to the optimal range of parameters for clustering on each data set, the lack of systematic approaches to identify this range can be daunting to new users of any given workflow. In addition, an optimal parameter set does not guarantee that all clusters are equally well-resolved, given the heterogeneity in transcriptomic signatures in most biological systems.
Results
Here, we illustrate a subsampling-based approach (chooseR) that simultaneously guides parameter selection and characterizes cluster robustness. Through bootstrapped iterative clustering across a range of parameters, chooseR was used to select parameter values for two distinct clustering workflows (Seurat and scVI). In each case, chooseR identified parameters that produced biologically relevant clusters from both well-characterized (human PBMC) and complex (mouse spinal cord) datasets. Moreover, it provided a simple “robustness score” for each of these clusters, facilitating the assessment of cluster quality.
Conclusion
chooseR is a simple, conceptually understandable tool that can be used flexibly across clustering algorithms, workflows, and datasets to guide clustering parameter selection and characterize cluster robustness.
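The subsampling idea behind chooseR can be illustrated with a generic co-clustering frequency: repeatedly cluster random subsets of the data and record how often each pair of observations that appears together is assigned to the same cluster. This is a sketch of the general approach, not the chooseR package itself; `cluster_fn` is a stand-in for any clustering workflow (e.g. Seurat or scVI in the paper), and the subsample fraction and iteration count are illustrative defaults.

```python
import random

def cocluster_robustness(points, cluster_fn, n_iter=50, frac=0.8, seed=0):
    """Subsampling-based robustness in the spirit of chooseR: for each pair
    of points, the fraction of co-sampled iterations in which the pair
    landed in the same cluster."""
    rng = random.Random(seed)
    n = len(points)
    together = [[0] * n for _ in range(n)]
    seen = [[0] * n for _ in range(n)]
    for _ in range(n_iter):
        idx = rng.sample(range(n), int(frac * n))    # bootstrap-style subset
        labels = cluster_fn([points[i] for i in idx])
        for a_pos, a in enumerate(idx):
            for b_pos, b in enumerate(idx):
                seen[a][b] += 1
                if labels[a_pos] == labels[b_pos]:
                    together[a][b] += 1
    return [[together[a][b] / seen[a][b] if seen[a][b] else 0.0
             for b in range(n)] for a in range(n)]
```

Averaging these pairwise frequencies within a cluster gives a per-cluster "robustness score" of the kind the abstract describes; repeating the whole procedure across a grid of clustering parameters then guides the parameter choice.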
Journal Article
Optimal Data-Driven Regression Discontinuity Plots
2015
Exploratory data analysis plays a central role in applied statistics and econometrics. In the popular regression-discontinuity (RD) design, the use of graphical analysis has been strongly advocated because it provides both easy presentation and transparent validation of the design. RD plots are nowadays widely used in applications, despite their formal properties being unknown: these plots are typically presented employing ad hoc choices of tuning parameters, which makes these procedures less automatic and more subjective. In this article, we formally study the most common RD plot based on an evenly spaced binning of the data, and propose several (optimal) data-driven choices for the number of bins depending on the goal of the researcher. These RD plots are constructed either to approximate the underlying unknown regression functions without imposing smoothness in the estimator, or to approximate the underlying variability of the raw data while smoothing out the otherwise uninformative scatterplot of the data. In addition, we introduce an alternative RD plot based on quantile-spaced binning, study its formal properties, and propose similar (optimal) data-driven choices for the number of bins. The main proposed data-driven selectors employ spacings estimators, which are simple and easy to implement in applications because they do not require additional choices of tuning parameters. Altogether, our results offer an array of alternative RD plots that are objective and automatic when implemented, providing a reliable benchmark for graphical analysis in RD designs. We illustrate the performance of our automatic RD plots using several empirical examples and a Monte Carlo study. All results are readily available in R and STATA using the software packages described in Calonico, Cattaneo, and Titiunik. Supplementary materials for this article are available online.
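The evenly spaced binning underlying the canonical RD plot is easy to sketch: split the support of the running variable on each side of the cutoff into equal-width bins and plot bin-mean outcomes against bin midpoints. A minimal illustration with a fixed, user-chosen number of bins per side; the data-driven bin-number selectors the article proposes (spacings estimators) are not reproduced here.

```python
def rd_binned_means(x, y, cutoff, n_bins):
    """Evenly spaced binning for an RD plot: bin the running variable x
    separately on each side of the cutoff and return (bin midpoint,
    mean outcome) pairs for the non-empty bins."""
    out = []
    xmax = max(x)
    for lo, hi in ((min(x), cutoff), (cutoff, xmax)):
        width = (hi - lo) / n_bins
        for b in range(n_bins):
            left, right = lo + b * width, lo + (b + 1) * width
            last = b == n_bins - 1
            # Half-open bins; the topmost bin also keeps the maximum point.
            ys = [yi for xi, yi in zip(x, y)
                  if left <= xi < right or (last and hi == xmax and xi == right)]
            if ys:
                out.append(((left + right) / 2, sum(ys) / len(ys)))
    return out
```

Binning each side separately is what lets the plot display a visible jump at the cutoff rather than smoothing across it.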
Journal Article
Selection and Optimization of Carbon-Reinforced Polyether Ether Ketone Process Parameters in 3D Printing—A Rotating Component Application
by
Rusho, Maher Ali
,
Vijayakumar, Praveenkumar
,
Thirugnanasambandam, Arun Kumar
in
3-D printers
,
3D printing
,
Additive manufacturing
2024
The selection of process parameters is crucial in 3D printing for product manufacturing. These parameters govern the operation of production machinery and influence the mechanical properties, production time, and other aspects of the final product. The optimal process parameter settings vary depending on the product and printing application. This study identifies the most suitable cluster of process parameters for producing rotating components, specifically impellers, using carbon-reinforced Polyether Ether Ketone (CF-PEEK) thermoplastic filament. A mathematical programming technique using a rating method was employed to select the appropriate process parameters. The research concludes that an infill density of 70%, a layer height of 0.15 mm, a printing speed of 60 mm/s, a platform temperature of 195 °C, an extruder temperature of 445 °C, and an extruder travel speed of 95 mm/s are optimal process parameters for manufacturing rotating components using carbon-reinforced PEEK material.
Journal Article
Tuning parameter selection in high dimensional penalized likelihood
2013
Determining how to select the tuning parameter appropriately is essential in penalized likelihood methods for high dimensional data analysis. We examine this problem in the setting of penalized likelihood methods for generalized linear models, where the dimensionality of covariates p is allowed to increase exponentially with the sample size n. We propose to select the tuning parameter by optimizing the generalized information criterion with an appropriate model complexity penalty. To ensure that we consistently identify the true model, a range for the model complexity penalty is identified in the generalized information criterion. We find that this model complexity penalty should diverge at the rate of some power of log(p) depending on the tail probability behaviour of the response variables. This reveals that using the Akaike information criterion or Bayes information criterion to select the tuning parameter may not be adequate for consistently identifying the true model. On the basis of our theoretical study, we propose a uniform choice of the model complexity penalty and show that the approach proposed consistently identifies the true model among candidate models with asymptotic probability 1. We justify the performance of the procedure proposed by numerical simulations and a gene expression data analysis.
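The selection rule described can be sketched generically: score each fitted model along the regularization path with a generalized information criterion and keep the tuning parameter that minimizes it. The penalty choice below, log(log n) · log(p), is one concrete instance of a penalty that diverges with log(p); it is an assumed example, not necessarily the paper's uniform choice, and the candidate deviances are hypothetical.

```python
import math

def gic(deviance, df, n, p):
    """Generalized information criterion with a log(log n) * log(p)
    model-complexity penalty per degree of freedom. AIC (penalty 2) and
    BIC (penalty log n) are recovered by swapping out a_n."""
    a_n = math.log(math.log(n)) * math.log(p)
    return deviance + a_n * df

def select_lambda(candidates, n, p):
    """Pick the tuning parameter whose fitted model minimizes GIC.
    candidates: (lam, deviance, df) triples along the solution path."""
    return min(candidates, key=lambda t: gic(t[1], t[2], n, p))[0]
```

Because a_n grows with p, the criterion penalizes extra selected variables more heavily than AIC or BIC would, which is the mechanism behind the consistency result the abstract describes.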
Journal Article
On selecting proper process parameters for cold metal transfer (CMT)–based wire arc additive manufacturing (WAAM) process
by
Mirakhorli, Fatemeh
,
Nadeau, François
,
Béland, Jean-François
in
bead geometry
,
Beads
,
Bonding strength
2024
This paper addresses the selection of process parameters for the cold metal transfer (CMT)–based wire arc additive manufacturing (WAAM) process. Experimental tests were conducted using different wire feed and travel speeds with a Fronius-CMT welding source to produce single-weld beads using an H13 filler wire. Optimizing these two parameters enables the achievement of desired bead geometry, improved part quality, and reduced fabrication time and material consumption. Geometric characteristics of the weld bead, such as width, height, and contact angle, were measured using a scanning microscope and analyzed with PolyWorks software. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was employed to determine the optimal process parameters considering heat input, height, width, and contact angle. The results revealed the strong influence of heat input on weld bead geometry. Increased heat input resulted in a greater width and height of the weld bead. However, the relationship between heat input and contact angle was more complex, with the contact angle significantly influenced by the applied power: higher power values were associated with a decreased contact angle. The parameters selected via TOPSIS were then used to fabricate an H13 part. Sound metallurgical bonding, absence of large defects, and the expected geometries were obtained, validating the proposed method.
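TOPSIS itself is compact enough to sketch. Below is a generic implementation under common conventions (vector normalization, Euclidean distances); the criterion weights and the benefit/cost designation of each criterion — heat input, height, width, and contact angle in the paper's setting — are user choices, and the numbers in the test are hypothetical.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: rows = alternatives, columns = criteria.
    benefit[j] is True if larger values of criterion j are better."""
    n_crit = len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal-best and ideal-worst points, per criterion direction.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))  # closeness in [0, 1]
    return scores
```

The alternative with the highest closeness score is the recommended parameter set; in the paper's application each row would be one tested combination of wire feed and travel speed.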
Journal Article
Model Selection via Bayesian Information Criterion for Quantile Regression Models
by
Park, Byeong U.
,
Lee, Eun Ryung
,
Noh, Hohsuk
in
Analytical estimating
,
Approximation
,
Bayesian analysis
2014
Bayesian information criterion (BIC) is known to identify the true model consistently as long as the predictor dimension is finite. Recently, its moderate modifications have been shown to be consistent in model selection even when the number of variables diverges. Those works have been done mostly in mean regression, but rarely in quantile regression. The best-known results about BIC for quantile regression are for linear models with a fixed number of variables. In this article, we investigate how BIC can be adapted to high-dimensional linear quantile regression and show that a modified BIC is consistent in model selection when the number of variables diverges as the sample size increases. We also discuss how it can be used for choosing the regularization parameters of penalized approaches that are designed to conduct variable selection and shrinkage estimation simultaneously. Moreover, we extend the results to structured nonparametric quantile models with a diverging number of covariates. We illustrate our theoretical results via some simulated examples and a real data analysis on human eye disease. Supplementary materials for this article are available online.
Journal Article