402 result(s) for "elastic net"
Hybrid Optimization of CO₂ Emissions and Energy in High-Performance Concrete Using KNN, Elastic Net, and Artificial Rabbits Optimization Models
In this study, an integrated machine learning framework is proposed to accurately predict and minimize CO₂ emissions and energy consumption in the manufacturing of High-Performance Concrete (HPC). The methodology combines K-Nearest Neighbor (KNN) and Elastic Net Regression (ENR) models with the Artificial Rabbits Optimization (ARO) algorithm for hyperparameter tuning, and employs Recursive Feature Elimination (RFE) to isolate the most influential input variables. A dataset comprising key HPC mix components was curated from experimental sources and subjected to rigorous preprocessing. Among the tested models, the hybrid ENR + ARO (ENAR) model achieved the best performance for energy prediction with an R² of 0.986 and RMSE of 52.63 MJ/m³, while the KNN + ARO (KNAR) model yielded the highest accuracy for CO₂ emission prediction with an R² of 0.992 and RMSE of 7.57 kg/m³. The application of RFE improved model performance by 12.4% in RMSE reduction for energy prediction and 9.6% for CO₂ estimation, by eliminating redundant features. Cement and superplasticizer content were identified as the most influential predictors. These results provide a reliable and interpretable framework for enhancing the sustainability of concrete production through data-driven mix optimization.
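The RFE-plus-elastic-net step this abstract describes can be sketched generically. The following is a minimal illustration with synthetic data standing in for the HPC mix dataset (the paper's data, the ARO tuning stage, and its exact settings are not reproduced); the feature counts and hyperparameters are assumptions, and scikit-learn is used for convenience.

```python
# Hedged sketch: recursive feature elimination (RFE) wrapped around an
# elastic-net regressor. Synthetic data stands in for the HPC mix dataset.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # 8 candidate mix variables (illustrative)
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RFE drops the least informative feature at each step, refitting the
# elastic net on the survivors, until only the requested number remain.
selector = RFE(ElasticNet(alpha=0.01, l1_ratio=0.5), n_features_to_select=2)
selector.fit(X_tr, y_tr)

print(selector.support_)            # boolean mask of retained features
print(selector.score(X_te, y_te))   # held-out R^2 of the reduced model
```

In the paper the eliminated features are the redundant mix components; here the synthetic signal lives in the first two columns, so those are what RFE retains.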
Geographically Weighted Elastic Net: A Variable-Selection and Modeling Method under the Spatially Nonstationary Condition
This study develops a linear regression model to select local, low-collinear explanatory variables. This model combines two well-known models: geographically weighted regression (GWR) and elastic net (EN). The GWR model posits that the regression coefficients vary as a function of location and focuses on explaining relationships under the spatially nonstationary condition, which a global model cannot solve. GWR cannot fulfill the task of variable selection, however, which is problematic when there are many explanatory variables with nonnegligible multicollinearity. On the other hand, the EN model is a member of the regularized regression family. EN can trim the number of explanatory variables and select the most important ones by adding penalty terms to its cost function, and it has been proven to be robust under the high-multicollinearity condition. The EN model is a global model, however, and does not consider spatial nonstationarity. To overcome these deficiencies, we proposed the geographically weighted elastic net (GWEN) model. GWEN uses the kernel weights derived from GWR and applies EN locally to select variables for each geographical location. The result is a set of locally selected, low-collinear explanatory variables with spatially varying coefficients. We demonstrated the GWEN method on a data set relating population changes to a set of social, economic, and environmental variables in the Lower Mississippi River Basin. The results show that GWEN has the advantages of both the high prediction accuracy of GWR and the low multicollinearity among explanatory variables of EN.
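The core GWEN idea, Gaussian kernel weights from GWR feeding a locally fitted elastic net, can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the coordinates, bandwidth, penalty values, and the drifting coefficient are all invented for the example.

```python
# Hedged sketch of GWEN: at each target location, weight observations by a
# Gaussian kernel on distance and fit a local elastic net, so variable
# selection and coefficients can vary over space.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))   # observation locations
X = rng.normal(size=(n, 5))                # explanatory variables
# The coefficient of X[:, 0] drifts with the x-coordinate: the spatially
# nonstationary relationship a single global model cannot capture.
y = (0.5 * coords[:, 0]) * X[:, 0] + rng.normal(scale=0.1, size=n)

def gwen_at(location, bandwidth=2.0):
    """Fit an elastic net local to `location` using Gaussian kernel weights."""
    d2 = ((coords - location) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    model = ElasticNet(alpha=0.01, l1_ratio=0.5)
    model.fit(X, y, sample_weight=w)
    return model.coef_

west = gwen_at(np.array([1.0, 5.0]))
east = gwen_at(np.array([9.0, 5.0]))
print(west[0], east[0])   # the local coefficient grows from west to east
```

The elastic-net penalty also zeroes out the four irrelevant columns locally, which is the variable-selection half of the method.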
Machine learning methods in the computational biology of cancer
The objectives of this Perspective paper are to review some recent advances in sparse feature selection for regression and classification, as well as compressed sensing, and to discuss how these might be used to develop tools to advance personalized cancer therapy. As an illustration of the possibilities, a new algorithm for sparse regression is presented and is applied to predict the time to tumour recurrence in ovarian cancer. A new algorithm for sparse feature selection in classification problems is presented, and its validation in endometrial cancer is briefly discussed. Some open problems are also presented.
Priority-Elastic net for binary disease outcome prediction based on multi-omics data
Background High-dimensional omics data integration has emerged as a prominent avenue within the healthcare industry, presenting substantial potential to improve predictive models. However, the data integration process faces several challenges, including data heterogeneity, choosing the priority sequence in which data blocks contribute their predictive information, assessing the flow of information from one omics level to another, and multicollinearity. Methods We propose the Priority-Elastic net algorithm, a hierarchical regression method extending Priority-Lasso to the binary logistic regression model by incorporating a priority order for blocks of variables while fitting Elastic-net models sequentially for each block. The fitted values from each step are then used as an offset in the subsequent step. Additionally, we considered the adaptive elastic-net penalty within our priority framework to compare the results. Results The Priority-Elastic net and Priority-Adaptive Elastic net algorithms were evaluated on a brain tumor dataset available from The Cancer Genome Atlas (TCGA), accounting for transcriptomics, proteomics, and clinical information measured over two glioma types: lower-grade glioma (LGG) and glioblastoma (GBM). Conclusion Our findings suggest that the Priority-Elastic net is a highly advantageous choice for a wide range of applications. It offers moderate computational complexity, flexibility in integrating prior knowledge within a hierarchical modeling perspective, and, importantly, improved stability and accuracy in predictions, making it superior to the other methods discussed. This marks a significant step forward in predictive modeling, offering a sophisticated tool for navigating the complexities of multi-omics datasets in pursuit of precision medicine's ultimate goal: personalized treatment optimization based on a comprehensive array of patient-specific data.
This framework can be generalized to time-to-event (Cox proportional hazards) regression and multicategorical outcomes. A practical implementation of this method is available upon request as an R script, complete with an example to facilitate its application.
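The sequential offset idea can be illustrated for a continuous outcome. Note the simplification: the paper targets a binary outcome via logistic regression with a true offset term, whereas this linear, residual-based version is a deliberately reduced sketch with invented data and block names.

```python
# Hedged sketch of the priority principle: fit an elastic net on the
# highest-priority block first, then fit the next block only on what the
# first block left unexplained (for a linear model, the residuals play the
# role of the offset).
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)
n = 500
clinical = rng.normal(size=(n, 3))   # block 1: highest priority (illustrative)
omics = rng.normal(size=(n, 20))     # block 2: fitted second
y = 2.0 * clinical[:, 0] + 1.0 * omics[:, 0] + rng.normal(scale=0.1, size=n)

block1 = ElasticNet(alpha=0.01).fit(clinical, y)
offset = block1.predict(clinical)

# Block 2 sees only the part of y the clinical block could not explain,
# so its coefficients measure the *added* value of the omics data.
block2 = ElasticNet(alpha=0.01).fit(omics, y - offset)

print(block1.coef_[0])   # recovers the clinical effect
print(block2.coef_[0])   # recovers the residual omics effect
```

This ordering is the point of the method: a block placed later in the priority sequence can only contribute information the earlier blocks did not already carry.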
Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of L1-norm-based and L2-norm-based constraints, such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it relies on the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical; ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm that combines Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods recover complicated source setups more accurately, with more robust hyperparameter estimation and better behavior under different sparsity scenarios, than classical LORETA, ENET, and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods.
The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website.
Functional trait complementarity and dominance both determine benthic secondary production in temperate seagrass beds
Defining relationships between biodiversity and ecosystem functioning (BEF) is key to understanding the consequences of biodiversity loss. Although species functional traits are strongly linked to ecosystem processes, their integration into BEF models has focused mainly on terrestrial ecosystems. Application is limited because functional trait‐based BEF models typically have small sample sizes and highly correlated predictors, which complicates model selection and the identification of underlying drivers. We examine the BEF relationship between secondary production and benthic invertebrate taxonomic and functional diversity for seagrass beds located across a range of environmental conditions. Specifically, we evaluate the role of complementarity (i.e., dissimilarity in species or traits) and dominance (disproportional importance of traits) in determining secondary production at 20 sites using 34 metrics of taxonomic diversity, functional diversity, and functional traits. Here, diversity metrics represent complementarity and functional traits represent dominance. We used elastic‐net regression and commonality analysis to evaluate the BEF model because the model's properties (few observations and many potential, highly correlated predictors) precluded more standard approaches, and this combination is well suited to such situations. Functional richness and five functional traits (crawling, surface deposit feeding [SurDF], location on sediment surface, lifespan 1–3 yr, and semi‐continuous breeding) were identified as important determinants of secondary production, explaining 74% of the variance. SurDF was the most important predictor acting in isolation to influence secondary production, while all other predictors acted together. All six selected variables in three different combinations explained 68% of the total variance in the BEF model. These results indicate that both dominance and complementarity mechanisms were important for the BEF relationship.
Our study highlights elastic‐net regression and commonality analysis as a powerful approach to model functional trait‐based BEF relationships. We further show that inclusion of the functional landscape into BEF models is highly valuable, allowing the implications of species loss for ecosystem functioning to be mechanistically understood.
Statistical predictions with glmnet
Elastic-net-type regression methods have become very popular for the prediction of certain outcomes in epigenome-wide association studies (EWAS). These methods accept biased coefficient estimates in return for lower variance, thereby obtaining improved prediction accuracy. We provide guidelines on how to obtain parsimonious models with low mean squared error and include easy-to-follow walk-through examples, in R, for each step.
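One common route to such a parsimonious model is to cross-validate the penalty strength and then prefer the sparsest model within one standard error of the best (glmnet's `lambda.1se` heuristic). The sketch below reproduces that heuristic with scikit-learn rather than R's glmnet; the data and settings are illustrative and this is not the paper's own walk-through.

```python
# Hedged sketch: cross-validated elastic net plus the one-standard-error
# rule, which trades a little mean squared error for a sparser model.
import numpy as np
from sklearn.linear_model import ElasticNet, ElasticNetCV

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 30))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=150)

cv = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0).fit(X, y)

# Per-alpha CV error: rows follow cv.alphas_ (descending), columns are folds.
mse = cv.mse_path_.reshape(len(cv.alphas_), -1)
mean_mse = mse.mean(axis=1)
se = mse.std(axis=1) / np.sqrt(mse.shape[1])

best = mean_mse.argmin()
threshold = mean_mse[best] + se[best]
# Largest alpha (strongest penalty, hence sparsest model) still within
# one standard error of the minimum CV error.
alpha_1se = cv.alphas_[np.where(mean_mse <= threshold)[0][0]]

sparse = ElasticNet(alpha=alpha_1se, l1_ratio=0.5).fit(X, y)
print(int((sparse.coef_ != 0).sum()), "non-zero coefficients")
```

Because `alpha_1se` is at least as large as the error-minimizing `cv.alpha_`, the resulting model is never less sparse than the CV-optimal one, which is exactly the parsimony trade-off the abstract alludes to.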
Adaptive Channel Selection for Robust Visual Object Tracking with Discriminative Correlation Filters
Discriminative Correlation Filters (DCF) have been shown to achieve impressive performance in visual object tracking. However, existing DCF-based trackers rely heavily on learning regularised appearance models from invariant image feature representations. To further improve the performance of DCF in accuracy and provide a parsimonious model from the attribute perspective, we propose to gauge the relevance of multi-channel features for the purpose of channel selection. This is achieved by assessing the information conveyed by the features of each channel as a group, using an adaptive group elastic net inducing independent sparsity and temporal smoothness on the DCF solution. The robustness and stability of the learned appearance model are significantly enhanced by the proposed method as the process of channel selection performs implicit spatial regularisation. We use the augmented Lagrangian method to optimise the discriminative filters efficiently. The experimental results obtained on a number of well-known benchmarking datasets demonstrate the effectiveness and stability of the proposed method. A superior performance over the state-of-the-art trackers is achieved using less than 10% deep feature channels.
Combining multiple connectomes improves predictive modeling of phenotypic measures
Resting-state and task-based functional connectivity matrices, or connectomes, are powerful predictors of individual differences in phenotypic measures. However, most of the current state-of-the-art algorithms only build predictive models based on a single connectome for each individual. This approach neglects the complementary information contained in connectomes from different sources and reduces prediction performance. In order to combine different task connectomes into a single predictive model in a principled way, we propose a novel prediction framework, termed multidimensional connectome-based predictive modeling. Two specific algorithms are developed and implemented under this framework. Using two large open-source datasets with multiple tasks—the Human Connectome Project and the Philadelphia Neurodevelopmental Cohort, we validate and compare our framework against performing connectome-based predictive modeling (CPM) on each task connectome independently, CPM on a general functional connectivity matrix created by averaging together all task connectomes for an individual, and CPM with a naïve extension to multiple connectomes where each edge for each task is selected independently. Our framework exhibits superior performance in prediction compared with the other competing methods. We found that different tasks contribute differentially to the final predictive model, suggesting that the battery of tasks used in prediction is an important consideration. This work makes two major contributions: First, two methods for combining multiple connectomes from different task conditions in one predictive model are demonstrated; Second, we show that these models outperform a previously validated single connectome-based predictive model approach. 
• A multidimensional prediction framework that combines different task connectomes
• Two specific algorithms for multidimensional prediction are evaluated
• Predicts fluid intelligence with higher accuracy than using a single connectome
• Different task connectomes offer complementary information about fluid intelligence
• Algorithm performance is not sensitive to hyperparameters
Facets of Fakes in Cyberspace: Machine and Ensemble Learning-Based Decisions and Detections
Fake online reviews hinder internet marketing efforts to build businesses and brands in a competitive market with changing consumer expectations. Because such reviews help brands attract clients, they are crafted to be hard to uncover; hence, fake reviews and websites are examined extensively here. AI models such as the Generalized Additive 2 Model (GA2M) and its ensemble with the Elastic-net Classifier model are studied using the Log-Loss metric. This research, analysis, and depiction help demarcate bogus hotel reviews and websites from genuine entities. The paper uses ML classifiers (Decision Tree, Logistic Regression, Naïve Bayes) and ensemble models (Random Forest, Gradient Boosting) to identify legitimate websites via binary classification, and compares them by accuracy, precision, recall, F1-score, and ROC-AUC to evaluate their strengths and weaknesses. The Elastic-Net Classifier (L2 / Binomial Deviance), with a score of 0.2879, outperformed the GA2M model by 0.66% in Log-Loss holdout score on the Hotel dataset. Log-Loss is preferred here over ROC-AUC because it more directly measures how close the predicted probabilities are to the actual values. The Elastic-Net Classifier (L2 / Binomial Deviance) also surpassed GA2M in F1-score, precision, and accuracy by 0.4%, 1.84%, and 0.63%, respectively. Ensemble techniques outperform the ML classifiers on the Fraudulent and Legitimate Online Shops dataset, with ROC-AUC improvements of 0.71%, 1.73%, 0.76%, 1.10%, and 0.63% across 50% to 90% training splits (with 50% to 10% holdouts).