119,562 result(s) for "Computational Science and Engineering"
Gazelle optimization algorithm: a novel nature-inspired metaheuristic optimizer
This study proposes a novel population-based metaheuristic algorithm called the Gazelle Optimization Algorithm (GOA), inspired by gazelles' ability to survive in their predator-dominated environment. Every day, a gazelle knows that if it does not outrun and outmaneuver its predators it becomes prey, so to survive, gazelles must consistently escape from their predators. This observation motivates a new metaheuristic algorithm that uses the gazelle's survival abilities to solve real-world optimization problems. The exploitation phase of the algorithm simulates gazelles grazing peacefully in the absence of a predator or while a predator is stalking them. The GOA switches to the exploration phase once a predator is spotted; in this phase, the gazelle outruns and outmaneuvers the predator to reach a safe haven. These two phases are repeated iteratively, subject to the termination criteria, to find optimal solutions to the optimization problems. The robustness and efficiency of the developed algorithm as an optimization tool were tested on benchmark optimization test functions and selected engineering design problems (fifteen classical functions, ten composite functions, and four mechanical engineering design problems). The results of the GOA are compared with those of nine other state-of-the-art algorithms, and the simulation results confirm the superiority and competitiveness of GOA over these algorithms. Standard statistical analysis tests carried out on the results further confirm the ability of GOA to find solutions to the selected optimization problems, and show that GOA performed better than, or was very competitive with, the state-of-the-art algorithms. Finally, the results show that GOA is a potent optimization tool that can be adapted to solve problems in different optimization domains.
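The abstract does not give GOA's update equations, but the two-phase structure it describes can be sketched generically. The Python sketch below is illustrative only: the sphere objective, step sizes, bounds, and the 0.7 phase-switch probability are placeholder assumptions, not the published gazelle equations. It shows an exploitation phase of small steps toward the current best solution and an exploration phase of large randomized moves, with greedy selection between them.

```python
import numpy as np

def sphere(x):
    """Toy objective: the sphere function, minimum 0 at the origin."""
    return float(np.sum(x**2))

def two_phase_metaheuristic(obj, dim=5, pop=30, iters=200, seed=0):
    """Generic exploitation/exploration loop in the spirit of GOA.
    The update rules here are illustrative placeholders, NOT the
    published gazelle equations."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(pop, dim))           # population ("herd")
    fitness = np.apply_along_axis(obj, 1, X)
    best = X[np.argmin(fitness)].copy()
    for t in range(iters):
        if rng.random() < 0.7:
            # Exploitation: graze near the current best (small random steps)
            X_new = X + 0.1 * rng.normal(size=X.shape) * (best - X)
        else:
            # Exploration: "predator spotted" -- large randomized flight
            X_new = X + rng.uniform(-1, 1, size=X.shape) * (rng.permutation(X) - X)
        X_new = np.clip(X_new, -5, 5)
        f_new = np.apply_along_axis(obj, 1, X_new)
        improved = f_new < fitness                    # greedy selection
        X[improved], fitness[improved] = X_new[improved], f_new[improved]
        best = X[np.argmin(fitness)].copy()
    return best, float(fitness.min())

best, fbest = two_phase_metaheuristic(sphere)
```

Because selection is greedy, the best fitness is non-increasing; on this toy objective the loop converges toward the origin.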
Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations
Recently, physics-informed neural networks (PINNs) have offered a powerful new paradigm for solving problems relating to differential equations. Compared to classical numerical methods, PINNs have several advantages, for example their ability to provide mesh-free solutions of differential equations and to carry out forward and inverse modelling within the same optimisation problem. Whilst promising, a key limitation to date is that PINNs have struggled to accurately and efficiently solve problems with large domains and/or multi-scale solutions, which is crucial for their real-world application. Multiple significant and related factors contribute to this issue, including the increasing complexity of the underlying PINN optimisation problem as the problem size grows and the spectral bias of neural networks. In this work, we propose a new, scalable approach for solving large problems relating to differential equations, called finite basis physics-informed neural networks (FBPINNs). FBPINNs are inspired by classical finite element methods, where the solution of the differential equation is expressed as the sum of a finite set of basis functions with compact support. In FBPINNs, neural networks are used to learn these basis functions, which are defined over small, overlapping subdomains. FBPINNs are designed to address the spectral bias of neural networks by using separate input normalisation over each subdomain, and to reduce the complexity of the underlying optimisation problem by using many smaller neural networks in a parallel divide-and-conquer approach. Our numerical experiments show that FBPINNs are effective in solving both small and larger, multi-scale problems, outperforming standard PINNs in both accuracy and the computational resources required, potentially paving the way for the application of PINNs to large, real-world problems.
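The core FBPINN construction described above — a sum of compactly supported, window-weighted subdomain models with separate input normalisation per subdomain — can be sketched as follows. The window shape, the three-subdomain layout, the normalisation by the window sum, and the fixed functions standing in for the small subdomain networks are all illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

def window(x, a, b):
    """Smooth bump supported on [a, b], zero outside: the compact-support
    weight attached to one subdomain."""
    t = np.clip((x - a) / (b - a), 0.0, 1.0)
    return np.sin(np.pi * t) ** 2

# Overlapping subdomains covering [0, 1] (hypothetical 3-subdomain layout)
subdomains = [(0.0, 0.5), (0.25, 0.75), (0.5, 1.0)]

def normalise(x, a, b):
    """Per-subdomain input normalisation to [-1, 1] -- the mechanism
    FBPINNs use to counter spectral bias on each small subdomain."""
    return 2.0 * (x - a) / (b - a) - 1.0

# Stand-ins for the per-subdomain neural networks (fixed toy functions here)
subnets = [lambda z: z**2, lambda z: np.cos(z), lambda z: 0.5 * z]

def fbpinn_sum(x):
    """Global solution = sum over subdomains of window * subnet(normalised x),
    divided by the window sum so the weights act as a partition of unity."""
    num = np.zeros_like(x)
    den = np.zeros_like(x)
    for (a, b), net in zip(subdomains, subnets):
        w = window(x, a, b)
        num += w * net(normalise(x, a, b))
        den += w
    return num / np.maximum(den, 1e-12)

x = np.linspace(0.05, 0.95, 10)
u = fbpinn_sum(x)
```

In an actual FBPINN the `subnets` would be trainable networks optimised jointly against the PDE residual; the summation structure is what makes the approach parallel and divide-and-conquer.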
Prairie Dog Optimization Algorithm
This study proposes a new nature-inspired metaheuristic called prairie dog optimization (PDO), which mimics the behaviour of prairie dogs in their natural habitat. The proposed algorithm uses four prairie dog activities to achieve the two common optimization phases, exploration and exploitation. The prairie dogs' foraging and burrow-building activities provide PDO's exploratory behaviour. Prairie dogs build their burrows around an abundant food source; as the food source is depleted, they search for a new one and build new burrows around it, exploring the whole colony or problem space to discover new food sources or solutions. The prairie dogs' specific responses to two unique communication or alert sounds are used to accomplish exploitation. Prairie dogs have signals or sounds for different scenarios, ranging from predator threats to food availability, and their communication skills play a significant role in satisfying their nutritional needs and anti-predation abilities. These two behaviours cause the prairie dogs to converge to a specific location, or a promising location in the case of the PDO implementation, where further search (exploitation) is carried out to find better or near-optimal solutions. The performance of PDO is tested on a set of twenty-two classical benchmark functions and ten CEC 2020 test functions. The experimental results demonstrate that PDO benefits from a good balance of exploration and exploitation. Compared with the results of other well-known population-based metaheuristic algorithms available in the literature, PDO shows stronger performance and higher capabilities. Furthermore, twelve benchmark engineering design problems are used to test the performance of PDO, and the results indicate that the proposed PDO is effective in estimating optimal solutions for real-world optimization problems with unknown global optima.
The PDO source code is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/110980-prairie-dog-optimization-algorithm .
An improved fire detection approach based on YOLO-v8 for smart cities
Fires in smart cities can have devastating consequences, causing damage to property and endangering the lives of citizens. Traditional fire detection methods have limitations in terms of accuracy and speed, making it challenging to detect fires in real time. This paper proposes an improved fire detection approach for smart cities based on the YOLOv8 algorithm, called the smart fire detection system (SFDS), which leverages the strengths of deep learning to detect fire-specific features in real time. The SFDS approach has the potential to improve the accuracy of fire detection, reduce false alarms, and be cost-effective compared to traditional fire detection methods. It can also be extended to detect other objects of interest in smart cities, such as gas leaks or flooding. The proposed framework for a smart city consists of four primary layers: (i) Application layer, (ii) Fog layer, (iii) Cloud layer, and (iv) IoT layer. The proposed algorithm utilizes Fog and Cloud computing, along with the IoT layer, to collect and process data in real time, enabling faster response times and reducing the risk of damage to property and human life. The SFDS achieved state-of-the-art performance in terms of both precision and recall, with a high precision rate of 97.1% for all classes. The proposed approach has several potential applications, including fire safety management in public areas, forest fire monitoring, and intelligent security systems.
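The four-layer flow described above can be sketched as a toy pipeline. Everything below is hypothetical — the stub detector, the brightness threshold, and the camera IDs are made up for illustration; a deployed SFDS would run the YOLOv8-based detector at the fog layer on real image data from the IoT layer.

```python
from dataclasses import dataclass

# Hypothetical data flow through the four layers the paper names:
# IoT (capture) -> Fog (fast local detection) -> Cloud (aggregation) ->
# Application (alerting/dashboards).

@dataclass
class Frame:
    camera_id: str
    brightness: float  # stand-in feature; a real system sends image tensors

def fog_detect(frame: Frame) -> bool:
    """Fog-layer stub: flags a frame as 'fire' using a toy threshold.
    A deployed SFDS would run the YOLOv8-based detector here instead."""
    return frame.brightness > 0.8

def cloud_aggregate(detections):
    """Cloud layer: count fire alarms per camera for the application layer."""
    counts = {}
    for frame, is_fire in detections:
        if is_fire:
            counts[frame.camera_id] = counts.get(frame.camera_id, 0) + 1
    return counts

# IoT layer: simulated frames arriving from two cameras
frames = [Frame("cam1", 0.9), Frame("cam1", 0.3), Frame("cam2", 0.95)]
detections = [(f, fog_detect(f)) for f in frames]
alerts = cloud_aggregate(detections)   # consumed by the application layer
```

Running detection at the fog layer, close to the cameras, is what gives the architecture its low response latency; the cloud layer only sees aggregated results.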
Parallel spatio-temporal attention-based TCN for multivariate time series prediction
As industrial systems become more complex and monitoring sensors for everything from surveillance to our health become more ubiquitous, multivariate time series prediction is taking an important place in the smooth running of our society. A recurrent neural network with attention to help extend the prediction windows is the current state of the art for this task. However, we argue that their vanishing gradients, short memories, and serial architecture make RNNs fundamentally unsuited to long-horizon forecasting with complex data. Temporal convolutional networks (TCNs) do not suffer from gradient problems and they support parallel calculations, making them a more appropriate choice. Additionally, they have longer memories than RNNs, albeit with some instability and efficiency problems. Hence, we propose a framework, called PSTA-TCN, that combines a parallel spatio-temporal attention mechanism to extract dynamic internal correlations with stacked TCN backbones to extract features from different window sizes. The framework makes full use of parallel calculations to dramatically reduce training times, while substantially increasing accuracy with stable prediction windows up to 13 times longer than the status quo.
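The TCN building block this abstract relies on — a dilated causal convolution, whose receptive field grows with the dilation factor while never looking at future time steps — can be sketched in a few lines. The 2-tap averaging kernel and the ramp input are illustrative choices, not taken from the paper.

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """1D causal convolution with dilation: output[t] depends only on
    x[t], x[t-d], x[t-2d], ... (never on future values), which is the
    core operation of a TCN layer."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])   # left-padding keeps causality
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(8, dtype=float)
kernel = np.array([0.5, 0.5])                 # simple 2-tap averaging filter

y1 = causal_dilated_conv(x, kernel, dilation=1)  # each output sees x[t], x[t-1]
y2 = causal_dilated_conv(x, kernel, dilation=4)  # each output sees x[t], x[t-4]
```

Stacking such layers with exponentially increasing dilations (1, 2, 4, ...) is what gives TCNs their long memory at modest depth, and every output can be computed in parallel, unlike an RNN's serial recurrence.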
Deep learning: systematic review, models, challenges, and research directions
The current development of deep learning is witnessing an exponential transition into automation applications, a transition that promises higher performance and lower complexity. This ongoing transition involves rapid changes and large volumes of data, which can lead to time-consuming and costly models. To address these challenges, several studies have investigated deep learning techniques; however, they mostly focus on specific learning approaches, such as supervised deep learning, and do not comprehensively cover other techniques, such as deep unsupervised and deep reinforcement learning. Moreover, the majority of these studies neglect to discuss some of the main methodologies in deep learning, such as transfer learning, federated learning, and online learning. Motivated by the limitations of the existing studies, this study categorizes deep learning techniques into supervised, unsupervised, reinforcement, and hybrid learning-based models, and provides a brief description of each category and its models. Some of the critical topics in deep learning, namely transfer, federated, and online learning models, are explored and discussed in detail. Finally, challenges and future directions are outlined to provide a wider outlook for future researchers.
Deep learning in computational mechanics: a review
The rapid growth of deep learning research, including within the field of computational mechanics, has resulted in an extensive and diverse body of literature. To help researchers identify key concepts and promising methodologies within this field, we provide an overview of deep learning in deterministic computational mechanics. Five main categories are identified and explored: simulation substitution, simulation enhancement, discretizations as neural networks, generative approaches, and deep reinforcement learning. This review focuses on deep learning methods rather than applications for computational mechanics, thereby enabling researchers to explore this field more effectively. As such, the review is not necessarily aimed at researchers with extensive knowledge of deep learning—instead, the primary audience is researchers on the verge of entering this field or those attempting to gain an overview of deep learning in computational mechanics. The discussed concepts are, therefore, explained as simply as possible.
A modified Adam algorithm for deep neural network optimization
Deep Neural Networks (DNNs) are widely regarded as the most effective learning tool for dealing with large datasets, and they have been successfully used in thousands of applications in a variety of fields. Based on these large datasets, they are trained to learn the relationships between various variables. The adaptive moment estimation (Adam) algorithm, a highly efficient adaptive optimization algorithm, is widely used as a learning algorithm for training DNN models in various fields. However, its generalization performance needs improvement, especially when training with large-scale datasets. Therefore, in this paper, we propose HN_Adam, a modified version of the Adam algorithm, to improve its accuracy and convergence speed. The HN_Adam algorithm automatically adjusts the step size of the parameter updates over the training epochs; this automatic adjustment is based on the norm value of the parameter update formula, computed from the gradient values obtained during the training epochs. Furthermore, a hybrid mechanism is created by combining the standard Adam algorithm and the AMSGrad algorithm. As a result of these changes, the HN_Adam algorithm has good generalization performance, like the stochastic gradient descent (SGD) algorithm, and achieves fast convergence, like other adaptive algorithms. To test its performance, the proposed HN_Adam algorithm is used to train a deep convolutional neural network (CNN) model that classifies images on two standard datasets: MNIST and CIFAR-10. The results are compared to those of the basic Adam algorithm and the SGD algorithm, in addition to five other recent adaptive SGD algorithms. In most comparisons, HN_Adam outperforms the compared algorithms in terms of accuracy and convergence speed; AdaBelief is the most competitive of the compared algorithms.
In terms of testing accuracy and convergence speed (represented by the consumed training time), the HN_Adam algorithm outperforms the AdaBelief algorithm by 1.0% and 0.29% for the MNIST dataset, and by 0.93% and 1.68% for the CIFAR-10 dataset, respectively.
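The abstract names two ingredients — the standard Adam update and the AMSGrad variant — but does not specify HN_Adam's norm-based step-size adjustment. The sketch below therefore shows only those two baseline updates on a toy quadratic; the `adam` function and its default hyperparameters are illustrative, not the paper's implementation.

```python
import numpy as np

def adam(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8,
         amsgrad=False, steps=300):
    """Standard Adam update, with the AMSGrad variant toggled by `amsgrad`.
    HN_Adam additionally adapts the step size from the norm of the update
    formula, a detail not reproduced here."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)          # first-moment (mean) estimate
    v = np.zeros_like(x)          # second-moment (uncentred variance) estimate
    v_hat_max = np.zeros_like(x)  # running max of v_hat, used by AMSGrad
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)            # bias correction
        v_hat = v / (1 - beta2**t)
        if amsgrad:
            v_hat_max = np.maximum(v_hat_max, v_hat)
            v_hat = v_hat_max                 # non-increasing effective step
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimise f(x) = ||x - 3||^2, whose gradient is 2(x - 3)
x_star = adam(lambda x: 2 * (x - 3.0), x0=np.zeros(4))
```

AMSGrad's running maximum of the second-moment estimate prevents the effective step size from growing again, which is the convergence fix the hybrid mechanism borrows.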
A physics-informed neural network technique based on a modified loss function for computational 2D and 3D solid mechanics
Despite its rapid development, Physics-Informed Neural Network (PINN)-based computational solid mechanics is still in its infancy. In a PINN, the loss function plays a critical role that significantly influences the performance of the predictions. In this paper, using the Least Squares Weighted Residual (LSWR) method, we propose a modified loss function, namely the LSWR loss function, which is tailored to a dimensionless form with only one manually determined parameter. Based on the LSWR loss function, an advanced PINN technique is developed for computational 2D and 3D solid mechanics. The performance of the proposed PINN technique with the LSWR loss function is tested on 2D and 3D (geometrically nonlinear) problems. Thorough studies and comparisons are conducted among the two existing loss functions, the energy-based loss function and the collocation loss function, and the proposed LSWR loss function. Through numerical experiments, we show that the PINN based on the LSWR loss function is effective, robust, and accurate for predicting both the displacement and stress fields. The source codes for the numerical examples in this work are available at https://github.com/JinshuaiBai/LSWR_loss_function_PINN/ .
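The abstract compares the proposed LSWR loss against the energy-based and collocation losses without giving their formulas. The sketch below illustrates the collocation loss — the mean squared PDE residual at collocation points — on a toy 1D problem with a one-parameter trial family; finite differences stand in for the automatic differentiation a real PINN would use, and the problem setup is an assumption for illustration.

```python
import numpy as np

# Toy 1D problem: u''(x) = -sin(x) on [0, pi], exact solution u = sin(x).
# Trial family u_c(x) = c*sin(x); in a PINN, c would be the network weights.

x = np.linspace(0.0, np.pi, 101)
h = x[1] - x[0]

def residual(c):
    """PDE residual u_c'' + sin(x) at interior points, via central finite
    differences (a PINN would compute u'' by automatic differentiation)."""
    u = c * np.sin(x)
    upp = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return upp + np.sin(x[1:-1])

def collocation_loss(c):
    """Collocation loss: mean squared residual over the collocation points."""
    return float(np.mean(residual(c) ** 2))

# Scanning the parameter shows the loss is minimised near c = 1,
# i.e. at the exact solution.
cs = np.linspace(0.0, 2.0, 201)
losses = [collocation_loss(c) for c in cs]
c_best = float(cs[int(np.argmin(losses))])
```

An LSWR-style loss would instead weight the squared residuals to make the loss dimensionless, which is the modification the paper studies; the collocation loss above is the baseline it is compared against.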