Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
3,190 result(s) for "Load balancing"
DE-RALBA: dynamic enhanced resource aware load balancing algorithm for cloud computing
by Aleem, Muhammad; Arshad, Umer; Hussain, Altaf
in Algorithms; Algorithms and Analysis of Algorithms; Cloud computing
2025
Cloud computing provides an opportunity to gain access to large-scale, high-speed resources without establishing one's own computing infrastructure for executing high-performance computing (HPC) applications. The cloud offers computing resources (i.e., computation power, storage, operating systems, networking, databases, etc.) as a public utility and provides services to end users on a pay-as-you-go model. For the past several years, the efficient utilization of resources on a compute cloud has been a prime interest of the scientific community. One of the key reasons behind inefficient resource utilization is the imbalanced distribution of workload when executing HPC applications in a heterogeneous computing environment. Static scheduling techniques usually produce lower resource utilization and higher makespan, while dynamic scheduling achieves better resource utilization and load balancing by incorporating a dynamic resource pool. Dynamic techniques, however, incur increased overhead by requiring continuous system monitoring, job requirement assessment, and real-time allocation decisions; this additional load can affect the performance and responsiveness of the computing system. In this article, a dynamic enhanced resource-aware load balancing algorithm (DE-RALBA) is proposed to mitigate load imbalance in job scheduling by considering the computing capabilities of all VMs in cloud computing. Empirical assessments are performed in the CloudSim simulator using instances of two scientific benchmark datasets (i.e., the heterogeneous computing scheduling problems (HCSP) instances and the Google Cloud Jobs (GoCJ) dataset). The results reveal that DE-RALBA mitigates load imbalance and provides a significant improvement in makespan and resource utilization over existing algorithms, namely PSSLB, PSSELB, Dynamic MaxMin, and DRALBA. On the HCSP instances, DE-RALBA achieves up to 52.35% better resource utilization compared to existing techniques, with even greater gains on the GoCJ dataset.
Journal Article
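For intuition, here is a minimal Python sketch of capability-aware dynamic allocation in the spirit of this abstract. The paper's actual DE-RALBA rules differ; the names `Vm`, `mips`, and `assign_job` and the earliest-finish heuristic are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch: route each job to the VM that would finish it
# earliest, which favors faster and less-loaded VMs and keeps
# completion times balanced across a heterogeneous pool.
from dataclasses import dataclass

@dataclass
class Vm:
    name: str
    mips: float            # computing capability (instructions/sec)
    assigned: float = 0.0  # total instructions assigned so far

    @property
    def ready_time(self) -> float:
        # When this VM would finish everything already assigned to it.
        return self.assigned / self.mips

def assign_job(vms: list[Vm], job_size: float) -> Vm:
    best = min(vms, key=lambda vm: vm.ready_time + job_size / vm.mips)
    best.assigned += job_size
    return best

vms = [Vm("vm0", mips=1000), Vm("vm1", mips=500), Vm("vm2", mips=250)]
for size in [4000, 1000, 6000, 500, 2500, 3000]:
    vm = assign_job(vms, size)
    print(f"job({size}) -> {vm.name}, ready at t={vm.ready_time:.2f}s")
```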
The Power of Slightly More than One Sample in Randomized Load Balancing
2017
In many computing and networking applications, arriving tasks have to be routed to one of many servers, with the goal of minimizing queueing delays. When the number of processors is very large, a popular routing algorithm works as follows: select two servers at random and route an arriving task to the least loaded of the two. It is well known that this algorithm dramatically reduces queueing delays compared to an algorithm that routes each task to a single randomly selected server. In recent cloud computing applications, it has been observed that even sampling two queues per arriving task can be expensive and can in fact increase delays due to messaging overhead. There is therefore interest in reducing the number of sampled queues per arriving task. In this paper, we show that the number of sampled queues can be dramatically reduced by exploiting the fact that tasks arrive in batches (called jobs). In particular, we sample a subset of the queues such that the size of the subset is slightly larger than the batch size (thus, on average, we only sample slightly more than one queue per task). Once a random subset of the queues is sampled, we propose a new load-balancing method called batch-filling to attempt to equalize the load among the sampled servers. We show that, asymptotically, our algorithm dramatically reduces the sample complexity compared to previously proposed algorithms.
Journal Article
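A rough Python sketch of the sampling idea, assuming unit-size tasks and instantaneous queue-length reads. The `route_batch_filling` function is a plausible reading of the method described (sample slightly more queues than the batch size, then repeatedly fill the shortest sampled queue), not the authors' exact algorithm.

```python
import random

def route_power_of_two(queues: list[int]) -> None:
    # Classic power-of-two-choices: sample 2 queues, join the shorter.
    i, j = random.sample(range(len(queues)), 2)
    queues[min((i, j), key=lambda k: queues[k])] += 1

def route_batch_filling(queues: list[int], batch: int, extra: int) -> None:
    # Sample batch + extra queues for a whole batch of tasks (slightly
    # more than one sample per task on average), then add each task to
    # the currently shortest sampled queue.
    sampled = random.sample(range(len(queues)), batch + extra)
    for _ in range(batch):
        k = min(sampled, key=lambda q: queues[q])
        queues[k] += 1

random.seed(1)
a, b = [0] * 100, [0] * 100
for _ in range(50):
    for _ in range(10):
        route_power_of_two(a)                  # 2 samples per task
    route_batch_filling(b, batch=10, extra=2)  # 1.2 samples per task
print("power-of-two: max queue =", max(a))
print("batch-filling: max queue =", max(b))
```

The point of the comparison: batch-filling spends far fewer samples per task (here 1.2 versus 2.0) while still spreading each batch across its sampled queues.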
Smart load balancing in cloud computing: Integrating feature selection with advanced deep learning models
by Alzubi, Emran; Makhadmeh, Sharif; Al-E’mari, Salam
in Algorithms; Artificial intelligence; Artificial neural networks
2025
The increasing dependence on cloud computing as a cornerstone of modern technological infrastructure has introduced significant challenges in resource management. Traditional load-balancing techniques often prove inadequate in addressing the dynamic and complex nature of cloud environments, resulting in suboptimal resource utilization and heightened operational costs. This paper presents a novel smart load-balancing strategy incorporating advanced techniques to mitigate these limitations. Specifically, it addresses the critical need for a more adaptive and efficient approach to workload management in cloud environments, where conventional methods fall short in handling dynamic and fluctuating workloads. To bridge this gap, the paper proposes a hybrid load-balancing methodology that integrates feature selection and deep learning models to optimize resource allocation. The proposed Smart Load Adaptive Distribution with Reinforcement and Optimization (SLADRO) approach combines Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) algorithms for load prediction, a hybrid bio-inspired optimization technique, Orthogonal Arrays and Particle Swarm Optimization (OOA-PSO), for feature selection, and Deep Reinforcement Learning (DRL) for dynamic task scheduling. Extensive simulations conducted on the real-world Google Cluster Trace dataset reveal that the SLADRO model significantly outperforms traditional load-balancing approaches, yielding notable improvements in throughput, makespan, resource utilization, and energy efficiency. This integration of advanced techniques offers a scalable and adaptive solution, providing a comprehensive framework for efficient load balancing in cloud computing environments.
Journal Article
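A minimal PyTorch sketch of the CNN + LSTM load-prediction stage the abstract names. Layer sizes, window length, and the single-value "next-step load" target are assumptions for illustration; the OOA-PSO feature selection and DRL scheduler stages are omitted.

```python
import torch
import torch.nn as nn

class CnnLstmLoadPredictor(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        # 1-D convolution extracts local patterns across the time window.
        self.conv = nn.Conv1d(n_features, 16, kernel_size=3, padding=1)
        # LSTM captures longer-range temporal dependencies.
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted next-step load

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features)
        z = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, 16, time)
        out, _ = self.lstm(z.transpose(1, 2))         # (batch, time, hidden)
        return self.head(out[:, -1])                  # last step -> load

model = CnnLstmLoadPredictor()
window = torch.randn(4, 24, 8)  # 4 samples, 24 time steps, 8 metrics
print(model(window).shape)      # torch.Size([4, 1])
```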
Load balance-aware dynamic cloud-edge-end collaborative offloading strategy
2024
Cloud-edge-end (CEE) computing is a hybrid computing paradigm that converges the principles of edge and cloud computing. In the design of CEE systems, a crucial challenge is to develop efficient offloading strategies that achieve collaboration between edge and cloud offloading. Although CEE offloading problems have been widely studied under various backgrounds and methodologies, load balance, an indispensable mechanism in CEE systems for ensuring full utilization of edge resources, is a factor that has not yet been accounted for. To fill this research gap, we develop a dynamic load-balance-aware CEE offloading strategy. First, we propose a load evolution model to characterize the influence of offloading strategies on the system's load dynamics and, on this basis, establish a latency model as a performance metric for different offloading strategies. We then formulate an optimal control model to seek the offloading strategy that minimizes latency. Second, we analyze the feasibility of typical optimal control numerical methods for solving our proposed model and develop a numerical method based on the framework of a genetic algorithm. Third, through a series of numerical experiments, we verify the proposed method; the results show that it is effective.
Journal Article
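A toy genetic-algorithm sketch for this kind of offloading search. The latency function below is an invented stand-in (the paper derives its own load-evolution and latency models); each chromosome holds per-task offloading fractions, 0 meaning keep at the end device and 1 meaning offload to the cloud.

```python
import random

N_TASKS = 12

def latency(x: list[float]) -> float:
    # Hypothetical cost: cloud work adds transfer delay, local work is
    # slower, and the edge penalizes congestion (quadratic in its load).
    edge_load = sum(1 - xi for xi in x)
    return (sum(5 * xi + 2 * (1 - xi) for xi in x)  # compute + transfer
            + 0.3 * edge_load ** 2)                 # edge congestion

def evolve(pop_size=40, gens=200, mut=0.1):
    pop = [[random.random() for _ in range(N_TASKS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=latency)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_TASKS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:           # uniform mutation
                child[random.randrange(N_TASKS)] = random.random()
            children.append(child)
        pop = parents + children
    return min(pop, key=latency)

best = evolve()
print("best latency:", round(latency(best), 2))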
An optimized approach for container deployment driven by a two-stage load balancing mechanism
2025
Lightweight container technology has emerged as a fundamental component of cloud-native computing, with the deployment of containers and the balancing of loads on virtual machines representing significant challenges. This paper presents an optimization strategy for container deployment that consists of two stages: coarse-grained and fine-grained load balancing. In the initial stage, a greedy algorithm is employed for coarse-grained deployment, facilitating the distribution of container services across virtual machines in a balanced manner based on resource requests. The subsequent stage utilizes a genetic algorithm for fine-grained resource allocation, ensuring an equitable distribution of resources to each container service on a single virtual machine. This two-stage optimization enhances load balancing and resource utilization throughout the system. Empirical results indicate that this approach is more efficient and adaptable in comparison to the Grey Wolf Optimization (GWO) Algorithm, the Simulated Annealing (SA) Algorithm, and the GWO-SA Algorithm, significantly improving both resource utilization and load balancing performance on virtual machines.
Journal Article
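A small Python sketch of the coarse-grained stage described above: greedily place each container on the VM with the most remaining capacity so requested resources spread evenly. The capacity numbers and the single-resource model are illustrative assumptions; the paper's genetic fine-grained stage is omitted.

```python
def greedy_place(containers: dict[str, float], vm_capacity: dict[str, float]):
    remaining = dict(vm_capacity)
    placement = {}
    # Largest requests first: a classic greedy ordering that avoids
    # stranding big containers after small ones fragment capacity.
    for name, request in sorted(containers.items(), key=lambda c: -c[1]):
        vm = max(remaining, key=remaining.get)  # most spare room
        if remaining[vm] < request:
            raise RuntimeError(f"no VM can host {name}")
        remaining[vm] -= request
        placement[name] = vm
    return placement

containers = {"web": 2.0, "db": 4.0, "cache": 1.0, "worker": 3.0, "log": 0.5}
vms = {"vm-a": 6.0, "vm-b": 6.0}
print(greedy_place(containers, vms))
```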
Comparative analysis of metaheuristic load balancing algorithms for efficient load balancing in cloud computing
2023
Load balancing is a serious problem in cloud computing: it is challenging to ensure the proper functioning of services with respect to Quality of Service, performance assessment, and compliance with the service contract, as demanded of cloud service providers (CSPs) by organizations. The primary objective of load balancing is to map workloads onto computing resources in a way that significantly improves performance. Load balancing in cloud computing falls into the class of problems known as "NP-hard" due to its vast solution space, and therefore requires more time to find the best possible solution; few techniques can generate an ideal solution in polynomial time. In previous research, metaheuristic-based strategies have been shown to produce accurate solutions in reasonable time for these kinds of problems. This paper provides a comparative analysis of various metaheuristic load balancing algorithms for cloud computing based on performance factors, i.e., makespan, degree of imbalance, response time, data center processing time, flow time, and resource utilization. The simulation results show the performance of the various metaheuristic load-balancing methods on these factors; the particle swarm optimization method performs best in improving makespan, flow time, throughput, response time, and degree of imbalance.
Journal Article
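A compact particle-swarm sketch for the task-to-VM mapping problem this comparison covers, minimizing makespan. Continuous positions are rounded to VM indices; all task sizes, speeds, and PSO constants are illustrative assumptions, not settings from the surveyed papers.

```python
import random

TASKS = [4, 7, 2, 9, 5, 3, 8, 6]  # task lengths
VM_SPEED = [2.0, 1.0, 1.5]        # VM processing speeds

def makespan(pos: list[float]) -> float:
    load = [0.0] * len(VM_SPEED)
    for t, p in zip(TASKS, pos):
        vm = min(max(int(round(p)), 0), len(VM_SPEED) - 1)
        load[vm] += t / VM_SPEED[vm]
    return max(load)

def pso(n=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    dim, hi = len(TASKS), len(VM_SPEED) - 1
    xs = [[random.uniform(0, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=makespan)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], 0), hi)
            if makespan(xs[i]) < makespan(pbest[i]):
                pbest[i] = xs[i][:]
        gbest = min(pbest, key=makespan)
    return gbest

best = pso()
print("assignment:", [int(round(p)) for p in best],
      "makespan:", round(makespan(best), 2))
```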
An Optimized, Dynamic, and Efficient Load-Balancing Framework for Resource Management in the Internet of Things (IoT) Environment
2023
Major problems in Internet of Things (IoT) systems include load balancing, lowering operational expenses, and power usage. IoT devices typically run on batteries because they lack direct access to a power source; geographical conditions that make it difficult to access the electrical network are a common cause. Finding ways to ensure that IoT devices consume the least amount of energy possible is therefore essential. When the network is experiencing high traffic, locating and interacting with the next hop is critical, so finding the best route to balance load by switching to a less crowded channel is crucial under network congestion. Given these restrictions, this study analyzes three significant issues (load balancing, energy utilization, and computation cost) and offers a solution. To address these resource allocation issues in the IoT, we propose a reliable method termed Dynamic Energy-Efficient Load Balancing (DEELB). We conducted several experiments, such as a bandwidth analysis in which the DEELB method used 990.65 kbps of bandwidth for 50 operations, while existing techniques, such as EEFO (Energy-Efficient Opportunistic), DEERA (Dynamic Energy-Efficient Resource Allocation), ELBS (Efficient Load-Balancing Security), and DEBTS (Delay Energy Balanced Task Scheduling), used 1700.91 kbps, 1500.82 kbps, 1300.65 kbps, and 1200.15 kbps, respectively. The experiments' numerical analysis showed that our method is superior to existing approaches in terms of effectiveness and efficiency.
Journal Article
Triple Tier Framework for Intellectual Edge Assisted Multicontroller Load Balancing in SDN
2024
SDN is a new networking method that uses software controllers and physical infrastructure to guide network traffic. Because of its large scale, the network often experiences severe traffic congestion, and load balancers improve network efficiency. Previous works used proactive or reactive load balancing, which caused substantial packet loss or disorganized data-plane load balancing. In this research, we address these concerns and introduce the Triple Tier model, a triple-tier architecture for intellectual edge-assisted multi-controller load balancing using AI. Three sequential processes are presented: user selection, sensitivity-based flow categorization, and hybrid multi-controller load balancing. First, the Interval Type-II Hesitant Fuzzy Entropy Measure (IT-II-HFE) method selects energy-efficient participants for service, avoiding abundant flows; the D-plane device threshold level is maintained to improve reaction time and computation. For sensitivity-aware processing, the Non-Deep Lightweight Parallel Network (ND-LPN) classifies arriving flows as delay-sensitive or non-delay-sensitive and prioritizes them for controller processing. The global controller then enables proactive and reactive load balancing (PLB and RLB) for hybrid multi-controller load balancing (MCLB). The PLB is performed using a Dual Agent-based Geometric Actor-Critic algorithm (DAGAC) for flow-engineering prediction-based load balancing. Finally, an event-based Hybrid Leader Optimization algorithm (HLO) is used for RLB, resulting in successful load balancing. The proposed AI-MLB model is tested in Network Simulator-3.26 and outperforms previous works in total load, packet loss rate, reaction time, number of migrations, and network load.
Journal Article
Autonomic task scheduling algorithm for dynamic workloads through a load balancing technique for the cloud-computing environment
by Babamir, Seyed Morteza; Ebadifard, Fatemeh
in Algorithms; Bandwidths; Central processing units
2021
Applying a load-balancing technique to allocate requests that dynamically enter the cloud environment helps maintain system stability, reduce response time, and increase resource productivity. One of the main challenges in dynamic load balancing is that it increases inter-VM communication overheads (swapping files between VMs), an issue that most proposed load-balancing methods overlook. We attempt to address this problem here through the Autonomous Load Balancing method. Available studies on task scheduling in cloud computing focus mostly on CPU-bound requests; here, based on the resources they need, requests are divided into CPU-bound and I/O-bound. Considering both types of requests means the available load-balancing methods cannot be applied directly. The CloudSim tool is used to evaluate the proposed method, which is compared with the Round Robin, Autonomous, Honey-Bee, and Naïve Bayesian load-balancing approaches. The results for the actual workloads of the NASA and Calgary servers, as well as a sample workload, indicate that as requests and their variations increase together with the heterogeneity of the VMs, the proposed algorithm distributes the workload among them equally and allocates requests to appropriate VMs based on the required resources, thus decreasing communication overheads and increasing the degree of load balancing.
Journal Article
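A hypothetical Python sketch of the abstract's central idea: classify each request as CPU-bound or I/O-bound from its resource demands, then route it to the least-loaded VM along that dimension. The thresholds, fields, and load metric are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Vm:
    name: str
    cpu_load: float = 0.0
    io_load: float = 0.0

def classify(cpu_demand: float, io_demand: float) -> str:
    return "cpu" if cpu_demand >= io_demand else "io"

def route(vms: list[Vm], cpu_demand: float, io_demand: float) -> Vm:
    kind = classify(cpu_demand, io_demand)
    # Balance along the dimension the request actually stresses, so
    # CPU-heavy and I/O-heavy work spread out independently.
    key = (lambda v: v.cpu_load) if kind == "cpu" else (lambda v: v.io_load)
    vm = min(vms, key=key)
    vm.cpu_load += cpu_demand
    vm.io_load += io_demand
    return vm

vms = [Vm("vm0"), Vm("vm1"), Vm("vm2")]
for cpu, io in [(8, 1), (1, 9), (5, 5), (7, 2), (2, 6)]:
    print(f"req(cpu={cpu}, io={io}) -> {route(vms, cpu, io).name}")
```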
Recent advancement in VM task allocation system for cloud computing: review from 2015 to 2021
2022
Cloud computing is a new technology that has considerably changed many aspects of human life over the last decade. Especially after the COVID-19 pandemic, almost all activities have shifted to cloud-based services. Cloud computing is a utility where different hardware and software resources are accessed on a pay-per-use basis. Most of these resources are available in virtualized form, and the virtual machine (VM) is one of the main elements of virtualization. VMs are used in data centers to distribute resources and applications according to user demand. Cloud data centers face various performance and efficiency issues, and different approaches are used to address them. Virtual machines play an important role in improving data center performance; therefore, different approaches are used to improve VM efficiency, i.e., load balancing of resources and tasks. To this end, various VM parameters are improved, such as makespan, quality of service, energy, data accuracy, and network utilization. Improving these parameters directly improves the performance of cloud computing. This review paper therefore discusses the various improvements to VM task allocation from 2015 to 2021. It also covers various cloud computing parameters, and the final section presents the role of machine-learning algorithms in VMs and load-balancing approaches, along with future directions for VMs in cloud data centers.
Journal Article