7,067 results for "Response time (computers)"
Deploying Data-intensive Applications with Multiple Services Components on Edge
In the information age, data volumes are enormous and growing exponentially. Moreover, most application services depend on data and can execute only when that data is available, so deploying data-intensive services demands close coordination among edge servers. Such coordination is hard to achieve because data-transmission and load-balancing conditions between edge servers and data-intensive services change constantly. This paper therefore proposes a Data-intensive Service Edge deployment scheme based on a Genetic Algorithm (DSEGA). First, a data-intensive edge service composition and an edge server model are generated with a graph-theoretic algorithm; then five algorithms, a Genetic Algorithm (GA), Simulated Annealing (SA), Ant Colony Optimization (ACO), an optimized Ant Colony variant (ACO_v), and Hill Climbing, are each applied to find a deployment that minimizes the response time of the data-intensive edge service deployment under storage constraints and load-balancing conditions. The experimental results show that DSEGA achieves the shortest response time across the services, data components, and edge servers.
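
As a rough illustration of the genetic-algorithm approach this abstract describes, the sketch below evolves an assignment of service components to edge servers to minimize a simple response-time estimate under per-server storage constraints. The sizes, latencies, and cost model are hypothetical; the actual DSEGA encoding and operators are defined in the paper.

    import random

    # Hypothetical problem instance: component storage demands and server capacities.
    COMPONENT_SIZE = [4, 2, 6, 3, 5]             # storage demand of each component
    SERVER_CAPACITY = [10, 8, 12]                # storage capacity of each edge server
    LATENCY = [[1, 4, 7], [4, 1, 3], [7, 3, 1]]  # pairwise server-to-server latency

    def response_time(assign):
        """Toy cost: sum of latencies between servers hosting consecutive components."""
        return sum(LATENCY[assign[i]][assign[i + 1]] for i in range(len(assign) - 1))

    def feasible(assign):
        """Check per-server storage constraints."""
        used = [0] * len(SERVER_CAPACITY)
        for comp, srv in enumerate(assign):
            used[srv] += COMPONENT_SIZE[comp]
        return all(u <= c for u, c in zip(used, SERVER_CAPACITY))

    def fitness(assign):
        # Penalize infeasible deployments heavily instead of discarding them.
        return response_time(assign) + (0 if feasible(assign) else 1000)

    def evolve(pop_size=40, generations=200, mutation_rate=0.1):
        n, m = len(COMPONENT_SIZE), len(SERVER_CAPACITY)
        pop = [[random.randrange(m) for _ in range(n)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            survivors = pop[: pop_size // 2]            # elitist selection
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n)            # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation_rate:     # point mutation
                    child[random.randrange(n)] = random.randrange(m)
                children.append(child)
            pop = survivors + children
        return min(pop, key=fitness)

    best = evolve()
    print(best, response_time(best), feasible(best))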
Task scheduling in edge-fog-cloud architecture: a multi-objective load balancing approach using reinforcement learning algorithm
The rapid proliferation of internet of things (IoT) devices, and the rising rate of requests they send to cloud data centers, has caused congestion and, consequently, service-provisioning delays in those data centers. Fog computing emerged as a new computing model to address this challenge: services are provisioned at the edge of the network by devices with computing and storage capabilities located along the path connecting IoT devices to cloud data centers. Fog computing aims to relieve the computing load on data centers and reduce request delays, notably for real-time and delay-sensitive requests. Achieving these goals requires taking into account vitally important challenges, such as request scheduling, load balancing, and energy consumption, that affect performance and reliability in the edge-fog-cloud architecture. In this paper, a reinforcement-learning fog scheduling algorithm is proposed to address these challenges. The experimental results indicate that the proposed algorithm improves load balance and reduces response time compared to existing scheduling algorithms, and it also outperforms other approaches in terms of the number of devices used.
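
A minimal sketch of the reinforcement-learning idea in this abstract: a tabular Q-learning agent learns which tier (edge, fog, or cloud) should serve each request, rewarded for low response time. The state encoding, reward, capacities, and parameters here are illustrative assumptions, not the paper's formulation.

    import random

    TIERS = ["edge", "fog", "cloud"]
    BASE_DELAY = {"edge": 1.0, "fog": 2.0, "cloud": 5.0}  # assumed base latencies
    CAPACITY = {"edge": 2, "fog": 5, "cloud": 50}         # assumed concurrent capacity

    def bucket(frac):
        """Coarse congestion level (0-2) of the fog tier: the RL state."""
        return min(int(frac * 3), 2)

    def train(steps=5000, alpha=0.1, gamma=0.9, eps=0.1):
        q = {}                       # Q-table: (state, tier) -> expected return
        load = {t: 0 for t in TIERS}
        for _ in range(steps):
            s = bucket(load["fog"] / CAPACITY["fog"])
            if random.random() < eps:                    # epsilon-greedy exploration
                a = random.choice(TIERS)
            else:
                a = max(TIERS, key=lambda t: q.get((s, t), 0.0))
            # response time grows with queueing at the chosen tier
            rt = BASE_DELAY[a] * (1 + load[a] / CAPACITY[a])
            load[a] += 1
            s2 = bucket(load["fog"] / CAPACITY["fog"])
            best_next = max(q.get((s2, t), 0.0) for t in TIERS)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (-rt + gamma * best_next - old)  # Q-learning update
            # some requests finish each step, freeing capacity
            for t in TIERS:
                if load[t] and random.random() < 0.5:
                    load[t] -= 1
        return q

    q = train()
    print({k: round(v, 2) for k, v in sorted(q.items())})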
A load balancing and optimization strategy (LBOS) using reinforcement learning in fog computing environment
Fog computing (FC) is a computing paradigm that runs Internet of Things (IoT) applications at the edge of the network. Data requests and FC have both grown greatly in recent years, improving data accessibility and adaptability, but FC still faces challenges such as load balancing (LB) and adaptation to failure. Many LB strategies have been proposed for cloud computing, yet they are not applied effectively in fog. LB is essential to achieve high resource utilization, avoid bottlenecks, prevent both overload and underload, and reduce response time. In this paper, a load-balancing and optimization strategy (LBOS) is proposed that uses a dynamic resource allocation method based on reinforcement learning and a genetic algorithm. LBOS continuously monitors network traffic, collects information about each server's load, handles incoming requests, and distributes them evenly among the available servers using the dynamic resource allocation method, sustaining performance even at peak times. LBOS is accordingly simple and efficient for real-time fog computing systems such as healthcare. LBOS is applied here to the design of an IoT-fog-based healthcare system consisting of three layers: (1) an IoT layer, (2) a fog layer, and (3) a cloud layer. Finally, the experiments show that the proposed solution improves quality of service in the cloud/fog computing environment in terms of allocation cost and response time. Compared with state-of-the-art algorithms, LBOS achieved the best load-balancing level (85.71%). LBOS is thus an efficient way to improve resource utilization and ensure continuous service.
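
The core dispatch loop this abstract suggests, continuously tracking per-server load and steering incoming requests toward the least-loaded server, can be sketched as follows. This is a plain least-loaded dispatcher with assumed server names and request costs; the paper's actual method layers reinforcement learning and a genetic algorithm on top of this idea.

    import heapq

    class LeastLoadedDispatcher:
        """Tracks per-server load and routes each request to the lightest server."""

        def __init__(self, servers):
            # heap of (current_load, server_name) so the minimum is always on top
            self.heap = [(0.0, s) for s in servers]
            heapq.heapify(self.heap)

        def dispatch(self, request_cost):
            load, server = heapq.heappop(self.heap)      # least-loaded server
            heapq.heappush(self.heap, (load + request_cost, server))
            return server

        def complete(self, server, request_cost):
            # Called when a request finishes: decrease that server's recorded load.
            self.heap = [(l - request_cost if s == server else l, s)
                         for l, s in self.heap]
            heapq.heapify(self.heap)

    d = LeastLoadedDispatcher(["fog-1", "fog-2", "fog-3"])
    for cost in [3, 1, 2, 5, 1]:
        print(cost, "->", d.dispatch(cost))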
Fog-based healthcare systems: A systematic review
The healthcare system aims to provide reliable, organized solutions that improve the health of society. Studying patients' histories helps physicians account for patients' needs when designing healthcare systems and offering services, which increases patient satisfaction; healthcare is therefore a growing, competitive market. This significant growth raises challenges such as huge data volumes, response time, latency, and security vulnerabilities. Fog computing, a well-known distributed architecture, can help address these challenges: processing components are placed between the end devices and the cloud components and execute applications there, which suits applications, such as healthcare systems, that need real-time response and low latency. This paper presents a systematic review of approaches to fog-based healthcare systems and explores, classifies, and discusses the challenges of applying fog computing in healthcare. First, the fog computing approaches in healthcare are categorized into three main classes: communication, application, and resource/service. They are then discussed and compared based on their tools, evaluation methods, and evaluation metrics. Finally, based on these observations, open issues and challenges are highlighted for further study of fog-based healthcare.
A systematic literature review for load balancing and task scheduling techniques in cloud computing
Cloud computing is an emerging technology composed of several key components that work together to create a seamless network of interconnected devices. These devices, such as sensors, routers, smartphones, and smart appliances, form the foundation of the Internet of Everything (IoE). The huge volumes of data that IoE devices generate are processed and accumulated in the cloud, enabling real-time analysis and insights, so load-balancing and task-scheduling techniques are sorely needed in cloud computing. Their primary objective is to divide the workload evenly across all available resources while also reducing execution and response times, increasing throughput, and improving fault detection. This systematic literature review (SLR) analyzes the optimization and machine-learning algorithms used for load-balancing and task-scheduling problems in cloud computing environments. To analyze load-balancing patterns and task-scheduling techniques, we selected a representative set of 63 research articles written in English from 2014 to 2024 using suitable inclusion-exclusion criteria, and we designed research questions about the topic to minimize bias and increase objectivity. We focus on the technologies used, their merits and demerits, gaps in the research, insights into tools, forthcoming opportunities, performance metrics, and an in-depth investigation of ML-based optimization techniques.
Efficient Smart Grid Load Balancing via Fog and Cloud Computing
As cloud data centers grow, the number of virtual machines (VMs) increases rapidly. Application requests are served by VMs located on physical machines (PMs). The rapid growth of Internet services has unbalanced network resources: some hosts have high bandwidth usage and can cause network congestion, which degrades overall network performance. Load balancing is therefore an important feature of cloud computing that needs to be optimized. This research proposes a three-tier architecture consisting of a cloud layer, a fog layer, and a consumer layer. The cloud serves the whole world, while the fog analyzes services at the local network edge, storing data temporarily before transmitting it to the cloud. In the consumer layer, the world is divided into six regions corresponding to the six continents; Area 0 represents North America, for which two fogs and two clusters of buildings are considered, with microgrids (MGs) supplying energy to consumers. This research proposes a real-time VM migration algorithm for balancing fog load. Load-balancing algorithms aim for effective resource utilization, maximum throughput, and optimal response time. Compared with the closest data center (CDC) policy, the real-time VM migration algorithm achieves 18% better cost results along with optimized response time (ORT), and real-time VM migration with ORT improves response time by 11% compared with dynamically reconfigure with load (DRL). Real-time VM migration always seeks the best solution to minimize cost and processing time.
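
The VM-migration idea in this abstract can be illustrated with a simple greedy rule: when a fog node's utilization exceeds a threshold, move its smallest VM to the least-loaded node. The threshold, node names, and VM sizes below are assumptions; the paper's real-time algorithm and cost model are more involved.

    OVERLOAD = 0.8  # assumed utilization threshold triggering migration

    # Hypothetical fog nodes: capacity and the VM loads they currently host.
    nodes = {
        "fog-A": {"capacity": 10.0, "vms": [4.0, 3.5, 2.0]},
        "fog-B": {"capacity": 10.0, "vms": [1.0]},
    }

    def utilization(n):
        return sum(n["vms"]) / n["capacity"]

    def rebalance(nodes):
        """Greedily migrate the smallest VM off any overloaded node."""
        moved = []
        for name, n in nodes.items():
            while utilization(n) > OVERLOAD and n["vms"]:
                vm = min(n["vms"])                              # cheapest VM to move
                target = min(nodes, key=lambda k: utilization(nodes[k]))
                if target == name:
                    break                                       # nowhere better to go
                n["vms"].remove(vm)
                nodes[target]["vms"].append(vm)
                moved.append((vm, name, target))
        return moved

    print(rebalance(nodes))
    print({k: round(utilization(v), 2) for k, v in nodes.items()})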
Genetic-Based Algorithm for Task Scheduling in Fog–Cloud Environment
Over the past few years, the number of Internet of Things (IoT) devices using cloud services has grown steadily, bringing new challenges, particularly latency. Fog computing has emerged as a promising way to tackle this issue: by adding resources at the edge of the cloud architecture, the fog-cloud architecture reduces latency by bringing processing closer to end users, with significant implications for the overall performance and user experience of IoT systems. A major challenge is minimizing latency without increasing total energy consumption, which calls for a powerful scheduling solution. Unfortunately, this scheduling problem is NP-hard in general, meaning no method is known that finds an optimal solution in a reasonable time. In this paper, we address task scheduling in a fog-cloud environment and propose a novel genetic-based algorithm, GAMMR, that seeks an optimal balance between total energy consumed and total response time. We evaluate the proposed algorithm with simulations on 8 datasets of varying sizes. The results show that GAMMR outperforms the standard genetic algorithm in all tested cases, with an average improvement of 3.4% in the normalized function.
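
The bi-objective trade-off this abstract describes, total energy versus total response time, is commonly scored as a weighted sum of min-max normalized objectives. The sketch below shows one such fitness function; the weights, normalization bounds, and sample schedules are assumptions, not GAMMR's actual definitions.

    def normalized_fitness(energies, response_times,
                           e_min, e_max, r_min, r_max, w=0.5):
        """Weighted sum of min-max normalized energy and response time (lower is better)."""
        e = sum(energies)
        r = sum(response_times)
        e_norm = (e - e_min) / (e_max - e_min)
        r_norm = (r - r_min) / (r_max - r_min)
        return w * e_norm + (1 - w) * r_norm

    # Compare two candidate schedules under assumed bounds.
    bounds = dict(e_min=10, e_max=50, r_min=5, r_max=40)
    print(normalized_fitness([12, 9, 7], [6, 8, 4], **bounds))   # schedule A
    print(normalized_fitness([8, 6, 5], [10, 12, 9], **bounds))  # schedule B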
IoT-Lite: a lightweight semantic model for the internet of things and its use with dynamic semantics
Over the past few years, the semantics community has developed several ontologies to describe concepts and relationships for internet of things (IoT) applications. A key problem is that most IoT-related semantic descriptions are not as widely adopted as expected. One of the main concerns of users and developers is that semantic techniques increase complexity and processing time, making them unsuitable for dynamic and responsive environments such as the IoT. To address this concern, we propose IoT-Lite, an instantiation of the semantic sensor network ontology that describes key IoT concepts and enables interoperability and discovery of sensory data in heterogeneous IoT platforms through lightweight semantics. We propose 10 rules for good, scalable semantic model design and follow them in creating IoT-Lite. We also demonstrate IoT-Lite's scalability through experimental analysis and assess it against another solution in terms of round-trip time for query responses. We have linked IoT-Lite with the stream annotation ontology to allow queries over stream data annotations, and we have added dynamic semantics to IoT-Lite in the form of MathML annotations. Dynamic semantics allows the annotation of spatio-temporal values, reducing storage requirements and therefore query response times: mathematical formulas are stored so that estimated values can be recovered when actual values are missing.
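
The "dynamic semantics" idea at the end of this abstract, annotating a stream with a formula so a consumer can estimate a value instead of storing every observation, can be sketched as below. The formula, annotation layout, and sensor values are invented for illustration; IoT-Lite itself expresses such annotations in MathML.

    import math

    # A stream annotation: instead of storing every reading, keep a formula
    # that reconstructs an estimate for any timestamp (layout is hypothetical).
    annotation = {
        "sensor": "room-temp-1",
        "unit": "Celsius",
        # assumed daily temperature model: mean plus a sinusoidal daily swing
        "formula": lambda hour: 21.0 + 3.0 * math.sin(2 * math.pi * (hour - 9) / 24),
    }

    stored = {8: 20.1, 12: 23.4}  # only a few actual observations are kept

    def value_at(hour):
        """Return the stored value if present, else an estimate from the formula."""
        if hour in stored:
            return stored[hour], "observed"
        return annotation["formula"](hour), "estimated"

    for h in (8, 12, 18):
        v, kind = value_at(h)
        print(h, round(v, 2), kind)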
Enhanced Round-Robin Algorithm in the Cloud Computing Environment for Optimal Task Scheduling
Cloud computing systems have recently grown significantly in popularity. One of the main issues in building them is task scheduling, which plays a critical role in achieving high performance and outstanding throughput by making the best use of available resources. Enhancing task-scheduling algorithms therefore improves QoS and, in turn, the sustainability of cloud computing systems. This paper introduces a novel technique, the dynamic round-robin heuristic algorithm (DRRHA), which builds on the round-robin algorithm and tunes its time quantum dynamically around the mean burst time of the queued tasks. In addition, a task's remaining burst time is used as a factor in deciding whether the task continues executing during the current round. Experimental results obtained with the CloudSim Plus tool show that the DRRHA significantly outperforms several studied algorithms, including the IRRVQ, dynamic time slice round-robin, improved RR, and SRDQ algorithms, in terms of average waiting time, turnaround time, and response time.
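
A minimal sketch of the dynamic-quantum idea described above: each round, set the time quantum to the mean remaining burst time of the queued tasks, and let a task run to completion when only a small remainder would be left after its slice. The finish threshold and task values are assumptions; the exact DRRHA rules are given in the paper.

    from collections import deque

    def dynamic_round_robin(bursts, finish_factor=0.5):
        """Round-robin where the quantum tracks the mean remaining burst each round."""
        queue = deque(enumerate(bursts))    # (task_id, remaining burst)
        time, order = 0, []
        while queue:
            quantum = sum(r for _, r in queue) / len(queue)  # mean remaining burst
            tid, rem = queue.popleft()
            run = min(rem, quantum)
            # let a task finish if little would remain after this slice
            if 0 < rem - run <= finish_factor * quantum:
                run = rem
            time += run
            if rem - run > 0:
                queue.append((tid, rem - run))
            else:
                order.append((tid, time))    # (task, completion time)
        return order

    print(dynamic_round_robin([5, 17, 4, 9]))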
An enhanced round robin using dynamic time quantum for real-time asymmetric burst length processes in cloud computing environment
Cloud computing is a popular, flexible, scalable, and cost-effective technology that provides on-demand services dynamically. The dynamic execution of user requests and resource sharing require proper task scheduling across the available virtual machines, a significant issue that plays a crucial role in building an optimal cloud computing environment. Round Robin is a prevalent scheduling algorithm for distributing resources fairly while keeping response time and turnaround time low. This paper introduces a new enhanced round-robin approach for task scheduling in cloud computing systems. The proposed algorithm generates, and keeps updating, a dynamic quantum time for process execution based on the number of processes in the system and their burst lengths. Because the method adapts dynamically at run time, it suits real-time environments such as cloud computing. Notably, the approach can schedule tasks whose burst times are asymmetrically distributed while avoiding the convoy effect. The experimental results indicate that the proposed algorithm outperforms existing improved round-robin task-scheduling approaches in terms of average waiting time, average turnaround time, and number of context switches: compared against five other enhanced round-robin approaches, it reduced average waiting time by 15.77% and context switching by 20.68% on average. The experiments and comparative study lead to the conclusion that the proposed enhanced round-robin scheduling algorithm is sound and comparatively better suited to cloud computing environments.
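
To complement the sketch after the previous abstract, the one below focuses on what this paper highlights: with asymmetric burst lengths, recomputing the quantum from the live queue reduces context switching relative to a fixed quantum. The workload and the fixed quantum of 4 are assumptions chosen for illustration.

    from collections import deque

    def run_rr(bursts, quantum_fn):
        """Simulate round-robin; return (avg waiting time, context switches).
        quantum_fn receives the current list of remaining bursts."""
        queue = deque(enumerate(bursts))
        time, switches, waiting = 0, 0, {}
        while queue:
            q = quantum_fn([r for _, r in queue])
            tid, rem = queue.popleft()
            run = min(rem, q)
            time += run
            if rem > run:
                queue.append((tid, rem - run))
                switches += 1                        # task preempted: context switch
            else:
                waiting[tid] = time - bursts[tid]    # completion minus burst = waiting
        return sum(waiting.values()) / len(bursts), switches

    asymmetric = [2, 40, 3, 35, 1]   # assumed mix of short and long bursts
    print(run_rr(asymmetric, lambda rems: 4))                      # fixed quantum
    print(run_rr(asymmetric, lambda rems: sum(rems) / len(rems)))  # dynamic quantum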