Catalogue Search | MBRL
7,060 result(s) for "Response time (computers)"
An improved genetic algorithm using greedy strategy toward task scheduling optimization in cloud environments
by Zhu, Huaxi; Zhou, Zhou; Chowdhury, Morshed U.
in Artificial Intelligence; Cloud computing; Completion time
2020
Cloud computing is an emerging distributed system that provides flexible and dynamically scalable computing resources at low cost. Task scheduling in a cloud computing environment is one of the main problems that must be addressed to improve system performance and increase cloud consumer satisfaction. Although many task scheduling algorithms exist, they mainly focus on minimizing the total completion time while ignoring workload balancing. Moreover, the quality of service (QoS) management of existing approaches still needs improvement. In this paper, we propose a novel algorithm named MGGS (modified genetic algorithm (GA) combined with greedy strategy). The proposed algorithm leverages a modified GA combined with a greedy strategy to optimize the task scheduling process. Unlike existing algorithms, MGGS can find an optimal solution in fewer iterations. To evaluate the performance of MGGS, we compared it with several existing algorithms on total completion time, average response time, and QoS parameters. The experimental results show that MGGS performs well compared to other task scheduling algorithms.
Journal Article
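The seeded-GA idea in the abstract above can be sketched as follows. This is an illustrative toy, not the authors' MGGS implementation: the task lengths, VM count, and GA parameters are all hypothetical. A greedy schedule (each task to the currently least loaded VM) seeds the population, so the search starts near a good solution and needs fewer iterations.

```python
import random

TASKS = [3, 7, 2, 9, 4, 6, 5, 8]   # hypothetical task lengths
VMS = 3                             # number of virtual machines

def makespan(assign):
    """Total completion time = load of the most loaded VM."""
    loads = [0.0] * VMS
    for task, vm in zip(TASKS, assign):
        loads[vm] += task
    return max(loads)

def greedy_seed():
    """Greedy strategy: give each task to the currently least loaded VM."""
    loads, assign = [0.0] * VMS, []
    for task in TASKS:
        vm = loads.index(min(loads))
        assign.append(vm)
        loads[vm] += task
    return assign

def evolve(pop_size=20, generations=50, mut_rate=0.2):
    random.seed(0)
    # Population = one greedy individual plus random assignments.
    pop = [greedy_seed()] + [
        [random.randrange(VMS) for _ in TASKS] for _ in range(pop_size - 1)
    ]
    for _ in range(generations):
        pop.sort(key=makespan)                  # fitness = low makespan
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < mut_rate:      # mutation: reassign one task
                child[random.randrange(len(TASKS))] = random.randrange(VMS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(makespan(best))
```

Because the greedy seed always survives the elitist selection, the GA can only match or improve on the greedy makespan.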
Deploying Data-intensive Applications with Multiple Services Components on Edge
by Deng, Shuiguang; Chen, Yishan; Ma, Hongtao
in Ant colony optimization; Computer simulation; Data transmission
2020
In the information age, data volumes are enormous and growing exponentially. Moreover, most application services are interdependent with data and can only execute once the data that drives them is available. In practice, such data-intensive service deployment requires good coordination among different edge servers, which is difficult to achieve while data transmission and load balancing conditions change constantly between edge servers and data-intensive services. To address this, this paper proposes a Data-intensive Service Edge deployment scheme based on Genetic Algorithm (DSEGA). First, a data-intensive edge service composition and an edge server model are generated using a graph theory algorithm; then five algorithms, Genetic Algorithm (GA), Simulated Annealing (SA), Ant Colony Optimization (ACO), optimized Ant Colony Optimization (ACO_v), and Hill Climbing, are each used to obtain an optimal deployment scheme, so that the response time of the data-intensive edge service deployment is minimized under storage constraints and load balancing conditions. The experimental results show that the DSEGA algorithm achieves the shortest response time across services, data components, and edge servers.
Journal Article
Task scheduling in edge-fog-cloud architecture: a multi-objective load balancing approach using reinforcement learning algorithm
by Ramezani Shahidani, Fatemeh; Ghasemi, Arezoo; Toroghi Haghighat, Abolfazl
in Algorithms; Cloud computing; Computer architecture
2023
The rapid development of internet of things (IoT) gadgets and the increasing rate of requests sent from these devices to cloud data centers have resulted in congestion and, consequently, service provisioning delays in the cloud data centers. Accordingly, fog computing emerged as a new computing model to address this challenge. In fog computing, services are provisioned at the edge of the network using devices with computing and storage capabilities located along the path connecting IoT devices to cloud data centers. Fog computing aims to alleviate the computing load on data centers and reduce request delay, notably for real-time and delay-sensitive requests. To achieve these goals, vitally important challenges such as request scheduling, load balancing, and energy consumption reduction, which affect performance and reliability in the edge-fog-cloud computing architecture, must be taken into account. In this paper, a reinforcement learning fog scheduling algorithm is proposed to address these challenges. The experimental results indicate that the proposed algorithm improves load balance and reduces response time compared to existing scheduling algorithms. Additionally, the proposed algorithm outperforms other approaches in terms of the number of devices used.
Journal Article
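A tabular Q-learning dispatcher illustrates the reinforcement-learning scheduling idea in the abstract above. This is a minimal sketch under assumed parameters (node count, reward shaping, learning rate), not the paper's algorithm: the agent learns to send each incoming request to the fog node that keeps queues short and balanced.

```python
import random

NODES = 3                           # hypothetical number of fog nodes

def step(loads, action):
    """Assign one request to node `action`; reward favours short, balanced queues."""
    loads = list(loads)
    loads[action] += 1
    reward = -loads[action]         # longer queue at the chosen node -> worse
    if max(loads) - min(loads) <= 1:
        reward += 2.0               # bonus for keeping the nodes balanced
    return tuple(loads), reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    random.seed(0)
    Q = {}                          # state (load tuple) -> action values
    for _ in range(episodes):
        loads = (0,) * NODES
        for _ in range(12):         # 12 requests arrive per episode
            qs = Q.setdefault(loads, [0.0] * NODES)
            if random.random() < eps:               # epsilon-greedy exploration
                a = random.randrange(NODES)
            else:
                a = qs.index(max(qs))
            nxt, r = step(loads, a)
            nq = Q.setdefault(nxt, [0.0] * NODES)
            qs[a] += alpha * (r + gamma * max(nq) - qs[a])  # Q-learning update
            loads = nxt
    return Q

Q = train()
# Greedy rollout with the learned table: dispatch 12 requests.
loads = (0,) * NODES
for _ in range(12):
    qs = Q.get(loads, [0.0] * NODES)
    loads, _ = step(loads, qs.index(max(qs)))
print(loads)
```

With the balance bonus in the reward, the greedy policy learned from the table tends to spread the 12 requests across the three nodes rather than piling them onto one.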
Fog-based healthcare systems: A systematic review
2021
The healthcare system aims to provide a reliable and organized solution to enhance the health of human society. Studying patients' histories can help physicians consider patients' needs when designing healthcare systems and offering services, which leads to an increase in patient satisfaction. Therefore, healthcare is becoming a growing, contested market. With this significant growth in healthcare systems, challenges such as huge data volumes, response time, latency, and security vulnerability arise. Fog computing, as a well-known distributed architecture, can help solve such challenges. In the fog computing architecture, processing components are placed between the end devices and cloud components, and they execute applications. This architecture is suitable for applications, such as healthcare systems, that need real-time response and low latency. In this paper, a systematic review of available approaches in the field of fog-based healthcare systems is presented; the challenges of applying fog computing in healthcare are explored, classified, and discussed. First, fog computing approaches in healthcare are categorized into three main classes: communication, application, and resource/service. Then, they are discussed and compared based on their tools, evaluation methods, and evaluation metrics. Finally, based on these observations, some open issues and challenges are highlighted for further study in fog-based healthcare.
Journal Article
A load balancing and optimization strategy (LBOS) using reinforcement learning in fog computing environment
by Ali, Hesham A.; Saraya, Mohamed S.; Saleh, Ahmed I.
in Artificial Intelligence; Cloud computing; Computational Intelligence
2020
Fog computing (FC) is a computing paradigm that executes Internet of Things (IoT) applications at the edge of the network. Recently, the great growth of data requests and of FC has enhanced data accessibility and adaptability. However, FC faces many challenges, such as load balancing (LB) and adaptation to failure. Many LB strategies have been proposed for cloud computing, but they are still not applied effectively in the fog. LB is an important issue for achieving high resource utilization, avoiding bottlenecks, avoiding overload and underload, and reducing response time. In this paper, an LB and optimization strategy (LBOS) using a dynamic resource allocation method based on reinforcement learning and a genetic algorithm is proposed. LBOS continuously monitors the traffic in the network, collects information about each server's load, handles incoming requests, and distributes them equally among the available servers using the dynamic resource allocation method. Hence, it enhances performance even at peak times. Accordingly, LBOS is simple and efficient for real-time systems in fog computing, such as healthcare systems. LBOS is concerned with designing an IoT-fog-based healthcare system. The proposed IoT-fog system consists of three layers: (1) the IoT layer, (2) the fog layer, and (3) the cloud layer. Finally, experiments are carried out, and the results show that the proposed solution improves quality of service in the cloud/fog computing environment in terms of allocation cost and reduces response time. Compared with state-of-the-art algorithms, LBOS achieved the best load balancing level (85.71%). Hence, LBOS is an efficient way to improve resource utilization and ensure continuous service.
Journal Article
Efficient Smart Grid Load Balancing via Fog and Cloud Computing
2022
As cloud data centers grow in size, the number of virtual machines (VMs) grows rapidly. Application requests are served by VMs located on physical machines (PMs). The rapid growth of Internet services has created an imbalance of network resources: some hosts have high bandwidth usage and can cause network congestion, which affects overall network performance. Load balancing is an important feature of cloud computing that needs to be optimized. Therefore, this research proposes a 3-tier architecture consisting of a cloud layer, a fog layer, and a consumer layer. The cloud serves the world, and the fog analyzes services at the local edge of the network; the fog stores data temporarily and transmits it to the cloud. In the consumer layer, the world is divided into 6 regions based on the 6 continents. Area 0 is taken as North America, for which two fogs and two clusters of buildings are considered. Microgrids (MGs) are used to supply energy to consumers. In this research, a real-time VM migration algorithm for balancing fog load is proposed. Load balancing algorithms focus on effective resource utilization, maximum throughput, and optimal response time. Compared to the closest data center (CDC) policy, the real-time VM migration algorithm achieves 18% better cost results and optimized response time (ORT). Real-time VM migration with ORT improves response time by 11% compared to dynamic reconfigure with load (DRL). Real-time VM migration always seeks the best solution to minimize cost and improve processing time.
Journal Article
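The threshold-based migration idea behind real-time VM migration can be sketched as follows; the host names, VM loads, and threshold here are hypothetical, and this is not the paper's algorithm. A VM is moved from the most loaded fog host to the least loaded one until the load gap falls within the threshold.

```python
def rebalance(hosts, threshold=0.2):
    """hosts maps host name -> list of VM CPU loads; mutated in place."""
    def load(h):
        return sum(hosts[h])
    while True:
        hot = max(hosts, key=load)      # most loaded host
        cold = min(hosts, key=load)     # least loaded host
        gap = load(hot) - load(cold)
        if gap <= threshold:
            return hosts                # balanced enough: stop migrating
        vm = min(hosts[hot])            # cheapest VM to move
        if vm >= gap:
            return hosts                # moving it would not narrow the gap
        hosts[hot].remove(vm)           # real systems copy VM state here
        hosts[cold].append(vm)

hosts = {"fog-1": [0.5, 0.3, 0.4], "fog-2": [0.1], "fog-3": [0.2, 0.1]}
rebalance(hosts)
print({h: round(sum(vms), 2) for h, vms in hosts.items()})
# → {'fog-1': 0.5, 'fog-2': 0.5, 'fog-3': 0.6}
```

Each migration strictly reduces the spread of host loads, so the loop always terminates.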
IoT-Lite: a lightweight semantic model for the internet of things and its use with dynamic semantics
by Bermudez-Edo, Maria; Taylor, Kerry; Barnaghi, Payam
in Annotations; Complexity; Computer Science
2017
Over the past few years, the semantics community has developed several ontologies to describe concepts and relationships for internet of things (IoT) applications. A key problem is that most IoT-related semantic descriptions are not as widely adopted as expected. One of the main concerns of users and developers is that semantic techniques increase complexity and processing time and are therefore unsuitable for dynamic and responsive environments such as the IoT. To address this concern, we propose IoT-Lite, an instantiation of the semantic sensor network ontology that describes key IoT concepts and allows interoperability and discovery of sensory data in heterogeneous IoT platforms through lightweight semantics. We propose 10 rules for good and scalable semantic model design and follow them to create IoT-Lite. We also demonstrate the scalability of IoT-Lite through experimental analysis and assess it against another solution in terms of round-trip time for query responses. We have linked IoT-Lite with the stream annotation ontology to allow queries over stream data annotations, and we have added dynamic semantics in the form of MathML annotations to IoT-Lite. Dynamic semantics allows the annotation of spatio-temporal values, reducing storage requirements and therefore the response time for queries; it stores mathematical formulas to recover estimated values when actual values are missing.
Journal Article
Representations of common event structure in medial temporal lobe and frontoparietal cortex support efficient inference
by Preston, Alison R.; Schlichting, Margaret L.; Morton, Neal W.
in Adolescent; Adult; Biological Sciences
2020
Prior work has shown that the brain represents memories within a cognitive map that supports inference about connections between individual related events. Real-world adaptive behavior is also supported by recognizing common structure among numerous distinct contexts; for example, based on prior experience with restaurants, when visiting a new restaurant one can expect to first get a table, then order, eat, and finally pay the bill. We used a neurocomputational approach to examine how the brain extracts and uses abstract representations of common structure to support novel decisions. Participants learned image pairs (AB, BC) drawn from distinct triads (ABC) that shared the same internal structure and were then tested on their ability to infer indirect (AC) associations. We found that hippocampal and frontoparietal regions formed abstract representations that coded cross-triad relationships with a common geometric structure. Critically, such common representational geometries were formed despite the lack of explicit reinforcement to do so. Furthermore, we found that representations in parahippocampal cortex are hierarchical, reflecting both cross-triad relationships and distinctions between triads. We propose that representations with common geometric structure provide a vector space that codes inferred item relationships with a direction vector that is consistent across triads, thus supporting faster inference. Using computational modeling of response time data, we found evidence for dissociable vector-based retrieval and pattern-completion processes that contribute to successful inference. Moreover, we found evidence that these processes are mediated by distinct regions, with pattern completion supported by hippocampus and vector-based retrieval supported by parahippocampal cortex and lateral parietal cortex.
Journal Article
Genetic-Based Algorithm for Task Scheduling in Fog–Cloud Environment
by Khiat, Abdelhamid; Haddadi, Mohamed; Bahnes, Nacera
in Algorithms; Cloud computing; Computer architecture
2024
Over the past few years, there has been a consistent increase in the number of Internet of Things (IoT) devices utilizing Cloud services. However, this growth has brought about new challenges, particularly in terms of latency. To tackle this issue, fog computing has emerged as a promising trend. By incorporating additional resources at the edge of the Cloud architecture, the fog–cloud architecture aims to reduce latency by bringing processing closer to end-users. This trend has significant implications for enhancing the overall performance and user experience of IoT systems. One major challenge in achieving this is minimizing latency without increasing total energy consumption. To address this challenge, it is crucial to employ a powerful scheduling solution. Unfortunately, this scheduling problem is generally known as NP-hard, implying that no optimal solution that can be obtained in a reasonable time has been discovered to date. In this paper, we focus on the problem of task scheduling in a fog–cloud based environment. Therefore, we propose a novel genetic-based algorithm called GAMMR that aims to achieve an optimal balance between total consumed energy and total response time. We evaluate the proposed algorithm using simulations on 8 datasets of varying sizes. The results demonstrate that our proposed GAMMR algorithm outperforms the standard genetic algorithm in all tested cases, with an average improvement of 3.4% in the normalized function.
Journal Article
Task scheduling for improved response time of latency sensitive applications in fog integrated cloud environment
by Khanna, Kavita; Sahni, Jyoti; Mehta, Rishika
in Algorithms; Cloud computing; Computer networks
2023
Fog-integrated cloud computing is a distributed computing paradigm where near-user end devices known as fog nodes cooperate with cloud resources hosted at distant datacentres to provide computational and storage services to end-user applications. One of the most challenging issues in fog-integrated cloud-based systems is task scheduling. Most existing scheduling approaches involve centralized decision making, which fails to exploit the advantages of a decentralized approach that maps directly onto the distributed architecture of fog-based systems. This work proposes a decentralized heuristic algorithm for scheduling real-time IoT applications bounded by tolerable latency as the Quality of Service (QoS) constraint. The proposed technique takes into consideration the resource constraints of fog resources to yield a schedule that not only meets the QoS requirements defined in terms of tolerable latency but also improves the response time of applications hosted on a fog-cloud infrastructure. Performance evaluation on different IoT applications indicates that the presented algorithm delivers better performance, reducing response time by 11% on average in comparison to other state-of-the-art policies.
Journal Article
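The latency-constrained placement decision described in the abstract above can be sketched as a simple admission test; the task set and deadlines are hypothetical, and this is not the paper's heuristic. A fog node accepts a task only if its current backlog still lets the task meet its tolerable latency; otherwise the task is offloaded to the cloud.

```python
def schedule(tasks):
    """tasks: list of (exec_time_s, tolerable_latency_s) tuples."""
    fog_backlog = 0.0               # seconds of work queued on the fog node
    placements = []
    for exec_t, deadline in tasks:
        if fog_backlog + exec_t <= deadline:   # fog meets the QoS deadline
            placements.append("fog")
            fog_backlog += exec_t
        else:                                   # QoS violated locally: offload
            placements.append("cloud")
    return placements

tasks = [(0.05, 0.10), (0.05, 0.10), (0.05, 0.10), (0.40, 0.50)]
print(schedule(tasks))
# → ['fog', 'fog', 'cloud', 'fog']
```

Note that the third task is offloaded because the fog backlog would push its response time past its deadline, while the fourth, more tolerant task can still run locally.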