Catalogue Search | MBRL
Explore the vast range of titles available.
353 result(s) for "Edge-cloud computing"
A novel approach for IoT tasks offloading in edge-cloud environments
2021
Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This calls for offloading IoT tasks, shifting heavy computation and storage to resource-rich nodes in Edge Computing and Cloud Computing. Although Edge Computing is a promising enabler for latency-sensitive applications, its deployment produces new challenges. Moreover, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents a novel approach for task offloading in an Edge-Cloud system in order to minimize the overall service time for latency-sensitive applications. The approach adopts fuzzy logic algorithms, considering application characteristics (e.g., CPU demand, network demand and delay sensitivity) as well as resource utilization and resource heterogeneity. A number of simulation experiments are conducted to evaluate the proposed approach against related approaches; it is found to improve the overall service time for latency-sensitive applications and to utilize edge-cloud resources effectively. The results also show that different offloading decisions within the Edge-Cloud system can lead to different service times, owing to differences in computational resources and communication types.
Journal Article
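The abstract above describes a fuzzy-logic offloading decision that weighs CPU demand, network demand, delay sensitivity, and resource utilization. The minimal Python sketch below illustrates that general idea with made-up triangular membership functions and an illustrative rule base; it is not the paper's actual inference system.

# Minimal sketch of a fuzzy-logic offloading decision (not the paper's exact
# rule base); membership functions and rules are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(task_cpu, net_demand, delay_sens, edge_util):
    # Inputs are assumed to be normalized to roughly [0, 1].
    return {
        "cpu_high":   tri(task_cpu,   0.4, 1.0, 1.6),
        "net_high":   tri(net_demand, 0.4, 1.0, 1.6),
        "delay_high": tri(delay_sens, 0.4, 1.0, 1.6),
        "edge_busy":  tri(edge_util,  0.5, 1.0, 1.5),
    }

def offload_decision(task_cpu, net_demand, delay_sens, edge_util):
    m = fuzzify(task_cpu, net_demand, delay_sens, edge_util)
    # Illustrative rules: min acts as AND, max aggregates alternatives.
    to_cloud = min(m["cpu_high"], 1.0 - m["delay_high"])   # heavy and delay-tolerant
    to_edge = max(m["delay_high"], min(m["net_high"], 1.0 - m["edge_busy"]))
    stay_local = 1.0 - max(to_cloud, to_edge)
    scores = {"cloud": to_cloud, "edge": to_edge, "local": stay_local}
    return max(scores, key=scores.get)

print(offload_decision(task_cpu=0.9, net_demand=0.3, delay_sens=0.95, edge_util=0.4))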
Deep learning-driven wireless communication for edge-cloud computing: opportunities and challenges
2020
Future wireless communications are becoming increasingly complex, with different radio access technologies, transmission backhauls, and network slices, and they play an important role in the emerging edge computing paradigm, which aims to reduce the wireless transmission latency between end-users and edge clouds. Deep learning techniques, which have already demonstrated overwhelming advantages in a wide range of Internet of Things (IoT) applications, show significant promise for such complicated real-world scenarios. Although the convergence of radio access networks and deep learning is still at a preliminary exploration stage, it has already attracted tremendous attention from both academia and industry. To address emerging theoretical and practical issues, ranging from basic concepts to research directions in future wireless networking applications and architectures, this paper reviews the latest research progress and major technological deployments of deep learning in the development of wireless communications. We highlight the intuitions and key technologies of deep learning-driven wireless communication from the aspects of end-to-end communication, signal detection, channel estimation and compressive sensing, encoding and decoding, and security and privacy. Main challenges, potential opportunities, and future trends in incorporating deep learning schemes into wireless communication environments are further discussed.
Journal Article
A novel privacy-preserving speech recognition framework using bidirectional LSTM
2020
Utilizing speech as the transmission medium in the Internet of Things (IoT) is an effective way to reduce latency while improving the efficiency of human-machine interaction. In the field of speech recognition, the Recurrent Neural Network (RNN) has significant advantages in improving recognition accuracy. However, some RNN-based intelligent speech recognition applications offer insufficient privacy protection for speech data, while others that do preserve privacy are time-consuming, especially in model training and speech recognition. Therefore, in this paper we propose a novel Privacy-preserving Speech Recognition framework using a Bidirectional Long Short-Term Memory neural network, namely PSRBL. On the one hand, PSRBL constructs secure activation functions by combining piecewise-linear approximations with an additive secret sharing protocol, namely a secure piecewise-linear Sigmoid and a secure piecewise-linear Tanh, to preserve the privacy of speech data during the recognition process running on edge servers. On the other hand, in order to reduce the time spent on both training and recognition of the speech model while keeping accuracy high, PSRBL first uses the secure activation functions to refit the original activation functions in the bidirectional Long Short-Term Memory neural network (LSTM), and then makes full use of the left and right context of the speech data by employing the bidirectional LSTM. Experiments conducted on the speech dataset TIMIT show that our framework PSRBL performs well. Specifically, compared with state-of-the-art frameworks, PSRBL significantly reduces time consumption in both training and recognition of the speech model while providing the same level of privacy protection for speech data as the comparison methods.
Journal Article
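Two building blocks named in the PSRBL abstract, a piecewise-linear sigmoid and additive secret sharing, can be illustrated with the short Python sketch below. The thresholds, the sharing range, and the two-party split are assumptions for illustration; the secure comparison needed for the saturation branches and the paper's full protocol are not shown.

# Minimal sketch of two building blocks behind PSRBL-style secure activations
# (illustrative, not the paper's protocol): (1) a piecewise-linear sigmoid
# approximation and (2) additive secret sharing of its linear segment.
import random

def pl_sigmoid(x):
    # Hard-sigmoid style piecewise-linear approximation of the sigmoid.
    if x <= -2.5:
        return 0.0
    if x >= 2.5:
        return 1.0
    return 0.2 * x + 0.5

def share(x):
    # Additive secret sharing: x = s0 + s1, and each share alone reveals nothing.
    s0 = random.uniform(-1e6, 1e6)
    return s0, x - s0

def linear_segment_on_shares(s0, s1):
    # Each party applies the affine segment y = 0.2*x + 0.5 locally; the
    # constant is added by only one party so that y0 + y1 = 0.2*x + 0.5.
    y0 = 0.2 * s0 + 0.5
    y1 = 0.2 * s1
    return y0, y1

x = 1.3
s0, s1 = share(x)
y0, y1 = linear_segment_on_shares(s0, s1)
print(pl_sigmoid(x), y0 + y1)   # both are about 0.76; the secure comparison
                                # for the saturation branches is omitted here.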
Genetic algorithm-based secure cooperative control for high-order nonlinear multi-agent systems with unknown dynamics
by
Dolly, D Raveena Judie
,
Alassafi, Madini O.
,
Wang, Xin
in
Cloud computing
,
Computer Communication Networks
,
Computer Science
2024
Research has recently grown on multi-agent systems (MAS) and their coordination and secure cooperative control, for example in the field of edge-cloud computing. MAS offers robustness and flexibility compared to centralized systems by distributing control across decentralized agents, allowing the system to adapt and scale without an overhaul. The collective behavior emerging from agent interactions can solve complex tasks beyond individual capabilities. However, controlling high-order nonlinear MAS with unknown dynamics raises challenges. This paper proposes an enhanced genetic algorithm strategy to improve secure cooperative control performance. An efficient encoding method, adaptive decoding schemes, and heuristic initialization are introduced. These innovations enable effective exploration of the solution space and accelerate convergence. Individual enhancement via load balancing, communication avoidance, and iterative refinement intensifies local search. Simulations demonstrate superior performance over conventional algorithms for complex control problems with uncertainty. The proposed method promises robust, efficient, and consistent solutions by adapting to find optimal points and exploiting promising areas of the search space. This has implications for securely controlling real-world MAS across domains such as robotics, power systems, and autonomous vehicles.
Journal Article
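The genetic-algorithm loop sketched below mirrors the steps listed in the abstract (real-coded encoding, heuristic initialization, selection, crossover, and mutation as iterative refinement), but the cost function, population size, and mutation settings are placeholder assumptions rather than the paper's secure cooperative control objective.

# Minimal sketch of the genetic-algorithm loop the abstract describes; the
# cost function is a placeholder, not the paper's control-performance cost.
import random

DIM, POP, GENS = 6, 40, 200

def cost(genome):
    # Placeholder for the secure cooperative control cost (e.g. tracking error
    # plus a communication penalty); here just a simple quadratic bowl.
    return sum(g * g for g in genome)

def heuristic_init():
    # Assumed heuristic: bias initial controller gains toward small magnitudes.
    return [random.gauss(0.0, 0.5) for _ in range(DIM)]

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if cost(a) < cost(b) else b

def crossover(p1, p2):
    return [random.choice(pair) for pair in zip(p1, p2)]

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) if random.random() < 0.2 else g
            for g in genome]

pop = [heuristic_init() for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]
best = min(pop, key=cost)
print("best cost:", round(cost(best), 4))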
Task partitioning and offloading in IoT cloud-edge collaborative computing framework: a survey
2022
The Internet of Things (IoT) is made up of a growing number of devices that are digitalized to have sensing, networking, and computing capabilities. Traditionally, the large volume of data generated by IoT devices is processed in a centralized cloud computing model. However, this model is no longer able to meet the computational demands of large-scale and geographically distributed IoT devices for executing tasks with high performance, low latency, and low energy consumption. Therefore, edge computing has emerged as a complement to cloud computing. To improve system performance, it is necessary to partition and offload some tasks generated by local devices to the remote cloud or edge nodes. However, most current research focuses on designing efficient offloading strategies and service orchestration; little attention has been paid to jointly optimizing task partitioning and offloading for different application types. In this paper, we give a comprehensive overview of existing task partitioning and offloading frameworks, focusing on the inputs and the core of each framework's decision engine. We also propose comprehensive taxonomy metrics for comparing task partitioning and offloading approaches in the IoT cloud-edge collaborative computing framework. Finally, we discuss the problems and challenges that may be encountered in the future.
Journal Article
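A decision engine of the kind this survey classifies can be reduced to a small latency comparison: estimate each subtask's completion time locally, on an edge node, and in the cloud, then pick the cheapest. The Python sketch below uses entirely made-up CPU speeds, bandwidths, and subtask sizes to show the shape of such an engine; it is not drawn from the survey.

# Illustrative sketch of a partition-and-offload decision engine: for each
# subtask, estimate the completion time on each tier and offload to the
# cheapest option. All rates and sizes are invented parameters.

LOCAL_CPS = 1e9         # local CPU cycles per second
EDGE_CPS = 8e9
CLOUD_CPS = 32e9
EDGE_BW = 50e6 / 8      # bytes/s on the device-to-edge link
CLOUD_BW = 10e6 / 8     # bytes/s on the device-to-cloud path

def completion_times(cycles, data_bytes):
    return {
        "local": cycles / LOCAL_CPS,
        "edge":  data_bytes / EDGE_BW + cycles / EDGE_CPS,
        "cloud": data_bytes / CLOUD_BW + cycles / CLOUD_CPS,
    }

def offload(subtasks):
    plan = {}
    for name, (cycles, data_bytes) in subtasks.items():
        times = completion_times(cycles, data_bytes)
        plan[name] = min(times, key=times.get)
    return plan

app = {"preprocess": (2e8, 2e5), "inference": (6e9, 5e5), "postprocess": (1e8, 1e4)}
print(offload(app))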
HVS-inspired adversarial image generation with high perceptual quality
2023
Adversarial images are able to fool Deep Neural Network (DNN) based visual identity recognition systems, and they have the potential to be widely used in online social media for privacy-preserving purposes, especially in edge-cloud computing. However, most current adversarial attack techniques focus on strengthening the attack without a deliberate, methodical effort to retain the perceptual quality of the resulting adversarial examples. This introduces visible distortion in the adversarial examples and degrades users' photo-sharing experience. In this work, we propose a method for generating adversarial images inspired by the Human Visual System (HVS) in order to maintain high perceptual quality. Firstly, a novel perceptual loss function is proposed based on the Just Noticeable Difference (JND), which considers only the loss beyond the JND thresholds. Then, a perturbation adjustment strategy is developed to assign more perturbation to the less sensitive color channels, according to the HVS's sensitivity to different colors. Experimental results indicate that our algorithm surpasses state-of-the-art techniques in both subjective viewing and objective assessment on the VGGFace2 dataset.
Journal Article
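The JND-guided perturbation idea in the abstract can be approximated in a few lines of NumPy: clip an FGSM-style perturbation per pixel by a crude JND estimate and scale it per color channel. The JND proxy, the channel weights, and the random stand-in "gradient" below are all illustrative assumptions, not the paper's loss or attack.

# Minimal sketch (not the paper's method): a perturbation that is (1) clipped
# per pixel by a crude luminance-and-texture JND estimate and (2) scaled per
# channel so less sensitive channels absorb more change.
import numpy as np

def jnd_map(img):
    # Toy JND proxy: higher tolerance in very dark/bright and textured regions.
    lum = img.mean(axis=2, keepdims=True)                        # H x W x 1
    background = np.clip(np.abs(lum - 0.5) * 0.08, 0.01, 0.05)
    gy, gx = np.gradient(lum[..., 0])
    texture = np.clip(np.sqrt(gx**2 + gy**2), 0.0, 0.05)[..., None]
    return background + texture                                  # H x W x 1

CHANNEL_WEIGHT = np.array([0.6, 0.4, 1.0])   # assumed R, G, B sensitivity weights

def perturb(img, grad_sign, eps=0.03):
    budget = np.minimum(eps, jnd_map(img)) * CHANNEL_WEIGHT      # H x W x 3
    return np.clip(img + grad_sign * budget, 0.0, 1.0)

img = np.random.rand(64, 64, 3).astype(np.float32)
grad_sign = np.sign(np.random.randn(64, 64, 3))   # stand-in for a model gradient
adv = perturb(img, grad_sign)
print(float(np.abs(adv - img).max()))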
Rough fuzzy model based feature discretization in intelligent data preprocess
2021
Feature discretization is an important preprocessing technology for massive data in industrial control. It improves the efficiency of edge-cloud computing by transforming continuous features into discrete ones, so as to meet the requirements of high-quality cloud services. Compared with other discretization methods, discretization based on rough sets has achieved good results in many applications because it can make full use of the known knowledge base without any prior information. However, the equivalence classes of rough sets are ordinary (crisp) sets, which makes it difficult to describe the fuzzy components in the data, and accuracy is low for some complex data types in big-data environments. Therefore, we propose a rough fuzzy model based discretization algorithm (RFMD). Firstly, we use fuzzy c-means clustering to obtain the membership of each sample to each category. Then, we fuzzify the equivalence classes of the rough set using the obtained memberships, and establish a genetic-algorithm fitness function based on the rough fuzzy model to select the optimal discretization breakpoints on the continuous features. Finally, we compare the proposed method with discretization algorithms based on rough sets, information entropy, and the chi-square test on remote sensing datasets. The experimental results verify the effectiveness of our method.
Journal Article
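The first step of the RFMD pipeline described above, fuzzy c-means memberships for a continuous feature, is sketched below in NumPy; the rough-fuzzy fitness function and the genetic search over breakpoints are omitted, and the synthetic data and parameters are assumptions for illustration.

# Sketch of the first RFMD step as described in the abstract: fuzzy c-means
# memberships for a 1-D feature (not the paper's code).
import numpy as np

def fuzzy_c_means_1d(x, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9   # N x c distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)           # memberships, rows sum to 1
        centers = (u**m * x[:, None]).sum(axis=0) / (u**m).sum(axis=0)
    return u, centers

x = np.concatenate([np.random.normal(0, 1, 100),
                    np.random.normal(5, 1, 100),
                    np.random.normal(10, 1, 100)])
u, centers = fuzzy_c_means_1d(x)
print(np.sort(centers))   # cluster centers; candidate breakpoints would then be
                          # searched between adjacent centers by the genetic step.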
Take one for the team: on the time efficiency of application-level buffer-aided relaying in edge cloud communication
by Li, Zheng; Millar-Bilbao, Francisco; Rojas-Durán, Gonzalo
in Buffers, Cloud computing, Communication
2021
Background: Adding buffers to networks is part of the fundamental advance in data communication. Since edge cloud computing is based on a heterogeneous collaboration network model in a federated environment, it is natural to consider buffer-aided data communication for edge cloud applications. However, existing studies generally pursue the beneficial features of buffering at a cost in time, and many investigations focus on lower-layer data packets rather than application-level communication transactions. Aims: Driven by our argument against the claim that buffers "can introduce additional delay to the communication between the source and destination", this research investigates whether (and if so, to what extent) an application-level buffering mechanism can improve time efficiency in edge-cloud data transmissions. Method: To collect empirical evidence for the theoretical discussion, we built a testbed that simulates a remote health monitoring system, and conducted both experimental and modeling investigations into first-in-first-served (FIFS) and buffer-aided data transmissions at a relay node in the system. Results: An empirical inequality system is established that reveals the time efficiency of buffer-aided edge cloud communication. For example, given the reference of transmitting the 11th data entity in the FIFS manner, the inequality system suggests buffering up to 50 data entities into one transmission transaction on our testbed. Conclusions: Beyond the trade-off benefits (e.g., energy efficiency and fault tolerance) of buffering data, our investigation argues that the buffering mechanism can also speed up data transmission under certain circumstances, so it is worth taking data buffering into account when designing and developing edge cloud applications, even in time-critical contexts.
Journal Article
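The FIFS-versus-buffering trade-off reported in the abstract can be captured by a toy timing model: every transmission transaction pays a fixed overhead, so batching entities amortizes that overhead at the price of waiting for the batch to fill. The overhead and per-entity times below are invented parameters, not the testbed's measurements.

# Toy timing model (made-up parameters, not the paper's inequality system) for
# the trade-off the study quantifies: per-entity FIFS relaying pays the
# transaction overhead every time, buffering k entities pays it once.

OVERHEAD = 0.20     # seconds of per-transaction setup at the relay (assumed)
PER_ENTITY = 0.05   # seconds to forward one data entity (assumed)

def fifs_completion(n):
    # The n-th entity is delivered after n separate transactions.
    return n * (OVERHEAD + PER_ENTITY)

def buffered_completion(n, batch):
    # Entities wait until a full batch arrives, then are sent together.
    full_batches = -(-n // batch)                 # ceiling division
    return full_batches * (OVERHEAD + batch * PER_ENTITY)

n = 11
for batch in (1, 5, 11, 50):
    print(batch, round(fifs_completion(n), 2), round(buffered_completion(n, batch), 2))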
Optimal cloud assistance policy of end-edge-cloud ecosystem for mitigating edge distributed denial of service attacks
2021
Edge computing has become a fundamental technology for Internet of Things (IoT) applications. To provide reliable services for latency-sensitive applications, edge servers must respond to end devices within the shortest possible time. Edge distributed denial-of-service (DDoS) attacks, which render edge servers unusable by legitimate IoT applications by sending heavy requests from distributed attacking sources, are a threat that leads to severe latency. To protect edge servers from DDoS attacks, a hybrid computing paradigm known as an end-edge-cloud ecosystem provides a possible solution, because the architecture allows cloud assistance: edge servers can upload their pending tasks to a cloud center to reduce their workload when encountering a DDoS attack, in effect borrowing resources from the cloud. Nevertheless, before using the ecosystem to mitigate edge DDoS attacks, we must address the core problem that edge servers must decide when, and to what extent, they should upload tasks to the cloud center. In this study, we focus on the design of optimal cloud assistance policies. First, we propose an edge workload evolution model that describes how the workload of the edge servers changes over time under a given cloud assistance policy. On this basis, we quantify the effectiveness of a policy by the resulting overall latency and formulate an optimal control problem for seeking optimal policies that minimize this latency. We then provide solutions by deriving the optimality system and discuss some properties of the optimal solutions to accelerate the problem solving. Next, we introduce a numerical iterative algorithm to seek solutions that satisfy the optimality system. Finally, we provide several illustrative numerical examples. The results show that the optimal policies obtained can effectively mitigate edge DDoS attacks.
Journal Article
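A discrete-time caricature of the workload-evolution idea in the abstract is sketched below: the edge backlog grows with attack arrivals, shrinks with edge service, and can be relieved by uploading tasks to the cloud. The paper derives optimal policies from an optimality system; this sketch only simulates one hand-picked threshold policy with made-up parameters.

# Discrete-time toy version of the workload-evolution model (illustrative
# parameters; not the paper's continuous-time optimal control formulation).

EDGE_RATE = 10.0        # tasks the edge server can finish per step (assumed)
ATTACK = [30.0 if 20 <= t < 40 else 5.0 for t in range(80)]   # DDoS burst (assumed)

def simulate(upload_policy):
    w, total_latency, total_cloud = 0.0, 0.0, 0.0
    for t, arrivals in enumerate(ATTACK):
        u = upload_policy(w, t)                  # tasks pushed to the cloud
        w = max(0.0, w + arrivals - EDGE_RATE - u)
        total_latency += w                       # backlog as a latency proxy
        total_cloud += u
    return total_latency, total_cloud

def threshold_policy(w, t, threshold=20.0, rate=25.0):
    return rate if w > threshold else 0.0

print("no cloud assistance:", simulate(lambda w, t: 0.0))
print("threshold policy:   ", simulate(threshold_policy))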
Optimizing task offloading and resource allocation in edge-cloud networks: a DRL approach
by Han, Youn-Hee; Lim, Hyun-Kyo; Seok, Yeong-Jun
in Algorithms, Cloud computing, Computation offloading
2023
Edge-cloud computing is an emerging approach in which tasks are offloaded from mobile devices to edge or cloud servers. However, task offloading may result in increased energy consumption and delays, and the decision to offload a task depends on various factors such as time-varying radio channels, available computation resources, and the location of devices. Because edge-cloud computing is a dynamic and resource-constrained environment, making optimal offloading decisions is challenging. This paper aims to optimize offloading and resource allocation to minimize delay and meet computation and communication needs in edge-cloud computing. Optimizing task offloading in the edge-cloud computing environment is a multi-objective problem, for which we employ deep reinforcement learning to find the optimal solution. To accomplish this, we formulate the problem as a Markov decision process and use a Double Deep Q-Network (DDQN) algorithm. Our DDQN-edge-cloud (DDQNEC) scheme dynamically makes offloading decisions by analyzing resource utilization, task constraints, and the current status of the edge-cloud network. Simulation results demonstrate that DDQNEC outperforms heuristic approaches in terms of resource utilization, task offloading, and task rejection.
Journal Article
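At the core of a DDQN-based scheme like DDQNEC is the Double DQN target, in which the online network selects the next action and the target network evaluates it. The NumPy sketch below shows that target computation with untrained linear stand-ins for the networks and an invented state/action encoding; it is not the paper's implementation.

# Sketch of the Double DQN target computation behind a DDQNEC-style agent
# (illustrative: untrained linear "networks" and a made-up state encoding).
import numpy as np

N_ACTIONS = 3            # e.g. execute locally, offload to edge, offload to cloud
STATE_DIM = 4            # e.g. task size, channel quality, edge load, cloud load
GAMMA = 0.95

rng = np.random.default_rng(0)
W_online = rng.normal(size=(STATE_DIM, N_ACTIONS))   # stand-in Q-network weights
W_target = rng.normal(size=(STATE_DIM, N_ACTIONS))

def q_values(weights, state):
    return state @ weights                           # linear Q(s, .) for the sketch

def ddqn_target(reward, next_state, done):
    if done:
        return reward
    # Double DQN: the online net picks the action, the target net scores it.
    best_action = int(np.argmax(q_values(W_online, next_state)))
    return reward + GAMMA * q_values(W_target, next_state)[best_action]

state = rng.normal(size=STATE_DIM)
next_state = rng.normal(size=STATE_DIM)
td_target = ddqn_target(reward=-0.3, next_state=next_state, done=False)
td_error = td_target - q_values(W_online, state)[1]  # action 1 = offload to edge (assumed)
print(round(td_target, 3), round(td_error, 3))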