6,480 result(s) for "Bandwidth (computing)"
Joint optimization of wireless bandwidth and computing resource in cloudlet-based mobile cloud computing environment
Mobile cloud computing (MCC) is an emerging technology that relieves the tension between compute-intensive mobile applications and resource-constrained mobile terminals by offloading computing tasks to remote cloud servers. In this paper, we consider a novel MCC architecture consisting of a remote cloud server, a cloudlet and mobile terminals to guarantee low latency and low mobile energy consumption. To overcome the two main bottlenecks, the wireless bandwidth between the mobile terminal and the cloudlet and the computation capability of the cloudlet, a joint optimization strategy is proposed to enhance the quality of mobile cloud service. We formulate the wireless bandwidth and computing resource allocation model as a triple-stage Stackelberg game and solve it by backward induction. In addition, the interplay among the three stages is discussed and the optimal subgame equilibrium for each stage is analyzed. An iterative algorithm is proposed to obtain the Stackelberg equilibrium. Numerical results demonstrate the effectiveness of the proposed algorithm.
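A minimal sketch of the backward-induction idea behind solving a Stackelberg game: the follower's best response is substituted into the leader's problem, which is then optimized directly. The quadratic utilities, parameter values and golden-section search below are illustrative assumptions, not the paper's three-stage model.

```python
# Backward induction for a two-player Stackelberg game (illustrative).
# The utility functions and constants are made-up placeholders.

def follower_best_response(price: float) -> float:
    """Follower demand maximizing u_f(d) = v*d - 0.5*d**2 - price*d."""
    v = 10.0                      # follower's marginal valuation (assumed)
    return max(v - price, 0.0)    # argmax of the concave utility

def leader_utility(price: float) -> float:
    """Leader revenue minus cost, anticipating the follower's response."""
    cost = 1.0                    # per-unit provisioning cost (assumed)
    d = follower_best_response(price)
    return (price - cost) * d

def solve_stackelberg(lo: float = 0.0, hi: float = 10.0, iters: int = 60) -> float:
    """Golden-section search over the leader's price; the follower's
    reaction is already folded into leader_utility (backward induction)."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        x1, x2 = b - phi * (b - a), a + phi * (b - a)
        if leader_utility(x1) >= leader_utility(x2):
            b = x2
        else:
            a = x1
    return (a + b) / 2

p = solve_stackelberg()
print(f"leader price ~ {p:.3f}, follower demand ~ {follower_best_response(p):.3f}")
```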
Vehicular Edge Computing and Networking: A Survey
As a key enabler of the Intelligent Transportation System (ITS), the Vehicular Ad Hoc Network (VANET) has received remarkable interest from academia and industry. Emerging vehicular applications and exponentially growing data have naturally led to increased needs for communication, computation and storage resources, and to strict performance requirements on response time and network bandwidth. To deal with these challenges, Mobile Edge Computing (MEC) is regarded as a promising solution. MEC pushes powerful computational and storage capacities from the remote cloud to the edge of the network, in close proximity to vehicular users, which enables low latency and reduced bandwidth consumption. Driven by the benefits of MEC, many efforts have been devoted to integrating vehicular networks into MEC, forming a novel paradigm named Vehicular Edge Computing (VEC). In this paper, we provide a comprehensive survey of state-of-the-art research on VEC. First, we provide an overview of VEC, including its introduction, architecture, key enablers, advantages, challenges and several attractive application scenarios. Then, we describe several typical research topics where VEC is applied. After that, we present a careful literature review of existing research in VEC, organized by classification. Finally, we identify open research issues and discuss future research directions.
Computing in the Sky: A Survey on Intelligent Ubiquitous Computing for UAV-Assisted 6G Networks and Industry 4.0/5.0
Unmanned Aerial Vehicles (UAVs) are increasingly being used in a high-computation paradigm enabled with smart applications in Beyond Fifth Generation (B5G) wireless communication networks. These networks generate a considerable amount of heterogeneous data through the expanding number of Internet of Things (IoT) devices in smart environments. However, storing and processing massive data with limited computational capability and energy availability at local nodes in the IoT network has been a significant difficulty, especially when deploying Artificial Intelligence (AI) techniques to extract discriminatory information from the massive amount of data for different tasks. Therefore, Mobile Edge Computing (MEC) has evolved as a promising computing paradigm that improves the quality of service of edge devices and network performance beyond what cloud computing networks offer, addressing the challenging problems of latency and computation-intensive offloading in a UAV-assisted framework. This paper provides a comprehensive review of intelligent UAV computing technology to enable 6G networks over smart environments. We highlight the utility of UAV computing and the critical role of Federated Learning (FL) in meeting the challenges related to energy, security, task offloading, and latency of IoT data in smart environments. We present the reader with an insight into UAV computing, its advantages, applications, and challenges, which can provide helpful guidance for future research.
Parallel convolutional processing using an integrated photonic tensor core
With the proliferation of ultrahigh-speed mobile networks and internet-connected devices, along with the rise of artificial intelligence (AI) [1], the world is generating exponentially increasing amounts of data that need to be processed in a fast and efficient way. Highly parallelized, fast and scalable hardware is therefore becoming progressively more important [2]. Here we demonstrate a computationally specific integrated photonic hardware accelerator (tensor core) that is capable of operating at speeds of trillions of multiply-accumulate operations per second (10^12 MAC operations per second, or tera-MACs per second). The tensor core can be considered the optical analogue of an application-specific integrated circuit (ASIC). It achieves parallelized photonic in-memory computing using phase-change-material memory arrays and photonic chip-based optical frequency combs (soliton microcombs [3]). The computation is reduced to measuring the optical transmission of reconfigurable and non-resonant passive components and can operate at a bandwidth exceeding 14 gigahertz, limited only by the speed of the modulators and photodetectors. Given recent advances in the hybrid integration of soliton microcombs at microwave line rates [3-5], ultralow-loss silicon nitride waveguides [6,7], and high-speed on-chip detectors and modulators, our approach provides a path towards full complementary metal-oxide-semiconductor (CMOS) wafer-scale integration of the photonic tensor core. Although we focus on convolutional processing, more generally our results indicate the potential of integrated photonics for parallel, fast, and efficient computational hardware in data-heavy AI applications such as autonomous driving, live video processing, and next-generation cloud computing services. An integrated photonic processor, based on phase-change-material memory arrays and chip-based optical frequency combs, which can operate at speeds of trillions of multiply-accumulate (MAC) operations per second, is demonstrated.
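As a rough illustration of how parallelism turns a roughly 14 GHz per-channel modulation bandwidth into tera-MAC throughput, the back-of-the-envelope arithmetic below multiplies the per-channel rate by assumed wavelength and matrix-size parallelism; the channel counts are made-up numbers, not the device parameters reported in the paper.

```python
# Illustrative throughput arithmetic for a parallel photonic MAC engine.
# All parallelism factors below are assumptions for the sake of example.

modulation_rate_hz = 14e9          # per-channel symbol rate, set by the ~14 GHz bandwidth
wavelengths = 16                   # parallel comb lines (assumed)
matrix_rows, matrix_cols = 4, 4    # MACs performed per symbol per wavelength (assumed)

macs_per_second = modulation_rate_hz * wavelengths * matrix_rows * matrix_cols
print(f"{macs_per_second:.2e} MAC/s ~ {macs_per_second / 1e12:.1f} tera-MACs per second")
```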
A resource allocation model based on double-sided combinational auctions for transparent computing
Transparent Computing (TC) is becoming a promising paradigm in the network computing era. Although many researchers believe that the TC model places high demands on communication bandwidth, there has been no research on the communication bandwidth boundary or on resource allocation, which impedes the development of TC. This paper studies an efficient transparent computing resource allocation model from an economic viewpoint. First, with quality of experience (QoE) guaranteed, the utility function of clients and transparent computing providers (TCPs) is constructed. After that, the demand boundary of communication bandwidth is analyzed under the ideal transparent computing model. Based on these analyses, a resource allocation scheme based on double-sided combinational auctions (DCA) is proposed so that resources can be shared by both the service side and the client side while the welfare of the whole society is maximized. Afterward, scheduling results in different experimental scenarios are given, which verify the effectiveness of the proposed strategy. Overall, this work provides an effective resource allocation model for optimizing the performance of TC.
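A minimal sketch of a double-sided auction clearing rule, simplified to single-unit bids; the paper's combinational (bundle) bidding and QoE constraints are omitted, and all bids and asks below are made-up numbers.

```python
# Single-unit double auction: match highest bids with lowest asks
# while each match still creates positive surplus (illustrative).

def clear_double_auction(bids, asks):
    """Return welfare-improving (bid, ask, price) matches."""
    bids = sorted(bids, reverse=True)   # clients' willingness to pay
    asks = sorted(asks)                 # providers' reserve prices
    trades = []
    for bid, ask in zip(bids, asks):
        if bid < ask:                   # no further welfare-improving match
            break
        price = (bid + ask) / 2         # split the surplus (one common pricing rule)
        trades.append((bid, ask, price))
    return trades

for bid, ask, price in clear_double_auction([9, 7, 4, 2], [3, 5, 6, 8]):
    print(f"bid {bid} matched with ask {ask} at price {price}")
```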
Distributed Deep Learning-based Offloading for Mobile Edge Computing Networks
This paper studies mobile edge computing (MEC) networks in which multiple wireless devices (WDs) choose to offload their computation tasks to an edge server. To conserve energy and maintain quality of service for the WDs, the joint optimization of offloading decisions and bandwidth allocation is formulated as a mixed-integer programming problem. However, the problem suffers from the curse of dimensionality and cannot be solved effectively and efficiently by general optimization tools, especially for large numbers of WDs. In this paper, we propose a distributed deep learning-based offloading (DDLO) algorithm for MEC networks, in which multiple parallel DNNs are used to generate offloading decisions. We adopt a shared replay memory to store newly generated offloading decisions, which are then used to train and improve all DNNs. Extensive numerical results show that the proposed DDLO algorithm can generate near-optimal offloading decisions in less than one second.
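A structural sketch of the DDLO idea as the abstract describes it: several parallel DNNs each propose a binary offloading decision, the best candidate by some score is kept in a shared replay memory, and every DNN is trained on that memory. The reward function here is a placeholder, not the paper's MEC model.

```python
# Parallel-DNN offloading with a shared replay memory (illustrative).
import random
import torch
import torch.nn as nn

N_DEVICES, K, MEM = 10, 4, 256    # devices, parallel DNNs, memory size (assumed)

def reward(channel_gain: torch.Tensor, decision: torch.Tensor) -> float:
    """Placeholder score: prefer offloading devices with strong channels."""
    return float((decision * channel_gain - 0.3 * decision).sum())

nets = [nn.Sequential(nn.Linear(N_DEVICES, 64), nn.ReLU(),
                      nn.Linear(64, N_DEVICES), nn.Sigmoid()) for _ in range(K)]
opts = [torch.optim.Adam(net.parameters(), lr=1e-3) for net in nets]
memory = []                        # shared replay memory of (state, best decision)

for step in range(500):
    h = torch.rand(N_DEVICES)                        # random channel state
    candidates = [(net(h) > 0.5).float() for net in nets]
    best = max(candidates, key=lambda d: reward(h, d))
    memory.append((h, best))
    memory = memory[-MEM:]
    if len(memory) >= 32:
        batch = random.sample(memory, 32)
        states = torch.stack([s for s, _ in batch])
        labels = torch.stack([d for _, d in batch])
        for net, opt in zip(nets, opts):             # every DNN learns from the memory
            loss = nn.functional.binary_cross_entropy(net(states), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
```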
Intelligent resource allocation in mobile blockchain for privacy and security transactions: a deep reinforcement learning based approach
To protect the privacy and data security of mobile devices during transactions in the Industrial Internet of Things (IIoT), we propose a mobile edge computing (MEC)-based mobile blockchain framework that accounts for the limited bandwidth and computing power of small base stations (SBSs). First, we formulate a joint bandwidth and computing resource allocation problem to maximize the long-term utility of all mobile devices, taking into account the mobility of devices as well as blockchain throughput. We decompose the formulated problem into two subproblems to reduce the dimension of the action space. Then, we propose a deep reinforcement learning algorithm with additional particle swarm optimization (DRPO) to solve the two subproblems, in which a particle swarm optimization algorithm is leveraged to avoid unnecessary search by the deep deterministic policy gradient approach. Simulation results demonstrate the effectiveness of our method from various aspects.
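A minimal particle swarm optimization (PSO) sketch of the kind that could refine a continuous resource-allocation action and spare a DDPG actor unnecessary search; the toy concave utility and bounds are assumptions, not the paper's objective.

```python
# Plain PSO over a continuous allocation vector (illustrative).
import numpy as np

def utility(x: np.ndarray) -> float:
    """Toy objective: diminishing returns on bandwidth/compute shares."""
    return float(np.sum(np.log1p(x)) - 0.1 * np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_particles = 4, 20
pos = rng.uniform(0, 5, (n_particles, dim))      # allocations in [0, 5] (assumed bounds)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([utility(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 5)               # respect resource bounds
    vals = np.array([utility(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best allocation:", np.round(gbest, 3), "utility:", round(utility(gbest), 3))
```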
Federated Learning in Edge Computing: A Systematic Survey
Edge Computing (EC) is a new architecture that extends Cloud Computing (CC) services closer to data sources. EC combined with Deep Learning (DL) is a promising technology and is widely used in several applications. However, in conventional DL architectures with EC enabled, data producers must frequently send and share data with third parties, edge or cloud servers, to train their models. This architecture is often impractical due to high bandwidth requirements, legal restrictions, and privacy vulnerabilities. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating the problems of unwanted bandwidth loss, data privacy, and legal compliance. FL can co-train models across distributed clients, such as mobile phones, automobiles and hospitals, through a centralized server, while keeping data local. FL can therefore be viewed as a stimulating factor in the EC paradigm, as it enables collaborative learning and model optimization. Although existing surveys have taken into account applications of FL in EC environments, no systematic survey has discussed FL implementation and challenges in the EC paradigm. This paper provides a systematic survey of the literature on the implementation of FL in EC environments, with a taxonomy to identify advanced solutions and other open problems. In this survey, we review the fundamentals of EC and FL, then review existing related work on FL in EC. Furthermore, we describe the protocols, architecture, framework, and hardware requirements for FL implementation in the EC environment. Moreover, we discuss the applications, challenges, and related existing solutions in edge FL. Finally, we detail two relevant case studies of applying FL in EC, and we identify open issues and potential directions for future research. We believe this survey will help researchers better understand the connection between FL and EC enabling technologies and concepts.
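A minimal sketch of federated averaging (FedAvg), the canonical FL scheme this kind of survey builds on: clients train locally, only model weights travel to the server, and raw data never leaves the device. The linear model and synthetic client data are illustrative assumptions.

```python
# FedAvg: local client training plus server-side weight averaging (illustrative).
import copy
import torch
import torch.nn as nn

def local_update(model: nn.Module, data, epochs: int = 1) -> dict:
    """One client's local training; returns its updated state_dict."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=0.05)
    for _ in range(epochs):
        for x, y in data:
            loss = nn.functional.mse_loss(local(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return local.state_dict()

def fed_avg(states: list) -> dict:
    """Server step: element-wise average of client weights (equal weighting assumed)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(3, 1)
# Synthetic per-client datasets: 5 clients, 4 minibatches each (made-up data).
clients = [[(torch.randn(8, 3), torch.randn(8, 1)) for _ in range(4)] for _ in range(5)]

for round_ in range(10):
    states = [local_update(global_model, data) for data in clients]
    global_model.load_state_dict(fed_avg(states))   # only weights left the clients
```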