183 result(s) for "Queuing networks (Data transmission)"
Performance Modeling and Design of Computer Systems
Tackling the questions that systems designers care about, this book brings queueing theory decisively back to computer science. The book is written with computer scientists and engineers in mind and is full of examples from computer systems, as well as manufacturing and operations research. Fun and readable, the book is highly approachable, even for undergraduates, while still being thoroughly rigorous and also covering a much wider span of topics than many queueing books. Readers benefit from a lively mix of motivation and intuition, with illustrations, examples and more than 300 exercises – all while acquiring the skills needed to model, analyze and design large-scale systems with good performance and low cost. The exercises are an important feature, teaching research-level counterintuitive lessons in the design of computer systems. The goal is to train readers not only to customize existing analyses but also to invent their own.
Performance modeling and design of computer systems : queueing theory in action
\"Computer systems design is full of conundrums. Tackling the questions that systems designers care about, this book brings queueing theory decisively back to computer science. The book is written with computer scientists and engineers in mind and is full of examples from computer systems, as well as manufacturing and operations research. Fun and readable, the book is highly approachable, even for undergraduates, while still being thoroughly rigorous and also covering a much wider span of topics than many queueing books. Readers benefit from a lively mix of motivation and intuition, with illustrations, examples and more than 300 exercises - all while acquiring the skills needed to model, analyze and design large-scale systems with good performance and low cost. The exercises are an important feature, teaching research-level counterintuitive lessons in the design of computer systems. The goal is to train readers not only to customize existing analyses but also to invent their own\"-- Provided by publisher.
Mathematical Modeling of Network Nodes and Topologies of Modern Data Networks
Mathematical and simulation models are developed for network nodes and for the simplest topologies of modern data transmission networks. A functional model of a modern network node is described and its mathematical model derived. A simulation model of a topology consisting of such nodes is then built to evaluate probabilistic and temporal indicators of the service quality of a communication network; using it, the packet loss probability is plotted against the packet arrival rate for the topology under study.
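The abstract does not state which node model underlies the loss curve; as a rough illustration of a packet-loss-versus-arrival-rate sweep of the kind it describes, the sketch below assumes an M/M/1/K node (Poisson arrivals, exponential service, finite buffer of K packets), whose blocking probability has a closed form. All rates and the buffer size are assumed example values.

    # Hypothetical illustration: packet-loss probability of an M/M/1/K network
    # node swept against the packet arrival rate (not the paper's exact model).

    def mm1k_loss_probability(arrival_rate, service_rate, buffer_size):
        """Blocking (packet-loss) probability of an M/M/1/K queue."""
        rho = arrival_rate / service_rate
        if abs(rho - 1.0) < 1e-12:                      # degenerate case rho == 1
            return 1.0 / (buffer_size + 1)
        return (1 - rho) * rho**buffer_size / (1 - rho**(buffer_size + 1))

    service_rate = 1000.0   # packets per second the node can forward (assumed)
    buffer_size = 20        # packets the node can hold, including the one in service (assumed)

    for arrival_rate in (200, 400, 600, 800, 950, 1100):
        p_loss = mm1k_loss_probability(arrival_rate, service_rate, buffer_size)
        print(f"lambda = {arrival_rate:5.0f} pkt/s  ->  P(loss) = {p_loss:.2e}")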
Improving latency in Internet-of-Things and cloud computing for real-time data transmission: a systematic literature review (SLR)
Traditional cloud computing is widely used to store, analyse and process the large volumes of data generated by IoT. However, traditional cloud data centres are limited in their ability to handle the high-latency issues of time-critical IoT and cloud applications such as computer gaming, e-healthcare, telemedicine and robotic surgery. High latency in IoT and cloud comprises computational latency, communication (service) latency and network latency, and a vital requirement of IoT is to keep all three to a minimum for real-time applications. Network latency delays the transmission of a message from one location to another, so services that require real-time data find it almost impossible to access that data via the cloud. Traditional cloud computing approaches cannot fulfil the quality-of-service (QoS) requirements of IoT devices, and research on latency reduction techniques is still in its infancy. This paper discusses new approaches for minimizing the latency of transmitting time-sensitive data in real time for cloud and IoT devices, helping researchers and industry identify suitable techniques and technologies; it also discusses research trends and the technical differences between the various technologies and techniques. With the increasing interest in the literature on latency minimization and its requirements for time-sensitive applications, it is important to systematically review and synthesize the approaches, tools, challenges and techniques for minimizing latencies in IoT and cloud. This paper systematically reviews the state of the art of latency minimization in order to classify approaches and techniques, using the PRISMA technique for the systematic review, and identifies challenges and gaps for future research. We identify 23 approaches and 32 technologies associated with latencies in the cloud and IoT, examining a total of 112 papers on latency reduction. Existing research gaps and work on latency reduction in IoT are discussed in detail; several challenges and gaps remain that require future work to improve latency minimization techniques and technologies. Finally, we present some open issues that will determine the direction of future research.
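The review's decomposition of end-to-end latency into network, service and computation components can be illustrated with a toy latency budget; every number below is assumed for illustration and is not taken from the review.

    # Toy latency budget (all numbers assumed): end-to-end latency seen by a
    # real-time IoT application is roughly the sum of the network, service and
    # computation components discussed in the abstract.

    budget_ms = 50.0                     # assumed deadline for a "real-time" response

    components_cloud = {"network": 40.0, "service": 15.0, "computation": 10.0}   # distant data centre
    components_edge  = {"network":  5.0, "service":  3.0, "computation": 10.0}   # edge/fog placement

    for name, parts in (("cloud", components_cloud), ("edge", components_edge)):
        total = sum(parts.values())
        verdict = "meets" if total <= budget_ms else "misses"
        print(f"{name:5s}: total = {total:5.1f} ms  ->  {verdict} the {budget_ms:.0f} ms budget")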
Client-Aware Negotiation for Secure and Efficient Data Transmission
In Wireless Sensor Networks (WSNs), server clusters, and other systems requiring secure transmission, the overhead of data encryption and transmission is often not negligible, and a conflict exists between security and efficiency in processing data. This paper therefore proposes a strategy to resolve this conflict, called Client-Aware Negotiation for Secure and Efficient Data Transmission (CAN-SEAT). The strategy allows clients with different security transmission requirements to use an appropriate secure data transmission scheme without modifying the client. Two methods are designed for different clients. The first is based on two-way authentication and renegotiation: after the handshake, an appropriate secure transmission scheme is selected according to the client's requirements. The second is based on redirection and applies when the client does not support two-way authentication or renegotiation. Considering the characteristics of different architectures, the paper classifies and discusses symmetric-key algorithms, asymmetric-key algorithms, and hardware encryption instructions. The CAN-SEAT strategy is tested in four application scenarios. Compared with a general transmission strategy, when only software encryption is used, the data processing and transmission cost is reduced by 89.41% in the best case and 15.40% in the worst case; with hardware encryption support, the cost is reduced by 85.30% and 24.63%, respectively. Good results were obtained on Xilinx, FT-2000+, and Intel processor platforms. To the best of our knowledge, this is the first Client-Aware Negotiation (CAN) method to be successfully deployed on a general-purpose system, and CAN-SEAT can easily be combined with other energy-efficient strategies.
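The abstract describes two negotiation paths: a two-way-authentication/renegotiation handshake that picks a security scheme from the client's requirements, and a redirection fallback when the client supports neither. The sketch below only illustrates that decision logic; the field names, scheme names and selection policy are assumptions, not the paper's actual protocol.

    # Hypothetical sketch of the negotiation decision described in the abstract.

    from dataclasses import dataclass

    @dataclass
    class ClientProfile:
        supports_mutual_auth: bool      # can the client do two-way authentication?
        supports_renegotiation: bool    # can the client renegotiate after the handshake?
        security_level: str             # "high", "medium" or "low" requirement (assumed labels)

    # Assumed mapping from the client's requirement to a transmission scheme.
    SCHEMES = {
        "high":   "asymmetric-key handshake + symmetric bulk encryption",
        "medium": "symmetric-key encryption (hardware instructions if available)",
        "low":    "lightweight symmetric cipher",
    }

    def negotiate(client: ClientProfile) -> str:
        if client.supports_mutual_auth and client.supports_renegotiation:
            # Path 1: handshake, then pick the scheme matching the client's needs.
            return f"negotiated: {SCHEMES[client.security_level]}"
        # Path 2: client cannot negotiate, so redirect it to a preconfigured endpoint.
        return f"redirected to endpoint serving: {SCHEMES[client.security_level]}"

    print(negotiate(ClientProfile(True, True, "high")))
    print(negotiate(ClientProfile(False, False, "medium")))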
Secure UAV adhoc network with blockchain technology
Recent advances in aerial robotics and wireless transceivers have generated enormous interest in networks of multiple compact unmanned aerial vehicles (UAVs). UAV adhoc networks, i.e., aerial networks with dynamic topology and no centralized control, are suited to a unique set of applications, yet their operation is vulnerable to cyberattacks. In many applications, such as IoT networks or emergency failover networks, UAVs augment and support the sensor or mobile nodes of the ground network in data acquisition and also improve overall network performance. In this setting, ensuring the security of the adhoc UAV network and the integrity of its data is paramount to accomplishing the network's mission objectives. In this paper, we propose a novel approach to securing UAV adhoc networks, referred to as the blockchain-assisted security framework (BCSF). We demonstrate that the proposed system provides security without sacrificing network performance, using blockchain technology adapted to the priority of the message to be communicated over the adhoc UAV network. A theoretical analysis of average latency is performed using queueing theory models, followed by an evaluation of the proposed BCSF approach through simulations that establish its superior performance in terms of transaction delay, data secrecy, data recovery, and energy efficiency.
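The abstract analyses average latency with queueing theory models but does not say which; a minimal sketch, assuming each UAV relay is modelled as an M/M/1 queue so the mean sojourn time is 1/(mu - lambda), shows the kind of calculation involved. The rates are assumed example values.

    # Minimal sketch (M/M/1 assumed, not necessarily the paper's model): average
    # latency of blockchain transactions offered to a UAV relay.

    def mm1_average_latency(arrival_rate, service_rate):
        """Mean time a transaction spends waiting plus being served: 1/(mu - lambda)."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: arrival rate must be below service rate")
        return 1.0 / (service_rate - arrival_rate)

    service_rate = 200.0   # transactions per second the relay can validate and forward (assumed)
    for arrival_rate in (50.0, 100.0, 150.0, 190.0):
        latency_ms = 1000 * mm1_average_latency(arrival_rate, service_rate)
        print(f"lambda = {arrival_rate:5.1f} tx/s  ->  average latency = {latency_ms:6.1f} ms")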
A Particle Swarm Optimization-Based Queue Scheduling and Optimization Mechanism for Large-Scale Low-Earth-Orbit Satellite Communication Networks
The spatial topology of large-scale low-Earth-orbit satellite communication networks is dynamically time-variant, and the load on the output ports of network nodes changes continuously. The lengths and numbers of output-port queues at each network node affect the packet loss rate and end-to-end latency of traffic flows. To provide high-quality satellite communication services, the lengths and numbers of queues used for transmitting time-sensitive traffic flows at each node's output port must be scheduled and optimized to achieve the best deterministic transmission performance. This paper introduces a queue scheduling optimization mechanism based on the Particle Swarm Optimization algorithm (PSO-QSO) for large-scale low-Earth-orbit satellite communication networks. The method analyzes the relevant parameters of the traffic flows transmitted through the network and calculates the maximum time-sensitive traffic load within network nodes. It then applies the Particle Swarm Optimization algorithm to find the optimal length and number of queues at each node's output port used for forwarding time-sensitive traffic flows. The proposed mechanism ensures deterministic end-to-end transmission of time-sensitive traffic in large-scale low-Earth-orbit satellite communication networks and can provide real-time satellite communication services.
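The abstract's PSO-QSO cost model is not reproduced here; the sketch below is a generic particle-swarm loop over a toy two-variable objective (queue length and queue count for one output port), with every constant and the stand-in cost function assumed, purely to show the mechanism the abstract names.

    # Generic particle-swarm sketch with an assumed toy objective: longer or
    # more queues reduce a crude loss proxy but add a delay penalty.

    import random

    def cost(queue_length, queue_count, load=0.8):
        loss_term  = load ** (queue_length * queue_count)   # assumed loss proxy
        delay_term = 0.01 * queue_length + 0.02 * queue_count
        return loss_term + delay_term

    LOW, HIGH = 1.0, 64.0            # search range for both decision variables (assumed)
    SWARM, ITERS = 20, 100
    W, C1, C2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients

    pos = [[random.uniform(LOW, HIGH) for _ in range(2)] for _ in range(SWARM)]
    vel = [[0.0, 0.0] for _ in range(SWARM)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(*p) for p in pos]
    g = min(range(SWARM), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    for _ in range(ITERS):
        for i in range(SWARM):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (W * vel[i][d]
                             + C1 * r1 * (pbest[i][d] - pos[i][d])
                             + C2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], LOW), HIGH)   # clamp to range
            c = cost(*pos[i])
            if c < pbest_cost[i]:                    # update personal best
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:                   # update global best
                    gbest, gbest_cost = pos[i][:], c

    print(f"best queue length ~ {gbest[0]:.0f}, best queue count ~ {gbest[1]:.0f}, cost = {gbest_cost:.4f}")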
Pre-emptive Priority Queueing Based Multipath Routing (PPQM) to Enhance the QoS for Video Transmission in H-MANETs
Addressing latency concerns and ensuring high-quality video services in Heterogeneous Mobile Adhoc Networks (H-MANETs) are paramount challenges. This paper presents a pioneering solution: the Pre-emptive Priority Queueing based Multipath Routing algorithm (PPQM). Our approach prioritizes video traffic within OpenFlow switches, directing it across multiple paths in H-MANETs. Integrating the PPQ module within Cluster Heads operating in the software-defined networking (SDN) architecture is central to our design. We rigorously evaluate delay for each path by employing an M/M/1 queueing policy based on a Poisson arrival process and an exponential service time distribution. Utilizing Burke's theorem, our calculation spans the entire route from the cluster head to a sink node. By meticulously assessing the delay characteristics of individual paths, our model facilitates the selection of the most optimal path to minimize overall delay and enhance network performance. Our proposed model amalgamates clustering, FIFO with M/M/1 queueing, and SDN techniques. In a comprehensive evaluation against existing technologies, the implementation of PPQM demonstrates superior performance in crucial Quality of Service (QoS) metrics, including end-to-end delay, queue size, waiting time, throughput, and response time. Furthermore, our research achieves a significant 4.2% improvement in QoS metrics compared to contemporary approaches, highlighting the effectiveness of the PPQM algorithm in enhancing network performance. This research contributes a robust solution for advancing QoS in H-MANETs, demonstrating the efficacy of the PPQM algorithm compared to contemporary approaches.
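The abstract's delay evaluation rests on Burke's theorem: the departure process of an M/M/1 queue is again Poisson, so each hop on a path from the cluster head to the sink can be treated as an independent M/M/1 queue and the path delay is the sum of per-hop sojourn times 1/(mu - lambda). The sketch below applies that idea to pick the minimum-delay path; all service rates are assumed example values, not the paper's data.

    # Sketch of the path-selection idea: per-path delay as a sum of M/M/1
    # per-hop sojourn times, then choose the path with the smallest delay.

    def path_delay(arrival_rate, hop_service_rates):
        if any(arrival_rate >= mu for mu in hop_service_rates):
            return float("inf")                      # unstable hop: path unusable
        return sum(1.0 / (mu - arrival_rate) for mu in hop_service_rates)

    video_rate = 30.0                                # packets/s of the video flow (assumed)
    candidate_paths = {                              # cluster head -> sink, per-hop service rates
        "path A": [90.0, 80.0, 70.0],
        "path B": [60.0, 55.0],
        "path C": [200.0, 40.0, 150.0],
    }

    delays = {name: path_delay(video_rate, hops) for name, hops in candidate_paths.items()}
    best = min(delays, key=delays.get)
    for name, d in delays.items():
        print(f"{name}: end-to-end delay ~ {1000 * d:.1f} ms")
    print(f"selected path: {best}")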
Network carrier allocation optimization based on immune algorithm under massive concurrent access
In small multi-functional base stations, such as those of the 230 MHz LTE power wireless private network, the concurrent transient access of a large number of terminals frequently causes packet blocking and loss, severely degrading overall system performance. To this end, the total transmission and queuing delay under massive concurrent access in the power wireless private network is modelled, and a carrier allocation optimization method based on an optimized heuristic, the immune algorithm, is proposed. First, for the multi-objective problem with stringent real-time and packet-loss-rate requirements, an operations research model of the data delay mechanism is constructed with the total data transmission delay as the objective function. An optimal resource allocation method based on the immune algorithm is then proposed to carry out the optimization, and the existence of a minimum and the convergence of the data delay model are analysed and proved. Experimental results show that, under massive concurrent access and carrier limitations, the proposed method enables the base station to maintain more stable performance.
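The abstract minimizes total transmission-plus-queuing delay when assigning terminals to carriers. A compact clonal-selection-style immune search over a toy version of that assignment problem is sketched below; the delay model, population sizes and mutation scheme are all assumptions made for illustration, not the paper's formulation.

    # Hedged sketch of an immune (clonal-selection) search for a carrier
    # allocation.  Assumed toy model: each terminal is assigned to one carrier,
    # and per-terminal delay grows with the load on its carrier.

    import random

    N_TERMINALS, N_CARRIERS = 40, 8
    POP, GENERATIONS, CLONES = 20, 200, 5

    def total_delay(assignment):
        load = [0] * N_CARRIERS
        for c in assignment:
            load[c] += 1
        # Assumed cost: base transmission delay plus a queuing term growing with load.
        return sum(1.0 + 0.05 * load[c] ** 2 for c in assignment)

    def mutate(assignment, n_flips):
        child = assignment[:]
        for _ in range(n_flips):
            child[random.randrange(N_TERMINALS)] = random.randrange(N_CARRIERS)
        return child

    population = [[random.randrange(N_CARRIERS) for _ in range(N_TERMINALS)]
                  for _ in range(POP)]

    for _ in range(GENERATIONS):
        population.sort(key=total_delay)            # best (lowest delay) first
        clones = []
        for rank, antibody in enumerate(population[:POP // 2]):
            # Better antibodies receive more clones and gentler mutation.
            for _ in range(CLONES - rank // 3):
                clones.append(mutate(antibody, n_flips=1 + rank // 3))
        population = sorted(population + clones, key=total_delay)[:POP]

    print(f"best total delay found: {total_delay(population[0]):.2f} (arbitrary units)")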