2,726 result(s) for "Queueing theory"
Optimal Task Allocation Algorithm Based on Queueing Theory for Future Internet Application in Mobile Edge Computing Platform
In this paper, we propose a task allocation method for future Internet applications in 5G networks that reduces the total latency in a mobile edge computing (MEC) platform with three types of servers: a dedicated MEC server, a shared MEC server, and a cloud server. We first calculate the delay between sending a task and receiving a response for each server type by considering the processing time and transmission delay; the transmission delay for the shared MEC server is derived using queueing theory. We then formulate an optimization problem that allocates tasks so as to minimize the total latency over all tasks, assigning tasks appropriately to the MEC servers and the cloud server. In addition, we propose a heuristic algorithm that obtains an approximate optimal solution in a shorter time. The heuristic consists of a main algorithm and three additional algorithms; tasks are divided into two groups, and allocation is executed for each group. We compare the performance of the proposed heuristic with three other methods through numerical examples. The results show that the heuristic performs task allocation quickly and effectively reduces the total latency. We conclude that the proposed heuristic algorithm is effective for task allocation in a MEC platform with multiple types of MEC servers.
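As a rough illustration of the kind of calculation involved (not the paper's actual algorithm), the sketch below uses an M/M/1 sojourn time as the shared MEC server's queueing delay and assigns each task greedily to the server with the smallest estimated latency; all server names, rates, and delays are made-up parameters.

```python
# Illustrative sketch only: greedy task allocation with an M/M/1 delay term
# for the shared MEC server. Names, rates, and delays are assumptions.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    proc_rate: float      # work units processed per ms (hypothetical)
    base_tx_delay: float  # fixed transmission delay in ms (hypothetical)

def shared_link_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean sojourn time of an M/M/1 queue, used here as a stand-in for the
    queueing-theoretic transmission delay of the shared MEC server."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable queue
    return 1.0 / (service_rate - arrival_rate)

def greedy_allocate(tasks, servers, shared_capacity=5.0):
    """Assign each task to the server with the smallest estimated latency,
    updating the shared server's offered load as tasks are placed."""
    shared_load = 0.0
    plan = []
    for size in tasks:  # task size in arbitrary work units
        best_server, best_delay = None, float("inf")
        for s in servers:
            delay = size / s.proc_rate + s.base_tx_delay
            if s.name == "shared_mec":
                delay += shared_link_delay(shared_load + 1.0, shared_capacity)
            if delay < best_delay:
                best_server, best_delay = s, delay
        if best_server.name == "shared_mec":
            shared_load += 1.0
        plan.append((size, best_server.name, best_delay))
    return plan

servers = [Server("dedicated_mec", 2.0, 1.0),
           Server("shared_mec", 4.0, 1.0),
           Server("cloud", 8.0, 10.0)]
for size, name, delay in greedy_allocate([3.0, 1.0, 6.0, 2.0], servers):
    print(f"task of size {size:.1f} -> {name:13s} est. latency {delay:.2f} ms")
```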
Equilibrium threshold joining strategies in partially observable batch service queueing systems
We study the strategic customer behavior in queueing systems with batch services under incomplete information. In particular, we assume that arriving customers have the opportunity to observe only the number of waiting batches upon arrival and, afterwards, they make their join/balk decisions. We prove that equilibrium strategies always exist within the legitimate class of threshold strategies, but they may not be unique. We also provide an algorithmic scheme for their computation. Moreover, we compare the strategic behavior under this information level with the corresponding behavior in the complete information case.
Analysis of Dynamic Transaction Fee Blockchain Using Queueing Theory
In recent years, blockchains have been attracting attention because they are decentralized networks with transparency and trustworthiness. Generally, transactions on blockchain networks with higher transaction fees are processed preferentially compared to others. The processing fee varies significantly depending on other transactions; it is difficult to predict the fee, and it may be significantly high. These are major barriers to blockchain utilization. Although several consensus algorithms have been proposed to solve these problems, their performance has not been fully evaluated. In this study, we model a blockchain system with a base fee, such as in Ethereum, via a priority queueing model. To assess the model’s performance, we derive the stability condition, stationary probability, average number of customers, and average waiting time for each type of customer. In deriving the stability conditions, we propose a method that uses the theoretical values of the partial models. These theoretical values match well with those obtained from Monte Carlo simulations, confirming the validity of the analysis.
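For readers unfamiliar with priority queues, the following sketch evaluates the classical non-preemptive M/M/1 priority-queue waiting times (Cobham's formula), treating transactions that pay above the base fee as the higher-priority class; this is a generic textbook model, not the specific blockchain model analysed in the paper.

```python
# Mean waiting times in a non-preemptive priority M/M/1 queue (Cobham's
# formula). Class 0 = highest priority (e.g., above-base-fee transactions).
# The arrival and service rates below are purely illustrative.

def nonpreemptive_priority_waits(arrival_rates, service_rate):
    """Return the mean waiting time of each class, all classes sharing one
    exponential server with rate `service_rate`."""
    rhos = [lam / service_rate for lam in arrival_rates]
    if sum(rhos) >= 1.0:
        raise ValueError("unstable: total utilisation must be below 1")
    # mean residual service time of the job in service:
    # (1/2) * sum_i lambda_i * E[S_i^2], with E[S^2] = 2/mu^2 for exponential S
    residual = sum(lam * 2.0 / service_rate ** 2 for lam in arrival_rates) / 2.0
    waits = []
    for k in range(len(arrival_rates)):
        higher = sum(rhos[:k])        # utilisation of strictly higher classes
        up_to_k = sum(rhos[:k + 1])   # ... including class k
        waits.append(residual / ((1.0 - higher) * (1.0 - up_to_k)))
    return waits

# e.g. high-fee txs arrive at 2/s, low-fee at 3/s, the chain clears 7 txs/s
for cls, w in enumerate(nonpreemptive_priority_waits([2.0, 3.0], 7.0)):
    print(f"class {cls}: mean wait {w:.3f} s")
```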
Analysis of a queue with general service demands and correlated service capacities
We present the study of a non-classical discrete-time queueing model in which the customers each request a variable amount of service, called their “service demand”, from a server which is able to execute a variable amount of work, called its “service capacity”, during each time slot. We assume that the numbers of arrivals in consecutive time slots and the service demands of consecutive customers form two independent and identically distributed sequences. However, we allow the service capacities in consecutive time slots to be correlated according to a discrete-batch Markovian process. We study this model analytically and obtain expressions for the probability generating function of the steady-state system content and customer delay, as well as their moments and an approximation for their tail probabilities. The results are illustrated with several numerical examples.
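The paper's analysis works with probability generating functions; as a rough companion, the simulation sketch below reproduces the model's ingredients (random per-slot arrivals, per-customer service demands, and service capacities correlated across slots through a two-state Markov chain) with purely illustrative distributions and parameters.

```python
# Simulation sketch of a discrete-time queue with variable service demands
# and Markov-correlated service capacities. Distributions are assumptions.
import random

random.seed(1)

def simulate(slots=100_000, p_arrival=0.4, demand_max=3,
             cap_states=(1, 4), stay_prob=0.9):
    backlog = 0            # outstanding work, in service units
    state = 0              # current capacity state (index into cap_states)
    total_backlog = 0
    for _ in range(slots):
        # Bernoulli arrival per slot, each customer brings a random demand
        if random.random() < p_arrival:
            backlog += random.randint(1, demand_max)
        # correlated capacity: stay in the current state with prob. stay_prob
        if random.random() > stay_prob:
            state = 1 - state
        backlog = max(0, backlog - cap_states[state])
        total_backlog += backlog
    return total_backlog / slots

print(f"mean system content ~ {simulate():.2f} work units")
```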
Cloud data storage: a queueing model with thresholds
In the past decade, cloud platforms have become an industry standard for data storage and operations, offering high reliability and ease of setup at an effective cost. With data storage requirements growing rapidly, data is increasingly stored in clouds; however, few studies analyze the processes that perform the storage operations. Queueing models offer a natural way of modeling these storage processes: the data packets waiting for storage form a queue served by a storage server. Since data packets are transmitted to the cloud in batches for efficiency, this storage server is modelled as a batch server. The storage server goes into sleep mode between data transmission periods, which are in turn modelled as vacations, and service is resumed after a vacation if there are enough packets in backlog or enough time has elapsed since the last storage; these are modelled as restarting thresholds. Analyzing this model helps us evaluate the quality of service (QoS) of storage processes in terms of measures such as the backlog size and the probability of a new connection to the cloud server. These measures are then used to define a user cost function and QoS constraints, and to compute optimal storage parameters.
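A minimal simulation sketch of the restart-threshold idea follows, assuming Bernoulli packet arrivals and an instantaneous batch transfer; the backlog threshold N, timer T, and batch size are hypothetical parameters, not values from the paper.

```python
# Illustrative sketch: packets queue while the storage server "sleeps", and a
# new transfer to the cloud starts once the backlog reaches N packets or T
# slots have passed since the last transfer. All parameters are assumptions.
import random

random.seed(7)

def simulate(slots=200_000, p_arrival=0.3, N=10, T=50, batch=25):
    backlog, since_last = 0, 0
    connections = 0
    area = 0
    for _ in range(slots):
        backlog += 1 if random.random() < p_arrival else 0
        since_last += 1
        # restart threshold: enough packets OR enough time since last storage
        if backlog >= N or (since_last >= T and backlog > 0):
            backlog = max(0, backlog - batch)   # one batch stored (instantly)
            connections += 1
            since_last = 0
        area += backlog
    return area / slots, connections / slots

mean_backlog, conn_rate = simulate()
print(f"mean backlog ~ {mean_backlog:.2f} packets, "
      f"new-connection probability per slot ~ {conn_rate:.3f}")
```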
Age of Information Violation Probability in a Multi‐Source Information Update System With a Shared Channel
We investigate the age of information (AoI) violation probability in an information update system with multiple sources sharing a channel with an infinite‐capacity buffer under non‐preemptive policies. A simple and accurate approximation formula for the AoI violation probability is derived, and numerical examples confirm its high accuracy.  
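The approximation formula itself is not reproduced here; instead, as a rough point of reference, the sketch below estimates an AoI violation probability by Monte Carlo for the simpler case of a single source feeding an M/M/1 FCFS queue.

```python
# Monte Carlo sketch: long-run fraction of time the age of information (AoI)
# of one source exceeds a threshold, for a single-source M/M/1 FCFS queue.
# The multi-source shared-channel model of the paper is not reproduced.
import random

random.seed(3)

def aoi_violation_prob(lam=1.0, mu=2.0, threshold=3.0, n_updates=200_000):
    t = 0.0                    # generation time of the current update
    depart = 0.0               # departure epoch of the previous update
    last_gen = 0.0             # generation time of the freshest delivered update
    last_recv = 0.0            # time that update was delivered
    violation_time = 0.0
    for _ in range(n_updates):
        t += random.expovariate(lam)             # next update is generated
        start = max(t, depart)                   # FCFS service start
        depart = start + random.expovariate(mu)  # delivery time
        # Between last_recv and depart, AoI(s) = s - last_gen grows linearly
        # and exceeds `threshold` after t_cross.
        t_cross = last_gen + threshold
        if depart > t_cross:
            violation_time += depart - max(t_cross, last_recv)
        last_gen, last_recv = t, depart
    return violation_time / depart

print(f"P(AoI > 3) ~ {aoi_violation_prob():.3f}")
```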
Face-to-Face Communication in Organizations
Communication is integral to organizations and yet field evidence on the relation between communication and worker productivity remains scarce. We argue that a core role of communication is to transmit information that helps co-workers do their job better. We build a simple model in which workers choose the amount of communication by trading off this benefit against the time cost incurred by the sender, and use it to derive a set of empirical predictions. We then exploit a natural experiment in an organization where problems arrive and must be sequentially dealt with by two workers. For exogenous reasons, the first worker can sometimes communicate face-to-face with their colleague. Consistent with the predictions of our model, we find that: (1) the second worker works faster (at the cost of the first worker having less time to deal with incoming problems) when face-to-face communication is possible, (2) this effect is stronger when the second worker is busier and for homogeneous and closely located teams, and (3) the (career) incentives of workers determine how much they communicate with their colleagues. We also find that workers partially internalise social outcomes in their communication decisions. Our findings illustrate how workers in teams adjust the amount of mutual communication to its costs and benefits.
A Review of Auto-scaling Techniques for Elastic Applications in Cloud Environments
Cloud computing environments allow customers to dynamically scale their applications. The key problem is how to lease the right amount of resources, on a pay-as-you-go basis. Application re-dimensioning can be implemented effortlessly, adapting the resources assigned to the application to the incoming user demand. However, the identification of the right amount of resources to lease in order to meet the required Service Level Agreement, while keeping the overall cost low, is not an easy task. Many techniques have been proposed for automating application scaling. We propose a classification of these techniques into five main categories: static threshold-based rules, control theory, reinforcement learning, queuing theory and time series analysis. Then we use this classification to carry out a literature review of proposals for auto-scaling in the cloud.
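Of the five categories, static threshold-based rules are the simplest to illustrate; the sketch below shows one such rule with hypothetical CPU-utilisation thresholds and instance bounds, not a specific proposal from the surveyed literature.

```python
# A minimal static threshold-based auto-scaling rule. The thresholds, bounds,
# and the choice of CPU utilisation as the metric are illustrative assumptions.

def scaling_decision(cpu_util: float, instances: int,
                     upper: float = 0.80, lower: float = 0.30,
                     min_instances: int = 1, max_instances: int = 20) -> int:
    """Return the new instance count for one evaluation period."""
    if cpu_util > upper and instances < max_instances:
        return instances + 1          # scale out
    if cpu_util < lower and instances > min_instances:
        return instances - 1          # scale in
    return instances                  # no change

# example trace of average CPU utilisation across the fleet
instances = 2
for util in [0.55, 0.85, 0.90, 0.70, 0.25, 0.20]:
    instances = scaling_decision(util, instances)
    print(f"util={util:.2f} -> {instances} instance(s)")
```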
Can Yardstick Competition Reduce Waiting Times?
Yardstick competition is a regulatory scheme for local monopolists (e.g., hospitals), where the monopolist’s reimbursement is linked to performance relative to other equivalent monopolists. This regulatory scheme is known to provide cost-reduction incentives and serves as the theoretical underpinning behind the hospital prospective reimbursement system used throughout the developed world. This paper uses a game-theoretic queueing model to investigate how yardstick competition performs in service systems (e.g., hospital emergency departments), where in addition to incentivizing cost reduction the regulator wants to incentivize waiting time reduction. We first show that the form of cost-based yardstick competition used in practice results in inefficiently long waiting times. We then demonstrate how yardstick competition can be appropriately modified to achieve the dual goal of cost and waiting-time reduction. In particular, we show that full efficiency (first-best) can be restored if the regulator makes the providers’ reimbursement contingent on their service rates and is also able to charge a provider-specific “toll” to consumers. More important, if such a toll is not feasible, as may be the case in healthcare, we show that there exists an alternative and particularly simple yardstick-competition scheme, which depends on the average waiting time only, that can significantly improve system efficiency (second-best). This scheme is easier to implement because it does not require the regulator to have detailed knowledge of the queueing discipline. We conclude with a numerical investigation that provides insights on the practical implementation of yardstick competition for hospital emergency departments, and we also present a series of modelling extensions. The e-companion is available at https://doi.org/10.1287/mnsc.2018.3089. This paper was accepted by Serguei Netessine, operations management.
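As a toy numerical illustration of the waiting-time-penalty idea only (it omits the yardstick, i.e. relative-benchmarking, aspect and is not the paper's mechanism), the sketch below models a single provider as an M/M/1 queue choosing its service rate: with no penalty it under-invests in capacity, whereas a penalty equal to the aggregate waiting-cost rate aligns its choice with the socially efficient one.

```python
# Toy illustration, not the paper's model: an M/M/1 provider choosing its
# service rate mu. Parameters lam, c, h are arbitrary illustrative values.
lam, c, h = 5.0, 1.0, 4.0   # arrival rate, capacity cost, waiting cost rate

def mean_wait(mu: float) -> float:
    return 1.0 / (mu - lam)              # M/M/1 mean time in system

def provider_profit(mu: float, penalty: float) -> float:
    # fixed reimbursement omitted; the provider pays for capacity and for a
    # regulator-imposed penalty proportional to the average waiting time
    return -c * mu - penalty * mean_wait(mu)

def social_cost(mu: float) -> float:
    # capacity cost plus aggregate waiting cost (by Little's law, L = lam * W)
    return c * mu + h * lam * mean_wait(mu)

grid = [lam + 0.01 * k for k in range(1, 2001)]
no_penalty = max(grid, key=lambda mu: provider_profit(mu, 0.0))
with_penalty = max(grid, key=lambda mu: provider_profit(mu, h * lam))
efficient = min(grid, key=social_cost)
print(f"chosen service rate, no waiting penalty : {no_penalty:.2f}")
print(f"chosen service rate, penalty = h * lam  : {with_penalty:.2f}")
print(f"socially efficient service rate         : {efficient:.2f}")
```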