1,115 results for "strategic servers"
The Diseconomies of Queue Pooling: An Empirical Investigation of Emergency Department Length of Stay
We conduct an empirical investigation of the impact of queue management on patients’ average wait time and length of stay (LOS). Using an emergency department’s (ED) patient-level data from 2007 to 2010, we find that patients’ average wait time and LOS are longer when physicians are assigned patients under a pooled queuing system with a fairness constraint compared to a dedicated queuing system with the same fairness constraint. Using a difference-in-differences approach, we find the dedicated queuing system is associated with a 17% decrease in average LOS and a 9% decrease in average wait time relative to the control group—a 39-minute reduction in LOS and a four-minute reduction in wait time for an average patient of medium severity in this ED. Interviews and observations of physicians suggest that the improved performance stems from the physicians’ increased ownership over patients and resources that is afforded by a dedicated queuing system, which enables physicians to more actively manage the flow of patients into and out of ED beds. Our findings suggest that the benefits from improved flow management in a dedicated queuing system can be large enough to overcome the longer wait time predicted to arise from nonpooled queues. We conduct additional analyses to rule out alternate explanations for the reduced average wait time and LOS in the dedicated system, such as stinting and decreased quality of care. Our paper has implications for healthcare organizations and others seeking to reduce patient wait time and LOS without increasing costs. This paper was accepted by Serguei Netessine, operations management.
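For readers unfamiliar with the method, the difference-in-differences comparison behind the reported 17% and 9% reductions can be sketched with the standard two-group, two-period specification below; the paper's actual controls and fixed effects are not given in the abstract, so treat this as illustrative only.

```latex
% Illustrative DiD regression (controls and fixed effects omitted):
\log(\mathrm{LOS}_{it}) = \beta_0
   + \beta_1\,\mathrm{Dedicated}_i                            % treated-group indicator
   + \beta_2\,\mathrm{Post}_t                                 % post-switch period indicator
   + \beta_3\,(\mathrm{Dedicated}_i \times \mathrm{Post}_t)   % DiD effect of interest
   + \varepsilon_{it}
```

A coefficient of roughly \beta_3 \approx -0.19 on the interaction term would correspond to the reported 17% relative decrease in average LOS.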
Routing and Staffing When Servers Are Strategic
Traditionally, research focusing on the design of routing and staffing policies for service systems has modeled servers as having fixed (possibly heterogeneous) service rates. However, service systems are generally staffed by people. Furthermore, people respond to workload incentives; that is, how hard a person works can depend both on how much work there is and how the work is divided between the people responsible for it. In a service system, the routing and staffing policies control such workload incentives; and so the rate at which servers work will be affected by the system’s routing and staffing policies. This observation has consequences when modeling service system performance, and our objective in this paper is to investigate those consequences. We do this in the context of the M/M/N queue, which is the canonical model for large service systems. First, we present a model for “strategic” servers that choose their service rate to maximize a trade-off between an “effort cost,” which captures the idea that servers exert more effort when working at a faster rate, and a “value of idleness,” which assumes that servers value having idle time. Next, we characterize the symmetric Nash equilibrium service rate under any routing policy that routes based on the server idle time (such as the longest idle server first policy). We find that the system must operate in a quality-driven regime, in which servers have idle time, for an equilibrium to exist. The implication is that to have an equilibrium solution the staffing must have a first-order term that strictly exceeds that of the common square-root staffing policy. Then, within the class of policies that admit an equilibrium, we (asymptotically) solve the problem of minimizing the total cost, when there are linear staffing costs and linear waiting costs. Finally, we end by exploring the question of whether routing policies that are based on the service rate, instead of the server idle time, can improve system performance.
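As a concrete sketch of the model just described, using a formulation common in this literature (the paper's exact functional forms may differ): each of the N servers chooses a rate to trade idle time against effort, and staffing is measured against the square-root rule.

```latex
% Server i's utility: steady-state idle fraction minus a convex effort cost
U_i(\mu_i;\mu_{-i}) = I_i(\mu_i;\mu_{-i}) - c(\mu_i), \qquad c' > 0,\; c'' > 0

% Square-root staffing with offered load R = \lambda/\mu:
N = R + \beta\sqrt{R}
% The abstract's finding: an equilibrium requires a first-order term exceeding R,
% e.g. N = (1+\delta)R with \delta > 0, so that servers retain idle time.
```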
Staffing, Routing, and Payment to Trade Off Speed and Quality in Large Service Systems
Three fundamental questions when operating a service system are (1) how many employees to staff, (2) how to route work to them, and (3) how to pay them. These questions have often been studied separately; that is, the queueing and network-design literature that considers staffing and workload routing generally ignores payment, and the literature on employee payment generally ignores issues surrounding staffing and routing. In “Staffing, Routing, and Payment to Trade Off Speed and Quality in Large Service Systems,” D. Zhan and A.R. Ward study how the aforementioned three decisions jointly affect system throughput and the quality of the service delivered when the employees maximize their own payment. They find that the system manager should first solve a joint optimization problem to determine the staffing level, the routing policy, and the service speed, and second, design a payment contract under which the employees work at the desired service speed. Most common queueing models used for service-system design assume that the servers work at fixed (possibly heterogeneous) rates. However, real-life service systems are staffed by people, and people may change their service speed in response to incentives. The complication is that the resulting service speed is jointly affected by staffing, routing, and payment decisions. Our objective in this paper is to find a joint staffing, routing, and payment policy that induces optimal service-system performance. We do this under the assumption that there is a trade-off between service speed and quality and that employees are paid based on both. The employees selfishly choose their own service speed to maximize their own expected utility (which depends on the staffing through their busy time). The endogenous service-rate assumption leads to a centralized control problem in which the system manager jointly optimizes over the staffing, routing, and service rate. By solving the centralized control problem under fluid scaling, we find four different economically optimal operating regimes: critically loaded, efficiency driven, quality driven, and intentional idling (in which there is simultaneous customer abandonment and server idling). Then we show that a simple piece-rate payment scheme can be used to solve the associated decentralized control problem under fluid scaling.
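To make the decentralization step concrete, here is an illustrative fluid-scale sketch of a piece-rate contract; the p, b, and ρ notation is ours, not necessarily the paper's. A server paid p per completed job chooses a rate balancing earnings against effort, and the induced load determines the operating regime.

```latex
% Fluid-scale utility of a server with busy fraction b, paid p per completed job:
u(\mu) = \big(p\,\mu - c(\mu)\big)\, b
% The induced load \rho = \lambda/(N\mu) selects the regime:
% \rho < 1: quality driven;   \rho = 1: critically loaded;
% \rho > 1 sustained by abandonment: efficiency driven;
% abandonment together with server idling: intentional idling.
```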
Incentive Contracts for a Queueing System with a Strategic Server: A Principal-Agent Perspective
Queueing systems with strategic servers are common in the service industry. The strategic server’s self-interested choice of service rate can be detrimental to the queueing system. To improve service rates, it is critical to design incentive contracts for the server from the queueing system owner’s perspective. This study investigates incentive contracts for queueing systems under exogenous and endogenous price scenarios. Unit-price and cost-sharing contracts are introduced to coordinate the queueing system. The effects of pricing mechanisms and contract types on the queueing system are investigated theoretically and experimentally. The results reveal that regardless of whether the price scenario is exogenous or endogenous, the cost-sharing contract is more effective than the unit-price contract at incentivizing the server to exert service effort. The cost-sharing contract with an endogenous price can reduce the service price. The cost-sharing contract can boost profits for both the owner and the server, albeit under certain conditions.
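The abstract does not state the contracts' functional forms; purely as a hypothetical illustration of how such transfers are often written in principal-agent queueing models, a unit-price contract pays a wage per unit of service output, while a cost-sharing contract additionally reimburses a fraction of the server's effort cost.

```latex
% Hypothetical transfer functions (not taken from the paper):
T_{\mathrm{unit}}(\mu)  = w\,\mu                     % unit-price contract
T_{\mathrm{share}}(\mu) = w\,\mu + \theta\,c(\mu)    % cost-sharing, \theta \in (0,1)
```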
The Smart Network Management Automation Algorithm for Administration of Reliable 5G Communication Networks
Smart network management automation (SNMA) is a technology that enables the automation of network management tasks. It provides a platform for automating network management, allowing administrators to manage their networks more reliably and efficiently. It provides the tools to monitor and manage network performance, identify and troubleshoot problems, and automate the deployment of new services and applications. SNMA also scales to meet the needs of any organization, regardless of size or complexity. With SNMA, network administrators can quickly and accurately identify problems, increase network uptime, and reduce time spent on manual tasks. SNMA provides the ability to adjust configurations quickly and respond to changes in the network environment, and it can help reduce costs and improve the efficiency of network operations. This paper proposes an intelligent network management automation algorithm for network administration and management in 5G communication networks. The proposed algorithm achieved 91.82% for remote network administration, 95.25% for global network administration, 96.59% for urban network administration, and 95.07% for local network administration. It uses a single submission to the network to make configuration changes, register network resources, manage users’ IP addresses, and filter packets to ensure information security, among other tasks.
Knowledge strategy planning: an integrated approach to manage uncertainty, turbulence, and dynamics
Purpose: Knowledge strategy and its planning are affected by uncertainty and environmental turbulence. This paper aims to discuss these issues and present knowledge strategy planning as an integrated approach for facing these conditions.
Design/methodology/approach: Based on an extensive survey and an original re-elaboration of the literature, the paper addresses these research questions: What is the meaning of knowledge strategy, and how can it be related to concepts such as strategic thinking, business strategy and knowledge management (KM) in organizations? What are the limitations of a pure rational approach to knowledge strategy in turbulent environments and under uncertainty? And what approaches can consequently be proposed to formulate knowledge strategies?
Findings: The study provides a critical reading of the current literature. It also proposes an integrated approach that sees planning as a continuous effort of learning and adaptation to needs and opportunities that dynamically emerge from daily practices.
Research limitations/implications: The proposed framework can inspire a new research agenda to detect how knowledge strategies are planned in companies and how they are continuously adapted on the basis of a dialogue between rational contributions and perceptions of reality, practical views, intuitions and emotions. This can also inspire a new agenda for company strategists and KM professionals.
Originality/value: Little attention has been devoted to knowledge strategy planning in the literature. The paper contributes to filling this gap and proposes a new way to see knowledge strategy as an integration of rational thinking and dynamic learning.
Strategic Behavior and Social Optimization in Markovian Vacation Queues
We consider a single-server queueing system in which service shuts down when there are no customers present and resumes only when the queue length reaches a given critical length. We analyze the strategic response of customers to this mechanism and compare it to the overall optimal behavior, with and without delay information. The results differ significantly from those obtained when the server is continuously available. We show that multiple equilibria may exist in such a system and that the optimal arrival rate may be greater or smaller than that of the decentralized equilibrium. Finally, we treat the critical length as a decision variable and discuss the optimal operating policy, taking strategic customers into consideration.
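To make the mechanism concrete: under this N-policy (service resumes once the queue reaches the critical length N), a customer-equilibrium condition can be sketched as below; the reward/cost notation is illustrative, not the paper's.

```latex
% A customer with service reward R and waiting-cost rate C joins iff
R - C\,W(\lambda, N) \ge 0
% where W(\lambda, N) is the expected sojourn time given joining rate \lambda.
% Because W depends on the rate \lambda that customers collectively induce,
% several joining rates can satisfy this condition, yielding multiple equilibria.
```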
Optimizing storage on fog computing edge servers: A recent algorithm design with minimal interference
The burgeoning field of fog computing introduces a transformative computing paradigm with extensive applications across diverse sectors. At the heart of this paradigm lies the pivotal role of edge servers, which are entrusted with critical computing and storage functions. The optimization of these servers’ storage capacities emerges as a crucial factor in augmenting the efficacy of fog computing infrastructures. This paper presents a novel storage optimization algorithm, dubbed LIRU (Low Interference Recently Used), which synthesizes the strengths of the LIRS (Low Interference Recency Set) and LRU (Least Recently Used) replacement algorithms. Set against the backdrop of constrained storage resources, this research endeavours to formulate an algorithm that optimizes storage space utilization, elevates data access efficiency, and diminishes access latencies. The investigation begins with a comprehensive analysis of the storage resources available on edge servers, pinpointing the essential considerations for optimization algorithms: storage resource utilization and data access frequency. The study then constructs an optimization model that harmonizes data frequency with cache capacity, employing optimization theory to discern the optimal solution for storage maximization. Subsequent experimental validations of the LIRU algorithm underscore its superiority over conventional replacement algorithms, showcasing significant improvements in storage utilization, data access efficiency, and reduced access delays. Notably, the LIRU algorithm registers a 5% increment in one-hop hit ratio relative to the LFU algorithm, a 66% enhancement over the LRU algorithm, and a 14% elevation in system hit ratio against the LRU algorithm. Moreover, it curtails the average system response time by 2.4% and 16.5% compared to the LRU and LFU algorithms, respectively, particularly in scenarios involving large cache sizes. This research not only sheds light on the intricacies of edge server storage optimization but also significantly propels the performance and efficiency of the broader fog computing ecosystem. Through these insights, the study contributes a valuable framework for enhancing data management strategies within fog computing architectures, marking a noteworthy advancement in the field.
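The abstract names the ingredients (LIRS-style recency tracking plus LRU eviction) but not LIRU's exact rules, so the Python sketch below is an assumption-laden illustration: a plain LRU cache guarded by a LIRS-like "test" queue that only admits keys referenced twice within a recent window, filtering out one-hit-wonders. Class and parameter names are hypothetical.

```python
from collections import OrderedDict

class LIRULikeCache:
    """Illustrative sketch only: combines plain LRU eviction with a
    LIRS-style 'test' queue of recently seen keys. A key enters the main
    cache only on its second reference within the recent window."""

    def __init__(self, capacity, test_capacity=None):
        self.capacity = capacity
        self.test_capacity = test_capacity or capacity
        self.cache = OrderedDict()  # resident entries: key -> value
        self.test = OrderedDict()   # recently referenced, non-resident keys

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh recency on a hit
            return self.cache[key]
        return None                      # miss

    def put(self, key, value):
        if key in self.cache:            # update in place, refresh recency
            self.cache[key] = value
            self.cache.move_to_end(key)
        elif key in self.test:           # second recent reference: admit
            del self.test[key]
            self.cache[key] = value
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the LRU resident
        else:                            # first reference: only remember it
            self.test[key] = True
            if len(self.test) > self.test_capacity:
                self.test.popitem(last=False)    # forget the oldest candidate
```

Under this sketch a burst of single-use keys never displaces hot resident entries, which is the qualitative behavior the paper attributes to LIRU's higher hit ratios.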
Optimal Signaling Mechanisms in Unobservable Queues
In many systems with limited service capacity, customers must wait in a queue for service. When customers cannot observe the queue, how should a revenue-maximizing service provider convey information about wait times? In “Optimal Signaling Mechanisms in Unobservable Queues,” D. Lingenbrink and K. Iyer study this problem and characterize the structure of the optimal signaling mechanism. To signal optimally, the service provider uses two possible signals, “short” and “long,” to tell customers the queue length is short when below a threshold and long when above it. For the specific case of linear waiting costs, the authors explicitly compute this threshold. Furthermore, they show that for an optimally chosen fixed service price, optimal signaling produces the same expected revenue as a pricing mechanism that sets prices based on the number of customers waiting. This suggests that in settings where one cannot dynamically update prices, signaling can be effective in generating revenue. We consider the problem of optimal information sharing in an unobservable single-server queue offering service at a fixed price to a Poisson arrival of delay-sensitive customers. The service provider observes the queue and may share state information with arriving customers. The customers, who are Bayesian and strategic, incorporate this information into their beliefs before deciding whether to join the queue. We pose the following question: Which signaling mechanism should the service provider adopt to maximize her expected revenue? We formulate this problem as an infinite linear program in the queue’s steady-state distribution and establish that, in general, the optimal signaling mechanism requires the service provider to strategically conceal information in order to incentivize customers to join. In particular, we show that a binary signaling mechanism with a threshold structure is optimal. Finally, we prove that coupled with an optimal fixed price, the optimal signaling mechanism generates the same expected revenue as the optimal state-dependent pricing mechanism. This suggests that in settings where state-dependent pricing is infeasible, signaling can be effective in achieving the optimal revenue. Our work contributes to the literature on dynamic Bayesian persuasion and provides many interesting directions for extensions.
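The optimal mechanism's threshold structure can be written compactly; the valuation v, price p, and waiting-cost rate c below are illustrative notation rather than the paper's.

```latex
% Binary threshold signal on the queue length n, with threshold \theta:
s(n) = \begin{cases} \text{``short''} & \text{if } n < \theta \\
                     \text{``long''}  & \text{if } n \ge \theta \end{cases}
% A Bayesian customer hearing ``short'' joins iff the conditional expected
% wait satisfies  c\,\mathbb{E}[W \mid \text{short}] \le v - p.
```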
Quality-Speed Conundrum: Trade-offs in Customer-Intensive Services
In many services, the quality or value provided by the service increases with the time the service provider spends with the customer. However, longer service times also result in longer waits for customers. We term such services, in which the interaction between quality and speed is critical, customer-intensive services. In a queueing framework, we parameterize the degree of customer intensity of the service. The service speed chosen by the service provider affects the quality of the service through its customer intensity. Customers queue for the service based on service quality, delay costs, and price. We study how a service provider facing such customers makes the optimal “quality-speed trade-off.” Our results demonstrate that the customer intensity of the service is a critical driver of equilibrium price, service speed, demand, congestion in queues, and service provider revenues. Customer intensity leads to outcomes very different from those of traditional models of service rate competition. For instance, as the number of competing servers increases, the price increases and the servers become slower. This paper was accepted by Sampath Rajagopalan, operations and supply chain management.
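A stylized way to write the trade-off described above (illustrative notation, not necessarily the paper's): quality falls as the provider speeds up, while M/M/1 delay falls as the provider speeds up, so the provider's rate choice pulls customer utility in opposite directions.

```latex
% Customer utility with quality q(\mu) decreasing in the service rate \mu:
U = q(\mu) - p - c\,W(\lambda,\mu), \qquad q'(\mu) < 0,
\qquad W(\lambda,\mu) = \frac{1}{\mu - \lambda} \quad (M/M/1)
% The customer-intensity parameter governs how steeply q falls in \mu.
```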