71 result(s) for "Conditional Value-at-Risk (CVaR)"
Making and Evaluating Point Forecasts
Typically, point forecasting methods are compared and assessed by means of an error measure or scoring function, with the absolute error and the squared error being key examples. The individual scores are averaged over forecast cases, to result in a summary measure of the predictive performance, such as the mean absolute error or the mean squared error. I demonstrate that this common practice can lead to grossly misguided inferences, unless the scoring function and the forecasting task are carefully matched. Effective point forecasting requires that the scoring function be specified ex ante, or that the forecaster receives a directive in the form of a statistical functional, such as the mean or a quantile of the predictive distribution. If the scoring function is specified ex ante, the forecaster can issue the optimal point forecast, namely, the Bayes rule. If the forecaster receives a directive in the form of a functional, it is critical that the scoring function be consistent for it, in the sense that the expected score is minimized when following the directive. A functional is elicitable if there exists a scoring function that is strictly consistent for it. Expectations, ratios of expectations and quantiles are elicitable. For example, a scoring function is consistent for the mean functional if and only if it is a Bregman function. It is consistent for a quantile if and only if it is generalized piecewise linear. Similar characterizations apply to ratios of expectations and to expectiles. Weighted scoring functions are consistent for functionals that adapt to the weighting in peculiar ways. Not all functionals are elicitable; for instance, conditional value-at-risk is not, despite its popularity in quantitative finance.
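The consistency requirement described in this abstract can be checked numerically: the generalized piecewise linear (pinball) score is minimized by the corresponding quantile of the predictive distribution, not by its mean. A minimal sketch, assuming an illustrative right-skewed sample and the directive τ = 0.9 (neither is from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=100_000)  # skewed "predictive distribution"

def pinball_loss(forecast, y, tau):
    """Quantile (pinball) score: strictly consistent for the tau-quantile."""
    return np.mean(np.where(y >= forecast,
                            tau * (y - forecast),
                            (1 - tau) * (forecast - y)))

tau = 0.9
grid = np.linspace(0.0, 6.0, 601)                  # candidate point forecasts
scores = [pinball_loss(q, sample, tau) for q in grid]
best = grid[int(np.argmin(scores))]

# The score-minimizing forecast matches the 0.9-quantile (about 2.30 here),
# not the mean (about 1.0): mismatching score and directive misleads.
print(best, np.quantile(sample, tau), sample.mean())
```

Swapping in the squared error would instead drive the optimal point forecast to the mean, consistent with the Bregman characterization mentioned in the abstract.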
Partially ordered data sets and a new efficient method for calculating multivariate conditional value-at-risk
Recent studies in Lee and Prékopa (Oper Res Lett 45:19–24, 2017) and Lee (Oper Res Lett 45:1204–1220, 2017) showed that a union of partially ordered orthants in R^n can be decomposed into only the largest and the second largest chains. This allows us to calculate the probability of the union of such events in a recursive manner. If the vertices of such orthants designate p-level efficient points, i.e., the multivariate quantile or the multivariate value-at-risk (MVaR) in R^n, then their number, say N, is typically very large, which makes it almost impossible to calculate the multivariate conditional value-at-risk (MCVaR) introduced by Prékopa (Ann Oper Res 193(1):49–69, 2012): finding the exact value of the MCVaR from N MVaRs in R^n takes O(2^N) time. In this paper, building on the ideas in Lee and Prékopa (Oper Res Lett 45:19–24, 2017) and Lee (Oper Res Lett 45:1204–1220, 2017), together with proper adjustments, we study efficient methods for calculating the MCVaR without resorting to an approximation. The proposed methods not only have polynomial time complexity but also compute the exact value of the MCVaR. We also discuss the additional benefits the MCVaR offers over its univariate counterpart, the conditional value-at-risk, by providing numerical results. Numerical examples are presented, with computing times, for both given population and sample data sets.
Entropic Value-at-Risk: A New Coherent Risk Measure
This paper introduces the concept of entropic value-at-risk (EVaR), a new coherent risk measure that corresponds to the tightest possible upper bound obtained from the Chernoff inequality for the value-at-risk (VaR) as well as the conditional value-at-risk (CVaR). We show that a broad class of stochastic optimization problems that are computationally intractable with the CVaR is efficiently solvable when the EVaR is incorporated. We also prove that if two distributions have the same EVaR at all confidence levels, then they are identical at all points. The dual representation of the EVaR is closely related to the Kullback-Leibler divergence, also known as the relative entropy. Inspired by this dual representation, we define a large class of coherent risk measures, called g-entropic risk measures. The new class includes both the CVaR and the EVaR.
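The Chernoff-bound construction can be verified numerically on a standard-normal loss, for which all three measures have closed forms at the 95% level: VaR = Φ⁻¹(0.95) ≈ 1.645, CVaR ≈ 2.063, and EVaR = √(−2 ln 0.05) ≈ 2.448. A sketch with an illustrative sample size and z-grid (both are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.standard_normal(1_000_000)  # illustrative standard-normal loss
alpha = 0.05                             # tail probability (95% confidence level)

var_ = np.quantile(losses, 1 - alpha)    # value-at-risk: the 95% loss quantile
cvar = losses[losses >= var_].mean()     # conditional value-at-risk: mean tail loss

# EVaR is the tightest Chernoff bound:
#   EVaR(X) = inf_{z > 0} (1/z) * ln(E[exp(z * X)] / alpha)
zs = np.linspace(0.2, 3.0, 57)
mgf = np.array([np.exp(z * losses).mean() for z in zs])  # empirical MGF
evar = np.min(np.log(mgf / alpha) / zs)

print(var_, cvar, evar)  # the ordering VaR <= CVaR <= EVaR holds
```

The printed values should land near the closed-form targets above, illustrating that the EVaR is a strictly more conservative coherent bound than the CVaR at the same confidence level.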
Managing customer waiting times in an inventory system using Conditional Value-at-Risk measure
In today’s fast-paced world, delays or prolonged customer waiting times pose a threat to a firm’s profitability. This study uses the mean-CVaR metric to incorporate the risk associated with prolonged customer waiting times into the optimal trade-off decisions. For this purpose, we consider a single inventory system that faces Poisson demand and uses a base-stock policy to replenish its inventory after a fixed lead time. The firm implements a preorder strategy, encouraging customers to place their orders a fixed amount of time in advance of their actual needs, a period referred to as the commitment lead time. The firm rewards customers with a bonus termed the commitment cost, which increases with the length of the commitment lead time. We aim to determine the optimal control policy, comprising the optimal base-stock level and optimal commitment lead time, that minimizes the long-run average cost. The cost includes inventory holding, commitment, and customer waiting costs, with the latter adjusted for the firm’s degree of risk aversion. The optimal policy depends on the interdependence of the decisions, with the optimal commitment lead time following a “bang-bang” pattern and the corresponding optimal base-stock level taking an “all-or-nothing” form. For linear commitment costs with a cost factor per time unit, we identify a threshold that increases with the firm’s degree of risk aversion. Firms with greater risk aversion typically favor the buy-to-order strategy, while those with lower risk aversion may opt for either buy-to-stock or buy-to-order depending on their assessment of waiting costs.
Adaptive energy management in smart homes through fuzzy reinforcement learning and metaheuristic optimization algorithms to minimize costs
The integration of advanced technology in smart homes has made the prevention of energy waste in the residential and building sectors a significant concern for both developed and developing nations in recent decades. This paper offers a thorough model for optimizing energy generation and consumption in smart homes with demand-responsive loads, energy storage systems (ESS), solar photovoltaic (PV) panels, and bidirectional electric vehicles (EVs) capable of both grid-to-vehicle (G2V) and vehicle-to-grid (V2G) operation. The model uses a mixed-integer linear programming (MILP) framework to assess the technical and economic effects of these factors while accounting for the inherent uncertainties in outside temperatures, lighting loads, solar irradiation, and EV availability. Key scenarios include time-shifting deferrable loads (such as washing machines), selling excess PV-generated energy to the grid, and implementing price-based demand response (DR) schemes such as real-time pricing (RTP) and day-ahead pricing (DAP). To manage uncertainties and adaptively schedule the operation of appliances, EVs, and the ESS, the proposed HEMS uses a fuzzy programming technique supplemented by reinforcement learning. Harris Hawks Optimization (HHO) and Wild Horse Optimization (WHO) are the metaheuristic algorithms used for optimization, while the conditional value-at-risk (CVaR) criterion is used for risk management. MATLAB simulations show that this adaptive technique can save up to 53% of household electricity expenses in the tested scenarios while keeping computation time under 60 s, which makes it suitable for real-time applications. The strategy paves the way for resilient and sustainable residential energy systems by highlighting new developments in smart grid integration, renewable energy use, and AI-driven optimization.
Multilocation Newsvendor Problem: Centralization and Inventory Pooling
We study a multilocation newsvendor model with a retailer owning multiple retail stores, each of which is operated by a manager who decides the order quantity for filling random customer demand of a product. Store managers and the retailer are all risk averse, but managers are more risk averse than the retailer. We adopt conditional value-at-risk (CVaR) as the performance measure and consider two alternative strategies to improve the system’s performance. First, the retailer centralizes the ordering decisions. Second, managers still decide the order quantity for their own store, whereas their inventories are pooled together. We analyze and compare the optimal order quantities and the resultant CVaR values of the systems and study their comparative statics. For centralization, we find that each store has a higher inventory level in the centralized system than in the decentralized system, and centralization positively benefits the retailer as long as some store managers are strictly more risk averse than the retailer. When there is inventory pooling, the ordering decisions in the decentralized system depend on how the additional profit from pooling is allocated among the stores. We consider a weighted proportional allocation rule and characterize the Nash equilibrium of the resultant ordering game among the store managers. Our key finding is that as long as the store managers are sufficiently more risk averse than the retailer or the demands are very heavy tailed, inventory pooling is less beneficial than centralization. We further derive a lower bound on the value of centralization and two upper bounds on the value of inventory pooling. Finally, our analytical results are illustrated using a data set from an online retailer in China, and various comparative statics are further examined via extensive numerical experiments. This paper was accepted by Charles Corbett, operations management.
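The effect of adopting CVaR as the performance measure can be illustrated with a single-store newsvendor simulation; the demand distribution, prices, and risk level below are illustrative assumptions, not the paper's multilocation model:

```python
import numpy as np

rng = np.random.default_rng(2)
demand = rng.gamma(shape=4.0, scale=25.0, size=100_000)  # mean 100, right-skewed
price, cost = 10.0, 6.0   # illustrative unit revenue and procurement cost
beta = 0.1                # the manager cares about the worst 10% of outcomes

def profit(q, d):
    return price * np.minimum(q, d) - cost * q   # unsold units are worthless

def lower_cvar(x, beta):
    """Mean profit over the worst beta-fraction of scenarios."""
    return x[x <= np.quantile(x, beta)].mean()

qs = np.arange(20, 201)
risk_neutral_q = qs[np.argmax([profit(q, demand).mean() for q in qs])]
cvar_q = qs[np.argmax([lower_cvar(profit(q, demand), beta) for q in qs])]

# The risk-neutral store orders near the critical-ratio quantile F^{-1}(0.4);
# the CVaR-maximizing (more risk-averse) manager orders substantially less.
print(risk_neutral_q, cvar_q)
```

The gap between these two order quantities is the kind of distortion that centralization and inventory pooling reallocate across stores in the paper's model.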
Coordination for contract farming supply chain with stochastic yield and demand under CVaR criterion
This paper analyzes the optimal production and pricing decisions in an agricultural supply chain, formed under a contract farming scheme, consisting of an agribusiness firm and multiple risk-averse farmers. The effects of yield and demand uncertainties and the farmer’s risk aversion on the optimal production quantity, wholesale price, and retail price are analyzed. Our analyses provide managerial insights into the contract terms of the agricultural supply chain. We show that the production quantity decreases as the farmer becomes more risk-averse and faces higher yield uncertainty, while the retail price correspondingly increases. The wholesale price, however, is influenced by the interaction of the farmer’s risk aversion and yield uncertainty, and the retail price by the interaction of demand uncertainty and price elasticity. In particular, we show that the loss due to decentralized decisions increases as the farmer becomes more risk-averse and yield uncertainty grows. Thus, an RPG (Revenue sharing + Production cost sharing + Guaranteed money) mechanism is developed to coordinate the agricultural supply chain in an uncertain environment with risk-averse agents, based on contract farming practices. The cost allocation ratio of the RPG mechanism borne by the agribusiness firm increases in the yield and market demand uncertainties and decreases in the farmer’s risk aversion. Specifically, if the farmer is extremely risk-averse and the yield and demand uncertainties become extremely high, the RPG mechanism cannot achieve perfect coordination of the agricultural supply chain.
Risk-Constrained Optimization Framework for Generation and Transmission Maintenance Scheduling Under Economic and Carbon Emission Constraints
Power generation and transmission systems face increasing challenges in coordinating maintenance planning under economic pressure and carbon emission constraints. This study proposes an optimization framework that integrates preventive maintenance scheduling with operational dispatch decisions, aiming to achieve both cost efficiency and emission reduction. A bi-layer scenario-based mixed-integer optimization model is formulated, where the upper layer determines annual preventive maintenance windows, and the lower layer performs hourly economic dispatch considering renewable generation and demand uncertainty. To manage the exposure to extreme carbon outcomes, a Conditional Value-at-Risk (CVaR) constraint is embedded, jointly controlling economic and environmental risks. A parallel cut-generation decomposition algorithm is developed to ensure computational scalability for large-scale systems. Numerical experiments on six-bus and IEEE 118-bus systems demonstrate that the proposed model reduces total carbon emissions by up to 32.1%, while maintaining cost efficiency and system reliability. The scenario analyses further show that adjusting maintenance schedules according to seasonal carbon intensity effectively balances operation and emission targets. The results confirm that the proposed optimization framework provides a practical and scalable approach for achieving low-carbon, reliable, and economically efficient power system maintenance planning.
Stage-t scenario dominance for risk-averse multi-stage stochastic mixed-integer programs
This paper presents a new and general approach, named “Stage-t Scenario Dominance,” to solve risk-averse multi-stage stochastic mixed-integer programs (M-SMIPs). Given a monotonic objective function, our method derives a partial ordering of scenarios by pairwise comparing the realizations of uncertain parameters at each time stage under each scenario. Specifically, we derive bounds and implications from Stage-t Scenario Dominance by using the partial ordering of scenarios and solving a subset of individual scenario sub-problems up to stage t. Using these inferences, we generate new cutting planes to tackle the computational difficulty of risk-averse M-SMIPs. We also derive results on the minimum number of scenario-dominance relations generated. We demonstrate the use of this methodology on a stochastic version of the mean-conditional value-at-risk (CVaR) dynamic knapsack problem. Our computational experiments address instances with uncertainty in the objective, left-hand side, and right-hand side parameters. Computational results show that our “scenario dominance”-based method can reduce the solution time by one to two orders of magnitude for mean-risk, stochastic, multi-stage, and multi-dimensional knapsack problems with both integer and continuous variables, whose structure resembles mean-risk M-SMIPs, across varying risk characteristics and numbers of random variables in each stage. The results also demonstrate that strong dominance cuts perform well for instances with ten random variables in each stage and ninety random variables in total. The proposed scenario dominance framework is general and can be applied to a wide range of risk-averse and risk-neutral M-SMIP problems.
Stochastic Optimization Scheduling Method for Mine Electricity–Heat Energy Systems Considering Power-to-Gas and Conditional Value-at-Risk
To fully accommodate renewable and derivative energy sources in mine energy systems under supply and demand uncertainties, this paper proposes an optimized electricity–heat scheduling method for mining areas that incorporates Power-to-Gas (P2G) technology and Conditional Value-at-Risk (CVaR). First, to address uncertainties on both the supply and demand sides, a P2G unit is introduced, and a Latin hypercube sampling technique based on Cholesky decomposition is employed to generate wind–solar-load sample matrices that capture source–load correlations, which are subsequently used to construct representative scenarios. Second, a stochastic optimization scheduling model is developed for the mine electricity–heat energy system, aiming to minimize the total scheduling cost comprising day-ahead scheduling cost, expected reserve adjustment cost, and CVaR. Finally, a case study on a typical mine electricity–heat energy system is conducted to validate the effectiveness of the proposed method in terms of operational cost reduction and system reliability. The results demonstrate a 1.4% reduction in the total operating cost, achieving a balance between economic efficiency and system security.
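Scenario-based models like the one above typically embed CVaR through the Rockafellar–Uryasev representation, CVaR_α(X) = min_t { t + E[(X − t)⁺] / (1 − α) }, which turns the tail expectation into a convex (and, over scenarios, linear) term. A small numerical check that this representation agrees with the direct tail average, on an illustrative cost sample (the distribution and grid are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
costs = rng.lognormal(mean=3.0, sigma=0.5, size=50_000)  # scenario operating costs
alpha = 0.95

# Rockafellar-Uryasev: CVaR_alpha(X) = min_t  t + E[(X - t)^+] / (1 - alpha).
# The minimand is convex and piecewise linear in t, with a minimizer at the
# alpha-quantile, so searching a fine grid of upper quantiles suffices here.
ts = np.quantile(costs, np.linspace(0.90, 0.999, 200))
ru = min(t + np.maximum(costs - t, 0.0).mean() / (1 - alpha) for t in ts)

# Direct definition: average cost over the worst (1 - alpha) tail.
var_ = np.quantile(costs, alpha)
tail = costs[costs >= var_].mean()

print(ru, tail)  # the two CVaR estimates agree
```

In a MILP such as the scheduling model described, t becomes a decision variable and each (X − t)⁺ a per-scenario slack variable, so the CVaR term in the objective stays linear.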