Search Results

3,074 result(s) for "Completion time"
Single Machine Scheduling Proportionally Deteriorating Jobs with Ready Times Subject to the Total Weighted Completion Time Minimization
In this paper, we investigate a single-machine scheduling problem with proportional job deterioration. Given job release times (dates), the objective is to minimize the total weighted completion time. For the general case, several dominance properties, a lower bound, and an upper bound are derived, and a branch-and-bound algorithm is proposed. In addition, several meta-heuristic algorithms are proposed, including tabu search (TS), simulated annealing (SA), and the NEH heuristic. Finally, experimental results compare the branch-and-bound algorithm with the other three algorithms; they indicate that the branch-and-bound algorithm can solve instances of 40 jobs within a reasonable time and that NEH and SA are more accurate than TS.
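A minimal sketch of the problem flavour, not the paper's algorithms: it assumes the common proportional-deterioration model in which a job's actual processing time is its rate times its start time (so release/start times are taken strictly positive), and uses an NEH-style insertion heuristic as a stand-in. All names and data are illustrative.

```python
# Evaluate a job sequence under proportional deterioration p_j(t) = alpha_j * t
# with release times r_j and weights w_j (assumed model, not the paper's exact one).
def total_weighted_completion_time(sequence, alpha, r, w):
    t = 0.0
    total = 0.0
    for j in sequence:
        start = max(t, r[j])              # wait for the job's release time
        t = start + alpha[j] * start      # processing time deteriorates with start time
        total += w[j] * t
    return total

def neh_insertion(jobs, alpha, r, w):
    # NEH-style heuristic: insert jobs one at a time at the cheapest position.
    order = sorted(jobs, key=lambda j: w[j], reverse=True)   # simple priority rule
    seq = []
    for j in order:
        best = None
        for i in range(len(seq) + 1):
            cand = seq[:i] + [j] + seq[i:]
            cost = total_weighted_completion_time(cand, alpha, r, w)
            if best is None or cost < best[0]:
                best = (cost, cand)
        seq = best[1]
    return seq

# Toy usage with three hypothetical jobs.
jobs = [0, 1, 2]
alpha = [0.2, 0.5, 0.1]   # deterioration rates
r = [1.0, 2.0, 1.5]       # release times (positive by assumption)
w = [3.0, 1.0, 2.0]       # weights
seq = neh_insertion(jobs, alpha, r, w)
print(seq, total_weighted_completion_time(seq, alpha, r, w))
```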
Deep neural networks based order completion time prediction by using real-time job shop RFID data
Traditional order completion time (OCT) prediction methods often rely on mutable and idealized production data (e.g., the arrival time of work in process (WIP), the planned processing time of all operations, and the expected waiting time per operation). As a result, the predicted time often deviates dramatically from the actual completion time, even when the dynamics of production capacity and the real-time load conditions of the job shop are taken into account. To address this, a new OCT prediction method that combines order data with real-time job shop RFID data is proposed in this article. It uses accurate RFID data to describe the real-time load conditions of the job shop and mines the mapping relationship between RFID data and OCT from historical data. First, RFID devices capture the types and waiting-list information of all WIPs in the in-stocks and out-stocks of machining workstations, as well as the real-time processing progress of all WIPs being machined at those workstations. Second, a description model of real-time job shop load conditions is built from the RFID data. Next, a mapping model combining order data and real-time RFID data is established. Finally, a deep belief network, one of the major deep neural network technologies, is applied to mine the mapping relationship. To illustrate the advantages of the proposed method, a numerical experiment is conducted comparing it with a back-propagation (BP) network-based prediction method, a multi-hidden-layer BP network-based prediction method, and a prediction method based on principal component analysis combined with a BP network.
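An illustrative sketch only: the paper trains a deep belief network on features derived from real-time RFID job-shop data; here a generic multilayer perceptron regressor stands in for that learned mapping, and the feature matrix and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-order features: WIP counts per workstation, in-/out-stock
# queue lengths, processing progress of jobs currently being machined, etc.
X = rng.random((500, 12))
y = X @ rng.random(12) + rng.normal(0, 0.05, 500)   # synthetic OCT labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```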
Stacked encoded cascade error feedback deep extreme learning machine network for manufacturing order completion time
In this paper, a novel stacked encoded cascade error feedback deep extreme learning machine (SEC-E-DELM) network is proposed to predict order completion time (OCT) from historical production planning and control data. In practice, the actual OCT deviates significantly from the plan because of recessive disturbances: disturbances that do not shut down production but slow it down, with the slowdown accumulating over time until the actual time departs markedly from the planned time. The way weight parameters are generated in neural networks by randomization has a significant effect on generalization performance. To predict the OCT, first, a stacked autoencoder is used to generate the network's input connection weights by learning a deep representation of the real data. Second, the learned encoder distribution is connected to the network output through output connection weights learned incrementally via the Moore–Penrose inverse. Third, hidden units are added to the network one by one, each receiving input connections from the input units and the last layer of the encoder, to avoid overfitting and improve generalization. The input connection weights of each newly added hidden unit are calculated analytically by an error feedback function, which improves the convergence rate by learning deeper features. Finally, further hidden units are added one by one, receiving connections from the input units and from some of the existing hidden units, yielding a deep cascade architecture. An extensive comparative study demonstrates that calculating connection weights with the proposed method significantly improves the generalization performance and robustness of the network.
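A minimal extreme-learning-machine sketch showing only the core building block the paper reuses: output connection weights obtained in closed form via the Moore–Penrose pseudoinverse. The stacked-autoencoder input weights, error-feedback units, and cascade growth of SEC-E-DELM are not reproduced; all names are illustrative.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # Moore-Penrose output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage with synthetic data.
X = np.random.default_rng(1).random((200, 5))
y = X.sum(axis=1)
W, b, beta = elm_fit(X, y)
print("train MSE:", float(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```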
Optimal algorithms for scheduling under time-of-use tariffs
We consider a natural generalization of classical scheduling problems to a setting in which using a time unit for processing a job causes some time-dependent cost, the time-of-use tariff, which must be paid in addition to the standard scheduling cost. We focus on preemptive single-machine scheduling and two classical scheduling cost functions, the sum of (weighted) completion times and the maximum completion time, that is, the makespan. While these problems are easy to solve in the classical scheduling setting, they are considerably more complex when time-of-use tariffs must be considered. We contribute optimal polynomial-time algorithms and best possible approximation algorithms. For the problem of minimizing the total (weighted) completion time on a single machine, we present a polynomial-time algorithm that computes for any given sequence of jobs an optimal schedule, i.e., the optimal set of time slots to be used for preemptively scheduling jobs according to the given sequence. This result is based on dynamic programming using a subtle analysis of the structure of optimal solutions and a potential function argument. With this algorithm, we solve the unweighted problem optimally in polynomial time. For the more general problem, in which jobs may have individual weights, we develop a polynomial-time approximation scheme (PTAS) based on a dual scheduling approach introduced for scheduling on a machine of varying speed. As the weighted problem is strongly NP-hard, our PTAS is the best possible approximation we can hope for. For preemptive scheduling to minimize the makespan, we show that there is a comparably simple optimal algorithm with polynomial running time. This is true even in a certain generalized model with unrelated machines.
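A small brute-force sketch for a simplified, discretized version of the makespan variant, not the paper's algorithms: processing requires P unit slots, using slot t costs tariff e[t], and the objective is makespan plus total tariff paid. For each candidate horizon T, buying the P cheapest slots among the first T is clearly best for that T, so scanning T gives the optimal total cost of this toy model.

```python
def best_makespan_schedule(P, e):
    """P: units of work; e: list of per-slot tariffs over the planning horizon."""
    best = None
    for T in range(P, len(e) + 1):
        tariff = sum(sorted(e[:T])[:P])   # cheapest P slots within the first T
        cost = T + tariff                 # makespan term + tariff term
        if best is None or cost < best[0]:
            best = (cost, T)              # note: the chosen slots may finish before T
    return best

print(best_makespan_schedule(3, [5, 1, 4, 1, 2, 9]))   # -> (9, 5)
```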
Research on Group Scheduling with General Logarithmic Deterioration Subject to Maximal Completion Time Cost
Single-machine group scheduling with general logarithmic deterioration is investigated, where the actual job processing (resp. group setup) time is a non-decreasing function of the sum of the logarithmic processing (resp. setup) times of the jobs (resp. groups) already processed. Using several optimality properties, it is shown that the maximal completion time (i.e., makespan) cost can be minimized in polynomial time, and the corresponding optimal algorithm is presented. In addition, an extension to a general weighted deterioration model is given.
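A hedged sketch of how a schedule's makespan could be evaluated under one plausible instance of this model: actual setup and processing times grow linearly with the accumulated logarithms of the setup and processing times already handled. The paper's deterioration function is more general; the linear form, the rate c, and the data below are only illustrative.

```python
import math

def makespan(groups, c=0.1):
    """groups: list of (setup_time, [job_times]) in the chosen processing order."""
    t = 0.0
    setup_log_sum = 0.0   # sum of log setup times of groups already set up
    job_log_sum = 0.0     # sum of log processing times of jobs already processed
    for setup, jobs in groups:
        t += setup * (1.0 + c * setup_log_sum)   # deteriorated group setup
        setup_log_sum += math.log(setup)
        for p in jobs:
            t += p * (1.0 + c * job_log_sum)     # deteriorated job processing
            job_log_sum += math.log(p)
    return t

print(makespan([(2.0, [3.0, 1.5]), (1.5, [2.0, 4.0])]))
```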
A cooperative adaptive genetic algorithm for reentrant hybrid flow shop scheduling with sequence-dependent setup time and limited buffers
This paper deals with a reentrant hybrid flow shop problem with sequence-dependent setup times and limited buffers, where there are multiple unrelated parallel machines at each stage. A mathematical model minimizing the total weighted completion time is constructed for this problem. Given the complexity of the problem, an effective cooperative adaptive genetic algorithm (CAGA) is proposed. In the algorithm, a dual-chain coding scheme and a staged-hierarchical decoding approach are designed to encode and decode each solution. Six dispatching heuristics and a dynamic adjustment method are introduced to construct the initial population. To balance exploration and exploitation, three operations are implemented: (1) two new crossover and mutation operators with a collaborative mechanism are imposed on the genetic algorithm; (2) an adaptive adjustment strategy re-optimizes better solutions after mutation, switching intelligently between an ant colony search algorithm and a modified greedy heuristic; (3) a reset strategy with a dynamic variable-step strategy regenerates non-improved solutions. A Taguchi design-of-experiments method is adopted to calibrate the parameter values of the presented algorithm. Comparison experiments are carried out on test instances of different scales. Computational results show that the proposed CAGA is more effective and efficient than several well-known algorithms for the studied problem.
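A bare-bones permutation genetic algorithm skeleton, only to illustrate the crossover/mutation loop this kind of method builds on; the CAGA's dual-chain encoding, staged-hierarchical decoding, ant-colony re-optimization, and reset strategy are not reproduced. The evaluate function is a placeholder where a real decoder would schedule the reentrant hybrid flow shop and return the total weighted completion time.

```python
import random

def evaluate(perm):
    # Placeholder objective; replace with a shop-scheduling decoder.
    return sum(i * j for i, j in enumerate(perm))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def mutate(perm, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def ga(n_jobs=10, pop_size=30, generations=100):
    pop = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)
        elite = pop[: pop_size // 2]                    # keep the better half
        children = [mutate(order_crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=evaluate)

print(ga())
```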
Energy Efficiency and Total Mission Completion Time Tradeoff in Multiple UAVs-Mounted IRS-Assisted Data Collection System
A UAV-mounted intelligent reflecting surface (IRS) helps address line-of-sight (LoS) blockage between sensor nodes (SNs) and the fusion center (FC) in the Internet of Things (IoT). This paper considers an IoT system assisted by multiple UAV-mounted IRSs (U-IRSs), where data from ground SNs are transmitted to the FC. In practice, energy efficiency (EE) and mission completion time are crucial metrics for evaluating system performance and operational cost. Recognizing their importance during data collection, we formulate a multi-objective optimization problem to maximize EE and minimize the total mission completion time simultaneously. To characterize this trade-off while keeping the optimization objectives consistent, we construct a problem that minimizes the weighted sum of the total mission completion time and the reciprocal of EE. Because the formulated problem is non-convex, obtaining optimal solutions is generally challenging. To tackle this, we decompose it into three sub-problems: UAV-SN association, allocation of the number of reflecting elements, and UAV trajectory optimization. An iterative algorithm combining a genetic algorithm, the CS-BJ algorithm, and the successive convex approximation technique is proposed to solve these sub-problems. Simulation results demonstrate that, when the transmitted data amount is 10 and 30 Mbits, the EE of the proposed method improves by more than 10.4% and 5.2%, respectively, compared with the static collection benchmark (the UAV hovers directly above each SN), while the total mission completion time is reduced by more than 5.4% and 3.3%.
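An illustrative scalarization sketch of the trade-off described above: a weighted sum of the mission completion time and the reciprocal of energy efficiency. The quantities, units, and weight are placeholders, not the paper's system model; in practice the two terms would be normalized to comparable scales.

```python
def scalarized_objective(completion_time_s, data_bits, energy_j, weight=0.5):
    """Weighted sum of completion time and 1/EE (smaller is better)."""
    ee = data_bits / energy_j                       # energy efficiency in bits per joule
    return weight * completion_time_s + (1.0 - weight) * (1.0 / ee)

# e.g. a 120 s mission collecting 30 Mbit while consuming 5 kJ
print(scalarized_objective(120.0, 30e6, 5e3))
```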
Free electricity tandem-twin-hybrid solar-biomass dryer increased the performance of coffee cherry drying
A free-electricity tandem-twin hybrid solar-biomass dryer, comprising two drying rooms and operated separately with solar energy and with biomass combustion of 10 kg of rubber wood per hour, was studied for drying Robusta coffee cherries at bed thicknesses of 3, 6, 9, and 12 cm. The drying completion time (tc), number of defects (ND), and colour parameters, i.e., lightness (L*), hue angle (H°), and chroma (C), were used as performance indicators. The experimental results indicated that the drying room, the bed thickness, and their interaction significantly affected tc and ND, while only the bed thickness significantly affected C, for both solar energy drying and biomass energy drying. Solar energy drying produced a drying air temperature of 44.6 ± 3.5 °C with a tc of 70.9–90.2 h for the front drying room and 40.1 ± 2.8 °C with a tc of 77.2–116.5 h for the rear drying room, whereas biomass energy drying produced a drying air temperature of 57.2 ± 3.6 °C with a tc of 34.1–44.9 h for the front drying room and 45.6 ± 6.0 °C with a tc of 56.3–96.6 h for the rear drying room. Both drying processes produced coffee beans with ND values below 11, qualifying for Grade 1, and with similar colour characteristics.
The impact of disruption characteristics on the performance of a server
In this paper, we study a queueing system serving N customers with an unreliable server that is subject to disruptions even when idle. Times between server interruptions, service times, and times between customer arrivals are assumed to follow exponential distributions. The main contribution of the paper is the use of general distributions for the length of server interruption periods (down times). Our numerical analysis reveals the importance of incorporating the down-time distribution into the model, since its impact on customer service levels can be counterintuitive. For instance, while higher down-time variability increases the mean queue length, it can improve system performance with respect to other service levels. We also show how the process completion time approach from the literature can be extended to analyze this queueing system when the unreliable server fails only while serving a customer.
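A Monte-Carlo sketch of the process-completion-time idea mentioned above: a service of exponential length is interrupted by exponentially spaced breakdowns, each interruption adds a down time drawn from a general (here lognormal) distribution, and service resumes where it left off. This is only a toy illustration of why the down-time distribution, not just its mean, matters; rates and the down-time law are assumptions.

```python
import random

def completion_time(mu=1.0, failure_rate=0.3,
                    downtime=lambda: random.lognormvariate(0, 1.0)):
    remaining = random.expovariate(mu)        # required service time
    t = 0.0
    while remaining > 0:
        to_failure = random.expovariate(failure_rate)
        work = min(remaining, to_failure)
        t += work
        remaining -= work
        if remaining > 0:                     # breakdown before finishing; resume later
            t += downtime()
    return t

samples = [completion_time() for _ in range(10_000)]
print("mean completion time:", sum(samples) / len(samples))
```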
Learning to Predict Code Review Completion Time In Modern Code Review
Modern Code Review (MCR) is being adopted as a common practice in both open-source and proprietary projects. MCR is a widely acknowledged quality assurance practice that allows early detection of defects as well as poor coding practices, and it brings several other benefits such as knowledge sharing, team awareness, and collaboration. For a successful review process, peer reviewers should perform their review tasks promptly while providing relevant feedback about the code change being reviewed. In practice, however, code reviews can experience significant delays due to various socio-technical factors, which can affect project quality and cost. Moreover, existing MCR frameworks lack tool support to help developers estimate the time required to complete a code review before accepting or declining a review request. In this paper, we build and validate an automated approach to predict code review completion time in the context of MCR. We believe that its predictions can improve developer engagement by raising awareness of potential delays while reviews are in progress. To this end, we formulate the prediction of code review completion time as a learning problem. In particular, we propose a framework based on regression machine learning (ML) models over 69 features stemming from 8 dimensions to (i) effectively estimate code review completion time and (ii) investigate the main factors influencing it. We conduct an empirical study on more than 280K code reviews spanning five projects hosted on Gerrit. Results indicate that the ML models significantly outperform baseline approaches, with a relative improvement ranging from 7% to 49%. Furthermore, our experiments show that features related to the date of the code review request, the previous activities of the owner and reviewers, and the history of their interactions are the most important. Our approach can further engage the change owner and reviewers by raising their awareness of potential delays based on the predicted code review completion time.
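An illustrative sketch in the spirit of the framework described above: a generic regression model predicting review completion time from numeric features. The 69 real features across 8 dimensions (request date, owner and reviewer activity, interaction history, etc.) and the Gerrit data are replaced by synthetic placeholders here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((2000, 10))                                  # hypothetical review features
y = 24 * X[:, 0] + 6 * X[:, 1] + rng.normal(0, 1, 2000)     # synthetic "hours to complete"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (hours):", mean_absolute_error(y_te, model.predict(X_te)))
```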