Search Results
866 results for "integer-only inference"
Efficient Integer Quantization for Compressed DETR Models
The Transformer-based object detection model DETR has powerful feature extraction and recognition capabilities, but its high computational and storage requirements limit its deployment on resource-constrained devices. To address this, we first replace the ResNet-50 backbone in DETR with Swin-T, unifying the backbone with the Transformer encoder and decoder under a single Transformer processing paradigm. On this basis, we propose an inference scheme based entirely on integer arithmetic, which effectively serves as a data compression method that reduces memory occupation and computational complexity. Unlike previous approaches that quantize only the linear layers of DETR, we further apply integer approximations to all non-linear layers (e.g., Sigmoid, Softmax, LayerNorm, GELU), so that the entire inference process executes in the integer domain. Experimental results show that our method reduces computation and storage to 6.3% and 25% of the original model, respectively, while average accuracy decreases by only 1.1%, validating the method as an efficient and hardware-friendly solution for object detection.
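
As a rough illustration of what "inference based entirely on integers" means, the sketch below quantizes a linear layer symmetrically to signed integers and accumulates in int32, folding the float scales in only once at the end. This is a generic scheme for illustration, not the paper's exact method; all names are ours.

    import numpy as np

    def quantize(x, n_bits=8):
        # Symmetric per-tensor quantization: map floats to signed integers.
        scale = np.max(np.abs(x)) / (2 ** (n_bits - 1) - 1)
        return np.round(x / scale).astype(np.int32), scale

    def int_linear(q_x, s_x, q_w, s_w):
        # All arithmetic is integer; the float scales are applied once,
        # after the int32 accumulation, to dequantize the result.
        return q_x @ q_w.T, s_x * s_w

    rng = np.random.default_rng(0)
    x, w = rng.standard_normal((4, 16)), rng.standard_normal((8, 16))
    q_x, s_x = quantize(x)
    q_w, s_w = quantize(w)
    acc, s_out = int_linear(q_x, s_x, q_w, s_w)
    print(np.abs(acc * s_out - x @ w.T).max())  # small quantization error

The paper's contribution goes further, replacing the non-linear layers (Softmax, LayerNorm, GELU, Sigmoid) with integer approximations as well, so no float path like the final rescaling above is needed mid-network.
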
Incorporating Competence Sets of Decision Makers by Deduction Graphs
This paper proposes an optimization model for incorporating the competence sets of group decision makers so as to maximize the total benefit of the whole group. The incorporation model is formulated as finding a deduction graph linking the nodes of existing competencies to the nodes of desired competencies. Compared with other methods for competence-set problems (Yu and Zhang 1991, Li and Yu 1994, and Shi and Yu 1996), the proposed model can handle multiple decision makers; in addition, it allows the network to be cyclic and to contain compound nodes.
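
To make the deduction idea concrete, here is a toy forward-chaining sketch: rules derive new competencies from sets of already-held ones, and we check whether the desired competencies are reachable. The rule encoding and competence names are entirely hypothetical; the paper's model additionally minimizes acquisition cost over a deduction graph with compound nodes, which a closure computation alone does not capture.

    # Each rule: (set of premise competencies, derived competency).
    rules = [({"algebra", "logic"}, "modeling"),
             ({"modeling", "software"}, "simulation")]

    def closure(have, rules):
        # Repeatedly fire any rule whose premises are already held.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= have and conclusion not in have:
                    have = have | {conclusion}
                    changed = True
        return have

    have = {"algebra", "logic", "software"}
    print(closure(have, rules) >= {"simulation"})  # True: goal reachable
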
Designing a sustainable closed-loop supply chain network considering lateral resupply and backup suppliers using fuzzy inference system
Sustainability is a key factor in transforming traditional supply chain networks into modern ones. This study considers, for the first time, the simultaneous impacts of backup suppliers and lateral transshipment/resupply on the design of a Sustainable Closed-Loop Supply Chain Network (SCLSCN), with the aim of decreasing shortages that may occur while produced goods move through the network. To this end, a fuzzy multi-objective mixed-integer linear programming model is proposed to design an efficient and resilient SCLSCN. The concept of the circular economy is also studied to reduce environmental effects. The study aims to optimize total and environmental costs, including energy consumption and pollution emissions, while increasing job opportunities. A demand-uncertainty component is included to represent reality more closely; given the importance of demand, this parameter is estimated with a Fuzzy Inference System (FIS) and fed into the proposed mathematical model. A fuzzy robust optimization approach is then applied in a fuzzy-set environment. The model is tackled by a Multi-Choice Goal Programming approach with Utility Function (MCGP-UF) so that it can be solved in a timely manner, and the equivalent auxiliary crisp model converts the multi-objective function into a single objective. The proposed model is tested on a case study from the tire industry in terms of costs, environmental impacts, and social effects. The results confirm that lateral resupply and backup suppliers can considerably decrease total costs and reduce shortages in the designed SCLSCN. Finally, sensitivity analysis on some crucial parameters is conducted, and future research directions are discussed.
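
For readers unfamiliar with how an FIS turns a signal into a demand estimate, below is a minimal single-input Mamdani-style sketch with triangular membership functions and centroid defuzzification. The rule base, membership breakpoints, and demand range are invented for illustration and bear no relation to the paper's calibrated system.

    import numpy as np

    def tri(x, a, b, c):
        # Triangular membership function peaking at b.
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def estimate_demand(market_signal):
        # Hypothetical rules: low signal -> low demand, and so on.
        mu_low = tri(market_signal, 0.0, 0.25, 0.5)
        mu_med = tri(market_signal, 0.25, 0.5, 0.75)
        mu_high = tri(market_signal, 0.5, 0.75, 1.0)
        y = np.linspace(0, 1000, 501)  # demand universe of discourse
        out = np.maximum.reduce([np.minimum(mu_low, tri(y, 0, 200, 400)),
                                 np.minimum(mu_med, tri(y, 300, 500, 700)),
                                 np.minimum(mu_high, tri(y, 600, 800, 1000))])
        return (y * out).sum() / out.sum()  # centroid defuzzification

    print(round(estimate_demand(0.6)))  # crisp demand estimate
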
Cycle-configuration descriptors: a novel graph-theoretic approach to enhancing molecular inference
Inference of molecules with desired activities/properties is one of the key and challenging issues in cheminformatics and bioinformatics. For that purpose, our research group has recently developed a state-of-the-art framework, mol-infer, for molecular inference. The framework first constructs a prediction function for a fixed property using machine learning models; this function is then simulated by mixed-integer linear programming to infer desired molecules. The accuracy of the framework relies heavily on the representation power of the descriptors. In this study, we highlight a typical class of non-isomorphic chemical graphs with markedly different property values that cannot be distinguished by the standard “two-layered (2L) model” of mol-infer. To address this distinguishability problem, we propose a novel family of descriptors, named cycle-configuration (CC), which captures the notion of ortho/meta/para patterns in aromatic rings, something the framework could not express before. Extensive computational experiments show that, with the new descriptors, we can construct prediction functions with similar or better performance for all 44 tested chemical properties (27 regression and 17 classification datasets) compared with our previous studies, confirming the effectiveness of the CC descriptors. For inference, we also provide a system of linear constraints that formulates the CC descriptors, and we demonstrate that a chemical graph with up to 50 non-hydrogen vertices can be inferred within a practical time frame. Scientific contribution: this study proposes a new family of descriptors, cycle-configuration (CC), for the molecular inference framework mol-infer. Computational experiments demonstrate that incorporating CC descriptors into the 2L model (the 2L+CC model) improves the performance of the prediction function in many cases, and the accompanying MILP formulation can infer chemical graphs with up to 50 non-hydrogen atoms within a few minutes.
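
On a benzene ring, the ortho/meta/para notion that the CC descriptors capture reduces to the cyclic distance between two substituted positions. The toy classifier below uses our own position indexing, not mol-infer's descriptor encoding.

    def ring_pattern(i, j, ring_size=6):
        # Cyclic distance between two substituted ring positions.
        d = min(abs(i - j), ring_size - abs(i - j))
        return {1: "ortho", 2: "meta", 3: "para"}.get(d, f"distance-{d}")

    # Positions 0..5 around a benzene ring (hypothetical indexing).
    print(ring_pattern(0, 1))  # ortho
    print(ring_pattern(0, 2))  # meta
    print(ring_pattern(0, 3))  # para
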
Rank-Based Copula-Adjusted Mann–Kendall (R-CaMK)—A Copula–Vine Framework for Trend Detection and Sensor Selection in Spatially Dependent Environmental Networks
A Rank-Based Copula-Adjusted Mann–Kendall (R-CaMK) test is proposed, with an end-to-end mathematical and computational framework that integrates rank-based multivariate dependence modelling (regular vines where the data permit, with a Gaussian-copula fallback otherwise), a parametric spatial bootstrap for calibrated Mann–Kendall inference, and integer programming for budgeted sensor selection. At each site, the deterministic trend is removed, AR(1) margins are fitted, and residuals are transformed to ranks; the joint rank structure is modelled via R-vines or a Gaussian copula. Spatially coherent null series are simulated from the fitted model to estimate Var(S) for the Mann–Kendall S-statistic and to compute empirical p-values. A detection score w_j is defined, and an integer linear programme (ILP) is solved to select sensors under cost/budget constraints. Simulation experiments show improved Type-I error control and realistic power estimation relative to standard corrections; an application to seven long annual-maximum-flow records in New South Wales demonstrates calibrated inference and operational selection decisions.
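
The Mann–Kendall S statistic itself is a simple pairwise sign sum. The sketch below computes S and an empirical p-value against simulated null series; we draw i.i.d. nulls for brevity, whereas the paper simulates spatially coherent nulls from the fitted copula model.

    import numpy as np

    def mk_S(x):
        # Mann-Kendall S: sum of signs over all ordered pairs.
        x = np.asarray(x)
        return int(sum(np.sign(x[j] - x[i])
                       for i in range(len(x)) for j in range(i + 1, len(x))))

    def empirical_p(x, null_sims):
        # Two-sided empirical p-value of |S| against simulated null series.
        s_obs = abs(mk_S(x))
        s_null = np.array([abs(mk_S(sim)) for sim in null_sims])
        return (1 + (s_null >= s_obs).sum()) / (1 + len(s_null))

    rng = np.random.default_rng(1)
    x = np.arange(30) * 0.05 + rng.standard_normal(30)  # trending series
    sims = rng.standard_normal((500, 30))               # i.i.d. stand-in nulls
    print(empirical_p(x, sims))
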
A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems
Szeliski et al. published an influential study in 2006 on energy minimization methods for Markov random fields. That study provided valuable insights into choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems to be solved have changed significantly. In particular, today's models often include higher-order interactions, flexible connectivity structures, large label spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of more than 27 state-of-the-art optimization techniques on a corpus of 2453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results on runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging model types our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.
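
For chain-structured pairwise models, which are the easy end of the spectrum studied here, exact energy minimization is a min-sum dynamic program (Viterbi). A toy sketch, unrelated to the OpenGM 2 API:

    import numpy as np

    def chain_min_energy(unary, pairwise):
        # unary: (n, L) per-node label costs; pairwise: (L, L) neighbor costs.
        n, L = unary.shape
        cost = unary[0].copy()
        back = np.zeros((n, L), dtype=int)
        for t in range(1, n):
            total = cost[:, None] + pairwise   # cost of each label transition
            back[t] = total.argmin(axis=0)     # best predecessor per label
            cost = total.min(axis=0) + unary[t]
        labels = [int(cost.argmin())]
        for t in range(n - 1, 0, -1):          # backtrack the optimal labeling
            labels.append(int(back[t][labels[-1]]))
        return cost.min(), labels[::-1]

    rng = np.random.default_rng(2)
    E, labeling = chain_min_energy(rng.random((5, 3)), rng.random((3, 3)))
    print(E, labeling)

The hard instances in the study (higher-order terms, loopy connectivity) have no such polynomial-time recursion, which is exactly where the polyhedral and integer programming solvers come in.
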
PLEACH: a new heuristic algorithm for pure parsimony haplotyping problem
Haplotype inference is an important issue in computational biology owing to its applications in diagnosing and treating genetic diseases such as diabetes, Alzheimer's disease, and heart defects. Different criteria exist for choosing a solution among the alternatives; parsimony is one of the most important, and under it the problem is known as the Pure Parsimony Haplotyping (PPH) problem. Approaches to PPH fall into two groups: exact and non-exact. Exact approaches often model the problem as a Mixed Integer Linear Programming (MILP) problem. Although these models find the optimal solution for small instances in reasonable time, the NP-hardness of PPH makes them ineffective on very large instances; this deficiency is compensated by non-exact algorithms. In this paper, we present a non-exact algorithm for large PPH instances based on the divide-and-conquer technique. The algorithm first divides the problem into small sub-problems, which are solved by one of the previous exact approaches, and finally combines the sub-problem solutions by solving an MILP. The MILPs that arise, both for the sub-problems and for combining their solutions, are small enough to be solved rapidly. The performance of the algorithm is evaluated on real and simulated instances, in comparison with two well-known methods, PHASE and WinHap2.
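
The core feasibility check behind any PPH formulation is whether a pair of haplotypes resolves a genotype. A sketch using the common 0/1/2 genotype coding (0/1 homozygous, 2 heterozygous); the encoding choice is ours, and the parsimony objective, minimizing the number of distinct haplotypes covering all genotypes, is what the MILPs optimize on top of this check.

    def explains(h1, h2, genotype):
        # Does the haplotype pair (h1, h2) resolve the genotype?
        for a, b, g in zip(h1, h2, genotype):
            if g == 2:
                if a == b:            # heterozygous site needs differing alleles
                    return False
            elif not (a == b == g):   # homozygous site: both alleles equal g
                return False
        return True

    print(explains([0, 1, 0], [0, 1, 1], [0, 1, 2]))  # True
    print(explains([0, 1, 0], [0, 0, 1], [0, 1, 2]))  # False
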
LARGE SAMPLE PROPERTIES OF MATCHING FOR BALANCE
Matching methods are widely used for causal inference in observational studies. Of these methods, nearest neighbor matching is arguably the most popular. However, nearest neighbor matching does not, in general, yield an average treatment effect estimator that is consistent at the √n rate. Are matching methods not √n-consistent in general? In this paper, we examine a recent class of matching methods that use integer programming to directly target aggregate covariate balance, in addition to finding close neighbor matches. We show that under suitable conditions, these methods can yield simple estimators that are √n-consistent and asymptotically optimal.
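
Aggregate covariate balance, the quantity these integer-programming matching methods target directly, is commonly measured by absolute standardized mean differences. A generic illustration of the balance measure, not the authors' IP formulation:

    import numpy as np

    def balance_gap(X_treated, X_controls):
        # Absolute standardized mean difference per covariate.
        diff = X_treated.mean(axis=0) - X_controls.mean(axis=0)
        pooled_sd = np.sqrt((X_treated.var(axis=0) +
                             X_controls.var(axis=0)) / 2)
        return np.abs(diff) / pooled_sd

    rng = np.random.default_rng(3)
    Xt = rng.standard_normal((50, 4)) + 0.2  # treated units, shifted means
    Xc = rng.standard_normal((50, 4))        # candidate matched controls
    print(balance_gap(Xt, Xc))  # the IP would select controls to drive
                                # these gaps toward zero
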
Optimal a priori balance in the design of controlled experiments
We develop a unified theory of designs for controlled experiments that balance baseline covariates a priori (before treatment and before randomization) using the framework of minimax variance and a new method called kernel allocation. We show that any notion of a priori balance must go hand in hand with a notion of structure, since with no structure on the dependence of outcomes on baseline covariates complete randomization (no special covariate balance) is always minimax optimal. Restricting the structure of dependence, either parametrically or non-parametrically, gives rise to certain covariate imbalance metrics and optimal designs. This recovers many popular imbalance metrics and designs previously developed ad hoc, including randomized block designs, pairwise-matched allocation and rerandomization. We develop a new design method called kernel allocation based on the optimal design when structure is expressed by using kernels, which can be parametric or non-parametric. Relying on modern optimization methods, kernel allocation, which ensures nearly perfect covariate balance without biasing estimates under model misspecification, offers sizable advantages in precision and power as demonstrated in a range of real and synthetic examples. We provide strong theoretical guarantees on variance, consistency and rates of convergence and develop special algorithms for design and hypothesis testing.
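
Rerandomization, one of the designs this theory recovers, is easy to sketch: redraw the treatment assignment until the Mahalanobis imbalance between arms falls below a threshold. The threshold and sample sizes below are illustrative only, and this is not the kernel allocation method itself.

    import numpy as np

    def rerandomize(X, threshold=1.0, rng=None):
        # Accept an assignment only if arm means are close in Mahalanobis
        # distance; otherwise redraw.
        rng = rng or np.random.default_rng()
        n = len(X)
        cov_inv = np.linalg.inv(np.cov(X.T))
        while True:
            z = rng.permutation(n) < n // 2  # exactly half treated
            d = X[z].mean(axis=0) - X[~z].mean(axis=0)
            if d @ cov_inv @ d < threshold:
                return z

    rng = np.random.default_rng(4)
    X = rng.standard_normal((40, 3))  # baseline covariates
    z = rerandomize(X, threshold=0.2, rng=rng)
    print(z.sum(), "treated of", len(X))
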
Discrete Optimization for Interpretable Study Populations and Randomization Inference in an Observational Study of Severe Sepsis Mortality
Motivated by an observational study of the effect of hospital ward versus intensive care unit admission on severe sepsis mortality, we develop methods to address two common problems in observational studies: (1) when there is a lack of covariate overlap between the treated and control groups, how to define an interpretable study population wherein inference can be conducted without extrapolating with respect to important variables; and (2) how to use randomization inference to form confidence intervals for the average treatment effect with binary outcomes. Our solution to problem (1) incorporates existing suggestions in the literature while yielding a study population that is easily understood in terms of the covariates themselves, and can be solved using an efficient branch-and-bound algorithm. We address problem (2) by solving a linear integer program that finds the worst-case variance of the average treatment effect over values of the unobserved potential outcomes compatible with the null hypothesis. Our analysis finds no evidence for a difference between the 60-day mortality rates if all individuals were admitted to the ICU and if all patients were admitted to the hospital ward, among less severely ill patients and among patients with cryptic septic shock. We implement our methodology in R, providing scripts in the supplementary material.
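
The randomization-inference ingredient can be illustrated with a permutation test for a difference in mortality proportions under the sharp null of no effect; the paper's worst-case-variance integer program builds on this idea but is beyond a few lines. Data and assignment below are synthetic.

    import numpy as np

    def permutation_p(outcomes, treated, n_perm=5000, rng=None):
        # Two-sided permutation test under the sharp null of no effect.
        rng = rng or np.random.default_rng()
        outcomes = np.asarray(outcomes)
        treated = np.asarray(treated, bool)
        obs = outcomes[treated].mean() - outcomes[~treated].mean()
        hits = 0
        for _ in range(n_perm):
            z = rng.permutation(treated)  # re-randomize the assignment
            stat = outcomes[z].mean() - outcomes[~z].mean()
            hits += abs(stat) >= abs(obs)
        return (1 + hits) / (1 + n_perm)

    rng = np.random.default_rng(5)
    y = rng.integers(0, 2, 100)    # binary 60-day mortality (synthetic)
    z = rng.permutation(100) < 50  # hypothetical ICU assignment
    print(permutation_p(y, z, rng=rng))
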