Catalogue Search | MBRL
Explore the vast range of titles available.
99 result(s) for "Chekuri, Chandra"
Submodular Function Maximization via the Multilinear Relaxation and Contention Resolution Schemes
2014
We consider the problem of maximizing a nonnegative submodular set function $f:2^N \rightarrow \mathbb{R}_+$ over a ground set $N$ subject to a variety of packing-type constraints including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular, when $f$ may be a nonmonotone function. Our algorithms are based on (approximately) maximizing the multilinear extension $F$ of $f$ over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully, it has been limited in some important ways. We overcome these limitations as follows. First, we give constant factor approximation algorithms to maximize $F$ over a downward-closed polytope $P$ described by an efficient separation oracle. Previously this was known only for monotone functions. For nonmonotone functions, a constant factor was known only when the polytope was either the intersection of a fixed number of knapsack constraints or a matroid polytope. Second, we show that contention resolution schemes are an effective way to round a fractional solution, even when $f$ is nonmonotone. In particular, contention resolution schemes for different polytopes can be combined to handle the intersection of different constraints. Via linear programming duality we show that a contention resolution scheme for a constraint is related to the correlation gap of weighted rank functions of the constraint. This leads to an optimal contention resolution scheme for the matroid polytope. Our results provide a broadly applicable framework for maximizing linear and submodular functions subject to independence constraints. We give several illustrative examples. Contention resolution schemes may find other applications.
Journal Article
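The multilinear extension $F$ used in the abstract above is defined by $F(x) = \mathbb{E}[f(R(x))]$, where $R(x)$ includes each element $i$ independently with probability $x_i$; since the expectation has exponentially many terms, it is typically estimated by sampling. A minimal Python sketch on a toy coverage function (the function and instance are illustrative, not from the paper):

```python
import random

def multilinear_extension_estimate(f, x, samples=2000, seed=0):
    """Estimate F(x) = E[f(R(x))], where R(x) contains element i
    independently with probability x[i]."""
    rng = random.Random(seed)
    n = len(x)
    total = 0.0
    for _ in range(samples):
        R = {i for i in range(n) if rng.random() < x[i]}
        total += f(R)
    return total / samples

# Toy submodular function: coverage of a small universe (hypothetical).
sets = [{0, 1}, {1, 2}, {2, 3}]
def coverage(S):
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

# At an integral point, F coincides with f on the corresponding set.
print(multilinear_extension_estimate(coverage, [1.0, 0.0, 1.0]))  # 4.0
```

At a fractional point such as $x = (0.5, 0.5, 0.5)$ the estimate converges to the true expectation as the sample count grows.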
Maximizing a Monotone Submodular Function Subject to a Matroid Constraint
by Calinescu, Gruia; Chekuri, Chandra; Pál, Martin
in Algorithms, Approximation, Assignment problem
2011
Let $f:2^X \rightarrow \mathbb{R}_+$ be a monotone submodular set function, and let $(X,\mathcal{I})$ be a matroid. We consider the problem $\max_{S \in \mathcal{I}} f(S)$. It is known that the greedy algorithm yields a $1/2$-approximation [M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey, Math. Programming Stud., no. 8 (1978), pp. 73-87] for this problem. For certain special cases, e.g., $\max_{|S| \leq k} f(S)$, the greedy algorithm yields a $(1-1/e)$-approximation. It is known that this is optimal both in the value oracle model (where the only access to $f$ is through a black box returning $f(S)$ for a given set $S$) [G. L. Nemhauser and L. A. Wolsey, Math. Oper. Res., 3 (1978), pp. 177-188] and for explicitly posed instances assuming $P \neq NP$ [U. Feige, J. ACM, 45 (1998), pp. 634-652]. In this paper, we provide a randomized $(1-1/e)$-approximation for any monotone submodular function and an arbitrary matroid. The algorithm works in the value oracle model. Our main tools are a variant of the pipage rounding technique of Ageev and Sviridenko [J. Combin. Optim., 8 (2004), pp. 307-328], and a continuous greedy process that may be of independent interest. As a special case, our algorithm implies an optimal approximation for the submodular welfare problem in the value oracle model [J. Vondrák, Proceedings of the 38th ACM Symposium on Theory of Computing, 2008, pp. 67-74]. As a second application, we show that the generalized assignment problem (GAP) is also a special case; although the reduction requires $|X|$ to be exponential in the original problem size, we are able to achieve a $(1-1/e-o(1))$-approximation for GAP, simplifying previously known algorithms. Additionally, the reduction enables us to obtain approximation algorithms for variants of GAP with more general constraints.
Journal Article
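The greedy algorithm cited in the abstract above, for the special case $\max_{|S| \leq k} f(S)$, repeatedly adds the element with the largest marginal gain. A minimal sketch on a toy coverage instance (illustrative only; the paper's continuous greedy and pipage rounding are more involved):

```python
def greedy_cardinality(f, ground, k):
    """Greedy for max f(S) s.t. |S| <= k: a (1 - 1/e)-approximation
    when f is monotone submodular (Nemhauser-Wolsey-Fisher)."""
    S = set()
    for _ in range(k):
        best, gain = None, 0.0
        for e in ground - S:
            g = f(S | {e}) - f(S)  # marginal gain of adding e
            if g > gain:
                best, gain = e, g
        if best is None:  # no element has positive marginal gain
            break
        S.add(best)
    return S

# Hypothetical coverage instance: pick k = 2 of the sets below.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_cardinality(f, set(sets), 2))  # {0, 2}, covering 5 elements
```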
A Polynomial Time Approximation Scheme for the Multiple Knapsack Problem
by Chekuri, Chandra; Khanna, Sanjeev
in Algorithmics. Computability. Computer arithmetics, Algorithms, Applied sciences
2005
The multiple knapsack problem (MKP) is a natural and well-known generalization of the single knapsack problem and is defined as follows. We are given a set of $n$ items and $m$ bins (knapsacks) such that each item $i$ has a profit $p(i)$ and a size $s(i)$, and each bin $j$ has a capacity $c(j)$. The goal is to find a subset of items of maximum profit such that they have a feasible packing in the bins. MKP is a special case of the generalized assignment problem (GAP) where the profit and the size of an item can vary based on the specific bin that it is assigned to. GAP is APX-hard and a 2-approximation for it is implicit in the work of Shmoys and Tardos [Math. Program. A, 62 (1993), pp. 461-474]; thus far, this was also the best known approximation for MKP. The main result of this paper is a polynomial time approximation scheme (PTAS) for MKP. Apart from its inherent theoretical interest as a common generalization of the well-studied knapsack and bin packing problems, it appears to be the strongest special case of GAP that is not APX-hard. We substantiate this by showing that slight generalizations of MKP are APX-hard. Thus our results help demarcate the boundary at which instances of GAP become APX-hard. An interesting aspect of our approach is a PTAS-preserving reduction from an arbitrary instance of MKP to an instance with $O(\log n)$ distinct sizes and profits.
Journal Article
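As context for the abstract above, a simple first-fit baseline for MKP packs items in order of profit density into the first bin they fit in. This is only a heuristic sketch on a hypothetical instance, not the PTAS of the paper:

```python
def greedy_mkp(items, capacities):
    """First-fit by profit density: a simple baseline for the multiple
    knapsack problem (heuristic sketch, not the paper's PTAS)."""
    remaining = list(capacities)
    packing = {}  # item index -> bin index
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    for i in order:
        profit, size = items[i]
        for j, cap in enumerate(remaining):
            if size <= cap:       # first bin with enough residual capacity
                remaining[j] -= size
                packing[i] = j
                break
    return packing, sum(items[i][0] for i in packing)

# items given as (profit, size); two bins of capacity 10 (toy instance).
items = [(60, 10), (100, 9), (120, 8)]
packing, profit = greedy_mkp(items, [10, 10])
print(profit)  # 220: items of profit 120 and 100 are packed
```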
Incremental Clustering and Dynamic Information Retrieval
by Feder, Tomas; Motwani, Rajeev; Charikar, Moses
in Algorithmics. Computability. Computer arithmetics, Algorithms, Applied sciences
2004
Motivated by applications such as document and image classification in information retrieval, we consider the problem of clustering dynamic point sets in a metric space. We propose a model called incremental clustering which is based on a careful analysis of the requirements of the information retrieval application, and which should also be useful in other applications. The goal is to efficiently maintain clusters of small diameter as new points are inserted. We analyze several natural greedy algorithms and demonstrate that they perform poorly. We propose new deterministic and randomized incremental clustering algorithms which have a provably good performance, and which we believe should also perform well in practice. We complement our positive results with lower bounds on the performance of incremental algorithms. Finally, we consider the dual clustering problem where the clusters are of fixed diameter, and the goal is to minimize the number of clusters.
Journal Article
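A doubling-style scheme is one standard way to maintain at most $k$ clusters incrementally: a point far from all current centers opens a new center, and when too many centers accumulate, the radius threshold is doubled and nearby centers are merged. The sketch below is a simplified illustration of that idea only, not the paper's algorithms (which carry provable guarantees this simplification may not):

```python
def incremental_kcenter(points, k, dist):
    """Simplified doubling-style incremental clustering sketch:
    keep at most k centers while points arrive one by one."""
    centers, r = [], 0.0
    for p in points:
        if not centers:
            centers.append(p)
            continue
        d = min(dist(p, c) for c in centers)
        if d > 2 * r:          # p is far from every center: open a new one
            centers.append(p)
        while len(centers) > k:
            # double the threshold and merge centers within r of each other
            r = 2 * r if r > 0 else d
            merged = []
            for c in centers:
                if all(dist(c, m) > r for m in merged):
                    merged.append(c)
            centers = merged
    return centers, r

# Toy 1-D stream (hypothetical instance).
points = [0.0, 1.0, 10.0, 11.0, 20.0]
centers, r = incremental_kcenter(points, 2, lambda a, b: abs(a - b))
print(len(centers) <= 2)  # True
```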
The All-or-Nothing Multicommodity Flow Problem
by Chekuri, Chandra; Shepherd, F. Bruce; Khanna, Sanjeev
in Admission control, Algorithms, Approximation
2013
We consider the all-or-nothing multicommodity flow problem in general graphs. We are given a capacitated undirected graph $G=(V,E,u)$ and a set of $k$ node pairs $s_1t_1, s_2t_2, \ldots, s_kt_k$. Each pair has a unit demand. A subset $S$ of $\{1,2,\ldots,k\}$ is routable if there is a multicommodity flow in $G$ that simultaneously sends one unit of flow between $s_i$ and $t_i$ for each $i$ in $S$. Note that this differs from the edge-disjoint path problem (EDP) in that we do not insist on integral flows for the pairs. The objective is to find a maximum routable subset $S$. When $G$ is a capacitated tree, the problem already generalizes $b$-matchings, and even in this case it is NP-hard and APX-hard to approximate. For trees, a $2$-approximation is known for the cardinality case and a $4$-approximation for the weighted case. In this paper we show that the natural linear programming relaxation for the all-or-nothing flow problem has a polylogarithmic integrality gap in general undirected graphs. This is in sharp contrast to EDP, where the gap is known to be $\Theta(\sqrt{n})$; this ratio is also the best approximation ratio currently known for EDP. Our algorithm extends to the case where each pair $s_it_i$ has a demand $d_i$ associated with it and we need to completely route $d_i$ to get credit for pair $i$; we assume that the maximum demand of the pairs is at most the minimum capacity of the edges. We also consider the online admission control version where pairs arrive online and the algorithm has to decide immediately upon a pair's arrival whether to accept it, and accepted pairs have to be routed. We obtain a randomized algorithm which has a polylogarithmic competitive ratio for maximizing throughput of the accepted requests if it is allowed to violate edge capacities by a $(2+\epsilon)$ factor.
Journal Article
The all-or-nothing flow problem in directed graphs with symmetric demand pairs
by Chekuri, Chandra; Ene, Alina
in Approximation, Calculus of Variations and Optimal Control; Optimization, Collection
2015
We study the approximability of the All-or-Nothing multicommodity flow problem in directed graphs with symmetric demand pairs (SymANF). The input consists of a directed graph $G=(V,E)$ and a collection of (unordered) pairs of nodes $\mathcal{M} = \{s_1t_1, s_2t_2, \ldots, s_kt_k\}$. A subset $\mathcal{M}'$ of the pairs is routable if there is a feasible multicommodity flow in $G$ such that, for each pair $s_it_i \in \mathcal{M}'$, the amount of flow from $s_i$ to $t_i$ is at least one and the amount of flow from $t_i$ to $s_i$ is at least one. The goal is to find a maximum cardinality subset of the given pairs that can be routed. Our main result is a poly-logarithmic approximation with constant congestion for SymANF. We obtain this result by extending the well-linked decomposition framework of Chekuri et al. (2005) to the directed graph setting with symmetric demand pairs. We point out the importance of studying routing problems in this setting and the relevance of our result to future work.
Journal Article
Centrality of trees for capacitated k-center
by Madan, Vivek; Bhaskara, Aditya; Gupta, Shalmoli
in Calculus of Variations and Optimal Control; Optimization, Combinatorics, Full Length Paper
2015
We consider the capacitated $k$-center problem. In this problem we are given a finite set of locations in a metric space and each location has an associated non-negative integer capacity. The goal is to choose (open) $k$ locations (called centers) and assign each location to an open center to minimize the maximum, over all locations, of the distance of the location to its assigned center. The number of locations assigned to a center cannot exceed the center's capacity. The uncapacitated $k$-center problem has a simple tight $2$-approximation from the 80's. In contrast, the first constant factor approximation for the capacitated problem was obtained only recently by Cygan, Hajiaghayi and Khuller, who gave an intricate LP-rounding algorithm that achieves an approximation guarantee in the hundreds. In this paper we give a simple algorithm with a clean analysis and prove an approximation guarantee of $9$. It uses the standard LP relaxation and comes close to settling the integrality gap (after necessary preprocessing), which is narrowed down to either $7$, $8$ or $9$. The algorithm proceeds by first reducing to special tree instances, and then uses our best-possible algorithm to solve such instances. Our concept of tree instances is versatile and applies to natural variants of the capacitated $k$-center problem for which we also obtain improved algorithms. Finally, we give evidence to show that more powerful preprocessing could lead to better algorithms, by giving an approximation algorithm that beats the integrality gap for instances where all non-zero capacities are the same.
Journal Article
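The "simple tight $2$-approximation from the 80's" for the uncapacitated problem referenced in the abstract above is farthest-point traversal (Gonzalez). A minimal sketch (this is the classical uncapacitated heuristic, not the paper's capacitated algorithm):

```python
def farthest_point_kcenter(points, k, dist):
    """Gonzalez's farthest-point heuristic: a 2-approximation for the
    uncapacitated k-center problem."""
    centers = [points[0]]
    while len(centers) < k:
        # open the point farthest from its nearest chosen center
        p = max(points, key=lambda q: min(dist(q, c) for c in centers))
        centers.append(p)
    # covering radius achieved by the chosen centers
    radius = max(min(dist(q, c) for c in centers) for q in points)
    return centers, radius

# Toy planar instance (hypothetical).
points = [(0, 0), (10, 0), (0, 10), (1, 1)]
dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
centers, radius = farthest_point_kcenter(points, 2, dist)
print(centers)  # [(0, 0), (10, 0)]
```

The capacitated version is harder precisely because such a purely distance-driven choice ignores the per-center capacity bound.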
A Graph Reduction Step Preserving Element-Connectivity and Packing Steiner Trees and Forests
2014
Given an undirected graph $G=(V,E)$ and a subset of vertices called terminals $T \subseteq V$, the element-connectivity $\mathrm{elconn}_G(u,v)$ of two terminals $u,v \in T$ is the maximum number of $u$-$v$ paths that are pairwise element-disjoint, that is, disjoint in both edges and nonterminals $V \setminus T$. (Element-connectivity was first (implicitly) defined by Frank, Ibaraki, and Nagamochi in [J. Graph Theory, 17 (1993), pp. 275--281].) (Element-disjoint paths need not be disjoint in terminals.) Hind and Oellermann [Congr. Numer., 113 (1996), pp. 179--204] gave a graph reduction step that preserves the global element-connectivity of the terminals. We show that one can also apply such a reduction step while preserving local connectivity, that is, all the pairwise element-connectivities of the terminals. We illustrate the usefulness of this more general reduction step by giving applications to packing element-disjoint Steiner trees and forests: Given a graph $G$ and disjoint terminal sets $T_1, T_2, \ldots, T_h$, we seek a maximum number of element-disjoint Steiner forests where each forest connects each $T_i$. We prove that if each $T_i$ is $k$-element-connected, then there exist $\Omega(\frac{k}{\log |T| \log h})$ element-disjoint Steiner forests, where $T = \bigcup_i T_i$. If $G$ is planar (or has fixed genus), we show that there exist $\Omega(k)$ Steiner forests. Our proofs are constructive, giving poly-time algorithms to find these forests; these are the first nontrivial algorithms for packing element-disjoint Steiner forests.
Journal Article
On Multidimensional Packing Problems
by Chekuri, Chandra; Khanna, Sanjeev
in Algorithmics. Computability. Computer arithmetics, Algorithms, Applied sciences
2004
We study the approximability of multidimensional generalizations of three classical packing problems: multiprocessor scheduling, bin packing, and the knapsack problem. Specifically, we study the vector scheduling problem, its dual problem, namely, the vector bin packing problem, and a class of packing integer programs. The vector scheduling problem is to schedule n d-dimensional tasks on m machines such that the maximum load over all dimensions and all machines is minimized. The vector bin packing problem, on the other hand, seeks to minimize the number of bins needed to schedule all n tasks such that the maximum load on any dimension across all bins is bounded by a fixed quantity, say, 1. Such problems naturally arise when scheduling tasks that have multiple resource requirements. Finally, packing integer programs capture a core problem that directly relates to both vector scheduling and vector bin packing, namely, the problem of packing a maximum number of vectors in a single bin of unit height. We obtain a variety of new algorithmic as well as inapproximability results for these three problems.
Journal Article
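For the vector bin packing problem described above, a natural first-fit baseline places each d-dimensional task in the first bin whose load stays at most 1 in every dimension. A sketch on a hypothetical instance (a baseline only, not the algorithms analyzed in the paper):

```python
def first_fit_vector_bin_packing(tasks):
    """First-fit for vector bin packing: put each d-dimensional task in
    the first bin whose load remains <= 1 in every dimension."""
    bins = []        # each bin is a running d-dimensional load vector
    assignment = []  # task index -> bin index, in arrival order
    for task in tasks:
        for j, load in enumerate(bins):
            if all(l + t <= 1.0 for l, t in zip(load, task)):
                bins[j] = [l + t for l, t in zip(load, task)]
                assignment.append(j)
                break
        else:  # no open bin fits: open a new one
            bins.append(list(task))
            assignment.append(len(bins) - 1)
    return bins, assignment

# Four 2-dimensional tasks with unit capacity per dimension (toy instance).
tasks = [(0.6, 0.2), (0.5, 0.5), (0.3, 0.7), (0.4, 0.4)]
bins, assignment = first_fit_vector_bin_packing(tasks)
print(len(bins))  # 2
```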