
Abstracts

All talks will take place at room B3.02 (3rd floor, Maths building)


Thursday 10:45-11:30

Anna Adamaszek (University of Warwick)

Generalized Caching

In the generalized caching problem, we have a set of pages and a cache of size k. Each page has a specified size and a fetching cost for loading the page into the cache. The input consists of a sequence of page requests. If a page is not present in the cache at the time it is requested, it has to be loaded into the cache, incurring its fetching cost. We give a randomized O(log k)-competitive online algorithm for the generalized caching problem, improving the previous bound of O(log^2 k). The improved bound is tight.
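As a toy illustration of the cost model only (not of the O(log k)-competitive algorithm from the talk), the following sketch serves a request sequence with a simple, non-competitive eviction heuristic; the function names and the evict-cheapest rule are illustrative assumptions.

```python
def serve(requests, size, cost, k):
    """requests: list of page ids; size/cost: dicts keyed by page id;
    k: cache capacity. Assumes every page fits in the cache on its own."""
    cache = set()
    total = 0
    for p in requests:
        if p in cache:
            continue  # cache hit: no cost
        # make room: evict pages that are cheapest to fetch back later
        while cache and sum(size[q] for q in cache) + size[p] > k:
            victim = min(cache, key=lambda q: cost[q])
            cache.remove(victim)
        cache.add(p)
        total += cost[p]  # fetching cost incurred on a miss
    return total
```

With unit sizes and costs and k = 1, the sequence a, b, a incurs three misses, so the total cost is 3.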


Monday 14:00-14:45

Xiaotie Deng (University of Liverpool)

Solution Concepts in Internet Market Design

While the Internet has been a new venue to create markets for selling products to millions of buyers, a new challenge has been created at the same time to our understanding of the working mechanisms of market making. With the matching market as an example, we explore the solution concept issues that are generated at the new frontiers: auction, equilibrium, envy-free.


Tuesday 14:00-14:45

Michael Dinitz (Weizmann Institute)

Fault-Tolerant Spanners: Better and Simpler

A natural requirement for many distributed structures is fault-tolerance: after some failures in the underlying network, whatever remains of the structure should still be effective for whatever remains of the network. One well-studied class of structures is graph spanners, in which we are given a graph and are asked to find a sparse subgraph that preserves pairwise distances up to some multiplicative factor (the stretch of the spanner). We examine spanners of general graphs that are tolerant to vertex failures, and significantly improve their dependence on the number of faults r for all stretch bounds. For stretch k ≥ 3 we design a simple transformation that converts every k-spanner construction with at most f(n) edges into an r-fault-tolerant k-spanner construction with at most O(r^3 log n) · f(2n/r) edges. Applying this to standard greedy spanner constructions gives r-fault-tolerant k-spanners with O(r^2 n^{1+2/(k+1)}) edges. The previous construction by Chechik, Langberg, Peleg, and Roditty [STOC 2009] depends similarly on n but exponentially on r (approximately like k^r). For the case of k=2 we improve a previous O(r log n)-approximation to an O(log n)-approximation, which is, notably, independent of the number of faults r. Our main technique here is a different (stronger) LP relaxation that uses the knapsack cover constraints.

Joint work with Robert Krauthgamer.


Friday 14:45-15:30

Charilaos Efthymiou (University of Warwick)

On Independent Sets in Random Graphs

The independence number of a sparse random graph G(n,m) of average degree d=2m/n is well known to satisfy (2-eps_d) n ln(d)/d < alpha(G(n,m)) < (2+eps_d) n ln(d)/d with high probability, with eps_d->0 in the limit of large d. Moreover, a trivial greedy algorithm whp finds an independent set of size n ln(d)/d, i.e., about half the maximum size. Yet in spite of 30 years of extensive research, no efficient algorithm has emerged that produces an independent set of size (1+eps)n ln(d)/d for any fixed eps>0 (independent of both d and n).

In this paper we prove that the combinatorial structure of the independent set problem in random graphs undergoes a phase transition as the size k of the independent sets passes the point n ln(d)/d.

Roughly speaking, we prove that independent sets of size k>(1+eps)n ln(d)/d form an intricately ragged landscape, in which local search algorithms are bound to get stuck. We illustrate this phenomenon by providing an exponential lower bound for the Metropolis process, a Markov chain for sampling independent sets.
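For concreteness, a minimal sketch of the Metropolis process on independent sets (the hard-core model) might look as follows; the function name, fugacity parameter lam, and adjacency representation are assumptions for illustration, not the exact chain analysed in the paper.

```python
import random

def metropolis_independent_set(adj, steps, lam=2.0, seed=0):
    """adj: dict mapping each vertex to the set of its neighbours.
    Runs the Metropolis chain for the hard-core model with fugacity lam;
    larger lam biases the chain toward larger independent sets."""
    rng = random.Random(seed)
    verts = list(adj)
    current = set()
    for _ in range(steps):
        v = rng.choice(verts)
        if v in current:
            if rng.random() < 1.0 / lam:      # accept a removal
                current.remove(v)
        elif not (adj[v] & current):          # adding v keeps independence
            if rng.random() < min(1.0, lam):  # accept an addition
                current.add(v)
    return current
```

The chain only ever moves between independent sets, which is exactly why a ragged landscape of local optima can trap it for exponential time.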


Monday 10:30-11:30

Kousha Etessami (University of Edinburgh)

One-Counter MDPs and Stochastic Games

I'll discuss algorithms for, and the complexity of, analysing a class of finitely presented countable-state Markov chains, Markov decision processes, and stochastic games, that arise by adding probabilistic and control/game behavior to classic one-counter automata.

We will see that this model subsumes a recently studied rudimentary model of risk-averse gambling/investment called "solvency games", that it also subsumes a well-studied stochastic model in queueing theory called quasi-birth-death processes, and that it is subsumed by a richer model called "recursive" stochastic games. Some problems that are undecidable for the richer recursive models become tractable in the one-counter setting.

The classic theory of random walks on the integers and sums of i.i.d. random variables play a key role in the analysis of these models, as do other tools from probability theory, in particular martingales.

(This talk describes joint work in a series of papers with a number of colleagues: Brazdil, Brozek, Kucera, Wojtczak, and Yannakakis.)


Tuesday 10:30-11:30

Uriel Feige (Weizmann Institute)

Mechanism Design with Uncertain Inputs

We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word).

For a probabilistic model for uncertainty we show that even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the maximum welfare. To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a fair share of a player, and design mechanisms that are Ω(1)-fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest.

Joint work with Moshe Tennenholtz.


Friday 11:30-12:15

Marcin Jurdziński (University of Warwick)

Algorithms for Solving Parity Games

This talk is a selective survey of algorithms for solving parity games. Several state-of-the-art algorithms for solving parity games are presented, using disparate algorithmic techniques, such as divide-and-conquer, value iteration, local search by strategy improvement, as well as hybrid approaches that dovetail some of the former. While the problem of solving parity games is in NP and co-NP, and also in PLS and PPAD, and hence unlikely to be complete for any of the four complexity classes, no polynomial-time algorithms are known.


Friday 14:00-14:45

Oded Lachish (Birkbeck, University of London)

Matroid Secretary Problem

Imagine that you want to hire a single secretary. For this goal you interview fifty candidates. Ideally you would like to decide whom to hire after the interviews. However due to managerial constraints, after each interview you must make an irrevocable decision whether to hire or not. How can the probability of hiring an optimal candidate be maximized?

The scenario just described is an illustration of the classical secretary problem, which was introduced in the sixties. Although the problem has long been solved, many of its generalizations are still open and receive significant attention. We will discuss ongoing research regarding a specific generalization called the Matroid Secretary Problem. There the candidates are elements of a matroid. As in the classical case, the decision whether to hire is irrevocable. Yet unlike the classical problem, it is possible to hire more than one candidate, as long as all the hired candidates form an independent set. The goal is to maximize the expected value of the candidates hired.
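The classical problem's optimal strategy (observe roughly n/e candidates, then hire the first one better than all of them) is easy to check empirically; the sketch below is a simulation under that stopping rule, with illustrative function names.

```python
import math
import random

def secretary_trial(n, rng):
    """One run of the classical 1/e stopping rule on a random permutation
    of n candidate ranks (higher rank = better candidate).
    Returns True iff the best candidate is hired."""
    ranks = list(range(n))
    rng.shuffle(ranks)
    cutoff = round(n / math.e)               # observe ~n/e candidates
    best_seen = max(ranks[:cutoff]) if cutoff else -1
    for r in ranks[cutoff:]:
        if r > best_seen:                    # first candidate beating the sample
            return r == n - 1                # did we hire exactly the best one?
    return False                             # never hired anyone: failure

rng = random.Random(1)
wins = sum(secretary_trial(50, rng) for _ in range(2000))
# empirical success rate should be close to 1/e ≈ 0.368
```

With fifty candidates, as in the illustration above, the rule succeeds roughly 37% of the time, which no strategy can beat asymptotically.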


Friday 10:30-11:30

Angelica Pachón Pinzón (University of Warwick)

Reconstruction of a Many-dimensional Scenery

We consider a d-dimensional scenery seen along a simple symmetric branching random walk, where at each time step each particle reports the color it is seeing. We show that we can reconstruct the scenery up to equivalence from such a color record. For this we assume that the scenery has at least 2d colors, assigned i.i.d. uniformly at random.


Thursday 11:30-12:15

Daniel Reichman (Weizmann Institute)

The Tale of the Second Layer

TBA


Tuesday 14:45-15:30

Maxim Sviridenko (IBM TJ Watson Research Center)

Preemptive and Non-Preemptive Generalized Min Sum Set Cover

In the (non-preemptive) Generalized Min Sum Set Cover Problem, we are given n ground elements and a collection of subsets S1, ..., Sm of the ground set. Each set S has a positive requirement k(S) that has to be fulfilled. We would like to order all elements of the ground set to minimize the total (weighted) cover time of all sets. The cover time of a set S is defined as the first index j in the ordering such that the first j elements in the ordering contain k(S) elements in S. This problem was introduced by Azar, Gamzu, Yin (2009) with interesting motivations in web page ranking and broadcast scheduling. For this problem, constant approximations are known due to Bansal, Gupta, Krishnaswamy (2010) (factor 485) and Skutella, Williamson (2011) (factor 28).
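The objective of the non-preemptive problem is easy to evaluate for a fixed ordering; the following is a small sketch of that computation, with the function name and argument layout chosen for illustration.

```python
def total_cover_time(order, sets, req, weight=None):
    """Weighted sum of cover times for a non-preemptive ordering.
    order: list of ground elements; sets: dict name -> set of elements;
    req: dict name -> requirement k(S); weight: optional dict of weights."""
    weight = weight or {name: 1 for name in sets}
    position = {e: i + 1 for i, e in enumerate(order)}  # 1-based indices
    total = 0
    for name, S in sets.items():
        # cover time of S = position of the k(S)-th element of S in the order
        hits = sorted(position[e] for e in S)
        total += weight[name] * hits[req[name] - 1]
    return total
```

For example, with the ordering 1, 2, 3, a set {1, 3} with requirement 2 is covered at index 3, since only then do the first j elements contain two of its members.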

We study the version where preemption is allowed. The difference is that elements can be fractionally scheduled (assigned to indices) and a set S is covered at the first time step when k(S) amount of elements in S are scheduled. We give a 2-approximation for this preemptive problem based on a novel linear programming relaxation. This guarantee is tight under the Unique Games Conjecture. We also show that a preemptive schedule can be converted into a non-preemptive one with a factor 6.2 loss in performance. Combining the two results, we obtain a 12.4-approximation for the non-preemptive Generalized Min Sum Set Cover Problem.

Joint work with Sungjin Im and Ruben van der Zwaan.


Monday 10:30-11:30

Troels Bjerre Sørensen (University of Warwick)

Risk when Playing for Broke

When playing a zero-sum game once, a risk neutral player should maximize his expectation by playing a minimax strategy. If the game instead has to be repeated until one of the players is out of money, the variance of the outcome becomes important as well. If the player does not adjust, obliviously playing a minimax strategy, he could turn a sure win into an almost certain loss, even if the minimax strategy is unique. In this talk, we show how to compute strategies that take variance into consideration. This translates directly into rigorous lower bounds on the performance of the strategy in the repeated setting, as well as upper and lower bounds on the performance of any strategy.


Thursday 14:45-15:30

Gilad Tsur (Weizmann Institute)

Sublinear Time Approximation of Image Matching: It's all about the perimeter

In this work we consider the distance between two n x n binary images M1 and M2 when we perform affine transformations on their pixels. Namely, given an affine transformation T, we define the distance with respect to T as the maximum between two values – the portion of pixels in M1 that are not mapped by T to pixels with the same value in M2, and the portion of those in M2 that are not mapped by T^-1 to pixels with the same value in M1. The distance between M1 and M2 is defined as the minimum such distance taken over all affine transformations.
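The distance with respect to a fixed transformation T can be sketched directly from this definition; in the illustration below, T and its inverse are passed as callables, mapped coordinates are rounded to the nearest pixel, and out-of-range pixels are counted as mismatches (the latter two conventions are assumptions made for the sketch).

```python
def mismatch_fraction(src, dst, T):
    """Fraction of pixels of src whose image under T does not carry the
    same value in dst; pixels mapped outside dst count as mismatches."""
    n = len(src)
    bad = 0
    for y in range(n):
        for x in range(n):
            tx, ty = T(x, y)
            tx, ty = round(tx), round(ty)
            if not (0 <= tx < n and 0 <= ty < n) or src[y][x] != dst[ty][tx]:
                bad += 1
    return bad / (n * n)

def distance_wrt(M1, M2, T, T_inv):
    """Distance between binary images M1, M2 with respect to a fixed
    affine map T (with inverse T_inv), per the definition above."""
    return max(mismatch_fraction(M1, M2, T), mismatch_fraction(M2, M1, T_inv))
```

The distance between the images is then the minimum of distance_wrt over all affine maps, and estimating that minimum by querying few pixels is what the sublinear algorithm is about.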

We show that in general there is an Ω(n) lower bound for estimating this distance. However, we find that the number of queries required to give such an approximation is related to the size of the perimeter of the images, a parameter that depends on the number of pixels that are adjacent to pixels of a different value. Having a small perimeter is a typical attribute of natural images. We show that if the perimeter of both images is O(n) we can provide an approximation of the distance between the images within an additive error of eps using a number of queries depending only on eps but not on n. We then give a lower bound on the number of queries required to estimate this distance as a function of the perimeter size. The lower bound implies that when the perimeter size is not O(n) the number of queries must depend on the image size.

This work extends naturally to additional groups of transformations that are often used in the image-processing and vision communities, e.g., similarities. In addition, similar techniques may be useful in dealing with grayscale images. Unlike work done in the past on sublinear algorithms for visual properties, the nature of the problem and the easy adaptation of the algorithm to grayscale images may make this work directly relevant to real world applications.

Joint work with Simon Korman and Daniel Reichman.


Thursday 14:00-14:45

Lutz Warnke (University of Oxford)

Achlioptas Process Phase Transitions are Continuous

It is widely believed that certain simple modifications of the random graph process lead to discontinuous phase transitions. In particular, starting with the empty graph on n vertices, suppose that at each step two pairs of vertices are chosen uniformly at random, but only one pair is joined, namely one minimizing the product of the sizes of the components to be joined. Making explicit an earlier belief of Achlioptas and others, in 2009, Achlioptas, D'Souza and Spencer conjectured that there exists a δ>0 (in fact, δ ≥ 1/2) such that with high probability the order of the largest component `jumps' from o(n) to at least δn in o(n) steps of the process, a phenomenon known as `explosive percolation'.

We give a simple proof that this is not the case. Our result applies to all `Achlioptas processes', and more generally to any process where a fixed number of independent random vertices are chosen at each step, and (at least) one edge between these vertices is added to the current graph, according to any (online) rule.

We also prove the existence and continuity of the limit of the rescaled size of the giant component in a class of such processes, settling a number of conjectures. Intriguing questions remain, however, especially for the product rule described above.
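The product rule described above is straightforward to simulate with a union-find structure; the sketch below tracks the largest component and is an illustration of the process only (function names and the sampling of candidate pairs with possible repeats are assumptions).

```python
import random

def product_rule_run(n, steps, seed=0):
    """Simulate the product rule on n vertices: of two random candidate
    edges per step, add the one minimizing the product of its endpoint
    component sizes. Returns the largest component size after `steps`."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return
        if size[ru] < size[rv]:
            ru, rv = rv, ru                 # union by size
        parent[rv] = ru
        size[ru] += size[rv]

    for _ in range(steps):
        e1 = (rng.randrange(n), rng.randrange(n))
        e2 = (rng.randrange(n), rng.randrange(n))
        prod = lambda e: size[find(e[0])] * size[find(e[1])]
        union(*min(e1, e2, key=prod))       # join the smaller-product pair
    return max(size[find(v)] for v in range(n))
```

Plotting the returned value against the number of steps shows a steep, but (by the result above) continuous, rise of the giant component near the critical point.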


Wednesday 10:30-11:15

Danny Vilenchik (Weizmann Institute)

Constructing Uniquely Realizable Graphs

In the Graph Realization Problem (GRP), one is given a graph G, a set of non-negative edge-weights, and an integer d. The goal is to find, if possible, a realization of G in the Euclidean space R^d, such that the distance between any two adjacent vertices equals the assigned edge weight. The problem has many applications in mathematics and computer science, but is NP-hard when the dimension d is fixed. Characterizing tractable instances of GRP is a classical problem, first studied by Menger in 1931 in the case of a complete graph. We construct two new infinite families of GRP instances whose solution can be approximated up to an arbitrary precision in polynomial time. Both constructions are based on the blow-up of fixed small graphs with large expanders. Our main tool is Connelly's condition in Rigidity Theory, combined with an explicit construction and algebraic calculations of the rigidity stress matrix. One application of our results is a deterministic construction of uniquely k-colorable graphs.

Joint work with Igor Pak.