
Abstracts (Warwick-Weizmann 2019 workshop)

Sayan Bhattacharya (Warwick):

Dynamic algorithms for graph coloring

Abstract:

We design fast dynamic algorithms for proper vertex and edge colorings in a graph undergoing edge insertions and deletions. In the static setting, there are simple linear time algorithms for (Δ + 1)-vertex coloring and (2Δ - 1)-edge coloring in a graph with maximum degree Δ. It is natural to ask if we can efficiently maintain such colorings in the dynamic setting as well. We get the following three results.

  • We present a randomized algorithm which maintains a (Δ+1)-vertex coloring with O(log Δ) expected amortized update time.
  • We present a deterministic algorithm which maintains a (1+o(1))Δ-vertex coloring with O(polylog Δ) amortized update time.
  • We present a simple, deterministic algorithm which maintains a (2Δ - 1)-edge coloring with O(log Δ) worst-case update time.
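
For context, here is a minimal sketch in Python of the simple static greedy algorithm mentioned above (the function and variable names, such as greedy_vertex_coloring and adj, are illustrative and not from the talk): scanning the vertices in any order and giving each one the smallest color not used by its already-colored neighbours always succeeds with Δ + 1 colors, since a vertex of degree at most Δ can be blocked from at most Δ colors. The dynamic algorithms above maintain such colorings under edge insertions and deletions, which this static sketch does not attempt.

```python
# Static greedy (Delta + 1)-vertex coloring: a baseline sketch, not one of the
# dynamic algorithms from the talk. Runs in time linear in |V| + |E|.
def greedy_vertex_coloring(adj):
    """adj: dict mapping each vertex to an iterable of its neighbours."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        # A vertex of degree <= Delta has at most Delta blocked colors,
        # so some color in {0, ..., Delta} is always free.
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Example: a 4-cycle (Delta = 2) receives a proper coloring with <= 3 colors.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(greedy_vertex_coloring(adj))  # e.g. {0: 0, 1: 1, 2: 0, 3: 1}
```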

Joint work with Deeparnab Chakrabarty, Monika Henzinger, and Danupon Nanongkai.


Charis Efthymiou (Warwick):

Stochastic block model and the replica symmetric phase

Abstract:

We consider the community detection problem for the stochastic block model. This is a standard inference problem on networks which has attracted interest from many communities, including computer science, biology, and physics.

The focus of this talk is on information-theoretic lower bounds for inference algorithms for this problem. We provide tight estimates for these bounds and prove a conjecture originating from statistical physics [Decelle et al.: Phys. Rev. E 2011]. Our approach is based on the formalism of so-called diluted mean-field models in physics. These are spin systems whose geometry of interactions is induced by a sparse random graph or hypergraph.

In a path-breaking paper based on the non-rigorous “cavity method,” physicists predicted not only the existence of a replica symmetry breaking phase transition in such models but also sketched a detailed picture of the evolution of the Gibbs measure within the replica symmetric phase and its impact on important problems in combinatorics, computer science and physics [Krzakala et al.: PNAS 2007]. In this talk we make this picture fully rigorous for a broad class of models, encompassing the Potts antiferromagnet on the random graph, the k-XORSAT model and the diluted k-spin model for even k. We prove a correspondence between the replica symmetry breaking phase and the information-theoretic lower bound for the stochastic block model.

This is joint work with A. Coja-Oghlan, N. Jaafari, M. Kang and T. Kapetanopoulos.


Tom Gur (Warwick):

Zero knowledge proofs in a quantum world

Abstract:

The seminal work of Ben-Or et al. (STOC 1988) shows that zero knowledge can be achieved unconditionally by making the physical assumption that spatially-isolated provers are playing independent strategies. In this talk I will discuss new techniques involving coding theory, cryptography, complexity, and sublinear-time algorithms, that allow us to strengthen the foregoing result to hold even in light of quantum mechanics, which tells us that spatially-isolated provers can realise non-local correlated strategies by sharing a quantum entangled state.

We provide the first construction of a zero-knowledge proof system that is sound against quantum entangled provers. Along the way, we introduce a framework that leverages local testability to "lift" classical protocols to quantum protocols in a black-box manner. The talk is directed towards a broad theoretical computer science audience; in particular, no familiarity with quantum computing or cryptography is required.

The talk is based on work presented at FOCS 2018 and QIP 2019, joint with Alessandro Chiesa, Michael Forbes, and Nick Spooner, as well as on forthcoming works.


Shaofeng Jiang (Weizmann):

Coresets for clustering in graphs with bounded treewidth

Abstract:

We consider coresets for the k-median clustering problem on a graph G = (V, E). Let d be the shortest-path metric of G. For a k-subset C of V, the k-median clustering objective is defined as the sum of the distances from all points in V to C. An ε-coreset S is a small weighted (multi)subset of V such that, for every k-subset C, the clustering objective evaluated on S is within a factor of (1 ± ε) of that of V.
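
To make the definitions concrete, here is a small Python sketch (the function names kmedian_cost and is_eps_coreset are mine, not from the talk) of the k-median objective and of the condition a weighted subset must satisfy to be an ε-coreset. It works with an arbitrary distance function rather than the shortest-path metric of a bounded-treewidth graph.

```python
from itertools import combinations

def kmedian_cost(points, centers, dist, weight=None):
    """Sum of (weighted) distances from each point to its nearest center."""
    w = weight or {p: 1.0 for p in points}
    return sum(w[p] * min(dist(p, c) for c in centers) for p in points)

def is_eps_coreset(V, S, w, k, dist, eps):
    """Check the coreset condition by brute force over all k-subsets C of V.
    (Exponential in k; only meant to illustrate the definition.)"""
    for C in combinations(V, k):
        full = kmedian_cost(V, C, dist)
        core = kmedian_cost(S, C, dist, weight=w)
        if not (1 - eps) * full <= core <= (1 + eps) * full:
            return False
    return True

# Toy check on the path metric of a line graph 0-1-2-3-4:
V = [0, 1, 2, 3, 4]
dist = lambda a, b: abs(a - b)
S, w = [0, 2, 4], {0: 2.0, 2: 1.0, 4: 2.0}
print(is_eps_coreset(V, S, w, k=2, dist=dist, eps=0.7))  # True for this coarse eps
```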

We present a sampling-based ε-coreset construction for graphs with bounded treewidth, whose size (the number of distinct elements) is O(k^3/ε^2 · tw(G)). The construction is based on the importance sampling framework proposed by [Feldman and Langberg '11]. This framework essentially reduces the coreset problem to bounding (a variation of) the shattering dimension of the metric space (V, d), and our technical contribution is bounding the shattering dimension for bounded-treewidth graphs.
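
The following is a toy sketch of the generic importance-sampling recipe behind the Feldman-Langberg framework, not the construction from the talk: sample points with probability proportional to their sensitivity and reweight so that the weighted cost is an unbiased estimate of the true cost for every choice of centers. Here the sensitivities are computed by brute force, which is only feasible on tiny inputs; the actual construction uses efficiently computable sensitivity upper bounds, and the number of samples needed is then governed by the shattering dimension, which is where bounded treewidth enters.

```python
import random
from itertools import combinations

def kmedian_cost(points, centers, dist, weight=None):
    w = weight or {p: 1.0 for p in points}
    return sum(w[p] * min(dist(p, c) for c in centers) for p in points)

def sensitivity(p, V, k, dist):
    """Brute-force sensitivity: the largest fraction of the k-median cost
    that point p can be responsible for, over all k-subsets of centers."""
    return max(min(dist(p, c) for c in C) / kmedian_cost(V, C, dist)
               for C in combinations(V, k)
               if kmedian_cost(V, C, dist) > 0)

def sensitivity_sample(V, k, dist, m, seed=0):
    """Draw m points i.i.d. with probability proportional to sensitivity and
    reweight; the weighted cost is then an unbiased estimate for every C."""
    random.seed(seed)
    s = {p: sensitivity(p, V, k, dist) for p in V}
    total = sum(s.values())
    sample = random.choices(list(V), weights=[s[p] for p in V], k=m)
    weights = {}
    for p in sample:
        weights[p] = weights.get(p, 0.0) + total / (m * s[p])
    return list(weights), weights

# Toy usage on the line metric over {0, ..., 9}:
V = list(range(10))
pts, w = sensitivity_sample(V, k=2, dist=lambda a, b: abs(a - b), m=6)
```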

We start with an introduction to the Feldman-Langberg framework, then show how bounded treewidth implies bounded shattering dimension, and conclude with future directions. This talk is based on work in progress.


Marcin Jurdziński (Warwick):

Universal trees and quasi-polynomial bounds for parity games

Abstract:

Universal trees are very basic combinatorial objects: an ordered tree is (n, h)-universal if every ordered tree of height at most h and with at most n leaves can be embedded into it.
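
As an illustration, the Python sketch below counts the leaves of a recursively constructed universal tree along the lines of this line of work, stated here from memory as an assumption rather than as the exact construction in the papers: split the leaf budget in half, recurse on the two halves at full height, and place between them one extra child of the root carrying an (n, h-1)-universal subtree. The function name universal_tree_leaves is mine.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def universal_tree_leaves(n, h):
    """Leaves of a recursively built (n, h)-universal tree: an (n//2, h)-universal
    left part, one extra child carrying an (n, h-1)-universal subtree, and an
    (n//2, h)-universal right part (a sketch of a construction from the
    literature, as recalled)."""
    if n == 0:
        return 0
    if n == 1 or h == 0:
        return 1          # a single root-to-leaf path suffices
    return 2 * universal_tree_leaves(n // 2, h) + universal_tree_leaves(n, h - 1)

# The counts grow quasi-polynomially in n when h = Theta(log n),
# in contrast with the trivial bound of roughly n^h.
for n, h in [(8, 3), (64, 6), (1024, 10)]:
    print(n, h, universal_tree_leaves(n, h))
```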

Parity games play a fundamental role in the theory of logic and automata on infinite words and trees, and its applications to automated verification and synthesis of hardware and software, but they are at least as intriguing from the algorithmic point of view:

  • deciding the winner in them is both in NP and in co-NP, and whether there is a polynomial-time algorithm is a long-standing open problem;
  • examples of hard parity games led to breakthroughs in our understanding of the complexity of policy iteration in Markov Decision Processes (Fearnley, 2010) and of randomized simplex pivoting rules (Friedmann, Hansen, Zwick, 2011); and
  • in a recent breakthrough (Calude, Jain, Khoussainov, Li, and Stephan, 2017), the first quasi-polynomial algorithm was discovered for solving parity games.

We argue that the size of universal trees is intimately linked to the worst-case running times of all currently known quasi-polynomial algorithms for solving parity games. We give nearly matching quasi-polynomial upper and lower bounds for the size of smallest universal trees, suggesting that we may need to develop new algorithmic tools to be able to solve parity games in time better than the current quasi-polynomial state-of-the-art.

The talk is based on several joint works with various subsets of: Wojciech Czerwiński, Laure Daviaud, Nathanaël Fijalkow, Ranko Lazić, Karoliina Lehtinen, Paweł Parys, and Thejaswini K. S.


Ranko Lazić (Warwick):

The reachability problem for Petri nets is not elementary (STOC 2019 best paper)

Abstract:

Petri nets, also known as vector addition systems, are a long-established model of concurrency with extensive applications in the modelling and analysis of hardware, software and database systems, as well as chemical, biological and business processes. The central algorithmic problem for Petri nets is reachability: whether from the given initial configuration there exists a sequence of valid execution steps that reaches the given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification.

Decidability was proved by Mayr in his seminal STOC 1981 work, and the currently best published upper bound, due to Leroux and Schmitz (LICS 2019), is Ackermannian, hence non-primitive recursive. We establish a non-elementary lower bound: the reachability problem requires a tower of exponentials of time and space. Until this work, the best known lower bound had been exponential space, due to Lipton in 1976.

The new lower bound is a major breakthrough for several reasons:

  • it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s;
  • it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi and other areas, which are known to admit reductions from the Petri nets reachability problem, are also not elementary; and
  • it makes obsolete the previously best lower bounds for the reachability problems for two key extensions of Petri nets: with branching and with a pushdown stack.
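
To fix the terminology, here is a minimal Python sketch of the reachability question, phrased for a vector addition system (an equivalent presentation of Petri nets): a configuration is a vector of token counts, and a transition adds an integer vector, firable only when the result stays non-negative. The breadth-first exploration below, with its artificial cap and my own function name, is only an illustration of the problem statement on tiny instances; it is in no sense a decision procedure of the kind discussed in the abstract.

```python
from collections import deque

def reachable(initial, target, transitions, cap=10_000):
    """Naive breadth-first exploration of a vector addition system.

    initial, target: tuples of non-negative ints (token counts per place).
    transitions: list of integer vectors; a transition d can fire at marking m
    if every coordinate of m + d is non-negative.

    Only an illustration: the set of reachable markings can be infinite, so we
    stop after `cap` markings and may then return None ("gave up")."""
    seen = {initial}
    queue = deque([initial])
    while queue and len(seen) <= cap:
        m = queue.popleft()
        if m == target:
            return True
        for d in transitions:
            m2 = tuple(a + b for a, b in zip(m, d))
            if all(x >= 0 for x in m2) and m2 not in seen:
                seen.add(m2)
                queue.append(m2)
    return False if not queue else None

# Example with two places: the first transition moves a token from place 0 to
# place 1, the second consumes a token from place 1 and puts two on place 0.
print(reachable((3, 0), (0, 3), [(-1, 1), (2, -1)]))  # True: fire the first transition three times
```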


Merav Parter (Weizmann):

Low congestion cycle covers and their applications


Havana Rika (Weizmann):

Refined vertex sparsifiers of planar graphs

Abstract:

In this talk, we will study the following version of cut sparsification. Given a large edge-weighted network G with k terminal vertices, compress it into a smaller network H with the same terminals, such that every minimum terminal cut in H approximates the corresponding one in G, up to a factor q ≥ 1 that is called the quality. (The case q = 1 is also known as a mimicking network.) I will present new insights about the structure of minimum terminal cuts, leading to new results on cut sparsifiers of planar graphs.
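
For concreteness, the following Python sketch (with function names of my own choosing) computes minimum terminal cuts by brute force on a tiny edge-weighted graph, and measures the quality of a candidate sparsifier by the worst-case ratio between corresponding minimum terminal cuts, which is one natural way to read the definition above. It is exponential in the number of vertices and terminals and is only meant to pin down the definitions.

```python
from itertools import combinations

def min_terminal_cut(vertices, edges, side_a, side_b):
    """Minimum weight of an edge cut separating terminal set side_a from side_b.
    edges: dict mapping frozenset({u, v}) -> positive weight. Brute force over
    all vertex bipartitions consistent with the terminal bipartition."""
    free = [v for v in vertices if v not in side_a and v not in side_b]
    best = float("inf")
    for r in range(len(free) + 1):
        for extra in combinations(free, r):
            S = set(side_a) | set(extra)
            w = sum(wt for e, wt in edges.items()
                    if len(e & S) == 1)          # edge crosses the cut
            best = min(best, w)
    return best

def quality(G_vertices, G_edges, H_vertices, H_edges, terminals):
    """Worst-case ratio between corresponding minimum terminal cuts of H and G,
    over all bipartitions of the terminals (q = 1 means a mimicking network).
    Assumes every terminal bipartition has a cut of positive weight."""
    q = 1.0
    k = len(terminals)
    for r in range(1, k):
        for A in combinations(terminals, r):
            B = [t for t in terminals if t not in A]
            g = min_terminal_cut(G_vertices, G_edges, A, B)
            h = min_terminal_cut(H_vertices, H_edges, A, B)
            q = max(q, h / g, g / h)
    return q
```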

The first result identifies a subset of the minimum terminal cuts, which we call elementary, that generates all the others. Consequently, H is a cut sparsifier if and only if it preserves all the elementary terminal cuts (up to this factor q).

The second and main result refines the known bounds in terms of γ = γ(G), which is defined as the minimum number of faces that are incident to all the terminals in a planar graph G. In particular, the number of elementary terminal cuts in G is O((2k/γ)^(2γ)) (compared to 2^k terminal cuts in total), and we obtain a mimicking network of size O(γ · 2^(2γ) · k^4), which is near-optimal as a function of γ.


Pavel Veselý (Warwick):

Tight lower bound for comparison-based quantile summaries

Abstract:

Quantiles, such as the median or percentiles, provide concise and useful information about the distribution of a collection of items drawn from a linearly ordered universe. We study data structures, called quantile summaries, which keep track of all quantiles up to an error of at most ε. That is, an ε-approximate quantile summary first processes a stream of items and then, given any quantile query 0 ≤ ϕ ≤ 1, returns an item from the stream that is a ϕ'-quantile for some ϕ' = ϕ ± ε. We focus on comparison-based quantile summaries that can only compare two items and are otherwise completely oblivious of the universe.
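
As a warm-up illustration of this definition (offline rather than streaming, and not the Greenwald-Khanna algorithm discussed below; the function names build_summary and query are mine), the Python sketch below keeps roughly 1/ε items of the sorted input at evenly spaced ranks, which already suffices to answer every quantile query within ±εN in rank. The streaming lower bound in this talk shows that, once items arrive one by one and only comparisons are allowed, a deterministic summary must additionally pay a log(εN) factor.

```python
import math

def build_summary(items, eps):
    """Offline ε-approximate quantile summary: keep items at evenly spaced
    ranks of the sorted input (stride at most eps * N), plus the maximum."""
    data = sorted(items)
    N = len(data)
    stride = max(1, math.floor(eps * N))
    kept = [(r, data[r - 1]) for r in range(1, N + 1, stride)]
    if kept[-1][0] != N:
        kept.append((N, data[-1]))
    return kept, N

def query(summary, N, phi):
    """Return a stored item whose rank is within eps*N of phi*N."""
    target = min(max(phi * N, 1), N)
    return min(summary, key=lambda rv: abs(rv[0] - target))[1]

# Example: the stream 0..999 with eps = 0.01 keeps about 100 of 1000 items.
summary, N = build_summary(range(1000), 0.01)
print(len(summary), query(summary, N, 0.5))  # ~101 items; a value near the median
```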

The best deterministic comparison-based quantile summary to date, by Greenwald and Khanna [ACM SIGMOD '01], stores at most O((1/ε) · log(εN)) items, where N is the number of items in the stream. We prove that this space bound is optimal by providing a matching lower bound. Our result thus rules out the possibility of constructing a deterministic comparison-based quantile summary in space f(ε) · o(log N), for any function f that does not depend on N.

Joint work with Graham Cormode.