Mike Paterson's 75th Birthday Workshop - Schedule
All talks will be in room MS.02 in the Zeeman building (Mathematics).
| Thursday 14th December | |
| --- | --- |
| 8:45 | Welcome remarks |
| 9:00 | Leslie Valiant |
| 9:45 | Uri Zwick |
| 10:30 | Coffee break |
| 11:00 | Haris Aziz |
| 11:45 | Paul Goldberg |
| 12:30 | Lunch |
| 14:00 | Michael Fischer |
| 14:40 | Leslie Goldberg |
| 15:30 | Coffee break |
| 16:00 | Martin Dyer |
| 16:45 | Mark Jerrum |
| 18:00 | Dinner |
Haris Aziz (Data61 and UNSW, Sydney): A discrete and bounded envy-free cake cutting protocol for any number of agents
We consider the well-studied cake cutting problem in which the goal is to find an envy-free allocation. The problem has received attention in computer science, mathematics, and economics. It had been an open problem whether there exists a discrete and bounded envy-free protocol. In this talk I will discuss our discrete and bounded envy-free protocol that resolved the problem.
Based on joint work with Simon Mackenzie.
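The n-agent protocol of the talk is far too intricate to sketch here, but the classical two-agent cut-and-choose protocol illustrates what envy-freeness demands. The sketch below is illustrative only and is not the authors' protocol; valuations are modelled, by assumption, as functions assigning values to subintervals of [0, 1].

```python
# Classical two-agent cut-and-choose, an illustration of envy-freeness
# (NOT the bounded n-agent protocol of the talk). Agent 1 cuts the cake
# [0, 1] into two pieces it values equally; agent 2 takes the piece it
# prefers. Neither agent then envies the other's piece.

def cut_and_choose(value1, value2, eps=1e-9):
    """value1, value2: functions mapping an interval (a, b) to the
    agent's value for that piece. Returns (piece1, piece2)."""
    # Agent 1 binary-searches for a cut x with value1(0, x) = total / 2.
    total = value1(0.0, 1.0)
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if value1(0.0, mid) < total / 2:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    left, right = (0.0, x), (x, 1.0)
    # Agent 2 chooses its preferred piece; agent 1 takes the other.
    if value2(*left) >= value2(*right):
        return right, left  # (agent 1's piece, agent 2's piece)
    return left, right

# Agent 1 values the cake uniformly; agent 2 only values [0.5, 1].
uniform = lambda a, b: b - a
right_half = lambda a, b: max(0.0, min(b, 1.0) - max(a, 0.5))
print(cut_and_choose(uniform, right_half))  # ((0.0, 0.5), (0.5, 1.0))
```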
Martin Dyer (Leeds): Graph homomorphisms and constraint satisfaction
Graph homomorphism problems are as old as graph theory, and their computational complexity has been studied since the 1980s. The generalisation to constraint satisfaction problems has been studied since the 1970s, and its complexity since the 1990s. Several variants of the underlying problem have been investigated. For some, a complete complexity categorisation remains open. However, the two most basic problems have been completely settled: the decision problem (in 2017) and the exact counting problem (in 2010). The latter has a natural generalisation to “weighted” problems, which was almost completely settled in 2012. For both, the solution is expressed as a “dichotomy theorem”, which partitions all instances into an “easy” class and a “hard” class. These results, and their history, will be reviewed.
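To make the counting problem concrete, here is a brute-force sketch that counts homomorphisms from a graph G to a graph H; the dichotomy theorems mentioned above determine, for each fixed H, whether this count can be computed in polynomial time. The encoding and function names below are illustrative assumptions.

```python
from itertools import product

# Brute-force counting of graph homomorphisms from G to H: a map
# f : V(G) -> V(H) is a homomorphism if every edge of G maps to an
# edge of H. This exponential-time sketch only makes the problem
# concrete; it is not a tractable algorithm.

def count_homomorphisms(g_vertices, g_edges, h_vertices, h_edges):
    # Treat H as undirected by symmetrising its edge set.
    h_edge_set = {(u, v) for u, v in h_edges} | {(v, u) for u, v in h_edges}
    count = 0
    for assignment in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, assignment))
        if all((f[u], f[v]) in h_edge_set for u, v in g_edges):
            count += 1
    return count

# Homomorphisms from a triangle to the complete graph K3 are exactly
# the proper 3-colourings of the triangle: 3! = 6.
print(count_homomorphisms([0, 1, 2], [(0, 1), (1, 2), (0, 2)],
                          [0, 1, 2], [(0, 1), (1, 2), (0, 2)]))
```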
Michael Fischer (Yale): Consensus-like problems in social choice theory
Social choice theory is the study of group decision-making. Examples are choosing a leader, filling at-large seats in an assembly, or selecting the medalists in a figure skating competition. The selectors are called voters, electors, or judges, depending on context. Similarly, the outcome of the selection process may be a set of winning candidates or a ranking of the contestants.
Nobel Prize winner Kenneth Arrow created a theory of social welfare functions based on ordinal rankings of the possible outcomes by electors. His impossibility theorem shows that no ranked voting system exists that converts ranked preferences into a common output ranking and also satisfies three intuitively compelling axioms. (See the Wikipedia page "Arrow's impossibility theorem".)
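As an illustrative example of the kind of rule Arrow's theorem speaks to (not taken from the talk), the sketch below implements the Borda count, a standard positional rule that converts ranked preferences into a common output ranking. Like every such rule, it must violate one of Arrow's axioms; Borda in particular fails independence of irrelevant alternatives.

```python
from collections import defaultdict

# Borda count, a simple social welfare function: each ballot ranks all
# candidates, a candidate in position p of an n-candidate ballot earns
# n - 1 - p points, and candidates are ranked by total score. This is
# an illustration only, not the ranking system discussed in the talk.

def borda_ranking(ballots):
    """ballots: list of rankings, each a list of candidates, best
    first. Returns the candidates ordered by total Borda score."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - position
    return sorted(scores, key=scores.get, reverse=True)

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"]]
print(borda_ranking(ballots))  # scores A: 4, B: 3, C: 2 -> ['A', 'B', 'C']
```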
Rather than take an axiomatic approach, we postulate a single optimal ranking that every honest voter would choose if given complete information about the environment. Honest voters approximate this ranking to the best of their abilities given the information available to them. Strategic voters attempt to influence the output to their own ends. Thus, in our framework, we view the social welfare function as a kind of fault-tolerant approximate consensus problem. We illustrate our ideas with respect to the 6.0 Ranking System formerly used to judge international figure skating competitions and still used in some local competitions.
Our work is still very preliminary, and we raise more questions than we answer. We hope it will stimulate others to look at voting systems and social choice theory through the lens of fault-tolerant distributed computing.
Leslie Goldberg (Oxford): The computational complexity of two-state spin systems (revisited)
I will revisit a 2003 paper, "The computational complexity of two-state spin systems", written by Mike together with the speaker and Mark Jerrum. I'll survey what has been done on this problem since the original paper.
Paul Goldberg (Oxford): TFNP: An update
The complexity class TFNP comprises problems in which every instance has an easily-checkable solution. There are many and varied TFNP problems that seem to be computationally hard, but NP-hardness is unlikely, and instead we rely on a diverse set of alternative notions of hardness. We show how TFNP problems can be expressed in terms of proofs in formal logic, in a way that's specific to TFNP. This provides a unifying framework that leads to new and interesting challenges that generalise existing known TFNP problems.
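As an illustrative example (not drawn from the abstract), the sketch below shows END-OF-LINE, a canonical TFNP problem complete for the subclass PPAD: the input guarantees that a solution exists, and any claimed solution is easy to check, yet when the graph is given implicitly by a circuit the naive walk below can take exponentially many steps.

```python
# END-OF-LINE: a successor function defines a directed graph in which
# every vertex has in- and out-degree at most one, and a known source
# is given. A sink (or a second source) must always exist and is easy
# to check, which is what puts the problem in TFNP. This brute-force
# walk only makes the problem concrete; on an implicitly represented
# graph it may take exponentially many steps.

def end_of_line(successor, source):
    """Follow successor pointers from the known source until a sink,
    a vertex with no successor, is reached."""
    v = source
    while successor(v) is not None:
        v = successor(v)
    return v  # easily checkable: successor(v) is None

# Toy instance on vertices 0..4: a single path 0 -> 1 -> 2 -> 3 -> 4.
succ = {0: 1, 1: 2, 2: 3, 3: 4}.get
print(end_of_line(succ, 0))  # 4
```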
Mark Jerrum (QMUL): Partial rejection sampling
Rejection sampling, sometimes called the acceptance-rejection method, is a simple and classical technique for sampling from a conditional distribution given that some desirable event occurs. The idea is to sample from the unconditioned distribution (assumed to be simple, for example a product distribution), accept the sample if the desirable event occurs and reject otherwise. This trial is repeated until the first acceptance occurs. Rejection sampling in this form is rarely a feasible approach to sampling combinatorial structures, as the acceptance probability is generally exponentially small in the size of the problem instance. However, some isolated cases had been discovered where an exact sample can be obtained by resampling only that part of the structure that “goes wrong”, an example being the “sink-popping” algorithm of Cohn, Pemantle and Propp for sampling sink-free orientations of an undirected graph.
The situations in which this shortcut still yields an exact sample from the desired distribution can be characterised, and are related to so-called extreme instances for the Lovász Local Lemma. Even when this ideal situation does not obtain, we find that we can generate exact samples by resampling just the parts of the structure that go wrong “plus a bit more”. We call this “partial rejection sampling”. Provided that the size of the part that goes wrong decays in expectation with time, partial rejection sampling can provide a very efficient exact sampling algorithm.
Based on joint work with Heng Guo (Edinburgh) and Jingcheng Liu (UC, Berkeley).
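As a concrete illustration, here is a sketch of the sink-popping algorithm mentioned above, assuming the input graph admits a sink-free orientation (for a connected graph, that it contains a cycle); the graph encoding and function names are illustrative.

```python
import random

# Sink-popping (Cohn, Pemantle and Propp), an instance of partial
# rejection sampling: orient every edge of an undirected graph
# independently and uniformly at random; while some vertex is a sink
# (all incident edges point into it), resample only the orientations
# of that vertex's incident edges. On termination the orientation is
# an exact uniform sample from the sink-free orientations. Assumes a
# sink-free orientation exists, else the loop never terminates.

def sink_popping(vertices, edges, rng=random):
    # orientation[e] is True if edge e = (u, v) is oriented u -> v.
    orientation = {e: rng.random() < 0.5 for e in edges}

    def head(e):
        u, v = e
        return v if orientation[e] else u

    while True:
        sinks = [w for w in vertices
                 if all(head(e) == w for e in edges if w in e)]
        if not sinks:
            # Return each edge as a directed pair (tail, head).
            return {e: (e if orientation[e] else (e[1], e[0]))
                    for e in edges}
        # Resample ("pop") only the edges incident to one sink.
        w = sinks[0]
        for e in edges:
            if w in e:
                orientation[e] = rng.random() < 0.5

# A 4-cycle: its sink-free orientations are the two directed cycles.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(sink_popping(vertices, edges))
```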
Leslie Valiant (Harvard): What needs to be added to machine learning?
Supervised learning is a cognitive phenomenon which has proved amenable both to theoretical analysis and to exploitation as a technology. However, not all of cognition can be accounted for by supervised learning. The question we ask here is whether one can build on the success of supervised learning to address the broader goals of artificial intelligence. We regard reasoning as the major component of cognition that needs to be added. We suggest that the central challenge therefore is to unify the formulation of these two phenomena, learning and reasoning, into a single framework with a common semantics. We propose Robust Logic for this role, as a framework with a satisfactory theoretical basis. Testing it experimentally on a significant scale remains a major challenge for the future.
Uri Zwick (Tel Aviv): Selection from heaps, row-sorted matrices and X+Y using soft heaps
We use soft heaps to obtain simpler optimal algorithms for selecting the k-th smallest item, and the set of k smallest items, from a heap-ordered tree, from a collection of sorted lists, and from X+Y, where X and Y are two unsorted sets. Our results match, and in some ways extend and improve, classical results of Frederickson (1993) and Frederickson and Johnson (1982). In particular, for selecting the k-th smallest item, or the set of k smallest items, from a collection of m sorted lists we obtain a new optimal "output-sensitive" algorithm that performs only O(m + ∑_{i=1}^{m} log(k_i + 1)) comparisons, where k_i is the number of items of the i-th list that belong to the overall set of k smallest items.
Joint work with Haim Kaplan, László Kozma and Or Zamir.
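For contrast with the output-sensitive bound above, the sketch below is the standard binary-heap baseline (not the soft-heap algorithm of the talk) for selecting the k smallest items from m sorted lists, using O(m + k log m) comparisons.

```python
import heapq

# Standard binary-heap selection of the k smallest items from m sorted
# lists: seed a heap with each list's first element, then repeatedly
# extract the minimum and push its successor from the same list. This
# baseline uses O(m + k log m) comparisons; it is an illustration, not
# the soft-heap algorithm of the talk.

def k_smallest(sorted_lists, k):
    # Heap entries are (value, list index, position in that list).
    heap = [(lst[0], i, 0) for i, lst in enumerate(sorted_lists) if lst]
    heapq.heapify(heap)
    result = []
    while heap and len(result) < k:
        value, i, j = heapq.heappop(heap)
        result.append(value)
        if j + 1 < len(sorted_lists[i]):
            heapq.heappush(heap, (sorted_lists[i][j + 1], i, j + 1))
    return result

lists = [[1, 4, 9], [2, 3, 11], [5, 6, 7]]
print(k_smallest(lists, 5))  # [1, 2, 3, 4, 5]
```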