# Algorithms and Complexity Seminar

The seminar (usually) takes place on Wednesdays 15:00-16:00 in room CS.101, or on Tuesdays 16:00-17:00 in MS B3.03, as a joint seminar with DIMAP. The organizer is Oded Lachish. Please feel free to contact us with any enquiries you may have.

Query Complexity Lower Bounds for Reconstruction of Codes

We investigate the problem of "local reconstruction", as defined by Saks and Seshadhri (2008), in the context of error correcting codes.

The first problem we address is that of "message reconstruction", where, given oracle access to a corrupted encoding $w \in \{0,1\}^n$ of some message $x \in \{0,1\}^k$, our goal is to probabilistically recover $x$ (or some portion of it). This should be done by a procedure (a reconstructor) that, given an index $i$ as input, probes $w$ at a few locations and outputs the value of $x_i$. The reconstructor can (and indeed must) be randomized, but all its randomness is specified in advance by a single random seed, such that with high probability ALL $k$ values $x_i$ for $1 \leq i \leq k$ are reconstructed correctly.

Using the reconstructor as a filter makes it possible to evaluate certain classes of algorithms on $x$ efficiently. For instance, in the case of a parallel algorithm, one can initialize several copies of the reconstructor with the same random seed; these copies can autonomously handle decoding requests while producing outputs that are consistent with the original message $x$. Another example is that of adaptive querying algorithms, which need to know the value of some $x_i$ before deciding which index should be decoded next.
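As a toy illustration of the single-seed guarantee, the sketch below uses a simple repetition code (an illustrative choice, not the paper's construction): two reconstructor copies initialized with the same seed answer every index consistently.

```python
import random

def make_reconstructor(w, reps, seed):
    # w: corrupted encoding of a k-bit message under a simple repetition
    # code: k blocks, each holding `reps` copies of one message bit
    def reconstruct(i):
        rng = random.Random(f"{seed}:{i}")  # all randomness from one seed
        probes = rng.sample(range(i * reps, (i + 1) * reps), 5)
        return 1 if sum(w[p] for p in probes) >= 3 else 0  # majority of 5
    return reconstruct

msg = [1, 0, 1, 1, 0, 0, 1, 0]
reps = 25
w = [b for b in msg for _ in range(reps)]
for p in (3, 40, 77, 130, 190):   # corrupt a few positions
    w[p] ^= 1

r1 = make_reconstructor(w, reps, seed=42)
r2 = make_reconstructor(w, reps, seed=42)
# copies sharing the seed answer every index consistently (and correctly
# here, since each block contains at most one corrupted copy)
assert [r1(i) for i in range(len(msg))] == [r2(i) for i in range(len(msg))] == msg
```

With at most one corrupted copy per block, a majority over five probes is always correct, so the shared seed makes all copies agree with the message on every index.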

The second problem that we address is "codeword reconstruction", which is similarly defined, but instead of reconstructing the message, our goal is to reconstruct the codeword itself, given oracle access to its corrupted version.

Error correcting codes that admit message and codeword reconstruction can be obtained from Locally Decodable Codes (LDCs) and Self-Correctable Codes (SCCs), respectively. The main contribution of this paper is a proof that, in terms of query complexity, these are close to being the best possible constructions, even when we disregard the length of the encoding.

This is joint work with Eldar Fischer and Arie Matsliah.

Altruism in Atomic Congestion Games

We study the effects of introducing altruistic agents into atomic congestion games. Altruistic behavior is modeled by a linear trade-off between selfish and social objectives. Our model can be embedded in the framework of congestion games with player-specific latency functions. Stable states are the pure Nash equilibria of these games, and we examine their existence and the convergence of sequential best-response dynamics. In general, pure Nash equilibria are often absent, and existence is NP-hard to decide. Perhaps surprisingly, if all delay functions are linear, the games remain potential games even when agents are arbitrarily altruistic. This result can be extended to a class of general potential games and social cost functions, and we study a number of prominent examples.

In addition to these results for uncoordinated dynamics, we consider a scenario with a central altruistic institution that can set incentives for the agents. We provide constructive and hardness results for finding the minimum number of altruists to stabilize an optimal congestion profile and more general mechanisms to incentivize agents to adopt favorable behavior.

Energy Efficient Job Scheduling with Speed Scaling and Sleep Management

Energy usage has become a major issue in the design of microprocessors, especially for battery-operated devices. Many modern processors support dynamic speed scaling to reduce energy usage. The speed scaling model assumes that a processor, when running at speed $s$, consumes energy at the rate $s^\alpha$, where $\alpha$ is typically 2 or 3. Before speed scaling was available, energy reduction was mainly achieved by allowing a processor to enter a low-power sleep state, although waking up requires extra energy. It is natural to study job scheduling on a processor that allows both a sleep state and speed scaling. In the awake state, a processor running at speed $s > 0$ consumes energy at the rate $s^\alpha + \sigma$, where $\sigma > 0$ is the static power and $s^\alpha$ is the dynamic power. In this case, job scheduling involves two components: a sleep management algorithm to determine when to work or sleep, and a speed scaling algorithm to determine which job to run and at what speed. Adding a sleep state changes the nature of speed scaling. Without a sleep state, running a job slower is a natural way to save energy. With a sleep state, one can also save energy by working faster to allow a longer sleep period. It is not trivial to strike a balance. In this talk, we will discuss some new scheduling results involving both speed scaling and sleep management.
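The balance can be made concrete in a few lines: with power s^alpha + sigma, the energy per unit of work is minimized at a "critical speed", below which it is better to run faster and sleep longer. A sketch under this model (the values alpha = 3 and sigma = 1 are arbitrary choices):

```python
def awake_energy(work, s, alpha, sigma):
    # running at speed s takes work/s time; awake power is s^alpha + sigma
    return (s ** alpha + sigma) * (work / s)

# energy per unit of work, s^(alpha-1) + sigma/s, is minimised at the
# "critical speed" s_crit = (sigma / (alpha - 1)) ** (1 / alpha); running
# slower than s_crit wastes static power, so speeding up and sleeping
# longer is better
alpha, sigma, work = 3.0, 1.0, 10.0
s_crit = (sigma / (alpha - 1)) ** (1 / alpha)
e_crit = awake_energy(work, s_crit, alpha, sigma)
assert e_crit < awake_energy(work, 0.5 * s_crit, alpha, sigma)
assert e_crit < awake_energy(work, 2.0 * s_crit, alpha, sigma)
```

The two assertions check both directions of the trade-off: running slower than the critical speed loses to it (static power dominates), and so does running faster (dynamic power dominates).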

A Survey of Connectivity Approximation via a Survey of the Techniques

We survey some crucial techniques in approximating connectivity problems. The most general question we study is the Steiner Network problem, where we are given an undirected graph with costs on the edges and a required number $r_{ij}$ of paths between every pair of vertices $i, j$. The paths need to be vertex- or edge-disjoint, depending on the problem. The goal is to find a minimum cost feasible solution.

The full talk covers the following techniques and problems:

- Solving k out-connectivity in polynomial time in the edge (Edmonds) and in the vertex (Frank and Tardos) cases. This gives a simple ratio-2 approximation for edge k-connectivity.
- The cycle lemma of Mader: together with the first technique, it gives results for minimum-power k-connectivity problems (see the talk for the exact definition) as well as an improved result for k-edge connectivity in the metric case.
- Laminarity and the new charging scheme of Ravi et al., which gives a much simplified version of Jain's theorem on 2-approximating Steiner Network in the edge-disjoint paths case.

The Computational Complexity of Trembling Hand Perfection and Other Equilibrium Refinements

The king of refinements of Nash equilibrium is trembling hand perfection. In this talk, we show that it is NP-hard and SQRTSUM-hard to decide if a given pure strategy Nash equilibrium of a given three-player game in strategic form with integer payoffs is trembling hand perfect. Analogous results are shown for a number of other solution concepts, including proper equilibrium, (the strategy part of) sequential equilibrium, quasi-perfect equilibrium and CURB.

Exponential Lower Bounds For Policy Iteration

We study policy iteration for infinite-horizon Markov decision processes. In particular, we study greedy policy iteration. This is an algorithm that has been found to work very well in practice, where it is used as an alternative to linear programming. Despite this, very little is known about its worst case complexity. Friedmann has recently shown that policy iteration style algorithms have exponential lower bounds in a two player game setting. We extend these lower bounds to Markov decision processes with the total-reward and average-reward optimality criteria.

Local Algorithms for Robotic Formation Problems

Consider a scenario with a set of autonomous mobile robots having initial positions in the plane. Their goal is to move in such a way that they eventually reach a prescribed formation. Such a formation may be a straight line between two given endpoints (short communication chain), a circle or any other geometric pattern, or just one point (gathering problem). In this talk, I consider simple local strategies for such robotic formation problems: the robots are limited to see only robots within a bounded radius; they are memoryless and have no common orientation. Thus, their decisions about where to move next are based solely on the relative positions of robots within the bounded radius. I will present local strategies for short communication chains and gathering, and present runtime bounds under different time models. All previous algorithms with a proven time bound assume a global view of the positions of all robots.

A Direct Reduction from k-Player to 2-Player Approximate Nash Equilibrium

Computing an approximate Nash equilibrium for games represented in normal (strategic) form is considered a challenging problem. Following [Bubelis79], we define the notion of a direct reduction between games which preserves approximate Nash equilibria, and present such a direct reduction from k-player games to 2-player games. Previously, the computational equivalence of computing approximate Nash equilibria in k-player and 2-player games was established via an indirect reduction, based on a sequence of works defining the complexity class PPAD and its complete problems. Our direct reduction makes no use of the concept of PPAD, thus eliminating some of the difficulties involved in following the known indirect reduction. Nevertheless, the new gadgets that we introduce are relevant to the notion of PPAD-completeness, as they can be used in other reductions among PPAD problems.

Joint work with Prof. Uriel Feige.

A Nonlinear Approach to Dimension Reduction

The celebrated Johnson-Lindenstrauss lemma says that every n points in Euclidean space can be represented using O(log n) dimensions with only a minor distortion of pairwise distances. It has been conjectured that a much-improved dimension reduction representation is achievable for many interesting data sets, by bounding the target dimension in terms of the intrinsic dimension of the data, e.g. by replacing the log(n) term with the doubling dimension. This question appears to be quite challenging, requiring new (nonlinear) embedding techniques.

We make progress towards resolving this question by presenting two dimension reduction theorems with similar flavour to the conjecture. For some intended applications, these results can serve as an alternative to the conjectured embedding.

Joint work with Lee-Ad (Adi) Gottlieb.

k-Means has Polynomial Smoothed Complexity

The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, we study the k-means method in the model of smoothed analysis.

Smoothed analysis is a hybrid of worst-case and average-case analysis, which is based on a semi-random input model in which an adversary first specifies an arbitrary input that is subsequently slightly perturbed at random. This models random influences (e.g., measurement errors or numerical imprecision) that are present in most applications, and it often yields more realistic results than a worst-case or average-case analysis.

We show that the smoothed running time of the k-means method is bounded by a polynomial in the input size and 1/sigma, where sigma is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
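The smoothed input model and the k-means method itself are both easy to state in code. A minimal sketch (the point set, sigma, and the number of Lloyd iterations are illustrative choices):

```python
import random

def perturb(points, sigma):
    # smoothed input model: the adversary's points get independent
    # Gaussian noise of standard deviation sigma in each coordinate
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for x, y in points]

def lloyd_step(points, centers):
    # one iteration of the k-means method: assign, then recompute centres
    clusters = [[] for _ in centers]
    for p in points:
        j = min(range(len(centers)),
                key=lambda c: (p[0] - centers[c][0]) ** 2
                            + (p[1] - centers[c][1]) ** 2)
        clusters[j].append(p)
    return [(sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[j] for j, cl in enumerate(clusters)]

random.seed(0)
adversarial = [(0, 0), (0, 1), (5, 0), (5, 1)]   # adversary's choice
pts = perturb(adversarial, sigma=0.1)            # then random perturbation
centers = [pts[0], pts[2]]
for _ in range(10):                              # k-means converges fast here
    centers = lloyd_step(pts, centers)
assert min(c[0] for c in centers) < 1 and max(c[0] for c in centers) > 4
```

The smoothed bound concerns exactly this pipeline: the adversary picks `adversarial`, the noise is added, and the expected number of Lloyd iterations is then polynomial in the input size and 1/sigma.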

This talk is based on joint work with David Arthur (Stanford University) and Bodo Manthey (University of Twente).

Using Neighbourhood Exploration to Speed up Random Walks

We consider strategies that can be used to speed up the cover time of a random walk on undirected connected graphs. The price of this speed-up is normally some extra work that can be performed locally by the walk or by the vertices of the graph. Typical assumptions about what is allowed include: biased walk transitions, use of previous history, and local exploration of the graph.

Methods of local exploration include the neighbour marking process RW-MARK and look-ahead RW-LOOK (searching to a fixed depth).

The marking process, RW-MARK, made by a random walk on an undirected graph G is as follows. Upon arrival at a vertex v, the walk marks v if unmarked and otherwise it marks a randomly chosen unmarked neighbor of v.
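As a concrete illustration, here is a minimal simulation of RW-MARK on the 4-dimensional hypercube (the graph and the random seed are illustrative choices):

```python
import random

def rw_mark(adj, start, rng):
    # RW-MARK: on arrival at v, mark v if unmarked; otherwise mark a
    # randomly chosen unmarked neighbour of v, if one exists
    marked, v, steps = set(), start, 0
    while len(marked) < len(adj):
        if v not in marked:
            marked.add(v)
        else:
            unmarked = [u for u in adj[v] if u not in marked]
            if unmarked:
                marked.add(rng.choice(unmarked))
        v = rng.choice(adj[v])   # the walk itself is a plain random walk
        steps += 1
    return steps

# 4-dimensional hypercube: vertices are 4-bit labels, edges flip one bit
d = 4
adj = {v: [v ^ (1 << i) for i in range(d)] for v in range(1 << d)}
steps = rw_mark(adj, 0, random.Random(1))
assert steps >= len(adj)   # each step marks at most one vertex
```

Since each step marks at most one vertex, the marking time is at least n; the talk's result is that on the hypercube it stays O(n) with high probability.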

Depending on the degree and the expansion of the graph, we prove several upper bounds on the time required by the process RW-MARK to mark all vertices of G. If, for instance, G is the hypercube on n vertices, the process marks all vertices in time O(n), with high probability. This significantly improves on the n ln n cover time of the hypercube under a standard random walk.

The process RW-MARK can be compared to the marking process where a vertex v is chosen uniformly at random at each step (coupon collecting). For the hypercube, this process also has a marking time of O(n).

In the related look-ahead process RW-LOOK, the walk marks all neighbours of the visited vertex up to some depth k. For the hypercube, for example, the performance of the processes RW-LOOK-1 and CC-LOOK-1 is asymptotic to n ln 2 with high probability.

This research is joint work with Petra Berenbrink, Robert Elsaesser, Tomasz Radzik and Thomas Sauerwald.

Robustness of the Rotor-router Mechanism for Graph Exploration

We consider the model of exploration of an undirected graph G by a single agent which is called the rotor-router mechanism or the Propp machine (among other names). Let p_v indicate the edge adjacent to a node v which the agent took on its last exit from v. The next time when the agent enters node v, first the "rotor" at node v advances pointer p_v to the next edge in a fixed cyclic order of the edges adjacent to v. Then the agent is directed onto edge p_v to move to the next node. It was shown before that after initial O(mD) steps, the agent periodically follows one established Eulerian cycle (that is, in each period of 2m consecutive steps the agent will traverse each edge exactly twice, once in each direction). The parameters m and D are the number of edges in G and the diameter of G. We investigate robustness of such exploration in presence of faults in the pointers p_v or dynamic changes in the graph. In particular, we show that after the exploration establishes an Eulerian cycle, if at some step k edges are added to the graph, then a new Eulerian cycle is established within O(km) steps. We show similar results for the case when the values of k pointers p_v are arbitrarily changed and when an arbitrary edge is deleted from the graph. Our proofs are based on the relation between Eulerian cycles and spanning trees known as the "BEST" Theorem (after de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte).
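The mechanism is easy to simulate. The following sketch (the test graph and the cyclic edge orders are arbitrary choices) checks the periodicity property on a small graph: after the transient phase, every window of 2m consecutive steps traverses each of the 2m directed edges exactly once.

```python
def rotor_router(adj, start, steps):
    # pointer p_v cycles through a fixed order of v's incident edges;
    # on each visit the rotor advances first, then the agent follows it
    ptr = {v: 0 for v in adj}
    v, trace = start, []
    for _ in range(steps):
        ptr[v] = (ptr[v] + 1) % len(adj[v])
        u = adj[v][ptr[v]]
        trace.append((v, u))
        v = u
    return trace

# small test graph: a 4-cycle with one chord, m = 5 edges
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [2, 0, 1]}
m = sum(len(nbrs) for nbrs in adj.values()) // 2
trace = rotor_router(adj, 0, 200)   # 200 steps is far beyond O(mD) here
# once locked into the Eulerian cycle, any 2m consecutive steps use
# every directed edge exactly once
period = trace[-2 * m:]
assert sorted(period) == sorted((v, u) for v in adj for u in adj[v])
```

The assertion is exactly the periodicity stated above: each undirected edge is traversed twice per period of 2m steps, once in each direction.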

This is joint work with: E. Bampas, L. Gasieniec, R. Klasing and A. Kosowski.

Approximability and Parameterized Complexity of Minmax Values

We consider approximating the minmax value of a multiplayer game in strategic form. Tightening recent bounds by Borgs et al., we observe that approximating the value with a precision of "ε log n" digits (for any constant ε > 0) is NP-hard, where n is the size of the game. On the other hand, approximating the value with a precision of c log log n digits (for any constant c ≤ 1) can be done in quasi-polynomial time.

We consider the parameterized complexity of the problem, with the parameter being the number of pure strategies k of the player for whom the minmax value is computed. We show that if there are three players, k = 2 and there are only two possible rational payoffs, the minmax value is a rational number and can be computed exactly in linear time. In the general case, we show that the value can be approximated with any polynomial number of digits of accuracy in time n^{O(k)}. On the other hand, we show that minmax value approximation is W[1]-hard and hence not likely to be fixed parameter tractable. Concretely, we show that if k-CLIQUE requires time n^{Ω(k)} then so does minmax value computation.

This is joint work with Kristoffer Arnsfelt Hansen, Thomas Dueholm Hansen, and Peter Bro Miltersen.

Profit-maximizing Pricing: The Highway and Tollbooth Problem

We consider the profit maximizing pricing problem for single-minded buyers. Here, we wish to sell a set of m items to n buyers, each of whom is interested in buying a single set of items. Our goal is to set prices for the items such that the profit obtained by selling the items to the buyers who can afford them is maximized. We also assume that we have arbitrarily many copies of each item to sell.

When the underlying items are the edges of a graph, and the buyers are interested in buying specific paths, this is called the tollbooth problem. We consider the special case where the graph is a tree or a path. In the case of a tree, the problem is already known to be APX-hard; we give an O(log n) approximation algorithm. When the graph is a path, the problem is called the highway problem. In this case, we show that the problem is strongly NP-hard, complementing an earlier QPTAS. We also consider the discount model, where some items are allowed to have negative prices, and show that a very simple case is already APX-hard.

This is joint work with K. Elbassioni, S. Ray and R. Sitters.

Pricing Lotteries

Randomized mechanisms, which map a set of bids to a probability distribution over outcomes rather than a single outcome, are an important but ill-understood area of computational mechanism design. We investigate the role of randomized outcomes ("lotteries") in the context of a fundamental and archetypical multi-parameter mechanism design problem: selling heterogeneous items to unit-demand bidders. To what extent can a seller improve her revenue by pricing lotteries rather than items, and does this modification of the problem affect its computational tractability? We show that the answers to these questions hinge on the exact model of consumer behavior we deploy and present several tight bounds on the increase in revenue obtainable via randomization and the computational complexity of revenue maximization in these different models.

This is joint work with Shuchi Chawla, Bobby Kleinberg, and Matt Weinberg.

Some Multiobjective Optimization Problems

Multiobjective optimization has many applications in fields such as the internet, finance, biomedicine, management science, game theory and engineering. However, solving MO problems is not an easy task. Searching for all Pareto optimal solutions is an expensive and time-consuming process, because there are usually exponentially many (or infinitely many) Pareto optimal solutions. Even for the simplest problems, determining whether a point belongs to the Pareto curve is NP-hard.

In this talk we are going to discuss some continuous and combinatorial multiobjective optimization problems and their applications in management, finance and the military. Exact and heuristic techniques for solving these problems are presented.

We also consider nondifferentiable multiobjective programming problems involving generalized convex functions and present optimality conditions and duality results for the problems.

Correlation Decay and Applications to Counting Colourings

We present two algorithms for counting the number of colourings of a sparse random graph. Our approach is based on correlation decay techniques originating in statistical physics.

The first algorithm is based on establishing correlation decay properties of the Gibbs distribution which are related to Dobrushin's condition for uniqueness of the Gibbs measure on infinite trees. More specifically, we impose boundary conditions on a specific set of vertices of the graph and show that the effect of this boundary decays as we move away.

For the second algorithm we set a new context for exploiting correlation decay properties. Instead of imposing boundary conditions (i.e., fixing the colouring of some vertices), we impose a specific graph structure on some region (i.e., delete edges) and show that the effect of this change on the Gibbs distribution decays as we move away. It turns out that this approach designates a new set of spatial correlation decay conditions that can be used for counting algorithms.

In both cases, the algorithms provide, with high probability and in polynomial time, a (1/poly(n))-approximation of the logarithm of the number of k-colourings of the graph (the "free energy"), for constant k. The value of k depends on the expected degree of the graph. The second technique gives better results than the first in terms of the minimum number of colours needed.

Finally, the second algorithm can be applied to another class of graphs, which we call locally a-dense graphs of bounded maximum degree Δ. A graph G = (V, E) in this family has the following property: for every edge {u,v} in E, the number of vertices which are adjacent to v but not adjacent to u is at most (1-a)Δ, where 0 < a < 1 is a parameter of the model. For a locally a-dense graph G with bounded Δ, the algorithm computes in polynomial time a (1/polylog n)-approximation to the logarithm of the number of k-colourings, for k > (2-a)Δ. By restricting the treewidth of the neighbourhoods in G, we can improve the approximation.
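The defining property can be verified edge by edge. A small sketch (it reads the definition as excluding u itself from the count, which is an assumption):

```python
def is_locally_a_dense(adj, a, Delta):
    # check: for every edge {u, v}, the number of neighbours of v that
    # are not adjacent to u (excluding u itself) is at most (1-a)*Delta
    for v in adj:
        for u in adj[v]:
            outside = sum(1 for w in adj[v] if w != u and w not in adj[u])
            if outside > (1 - a) * Delta:
                return False
    return True

# the complete graph K5 is locally a-dense for any a in (0, 1) ...
k5 = {v: [u for u in range(5) if u != v] for v in range(5)}
assert is_locally_a_dense(k5, a=0.9, Delta=4)
# ... while the 6-cycle is not, for a close to 1
c6 = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
assert not is_locally_a_dense(c6, a=0.9, Delta=2)
```

Intuitively, larger a forces the neighbourhoods of adjacent vertices to overlap more, which is the density the second algorithm exploits.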

Wiretapping a Hidden Network

We consider the problem of maximizing the probability of hitting a strategically chosen hidden *virtual network* by placing a wiretap on a single link of a communication network. This can be seen as a two-player win-lose (zero-sum) game that we call the *wiretap game*. The *value* of this game is the greatest probability that the wiretapper can secure for hitting the virtual network. The value is shown to be equal to the reciprocal of the *strength* of the underlying graph. We provide a polynomial time algorithm that finds a linear-sized description of the maxmin polytope, and a characterization of its extreme points. It also provides a succinct representation of all equilibrium strategies of the wiretapper that minimize the number of pure best responses of the hider. Among these strategies, we efficiently compute the *unique* strategy that maximizes the least punishment that the hider incurs for playing a pure strategy that is not a best response.

Finally, we show that this unique strategy is the nucleolus of the recently studied simple cooperative *spanning connectivity game*.

Joint work with: Haris Aziz, Mike Paterson and Rahul Savani.

The Power of Online Reordering

Online algorithms studied in theory are characterized by the fact that they get to know the input sequence incrementally, one job at a time, and a new job is not issued until the previous one is processed by the algorithm. In real applications, jobs can usually be delayed for a short amount of time. As a consequence, the input sequence of jobs can be reordered in a limited fashion to optimize performance. In this talk, the power and limits of this online reordering paradigm are discussed for several problems.

Bounding Rationality by Discounting Time

Consider the following "Factoring Game": Alice plays a composite number X, and Bob responds with numbers Y and Z. Bob wins if and only if Y and Z are non-trivial factors of X. Traditional game theory predicts that Bob has a winning strategy in this game, yet in practice one would expect Alice to win.
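The asymmetry the game rests on is easy to see in code: Bob's winning condition can be checked instantly, while producing Y and Z requires factoring. A sketch (reading "non-trivial factors" as Y * Z == X with 1 < Y, Z < X, which is an assumption about the exact rules):

```python
def bob_wins(X, Y, Z):
    # Bob wins iff Y and Z are non-trivial factors of X; we read this
    # as Y * Z == X with 1 < Y, Z < X (an illustrative assumption)
    return Y * Z == X and 1 < Y < X and 1 < Z < X

assert bob_wins(15, 3, 5)         # checking a factorisation is easy ...
assert not bob_wins(15, 1, 15)    # ... but trivial factors do not count
```

Verifying Bob's move takes one multiplication; finding it is believed to be hard, which is exactly why one expects Alice to win in practice despite the game-theoretic prediction.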

To explain this situation, we propose a new notion of "discounted machine game", where strategies are implemented by Turing machines and a player's payoff is discounted by the time taken to play an action in the game. We prove that Alice has a winning strategy in the discounted version of the Factoring game if and only if Factoring is in probabilistic polynomial time on average. We also give general results in this new framework, extending Nash's theorem about existence of equilibria from finite games to countable games, and showing that every mixed-strategy equilibrium has an equivalent pure-strategy equilibrium.

Our notion can be seen as a new model of bounded rationality: humans (and machines) cannot make perfectly optimal decisions, because there is a trade-off between the quality of a decision and the resources consumed to make that decision. We explain how our approach models relevant factors such as time, patience and the evolution of technology.

Based on joint work with Lance Fortnow.

Improved Randomized Online Scheduling of Intervals and Jobs

We study the online preemptive scheduling of intervals and jobs (with restarts). Each interval or job arrives online, and has a deadline, a length and a weight. The objective is to maximize the total weight of completed intervals or jobs. While the deterministic case for intervals was settled a long time ago, the randomized case remains open. In this talk we discuss barely random algorithms (i.e., randomized algorithms using very few random bits) for these problems. We first give a 2-competitive algorithm for the case of unit length intervals. Then we extend the algorithm to cover several other cases of interval scheduling with the same competitive ratio. We also extend the algorithm to the scheduling of unit length jobs, and prove that it is 3-competitive. The algorithms are surprisingly simple and have the best competitive ratio against all previous (fully or barely) randomized algorithms.

This is joint work with Chung Keung Poon and Feifeng Zheng.

Social Context Games

We introduce the study of social context games. A social context game is defined by an underlying game in strategic form, and a social context consisting of an undirected graph of neighborhood among players and aggregation functions. The players and strategies in a social context game are as in the underlying game, while the players' utilities in a social context game are computed from their payoffs in the underlying game based on the graph of neighborhood and the aggregation functions. Examples of social context games are ranking games and coalitional congestion games. A significant challenge is the study of how various social contexts affect various properties of the game. In this work we consider resource selection games as the underlying games, and four basic social contexts. An important property of resource selection games is the existence of a pure strategy equilibrium. We study the existence of pure strategy Nash equilibria in the corresponding social context games. We also show that the social context games possessing pure strategy Nash equilibria are not potential games, and are therefore distinguished from congestion games.

Joint work with Itai Ashlagi and Moshe Tennenholtz.


The best-secretary problem for partially ordered sets

In the best-secretary problem, an on-line optimisation problem, an employer is interviewing for a new secretary. The employer knows there are n candidates and has totally ordered the candidates before the interviews. However, the candidates appear for interview sequentially, in a random order, and the interviews are held in an on-line fashion. This means that after each interview the employer must decide whether to accept the current candidate and stop interviewing, or reject and continue interviewing. Crucially, once a candidate has been rejected that candidate cannot be recalled. (If the employer interviews all the candidates, he must accept the last candidate to be interviewed.) In the classical version of this problem the employer wins only if (s)he accepts the best secretary. What should the employer's strategy be to maximise the probability of winning?

This problem has an elegant solution that allows the employer to win with probability 1/e ≈ 0.37 (asymptotically in n).
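The classical strategy is to observe roughly n/e candidates and then accept the first candidate better than all of them. A quick Monte Carlo sketch of this strategy (the parameters are arbitrary):

```python
import math
import random

def classical_strategy(order, r):
    # observe the first r candidates, then accept the first candidate
    # better than all of them (or the last candidate if none is better)
    best_seen = max(order[:r]) if r else -1
    for c in order[r:]:
        if c > best_seen:
            return c
    return order[-1]

n, trials = 50, 20000
r = round(n / math.e)                 # the 1/e threshold rule
rng = random.Random(7)
wins = 0
for _ in range(trials):
    order = list(range(n))            # candidate n-1 is the best
    rng.shuffle(order)
    wins += classical_strategy(order, r) == n - 1
# the success probability is close to 1/e
assert abs(wins / trials - 1 / math.e) < 0.03
```

The same simulation harness carries over to the partially ordered variant: one only has to change the winning test from "accepted the maximum" to "accepted some maximal candidate".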

In this talk we will look at a variant of this problem, where the candidates are not ordered totally, but partially. In this variation the employer wins if (s)he accepts any maximally ordered candidate (of which there could be many). When the partial order on the candidates is unknown to the employer, the optimal strategy for the employer is unknown. In joint work with Małgorzata Kuchta, Michał Morayne and Jarosław Niemiec, we improve on the best known strategy, increasing the probability of success from 1/8 to 1/4.

Oblivious Interference Scheduling

In the *interference scheduling problem*, one is given a set of *n* communication requests described by pairs of points from a metric space. The points correspond to devices in a wireless network. In the *directed version* of the problem, each pair of points consists of a dedicated sending and a dedicated receiving device. In the *bidirectional version* the devices within a pair shall be able to exchange signals in both directions. In both versions, each pair must be assigned a power level and a color such that the pairs in each color class can communicate simultaneously at the specified power levels. The feasibility of simultaneous communication within a color class is defined in terms of the Signal to Interference Plus Noise Ratio (SINR) that compares the strength of a signal at a receiver to the sum of the strengths of other signals. This is commonly referred to as the "physical model" and is the established way of modelling interference in the engineering community. The objective is to minimize the number of colors as this corresponds to the time needed to schedule all requests.
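Feasibility of a colour class in the physical model is just the SINR inequality checked at every receiver. A minimal sketch (the power values, threshold beta, noise term and path-loss exponent alpha are illustrative choices, not the paper's parameters):

```python
def sinr_feasible(pairs, powers, beta, noise, alpha=3.0):
    # pairs: (sender, receiver) points in the plane; the colour class is
    # feasible if every receiver's SINR is at least the threshold beta
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    for i, (tx, rx) in enumerate(pairs):
        signal = powers[i] / dist(tx, rx) ** alpha
        interference = sum(powers[j] / dist(pairs[j][0], rx) ** alpha
                           for j in range(len(pairs)) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

# two well-separated pairs can share a colour ...
far = [((0, 0), (1, 0)), ((100, 0), (101, 0))]
assert sinr_feasible(far, [10, 10], beta=2.0, noise=0.1)
# ... but two overlapping pairs cannot
close = [((0, 0), (1, 0)), ((1, 1), (0, 1))]
assert not sinr_feasible(close, [10, 10], beta=2.0, noise=0.1)
```

An oblivious assignment would replace the explicit `powers` list by a function of each pair's own distance only, e.g. proportional to d^(alpha/2) for the square root assignment.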

We study *oblivious power assignments* in which the power value of a pair only depends on the distance between the points of this pair. We prove that oblivious power assignments cannot yield approximation ratios better than Ω(n) for the directed version of the problem, which is the worst possible performance guarantee as there is a straightforward algorithm that achieves an O(n)-approximation. For the bidirectional version, however, we can show the existence of a universally good oblivious power assignment: For any set of *n* bidirectional communication requests, the so-called "*square root assignment*" admits a coloring with at most polylog(*n*) times the minimal number of colors. The proof for the existence of this coloring is non-constructive. We complement it by an approximation algorithm for the coloring problem under the square root assignment. This way, we obtain the first polynomial time algorithm with approximation ratio polylog(*n*) for interference scheduling in the physical model.

This is joint work with Alexander Fanghänel, Thomas Keßelheim and Harald Räcke.

Algorithmic aspects of claw-free graphs

Chudnovsky and Seymour recently characterized the structure of quasi-line graphs and claw-free graphs, which represent two natural generalizations of line graphs. In this talk I will explain how we can use their structure theorems to extend two colouring results from line graphs to claw-free graphs.

Our approach uses decompositions that suggest natural combinatorial algorithms for finding near-optimal colourings of claw-free graphs. The problem of computing the chromatic number of claw-free graphs is NP-complete. I will discuss our algorithms and explain how the approach might be applied to other problems on claw-free graphs.

This is joint work with Bruce Reed.

Online Algorithms For Buying Resources With Stochastic Prices

We study the management of storage in an environment with stochastically generated prices, in an average-case analysis. In the Online Buying Problem, in each time step we are given a price and a request for a resource R. Furthermore, we are given a storage of a fixed size. In every time step an online algorithm has to decide how much of R it wants to buy, without knowing future requests or prices. If the algorithm buys more than needed in the current time step, the remaining units can be stored in the storage; if it buys less, it has to use units from the storage to fulfill the request. We model the prices of the resource by a random walk and bound the possible prices of R by 0 and t. Interesting applications of this problem include stream caching in mobile communication networks, buying resources such as oil or petrol, and battery management in a hybrid car.

We consider an algorithm which fills the storage completely when the price of R is at the lower price boundary and uses the storage otherwise. This algorithm is called the Greedy Algorithm, and we show that it is the best possible online algorithm, i.e., the one with the lowest expected cost. We furthermore show that, compared to the algorithm which uses no storage, a fixed amount can be saved with a storage size quadratic in the size of the price interval.
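The Greedy Algorithm described above is only a few lines. The sketch below is a simplified simulation (it takes the lower price boundary to be 0, so refills at the boundary are free; all parameters are illustrative) comparing Greedy with the algorithm that uses no storage on reflecting random-walk prices:

```python
import random

def simulate(prices, demand, capacity):
    # Greedy: refill the storage completely whenever the price is at the
    # lower boundary (here 0); otherwise serve requests from the storage
    greedy_cost = no_storage_cost = 0.0
    store = 0
    for p in prices:
        no_storage_cost += p * demand
        if p == 0:
            store = capacity          # refilling is free at price 0
        elif store >= demand:
            store -= demand           # serve the request from storage
        else:
            greedy_cost += p * (demand - store)
            store = 0
    return greedy_cost, no_storage_cost

# prices follow a reflecting random walk on {0, ..., t}
rng = random.Random(3)
t, p, prices = 5, 2, []
for _ in range(5000):
    prices.append(p)
    p = min(t, max(0, p + rng.choice((-1, 1))))

g, n = simulate(prices, demand=1, capacity=t * t)  # storage quadratic in t
assert g < n   # Greedy saves compared to using no storage
```

Per step, Greedy never pays more than the no-storage algorithm, and it pays strictly less whenever a request is served from units bought at the boundary, which is the source of the savings quantified in the talk.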

Characterization of Revenue Equivalence

The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. In this paper we give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The characterization holds for any (possibly infinite) outcome space and many of the known results are immediate consequences. Moreover, revenue equivalence can be identified in cases where existing theorems are silent.

Joint work with Birgit Heydenreich, Marc Uetz, and Rakesh Vohra

From coding theory to efficient pattern matching

We consider the classic problem of pattern matching with few mismatches in the presence of promiscuously matching wildcard symbols. Given a text t of length n and a pattern p of length m with optional wildcard symbols and a bound k, our algorithm finds all the alignments for which the pattern matches the text with Hamming distance at most k and also returns the location and identity of each mismatch. The algorithm we present is deterministic and runs in Õ(kn) time, matching the best known randomised time complexity to within logarithmic factors. The solutions we develop borrow from the tool set of algebraic coding theory and provide a new framework in which to tackle approximate pattern matching problems.

Computing an Optimal Strategy for One-Player Simple Stochastic Games in Strongly Polynomial Time Using Interior-Point Methods

We introduce simple stochastic games (SSG) and give reasons why it is of interest to find the optimal strategy when considering a one-player SSG. In this setup, the optimal strategy can be found by solving a linear program (LP). We show that the constraint matrix of the corresponding linear program contains entries only from the set {-2,0,1,2}. We introduce Vavasis and Ye's layered interior-point algorithm, whose running time is polynomial and depends only on the encoding of the constraint matrix of the corresponding LP. For one-player SSGs we show that the encoding of the constraint matrix can always be bounded by O(n^2), where n is the number of vertices of the one-player SSG. This implies that an optimal strategy for the one-player SSG can be found in time O(n^{5.5}), which is strongly polynomial.

On a connection between Ramsey number and chromatic number

We apply a partitioning method, inspired by the Szemerédi Regularity Lemma, to find exactly the Ramsey number R(P_n,H) for any graph H, for sufficiently large n. We show how to generalise this by replacing P_n with any connected n-vertex graph with small bandwidth, and by extending the result to many colours.

Broadcasting in Random Regular Graphs

We consider broadcasting in random $d$-regular graphs by using a simple modification of the so-called random phone call model introduced by Karp et al. In the phone call model, in every time step each node calls a randomly chosen neighbour to establish a communication channel with this node. The communication channels can then be used to transmit messages in both directions. We show that, if we allow every node to choose four distinct neighbours instead of one, then the average number of message transmissions per node decreases exponentially. Formally, we present a broadcasting algorithm that has time complexity O(log n) and uses O(n loglog n) transmissions per message. In contrast, we show for the standard model that every distributed and address-oblivious algorithm that broadcasts a message in time O(log n) needs Omega(n log n / log d) message transmissions.

Our algorithm can efficiently handle limited communication failures, only requires rough estimates of the number of nodes, and is robust against limited changes in the size of the network. Our results have applications in peer-to-peer networks and replicated databases.
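A minimal sketch of the basic push protocol in the phone call model (not the paper's refined algorithm, which saves transmissions by further bookkeeping). The circulant graph here is only a convenient stand-in for a random $d$-regular graph, so the O(log n) round bound does not apply to it:

```python
import random

def circulant_graph(n, d):
    # Stand-in d-regular graph (a circulant, NOT a random regular graph):
    # node i is adjacent to i +/- 1, ..., i +/- d//2 (mod n).
    return {i: [(i + k) % n for k in range(1, d // 2 + 1)]
               + [(i - k) % n for k in range(1, d // 2 + 1)] for i in range(n)}

def push_broadcast(adj, fanout, seed=0):
    # Push variant of the phone call model: in each round, every informed
    # node calls `fanout` randomly chosen neighbours and transmits the
    # message.  Returns (rounds, total message transmissions).
    rng = random.Random(seed)
    informed = {0}
    rounds = transmissions = 0
    while len(informed) < len(adj):
        newly = set()
        for v in informed:
            for u in rng.sample(adj[v], min(fanout, len(adj[v]))):
                transmissions += 1
                if u not in informed:
                    newly.add(u)
        informed |= newly
        rounds += 1
    return rounds, transmissions
```

The paper's modification corresponds to `fanout=4` instead of `fanout=1`; the point of the result is that, on random regular graphs, this small change reduces the transmissions per node exponentially.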

Constant Width Characterizations of Small Depth Circuit Classes

Barrington's Theorem showed that constant-width branching programs, or equivalently constant-width circuits, are surprisingly powerful. They compute exactly the functions computed by AND/OR circuits of constant fan-in and logarithmic depth, i.e. NC^1.

At the heart of this result is an algebraic construction, showing that the word problem for any finite non-solvable monoid is complete for NC^1. Shortly after Barrington's result, Barrington and Thérien gave characterizations of the constant-depth circuit classes AC^0 and ACC^0 within the same algebraic framework of finite monoids, using aperiodic and solvable monoids, respectively.

In this talk we will survey recent results providing characterizations of AC^0 and ACC^0 by imposing certain topological constraints (for instance planarity) on constant width branching programs and circuits.
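To make the model concrete, here is a sketch of how a permutation branching program is evaluated: an instruction list of (variable, permutation-if-0, permutation-if-1) triples whose product is read off at the end. The XOR program below is a toy example, not Barrington's actual commutator construction over 5-cycles:

```python
def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(i) = p[q[i]].
    return tuple(p[q[i]] for i in range(len(p)))

def evaluate(program, x, width=5):
    # A permutation branching program is a list of instructions
    # (var, perm0, perm1); on input x it multiplies out perm_{x[var]}
    # instruction by instruction, starting from the identity.
    result = tuple(range(width))
    for var, perm0, perm1 in program:
        result = compose(perm1 if x[var] else perm0, result)
    return result

# Toy program computing XOR of two bits: the product is a fixed
# transposition iff x[0] XOR x[1] == 1, and the identity otherwise.
IDENT = (0, 1, 2, 3, 4)
SWAP = (1, 0, 2, 3, 4)
XOR_PROGRAM = [(0, IDENT, SWAP), (1, IDENT, SWAP)]
```

Barrington's theorem shows that with width 5 (permutations over S_5) such programs of polynomial length capture all of NC^1.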

Space complexity versus query complexity

Combinatorial property testing deals with the following relaxation of decision problems: given a fixed property and an input $x$, one wants to decide whether $x$ satisfies the property or is ``far'' from satisfying it. The main focus of property testing is identifying large families of properties that can be tested with a certain number of queries to the input. In this paper we study the relation between the space complexity of a language and its query complexity. Our main result is that for any space complexity $s(n) \leq \log n$ there is a language with space complexity $O(s(n))$ and query complexity $2^{\Omega(s(n))}$.

Our result has implications with respect to testing languages accepted by certain restricted machines. Alon et al. [FOCS 1999] have shown that any regular language is testable with a constant number of queries. It is well known that any language in space o(\log \log n) is regular, thus implying that such languages can be so tested. It was previously known that there are languages in space O(\log n) that are not testable with a constant number of queries and Newman [FOCS 2000] raised the question of closing the exponential gap between these two results. A special case of our main result resolves this problem as it implies that there is a language in space O(\log \log n) that is not testable with a constant number of queries. It was also previously known that the class of testable properties cannot be extended to all context-free languages. We further show that one cannot even extend the family of testable languages to the class of languages accepted by single counter machines.

Algorithms to Approximate Treewidth

Many NP-hard combinatorial problems can be solved in polynomial time if the underlying graph has a special structure, like for example a tree. The notion of a tree decomposition generalizes the tree structure in such a way that for many problems dynamic programming algorithms can be developed that run in polynomial time, except for one parameter, the width of the tree decomposition. Therefore, it is of utmost importance to find tree decompositions of smallest possible width, known as the treewidth of a graph.

In this talk, we discuss how such tree decompositions can be found, and how lower bounds on the treewidth can be determined. Computing the treewidth of a graph is itself an NP-hard problem. Therefore, heuristics are typically applied to obtain good but possibly non-optimal tree decompositions. To benchmark such results, a wide variety of lower bounds have been derived in recent years. Finally, several approaches for exact methods have been developed and tested, including integer linear programming and exponential time and space algorithms.
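One of the simplest of the lower bounds alluded to above is the degeneracy bound: repeatedly delete a minimum-degree vertex, and the largest minimum degree seen along the way is a lower bound on the treewidth. A small sketch (standard technique, not specific to the talk):

```python
def degeneracy_lower_bound(adj):
    # adj maps each vertex to the set of its neighbours.  The degeneracy
    # (the maximum, over a min-degree elimination order, of the degree of
    # the vertex removed) is a classical lower bound on treewidth.
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    bound = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # minimum-degree vertex
        bound = max(bound, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return bound
```

For example, the bound is tight on complete graphs (treewidth n-1) and on cycles (treewidth 2).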

Geometric Representations and Algorithms arising in Some Problems involving Dynamical Systems

Dynamical systems are the source of a number of interesting algorithmic challenges. In particular, the problem of generating constrained optimal paths leads to global optimization problems requiring efficient approximation algorithms. We will discuss two case studies and use them to outline an algorithmic framework for the solution of such problems. The first problem is that of trajectory generation for dynamical robots operating in unstructured environments in the absence of detailed models of the dynamics of the environment or of the robot itself. This involves two subproblems - that of task variation, and that of imprecision in models of dynamics. We will show that it is possible to encode this problem so that the desired trajectories can be generated by a data-driven approximation of a task manifold. The second problem arises in the domain of signal processing, involving the design of fixed-point implementations of given floating-point filters. This space of filters is a nonlinear manifold, on which we can use the geodesic distance to identify the best fixed-point instantiation. We will present novel algorithms for optimally sampling such manifolds such that the desired geodesics can be approximated.

An approximation method for the Shapley value

The Shapley value is a key solution concept for coalitional games in general and voting games in particular. Its main advantage is that it provides a solution that is both unique and fair. But its main problem is that finding the players’ Shapley values for a voting game is computationally hard. Likewise, the inverse problem of finding a voting game that yields some desired Shapley values to the players is also computationally hard. Given the importance of the Shapley value and voting games, our objective is to overcome these two problems. To this end, we present a computationally efficient method that takes a voting game as input and gives the players’ approximate Shapley values for the game. We also present an anytime approximation method for finding a voting game that yields some desired Shapley values to the players.
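A generic Monte Carlo estimator for the Shapley value in a weighted voting game (a standard sketch, not necessarily the paper's method): sample random orderings of the players and count how often each player is pivotal, i.e. its weight pushes the running total past the quota.

```python
import random

def shapley_monte_carlo(weights, quota, samples=20000, seed=0):
    # Estimate each player's Shapley value as the fraction of sampled
    # permutations in which that player is pivotal.  Assumes the total
    # weight meets the quota, so every permutation has one pivotal player.
    rng = random.Random(seed)
    pivotal = [0] * len(weights)
    players = list(range(len(weights)))
    for _ in range(samples):
        rng.shuffle(players)
        total = 0
        for p in players:
            if total < quota <= total + weights[p]:
                pivotal[p] += 1  # p's vote turns the coalition winning
                break
            total += weights[p]
    return [c / samples for c in pivotal]
```

For three symmetric players with quota 2 the estimates converge to 1/3 each, as symmetry dictates.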

Weighted voting games: manipulation, control, and computational complexity

Weighted voting games are coalitional games in which each player has a weight (intuitively corresponding to its voting power), and a coalition is successful if the sum of its weights exceeds a given threshold. A key issue in these games is that of distributing the total payoff. A traditional approach to this problem, which has many attractive properties, is to pay each agent according to his Shapley value, which is a classical solution concept in cooperative games. However, the players may want to increase their payoff share under this scheme, e.g., by distributing their weight between several identities, while the central authority may want to affect the payoff distribution, e.g., by changing the threshold.

We study these types of dishonest behavior (usually referred to as manipulation and control in voting theory) both from the worst-case perspective, providing upper and lower bounds on their effects on each player's payoff, and from the computational perspective, showing that finding a successful manipulation is a computationally hard problem. We also provide several results on the computational complexity of other solution concepts for weighted voting games, such as the least core and the nucleolus. Finally, we discuss vector weighted voting games, which are a natural generalization of weighted voting games, and show several results on complexity of finding the optimal representation for such games.

A Sublinear-Time Approximation Scheme for Bin Packing

The bin-packing problem is defined as follows: given a set of items with sizes in (0,1], find a packing of these items into the fewest possible unit-size bins. We present a sublinear-time asymptotic approximation scheme for the bin-packing problem; that is, for any epsilon > 0, we present an algorithm that has sampling access to the input instance and outputs a value that approximates the cost of an optimal solution to within a multiplicative factor of (1 + epsilon), up to an additive term. We consider two types of sampling access for the algorithm: uniform and weighted sampling. For a weighted sample, an item is sampled with probability proportional to its size. It is clear that uniform sampling by itself will not allow a sublinear-time algorithm in this setting; a small number of items might constitute most of the total weight and uniform samples will not hit them. In the presence of weighted samples, the approximation algorithm runs in time sublinear in the number of items, up to a factor that is an exponential function of 1/epsilon. When both weighted and uniform sampling are allowed, an even smaller running time suffices.
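The following sketch (an illustration, not the paper's algorithm) implements the weighted-sampling oracle and demonstrates why uniform sampling alone fails: a single large item among many tiny ones carries most of the weight, yet uniform samples almost never see it.

```python
import random

def weighted_sample(sizes, rng):
    # Return an item index drawn with probability proportional to its size,
    # mirroring the weighted-sampling oracle described above.
    r = rng.random() * sum(sizes)
    acc = 0.0
    for i, s in enumerate(sizes):
        acc += s
        if r < acc:
            return i
    return len(sizes) - 1

# One item of size 1.0 dominates the total weight of 9,999 tiny items.
rng = random.Random(0)
sizes = [1.0] + [1e-6] * 9_999

# Count how often 100 samples of each kind hit the dominant item.
uniform_hits = sum(sizes[rng.randrange(len(sizes))] > 0.5 for _ in range(100))
weighted_hits = sum(sizes[weighted_sample(sizes, rng)] > 0.5 for _ in range(100))
```

Uniform sampling hits the big item with probability 1/10,000 per draw, while a weighted sample hits it almost every time; any sublinear estimator of the packing cost must see such items.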

Joint work with Petra Berenbrink and Christian Sohler.

Approximating Geometric Coverage Problems

Coverage problems arise in the design of wireless networks. The aim is to place base stations at suitable locations so as to provide good coverage and at the same time limit the interference that is caused at customer locations that are covered by several base stations. Two fundamental optimization problems arising in this scenario are the unique coverage problem (maximizing the number of customers that are served by exactly one base station) and the minimum membership set cover problem (minimizing the maximum interference at any customer location). Previous work had considered these coverage problems in an abstract setting where the coverage regions of potential base station locations are represented by arbitrary set families. Motivated by the geometric nature of the application setting, we study the approximability of geometric versions of these problems and show that substantially improved approximation guarantees can be achieved in this case.

The talk is based on joint work with Erik Jan van Leeuwen (CWI, Amsterdam).

Sound 3-query PCPPs are Long

We present a tradeoff between the {\em length} of a $3$-query probabilistically checkable proof of proximity (PCPP) and the best possible {\em soundness} obtained by querying it. Consider the task of distinguishing between ``good'' inputs $w \in \{0,1\}^n$ that are codewords of a linear error correcting code $C$ over the binary alphabet and ``bad'' inputs that differ from every word of $C$ on $\approx 1/2$ of their bits. To perform this task, we allow oracle access to $w$ and an auxiliary proof $\pi$; however, we place the following limitations on our verifier: (i) it can read at most $3$ bits from $w$ and $\pi$, (ii) it must accept every ``good'' input with probability one (when accompanied by a suitable proof), and (iii) its decision must be {\em linear}, i.e., based on the parity of the sum of the queried bits. We notice that all known techniques for PCPP constructions yield verifiers for linear codes that satisfy these conditions, so our tradeoff applies to all of them.

Our main result implies that for certain codes, every verifier accessing a proof of polynomial length (in the length of the input), will accept some ``bad'' words with probability at least $\approx 2/3$. In contrast, if no limitation is placed on the proof length, we can construct a verifier that rejects any ``bad'' word with the largest possible probability of $\approx 1/2$. In other words, the closer the rejection probability is to the best possible, the longer the proof our verifier will require. This tradeoff between proof length and soundness holds for any code that is not locally testable, including codes with large dual distance and most random Low Density Parity Check (LDPC) codes.

Sequence comparison by transposition networks

Computing sequence alignments is a classical method of comparing sequences, which has applications in many areas of computing, such as signal processing and bioinformatics. Semi-local sequence alignment is a recent generalisation of this method, in which the alignment of a given sequence and all substrings of another sequence are computed simultaneously at no additional asymptotic cost. In this talk, we will show that there is a close connection between semi-local sequence comparison and traditional comparison networks. The network approach can be used to represent different sequence comparison algorithms in a unified form, and in some cases provides generalisations or improvements on existing algorithms. In particular, we obtain a new algorithm for sparse semi-local sequence comparison, and a data-parallel subsequence recognition algorithm suitable for implementation using SIMD instructions as provided e.g. by the Intel MMX architecture. We also present applications of the network approach to bit-parallel sequence comparison, and to efficient comparison of sequences with a low edit distance. We conclude that the comparison network method is a very general and flexible way of understanding and improving different sequence comparison algorithms, as well as obtaining methods for their efficient implementation.

A simple P-matrix linear complementarity problem for discounted games

We consider zero-sum discounted games, which are played by two players on a finite directed graph with rewards on the edges. The players' strategies together determine an infinite path in the game graph. One player tries to maximize, the other to minimize, the discounted sum of the rewards encountered on the path. These games provide one of the few natural problems (is the value of a game greater than k?) that are in NP intersect coNP but for which no polynomial-time algorithms are known. One of the main algorithms for solving these games is ``strategy improvement''; however, no superpolynomial (or even superlinear) lower bound on its worst-case complexity is known.

In this talk we present a simple reduction from discounted games to the P-matrix linear complementarity problem (PLCP), and discuss the application of known algorithms for PLCPs to discounted games. In particular, we recover a variant of strategy improvement. We also discuss the underlying combinatorial framework, which is given by unique sink orientations of cubes.
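For context, the textbook baseline for these games is value iteration (not the strategy-improvement or PLCP methods of the talk). Under the convention that the payoff is the discounted sum of edge rewards along the play, the values satisfy a max/min fixed-point equation that can be iterated:

```python
def value_iteration(graph, rewards, beta, iters=500):
    # graph maps each vertex to (owner, successors), where owner is 'max'
    # or 'min'; rewards[(u, v)] is the reward on edge (u, v); beta in (0,1)
    # is the discount factor.  Iterates the optimality equations
    #   V(v) = max/min over successors u of  rewards[(v,u)] + beta * V(u),
    # which converge geometrically since the update is a beta-contraction.
    values = {v: 0.0 for v in graph}
    for _ in range(iters):
        values = {
            v: (max if owner == 'max' else min)(
                rewards[(v, u)] + beta * values[u] for u in succs)
            for v, (owner, succs) in graph.items()
        }
    return values
```

For a single vertex with a self-loop of reward 1 and beta = 0.5, the value converges to the geometric sum 1/(1 - 0.5) = 2.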

Joint work with Marcin Jurdzinski.

Design and analysis of weighted voting games

Weighted voting games are ubiquitous mathematical models which are used in economics, political science, neuroscience, threshold logic, reliability theory and distributed systems. They model situations where agents with variable voting weight vote in favour of or against a decision. A coalition of agents is winning if and only if the sum of weights of the coalition exceeds or equals a specified quota. The calculation of voting powers of players in a weighted voting game has been extensively researched in the last few years. However, the inverse problem of designing a weighted voting game with a desirable distribution of power has received less attention. We present a useful algorithm which uses generating functions and interpolation to compute voting systems with required voting power distribution. We also analyse the combinatorial and computational aspects of multiple weighted voting games.
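The generating-function idea mentioned above can be sketched as follows: the coefficients of prod_i (1 + x^{w_i}) count coalitions by total weight, from which swing counts can be read off. This toy version computes the normalised Banzhaf index (the talk's method and target index may differ):

```python
def banzhaf_indices(weights, quota):
    # coeff[s] = number of coalitions of total weight s, computed by a
    # subset-sum DP equivalent to expanding prod_i (1 + x^{w_i}).
    def counts(ws):
        coeff = [0] * (sum(ws) + 1)
        coeff[0] = 1
        for w in ws:
            for s in range(len(coeff) - 1, w - 1, -1):
                coeff[s] += coeff[s - w]
        return coeff

    # Player i is a swing for a coalition (excluding i) of weight s with
    # quota - w_i <= s < quota: the coalition wins with i but not without.
    swings = []
    for i, wi in enumerate(weights):
        c = counts(weights[:i] + weights[i + 1:])
        swings.append(sum(c[s] for s in range(max(0, quota - wi),
                                              min(quota, len(c)))))
    total = sum(swings)
    return [s / total for s in swings]
```

For weights (2, 1, 1) and quota 3, enumeration confirms 3 swings for the heavy player and 1 for each light player, giving indices (0.6, 0.2, 0.2).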

Testing expansion in bounded-degree graphs

This is joint work with Christian Sohler

Testing s-t Connectivity

The meta problem in the area of property testing is the following: given a combinatorial structure $S$, distinguish between the case that $S$ satisfies some property ${\cal P}$ and the case that $S$ is {\em $\epsilon$-far} from satisfying ${\cal P}$. Roughly speaking, a combinatorial structure is said to be $\epsilon$-far from satisfying some property ${\cal P}$ if an $\epsilon$-fraction of its representation must be modified in order to make $S$ satisfy ${\cal P}$. The main goal in property testing is to design randomized algorithms, which look at a very small portion of the input, and use this information to distinguish with high probability between the above two cases. Such algorithms are called {\em property testers} or simply {\em testers} for the property ${\cal P}$. Preferably, a tester should look at a portion of the input whose size is a function of $\epsilon$ only and is independent of the input size. A property that admits such a tester is called {\em testable}.

This talk deals with property testing of graphs in the orientation model. More specifically it deals with the question whether the property of $st$-connectivity (the property of all orientations of a graph $G$ in which there is a directed path from a predetermined vertex $s$ to another predetermined vertex $t$) can be tested with a constant number of queries. The answer to this question is affirmative: there exists a non-trivial reduction of the $st$-connectivity problem to the problem of testing languages that are decidable by branching programs, which was solved by Newman. The reduction combines combinatorial arguments with a concentration-type lemma that is proved for this purpose. Unlike the case for many other property testing results, here the resulting testing algorithm is highly non-trivial itself, and not only its analysis.

Oblivious Routing for Minimizing Average Latency

Oblivious Routing algorithms choose routing paths only dependent on the network topology but independent of the traffic demand in the network. Hence, the routing paths do not depend on dynamically changing parameters, which allows for an efficient implementation of these types of algorithms.

Previous work has analyzed oblivious routing algorithms for \emph{congestion} (maximum load of a network link) and \emph{total load} (sum of the loads of all network links), where the load of a link is a concave function in the flow going over the link.

In this work we investigate oblivious routing algorithms for the sum of the squares of the loads, which can be interpreted as the average latency in the network. We show a competitive ratio of $O(\log n)$ for the case that all sources are directed to the same target.

This is joint work with Prahladh Harsha, Thomas Hayes, Hariharan Narayanan and Jaikumar Radhakrishnan

Finding a Heaviest Triangle in a Graph

In this talk I will discuss recent advances in the problem of finding a heaviest (maximum-weight) triangle in an undirected graph with real weights assigned to vertices. I will present a simple algorithm for this problem that runs in time O(n^w), where w is the exponent of the fastest known matrix multiplication algorithm. By the currently best bound on w, the running time of this algorithm is O(n^2.376). This algorithm substantially improves the previous time-bounds for this problem established by Vassilevska et al. (STOC 2006, O(n^2.688)) and (ICALP 2006, O(n^2.575)). Its asymptotic time complexity matches that of the fastest known algorithm for finding a triangle (not necessarily a maximum-weight one) in a graph.

By applying or extending this algorithm, one can also improve the upper bounds on finding a maximum-weight triangle in a sparse graph and on finding a maximum-weight subgraph isomorphic to a fixed graph established in the papers by Vassilevska et al. For example, one can find a maximum-weight triangle in a vertex-weighted graph with m edges in asymptotic time required by the fastest algorithm for finding any triangle in a graph with m edges, i.e., in time O(m^1.41).
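For reference, the naive baseline the talk improves upon is the straightforward O(n^3) scan over all vertex triples (the talk's algorithm replaces this with fast matrix multiplication to reach O(n^w)):

```python
def heaviest_triangle(weights, adj):
    # Baseline O(n^3) search for the heaviest triangle in a vertex-weighted
    # undirected graph.  adj is a symmetric boolean adjacency matrix;
    # returns (best total weight, (i, j, k)) or (None, None) if the graph
    # is triangle-free.
    n = len(weights)
    best, best_triple = None, None
    for i in range(n):
        for j in range(i + 1, n):
            if not adj[i][j]:
                continue
            for k in range(j + 1, n):
                if adj[i][k] and adj[j][k]:
                    w = weights[i] + weights[j] + weights[k]
                    if best is None or w > best:
                        best, best_triple = w, (i, j, k)
    return best, best_triple
```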

Joint work with Andrzej Lingas (Lund University)