# ICALP 2012 Accepted Papers with Abstracts for Track C: Foundations of Networked Computation

Multi-User Equality Testing and Its Applications

**Abstract:** Motivated by the recent widespread emergence of location-based services (LBS) over mobile devices, we explore highly efficient protocols for proximity testing. Such protocols allow a group of friends to discover if they are all close to each other in some physical location, without revealing their individual locations to each other even if one of them is missing. Since our focus is on hand-held devices, we aim at protocols with very small communication complexity and a small constant number of rounds.

The proximity-testing problem reduces to private equality testing (PET), in which parties find out whether or not they hold the same input without revealing any other information about their inputs to each other. While previous works analyze the 2-party PET special case (and its LBS application), in this work we consider highly efficient schemes for the multiparty case with no honest majority. We provide schemes for both a direct setting and a setting with a trusted mediating server. Our most efficient scheme takes 2 rounds, where in each round the users send only a couple of ElGamal ciphertexts.
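
The 2-party PET building block that the abstract generalizes can be illustrated with textbook ElGamal. The following Python sketch is a toy (one party holds the full decryption key, unlike a real deployment where the key would be shared or mediated, and the function names are hypothetical); it only shows the underlying algebra: the randomly blinded ratio of the two encoded inputs decrypts to the group identity exactly when the inputs match, and to a random group element otherwise.

```python
import random

# Toy schoolbook ElGamal over Z_p* with the RFC 3526 2048-bit MODP prime.
# Illustration only -- NOT a secure implementation.
p = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
    "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
    "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
    "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
    "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
    "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
    "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFF"
    "FFFFFFFF", 16)
g = 2

def keygen():
    sk = random.randrange(2, p - 2)
    return sk, pow(g, sk, p)

def enc(pk, m):                           # encrypt a group element m
    s = random.randrange(2, p - 2)
    return pow(g, s, p), m * pow(pk, s, p) % p

def dec(sk, ct):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - sk, p) % p

def pet(x, y):
    """Toy 2-party PET: learn whether x == y and (w.o.p.) nothing else."""
    sk, pk = keygen()                     # receiver's key pair
    c1, c2 = enc(pk, pow(g, x, p))        # sender transmits Enc(g^x)
    r = random.randrange(2, p - 2)        # receiver blinds the ratio g^(x-y)
    inv_gy = pow(pow(g, y, p), p - 2, p)
    blinded = (pow(c1, r, p), pow(c2 * inv_gy % p, r, p))
    return dec(sk, blinded) == 1          # identity element iff x == y

print(pet(5, 5), pet(5, 7))  # True False
```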

Online Packing with Gradually Improving Capacity Estimations with Applications to Network Lifetime Maximization

**Abstract:** We introduce a general model for online packing problems with applications to lifetime optimization of wireless sensor networks. Classical approaches for lifetime maximization make the crucial assumption that battery capacities of sensor nodes are known a priori. For real-world batteries, however, the capacities are only vaguely known. To capture this aspect, we introduce an adversarial online model where estimates become more and more accurate over time, that is, when using the corresponding resources. Our model is based on general linear packing programs and we assume the remaining capacities to be always specified by lower and upper bounds that may deviate from each other by a fixed factor alpha. We analyze the algorithmic consequences of our model and provide a general ln(alpha)/alpha competitive framework. Furthermore, we show a complementary upper bound of O(1/sqrt(alpha)).

Anonymous Card Shuffling and its Applications to Parallel Mixnets

**Abstract:** We study the question of how to shuffle $n$ cards when faced with an opponent who knows the initial position of all the cards {\em and} can track every card when permuted, {\em except} when one takes $K < n$ cards at a time and shuffles them in a private buffer ``behind your back,'' which we call {\em buffer shuffling}. The problem arises naturally in the context of parallel mixnet servers as well as other security applications. Our analysis is based on related analyses of load-balancing processes. We include extensions to variations that involve corrupted servers and adversarially injected messages, which correspond to an opponent who can peek at some shuffles in the buffer and who can mark some number of the cards. In addition, our analysis makes novel use of a sum-of-squares metric for anonymity, which leads to improved performance bounds for parallel mixnets and can also be used to bound well-known existing anonymity measures.

k-Chordal Graphs: from Cops and Robber to Compact Routing via Treewidth

**Abstract:** Cops and robber games concern a team of cops that must capture a robber moving in a graph. We consider the class of k-chordal graphs, i.e., graphs with no induced cycle of length greater than k, $k\geq 3$. We prove that k-1 cops are always sufficient to capture a robber in k-chordal graphs. This leads us to our main result, a new structural decomposition for a graph class including k-chordal graphs.

We present a quadratic algorithm that, given a graph G and $k\geq 3$, either returns an induced cycle larger than k in G, or computes a tree-decomposition of G, each bag of which contains a dominating path with at most k-1 vertices. This allows us to prove that any k-chordal graph with maximum degree $\Delta$ has treewidth at most $(k-1)(\Delta-1)+2$, improving the $O(\Delta (\Delta-1)^{k-3})$ bound of Bodlaender and Thilikos (1997). Moreover, any graph admitting such a tree-decomposition has small hyperbolicity.

As an application, for any n-node graph admitting such a tree-decomposition, we propose a compact routing scheme using routing tables, addresses and headers of size O(log n) bits and achieving an additive stretch of $O(k \log \Delta)$. As far as we know, this is the first routing scheme with O(log n)-bit routing tables and small additive stretch for k-chordal graphs.

Deterministic network exploration by anonymous silent agents with local traffic reports

**Abstract:** A team consisting of an unknown number of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to explore the network: every node must be visited by at least one agent and all agents must eventually stop. Agents are anonymous (identical), execute the same deterministic algorithm and move in synchronous rounds along links of the network. They are silent: they cannot send any messages to other agents or mark visited nodes in any way. In the absence of any additional information, exploration with termination of an arbitrary network in this weak model is impossible. Our aim is to solve the exploration problem by giving agents very restricted local traffic reports. Specifically, an agent that is at a node v in a given round is provided with three bits of information, answering the following questions: Am I alone at v? Did any agent enter v in this round? Did any agent exit v in this round? We show that this small amount of information suffices to solve the exploration problem in arbitrary networks. More precisely, we give a deterministic terminating exploration algorithm working in arbitrary networks for all initial configurations that are not perfectly symmetric, i.e., in which there are agents with different views of the network. The algorithm works in time polynomial in the (unknown) size of the network. A deterministic terminating exploration algorithm working for all initial configurations in arbitrary networks does not exist.

Minimizing Rosenthal Potential in Multicast Games

**Abstract:** A multicast game is a network design game modelling how selfish non-cooperative agents build and maintain one-to-many network communication. There is a special source node and a collection of agents located at corresponding terminals. Each agent is interested in selecting a route from the special source to its terminal minimizing the cost. The mutual influence of the agents is determined by a cost sharing mechanism, which evenly splits the cost of an edge among all the agents using it for routing. The existence of a Nash equilibrium for the game was previously established by means of the Rosenthal potential. Anshelevich et al. [FOCS 2004, SICOMP 2008] introduced a measure of quality of the best Nash equilibrium, the price of stability, as the ratio of its cost to the optimum network cost. While the Rosenthal potential is a reasonable measure of the quality of Nash equilibria, finding a Nash equilibrium minimizing this potential is NP-hard.

In this paper we provide several algorithmic and complexity results on finding a Nash equilibrium minimizing the value of the Rosenthal potential. Let $n$ be the number of agents and $G$ be the communication network. We show that:

- For a given strategy profile $s$ and integer $k\geq 1$, there is a local search algorithm which in polynomial time $n^{O(k)} \cdot |G|^{O(1)}$ finds a better strategy profile, if there is any, in a $k$-exchange neighbourhood of $s$. In other words, the algorithm decides if the Rosenthal potential can be decreased by changing the strategies of at most $k$ agents;

- The running time of our local search algorithm is essentially tight: unless $FPT = W[1]$, for any function $f(k)$, searching the $k$-exchange neighbourhood cannot be done in time $f(k)\cdot |G|^{O(1)}$.

The key ingredient of our algorithmic result is a subroutine that finds an equilibrium with minimum potential in $3^n \cdot |G|^{O(1)}$ time. In other words, finding an equilibrium with minimum potential is fixed-parameter tractable when parameterized by the number of agents.
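
For fair cost sharing as described above, the Rosenthal potential has a well-known closed form, $\Phi(s) = \sum_e c_e \cdot H_{n_e(s)}$, where $n_e(s)$ is the number of agents routing through edge $e$ and $H_j$ is the $j$-th harmonic number. The sketch below (hypothetical names and data structures, not the paper's algorithm) just evaluates this quantity for a strategy profile:

```python
from collections import Counter

def rosenthal_potential(profile, cost):
    """profile: one set of edges (a route) per agent; cost: dict edge -> cost.
    Returns Phi(s) = sum over edges e of cost[e] * H_{n_e(s)}."""
    load = Counter(e for route in profile for e in route)  # n_e(s) per edge
    return sum(cost[e] * sum(1.0 / i for i in range(1, load[e] + 1))
               for e in load)

# Two agents share edge 'a'; the second also uses edge 'b'.
s = [{'a'}, {'a', 'b'}]
c = {'a': 6.0, 'b': 2.0}
print(rosenthal_potential(s, c))  # 6*(1 + 1/2) + 2*1 = 11.0
```

A unilateral deviation changes this potential by exactly the deviating agent's change in cost, which is why its local minima are Nash equilibria.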

Efficiency-Revenue Trade-offs in Auctions

**Abstract:** When agents with independent priors bid for a single item, Myerson's optimal auction maximizes expected revenue, whereas Vickrey's second-price auction optimizes social welfare. We address the natural question of {\em trade-offs} between the two criteria, that is, auctions that optimize, say, revenue under the constraint that the welfare is above a given level. If one allows for randomized mechanisms, it is easy to see that there are polynomial-time mechanisms that achieve any point in the trade-off (the {\em Pareto curve\/}) between revenue and welfare. We investigate whether one can achieve the same guarantees using {\em deterministic} mechanisms. We provide a negative answer to this question by showing that this is a (weakly) NP-hard problem. On the positive side, we provide polynomial-time deterministic mechanisms that approximate with arbitrary precision any point of the trade-off between these two fundamental objectives for the case of two bidders, even when the valuations are correlated arbitrarily. The major problem left open by our work is whether there is such an algorithm for three or more bidders with independent valuation distributions.
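
The revenue/welfare trade-off can be made concrete with a quick Monte Carlo sketch (assuming two i.i.d. uniform-[0,1] bidders -- an illustrative setup, not taken from the paper): raising the reserve price of a second-price auction from 0 (Vickrey, welfare-optimal) toward 1/2 (the Myerson-optimal reserve for this prior) increases expected revenue while decreasing expected welfare.

```python
import random

def second_price_with_reserve(r, trials=200_000, seed=1):
    """Expected (revenue, welfare) of a second-price auction with reserve r,
    two i.i.d. uniform-[0,1] bidders, estimated by Monte Carlo."""
    rng = random.Random(seed)
    rev = wel = 0.0
    for _ in range(trials):
        a, b = rng.random(), rng.random()
        hi, lo = max(a, b), min(a, b)
        if hi >= r:                  # item sold to the highest bidder
            rev += max(lo, r)        # winner pays max(second bid, reserve)
            wel += hi                # welfare = winner's valuation
    return rev / trials, wel / trials

for r in (0.0, 0.25, 0.5):
    print(r, second_price_with_reserve(r))
```

The exact values at the endpoints are revenue 1/3 and welfare 2/3 for reserve 0, versus revenue 5/12 and welfare 7/12 for reserve 1/2 -- two points on the Pareto curve the abstract discusses.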

Counting Arbitrary Subgraphs in Data Streams

**Abstract:** We study the subgraph counting problem in data streams. We provide the first non-trivial estimator for approximately counting the number of occurrences of an arbitrary subgraph H of constant size in a (large) graph G. Our estimator works in the turnstile model, i.e., can handle both edge insertions and edge deletions, and is applicable in a distributed setting. Prior to this work, estimators were known only for a few non-regular graphs, and only in the case of edge insertions, leaving the problem of counting general subgraphs in the turnstile model wide open. We further demonstrate the applicability of our estimator by analyzing its concentration for several graphs H and the case where G is a power law graph.

Online Mechanism Design (Randomized Rounding on the Fly)

**Abstract:** We study incentive compatible mechanisms for combinatorial auctions (CAs) in an online model with sequentially arriving bidders. We distinguish two kinds of arrivals: bidders might appear in an order specified by a random permutation, in analogy to the secretary problem, or in an order specified by an adversary, as in a worst-case competitive analysis. Previously known online mechanisms for CAs assume that each item is available at a certain multiplicity $b > 1$. Typically, one assumes $b =\Omega(\log m)$, where $m$ is the number of different items.

We present the first online mechanisms for CAs guaranteeing competitiveness without assumptions about the minimum multiplicity. In particular, our analysis covers the standard CAs with $b=1$. We introduce an online variant of oblivious randomized rounding enabling us to prove competitive ratios that are close to or even beat the best known offline approximation factors for various CAs settings.

Our mechanisms are universally truthful. They are interesting not only for online but also for offline optimization as they have a polynomially bounded running time for valuations given by demand oracles. For example, we achieve a competitive ratio of $O(\log m)$ with respect to the social welfare for CAs with submodular (or more general XOS) valuations when considering bidders in random order. This beats the best previously known offline approximation factor for this important class of valuations by a factor of $O(\log \log m)$.

Incentive Ratios of Fisher Markets

**Abstract:** We consider a Fisher market where agents submit their own utility functions and money endowments to the market maker, who, upon receiving every agent's report, derives market equilibrium prices and allocations of the items. While agents may benefit by misreporting their private information, we show that the percentage of improvement by a unilateral strategic play, called the incentive ratio, is rather limited---it is less than 2 for linear markets and at most $e^{1/e} \approx 1.445$ for Cobb-Douglas markets. We further prove that both ratios are tight.

On the Locality of NP-Complete Problems

**Abstract:** We consider the distributed message-passing {LOCAL} model. In this model a communication network is represented by a graph where vertices host processors, and communication is performed over the edges. Computation proceeds in synchronous rounds. The running time of an algorithm is the number of rounds from the beginning until all vertices terminate. An algorithm is called {\em local} if it terminates within a constant number of rounds. The question of what problems can be computed locally was raised by Naor and Stockmeyer \cite{NS93} in their seminal paper in STOC'93. Since then the quest for problems with local algorithms, and for problems that cannot be computed locally, has become a central research direction in the field of distributed algorithms \cite{KMW04,KMW10,LOW08,PR01}. The currently known problems that can be solved locally have simple sequential algorithms. On the other hand, many problems with simple sequential algorithms cannot be solved locally \cite{KMW04,KMW10}.

We devise the first local algorithm for an {NP-complete problem}. Specifically, we show that O(n^{1/2 + \epsilon} \cdot \chi)-coloring can be computed within O(1) rounds, where \epsilon > 0 is an arbitrarily small constant, and \chi is the chromatic number of the input graph. (This problem was shown to be NP-complete in \cite{Z07}.) On the way to this result we devise a constant-time algorithm for computing (O(1), O(n^{1/2 + \epsilon}))-network-decompositions. Network-decompositions were introduced by Awerbuch et al. \cite{AGLP89}, and are very useful for computing various distributed problems. The best previously-known algorithm for network-decomposition has a polylogarithmic running time (but is applicable for a wider range of parameters) \cite{LS93}. We also devise a \Delta^{1 + \epsilon}-coloring algorithm for graphs with sufficiently large maximum degree \Delta that runs within O(1) rounds. It improves the best previously-known result for this family of graphs, which is O(\log^* n) \cite{SW10}.

A QPTAS for $\eps$-Envy-Free Profit-Maximizing Pricing on Line Graphs

**Abstract:** We consider the problem of pricing edges of a line graph so as to maximize the profit made from selling intervals to single-minded customers. An instance is given by a set $E$ of $n$ edges with a limited supply for each edge, and a set of $m$ clients, where each client $j$ specifies one interval of $E$ she is interested in and a budget $B_j$ which is the maximum price she is willing to pay for that interval. An envy-free pricing is one in which every customer is allocated a (possibly empty) interval maximizing her utility. Recently, Grandoni and Rothvoss (SODA 2011) gave a polynomial-time approximation scheme (PTAS) for the unlimited supply case with running time $(nm)^{O((\frac{1}{\eps})^{\frac{1}{\eps}})}$. By utilizing the known hierarchical decomposition of doubling metrics, we give a PTAS with running time $(nm)^{O(\frac{1}{\eps^2})}$. We then consider the limited supply case, and the notion of $\eps$-envy-free pricing in which a customer gets an allocation maximizing her utility within an additive error of $\eps$. For this case we develop an approximation scheme with running time $(nm)^{O(\frac{\log^4 \max_e H_e}{\eps^3})}$, where $H_e=\frac{B_{\max}(e)}{B_{\min}(e)}$ is the maximum ratio of the budgets of any two customers demanding edge $e$. This yields a PTAS in the uniform budget case, and a quasi-PTAS for the general case.

Growing Half-Balls: Minimizing Storage and Communication Costs in CDNs

**Abstract:** The Dynamic Content Distribution problem addresses the trade-off between storage and delivery costs in modern virtual Content Delivery Networks (CDNs). That is, a video file can be stored in multiple places so that the request of each user is served from a location near the user. This minimizes the delivery costs, but is associated with a storage cost. This problem is NP-hard even in grid networks. In this paper, we present a constant factor approximation algorithm for grid networks. We also present an $O(\log \diam)$-competitive algorithm, where $\diam$ is the normalized diameter of the network, for general networks with general metrics. We show a matching lower bound by using a reduction from online undirected \textsc{Steiner tree}. Our algorithms use a rather intuitive approach that has an elegant representation in geometric terms.

Edge Fault Tolerance on Sparse Networks

**Abstract:** Byzantine agreement, which requires $n$ processors (nodes) in a completely connected network to agree on a value dependent on their initial values and despite the arbitrary, possibly malicious behavior of some of them, is perhaps the most popular paradigm in fault-tolerant distributed systems. However, partially connected networks are far more realistic than fully connected networks, which led Dwork, Peleg, Pippenger and Upfal [STOC'86] to formulate the notion of \emph{almost-everywhere (a.e.) agreement}, which shares the same aim as the original problem, except that now not all pairs of nodes are connected by reliable and authenticated channels. In such a setting, agreement amongst all correct nodes cannot be guaranteed due to possible poor connectivity with other correct nodes, and some of them must be given up. The number of such nodes is a function of the underlying communication graph and the adversarial set of nodes.

In this work we introduce the notion of \emph{almost-everywhere agreement with edge corruptions} which is exactly the same problem as described above, except that we additionally allow the adversary to completely control some of the communication channels between two correct nodes---i.e., to ``corrupt'' edges in the network. While it is easy to see that an a.e. agreement protocol for the original node-corruption model is also an a.e. agreement protocol tolerating edge corruptions (albeit for a reduced fraction of edge corruptions with respect to the bound for node corruptions), no polynomial-time protocol is known in the case where a constant fraction of the edges can be corrupted and the degree of the network is sub-linear.

We make progress on this front, by constructing graphs of degree $O(n^\epsilon)$ (for arbitrary constant $0<\epsilon<1$) on which we can run a.e. agreement protocols tolerating a constant fraction of adversarial edges. The number of given-up nodes in our construction is $\mu n$ (for some constant $0<\mu<1$ that depends on the fraction of corrupted edges), which is asymptotically optimal. We remark that allowing an adversary to corrupt edges in the network can be seen as taking a step closer towards guaranteeing a.e. agreement amongst honest nodes even on adversarially chosen communication networks, as opposed to earlier results where the communication graph is specially constructed.

In addition, building upon the work of Garay and Ostrovsky~[Eurocrypt'08], we obtain a protocol for {\em a.e. secure computation} tolerating edge corruptions on the above graphs.

Contention issues in congestion games

**Abstract:** We study time-dependent strategies for playing congestion games. The players can time their participation in the game with the hope that fewer players will compete for the same resources. We study two models: the boat model, in which the latency of a player is influenced only by the players that start at the same time, and the conveyor belt model, in which the latency of a player is affected by the players that share the system, even if they started earlier or later; unlike standard congestion games, in these games the order of the edges in the paths affects the latency of the players. We characterize the symmetric Nash equilibria of the games with affine latencies of networks of parallel links in the boat model and we bound their price of anarchy and stability. For the conveyor belt model, we characterize the symmetric Nash equilibria of two players on parallel links. We also show that the games of the boat model are themselves congestion games. The same is true for the games of two players for the conveyor model; however, for this model the games of three or more players are not in general congestion games and may not have pure equilibria.

Distributed Algorithms for Network Diameter and Girth

**Abstract:** This paper considers the problem of computing the diameter $D$ and the girth $g$ of an $n$-node network in the CONGEST distributed model. In this model, in each synchronous round, each vertex can transmit a different short (say, $O(\log n)$ bits) message to each of its neighbors. We present a distributed algorithm that computes the diameter of the network in $O(n)$ rounds. We also present two distributed approximation algorithms. The first computes a $3/2$ multiplicative approximation of the diameter in $\Ot(D\sqrt n)$ rounds. The second computes a $2-1/g$ multiplicative approximation of the girth in $\Ot(D+\sqrt{gn})$ rounds. Recently, Frischknecht, Holzer and Wattenhofer~\cite{FHW12} considered these problems in the CONGEST model but from the perspective of lower bounds. They showed an $\OMt(n)$ rounds lower bound for exact diameter computation. For diameter approximation, they showed a lower bound of $\OMt(\sqrt n)$ rounds for getting a multiplicative approximation of $3/2-\eps$. Both lower bounds hold for networks with constant diameter. For girth approximation, they showed a lower bound of $\OMt(\sqrt n)$ rounds for getting a multiplicative approximation of $2-\eps$ on a network with constant girth. Our exact algorithm for computing the diameter matches their lower bound. Our diameter and girth approximation algorithms almost match their lower bounds for constant diameter and for constant girth.
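
As a centralized point of reference (this is not the CONGEST protocol, just the quantity it computes), the diameter of an unweighted graph is the maximum BFS eccentricity over all vertices; the $O(n)$-round distributed algorithm above effectively performs the work of these $n$ BFS traversals in the network itself.

```python
from collections import deque

def bfs_eccentricity(adj, src):
    """Largest BFS distance from src (adj: dict vertex -> neighbor list,
    assumed connected and unweighted)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter(adj):
    return max(bfs_eccentricity(adj, v) for v in adj)

# A path on 4 nodes has diameter 3; a 5-cycle has diameter 2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cycle = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(diameter(path), diameter(cycle))  # 3 2
```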

Topology-Aware VM Migration in Bandwidth Oversubscribed Datacenter Networks

**Abstract:** Virtualization can deliver significant benefits for cloud computing by enabling VM migration to improve utilization, balance load and alleviate hotspots. While several mechanisms exist to migrate VMs, few efforts have focused on developing policies to minimize migration costs in a multi-rooted tree data center network. The problem of VM migration in a data center is a variant of well-studied problems: (1) Maximum throughput tree routing problem to route VMs in a bandwidth oversubscribed network, and (2) Variants of the matching problem---quadratic assignment and demand matching---for meeting load constraints at overloaded and underloaded servers. While these problems have been individually studied, a new fundamental challenge is to simultaneously handle packing constraints of server load and tree edge capacities.

This paper proposes novel algorithms for the objective of ``relieving'' a maximal number of hot servers. Our first approach considers a related problem of migrating a maximal number of VMs, which in some scenarios can serve as a plausible solution for the original objective. We provide an $8$-approximation algorithm for the problem. Our second approach directly targets the original objective, while assuming that the network topology is a directed tree. The idea is to first formulate an LP with fractional migration and then round it to a second LP having a totally unimodular constraint system. The latter LP guarantees integral solutions with provable performance guarantees. Since the actual network topology is undirected, we construct an iterative heuristic which imposes edge directionality in a clever way.

We conclude our work by evaluating this heuristic on realistic models of data center workloads, demonstrating that the algorithm is time-responsive in computing placement and migration decisions while scaling to large systems.

Super-Fast Distributed Algorithms for Metric Facility Location

**Abstract:** This paper presents an O(log log n (log^*n))-round, O(1)-approximation algorithm for the metric facility location problem on a clique network. Though metric facility location has been considered by several researchers in low-diameter settings, this is the first sub-logarithmic algorithm for the problem that yields an O(1)-approximation. We assume the standard CONGEST model, which is a synchronous message-passing model in which each node in a size-n network can send a message of size O(log n) along each incident communication link in each round. Since facility location is specified by Theta(n^2) pieces of information, any fast solution to the problem needs to be truly distributed. Our paper makes three main technical contributions. First, we show a new lower bound for metric facility location. Next, we demonstrate a reduction of the distributed facility location problem to the problem of computing an O(1)-ruling set on an appropriate spanning subgraph. Finally, we present a "super-fast" algorithm to compute a 2-ruling set by using a combination of randomized and deterministic sparsification.

Hyperbolic Random Graphs: Degree Sequence and Clustering

**Abstract:** Recently, Papadopoulos, Krioukov, Boguñá and Vahdat [Infocom'10] introduced a random geometric graph model that is based on hyperbolic geometry. The authors argued empirically and by some preliminary mathematical analysis that the resulting graphs have many of the desired properties for models of large real-world graphs, such as high clustering and heavy tailed degree distributions. By computing explicitly a maximum likelihood fit of the Internet graph, they demonstrated impressively that this model is adequate for reproducing the structure of such networks with high accuracy.

In this work we initiate the rigorous study of random hyperbolic graphs. We compute exact asymptotic expressions for the expected number of vertices of degree k for all k up to the maximum degree, and show that large deviations from these occur only with small probability. We also prove a constant lower bound for the clustering coefficient. In particular, our findings confirm rigorously that the degree sequence follows a power-law distribution with controllable exponent and that the clustering is nonvanishing.

Computational Complexity of Traffic Hijacking under BGP and S-BGP

**Abstract:** Harmful Internet hijacking incidents highlight how fragile the Border Gateway Protocol (BGP) is, which is used to exchange routing information between Autonomous Systems (ASes). As proved by recent research contributions, even S-BGP, the secure variant of BGP that is being deployed, is not fully able to blunt traffic attraction attacks. Given a traffic flow between two ASes, we study how difficult it is for a malicious AS to devise a strategy for hijacking or intercepting that flow. We show that this problem marks a sharp difference between BGP and S-BGP. Namely, while it is solvable, under reasonable assumptions, in polynomial time for the type of attacks that are usually performed in BGP, it is NP-hard for S-BGP.

Our study of traffic attraction strategies has several by-products. As an example, we solve a problem left open in the literature, determining when performing a hijacking in S-BGP is equivalent to performing an interception.

Byzantine Agreement with a Rational Adversary

**Abstract:** Researchers in cryptography have frequently found it beneficial to consider, instead of a traditional worst-case adversary, a rational adversary that behaves in a more predictable way. We apply this model to the previously unconsidered case of Byzantine agreement (BA). We define security for both flavours of BA, i.e., consensus and broadcast, for a natural class of utilities. Surprisingly, we show that many of the widely known results on BA, e.g., the equivalence of broadcast and consensus or the impossibility of consensus tolerating $t \geq n/2$ corruptions, do not transfer---per se---to the rational model. We then study the question of feasibility of information-theoretic (both perfect and statistical) BA for the cases where the parties have complete or partial knowledge of the adversary's utility function. For the first case we describe a BA protocol tolerating $t < n$ corruptions in the plain model, i.e., without assuming a setup, that is even perfectly secure. For the second case, we prove tight bounds that depend on the adversary's preference of agreement over disagreement. All our suggested protocols are more efficient than corresponding protocols for traditional, i.e., non-rational, BA.

Preventing Unraveling in Social Networks: The Anchored k-Core Problem

**Abstract:** We consider a model of user engagement in social networks, where each player incurs a cost of remaining engaged but derives a benefit proportional to the number of its engaged neighbors. The natural equilibrium of this model corresponds to the k-core of the social network --- the maximal induced subgraph with minimum degree at least k.

We study the problem of "anchoring" a small number of vertices to maximize the size of the corresponding anchored k-core --- the maximal induced subgraph in which every non-anchored vertex has degree at least k. This problem corresponds to preventing "unraveling" --- a cascade of iterated withdrawals. We provide polynomial-time algorithms for general graphs with k=2, and for bounded-treewidth graphs with arbitrary k. We prove strong non-approximability results for general graphs and k >= 3.
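
The unraveling dynamic and the effect of anchoring can be seen directly in the standard peeling procedure (a sketch with hypothetical names, not the paper's algorithms): repeatedly delete any non-anchored vertex whose degree has dropped below k, and keep whatever survives.

```python
from collections import deque

def anchored_k_core(adj, k, anchors=frozenset()):
    """Iteratively remove non-anchored vertices of degree < k.
    adj: dict vertex -> list of neighbors. Returns the surviving vertex set."""
    deg = {v: len(adj[v]) for v in adj}
    alive = set(adj)
    q = deque(v for v in alive if deg[v] < k and v not in anchors)
    while q:
        u = q.popleft()
        if u not in alive:
            continue
        alive.discard(u)                 # u withdraws...
        for w in adj[u]:
            if w in alive:
                deg[w] -= 1              # ...lowering its neighbors' degrees
                if deg[w] < k and w not in anchors:
                    q.append(w)          # cascade: w now withdraws too
    return alive

# Path a-b-c-d: the 2-core is empty (the endpoints start a cascade),
# but anchoring the two endpoints keeps the whole network engaged.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(sorted(anchored_k_core(path, 2)))              # []
print(sorted(anchored_k_core(path, 2, {'a', 'd'})))  # ['a', 'b', 'c', 'd']
```

With no anchors this computes the ordinary k-core in linear time; the paper's hard question is *which* few vertices to anchor so that the surviving set is as large as possible.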