2023
We consider zero-sum games on infinite graphs, with objectives specified as sets of infinite words over some alphabet of colors. A well-studied class of objectives is the one of $\omega$-regular objectives, due to its relation to many natural problems in theoretical computer science. We focus on the strategy complexity question: given an objective, how much memory does each player require to play as well as possible? A classical result is that finite-memory strategies suffice for both players when the objective is $\omega$-regular. We show a converse of that statement: when both players can play optimally with a chromatic finite-memory structure (i.e., one whose updates can only observe colors) in all infinite game graphs, then the objective must be $\omega$-regular. This provides a game-theoretic characterization of $\omega$-regular objectives, and this characterization can help in obtaining memory bounds. Moreover, a by-product of our characterization is a new one-to-two-player lift: to show that chromatic finite-memory structures suffice to play optimally in two-player games on infinite graphs, it suffices to show it in the simpler case of one-player games on infinite graphs. We illustrate our results with the family of discounted-sum objectives, for which $\omega$-regularity depends on the value of some parameters.
Promise Constraint Satisfaction Problems (PCSPs) are a generalization of Constraint Satisfaction Problems (CSPs) in which each predicate has a strong and a weak form and, given an instance, the objective is to distinguish between the case where the strong form can be satisfied and the case where even the weak form cannot be satisfied. Since their formal introduction by Austrin, Guruswami, and Håstad, there has been a flurry of works on PCSPs [BBKO19,KO19,WZ20]. The key tool in studying PCSPs is the algebraic framework developed in the context of CSPs, in which the closure properties of the satisfying solutions, known as polymorphisms, are analyzed. The polymorphisms of PCSPs are much richer than those of CSPs. In the Boolean case, we still do not know whether PCSPs exhibit a dichotomy analogous to Schaefer's dichotomy result for CSPs. In this paper, we study a special case of Boolean PCSPs, namely Boolean Ordered PCSPs, where the Boolean PCSP contains the predicate $x \leq y$. In the algebraic framework, this is the special case of Boolean PCSPs in which the polymorphisms are monotone functions. We prove that Boolean Ordered PCSPs exhibit a computational dichotomy assuming the Rich 2-to-1 Conjecture [BKM21], which is a perfect completeness surrogate of the Unique Games Conjecture. Assuming the Rich 2-to-1 Conjecture, we prove that a Boolean Ordered PCSP can be solved in polynomial time if for every $\epsilon>0$ it has polymorphisms where each coordinate has Shapley value at most $\epsilon$, and that otherwise it is NP-hard. The […]
We study turn-based quantitative games of infinite duration opposing two antagonistic players and played over graphs. This model is widely accepted as providing the adequate framework for formalizing the synthesis question for reactive systems. This important application motivates the question of strategy complexity: which valuations (or payoff functions) admit optimal positional strategies (without memory)? Valuations for which both players have optimal positional strategies have been characterized by Gimbert and Zielonka for finite graphs and by Colcombet and Niwiński for infinite graphs. However, for reactive synthesis, the existence of optimal positional strategies for the opponent (which models an antagonistic environment) is irrelevant. Despite this fact, not much is known about valuations for which the protagonist admits optimal positional strategies, regardless of the opponent. In this work, we characterize valuations which admit such strategies over infinite game graphs. Our characterization uses the vocabulary of universal graphs, which has also proved useful in understanding recent breakthrough results regarding the complexity of parity games. More precisely, we show that a valuation admitting universal graphs which are monotone and well-ordered is positional over all game graphs, and -- more surprisingly -- that the converse is also true for valuations admitting neutral colors. We demonstrate the applicability and elegance of the framework by unifying a number of […]
We consider fixpoint algorithms for two-player games on graphs with $\omega$-regular winning conditions, where the environment is constrained by a strong transition fairness assumption. Strong transition fairness is a widely occurring special case of strong fairness, which requires that any execution is strongly fair with respect to a specified set of live edges: whenever the source vertex of a live edge is visited infinitely often along a play, the edge itself is traversed infinitely often along the play as well. We show that, surprisingly, strong transition fairness retains the algorithmic characteristics of the fixpoint algorithms for $\omega$-regular games -- the new algorithms have the same alternation depth as the classical algorithms but invoke a new type of predecessor operator. For Rabin games with $k$ pairs, the complexity of the new algorithm is $O(n^{k+2}k!)$ symbolic steps, which is independent of the number of live edges in the strong transition fairness assumption. Further, we show that GR(1) specifications with strong transition fairness assumptions can be solved with a 3-nested fixpoint algorithm, the same as the usual algorithm. In contrast, strong fairness necessarily requires increasing the alternation depth depending on the number of fairness assumptions. We obtain symbolic algorithms for (generalized) Rabin, parity and GR(1) objectives under strong transition fairness assumptions as well as a direct symbolic algorithm for qualitative winning in stochastic […]
Hegedűs's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field $\mathbb{F}$ of characteristic $p > 0$ and for $q$ a power of $p$, the lemma says that any multilinear polynomial $P\in \mathbb{F}[x_1,\ldots,x_n]$ of degree less than $q$ that vanishes at all points in $\{0,1\}^n$ of some fixed Hamming weight $k\in [q,n-q]$ must also vanish at all points in $\{0,1\}^n$ of weight $k + q$. This lemma was used by Hegedűs (2009) to give a solution to \emph{Galvin's problem}, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrubeš, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-$2$ threshold circuits for computing some symmetric functions. In this paper, we formulate a robust version of Hegedűs's lemma. Informally, this version says that if a polynomial of degree $o(q)$ vanishes at most of the points of weight $k$, then it vanishes at many points of weight $k+q$. We prove this lemma and give three different applications.
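A small sanity-check instance of the lemma's conclusion (not taken from the paper, only to make the statement concrete): over $\mathbb{F}_2$ take $p = q = 2$, $n = 4$, $k = 2$ and the degree-$1$ multilinear polynomial $P(x_1,x_2,x_3,x_4) = x_1 + x_2 + x_3 + x_4$. Then $P$ vanishes at every point of $\{0,1\}^4$ of Hamming weight $2$ (the sum of two ones is $0$ in $\mathbb{F}_2$), and, as the lemma predicts, it also vanishes at the unique point of weight $k + q = 4$.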
Dynamic connectivity is one of the most fundamental problems in dynamic graph algorithms. We present a randomized Las Vegas dynamic connectivity data structure with $O(\log n(\log\log n)^2)$ amortized expected update time and $O(\log n/\log\log\log n)$ worst case query time, which comes very close to the cell probe lower bounds of Patrascu and Demaine (2006) and Patrascu and Thorup (2011).
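For readers unfamiliar with the problem in the preceding abstract, the following is a naive baseline that only illustrates the dynamic connectivity interface (insert an edge, delete an edge, query whether two vertices are connected); it answers each query with a fresh BFS and is nowhere near the polylogarithmic bounds of the paper, whose data structure is not reproduced here.

    from collections import deque

    class NaiveDynamicConnectivity:
        """Toy baseline: adjacency sets plus BFS per query (illustrative only)."""

        def __init__(self):
            self.adj = {}

        def insert(self, u, v):
            self.adj.setdefault(u, set()).add(v)
            self.adj.setdefault(v, set()).add(u)

        def delete(self, u, v):
            self.adj.get(u, set()).discard(v)
            self.adj.get(v, set()).discard(u)

        def connected(self, u, v):
            # BFS from u until v is found or the component is exhausted.
            if u == v:
                return True
            seen, queue = {u}, deque([u])
            while queue:
                w = queue.popleft()
                for x in self.adj.get(w, ()):
                    if x == v:
                        return True
                    if x not in seen:
                        seen.add(x)
                        queue.append(x)
            return False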
Boosting is a celebrated machine learning approach which is based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules of thumb" from an "easy-to-learn class" (Schapire and Freund '12, Shalev-Shwartz and Ben-David '14). Formally, we assume the class of weak hypotheses has a bounded VC dimension. We focus on two main questions: (i) Oracle Complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega({1}/{\gamma^2})$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}({1}/{\gamma})$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can […]
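To make the contrast in the abstract above concrete, here is the classical weighted-majority aggregation rule that previous boosting algorithms use (and that the paper's "deeper" aggregation rules replace); the names `weak_hypotheses` and `alphas` are hypothetical placeholders, not from the paper.

    def majority_vote(weak_hypotheses, alphas, x):
        """Classical boosting output: sign of the weighted sum of weak predictions.

        weak_hypotheses: list of functions mapping x to -1 or +1.
        alphas: corresponding nonnegative weights.
        """
        score = sum(a * h(x) for h, a in zip(weak_hypotheses, alphas))
        return 1 if score >= 0 else -1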
We give a simple polynomial-time approximation algorithm for the total variation distance between two product distributions.
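To illustrate the quantity being approximated (the paper's algorithm is not reproduced here), the following brute-force computation of the total variation distance between two products of Bernoulli distributions sums over all $2^n$ outcomes, which is exactly the exponential cost that makes a polynomial-time approximation interesting.

    from itertools import product

    def product_prob(bits, params):
        """Probability of `bits` under a product of Bernoulli(params[i]) coordinates."""
        prob = 1.0
        for b, p in zip(bits, params):
            prob *= p if b == 1 else 1.0 - p
        return prob

    def tv_distance_bruteforce(p_params, q_params):
        """d_TV(P, Q) = (1/2) * sum over x in {0,1}^n of |P(x) - Q(x)|."""
        n = len(p_params)
        assert len(q_params) == n
        total = 0.0
        for bits in product((0, 1), repeat=n):
            total += abs(product_prob(bits, p_params) - product_prob(bits, q_params))
        return total / 2.0

    # Example: two product distributions over {0,1}^3.
    print(tv_distance_bruteforce([0.1, 0.5, 0.9], [0.2, 0.5, 0.8]))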
Every graph with maximum degree $\Delta$ can be colored with $(\Delta+1)$ colors using a simple greedy algorithm. Remarkably, recent work has shown that one can find such a coloring even in the semi-streaming model. But, in reality, one almost never needs $(\Delta+1)$ colors to properly color a graph. Indeed, the celebrated Brooks' theorem states that every (connected) graph other than cliques and odd cycles can be colored with $\Delta$ colors. Can we find a $\Delta$-coloring in the semi-streaming model as well? We settle this key question in the affirmative by designing a randomized semi-streaming algorithm that, given any graph, with high probability either correctly declares that the graph is not $\Delta$-colorable or outputs a $\Delta$-coloring of the graph. The proof of this result starts with a detour. We first (provably) identify the extent to which the previous approaches for streaming coloring fail for $\Delta$-coloring: for instance, all these approaches can handle streams with repeated edges and they can run in $o(n^2)$ time -- we prove that neither of these tasks is possible for $\Delta$-coloring. These impossibility results however pinpoint exactly what is missing from prior approaches when it comes to $\Delta$-coloring. We then build on these insights to design a semi-streaming algorithm that uses $(i)$ a novel sparse-recovery approach based on sparse-dense decompositions to (partially) recover the "problematic" subgraphs of the input -- the […]
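The simple greedy $(\Delta+1)$-coloring mentioned in the first sentence of the abstract above fits in a few lines; this is a generic offline sketch, not the paper's semi-streaming algorithm.

    def greedy_coloring(adj):
        """Greedy (Delta+1)-coloring.

        adj: dict mapping each vertex to an iterable of its neighbors.
        Each vertex receives the smallest color not used by its already-colored
        neighbors; since a vertex has at most Delta neighbors, some color in
        {0, ..., Delta} is always available.
        """
        color = {}
        for v in adj:
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color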
We study a class of functional problems reducible to computing $f^{(n)}(x)$ for inputs $n$ and $x$, where $f$ is a polynomial-time bijection. As we prove, the definition is robust against variations in the type of reduction used in its definition, and in whether we require $f$ to have a polynomial-time inverse or to be computable by a reversible logic circuit. These problems are characterized by the complexity class $\mathsf{FP}^{\mathsf{PSPACE}}$, and include natural $\mathsf{FP}^{\mathsf{PSPACE}}$-complete problems in circuit complexity, cellular automata, graph algorithms, and the dynamical systems described by piecewise-linear transformations.
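To illustrate the problem studied in the abstract above (not the paper's techniques): given $n$ and $x$, compute $f^{(n)}(x)$ for a polynomial-time bijection $f$. The naive loop below applies $f$ exactly $n$ times, i.e., time exponential in the bit-length of $n$; the particular bijection is a hypothetical toy example.

    def f(x, modulus=2**16):
        # Toy bijection on Z_{2^16}: multiply by an odd constant and shift
        # (invertible because the multiplier 5 is odd).
        return (5 * x + 3) % modulus

    def iterate(n, x):
        """Compute f^(n)(x) by naive repeated application."""
        for _ in range(n):
            x = f(x)
        return x

    print(iterate(1000, 7))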
We study a standard operator on classes of languages: unambiguous polynomial closure. We prove that for every class C of regular languages satisfying mild properties, the membership problem for its unambiguous polynomial closure UPol(C) reduces to the same problem for C. We also show that unambiguous polynomial closure coincides with alternating left and right deterministic closure. Moreover, we prove that if additionally C is finite, the separation and covering problems are decidable for UPol(C). Finally, we present an overview of the generic logical characterizations of the classes built using unambiguous polynomial closure.
We initiate a study of a new model of property testing that is a hybrid of testing properties of distributions and testing properties of strings. Specifically, the new model refers to testing properties of distributions, but these are distributions over huge objects (i.e., very long strings). Accordingly, the model accounts for the total number of local probes into these objects (resp., queries to the strings) as well as for the distance between objects (resp., strings), and the distance between distributions is defined as the earth mover's distance with respect to the relative Hamming distance between strings. We study the query complexity of testing in this new model, focusing on three directions. First, we try to relate the query complexity of testing properties in the new model to the sample complexity of testing these properties in the standard distribution testing model. Second, we consider the complexity of testing properties that arise naturally in the new model (e.g., distributions that capture random variations of fixed strings). Third, we consider the complexity of testing properties that were extensively studied in the standard distribution testing model: Two such cases are uniform distributions and pairs of identical distributions.