2023

We consider zero-sum games on infinite graphs, with objectives specified as sets of infinite words over some alphabet of colors. A well-studied class of objectives is the one of $\omega$-regular objectives, due to its relation to many natural problems in theoretical computer science. We focus on the strategy complexity question: given an objective, how much memory does each player require to play as well as possible? A classical result is that finite-memory strategies suffice for both players when the objective is $\omega$-regular. We show a reciprocal of that statement: when both players can play optimally with a chromatic finite-memory structure (i.e., whose updates can only observe colors) in all infinite game graphs, then the objective must be $\omega$-regular. This provides a game-theoretic characterization of $\omega$-regular objectives, and this characterization can help in obtaining memory bounds. Moreover, a by-product of our characterization is a new one-to-two-player lift: to show that chromatic finite-memory structures suffice to play optimally in two-player games on infinite graphs, it suffices to show it in the simpler case of one-player games on infinite graphs. We illustrate our results with the family of discounted-sum objectives, for which $\omega$-regularity depends on the value of some parameters.
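The abstract above can be illustrated by what a "chromatic" finite-memory structure looks like in the simplest terms: a deterministic automaton whose updates read only the color of each traversed edge, never the vertex. The color alphabet and the update table below are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of a chromatic finite-memory structure: a deterministic
# automaton updated solely on observed colors (never on vertices).
# The alphabet {"a", "b"} and the update table are assumptions for illustration.

class ChromaticMemory:
    def __init__(self, initial, update):
        self.state = initial    # current memory state
        self.update = update    # (state, color) -> next state

    def read(self, color):
        """Advance the memory on one observed edge color."""
        self.state = self.update[(self.state, color)]
        return self.state

# Example: two memory states tracking the parity of color "a" seen so far.
update = {
    ("even", "a"): "odd",  ("odd", "a"): "even",
    ("even", "b"): "even", ("odd", "b"): "odd",
}
mem = ChromaticMemory("even", update)
for c in "aab":
    s = mem.read(c)
print(s)  # "even": two occurrences of "a" cancel out
```

A strategy using such a structure picks its next move as a function of the current vertex and the current memory state; the point of the characterization is that sufficiency of such structures for both players forces $\omega$-regularity.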

Promise Constraint Satisfaction Problems (PCSPs) are a generalization of Constraint Satisfaction Problems (CSPs) where each predicate has a strong and a weak form and, given a CSP instance, the objective is to distinguish whether the strong form can be satisfied vs. even the weak form cannot be satisfied. Since their formal introduction by Austrin, Guruswami, and H\aa stad, there has been a flurry of works on PCSPs [BBKO19, KO19, WZ20]. The key tool in studying PCSPs is the algebraic framework developed in the context of CSPs, where the closure properties of the satisfying solutions, known as the polymorphisms, are analyzed. The polymorphisms of PCSPs are much richer than those of CSPs. In the Boolean case, we still do not know if a dichotomy for PCSPs exists analogous to Schaefer's dichotomy result for CSPs. In this paper, we study a special case of Boolean PCSPs, namely Boolean Ordered PCSPs, where the Boolean PCSPs have the predicate $x \leq y$. In the algebraic framework, this is the special case of Boolean PCSPs where the polymorphisms are monotone functions. We prove that Boolean Ordered PCSPs exhibit a computational dichotomy assuming the Rich 2-to-1 Conjecture [BKM21], which is a perfect completeness surrogate of the Unique Games Conjecture. Assuming the Rich 2-to-1 Conjecture, we prove that a Boolean Ordered PCSP can be solved in polynomial time if for every $\epsilon>0$ it has polymorphisms where each coordinate has Shapley value at most $\epsilon$, else it is NP-hard. The algorithmic part of our […]
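The dichotomy criterion above is stated in terms of the Shapley value of a coordinate of a polymorphism. As a small illustration of that quantity alone, the sketch below computes the Shapley value of each coordinate of a Boolean function by brute force over coordinate orders; the majority function used as input is an illustrative assumption, not a polymorphism from the paper.

```python
# Brute-force Shapley values of the coordinates of a Boolean function
# f: {0,1}^n -> {0,1}: the average, over all orders of the coordinates,
# of the marginal contribution of flipping coordinate i from 0 to 1.
# The function maj3 is an illustrative assumption.

from itertools import permutations

def shapley(f, n):
    """Shapley value of each coordinate of f, averaged over all n! orders."""
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        s = [0] * n
        prev = f(tuple(s))
        for i in order:
            s[i] = 1
            cur = f(tuple(s))
            phi[i] += cur - prev   # marginal contribution of coordinate i
            prev = cur
    return [p / len(orders) for p in phi]

maj3 = lambda x: 1 if sum(x) >= 2 else 0
phis = shapley(maj3, 3)
print(phis)  # each coordinate gets 1/3, by symmetry
```

By efficiency of the Shapley value, the coordinates' values always sum to $f(1,\ldots,1) - f(0,\ldots,0)$; the criterion in the abstract asks for polymorphisms in which every single coordinate's share is small.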

We study turn-based quantitative games of infinite duration opposing two antagonistic players and played over graphs. This model is widely accepted as providing the adequate framework for formalizing the synthesis question for reactive systems. This important application motivates the question of strategy complexity: which valuations (or payoff functions) admit optimal positional strategies (without memory)? Valuations for which both players have optimal positional strategies have been characterized by Gimbert and Zielonka for finite graphs and by Colcombet and Niwi\'nski for infinite graphs. However, for reactive synthesis, existence of optimal positional strategies for the opponent (which models an antagonistic environment) is irrelevant. Despite this fact, not much is known about valuations for which the protagonist admits optimal positional strategies, regardless of the opponent. In this work, we characterize valuations which admit such strategies over infinite game graphs. Our characterization uses the vocabulary of universal graphs, which has also proved useful in understanding recent breakthrough results regarding the complexity of parity games. More precisely, we show that a valuation admitting universal graphs which are monotone and well-ordered is positional over all game graphs, and -- more surprisingly -- that the converse is also true for valuations admitting neutral colors. We prove the applicability and elegance of the framework by unifying a number of known positionality […]

We consider fixpoint algorithms for two-player games on graphs with $\omega$-regular winning conditions, where the environment is constrained by a strong transition fairness assumption. Strong transition fairness is a widely occurring special case of strong fairness, which requires that any execution is strongly fair with respect to a specified set of live edges: whenever the source vertex of a live edge is visited infinitely often along a play, the edge itself is traversed infinitely often along the play as well. We show that, surprisingly, strong transition fairness retains the algorithmic characteristics of the fixpoint algorithms for $\omega$-regular games -- the new algorithms have the same alternation depth as the classical algorithms but invoke a new type of predecessor operator. For Rabin games with $k$ pairs, the complexity of the new algorithm is $O(n^{k+2}k!)$ symbolic steps, which is independent of the number of live edges in the strong transition fairness assumption. Further, we show that GR(1) specifications with strong transition fairness assumptions can be solved with a 3-nested fixpoint algorithm, same as the usual algorithm. In contrast, strong fairness necessarily requires increasing the alternation depth depending on the number of fairness assumptions. We get symbolic algorithms for (generalized) Rabin, parity and GR(1) objectives under strong transition fairness assumptions as well as a direct symbolic algorithm for qualitative winning in stochastic $\omega$-regular games […]
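The fixpoint algorithms referred to above are built from predecessor operators. Their simplest instance, shown below as a hedged sketch, is the ordinary controllable-predecessor operator and its least fixpoint, the attractor of a target set in a two-player game graph (no fairness constraint yet). The graph, ownership, and target are illustrative assumptions; the graph is assumed to have no dead ends.

```python
# Least fixpoint of X -> target ∪ Cpre(X): the set of vertices from which
# player 0 can force a visit to `target`. Explicit (non-symbolic) version,
# with an example game graph assumed for illustration.

def attractor(vertices, edges, owner, target):
    """Player-0 attractor of `target` in a game graph with no dead ends."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            succs = edges[v]
            if owner[v] == 0 and any(s in attr for s in succs):
                attr.add(v); changed = True        # player 0 picks a good edge
            elif owner[v] == 1 and all(s in attr for s in succs):
                attr.add(v); changed = True        # player 1 cannot avoid attr
    return attr

V = {1, 2, 3, 4}
E = {1: [2], 2: [3, 4], 3: [3], 4: [1]}
own = {1: 0, 2: 0, 3: 0, 4: 1}      # vertex -> controlling player
result = sorted(attractor(V, E, own, {3}))
print(result)  # [1, 2, 3, 4]
```

The fairness-aware algorithms in the abstract replace this operator by a new type of predecessor operator while keeping the nesting (alternation) depth of the surrounding fixpoints unchanged.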

Hegedűs's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field $\mathbb{F}$ of characteristic $p > 0$ and for $q$ a power of $p$, the lemma says that any multilinear polynomial $P\in \mathbb{F}[x_1,\ldots,x_n]$ of degree less than $q$ that vanishes at all points in $\{0,1\}^n$ of some fixed Hamming weight $k\in [q,n-q]$ must also vanish at all points in $\{0,1\}^n$ of weight $k + q$. This lemma was used by Hegedűs (2009) to give a solution to \emph{Galvin's problem}, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrubeš, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-$2$ threshold circuits for computing some symmetric functions. In this paper, we formulate a robust version of Hegedűs's lemma. Informally, this version says that if a polynomial of degree $o(q)$ vanishes at most points of weight $k$, then it vanishes at many points of weight $k+q$. We prove this lemma and give three different applications.
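One can sanity-check a tiny instance of the lemma by brute force. The sketch below takes $p = q = 2$, $n = 5$, $k = 2$ (so $k \in [q, n-q] = [2,3]$ and $k+q = 4$) and verifies that every multilinear polynomial over $\mathbb{F}_2$ of degree less than $2$, i.e., every affine polynomial, that vanishes on all weight-$2$ points of $\{0,1\}^5$ also vanishes on all weight-$4$ points. These parameter choices are illustrative, not from the paper.

```python
# Brute-force check of Hegedűs's lemma for p = q = 2, n = 5, k = 2:
# every affine polynomial a0 + sum a_i x_i over F_2 vanishing on all
# weight-2 points of {0,1}^5 also vanishes on all weight-4 points.

from itertools import product

n, k, q = 5, 2, 2
points = list(product((0, 1), repeat=n))
weight = lambda x: sum(x)

def affine_value(a0, a, x):
    """Evaluate a0 + sum a_i x_i over F_2."""
    return (a0 + sum(ai * xi for ai, xi in zip(a, x))) % 2

ok = True
for a0 in (0, 1):
    for a in product((0, 1), repeat=n):
        if all(affine_value(a0, a, x) == 0 for x in points if weight(x) == k):
            # the lemma predicts vanishing at weight k + q as well
            ok &= all(affine_value(a0, a, x) == 0
                      for x in points if weight(x) == k + q)
print(ok)  # True
```

In this tiny case the only such polynomials are the zero polynomial and $x_1 + \cdots + x_5$, both of which indeed vanish on every even-weight point.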

Dynamic connectivity is one of the most fundamental problems in dynamic graph algorithms. We present a randomized Las Vegas dynamic connectivity data structure with $O(\log n(\log\log n)^2)$ amortized expected update time and $O(\log n/\log\log\log n)$ worst-case query time, which comes very close to the cell-probe lower bounds of Patrascu and Demaine (2006) and Patrascu and Thorup (2011).
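For context on why the problem is hard: the fully dynamic setting of the abstract supports both edge insertions and deletions. The incremental-only special case, by contrast, is solved by a classic union-find structure, sketched below; this is textbook background, not the paper's construction.

```python
# Incremental-only connectivity via union-find with path halving and
# union by size (near-constant amortized time per operation). Deletions,
# which the paper's structure supports, are what make the problem hard.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Insert edge (a, b)."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def connected(self, a, b):
        """Connectivity query."""
        return self.find(a) == self.find(b)

uf = UnionFind(5)
uf.union(0, 1)
uf.union(3, 4)
print(uf.connected(0, 1), uf.connected(1, 3))  # True False
```

Supporting deletions forces fundamentally different machinery (e.g. hierarchies of spanning forests), which is where the polylogarithmic update bounds in the abstract come from.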

Boosting is a celebrated machine learning approach which is based on the idea of combining weak and moderately inaccurate hypotheses to a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules-of-thumb" from an "easy-to-learn class". (Schapire and Freund '12, Shalev-Shwartz and Ben-David '14.) Formally, we assume the class of weak hypotheses has a bounded VC dimension. We focus on two main questions: (i) Oracle Complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega({1}/{\gamma^2})$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}({1}/{\gamma})$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far […]
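The abstract contrasts classical majority-vote aggregation with deeper aggregation rules. The sketch below shows only the classical side: combining weak hypotheses $h_i : X \to \{-1, +1\}$ by an unweighted majority vote. The threshold "rules of thumb" are illustrative assumptions, not from the paper.

```python
# Classical aggregation of weak hypotheses by unweighted majority vote.
# The paper's point is that deeper aggregation rules than this are needed
# (and suffice) to beat the Omega(1/gamma^2) oracle-complexity lower bound.

def majority_vote(hypotheses, x):
    """Sign of the summed weak predictions; ties broken toward +1."""
    total = sum(h(x) for h in hypotheses)
    return 1 if total >= 0 else -1

# Three weak threshold rules on real inputs (assumed for illustration).
weak = [
    lambda x: 1 if x > 0 else -1,
    lambda x: 1 if x > 2 else -1,
    lambda x: 1 if x > -2 else -1,
]
print(majority_vote(weak, 1.0), majority_vote(weak, -3.0))  # 1 -1
```

A "deeper" rule in the abstract's sense would feed the weak predictions through further layers of aggregation rather than a single vote.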

We give a simple polynomial-time approximation algorithm for the total variation distance between two product distributions.
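To make the quantity concrete: the sketch below computes the total variation distance between two product distributions on $\{0,1\}^n$ exactly, by brute force over all $2^n$ points. This exponential-time enumeration is what motivates a polynomial-time approximation; the marginals used are illustrative assumptions.

```python
# Exact TV distance between two product distributions on {0,1}^n, given the
# per-coordinate probabilities of outcome 1. Brute force: exponential in n,
# so only viable for small n; the paper approximates this in polynomial time.

from itertools import product

def tv_product(p, q):
    """(1/2) * sum_x |P(x) - Q(x)| over all x in {0,1}^n."""
    total = 0.0
    for x in product((0, 1), repeat=len(p)):
        px = qx = 1.0
        for i, b in enumerate(x):
            px *= p[i] if b else 1 - p[i]
            qx *= q[i] if b else 1 - q[i]
        total += abs(px - qx)
    return total / 2

# Distributions differing only in the second coordinate's bias.
d = tv_product([0.5, 0.5], [0.5, 0.9])
print(round(d, 3))  # 0.4
```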

Every graph with maximum degree $\Delta$ can be colored with $(\Delta+1)$ colors using a simple greedy algorithm. Remarkably, recent work has shown that one can find such a coloring even in the semi-streaming model. But, in reality, one almost never needs $(\Delta+1)$ colors to properly color a graph. Indeed, the celebrated Brooks' theorem states that every (connected) graph beside cliques and odd cycles can be colored with $\Delta$ colors. Can we find a $\Delta$-coloring in the semi-streaming model as well? We settle this key question in the affirmative by designing a randomized semi-streaming algorithm that, given any graph, with high probability either correctly declares that the graph is not $\Delta$-colorable or outputs a $\Delta$-coloring of the graph. The proof of this result starts with a detour. We first (provably) identify the extent to which the previous approaches for streaming coloring fail for $\Delta$-coloring: for instance, all these approaches can handle streams with repeated edges and they can run in $o(n^2)$ time -- we prove that neither of these tasks is possible for $\Delta$-coloring. These impossibility results however pinpoint exactly what is missing from prior approaches when it comes to $\Delta$-coloring. We then build on these insights to design a semi-streaming algorithm that uses $(i)$ a novel sparse-recovery approach based on sparse-dense decompositions to (partially) recover the "problematic" subgraphs of the input -- the ones that form the basis of […]
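The "simple greedy algorithm" in the first sentence is the classical offline baseline sketched below; the semi-streaming $\Delta$-coloring algorithm of the paper is far more involved. The example graph is an illustrative assumption.

```python
# Greedy (Delta+1)-coloring: process vertices in any order, giving each the
# smallest color not used by its already-colored neighbors. Since a vertex
# has at most Delta neighbors, color Delta (0-indexed) is always available.

def greedy_coloring(adj):
    """Return a proper coloring using at most max_degree + 1 colors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 4-cycle: maximum degree 2, so at most 3 colors; greedy here uses 2.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
col = greedy_coloring(adj)
print(max(col.values()) + 1)  # 2
```

Brooks' theorem says one color can almost always be saved over this bound, and the paper shows that saving it is achievable even in the semi-streaming model.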

We study a class of functional problems reducible to computing $f^{(n)}(x)$ for inputs $n$ and $x$, where $f$ is a polynomial-time bijection. As we prove, the definition is robust against variations in the type of reduction used in its definition, and in whether we require $f$ to have a polynomial-time inverse or to be computable by a reversible logic circuit. These problems are characterized by the complexity class $\mathsf{FP}^{\mathsf{PSPACE}}$, and include natural $\mathsf{FP}^{\mathsf{PSPACE}}$-complete problems in circuit complexity, cellular automata, graph algorithms, and the dynamical systems described by piecewise-linear transformations.
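The problem family can be illustrated by naive iteration of a simple bijection, sketched below. The bijection (a fixed permutation of $\{0,\ldots,9\}$) is an illustrative assumption; the point of the characterization is that $n$ may be exponentially large in the input size, so repeated application like this is not a feasible algorithm in general.

```python
# Computing f^(n)(x) by naive iteration, for a toy polynomial-time bijection f.
# With n given in binary, this loop takes exponential time, which is why the
# problem's true complexity (FP^PSPACE) is interesting.

def iterate(f, n, x):
    """Apply f to x exactly n times."""
    for _ in range(n):
        x = f(x)
    return x

perm = [3, 0, 4, 1, 2, 8, 6, 9, 5, 7]   # a bijection on {0, ..., 9}
f = lambda x: perm[x]
print(iterate(f, 5, 0))  # 0 -> 3 -> 1 -> 0 -> 3 -> 1
```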

We study a standard operator on classes of languages: unambiguous polynomial closure. We prove that for every class C of regular languages satisfying mild properties, the membership problem for its unambiguous polynomial closure UPol(C) reduces to the same problem for C. We also show that unambiguous polynomial closure coincides with alternating left and right deterministic closure. Moreover, we prove that if additionally C is finite, the separation and covering problems are decidable for UPol(C). Finally, we present an overview of the generic logical characterizations of the classes built using unambiguous polynomial closure.

We initiate a study of a new model of property testing that is a hybrid of testing properties of distributions and testing properties of strings. Specifically, the new model refers to testing properties of distributions, but these are distributions over huge objects (i.e., very long strings). Accordingly, the model accounts for the total number of local probes into these objects (resp., queries to the strings) as well as for the distance between objects (resp., strings), and the distance between distributions is defined as the earth mover's distance with respect to the relative Hamming distance between strings. We study the query complexity of testing in this new model, focusing on three directions. First, we try to relate the query complexity of testing properties in the new model to the sample complexity of testing these properties in the standard distribution testing model. Second, we consider the complexity of testing properties that arise naturally in the new model (e.g., distributions that capture random variations of fixed strings). Third, we consider the complexity of testing properties that were extensively studied in the standard distribution testing model: two such cases are uniform distributions and pairs of identical distributions.