<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/">
  <channel>
    <title>TheoretiCS - Latest Publications</title>
    <description>Latest articles</description>
    <image>
      <url>https://theoretics.episciences.org/img/episciences_sign_50x50.png</url>
      <title>episciences.org</title>
      <link>https://theoretics.episciences.org</link>
    </image>
    <pubDate>Tue, 07 Apr 2026 01:34:24 +0000</pubDate>
    <generator>episciences.org</generator>
    <link>https://theoretics.episciences.org</link>
    <author>TheoretiCS</author>
    <dc:creator>TheoretiCS</dc:creator>
    <atom:link rel="self" type="application/rss+xml" href="https://theoretics.episciences.org/rss/papers"/>
    <atom:link rel="hub" href="http://pubsubhubbub.appspot.com/"/>
    <item>
      <title>MSO-Enumeration Over SLP-Compressed Unranked Forests</title>
      <description><![CDATA[We study the problem of enumerating the answers to a query formulated in monadic second order logic (MSO) over an unranked forest F that is compressed by a straight-line program (SLP) D. Our main result states that this can be done after O(|D|) preprocessing and with output-linear delay (in data complexity). This is a substantial improvement over the previously known algorithms for MSO-evaluation over trees, since the compressed size |D| might be much smaller than (or even logarithmic in) the actual data size |F|, and there are linear-time SLP-compressors that yield very good compression on practical inputs. In particular, this also constitutes a meta-theorem in the field of algorithmics on SLP-compressed inputs: all enumeration problems on trees or strings that can be formulated in MSO-logic can be solved with linear preprocessing and output-linear delay, even if the inputs are compressed by SLPs. We also show that our approach can support vertex relabelling updates in time that is logarithmic in the size of the uncompressed data. Our result extends previous work on the enumeration of MSO-queries over uncompressed trees and on the enumeration of document spanners over compressed text documents.]]></description>
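      <!--
      For orientation (an editorial sketch, not from the paper): a straight-line program is a context-free grammar in which every nonterminal has exactly one rule, so it derives exactly one string (or forest). The minimal Python sketch below, with illustrative names, shows why the compressed size |D| can be logarithmic in the data size |F|: n rules derive a string of length 2^n.

      def expand(slp, symbol):
          """Recursively expand an SLP symbol into the string it derives."""
          rhs = slp.get(symbol)
          if rhs is None:  # terminal symbol
              return symbol
          return "".join(expand(slp, s) for s in rhs)

      n = 10
      slp = {0: ["a", "a"]}                                 # rule 0 derives "aa"
      slp.update({i: [i - 1, i - 1] for i in range(1, n)})  # rule i doubles rule i-1

      assert len(expand(slp, n - 1)) == 2 ** n              # n rules, 2^n characters
      -->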
      <pubDate>Thu, 26 Feb 2026 08:28:07 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.26.6</link>
      <guid>https://doi.org/10.46298/theoretics.26.6</guid>
      <author>Lohrey, Markus</author>
      <author>Schmid, Markus L.</author>
      <dc:creator>Lohrey, Markus</dc:creator>
      <dc:creator>Schmid, Markus L.</dc:creator>
      <content:encoded><![CDATA[We study the problem of enumerating the answers to a query formulated in monadic second order logic (MSO) over an unranked forest F that is compressed by a straight-line program (SLP) D. Our main result states that this can be done after O(|D|) preprocessing and with output-linear delay (in data complexity). This is a substantial improvement over the previously known algorithms for MSO-evaluation over trees, since the compressed size |D| might be much smaller than (or even logarithmic in) the actual data size |F|, and there are linear-time SLP-compressors that yield very good compression on practical inputs. In particular, this also constitutes a meta-theorem in the field of algorithmics on SLP-compressed inputs: all enumeration problems on trees or strings that can be formulated in MSO-logic can be solved with linear preprocessing and output-linear delay, even if the inputs are compressed by SLPs. We also show that our approach can support vertex relabelling updates in time that is logarithmic in the size of the uncompressed data. Our result extends previous work on the enumeration of MSO-queries over uncompressed trees and on the enumeration of document spanners over compressed text documents.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Positional $ω$-regular languages</title>
      <description><![CDATA[In the context of two-player games over graphs, a language $L$ is called positional if, in all games using $L$ as winning objective, the protagonist can play optimally using positional strategies, that is, strategies that do not depend on the history of the play. In this work, we describe the class of parity automata recognising positional languages, providing a complete characterisation of positionality for $ω$-regular languages. As corollaries, we establish decidability of positionality in polynomial time, finite-to-infinite and 1-to-2-players lifts, and show the closure under union of prefix-independent positional objectives, answering a conjecture by Kopczyński in the $ω$-regular case.]]></description>
      <pubDate>Tue, 24 Feb 2026 12:22:15 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.26.5</link>
      <guid>https://doi.org/10.46298/theoretics.26.5</guid>
      <author>Casares, Antonio</author>
      <author>Ohlmann, Pierre</author>
      <dc:creator>Casares, Antonio</dc:creator>
      <dc:creator>Ohlmann, Pierre</dc:creator>
      <content:encoded><![CDATA[In the context of two-player games over graphs, a language $L$ is called positional if, in all games using $L$ as winning objective, the protagonist can play optimally using positional strategies, that is, strategies that do not depend on the history of the play. In this work, we describe the class of parity automata recognising positional languages, providing a complete characterisation of positionality for $ω$-regular languages. As corollaries, we establish decidability of positionality in polynomial time, finite-to-infinite and 1-to-2-players lifts, and show the closure under union of prefix-independent positional objectives, answering a conjecture by Kopczyński in the $ω$-regular case.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Isomorphism for Tournaments of Small Twin Width</title>
      <description><![CDATA[We prove that isomorphism of tournaments of twin width at most $k$ can be decided in time $k^{O(\log k)}n^{O(1)}$. This implies that the isomorphism problem for classes of tournaments of bounded or moderately growing twin width is in polynomial time. By comparison, there are classes of undirected graphs of bounded twin width that are isomorphism complete, that is, the isomorphism problem for the classes is as hard as the general graph isomorphism problem. Twin width is a graph parameter that has been introduced only recently (Bonnet et al., J. ACM 2022), but has received a lot of attention in structural graph theory since then. On directed graphs, it is functionally smaller than clique width. We prove that on tournaments (but not on general directed graphs) it is also functionally smaller than directed tree width (and thus, the same also holds for cut width and directed path width). Hence, our result implies that tournament isomorphism testing is also fixed-parameter tractable when parameterized by any of these parameters. Our isomorphism algorithm heavily employs group-theoretic techniques. This seems to be necessary: as a second main result, we show that the combinatorial Weisfeiler-Leman algorithm does not decide isomorphism of tournaments of twin width at most 35 if its dimension is $o(n)$. (Throughout this abstract, $n$ is the order of the input graphs.)]]></description>
      <pubDate>Mon, 23 Feb 2026 10:14:38 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.26.4</link>
      <guid>https://doi.org/10.46298/theoretics.26.4</guid>
      <author>Grohe, Martin</author>
      <author>Neuen, Daniel</author>
      <dc:creator>Grohe, Martin</dc:creator>
      <dc:creator>Neuen, Daniel</dc:creator>
      <content:encoded><![CDATA[We prove that isomorphism of tournaments of twin width at most $k$ can be decided in time $k^{O(\log k)}n^{O(1)}$. This implies that the isomorphism problem for classes of tournaments of bounded or moderately growing twin width is in polynomial time. By comparison, there are classes of undirected graphs of bounded twin width that are isomorphism complete, that is, the isomorphism problem for the classes is as hard as the general graph isomorphism problem. Twin width is a graph parameter that has been introduced only recently (Bonnet et al., J. ACM 2022), but has received a lot of attention in structural graph theory since then. On directed graphs, it is functionally smaller than clique width. We prove that on tournaments (but not on general directed graphs) it is also functionally smaller than directed tree width (and thus, the same also holds for cut width and directed path width). Hence, our result implies that tournament isomorphism testing is also fixed-parameter tractable when parameterized by any of these parameters. Our isomorphism algorithm heavily employs group-theoretic techniques. This seems to be necessary: as a second main result, we show that the combinatorial Weisfeiler-Leman algorithm does not decide isomorphism of tournaments of twin width at most 35 if its dimension is $o(n)$. (Throughout this abstract, $n$ is the order of the input graphs.)]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Simple Sublinear Algorithms for $(Δ+1)$ Vertex Coloring via Asymmetric Palette Sparsification</title>
      <description><![CDATA[The palette sparsification theorem (PST) of Assadi, Chen, and Khanna (SODA 2019) states that in every graph $G$ with maximum degree $Δ$, sampling a list of $O(\log{n})$ colors from $\{1,\ldots,Δ+1\}$ for every vertex independently and uniformly, with high probability, allows for finding a $(Δ+1)$ vertex coloring of $G$ by coloring each vertex only from its sampled list. PST naturally leads to a host of sublinear algorithms for $(Δ+1)$ vertex coloring, including in semi-streaming, sublinear time, and MPC models, which are all proven to be nearly optimal, and in the case of the former two are the only known sublinear algorithms for this problem. While it is a natural and simple-to-state theorem, PST suffers from two drawbacks. Firstly, all its known proofs require technical arguments that rely on sophisticated graph decompositions and probabilistic arguments. Secondly, finding the coloring of the graph from the sampled lists in an efficient manner requires a considerably complicated algorithm. We show that a natural weakening of PST addresses both these drawbacks while still leading to sublinear algorithms of similar quality (up to polylog factors). In particular, we prove an asymmetric palette sparsification theorem (APST) that allows the sampled lists of different vertices to have different sizes and only bounds the average size of these lists. The benefit of this weaker requirement is that we can now easily show the graph can be $(Δ+1)$ colored from the sampled lists using the standard greedy coloring algorithm. This way, we can recover nearly-optimal bounds for $(Δ+1)$ vertex coloring in all the aforementioned models using algorithms that are much simpler to implement and analyze.]]></description>
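      <!--
      The final step the abstract describes (coloring from the sampled lists with the standard greedy algorithm) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: `adj` is an adjacency-list dict, `delta` the maximum degree, and the toy returns None when the sampled lists happen not to suffice, whereas the theorem bounds the probability of that event.

      import random

      def sample_and_greedy_color(adj, delta, list_size):
          # every vertex samples a short list of colors from {0, ..., delta}
          lists = {v: random.sample(range(delta + 1), list_size) for v in adj}
          color = {}
          for v in adj:  # fixed greedy order
              used = {color[u] for u in adj[v] if u in color}
              free = [c for c in lists[v] if c not in used]
              if not free:
                  return None  # sampled lists did not suffice on this run
              color[v] = free[0]
          return color
      -->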
      <pubDate>Thu, 29 Jan 2026 08:43:37 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.26.3</link>
      <guid>https://doi.org/10.46298/theoretics.26.3</guid>
      <author>Assadi, Sepehr</author>
      <author>Yazdanyar, Helia</author>
      <dc:creator>Assadi, Sepehr</dc:creator>
      <dc:creator>Yazdanyar, Helia</dc:creator>
      <content:encoded><![CDATA[The palette sparsification theorem (PST) of Assadi, Chen, and Khanna (SODA 2019) states that in every graph $G$ with maximum degree $Δ$, sampling a list of $O(\log{n})$ colors from $\{1,\ldots,Δ+1\}$ for every vertex independently and uniformly, with high probability, allows for finding a $(Δ+1)$ vertex coloring of $G$ by coloring each vertex only from its sampled list. PST naturally leads to a host of sublinear algorithms for $(Δ+1)$ vertex coloring, including in semi-streaming, sublinear time, and MPC models, which are all proven to be nearly optimal, and in the case of the former two are the only known sublinear algorithms for this problem. While it is a natural and simple-to-state theorem, PST suffers from two drawbacks. Firstly, all its known proofs require technical arguments that rely on sophisticated graph decompositions and probabilistic arguments. Secondly, finding the coloring of the graph from the sampled lists in an efficient manner requires a considerably complicated algorithm. We show that a natural weakening of PST addresses both these drawbacks while still leading to sublinear algorithms of similar quality (up to polylog factors). In particular, we prove an asymmetric palette sparsification theorem (APST) that allows the sampled lists of different vertices to have different sizes and only bounds the average size of these lists. The benefit of this weaker requirement is that we can now easily show the graph can be $(Δ+1)$ colored from the sampled lists using the standard greedy coloring algorithm. This way, we can recover nearly-optimal bounds for $(Δ+1)$ vertex coloring in all the aforementioned models using algorithms that are much simpler to implement and analyze.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Minimum Star Partitions of Simple Polygons in Polynomial Time</title>
      <description><![CDATA[We devise a polynomial-time algorithm for partitioning a simple polygon $P$ into a minimum number of star-shaped polygons. The question of whether such an algorithm exists has been open for more than four decades [Avis and Toussaint, Pattern Recognit., 1981] and it has been repeated frequently, for example in O'Rourke's famous book [Art Gallery Theorems and Algorithms, 1987]. In addition to its strong theoretical motivation, the problem is also motivated by practical domains such as CNC pocket milling, motion planning, and shape parameterization. The only previously known algorithm for a non-trivial special case is for $P$ being both monotone and rectilinear [Liu and Ntafos, Algorithmica, 1991]. For general polygons, an algorithm was only known for the restricted version in which Steiner points are disallowed [Keil, SIAM J. Comput., 1985], meaning that each corner of a piece in the partition must also be a corner of $P$. Interestingly, the solution size for the restricted version may be linear for instances where the unrestricted solution has constant size. The covering variant in which the pieces are star-shaped but allowed to overlap--known as the Art Gallery Problem--was recently shown to be $\exists\mathbb R$-complete and is thus likely not in NP [Abrahamsen, Adamaszek and Miltzow, STOC 2018 & J. ACM 2022]; this is in stark contrast to our result. Arguably the most related work to ours is the polynomial-time algorithm to partition a simple polygon into a minimum number of convex pieces by Chazelle and Dobkin [STOC, 1979 & Comp. Geom., 1985].]]></description>
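      <!--
      For readers outside computational geometry, the standard definition underlying the abstract (not quoted from the paper), in LaTeX:

      % A simple polygon P is star-shaped if its kernel is nonempty,
      % i.e. some point of P sees every other point of P.
      P \text{ is star-shaped} \iff \exists\, z \in P\ \forall\, p \in P:\ \overline{zp} \subseteq P.
      -->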
      <pubDate>Tue, 13 Jan 2026 08:34:20 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.26.2</link>
      <guid>https://doi.org/10.46298/theoretics.26.2</guid>
      <author>Abrahamsen, Mikkel</author>
      <author>Blikstad, Joakim</author>
      <author>Nusser, André</author>
      <author>Zhang, Hanwen</author>
      <dc:creator>Abrahamsen, Mikkel</dc:creator>
      <dc:creator>Blikstad, Joakim</dc:creator>
      <dc:creator>Nusser, André</dc:creator>
      <dc:creator>Zhang, Hanwen</dc:creator>
      <content:encoded><![CDATA[We devise a polynomial-time algorithm for partitioning a simple polygon $P$ into a minimum number of star-shaped polygons. The question of whether such an algorithm exists has been open for more than four decades [Avis and Toussaint, Pattern Recognit., 1981] and it has been repeated frequently, for example in O'Rourke's famous book [Art Gallery Theorems and Algorithms, 1987]. In addition to its strong theoretical motivation, the problem is also motivated by practical domains such as CNC pocket milling, motion planning, and shape parameterization. The only previously known algorithm for a non-trivial special case is for $P$ being both monotone and rectilinear [Liu and Ntafos, Algorithmica, 1991]. For general polygons, an algorithm was only known for the restricted version in which Steiner points are disallowed [Keil, SIAM J. Comput., 1985], meaning that each corner of a piece in the partition must also be a corner of $P$. Interestingly, the solution size for the restricted version may be linear for instances where the unrestricted solution has constant size. The covering variant in which the pieces are star-shaped but allowed to overlap--known as the Art Gallery Problem--was recently shown to be $\exists\mathbb R$-complete and is thus likely not in NP [Abrahamsen, Adamaszek and Miltzow, STOC 2018 & J. ACM 2022]; this is in stark contrast to our result. Arguably the most related work to ours is the polynomial-time algorithm to partition a simple polygon into a minimum number of convex pieces by Chazelle and Dobkin [STOC, 1979 & Comp. Geom., 1985].]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Deterministic approximate counting of colorings with fewer than $2Δ$ colors via absence of zeros</title>
      <description><![CDATA[Let $Δ,q\geq 3$ be integers. We prove that there exists $η\geq 0.002$ such that if $q\geq (2-η)Δ$, then there exists an open set $\mathcal{U}\subset \mathbb{C}$ that contains the interval $[0,1]$ such that for each $w\in \mathcal{U}$ and any graph $G=(V,E)$ of maximum degree at most $Δ$, the partition function of the anti-ferromagnetic $q$-state Potts model evaluated at $w$ does not vanish. This provides a (modest) improvement on a result of Liu, Sinclair, and Srivastava, and breaks the $q=2Δ$-barrier for this problem. As a direct consequence we obtain via Barvinok's interpolation method a deterministic polynomial time algorithm to approximate the number of proper $q$-colorings of graphs of maximum degree at most $Δ$, provided $q\geq (2-η)Δ$.]]></description>
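      <!--
      For orientation, the standard Potts partition function the abstract refers to (notation may differ from the paper), in LaTeX:

      % m(phi) counts monochromatic edges; at w = 0 the sum equals the number
      % of proper q-colorings of G, which is what Barvinok's interpolation
      % method exploits once zero-freeness on a neighborhood of [0,1] is known.
      Z_G(q, w) \;=\; \sum_{\phi\colon V \to \{1,\dots,q\}} w^{m(\phi)},
      \qquad m(\phi) \;=\; \bigl|\{\,\{u,v\} \in E : \phi(u) = \phi(v)\,\}\bigr|.
      -->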
      <pubDate>Tue, 13 Jan 2026 08:32:28 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.26.1</link>
      <guid>https://doi.org/10.46298/theoretics.26.1</guid>
      <author>Bencs, Ferenc</author>
      <author>Berrekkal, Khallil</author>
      <author>Regts, Guus</author>
      <dc:creator>Bencs, Ferenc</dc:creator>
      <dc:creator>Berrekkal, Khallil</dc:creator>
      <dc:creator>Regts, Guus</dc:creator>
      <content:encoded><![CDATA[Let $Δ,q\geq 3$ be integers. We prove that there exists $η\geq 0.002$ such that if $q\geq (2-η)Δ$, then there exists an open set $\mathcal{U}\subset \mathbb{C}$ that contains the interval $[0,1]$ such that for each $w\in \mathcal{U}$ and any graph $G=(V,E)$ of maximum degree at most $Δ$, the partition function of the anti-ferromagnetic $q$-state Potts model evaluated at $w$ does not vanish. This provides a (modest) improvement on a result of Liu, Sinclair, and Srivastava, and breaks the $q=2Δ$-barrier for this problem. As a direct consequence we obtain via Barvinok's interpolation method a deterministic polynomial time algorithm to approximate the number of proper $q$-colorings of graphs of maximum degree at most $Δ$, provided $q\geq (2-η)Δ$.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Algorithmizing the Multiplicity Schwartz-Zippel Lemma</title>
      <description><![CDATA[The multiplicity Schwartz-Zippel lemma asserts that over a field, a low-degree polynomial cannot vanish with high multiplicity very often on a sufficiently large product set. Since its discovery in a work of Dvir, Kopparty, Saraf and Sudan [SIAM J. Comput., 2013], the lemma has found numerous applications in both mathematics and computer science; in particular, in the definition and properties of multiplicity codes by Kopparty, Saraf and Yekhanin [J. ACM, 2014]. In this work, we show how to algorithmize the multiplicity Schwartz-Zippel lemma for arbitrary product sets over any field. In other words, we give an efficient algorithm for unique decoding of multivariate multiplicity codes from half their minimum distance on arbitrary product sets over all fields. Previously, such an algorithm was known either when the underlying product set had a nice algebraic structure, for instance, was a subfield (Kopparty [ToC, 2015]), or when the underlying field had large (or zero) characteristic, the multiplicity parameter was sufficiently large and the multiplicity code had distance bounded away from $1$ (Bhandari, Harsha, Kumar and Sudan [STOC 2021]). In particular, even unique decoding of bivariate multiplicity codes with multiplicity two from half their minimum distance was not known over arbitrary product sets over any field. Our algorithm builds upon a result of Kim and Kopparty [ToC, 2017], who gave an algorithmic version of the Schwartz-Zippel lemma (without multiplicities) or, equivalently, an efficient algorithm for unique decoding of Reed-Muller codes over arbitrary product sets. We introduce a refined notion of distance based on the multiplicity Schwartz-Zippel lemma and design a unique decoding algorithm for this distance measure. Along the way, we give an alternate analysis of Forney's classical generalized minimum distance decoder that might be of independent interest.]]></description>
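      <!--
      The lemma in its grid form, as stated by Dvir, Kopparty, Saraf and Sudan [SIAM J. Comput., 2013] (the paper generalizes this to arbitrary product sets), in LaTeX:

      % mult(P, a) is the multiplicity with which P vanishes at the point a.
      \text{If } 0 \ne P \in \mathbb{F}[x_1,\dots,x_n] \text{ has total degree at most } d
      \text{ and } S \subseteq \mathbb{F} \text{ is finite, then }
      \sum_{a \in S^n} \operatorname{mult}(P, a) \;\le\; d \cdot |S|^{n-1}.
      -->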
      <pubDate>Thu, 18 Dec 2025 09:29:03 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.29</link>
      <guid>https://doi.org/10.46298/theoretics.25.29</guid>
      <author>Bhandari, Siddharth</author>
      <author>Harsha, Prahladh</author>
      <author>Kumar, Mrinal</author>
      <author>Shankar, Ashutosh</author>
      <dc:creator>Bhandari, Siddharth</dc:creator>
      <dc:creator>Harsha, Prahladh</dc:creator>
      <dc:creator>Kumar, Mrinal</dc:creator>
      <dc:creator>Shankar, Ashutosh</dc:creator>
      <content:encoded><![CDATA[The multiplicity Schwartz-Zippel lemma asserts that over a field, a low-degree polynomial cannot vanish with high multiplicity very often on a sufficiently large product set. Since its discovery in a work of Dvir, Kopparty, Saraf and Sudan [SIAM J. Comput., 2013], the lemma has found numerous applications in both mathematics and computer science; in particular, in the definition and properties of multiplicity codes by Kopparty, Saraf and Yekhanin [J. ACM, 2014]. In this work, we show how to algorithmize the multiplicity Schwartz-Zippel lemma for arbitrary product sets over any field. In other words, we give an efficient algorithm for unique decoding of multivariate multiplicity codes from half their minimum distance on arbitrary product sets over all fields. Previously, such an algorithm was known either when the underlying product set had a nice algebraic structure, for instance, was a subfield (Kopparty [ToC, 2015]), or when the underlying field had large (or zero) characteristic, the multiplicity parameter was sufficiently large and the multiplicity code had distance bounded away from $1$ (Bhandari, Harsha, Kumar and Sudan [STOC 2021]). In particular, even unique decoding of bivariate multiplicity codes with multiplicity two from half their minimum distance was not known over arbitrary product sets over any field. Our algorithm builds upon a result of Kim and Kopparty [ToC, 2017], who gave an algorithmic version of the Schwartz-Zippel lemma (without multiplicities) or, equivalently, an efficient algorithm for unique decoding of Reed-Muller codes over arbitrary product sets. We introduce a refined notion of distance based on the multiplicity Schwartz-Zippel lemma and design a unique decoding algorithm for this distance measure. Along the way, we give an alternate analysis of Forney's classical generalized minimum distance decoder that might be of independent interest.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Completing the picture for the Skolem Problem on order-4 linear recurrence sequences</title>
      <description><![CDATA[For almost a century, the decidability of the Skolem Problem - that is, the problem of determining whether a given linear recurrence sequence (LRS) has a zero term - has remained open. A breakthrough in the 1980s established that the Skolem Problem is indeed decidable for algebraic LRS of order at most 3, and real algebraic LRS of order at most 4. However, for general algebraic LRS of order 4, the question of decidability remained open. Our main contribution in this paper is to prove decidability for this last case, i.e. we show that the Skolem Problem is decidable for all algebraic LRS of order at most 4.]]></description>
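      <!--
      A minimal illustration of the objects involved (the instance below is hypothetical; searching can only find zeros, and, as the abstract explains, the difficulty of the Skolem Problem is certifying that none exist):

      def lrs_terms(coeffs, init, count):
          # order-k LRS: u(n) = coeffs[0]*u(n-1) + ... + coeffs[k-1]*u(n-k)
          u = list(init)
          for n in range(len(init), count):
              u.append(sum(c * u[n - i - 1] for i, c in enumerate(coeffs)))
          return u

      # hypothetical order-4 instance: u(n) = 2u(n-1) - 4u(n-3) + u(n-4)
      terms = lrs_terms([2, 0, -4, 1], [1, 2, 3, 4], 50)
      print([n for n, t in enumerate(terms) if t == 0])  # zero terms found so far
      -->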
      <pubDate>Tue, 02 Dec 2025 09:29:03 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.28</link>
      <guid>https://doi.org/10.46298/theoretics.25.28</guid>
      <author>Bacik, Piotr</author>
      <dc:creator>Bacik, Piotr</dc:creator>
      <content:encoded><![CDATA[For almost a century, the decidability of the Skolem Problem - that is, the problem of determining whether a given linear recurrence sequence (LRS) has a zero term - has remained open. A breakthrough in the 1980s established that the Skolem Problem is indeed decidable for algebraic LRS of order at most 3, and real algebraic LRS of order at most 4. However, for general algebraic LRS of order 4, the question of decidability remained open. Our main contribution in this paper is to prove decidability for this last case, i.e. we show that the Skolem Problem is decidable for all algebraic LRS of order at most 4.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On Small-depth Frege Proofs for PHP</title>
      <description><![CDATA[We study Frege proofs for the one-to-one graph Pigeon Hole Principle defined on the $n\times n$ grid where $n$ is odd. We are interested in the case where each formula in the proof is a depth-$d$ formula in the basis given by $\land$, $\lor$, and $\neg$. We prove that in this situation the proof needs to be of size exponential in $n^{Ω(1/d)}$. If we restrict each line in the proof to size at most $M$, then the number of lines needed is exponential in $n/(\log M)^{O(d)}$. The main technical component of the proofs is to design a new family of random restrictions and to prove the appropriate switching lemmas.]]></description>
      <pubDate>Mon, 01 Dec 2025 09:01:48 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.27</link>
      <guid>https://doi.org/10.46298/theoretics.25.27</guid>
      <author>Håstad, Johan</author>
      <dc:creator>Håstad, Johan</dc:creator>
      <content:encoded><![CDATA[We study Frege proofs for the one-to-one graph Pigeon Hole Principle defined on the $n\times n$ grid where $n$ is odd. We are interested in the case where each formula in the proof is a depth-$d$ formula in the basis given by $\land$, $\lor$, and $\neg$. We prove that in this situation the proof needs to be of size exponential in $n^{Ω(1/d)}$. If we restrict each line in the proof to size at most $M$, then the number of lines needed is exponential in $n/(\log M)^{O(d)}$. The main technical component of the proofs is to design a new family of random restrictions and to prove the appropriate switching lemmas.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Protocols for Quantum Weak Coin Flipping</title>
      <description><![CDATA[Weak coin flipping is an important cryptographic primitive: it is the strongest known secure two-party computation primitive that classically becomes secure only under certain assumptions (e.g. computational hardness), while quantumly there exist protocols that achieve arbitrarily close to perfect security. This breakthrough result was established by Mochon in 2007 [arXiv:0711.4114]. However, his proof relied on the existence of certain unitary operators which was established by a non-constructive argument. Consequently, explicit protocols have remained elusive. In this work, we give exact constructions of related unitary operators. These, together with a new formalism, yield a family of protocols approaching perfect security, thereby also simplifying Mochon's proof of existence. We illustrate the construction of explicit weak coin flipping protocols by considering concrete examples (from the aforementioned family of protocols) that are more secure than all previously known protocols.]]></description>
      <pubDate>Wed, 26 Nov 2025 14:49:16 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.26</link>
      <guid>https://doi.org/10.46298/theoretics.25.26</guid>
      <author>Arora, Atul Singh</author>
      <author>Roland, Jérémie</author>
      <author>Vlachou, Chrysoula</author>
      <author>Weis, Stephan</author>
      <dc:creator>Arora, Atul Singh</dc:creator>
      <dc:creator>Roland, Jérémie</dc:creator>
      <dc:creator>Vlachou, Chrysoula</dc:creator>
      <dc:creator>Weis, Stephan</dc:creator>
      <content:encoded><![CDATA[Weak coin flipping is an important cryptographic primitive: it is the strongest known secure two-party computation primitive that classically becomes secure only under certain assumptions (e.g. computational hardness), while quantumly there exist protocols that achieve arbitrarily close to perfect security. This breakthrough result was established by Mochon in 2007 [arXiv:0711.4114]. However, his proof relied on the existence of certain unitary operators which was established by a non-constructive argument. Consequently, explicit protocols have remained elusive. In this work, we give exact constructions of related unitary operators. These, together with a new formalism, yield a family of protocols approaching perfect security, thereby also simplifying Mochon's proof of existence. We illustrate the construction of explicit weak coin flipping protocols by considering concrete examples (from the aforementioned family of protocols) that are more secure than all previously known protocols.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Directed Acyclic Outerplanar Graphs Have Constant Stack Number</title>
      <description><![CDATA[The stack number of a directed acyclic graph $G$ is the minimum $k$ for which there is a topological ordering of $G$ and a $k$-coloring of the edges such that no two edges of the same color cross, i.e., have alternating endpoints along the topological ordering. We prove that the stack number of directed acyclic outerplanar graphs is bounded by a constant, which gives a positive answer to a conjecture by Heath, Pemmaraju and Trenk [SIAM J. Computing, 1999]. As an immediate consequence, this shows that all upward outerplanar graphs have constant stack number, answering a question by Bhore et al. [Eur. J. Comb., 2023] and thereby making significant progress towards the problem for general upward planar graphs originating from Nowakowski and Parker [Order, 1989]. As our main tool we develop the novel technique of directed $H$-partitions, which might be of independent interest. We complement the bounded stack number for directed acyclic outerplanar graphs by constructing a family of directed acyclic 2-trees that have unbounded stack number, thereby refuting a conjecture by Nöllenburg and Pupyrev [GD 2023].]]></description>
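      <!--
      The crossing condition in the first sentence is easy to make concrete. A minimal checker (illustrative names; not from the paper): `pos` maps vertices to their positions in the topological ordering, `color` maps each edge to its stack.

      from itertools import combinations

      def is_valid_stack_layout(edges, color, pos):
          def crosses(e, f):  # endpoints alternate along the ordering
              a, b = sorted((pos[e[0]], pos[e[1]]))
              c, d = sorted((pos[f[0]], pos[f[1]]))
              return a < c < b < d or c < a < d < b
          return all(not crosses(e, f)
                     for e, f in combinations(edges, 2)
                     if color[e] == color[f])
      -->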
      <pubDate>Thu, 16 Oct 2025 22:00:00 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.25</link>
      <guid>https://doi.org/10.46298/theoretics.25.25</guid>
      <author>Jungeblut, Paul</author>
      <author>Merker, Laura</author>
      <author>Ueckerdt, Torsten</author>
      <dc:creator>Jungeblut, Paul</dc:creator>
      <dc:creator>Merker, Laura</dc:creator>
      <dc:creator>Ueckerdt, Torsten</dc:creator>
      <content:encoded><![CDATA[The stack number of a directed acyclic graph $G$ is the minimum $k$ for which there is a topological ordering of $G$ and a $k$-coloring of the edges such that no two edges of the same color cross, i.e., have alternating endpoints along the topological ordering. We prove that the stack number of directed acyclic outerplanar graphs is bounded by a constant, which gives a positive answer to a conjecture by Heath, Pemmaraju and Trenk [SIAM J. Computing, 1999]. As an immediate consequence, this shows that all upward outerplanar graphs have constant stack number, answering a question by Bhore et al. [Eur. J. Comb., 2023] and thereby making significant progress towards the problem for general upward planar graphs originating from Nowakowski and Parker [Order, 1989]. As our main tool we develop the novel technique of directed $H$-partitions, which might be of independent interest. We complement the bounded stack number for directed acyclic outerplanar graphs by constructing a family of directed acyclic 2-trees that have unbounded stack number, thereby refuting a conjecture by Nöllenburg and Pupyrev [GD 2023].]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>3SUM in Preprocessed Universes: Faster and Simpler</title>
      <description><![CDATA[We revisit the 3SUM problem in the \emph{preprocessed universes} setting. We present an algorithm that, given three sets $A$, $B$, $C$ of $n$ integers, preprocesses them in quadratic time, so that given any subsets $A' \subseteq A$, $B' \subseteq B$, $C' \subseteq C$, it can decide whether there exist $a \in A'$, $b \in B'$, $c \in C'$ with $a+b=c$ in time $O(n^{1.5} \log n)$. In contrast to both the first subquadratic $\tilde{O}(n^{13/7})$-time algorithm by Chan and Lewenstein (STOC 2015) and the current fastest $\tilde{O}(n^{11/6})$-time algorithm by Chan, Vassilevska Williams, and Xu (STOC 2023), which are based on the Balog--Szemerédi--Gowers theorem from additive combinatorics, our algorithm uses only standard 3SUM-related techniques, namely FFT and linear hashing modulo a prime. It is therefore not only faster but also simpler. Like the two previous algorithms, ours not only decides whether there is a single 3SUM solution but actually determines for each $c \in C'$ whether there is a solution containing it. We also modify the algorithm to still work in the scenario where the set $C$ is unknown at the time of preprocessing. Finally, even though the simplest version of our algorithm is randomized, we show how to make it deterministic, losing only polylogarithmic factors in the running time.]]></description>
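      <!--
      One of the two standard ingredients the abstract names, FFT, in a minimal self-contained sketch (this is the textbook convolution step on a small integer universe, not the paper's preprocessing scheme):

      import numpy as np

      def threesum_fft(a_set, b_set, c_set, universe):
          # indicator vectors of A' and B'; their convolution lists all sums a+b
          ia = np.zeros(universe)
          ib = np.zeros(universe)
          ia[list(a_set)] = 1
          ib[list(b_set)] = 1
          conv = np.fft.ifft(np.fft.fft(ia, 2 * universe) * np.fft.fft(ib, 2 * universe))
          sums = np.rint(np.real(conv))
          return any(sums[c] > 0 for c in c_set)

      print(threesum_fft({1, 5}, {2, 8}, {7, 4}, 16))  # True, since 5 + 2 = 7
      -->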
      <pubDate>Thu, 16 Oct 2025 07:31:26 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.24</link>
      <guid>https://doi.org/10.46298/theoretics.25.24</guid>
      <author>Kasliwal, Shashwat</author>
      <author>Polak, Adam</author>
      <author>Sharma, Pratyush</author>
      <dc:creator>Kasliwal, Shashwat</dc:creator>
      <dc:creator>Polak, Adam</dc:creator>
      <dc:creator>Sharma, Pratyush</dc:creator>
      <content:encoded><![CDATA[We revisit the 3SUM problem in the \emph{preprocessed universes} setting. We present an algorithm that, given three sets $A$, $B$, $C$ of $n$ integers, preprocesses them in quadratic time, so that given any subsets $A' \subseteq A$, $B' \subseteq B$, $C' \subseteq C$, it can decide whether there exist $a \in A'$, $b \in B'$, $c \in C'$ with $a+b=c$ in time $O(n^{1.5} \log n)$. In contrast to both the first subquadratic $\tilde{O}(n^{13/7})$-time algorithm by Chan and Lewenstein (STOC 2015) and the current fastest $\tilde{O}(n^{11/6})$-time algorithm by Chan, Vassilevska Williams, and Xu (STOC 2023), which are based on the Balog--Szemerédi--Gowers theorem from additive combinatorics, our algorithm uses only standard 3SUM-related techniques, namely FFT and linear hashing modulo a prime. It is therefore not only faster but also simpler. Like the two previous algorithms, ours not only decides whether there is a single 3SUM solution but actually determines for each $c \in C'$ whether there is a solution containing it. We also modify the algorithm to still work in the scenario where the set $C$ is unknown at the time of preprocessing. Finally, even though the simplest version of our algorithm is randomized, we show how to make it deterministic, losing only polylogarithmic factors in the running time.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Upper and Lower Bounds on the Smoothed Complexity of the Simplex Method</title>
      <description><![CDATA[The simplex method for linear programming is known to be highly efficient in practice, and understanding its performance from a theoretical perspective is an active research topic. The framework of smoothed analysis, first introduced by Spielman and Teng (JACM '04) for this purpose, defines the smoothed complexity of solving a linear program with $d$ variables and $n$ constraints as the expected running time when Gaussian noise of variance $σ^2$ is added to the LP data. We prove that the smoothed complexity of the simplex method is $O(σ^{-3/2} d^{13/4}\log^{7/4} n)$, improving the dependence on $1/σ$ compared to the previous bound of $O(σ^{-2} d^2\sqrt{\log n})$. We accomplish this through a new analysis of the \emph{shadow bound}, key to earlier analyses as well. Illustrating the power of our new method, we use it to prove a nearly tight upper bound on the smoothed complexity of two-dimensional polygons. We also establish the first non-trivial lower bound on the smoothed complexity of the simplex method, proving that the \emph{shadow vertex simplex method} requires at least $Ω\Big(\min \big(σ^{-1/2} d^{-1/2}\log^{-1/4} d,2^d \big) \Big)$ pivot steps with high probability. A key part of our analysis is a new variation on the extended formulation for the regular $2^k$-gon. We end with a numerical experiment that suggests this analysis could be further improved.]]></description>
      <pubDate>Wed, 15 Oct 2025 08:05:48 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.23</link>
      <guid>https://doi.org/10.46298/theoretics.25.23</guid>
      <author>Huiberts, Sophie</author>
      <author>Lee, Yin Tat</author>
      <author>Zhang, Xinzhi</author>
      <dc:creator>Huiberts, Sophie</dc:creator>
      <dc:creator>Lee, Yin Tat</dc:creator>
      <dc:creator>Zhang, Xinzhi</dc:creator>
      <content:encoded><![CDATA[The simplex method for linear programming is known to be highly efficient in practice, and understanding its performance from a theoretical perspective is an active research topic. The framework of smoothed analysis, first introduced by Spielman and Teng (JACM '04) for this purpose, defines the smoothed complexity of solving a linear program with $d$ variables and $n$ constraints as the expected running time when Gaussian noise of variance $σ^2$ is added to the LP data. We prove that the smoothed complexity of the simplex method is $O(σ^{-3/2} d^{13/4}\log^{7/4} n)$, improving the dependence on $1/σ$ compared to the previous bound of $O(σ^{-2} d^2\sqrt{\log n})$. We accomplish this through a new analysis of the \emph{shadow bound}, key to earlier analyses as well. Illustrating the power of our new method, we use it to prove a nearly tight upper bound on the smoothed complexity of two-dimensional polygons. We also establish the first non-trivial lower bound on the smoothed complexity of the simplex method, proving that the \emph{shadow vertex simplex method} requires at least $Ω\Big(\min \big(σ^{-1/2} d^{-1/2}\log^{-1/4} d,2^d \big) \Big)$ pivot steps with high probability. A key part of our analysis is a new variation on the extended formulation for the regular $2^k$-gon. We end with a numerical experiment that suggests this analysis could be further improved.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimal Padded Decomposition For Bounded Treewidth Graphs</title>
      <description><![CDATA[A $(β,δ,Δ)$-padded decomposition of an edge-weighted graph $G = (V,E,w)$ is a stochastic decomposition into clusters of diameter at most $Δ$ such that for every vertex $v\in V$, the probability that $\rm{ball}_G(v,γΔ)$ is entirely contained in the cluster containing $v$ is at least $e^{-βγ}$ for every $γ\in [0,δ]$. Padded decompositions have been studied for decades and have found numerous applications, including metric embedding, multicommodity flow-cut gap, multicut, and 0-extension problems, to name a few. In these applications, the parameter $β$, called the padding parameter, is the most important one, since it determines either the distortion or the approximation ratios. For general graphs with $n$ vertices, $β= Θ(\log n)$. Klein, Plotkin, and Rao showed that $K_r$-minor-free graphs have padding parameter $β= O(r^3)$, which is a significant improvement over general graphs when $r$ is a constant. A long-standing conjecture is to construct a padded decomposition for $K_r$-minor-free graphs with padding parameter $β= O(\log r)$. Despite decades of research, the best-known result is $β= O(r)$, even for graphs with treewidth at most $r$. In this work, we make significant progress toward the aforementioned conjecture by showing that graphs with treewidth $\rm{tw}$ admit a padded decomposition with padding parameter $O(\log \rm{tw})$, which is tight. As corollaries, we obtain an exponential improvement in dependency on treewidth in a host of algorithmic applications: $O(\sqrt{ \log n \cdot \log(\rm{tw})})$ flow-cut gap, max flow-min multicut ratio of $O(\log(\rm{tw}))$, an $O(\log(\rm{tw}))$ approximation for the 0-extension problem, an $\ell^{O(\log n)}_\infty$ embedding with distortion $O(\log \rm{tw})$, and an $O(\log \rm{tw})$ bound on the integrality gap for the uniform sparsest cut.]]></description>
      <pubDate>Fri, 10 Oct 2025 07:48:53 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.22</link>
      <guid>https://doi.org/10.46298/theoretics.25.22</guid>
      <author>Filtser, Arnold</author>
      <author>Friedrich, Tobias</author>
      <author>Issac, Davis</author>
      <author>Kumar, Nikhil</author>
      <author>Le, Hung</author>
      <author>Mallek, Nadym</author>
      <author>Zeif, Ziena</author>
      <dc:creator>Filtser, Arnold</dc:creator>
      <dc:creator>Friedrich, Tobias</dc:creator>
      <dc:creator>Issac, Davis</dc:creator>
      <dc:creator>Kumar, Nikhil</dc:creator>
      <dc:creator>Le, Hung</dc:creator>
      <dc:creator>Mallek, Nadym</dc:creator>
      <dc:creator>Zeif, Ziena</dc:creator>
      <content:encoded><![CDATA[A $(β,δ,Δ)$-padded decomposition of an edge-weighted graph $G = (V,E,w)$ is a stochastic decomposition into clusters of diameter at most $Δ$ such that for every vertex $v\in V$, the probability that $\rm{ball}_G(v,γΔ)$ is entirely contained in the cluster containing $v$ is at least $e^{-βγ}$ for every $γ\in [0,δ]$. Padded decompositions have been studied for decades and have found numerous applications, including metric embedding, multicommodity flow-cut gap, multicut, and 0-extension problems, to name a few. In these applications, the parameter $β$, called the padding parameter, is the most important one, since it determines either the distortion or the approximation ratios. For general graphs with $n$ vertices, $β= Θ(\log n)$. Klein, Plotkin, and Rao showed that $K_r$-minor-free graphs have padding parameter $β= O(r^3)$, which is a significant improvement over general graphs when $r$ is a constant. A long-standing conjecture is to construct a padded decomposition for $K_r$-minor-free graphs with padding parameter $β= O(\log r)$. Despite decades of research, the best-known result is $β= O(r)$, even for graphs with treewidth at most $r$. In this work, we make significant progress toward the aforementioned conjecture by showing that graphs with treewidth $\rm{tw}$ admit a padded decomposition with padding parameter $O(\log \rm{tw})$, which is tight. As corollaries, we obtain an exponential improvement in dependency on treewidth in a host of algorithmic applications: $O(\sqrt{ \log n \cdot \log(\rm{tw})})$ flow-cut gap, max flow-min multicut ratio of $O(\log(\rm{tw}))$, an $O(\log(\rm{tw}))$ approximation for the 0-extension problem, an $\ell^{O(\log n)}_\infty$ embedding with distortion $O(\log \rm{tw})$, and an $O(\log \rm{tw})$ bound on the integrality gap for the uniform sparsest cut.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Determinantal Sieving</title>
      <description><![CDATA[We introduce determinantal sieving, a new, remarkably powerful tool in the toolbox of algebraic FPT algorithms. Given a polynomial $P(X)$ on a set of variables $X=\{x_1,\ldots,x_n\}$ and a linear matroid $M=(X,\mathcal{I})$ of rank $k$, both over a field $\mathbb{F}$ of characteristic 2, in $2^k$ evaluations we can sieve for those terms in the monomial expansion of $P$ which are multilinear and whose support is a basis for $M$. Alternatively, using $2^k$ evaluations of $P$ we can sieve for those monomials whose odd support spans $M$. Applying this framework, we improve on a range of algebraic FPT algorithms, such as: 1. Solving $q$-Matroid Intersection in time $O^*(2^{(q-2)k})$ and $q$-Matroid Parity in time $O^*(2^{qk})$, improving on $O^*(4^{qk})$ over general fields (Brand and Pratt, ICALP 2021) 2. Long $(s,t)$-Path in $O^*(1.66^k)$ time, improving on $O^*(2^k)$, and Rank $k$ $(S,T)$-Linkage in so-called frameworks in $O^*(2^k)$ time, improving on $O^*(2^{|S|+O(k^2 \log(k+|\mathbb{F}|))})$ over general fields (Fomin et al., SODA 2023). 3. Many instances of the Diverse X paradigm, finding a collection of $r$ solutions to a problem with a minimum mutual distance of $d$ in time $O^*(2^{r(r-1)d/2})$, improving solutions for $k$-Distinct Branchings from time $2^{O(k \log k)}$ to $O^*(2^k)$ (Bang-Jensen et al., ESA 2021), and for Diverse Perfect Matchings from $O^*(2^{2^{O(rd)}})$ to $O^*(2^{r^2d/2})$ (Fomin et al., STACS 2021). Here, all matroids are assumed to be represented over fields of characteristic 2. Over general fields, we achieve similar results at the cost of using exponential space by working over the exterior algebra. For a class of arithmetic circuits we call strongly monotone, this is even achieved without any loss of running time. However, the odd support sieving result appears to be specific to working over characteristic 2.]]></description>
      <pubDate>Thu, 02 Oct 2025 08:18:57 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.21</link>
      <guid>https://doi.org/10.46298/theoretics.25.21</guid>
      <author>Eiben, Eduard</author>
      <author>Koana, Tomohiro</author>
      <author>Wahlström, Magnus</author>
      <dc:creator>Eiben, Eduard</dc:creator>
      <dc:creator>Koana, Tomohiro</dc:creator>
      <dc:creator>Wahlström, Magnus</dc:creator>
      <content:encoded><![CDATA[We introduce determinantal sieving, a new, remarkably powerful tool in the toolbox of algebraic FPT algorithms. Given a polynomial $P(X)$ on a set of variables $X=\{x_1,\ldots,x_n\}$ and a linear matroid $M=(X,\mathcal{I})$ of rank $k$, both over a field $\mathbb{F}$ of characteristic 2, in $2^k$ evaluations we can sieve for those terms in the monomial expansion of $P$ which are multilinear and whose support is a basis for $M$. Alternatively, using $2^k$ evaluations of $P$ we can sieve for those monomials whose odd support spans $M$. Applying this framework, we improve on a range of algebraic FPT algorithms, such as: 1. Solving $q$-Matroid Intersection in time $O^*(2^{(q-2)k})$ and $q$-Matroid Parity in time $O^*(2^{qk})$, improving on $O^*(4^{qk})$ over general fields (Brand and Pratt, ICALP 2021) 2. Long $(s,t)$-Path in $O^*(1.66^k)$ time, improving on $O^*(2^k)$, and Rank $k$ $(S,T)$-Linkage in so-called frameworks in $O^*(2^k)$ time, improving on $O^*(2^{|S|+O(k^2 \log(k+|\mathbb{F}|))})$ over general fields (Fomin et al., SODA 2023). 3. Many instances of the Diverse X paradigm, finding a collection of $r$ solutions to a problem with a minimum mutual distance of $d$ in time $O^*(2^{r(r-1)d/2})$, improving solutions for $k$-Distinct Branchings from time $2^{O(k \log k)}$ to $O^*(2^k)$ (Bang-Jensen et al., ESA 2021), and for Diverse Perfect Matchings from $O^*(2^{2^{O(rd)}})$ to $O^*(2^{r^2d/2})$ (Fomin et al., STACS 2021). Here, all matroids are assumed to be represented over fields of characteristic 2. Over general fields, we achieve similar results at the cost of using exponential space by working over the exterior algebra. For a class of arithmetic circuits we call strongly monotone, this is even achieved without any loss of running time. However, the odd support sieving result appears to be specific to working over characteristic 2.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Randomized Communication and Implicit Graph Representations</title>
      <description><![CDATA[We initiate the focused study of constant-cost randomized communication, with emphasis on its connection to graph representations. We observe that constant-cost randomized communication problems are equivalent to hereditary (i.e. closed under taking induced subgraphs) graph classes which admit constant-size adjacency sketches and probabilistic universal graphs (PUGs), which are randomized versions of the well-studied adjacency labeling schemes and induced-universal graphs. This gives a new perspective on long-standing questions about the existence of these objects, including new methods of constructing adjacency labeling schemes. We ask three main questions about constant-cost communication, or equivalently, constant-size PUGs: (1) Are there any natural, non-trivial problems aside from Equality and k-Hamming Distance which have constant-cost communication? We provide a number of new examples, including deciding whether two vertices have path-distance at most k in a planar graph, and showing that constant-size PUGs are preserved by the Cartesian product operation. (2) What structures of a problem explain the existence or non-existence of a constant-cost protocol? We show that in many cases a Greater-Than subproblem is such a structure. (3) Is the Equality problem complete for constant-cost randomized communication? We show that it is not: there are constant-cost problems which do not reduce to Equality.]]></description>
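      <!--
      Background for readers new to the model (the textbook public-coin protocol showing that Equality has constant cost; not a construction from the paper):

      import random

      def equality_protocol(x_bits, y_bits, k=20):
          # Alice and Bob share k random vectors and compare inner products mod 2.
          # Equal inputs always agree; unequal inputs agree on all k rounds with
          # probability 2**(-k), so O(1) exchanged bits give constant error.
          n = len(x_bits)
          for _ in range(k):
              r = [random.randrange(2) for _ in range(n)]
              if sum(x * b for x, b in zip(x_bits, r)) % 2 != sum(y * b for y, b in zip(y_bits, r)) % 2:
                  return False  # inputs certainly differ
          return True
      -->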
      <pubDate>Wed, 01 Oct 2025 07:16:46 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.20</link>
      <guid>https://doi.org/10.46298/theoretics.25.20</guid>
      <author>Harms, Nathaniel</author>
      <author>Wild, Sebastian</author>
      <author>Zamaraev, Viktor</author>
      <dc:creator>Harms, Nathaniel</dc:creator>
      <dc:creator>Wild, Sebastian</dc:creator>
      <dc:creator>Zamaraev, Viktor</dc:creator>
      <content:encoded><![CDATA[We initiate the focused study of constant-cost randomized communication, with emphasis on its connection to graph representations. We observe that constant-cost randomized communication problems are equivalent to hereditary (i.e. closed under taking induced subgraphs) graph classes which admit constant-size adjacency sketches and probabilistic universal graphs (PUGs), which are randomized versions of the well-studied adjacency labeling schemes and induced-universal graphs. This gives a new perspective on long-standing questions about the existence of these objects, including new methods of constructing adjacency labeling schemes. We ask three main questions about constant-cost communication, or equivalently, constant-size PUGs: (1) Are there any natural, non-trivial problems aside from Equality and k-Hamming Distance which have constant-cost communication? We provide a number of new examples, including deciding whether two vertices have path-distance at most k in a planar graph, and showing that constant-size PUGs are preserved by the Cartesian product operation. (2) What structures of a problem explain the existence or non-existence of a constant-cost protocol? We show that in many cases a Greater-Than subproblem is such a structure. (3) Is the Equality problem complete for constant-cost randomized communication? We show that it is not: there are constant-cost problems which do not reduce to Equality.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On the Constant-Depth Circuit Complexity of Generating Quasigroups</title>
      <description><![CDATA[We investigate the constant-depth circuit complexity of the Isomorphism Problem, Minimum Generating Set Problem (MGS), and Sub(quasi)group Membership Problem (Membership) for groups and quasigroups (=Latin squares), given as input in terms of their multiplication (Cayley) tables. Despite decades of research on these problems, lower bounds for these problems even against depth-$2$ AC circuits remain unknown. Perhaps surprisingly, Chattopadhyay, Torán, and Wagner (FSTTCS 2010; ACM Trans. Comput. Theory, 2013) showed that Quasigroup Isomorphism could be solved by AC circuits of depth $O(\log \log n)$ using $O(\log^2 n)$ nondeterministic bits, a class we denote $\exists^{\log^2(n)}FOLL$. We narrow this gap by improving the upper bound for many of these problems to $quasiAC^0$, thus decreasing the depth to constant. In particular, we show: - MGS for quasigroups is in $\exists^{\log^2(n)}\forall^{\log n}NTIME(\mathrm{polylog}(n))\subseteq quasiAC^0$. Papadimitriou and Yannakakis (J. Comput. Syst. Sci., 1996) conjectured that this problem was $\exists^{\log^2(n)}P$-complete; our results refute a version of that conjecture for completeness under $quasiAC^0$ reductions unconditionally, and under polylog-space reductions assuming EXP $\neq$ PSPACE. - MGS for groups is in $AC^{1}(L)$, improving on the previous upper bound of $P$ (Lucchini & Thakkar, J. Algebra, 2024). - Quasigroup Isomorphism belongs to $\exists^{\log^2(n)}AC^0(DTISP(\mathrm{polylog},\log))\subseteq quasiAC^0$, improving on the previous bound of $\exists^{\log^2(n)}L\cap\exists^{\log^2(n)}FOLL\subseteq quasiFOLL$ (Chattopadhyay, Torán, & Wagner, ibid.; Levet, Australas. J. Combin., 2023). Our results suggest that understanding the constant-depth circuit complexity may be key to resolving the complexity of problems concerning (quasi)groups in the multiplication table model.]]></description>
      <pubDate>Fri, 22 Aug 2025 06:56:13 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.19</link>
      <guid>https://doi.org/10.46298/theoretics.25.19</guid>
      <author>Collins, Nathaniel A.</author>
      <author>Grochow, Joshua A.</author>
      <author>Levet, Michael</author>
      <author>Weiß, Armin</author>
      <dc:creator>Collins, Nathaniel A.</dc:creator>
      <dc:creator>Grochow, Joshua A.</dc:creator>
      <dc:creator>Levet, Michael</dc:creator>
      <dc:creator>Weiß, Armin</dc:creator>
      <content:encoded><![CDATA[We investigate the constant-depth circuit complexity of the Isomorphism Problem, Minimum Generating Set Problem (MGS), and Sub(quasi)group Membership Problem (Membership) for groups and quasigroups (=Latin squares), given as input in terms of their multiplication (Cayley) tables. Despite decades of research on these problems, lower bounds for these problems even against depth-$2$ AC circuits remain unknown. Perhaps surprisingly, Chattopadhyay, Torán, and Wagner (FSTTCS 2010; ACM Trans. Comput. Theory, 2013) showed that Quasigroup Isomorphism could be solved by AC circuits of depth $O(\log \log n)$ using $O(\log^2 n)$ nondeterministic bits, a class we denote $\exists^{\log^2(n)}FOLL$. We narrow this gap by improving the upper bound for many of these problems to $quasiAC^0$, thus decreasing the depth to constant. In particular, we show: - MGS for quasigroups is in $\exists^{\log^2(n)}\forall^{\log n}NTIME(\mathrm{polylog}(n))\subseteq quasiAC^0$. Papadimitriou and Yannakakis (J. Comput. Syst. Sci., 1996) conjectured that this problem was $\exists^{\log^2(n)}P$-complete; our results refute a version of that conjecture for completeness under $quasiAC^0$ reductions unconditionally, and under polylog-space reductions assuming EXP $\neq$ PSPACE. - MGS for groups is in $AC^{1}(L)$, improving on the previous upper bound of $P$ (Lucchini & Thakkar, J. Algebra, 2024). - Quasigroup Isomorphism belongs to $\exists^{\log^2(n)}AC^0(DTISP(\mathrm{polylog},\log))\subseteq quasiAC^0$, improving on the previous bound of $\exists^{\log^2(n)}L\cap\exists^{\log^2(n)}FOLL\subseteq quasiFOLL$ (Chattopadhyay, Torán, & Wagner, ibid.; Levet, Australas. J. Combin., 2023). Our results suggest that understanding the constant-depth circuit complexity may be key to resolving the complexity of problems concerning (quasi)groups in the multiplication table model.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
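    <!-- A brute-force reference point for the Minimum Generating Set problem in
    the multiplication-table model discussed in the item above: the (quasi)group
    is given only as an n x n Cayley table T with T[a][b] = a*b. This sketch pins
    down the problem; it runs in exponential time and is not the paper's
    quasiAC^0 / AC^1(L) construction. Python, assuming elements are 0..n-1.

    from itertools import combinations

    def closure(table, gens):
        # Elements reachable from gens by repeated multiplication. For a finite
        # quasigroup, a multiplication-closed subset is already a subquasigroup.
        elems = set(gens)
        frontier = list(elems)
        while frontier:
            a = frontier.pop()
            for b in list(elems):
                for c in (table[a][b], table[b][a]):
                    if c not in elems:
                        elems.add(c)
                        frontier.append(c)
        return elems

    def minimum_generating_set(table):
        n = len(table)
        for k in range(1, n + 1):            # smallest size first
            for gens in combinations(range(n), k):
                if len(closure(table, gens)) == n:
                    return list(gens)

    # Z_4 under addition mod 4 is generated by the single element 1.
    z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
    assert minimum_generating_set(z4) == [1]
    -->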
    <item>
      <title>Quantum Money from Abelian Group Actions</title>
      <description><![CDATA[We give a construction of public key quantum money, and even a strengthened version called quantum lightning, from abelian group actions, which can in turn be constructed from suitable isogenies over elliptic curves. We prove security in the generic group model for group actions under a plausible computational assumption, and develop a general toolkit for proving quantum security in this model. Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptions/models compared to generic group actions.]]></description>
      <pubDate>Tue, 19 Aug 2025 07:14:07 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.18</link>
      <guid>https://doi.org/10.46298/theoretics.25.18</guid>
      <author>Zhandry, Mark</author>
      <dc:creator>Zhandry, Mark</dc:creator>
      <content:encoded><![CDATA[We give a construction of public key quantum money, and even a strengthened version called quantum lightning, from abelian group actions, which can in turn be constructed from suitable isogenies over elliptic curves. We prove security in the generic group model for group actions under a plausible computational assumption, and develop a general toolkit for proving quantum security in this model. Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptions/models compared to generic group actions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Note on Approximating Weighted Nash Social Welfare with Additive Valuations</title>
      <description><![CDATA[We give the first $O(1)$-approximation for the weighted Nash Social Welfare problem with additive valuations. The approximation ratio we obtain is $e^{1/e} + ε \approx 1.445 + ε$, which matches the best known approximation ratio for the unweighted case. Both our algorithm and analysis are simple. We solve a natural configuration LP for the problem, and obtain the allocation of items to agents using a randomized version of the Shmoys-Tardos rounding algorithm developed for unrelated machine scheduling problems. In the analysis, we show that the approximation ratio of the algorithm is at most the worst gap between the Nash social welfare of the optimum allocation and that of an EF1 allocation, for an unweighted Nash Social Welfare instance with identical additive valuations. This was shown to be at most $e^{1/e} \approx 1.445$ by Barman, Krishnamurthy and Vaish, leading to our approximation ratio.]]></description>
      <pubDate>Thu, 14 Aug 2025 07:20:15 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.17</link>
      <guid>https://doi.org/10.46298/theoretics.25.17</guid>
      <author>Feng, Yuda</author>
      <author>Li, Shi</author>
      <dc:creator>Feng, Yuda</dc:creator>
      <dc:creator>Li, Shi</dc:creator>
      <content:encoded><![CDATA[We give the first $O(1)$-approximation for the weighted Nash Social Welfare problem with additive valuations. The approximation ratio we obtain is $e^{1/e} + ε \approx 1.445 + ε$, which matches the best known approximation ratio for the unweighted case. Both our algorithm and analysis are simple. We solve a natural configuration LP for the problem, and obtain the allocation of items to agents using a randomized version of the Shmoys-Tardos rounding algorithm developed for unrelated machine scheduling problems. In the analysis, we show that the approximation ratio of the algorithm is at most the worst gap between the Nash social welfare of the optimum allocation and that of an EF1 allocation, for an unweighted Nash Social Welfare instance with identical additive valuations. This was shown to be at most $e^{1/e} \approx 1.445$ by Barman, Krishnamurthy and Vaish, leading to our approximation ratio.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
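    <!-- To make the rounding step in the item above concrete: given a fractional
    assignment x[i][j] of items to agents (each item's column sums to 1), one can
    draw an integral allocation by sampling each item's agent. The sketch below
    uses *independent* rounding for illustration only; the paper's algorithm uses
    a dependent, randomized variant of Shmoys-Tardos rounding with a different
    guarantee. All names here are illustrative.

    import random

    def round_assignment(x):
        # x: x[i][j] in [0,1] with sum_i x[i][j] == 1 for every item j.
        n_agents, n_items = len(x), len(x[0])
        alloc = {i: [] for i in range(n_agents)}
        for j in range(n_items):
            r, acc = random.random(), 0.0
            for i in range(n_agents):
                acc += x[i][j]
                if r < acc:
                    alloc[i].append(j)
                    break
            else:
                alloc[n_agents - 1].append(j)   # guard against float round-off
        return alloc

    # Two agents, two items, each item split 50/50 fractionally.
    print(round_assignment([[0.5, 0.5], [0.5, 0.5]]))
    -->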
    <item>
      <title>A Simple $(1-ε)$-Approximation Semi-Streaming Algorithm for Maximum (Weighted) Matching</title>
      <description><![CDATA[We present a simple semi-streaming algorithm for $(1-ε)$-approximation of bipartite matching in $O(\log{\!(n)}/ε)$ passes. This matches the performance of state-of-the-art "$ε$-efficient" algorithms -- the ones with much better dependence on $ε$ albeit with some mild dependence on $n$ -- while being considerably simpler. The algorithm relies on a direct application of the multiplicative weight update method with a self-contained primal-dual analysis that can be of independent interest. To showcase this, we use the same ideas, alongside standard tools from matching theory, to present an equally simple semi-streaming algorithm for $(1-ε)$-approximation of weighted matchings in general (not necessarily bipartite) graphs, again in $O(\log{\!(n)}/ε)$ passes.]]></description>
      <pubDate>Fri, 01 Aug 2025 13:02:57 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.16</link>
      <guid>https://doi.org/10.46298/theoretics.25.16</guid>
      <author>Assadi, Sepehr</author>
      <dc:creator>Assadi, Sepehr</dc:creator>
      <content:encoded><![CDATA[We present a simple semi-streaming algorithm for $(1-ε)$-approximation of bipartite matching in $O(\log{\!(n)}/ε)$ passes. This matches the performance of state-of-the-art "$ε$-efficient" algorithms -- the ones with much better dependence on $ε$ albeit with some mild dependence on $n$ -- while being considerably simpler. The algorithm relies on a direct application of the multiplicative weight update method with a self-contained primal-dual analysis that can be of independent interest. To showcase this, we use the same ideas, alongside standard tools from matching theory, to present an equally simple semi-streaming algorithm for $(1-ε)$-approximation of weighted matchings in general (not necessarily bipartite) graphs, again in $O(\log{\!(n)}/ε)$ passes.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
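    <!-- The generic multiplicative weight update (MWU) primitive that the
    algorithm in the item above applies: keep one weight per expert, play the
    normalized weights, observe losses in [0,1], update multiplicatively. This is
    the textbook update only; the matching-specific oracle and the primal-dual
    analysis are the paper's contribution.

    def mwu(losses_per_round, eta):
        # losses_per_round: iterable of loss vectors (lists of floats in [0,1]).
        weights = None
        for losses in losses_per_round:
            if weights is None:
                weights = [1.0] * len(losses)
            total = sum(weights)
            yield [w / total for w in weights]            # distribution played
            weights = [w * (1.0 - eta * l) for w, l in zip(weights, losses)]

    # Three experts over two rounds; experts that incur loss shed weight.
    for dist in mwu([[0, 1, 1], [0, 1, 0]], eta=0.5):
        print(dist)
    -->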
    <item>
      <title>Parameterized algorithms for block-structured integer programs with large entries</title>
      <description><![CDATA[We study two classic variants of block-structured integer programming. Two-stage stochastic programs are integer programs of the form $\{A_i \mathbf{x} + D_i \mathbf{y}_i = \mathbf{b}_i\textrm{ for all }i=1,\ldots,n\}$, where $A_i$ and $D_i$ are bounded-size matrices. On the other hand, $n$-fold programs are integer programs of the form $\{{\sum_{i=1}^n C_i\mathbf{y}_i=\mathbf{a}} \textrm{ and } D_i\mathbf{y}_i=\mathbf{b}_i\textrm{ for all }i=1,\ldots,n\}$, where again $C_i$ and $D_i$ are bounded-size matrices. It is known that solving these kinds of programs is fixed-parameter tractable when parameterized by the maximum dimension among the relevant matrices $A_i,C_i,D_i$ and the maximum absolute value of any entry appearing in the constraint matrix. We show that the parameterized tractability results for two-stage stochastic and $n$-fold programs persist even when one allows large entries in the global part of the program. More precisely, we prove that: - The feasibility problem for two-stage stochastic programs is fixed-parameter tractable when parameterized by the dimensions of matrices $A_i,D_i$ and by the maximum absolute value of the entries of matrices $D_i$. That is, we allow matrices $A_i$ to have arbitrarily large entries. - The linear optimization problem for $n$-fold integer programs that are uniform -- all matrices $C_i$ are equal -- is fixed-parameter tractable when parameterized by the dimensions of matrices $C_i$ and $D_i$ and by the maximum absolute value of the entries of matrices $D_i$. That is, we require that $C_i=C$ for all $i=1,\ldots,n$, but we allow $C$ to have arbitrarily large entries. In the second result, the uniformity assumption is necessary; otherwise the problem is $\mathsf{NP}$-hard already when the parameters take constant values. Both our algorithms are weakly polynomial: the running time is measured in the total bitsize of the input.]]></description>
      <pubDate>Fri, 18 Jul 2025 09:31:50 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.15</link>
      <guid>https://doi.org/10.46298/theoretics.25.15</guid>
      <author>Cslovjecsek, Jana</author>
      <author>Koutecký, Martin</author>
      <author>Lassota, Alexandra</author>
      <author>Pilipczuk, Michał</author>
      <author>Polak, Adam</author>
      <dc:creator>Cslovjecsek, Jana</dc:creator>
      <dc:creator>Koutecký, Martin</dc:creator>
      <dc:creator>Lassota, Alexandra</dc:creator>
      <dc:creator>Pilipczuk, Michał</dc:creator>
      <dc:creator>Polak, Adam</dc:creator>
      <content:encoded><![CDATA[We study two classic variants of block-structured integer programming. Two-stage stochastic programs are integer programs of the form $\{A_i \mathbf{x} + D_i \mathbf{y}_i = \mathbf{b}_i\textrm{ for all }i=1,\ldots,n\}$, where $A_i$ and $D_i$ are bounded-size matrices. On the other hand, $n$-fold programs are integer programs of the form $\{{\sum_{i=1}^n C_i\mathbf{y}_i=\mathbf{a}} \textrm{ and } D_i\mathbf{y}_i=\mathbf{b}_i\textrm{ for all }i=1,\ldots,n\}$, where again $C_i$ and $D_i$ are bounded-size matrices. It is known that solving these kinds of programs is fixed-parameter tractable when parameterized by the maximum dimension among the relevant matrices $A_i,C_i,D_i$ and the maximum absolute value of any entry appearing in the constraint matrix. We show that the parameterized tractability results for two-stage stochastic and $n$-fold programs persist even when one allows large entries in the global part of the program. More precisely, we prove that: - The feasibility problem for two-stage stochastic programs is fixed-parameter tractable when parameterized by the dimensions of matrices $A_i,D_i$ and by the maximum absolute value of the entries of matrices $D_i$. That is, we allow matrices $A_i$ to have arbitrarily large entries. - The linear optimization problem for $n$-fold integer programs that are uniform -- all matrices $C_i$ are equal -- is fixed-parameter tractable when parameterized by the dimensions of matrices $C_i$ and $D_i$ and by the maximum absolute value of the entries of matrices $D_i$. That is, we require that $C_i=C$ for all $i=1,\ldots,n$, but we allow $C$ to have arbitrarily large entries. In the second result, the uniformity assumption is necessary; otherwise the problem is $\mathsf{NP}$-hard already when the parameters take constant values. Both our algorithms are weakly polynomial: the running time is measured in the total bitsize of the input.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
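    <!-- The two block structures defined in the item above, written out as
    constraint matrices (LaTeX). In a two-stage stochastic program the global
    variables x appear in every block row; in an n-fold program a global linking
    constraint sits on top of a block-diagonal system.

    \begin{pmatrix}
    A_1    & D_1 &     &        &     \\
    A_2    &     & D_2 &        &     \\
    \vdots &     &     & \ddots &     \\
    A_n    &     &     &        & D_n
    \end{pmatrix}
    \begin{pmatrix} \mathbf{x} \\ \mathbf{y}_1 \\ \vdots \\ \mathbf{y}_n \end{pmatrix}
    =
    \begin{pmatrix} \mathbf{b}_1 \\ \vdots \\ \mathbf{b}_n \end{pmatrix}
    \qquad\qquad
    \begin{pmatrix}
    C_1 & C_2 & \cdots & C_n \\
    D_1 &     &        &     \\
        & D_2 &        &     \\
        &     & \ddots &     \\
        &     &        & D_n
    \end{pmatrix}
    \begin{pmatrix} \mathbf{y}_1 \\ \vdots \\ \mathbf{y}_n \end{pmatrix}
    =
    \begin{pmatrix} \mathbf{a} \\ \mathbf{b}_1 \\ \vdots \\ \mathbf{b}_n \end{pmatrix}
    -->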
    <item>
      <title>Minimizing Tardy Processing Time on a Single Machine in Near-Linear Time</title>
      <description><![CDATA[In this work we revisit the elementary scheduling problem $1||\sum p_j U_j$. The goal is to select, among $n$ jobs with processing times and due dates, a subset of jobs with maximum total processing time that can be scheduled in sequence without violating their due dates. This problem is NP-hard, but a classical algorithm by Lawler and Moore from the 60s solves it in pseudo-polynomial time $O(nP)$, where $P$ is the total processing time of all jobs. With the aim of developing best-possible pseudo-polynomial-time algorithms, a recent wave of results has improved Lawler and Moore's algorithm for $1||\sum p_j U_j$: First to time $\tilde O(P^{7/4})$ [Bringmann, Fischer, Hermelin, Shabtay, Wellnitz; ICALP'20], then to time $\tilde O(P^{5/3})$ [Klein, Polak, Rohwedder; SODA'23], and finally to time $\tilde O(P^{7/5})$ [Schieber, Sitaraman; WADS'23]. It remained an exciting open question whether these bounds could be improved further. In this work we develop an algorithm in near-linear time $\tilde O(P)$ for the $1||\sum p_j U_j$ problem. This running time not only significantly improves upon the previous results, but also matches conditional lower bounds based on the Strong Exponential Time Hypothesis or the Set Cover Hypothesis and is therefore likely optimal (up to subpolynomial factors). Our new algorithm also extends to the case of $m$ machines in time $\tilde O(P^m)$. In contrast to the previous improvements, we take a different, more direct approach inspired by the recent reductions from Modular Subset Sum to dynamic string problems. We thereby arrive at a satisfyingly simple algorithm.]]></description>
      <pubDate>Thu, 17 Jul 2025 06:59:02 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.14</link>
      <guid>https://doi.org/10.46298/theoretics.25.14</guid>
      <author>Fischer, Nick</author>
      <author>Wennmann, Leo</author>
      <dc:creator>Fischer, Nick</dc:creator>
      <dc:creator>Wennmann, Leo</dc:creator>
      <content:encoded><![CDATA[In this work we revisit the elementary scheduling problem $1||\sum p_j U_j$. The goal is to select, among $n$ jobs with processing times and due dates, a subset of jobs with maximum total processing time that can be scheduled in sequence without violating their due dates. This problem is NP-hard, but a classical algorithm by Lawler and Moore from the 60s solves it in pseudo-polynomial time $O(nP)$, where $P$ is the total processing time of all jobs. With the aim of developing best-possible pseudo-polynomial-time algorithms, a recent wave of results has improved Lawler and Moore's algorithm for $1||\sum p_j U_j$: First to time $\tilde O(P^{7/4})$ [Bringmann, Fischer, Hermelin, Shabtay, Wellnitz; ICALP'20], then to time $\tilde O(P^{5/3})$ [Klein, Polak, Rohwedder; SODA'23], and finally to time $\tilde O(P^{7/5})$ [Schieber, Sitaraman; WADS'23]. It remained an exciting open question whether these bounds could be improved further. In this work we develop an algorithm in near-linear time $\tilde O(P)$ for the $1||\sum p_j U_j$ problem. This running time not only significantly improves upon the previous results, but also matches conditional lower bounds based on the Strong Exponential Time Hypothesis or the Set Cover Hypothesis and is therefore likely optimal (up to subpolynomial factors). Our new algorithm also extends to the case of $m$ machines in time $\tilde O(P^m)$. In contrast to the previous improvements, we take a different, more direct approach inspired by the recent reductions from Modular Subset Sum to dynamic string problems. We thereby arrive at a satisfyingly simple algorithm.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
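    <!-- The classical Lawler-Moore dynamic program cited in the item above, as a
    short reference implementation (the paper's near-linear-time algorithm is a
    different, more involved construction). It maximizes the total processing
    time of on-time jobs for 1 || sum p_j U_j in O(nP) time.

    def max_on_time_processing(jobs):
        # jobs: list of (p_j, d_j) pairs (processing time, due date).
        jobs = sorted(jobs, key=lambda job: job[1])     # earliest due date first
        P = sum(p for p, _ in jobs)
        feasible = [False] * (P + 1)   # feasible[t]: some on-time set has load t
        feasible[0] = True
        for p, d in jobs:
            # Adding job j makes the new total load t its completion time, so we
            # require t <= d_j; iterate t downward as in 0/1 knapsack.
            for t in range(min(d, P), p - 1, -1):
                if feasible[t - p]:
                    feasible[t] = True
        return max(t for t in range(P + 1) if feasible[t])

    # The optimum keeps the jobs with p=2 and p=4 on time, for total load 6.
    print(max_on_time_processing([(2, 2), (3, 5), (4, 6)]))   # 6
    -->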
    <item>
      <title>A $4/3$ Approximation for $2$-Vertex-Connectivity</title>
      <description><![CDATA[The 2-Vertex-Connected Spanning Subgraph problem (2VCSS) is among the most basic NP-hard (Survivable) Network Design problems: we are given an (unweighted) undirected graph $G$. Our goal is to find a spanning subgraph $S$ of $G$ with the minimum number of edges which is $2$-vertex-connected, namely $S$ remains connected after the deletion of an arbitrary node. 2VCSS is well-studied in terms of approximation algorithms, and the current best (polynomial-time) approximation factor is $10/7$ by Heeger and Vygen [SIDMA'17] (improving on earlier results by Khuller and Vishkin [STOC'92] and Garg, Vempala and Singla [SODA'93]). Here we present an improved $4/3$ approximation. Our main technical ingredient is an approximation-preserving reduction to a conveniently structured subset of instances which are "almost" 3-vertex-connected. The latter reduction might be helpful in future work.]]></description>
      <pubDate>Mon, 16 Jun 2025 15:04:46 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.13</link>
      <guid>https://doi.org/10.46298/theoretics.25.13</guid>
      <author>Bosch-Calvo, Miguel</author>
      <author>Grandoni, Fabrizio</author>
      <author>Ameli, Afrouz Jabal</author>
      <dc:creator>Bosch-Calvo, Miguel</dc:creator>
      <dc:creator>Grandoni, Fabrizio</dc:creator>
      <dc:creator>Ameli, Afrouz Jabal</dc:creator>
      <content:encoded><![CDATA[The 2-Vertex-Connected Spanning Subgraph problem (2VCSS) is among the most basic NP-hard (Survivable) Network Design problems: we are given an (unweighted) undirected graph $G$. Our goal is to find a spanning subgraph $S$ of $G$ with the minimum number of edges which is $2$-vertex-connected, namely $S$ remains connected after the deletion of an arbitrary node. 2VCSS is well-studied in terms of approximation algorithms, and the current best (polynomial-time) approximation factor is $10/7$ by Heeger and Vygen [SIDMA'17] (improving on earlier results by Khuller and Vishkin [STOC'92] and Garg, Vempala and Singla [SODA'93]). Here we present an improved $4/3$ approximation. Our main technical ingredient is an approximation-preserving reduction to a conveniently structured subset of instances which are "almost" 3-vertex-connected. The latter reduction might be helpful in future work.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
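    <!-- The feasibility notion from the item above, checked naively: a subgraph
    is 2-vertex-connected if it stays connected after deleting any single vertex.
    This only validates candidate solutions; it is unrelated to how the
    4/3-approximation itself works.

    def is_connected(vertices, adj, removed):
        vs = [v for v in vertices if v != removed]
        seen, stack = {vs[0]}, [vs[0]]
        while stack:
            for u in adj[stack.pop()]:
                if u != removed and u not in seen:
                    seen.add(u)
                    stack.append(u)
        return len(seen) == len(vs)

    def is_two_vertex_connected(vertices, edges):
        adj = {v: set() for v in vertices}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        return len(list(vertices)) >= 3 and all(
            is_connected(vertices, adj, removed=v) for v in vertices)

    print(is_two_vertex_connected(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True: a 4-cycle
    print(is_two_vertex_connected(range(4), [(0, 1), (1, 2), (2, 3)]))          # False: a path
    -->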
    <item>
      <title>PANDA: Query Evaluation in Submodular Width</title>
      <description><![CDATA[In recent years, several information-theoretic upper bounds have been introduced on the output size and evaluation cost of database join queries. These bounds vary in their power depending on both the type of statistics on input relations and the query plans that they support. This motivated the search for algorithms that can compute the output of a join query in times that are bounded by the corresponding information-theoretic bounds. In this paper, we describe PANDA, an algorithm that takes a Shannon inequality that underlies the bound, and translates each proof step into an algorithmic step corresponding to some database operation. PANDA computes answers to a conjunctive query in time given by the submodular width plus the output size of the query. The version in this paper represents a significant simplification of the original version [ANS, PODS'17].]]></description>
      <pubDate>Wed, 30 Apr 2025 10:04:41 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.12</link>
      <guid>https://doi.org/10.46298/theoretics.25.12</guid>
      <author>Khamis, Mahmoud Abo</author>
      <author>Ngo, Hung Q.</author>
      <author>Suciu, Dan</author>
      <dc:creator>Khamis, Mahmoud Abo</dc:creator>
      <dc:creator>Ngo, Hung Q.</dc:creator>
      <dc:creator>Suciu, Dan</dc:creator>
      <content:encoded><![CDATA[In recent years, several information-theoretic upper bounds have been introduced on the output size and evaluation cost of database join queries. These bounds vary in their power depending on both the type of statistics on input relations and the query plans that they support. This motivated the search for algorithms that can compute the output of a join query in times that are bounded by the corresponding information-theoretic bounds. In this paper, we describe PANDA, an algorithm that takes a Shannon inequality that underlies the bound, and translates each proof step into an algorithmic step corresponding to some database operation. PANDA computes answers to a conjunctive query in time given by the submodular width plus the output size of the query. The version in this paper represents a significant simplification of the original version [ANS, PODS'17].]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
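    <!-- A worked instance of the bound-to-algorithm translation described in the
    item above, in the simplest (Shannon / AGM) setting rather than the full
    submodular-width machinery of the paper. For the triangle query
    Q(x,y,z) = R(x,y), S(y,z), T(z,x), combining two Shannon facts,

    % h(XZ) + h(Y)  >= h(XYZ)           (subadditivity)
    % h(XY) + h(YZ) >= h(XYZ) + h(Y)    (submodularity)
    % summing and cancelling h(Y) gives:
    2\,h(XYZ) \le h(XY) + h(YZ) + h(XZ)
    \quad\Longrightarrow\quad
    |Q| \le \sqrt{|R|\cdot|S|\cdot|T|},

    and PANDA turns each proof step of such an inequality into a database
    operation of matching cost.
    -->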
    <item>
      <title>Ortho-Radial Drawing in Near-Linear Time</title>
      <description><![CDATA[An orthogonal drawing is an embedding of a plane graph into a grid. In a seminal work of Tamassia (SIAM Journal on Computing 1987), a simple combinatorial characterization of angle assignments that can be realized as bend-free orthogonal drawings was established, thereby allowing an orthogonal drawing to be described combinatorially by listing the angles of all corners. The characterization reduces the need to consider certain geometric aspects, such as edge lengths and vertex coordinates, and simplifies the task of graph drawing algorithm design. Barth, Niedermann, Rutter, and Wolf (SoCG 2017) established an analogous combinatorial characterization for ortho-radial drawings, which are a generalization of orthogonal drawings to cylindrical grids. The proof of the characterization is existential and does not result in an efficient algorithm. Niedermann, Rutter, and Wolf (SoCG 2019) later addressed this issue by developing quadratic-time algorithms for both testing the realizability of a given angle assignment as an ortho-radial drawing without bends and constructing such a drawing. In this paper, we further improve the time complexity of these tasks to near-linear time. We establish a new characterization for ortho-radial drawings based on the concept of a good sequence. Using the new characterization, we design a simple greedy algorithm for constructing ortho-radial drawings.]]></description>
      <pubDate>Tue, 08 Apr 2025 15:18:14 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.11</link>
      <guid>https://doi.org/10.46298/theoretics.25.11</guid>
      <author>Chang, Yi-Jun</author>
      <dc:creator>Chang, Yi-Jun</dc:creator>
      <content:encoded><![CDATA[An orthogonal drawing is an embedding of a plane graph into a grid. In a seminal work of Tamassia (SIAM Journal on Computing 1987), a simple combinatorial characterization of angle assignments that can be realized as bend-free orthogonal drawings was established, thereby allowing an orthogonal drawing to be described combinatorially by listing the angles of all corners. The characterization reduces the need to consider certain geometric aspects, such as edge lengths and vertex coordinates, and simplifies the task of graph drawing algorithm design. Barth, Niedermann, Rutter, and Wolf (SoCG 2017) established an analogous combinatorial characterization for ortho-radial drawings, which are a generalization of orthogonal drawings to cylindrical grids. The proof of the characterization is existential and does not result in an efficient algorithm. Niedermann, Rutter, and Wolf (SoCG 2019) later addressed this issue by developing quadratic-time algorithms for both testing the realizability of a given angle assignment as an ortho-radial drawing without bends and constructing such a drawing. In this paper, we further improve the time complexity of these tasks to near-linear time. We establish a new characterization for ortho-radial drawings based on the concept of a good sequence. Using the new characterization, we design a simple greedy algorithm for constructing ortho-radial drawings.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Learning Algorithms for Verification of Markov Decision Processes</title>
      <description><![CDATA[We present a general framework for applying learning algorithms and heuristic guidance to the verification of Markov decision processes (MDPs). The primary goal of our techniques is to improve performance by avoiding an exhaustive exploration of the state space, instead focussing on particularly relevant areas of the system, guided by heuristics. Our work builds on the previous results of Brázdil et al., significantly extending it as well as refining several details and fixing errors. The presented framework focuses on probabilistic reachability, which is a core problem in verification, and is instantiated in two distinct scenarios. The first assumes that full knowledge of the MDP is available, in particular precise transition probabilities. It performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP without knowing the exact transition dynamics. Here, we obtain probabilistic guarantees, again in terms of both the lower and upper bounds, which provide efficient stopping criteria for the approximation. In particular, the latter is an extension of statistical model-checking (SMC) for unbounded properties in MDPs. In contrast to other related approaches, we do not restrict our attention to time-bounded (finite-horizon) or discounted properties, nor assume any particular structural properties of the MDP.]]></description>
      <pubDate>Tue, 01 Apr 2025 15:22:37 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.10</link>
      <guid>https://doi.org/10.46298/theoretics.25.10</guid>
      <author>Brázdil, Tomáš</author>
      <author>Chatterjee, Krishnendu</author>
      <author>Chmelik, Martin</author>
      <author>Forejt, Vojtěch</author>
      <author>Křetínský, Jan</author>
      <author>Kwiatkowska, Marta</author>
      <author>Meggendorfer, Tobias</author>
      <author>Parker, David</author>
      <author>Ujma, Mateusz</author>
      <dc:creator>Brázdil, Tomáš</dc:creator>
      <dc:creator>Chatterjee, Krishnendu</dc:creator>
      <dc:creator>Chmelik, Martin</dc:creator>
      <dc:creator>Forejt, Vojtěch</dc:creator>
      <dc:creator>Křetínský, Jan</dc:creator>
      <dc:creator>Kwiatkowska, Marta</dc:creator>
      <dc:creator>Meggendorfer, Tobias</dc:creator>
      <dc:creator>Parker, David</dc:creator>
      <dc:creator>Ujma, Mateusz</dc:creator>
      <content:encoded><![CDATA[We present a general framework for applying learning algorithms and heuristic guidance to the verification of Markov decision processes (MDPs). The primary goal of our techniques is to improve performance by avoiding an exhaustive exploration of the state space, instead focussing on particularly relevant areas of the system, guided by heuristics. Our work builds on the previous results of Brázdil et al., significantly extending it as well as refining several details and fixing errors. The presented framework focuses on probabilistic reachability, which is a core problem in verification, and is instantiated in two distinct scenarios. The first assumes that full knowledge of the MDP is available, in particular precise transition probabilities. It performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP without knowing the exact transition dynamics. Here, we obtain probabilistic guarantees, again in terms of both the lower and upper bounds, which provide efficient stopping criteria for the approximation. In particular, the latter is an extension of statistical model-checking (SMC) for unbounded properties in MDPs. In contrast to other related approaches, we do not restrict our attention to time-bounded (finite-horizon) or discounted properties, nor assume any particular structural properties of the MDP.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
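    <!-- A minimal version of the first (full-knowledge) scenario in the item
    above: value iteration on both a lower bound L and an upper bound U for
    maximum reachability, stopping once the bounds meet. Caveat: on general MDPs
    the upper bound converges only after end components are collapsed, which the
    paper's framework handles; the toy MDP below has none. Names are illustrative.

    def bounded_value_iteration(mdp, target, eps=1e-6):
        # mdp[s] = list of actions; an action is a list of (prob, successor).
        L = {s: 1.0 if s == target else 0.0 for s in mdp}
        U = {s: 0.0 if (s != target and not mdp[s]) else 1.0 for s in mdp}
        while max(U[s] - L[s] for s in mdp) > eps:
            for V in (L, U):
                new = {s: V[s] if (s == target or not mdp[s]) else
                          max(sum(p * V[t] for p, t in a) for a in mdp[s])
                       for s in mdp}
                V.update(new)
        return L, U

    # s0 has one action: reach the target with prob 0.5, else hit a losing sink.
    mdp = {"s0": [[(0.5, "goal"), (0.5, "sink")]], "goal": [], "sink": []}
    L, U = bounded_value_iteration(mdp, "goal")
    print(L["s0"], U["s0"])   # both 0.5
    -->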
    <item>
      <title>Exponential Resolution Lower Bounds for Weak Pigeonhole Principle and Perfect Matching Formulas over Sparse Graphs</title>
      <description><![CDATA[We show exponential lower bounds on resolution proof length for pigeonhole principle (PHP) formulas and perfect matching formulas over highly unbalanced, sparse expander graphs, thus answering the challenge to establish strong lower bounds in the regime between balanced constant-degree expanders as in [Ben-Sasson and Wigderson '01] and highly unbalanced, dense graphs as in [Raz '04] and [Razborov '03, '04]. We obtain our results by revisiting Razborov's pseudo-width method for PHP formulas over dense graphs and extending it to sparse graphs. This further demonstrates the power of the pseudo-width method, and we believe it could potentially be useful for attacking other longstanding open problems for resolution and other proof systems.]]></description>
      <pubDate>Tue, 25 Mar 2025 09:56:26 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.9</link>
      <guid>https://doi.org/10.46298/theoretics.25.9</guid>
      <author>de Rezende, Susanna F.</author>
      <author>Nordström, Jakob</author>
      <author>Risse, Kilian</author>
      <author>Sokolov, Dmitry</author>
      <dc:creator>de Rezende, Susanna F.</dc:creator>
      <dc:creator>Nordström, Jakob</dc:creator>
      <dc:creator>Risse, Kilian</dc:creator>
      <dc:creator>Sokolov, Dmitry</dc:creator>
      <content:encoded><![CDATA[We show exponential lower bounds on resolution proof length for pigeonhole principle (PHP) formulas and perfect matching formulas over highly unbalanced, sparse expander graphs, thus answering the challenge to establish strong lower bounds in the regime between balanced constant-degree expanders as in [Ben-Sasson and Wigderson '01] and highly unbalanced, dense graphs as in [Raz '04] and [Razborov '03, '04]. We obtain our results by revisiting Razborov's pseudo-width method for PHP formulas over dense graphs and extending it to sparse graphs. This further demonstrates the power of the pseudo-width method, and we believe it could potentially be useful for attacking other longstanding open problems for resolution and other proof systems.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Regular Languages in the Sliding Window Model</title>
      <description><![CDATA[We study the space complexity of the following problem: For a fixed regular language $L$, we receive a stream of symbols and want to test membership of a sliding window of size $n$ in $L$. For deterministic streaming algorithms we prove a trichotomy theorem, namely that the (optimal) space complexity is either constant, logarithmic or linear, measured in the window size $n$. Additionally, we provide natural language-theoretic characterizations of the space classes. We then extend the results to randomized streaming algorithms and we show that in this setting, the space complexity of any regular language is either constant, doubly logarithmic, logarithmic or linear. Finally, we introduce sliding window testers, which can distinguish whether a sliding window of size $n$ belongs to the language $L$ or has Hamming distance $> \epsilon n$ to $L$. We prove that every regular language has a deterministic (resp., randomized) sliding window tester that requires only logarithmic (resp., constant) space.]]></description>
      <pubDate>Thu, 06 Mar 2025 08:47:58 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.8</link>
      <guid>https://doi.org/10.46298/theoretics.25.8</guid>
      <author>Ganardi, Moses</author>
      <author>Hucke, Danny</author>
      <author>Lohrey, Markus</author>
      <author>Mamouras, Konstantinos</author>
      <author>Starikovskaya, Tatiana</author>
      <dc:creator>Ganardi, Moses</dc:creator>
      <dc:creator>Hucke, Danny</dc:creator>
      <dc:creator>Lohrey, Markus</dc:creator>
      <dc:creator>Mamouras, Konstantinos</dc:creator>
      <dc:creator>Starikovskaya, Tatiana</dc:creator>
      <content:encoded><![CDATA[We study the space complexity of the following problem: For a fixed regular language $L$, we receive a stream of symbols and want to test membership of a sliding window of size $n$ in $L$. For deterministic streaming algorithms we prove a trichotomy theorem, namely that the (optimal) space complexity is either constant, logarithmic or linear, measured in the window size $n$. Additionally, we provide natural language-theoretic characterizations of the space classes. We then extend the results to randomized streaming algorithms and we show that in this setting, the space complexity of any regular language is either constant, doubly logarithmic, logarithmic or linear. Finally, we introduce sliding window testers, which can distinguish whether a sliding window of size $n$ belongs to the language $L$ or has Hamming distance $> \epsilon n$ to $L$. We prove that every regular language has a deterministic (resp., randomized) sliding window tester that requires only logarithmic (resp., constant) space.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
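    <!-- A concrete instance of the constant-space regime in the trichotomy of
    the item above: for the regular language L of words ending in 'a', membership
    of the current size-n window depends only on the most recent symbol, so O(1)
    space suffices for every window size; other regular languages genuinely need
    logarithmic or linear space. Illustrative sketch only.

    def window_in_L(stream):
        # L = { w : w ends with 'a' }; the answer ignores all but the last symbol.
        last = None
        for symbol in stream:
            last = symbol
            yield last == "a"    # is the current window in L?

    print(list(window_in_L("abba")))   # [True, False, False, True]
    -->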
    <item>
      <title>Multidimensional Quantum Walks, with Application to $k$-Distinctness</title>
      <description><![CDATA[While the quantum query complexity of $k$-distinctness is known to be $O\left(n^{3/4-1/(4(2^k-1))}\right)$ for any constant $k \geq 4$, the best previous upper bound on the time complexity was $\widetilde{O}\left(n^{1-1/k}\right)$. We give a new upper bound of $\widetilde{O}\left(n^{3/4-1/(4(2^k-1))}\right)$ on the time complexity, matching the query complexity up to polylogarithmic factors. In order to achieve this upper bound, we give a new technique for designing quantum walk search algorithms, which is an extension of the electric network framework. We also show how to solve the welded trees problem in $O(n)$ queries and $O(n^2)$ time using this new technique, showing that the new quantum walk framework can achieve exponential speedups.]]></description>
      <pubDate>Tue, 04 Mar 2025 10:48:58 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.7</link>
      <guid>https://doi.org/10.46298/theoretics.25.7</guid>
      <author>Jeffery, Stacey</author>
      <author>Zur, Sebastian</author>
      <dc:creator>Jeffery, Stacey</dc:creator>
      <dc:creator>Zur, Sebastian</dc:creator>
      <content:encoded><![CDATA[While the quantum query complexity of $k$-distinctness is known to be $O\left(n^{3/4-1/(4(2^k-1))}\right)$ for any constant $k \geq 4$, the best previous upper bound on the time complexity was $\widetilde{O}\left(n^{1-1/k}\right)$. We give a new upper bound of $\widetilde{O}\left(n^{3/4-1/(4(2^k-1))}\right)$ on the time complexity, matching the query complexity up to polylogarithmic factors. In order to achieve this upper bound, we give a new technique for designing quantum walk search algorithms, which is an extension of the electric network framework. We also show how to solve the welded trees problem in $O(n)$ queries and $O(n^2)$ time using this new technique, showing that the new quantum walk framework can achieve exponential speedups.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
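    <!-- A worked comparison of the exponents in the item above (reading the
    exponent as 3/4 - 1/(4(2^k-1)), cf. the notation fix in the abstract):

    k = 4:  3/4 - 1/60  = 11/15 \approx 0.733   vs.  1 - 1/4 = 0.750
    k = 5:  3/4 - 1/124 = 23/31 \approx 0.742   vs.  1 - 1/5 = 0.800

    so for every constant k >= 4 the new time bound strictly improves on the
    previous \widetilde{O}(n^{1-1/k}) and matches the known query complexity.
    -->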
    <item>
      <title>A $(3+\varepsilon)$-Approximate Correlation Clustering Algorithm in Dynamic Streams</title>
      <description><![CDATA[Grouping together similar elements in datasets is a common task in data mining and machine learning. In this paper, we study streaming algorithms for correlation clustering, where each pair of elements is labeled either similar or dissimilar. The task is to partition the elements and the objective is to minimize disagreements, that is, the number of dissimilar elements grouped together and similar elements that get separated. Our main contribution is a semi-streaming algorithm that achieves a $(3 + \varepsilon)$-approximation to the minimum number of disagreements using a single pass over the stream. In addition, the algorithm also works for dynamic streams. Our approach builds on the analysis of the PIVOT algorithm by Ailon, Charikar, and Newman [JACM'08] that obtains a $3$-approximation in the centralized setting. Our design allows us to sparsify the input graph by ignoring a large portion of the nodes and edges without a large extra cost as compared to the analysis of PIVOT. This sparsification makes our technique applicable in models such as semi-streaming, where sparse graphs can typically be handled much more efficiently. Our work improves on the approximation ratio of the recent single-pass $5$-approximation algorithm and on the number of passes of the recent $O(1/\varepsilon)$-pass $(3 + \varepsilon)$-approximation algorithm [Behnezhad, Charikar, Ma, Tan FOCS'22, SODA'23]. Our algorithm is also more robust and can be applied in dynamic streams. Furthermore, it is the first single-pass $(3 + \varepsilon)$-approximation algorithm that uses polynomial post-processing time.]]></description>
      <pubDate>Fri, 28 Feb 2025 10:41:29 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.6</link>
      <guid>https://doi.org/10.46298/theoretics.25.6</guid>
      <author>Cambus, Mélanie</author>
      <author>Kuhn, Fabian</author>
      <author>Lindy, Etna</author>
      <author>Pai, Shreyas</author>
      <author>Uitto, Jara</author>
      <dc:creator>Cambus, Mélanie</dc:creator>
      <dc:creator>Kuhn, Fabian</dc:creator>
      <dc:creator>Lindy, Etna</dc:creator>
      <dc:creator>Pai, Shreyas</dc:creator>
      <dc:creator>Uitto, Jara</dc:creator>
      <content:encoded><![CDATA[Grouping together similar elements in datasets is a common task in data mining and machine learning. In this paper, we study streaming algorithms for correlation clustering, where each pair of elements is labeled either similar or dissimilar. The task is to partition the elements and the objective is to minimize disagreements, that is, the number of dissimilar elements grouped together and similar elements that get separated. Our main contribution is a semi-streaming algorithm that achieves a $(3 + \varepsilon)$-approximation to the minimum number of disagreements using a single pass over the stream. In addition, the algorithm also works for dynamic streams. Our approach builds on the analysis of the PIVOT algorithm by Ailon, Charikar, and Newman [JACM'08] that obtains a $3$-approximation in the centralized setting. Our design allows us to sparsify the input graph by ignoring a large portion of the nodes and edges without a large extra cost as compared to the analysis of PIVOT. This sparsification makes our technique applicable in models such as semi-streaming, where sparse graphs can typically be handled much more efficiently. Our work improves on the approximation ratio of the recent single-pass $5$-approximation algorithm and on the number of passes of the recent $O(1/\varepsilon)$-pass $(3 + \varepsilon)$-approximation algorithm [Behnezhad, Charikar, Ma, Tan FOCS'22, SODA'23]. Our algorithm is also more robust and can be applied in dynamic streams. Furthermore, it is the first single-pass $(3 + \varepsilon)$-approximation algorithm that uses polynomial post-processing time.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
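    <!-- The centralized PIVOT algorithm of Ailon, Charikar, and Newman that the
    analysis in the item above builds on: pick a uniformly random unclustered
    node as pivot and cluster it with its unclustered similar neighbours. This is
    the 3-approximation baseline, not the paper's semi-streaming algorithm.

    import random

    def pivot(nodes, similar):
        # similar[v]: set of nodes labelled similar to v (symmetric).
        order = list(nodes)
        random.shuffle(order)              # random pivot order
        clustered, clusters = set(), []
        for v in order:
            if v in clustered:
                continue
            cluster = {v} | {u for u in similar[v] if u not in clustered}
            clustered |= cluster
            clusters.append(cluster)
        return clusters

    # Two similar pairs, all cross pairs dissimilar.
    sim = {1: {2}, 2: {1}, 3: {4}, 4: {3}}
    print(pivot(sim.keys(), sim))          # [{1, 2}, {3, 4}] in some order
    -->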
    <item>
      <title>Computing Threshold Budgets in Discrete-Bidding Games</title>
      <description><![CDATA[In a two-player zero-sum graph game, the players move a token throughout a graph to produce an infinite play, which determines the winner of the game. Bidding games are graph games in which in each turn, an auction (bidding) determines which player moves the token: the players have budgets, and in each turn, both players simultaneously submit bids that do not exceed their available budgets, the higher bidder moves the token, and pays the bid to the lower bidder (called Richman bidding). We focus on discrete-bidding games, in which, motivated by practical applications, the granularity of the players' bids is restricted, e.g., bids must be given in cents. A central quantity in bidding games is threshold budgets: a necessary and sufficient initial budget for winning the game. Previously, thresholds were shown to exist in parity games, but their structure was only understood for reachability games. Moreover, the previously-known algorithms have a worst-case exponential running time for both reachability and parity objectives, and output strategies that use exponential memory. We describe two algorithms for finding threshold budgets in parity discrete-bidding games. The first is a fixed-point algorithm. It reveals, for the first time, the structure of threshold budgets in parity discrete-bidding games. Based on this structure, we develop a second algorithm that shows that the problem of finding threshold budgets is in NP and coNP for both reachability and parity objectives. Moreover, our algorithm constructs strategies that use only linear memory. This is a corrected version of the paper (arXiv:2210.02773v4) published originally on Jan 22, 2025.]]></description>
      <pubDate>Wed, 22 Jan 2025 10:36:03 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.5</link>
      <guid>https://doi.org/10.46298/theoretics.25.5</guid>
      <author>Avni, Guy</author>
      <author>Sadhukhan, Suman</author>
      <dc:creator>Avni, Guy</dc:creator>
      <dc:creator>Sadhukhan, Suman</dc:creator>
      <content:encoded><![CDATA[In a two-player zero-sum graph game, the players move a token throughout a graph to produce an infinite play, which determines the winner of the game. Bidding games are graph games in which in each turn, an auction (bidding) determines which player moves the token: the players have budgets, and in each turn, both players simultaneously submit bids that do not exceed their available budgets, the higher bidder moves the token, and pays the bid to the lower bidder (called Richman bidding). We focus on discrete-bidding games, in which, motivated by practical applications, the granularity of the players' bids is restricted, e.g., bids must be given in cents. A central quantity in bidding games is threshold budgets: a necessary and sufficient initial budget for winning the game. Previously, thresholds were shown to exist in parity games, but their structure was only understood for reachability games. Moreover, the previously-known algorithms have a worst-case exponential running time for both reachability and parity objectives, and output strategies that use exponential memory. We describe two algorithms for finding threshold budgets in parity discrete-bidding games. The first is a fixed-point algorithm. It reveals, for the first time, the structure of threshold budgets in parity discrete-bidding games. Based on this structure, we develop a second algorithm that shows that the problem of finding threshold budgets is in NP and coNP for both reachability and parity objectives. Moreover, our algorithm constructs strategies that use only linear memory. This is a corrected version of the paper (arXiv:2210.02773v4) published originally on Jan 22, 2025.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On Approximability of Steiner Tree in $\ell_p$-metrics</title>
      <description><![CDATA[In the Continuous Steiner Tree problem (CST), we are given as input a set of points (called terminals) in a metric space and ask for the minimum-cost tree connecting them. Additional points (called Steiner points) from the metric space can be introduced as nodes in the solution. In the Discrete Steiner Tree problem (DST), we are given in addition to the terminals, a set of facilities, and any solution tree connecting the terminals can only contain the Steiner points from this set of facilities. Trevisan [SICOMP'00] showed that CST and DST are APX-hard when the input lies in the $\ell_1$-metric (and Hamming metric). Chlebík and Chlebíková [TCS'08] showed that DST is NP-hard to approximate to a factor of $96/95 \approx 1.01$ in the graph metric (and consequently $\ell_\infty$-metric). Prior to this work, it was unclear if CST and DST are APX-hard in essentially every other popular metric. In this work, we prove that DST is APX-hard in every $\ell_p$-metric. We also prove that CST is APX-hard in the $\ell_{\infty}$-metric. Finally, we relate CST and DST, showing a general reduction from CST to DST in $\ell_p$-metrics.]]></description>
      <pubDate>Mon, 20 Jan 2025 09:54:14 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.4</link>
      <guid>https://doi.org/10.46298/theoretics.25.4</guid>
      <author>Fleischmann, Henry</author>
      <author>Gavva, Surya Teja</author>
      <author>S, Karthik C.</author>
      <dc:creator>Fleischmann, Henry</dc:creator>
      <dc:creator>Gavva, Surya Teja</dc:creator>
      <dc:creator>S, Karthik C.</dc:creator>
      <content:encoded><![CDATA[In the Continuous Steiner Tree problem (CST), we are given as input a set of points (called terminals) in a metric space and ask for the minimum-cost tree connecting them. Additional points (called Steiner points) from the metric space can be introduced as nodes in the solution. In the Discrete Steiner Tree problem (DST), we are given in addition to the terminals, a set of facilities, and any solution tree connecting the terminals can only contain the Steiner points from this set of facilities. Trevisan [SICOMP'00] showed that CST and DST are APX-hard when the input lies in the $\ell_1$-metric (and Hamming metric). Chlebík and Chlebíková [TCS'08] showed that DST is NP-hard to approximate to a factor of $96/95 \approx 1.01$ in the graph metric (and consequently $\ell_\infty$-metric). Prior to this work, it was unclear if CST and DST are APX-hard in essentially every other popular metric. In this work, we prove that DST is APX-hard in every $\ell_p$-metric. We also prove that CST is APX-hard in the $\ell_{\infty}$-metric. Finally, we relate CST and DST, showing a general reduction from CST to DST in $\ell_p$-metrics.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Approximate Counting for Spin Systems in Sub-Quadratic Time</title>
      <description><![CDATA[We present two randomised approximate counting algorithms with $\widetilde{O}(n^{2-c}/\varepsilon^2)$ running time for some constant $c>0$ and accuracy $\varepsilon$: (1) for the hard-core model with fugacity $\lambda$ on graphs with maximum degree $\Delta$ when $\lambda=O(\Delta^{-1.5-c_1})$ where $c_1=c/(2-2c)$; (2) for spin systems with strong spatial mixing (SSM) on planar graphs with quadratic growth, such as $\mathbb{Z}^2$. For the hard-core model, Weitz's algorithm (STOC, 2006) achieves sub-quadratic running time when correlation decays faster than the neighbourhood growth, namely when $\lambda = o(\Delta^{-2})$. Our first algorithm does not require this property and extends the range where sub-quadratic algorithms exist. Our second algorithm appears to be the first to achieve sub-quadratic running time up to the SSM threshold, albeit on a restricted family of graphs. It also extends to (not necessarily planar) graphs with polynomial growth, such as $\mathbb{Z}^d$, but with a running time of the form $\widetilde{O}\left(n^2\varepsilon^{-2}/2^{c(\log n)^{1/d}}\right)$ where $d$ is the exponent of the polynomial growth and $c>0$ is some constant.]]></description>
      <pubDate>Mon, 13 Jan 2025 11:35:49 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.3</link>
      <guid>https://doi.org/10.46298/theoretics.25.3</guid>
      <author>Anand, Konrad</author>
      <author>Feng, Weiming</author>
      <author>Freifeld, Graham</author>
      <author>Guo, Heng</author>
      <author>Wang, Jiaheng</author>
      <dc:creator>Anand, Konrad</dc:creator>
      <dc:creator>Feng, Weiming</dc:creator>
      <dc:creator>Freifeld, Graham</dc:creator>
      <dc:creator>Guo, Heng</dc:creator>
      <dc:creator>Wang, Jiaheng</dc:creator>
      <content:encoded><![CDATA[We present two randomised approximate counting algorithms with $\widetilde{O}(n^{2-c}/\varepsilon^2)$ running time for some constant $c>0$ and accuracy $\varepsilon$: (1) for the hard-core model with fugacity $\lambda$ on graphs with maximum degree $\Delta$ when $\lambda=O(\Delta^{-1.5-c_1})$ where $c_1=c/(2-2c)$; (2) for spin systems with strong spatial mixing (SSM) on planar graphs with quadratic growth, such as $\mathbb{Z}^2$. For the hard-core model, Weitz's algorithm (STOC, 2006) achieves sub-quadratic running time when correlation decays faster than the neighbourhood growth, namely when $\lambda = o(\Delta^{-2})$. Our first algorithm does not require this property and extends the range where sub-quadratic algorithms exist. Our second algorithm appears to be the first to achieve sub-quadratic running time up to the SSM threshold, albeit on a restricted family of graphs. It also extends to (not necessarily planar) graphs with polynomial growth, such as $\mathbb{Z}^d$, but with a running time of the form $\widetilde{O}\left(n^2\varepsilon^{-2}/2^{c(\log n)^{1/d}}\right)$ where $d$ is the exponent of the polynomial growth and $c>0$ is some constant.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
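    <!-- The hard-core model from the item above, pinned down by brute force: an
    independent set I gets weight lambda^{|I|} and the partition function Z sums
    these weights; approximate counting means estimating Z. Exponential-time
    reference only, nothing like the paper's sub-quadratic algorithms.

    from itertools import combinations

    def hardcore_Z(vertices, edges, lam):
        edge_set = {frozenset(e) for e in edges}
        Z = 0.0
        for k in range(len(vertices) + 1):
            for S in combinations(vertices, k):
                if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
                    Z += lam ** k          # S is an independent set
        return Z

    # One edge {0,1}: independent sets are {}, {0}, {1}, so Z = 1 + 2*lam.
    print(hardcore_Z([0, 1], [(0, 1)], lam=0.5))   # 2.0
    -->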
    <item>
      <title>An Alternate Proof of Near-Optimal Light Spanners</title>
      <description><![CDATA[In 2016, a breakthrough result of Chechik and Wulff-Nilsen [SODA '16] established that every $n$-node graph $G$ has a $(1+\varepsilon)(2k-1)$-spanner of lightness $O_{\varepsilon}(n^{1/k})$, and recent followup work by Le and Solomon [STOC '23] generalized the proof strategy and improved the dependence on $\varepsilon$. We give a new proof of this result, with the improved $\varepsilon$-dependence. Our proof is a direct analysis of the often-studied greedy spanner, and can be viewed as an extension of the folklore Moore bounds used to analyze spanner sparsity.]]></description>
      <pubDate>Fri, 10 Jan 2025 10:36:42 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.2</link>
      <guid>https://doi.org/10.46298/theoretics.25.2</guid>
      <author>Bodwin, Greg</author>
      <dc:creator>Bodwin, Greg</dc:creator>
      <content:encoded><![CDATA[In 2016, a breakthrough result of Chechik and Wulff-Nilsen [SODA '16] established that every $n$-node graph $G$ has a $(1+\varepsilon)(2k-1)$-spanner of lightness $O_{\varepsilon}(n^{1/k})$, and recent followup work by Le and Solomon [STOC '23] generalized the proof strategy and improved the dependence on $\varepsilon$. We give a new proof of this result, with the improved $\varepsilon$-dependence. Our proof is a direct analysis of the often-studied greedy spanner, and can be viewed as an extension of the folklore Moore bounds used to analyze spanner sparsity.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
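    <!-- The often-studied greedy spanner that the item above analyzes: scan
    edges by increasing weight and keep an edge only if the spanner built so far
    has no path between its endpoints of length at most t * w(e), with stretch
    t = (1 + eps)(2k - 1). The construction below is the standard folklore
    algorithm; the paper's contribution is the lightness analysis, not the code.

    import heapq

    def dijkstra(adj, source, target, cutoff):
        # Shortest source-target distance, abandoning paths longer than cutoff.
        dist, heap = {source: 0.0}, [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == target:
                return d
            if d > dist.get(u, float("inf")) or d > cutoff:
                continue
            for v, w in adj.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return float("inf")

    def greedy_spanner(edges, k, eps):
        t = (1 + eps) * (2 * k - 1)
        adj, spanner = {}, []
        for u, v, w in sorted(edges, key=lambda e: e[2]):
            if dijkstra(adj, u, v, cutoff=t * w) > t * w:
                spanner.append((u, v, w))
                adj.setdefault(u, []).append((v, w))
                adj.setdefault(v, []).append((u, w))
        return spanner

    # Triangle: the heaviest edge is redundant since 1.0 + 1.0 <= t * 1.9.
    print(greedy_spanner([(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.9)], k=2, eps=0.1))
    -->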
    <item>
      <title>Finite-valued Streaming String Transducers</title>
      <description><![CDATA[A transducer is finite-valued if for some bound k, it maps any given input to at most k outputs. For classical, one-way transducers, it has been known since the 80s that finite valuedness entails decidability of the equivalence problem. This decidability result is in contrast to the general case, where equivalence is undecidable, which makes finite-valued transducers very attractive. For classical transducers, it is also known that finite valuedness is decidable and that any k-valued finite transducer can be decomposed as a union of k single-valued finite transducers. In this paper, we extend the above results to copyless streaming string transducers (SSTs), answering questions raised by Alur and Deshmukh in 2011. SSTs strictly extend the expressiveness of one-way transducers via additional variables that store partial outputs. We prove that any k-valued SST can be effectively decomposed as a union of k (single-valued) deterministic SSTs. As a corollary, we obtain equivalence of SSTs and two-way transducers in the finite-valued case (those two models are incomparable in general). Another corollary is an elementary upper bound for checking equivalence of finite-valued SSTs. The latter problem was already known to be decidable, but the proof complexity was unknown (it relied on Ehrenfeucht's conjecture). Finally, our main result is that finite valuedness of SSTs is decidable. The complexity is PSpace, and even PTime when the number of variables is fixed.]]></description>
      <pubDate>Wed, 08 Jan 2025 15:48:53 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.25.1</link>
      <guid>https://doi.org/10.46298/theoretics.25.1</guid>
      <author>Filiot, Emmanuel</author>
      <author>Jecker, Ismaël</author>
      <author>Löding, Christof</author>
      <author>Muscholl, Anca</author>
      <author>Puppis, Gabriele</author>
      <author>Winter, Sarah</author>
      <dc:creator>Filiot, Emmanuel</dc:creator>
      <dc:creator>Jecker, Ismaël</dc:creator>
      <dc:creator>Löding, Christof</dc:creator>
      <dc:creator>Muscholl, Anca</dc:creator>
      <dc:creator>Puppis, Gabriele</dc:creator>
      <dc:creator>Winter, Sarah</dc:creator>
      <content:encoded><![CDATA[A transducer is finite-valued if for some bound k, it maps any given input to at most k outputs. For classical, one-way transducers, it has been known since the 80s that finite valuedness entails decidability of the equivalence problem. This decidability result is in contrast to the general case, where equivalence is undecidable, which makes finite-valued transducers very attractive. For classical transducers, it is also known that finite valuedness is decidable and that any k-valued finite transducer can be decomposed as a union of k single-valued finite transducers. In this paper, we extend the above results to copyless streaming string transducers (SSTs), answering questions raised by Alur and Deshmukh in 2011. SSTs strictly extend the expressiveness of one-way transducers via additional variables that store partial outputs. We prove that any k-valued SST can be effectively decomposed as a union of k (single-valued) deterministic SSTs. As a corollary, we obtain equivalence of SSTs and two-way transducers in the finite-valued case (those two models are incomparable in general). Another corollary is an elementary upper bound for checking equivalence of finite-valued SSTs. The latter problem was already known to be decidable, but the proof complexity was unknown (it relied on Ehrenfeucht's conjecture). Finally, our main result is that finite valuedness of SSTs is decidable. The complexity is PSpace, and even PTime when the number of variables is fixed.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
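    <!-- To make the SST model in the item above concrete, a minimal copyless
    streaming string transducer: one register X, update X := a . X per input
    letter a, output X at the end; it computes string reversal. Deterministic,
    hence 1-valued; the paper studies when a nondeterministic SST is k-valued.

    def reverse_sst(word):
        X = ""                 # the single register
        for a in word:
            X = a + X          # copyless update: X is used once on the right
        return X               # output function

    assert reverse_sst("abc") == "cba"
    -->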
    <item>
      <title>An Enumerative Perspective on Connectivity</title>
      <description><![CDATA[Connectivity (or equivalently, unweighted maximum flow) is an important measure in graph theory and combinatorial optimization. Given a graph $G$ with vertices $s$ and $t$, the connectivity $\lambda(s,t)$ from $s$ to $t$ is defined to be the maximum number of edge-disjoint paths from $s$ to $t$ in $G$. Much research has gone into designing fast algorithms for computing connectivities in graphs. Previous work showed that it is possible to compute connectivities for all pairs of vertices in directed graphs with $m$ edges in $\tilde{O}(m^\omega)$ time [Cheung, Lau, and Leung, FOCS 2011], where $\omega \in [2,2.3716)$ is the exponent of matrix multiplication. For the related problem of computing "small connectivities," it was recently shown that for any positive integer $k$, we can compute $\min(k,\lambda(s,t))$ for all pairs of vertices $(s,t)$ in a directed graph with $n$ nodes in $\tilde{O}((kn)^\omega)$ time [Akmal and Jin, ICALP 2023]. In this paper, we present an alternate exposition of these $\tilde{O}(m^\omega)$ and $\tilde{O}((kn)^\omega)$ time algorithms, with simpler proofs of correctness. Earlier proofs were somewhat indirect, introducing an elegant but ad hoc "flow vector framework" for showing correctness of these algorithms. In contrast, we observe that these algorithms for computing exact and small connectivity values can be interpreted as testing whether certain generating functions enumerating families of edge-disjoint paths are nonzero. This new perspective yields more transparent proofs, and ties the approach for these problems more closely to the literature surrounding algebraic graph algorithms.]]></description>
      <pubDate>Thu, 19 Dec 2024 08:18:38 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.26</link>
      <guid>https://doi.org/10.46298/theoretics.24.26</guid>
      <author>Akmal, Shyan</author>
      <dc:creator>Akmal, Shyan</dc:creator>
      <content:encoded><![CDATA[Connectivity (or equivalently, unweighted maximum flow) is an important measure in graph theory and combinatorial optimization. Given a graph $G$ with vertices $s$ and $t$, the connectivity $\lambda(s,t)$ from $s$ to $t$ is defined to be the maximum number of edge-disjoint paths from $s$ to $t$ in $G$. Much research has gone into designing fast algorithms for computing connectivities in graphs. Previous work showed that it is possible to compute connectivities for all pairs of vertices in directed graphs with $m$ edges in $\tilde{O}(m^\omega)$ time [Cheung, Lau, and Leung, FOCS 2011], where $\omega \in [2,2.3716)$ is the exponent of matrix multiplication. For the related problem of computing "small connectivities," it was recently shown that for any positive integer $k$, we can compute $\min(k,\lambda(s,t))$ for all pairs of vertices $(s,t)$ in a directed graph with $n$ nodes in $\tilde{O}((kn)^\omega)$ time [Akmal and Jin, ICALP 2023]. In this paper, we present an alternate exposition of these $\tilde{O}(m^\omega)$ and $\tilde{O}((kn)^\omega)$ time algorithms, with simpler proofs of correctness. Earlier proofs were somewhat indirect, introducing an elegant but ad hoc "flow vector framework" for showing correctness of these algorithms. In contrast, we observe that these algorithms for computing exact and small connectivity values can be interpreted as testing whether certain generating functions enumerating families of edge-disjoint paths are nonzero. This new perspective yields more transparent proofs, and ties the approach for these problems more closely to the literature surrounding algebraic graph algorithms.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The Descriptive Complexity of Graph Neural Networks</title>
      <description><![CDATA[We analyse the power of graph neural networks (GNNs) in terms of Boolean circuit complexity and descriptive complexity. We prove that the graph queries that can be computed by a polynomial-size bounded-depth family of GNNs are exactly those definable in the guarded fragment GFO+C of first-order logic with counting and with built-in relations. This puts GNNs in the circuit complexity class (non-uniform) $\text{TC}^0$. Remarkably, the GNN families may use arbitrary real weights and a wide class of activation functions that includes the standard ReLU, logistic "sigmoid", and hyperbolic tangent functions. If the GNNs are allowed to use random initialisation and global readout (both standard features of GNNs widely used in practice), they can compute exactly the same queries as bounded-depth Boolean circuits with threshold gates, that is, exactly the queries in $\text{TC}^0$. Moreover, we show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations. Therefore, they are contained in uniform $\text{TC}^0$.]]></description>
      <pubDate>Wed, 04 Dec 2024 08:45:26 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.25</link>
      <guid>https://doi.org/10.46298/theoretics.24.25</guid>
      <author>Grohe, Martin</author>
      <dc:creator>Grohe, Martin</dc:creator>
      <content:encoded><![CDATA[We analyse the power of graph neural networks (GNNs) in terms of Boolean circuit complexity and descriptive complexity. We prove that the graph queries that can be computed by a polynomial-size bounded-depth family of GNNs are exactly those definable in the guarded fragment GFO+C of first-order logic with counting and with built-in relations. This puts GNNs in the circuit complexity class (non-uniform) $\text{TC}^0$. Remarkably, the GNN families may use arbitrary real weights and a wide class of activation functions that includes the standard ReLU, logistic "sigmoid", and hyperbolic tangent functions. If the GNNs are allowed to use random initialisation and global readout (both standard features of GNNs widely used in practice), they can compute exactly the same queries as bounded-depth Boolean circuits with threshold gates, that is, exactly the queries in $\text{TC}^0$. Moreover, we show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations. Therefore, they are contained in uniform $\text{TC}^0$.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On complete classes of valuated matroids</title>
      <description><![CDATA[We characterize a rich class of valuated matroids, called R-minor valuated matroids, that includes the indicator functions of matroids and is closed under operations such as taking minors, duality, and induction by network. Based on sparse paving matroids, we exhibit a family of valuated matroids that are not R-minor. Valuated matroids are inherently related to gross substitute valuations in mathematical economics. By the same token we refute the Matroid Based Valuation Conjecture by Ostrovsky and Paes Leme (Theoretical Economics 2015) asserting that every gross substitute valuation arises from weighted matroid rank functions by repeated applications of merge and endowment operations. Our result also has implications in the context of Lorentzian polynomials: it reveals the limitations of known construction operations.]]></description>
      <pubDate>Mon, 18 Nov 2024 13:34:54 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.24</link>
      <guid>https://doi.org/10.46298/theoretics.24.24</guid>
      <author>Husić, Edin</author>
      <author>Loho, Georg</author>
      <author>Smith, Ben</author>
      <author>Végh, László A.</author>
      <dc:creator>Husić, Edin</dc:creator>
      <dc:creator>Loho, Georg</dc:creator>
      <dc:creator>Smith, Ben</dc:creator>
      <dc:creator>Végh, László A.</dc:creator>
      <content:encoded><![CDATA[We characterize a rich class of valuated matroids, called R-minor valuated matroids, that includes the indicator functions of matroids and is closed under operations such as taking minors, duality, and induction by network. Based on sparse paving matroids, we exhibit a family of valuated matroids that are not R-minor. Valuated matroids are inherently related to gross substitute valuations in mathematical economics. By the same token we refute the Matroid Based Valuation Conjecture by Ostrovsky and Paes Leme (Theoretical Economics 2015) asserting that every gross substitute valuation arises from weighted matroid rank functions by repeated applications of merge and endowment operations. Our result also has implications in the context of Lorentzian polynomials: it reveals the limitations of known construction operations.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimal Algorithm for the Planar Two-Center Problem</title>
      <description><![CDATA[We study a fundamental problem in Computational Geometry, the planar two-center problem. In this problem, the input is a set $S$ of $n$ points in the plane and the goal is to find two smallest congruent disks whose union contains all points of $S$. A longstanding open problem has been to obtain an $O(n\log n)$-time algorithm for planar two-center, matching the $\Omega(n\log n)$ lower bound given by Eppstein [SODA'97]. Towards this goal, researchers have made significant efforts over the decades. The previous best algorithm, given by Wang [SoCG'20], solves the problem in $O(n\log^2 n)$ time. In this paper, we present an $O(n\log n)$-time (deterministic) algorithm for planar two-center, which completely resolves this open problem.]]></description>
      <pubDate>Tue, 05 Nov 2024 09:04:51 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.23</link>
      <guid>https://doi.org/10.46298/theoretics.24.23</guid>
      <author>Cho, Kyungjin</author>
      <author>Oh, Eunjin</author>
      <author>Wang, Haitao</author>
      <author>Xue, Jie</author>
      <dc:creator>Cho, Kyungjin</dc:creator>
      <dc:creator>Oh, Eunjin</dc:creator>
      <dc:creator>Wang, Haitao</dc:creator>
      <dc:creator>Xue, Jie</dc:creator>
      <content:encoded><![CDATA[We study a fundamental problem in Computational Geometry, the planar two-center problem. In this problem, the input is a set $S$ of $n$ points in the plane and the goal is to find two smallest congruent disks whose union contains all points of $S$. A longstanding open problem has been to obtain an $O(n\log n)$-time algorithm for planar two-center, matching the $\Omega(n\log n)$ lower bound given by Eppstein [SODA'97]. Towards this goal, researchers have made significant efforts over the decades. The previous best algorithm, given by Wang [SoCG'20], solves the problem in $O(n\log^2 n)$ time. In this paper, we present an $O(n\log n)$-time (deterministic) algorithm for planar two-center, which completely resolves this open problem.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds</title>
      <description><![CDATA[We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets. We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an $s$-$t$-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications: - It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis. - It gives a tight $(n+nm^{1/3})^{1-o(1)}$ lower bound for the Word Break problem on strings of length $n$ and dictionaries of total size $m$. - It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems. Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem and the growing list of dynamic problems proven hard under the OMv hypothesis.]]></description>
      <pubDate>Fri, 04 Oct 2024 08:29:14 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.22</link>
      <guid>https://doi.org/10.46298/theoretics.24.22</guid>
      <author>Bringmann, Karl</author>
      <author>Grønlund, Allan</author>
      <author>Künnemann, Marvin</author>
      <author>Larsen, Kasper Green</author>
      <dc:creator>Bringmann, Karl</dc:creator>
      <dc:creator>Grønlund, Allan</dc:creator>
      <dc:creator>Künnemann, Marvin</dc:creator>
      <dc:creator>Larsen, Kasper Green</dc:creator>
      <content:encoded><![CDATA[We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets. We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an $s$-$t$-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications: - It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis. - It gives a tight $(n+nm^{1/3})^{1-o(1)}$ lower bound for the Word Break problem on strings of length $n$ and dictionaries of total size $m$. - It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems. Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem and the growing list of dynamic problems proven hard under the OMv hypothesis.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Refined Laser Method and Faster Matrix Multiplication</title>
      <description><![CDATA[The complexity of matrix multiplication is measured in terms of $\omega$, the smallest real number such that two $n\times n$ matrices can be multiplied using $O(n^{\omega+\epsilon})$ field operations for all $\epsilon>0$; the best bound until now is $\omega<2.37287$ [Le Gall'14]. All bounds on $\omega$ since 1986 have been obtained using the so-called laser method, a way to lower-bound the `value' of a tensor in designing matrix multiplication algorithms. The main result of this paper is a refinement of the laser method that improves the resulting value bound for most sufficiently large tensors. Thus, even before computing any specific values, it is clear that we achieve an improved bound on $\omega$, and we indeed obtain the best bound on $\omega$ to date: $$\omega < 2.37286.$$ The improvement is of the same magnitude as the improvement that [Le Gall'14] obtained over the previous bound [Vassilevska W.'12]. Our improvement to the laser method is quite general, and we believe it will have further applications in arithmetic complexity.]]></description>
      <pubDate>Wed, 04 Sep 2024 21:07:40 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.21</link>
      <guid>https://doi.org/10.46298/theoretics.24.21</guid>
      <author>Alman, Josh</author>
      <author>Williams, Virginia Vassilevska</author>
      <dc:creator>Alman, Josh</dc:creator>
      <dc:creator>Williams, Virginia Vassilevska</dc:creator>
      <content:encoded><![CDATA[The complexity of matrix multiplication is measured in terms of $\omega$, the smallest real number such that two $n\times n$ matrices can be multiplied using $O(n^{\omega+\epsilon})$ field operations for all $\epsilon>0$; the best bound until now is $\omega<2.37287$ [Le Gall'14]. All bounds on $\omega$ since 1986 have been obtained using the so-called laser method, a way to lower-bound the `value' of a tensor in designing matrix multiplication algorithms. The main result of this paper is a refinement of the laser method that improves the resulting value bound for most sufficiently large tensors. Thus, even before computing any specific values, it is clear that we achieve an improved bound on $\omega$, and we indeed obtain the best bound on $\omega$ to date: $$\omega < 2.37286.$$ The improvement is of the same magnitude as the improvement that [Le Gall'14] obtained over the previous bound [Vassilevska W.'12]. Our improvement to the laser method is quite general, and we believe it will have further applications in arithmetic complexity.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Lasserre Hierarchy for Graph Isomorphism and Homomorphism Indistinguishability</title>
      <description><![CDATA[We show that feasibility of the $t^\text{th}$ level of the Lasserre semidefinite programming hierarchy for graph isomorphism can be expressed as a homomorphism indistinguishability relation. In other words, we define a class $\mathcal{L}_t$ of graphs such that graphs $G$ and $H$ are not distinguished by the $t^\text{th}$ level of the Lasserre hierarchy if and only if they admit the same number of homomorphisms from any graph in $\mathcal{L}_t$. By analysing the treewidth of graphs in $\mathcal{L}_t$, we prove that the $3t^\text{th}$ level of the Sherali--Adams linear programming hierarchy is as strong as the $t^\text{th}$ level of Lasserre. Moreover, we show that this is best possible in the sense that $3t$ cannot be lowered to $3t-1$ for any $t$. The same result holds for the Lasserre hierarchy with non-negativity constraints, which we similarly characterise in terms of homomorphism indistinguishability over a family $\mathcal{L}_t^+$ of graphs. Additionally, we give characterisations of level-$t$ Lasserre with non-negativity constraints in terms of logical equivalence and via a graph colouring algorithm akin to the Weisfeiler--Leman algorithm. This provides a polynomial time algorithm for determining if two given graphs are distinguished by the $t^\text{th}$ level of the Lasserre hierarchy with non-negativity constraints.]]></description>
      <pubDate>Mon, 02 Sep 2024 14:56:46 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.20</link>
      <guid>https://doi.org/10.46298/theoretics.24.20</guid>
      <author>Roberson, David E.</author>
      <author>Seppelt, Tim</author>
      <dc:creator>Roberson, David E.</dc:creator>
      <dc:creator>Seppelt, Tim</dc:creator>
      <content:encoded><![CDATA[We show that feasibility of the $t^\text{th}$ level of the Lasserre semidefinite programming hierarchy for graph isomorphism can be expressed as a homomorphism indistinguishability relation. In other words, we define a class $\mathcal{L}_t$ of graphs such that graphs $G$ and $H$ are not distinguished by the $t^\text{th}$ level of the Lasserre hierarchy if and only if they admit the same number of homomorphisms from any graph in $\mathcal{L}_t$. By analysing the treewidth of graphs in $\mathcal{L}_t$, we prove that the $3t^\text{th}$ level of the Sherali--Adams linear programming hierarchy is as strong as the $t^\text{th}$ level of Lasserre. Moreover, we show that this is best possible in the sense that $3t$ cannot be lowered to $3t-1$ for any $t$. The same result holds for the Lasserre hierarchy with non-negativity constraints, which we similarly characterise in terms of homomorphism indistinguishability over a family $\mathcal{L}_t^+$ of graphs. Additionally, we give characterisations of level-$t$ Lasserre with non-negativity constraints in terms of logical equivalence and via a graph colouring algorithm akin to the Weisfeiler--Leman algorithm. This provides a polynomial time algorithm for determining if two given graphs are distinguished by the $t^\text{th}$ level of the Lasserre hierarchy with non-negativity constraints.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Faster parameterized algorithms for modification problems to minor-closed classes</title>
      <description><![CDATA[Let ${\cal G}$ be a minor-closed graph class and let $G$ be an $n$-vertex graph. We say that $G$ is a $k$-apex of ${\cal G}$ if $G$ contains a set $S$ of at most $k$ vertices such that $G\setminus S$ belongs to ${\cal G}$. Our first result is an algorithm that decides whether $G$ is a $k$-apex of ${\cal G}$ in time $2^{{\sf poly}(k)}\cdot n^2$, where ${\sf poly}$ is a polynomial function depending on ${\cal G}$. This algorithm improves the previous one, given by Sau, Stamoulis, and Thilikos [ICALP 2020], whose running time was $2^{{\sf poly}(k)}\cdot n^3$. The elimination distance of $G$ to ${\cal G}$, denoted by ${\sf ed}_{\cal G}(G)$, is the minimum number of rounds required to reduce each connected component of $G$ to a graph in ${\cal G}$ by removing one vertex from each connected component in each round. Bulian and Dawar [Algorithmica 2017] provided an FPT-algorithm, with parameter $k$, to decide whether ${\sf ed}_{\cal G}(G)\leq k$. However, its dependence on $k$ is not explicit. We extend the techniques used in the first algorithm to decide whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{2^{2^{{\sf poly}(k)}}}\cdot n^2$. This is the first algorithm for this problem with an explicit parametric dependence in $k$. In the special case where ${\cal G}$ excludes some apex-graph as a minor, we give two alternative algorithms, running in time $2^{2^{{\cal O}(k^2\log k)}}\cdot n^2$ and $2^{{\sf poly}(k)}\cdot n^3$ respectively, where the constants hidden in the ${\cal O}$-notation and ${\sf poly}$ depend on ${\cal G}$. As a stepping stone for these algorithms, we provide an algorithm that decides whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{{\cal O}({\sf tw}\cdot k+{\sf tw}\log{\sf tw})}\cdot n$, where ${\sf tw}$ is the treewidth of $G$. Finally, we provide explicit upper bounds on the size of the graphs in the minor-obstruction set of the class of graphs ${\cal E}_k({\cal G})=\{G\mid{\sf ed}_{\cal G}(G)\leq k\}$.]]></description>
      <pubDate>Mon, 12 Aug 2024 06:49:06 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.19</link>
      <guid>https://doi.org/10.46298/theoretics.24.19</guid>
      <author>Morelle, Laure</author>
      <author>Sau, Ignasi</author>
      <author>Stamoulis, Giannos</author>
      <author>Thilikos, Dimitrios M.</author>
      <dc:creator>Morelle, Laure</dc:creator>
      <dc:creator>Sau, Ignasi</dc:creator>
      <dc:creator>Stamoulis, Giannos</dc:creator>
      <dc:creator>Thilikos, Dimitrios M.</dc:creator>
      <content:encoded><![CDATA[Let ${\cal G}$ be a minor-closed graph class and let $G$ be an $n$-vertex graph. We say that $G$ is a $k$-apex of ${\cal G}$ if $G$ contains a set $S$ of at most $k$ vertices such that $G\setminus S$ belongs to ${\cal G}$. Our first result is an algorithm that decides whether $G$ is a $k$-apex of ${\cal G}$ in time $2^{{\sf poly}(k)}\cdot n^2$, where ${\sf poly}$ is a polynomial function depending on ${\cal G}$. This algorithm improves the previous one, given by Sau, Stamoulis, and Thilikos [ICALP 2020], whose running time was $2^{{\sf poly}(k)}\cdot n^3$. The elimination distance of $G$ to ${\cal G}$, denoted by ${\sf ed}_{\cal G}(G)$, is the minimum number of rounds required to reduce each connected component of $G$ to a graph in ${\cal G}$ by removing one vertex from each connected component in each round. Bulian and Dawar [Algorithmica 2017] provided an FPT-algorithm, with parameter $k$, to decide whether ${\sf ed}_{\cal G}(G)\leq k$. However, its dependence on $k$ is not explicit. We extend the techniques used in the first algorithm to decide whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{2^{2^{{\sf poly}(k)}}}\cdot n^2$. This is the first algorithm for this problem with an explicit parametric dependence in $k$. In the special case where ${\cal G}$ excludes some apex-graph as a minor, we give two alternative algorithms, running in time $2^{2^{{\cal O}(k^2\log k)}}\cdot n^2$ and $2^{{\sf poly}(k)}\cdot n^3$ respectively, where the constants hidden in the ${\cal O}$-notation and ${\sf poly}$ depend on ${\cal G}$. As a stepping stone for these algorithms, we provide an algorithm that decides whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{{\cal O}({\sf tw}\cdot k+{\sf tw}\log{\sf tw})}\cdot n$, where ${\sf tw}$ is the treewidth of $G$. Finally, we provide explicit upper bounds on the size of the graphs in the minor-obstruction set of the class of graphs ${\cal E}_k({\cal G})=\{G\mid{\sf ed}_{\cal G}(G)\leq k\}$.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Sparse juntas on the biased hypercube</title>
      <description><![CDATA[We give a structure theorem for Boolean functions on the $p$-biased hypercube which are $\epsilon$-close to degree $d$ in $L_2$, showing that they are close to sparse juntas. Our structure theorem implies that such functions are $O(\epsilon^{C_d} + p)$-close to constant functions. We pinpoint the exact value of the constant $C_d$. We also give an analogous result for monotone Boolean functions on the biased hypercube which are $\epsilon$-close to degree $d$ in $L_2$, showing that they are close to sparse DNFs. Our structure theorems are optimal in the following sense: for every $d,\epsilon,p$, we identify a class $\mathcal{F}_{d,\epsilon,p}$ of degree $d$ sparse juntas which are $O(\epsilon)$-close to Boolean (in the monotone case, width $d$ sparse DNFs) such that a Boolean function on the $p$-biased hypercube is $O(\epsilon)$-close to degree $d$ in $L_2$ iff it is $O(\epsilon)$-close to a function in $\mathcal{F}_{d,\epsilon,p}$.]]></description>
      <pubDate>Tue, 30 Jul 2024 07:27:49 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.18</link>
      <guid>https://doi.org/10.46298/theoretics.24.18</guid>
      <author>Dinur, Irit</author>
      <author>Filmus, Yuval</author>
      <author>Harsha, Prahladh</author>
      <dc:creator>Dinur, Irit</dc:creator>
      <dc:creator>Filmus, Yuval</dc:creator>
      <dc:creator>Harsha, Prahladh</dc:creator>
      <content:encoded><![CDATA[We give a structure theorem for Boolean functions on the $p$-biased hypercube which are $\epsilon$-close to degree $d$ in $L_2$, showing that they are close to sparse juntas. Our structure theorem implies that such functions are $O(\epsilon^{C_d} + p)$-close to constant functions. We pinpoint the exact value of the constant $C_d$. We also give an analogous result for monotone Boolean functions on the biased hypercube which are $\epsilon$-close to degree $d$ in $L_2$, showing that they are close to sparse DNFs. Our structure theorems are optimal in the following sense: for every $d,\epsilon,p$, we identify a class $\mathcal{F}_{d,\epsilon,p}$ of degree $d$ sparse juntas which are $O(\epsilon)$-close to Boolean (in the monotone case, width $d$ sparse DNFs) such that a Boolean function on the $p$-biased hypercube is $O(\epsilon)$-close to degree $d$ in $L_2$ iff it is $O(\epsilon)$-close to a function in $\mathcal{F}_{d,\epsilon,p}$.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Constructing Deterministic Parity Automata from Positive and Negative Examples</title>
      <description><![CDATA[We present a polynomial time algorithm that constructs a deterministic parity automaton (DPA) from a given set of positive and negative ultimately periodic example words. We show that this algorithm is complete for the class of $\omega$-regular languages, that is, it can learn a DPA for each regular $\omega$-language. For use in the algorithm, we define a DPA that we call the precise DPA of a language, and show that it can be constructed from the syntactic family of right congruences for that language (introduced by Maler and Staiger in 1997). Depending on the structure of the language, the precise DPA can be of exponential size compared to a minimal DPA, but it can also be a minimal DPA. The upper bound that we obtain on the number of examples required for our algorithm to find a DPA for $L$ is therefore exponential in the size of a minimal DPA, in general. However, we identify two parameters of regular $\omega$-languages such that fixing these parameters makes the bound polynomial.]]></description>
      <pubDate>Tue, 30 Jul 2024 07:25:44 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.17</link>
      <guid>https://doi.org/10.46298/theoretics.24.17</guid>
      <author>Bohn, León</author>
      <author>Löding, Christof</author>
      <dc:creator>Bohn, León</dc:creator>
      <dc:creator>Löding, Christof</dc:creator>
      <content:encoded><![CDATA[We present a polynomial time algorithm that constructs a deterministic parity automaton (DPA) from a given set of positive and negative ultimately periodic example words. We show that this algorithm is complete for the class of $\omega$-regular languages, that is, it can learn a DPA for each regular $\omega$-language. For use in the algorithm, we define a DPA that we call the precise DPA of a language, and show that it can be constructed from the syntactic family of right congruences for that language (introduced by Maler and Staiger in 1997). Depending on the structure of the language, the precise DPA can be of exponential size compared to a minimal DPA, but it can also be a minimal DPA. The upper bound that we obtain on the number of examples required for our algorithm to find a DPA for $L$ is therefore exponential in the size of a minimal DPA, in general. However, we identify two parameters of regular $\omega$-languages such that fixing these parameters makes the bound polynomial.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Spectral Independence via Stability and Applications to Holant-Type Problems</title>
      <description><![CDATA[This paper formalizes connections between stability of polynomials and convergence rates of Markov Chain Monte Carlo (MCMC) algorithms. We prove that if a (multivariate) partition function is nonzero in a region around a real point $\lambda$ then spectral independence holds at $\lambda$. As a consequence, for Holant-type problems (e.g., spin systems) on bounded-degree graphs, we obtain optimal $O(n\log n)$ mixing time bounds for the single-site update Markov chain known as the Glauber dynamics. Our result significantly improves the running time guarantees obtained via the polynomial interpolation method of Barvinok (2017), refined by Patel and Regts (2017). There are a variety of applications of our results. In this paper, we focus on Holant-type (i.e., edge-coloring) problems, including weighted edge covers and weighted even subgraphs. For the weighted edge cover problem (and several natural generalizations) we obtain an $O(n\log{n})$ sampling algorithm on bounded-degree graphs. The even subgraphs problem corresponds to the high-temperature expansion of the ferromagnetic Ising model. We obtain an $O(n\log{n})$ sampling algorithm for the ferromagnetic Ising model with a nonzero external field on bounded-degree graphs, which improves upon the classical result of Jerrum and Sinclair (1993) for this class of graphs. We obtain further applications to antiferromagnetic two-spin models on line graphs, weighted graph homomorphisms, tensor networks, and more.]]></description>
      <pubDate>Fri, 12 Jul 2024 04:24:58 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.16</link>
      <guid>https://doi.org/10.46298/theoretics.24.16</guid>
      <author>Chen, Zongchen</author>
      <author>Liu, Kuikui</author>
      <author>Vigoda, Eric</author>
      <dc:creator>Chen, Zongchen</dc:creator>
      <dc:creator>Liu, Kuikui</dc:creator>
      <dc:creator>Vigoda, Eric</dc:creator>
      <content:encoded><![CDATA[This paper formalizes connections between stability of polynomials and convergence rates of Markov Chain Monte Carlo (MCMC) algorithms. We prove that if a (multivariate) partition function is nonzero in a region around a real point $\lambda$ then spectral independence holds at $\lambda$. As a consequence, for Holant-type problems (e.g., spin systems) on bounded-degree graphs, we obtain optimal $O(n\log n)$ mixing time bounds for the single-site update Markov chain known as the Glauber dynamics. Our result significantly improves the running time guarantees obtained via the polynomial interpolation method of Barvinok (2017), refined by Patel and Regts (2017). There are a variety of applications of our results. In this paper, we focus on Holant-type (i.e., edge-coloring) problems, including weighted edge covers and weighted even subgraphs. For the weighted edge cover problem (and several natural generalizations) we obtain an $O(n\log{n})$ sampling algorithm on bounded-degree graphs. The even subgraphs problem corresponds to the high-temperature expansion of the ferromagnetic Ising model. We obtain an $O(n\log{n})$ sampling algorithm for the ferromagnetic Ising model with a nonzero external field on bounded-degree graphs, which improves upon the classical result of Jerrum and Sinclair (1993) for this class of graphs. We obtain further applications to antiferromagnetic two-spin models on line graphs, weighted graph homomorphisms, tensor networks, and more.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Approximate Distance Sensitivity Oracles in Subquadratic Space</title>
      <description><![CDATA[An $f$-edge fault-tolerant distance sensitivity oracle ($f$-DSO) with stretch $\sigma \ge 1$ is a data structure that preprocesses a given undirected, unweighted graph $G$ with $n$ vertices and $m$ edges, and a positive integer $f$. When queried with a pair of vertices $s, t$ and a set $F$ of at most $f$ edges, it returns a $\sigma$-approximation of the $s$-$t$-distance in $G-F$. We study $f$-DSOs that take subquadratic space. Thorup and Zwick [JACM 2005] showed that this is only possible for $\sigma \ge 3$. We present, for any constant $f \ge 1$ and $\alpha \in (0, \frac{1}{2})$, and any $\varepsilon > 0$, a randomized $f$-DSO with stretch $3 + \varepsilon$ that w.h.p. takes $\widetilde{O}(n^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+2}$ space and has an $O(n^\alpha/\varepsilon^2)$ query time. The time to build the oracle is $\widetilde{O}(mn^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+1}$. We also give an improved construction for graphs with diameter at most $D$. For any positive integer $k$, we devise an $f$-DSO with stretch $2k-1$ that w.h.p. takes $O(D^{f+o(1)} n^{1+1/k})$ space and has $\widetilde{O}(D^{o(1)})$ query time, with a preprocessing time of $O(D^{f+o(1)} mn^{1/k})$. Chechik, Cohen, Fiat, and Kaplan [SODA 2017] devised an $f$-DSO with stretch $1{+}\varepsilon$ and preprocessing time $O(n^{5+o(1)}/\varepsilon^f)$, albeit with a super-quadratic space requirement. We show how to reduce their preprocessing time to $O(mn^{2+o(1)}/\varepsilon^f)$.]]></description>
      <pubDate>Wed, 05 Jun 2024 13:19:52 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.15</link>
      <guid>https://doi.org/10.46298/theoretics.24.15</guid>
      <author>Bilò, Davide</author>
      <author>Chechik, Shiri</author>
      <author>Choudhary, Keerti</author>
      <author>Cohen, Sarel</author>
      <author>Friedrich, Tobias</author>
      <author>Krogmann, Simon</author>
      <author>Schirneck, Martin</author>
      <dc:creator>Bilò, Davide</dc:creator>
      <dc:creator>Chechik, Shiri</dc:creator>
      <dc:creator>Choudhary, Keerti</dc:creator>
      <dc:creator>Cohen, Sarel</dc:creator>
      <dc:creator>Friedrich, Tobias</dc:creator>
      <dc:creator>Krogmann, Simon</dc:creator>
      <dc:creator>Schirneck, Martin</dc:creator>
      <content:encoded><![CDATA[An $f$-edge fault-tolerant distance sensitivity oracle ($f$-DSO) with stretch $\sigma \ge 1$ is a data structure that preprocesses a given undirected, unweighted graph $G$ with $n$ vertices and $m$ edges, and a positive integer $f$. When queried with a pair of vertices $s, t$ and a set $F$ of at most $f$ edges, it returns a $\sigma$-approximation of the $s$-$t$-distance in $G-F$. We study $f$-DSOs that take subquadratic space. Thorup and Zwick [JACM 2005] showed that this is only possible for $\sigma \ge 3$. We present, for any constant $f \ge 1$ and $\alpha \in (0, \frac{1}{2})$, and any $\varepsilon > 0$, a randomized $f$-DSO with stretch $3 + \varepsilon$ that w.h.p. takes $\widetilde{O}(n^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+2}$ space and has an $O(n^\alpha/\varepsilon^2)$ query time. The time to build the oracle is $\widetilde{O}(mn^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+1}$. We also give an improved construction for graphs with diameter at most $D$. For any positive integer $k$, we devise an $f$-DSO with stretch $2k-1$ that w.h.p. takes $O(D^{f+o(1)} n^{1+1/k})$ space and has $\widetilde{O}(D^{o(1)})$ query time, with a preprocessing time of $O(D^{f+o(1)} mn^{1/k})$. Chechik, Cohen, Fiat, and Kaplan [SODA 2017] devised an $f$-DSO with stretch $1{+}\varepsilon$ and preprocessing time $O(n^{5+o(1)}/\varepsilon^f)$, albeit with a super-quadratic space requirement. We show how to reduce their preprocessing time to $O(mn^{2+o(1)}/\varepsilon^f)$.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Unifying the Three Algebraic Approaches to the CSP via Minimal Taylor Algebras</title>
      <description><![CDATA[This paper focuses on the algebraic theory underlying the study of the complexity and the algorithms for the Constraint Satisfaction Problem (CSP). We unify, simplify, and extend parts of the three approaches that have been developed to study the CSP over finite templates -- absorption theory that was used to characterize CSPs solvable by local consistency methods (JACM'14), and Bulatov's and Zhuk's theories that were used for two independent proofs of the CSP Dichotomy Theorem (FOCS'17, JACM'20). As the first contribution we present an elementary theorem about primitive positive definability and use it to obtain the starting points of Bulatov's and Zhuk's proofs as corollaries. As the second contribution we propose and initiate a systematic study of minimal Taylor algebras. This class of algebras is broad enough that it suffices to verify the CSP Dichotomy Theorem on this class only, but still is unusually well behaved. In particular, many concepts from the three approaches coincide in this class, which is in striking contrast with the general setting. We believe that the theory initiated in this paper will eventually result in a simple and more natural proof of the Dichotomy Theorem that employs a simpler and more efficient algorithm, and will help in attacking complexity questions in other CSP-related problems.]]></description>
      <pubDate>Wed, 15 May 2024 08:36:31 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.14</link>
      <guid>https://doi.org/10.46298/theoretics.24.14</guid>
      <author>Barto, Libor</author>
      <author>Brady, Zarathustra</author>
      <author>Bulatov, Andrei</author>
      <author>Kozik, Marcin</author>
      <author>Zhuk, Dmitriy</author>
      <dc:creator>Barto, Libor</dc:creator>
      <dc:creator>Brady, Zarathustra</dc:creator>
      <dc:creator>Bulatov, Andrei</dc:creator>
      <dc:creator>Kozik, Marcin</dc:creator>
      <dc:creator>Zhuk, Dmitriy</dc:creator>
      <content:encoded><![CDATA[This paper focuses on the algebraic theory underlying the study of the complexity and the algorithms for the Constraint Satisfaction Problem (CSP). We unify, simplify, and extend parts of the three approaches that have been developed to study the CSP over finite templates -- absorption theory that was used to characterize CSPs solvable by local consistency methods (JACM'14), and Bulatov's and Zhuk's theories that were used for two independent proofs of the CSP Dichotomy Theorem (FOCS'17, JACM'20). As the first contribution we present an elementary theorem about primitive positive definability and use it to obtain the starting points of Bulatov's and Zhuk's proofs as corollaries. As the second contribution we propose and initiate a systematic study of minimal Taylor algebras. This class of algebras is broad enough that it suffices to verify the CSP Dichotomy Theorem on this class only, but still is unusually well behaved. In particular, many concepts from the three approaches coincide in this class, which is in striking contrast with the general setting. We believe that the theory initiated in this paper will eventually result in a simple and more natural proof of the Dichotomy Theorem that employs a simpler and more efficient algorithm, and will help in attacking complexity questions in other CSP-related problems.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Orbit-Finite-Dimensional Vector Spaces and Weighted Register Automata</title>
      <description><![CDATA[We develop a theory of vector spaces spanned by orbit-finite sets. Using this theory, we give a decision procedure for equivalence of weighted register automata, which are the common generalization of weighted automata and register automata for infinite alphabets. The algorithm runs in exponential time, and in polynomial time for a fixed number of registers. As a special case, we can decide, with the same complexity, language equivalence for unambiguous register automata, which improves previous results in three ways: (a) we allow for order comparisons on atoms, and not just equality; (b) the complexity is exponentially better; and (c) we allow automata with guessing.]]></description>
      <pubDate>Mon, 06 May 2024 16:52:08 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.13</link>
      <guid>https://doi.org/10.46298/theoretics.24.13</guid>
      <author>Bojańczyk, Mikołaj</author>
      <author>Fijalkow, Joanna</author>
      <author>Klin, Bartek</author>
      <author>Moerman, Joshua</author>
      <dc:creator>Bojańczyk, Mikołaj</dc:creator>
      <dc:creator>Fijalkow, Joanna</dc:creator>
      <dc:creator>Klin, Bartek</dc:creator>
      <dc:creator>Moerman, Joshua</dc:creator>
      <content:encoded><![CDATA[We develop a theory of vector spaces spanned by orbit-finite sets. Using this theory, we give a decision procedure for equivalence of weighted register automata, which are the common generalization of weighted automata and register automata for infinite alphabets. The algorithm runs in exponential time, and in polynomial time for a fixed number of registers. As a special case, we can decide, with the same complexity, language equivalence for unambiguous register automata, which improves previous results in three ways: (a) we allow for order comparisons on atoms, and not just equality; (b) the complexity is exponentially better; and (c) we allow automata with guessing.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Framework for $\exists \mathbb{R}$-Completeness of Two-Dimensional Packing Problems</title>
      <description><![CDATA[The aim in packing problems is to decide if a given set of pieces can be placed inside a given container. A packing problem is defined by the types of pieces and containers to be handled, and the motions that are allowed to move the pieces. The pieces must be placed so that in the resulting placement, they are pairwise interior-disjoint. We establish a framework which enables us to show that for many combinations of allowed pieces, containers and motions, the resulting problem is $\exists \mathbb{R}$-complete. This means that the problem is equivalent (under polynomial time reductions) to deciding whether a given system of polynomial equations and inequalities with integer coefficients has a real solution. We consider packing problems where only translations are allowed as the motions, and problems where arbitrary rigid motions are allowed, i.e., both translations and rotations. When rotations are allowed, we show that it is an $\exists \mathbb{R}$-complete problem to decide if a set of convex polygons, each of which has at most $7$ corners, can be packed into a square. Restricted to translations, we show that the following problems are $\exists \mathbb{R}$-complete: (i) pieces bounded by segments and hyperbolic curves to be packed in a square, and (ii) convex polygons to be packed in a container bounded by segments and hyperbolic curves.]]></description>
      <pubDate>Fri, 26 Apr 2024 07:42:59 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.11</link>
      <guid>https://doi.org/10.46298/theoretics.24.11</guid>
      <author>Abrahamsen, Mikkel</author>
      <author>Miltzow, Tillmann</author>
      <author>Seiferth, Nadja</author>
      <dc:creator>Abrahamsen, Mikkel</dc:creator>
      <dc:creator>Miltzow, Tillmann</dc:creator>
      <dc:creator>Seiferth, Nadja</dc:creator>
      <content:encoded><![CDATA[The aim in packing problems is to decide if a given set of pieces can be placed inside a given container. A packing problem is defined by the types of pieces and containers to be handled, and the motions that are allowed to move the pieces. The pieces must be placed so that in the resulting placement, they are pairwise interior-disjoint. We establish a framework which enables us to show that for many combinations of allowed pieces, containers and motions, the resulting problem is $\exists \mathbb{R}$-complete. This means that the problem is equivalent (under polynomial time reductions) to deciding whether a given system of polynomial equations and inequalities with integer coefficients has a real solution. We consider packing problems where only translations are allowed as the motions, and problems where arbitrary rigid motions are allowed, i.e., both translations and rotations. When rotations are allowed, we show that it is an $\exists \mathbb{R}$-complete problem to decide if a set of convex polygons, each of which has at most $7$ corners, can be packed into a square. Restricted to translations, we show that the following problems are $\exists \mathbb{R}$-complete: (i) pieces bounded by segments and hyperbolic curves to be packed in a square, and (ii) convex polygons to be packed in a container bounded by segments and hyperbolic curves.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>From Muller to Parity and Rabin Automata: Optimal Transformations Preserving (History) Determinism</title>
      <description><![CDATA[We study transformations of automata and games using Muller conditions into equivalent ones using parity or Rabin conditions. We present two transformations, one that turns a deterministic Muller automaton into an equivalent deterministic parity automaton, and another that provides an equivalent history-deterministic Rabin automaton. We show a strong optimality result: the obtained automata are minimal amongst those that can be derived from the original automaton by duplication of states. We introduce the notions of locally bijective morphisms and history-deterministic mappings to formalise the correctness and optimality of these transformations. The proposed transformations are based on a novel structure, called the alternating cycle decomposition, inspired by and extending Zielonka trees. In addition to providing optimal transformations of automata, the alternating cycle decomposition offers fundamental information on their structure. We use this information to give crisp characterisations on the possibility of relabelling automata with different acceptance conditions and to perform a systematic study of a normal form for parity automata.]]></description>
      <pubDate>Wed, 24 Apr 2024 14:51:44 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.12</link>
      <guid>https://doi.org/10.46298/theoretics.24.12</guid>
      <author>Casares, Antonio</author>
      <author>Colcombet, Thomas</author>
      <author>Fijalkow, Nathanaël</author>
      <author>Lehtinen, Karoliina</author>
      <dc:creator>Casares, Antonio</dc:creator>
      <dc:creator>Colcombet, Thomas</dc:creator>
      <dc:creator>Fijalkow, Nathanaël</dc:creator>
      <dc:creator>Lehtinen, Karoliina</dc:creator>
      <content:encoded><![CDATA[We study transformations of automata and games using Muller conditions into equivalent ones using parity or Rabin conditions. We present two transformations, one that turns a deterministic Muller automaton into an equivalent deterministic parity automaton, and another that provides an equivalent history-deterministic Rabin automaton. We show a strong optimality result: the obtained automata are minimal amongst those that can be derived from the original automaton by duplication of states. We introduce the notions of locally bijective morphisms and history-deterministic mappings to formalise the correctness and optimality of these transformations. The proposed transformations are based on a novel structure, called the alternating cycle decomposition, inspired by and extending Zielonka trees. In addition to providing optimal transformations of automata, the alternating cycle decomposition offers fundamental information on their structure. We use this information to give crisp characterisations on the possibility of relabelling automata with different acceptance conditions and to perform a systematic study of a normal form for parity automata.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On Classifying Continuous Constraint Satisfaction Problems</title>
      <description><![CDATA[A continuous constraint satisfaction problem (CCSP) is a constraint satisfaction problem (CSP) with an interval domain $U \subset \mathbb{R}$. We engage in a systematic study to classify CCSPs that are complete for the Existential Theory of the Reals, i.e., ER-complete. To define this class, we first consider the problem ETR, which also stands for Existential Theory of the Reals. In an instance of this problem we are given some sentence of the form $\exists x_1, \ldots, x_n \in \mathbb{R} : \Phi(x_1, \ldots, x_n)$, where $\Phi$ is a well-formed quantifier-free formula consisting of the symbols $\{0, 1, +, \cdot, \geq, >, \wedge, \vee, \neg\}$; the goal is to check whether this sentence is true. Now the class ER is the family of all problems that admit a polynomial-time many-one reduction to ETR. It is known that NP $\subseteq$ ER $\subseteq$ PSPACE. We restrict our attention to CCSPs with addition constraints ($x + y = z$) and some other mild technical conditions. Previously, it was shown that multiplication constraints ($x \cdot y = z$), squaring constraints ($x^2 = y$), or inversion constraints ($x\cdot y = 1$) are sufficient to establish ER-completeness. We extend this in the strongest possible sense for equality constraints as follows. We show that CCSPs (with addition constraints and some other mild technical conditions) that have any one well-behaved curved equality constraint ($f(x,y) = 0$) are ER-complete. We further extend our results to inequality constraints. We show that any well-behaved convexly curved and any well-behaved concavely curved inequality constraint ($f(x,y) \geq 0$ and $g(x,y) \geq 0$) imply ER-completeness on the class of such CCSPs.]]></description>
      <pubDate>Tue, 16 Apr 2024 08:17:28 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.10</link>
      <guid>https://doi.org/10.46298/theoretics.24.10</guid>
      <author>Miltzow, Tillmann</author>
      <author>Schmiermann, Reinier F.</author>
      <dc:creator>Miltzow, Tillmann</dc:creator>
      <dc:creator>Schmiermann, Reinier F.</dc:creator>
      <content:encoded><![CDATA[A continuous constraint satisfaction problem (CCSP) is a constraint satisfaction problem (CSP) with an interval domain $U \subset \mathbb{R}$. We engage in a systematic study to classify CCSPs that are complete for the Existential Theory of the Reals, i.e., ER-complete. To define this class, we first consider the problem ETR, which also stands for Existential Theory of the Reals. In an instance of this problem we are given some sentence of the form $\exists x_1, \ldots, x_n \in \mathbb{R} : \Phi(x_1, \ldots, x_n)$, where $\Phi$ is a well-formed quantifier-free formula consisting of the symbols $\{0, 1, +, \cdot, \geq, >, \wedge, \vee, \neg\}$; the goal is to check whether this sentence is true. Now the class ER is the family of all problems that admit a polynomial-time many-one reduction to ETR. It is known that NP $\subseteq$ ER $\subseteq$ PSPACE. We restrict our attention to CCSPs with addition constraints ($x + y = z$) and some other mild technical conditions. Previously, it was shown that multiplication constraints ($x \cdot y = z$), squaring constraints ($x^2 = y$), or inversion constraints ($x\cdot y = 1$) are sufficient to establish ER-completeness. We extend this in the strongest possible sense for equality constraints as follows. We show that CCSPs (with addition constraints and some other mild technical conditions) that have any one well-behaved curved equality constraint ($f(x,y) = 0$) are ER-complete. We further extend our results to inequality constraints. We show that any well-behaved convexly curved and any well-behaved concavely curved inequality constraint ($f(x,y) \geq 0$ and $g(x,y) \geq 0$) imply ER-completeness on the class of such CCSPs.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Positivity-hardness results on Markov decision processes</title>
      <description><![CDATA[This paper investigates a series of optimization problems for one-counter Markov decision processes (MDPs) and integer-weighted MDPs with finite state space. Specifically, it considers problems addressing termination probabilities and expected termination times for one-counter MDPs, as well as satisfaction probabilities of energy objectives, conditional and partial expectations, satisfaction probabilities of constraints on the total accumulated weight, the computation of quantiles for the accumulated weight, and the conditional value-at-risk for accumulated weights for integer-weighted MDPs. Although algorithmic results are available for some special instances, the decidability status of the decision versions of these problems is unknown in general. The paper demonstrates that these optimization problems are inherently mathematically difficult by providing polynomial-time reductions from the Positivity problem for linear recurrence sequences. This problem is a well-known number-theoretic problem whose decidability status has been open for decades and it is known that decidability of the Positivity problem would have far-reaching consequences in analytic number theory. So, the reductions presented in the paper show that an algorithmic solution to any of the investigated problems is not possible without a major breakthrough in analytic number theory. The reductions rely on the construction of MDP-gadgets that encode the initial values and linear recurrence relations of linear recurrence sequences. These gadgets can flexibly be adjusted to prove the various Positivity-hardness results.]]></description>
      <pubDate>Mon, 08 Apr 2024 06:12:45 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.9</link>
      <guid>https://doi.org/10.46298/theoretics.24.9</guid>
      <author>Piribauer, Jakob</author>
      <author>Baier, Christel</author>
      <dc:creator>Piribauer, Jakob</dc:creator>
      <dc:creator>Baier, Christel</dc:creator>
      <content:encoded><![CDATA[This paper investigates a series of optimization problems for one-counter Markov decision processes (MDPs) and integer-weighted MDPs with finite state space. Specifically, it considers problems addressing termination probabilities and expected termination times for one-counter MDPs, as well as satisfaction probabilities of energy objectives, conditional and partial expectations, satisfaction probabilities of constraints on the total accumulated weight, the computation of quantiles for the accumulated weight, and the conditional value-at-risk for accumulated weights for integer-weighted MDPs. Although algorithmic results are available for some special instances, the decidability status of the decision versions of these problems is unknown in general. The paper demonstrates that these optimization problems are inherently mathematically difficult by providing polynomial-time reductions from the Positivity problem for linear recurrence sequences. This problem is a well-known number-theoretic problem whose decidability status has been open for decades and it is known that decidability of the Positivity problem would have far-reaching consequences in analytic number theory. So, the reductions presented in the paper show that an algorithmic solution to any of the investigated problems is not possible without a major breakthrough in analytic number theory. The reductions rely on the construction of MDP-gadgets that encode the initial values and linear recurrence relations of linear recurrence sequences. These gadgets can flexibly be adjusted to prove the various Positivity-hardness results.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Linear Hashing with $\ell_\infty$ guarantees and two-sided Kakeya bounds</title>
      <description><![CDATA[We show that a randomly chosen linear map over a finite field gives a good hash function in the $\ell_\infty$ sense. More concretely, consider a set $S \subset \mathbb{F}_q^n$ and a randomly chosen linear map $L : \mathbb{F}_q^n \to \mathbb{F}_q^t$ with $q^t$ taken to be sufficiently smaller than $|S|$. Let $U_S$ denote a random variable distributed uniformly on $S$. Our main theorem shows that, with high probability over the choice of $L$, the random variable $L(U_S)$ is close to uniform in the $\ell_\infty$ norm. In other words, {\em every} element in the range $\mathbb{F}_q^t$ has about the same number of elements in $S$ mapped to it. This complements the widely-used Leftover Hash Lemma (LHL) which proves the analogous statement under the statistical, or $\ell_1$, distance (for a richer class of functions) as well as prior work on the expected largest 'bucket size' in linear hash functions [ADMPT99]. By known bounds from the load balancing literature [RS98], our results are tight and show that linear functions hash as well as truly random functions, up to a constant factor in the entropy loss. Our proof leverages a connection between linear hashing and the finite field Kakeya problem and extends some of the tools developed in this area, in particular the polynomial method.]]></description>
      <pubDate>Wed, 03 Apr 2024 07:36:36 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.8</link>
      <guid>https://doi.org/10.46298/theoretics.24.8</guid>
      <author>Dhar, Manik</author>
      <author>Dvir, Zeev</author>
      <dc:creator>Dhar, Manik</dc:creator>
      <dc:creator>Dvir, Zeev</dc:creator>
      <content:encoded><![CDATA[We show that a randomly chosen linear map over a finite field gives a good hash function in the $\ell_\infty$ sense. More concretely, consider a set $S \subset \mathbb{F}_q^n$ and a randomly chosen linear map $L : \mathbb{F}_q^n \to \mathbb{F}_q^t$ with $q^t$ taken to be sufficiently smaller than $|S|$. Let $U_S$ denote a random variable distributed uniformly on $S$. Our main theorem shows that, with high probability over the choice of $L$, the random variable $L(U_S)$ is close to uniform in the $\ell_\infty$ norm. In other words, {\em every} element in the range $\mathbb{F}_q^t$ has about the same number of elements in $S$ mapped to it. This complements the widely-used Leftover Hash Lemma (LHL) which proves the analogous statement under the statistical, or $\ell_1$, distance (for a richer class of functions) as well as prior work on the expected largest 'bucket size' in linear hash functions [ADMPT99]. By known bounds from the load balancing literature [RS98], our results are tight and show that linear functions hash as well as truly random functions, up to a constant factor in the entropy loss. Our proof leverages a connection between linear hashing and the finite field Kakeya problem and extends some of the tools developed in this area, in particular the polynomial method.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
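    <!-- A toy numerical companion to the abstract above, specialized to q = 2 and with illustrative parameters (not from the paper): draw a uniformly random t x n bit matrix as the linear map L, hash a random set S, and compare bucket loads with the ideal load |S| / 2^t.

    import itertools, random

    def random_linear_hash(n, t, seed=0):
        """A uniformly random linear map L : F_2^n to F_2^t as a t x n bit matrix."""
        rng = random.Random(seed)
        L = [[rng.randrange(2) for _ in range(n)] for _ in range(t)]
        def h(x):  # x is a tuple of n bits
            return tuple(sum(L[i][j] * x[j] for j in range(n)) % 2 for i in range(t))
        return h

    n, t = 10, 3
    S = random.Random(1).sample(list(itertools.product([0, 1], repeat=n)), 400)
    h = random_linear_hash(n, t)

    loads = {}
    for x in S:
        b = h(x)
        loads[b] = loads.get(b, 0) + 1

    print(min(loads.values()), max(loads.values()), len(S) / 2 ** t)
    # With high probability every bucket load is close to the ideal 50.
    -->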
    <item>
      <title>Improved quantum data analysis</title>
      <description><![CDATA[We provide more sample-efficient versions of some basic routines in quantum data analysis, along with simpler proofs. In particular, we give a quantum "Threshold Search" algorithm that requires only $O((\log^2 m)/\epsilon^2)$ samples of a $d$-dimensional state $\rho$. That is, given observables $0 \le A_1, A_2, ..., A_m \le 1$ such that $\mathrm{tr}(\rho A_i) \ge 1/2$ for at least one $i$, the algorithm finds $j$ with $\mathrm{tr}(\rho A_j) \ge 1/2-\epsilon$. As a consequence, we obtain a Shadow Tomography algorithm requiring only $\tilde{O}((\log^2 m)(\log d)/\epsilon^4)$ samples, which simultaneously achieves the best known dependence on each parameter $m$, $d$, $\epsilon$. This yields the same sample complexity for quantum Hypothesis Selection among $m$ states; we also give an alternative Hypothesis Selection method using $\tilde{O}((\log^3 m)/\epsilon^2)$ samples.]]></description>
      <pubDate>Mon, 18 Mar 2024 08:40:29 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.7</link>
      <guid>https://doi.org/10.46298/theoretics.24.7</guid>
      <author>Bădescu, Costin</author>
      <author>O'Donnell, Ryan</author>
      <dc:creator>Bădescu, Costin</dc:creator>
      <dc:creator>O'Donnell, Ryan</dc:creator>
      <content:encoded><![CDATA[We provide more sample-efficient versions of some basic routines in quantum data analysis, along with simpler proofs. In particular, we give a quantum "Threshold Search" algorithm that requires only $O((\log^2 m)/\epsilon^2)$ samples of a $d$-dimensional state $\rho$. That is, given observables $0 \le A_1, A_2, ..., A_m \le 1$ such that $\mathrm{tr}(\rho A_i) \ge 1/2$ for at least one $i$, the algorithm finds $j$ with $\mathrm{tr}(\rho A_j) \ge 1/2-\epsilon$. As a consequence, we obtain a Shadow Tomography algorithm requiring only $\tilde{O}((\log^2 m)(\log d)/\epsilon^4)$ samples, which simultaneously achieves the best known dependence on each parameter $m$, $d$, $\epsilon$. This yields the same sample complexity for quantum Hypothesis Selection among $m$ states; we also give an alternative Hypothesis Selection method using $\tilde{O}((\log^3 m)/\epsilon^2)$ samples.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
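    <!-- The quantum routine has no short classical implementation; the sketch below is a classical stand-in for the Threshold Search interface only, with biased coins playing the role of the values tr(rho A_i). The naive union-bound sample count used here is far worse than the quantum bound in the abstract; all names and parameters are illustrative.

    import math, random

    def threshold_search_classical(ps, eps, delta=0.01, seed=0):
        """Promise: some ps[i] >= 1/2. Returns j with, whp, ps[j] >= 1/2 - eps,
        using O(log(m/delta)/eps^2) flips per index (Hoeffding + union bound)."""
        rng = random.Random(seed)
        m = len(ps)
        k = math.ceil(2 * math.log(2 * m / delta) / eps ** 2)
        estimates = [sum(rng.random() < p for _ in range(k)) / k for p in ps]
        return max(range(m), key=lambda j: estimates[j])

    print(threshold_search_classical([0.1, 0.2, 0.55, 0.3], eps=0.1))  # likely 2
    -->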
    <item>
      <title>Terminal Embeddings in Sublinear Time</title>
      <description><![CDATA[Elkin, Filtser, and Neiman (2017) recently introduced the concept of a {\it terminal embedding} from one metric space $(X,d_X)$ to another $(Y,d_Y)$ with a set of designated terminals $T\subset X$. Such an embedding $f$ is said to have distortion $\rho\ge 1$ if $\rho$ is the smallest value such that there exists a constant $C>0$ satisfying \begin{equation*} \forall x\in T\ \forall q\in X,\ C d_X(x, q) \le d_Y(f(x), f(q)) \le C \rho d_X(x, q) . \end{equation*} When $X,Y$ are both Euclidean metrics with $Y$ being $m$-dimensional, Narayanan and Nelson (2019), following work of Mahabadi, Makarychev, Makarychev, and Razenshteyn (2018), showed that distortion $1+\epsilon$ is achievable via such a terminal embedding with $m = O(\epsilon^{-2}\log n)$ for $n := |T|$. This generalizes the Johnson-Lindenstrauss lemma, which only preserves distances within $T$ and not to $T$ from the rest of space. The downside of prior work is that evaluating their embedding on some $q\in \mathbb{R}^d$ required solving a semidefinite program with $\Theta(n)$ constraints in~$m$ variables and thus required some superlinear $\mathrm{poly}(n)$ runtime. Our main contribution in this work is to give a new data structure for computing terminal embeddings. We show how to pre-process $T$ to obtain an almost linear-space data structure that supports computing the terminal embedding image of any $q\in\mathbb{R}^d$ in sublinear time $O^* (n^{1-\Theta(\epsilon^2)} + d)$. To accomplish this, we leverage tools developed in the context of approximate nearest neighbor search.]]></description>
      <pubDate>Thu, 14 Mar 2024 10:09:55 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.6</link>
      <guid>https://doi.org/10.46298/theoretics.24.6</guid>
      <author>Cherapanamjeri, Yeshwanth</author>
      <author>Nelson, Jelani</author>
      <dc:creator>Cherapanamjeri, Yeshwanth</dc:creator>
      <dc:creator>Nelson, Jelani</dc:creator>
      <content:encoded><![CDATA[Elkin, Filtser, and Neiman (2017) recently introduced the concept of a {\it terminal embedding} from one metric space $(X,d_X)$ to another $(Y,d_Y)$ with a set of designated terminals $T\subset X$. Such an embedding $f$ is said to have distortion $\rho\ge 1$ if $\rho$ is the smallest value such that there exists a constant $C>0$ satisfying \begin{equation*} \forall x\in T\ \forall q\in X,\ C d_X(x, q) \le d_Y(f(x), f(q)) \le C \rho d_X(x, q) . \end{equation*} When $X,Y$ are both Euclidean metrics with $Y$ being $m$-dimensional, Narayanan and Nelson (2019), following work of Mahabadi, Makarychev, Makarychev, and Razenshteyn (2018), showed that distortion $1+\epsilon$ is achievable via such a terminal embedding with $m = O(\epsilon^{-2}\log n)$ for $n := |T|$. This generalizes the Johnson-Lindenstrauss lemma, which only preserves distances within $T$ and not to $T$ from the rest of space. The downside of prior work is that evaluating their embedding on some $q\in \mathbb{R}^d$ required solving a semidefinite program with $\Theta(n)$ constraints in~$m$ variables and thus required some superlinear $\mathrm{poly}(n)$ runtime. Our main contribution in this work is to give a new data structure for computing terminal embeddings. We show how to pre-process $T$ to obtain an almost linear-space data structure that supports computing the terminal embedding image of any $q\in\mathbb{R}^d$ in sublinear time $O^* (n^{1-\Theta(\epsilon^2)} + d)$. To accomplish this, we leverage tools developed in the context of approximate nearest neighbor search.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
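    <!-- An empirical illustration of the distortion definition with a plain Johnson-Lindenstrauss map, using m = O(eps^-2 log n) rows (illustrative constants; this is neither the paper's terminal embedding nor its sublinear-time data structure, since a plain linear map only handles a fixed query with high probability rather than every query simultaneously).

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, eps = 200, 50, 0.5
    m = int(np.ceil(8 * np.log(n) / eps ** 2))

    T = rng.normal(size=(n, d))                  # terminals
    G = rng.normal(size=(m, d)) / np.sqrt(m)     # random linear map (plain JL)
    q = rng.normal(size=d)                       # one query point

    orig = np.linalg.norm(T - q, axis=1)         # terminal-to-query distances
    emb = np.linalg.norm(T @ G.T - G @ q, axis=1)
    ratios = emb / orig
    print(ratios.min(), ratios.max())            # all ratios near 1 whp
    -->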
    <item>
      <title>PPSZ is better than you think</title>
      <description><![CDATA[PPSZ, for a long time the fastest known algorithm for $k$-SAT, works by going through the variables of the input formula in random order; each variable is then set randomly to $0$ or $1$, unless the correct value can be inferred by an efficiently implementable rule (like small-width resolution; or being implied by a small set of clauses). We show that PPSZ performs exponentially better than previously known, for all $k \geq 3$. For Unique-$3$-SAT we bound its running time by $O(1.306973^{n})$, which is somewhat better than the algorithm of Hansen, Kaplan, Zamir, and Zwick, which runs in time $O(1.306995^n)$. Before that, the best known upper bound for Unique-$3$-SAT was $O(1.3070319^n)$. All improvements are achieved without changing the original PPSZ. The core idea is to pretend that PPSZ does not process the variables in uniformly random order, but according to a carefully designed distribution. We write "pretend" since this can be done without any actual change to the algorithm.]]></description>
      <pubDate>Wed, 13 Mar 2024 08:14:57 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.5</link>
      <guid>https://doi.org/10.46298/theoretics.24.5</guid>
      <author>Scheder, Dominik</author>
      <dc:creator>Scheder, Dominik</dc:creator>
      <content:encoded><![CDATA[PPSZ, for a long time the fastest known algorithm for $k$-SAT, works by going through the variables of the input formula in random order; each variable is then set randomly to $0$ or $1$, unless the correct value can be inferred by an efficiently implementable rule (like small-width resolution; or being implied by a small set of clauses). We show that PPSZ performs exponentially better than previously known, for all $k \geq 3$. For Unique-$3$-SAT we bound its running time by $O(1.306973^{n})$, which is somewhat better than the algorithm of Hansen, Kaplan, Zamir, and Zwick, which runs in time $O(1.306995^n)$. Before that, the best known upper bound for Unique-$3$-SAT was $O(1.3070319^n)$. All improvements are achieved without changing the original PPSZ. The core idea is to pretend that PPSZ does not process the variables in uniformly random order, but according to a carefully designed distribution. We write "pretend" since this can be done without any actual change to the algorithm.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
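    <!-- A bare-bones rendering of the PPSZ loop described above, with unit propagation as the efficiently implementable inference rule (a weaker rule than small-width resolution); the CNF instance and retry count are illustrative.

    import random

    def ppsz_once(clauses, n, rng):
        """One PPSZ pass: visit variables in random order; set a variable by unit
        propagation when some clause forces it, else uniformly at random.
        clauses: list of lists of nonzero ints, DIMACS-style literals."""
        assign = {}
        def value(lit):
            v = assign.get(abs(lit))
            return None if v is None else (v if lit > 0 else not v)

        for x in rng.sample(range(1, n + 1), n):
            forced = None
            for c in clauses:
                if any(value(l) is True for l in c):
                    continue  # clause already satisfied
                free = [l for l in c if value(l) is None]
                if len(free) == 1 and abs(free[0]) == x:
                    forced = free[0] > 0
                    break
            assign[x] = forced if forced is not None else rng.random() < 0.5
        return assign if all(any(value(l) for l in c) for c in clauses) else None

    rng = random.Random(0)
    cnf = [[1, 2, 3], [-1, 2], [-2, 3], [-3, -1]]  # small satisfiable instance
    print(next((s for _ in range(1000) if (s := ppsz_once(cnf, 3, rng))), None))
    -->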
    <item>
      <title>DeepSec: Deciding Equivalence Properties for Security Protocols -- Improved theory and practice</title>
      <description><![CDATA[Automated verification has become an essential part in the security evaluation of cryptographic protocols. In this context, privacy-type properties are often modelled by indistinguishability statements, expressed as behavioural equivalences in a process calculus. In this paper we contribute both to the theory and practice of this verification problem. We establish new complexity results for static equivalence, trace equivalence and labelled bisimilarity and provide a decision procedure for these equivalences in the case of a bounded number of protocol sessions. Our procedure is the first to decide trace equivalence and labelled bisimilarity exactly for a large variety of cryptographic primitives -- those that can be represented by a subterm convergent destructor rewrite system. We also implemented the procedure in a new tool, DeepSec. We showed through extensive experiments that it is significantly more efficient than other similar tools, while at the same time extending the scope of the protocols that can be analysed.]]></description>
      <pubDate>Wed, 13 Mar 2024 08:13:21 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.4</link>
      <guid>https://doi.org/10.46298/theoretics.24.4</guid>
      <author>Cheval, Vincent</author>
      <author>Kremer, Steve</author>
      <author>Rakotonirina, Itsaka</author>
      <dc:creator>Cheval, Vincent</dc:creator>
      <dc:creator>Kremer, Steve</dc:creator>
      <dc:creator>Rakotonirina, Itsaka</dc:creator>
      <content:encoded><![CDATA[Automated verification has become an essential part in the security evaluation of cryptographic protocols. In this context, privacy-type properties are often modelled by indistinguishability statements, expressed as behavioural equivalences in a process calculus. In this paper we contribute both to the theory and practice of this verification problem. We establish new complexity results for static equivalence, trace equivalence and labelled bisimilarity and provide a decision procedure for these equivalences in the case of a bounded number of protocol sessions. Our procedure is the first to decide trace equivalence and labelled bisimilarity exactly for a large variety of cryptographic primitives -- those that can be represented by a subterm convergent destructor rewrite system. We also implemented the procedure in a new tool, DeepSec. We showed through extensive experiments that it is significantly more efficient than other similar tools, while at the same time extending the scope of the protocols that can be analysed.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Constructive Separations and Their Consequences</title>
      <description><![CDATA[For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that "constructiveness" serves as a dividing line between many weak lower bounds we know how to prove, and strong lower bounds against $P$, $ZPP$, and $BPP$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories. 1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations ranging from $EXP \neq BPP$ to even $P \neq NP$. 2. Our second set of results shows that for most major open problems in lower bounds against $P$, $ZPP$, and $BPP$, including $P \neq NP$, $P \neq PSPACE$, $P \neq PP$, $ZPP \neq EXP$, and $BPP \neq NEXP$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $P \neq NP$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $BPP \neq NEXP$ [Dolev, Fandina and Gutfreund, CIAC 2013]. 3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $t$, there are no constructive separations for detecting high $t$-time Kolmogorov complexity (a task which is known to be not in $P$) from any complexity class, unconditionally.]]></description>
      <pubDate>Thu, 15 Feb 2024 09:43:52 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.3</link>
      <guid>https://doi.org/10.46298/theoretics.24.3</guid>
      <author>Chen, Lijie</author>
      <author>Jin, Ce</author>
      <author>Santhanam, Rahul</author>
      <author>Williams, Ryan</author>
      <dc:creator>Chen, Lijie</dc:creator>
      <dc:creator>Jin, Ce</dc:creator>
      <dc:creator>Santhanam, Rahul</dc:creator>
      <dc:creator>Williams, Ryan</dc:creator>
      <content:encoded><![CDATA[For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that "constructiveness" serves as a dividing line between many weak lower bounds we know how to prove, and strong lower bounds against $P$, $ZPP$, and $BPP$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories. 1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations ranging from $EXP \neq BPP$ to even $P \neq NP$. 2. Our second set of results shows that for most major open problems in lower bounds against $P$, $ZPP$, and $BPP$, including $P \neq NP$, $P \neq PSPACE$, $P \neq PP$, $ZPP \neq EXP$, and $BPP \neq NEXP$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $P \neq NP$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $BPP \neq NEXP$ [Dolev, Fandina and Gutfreund, CIAC 2013]. 3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $t$, there are no constructive separations for detecting high $t$-time Kolmogorov complexity (a task which is known to be not in $P$) from any complexity class, unconditionally.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Realizable Learning is All You Need</title>
      <description><![CDATA[The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust learning, it's surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate, and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability such as learning with arbitrary distributional assumptions and more general loss functions, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.]]></description>
      <pubDate>Tue, 06 Feb 2024 12:51:56 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.2</link>
      <guid>https://doi.org/10.46298/theoretics.24.2</guid>
      <author>Hopkins, Max</author>
      <author>Kane, Daniel M.</author>
      <author>Lovett, Shachar</author>
      <author>Mahajan, Gaurav</author>
      <dc:creator>Hopkins, Max</dc:creator>
      <dc:creator>Kane, Daniel M.</dc:creator>
      <dc:creator>Lovett, Shachar</dc:creator>
      <dc:creator>Mahajan, Gaurav</dc:creator>
      <content:encoded><![CDATA[The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust learning, it's surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate, and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability such as learning with arbitrary distributional assumptions and more general loss functions, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
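    <!-- A schematic, heavily hedged rendering of a "subsample and select" reduction from agnostic to realizable learning; it conveys the flavor only, and the paper's actual three-line blackbox reduction and its guarantees differ. All helper names and the toy threshold class are hypothetical.

    from itertools import combinations

    def agnostic_from_realizable(realizable_learner, sample, d, fresh):
        """Run the realizable learner on every size-d subsample of a possibly noisy
        sample, then select the output hypothesis with least held-out error."""
        candidates = [realizable_learner(list(sub)) for sub in combinations(sample, d)]
        err = lambda h: sum(h(x) != y for x, y in fresh) / len(fresh)
        return min(candidates, key=err)

    def learn_threshold(pts):
        """Toy realizable learner for thresholds on the line: cut just above the
        largest point labeled 0 (assumes the given subsample is consistent)."""
        lo = max([x for x, y in pts if y == 0], default=float("-inf"))
        return lambda z, t=lo: int(z > t)

    data = [(x / 20, int(x / 20 > 0.6)) for x in range(20)]
    noisy = data[:15] + [(0.9, 0)]               # one corrupted label
    h = agnostic_from_realizable(learn_threshold, noisy, d=2, fresh=data)
    print(sum(h(x) != y for x, y in data))       # 0: the corrupted point is ignored
    -->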
    <item>
      <title>Strongly Sublinear Algorithms for Testing Pattern Freeness</title>
      <description><![CDATA[For a permutation $\pi:[k] \to [k]$, a function $f:[n] \to \mathbb{R}$ contains a $\pi$-appearance if there exists $1 \leq i_1 < i_2 < \dots < i_k \leq n$ such that for all $s,t \in [k]$, $f(i_s) < f(i_t)$ if and only if $\pi(s) < \pi(t)$. The function is $\pi$-free if it has no $\pi$-appearances. In this paper, we investigate the problem of testing whether an input function $f$ is $\pi$-free or whether $f$ differs on at least $\varepsilon n$ values from every $\pi$-free function. This generalizes the well-studied problem of monotonicity testing and was first studied by Newman, Rabinovich, Rajendraprasad and Sohler (Random Structures and Algorithms 2019). We show that for all constants $k \in \mathbb{N}$, $\varepsilon \in (0,1)$, and permutation $\pi:[k] \to [k]$, there is a one-sided error $\varepsilon$-testing algorithm for $\pi$-freeness of functions $f:[n] \to \mathbb{R}$ that makes $\tilde{O}(n^{o(1)})$ queries. We improve significantly upon the previous best upper bound $O(n^{1 - 1/(k-1)})$ by Ben-Eliezer and Canonne (SODA 2018). Our algorithm is adaptive, while the earlier best upper bound is known to be tight for nonadaptive algorithms.]]></description>
      <pubDate>Fri, 12 Jan 2024 10:54:23 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.24.1</link>
      <guid>https://doi.org/10.46298/theoretics.24.1</guid>
      <author>Newman, Ilan</author>
      <author>Varma, Nithin</author>
      <dc:creator>Newman, Ilan</dc:creator>
      <dc:creator>Varma, Nithin</dc:creator>
      <content:encoded><![CDATA[For a permutation $\pi:[k] \to [k]$, a function $f:[n] \to \mathbb{R}$ contains a $\pi$-appearance if there exists $1 \leq i_1 < i_2 < \dots < i_k \leq n$ such that for all $s,t \in [k]$, $f(i_s) < f(i_t)$ if and only if $\pi(s) < \pi(t)$. The function is $\pi$-free if it has no $\pi$-appearances. In this paper, we investigate the problem of testing whether an input function $f$ is $\pi$-free or whether $f$ differs on at least $\varepsilon n$ values from every $\pi$-free function. This generalizes the well-studied problem of monotonicity testing and was first studied by Newman, Rabinovich, Rajendraprasad and Sohler (Random Structures and Algorithms 2019). We show that for all constants $k \in \mathbb{N}$, $\varepsilon \in (0,1)$, and permutation $\pi:[k] \to [k]$, there is a one-sided error $\varepsilon$-testing algorithm for $\pi$-freeness of functions $f:[n] \to \mathbb{R}$ that makes $\tilde{O}(n^{o(1)})$ queries. We improve significantly upon the previous best upper bound $O(n^{1 - 1/(k-1)})$ by Ben-Eliezer and Canonne (SODA 2018). Our algorithm is adaptive, while the earlier best upper bound is known to be tight for nonadaptive algorithms.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
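    <!-- A brute-force checker for the pi-appearance definition above (exponential in k, for illustration only; the paper's point is to test pi-freeness with far fewer queries). Names are illustrative.

    from itertools import combinations

    def has_pi_appearance(f, pi):
        """Does the sequence f contain an order pattern pi (a 1-indexed permutation)?"""
        k = len(pi)
        for idx in combinations(range(len(f)), k):
            vals = [f[i] for i in idx]
            if all((vals[s] < vals[t]) == (pi[s] < pi[t])
                   for s in range(k) for t in range(k) if s != t):
                return True
        return False

    print(has_pi_appearance([3, 1, 4, 1, 5], (2, 1, 3)))  # True, e.g. via (3, 1, 4)
    print(has_pi_appearance([5, 4, 3, 2, 1], (1, 2)))     # False: no ascending pair
    -->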
    <item>
      <title>All about unambiguous polynomial closure</title>
      <description><![CDATA[We study a standard operator on classes of languages: unambiguous polynomial closure. We prove that for every class C of regular languages satisfying mild properties, the membership problem for its unambiguous polynomial closure UPol(C) reduces to the same problem for C. We also show that unambiguous polynomial closure coincides with alternating left and right deterministic closure. Moreover, we prove that if additionally C is finite, the separation and covering problems are decidable for UPol(C). Finally, we present an overview of the generic logical characterizations of the classes built using unambiguous polynomial closure.]]></description>
      <pubDate>Wed, 10 Jan 2024 10:05:09 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.11</link>
      <guid>https://doi.org/10.46298/theoretics.23.11</guid>
      <author>Place, Thomas</author>
      <author>Zeitoun, Marc</author>
      <dc:creator>Place, Thomas</dc:creator>
      <dc:creator>Zeitoun, Marc</dc:creator>
      <content:encoded><![CDATA[We study a standard operator on classes of languages: unambiguous polynomial closure. We prove that for every class C of regular languages satisfying mild properties, the membership problem for its unambiguous polynomial closure UPol(C) reduces to the same problem for C. We also show that unambiguous polynomial closure coincides with alternating left and right deterministic closure. Moreover, we prove that if additionally C is finite, the separation and covering problems are decidable for UPol(C). Finally, we present an overview of the generic logical characterizations of the classes built using unambiguous polynomial closure.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Testing Distributions of Huge Objects</title>
      <description><![CDATA[We initiate a study of a new model of property testing that is a hybrid of testing properties of distributions and testing properties of strings. Specifically, the new model refers to testing properties of distributions, but these are distributions over huge objects (i.e., very long strings). Accordingly, the model accounts for the total number of local probes into these objects (resp., queries to the strings) as well as for the distance between objects (resp., strings), and the distance between distributions is defined as the earth mover's distance with respect to the relative Hamming distance between strings. We study the query complexity of testing in this new model, focusing on three directions. First, we try to relate the query complexity of testing properties in the new model to the sample complexity of testing these properties in the standard distribution testing model. Second, we consider the complexity of testing properties that arise naturally in the new model (e.g., distributions that capture random variations of fixed strings). Third, we consider the complexity of testing properties that were extensively studied in the standard distribution testing model: Two such cases are uniform distributions and pairs of identical distributions.]]></description>
      <pubDate>Sat, 30 Dec 2023 13:11:14 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.12</link>
      <guid>https://doi.org/10.46298/theoretics.23.12</guid>
      <author>Goldreich, Oded</author>
      <author>Ron, Dana</author>
      <dc:creator>Goldreich, Oded</dc:creator>
      <dc:creator>Ron, Dana</dc:creator>
      <content:encoded><![CDATA[We initiate a study of a new model of property testing that is a hybrid of testing properties of distributions and testing properties of strings. Specifically, the new model refers to testing properties of distributions, but these are distributions over huge objects (i.e., very long strings). Accordingly, the model accounts for the total number of local probes into these objects (resp., queries to the strings) as well as for the distance between objects (resp., strings), and the distance between distributions is defined as the earth mover's distance with respect to the relative Hamming distance between strings. We study the query complexity of testing in this new model, focusing on three directions. First, we try to relate the query complexity of testing properties in the new model to the sample complexity of testing these properties in the standard distribution testing model. Second, we consider the complexity of testing properties that arise naturally in the new model (e.g., distributions that capture random variations of fixed strings). Third, we consider the complexity of testing properties that were extensively studied in the standard distribution testing model: Two such cases are uniform distributions and pairs of identical distributions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>The Complexity of Iterated Reversible Computation</title>
      <description><![CDATA[We study a class of functional problems reducible to computing $f^{(n)}(x)$ for inputs $n$ and $x$, where $f$ is a polynomial-time bijection. As we prove, the definition is robust against variations in the type of reduction used in its definition, and in whether we require $f$ to have a polynomial-time inverse or to be computable by a reversible logic circuit. These problems are characterized by the complexity class $\mathsf{FP}^{\mathsf{PSPACE}}$, and include natural $\mathsf{FP}^{\mathsf{PSPACE}}$-complete problems in circuit complexity, cellular automata, graph algorithms, and the dynamical systems described by piecewise-linear transformations.]]></description>
      <pubDate>Tue, 26 Dec 2023 20:07:35 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.10</link>
      <guid>https://doi.org/10.46298/theoretics.23.10</guid>
      <author>Eppstein, David</author>
      <dc:creator>Eppstein, David</dc:creator>
      <content:encoded><![CDATA[We study a class of functional problems reducible to computing $f^{(n)}(x)$ for inputs $n$ and $x$, where $f$ is a polynomial-time bijection. As we prove, the definition is robust against variations in the type of reduction used in its definition, and in whether we require $f$ to have a polynomial-time inverse or to be computable by a reversible logic circuit. These problems are characterized by the complexity class $\mathsf{FP}^{\mathsf{PSPACE}}$, and include natural $\mathsf{FP}^{\mathsf{PSPACE}}$-complete problems in circuit complexity, cellular automata, graph algorithms, and the dynamical systems described by piecewise-linear transformations.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
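    <!-- A toy instance of the problem family: computing f^(n)(x) for a polynomial-time bijection f. The bijection below is a hypothetical invertible affine map modulo 2^16; naive iteration takes Theta(n) steps, and the abstract's completeness results suggest no generic shortcut exists unless PSPACE collapses (for this particular affine map one could of course jump ahead by composing maps).

    M = 1 << 16

    def step(x):
        """A poly-time bijection on {0, ..., M-1}: x maps to 5*x + 3 (mod M)."""
        return (5 * x + 3) % M

    def step_inv(x):
        return (pow(5, -1, M) * (x - 3)) % M    # 5 is odd, hence invertible mod M

    def iterate(f, x, n):
        for _ in range(n):
            x = f(x)
        return x

    assert step_inv(step(123)) == 123           # f is indeed a bijection
    print(iterate(step, 123, 10))
    -->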
    <item>
      <title>Brooks' Theorem in Graph Streams: A Single-Pass Semi-Streaming Algorithm for $\Delta$-Coloring</title>
      <description><![CDATA[Every graph with maximum degree $\Delta$ can be colored with $(\Delta+1)$ colors using a simple greedy algorithm. Remarkably, recent work has shown that one can find such a coloring even in the semi-streaming model. But, in reality, one almost never needs $(\Delta+1)$ colors to properly color a graph. Indeed, the celebrated Brooks' theorem states that every (connected) graph besides cliques and odd cycles can be colored with $\Delta$ colors. Can we find a $\Delta$-coloring in the semi-streaming model as well? We settle this key question in the affirmative by designing a randomized semi-streaming algorithm that given any graph, with high probability, either correctly declares that the graph is not $\Delta$-colorable or outputs a $\Delta$-coloring of the graph. The proof of this result starts with a detour. We first (provably) identify the extent to which the previous approaches for streaming coloring fail for $\Delta$-coloring: for instance, all these approaches can handle streams with repeated edges and they can run in $o(n^2)$ time -- we prove that neither of these tasks is possible for $\Delta$-coloring. These impossibility results however pinpoint exactly what is missing from prior approaches when it comes to $\Delta$-coloring. We then build on these insights to design a semi-streaming algorithm that uses $(i)$ a novel sparse-recovery approach based on sparse-dense decompositions to (partially) recover the "problematic" subgraphs of the input -- the ones that form the basis of our impossibility results -- and $(ii)$ a new coloring approach for these subgraphs that allows for recoloring of other vertices in a controlled way without relying on local explorations or finding "augmenting paths" that are generally impossible for semi-streaming algorithms. We believe both these techniques can be of independent interest.]]></description>
      <pubDate>Fri, 25 Aug 2023 11:48:34 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.9</link>
      <guid>https://doi.org/10.46298/theoretics.23.9</guid>
      <author>Assadi, Sepehr</author>
      <author>Kumar, Pankaj</author>
      <author>Mittal, Parth</author>
      <dc:creator>Assadi, Sepehr</dc:creator>
      <dc:creator>Kumar, Pankaj</dc:creator>
      <dc:creator>Mittal, Parth</dc:creator>
      <content:encoded><![CDATA[Every graph with maximum degree $\Delta$ can be colored with $(\Delta+1)$ colors using a simple greedy algorithm. Remarkably, recent work has shown that one can find such a coloring even in the semi-streaming model. But, in reality, one almost never needs $(\Delta+1)$ colors to properly color a graph. Indeed, the celebrated Brooks' theorem states that every (connected) graph besides cliques and odd cycles can be colored with $\Delta$ colors. Can we find a $\Delta$-coloring in the semi-streaming model as well? We settle this key question in the affirmative by designing a randomized semi-streaming algorithm that given any graph, with high probability, either correctly declares that the graph is not $\Delta$-colorable or outputs a $\Delta$-coloring of the graph. The proof of this result starts with a detour. We first (provably) identify the extent to which the previous approaches for streaming coloring fail for $\Delta$-coloring: for instance, all these approaches can handle streams with repeated edges and they can run in $o(n^2)$ time -- we prove that neither of these tasks is possible for $\Delta$-coloring. These impossibility results however pinpoint exactly what is missing from prior approaches when it comes to $\Delta$-coloring. We then build on these insights to design a semi-streaming algorithm that uses $(i)$ a novel sparse-recovery approach based on sparse-dense decompositions to (partially) recover the "problematic" subgraphs of the input -- the ones that form the basis of our impossibility results -- and $(ii)$ a new coloring approach for these subgraphs that allows for recoloring of other vertices in a controlled way without relying on local explorations or finding "augmenting paths" that are generally impossible for semi-streaming algorithms. We believe both these techniques can be of independent interest.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
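    <!-- The greedy (Delta+1)-coloring from the first sentence of the abstract, as a short offline sketch (not the paper's streaming algorithm). The 5-cycle example also shows the odd-cycle exception in Brooks' theorem: Delta = 2, yet 3 colors are needed.

    def greedy_coloring(adj):
        """Give each vertex the smallest color unused by already-colored neighbors;
        at most Delta neighbors exist, so some color in {0, ..., Delta} is free."""
        color = {}
        for v in adj:
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color

    cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
    print(greedy_coloring(cycle5))  # uses colors {0, 1, 2}
    -->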
    <item>
      <title>Boosting Simple Learners</title>
      <description><![CDATA[Boosting is a celebrated machine learning approach which is based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules of thumb" from an "easy-to-learn class". (Schapire and Freund~'12, Shalev-Shwartz and Ben-David '14.) Formally, we assume the class of weak hypotheses has a bounded VC dimension. We focus on two main questions: (i) Oracle Complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega({1}/{\gamma^2})$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}({1}/{\gamma})$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far away" from the class be learned? Towards answering the first question we introduce combinatorial-geometric parameters which capture expressivity in boosting. As a corollary we provide an affirmative answer to the second question for well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.]]></description>
      <pubDate>Mon, 19 Jun 2023 07:10:09 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.8</link>
      <guid>https://doi.org/10.46298/theoretics.23.8</guid>
      <author>Alon, Noga</author>
      <author>Gonen, Alon</author>
      <author>Hazan, Elad</author>
      <author>Moran, Shay</author>
      <dc:creator>Alon, Noga</dc:creator>
      <dc:creator>Gonen, Alon</dc:creator>
      <dc:creator>Hazan, Elad</dc:creator>
      <dc:creator>Moran, Shay</dc:creator>
      <content:encoded><![CDATA[Boosting is a celebrated machine learning approach which is based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules of thumb" from an "easy-to-learn class". (Schapire and Freund~'12, Shalev-Shwartz and Ben-David '14.) Formally, we assume the class of weak hypotheses has a bounded VC dimension. We focus on two main questions: (i) Oracle Complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega({1}/{\gamma^2})$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}({1}/{\gamma})$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far away" from the class be learned? Towards answering the first question we introduce combinatorial-geometric parameters which capture expressivity in boosting. As a corollary we provide an affirmative answer to the second question for well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
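    <!-- For contrast with the "deeper" aggregation rules in the abstract: a minimal classical boosting baseline, AdaBoost over threshold stumps (a bounded-VC weak class) aggregated by a flat weighted majority vote. Parameters and data are illustrative.

    import numpy as np

    def stump_learner(X, y, w):
        """Weak learner: the threshold stump minimizing weighted error."""
        best = (1.0, None)
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best[0]:
                        best = (err, (j, thr, sign))
        return best

    def adaboost(X, y, rounds=20):
        w = np.ones(len(y)) / len(y)
        H = []
        for _ in range(rounds):
            err, (j, thr, sign) = stump_learner(X, y, w)
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
            pred = sign * np.where(X[:, j] <= thr, 1, -1)
            w = w * np.exp(-alpha * y * pred)
            w /= w.sum()
            H.append((alpha, j, thr, sign))
        def predict(Z):  # flat weighted majority vote over the weak hypotheses
            return np.sign(sum(a * s * np.where(Z[:, j] <= t, 1, -1)
                               for a, j, t, s in H))
        return predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    print((adaboost(X, y)(X) == y).mean())  # high training accuracy
    -->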
    <item>
      <title>A simple polynomial-time approximation algorithm for the total variation distance between two product distributions</title>
      <description><![CDATA[We give a simple polynomial-time approximation algorithm for the total variation distance between two product distributions.]]></description>
      <pubDate>Thu, 15 Jun 2023 11:51:42 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.7</link>
      <guid>https://doi.org/10.46298/theoretics.23.7</guid>
      <author>Feng, Weiming</author>
      <author>Guo, Heng</author>
      <author>Jerrum, Mark</author>
      <author>Wang, Jiaheng</author>
      <dc:creator>Feng, Weiming</dc:creator>
      <dc:creator>Guo, Heng</dc:creator>
      <dc:creator>Jerrum, Mark</dc:creator>
      <dc:creator>Wang, Jiaheng</dc:creator>
      <content:encoded><![CDATA[We give a simple polynomial-time approximation algorithm for the total variation distance between two product distributions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
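    <!-- The quantity in question, as a brute-force reference for small n: for product distributions P, Q on {0,1}^n the total variation distance is half the sum of |P(x) - Q(x)| over all 2^n outcomes. The exponential cost of this exact sum is what makes the paper's polynomial-time approximation interesting. Names are illustrative.

    from itertools import product

    def tv_product(ps, qs):
        """Exact TV distance between two product distributions on {0,1}^n,
        where ps[i] and qs[i] are the probabilities that bit i equals 1."""
        total = 0.0
        for x in product([0, 1], repeat=len(ps)):
            px = qx = 1.0
            for xi, p, q in zip(x, ps, qs):
                px *= p if xi else 1 - p
                qx *= q if xi else 1 - q
            total += abs(px - qx)
        return total / 2

    print(tv_product([0.5, 0.5, 0.9], [0.5, 0.6, 0.8]))
    -->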
    <item>
      <title>Fully Dynamic Connectivity in $O(\log n(\log\log n)^2)$ Amortized Expected Time</title>
      <description><![CDATA[Dynamic connectivity is one of the most fundamental problems in dynamic graph algorithms. We present a randomized Las Vegas dynamic connectivity data structure with $O(\log n(\log\log n)^2)$ amortized expected update time and $O(\log n/\log\log\log n)$ worst case query time, which comes very close to the cell probe lower bounds of Patrascu and Demaine (2006) and Patrascu and Thorup (2011).]]></description>
      <pubDate>Tue, 02 May 2023 11:15:27 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.6</link>
      <guid>https://doi.org/10.46298/theoretics.23.6</guid>
      <author>Huang, Shang-En</author>
      <author>Huang, Dawei</author>
      <author>Kopelowitz, Tsvi</author>
      <author>Pettie, Seth</author>
      <author>Thorup, Mikkel</author>
      <dc:creator>Huang, Shang-En</dc:creator>
      <dc:creator>Huang, Dawei</dc:creator>
      <dc:creator>Kopelowitz, Tsvi</dc:creator>
      <dc:creator>Pettie, Seth</dc:creator>
      <dc:creator>Thorup, Mikkel</dc:creator>
      <content:encoded><![CDATA[Dynamic connectivity is one of the most fundamental problems in dynamic graph algorithms. We present a randomized Las Vegas dynamic connectivity data structure with $O(\log n(\log\log n)^2)$ amortized expected update time and $O(\log n/\log\log\log n)$ worst case query time, which comes very close to the cell probe lower bounds of Patrascu and Demaine (2006) and Patrascu and Thorup (2011).]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
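    <!-- For orientation, the easy half of the problem (a sketch, not the paper's data structure): incremental connectivity via union-find handles edge insertions and queries in near-constant amortized time; supporting edge deletions as well is what requires the heavy machinery behind the bound in the abstract.

    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n
        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x
        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            if self.rank[ra] < self.rank[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            if self.rank[ra] == self.rank[rb]:
                self.rank[ra] += 1

    uf = UnionFind(5)
    uf.union(0, 1); uf.union(3, 4)
    print(uf.find(0) == uf.find(1), uf.find(0) == uf.find(3))  # True False
    -->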
    <item>
      <title>A Robust Version of Hegedűs's Lemma, with Applications</title>
      <description><![CDATA[Heged\H{u}s's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field $\mathbb{F}$ of characteristic $p > 0$ and for $q$ a power of $p$, the lemma says that any multilinear polynomial $P\in \mathbb{F}[x_1,\ldots,x_n]$ of degree less than $q$ that vanishes at all points in $\{0,1\}^n$ of some fixed Hamming weight $k\in [q,n-q]$ must also vanish at all points in $\{0,1\}^n$ of weight $k + q$. This lemma was used by Heged\H{u}s (2009) to give a solution to \emph{Galvin's problem}, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrube\v{s}, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-$2$ threshold circuits for computing some symmetric functions. In this paper, we formulate a robust version of Heged\H{u}s's lemma. Informally, this version says that if a polynomial of degree $o(q)$ vanishes at most of the points of weight $k$, then it vanishes at many points of weight $k+q$. We prove this lemma and give three different applications.]]></description>
      <pubDate>Wed, 01 Mar 2023 14:47:30 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.5</link>
      <guid>https://doi.org/10.46298/theoretics.23.5</guid>
      <author>Srinivasan, Srikanth</author>
      <dc:creator>Srinivasan, Srikanth</dc:creator>
      <content:encoded><![CDATA[Heged\H{u}s's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field $\mathbb{F}$ of characteristic $p > 0$ and for $q$ a power of $p$, the lemma says that any multilinear polynomial $P\in \mathbb{F}[x_1,\ldots,x_n]$ of degree less than $q$ that vanishes at all points in $\{0,1\}^n$ of some fixed Hamming weight $k\in [q,n-q]$ must also vanish at all points in $\{0,1\}^n$ of weight $k + q$. This lemma was used by Heged\H{u}s (2009) to give a solution to \emph{Galvin's problem}, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrube\v{s}, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-$2$ threshold circuits for computing some symmetric functions. In this paper, we formulate a robust version of Heged\H{u}s's lemma. Informally, this version says that if a polynomial of degree $o(q)$ vanishes at most of the points of weight $k$, then it vanishes at many points of weight $k+q$. We prove this lemma and give three different applications.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
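    <!-- A brute-force sanity check of the (non-robust) lemma in the smallest case q = 2, where "multilinear of degree less than q" means affine over F_2; the check is exhaustive over all affine polynomials and all weight-k points (parameters illustrative).

    from itertools import combinations, product

    def check_hegedus_f2(n, k, q=2):
        """Verify: every affine polynomial over F_2 vanishing on all weight-k points
        of {0,1}^n also vanishes on all weight-(k+q) points, for k in [q, n-q]."""
        pts = lambda w: [set(c) for c in combinations(range(n), w)]
        for coeffs in product([0, 1], repeat=n + 1):  # c_0 and c_1, ..., c_n
            ev = lambda s: (coeffs[0] + sum(coeffs[i + 1] for i in s)) % 2
            if all(ev(s) == 0 for s in pts(k)):
                assert all(ev(s) == 0 for s in pts(k + q)), (coeffs, k)
        return True

    print(check_hegedus_f2(n=5, k=2))  # True: no counterexample at n = 5, k = 2
    -->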
    <item>
      <title>Fast Symbolic Algorithms for Omega-Regular Games under Strong Transition Fairness</title>
      <description><![CDATA[We consider fixpoint algorithms for two-player games on graphs with $\omega$-regular winning conditions, where the environment is constrained by a strong transition fairness assumption. Strong transition fairness is a widely occurring special case of strong fairness, which requires that any execution is strongly fair with respect to a specified set of live edges: whenever the source vertex of a live edge is visited infinitely often along a play, the edge itself is traversed infinitely often along the play as well. We show that, surprisingly, strong transition fairness retains the algorithmic characteristics of the fixpoint algorithms for $\omega$-regular games -- the new algorithms have the same alternation depth as the classical algorithms but invoke a new type of predecessor operator. For Rabin games with $k$ pairs, the complexity of the new algorithm is $O(n^{k+2}k!)$ symbolic steps, which is independent of the number of live edges in the strong transition fairness assumption. Further, we show that GR(1) specifications with strong transition fairness assumptions can be solved with a 3-nested fixpoint algorithm, same as the usual algorithm. In contrast, strong fairness necessarily requires increasing the alternation depth depending on the number of fairness assumptions. We get symbolic algorithms for (generalized) Rabin, parity and GR(1) objectives under strong transition fairness assumptions as well as a direct symbolic algorithm for qualitative winning in stochastic $\omega$-regular games that runs in $O(n^{k+2}k!)$ symbolic steps, improving the state of the art. Finally, we have implemented a BDD-based synthesis engine based on our algorithm. We show on a set of synthetic and real benchmarks that our algorithm is scalable, parallelizable, and outperforms previous algorithms by orders of magnitude.]]></description>
      <pubDate>Fri, 24 Feb 2023 13:39:08 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.4</link>
      <guid>https://doi.org/10.46298/theoretics.23.4</guid>
      <author>Banerjee, Tamajit</author>
      <author>Majumdar, Rupak</author>
      <author>Mallik, Kaushik</author>
      <author>Schmuck, Anne-Kathrin</author>
      <author>Soudjani, Sadegh</author>
      <dc:creator>Banerjee, Tamajit</dc:creator>
      <dc:creator>Majumdar, Rupak</dc:creator>
      <dc:creator>Mallik, Kaushik</dc:creator>
      <dc:creator>Schmuck, Anne-Kathrin</dc:creator>
      <dc:creator>Soudjani, Sadegh</dc:creator>
      <content:encoded><![CDATA[We consider fixpoint algorithms for two-player games on graphs with $\omega$-regular winning conditions, where the environment is constrained by a strong transition fairness assumption. Strong transition fairness is a widely occurring special case of strong fairness, which requires that any execution is strongly fair with respect to a specified set of live edges: whenever the source vertex of a live edge is visited infinitely often along a play, the edge itself is traversed infinitely often along the play as well. We show that, surprisingly, strong transition fairness retains the algorithmic characteristics of the fixpoint algorithms for $\omega$-regular games -- the new algorithms have the same alternation depth as the classical algorithms but invoke a new type of predecessor operator. For Rabin games with $k$ pairs, the complexity of the new algorithm is $O(n^{k+2}k!)$ symbolic steps, which is independent of the number of live edges in the strong transition fairness assumption. Further, we show that GR(1) specifications with strong transition fairness assumptions can be solved with a 3-nested fixpoint algorithm, same as the usual algorithm. In contrast, strong fairness necessarily requires increasing the alternation depth depending on the number of fairness assumptions. We get symbolic algorithms for (generalized) Rabin, parity and GR(1) objectives under strong transition fairness assumptions as well as a direct symbolic algorithm for qualitative winning in stochastic $\omega$-regular games that runs in $O(n^{k+2}k!)$ symbolic steps, improving the state of the art. Finally, we have implemented a BDD-based synthesis engine based on our algorithm. We show on a set of synthetic and real benchmarks that our algorithm is scalable, parallelizable, and outperforms previous algorithms by orders of magnitude.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Characterizing Positionality in Games of Infinite Duration over Infinite Graphs</title>
      <description><![CDATA[We study turn-based quantitative games of infinite duration opposing two antagonistic players and played over graphs. This model is widely accepted as providing the adequate framework for formalizing the synthesis question for reactive systems. This important application motivates the question of strategy complexity: which valuations (or payoff functions) admit optimal positional strategies (without memory)? Valuations for which both players have optimal positional strategies have been characterized by Gimbert and Zielonka for finite graphs and by Colcombet and Niwi\'nski for infinite graphs. However, for reactive synthesis, existence of optimal positional strategies for the opponent (which models an antagonistic environment) is irrelevant. Despite this fact, not much is known about valuations for which the protagonist admits optimal positional strategies, regardless of the opponent. In this work, we characterize valuations which admit such strategies over infinite game graphs. Our characterization uses the vocabulary of universal graphs, which has also proved useful in understanding recent breakthrough results regarding the complexity of parity games. More precisely, we show that a valuation admitting universal graphs which are monotone and well-ordered is positional over all game graphs, and -- more surprisingly -- that the converse is also true for valuations admitting neutral colors. We prove the applicability and elegance of the framework by unifying a number of known positionality results, proving new ones, and establishing closure under lexicographical products. Finally, we discuss a class of prefix-independent positional objectives which is closed under countable unions.]]></description>
      <pubDate>Tue, 31 Jan 2023 08:56:28 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.3</link>
      <guid>https://doi.org/10.46298/theoretics.23.3</guid>
      <author>Ohlmann, Pierre</author>
      <dc:creator>Ohlmann, Pierre</dc:creator>
      <content:encoded><![CDATA[We study turn-based quantitative games of infinite duration opposing two antagonistic players and played over graphs. This model is widely accepted as providing the adequate framework for formalizing the synthesis question for reactive systems. This important application motivates the question of strategy complexity: which valuations (or payoff functions) admit optimal positional strategies (without memory)? Valuations for which both players have optimal positional strategies have been characterized by Gimbert and Zielonka for finite graphs and by Colcombet and Niwi\'nski for infinite graphs. However, for reactive synthesis, existence of optimal positional strategies for the opponent (which models an antagonistic environment) is irrelevant. Despite this fact, not much is known about valuations for which the protagonist admits optimal positional strategies, regardless of the opponent. In this work, we characterize valuations which admit such strategies over infinite game graphs. Our characterization uses the vocabulary of universal graphs, which has also proved useful in understanding recent breakthrough results regarding the complexity of parity games. More precisely, we show that a valuation admitting universal graphs which are monotone and well-ordered is positional over all game graphs, and -- more surprisingly -- that the converse is also true for valuations admitting neutral colors. We prove the applicability and elegance of the framework by unifying a number of known positionality results, proving new ones, and establishing closure under lexicographical products. Finally, we discuss a class of prefix-independent positional objectives which is closed under countable unions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Conditional Dichotomy of Boolean Ordered Promise CSPs</title>
      <description><![CDATA[Promise Constraint Satisfaction Problems (PCSPs) are a generalization of Constraint Satisfaction Problems (CSPs) where each predicate has a strong and a weak form; given an instance, the objective is to distinguish whether the strong form can be satisfied or even the weak form cannot be satisfied. Since their formal introduction by Austrin, Guruswami, and H\aa stad, there has been a flurry of works on PCSPs [BBKO19,KO19,WZ20]. The key tool in studying PCSPs is the algebraic framework developed in the context of CSPs, where the closure properties of the satisfying solutions, known as the polymorphisms, are analyzed. The polymorphisms of PCSPs are much richer than those of CSPs. In the Boolean case, we still do not know whether a dichotomy for PCSPs exists analogous to Schaefer's dichotomy result for CSPs. In this paper, we study a special case of Boolean PCSPs, namely Boolean Ordered PCSPs, where the Boolean PCSPs have the predicate $x \leq y$. In the algebraic framework, this is the special case of Boolean PCSPs when the polymorphisms are monotone functions. We prove that Boolean Ordered PCSPs exhibit a computational dichotomy assuming the Rich 2-to-1 Conjecture [BKM21], which is a perfect completeness surrogate of the Unique Games Conjecture. Assuming the Rich 2-to-1 Conjecture, we prove that a Boolean Ordered PCSP can be solved in polynomial time if for every $\epsilon>0$, it has polymorphisms where each coordinate has Shapley value at most $\epsilon$, else it is NP-hard. The algorithmic part of our dichotomy is based on a structural lemma that Boolean monotone functions with each coordinate having low Shapley value have arbitrarily large threshold functions as minors. The hardness part proceeds by showing that the Shapley value is consistent under a uniformly random 2-to-1 minor. Of independent interest, we show that the Shapley value can be inconsistent under an adversarial 2-to-1 minor.]]></description>
      <pubDate>Wed, 25 Jan 2023 08:45:00 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.2</link>
      <guid>https://doi.org/10.46298/theoretics.23.2</guid>
      <author>Brakensiek, Joshua</author>
      <author>Guruswami, Venkatesan</author>
      <author>Sandeep, Sai</author>
      <dc:creator>Brakensiek, Joshua</dc:creator>
      <dc:creator>Guruswami, Venkatesan</dc:creator>
      <dc:creator>Sandeep, Sai</dc:creator>
      <content:encoded><![CDATA[Promise Constraint Satisfaction Problems (PCSPs) are a generalization of Constraint Satisfaction Problems (CSPs) where each predicate has a strong and a weak form; given an instance, the objective is to distinguish whether the strong form can be satisfied or even the weak form cannot be satisfied. Since their formal introduction by Austrin, Guruswami, and H\aa stad, there has been a flurry of works on PCSPs [BBKO19,KO19,WZ20]. The key tool in studying PCSPs is the algebraic framework developed in the context of CSPs, where the closure properties of the satisfying solutions, known as the polymorphisms, are analyzed. The polymorphisms of PCSPs are much richer than those of CSPs. In the Boolean case, we still do not know whether a dichotomy for PCSPs exists analogous to Schaefer's dichotomy result for CSPs. In this paper, we study a special case of Boolean PCSPs, namely Boolean Ordered PCSPs, where the Boolean PCSPs have the predicate $x \leq y$. In the algebraic framework, this is the special case of Boolean PCSPs when the polymorphisms are monotone functions. We prove that Boolean Ordered PCSPs exhibit a computational dichotomy assuming the Rich 2-to-1 Conjecture [BKM21], which is a perfect completeness surrogate of the Unique Games Conjecture. Assuming the Rich 2-to-1 Conjecture, we prove that a Boolean Ordered PCSP can be solved in polynomial time if for every $\epsilon>0$, it has polymorphisms where each coordinate has Shapley value at most $\epsilon$, else it is NP-hard. The algorithmic part of our dichotomy is based on a structural lemma that Boolean monotone functions with each coordinate having low Shapley value have arbitrarily large threshold functions as minors. The hardness part proceeds by showing that the Shapley value is consistent under a uniformly random 2-to-1 minor. Of independent interest, we show that the Shapley value can be inconsistent under an adversarial 2-to-1 minor.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
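    <!-- A brute-force evaluator for the Shapley value of the coordinates of a monotone Boolean function, the quantity the dichotomy above is phrased in terms of (exponential time, small n only; the example functions are illustrative).

    from itertools import combinations
    from math import factorial

    def shapley(f, n):
        """Shapley value of each coordinate of f : {0,1}^n to {0,1}, with f given
        as a function of the frozenset of coordinates set to 1."""
        phi = [0.0] * n
        for i in range(n):
            others = [j for j in range(n) if j != i]
            for r in range(n):
                for S in combinations(others, r):
                    S = frozenset(S)
                    weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                    phi[i] += weight * (f(S | {i}) - f(S))
        return phi

    maj3 = lambda S: int(len(S) >= 2)   # majority of three: each value is 1/3
    dict0 = lambda S: int(0 in S)       # dictator on coordinate 0: value 1 there
    print(shapley(maj3, 3), shapley(dict0, 3))
    -->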
    <item>
      <title>Characterizing Omega-Regularity through Finite-Memory Determinacy of Games on Infinite Graphs</title>
      <description><![CDATA[We consider zero-sum games on infinite graphs, with objectives specified as sets of infinite words over some alphabet of colors. A well-studied class of objectives is the one of $\omega$-regular objectives, due to its relation to many natural problems in theoretical computer science. We focus on the strategy complexity question: given an objective, how much memory does each player require to play as well as possible? A classical result is that finite-memory strategies suffice for both players when the objective is $\omega$-regular. We show a reciprocal of that statement: when both players can play optimally with a chromatic finite-memory structure (i.e., whose updates can only observe colors) in all infinite game graphs, then the objective must be $\omega$-regular. This provides a game-theoretic characterization of $\omega$-regular objectives, and this characterization can help in obtaining memory bounds. Moreover, a by-product of our characterization is a new one-to-two-player lift: to show that chromatic finite-memory structures suffice to play optimally in two-player games on infinite graphs, it suffices to show it in the simpler case of one-player games on infinite graphs. We illustrate our results with the family of discounted-sum objectives, for which $\omega$-regularity depends on the value of some parameters.]]></description>
      <pubDate>Mon, 16 Jan 2023 09:15:47 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.23.1</link>
      <guid>https://doi.org/10.46298/theoretics.23.1</guid>
      <author>Bouyer, Patricia</author>
      <author>Randour, Mickael</author>
      <author>Vandenhove, Pierre</author>
      <dc:creator>Bouyer, Patricia</dc:creator>
      <dc:creator>Randour, Mickael</dc:creator>
      <dc:creator>Vandenhove, Pierre</dc:creator>
      <content:encoded><![CDATA[We consider zero-sum games on infinite graphs, with objectives specified as sets of infinite words over some alphabet of colors. A well-studied class is that of $\omega$-regular objectives, due to its relation to many natural problems in theoretical computer science. We focus on the strategy complexity question: given an objective, how much memory does each player require to play as well as possible? A classical result is that finite-memory strategies suffice for both players when the objective is $\omega$-regular. We show a converse of that statement: when both players can play optimally with a chromatic finite-memory structure (i.e., whose updates can only observe colors) in all infinite game graphs, then the objective must be $\omega$-regular. This provides a game-theoretic characterization of $\omega$-regular objectives, and this characterization can help in obtaining memory bounds. Moreover, a by-product of our characterization is a new one-to-two-player lift: to show that chromatic finite-memory structures suffice to play optimally in two-player games on infinite graphs, it suffices to show it in the simpler case of one-player games on infinite graphs. We illustrate our results with the family of discounted-sum objectives, for which $\omega$-regularity depends on the value of some parameters.]]></content:encoded>
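      <!--
        Illustration (ours, not from the paper): a chromatic finite-memory
        structure updates on colors alone and never inspects the vertices of the
        (possibly infinite) arena. A toy Python sketch; the class name, the
        objective, and the example run are ours, chosen only to make the
        definition concrete.

        from dataclasses import dataclass
        from typing import Callable, Iterable

        @dataclass(frozen=True)
        class ChromaticMemory:
            states: frozenset
            init: object
            update: Callable[[object, str], object]  # reads colors only

            def run(self, colors: Iterable[str]):
                m = self.init
                for c in colors:
                    m = self.update(m, c)
                    yield m

        # One bit of memory suffices for the toy objective
        # "some 'a' is eventually followed by some 'b'":
        # remember whether an 'a' has already occurred.
        mem = ChromaticMemory(
            states=frozenset({"no_a_yet", "seen_a"}),
            init="no_a_yet",
            update=lambda m, c: "seen_a" if (m == "seen_a" or c == "a") else "no_a_yet",
        )

        print(list(mem.run("cabc")))
        # ['no_a_yet', 'seen_a', 'seen_a', 'seen_a']
      -->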
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Perfect Matching in Random Graphs is as Hard as Tseitin</title>
      <description><![CDATA[We study the complexity of proving that a sparse random regular graph on an odd number of vertices does not have a perfect matching, and related problems involving each vertex being matched some pre-specified number of times. We show that this requires proofs of degree $\Omega(n / \log n)$ in the Polynomial Calculus (over fields of characteristic $\ne 2$) and Sum-of-Squares proof systems, and exponential size in the bounded-depth Frege proof system. This resolves a question by Razborov asking whether the Lov\'asz-Schrijver proof system requires $n^\delta$ rounds to refute these formulas for some $\delta > 0$. The results are obtained by a worst-case to average-case reduction of these formulas relying on a topological embedding theorem which may be of independent interest.]]></description>
      <pubDate>Wed, 21 Dec 2022 11:29:42 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.22.2</link>
      <guid>https://doi.org/10.46298/theoretics.22.2</guid>
      <author>Austrin, Per</author>
      <author>Risse, Kilian</author>
      <dc:creator>Austrin, Per</dc:creator>
      <dc:creator>Risse, Kilian</dc:creator>
      <content:encoded><![CDATA[We study the complexity of proving that a sparse random regular graph on an odd number of vertices does not have a perfect matching, and related problems involving each vertex being matched some pre-specified number of times. We show that this requires proofs of degree $\Omega(n / \log n)$ in the Polynomial Calculus (over fields of characteristic $\ne 2$) and Sum-of-Squares proof systems, and exponential size in the bounded-depth Frege proof system. This resolves a question by Razborov asking whether the Lov\'asz-Schrijver proof system requires $n^\delta$ rounds to refute these formulas for some $\delta > 0$. The results are obtained by a worst-case to average-case reduction of these formulas relying on a topological embedding theorem which may be of independent interest.]]></content:encoded>
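      <!--
        For concreteness, a standard polynomial encoding of the perfect matching
        principle that Polynomial Calculus refutations operate on; this is the
        usual textbook encoding and may differ in details from the exact
        formulation in the paper.

        % One {0,1}-variable x_e per edge of G = (V,E), with |V| odd.
        \begin{align*}
          x_e^2 - x_e &= 0 && \text{for each } e \in E \quad \text{(Boolean axioms)}\\
          \sum_{e \ni v} x_e - 1 &= 0 && \text{for each } v \in V \quad \text{(each vertex matched exactly once)}
        \end{align*}
        % Summing the vertex equations counts each selected edge twice, so any
        % solution would give 2 \sum_e x_e = |V|, which is impossible for odd |V|
        % over a field of characteristic other than 2; the theorem says that
        % certifying this in Polynomial Calculus nevertheless requires degree
        % \Omega(n / \log n) for sparse random regular graphs.
      -->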
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Robustly Self-Ordered Graphs: Constructions and Applications to Property Testing</title>
      <description><![CDATA[A graph $G$ is called self-ordered (a.k.a. asymmetric) if the identity permutation is its only automorphism. Equivalently, there is a unique isomorphism from $G$ to any graph that is isomorphic to $G$. We say that $G=(V,E)$ is robustly self-ordered if the size of the symmetric difference between $E$ and the edge-set of the graph obtained by permuting $V$ using any permutation $\pi:V\to V$ is proportional to the number of non-fixed-points of $\pi$. In this work, we initiate the study of the structure, construction, and utility of robustly self-ordered graphs. We show that robustly self-ordered bounded-degree graphs exist (in abundance), and that they can be constructed efficiently, in a strong sense. Specifically, given the index of a vertex in such a graph, it is possible to find all its neighbors in time that is polynomial in the length of the index (i.e., poly-logarithmic in the size of the graph). We also consider graphs of unbounded degree, seeking correspondingly unbounded robustness parameters. We again demonstrate that such graphs (of linear degree) exist (in abundance), and that they can be constructed efficiently, in a strong sense. This turns out to require very different tools. Specifically, we show that the construction of such graphs reduces to the construction of non-malleable two-source extractors (with very weak parameters but with some additional natural features). We demonstrate that robustly self-ordered bounded-degree graphs are useful for obtaining lower bounds on the query complexity of testing graph properties in both the bounded-degree and the dense graph models. One of the results that we obtain, via such a reduction, is a subexponential separation between the query complexities of testing and tolerant testing of graph properties in the bounded-degree graph model.]]></description>
      <pubDate>Wed, 21 Dec 2022 11:27:11 +0000</pubDate>
      <link>https://doi.org/10.46298/theoretics.22.1</link>
      <guid>https://doi.org/10.46298/theoretics.22.1</guid>
      <author>Goldreich, Oded</author>
      <author>Wigderson, Avi</author>
      <dc:creator>Goldreich, Oded</dc:creator>
      <dc:creator>Wigderson, Avi</dc:creator>
      <content:encoded><![CDATA[A graph $G$ is called self-ordered (a.k.a. asymmetric) if the identity permutation is its only automorphism. Equivalently, there is a unique isomorphism from $G$ to any graph that is isomorphic to $G$. We say that $G=(V,E)$ is robustly self-ordered if the size of the symmetric difference between $E$ and the edge-set of the graph obtained by permuting $V$ using any permutation $\pi:V\to V$ is proportional to the number of non-fixed-points of $\pi$. In this work, we initiate the study of the structure, construction, and utility of robustly self-ordered graphs. We show that robustly self-ordered bounded-degree graphs exist (in abundance), and that they can be constructed efficiently, in a strong sense. Specifically, given the index of a vertex in such a graph, it is possible to find all its neighbors in time that is polynomial in the length of the index (i.e., poly-logarithmic in the size of the graph). We also consider graphs of unbounded degree, seeking correspondingly unbounded robustness parameters. We again demonstrate that such graphs (of linear degree) exist (in abundance), and that they can be constructed efficiently, in a strong sense. This turns out to require very different tools. Specifically, we show that the construction of such graphs reduces to the construction of non-malleable two-source extractors (with very weak parameters but with some additional natural features). We demonstrate that robustly self-ordered bounded-degree graphs are useful for obtaining lower bounds on the query complexity of testing graph properties in both the bounded-degree and the dense graph models. One of the results that we obtain, via such a reduction, is a subexponential separation between the query complexities of testing and tolerant testing of graph properties in the bounded-degree graph model.]]></content:encoded>
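      <!--
        Illustration (ours, not from the paper): the quantitative definition above
        can be checked by brute force on very small graphs. The sketch below
        computes the minimum, over all non-identity vertex permutations pi, of
        |E symdiff pi(E)| divided by the number of non-fixed points of pi; the
        function name and the example graph are ours, and the n! enumeration is
        feasible only for tiny n.

        from itertools import permutations

        def robustness(n, edges):
            E = {frozenset(e) for e in edges}
            best = float("inf")
            for pi in permutations(range(n)):
                moved = sum(1 for v in range(n) if pi[v] != v)
                if moved == 0:
                    continue  # skip the identity permutation
                piE = {frozenset((pi[u], pi[v])) for (u, v) in map(tuple, E)}
                best = min(best, len(E ^ piE) / moved)
            return best

        # A 6-vertex asymmetric graph: the path 0-1-2-3-4-5 plus the chord {1,3}.
        # Any strictly positive output certifies that the graph is self-ordered;
        # robustly self-ordered families keep this ratio bounded below as n grows.
        print(robustness(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 3)]))
      -->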
      <slash:comments>0</slash:comments>
    </item>
  </channel>
</rss>
