2024
For a permutation $\pi:[k] \to [k]$, a function $f:[n] \to \mathbb{R}$ contains a $\pi$-appearance if there exists $1 \leq i_1 < i_2 < \dots < i_k \leq n$ such that for all $s,t \in [k]$, $f(i_s) < f(i_t)$ if and only if $\pi(s) < \pi(t)$. The function is $\pi$-free if it has no $\pi$-appearances. In this paper, we investigate the problem of testing whether an input function $f$ is $\pi$-free or whether $f$ differs on at least $\varepsilon n$ values from every $\pi$-free function. This generalizes the well-studied problem of monotonicity testing and was first studied by Newman, Rabinovich, Rajendraprasad, and Sohler (Random Structures and Algorithms 2019). We show that for all constants $k \in \mathbb{N}$, $\varepsilon \in (0,1)$, and permutation $\pi:[k] \to [k]$, there is a one-sided error $\varepsilon$-testing algorithm for $\pi$-freeness of functions $f:[n] \to \mathbb{R}$ that makes $\tilde{O}(n^{o(1)})$ queries. This significantly improves upon the previous best upper bound of $O(n^{1 - 1/(k-1)})$ by Ben-Eliezer and Canonne (SODA 2018). Our algorithm is adaptive, while the earlier best upper bound is known to be tight for nonadaptive algorithms.
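As a quick illustration of the $\pi$-appearance definition above (not the sublinear tester from the abstract), one can check for a pattern by brute force directly from the order condition; the function below is a hypothetical helper written for this purpose.

```python
from itertools import combinations

def has_pi_appearance(f, pi):
    """Return True if the sequence f (with f[i] playing the role of f(i+1))
    contains a pi-appearance: indices i_1 < ... < i_k such that
    f[i_s] < f[i_t] exactly when pi[s] < pi[t].

    This exhaustive check runs in O(n^k) time and only illustrates the
    definition; the abstract's tester makes far fewer queries.
    """
    k = len(pi)
    for idx in combinations(range(len(f)), k):
        if all((f[idx[s]] < f[idx[t]]) == (pi[s] < pi[t])
               for s in range(k) for t in range(k)):
            return True
    return False

# The pattern (1, 3, 2) means "low, high, middle" in that left-to-right order.
print(has_pi_appearance([1.0, 5.0, 3.0, 2.0], (1, 3, 2)))  # True (indices 0, 1, 2)
print(has_pi_appearance([4.0, 3.0, 2.0, 1.0], (1, 2, 3)))  # False: f is decreasing
```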
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust learning, it is surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions and more general loss functions, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g., noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
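For intuition only, here is a minimal sketch of a blackbox reduction of this flavour: run a given realizable learner on every labelling of a small unlabelled sample and then pick the empirically best output on fresh data. The interface and names are hypothetical, and this sketch is not claimed to be the paper's exact three-line construction.

```python
from itertools import product

def agnostic_from_realizable(realizable_learner, unlabeled_xs, fresh_sample):
    """Sketch: turn a realizable learner into an agnostic one.

    realizable_learner(labeled_sample) -> hypothesis, a callable x -> {0, 1}.
    unlabeled_xs: points drawn from the marginal distribution (labels unknown).
    fresh_sample: (x, y) pairs used to select the best candidate hypothesis.
    Enumerating all 2^m labelings is exponential; purely illustrative.
    """
    candidates = []
    for labels in product([0, 1], repeat=len(unlabeled_xs)):
        candidates.append(realizable_learner(list(zip(unlabeled_xs, labels))))
    # Empirical risk minimization over the finite candidate set.
    def risk(h):
        return sum(h(x) != y for x, y in fresh_sample) / len(fresh_sample)
    return min(candidates, key=risk)

# Toy usage with a trivial realizable "learner" for threshold functions.
def learn_threshold(sample):
    ones = [x for x, y in sample if y == 1]
    t = min(ones) if ones else float("inf")
    return lambda x, t=t: int(x >= t)

h = agnostic_from_realizable(learn_threshold, [0.2, 0.6, 0.9],
                             [(0.1, 0), (0.7, 1), (0.95, 1)])
print(h(0.8))  # 1 under the selected threshold
```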
For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that "constructiveness" serves as a dividing line between many weak lower bounds we know how to prove, and strong lower bounds against $P$, $ZPP$, and $BPP$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories. 1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations ranging from $EXP \neq BPP$ to even $P \neq NP$. 2. Our second set of results shows that for most major open problems in lower bounds against $P$, $ZPP$, and $BPP$, including $P \neq NP$, $P \neq PSPACE$, $P \neq PP$, $ZPP \neq EXP$, and $BPP \neq NEXP$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $P \neq NP$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $BPP \neq NEXP$ [Dolev, Fandina and Gutfreund, CIAC 2013]. 3. Our […]
Automated verification has become an essential part of the security evaluation of cryptographic protocols. In this context, privacy-type properties are often modelled by indistinguishability statements, expressed as behavioural equivalences in a process calculus. In this paper we contribute to both the theory and practice of this verification problem. We establish new complexity results for static equivalence, trace equivalence, and labelled bisimilarity, and provide a decision procedure for these equivalences in the case of a bounded number of protocol sessions. Our procedure is the first to decide trace equivalence and labelled bisimilarity exactly for a large variety of cryptographic primitives -- those that can be represented by a subterm convergent destructor rewrite system. We also implemented the procedure in a new tool, DeepSec. Through extensive experiments, we show that it is significantly more efficient than other similar tools, while at the same time extending the scope of the protocols that can be analysed.
PPSZ, for a long time the fastest known algorithm for $k$-SAT, works by going through the variables of the input formula in random order; each variable is then set randomly to $0$ or $1$, unless the correct value can be inferred by an efficiently implementable rule (such as small-width resolution, or being implied by a small set of clauses). We show that PPSZ performs exponentially better than previously known, for all $k \geq 3$. For Unique-$3$-SAT we bound its running time by $O(1.306973^{n})$, which is somewhat better than the algorithm of Hansen, Kaplan, Zamir, and Zwick, which runs in time $O(1.306995^n)$. Before that, the best known upper bound for Unique-$3$-SAT was $O(1.3070319^n)$. All improvements are achieved without changing the original PPSZ. The core idea is to pretend that PPSZ does not process the variables in uniformly random order, but according to a carefully designed distribution. We write "pretend" since this can be done without any actual change to the algorithm.
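To make the description above concrete, here is a minimal PPSZ-style sketch in which the inference rule is plain unit propagation (weaker than the bounded-resolution rules actually analysed); all names are illustrative and this is not the tuned algorithm from the abstract.

```python
import random

def ppsz_single_run(clauses, n):
    """One PPSZ-style pass over a CNF with variables 1..n.

    clauses: iterable of frozensets of non-zero ints (positive = true literal).
    Variables are visited in uniformly random order; a variable is set
    randomly unless a unit clause forces its value. Returns a satisfying
    assignment (dict) or None if this run fails.
    """
    assignment = {}
    active = [set(c) for c in clauses]
    for v in random.sample(range(1, n + 1), n):
        forced = None
        for c in active:
            if c == {v}:
                forced = True
            elif c == {-v}:
                forced = False
        value = forced if forced is not None else (random.random() < 0.5)
        assignment[v] = value
        lit = v if value else -v
        active = [c - {-lit} for c in active if lit not in c]
        if any(not c for c in active):      # an empty clause: contradiction
            return None
    return assignment

def ppsz(clauses, n, tries=20000):
    for _ in range(tries):
        a = ppsz_single_run(clauses, n)
        if a is not None:
            return a
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(ppsz([frozenset({1, 2}), frozenset({-1, 3}), frozenset({-2, -3})], 3))
```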
Elkin, Filtser, and Neiman (2017) recently introduced the concept of a {\it terminal embedding} from one metric space $(X,d_X)$ to another $(Y,d_Y)$ with a set of designated terminals $T\subset X$. Such an embedding $f$ is said to have distortion $\rho\ge 1$ if $\rho$ is the smallest value such that there exists a constant $C>0$ satisfying \begin{equation*} \forall x\in T\ \forall q\in X,\ C d_X(x, q) \le d_Y(f(x), f(q)) \le C \rho d_X(x, q) . \end{equation*} When $X,Y$ are both Euclidean metrics with $Y$ being $m$-dimensional, Narayanan and Nelson (2019), following work of Mahabadi, Makarychev, Makarychev, and Razenshteyn (2018), recently showed that distortion $1+\epsilon$ is achievable via such a terminal embedding with $m = O(\epsilon^{-2}\log n)$ for $n := |T|$. This generalizes the Johnson-Lindenstrauss lemma, which only preserves distances within $T$ and not to $T$ from the rest of space. The downside of prior work is that evaluating their embedding on some $q\in \mathbb{R}^d$ required solving a semidefinite program with $\Theta(n)$ constraints in~$m$ variables and thus required some superlinear $\mathrm{poly}(n)$ runtime. Our main contribution in this work is to give a new data structure for computing terminal embeddings. We show how to pre-process $T$ to obtain an almost linear-space data structure that supports computing the terminal embedding image of any $q\in\mathbb{R}^d$ in sublinear time $O^* (n^{1-\Theta(\epsilon^2)} + d)$. To accomplish this, we leverage […]
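To make the distortion definition concrete, the snippet below measures the empirical terminal-to-query distortion of an embedding. It uses a plain Gaussian Johnson-Lindenstrauss map purely for illustration; the terminal embeddings discussed above are instead built from $T$ itself, and the parameters here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 200, 40                       # |T|, ambient dimension, target dimension
T = rng.normal(size=(n, d))                 # terminal set
queries = rng.normal(size=(20, d))          # arbitrary query points q in R^d

# An oblivious Gaussian JL map; a genuine terminal embedding would depend on T.
G = rng.normal(size=(m, d)) / np.sqrt(m)
f = lambda x: G @ x

ratios = []
for x in T:
    for q in queries:
        ratios.append(np.linalg.norm(f(x) - f(q)) / np.linalg.norm(x - q))

# With the best constant C, the distortion is the max ratio over the min ratio.
print("empirical distortion over T x queries:", max(ratios) / min(ratios))
```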
We provide more sample-efficient versions of some basic routines in quantum data analysis, along with simpler proofs. In particular, we give a quantum "Threshold Search" algorithm that requires only $O((\log^2 m)/\epsilon^2)$ samples of a $d$-dimensional state $\rho$. That is, given observables $0 \le A_1, A_2, \ldots, A_m \le 1$ such that $\mathrm{tr}(\rho A_i) \ge 1/2$ for at least one $i$, the algorithm finds $j$ with $\mathrm{tr}(\rho A_j) \ge 1/2-\epsilon$. As a consequence, we obtain a Shadow Tomography algorithm requiring only $\tilde{O}((\log^2 m)(\log d)/\epsilon^4)$ samples, which simultaneously achieves the best known dependence on each parameter $m$, $d$, $\epsilon$. This yields the same sample complexity for quantum Hypothesis Selection among $m$ states; we also give an alternative Hypothesis Selection method using $\tilde{O}((\log^3 m)/\epsilon^2)$ samples.
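For contrast with the $O((\log^2 m)/\epsilon^2)$ bound above, the naive baseline estimates every $\mathrm{tr}(\rho A_i)$ separately from fresh measurement outcomes, costing $O((m \log m)/\epsilon^2)$ samples in total. The classical simulation below is only meant to illustrate that baseline; it is not the paper's algorithm, and the names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def naive_threshold_search(trace_values, eps, delta=0.01):
    """Naive Threshold Search baseline via independent estimation.

    trace_values[i] plays the role of tr(rho A_i); each "measurement" of A_i
    is simulated as a Bernoulli(trace_values[i]) outcome consuming one copy.
    Uses O(log(m/delta)/eps^2) copies per observable, i.e. m times that in
    total, versus the O(log^2 m / eps^2) total of the abstract's algorithm.
    """
    m = len(trace_values)
    reps = int(np.ceil(2 * np.log(2 * m / delta) / eps**2))
    estimates = [rng.binomial(reps, p) / reps for p in trace_values]
    for j, est in enumerate(estimates):
        if est >= 0.5 - eps / 2:            # whp the true value is >= 1/2 - eps
            return j, m * reps              # found index, total copies consumed
    return None, m * reps

traces = [0.10] * 99 + [0.60]               # only the last observable exceeds 1/2
print(naive_threshold_search(traces, eps=0.1))
```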
We show that a randomly chosen linear map over a finite field gives a good hash function in the $\ell_\infty$ sense. More concretely, consider a set $S \subset \mathbb{F}_q^n$ and a randomly chosen linear map $L : \mathbb{F}_q^n \to \mathbb{F}_q^t$ with $q^t$ taken to be sufficiently smaller than $|S|$. Let $U_S$ denote a random variable distributed uniformly on $S$. Our main theorem shows that, with high probability over the choice of $L$, the random variable $L(U_S)$ is close to uniform in the $\ell_\infty$ norm. In other words, {\em every} element in the range $\mathbb{F}_q^t$ has about the same number of elements in $S$ mapped to it. This complements the widely-used Leftover Hash Lemma (LHL), which proves the analogous statement under the statistical, or $\ell_1$, distance (for a richer class of functions), as well as prior work on the expected largest 'bucket size' in linear hash functions [ADMPT99]. By known bounds from the load balancing literature [RS98], our results are tight and show that linear functions hash as well as a truly random function, up to a constant factor in the entropy loss. Our proof leverages a connection between linear hashing and the finite field Kakeya problem and extends some of the tools developed in this area, in particular the polynomial method.
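A small experiment makes the $\ell_\infty$ statement tangible: draw a random $t \times n$ matrix over a prime field, hash a random set $S$, and compare bucket sizes to the average $|S|/q^t$. The parameters below are arbitrary and the snippet is illustrative only.

```python
import itertools, random
from collections import Counter

q, n, t = 5, 4, 2                            # map F_5^4 -> F_5^2, so q^t = 25 buckets
random.seed(0)

domain = list(itertools.product(range(q), repeat=n))
S = random.sample(domain, 400)               # |S| well above q^t

# Random linear map L(x) = Mx over F_q, given by a t x n matrix M.
M = [[random.randrange(q) for _ in range(n)] for _ in range(t)]
def L(x):
    return tuple(sum(M[i][j] * x[j] for j in range(n)) % q for i in range(t))

counts = Counter(L(x) for x in S)
buckets = list(itertools.product(range(q), repeat=t))
print("average bucket size:", len(S) / q**t)
print("largest bucket:", max(counts.values()))
print("smallest bucket:", min(counts[b] for b in buckets))
```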
This paper investigates a series of optimization problems for one-counter Markov decision processes (MDPs) and integer-weighted MDPs with finite state space. Specifically, it considers problems addressing termination probabilities and expected termination times for one-counter MDPs, as well as satisfaction probabilities of energy objectives, conditional and partial expectations, satisfaction probabilities of constraints on the total accumulated weight, the computation of quantiles for the accumulated weight, and the conditional value-at-risk for accumulated weights for integer-weighted MDPs. Although algorithmic results are available for some special instances, the decidability status of the decision versions of these problems is unknown in general. The paper demonstrates that these optimization problems are inherently mathematically difficult by providing polynomial-time reductions from the Positivity problem for linear recurrence sequences. This is a well-known number-theoretic problem whose decidability status has been open for decades, and it is known that decidability of the Positivity problem would have far-reaching consequences in analytic number theory. Hence, the reductions presented in the paper show that an algorithmic solution to any of the investigated problems is not possible without a major breakthrough in analytic number theory. The reductions rely on the construction of MDP-gadgets that encode the initial values and linear recurrence relations of linear […]
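For reference, the Positivity problem asks, given the integer initial values and recurrence coefficients of a linear recurrence sequence, whether every term of the sequence is nonnegative (a strict-positivity variant is also common). The sketch below only inspects a finite prefix, which can certify 'no' instances but never 'yes' ones; it is illustrative, not a decision procedure.

```python
def lrs_terms(initial, coeffs, count):
    """Terms of the LRS  u_{k+d} = coeffs[0]*u_{k+d-1} + ... + coeffs[d-1]*u_k."""
    u = list(initial)
    d = len(coeffs)
    while len(u) < count:
        u.append(sum(c * x for c, x in zip(coeffs, reversed(u[-d:]))))
    return u[:count]

def positivity_up_to(initial, coeffs, count=1000):
    """Check the first `count` terms only: a negative term refutes positivity,
    but no finite prefix can certify it in general."""
    terms = lrs_terms(initial, coeffs, count)
    bad = next((k for k, v in enumerate(terms) if v < 0), None)
    return "no violation in prefix" if bad is None else f"term {bad} is negative"

# u_{k+2} = 3 u_{k+1} - 2 u_k with u_0 = u_1 = 1: the constant sequence 1, 1, ...
print(positivity_up_to([1, 1], [3, -2]))
# u_{k+2} = u_{k+1} - u_k with u_0 = 1, u_1 = 0: periodic 1, 0, -1, -1, 0, 1, ...
print(positivity_up_to([1, 0], [1, -1]))
```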
A continuous constraint satisfaction problem (CCSP) is a constraint satisfaction problem (CSP) with an interval domain $U \subset \mathbb{R}$. We engage in a systematic study to classify CCSPs that are complete for the Existential Theory of the Reals, i.e., ER-complete. To define this class, we first consider the problem ETR, which also stands for Existential Theory of the Reals. In an instance of this problem we are given some sentence of the form $\exists x_1, \ldots, x_n \in \mathbb{R} : \Phi(x_1, \ldots, x_n)$, where $\Phi$ is a well-formed quantifier-free formula consisting of the symbols $\{0, 1, +, \cdot, \geq, >, \wedge, \vee, \neg\}$; the goal is to check whether this sentence is true. Now the class ER is the family of all problems that admit a polynomial-time many-one reduction to ETR. It is known that NP $\subseteq$ ER $\subseteq$ PSPACE. We restrict our attention to CCSPs with addition constraints ($x + y = z$) and some other mild technical conditions. Previously, it was shown that multiplication constraints ($x \cdot y = z$), squaring constraints ($x^2 = y$), or inversion constraints ($x\cdot y = 1$) are sufficient to establish ER-completeness. We extend this in the strongest possible sense for equality constraints as follows. We show that CCSPs (with addition constraints and some other mild technical conditions) that have any one well-behaved curved equality constraint ($f(x,y) = 0$) are ER-complete. We further extend our results to inequality constraints. […]
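As a concrete illustration of the ETR syntax just described (an example of ours, not from the paper), the following sentence is a true instance; note that equality is expressed with $\geq$ and $\neg(\cdot > \cdot)$, since $=$ is not among the listed symbols. It is witnessed by $x_1 = \sqrt{2}$, showing that satisfying assignments may be irrational even though the formula mentions only $0$ and $1$. \begin{equation*} \exists x_1 \in \mathbb{R} :\; (x_1 \cdot x_1 \geq 1 + 1) \,\wedge\, \neg(x_1 \cdot x_1 > 1 + 1) \,\wedge\, (x_1 \geq 0). \end{equation*}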
The aim in packing problems is to decide if a given set of pieces can be placed inside a given container. A packing problem is defined by the types of pieces and containers to be handled, and the motions that are allowed to move the pieces. The pieces must be placed so that in the resulting placement, they are pairwise interior-disjoint. We establish a framework which enables us to show that for many combinations of allowed pieces, containers and motions, the resulting problem is $\exists \mathbb{R}$-complete. This means that the problem is equivalent (under polynomial time reductions) to deciding whether a given system of polynomial equations and inequalities with integer coefficients has a real solution. We consider packing problems where only translations are allowed as the motions, and problems where arbitrary rigid motions are allowed, i.e., both translations and rotations. When rotations are allowed, we show that it is an $\exists \mathbb{R}$-complete problem to decide if a set of convex polygons, each of which has at most $7$ corners, can be packed into a square. Restricted to translations, we show that the following problems are $\exists \mathbb{R}$-complete: (i) pieces bounded by segments and hyperbolic curves to be packed in a square, and (ii) convex polygons to be packed in a container bounded by segments and hyperbolic curves.
We study transformations of automata and games using Muller conditions into equivalent ones using parity or Rabin conditions. We present two transformations, one that turns a deterministic Muller automaton into an equivalent deterministic parity automaton, and another that provides an equivalent history-deterministic Rabin automaton. We show a strong optimality result: the obtained automata are minimal amongst those that can be derived from the original automaton by duplication of states. We introduce the notions of locally bijective morphisms and history-deterministic mappings to formalise the correctness and optimality of these transformations. The proposed transformations are based on a novel structure, called the alternating cycle decomposition, inspired by and extending Zielonka trees. In addition to providing optimal transformations of automata, the alternating cycle decomposition offers fundamental information on their structure. We use this information to give crisp characterisations of the possibility of relabelling automata with different acceptance conditions and to perform a systematic study of a normal form for parity automata.
We develop a theory of vector spaces spanned by orbit-finite sets. Using this theory, we give a decision procedure for equivalence of weighted register automata, which are the common generalization of weighted automata and register automata for infinite alphabets. The algorithm runs in exponential time, and in polynomial time for a fixed number of registers. As a special case, we can decide, with the same complexity, language equivalence for unambiguous register automata, which improves previous results in three ways: (a) we allow for order comparisons on atoms, and not just equality; (b) the complexity is exponentially better; and (c) we allow automata with guessing.
This paper focuses on the algebraic theory underlying the study of the complexity and the algorithms for the Constraint Satisfaction Problem (CSP). We unify, simplify, and extend parts of the three approaches that have been developed to study the CSP over finite templates -- absorption theory that was used to characterize CSPs solvable by local consistency methods (JACM'14), and Bulatov's and Zhuk's theories that were used for two independent proofs of the CSP Dichotomy Theorem (FOCS'17, JACM'20). As the first contribution we present an elementary theorem about primitive positive definability and use it to obtain the starting points of Bulatov's and Zhuk's proofs as corollaries. As the second contribution we propose and initiate a systematic study of minimal Taylor algebras. This class of algebras is broad enough that it suffices to verify the CSP Dichotomy Theorem on this class only, but still is unusually well behaved. In particular, many concepts from the three approaches coincide in this class, which is in striking contrast with the general setting. We believe that the theory initiated in this paper will eventually result in a simple and more natural proof of the Dichotomy Theorem that employs a simpler and more efficient algorithm, and will help in attacking complexity questions in other CSP-related problems.
An $f$-edge fault-tolerant distance sensitivity oracle ($f$-DSO) with stretch $\sigma \ge 1$ is a data structure that preprocesses a given undirected, unweighted graph $G$ with $n$ vertices and $m$ edges, and a positive integer $f$. When queried with a pair of vertices $s, t$ and a set $F$ of at most $f$ edges, it returns a $\sigma$-approximation of the $s$-$t$-distance in $G-F$. We study $f$-DSOs that take subquadratic space. Thorup and Zwick [JACM 2005] showed that this is only possible for $\sigma \ge 3$. We present, for any constant $f \ge 1$ and $\alpha \in (0, \frac{1}{2})$, and any $\varepsilon > 0$, a randomized $f$-DSO with stretch $3 + \varepsilon$ that w.h.p. takes $\widetilde{O}(n^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+2}$ space and has an $O(n^\alpha/\varepsilon^2)$ query time. The time to build the oracle is $\widetilde{O}(mn^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+1}$. We also give an improved construction for graphs with diameter at most $D$. For any positive integer $k$, we devise an $f$-DSO with stretch $2k-1$ that w.h.p. takes $O(D^{f+o(1)} n^{1+1/k})$ space and has $\widetilde{O}(D^{o(1)})$ query time, with a preprocessing time of $O(D^{f+o(1)} mn^{1/k})$. Chechik, Cohen, Fiat, and Kaplan [SODA 2017] devised an $f$-DSO with stretch $1{+}\varepsilon$ and preprocessing time $O(n^{5+o(1)}/\varepsilon^f)$, albeit with a super-quadratic space requirement. We show how to reduce their preprocessing time to […]
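As a point of reference for the interface only (not for the constructions above), a trivial exact $f$-DSO stores $G$ and re-runs a BFS on $G-F$ for every query: stretch $1$ and linear space, but $\Theta(n+m)$ query time, which is precisely what the oracles in the abstract avoid. The class below is a hypothetical sketch.

```python
from collections import deque

class TrivialDSO:
    """Exact baseline 'oracle': store G, answer each query by BFS in G - F.

    Stretch 1 and O(n + m) space, but Theta(n + m) query time; genuine
    f-DSOs trade exactness for fast queries and subquadratic space.
    """
    def __init__(self, adj):
        self.adj = adj                        # adjacency list: dict node -> set

    def query(self, s, t, failed_edges):
        failed = {frozenset(e) for e in failed_edges}
        dist, dq = {s: 0}, deque([s])
        while dq:
            u = dq.popleft()
            if u == t:
                return dist[u]
            for v in self.adj[u]:
                if frozenset((u, v)) not in failed and v not in dist:
                    dist[v] = dist[u] + 1
                    dq.append(v)
        return float("inf")

# 4-cycle 0-1-2-3-0: with edge {0, 1} failed, the 0-1 distance becomes 3.
oracle = TrivialDSO({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}})
print(oracle.query(0, 1, [(0, 1)]))           # 3
```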
This paper formalizes connections between stability of polynomials and convergence rates of Markov Chain Monte Carlo (MCMC) algorithms. We prove that if a (multivariate) partition function is nonzero in a region around a real point $\lambda$ then spectral independence holds at $\lambda$. As a consequence, for Holant-type problems (e.g., spin systems) on bounded-degree graphs, we obtain optimal $O(n\log n)$ mixing time bounds for the single-site update Markov chain known as the Glauber dynamics. Our result significantly improves the running time guarantees obtained via the polynomial interpolation method of Barvinok (2017), refined by Patel and Regts (2017). There are a variety of applications of our results. In this paper, we focus on Holant-type (i.e., edge-coloring) problems, including weighted edge covers and weighted even subgraphs. For the weighted edge cover problem (and several natural generalizations) we obtain an $O(n\log{n})$ sampling algorithm on bounded-degree graphs. The even subgraphs problem corresponds to the high-temperature expansion of the ferromagnetic Ising model. We obtain an $O(n\log{n})$ sampling algorithm for the ferromagnetic Ising model with a nonzero external field on bounded-degree graphs, which improves upon the classical result of Jerrum and Sinclair (1993) for this class of graphs. We obtain further applications to antiferromagnetic two-spin models on line graphs, weighted graph homomorphisms, tensor networks, and more.
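For orientation, the single-site Glauber dynamics itself is a very short loop. The sketch below runs it for the ferromagnetic Ising model with an external field on a bounded-degree graph; parameter names are ours, and nothing here reflects the mixing-time analysis of the abstract.

```python
import math, random

def glauber_ising(adj, beta, h, steps, seed=0):
    """Single-site Glauber dynamics for the ferromagnetic Ising model.

    adj: dict vertex -> list of neighbours, beta > 0: coupling, h: external
    field. Each step resamples one uniformly random spin from its conditional
    distribution given its neighbours; returns the final configuration.
    """
    rng = random.Random(seed)
    sigma = {v: rng.choice((-1, 1)) for v in adj}
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        s = sum(sigma[u] for u in adj[v])                 # neighbouring spin sum
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * (beta * s + h)))
        sigma[v] = 1 if rng.random() < p_plus else -1
    return sigma

# A 10-cycle at moderate temperature with a small positive field.
cycle = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
final = glauber_ising(cycle, beta=0.4, h=0.1, steps=5000)
print(sum(final.values()))                                # magnetisation of the sample
```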
We present a polynomial time algorithm that constructs a deterministic parity automaton (DPA) from a given set of positive and negative ultimately periodic example words. We show that this algorithm is complete for the class of $\omega$-regular languages, that is, it can learn a DPA for each regular $\omega$-language. For use in the algorithm, we give a definition of a DPA, which we call the precise DPA of a language, and show that it can be constructed from the syntactic family of right congruences for that language (introduced by Maler and Staiger in 1997). Depending on the structure of the language, the precise DPA can be of exponential size compared to a minimal DPA, but it can also be a minimal DPA. The upper bound that we obtain on the number of examples required for our algorithm to find a DPA for $L$ is therefore exponential in the size of a minimal DPA, in general. However, we identify two parameters of regular $\omega$-languages such that fixing these parameters makes the bound polynomial.
We give a structure theorem for Boolean functions on the $p$-biased hypercube which are $\epsilon$-close to degree $d$ in $L_2$, showing that they are close to sparse juntas. Our structure theorem implies that such functions are $O(\epsilon^{C_d} + p)$-close to constant functions. We pinpoint the exact value of the constant $C_d$. We also give an analogous result for monotone Boolean functions on the biased hypercube which are $\epsilon$-close to degree $d$ in $L_2$, showing that they are close to sparse DNFs. Our structure theorems are optimal in the following sense: for every $d,\epsilon,p$, we identify a class $\mathcal{F}_{d,\epsilon,p}$ of degree $d$ sparse juntas which are $O(\epsilon)$-close to Boolean (in the monotone case, width $d$ sparse DNFs) such that a Boolean function on the $p$-biased hypercube is $O(\epsilon)$-close to degree $d$ in $L_2$ iff it is $O(\epsilon)$-close to a function in $\mathcal{F}_{d,\epsilon,p}$.
Let ${\cal G}$ be a minor-closed graph class and let $G$ be an $n$-vertex graph. We say that $G$ is a $k$-apex of ${\cal G}$ if $G$ contains a set $S$ of at most $k$ vertices such that $G\setminus S$ belongs to ${\cal G}$. Our first result is an algorithm that decides whether $G$ is a $k$-apex of ${\cal G}$ in time $2^{{\sf poly}(k)}\cdot n^2$, where ${\sf poly}$ is a polynomial function depending on ${\cal G}$. This algorithm improves the previous one, given by Sau, Stamoulis, and Thilikos [ICALP 2020], whose running time was $2^{{\sf poly}(k)}\cdot n^3$. The elimination distance of $G$ to ${\cal G}$, denoted by ${\sf ed}_{\cal G}(G)$, is the minimum number of rounds required to reduce each connected component of $G$ to a graph in ${\cal G}$ by removing one vertex from each connected component in each round. Bulian and Dawar [Algorithmica 2017] provided an FPT-algorithm, with parameter $k$, to decide whether ${\sf ed}_{\cal G}(G)\leq k$; however, its dependence on $k$ is not explicit. We extend the techniques used in the first algorithm to decide whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{2^{2^{{\sf poly}(k)}}}\cdot n^2$. This is the first algorithm for this problem with an explicit parametric dependence on $k$. In the special case where ${\cal G}$ excludes some apex graph as a minor, we give two alternative algorithms, running in time $2^{2^{{\cal O}(k^2\log k)}}\cdot n^2$ and $2^{{\sf poly}(k)}\cdot n^3$, respectively, where the constant hidden in the ${\cal O}$-notation and the polynomial ${\sf poly}$ depend on ${\cal G}$. […]
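The definition of elimination distance above can be read directly as a recursion: a graph in ${\cal G}$ has distance $0$; a connected graph outside ${\cal G}$ pays one round plus the best single vertex deletion; a disconnected graph is as bad as its worst component. A brute-force sketch (exponential time; the membership test `in_class` stands in for ${\cal G}$ and all names are hypothetical) follows.

```python
def components(adj):
    """Connected components of a graph given as {vertex: set(neighbours)}."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append({v: adj[v] & comp for v in comp})
    return comps

def elimination_distance(adj, in_class):
    """Brute-force elimination distance to the class recognised by in_class.

    in_class is assumed to accept the empty graph (true for minor-closed classes).
    """
    if in_class(adj):
        return 0
    comps = components(adj)
    if len(comps) > 1:
        return max(elimination_distance(c, in_class) for c in comps)
    return 1 + min(
        elimination_distance({u: adj[u] - {v} for u in adj if u != v}, in_class)
        for v in adj)

is_edgeless = lambda g: all(not nbrs for nbrs in g.values())
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(elimination_distance(triangle, is_edgeless))   # 2
```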
We show that feasibility of the $t^\text{th}$ level of the Lasserre semidefinite programming hierarchy for graph isomorphism can be expressed as a homomorphism indistinguishability relation. In other words, we define a class $\mathcal{L}_t$ of graphs such that graphs $G$ and $H$ are not distinguished by the $t^\text{th}$ level of the Lasserre hierarchy if and only if they admit the same number of homomorphisms from any graph in $\mathcal{L}_t$. By analysing the treewidth of graphs in $\mathcal{L}_t$, we prove that the $3t^\text{th}$ level of the Sherali--Adams linear programming hierarchy is as strong as the $t^\text{th}$ level of Lasserre. Moreover, we show that this is best possible in the sense that $3t$ cannot be lowered to $3t-1$ for any $t$. The same result holds for the Lasserre hierarchy with non-negativity constraints, which we similarly characterise in terms of homomorphism indistinguishability over a family $\mathcal{L}_t^+$ of graphs. Additionally, we give characterisations of level-$t$ Lasserre with non-negativity constraints in terms of logical equivalence and via a graph colouring algorithm akin to the Weisfeiler--Leman algorithm. This provides a polynomial time algorithm for determining whether two given graphs are distinguished by the $t^\text{th}$ level of the Lasserre hierarchy with non-negativity constraints.
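Homomorphism indistinguishability refers to exact homomorphism counts; for small graphs these can be computed by brute force as below. This is only to illustrate the notion; relating such counts to the Lasserre hierarchy is the content of the abstract, and the graphs used here are arbitrary.

```python
from itertools import product

def hom_count(F_adj, G_adj):
    """Number of homomorphisms from F to G (graphs as dict vertex -> set).

    A map h: V(F) -> V(G) is a homomorphism if every edge of F lands on an
    edge of G. Brute force over all |V(G)|^|V(F)| maps.
    """
    FV, GV = list(F_adj), list(G_adj)
    edges_F = [(u, v) for u in F_adj for v in F_adj[u] if u < v]
    count = 0
    for image in product(GV, repeat=len(FV)):
        h = dict(zip(FV, image))
        if all(h[v] in G_adj[h[u]] for u, v in edges_F):
            count += 1
    return count

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
four_cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(hom_count(triangle, four_cycle))   # 0: the 4-cycle is triangle-free
print(hom_count(four_cycle, triangle))   # 18 = number of closed 4-walks in K_3
```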
The complexity of matrix multiplication is measured in terms of $\omega$, the smallest real number such that two $n\times n$ matrices can be multiplied using $O(n^{\omega+\epsilon})$ field operations for all $\epsilon>0$; the best bound until now is $\omega<2.37287$ [Le Gall'14]. All bounds on $\omega$ since 1986 have been obtained using the so-called laser method, a way to lower-bound the `value' of a tensor in designing matrix multiplication algorithms. The main result of this paper is a refinement of the laser method that improves the resulting value bound for most sufficiently large tensors. Thus, even before computing any specific values, it is clear that we achieve an improved bound on $\omega$, and we indeed obtain the best bound on $\omega$ to date: $$\omega < 2.37286.$$ The improvement is of the same magnitude as the improvement that [Le Gall'14] obtained over the previous bound [Vassilevska W.'12]. Our improvement to the laser method is quite general, and we believe it will have further applications in arithmetic complexity.
We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets. We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an $s$-$t$-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications: - It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis. - It gives a tight $(n+nm^{1/3})^{1-o(1)}$ lower bound for the Word Break problem on strings of length $n$ and dictionaries of total size $m$. - It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems. Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a […]
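The 'textbook algorithm' in the hypothesis is the state-set simulation below: maintain the set of states reachable on the prefix read so far, giving time $O(|w| \cdot |\delta|)$ for a word $w$ and transition relation $\delta$. The NFA in the example is ours and purely illustrative.

```python
def nfa_accepts(transitions, start_states, accept_states, word):
    """Textbook state-set simulation of an NFA (without epsilon moves).

    transitions: dict (state, symbol) -> set of successor states.
    One pass over the word, updating the set of currently reachable states;
    the hypothesis above asserts this is essentially optimal for dense NFAs.
    """
    current = set(start_states)
    for a in word:
        current = {t for s in current for t in transitions.get((s, a), ())}
        if not current:
            return False
    return bool(current & set(accept_states))

# NFA over {a, b} accepting words that end in 'a': state 0 loops on every
# letter, and on 'a' it may also guess that this was the last letter.
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
print(nfa_accepts(delta, {0}, {1}, "abba"))   # True
print(nfa_accepts(delta, {0}, {1}, "abab"))   # False
```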
We study a fundamental problem in Computational Geometry, the planar two-center problem. In this problem, the input is a set $S$ of $n$ points in the plane and the goal is to find two smallest congruent disks whose union contains all points of $S$. A longstanding open problem has been to obtain an $O(n\log n)$-time algorithm for planar two-center, matching the $\Omega(n\log n)$ lower bound given by Eppstein [SODA'97]. Towards this goal, researchers have devoted considerable effort over the decades. The previous best algorithm, given by Wang [SoCG'20], solves the problem in $O(n\log^2 n)$ time. In this paper, we present an $O(n\log n)$-time (deterministic) algorithm for planar two-center, which completely resolves this open problem.
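For contrast with the $O(n\log n)$ bound, a brute-force reference solution (exponential in $n$, illustrative only, and unrelated to the paper's techniques) enumerates bipartitions of $S$ and takes the larger minimum-enclosing-circle radius of the two sides.

```python
from itertools import combinations
from math import dist, inf

def mec_radius(pts):
    """Minimum enclosing circle radius, brute force over pair/triple candidates."""
    if len(pts) <= 1:
        return 0.0
    cands = []
    for p, q in combinations(pts, 2):                   # circles with p, q as diameter
        cands.append((((p[0] + q[0]) / 2, (p[1] + q[1]) / 2), dist(p, q) / 2))
    for a, b, c in combinations(pts, 3):                # circumcircles of triples
        d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
        if abs(d) < 1e-12:
            continue                                    # collinear triple: skip
        ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
              + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
        uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
              + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
        cands.append(((ux, uy), dist((ux, uy), a)))
    best = inf
    for center, r in cands:
        if r < best and all(dist(center, p) <= r + 1e-9 for p in pts):
            best = r
    return best

def two_center_brute(pts):
    """Smallest r such that two congruent radius-r disks cover pts."""
    best = mec_radius(pts)                              # one disk covering everything
    for k in range(1, len(pts) // 2 + 1):
        for left in combinations(pts, k):
            right = [p for p in pts if p not in left]
            best = min(best, max(mec_radius(list(left)), mec_radius(right)))
    return best

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
print(round(two_center_brute(pts), 3))                  # two tight clusters: 0.707
```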
We characterize a rich class of valuated matroids, called R-minor valuated matroids that includes the indicator functions of matroids, and is closed under operations such as taking minors, duality, and induction by network. We exhibit a family of valuated matroids that are not R-minor based on sparse paving matroids. Valuated matroids are inherently related to gross substitute valuations in mathematical economics. By the same token we refute the Matroid Based Valuation Conjecture by Ostrovsky and Paes Leme (Theoretical Economics 2015) asserting that every gross substitute valuation arises from weighted matroid rank functions by repeated applications of merge and endowment operations. Our result also has implications in the context of Lorentzian polynomials: it reveals the limitations of known construction operations.