Volume 3

2024


1. Strongly Sublinear Algorithms for Testing Pattern Freeness

Ilan Newman; Nithin Varma.
For a permutation $\pi:[k] \to [k]$, a function $f:[n] \to \mathbb{R}$ contains a $\pi$-appearance if there exist indices $1 \leq i_1 < i_2 < \dots < i_k \leq n$ such that for all $s,t \in [k]$, $f(i_s) < f(i_t)$ if and only if $\pi(s) < \pi(t)$. The function is $\pi$-free if it has no $\pi$-appearances. In this paper, we investigate the problem of testing whether an input function $f$ is $\pi$-free or whether $f$ differs on at least $\varepsilon n$ values from every $\pi$-free function. This is a generalization of the well-studied monotonicity testing problem and was first studied by Newman, Rabinovich, Rajendraprasad and Sohler (Random Structures and Algorithms 2019). We show that for all constants $k \in \mathbb{N}$, $\varepsilon \in (0,1)$, and permutations $\pi:[k] \to [k]$, there is a one-sided error $\varepsilon$-testing algorithm for $\pi$-freeness of functions $f:[n] \to \mathbb{R}$ that makes $\tilde{O}(n^{o(1)})$ queries. We improve significantly upon the previous best upper bound of $O(n^{1 - 1/(k-1)})$ by Ben-Eliezer and Canonne (SODA 2018). Our algorithm is adaptive, while the earlier best upper bound is known to be tight for nonadaptive algorithms.
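
To make the definition concrete, here is a minimal brute-force sketch in Python (not the paper's tester) that checks for a $\pi$-appearance by trying all increasing index $k$-tuples; the paper's contribution is precisely a tester that avoids this kind of exhaustive search. Function and variable names are illustrative.

```python
from itertools import combinations
from typing import Sequence

def contains_pattern(f: Sequence[float], pi: Sequence[int]) -> bool:
    """Brute-force check: does f contain a pi-appearance?

    pi is a permutation of {1, ..., k}; we look for indices
    i_1 < ... < i_k with f(i_s) < f(i_t) iff pi(s) < pi(t).
    """
    k = len(pi)
    for idx in combinations(range(len(f)), k):   # all increasing index tuples
        vals = [f[i] for i in idx]
        if all((vals[s] < vals[t]) == (pi[s] < pi[t])
               for s in range(k) for t in range(k) if s != t):
            return True
    return False

# The pattern (2, 1) encodes a "descent": a sequence is (2, 1)-free
# exactly when it is non-decreasing, recovering monotonicity testing.
print(contains_pattern([3.0, 1.0, 2.0], (2, 1)))  # True
print(contains_pattern([1.0, 2.0, 3.0], (2, 1)))  # False
```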

2. Realizable Learning is All You Need

Max Hopkins; Daniel M. Kane; Shachar Lovett; Gaurav Mahajan.
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust learning, it's surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate, and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability such as learning with arbitrary distributional assumptions and more general loss functions, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
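
As a purely illustrative toy (the abstract does not spell out the reduction, so the sketch below is an assumption, not the paper's construction), one generic way to turn a realizable learner into an agnostic one over a finite hypothesis class is to run the realizable learner as a blackbox on every labeling of an unlabeled sample that the class realizes, then keep the candidate with the best empirical error on fresh labeled data. All names are hypothetical.

```python
from typing import Callable, List, Sequence, Tuple

Hypothesis = Callable[[float], int]   # a binary classifier on real inputs

def agnostic_from_realizable(
    realizable_learner: Callable[[List[Tuple[float, int]]], Hypothesis],
    hypothesis_class: Sequence[Hypothesis],   # finite class, for illustration only
    unlabeled: List[float],                   # unlabeled sample from the marginal
    labeled: List[Tuple[float, int]],         # fresh labeled validation sample
) -> Hypothesis:
    # Run the realizable learner on every labeling of `unlabeled`
    # that some hypothesis in the class realizes.
    candidates = [
        realizable_learner([(x, h(x)) for x in unlabeled])
        for h in hypothesis_class
    ]
    # Return the candidate with the smallest empirical error on fresh data.
    def emp_error(g: Hypothesis) -> float:
        return sum(g(x) != y for x, y in labeled) / len(labeled)
    return min(candidates, key=emp_error)
```

Note that nothing here inspects the internals of `realizable_learner`, which is the sense in which such a reduction is blackbox.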

3. Constructive Separations and Their Consequences

Lijie Chen; Ce Jin; Rahul Santhanam; Ryan Williams.
For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that "constructiveness" serves as a dividing line between many weak lower bounds we know how to prove, and strong lower bounds against $P$, $ZPP$, and $BPP$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories. 1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations ranging from $EXP \neq BPP$ to even $P \neq NP$. 2. Our second set of results shows that for most major open problems in lower bounds against $P$, $ZPP$, and $BPP$, including $P \neq NP$, $P \neq PSPACE$, $P \neq PP$, $ZPP \neq EXP$, and $BPP \neq NEXP$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $P \neq NP$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $BPP \neq NEXP$ [Dolev, Fandina and Gutfreund, CIAC 2013]. 3. Our third set of results […]
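
For intuition only, here is a hypothetical brute-force "refuter" in Python (all names are illustrative, and this is not an efficient refuter in the paper's sense): given a candidate decider and the target language as Boolean predicates, it searches for an input on which they disagree. A constructive separation additionally demands that such counterexamples be found efficiently.

```python
from itertools import product
from typing import Callable, Optional

def brute_force_refuter(algo: Callable[[str], bool],
                        language: Callable[[str], bool],
                        max_len: int = 12) -> Optional[str]:
    """Return an input x with algo(x) != language(x), or None if none is found."""
    for n in range(max_len + 1):
        for bits in product("01", repeat=n):
            x = "".join(bits)
            if algo(x) != language(x):
                return x           # a counterexample: algo errs on x
    return None

# Toy usage: any decider that ignores its input is easy to refute.
parity = lambda x: x.count("1") % 2 == 1
always_reject = lambda x: False
print(brute_force_refuter(always_reject, parity))  # -> "1"
```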