## Circle discrepancy for checkerboard measures

This week I am giving a talk at the Department of Mathematics and Systems Analysis of Aalto University, where I will discuss results from a recent paper with Mihalis Kolountzakis. I will give a short introduction to different notions of discrepancy together with a description of the main set-up and results in the checkerboard setting. I will also describe our main questions and results and some further problems related to checkerboard colorings. Our work is a natural continuation of the previous papers [K], [IK] where similar questions have been considered. In fact, our current paper answers some of the questions posed by Iosevich and Kolountzakis in [IK].

1. Notions of Discrepancy

I will begin this post by briefly recalling some notions of discrepancy that have been extensively studied in the literature, namely geometric and combinatorial discrepancy. I will close this introduction by describing the set-up for our current topic, that is, the discrepancy of sets on checkerboards. This notion was introduced by Kolountzakis in [K]. Although the notion has some characteristics in common with the two notions of discrepancy just mentioned, it cannot be put in the context of either one. Thus some new definitions and investigations are necessary.

1.1. Geometric Discrepancy

This refers to the following set-up. Consider the unit cube ${Q:= [0,1]^d \subset \mathbb R^d}$ and let

$\displaystyle \begin{array}{rcl} \mathcal P_N = \{p_1,p_2,\ldots,p_N\} \subset [0,1]^d, \end{array}$

be an ${N}$-point set in ${Q}$. Now consider ${\mathcal A}$ to be a family of sets such as rectangles, circles, convex polygons and so on. We are interested in studying the discrepancy of the point set ${\mathcal P_N}$ with respect to the family ${\mathcal A}$. For ${A\in\mathcal A}$ we set

$\displaystyle \begin{array}{rcl} D(\mathcal P_N, A):= \sharp (\mathcal P_N \cap A) - N|A\cap Q|.\end{array}$

Here ${|E| }$ denotes the Lebesgue measure of a set ${E}$. We are interested in the absolute value of the quantity above, which is the difference between the actual number of points of ${\mathcal P_N}$ in ${A}$ and the expected number of points in ${A\cap Q}$. Thus only the part of the set ${A}$ that lies inside ${Q}$ is relevant. The ${L^\infty}$-discrepancy is just the supremum over all sets ${A\in\mathcal A}$:

$\displaystyle \begin{array}{rcl} D(\mathcal P_N,\mathcal A):= \sup_{A\in\mathcal A} |D(\mathcal P_N,A)|.\end{array}$

A point distribution with ‘small’ discrepancy can be used to approximate the volume of any ${A\cap Q}$ by the average ${\frac{1}{N}\sharp (\mathcal P_N\cap A) }$. Let us now give a more concrete example by specializing the family ${\mathcal A}$ to be the family of all rectangles contained in ${Q}$ with one corner anchored at the origin and another corner in the interior of ${Q}$. For ${\vec x = (x_1,\ldots,x_d)\in Q}$ we write

$\displaystyle \begin{array}{rcl} [0,\vec x) := [0,x_1)\times \cdots \times [0,x_d), \end{array}$

for the rectangle anchored at ${0}$ and at ${\vec x}$. Let ${\mathcal P_N}$ be an ${N}$-point distribution in ${Q}$. The star or corner discrepancy is then defined as:

$\displaystyle \begin{array}{rcl} D_N(\vec x):=\sharp (\mathcal P_N\cap [0,\vec x)) - N |[0,\vec x)|.\end{array}$
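To make the definition concrete, here is a minimal Python sketch (not from the paper; the point set and the evaluation corner are arbitrary illustrative choices) that evaluates ${D_N(\vec x)}$ directly:

```python
def star_discrepancy_at(points, x):
    """D_N(x) = #(P_N ∩ [0, x)) - N * vol([0, x)) for an N-point set in [0,1)^d."""
    n = len(points)
    count = sum(1 for p in points if all(pi < xi for pi, xi in zip(p, x)))
    vol = 1.0
    for xi in x:
        vol *= xi
    return count - n * vol

# Example: N points on the diagonal of the unit square.  The box [0, 1/2)^2
# contains all 50 points with k/N < 1/2 while its area accounts for only
# N/4 = 25 expected points, so D_N = 25, of order N rather than a power of log N.
N = 100
diag = [(k / N, k / N) for k in range(N)]
D = star_discrepancy_at(diag, (0.5, 0.5))
```

The diagonal set is of course terrible for discrepancy; low-discrepancy constructions such as the van der Corput set bring this down to logarithmic order.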

It turns out that the ‘size’ of this function (in many different senses) must necessarily grow to infinity as ${N\rightarrow \infty}$. The most typical manifestations of this claim are the following ${L^p}$ lower bounds, which are essentially optimal:

Theorem 1 (Roth ’54) For all dimensions ${ d\geq 2}$ and all point distributions ${\mathcal P_N\subset [0,1)^d}$, we have that

$\displaystyle \begin{array}{rcl} \|D_N\|_2 \gtrsim_d (\log N)^\frac{d-1}{2}.\end{array}$

We have the corresponding theorem in ${L^p}$, originally due to Schmidt:

Theorem 2 (Schmidt ’77) For all dimensions ${ d\geq 2}$ and all point distributions ${\mathcal P_N\subset [0,1)^d}$, we have that

$\displaystyle \begin{array}{rcl} \|D_N\|_p \gtrsim_{d,p} (\log N)^\frac{d-1}{2},\end{array}$

for all ${1<p<\infty}$.

These theorems are sharp as to the order of magnitude. In particular we have the following theorem:

Theorem 3 (Davenport ’56) Let ${0<p<\infty}$ and ${d\geq 2}$. For any positive integer ${N}$, there exists a point distribution ${\mathcal R_N}$ such that

$\displaystyle \begin{array}{rcl} \| D_N \|_p \lesssim_{d,p} (\log N)^\frac{d-1}{2},\end{array}$

where ${D_N}$ denotes the discrepancy function of ${\mathcal R_N}$.

Constructing point distributions that are extremal for discrepancy is a delicate matter, especially in high dimensions. For more extremal constructions see [Roth '74, '80], [Chen '81].

A nice motivating application of geometric discrepancy is the Koksma-Hlawka inequality. Let ${f:[0,1]^d\rightarrow\mathbb R}$ be a function of bounded variation ${V(f)}$ (in the sense of Hardy and Krause). Then

$\displaystyle \begin{array}{rcl} \bigg | \int_{[0,1]^d} f(x)dx - \frac{1}{N}\sum_{j=1} ^N f(p_j)\bigg |\leq V(f)\frac{\|D_N\|_\infty}{N}.\end{array}$

In dimension ${d=1}$ the variation of a ${C^1}$ function can be expressed as ${V(f)=\int_{(0,1)}|f'(t)|dt}$. In higher dimensions there is an expression for ${V(f)}$ involving the partial derivatives of ${f}$, assuming sufficient smoothness. Observe that if we replace ${f}$ by ${\mathbf 1 _{[0,\vec x)}}$ in the Koksma-Hlawka inequality, we recover the definition of the discrepancy function on the left hand-side.

Remark 1 (Highly structured sets are bad for discrepancy)
Here it is implied that the ‘structure’ is consistent with the axis-parallel nature of the discrepancy function. Consider for example a ${\sqrt{N} \times \sqrt{N} }$ lattice in ${[0,1)^2}$. Taking a very thin axis-parallel rectangle of the form ${[j/\sqrt{N}-\epsilon, j/\sqrt{N}+\epsilon]\times [0,1)}$ for some ${j<\sqrt {N}}$ we see that ${\|D_N\|_\infty\simeq \sqrt{N}}$, much worse than the ${(\log N)^\frac{1}{2}}$ bound from Davenport’s theorem.
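The thin-rectangle computation in the remark is easy to reproduce numerically; a small Python sketch (grid size, ${\epsilon}$ and the column index are arbitrary illustrative choices):

```python
def rect_discrepancy(points, x_lo, x_hi, y_lo, y_hi):
    """Point count in the rectangle minus the expected count N * area."""
    n = len(points)
    count = sum(1 for (px, py) in points if x_lo <= px < x_hi and y_lo <= py < y_hi)
    return count - n * (x_hi - x_lo) * (y_hi - y_lo)

m = 32                                   # sqrt(N)
lattice = [(i / m, j / m) for i in range(m) for j in range(m)]

eps, j = 1e-3, 5                         # thin strip around the column x = j/m
D = rect_discrepancy(lattice, j / m - eps, j / m + eps, 0.0, 1.0)
# The strip captures a whole column of m = sqrt(N) points while the expected
# count is only N * 2 * eps, so |D| is of order sqrt(N).
```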

Remark 2 (Random sets are bad) Let ${\mathcal P_N=\{X_1,\ldots,X_N\}}$, where the ${X_j}$ are chosen independently and uniformly in ${[0,1]^d}$. Consider (say for ${d=2}$) the rectangle ${R=[0,\frac{1}{2})\times [0,1)}$. Then ${ \sharp (R \cap \mathcal P_N) \sim B(N,\frac{1}{2})}$, which implies that

$\displaystyle \begin{array}{rcl} \mathop{\mathbb P} (\{ |\sharp R\cap \mathcal P _N -{N}/{2}|<\lambda \sqrt{N} \} )=O(\lambda).\end{array}$

This in turn means that ${|\sharp R \cap \mathcal P_N -N|R| | \gtrsim \sqrt{N}}$ with high probability. Another way to express this is to observe that by the Central limit theorem we get that

$\displaystyle \begin{array}{rcl} \frac{1}{N}\sum_{j=1} ^N f(X_j) = \int_{[0,1]^d}f(x)dx +O(N^{-\frac{1}{2}}).\end{array}$

The discrepancy of a random set is thus of the order ${\sqrt{N}}$, again much larger than the optimal point distribution discrepancy ${(\log N)^\frac{1}{2}}$.
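A quick Monte Carlo experiment (a sketch with an arbitrary sample size and a fixed seed for reproducibility) illustrates the ${\sqrt{N}}$ fluctuation of random point sets:

```python
import random

random.seed(0)

def deviation_in_half(n):
    """|#(P_n ∩ R) - n/2| for n uniform points and R = [0, 1/2) x [0, 1)."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    return abs(sum(1 for (x, y) in pts if x < 0.5) - n / 2)

n, trials = 2500, 100
mean_dev = sum(deviation_in_half(n) for _ in range(trials)) / trials
# The binomial fluctuation makes the mean deviation of order sqrt(n),
# far above the polylogarithmic discrepancy of optimal point sets.
```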

For a much more detailed discussion on star discrepancy you can also check this older post on my blog.

1.2. Combinatorial discrepancy

Let ${X}$ be a set of ${n}$ elements and ${\mathcal S}$ a family consisting of subsets of ${X}$. A coloring of ${X}$ is a function ${\chi:X\rightarrow\{-1,+1\}}$; let us agree that ${+1}$ corresponds to white colored points and ${-1}$ corresponds to black colored points of ${X}$. Given a family ${\mathcal S}$, the goal here is to color the elements of ${X}$ in such a way that any of the sets in ${\mathcal S}$ contains roughly the same number of ‘black points’ and ‘white points’. As in the geometric set-up, it turns out that in most of the interesting cases this is not possible.
The relevant combinatorial discrepancy is defined as

$\displaystyle \begin{array}{rcl} D(\chi, S):= \sum_{x\in S}\chi(x),\quad S\in\mathcal S.\end{array}$

The corresponding ${\ell^\infty}$ discrepancy is then

$\displaystyle \begin{array}{rcl} D(\chi,\mathcal S):= \sup_{S\in\mathcal S} |D(\chi,S)|.\end{array}$

Of course it makes sense to consider ${\ell^p}$ discrepancies in the obvious way. To give an example, if one considers as ${\mathcal S}$ the family of all subsets of ${X}$ then the corresponding discrepancy is large, at least ${n/2}$, no matter how clever we are in coloring the elements of ${X}$. That is, for every coloring ${\chi}$ there exists a subset ${S}$ of ${X}$ with

$\displaystyle \begin{array}{rcl} |D(\chi, S)|\geq n/2.\end{array}$

On the other hand, a good upper bound is obtained by coloring the set ${X}$ randomly, that is, each ${x\in X}$ is colored black or white with probability ${1/2}$ and independently of other points of ${X}$. Then for any system ${\mathcal S}$ of subsets of ${X}$ we have

$\displaystyle \begin{array}{rcl} D(\chi_{\textup{random}},S)\leq \sqrt{2|S| \ln|\mathcal S| } ,\end{array}$

for all sets ${S\in\mathcal S}$ simultaneously with probability at least ${1/2}$.
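As a toy experiment, one can compare a random coloring against the ${\sqrt{2|S|\ln|\mathcal S|}}$ bound on a concrete set system; a Python sketch (the family of dyadic prefixes is an arbitrary illustrative choice):

```python
import math
import random

random.seed(1)

n = 512
X = list(range(n))
S = [list(range(2 ** k)) for k in range(1, 10)]   # dyadic prefixes; |S| = 9

chi = [random.choice([-1, 1]) for _ in X]          # random 2-coloring of X
discs = [abs(sum(chi[x] for x in s)) for s in S]
bounds = [math.sqrt(2 * len(s) * math.log(len(S))) for s in S]
within = sum(1 for d, b in zip(discs, bounds) if d <= b)
# With probability at least 1/2 all sets satisfy their bounds simultaneously;
# for a fixed seed one can simply inspect how many of the 9 sets do.
```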

1.3. Discrepancy for checkerboard colorings

Consider the Euclidean plane ${{\mathbb R}^2}$ as a union of congruent cells

$\displaystyle \begin{array}{rcl} {\mathbb R}^2=\cup_{m\in{\mathbb Z}^2}\{Q+m\},\end{array}$

where ${Q=[0,1)^2}$. Now color each square in the union, black or white, in an arbitrary manner. This amounts to defining a function ${f:{\mathbb R}^2\rightarrow\{-1,+1\}}$, where ${f}$ is constant on each one of the cells ${\{Q+m\}_{m\in{\mathbb Z}^2}}$. We will call such a coloring a checkerboard coloring of the (infinite) Euclidean plane. Now let ${\mathcal S}$ be a family of sets and for simplicity let us assume that ${\mathcal S}$ consists of line segments. The question is whether one can choose a checkerboard coloring ${f}$ such that, for any line segment ${L}$, the ‘white length’ of ${L}$ minus its ‘black length’ is bounded uniformly by an absolute constant. Note that the corresponding discrepancy here can be defined as

$\displaystyle \begin{array}{rcl} D(f,L):= \int_L f(s)ds,\end{array}$

where ${ds}$ is the arc-length measure on the segment ${L}$, while the ${L^\infty}$ discrepancy is

$\displaystyle \begin{array}{rcl} D(f,\mathcal S):=\sup_{L\textup{ segment}} |D(f,L)|.\end{array}$

As in many other similar questions in discrepancy theory, it turns out that this is not possible. In particular there exist arbitrarily long line segments ${L}$ such that the discrepancy is at least ${\sqrt{|L|}}$, for all checkerboard colorings of the plane. This is essentially best possible (up to an ${\epsilon}$-loss): a random coloring has discrepancy at most ${|L|^{\frac{1}{2}+\epsilon}}$ for every segment ${L}$.

We will take up these issues in detail in the rest of this post. A general remark is the following. Although in most cases we want to prove lower bounds for a supremum like quantity, the ${L^\infty}$-discrepancy, it is often more convenient to bound a corresponding average discrepancy. For this one has to place an appropriate measure on the family of sets ${\mathcal S}$. For example the set of lines can be parametrized by ${{\mathbb R}\times S^1}$ where ${S^1}$ is the one dimensional unit circle and there is the natural product measure on this space. The same principle holds for other types of geometric discrepancy.

2. The discrepancy of a line segment on a checkerboard

As before we fix some checkerboard coloring ${f:{\mathbb R}^2\rightarrow \{-1,+1\}}$ where ${f}$ is constant on each cell ${[0,1)^2+m}$, ${m\in{\mathbb Z}^2}$. The family of sets ${\mathcal S}$ consists of (finite) line segments of arbitrary position and length. We now translate the problem to a corresponding problem on a finite checkerboard as follows. Let ${Q_N=[0,N)^2}$ where ${N}$ will be a large positive integer. Consider ${Q_N}$ as a union of congruent cells as before:

$\displaystyle \begin{array}{rcl} Q_N=\bigcup_{0\leq m_1,m_2<N} Q_m,\qquad Q_m:= (m_1,m_2)+[0,1)^2.\end{array}$

A finite checkerboard coloring of ${Q_N}$ is a function ${f_N:{\mathbb R}^2\rightarrow \{-1,+1,0\}}$ which is constant and identically equal to either ${+1}$ or ${-1}$ on each cell ${Q_m}$ and identically zero on the complement of ${Q_N}$. We define the discrepancy

$\displaystyle \begin{array}{rcl} D(f_N,L):= \int_L f_N(s)ds.\end{array}$

The main theorem proved in [K] says that for every large ${N}$ there exists a line segment contained in ${Q_N}$ with discrepancy

$\displaystyle \begin{array}{rcl} |D(f_N,L)| \gtrsim \sqrt{N}.\end{array}$

Observe that this ‘finite checkerboard’ result implies the corresponding statement for the infinite coloring since ${N}$ can be arbitrarily large.
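For experimentation, the segment discrepancy ${D(f_N,L)}$ can be approximated by a Riemann sum along the segment; a rough Python sketch (the random coloring, the segment, and the sampling rate are all arbitrary illustrative choices, and the sum only approximates the arc-length integral):

```python
import math
import random

random.seed(2)

N = 64
# Random coloring of the N x N checkerboard: color[m1][m2] in {-1, +1}.
color = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

def f_N(x, y):
    """The coloring function: the cell color inside Q_N, zero outside."""
    m1, m2 = math.floor(x), math.floor(y)
    return color[m1][m2] if 0 <= m1 < N and 0 <= m2 < N else 0

def segment_discrepancy(p, q, samples=200_000):
    """Midpoint Riemann sum approximating the arc-length integral of f_N over [p, q]."""
    (x0, y0), (x1, y1) = p, q
    ds = math.hypot(x1 - x0, y1 - y0) / samples
    return sum(f_N(x0 + (i + 0.5) / samples * (x1 - x0),
                   y0 + (i + 0.5) / samples * (y1 - y0))
               for i in range(samples)) * ds

# A horizontal segment through the bottom row of cells: the integral equals
# the signed sum of the row's colors, so the approximation can be checked exactly.
D = segment_discrepancy((0.0, 0.5), (float(N), 0.5))
```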

Let us parametrize the set of lines in the plane by their distance ${t}$ from the origin and the unit vector ${u^\perp \in S^1}$ along the line. Thus any line ${\ell}$ can be described as

$\displaystyle \begin{array}{rcl} \ell: tu+su^\perp,\quad s\in{\mathbb R}.\end{array}$

We set

$\displaystyle \begin{array}{rcl} Rf_N(u,t)=\int_{\mathbb R} f_N(tu+su^\perp)ds.\end{array}$

This is the discrepancy of the line segment ${\ell\cap Q_N}$ with respect to the coloring ${f_N}$. At the same time it is the Radon transform of ${f_N}$, hence the notation ${Rf_N(u,t).}$ In order to consider all possible line segments one has to vary ${u\in S^1}$ and ${t\in{\mathbb R}}$. For any ${1\leq p \leq \infty}$ consider the quantities

$\displaystyle \begin{array}{rcl} \bigg(\int_{S^1} \int_{{\mathbb R}} |Rf_N(u,t)|^p \,dt\, d\sigma_1 (u)\bigg)^\frac{1}{p},\end{array}$

where ${d\sigma_1}$ denotes the normalized Lebesgue measure on ${S^1}$. Finally observe that the measure of the support of ${Rf_N(u,\cdot)}$ is about ${N}$, so we define the normalized ${L^p}$ discrepancies

$\displaystyle \begin{array}{rcl} D(f_N,p):= \bigg(\frac{1}{N} \int_{S^1}\int_{\mathbb R}\big|\int_{\mathbb R} f_N(tu+su^\perp)ds \big|^p dt\, d\sigma_1(u)\bigg)^\frac{1}{p}.\end{array}$

Now observe that Plancherel’s theorem and the Fourier slice theorem imply that

$\displaystyle \begin{array}{rcl} D(f_N,2)^2&=&\frac{1}{N}\int_{S^1} \int_{{\mathbb R}}| \mathcal F_t {Rf_N(u,\cdot)}(\xi)|^2d\xi d\sigma_1(u)=\frac{1}{N}\int_{S^1} \int_{{\mathbb R}}| \widehat {f_N}(\xi u)|^2 d\xi d\sigma_1(u) \\ \\ &\geq&\frac{1}{N}\int_{S^1} \int_{|\xi|<1} |\widehat {f_N}(\xi u)|^2 d\xi d\sigma_1(u) \geq \frac{1}{N}\int_{S^1} \int_{|\xi|<1} |\widehat {f_N}(\xi u)|^2 |\xi| d\xi d\sigma_1(u) \\ \\ &\simeq& \frac{1}{N} \int_{B(0,1)}|\widehat{f_N}(\xi)|^2 d\xi\geq \frac{1}{N} \int_{[-\frac{1}{2},\frac{1}{2})^2}|\widehat{f_N}(\xi)|^2d\xi. \end{array}$

An easy calculation shows that

$\displaystyle \begin{array}{rcl} |\widehat f_N(\xi)|=\big|\frac{\sin(\pi\xi_1)}{\pi \xi_1}\frac{\sin(\pi\xi_2)}{\pi \xi_2}\phi(\xi)\big|, \end{array}$

where ${\phi(\xi)}$ is the (bi-variate) trigonometric polynomial

$\displaystyle \begin{array}{rcl} \phi(\xi)=\sum_{0\leq k,j\leq N-1}z_{k,j} e^{2\pi i (k\xi_1+j\xi_2)},\end{array}$

and ${z_{k,j}\in\{-1,+1\}}$ is the color corresponding to the cell ${(k,j)+[0,1)^2}$. Observing the elementary bound

$\displaystyle \begin{array}{rcl} \big|\frac{\sin(\pi\xi_1)}{\pi \xi_1}\frac{\sin(\pi\xi_2)}{\pi \xi_2}\big|\gtrsim 1\quad \mbox{when}\quad\xi\in\big[-\frac{1}{2},\frac{1}{2}\big)^2,\end{array}$

the previous calculation shows

$\displaystyle \int_{|\xi|<1}|\hat f_N(\xi)|^2 d\xi \gtrsim \int_{[-\frac{1}{2},\frac{1}{2})^2}|\phi(\xi)|^2 d\xi=N^2, \ \ \ \ \ (1)$

which in turn implies that ${D(f_N,2)^2 \gtrsim N}$, so that ${D(f_N,\infty)\geq D(f_N,2)\gtrsim\sqrt{N}}$.
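The Parseval identity behind (1) — the trigonometric polynomial ${\phi}$ with ${\pm 1}$ coefficients has ${L^2}$ mass exactly ${N^2}$ on the unit torus — can be verified numerically. A Python sketch (the grid Riemann sum is exact here because all frequencies of ${|\phi|^2}$ are smaller than the grid size ${M}$):

```python
import cmath
import random

random.seed(4)

N = 8
z = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

def phi(x1, x2):
    """phi(xi) = sum_{0 <= k, j <= N-1} z_{k,j} e^{2 pi i (k xi_1 + j xi_2)}."""
    return sum(z[k][j] * cmath.exp(2j * cmath.pi * (k * x1 + j * x2))
               for k in range(N) for j in range(N))

# Riemann sum of |phi|^2 over [-1/2, 1/2)^2 on an M x M midpoint grid.
# Since |phi|^2 only contains frequencies |m| <= N - 1 < M, character
# orthogonality makes the sum exactly sum |z_{k,j}|^2 = N^2.
M = 32
mass = sum(abs(phi((a + 0.5) / M - 0.5, (b + 0.5) / M - 0.5)) ** 2
           for a in range(M) for b in range(M)) / M ** 2
```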

2.1. Some remarks on ${L^p}$ discrepancies for ${p\neq 2}$.

Kolountzakis proves in [K] that for random colorings ${f_N}$ we have ${D(f_N,\infty)\lesssim_\epsilon N^{\frac{1}{2}+\epsilon}}$ with high probability. So, ignoring ${\epsilon}$-losses, for the ${L^p}$-discrepancies we have

$\displaystyle \begin{array}{rcl} \inf_{f_N}D(f_N,p)\simeq \sqrt{N},\quad 2\leq p\leq \infty.\end{array}$

For ${1<p<2}$ one can use the convexity of the ${L^p}$ norms to get the bound

$\displaystyle \begin{array}{rcl} D(f_N,p)\gtrsim N^\frac{1}{p'},\quad 1<p<2,\end{array}$

where ${p'=p/(p-1)}$ denotes the conjugate exponent of ${p}$.

On the other hand consider a coloring ${h_N}$ where each one of the ${N}$ lines of the checkerboard has a single color and adjacent lines have different colors. It is then easy to check that

$\displaystyle \begin{array}{rcl} D(h_N,p)\simeq_p N^\frac{1}{p'}\quad\mbox{and}\quad D(h_N,1)\simeq \log N. \end{array}$

Thus, while the previous ${L^p}$ bounds are sharp for ${1<p<2}$, the ${L^1}$ discrepancy can be much smaller, of the order ${\log N}$. See the end of this post for some related open questions.

3. Discrepancy of a circle on a checkerboard

Here we consider the family ${\mathcal S}$ consisting initially of all circles in the Euclidean plane, of any center and radius. For the infinite checkerboard problem this is an appropriate set-up. Translating the problem to the finite checkerboard requires some discussion in this case. Fix some large ${N}$ and a coloring ${f_N}$ of ${Q_N}$. In this set-up it makes sense to restrict the radii of the circles considered to be at most of the order ${N}$; otherwise one could let the radius tend to infinity, so that the circle essentially becomes a line with respect to the checkerboard scale, which is ${N}$. If ${C}$ is a circle the relevant discrepancy now is

$\displaystyle \begin{array}{rcl} D(f_N,C)=\int_C f_N(s)ds,\end{array}$

where we integrate with respect to the arc-length measure on the circle ${C}$. Remember that we have assumed that ${f_N}$ is identically zero outside ${Q_N}$. This means that the previous integral ignores circles that do not ‘hit’ the finite checkerboard ${Q_N}$. This is of no consequence when translating to the infinite set-up. However, the previous integral takes into account arcs as well as full circles if no restriction is imposed on the centers and radii of the circles. Thus if we manage to find a circle ${C}$ of radius say ${\sim N}$ such that ${|D(f_N,C)|}$ is large, this only amounts to an arc in the infinite checkerboard which has large discrepancy. This is a technical difficulty to which we will come back later on.

Let us now write down a convenient formula for the circle discrepancy. Fixing ${f_N}$ we set

$\displaystyle \begin{array}{rcl} D_tf_N(x):= f_N* \sigma_t(x)=\int_{C(x,t)} f_N(s)ds,\end{array}$

where ${C(x,t)}$ denotes the circle of center ${x\in{\mathbb R}^2}$ and radius ${t}$ and ${\sigma_t}$ denotes the arc length measure on the circle ${C(x,t)}$. Note that ${\sigma_t}$ is not normalized so that

$\displaystyle \begin{array}{rcl} \int_{C(x,t)} d\sigma_t=2\pi t .\end{array}$

In our considerations below we will only consider radii ${t\lesssim N}$ which implies that the measure of the support of ${D_t f_N(x)}$ is about ${N^2}$. Thus we define the normalized ${L^2}$ discrepancy for any ${t>1}$:

$\displaystyle \begin{array}{rcl} D_t(f_N,2):= \bigg(\frac{1}{N^2} \int |f_N*\sigma_t(x)|^2dx \bigg)^\frac{1}{2}.\end{array}$
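As with segments, the circle discrepancy ${f_N*\sigma_t(x)}$ can be approximated by sampling the circle; a rough Python sketch (coloring, center, radius and sampling rate are arbitrary illustrative choices):

```python
import math
import random

random.seed(3)

N = 64
color = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

def f_N(x, y):
    """The coloring function: the cell color inside Q_N, zero outside."""
    m1, m2 = math.floor(x), math.floor(y)
    return color[m1][m2] if 0 <= m1 < N and 0 <= m2 < N else 0

def circle_discrepancy(cx, cy, t, samples=100_000):
    """Approximate (f_N * sigma_t)(cx, cy): the integral of f_N over the circle
    C((cx, cy), t) with respect to unnormalized arc length (total mass 2 pi t)."""
    ds = 2 * math.pi * t / samples
    return sum(f_N(cx + t * math.cos(2 * math.pi * (i + 0.5) / samples),
                   cy + t * math.sin(2 * math.pi * (i + 0.5) / samples)) * ds
               for i in range(samples))

D = circle_discrepancy(N / 2, N / 2, 10.0)
# |D| is trivially at most 2 pi t, and for a typical random coloring it
# fluctuates on the sqrt(t) scale discussed in this section.
```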

By Plancherel’s theorem we have as usual

$\displaystyle \begin{array}{rcl} D_t(f_N,2)^2 =\frac{1}{N^2}\int_{{\mathbb R}^2} |\hat f_N (\xi)|^2 |\hat \sigma_t(\xi)|^2 d\xi.\end{array}$

The main tool for the study of the ${L^2}$ discrepancy is the classical asymptotic description of the Fourier transform of the arc-length measure on the circle. We have

$\displaystyle \begin{array}{rcl} \hat \sigma_t(\xi)=t\hat \sigma_1(t\xi)\end{array}$

and

$\displaystyle \begin{array}{rcl} \hat \sigma_1(\xi)=\hat \sigma_1(|\xi|)=2|\xi|^{-\frac{1}{2}}\cos\big(2\pi |\xi|-\frac{\pi}{4}\big)+O(|\xi|^{-\frac{3}{2}}),\quad |\xi|\rightarrow+\infty .\end{array}$
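These asymptotics can be checked against a direct quadrature of the Bessel integral ${J_0(x)=\frac{1}{\pi}\int_0^\pi \cos(x\sin\theta)\,d\theta}$, using ${\hat\sigma_1(r)=2\pi J_0(2\pi r)}$; a Python sketch with the standard library only (the quadrature resolution and the test radius are arbitrary):

```python
import math

def J0(x, n=20_000):
    """0-th order Bessel function via its integral representation (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * h)) for i in range(n)) * h / math.pi

def sigma1_hat(r):
    return 2 * math.pi * J0(2 * math.pi * r)

def main_term(r):
    """The leading asymptotic term 2 r^{-1/2} cos(2 pi r - pi/4)."""
    return 2 / math.sqrt(r) * math.cos(2 * math.pi * r - math.pi / 4)

r = 5.0
# The difference should be of size O(r^{-3/2}).
err = abs(sigma1_hat(r) - main_term(r))
```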

Studying the asymptotic description of ${\hat \sigma_1}$ one easily sees that the main obstruction to proving a lower bound for ${D_t(f_N,2)}$ is the set of zeros of ${\hat \sigma_1(|\xi|)}$ which, as ${|\xi|\rightarrow+\infty}$, become almost periodic. This problem was dealt with in [IK] by also averaging in the radial variable, in an interval of the form ${[N/10,N/5]}$. Since the zeros of ${\hat \sigma_1(r)}$ are isolated this provides a lower bound for ${D_t(f_N,2)}$ which is of the order ${\sqrt{N}}$. However the bound in [IK] only implies a bound for arcs in the infinite checkerboard. Furthermore, as a result of the averaging in the radial variable, the radius of the arc is not specified exactly; it lies in some interval of the form ${[N/10,N/5]}$.

Our first result answers the problem of finding a full circle with large discrepancy.

Theorem 4 Let ${t>1}$ be a real number and ${N}$ a large positive integer. We have that

$\displaystyle \begin{array}{rcl} D_t (f_N,2)^2+D_{2t} (f_N,2) ^2 \gtrsim t.\end{array}$

Proof: A sketch of the proof of this result is the following. A simple calculation as before shows that

$\displaystyle \begin{array}{rcl} D_t (f_N,2)^2+D_{2t} (f_N,2) ^2&=& \frac{1}{N^2} \int |\hat f_N(\xi)|^2 \bigg[|\hat\sigma_t(\xi)|^2 +| \hat\sigma_{2t}(\xi)|^2 \bigg]d\xi \\ \\ &\gtrsim& \frac{t^2}{N^2} \int |\hat f_N(\xi)|^2 \bigg[|\hat\sigma_1(t|\xi|)|^2 +| \hat\sigma_{1}(2t|\xi|)|^2 \bigg]d\xi \\ \\ &=&\frac{1}{N^2} \int |\hat f_N(\xi/t)|^2 \bigg[|\hat\sigma_1(|\xi|)|^2 +| \hat\sigma_{1}(2|\xi|)|^2 \bigg]d\xi \\ \\ &=&\frac{1}{N^2} \int_{|\xi|<A} \cdots d\xi+\frac{1}{N^2} \int_{|\xi|>A} \cdots d\xi=: I+II. \end{array}$

Here ${A}$ is a positive constant to be chosen appropriately. Now assume, as we may, that ${\hat \sigma_1 (r)}$ behaves like ${\frac{1}{\sqrt r} \cos\big(2\pi r -\frac{\pi}{4}\big)}$ when ${r}$ is large, which corresponds to the term ${II}$. The cosine term above is zero exactly when

$\displaystyle \begin{array}{rcl} 2\pi r -\frac{\pi}{4}= k\pi +\frac{\pi}{2} \Leftrightarrow r=\beta_k:=\frac{k}{2}+\frac{3}{8},\quad k=0,1,2,\ldots.\end{array}$

From the previous expression of the (approximate) zeros of ${\hat \sigma_1}$ one readily sees that if some real number ${r}$ is a zero, say ${r=\beta_k}$ for some non-negative integer ${k}$, then ${|2r-\beta_\ell|=|2\beta_k-\beta_\ell| = |k-\frac{\ell}{2}+\frac{3}{8}|\geq \frac{1}{8},}$ since ${k-\frac{\ell}{2}}$ is a half-integer. That is, if ${r}$ is a zero then ${2r}$ stays bounded away from the zeros of ${\hat \sigma_1}$; we get that

$\displaystyle \begin{array}{rcl} \hat\sigma_1(r)^2 +\hat\sigma_1(2r)^2 \gtrsim \frac{1}{r},\quad r\mbox{ large}.\end{array}$
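The separation property behind this doubling trick is a finite check on the approximate zeros ${\beta_k=\frac{k}{2}+\frac{3}{8}}$; a small Python sketch:

```python
def beta(k):
    return k / 2 + 3 / 8

# For each zero r = beta(k), distance from 2r to the nearest zero beta(l).
# 2*beta(k) = k + 3/4 differs from beta(l) = l/2 + 3/8 by a half-integer
# plus 3/8, so the distance is never smaller than 1/8.
min_gap = min(
    min(abs(2 * beta(k) - beta(l)) for l in range(4 * k + 10))
    for k in range(200)
)
```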

The estimate for ${II}$ is thus:

$\displaystyle \begin{array}{rcl} II&\gtrsim& \frac{1}{N^2} \int_{A<|\xi|<t} |\hat f_N(\xi/t)|^2 \frac{1}{|\xi|}d\xi \geq \frac{1}{tN^2} \int_{A<|\xi|<t} |\hat f_N(\xi/t)|^2 d\xi = \frac{t}{N^2} \int_{A/t<|\xi|<1} |\hat f_N(\xi)|^2 d\xi. \end{array}$

For ${I}$ one needs to note that for an appropriate choice of ${A}$, the disk ${\{|\xi|<A\}}$ has a positive distance from the first zero of ${\hat \sigma_1(|\xi|)}$ so that ${|\hat\sigma_1(|\xi|)|^2\gtrsim 1}$ in that region. We get the better estimate

$\displaystyle \begin{array}{rcl} I\gtrsim \frac{1}{N^2} \int_{|\xi|<A} |\hat f_N(\xi/t)|^2 d\xi = \frac{t^2}{N^2} \int_{|\xi|<A/t} |\hat f_N(\xi)|^2 d\xi\geq \frac{t}{N^2} \int_{|\xi|<A/t} |\hat f_N(\xi)|^2 d\xi,\end{array}$

since ${t\geq 1}$. Summing the estimates for ${I}$ and ${II}$ we get

$\displaystyle \begin{array}{rcl} D_t (f_N,2)^2+D_{2t} (f_N,2) ^2&\gtrsim& \frac{t}{N^2} \int_{|\xi|<1} |\hat f_N(\xi)|^2 d\xi\gtrsim t, \end{array}$

by another use of the basic estimate (1).
$\Box$

Remark 3 To make the previous argument rigorous one needs to make an explicit choice of the constant ${A>0}$ and prove that

$\displaystyle |\hat \sigma_1 (r)|^2+|\hat \sigma_1(2r)|^2 \gtrsim \frac{1}{r}, \quad r \geq A. \ \ \ \ \ (2)$

One way to do that is the following. From classical estimates we have that

$\displaystyle \begin{array}{rcl} \hat \sigma_1(r)=2\pi J_0(2\pi r) \simeq \frac{1}{\sqrt{r}} \bigg(\cos(2\pi r-\frac{\pi}{4})+e(2\pi r)\bigg),\quad r\geq 1/2\pi, \end{array}$

where ${J_0}$ is the ${0}$-th order Bessel function and ${e(r)}$ is an error term that satisfies ${|e(r)|\leq \frac{1}{5r}}$. A direct computation then shows that (2) is valid whenever ${2\pi r\geq 7}$ (say). For ${1\leq 2\pi r <7}$ one can check the validity of (2) directly, either by looking up the zeros of ${J_0}$ in a table or by drawing the graph of the function. Finally, by looking up the zeros of ${J_0}$ one checks again that the first zero of ${J_0}$ is ${\xi_0=2.4048\ldots >1 }$, so that ${\hat \sigma_1(r)}$ has no zero for ${r< \xi_0/2\pi}$ and the estimate for ${I}$ is also valid. Thus the previous argument works for ${A=\frac{1}{2\pi}<1}$.

Using Theorem 4 we have the following easy corollary.

Corollary 5 Let ${t>1}$ and ${f}$ be any checkerboard coloring of the infinite Euclidean plane. There exists a constant ${c>0}$ and a circle ${C_\rho}$ of radius ${\rho}$, where ${\rho}$ is equal to either ${t}$ or ${2t}$, such that

$\displaystyle \begin{array}{rcl} \bigg| \int_{C_\rho} f(s)ds\bigg| \geq c \sqrt{\rho}.\end{array}$

Proof: Given ${t>1}$ consider the finite checkerboard with ${N=100 t^2}$ and the corresponding finite coloring ${f_N}$. Theorem 4 then shows that

$\displaystyle \begin{array}{rcl} D_\rho(f_N,2)\gtrsim \sqrt{\rho},\end{array}$

where ${\rho}$ has a fixed value equal to either ${t}$ or ${2t}$. Consider the square ${Q_1:= [\rho,N-\rho)^2}$. We have

$\displaystyle \begin{array}{rcl} \frac{1}{N^2}\int_{Q_1} |D_\rho f_N(x)|^2dx&=&D_\rho(f_N,2)^2-\frac{1}{N^2} \int_{[-\rho,N+\rho)^2\setminus Q_1 } |f_N*\sigma_\rho(x)|^2 dx \\ \\ &\gtrsim & \rho-\frac{1}{N^2} N \rho\, \rho ^2\gtrsim \rho\big(1 - \frac{\rho^2}{N}\big)\gtrsim \rho, \end{array}$

since ${N\gtrsim \rho^2}$. As ${|Q_1|\leq N^2}$, there exists ${x\in Q_1}$ with ${|D_\rho f_N(x)|\gtrsim \sqrt\rho}$, and since ${x}$ lies at distance at least ${\rho}$ from the boundary of ${Q_N}$, the circle ${C(x,\rho)}$ is contained in ${Q_N}$.
$\Box$

4. Single radius discrepancy for arcs

The previous discussion answers our main question by showing that for every ${t>1}$ there exists a full circle of radius either ${t}$ or ${2t}$, with discrepancy ${\geq c\sqrt{t}}$. If there is an unsatisfying element in this result it is exactly that we cannot prove the corresponding statement for every radius ${t>1}$. Observe that we did not completely avoid the averaging in the radial direction, although we only averaged between two discrete values ${t}$ and ${2t}$. An attempt to deal with this problem is contained in the following discussion. In order to avoid the ambiguity in the choice of the radius of the circle with large discrepancy we replace the radial averaging with pointwise estimates. Our methods however only work under the restriction ${t\simeq N}$ for the finite ${N\times N}$ checkerboard. As a result, the simple argument of Corollary 5, which required ${N\gtrsim t^2}$, does not work any longer and our current methods only prove the following:

Theorem 6 For every radius ${t>1}$ and every coloring ${f}$ of the infinite checkerboard there exists a circular arc ${C_t}$, of radius ${t}$, with

$\displaystyle \begin{array}{rcl} \bigg| \int_{C_t} f(s)ds\bigg|\geq c \sqrt{t},\end{array}$

where ${c>0}$ is an absolute constant, independent of ${f}$, ${t}$ and ${C_t}$.

Once again we actually prove the corresponding lower bound for the ${L^2}$ discrepancy of the finite checkerboard. In particular we show that for ${N\simeq t}$ we have

$\displaystyle \begin{array}{rcl} D_t(f_N,2)\geq c \sqrt t.\end{array}$

Here, the restriction ${N\simeq t}$ is essential and results in the existence of an arc, instead of a full circle, in the infinite checkerboard with discrepancy ${\gtrsim \sqrt t.}$ The idea is the following. We write

$\displaystyle \begin{array}{rcl} D_t(f_N,2)^2 &=&\frac{1}{N^2}\int_{{\mathbb R}^2} |\hat f_N (\xi)|^2 |\hat \sigma_t(\xi)|^2 d\xi =\frac{1}{N^2}\int |\hat f_N (\xi/t)|^2 |\hat \sigma_1(\xi)|^2d\xi. \end{array}$

For large values ${|\xi|\geq A}$ we can use the asymptotic description

$\displaystyle \begin{array}{rcl} \hat \sigma_1(|\xi|)=\frac{2}{\sqrt{|\xi|}} \cos\big(2\pi |\xi|-\frac{\pi}{4}\big )+O(|\xi|^{-\frac{3}{2}}),\end{array}$

is valid. We denote by

$\displaystyle \begin{array}{rcl} \beta_k:= \big(\frac{k}{2}+\frac{3}{8}\big),\quad k=0,1,2,\ldots,\end{array}$

the roots of the cosine term as before. It will also be convenient to introduce a notation for annular neighbourhoods around the roots ${\beta_k}$ by setting

$\displaystyle \begin{array}{rcl} A_w(\beta_k):= \{\xi\in{\mathbb R}^2: \big| |\xi|-\beta_k \big| < w\},\end{array}$

where ${w>0}$ is a small real parameter. Now using the asymptotic expansion of ${\hat \sigma_1}$ it is easy to see that

$\displaystyle \hat\sigma_1(|\xi|) ^2 \gtrsim_w \frac{1}{|\xi|}\quad\mbox{whenever}\quad \xi \in B(0,A_w)^c \cap \big(\cup_k A_w(\beta_k) \big)^c,\ \ \ \ \ \ (3)$

where ${A_w>0}$ is a large constant, which may depend on ${w}$, chosen so that for ${|\xi|>A_w}$ the asymptotic expansion of ${\hat \sigma_1}$ is valid. On the other hand, for ${|\xi|\leq A_w}$ we have no asymptotic description of ${\hat \sigma_1(|\xi|) }$, but we know that its zeros are of the form ${|\xi|=\gamma_k}$ where ${\{\gamma_k\}}$ is a finite sequence without accumulation points. This can be seen by arguing via the ${0}$-th order Bessel function as before, or, alternatively, by noting that ${\hat \sigma_1(\xi)}$ is radial and can only have isolated zeros in the radial direction since it is the Fourier transform of a compactly supported measure. We conclude by compactness that

$\displaystyle \hat\sigma_1(|\xi|) ^2 \gtrsim_w 1 \quad\mbox{whenever}\quad \xi \in B(0,A_w) \cap \big(\cup_k A_w(\gamma_k) \big)^c. \ \ \ \ \ (4)$

Using (4) and (3) we can write for every small parameter ${w>0}$ the estimate

$\displaystyle \begin{array}{rcl} D_t(f_N,2)^2&\gtrsim_w&\frac{1}{N^2} \int_{B(0,A_w) \cap \big(\cup_k A_w(\gamma_k) \big)^c}|\hat f_N(\xi/t)|^2d\xi \\ \\ &&\quad +\frac{1}{N^2} \int_{B(0,A_w)^c \cap \big(\cup_k A_w(\beta_k) \big)^c}|\hat f_N(\xi/t)|^2 \frac{1}{|\xi|}d\xi. \end{array}$

Setting ${E_w := \bigcup_k A_w(\gamma_k) \cup \bigcup_k A_w(\beta_k) }$ the previous considerations imply that

$\displaystyle \begin{array}{rcl} D_t(f_N,2)^2 \gtrsim_w \frac{1}{t N^2} \int_{\{|\xi|<t\}\setminus E_w} |\hat f_N(\xi/t)|^2 d\xi.\end{array}$

Note that the implicit constant in the estimate above depends on the parameter ${w}$ which is yet to be chosen. The main idea to complete the proof is that for small enough ${w}$, the integral

$\displaystyle \begin{array}{rcl} \int_{\{|\xi|<t\}\setminus E_w} |\hat f_N(\xi/t)|^2 d\xi\end{array}$

captures a positive proportion of the integral

$\displaystyle \begin{array}{rcl} \int_{|\xi|<t} |\hat f_N(\xi/t)|^2 d\xi.\end{array}$

This can be formulated more generally for any reasonable function ${g}$ as follows:

Proposition 7 For ${d\geq 1}$ consider any ball ${B=B(0,R)\subset {\mathbb R}^d}$ and let ${g\in C^1(B)}$. Fix a finite sequence

$\displaystyle \begin{array}{rcl} \beta_0:= 0<\beta_1<\beta_2<\cdots<\beta_N<R,\end{array}$

and set ${\beta:= \min_{k=0,1,\ldots,N-1} (\beta_{k+1}-\beta_k).}$ Define as before the annuli

$\displaystyle \begin{array}{rcl} A_w(\beta_k):= \{\xi\in{\mathbb R}^d: \big| |\xi|-\beta_k \big| < w\}.\end{array}$

Then for ${0<w<\beta/3}$ we have

$\displaystyle \begin{array}{rcl} \int_B |g(x)|^2 dx \lesssim \int_{B\setminus\big( \cup_{k=1} ^N A_w(\beta_k) \big)}|g(x)|^2 dx +w^2\int_B |\nabla g(x)|^2 dx.\end{array}$

Before giving the proof of this proposition let us see how we can use it to complete the proof of Theorem 6. We have

$\displaystyle \begin{array}{rcl} \int_{\{|\xi|<t\}\setminus E_w} |\hat f_N(\xi/t)|^2 d\xi = t^2 \int_{B(0,1)\setminus \cup_k A_{w/t}(\delta_k/t)} |\hat f_N(\xi)|^2 d\xi,\end{array}$

where by (4) and (3) we see that ${\delta_k}$ is either ${\beta_k}$ or ${\gamma_k}$ and that in every case we have ${|\delta_k/t-\delta_{k-1}/t|\gtrsim 1/t}$. So for ${w<w_0}$, where ${w_0}$ is some numerical constant, the hypotheses of the proposition are satisfied with the roots ${\{\delta_k/t\}}$ and width ${w/t}$. We thus get

$\displaystyle \begin{array}{rcl} D_t(f_N,2)^2 &\gtrsim_w & \frac{1}{tN^2} t^2\bigg( \int_{B(0,1)}|\hat f_N(\xi)|^2d\xi -\frac{w^2}{t^2}\int_{{\mathbb R}^2}|\nabla \hat f_N(\xi)|^2d\xi \bigg) \\ \\ &\gtrsim& \frac{t}{N^2} \bigg(N^2-\frac{w^2}{t^2} \int_{Q_N} |x|^2 dx\bigg)= \frac{t}{N^2} \big(N^2-\frac{w^2}{t^2}N^4\big) \\ \\ &\simeq& t \big(1-\frac{N^2 w^2}{t^2}\big)\simeq t(1-w^2), \end{array}$

since ${t\simeq N}$. Thus ${D_t(f_N,2)\gtrsim_w \sqrt{t}}$ for ${w}$ small enough, but not depending on any of the parameters of the problem. We now give the proof of Proposition 7.

Proof: Fix ${w<\beta/3}$ and for ${1\leq n \leq N}$ consider the annulus ${A_s(\beta_n)}$ with ${s\in[w,2w]}$. For ${\beta_n-s\leq r \leq \beta_n +s}$ and ${u\in S^{d-1}}$ we have

$\displaystyle \begin{array}{rcl} g(ru)=g((\beta_n-s)u)+\int_{\beta_n-s} ^r \partial_t g(tu) dt. \end{array}$

Using ${(a+b)^2\lesssim a^2+b^2}$ and Cauchy-Schwarz we get

$\displaystyle \begin{array}{rcl} |g(ru)|^2 \lesssim |g((\beta_n-s)u)|^2 + 2s \int_{\beta_n-s} ^{\beta_n+s} |\partial_t g(tu)|^2 dt. \end{array}$

Multiplying both sides by ${r^{d-1}}$ and integrating for ${\beta_n-s\leq r \leq \beta_n+s}$ and ${u\in S^{d-1}}$ we have

$\displaystyle \begin{array}{rcl} \int_{A_s(\beta_n)}|g(x)|^2 dx& \lesssim &\int_{\beta_n-s} ^{\beta_n+s} \int_{S^{d-1}} |g((\beta_n-s)u)|^2 r^{d-1} dr d\sigma(u) \\ \\ &&+\,s \int_{\beta_n-s} ^{\beta_n+s} \int_{S^{d-1}} \int_{\beta_n-s} ^{\beta_n+s} |\nabla g(tu)|^2 dt\, r^{d-1} dr d\sigma(u). \end{array}$

Now for ${w\leq s \leq 2w }$ and ${r,t\in[\beta_n-s,\beta_n+s]}$ we have

$\displaystyle \begin{array}{rcl} r,t\leq \beta_n+2w\lesssim \beta_n+\beta \leq 2 \beta_{n}\end{array}$

and

$\displaystyle \begin{array}{rcl} r,t\geq \beta_{n}-2w \geq \beta_n-\tfrac{2}{3}(\beta_n-\beta_{n-1})\gtrsim \beta_n,\end{array}$

so that ${r\simeq t\simeq \beta_n}$. Thus the last estimate can be written in the form

$\displaystyle \int_{A_s(\beta_n)}|g(x)|^2 dx \lesssim_d \beta_n ^{d-1} s \int_{S^{d-1}} |g((\beta_n-s)u)|^2 d\sigma(u)+s^2\int_{A_s(\beta_n)}|\nabla g(x)|^2 dx \ \ \ \ \ (5)$

We integrate the previous inequality for ${s\in[w,2w]}$. For the left-hand side, since ${A_w(\beta_n)\subset A_s(\beta_n)}$ for ${s\geq w}$, we have

$\displaystyle \begin{array}{rcl} \int_w ^{2w} \int_{A_s(\beta_n)}|g(x)|^2 dx\, ds \gtrsim w\int_{A_w(\beta_n)}|g(x)|^2 dx. \end{array}$

For the first term on the right hand side we estimate

$\displaystyle \begin{array}{rcl} \beta_n ^{d-1} \int_w ^{2w} \int_{S^{d-1}} |g((\beta_n-s)u)|^2 d\sigma(u)\, s\, ds&\simeq &w \int_{\beta_n-2w} ^{\beta_n-w}\int_{S^{d-1}} |g(\xi u)|^2 \xi^{d-1}d\xi d\sigma(u) \\ \\ &=&w\int_{B(0,\beta_n-w)\setminus B(0,\beta_n-2w)} |g(x)|^2 dx \\ \\ &\leq &w\int_{B(0,\beta_n-w)\setminus B(0,\beta_{n-1}+w)} |g(x)|^2 dx, \end{array}$

since ${w<(\beta_n-\beta_{n-1})/3}$. Finally, for the last term on the right-hand side of (5) we have

$\displaystyle \begin{array}{rcl} \int_w ^{2w} \int_{A_s(\beta_n)}|\nabla g(x)|^2 dx s^2 ds &\lesssim& w^3 \int_{A_{2w}(\beta_n)} |\nabla g(x)|^2 dx. \end{array}$

Summing up the previous estimates we get for every ${n=1,2,\ldots,N}$:

$\displaystyle \begin{array}{rcl} w\int_{A_w(\beta_n)}|g(x)|^2 dx \lesssim w\int_{B(0,\beta_n-w)\setminus B(0,\beta_n-2w)} |g(x)|^2 dx+ w^3 \int_{A_{2w}(\beta_n)} |\nabla g(x)|^2 dx.\end{array}$

Observe that

$\displaystyle \begin{array}{rcl} \bigcup_{n=1} ^N \big(B(0,\beta_n-w)\setminus B(0,\beta_{n-1}+w)\big) \subset B\setminus \bigcup_{n=1} ^N A_w(\beta_n) .\end{array}$

Summing the previous estimates for ${n=1,2,\ldots,N}$ and dividing by ${w}$, we get

$\displaystyle \begin{array}{rcl} \int_{\cup_n A_w(\beta_n)}|g(x)|^2 dx \lesssim \int_{B\setminus \cup_n A_w(\beta_n)} |g(x)|^2 dx+ w^2 \int_{B} |\nabla g(x)|^2 dx.\end{array}$

Now adding the term ${\int_{B\setminus \cup A_w(\beta_n)} |g(x)|^2 dx}$ to both sides of the estimate above gives the claim of the proposition.
$\Box$
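The factor ${w^2}$ in front of the gradient term is sharp at the level of scaling: a function concentrated on a shell of width ${w}$ must pay a gradient of size ${1/w}$. For a one-dimensional Gaussian bump of width ${w}$ one even has the exact identity ${w^2\int |g'|^2 = \int |g|^2}$. The following stdlib-only sketch (our own illustration, not taken from the paper) checks this numerically:

```python
import math

def ratio(w):
    # compare w^2 * int g'^2 with int g^2 for the bump g(x) = exp(-(x/w)^2)
    n = 4000
    a, b = -10 * w, 10 * w            # the bump is negligible outside
    h = (b - a) / n
    g = lambda x: math.exp(-(x / w) ** 2)
    dg = lambda x: -2 * x / w ** 2 * g(x)
    mass = sum(g(a + (k + 0.5) * h) ** 2 for k in range(n)) * h
    grad = sum(dg(a + (k + 0.5) * h) ** 2 for k in range(n)) * h
    return w ** 2 * grad / mass

for w in (0.5, 0.1, 0.02):
    print(w, ratio(w))   # each ratio equals 1 up to quadrature error
```

So the loss ${w^2\int_B|\nabla g|^2}$ in the proposition cannot be improved by more than a constant: a ${g}$ sitting entirely on the removed annuli of width ${w}$ saturates it.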

5. Some open questions

We close the discussion on the discrepancy for checkerboard colorings by summarizing some of the open questions that we alluded to earlier:

Question 8 Is it true that for every ${t>1}$ there is a full circle with discrepancy ${\gtrsim \sqrt{t}}$? In particular, is it true that

$\displaystyle \begin{array}{rcl} D_t(f_N,2)\gtrsim \sqrt{t},\end{array}$

for all ${t>1}$?
We believe the answer is yes, but the methods discussed here do not seem sufficient to prove such a result.

Consider the ${L^p}$ discrepancies of circles defined in the obvious way

$\displaystyle \begin{array}{rcl} D_t(f_N,p):= \bigg(\frac{1}{N^2} \int |f_N*\sigma_t(x)|^p dx \bigg)^\frac{1}{p}.\end{array}$

As we just saw we have ${D_t(f_N,p)\gtrsim \sqrt{t}}$ for all ${2\leq p\leq \infty}$, at least whenever ${t\simeq N}$.
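To get some numerical feeling for these quantities one can estimate ${D_t(f_N,2)}$ for a random coloring by direct Monte Carlo. The sketch below is only an illustration of the definitions, not the method of the paper; the set-up (a random ${\pm 1}$ coloring of the unit cells of ${[0,N]^2}$ extended by zero, and ${\sigma_t}$ taken as arc-length measure on the circle of radius ${t}$, discretized by ${M}$ points) and all names are our own assumptions:

```python
import math, random

random.seed(0)
N = 20          # the coloring lives on the unit cells of [0, N]^2
t = N           # circle radius comparable to N

# random +/-1 coloring, extended by zero outside the board
eps = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(N)]
def f(x, y):
    i, j = int(math.floor(x)), int(math.floor(y))
    return eps[i][j] if 0 <= i < N and 0 <= j < N else 0

M = 400         # quadrature points on the circle; sigma_t has total mass 2*pi*t
cs = [(t * math.cos(2 * math.pi * k / M), t * math.sin(2 * math.pi * k / M))
      for k in range(M)]
def conv(x, y):
    # (f * sigma_t)(x, y) by discretizing arc-length measure
    return sum(f(x - dx, y - dy) for dx, dy in cs) * (2 * math.pi * t / M)

# Monte Carlo for D_t(f_N, 2)^2 = (1/N^2) * int |f_N * sigma_t|^2 dx
K = 300
box = (N + 2 * t) ** 2      # centers outside this square give conv = 0
S = 0.0
for _ in range(K):
    x = random.uniform(-t, N + t)
    y = random.uniform(-t, N + t)
    S += conv(x, y) ** 2
D2 = math.sqrt(S / K * box / N ** 2)
print(D2, math.sqrt(t))     # compare with the lower bound of order sqrt(t)
```

For a typical random coloring the estimate comes out comfortably above ${\sqrt t}$, consistent with (though of course not a proof of) the lower bound for ${t\simeq N}$.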

Question 9 Construct a coloring ${f_N}$ with ${N\simeq t}$, such that ${D_t(f_N,\infty)\simeq \sqrt{t}}$. A simple interpolation argument shows that if ${N\simeq t}$ then we have

$\displaystyle \begin{array}{rcl} D_t (f_N,p) \gtrsim t^\frac{1}{p'},\quad 1<p\leq 2.\end{array}$

Is this best possible? (Probably yes.) What is the correct order of magnitude of ${\inf_{f_N} D_t(f_N,1)}$?
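For completeness, here is how the interpolation step can be carried out, assuming only the trivial bound ${D_t(f_N,\infty)\lesssim t}$ (which holds since ${|f_N*\sigma_t|\leq \sigma_t({\mathbb R}^2)\simeq t}$); this is our reconstruction of the argument, not a quotation from the paper:

```latex
% Write F = f_N * \sigma_t. For 1 < p \le 2 split |F|^2 = |F|^{2-p} |F|^p:
\frac{1}{N^2}\int |F|^2\,dx
  \;\le\; \|F\|_\infty^{\,2-p}\,\frac{1}{N^2}\int |F|^p\,dx
  \;=\; D_t(f_N,\infty)^{2-p}\, D_t(f_N,p)^{p}.
% Since D_t(f_N,2)^2 \gtrsim t and D_t(f_N,\infty) \lesssim t when t \simeq N:
t \;\lesssim\; t^{\,2-p}\, D_t(f_N,p)^{p}
  \quad\Longrightarrow\quad
  D_t(f_N,p) \;\gtrsim\; t^{\frac{p-1}{p}} \;=\; t^{\frac{1}{p'}}.
```

Note that the ${1/N^2}$ normalization disappears in the ${L^\infty}$ norm as ${p\rightarrow\infty}$, which is why ${D_t(f_N,\infty)}$ can be read as ${\|f_N*\sigma_t\|_\infty}$ here.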

Consider the discrepancy of a segment on a checkerboard defined above:

$\displaystyle \begin{array}{rcl} D f_N (t,u) := \int f_N(tu+su^\perp)ds,\quad t\in{\mathbb R},u\in S^1,\end{array}$

and the corresponding ${L^p}$ discrepancies

$\displaystyle \begin{array}{rcl} D (f_N,p):= \bigg(\frac{1}{N} \int_{S^1} \int_{\mathbb R} |Df_N(t,u)|^p dt\, d\sigma(u)\bigg)^\frac{1}{p}.\end{array}$

As we saw in the introduction we have the bounds (up to ${\epsilon}$-losses)

$\displaystyle \begin{array}{rcl} \inf_{f_N} D(f_N,p) \simeq N^\frac{1}{2},\quad 2\leq p \leq \infty, \\ \\ \inf_{f_N} D(f_N,p) \simeq N^\frac{1}{p'},\quad 1<p\leq 2. \end{array}$

The following question is open:

Question 10 Is it true that

$\displaystyle \begin{array}{rcl} D(f_N,1)\gtrsim \log N,\quad N\rightarrow \infty,\end{array}$

for any coloring ${f_N}$ of the finite ${N\times N}$ checkerboard? Failing that, one could try to show that we have

$\displaystyle \begin{array}{rcl} D(f_N,1)\rightarrow +\infty,\quad N\rightarrow \infty,\end{array}$

for any coloring ${f_N}$.

[IK] Alex Iosevich and Mihail N. Kolountzakis, The discrepancy of a needle on a checkerboard. II, Unif. Distrib. Theory 5 (2010), no. 2.

[K] Mihail N. Kolountzakis, The discrepancy of a needle on a checkerboard, Online J. Anal. Comb. 3 (2008), Art. 7, 5 pp.

[KP] Mihail N. Kolountzakis and Ioannis Parissis, Circle Discrepancy for Checkerboard measures, (2012), preprint, arXiv:1201.5544.