# The Chang-Wilson-Wolff inequality using a lemma of Tao-Wright

Today I would like to introduce an important inequality from the theory of martingales that will be the subject of a few more posts. This inequality will further provide the opportunity to introduce a very interesting and powerful result of Tao and Wright – a sort of square-function characterisation for the Orlicz space $L(\log L)^{1/2}$.

## 1. The Chang-Wilson-Wolff inequality

Consider the collection $\mathcal{D}$ of standard dyadic intervals contained in $[0,1]$. For each $j \in \mathbb{N}$, we let $\mathcal{D}_j$ denote the subcollection of intervals $I \in \mathcal{D}$ such that $|I|= 2^{-j}$. Notice that these subcollections generate a filtration, namely $(\sigma(\mathcal{D}_j))_{j \in \mathbb{N}}$, where $\sigma(\mathcal{D}_j)$ denotes the sigma-algebra generated by the collection $\mathcal{D}_j$. We can associate to this filtration the conditional expectation operators

$\displaystyle \mathbf{E}_j f := \mathbf{E}[f \,|\, \sigma(\mathcal{D}_j)],$

and therefore define the martingale differences

$\displaystyle \mathbf{D}_j f:= \mathbf{E}_{j+1} f - \mathbf{E}_{j}f.$

With this notation, we have the formal telescopic identity

$\displaystyle f = \mathbf{E}_0 f + \sum_{j \in \mathbb{N}} \mathbf{D}_j f.$

Demystification: the expectation $\mathbf{E}_j f(x)$ is simply $\frac{1}{|I|} \int_I f(y) \,dy$, where $I$ is the unique dyadic interval in $\mathcal{D}_j$ such that $x \in I$.

Letting $f_j := \mathbf{E}_j f$ for brevity, the sequence of functions $(f_j)_{j \in \mathbb{N}}$ is called a martingale (hence the name “martingale differences” above) because it satisfies the martingale property that the conditional expectation of “future values” at the present time is the present value, that is

$\displaystyle \mathbf{E}_{j} f_{j+1} = f_j.$

In the following we will only be interested in functions with zero average, that is, functions such that $\mathbf{E}_0 f = 0$. Given such a function $f : [0,1] \to \mathbb{R}$, we can define its martingale square function $S_{\mathcal{D}}f$ to be

$\displaystyle S_{\mathcal{D}} f := \Big(\sum_{j \in \mathbb{N}} |\mathbf{D}_j f|^2 \Big)^{1/2}.$
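Since all the objects above are concrete, it can be instructive to check the definitions numerically in a discretised model (the grid size and the random test function below are my own illustrative choices, not anything from the text):

```python
import numpy as np

# Model [0,1) by 2^J equal cells; a function is the vector of its cell values.
J = 8
N = 2 ** J
rng = np.random.default_rng(0)
f = rng.standard_normal(N)

def E(g, j):
    """Conditional expectation E_j g: average g over each dyadic interval of length 2^-j."""
    return np.repeat(g.reshape(2 ** j, -1).mean(axis=1), g.size // 2 ** j)

f = f - E(f, 0)                               # enforce the zero-average condition E_0 f = 0

# Martingale differences D_j f = E_{j+1} f - E_j f; at level J the cells are atoms, so E_J f = f
D = [E(f, j + 1) - E(f, j) for j in range(J)]
assert np.allclose(sum(D), f)                 # telescoping identity: f = E_0 f + sum_j D_j f

assert np.allclose(E(E(f, 3), 2), E(f, 2))    # martingale property: E_j f_{j+1} = f_j

# Square function; the martingale differences are orthogonal in L^2, so ||S f||_2 = ||f||_2
Sf = np.sqrt(sum(d ** 2 for d in D))
assert np.isclose(np.mean(Sf ** 2), np.mean(f ** 2))
```

The last assertion is the (trivial) $p=2$ case of the inequality below, where the two sides are in fact equal by orthogonality.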

With these definitions in place we can state the Chang-Wilson-Wolff inequality as follows.

C-W-W inequality: Let ${f : [0,1] \to \mathbb{R}}$ be such that $\mathbf{E}_0 f = 0$. For any ${2\leq p < \infty}$ it holds that

$\displaystyle \boxed{\|f\|_{L^p([0,1])} \lesssim p^{1/2}\, \|S_{\mathcal{D}}f\|_{L^p([0,1])}.} \ \ \ \ \ \ (\text{CWW}_1)$

An important point about the above inequality is the dependence of the constant on the Lebesgue exponent ${p}$, which is sharp. This can be seen by taking a “lacunary” function ${f}$ (essentially one where $|\mathbf{D}_jf| = a_j$, a constant) and randomising the signs using Khintchine’s inequality (indeed, ${p^{1/2}}$ is precisely the asymptotic behaviour of the constant in Khintchine’s inequality; see Exercise 5 in the 2nd post on Littlewood-Paley theory).
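To spell out the sharpness heuristic, here is a worked version of the lacunary example (the specific choice of Rademacher functions $r_j(x) = \mathrm{sign}(\sin(2^{j+1}\pi x))$ as building blocks is mine, but it is the standard way to make the heuristic precise):

```latex
% Take f = \sum_{j < n} r_j with r_j the j-th Rademacher function; the r_j are
% genuine martingale differences for the dyadic filtration, so D_j f = r_j and
S_{\mathcal{D}} f = \Big( \sum_{j < n} |r_j|^2 \Big)^{1/2} = n^{1/2}
  \quad \text{(a constant function).}
% The r_j behave like i.i.d. random signs, so for n large the normalised sum
% n^{-1/2} f is approximately a standard Gaussian g, whose moments grow like
% \|g\|_{L^p} \sim p^{1/2}; hence
\|f\|_{L^p([0,1])} \sim p^{1/2} \, n^{1/2} = p^{1/2} \, \|S_{\mathcal{D}} f\|_{L^p([0,1])}
  \quad (2 \le p \lesssim n),
% showing that the constant p^{1/2} in (CWW_1) cannot be improved.
```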
It should be remarked that the inequality extends very naturally and with no additional effort to higher dimensions, in which $[0,1]$ is replaced by the unit cube $[0,1]^d$ and the dyadic intervals are replaced by the dyadic cubes. We will only be interested in the one-dimensional case here though.

# Representing points in a set in positional-notation fashion (a trick by Bourgain): part II

This is the second and final part of an entry dedicated to a very interesting and inventive trick due to Bourgain. In part I we saw a lemma on maximal Fourier projections due to Bourgain, together with the context it arises from (the study of pointwise ergodic theorems for polynomial sequences); we also saw a baby version of the idea to come, that we used to prove the Rademacher-Menshov theorem (recall that the idea was to represent the indices in the supremum in their binary positional notation form and to rearrange the supremum accordingly). Today we finally get to see Bourgain’s trick.

Before we start, recall the statement of Bourgain’s lemma:

Lemma 1 [Bourgain]: Let $K$ be a positive integer and let $\Lambda = \{\lambda_1, \ldots, \lambda_K \}$ be a set of ${K}$ distinct frequencies. Define the maximal frequency projections

$\displaystyle \mathcal{B}_\Lambda f(x) := \sup_{j} \Big|\sum_{k=1}^{K} (\mathbf{1}_{[\lambda_k - 2^{-j}, \lambda_k + 2^{-j}]} \widehat{f})^{\vee}\Big|,$

where the supremum is restricted to those ${j \geq j_0}$ with $j_0 = j_0(\Lambda)$ being the smallest integer such that $2^{-j_0} \leq \frac{1}{2}\min \{ |\lambda_k - \lambda_{k'}| : 1\leq k\neq k'\leq K \}$.
Then

$\displaystyle \|\mathcal{B}_\Lambda f\|_{L^2} \lesssim (\log \#\Lambda)^2 \|f\|_{L^2}.$

Here we are using the notation $(\mathbf{1}_{[\lambda_k - 2^{-j}, \lambda_k + 2^{-j}]} \widehat{f})^{\vee}$ in the statement in place of the expanded formula $\int_{|\xi - \lambda_k| < 2^{-j}} \widehat{f}(\xi) e^{2\pi i \xi x} d\xi$. Observe that by the definition of $j_0$ the intervals $[\lambda_k - 2^{-j_0}, \lambda_k + 2^{-j_0}]$ are pairwise disjoint (and $j_0$ is precisely the minimal such index, that is, $2^{-j_0}$ is the largest dyadic scale with this disjointness property).
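As a quick sanity check on the definition of $j_0$, here is a worked instance (the frequencies are of course chosen just for illustration):

```latex
% Take \Lambda = \{0, \tfrac{1}{4}, \tfrac{1}{2}\}, so that
\min \{ |\lambda_k - \lambda_{k'}| : k \neq k' \} = \tfrac{1}{4},
  \qquad \tfrac{1}{2} \cdot \tfrac{1}{4} = \tfrac{1}{8}.
% The smallest integer j_0 with 2^{-j_0} \le \tfrac{1}{8} is j_0 = 3; indeed the
% intervals of radius 2^{-3} = \tfrac{1}{8} around the three frequencies,
[-\tfrac18, \tfrac18], \quad [\tfrac18, \tfrac38], \quad [\tfrac38, \tfrac58],
% are disjoint (up to shared endpoints), while at the coarser scale 2^{-2} = \tfrac14
% the intervals [\lambda_k - \tfrac14, \lambda_k + \tfrac14] already overlap.
```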
We will need to do some reductions before we can get to the point where the trick makes its appearance. These reductions are the subject of the next section.

3. Initial reductions

A first important reduction is that we can safely replace the characteristic functions $\mathbf{1}_{[\lambda_k - 2^{-j}, \lambda_k + 2^{-j}]}$ by smooth bump functions with comparable support. Indeed, this is the result of a very standard square-function argument which was already essentially presented in Exercise 22 of the 3rd post on basic Littlewood-Paley theory. Briefly then, let $\varphi$ be a Schwartz function such that $\widehat{\varphi}$ is a smooth bump function compactly supported in the interval $[-1,1]$ and such that $\widehat{\varphi} \equiv 1$ on the interval $[-1/2, 1/2]$. Let $\varphi_j (x) := \frac{1}{2^j} \varphi \Big(\frac{x}{2^j}\Big)$ (so that $\widehat{\varphi_j}(\xi) = \widehat{\varphi}(2^j \xi)$) and let for convenience $\theta_j$ denote the difference $\theta_j := \mathbf{1}_{[-2^{-j}, 2^{-j}]} - \widehat{\varphi_j}$. We have that the difference

$\displaystyle \sup_{j\geq j_0(\Lambda)} \Big|\sum_{k=1}^{K} ((\mathbf{1}_{[\lambda_k - 2^{-j}, \lambda_k + 2^{-j}]} - \widehat{\varphi_j}(\cdot - \lambda_k)) \widehat{f})^{\vee}\Big|$

is an $L^2$ bounded operator with norm $O(1)$ (that is, independent of $K$). Indeed, observe that $\mathbf{1}_{[\lambda_k - 2^{-j}, \lambda_k + 2^{-j}]}(\xi) - \widehat{\varphi_j}(\xi - \lambda_k) = \theta_j (\xi - \lambda_k)$, and bounding the supremum by the $\ell^2$ sum we have that the $L^2$ norm (squared) of the operator above is bounded by

$\displaystyle \sum_{j \geq j_0(\Lambda)} \Big\|\sum_{k=1}^{K} (\theta_j(\cdot - \lambda_k)\widehat{f})^{\vee}\Big\|_{L^2}^2,$

where the summation in ${j}$ is restricted in the same way as the supremum is in the lemma (that is, the intervals $[\lambda_k - 2^{-j}, \lambda_k + 2^{-j}]$ must be pairwise disjoint). By an application of Plancherel we see that the above is equal to

$\displaystyle \sum_{k=1}^{K} \Big\| \widehat{f}(\xi) \Big[\sum_{j \geq j_0} \theta_j(\xi - \lambda_k) \Big]\Big\|_{L^2}^2;$

but notice that the functions $\theta_j$ have pairwise disjoint supports in ${j}$, and therefore the multiplier satisfies $\sum_{j\geq j_0} \theta_j(\xi - \lambda_k) \lesssim 1$ in a neighbourhood of $\lambda_k$ and vanishes outside of it. A final application of Plancherel allows us to conclude that the above is bounded by $\lesssim \|f\|_{L^2}^2$, by orthogonality (these neighbourhoods being pairwise disjoint as well).
By the triangle inequality, we see therefore that in order to prove Lemma 1 it suffices to prove that the operator

$\displaystyle \sup_{j} \Big|\sum_{k=1}^{K} (\widehat{\varphi_j}(\cdot - \lambda_k) \widehat{f})^{\vee}\Big|$

is $L^2$ bounded with norm at most $O((\log \#\Lambda)^2)$.

# Representing points in a set in positional-notation fashion (a trick by Bourgain): part I

If you are reading this blog, you have probably heard that Jean Bourgain – one of the greatest analysts of the last century – unfortunately passed away last December. It is fair to say that the progress of analysis will slow down significantly without him. I am not in any position to give a eulogy to this giant, but I thought it would be nice to commemorate him by talking occasionally on this blog about some of his many profound papers and his crazily inventive tricks. That’s something everybody agrees on: Bourgain was able to come up with a variety of insane tricks in a way that no one else could. The man was a problem solver and an overall magician: the first time you see one of his tricks, you don’t believe what’s happening in front of you. And that’s just the tricks part!

In this two-part post I am going to talk about a certain trick that, loosely speaking, involves representing points of an arbitrary set in a fashion similar to how integers are represented, say, in binary. I don’t know if this trick came straight out of Bourgain’s magical top hat or if he learned it from somewhere else; I haven’t seen it used elsewhere except for papers that cite Bourgain himself, so I’m inclined to attribute it to him – but please, correct me if I’m wrong.
Today we introduce the context for the trick (a famous lemma by Bourgain for maximal frequency projections on the real line) and present a toy version of the idea in a proof of the Rademacher-Menshov theorem. In the second part we will finally see the trick.

1. Ergodic averages along arithmetic sequences
First, some context. The trick I am going to talk about can be found in one of Bourgain’s major papers, among those cited in the motivation for his Fields Medal. I am talking about the paper on a.e. convergence of ergodic averages along arithmetic sequences. The main result of that paper is stated as follows: let $(X,T,\mu)$ be an ergodic system, that is

1. $\mu$ is a probability on $X$;
2. $T: X \to X$ satisfies $\mu(T^{-1} A) = \mu(A)$ for all $\mu$-measurable sets $A$ (this is the invariance condition);
3. $T^{-1} A = A$ implies $\mu(A) = 0 \text{ or } 1$ (this is the ergodicity condition).

Then the result is

Theorem: [Bourgain, ’89] Let $(X,T,\mu)$ be an ergodic system and let $p(n)$ be a polynomial with integer coefficients. If $f \in L^q(d\mu)$ with $q > 1$, then the averages $A_N f(x) := \frac{1}{N}\sum_{n=1}^{N}f(T^{p(n)} x)$ converge $\mu$-a.e. as $N \to \infty$; moreover, if ${T}$ is weakly mixing, we have more precisely

$\displaystyle \lim_{N \to \infty} A_N f(x) = \int_X f d\mu$

for $\mu$-a.e. ${x}$.

For comparison, the more classical pointwise ergodic theorem of Birkhoff states the same for the case $p(n) = n$ and $f \in L^1(d\mu)$ (notice this is the largest of the $L^p(X,d\mu)$ spaces because $\mu$ is finite), in which case the theorem is deduced as a consequence of the $L^1 \to L^{1,\infty}$ boundedness of the Hardy-Littlewood maximal function. The dense class to appeal to is roughly speaking $L^2(X,d\mu)$, thanks to the ergodic theorem of Von Neumann, which states $A_N f$ converges in $L^2$ norm for $f \in L^2(X,d\mu)$. However, the details are non-trivial. Heuristically, these ergodic theorems incarnate a quantitative version of the idea that the orbits $\{T^n x\}_{n\in\mathbb{N}}$ fill up the entire space ${X}$ uniformly. I don’t want to enter into details because here I am just providing some context for those interested; there are plenty of introductions to ergodic theory where these results are covered in depth.
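As a purely illustrative aside (this is not part of Bourgain’s argument, and the specific system is my own choice), one can watch the averages $A_N f$ converge for a simple ergodic system: the rotation $Tx = x + \alpha \bmod 1$ on $[0,1)$ with irrational $\alpha$, polynomial $p(n) = n^2$, and $f(x) = \cos(2\pi x)$, so that $\int f \, d\mu = 0$. Here the convergence follows from Weyl’s equidistribution theorem for $\{n^2\alpha\}$ (the rotation is ergodic, though not weakly mixing):

```python
import numpy as np

# Toy ergodic system: the rotation T x = x + alpha (mod 1) on X = [0,1)
# with Lebesgue measure, polynomial p(n) = n^2, and f(x) = cos(2*pi*x).
alpha = np.sqrt(2.0)        # an irrational rotation number
x0 = 0.123                  # an arbitrary starting point

def A(N, x=x0):
    """Ergodic average A_N f(x) = (1/N) * sum_{n=1}^N f(T^{p(n)} x) with p(n) = n^2."""
    n = np.arange(1, N + 1, dtype=np.float64)
    orbit = (x + n ** 2 * alpha) % 1.0       # T^{n^2} x = x + n^2 * alpha (mod 1)
    return np.cos(2 * np.pi * orbit).mean()

# By Weyl equidistribution of {n^2 * alpha}, A_N f(x) -> int f dmu = 0
for N in (100, 10_000, 1_000_000):
    print(N, A(N))
```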

# Basic Littlewood-Paley theory III: applications

This is the last part of a 3-part series on the basics of Littlewood-Paley theory. Today we discuss a couple of applications, namely the Marcinkiewicz multiplier theorem and the boundedness of the spherical maximal function (the latter being an application of frequency decompositions in general, and not so much of square functions – though one appears, but only for $L^2$ estimates, where one does not need the sophistication of Littlewood-Paley theory).
Part I: frequency projections
Part II: square functions

7. Applications of Littlewood-Paley theory

In this section we will present two applications of the Littlewood-Paley theory developed so far. You can find further applications in the exercises (see particularly Exercise 22 and Exercise 23).

7.1. Marcinkiewicz multipliers

Given an ${L^\infty (\mathbb{R}^d)}$ function ${m}$, one can define the operator ${T_m}$ given by

$\displaystyle \widehat{T_m f}(\xi) := m(\xi) \widehat{f}(\xi)$

for all ${f \in L^2(\mathbb{R}^d)}$. The operator ${T_m}$ is called a multiplier and the function ${m}$ is called the symbol of the multiplier. Since ${m \in L^\infty}$, Plancherel’s theorem shows that ${T_m}$ is a linear operator bounded in ${L^2}$; its definition can then be extended to ${L^2 \cap L^p}$ functions (which are dense in ${L^p}$ for $p < \infty$). A natural question to ask is: for which values of ${p}$ in ${1 \leq p \leq \infty}$ is the operator ${T_m}$ an ${L^p \rightarrow L^p}$ bounded operator? When ${T_m}$ is bounded in a certain ${L^p}$ space, we say that it is an ${L^p}$ multiplier.
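As a quick sanity check of the $L^2$ claim, here is a discrete toy model using the DFT in place of the Fourier transform (the grid size and the particular bounded symbol, loosely mimicking the shape $\tau/(\tau - 2\pi i |\xi|^2)$ in one variable, are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
f = rng.standard_normal(N)

# Discrete frequencies and a bounded symbol m; this particular choice has |m| < 1
# since its denominator always has nonzero imaginary part
xi = np.fft.fftfreq(N, d=1.0 / N)            # integer frequencies -N/2 .. N/2 - 1
m = xi / (xi - 2j * np.pi * (1.0 + xi ** 2))

def T(g, symbol):
    """The multiplier T_m g, defined on the Fourier side by (T_m g)^ = m * g^."""
    return np.fft.ifft(symbol * np.fft.fft(g))

# The symbol m = 1 gives back g, and Plancherel gives ||T_m g||_2 <= sup|m| * ||g||_2
assert np.allclose(T(f, np.ones(N)), f)
assert np.linalg.norm(T(f, m)) <= np.abs(m).max() * np.linalg.norm(f) + 1e-9
```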

The operator ${T_m}$ introduced in Section 1 of the first post in this series is an example of a multiplier, with symbol ${m(\xi,\tau) = \tau / (\tau - 2\pi i |\xi|^2)}$. It is the linear operator that satisfies the formal identity $T \circ (\partial_t - \Delta) = \partial_t$. We have seen that it cannot be a (euclidean) Calderón-Zygmund operator, and thus in particular it cannot be a Hörmander-Mikhlin multiplier. This can be seen more directly by the fact that any Hörmander-Mikhlin condition of the form ${|\partial^{\alpha}m(\xi,\tau)| \lesssim_\alpha |(\xi,\tau)|^{-|\alpha|} = (|\xi|^2 + \tau^2)^{-|\alpha|/2}}$ is clearly incompatible with the rescaling invariance of the symbol ${m}$, which satisfies ${m(\lambda \xi, \lambda^2 \tau) = m(\xi,\tau)}$ for any ${\lambda \neq 0}$. However, the derivatives of ${m}$ actually satisfy some other superficially similar conditions that are of interest to us. Indeed, letting ${(\xi,\tau) \in \mathbb{R}^2}$ for simplicity, we can see for example that ${\partial_\xi \partial_\tau m(\xi, \tau) = \lambda^3 \partial_\xi \partial_\tau m(\lambda\xi, \lambda^2\tau)}$. When ${|\tau|\lesssim |\xi|^2}$ we can therefore argue that ${|\partial_\xi \partial_\tau m(\xi, \tau)| = |\xi|^{-3} |\partial_\xi \partial_\tau m(1, \tau |\xi|^{-2})| \lesssim |\xi|^{-1} |\tau|^{-1} \sup_{|\eta|\lesssim 1} |\partial_\xi \partial_\tau m(1, \eta)|}$, and similarly when ${|\tau|\gtrsim |\xi|^2}$; this shows that for any ${(\xi, \tau)}$ with ${\xi,\tau \neq 0}$ one has

$\displaystyle |\partial_\xi \partial_\tau m(\xi, \tau)| \lesssim |\xi|^{-1} |\tau|^{-1}.$

This condition is comparable with the corresponding Hörmander-Mikhlin condition only when ${|\xi| \sim |\tau|}$, and is vastly different otherwise, being of product type (also notice that the inequality above is compatible with the rescaling invariance of ${m}$, as it should be).

# Basic Littlewood-Paley theory II: square functions

This is the second part of the series on basic Littlewood-Paley theory, which has been extracted from some lecture notes I wrote for a masterclass. In this part we will prove the Littlewood-Paley inequalities, namely that for any ${1 < p < \infty}$ it holds that

$\displaystyle \|f\|_{L^p (\mathbb{R})} \sim_p \Big\|\Big(\sum_{j \in \mathbb{Z}} |\Delta_j f|^2 \Big)^{1/2}\Big\|_{L^p (\mathbb{R})}. \ \ \ \ \ (\dagger)$

This time there are also plenty more exercises, some of which I think are fairly interesting (one of them is a theorem of Rudin in disguise).
Part I: frequency projections.

4. Smooth square function

In this subsection we will consider a variant of the square function appearing at the right-hand side of ($\dagger$) where we replace the frequency projections ${\Delta_j}$ by better behaved ones.

Let ${\psi}$ denote a smooth function with the properties that ${\psi}$ is compactly supported in the intervals ${[-4,-1/2] \cup [1/2, 4]}$ and is identically equal to ${1}$ on the intervals ${[-2,-1] \cup [1,2]}$. We define the smooth frequency projections ${\widetilde{\Delta}_j}$ by stipulating

$\displaystyle \widehat{\widetilde{\Delta}_j f}(\xi) := \psi(2^{-j} \xi) \widehat{f}(\xi);$

notice that the function ${\psi(2^{-j} \xi)}$ is supported in ${[-2^{j+2},-2^{j-1}] \cup [2^{j-1}, 2^{j+2}]}$ and identically ${1}$ in ${[-2^{j+1},-2^{j}] \cup [2^{j}, 2^{j+1}]}$. The reason why such projections are better behaved resides in the fact that the functions ${\psi(2^{-j}\xi)}$ are now smooth, unlike the characteristic functions ${\mathbf{1}_{[2^j,2^{j+1}]}}$. Indeed, they are actually Schwartz functions and you can see by Fourier inversion formula that ${\widetilde{\Delta}_j f = f \ast (2^{j} \widehat{\psi}(2^{j}\cdot))}$; the convolution kernel ${2^{j} \widehat{\psi}(2^{j}\cdot)}$ is uniformly in ${L^1}$ and therefore the operator is trivially ${L^p \rightarrow L^p}$ bounded for any ${1 \leq p \leq \infty}$ by Young’s inequality, without having to resort to the boundedness of the Hilbert transform.
We will show that the following smooth analogue of (one half of) ($\dagger$) is true (you can study the other half in Exercise 6).

Proposition 3 Let ${\widetilde{S}}$ denote the square function

$\displaystyle \widetilde{S}f := \Big(\sum_{j \in \mathbb{Z}} \big|\widetilde{\Delta}_j f \big|^2\Big)^{1/2}.$

Then for any ${1 < p < \infty}$ we have that the inequality

$\displaystyle \big\|\widetilde{S}f\big\|_{L^p(\mathbb{R})} \lesssim_p \|f\|_{L^p(\mathbb{R})} \ \ \ \ \ (1)$

holds for any ${f \in L^p(\mathbb{R})}$.

We will give two proofs of this fact, to illustrate different techniques. We remark that the boundedness will depend on the smoothness and the support properties of ${\psi}$ only, and as such extends to a larger class of square functions.

# Basic Littlewood-Paley theory I: frequency projections

I have written some notes on Littlewood-Paley theory for a masterclass, which I thought I would share here as well. This is the first part, covering some motivation, the case of a single frequency projection and its vector-valued generalisation. References I have used in preparing these notes include Stein’s “Singular integrals and differentiability properties of functions”, Duoandikoetxea’s “Fourier Analysis”, Grafakos’ “Classical Fourier Analysis” and as usual some material by Tao, both from his blog and the notes for his courses. Prerequisites are some basic Fourier transform theory, Calderón-Zygmund theory of euclidean singular integrals and its vector-valued generalisation (to Hilbert spaces; we won’t need Banach spaces).

0. Introduction
Harmonic analysis makes fundamental use of divide-et-impera approaches. A particularly fruitful one is the decomposition of a function in terms of the frequencies that compose it, which is prominently incarnated in the theory of the Fourier transform and Fourier series. In many applications, however, it is not necessary or even useful to resolve the function ${f}$ at the level of single frequencies: it suffices instead to consider how wildly different frequency components behave. One example of this is the (formal) decomposition of functions of ${\mathbb{R}}$ given by

$\displaystyle f = \sum_{j \in \mathbb{Z}} \Delta_j f,$

where ${\Delta_j f}$ denotes the operator

$\displaystyle \Delta_j f (x) := \int_{\{\xi \in \mathbb{R} : 2^j \leq |\xi| < 2^{j+1}\}} \widehat{f}(\xi) e^{2\pi i \xi \cdot x} d\xi,$

commonly referred to as a (dyadic) frequency projection. Thus ${\Delta_j f}$ represents the portion of ${f}$ with frequencies of magnitude ${\sim 2^j}$. The Fourier inversion formula can be used to justify the above decomposition if, for example, ${f \in L^2(\mathbb{R})}$. Heuristically, since any two ${\Delta_j f, \Delta_{k} f}$ oscillate at significantly different frequencies when ${|j-k|}$ is large, we would expect that for most ${x}$‘s the different contributions to the sum cancel out more or less randomly; a probabilistic argument typical of random walks (see Exercise 1) leads to the conjecture that ${|f|}$ should behave “most of the time” like ${\Big(\sum_{j \in \mathbb{Z}} |\Delta_j f|^2 \Big)^{1/2}}$ (the last expression is an example of a square function). While this is not true in a pointwise sense, we will see in these notes that the two are indeed interchangeable from the point of view of ${L^p}$-norms: more precisely, we will show that for any ${1 < p < \infty}$ it holds that

$\displaystyle \boxed{ \|f\|_{L^p (\mathbb{R})} \sim_p \Big\|\Big(\sum_{j \in \mathbb{Z}} |\Delta_j f|^2 \Big)^{1/2}\Big\|_{L^p (\mathbb{R})}. }\ \ \ \ \ (\dagger)$

This is a result historically due to Littlewood and Paley, which explains the name given to the related theory. The ${p=2}$ case is immediate thanks to Plancherel’s theorem, to which the statement is essentially equivalent. Therefore one can interpret the above as a substitute for Plancherel’s theorem in generic ${L^p}$ spaces when ${p\neq 2}$.
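The $p=2$ statement can even be verified in a discrete model, where the DFT frequencies split into dyadic blocks $2^j \leq |\xi| < 2^{j+1}$ (again just an illustrative sketch with an arbitrary test function of my choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 512                                       # grid size, a power of two
f = rng.standard_normal(N)
fhat = np.fft.fft(f)
xi = np.fft.fftfreq(N, d=1.0 / N)             # integer frequencies -N/2 .. N/2 - 1

def Delta(j):
    """Dyadic frequency projection: keep the frequencies with 2^j <= |xi| < 2^{j+1}."""
    mask = (2 ** j <= np.abs(xi)) & (np.abs(xi) < 2 ** (j + 1))
    return np.fft.ifft(np.where(mask, fhat, 0.0))

# The blocks j = 0, ..., log2(N) - 1 tile all the nonzero frequencies, so the
# projections sum back to f minus its mean (the xi = 0 component) ...
projections = [Delta(j) for j in range(int(np.log2(N)))]
assert np.allclose(sum(projections), f - f.mean())

# ... and since the pieces have disjoint frequency supports, Plancherel gives
# ||f - mean||_2^2 = sum_j ||Delta_j f||_2^2, i.e. the p = 2 case of (dagger)
square = sum(np.abs(d) ** 2 for d in projections)
assert np.isclose(square.sum(), np.sum((f - f.mean()) ** 2))
```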

In developing a framework that allows one to prove ($\dagger$) we will encounter some variants of the square function above, including ones with smoother frequency projections that are useful in a variety of contexts. We will moreover show some applications of the above fact and its variants. One of these applications will be a proof of the boundedness of the spherical maximal function ${\mathscr{M}_{\mathbb{S}^{d-1}}}$ (almost verbatim the one on Tao’s blog).

Notation: We will use ${A \lesssim B}$ to denote the estimate ${A \leq C B}$ where ${C>0}$ is some absolute constant, and ${A\sim B}$ to denote the fact that ${A \lesssim B \lesssim A}$. If the constant ${C}$ depends on a list of parameters ${L}$ we will write ${A \lesssim_L B}$.