# Basic Littlewood-Paley theory II: square functions

This is the second part of the series on basic Littlewood-Paley theory, which has been extracted from some lecture notes I wrote for a masterclass. In this part we will prove the Littlewood-Paley inequalities, namely that for any ${1 < p < \infty}$ it holds that

$\displaystyle \|f\|_{L^p (\mathbb{R})} \sim_p \Big\|\Big(\sum_{j \in \mathbb{Z}} |\Delta_j f|^2 \Big)^{1/2}\Big\|_{L^p (\mathbb{R})}. \ \ \ \ \ (\dagger)$

This time there are also plenty more exercises, some of which I think are fairly interesting (one of them is a theorem of Rudin in disguise).
Part I: frequency projections.

4. Smooth square function

In this subsection we will consider a variant of the square function appearing on the right-hand side of ($\dagger$), in which we replace the frequency projections ${\Delta_j}$ by better-behaved ones.

Let ${\psi}$ denote a smooth function that is compactly supported in ${[-4,-1/2] \cup [1/2, 4]}$ and identically equal to ${1}$ on ${[-2,-1] \cup [1,2]}$. We define the smooth frequency projections ${\widetilde{\Delta}_j}$ by stipulating

$\displaystyle \widehat{\widetilde{\Delta}_j f}(\xi) := \psi(2^{-j} \xi) \widehat{f}(\xi);$

notice that the function ${\psi(2^{-j} \xi)}$ is supported in ${[-2^{j+2},-2^{j-1}] \cup [2^{j-1}, 2^{j+2}]}$ and identically ${1}$ in ${[-2^{j+1},-2^{j}] \cup [2^{j}, 2^{j+1}]}$. The reason why such projections are better behaved is that the functions ${\psi(2^{-j}\xi)}$ are now smooth, unlike the characteristic functions ${\mathbf{1}_{[2^j,2^{j+1}]}}$. Indeed, they are actually Schwartz functions, and you can see by the Fourier inversion formula that ${\widetilde{\Delta}_j f = f \ast (2^{j} \widehat{\psi}(2^{j}\cdot))}$; the convolution kernel ${2^{j} \widehat{\psi}(2^{j}\cdot)}$ has ${L^1}$ norm independent of ${j}$, and therefore the operator is trivially ${L^p \rightarrow L^p}$ bounded for any ${1 \leq p \leq \infty}$ by Young's inequality, without having to resort to the boundedness of the Hilbert transform.
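A function ${\psi}$ with these properties is easy to construct from the classical ${e^{-1/t}}$ smooth transition function. The short Python sketch below builds one such choice (the particular formula is just a hypothetical instance satisfying the support conditions, not canonical) and verifies the support and identically-${1}$ properties numerically:

```python
import numpy as np

def h(t):
    """Smooth one-sided bump: exp(-1/t) for t > 0, vanishing for t <= 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    out[t > 0] = np.exp(-1.0 / t[t > 0])
    return out

def transition(t):
    """Smooth monotone transition: equal to 0 for t <= 0 and to 1 for t >= 1."""
    return h(t) / (h(t) + h(1.0 - t))

def psi(xi):
    """Supported in 1/2 <= |xi| <= 4, identically 1 on 1 <= |xi| <= 2."""
    a = np.abs(xi)
    return transition(2.0 * a - 1.0) * transition((4.0 - a) / 2.0)

xi = np.linspace(-8, 8, 20001)
# psi vanishes outside [-4, -1/2] u [1/2, 4] ...
assert np.all(psi(xi[(np.abs(xi) <= 0.5) | (np.abs(xi) >= 4)]) == 0)
# ... and is identically 1 on [-2, -1] u [1, 2]:
band = (np.abs(xi) >= 1) & (np.abs(xi) <= 2)
assert np.allclose(psi(xi[band]), 1.0)
# rescaling: psi(2^{-j} xi) is 1 on [2^j, 2^{j+1}]
j = 3
assert np.allclose(psi(2.0 ** (-j) * np.linspace(2 ** j, 2 ** (j + 1), 100)), 1.0)
```

Any other smooth bump with the same support behaviour would do just as well; only the two support properties enter the arguments below.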
We will show that the following smooth analogue of (one half of) ($\dagger$) is true (you can study the other half in Exercise 9).

Proposition 3 Let ${\widetilde{S}}$ denote the square function

$\displaystyle \widetilde{S}f := \Big(\sum_{j \in \mathbb{Z}} \big|\widetilde{\Delta}_j f \big|^2\Big)^{1/2}.$

Then for any ${1 < p < \infty}$ we have that the inequality

$\displaystyle \big\|\widetilde{S}f\big\|_{L^p(\mathbb{R})} \lesssim_p \|f\|_{L^p(\mathbb{R})} \ \ \ \ \ (1)$

holds for any ${f \in L^p(\mathbb{R})}$.

We will give two proofs of this fact, to illustrate different techniques. We remark that the boundedness will depend on the smoothness and the support properties of ${\psi}$ only, and as such extends to a larger class of square functions.

Proof: In this first proof we will view the inequality as one from ${\mathbb{C}}$-valued to ${\ell^2(\mathbb{Z})}$-valued functions and then apply vector-valued Calderón-Zygmund theory to it.
Consider the vector-valued operator ${\widetilde{\mathbf{S}}}$ given by

$\displaystyle \widetilde{\mathbf{S}}f(x) := \big(\widetilde{\Delta}_j f(x)\big)_{j \in \mathbb{Z}};$

then, since ${\|\widetilde{\mathbf{S}}f\|_{\ell^2(\mathbb{Z})} = \widetilde{S}f}$, the inequality we want to prove can be rephrased as

$\displaystyle \big\|\widetilde{\mathbf{S}}f \big\|_{L^p(\mathbb{R};\ell^2(\mathbb{Z}))} \lesssim_p \|f\|_{L^p(\mathbb{R})}. \ \ \ \ \ (2)$

The ${p=2}$ case is easy to prove: indeed, in this case the square of the left-hand side of (2) is simply ${\sum_{j} \|\widetilde{\Delta}_j f\|_{L^2(\mathbb{R})}^2}$, and by Plancherel’s theorem this is equal to

$\displaystyle \int_{\mathbb{R}} |\widehat{f}(\xi)|^2 \sum_{j \in \mathbb{Z}} |\psi(2^{-j} \xi) |^2 d\xi;$

the sum is clearly ${\lesssim 1}$ for any ${\xi \neq 0}$ (at most three terms are non-zero for each ${\xi}$), and thus we conclude by using Plancherel again.
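This count can be confirmed numerically for a concrete ${\psi}$. The sketch below (with a hypothetical bump ${\psi}$ of the prescribed kind, built from the ${e^{-1/t}}$ transition function) checks that at each frequency at most three terms of the sum are non-zero, so that the sum is at most ${3}$; since ${\psi = 1}$ on dyadic blocks that cover every ${\xi \neq 0}$, the sum is also at least ${1}$:

```python
import numpy as np

def h(t):
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    out[t > 0] = np.exp(-1.0 / t[t > 0])
    return out

def transition(t):
    # smooth, equal to 0 for t <= 0 and to 1 for t >= 1
    return h(t) / (h(t) + h(1.0 - t))

def psi(xi):
    # supported in 1/2 <= |xi| <= 4, identically 1 on 1 <= |xi| <= 2
    a = np.abs(xi)
    return transition(2.0 * a - 1.0) * transition((4.0 - a) / 2.0)

xi = np.concatenate([np.geomspace(1e-3, 1e3, 4001), -np.geomspace(1e-3, 1e3, 4001)])
js = np.arange(-20, 21)
vals = psi(np.outer(2.0 ** (-js), xi))   # vals[j, i] = psi(2^{-j} xi_i)
# psi(2^{-j} xi) != 0 forces 2^{-j}|xi| in (1/2, 4), an interval of
# length log2(8) = 3 in the variable j: at most 3 non-zero terms.
assert (vals != 0).sum(axis=0).max() <= 3
total = (vals ** 2).sum(axis=0)
assert total.max() <= 3.0
# psi = 1 on the dyadic blocks, which cover every xi != 0, so the sum is
# also bounded below (a fact relevant to the converse in Exercise 9):
assert total.min() >= 1.0
```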
Next, it remains to show that ${\widetilde{\mathbf{S}}}$ is given by convolution with a vector-valued kernel satisfying a condition analogous to ii) in the proof of Proposition 1 from the previous notes (we do not need i) and iii) because we have already obtained the ${L^2}$ boundedness by other means); a final appeal to vector-valued Calderón-Zygmund theory will conclude the proof. It is easy to see that the convolution kernel for ${\widetilde{\mathbf{S}}}$ is

$\displaystyle \widetilde{\mathbf{K}}(x) := (\widehat{\psi}_j(x))_{j\in\mathbb{Z}},$

where ${\widehat{\psi}_j(x)}$ denotes ${2^{j} \widehat{\psi}(2^{j}x)}$. We have to verify the vector-valued Hörmander condition

$\displaystyle \int_{|x|>2|y|} \|\widetilde{\mathbf{K}}(x-y) - \widetilde{\mathbf{K}}(x)\|_{B(\mathbb{C},\ell^2(\mathbb{Z}))} dx \lesssim 1.$

Recall that this is a consequence of the bound ${\|\widetilde{\mathbf{K}}'(x)\|_{B(\mathbb{C},\ell^2(\mathbb{Z}))} \lesssim |x|^{-2}}$, where the prime ${'}$ denotes the componentwise derivative. Since ${\|\widetilde{\mathbf{K}}(x)\|_{B(\mathbb{C},\ell^2(\mathbb{Z}))} = (\sum_{j} |\widehat{\psi}_j(x)|^2)^{1/2}}$ (prove this), we have

$\displaystyle \|\widetilde{\mathbf{K}}'(x)\|_{B(\mathbb{C},\ell^2(\mathbb{Z}))}^2 = \sum_{j \in \mathbb{Z}} 2^{4j} |{\widehat{\psi}}\,'(2^j x)|^2;$

thanks to the smoothness of ${\psi}$, we have ${|{\widehat{\psi}}\,'(x)| \lesssim (1+|x|)^{-100}}$ (or any other large positive exponent) and the estimate follows easily. We let you fill in the details in Exercise 4 below. $\Box$

For the second proof of the proposition, we will introduce a basic but very important tool – Khintchine’s inequality – that allows one to linearise objects of square function type without resorting to duality.

Lemma 4 (Khintchine’s inequality) Let ${(\epsilon_k (\omega))_{k\in \mathbb{N}}}$ be a sequence of independent identically distributed random variables over a probability space ${(\Omega, \mathbb{P})}$ taking values in the set ${\{+1,-1\}}$ and such that each value occurs with probability ${1/2}$. Then for any ${0 < p < \infty}$ we have that for any sequence of complex numbers ${(a_k)_{k\in\mathbb{N}}}$

$\displaystyle \Big(\mathbb{E}_\omega\Big[\Big|\sum_{k \in \mathbb{N}} \epsilon_k(\omega) a_k \Big|^p\Big]\Big)^{1/p} \sim_p \Big(\sum_{k \in \mathbb{N}} |a_k|^2 \Big)^{1/2},$

where ${\mathbb{E}_\omega}$ denotes the expectation with respect to ${\mathbb{P}}$.

In other words, by randomising the signs the expression ${\sum_k \pm a_k}$ behaves on average simply like the ${\ell^2}$ norm of the sequence ${(a_k)_k}$. See the exercises for some interesting uses of the lemma.

Proof: The proof relies on a clever use of the independence assumption.
Let ${E_p := \Big(\mathbb{E}_\omega\Big[\Big|\sum_{k \in \mathbb{N}} \epsilon_k(\omega) a_k \Big|^p\Big]\Big)^{1/p}}$ for convenience, and observe that ${E_2 = \Big(\sum_{k \in \mathbb{N}} |a_k|^2 \Big)^{1/2}}$ (because ${\mathbb{E}_\omega [\epsilon_k(\omega)\epsilon_{k'}(\omega)]= \delta_{k,k'}}$; cf. Exercise 1). First we have two trivial facts: when ${p>2}$ we have by Hölder’s inequality that ${E_2 \leq E_p}$, and when ${0 < p < 2}$ we have conversely that ${E_2 \geq E_p}$ by Jensen's inequality. Next, observe that to prove the ${\lesssim_p}$ part of the inequality it suffices to assume that the ${a_k}$'s are real-valued. We thus make this assumption and proceed to estimate ${E_p}$ by expressing the ${L^p}$ norm as an integral over the superlevel sets,

$\displaystyle E_p^p = \int \Big|\sum_{k \in \mathbb{N}} \epsilon_k(\omega) a_k\Big|^p d\mathbb{P}(\omega) = p \int_{0}^{\infty} \lambda^{p-1} \mathbb{P}\Big(\Big|\sum_{k \in \mathbb{N}} \epsilon_k(\omega) a_k\Big| > \lambda \Big) d\lambda.$

We split the level set in two and estimate ${\mathbb{P}\Big(\sum_{k \in \mathbb{N}} \epsilon_k(\omega) a_k > \lambda \Big)}$; observe that for any ${t>0}$ this is equal to ${ \mathbb{P}\Big(e^{t \sum_{k} \epsilon_k(\omega) a_k} > e^{t\lambda} \Big)}$, which we estimate by Markov’s inequality by

$\displaystyle e^{-t \lambda} \int e^{t \sum_k \epsilon_k(\omega)a_k} d\mathbb{P}(\omega).$

Since ${e^{t \sum_k \epsilon_k(\omega)a_k} = \prod_k e^{t \epsilon_k(\omega)a_k}}$, it follows by independence of the ${\epsilon_k}$'s (see footnote 1) that the integral equals ${\prod_k \int e^{t \epsilon_k(\omega)a_k} d\mathbb{P}(\omega)}$, and each factor is easily evaluated to be ${\cosh(t a_k)}$. Since ${\cosh(x) \leq e^{x^2 / 2}}$, we have shown that the above is controlled by

$\displaystyle e^{-t \lambda} \prod_{k \in \mathbb{N}} e^{\frac{1}{2} t^2 a_k^2} = e^{-t \lambda + \frac{1}{2} t^2 \sum_k a_k^2};$

if we choose ${t = \lambda / \|a\|_{\ell^2}^2}$, the above is controlled by ${\exp\big(-\frac{\lambda^2}{2 \|a\|_{\ell^2}^2} \big)}$. Inserting these bounds in the layer-cake representation of ${E_p^p}$ we have shown that

$\displaystyle E_p^p \leq 2p \int_{0}^{\infty} \lambda^{p-1} e^{-\frac{\lambda^2}{2 \|a\|_{\ell^2}^2}} d\lambda = \|a\|_{\ell^2}^p 2^{p/2} p \int_{0}^{\infty} s^{p/2 - 1} e^{-s} ds,$

which shows ${E_p \lesssim_p \|a\|_{\ell^2} = E_2}$ for all ${1 \leq p < \infty}$ (and therefore for all ${0 < p < \infty}$ when combined with Jensen's inequality, as explained at the beginning of the proof).
Finally, we have to prove ${E_2 \lesssim_p E_p}$ as well, and by the initial remarks we only need to do so in the regime $p \in (0,2)$. For such a ${p}$, we take some ${q}$ larger than $2$ and find a ${\theta \in [0,1]}$ such that ${\frac{1}{2} = \frac{1-\theta}{p} + \frac{\theta}{q}}$; by the logarithmic convexity of the ${L^p}$ norms we have then ${E_2 \leq E_p^{1-\theta} E_q^{\theta}}$, and since we have proven above that ${E_q \lesssim_q E_2}$ we can conclude. $\Box$
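For a finite sequence one can compute ${E_p}$ exactly by averaging over all ${2^n}$ sign patterns and watch the two-sided comparability at work. The following Python sketch does this for an arbitrary test sequence; the constant ${3\sqrt{p}}$ in it is a crude bound chosen for the test, not the sharp Khintchine constant:

```python
import itertools
import numpy as np

a = np.array([3.0, 1.0, -2.0, 0.5, 4.0, -1.5, 2.5, 0.25])  # arbitrary real sequence
n = len(a)
# all 2^n sign patterns: averaging over them computes E_omega exactly
signs = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
sums = signs @ a                    # the random variable sum_k eps_k a_k
E2 = np.linalg.norm(a)              # E_2 equals the l^2 norm of (a_k)

for p in [0.5, 1, 3, 4, 6]:
    Ep = (np.mean(np.abs(sums) ** p)) ** (1.0 / p)
    if p <= 2:
        assert Ep <= E2 + 1e-9      # Jensen direction
    else:
        assert Ep >= E2 - 1e-9      # Hoelder direction
    # two-sided comparability, with a crude p-dependent constant:
    assert Ep <= 3.0 * np.sqrt(max(p, 1.0)) * E2 and E2 <= 3.0 * Ep

# for p = 4 one can even expand the fourth moment by independence:
# E_4^4 = 3 (sum a_k^2)^2 - 2 sum a_k^4
E4_4 = np.mean(sums ** 4)
assert np.isclose(E4_4, 3 * (a ** 2).sum() ** 2 - 2 * (a ** 4).sum())
```

The exact ${p=4}$ identity in the last line is precisely the pairing count ${\mathbb{E}[\epsilon_i\epsilon_j\epsilon_k\epsilon_l] \neq 0}$ only when the indices match in pairs, the same mechanism as in the orthogonality observation ${E_2^2 = \sum_k |a_k|^2}$ above.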

Now we are ready to provide a second proof of the proposition.

Proof: We claim that it will suffice to prove for every ${\omega}$ that

$\displaystyle \Big\|\sum_{j \in \mathbb{Z}} \epsilon_j(\omega) \widetilde{\Delta}_j f \Big\|_{L^p(\mathbb{R})} \lesssim_p \|f\|_{L^p}$

with constant independent of ${\omega}$. Indeed, if this is the case, then we can raise the above inequality to the exponent ${p}$ and then apply the expectation ${\mathbb{E}_\omega}$ to both sides. The right-hand side remains ${\|f\|_{L^p}^p}$ because it does not depend on ${\omega}$, and the left-hand side becomes by linearity of expectation

$\displaystyle \int_{\mathbb{R}} \mathbb{E}_\omega\Big[ \Big|\sum_{j \in \mathbb{Z}} \epsilon_j(\omega) \widetilde{\Delta}_j f(x) \Big|^p\Big] dx,$

which by Khintchine’s inequality is comparable to ${\int |\widetilde{S}f(x)|^p dx}$, thus proving the claim.
To prove the ${L^p}$ boundedness of the operators ${\widetilde{T}_\omega f := \sum_{j \in \mathbb{Z}} \epsilon_j(\omega) \widetilde{\Delta}_j f}$ we appeal to a well-known result, namely the Hörmander-Mikhlin multiplier theorem. Indeed, ${\widetilde{T}_\omega}$ is given by convolution with the kernel ${K_\omega := \sum_{j \in \mathbb{Z}} \epsilon_j(\omega) \widehat{\psi}_j}$, whose Fourier transform is simply

$\displaystyle \widehat{K_\omega} (\xi) = \sum_{j \in \mathbb{Z}} \epsilon_j(\omega)\psi(2^{-j}\xi).$

For any ${\xi}$, there are at most 3 terms in the sum above that are non-zero; this and the other assumptions on ${\psi}$ readily imply that, uniformly in ${\omega}$,

$\displaystyle \begin{aligned} |\widehat{K_\omega}(\xi)| \lesssim & \; 1, \\ \Big|\frac{d}{d\xi} \widehat{K_\omega}(\xi)\Big| \lesssim & \; |\xi|^{-1}; \end{aligned}$

therefore ${\widehat{K_\omega}}$ is indeed a Hörmander-Mikhlin multiplier, and by the Hörmander-Mikhlin multiplier theorem we have that ${\widetilde{T}_{\omega}}$ is ${L^p \rightarrow L^p}$ bounded for ${1 < p < \infty}$ with constant uniform in ${\omega}$, and we are done. $\Box$

5. Littlewood-Paley square function

With the material developed in the previous subsections we are now ready to prove ($\dagger$). We restate it properly:

Theorem 5 Let ${1 < p < \infty}$ and let ${Sf}$ denote the Littlewood-Paley square function

$\displaystyle Sf := \Big(\sum_{j} |\Delta_j f|^2 \Big)^{1/2}.$

Then for all functions ${f \in L^p(\mathbb{R})}$ it holds that

$\displaystyle \|Sf\|_{L^p (\mathbb{R})} \sim_p \|f\|_{L^p(\mathbb{R})}. \ \ \ \ \ (\dagger)$

Proof: A first observation is that it will suffice to prove the ${\lesssim_p}$ part of the statement, thanks to duality. Indeed, since

$\displaystyle \|f\|_{L^p} = \sup_{\substack{g \in L^{p'} : \\ \|g\|_{p'}= 1}} \int f g \;dx,$

we can write

$\displaystyle \begin{aligned} \int f g \;dx = & \int \Big(\sum_{j} \Delta_j f \Big)\Big(\sum_{k} \Delta_k g \Big) dx = \sum_{j,k} \int \Delta_j f \, \Delta_k g \; dx = \int \sum_{j} \Delta_j f \, \Delta_j g \; dx \\ \leq & \int \Big(\sum_{j} |\Delta_j f|^2 \Big)^{1/2}\Big(\sum_{j} |\Delta_j g|^2 \Big)^{1/2} dx = \int Sf \, Sg \;dx, \end{aligned}$

where we have used the orthogonality of the projections in the third equality and then the Cauchy-Schwarz inequality. Now Hölder’s inequality shows that

$\displaystyle \int f g \;dx \leq \|Sf\|_{L^p} \|Sg\|_{L^{p'}},$

and by the assumed ${L^{p'} \rightarrow L^{p'}}$ boundedness of ${S}$ we can bound the right hand side by ${\lesssim_p \|Sf\|_{L^p} \|g\|_{L^{p'}} = \|Sf\|_{L^p}}$, thus concluding that ${\|f\|_{L^p} \lesssim_{p} \|Sf\|_{L^p}}$ as well. We could also have gone the opposite direction, namely if one can prove ${\|f\|_{L^p} \lesssim_{p} \|Sf\|_{L^p}}$ then one can also deduce ${\|Sf\|_{L^p} \lesssim_{p} \|f\|_{L^p}}$, but duality alone does not suffice – see Exercise 11 for details.
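The orthogonality step in the chain of equalities above can be watched at work in a discretised model: on a finite grid, sharp dyadic projections built with the FFT satisfy ${\int \Delta_j f \, \Delta_k g \, dx = 0}$ for ${j \neq k}$, and the diagonal sum recovers ${\int fg \, dx}$. The following Python sketch is a finite periodic model (not the ${\mathbb{R}}$ setting of the proof; all helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
# real signals with mean zero and no Nyquist component, so that the dyadic
# blocks {1}, {2,3}, {4..7}, ..., {64..127} exhaust their frequencies
def random_signal():
    c = np.zeros(N // 2 + 1, dtype=complex)
    c[1:N // 2] = rng.standard_normal(N // 2 - 1) + 1j * rng.standard_normal(N // 2 - 1)
    return np.fft.irfft(c, n=N)

def delta(f, j):
    """Sharp projection onto frequencies 2^j <= |n| < 2^{j+1} (discrete model)."""
    c = np.fft.rfft(f)
    mask = np.zeros_like(c)
    mask[2 ** j:min(2 ** (j + 1), N // 2 + 1)] = 1.0
    return np.fft.irfft(c * mask, n=N)

f, g = random_signal(), random_signal()
J = 7                                  # blocks j = 0..6 cover frequencies 1..127
inner = lambda u, v: np.mean(u * v)    # discrete analogue of \int u v dx

# off-diagonal terms vanish, up to round-off:
for j in range(J):
    for k in range(J):
        if j != k:
            assert abs(inner(delta(f, j), delta(g, k))) < 1e-12
# the diagonal terms sum back to \int f g dx:
diag = sum(inner(delta(f, j), delta(g, j)) for j in range(J))
assert np.isclose(diag, inner(f, g))
```

The round-off thresholds are of course arbitrary; the point is that the cross terms vanish identically because the frequency supports are disjoint.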
Now observe the following fundamental fact: if ${\widetilde{\Delta}_j}$ denotes the smooth frequency projection as defined in Section 4, then we have for any ${j}$

$\displaystyle \Delta_j\widetilde{\Delta}_j = \Delta_j ;$

indeed, recall that ${\widehat{\widetilde{\Delta}_j f}(\xi) = \psi(2^{-j} \xi) \widehat{f}(\xi)}$ and that ${\psi(2^{-j} \xi)}$ is identically equal to ${1}$ on the intervals ${[2^{j},2^{j+1}] \cup [-2^{j+1},-2^{j}]}$.
Using this fact, we can argue by boundedness of vector-valued frequency projections (that is, by Corollary 2 in the previous part of this series) that

$\displaystyle \Big\|\Big(\sum_{j} |\Delta_j f|^2 \Big)^{1/2}\Big\|_{L^p(\mathbb{R})} = \Big\|\Big(\sum_{j} |\Delta_j \widetilde{\Delta}_j f|^2 \Big)^{1/2}\Big\|_{L^p(\mathbb{R})} \lesssim_p \Big\|\Big(\sum_{j} |\widetilde{\Delta}_j f|^2 \Big)^{1/2}\Big\|_{L^p(\mathbb{R})};$

but the right-hand side is now the smooth Littlewood-Paley square function ${\widetilde{S}f}$, and by Proposition 3 it is bounded by ${\lesssim_p \|f\|_{L^p}}$, thus concluding the proof. $\Box$

6. Higher dimensional variants

So far we have essentially worked only in dimension ${d=1}$ (except for Proposition 1 from the previous part). There are however some generalisations to higher dimensions of the theorems above, whose proofs follow from similar arguments.

Let ${d>1}$. First of all, with ${\psi}$ as in Section 4, define the (smooth) annular frequency projections (see footnote 2) ${\widetilde{P}_j}$ by

$\displaystyle \widehat{\widetilde{P}_j f}(\xi) := \psi(2^{-j}|\xi|) \widehat{f}(\xi).$

(There is no need for these functions to be exactly radial, however, as long as they are uniformly smooth and compactly supported in the annulus ${\{ \xi : |\xi| \sim 2^j\}}$.) Then the corresponding square function satisfies the analogue of Proposition 3:

Proposition 6 Let ${\widetilde{\mathcal{S}}}$ denote the square function

$\displaystyle \widetilde{\mathcal{S}}f = \Big( \sum_{j \in \mathbb{Z}} \big|\widetilde{P}_j f \big|^2 \Big)^{1/2}.$

Then for any ${1 < p < \infty}$ we have for any ${f \in L^p(\mathbb{R}^d)}$ the inequality

$\displaystyle \big\|\widetilde{\mathcal{S}}f\big\|_{L^p(\mathbb{R}^d)} \lesssim_{p,d} \|f\|_{L^p(\mathbb{R}^d)}. \ \ \ \ \ (3)$

Either of the proofs given for Proposition 3 generalises effortlessly to this case.
Interestingly, the non-smooth analogue of ${\widetilde{\mathcal{S}}}$ does not satisfy the analogue of Theorem 5! The reason behind this fact is that the operator taking on the rôle of the Hilbert transform ${H}$ would be the so-called ball multiplier, given by

$\displaystyle T_B f(x):= \int_{\mathbb{R}^d} \mathbf{1}_{B(0,1)}(\xi) \widehat{f}(\xi) e^{2\pi i \xi \cdot x} d\xi; \ \ \ \ \ (4)$

however, it is a celebrated result of Charles Fefferman that the operator ${T_B}$ is only bounded when ${p=2}$, and unbounded otherwise. This result is very deep and interesting but we do not discuss it in these notes. One of the ingredients needed in the proof is nevertheless presented in Exercise 7, if you are interested.

Another way in which one can generalise Littlewood-Paley theory to higher dimensions is to take products of the dyadic intervals ${[2^k,2^{k+1}]}$. That is, let for convenience ${I_k}$ denote the dyadic Littlewood-Paley interval

$\displaystyle I_k := [2^{k}, 2^{k + 1}] \cup [-2^{k + 1}, -2^{k}];$

then for ${\boldsymbol{k} = (k_1,\ldots, k_d) \in \mathbb{Z}^d}$ one defines the “rectangle” ${R_{\boldsymbol{k}}}$ to be

$\displaystyle R_{\boldsymbol{k}} = I_{k_1} \times \ldots \times I_{k_d}$

and defines the square function ${S_{\mathrm{rect}}}$ to be

$\displaystyle S_{\mathrm{rect}}f := \Big( \sum_{\boldsymbol{k} \in \mathbb{Z}^d} |\Delta_{R_{\boldsymbol{k}}} f|^2 \Big)^{1/2}.$

It is easy to see that the above rectangles are all disjoint and that they tile ${\mathbb{R}^d}$ (up to a measure-zero set). Then we have the rectangular analogue of Theorem 5:

Theorem 7 For any ${1 < p < \infty}$ and all functions ${f \in L^p(\mathbb{R}^d)}$ it holds that

$\displaystyle \|S_{\mathrm{rect}} f\|_{L^p(\mathbb{R}^d)} \sim_{p,d} \|f\|_{L^p(\mathbb{R}^d)}.$

Proof: We will deduce the theorem from the one-dimensional case, that is from Theorem 5. With the same argument given there, one can see that it will suffice to establish the ${\lesssim_{p,d}}$ part of the inequalities.
First of all, with the same notation as in Lemma 4, when ${d=1}$ we define the operators

$\displaystyle T_\omega f := \sum_{j \in \mathbb{Z}} \epsilon_j(\omega) \Delta_j f.$

These operators are ${L^p(\mathbb{R}) \rightarrow L^p(\mathbb{R})}$ bounded for any ${1 < p < \infty}$ with constant independent of ${\omega}$, as a consequence of Theorem 5 (prove this in Exercise 12).
Next, when ${d>1}$, define ${\epsilon_{\boldsymbol{k}}(\omega):= \epsilon_{k_1}(\omega)\cdot \ldots \cdot \epsilon_{k_d}(\omega)}$. By Khintchine’s inequality, it will suffice to prove that the operator

$\displaystyle \mathcal{T}_\omega f := \sum_{\boldsymbol{k} \in \mathbb{Z}^d} \epsilon_{\boldsymbol{k}}(\omega) \Delta_{R_{\boldsymbol{k}}}f$

is ${L^p \rightarrow L^p}$ bounded independently of ${\omega}$ (you are invited to check that this is indeed enough). However, it is easy to see that

$\displaystyle \mathcal{T}_\omega = T^{(1)}_\omega \circ \ldots \circ T^{(d)}_\omega,$

where ${T^{(k)}_\omega}$ denotes the operator ${T_\omega}$ above applied to the ${x_k}$ variable. The boundedness of ${\mathcal{T}_\omega}$ thus follows from that of ${T_\omega}$ by integrating in one variable at a time. $\Box$
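In a discrete model the factorisation ${\mathcal{T}_\omega = T^{(1)}_\omega \circ \ldots \circ T^{(d)}_\omega}$ can be verified directly. The Python sketch below (a finite FFT model with ${d=2}$, not the continuous setting of the proof) checks that applying a random-sign dyadic multiplier in each variable separately agrees with the rectangle multiplier ${\sum_{\boldsymbol{k}} \epsilon_{k_1}\epsilon_{k_2} \mathbf{1}_{R_{\boldsymbol{k}}}}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
freqs = np.fft.fftfreq(N, d=1.0 / N)   # integer frequencies -N/2 .. N/2 - 1
J = 5                                  # dyadic blocks 2^j <= |n| < 2^{j+1}, j = 0..4

def sign_multiplier(eps):
    """1-D symbol sum_j eps_j 1_{I_j}(n), with I_j = [2^j, 2^{j+1}) u (-2^{j+1}, -2^j]."""
    m = np.zeros(N)
    for j, e in enumerate(eps):
        m[(np.abs(freqs) >= 2 ** j) & (np.abs(freqs) < 2 ** (j + 1))] = e
    return m

eps1 = rng.choice([-1.0, 1.0], size=J)
eps2 = rng.choice([-1.0, 1.0], size=J)
m1, m2 = sign_multiplier(eps1), sign_multiplier(eps2)
f = rng.standard_normal((N, N))

def apply_1d(f, m, axis):
    """T_omega acting in one variable only, via the FFT along that axis."""
    F = np.fft.fft(f, axis=axis)
    shape = [1, 1]
    shape[axis] = N
    return np.fft.ifft(F * m.reshape(shape), axis=axis)

# composition of the one-variable operators ...
g_comp = apply_1d(apply_1d(f, m1, axis=0), m2, axis=1)
# ... versus the rectangle multiplier sum_k eps_{k1} eps_{k2} 1_{R_k}:
g_rect = np.fft.ifft2(np.fft.fft2(f) * np.outer(m1, m2))
assert np.allclose(g_comp, g_rect)
```

The agreement is of course just the statement that the two diagonal Fourier multipliers compose into the product multiplier; this is the discrete shadow of the tensor structure exploited in the proof.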

In the next part of this series we will use the theory developed so far to prove the Marcinkiewicz multiplier theorem and Stein’s theorem on the spherical maximal function.

Exercises

Exercise 4 Fill in the missing details in the first proof of Proposition 3 (that is, the pointwise bound for ${\|\widetilde{\mathbf{K}}'(x)\|_{B(\mathbb{C},\ell^2(\mathbb{Z}))}}$).

Exercise 5 Look at the proof of Khintchine’s inequality (Lemma 4). What is the asymptotic behaviour as ${p \rightarrow \infty}$ of the constant ${C_p}$ in the inequality

$\displaystyle \Big(\mathbb{E}_\omega\Big[\Big|\sum_{k \in \mathbb{N}} \epsilon_k(\omega) a_k \Big|^p\Big]\Big)^{1/p} \leq C_p \Big(\sum_{k \in \mathbb{N}} |a_k|^2 \Big)^{1/2}$

obtained in that proof?

Exercise 6 Let ${T}$ be a linear operator that is ${L^p(\mathbb{R}^d) \rightarrow L^p(\mathbb{R}^d)}$ bounded for some ${1 \leq p < \infty}$. Show, using Khintchine's inequality and the linearity of expectation, that for any vector-valued function ${(f_{j})_{j \in \mathbb{Z}}}$ in ${L^p(\mathbb{R}^d;\ell^2(\mathbb{Z}))}$ we have

$\displaystyle \Big\|\Big(\sum_{j \in \mathbb{Z}}|Tf_{j}|^2\Big)^{1/2}\Big\|_{L^p(\mathbb{R}^d)} \lesssim \Big\|\Big(\sum_{j \in \mathbb{Z}}|f_{j}|^2\Big)^{1/2}\Big\|_{L^p(\mathbb{R}^d)}.$

In other words, Khintchine’s inequality provides us with an upgrade to vector-valued estimates, free of charge.

Exercise 7 Recall that the Hausdorff-Young inequality says that for any ${1 \leq p \leq 2}$ it holds that

$\displaystyle \big\|\widehat{f}\big\|_{L^{p'}} \leq \|f\|_{L^p}.$

Crucially, the inequality cannot hold for ${p>2}$ (notice that when ${p>2}$ one has ${p>2>p'}$) and this can be seen in many ways. In this exercise you will show this fact using Khintchine’s inequality – this is an example of the technique known as randomisation, which is useful to construct counterexamples.

1. Let ${\varphi}$ be a smooth function with compact support in the unit ball ${B(0,1)}$ and let ${N>0}$ be an integer. Choose vectors ${x_j \in \mathbb{R}^d}$ for ${j = 1, \ldots, N}$ such that the translated functions ${(\varphi(\cdot - x_j))_{j = 1, \ldots, N}}$ have pairwise disjoint supports and define the function

$\displaystyle \Phi_\omega(x) := \sum_{j=1}^{N} \epsilon_j(\omega) \varphi(x - x_j).$

Show that

$\displaystyle \| \Phi_\omega \|_{L^p} \sim N^{1/p}$

for any ${\omega}$.

2. Show, using Khintchine’s inequality, that

$\displaystyle \mathbb{E}_\omega \Big[\big\| \widehat{\Phi}_\omega \big\|_{L^{p'}}^{p'} \Big] \sim N^{{p'}/2}.$

3. Deduce that there is an ${\omega}$ (equivalently, a choice of signs in ${\sum_{j=1}^{N} \pm \varphi(\cdot - x_j)}$) such that ${\big\| \widehat{\Phi}_\omega \big\|_{L^{p'}} \sim N^{1/2}}$, and conclude that the Hausdorff-Young inequality cannot hold when ${p>2}$ by taking ${N}$ sufficiently large.
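In part 1, the disjointness of the supports makes the computation exact: ${|\Phi_\omega|^p = \sum_{j} |\varphi(x - x_j)|^p}$ pointwise, so ${\|\Phi_\omega\|_{L^p}^p = N \|\varphi\|_{L^p}^p}$ for every choice of signs. A quick numerical sketch of this (with one hypothetical bump ${\varphi}$, dimension ${d=1}$, and a Riemann-sum approximation of the norms):

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 10, 3.0
x = np.linspace(-2.0, 2.0 * N + 2.0, 48001)   # grid covering every translated bump
dx = x[1] - x[0]

def phi(t):
    """A smooth bump supported in (-1, 1) (one concrete choice)."""
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

centers = 2.0 * np.arange(1, N + 1)   # x_j = 2j, so the supports are disjoint
eps = rng.choice([-1.0, 1.0], size=N)
Phi = sum(e * phi(x - c) for e, c in zip(eps, centers))

lp_norm = lambda u: (np.sum(np.abs(u) ** p) * dx) ** (1.0 / p)
# the L^p norm is exactly N^{1/p} times that of a single bump, whatever the signs:
assert np.isclose(lp_norm(Phi), N ** (1.0 / p) * lp_norm(phi(x)), rtol=1e-9)
```

Note that the randomisation plays no rôle in this part; it only enters through the Fourier side, in parts 2 and 3.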

Exercise 8 Consider the circle ${\mathbb{T} = \mathbb{R} / \mathbb{Z}}$ and notice that Hölder’s inequality shows that when ${p \geq 2}$ we have ${\|f\|_{L^2(\mathbb{T})} \leq \|f\|_{L^p(\mathbb{T})}}$ for all functions ${f}$. Here we describe a class of functions for which the reverse inequality holds (which in particular implies that all the ${L^p}$ norms are comparable).
Let ${f}$ be a function on ${\mathbb{T}}$ such that ${\widehat{f}}$ is supported in the set ${\{3^k \, : k \in \mathbb{N}\}}$; in particular, the Fourier series of ${f}$ is given by ${\sum_{k \in \mathbb{N}} \widehat{f}(3^k) e^{2\pi i 3^k x}}$. You will show that for such functions one has

$\displaystyle \|f\|_{L^p(\mathbb{T})} \lesssim p^{1/2} \|f\|_{L^2(\mathbb{T})} \ \ \ \ \ (5)$

for any ${p\geq 2}$. The proof again rests on a randomisation trick enabled by Khintchine’s inequality.

1. Let ${(\epsilon_k(\omega))_{k \in \mathbb{N}}}$ be as in the statement of Lemma 4. Assume that there exist Borel measures ${(\mu_\omega)_{\omega \in \Omega}}$ such that ${\widehat{\mu_\omega}(3^k) = \epsilon_k(\omega)}$ for any ${k \in \mathbb{N}}$ and such that ${\|\mu_\omega\| \lesssim 1}$ uniformly in ${\omega}$. Show that for functions ${f}$ as above we can write

$\displaystyle f = \tilde{f}_\omega \ast \mu_\omega,$

where ${\tilde{f}_\omega}$ has Fourier series ${\sum_{k} \widehat{f}(3^k) \epsilon_k(\omega) e^{2\pi i 3^k x}}$.

2. Show that, always under the assumption that the measures ${(\mu_\omega)_{\omega \in \Omega}}$ exist, the above identity together with Young’s and Khintchine’s inequalities implies (5). The ${p^{1/2}}$ constant comes from Exercise 5.
3. It remains to show that the measures ${(\mu_\omega)_{\omega \in \Omega}}$ really exist, and you will do so by constructing them explicitly. Let ${(p_k^\omega(\theta))_{k\in\mathbb{N}}}$ be the collection of trigonometric polynomials given by

$\displaystyle p_k^\omega(\theta) := 1 + \frac{\epsilon_k(\omega)}{2}(e^{2\pi i 3^k \theta} + e^{-2\pi i 3^k \theta})$

and consider the limit ${\mu_\omega}$ given by

$\displaystyle \mu_\omega(\theta) = \lim_{K \rightarrow \infty} \prod_{k=0}^{K} p_k^\omega(\theta).$

Show that this limit exists in the weak sense, that is for any trigonometric polynomial ${q(\theta)}$ the limit ${\lim_K \langle q , \prod_{k=0}^{K} p_k^\omega(\theta) \rangle}$ exists and is unique.

4. Show that if an integer ${m \in \mathbb{Z}}$ admits an expression of the form

$\displaystyle m = 3^{n_1} \pm \ldots \pm 3^{n_\ell}$

for some integers ${0 \leq n_1 < \ldots < n_\ell}$ and some choice of signs, then such an expression is necessarily unique.

5. Show that, thanks to 4) above, ${\widehat{\mu_\omega}(3^k) = \epsilon_k(\omega)}$, as desired.
6. Show that ${\|\mu_\omega\| = \int |\mu_\omega|= 1}$ to conclude the proof.
7. Let ${(N_k)_{k \in \mathbb{N}} \subset \mathbb{N}}$ be a lacunary sequence, that is ${\liminf_{k \rightarrow \infty} N_{k+1} / N_k =:\rho > 1}$ (${\rho}$ is called the lacunarity constant of the sequence). Show that if ${\rho \geq 3}$ the proof above still works (step 4) in particular).
8. If ${3 > \rho > 1}$, show that one can decompose ${(N_k)_{k \in \mathbb{N}}}$ into ${O((\log_3 \rho)^{-1})}$ sequences of lacunarity constant at least 3. Conclude that (5) holds for any function supported on a lacunary sequence, although the constant depends on the lacunarity constant of the sequence.
9. Show that there exists a constant ${C>0}$ such that the functions ${f}$ of the above kind satisfy the following interesting exponential integrability property:

$\displaystyle \int_{\mathbb{T}} e^{C |f(\theta)|^2 / \|f\|_{L^2}^2} d\theta \lesssim 1.$

[hint: Taylor-expand the exponential.]
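Step 4 above is a balanced-ternary uniqueness statement, and for a finite range of exponents it can be confirmed by exhaustion. The sketch below checks that all signed sums of distinct powers of ${3}$ with exponents below ${7}$ are pairwise distinct; it also shows that the analogous statement for powers of ${2}$ fails, which is why the threshold ${\rho \geq 3}$ appears in steps 7 and 8:

```python
from itertools import product

# An expression m = 3^{n_1} +- ... +- 3^{n_l} with distinct exponents is the
# same thing as a digit string d_n in {-1, 0, +1}, not all zero: m = sum d_n 3^n.
M = 7
seen = {}
for digits in product((-1, 0, 1), repeat=M):
    if any(digits):
        m = sum(d * 3 ** n for n, d in enumerate(digits))
        assert m not in seen, f"collision: {digits} and {seen[m]} give m = {m}"
        seen[m] = digits
assert len(seen) == 3 ** M - 1   # all 3^M - 1 expressions give distinct integers

# with powers of 2 the count collapses: e.g. 2^1 - 2^0 = 2^0 = 1
twos = {sum(d * 2 ** n for n, d in enumerate(digits))
        for digits in product((-1, 0, 1), repeat=M) if any(digits)}
assert len(twos) < 3 ** M - 1
```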

Exercise 9 Show that if ${\psi}$ is chosen in such a way that

$\displaystyle \sum_{j \in \mathbb{Z}} |\psi(2^{-j}\xi)|^2 = c$

for all ${\xi\neq 0}$, then the converse of (1) is also true; that is, show that for any ${1 < p < \infty}$

$\displaystyle \|f\|_{L^p} \lesssim_p \|\widetilde{S}f\|_{L^p}.$

Finally, construct a function ${\psi}$ such that the above condition holds.
[hint: see the proof of Theorem 5.]

Exercise 10 Let ${T_B}$ denote the ball multiplier as in (4), that is the operator defined by ${\widehat{T_Bf} = \mathbf{1}_{B(0,1)} \widehat{f}}$. To keep things simple, we let the dimension be ${d=2}$. Assume that ${T_B}$ is ${L^p(\mathbb{R}^2) \rightarrow L^p(\mathbb{R}^2)}$ bounded for a certain ${1 < p < \infty}$ (in reality it is not, unless ${p=2}$, but Fefferman's proof of this fact proceeds by contradiction, so assume away). You will show that the following inequality would then also be true. Let ${(v_j)_{j \in \mathbb{N}}}$ be a collection of unit vectors in ${\mathbb{R}^2}$ and denote by ${\mathcal{H}_j}$ the half-plane ${\{x \in \mathbb{R}^2 \, : \, x \cdot v_j \geq 0\}}$; finally, let ${H_j}$ be the operator given by ${\widehat{H_j f}(\xi) = \mathbf{1}_{\mathcal{H}_j}(\xi) \widehat{f}(\xi)}$ (you can see that ${H_j}$ is essentially a Hilbert transform in the direction ${v_j}$). If ${T_B}$ is ${L^p(\mathbb{R}^2) \rightarrow L^p(\mathbb{R}^2)}$ bounded, then for any vector-valued function ${(f_j)_{j\in\mathbb{N}} \in L^p (\mathbb{R}^2; \ell^2(\mathbb{N}))}$ we have

$\displaystyle \Big\|\Big(\sum_{j \in \mathbb{N}} |H_j f_j|^2 \Big)^{1/2}\Big\|_{L^p(\mathbb{R}^2)} \lesssim \Big\| \Big(\sum_{j \in \mathbb{N}} |f_j|^2 \Big)^{1/2} \Big\|_{L^p(\mathbb{R}^2)}.$

The connection between ${T_B}$ and the ${H_j}$‘s is that, morally speaking, a very large ball looks like a half-plane near its boundary.

1. Argue that the functions ${f}$ such that ${\widehat{f}}$ is smooth and compactly supported are dense in ${L^p}$.
2. Let ${B_j^r}$ denote the ball of radius ${r}$ and center ${r v_j}$ (thus ${B^r_j}$ is tangent to the boundary of ${\mathcal{H}_j}$ at the origin for any ${r>0}$) and let ${T_j^r}$ be the operator given by ${\widehat{T_j^r f} = \mathbf{1}_{B_j^r} \widehat{f}}$. Show, using the Fourier inversion formula, that for functions ${f}$ as in 1) it holds that

$\displaystyle \lim_{r \rightarrow \infty} |H_j f(x) - T_j^r f(x)| = 0 \qquad \text{ for all }x.$

3. Argue by Fatou’s lemma that

$\displaystyle \Big\|\Big(\sum_{j \in \mathbb{N}} |H_j f_j|^2 \Big)^{1/2}\Big\|_{L^p(\mathbb{R}^2)} \leq \liminf_{r\rightarrow\infty} \Big\|\Big(\sum_{j \in \mathbb{N}} |T_j^r f_j|^2 \Big)^{1/2}\Big\|_{L^p(\mathbb{R}^2)}$

and conclude that, by 1), it will suffice to bound the right-hand side uniformly in ${r}$.

4. Let ${T_{B_r}}$ be the operator given by ${\widehat{T_{B_r} f} = \mathbf{1}_{B_r} \widehat{f}}$ with ${B_r}$ the ball of radius ${r}$ centred at the origin (a rescaled ball multiplier); show that ${T_{B_r}}$ has the same ${L^p(\mathbb{R}^2) \rightarrow L^p(\mathbb{R}^2)}$-norm of ${T_B = T_{B_1}}$.
5. Show that ${T_j^r = \mathrm{Mod}_{-r v_j} T_{B_r} \mathrm{Mod}_{r v_j}}$, where ${\mathrm{Mod}_{\xi} f(x) := e^{-2\pi i \xi\cdot x} f(x)}$.
6. Combine 5) with Exercise 6 to conclude.

Exercise 11 Let ${S}$ be the Littlewood-Paley square function. Show that the inequality that would imply ${\|Sf\|_{L^p} \lesssim_p \|f\|_{L^p}}$ by a duality argument is

$\displaystyle \Big\| \sum_{j \in \mathbb{Z}} \Delta_j g_j \Big\|_{L^{p'}} \lesssim_{p'} \Big\| \Big( \sum_{j \in \mathbb{Z}} |g_j|^2\Big)^{1/2} \Big\|_{L^{p'}}$

and not ${\|f\|_{L^{p'}} \lesssim_{p'} \|Sf\|_{L^{p'}}}$ as one could naively expect. Next, assume that ${\|f\|_{L^{p'}} \lesssim_{p'} \|Sf\|_{L^{p'}}}$ holds and use it to prove the inequality above.
[hint: use Proposition 1 of part I.]

Exercise 12 Show, using both ${\lesssim_p}$ and ${\gtrsim_p}$ inequalities of ($\dagger$), that the operators

$\displaystyle T_\omega f := \sum_{j \in \mathbb{Z}} \epsilon_j(\omega) \Delta_j f$

are ${L^p(\mathbb{R}) \rightarrow L^p(\mathbb{R})}$ bounded for ${1 < p < \infty}$, with constant independent of ${\omega}$. Notice that, unlike the operators ${\widetilde{T}_\omega}$, the operators ${T_\omega}$ are not Calderón-Zygmund operators in general (prove this for some special choice of signs), and therefore we really need to use Littlewood-Paley theory to prove they are bounded.

Footnotes:
1: Technically, we can only argue so for finite products; but we can assume that at most ${N}$ coefficients ${a_k}$ are non-zero and then take the limit ${N\rightarrow \infty}$ at the end of the argument.
2: We call them “annular” because ${\psi(2^{-j}|\xi|)}$ is smooth and supported in the annulus ${\{\xi \in \mathbb{R}^d : 2^{j-1}\leq |\xi| \leq 2^{j+2}\}}$.