Proof of the square-function characterisation of L(log L)^{1/2}: part I

This is a follow-up on the post on the Chang-Wilson-Wolff inequality and how it can be proven using a lemma of Tao-Wright. The latter consists of a square-function characterisation of the Orlicz space L(\log L)^{1/2} analogous in spirit to the better known one for the Hardy spaces.
In this post we will commence the proof of the Tao-Wright lemma, as promised. We will start by showing how the lemma, which is stated for smooth frequency projections, can be deduced from its discrete variant stated in terms of Haar coefficients (or equivalently, martingale differences with respect to the standard dyadic filtration). This is a minor part of the overall argument but it is slightly tricky so I thought I would spell it out.

Recall that the Tao-Wright lemma is as follows. We write \widetilde{\Delta}_j f for the smooth frequency projection defined by \widehat{\widetilde{\Delta}_j f}(\xi) = \widehat{\psi}(2^{-j}\xi) \widehat{f}(\xi) , where \widehat{\psi} is a smooth function compactly supported in 1/2 \leq |\xi| \leq 4 and identically equal to 1 on 1 \leq |\xi| \leq 2 .
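For later reference, note that on the spatial side this reads

\displaystyle \widetilde{\Delta}_j f = f \ast \psi_j, \qquad \psi_j(x) := 2^j \psi(2^j x),

where \psi_j is the function whose Fourier transform is \widehat{\psi}(2^{-j}\cdot) (here we assume the normalisation of the Fourier transform for which \widehat{f \ast g} = \widehat{f}\,\widehat{g}); this identity will be used repeatedly below.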


Lemma 1 – Square-function characterisation of L(\log L)^{1/2} [Tao-Wright, 2001]:
For any j \in \mathbb{Z} let

\displaystyle  \phi_j(x) := \frac{2^j}{(1 + 2^j |x|)^{3/2}}

(notice \phi_j is concentrated in [-2^{-j},2^{-j}] and \|\phi_j\|_{L^1} \sim 1).
If the function {f} is in L(\log L)^{1/2}([-R,R]) and such that \int f(x) \,dx = 0 , then there exists a collection (F_j)_{j \in \mathbb{Z}} of non-negative functions such that:

  1. pointwise for any j \in \mathbb{Z}

    \displaystyle \big|\widetilde{\Delta}_j f\big| \lesssim F_j \ast \phi_j ;

  2. they satisfy the square-function estimate

    \displaystyle  \Big\|\Big(\sum_{j \in \mathbb{Z}} |F_j|^2\Big)^{1/2}\Big\|_{L^1} \lesssim \|f\|_{L(\log L)^{1/2}}.


The 3/2 exponent in the definition of \phi_j above is not very important: any exponent larger than 1 would work for the arguments in the Tao-Wright paper. Indeed, below we will show that the lemma above can be proven with an arbitrarily large exponent.
As mentioned before, Lemma 1 is a consequence of a discrete version of the lemma, which we now state. Recall that {h_I} denotes the Haar function associated to the dyadic interval {I}, that is

\displaystyle h_I := \frac{1}{|I|^{1/2}} (\mathbf{1}_{I_{+}} - \mathbf{1}_{I_{-}}),

where {I_{+}, I_{-}} are respectively the left and right halves of {I}.
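For later use we also record the basic facts that

\displaystyle \int h_I \,dx = 0, \qquad \|h_I\|_{L^2} = 1,

and that, by martingale convergence, any f \in L^1([0,1]) with \int f = 0 can be expanded as \sum_{I \text{ dyadic},\, I \subseteq [0,1]} \langle f, h_I \rangle \, h_I, with convergence in L^1; this expansion will be used below.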


Lemma 2 – Square-function characterisation of L(\log L)^{1/2} for martingale differences:
For any function f : [0,1] \to \mathbb{R} in L(\log L)^{1/2}([0,1]) there exists a collection (F_j)_{j \in \mathbb{N}} of non-negative functions such that:

  1. for any j \in \mathbb{N} and any I \in \mathcal{D}_j (the collection of dyadic subintervals of [0,1] of length 2^{-j})

    \displaystyle  |\langle f, h_I \rangle|\lesssim \frac{1}{|I|^{1/2}} \int_{I} F_j \,dx;

  2. they satisfy the square-function estimate

    \displaystyle  \Big\|\Big(\sum_{j \in \mathbb{N}} |F_j|^2\Big)^{1/2}\Big\|_{L^1} \lesssim \|f\|_{L(\log L)^{1/2}([0,1])}.

We also recall that condition 1. can be rephrased as |\mathbf{D}_j f| \lesssim \mathbf{E}_j F_j in the notation of the previous post.
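For the reader's convenience, here is the computation behind this rephrasing (with \mathbf{D}_j f = \sum_{I \in \mathcal{D}_j} \langle f, h_I \rangle \, h_I the martingale difference and \mathbf{E}_j the conditional expectation at scale 2^{-j}, as in the previous post): for x \in I with I \in \mathcal{D}_j, condition 1. gives

\displaystyle |\mathbf{D}_j f(x)| = \frac{|\langle f, h_I \rangle|}{|I|^{1/2}} \lesssim \frac{1}{|I|} \int_I F_j \,dx = \mathbf{E}_j F_j(x).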

The idea of the proof that Lemma 2 implies Lemma 1 is quite simply to apply Lemma 2 to translations of {f} and then average the results (translated back) with respect to the translation parameter. This is of course an oversimplification – we will need to average over a well-chosen subset of translations in order to get good estimates.

Recall that {\psi_j(x)} denotes the rescaled function {2^j \psi (2^j x)}. We will build the functions F_j explicitly. We are given a function {f \in L (\log L)^{1/2}} with average zero, which for convenience we assume to be supported in [1/3,2/3], and for {\theta \in [-1/3,1/3]} we define its translation {g^{\theta}(x) := f(x-\theta)} (the 1/3 has no particular significance; the point is simply that g^\theta is still supported in [0,1], so that Lemma 2 applies to it, and still has zero average, a fact we will use shortly). Using Lemma 2 on a given {g^\theta} we obtain a collection of functions {(G_k^{\theta})_{k \in \mathbb{N}}} satisfying {|\langle g^\theta, h_I\rangle| \lesssim |I|^{-1/2} \int_I G_k^\theta} for every I \in \mathcal{D}_k, together with the square-function estimate {\big\|\big(\sum_{k}|G_k^\theta|^2\big)^{1/2}\big\|_{L^1} \lesssim \|g^\theta\|_{L(\log L)^{1/2}}} = \|f\|_{L(\log L)^{1/2}}. The functions F_j will end up being averages of these – not just of G^{\theta}_j, but of all the G_k^{\theta}, both in \theta and in |j-k|. We anticipate that in the end the average we shall choose is \sum_{k} 2^{-|j-k|} \int_{-1/3}^{1/3} G_k^{\theta}(x + \theta) \,d\theta, but we will get there step by step and for the moment it is best not to concentrate too much on this expression.

Before we proceed, I should point out that since {f} is supported in the unit interval we don't have to worry about its low frequencies. In particular, for j < 0 we will simply choose F_j = |\widetilde{\Delta}'_j f|, where \widetilde{\Delta}'_j is a smooth frequency projection much like \widetilde{\Delta}_j but adapted to a slightly larger frequency interval: say, for example, \widehat{\widetilde{\Delta}'_j f}(\xi) = \widehat{\Psi}(2^{-j} \xi) \widehat{f}(\xi) with \widehat{\Psi} a smooth function supported in \{ 1/4 \leq |\xi| \leq 8 \} and identically equal to 1 on \{ 1/2 \leq |\xi| \leq 4 \}. With this choice we have \widetilde{\Delta}_j \widetilde{\Delta}'_j f = \widetilde{\Delta}_j f, so that condition 1. of Lemma 1 is easily seen to be satisfied: indeed, |\widetilde{\Delta}_j f| = |\psi_j \ast \widetilde{\Delta}'_j f| \leq |\psi_j| \ast F_j \lesssim \phi_j \ast F_j, since |\psi_j| \lesssim \phi_j. As for condition 2., we claim that \|F_j\|_{L^1} = \| \widetilde{\Delta}'_j f\|_{L^1} \lesssim 2^{j} \|f\|_{L^1}, so that we have

\displaystyle \Big\|\Big(\sum_{j < 0} |F_j|^2 \Big)^{1/2}\Big\|_{L^1} \leq \sum_{j < 0} \|F_j\|_{L^1} \lesssim \|f\|_{L^1}

and since \|f\|_{L^1} \leq \|f\|_{L(\log L)^{1/2}} we will be done with the j < 0 case. To see the claim, simply observe that since \int f(y) \,dy = 0

\displaystyle  \widetilde{\Delta}'_j f(x) = \int f(y) \Psi_j(x-y) \,dy = \int f(y) \big[\Psi_j (x-y) - \Psi_j (x) \big] \,dy

and since |y|\leq 1 (by the compact support of {f} ) we also have by the Fundamental Theorem of Calculus

\displaystyle  |\Psi_j (x-y) - \Psi_j (x)| \leq  2^j \int_{0}^{1} |(\Psi')_j(x-t)| \,dt \lesssim_N 2^j \frac{2^j}{(1 + 2^j |x|)^N};

this last expression is integrable in {x} with mass \sim_N 2^j, and therefore by integrating in {x} we obtain the claim.
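In detail (a small bookkeeping step, spelled out for completeness): combining the last two displays we get, for every x,

\displaystyle |\widetilde{\Delta}'_j f(x)| \leq \int |f(y)| \, \big|\Psi_j(x-y) - \Psi_j(x)\big| \,dy \lesssim_N \|f\|_{L^1} \, 2^j \, \frac{2^j}{(1 + 2^j |x|)^N},

and since \int 2^j (1 + 2^j|x|)^{-N} \,dx \sim_N 1, integrating in x gives \|\widetilde{\Delta}'_j f\|_{L^1} \lesssim_N 2^j \|f\|_{L^1}, which is the claim.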

From now on we concentrate solely on j \geq 0 .
We can write, expanding g^\theta in the Haar basis of [0,1] and using linearity (the term corresponding to the average of g^\theta over [0,1] gives no contribution, since \int g^\theta = \int f = 0),

\displaystyle \widetilde{\Delta}_j f(x) = \widetilde{\Delta}_j g^\theta (x + \theta)= \sum_{k \in \mathbb{N}} \sum_{\substack{I \text{ dyadic}, \\ |I|=2^{-k}}} \langle g^{\theta}, h_I\rangle \, \widetilde{\Delta}_j h_I (x+\theta),

and therefore we can bound using Lemma 2

\displaystyle |\widetilde{\Delta}_j f(x)| = |\widetilde{\Delta}_j g^\theta (x + \theta)| \leq \sum_{k \in \mathbb{N}} \sum_{\substack{I \text{ dyadic}, \\ |I|=2^{-k}}} \Big[\int_I G_k^\theta \,dx\Big] |I|^{-1/2} |\widetilde{\Delta}_j h_I(x + \theta)|.

We want to produce good estimates for the factor {|I|^{-1/2} |\widetilde{\Delta}_j h_I(x + \theta)|}.
In the following I will replace {x + \theta} with {z} to save room.

1. Estimates for {|\widetilde{\Delta}_j h_I(z)|}

There are two scales involved in the expression \widetilde{\Delta}_j h_I : one is the scale of |I| , which is 2^{-k} , and the other is the scale of \widetilde{\Delta}_j , which is 2^{-j} . Accordingly, we distinguish two cases: either {k > j} or {k \leq j}.

1.1. Case \boxed{k > j}

In this case we have {|I| = 2^{-k} < 2^{-j}}, the latter being the scale of {\psi_j}. Since {\widetilde{\Delta}_j h_I(z) = h_I \ast \psi_j (z)} we write

\displaystyle \begin{aligned} |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| = & \frac{1}{|I|^{1/2}} \Big|\int h_I(y) \psi_j (z - y) \,dy\Big| \\ = & \frac{1}{|I|} \Big| \int_{I_{+}} [\psi_j(z - y) - \psi_j (z - y - |I|/2)]\,dy \Big| \\ = & \frac{1}{|I|} \Big| \int_{I_{+}} \int_{0}^{|I|/2} 2^j (\psi')_j(z - y - t) \,dt\,dy \Big| \\ \lesssim_N & \frac{2^j}{|I|} \int_{I_{+}} \int_{0}^{|I|/2} \frac{2^j}{(1 + 2^j |z - y -t|)^N} \,dt\,dy, \ \ \ \ \ \ (1) \end{aligned}

the latter by the fact that {\psi'} is a Schwartz function. We will want to replace the factor of (1 + 2^j |z - y -t|)^N in the denominator with a factor like (1 + 2^j \mathrm{dist}(z,\partial I))^N instead; the reason why we choose \mathrm{dist}(z,\partial I) rather than the simpler \mathrm{dist}(z, I) will become clear later, when we consider the case k \leq j: there we will want to take advantage of all the cancellation we can, and there will be cancellation coming from inside {I} as well.

To estimate the above efficiently we distinguish two further cases.

  1. \boxed{z \in 5I} : in this case, we have {|z-y-t| \leq |z-y| +|t| \lesssim |I|} and thus {2^j |z-y-t| \lesssim 2^j |I| = 2^{j-k} \lesssim 1}. This means that {(1 + 2^j |z-y-t|) \sim 1 \sim (1 + 2^j \mathrm{dist}(z, \partial I))}, and consequently by (1) we have

    \displaystyle \begin{aligned} |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim_N & \, \frac{2^j}{|I|} |I|^2 \frac{2^j}{(1 + 2^j \mathrm{dist}(z,\partial I))^N} \\ =_{N} & \, \underbrace{2^{j-k}}_{=2^{-|j-k|}} \frac{2^j}{(1 + 2^j \mathrm{dist}(z,\partial I))^N} \ \ \ \ \ \ (2) \end{aligned}

  2. \boxed{z \not\in 5I}: in this case we have \mathrm{dist}(z, \partial I) = |z-y_0| \geq 2|I| for some {y_0 \in \partial I}. Using this fact, together with {|y - y_0| \leq |I|} and {|t| \leq |I|/2}, we have

    \displaystyle |z-y-t| \geq \mathrm{dist}(z,\partial I) - |y-y_0| - |t| \geq \mathrm{dist}(z,\partial I) - \frac{3}{2} |I| \geq \frac{1}{4} \mathrm{dist}(z,\partial I),

    and therefore we can bound

    \displaystyle (1 + 2^j |z-y-t|)^{-N} \lesssim_N (1 + 2^j \mathrm{dist}(z,\partial I))^{-N},

    and in turn we have the same estimate (2) as before.

To summarise, we have shown that when {k > j} we have for any {N > 0}

\displaystyle \boxed{|I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim_N \,2^{-|j-k|} \frac{2^j}{(1 + 2^j \mathrm{dist}(z,\partial I))^N}. \ \ \ \ \ (3)}

1.2. Case \boxed{k \leq j}

In this case the above estimates produce a factor of {2^{j-k}}, which is now larger than {1}, so we need a different argument. We can use the decay of {\psi_j} and also the fact that {\int \psi_j = 0} (recall that \widehat{\psi} vanishes in a neighbourhood of the origin). We will also restrict {\theta} by imposing that it belong to a special set depending on {x,j}.

We distinguish again two further cases.

  1. \boxed{z \in I}: in this case {z \in I_{+}} or {z \in I_{-}}; say we are in the former one. We bound by triangle inequality

    \displaystyle |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \leq \frac{1}{|I|}\Big|\int_{I_{+}} \psi_j(z-y) \,dy\Big| + \frac{1}{|I|}\Big|\int_{I_{-}} \psi_j(z-y) \,dy\Big|.

    We have by the cancellation property of {\psi_j} that

    \displaystyle \begin{aligned} \frac{1}{|I|} \Big|\int_{I_{+}} \psi_j(z-y) \,dy\Big| = & \frac{1}{|I|} \Big|\int_{I_{+}} \psi_j(z-y) \,dy - \int_{\mathbb{R}} \psi_j (z-y) \,dy\Big| \\ = & \frac{1}{|I|} \Big|\int_{\mathbb{R} \backslash I_{+}} \psi_j (z-y) \,dy\Big| \\ \lesssim_{N'} & \frac{1}{|I|} \int_{\mathbb{R} \backslash I_{+}} \frac{2^j}{(1 + 2^j |z-y|)^{N'}} \,dy, \end{aligned}

    where we choose {N' > N}. Since {y \in \mathbb{R} \backslash I_{+}} now we have that {|z-y| \geq \mathrm{dist}(z,\partial I_{+})}, and thus the latter expression is

    \displaystyle \begin{aligned} \sim \frac{2^j}{|I|} \int_{\mathrm{dist}(z,\partial I_{+})}^{\infty} (1 + 2^j t)^{-N'} \,dt \leq & \frac{2^j}{|I|} \int_{\mathrm{dist}(z,\partial I_{+})}^{\infty} 2^{-jN'} t^{-N'} \,dt \\ \sim_{N'} & \frac{2^j}{|I|} 2^{-jN'} (\mathrm{dist}(z,\partial I_{+}))^{-N'} \mathrm{dist}(z,\partial I_{+}) \\ \lesssim & 2^{j} (2^j \mathrm{dist}(z,\partial I_{+}))^{-N'} \ \ \ \ \ \ (4) \end{aligned}


    (the latter inequality simply because {z \in I_{+}} and so {\mathrm{dist}(z,\partial I_{+})\leq |I|/2}).
    Our problem now is the following: this expression, as it stands, does not provide any a priori decay of the form 2^{-O(|j-k|)} (compare with (2) and (3) above). We need that decay because we will have to sum over all indices {k} to obtain {F_j}, and therefore we have to force it in, so to speak. One way to do so is to impose that \mathrm{dist}(z,\partial I_{+}) is never too small. How large do we need it to be? A simple calculation (carried out in a moment) shows that in order to have (2^j \mathrm{dist}(z,\partial I_{+}))^{-1} \lesssim 2^{-O(|k-j|)} it suffices to impose \mathrm{dist}(z,\partial I_{+}) \gtrsim 2^{-k} 2^{-\alpha |k-j|} for some {0 < \alpha < 1}. Recalling that z = x + \theta, this will have the effect of shaving off a little bit of the set [-1/3,1/3] of possible translations. Moreover, since we want this decay for every k \leq j, we will need to shave off several parts of the set of possible translations – so at some point we will have to make sure that we are not shaving off all of the possible translations! Fortunately, this will not be the case. If we impose the distance condition above, we obtain that the set of allowable translations is given by the following, up to parameters {A,\alpha} to be chosen later (with {A \ll 1} and {0 < \alpha < 1} in general):

    \displaystyle \boxed{ \begin{aligned} \Theta_{x,j} := \Big\{ \theta \in \Big[-\frac{1}{3},\frac{1}{3}\Big] \text{ s.t. } \mathrm{dist}(x+\theta, 2^{-k}\mathbb{Z}) & \geq A 2^{-k} 2^{-\alpha |j-k|} \\ & \text{ for all } 0\leq k\leq j+1 \Big\}. \end{aligned} }

    Indeed, the endpoints of any dyadic interval I with |I| = 2^{-k} belong to the lattice 2^{-k}\mathbb{Z}, and those of its halves I_{\pm} to the lattice 2^{-k-1}\mathbb{Z}; this is why we let k range up to j+1 in the definition above.
    To reiterate, if {\theta \in \Theta_{x,j}} we have

    \displaystyle \begin{aligned} 2^j \mathrm{dist}(z,\partial I_{+}) \geq & \; 2^j \mathrm{dist}(x+\theta, 2^{-k-1}\mathbb{Z}) \\ \geq & \; A 2^{j-k-1} 2^{-\alpha|j-k-1|} \\ \geq & \; (A 2^{-1-\alpha}) 2^{(1-\alpha)|j-k|}; \end{aligned}

    notice that since {\alpha < 1} the last expression is {\gtrsim_{A,\alpha} 1} and thus {2^j \mathrm{dist}(z,\partial I_{+}) \gtrsim_{A,\alpha} 1 + 2^j \mathrm{dist}(z,\partial I_{+})}. We split {N' = N + (N'-N)} and insert this lower bound into estimate (4) to obtain

    \displaystyle  \begin{aligned} 2^{j} (2^j \mathrm{dist}(z,\partial I_{+}))^{-N'} & = (2^j \mathrm{dist}(z,\partial I_{+}))^{-(N'-N)} \frac{2^j}{ (2^j \mathrm{dist}(z,\partial I_{+}))^{N} } \\ & \lesssim_{N,N',A,\alpha} 2^{-(1-\alpha)(N' - N) |j-k|} \frac{2^j}{ (1 + 2^j \mathrm{dist}(z,\partial I_{+}))^{N} }.  \end{aligned}

    For the remaining quantity {\frac{1}{|I|} \big|\int_{I_{-}} \psi_j(z-y) \,dy\big|} we do not need to appeal to the cancellation properties of {\psi_j} but rather we bound {|\psi_j (z-y)| \lesssim_{N'} 2^j (1+ 2^j |z-y|)^{-N'}} right away and notice that the resulting integral is bounded by the one just estimated since {I_{-} \subset \mathbb{R}\backslash I_{+}}. Thus we have shown that when {z \in I_{+}} and \theta \in \Theta_{x,j}

    \displaystyle |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim_{N,N',A,\alpha} 2^{-(1-\alpha)(N' - N) |j-k|} \frac{2^j}{ (1 + 2^j \mathrm{dist}(z,\partial I_{+}))^{N}}.

    Now notice that since {z \in I_{+}} we have {\mathrm{dist}(z, \partial I_{-}) \geq \mathrm{dist}(z, \partial I_{+})} and therefore

    \displaystyle \boxed{ \begin{aligned} |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim_{N,N',A,\alpha} & \;2^{-(1-\alpha)(N' - N) |j-k|} \\ & \times \Big[ \frac{2^j}{ (1 + 2^j \mathrm{dist}(z,\partial I_{+}))^{N}} + \frac{2^j}{ (1 + 2^j \mathrm{dist}(z,\partial I_{-}))^{N}} \Big] \ \ \ \ \ \ \ (5) \end{aligned} }


    is an equally good bound but has the advantage of being symmetric in {I_{+}, I_{-}}. The same bound clearly holds for {z \in I_{-}} and thus for all {z \in I}.

  2. \boxed{z \not\in I}: in this case we don't need to split the integration over {I}, and we simply bound

    \displaystyle  |I|^{-1/2}|\widetilde{\Delta}_j h_I (z)| \lesssim_{N'} \frac{1}{|I|} \int_{I} \frac{2^j}{(1 + 2^j |z-y|)^{N'}} \,dy.

    Since for {y \in I} it holds that {|z-y| \geq \mathrm{dist}(z, \partial I)}, the last expression is {\lesssim 2^j (1 + 2^j \mathrm{dist}(z, \partial I))^{-N'} \leq 2^j (2^j \mathrm{dist}(z, \partial I))^{-N'}} – compare this with (4). Since we have {\mathrm{dist}(z, \partial I) \geq \mathrm{dist}(z, 2^{-k}\mathbb{Z})}, the argument given in case 1. above repeats essentially unchanged, save for having {I} in place of {I_{+}}. Thus we have, when {z \not\in I},

    \displaystyle  |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim_{N,N',A,\alpha} 2^{-(1-\alpha)(N' - N) |j-k|} \frac{2^j}{ (1 + 2^j \mathrm{dist}(z,\partial I))^{N}}.

    In particular, since {z \not\in I} we have {\mathrm{dist}(z, \partial I) = \min \{ \mathrm{dist}(z,\partial I_{+}), \mathrm{dist}(z,\partial I_{-}) \} } and therefore even for {z \not\in I} the bound (5) continues to hold (and the right-hand sides are comparable as well).

To summarise, we have shown that when {k \leq j} and \theta \in \Theta_{x,j}, the bound (5) holds for any {N'>N>0}.

2. Estimating {|\widetilde{\Delta}_j f(x)|}

We now have sufficiently many estimates in order to treat {|\widetilde{\Delta}_j f(x)|}. As said before, it suffices to consider {\sum_{I \,:\,|I|=2^{-k}} \big[\int_I G_k^\theta \,dx \big] |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)|}. We let for convenience

\displaystyle \phi^M_j (x) := \frac{2^j}{(1 + 2^j |x|)^M}.

We will show the estimate

\displaystyle \Big[\int_I G_k^\theta \,dx \Big] |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim 2^{-\beta|j-k|} \int_{I} G_k^\theta(y) \phi^M_j (z-y) \,dy \ \ \ \ \ \ (\dagger_{\beta,M})


for some {\beta > 0} and some {M>1}. Indeed, summing both sides over all dyadic intervals I with |I| = 2^{-k} fixed will allow us to replace the RHS with a full convolution of G_k^{\theta} and \phi^M_j; on the LHS we will instead have the {k}-th term of the sum that we are using to bound {|\widetilde{\Delta}_j f(x)|} pointwise (see the beginning of the proof). We remark that if {\beta < \beta'} then {(\dagger_{\beta', M}) \Rightarrow (\dagger_{\beta, M})}, and similarly if {M < M'} then {(\dagger_{\beta, M'}) \Rightarrow (\dagger_{\beta, M})}.
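To spell out the summation step for completeness: since the dyadic intervals of length 2^{-k} are pairwise disjoint and G_k^\theta, \phi^M_j are non-negative, summing (\dagger_{\beta, M}) over all such intervals gives

\displaystyle \sum_{\substack{I \text{ dyadic}, \\ |I|=2^{-k}}} \Big[\int_I G_k^\theta \,dx \Big] |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim 2^{-\beta|j-k|} \sum_{\substack{I \text{ dyadic}, \\ |I|=2^{-k}}} \int_I G_k^\theta(y) \, \phi^M_j(z-y) \,dy \leq 2^{-\beta|j-k|} \, G_k^\theta \ast \phi^M_j(z),

as anticipated.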

We distinguish the two cases as before.

2.1. Case \boxed{k > j}

In this case there are hardly any problems. Indeed, using the bound (3) proven before for this case we have

\displaystyle \Big[\int_I G_k^\theta \,dx \Big] |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim_{N} 2^{-|j-k|} \int_{I} G_k^\theta (y) \frac{2^j}{(1 + 2^j \mathrm{dist}(z, \partial I))^N } \,dy,

and we want to show that the second factor in the integral is bounded pointwise by \phi^M_j(z-y) for some {M} .

Again we distinguish the two further cases as before:

  1. \boxed{z \in 5I}: since {k > j} we have {2^j |I| \leq 1}; therefore we also have {2^j \mathrm{dist}(z,\partial I) \lesssim 2^j |I| \leq 1} and similarly {2^j |z-y| \lesssim 2^j |I| \leq 1} for all {y \in I}. This means that {(1 + 2^j \mathrm{dist}(z,\partial I))^{-N} \sim_N 1 \sim_N (1 + 2^j |z-y|)^{-N}}, and therefore we have when {k > j, z \in 5I}

    \displaystyle  \Big[\int_I G_k^\theta \,dx \Big] |I|^{-1/2} |\widetilde{\Delta}_j h_I(z)| \lesssim_{N} 2^{-|j-k|} \int_{I} G_k^\theta (y)\phi^N_j(z-y) \,dy;

    that is, when {k > j, z \in 5I} estimate {(\dagger_{1,N})} holds.

  2. \boxed{z \not\in 5I}: in this case we have {\mathrm{dist}(z,\partial I) \geq 2 |I|} and therefore for some {y_0 \in \partial I}

    \displaystyle  \mathrm{dist}(z, \partial I) \geq |z-y| -|y-y_0| \geq |z-y| - |I| \geq |z-y| - \frac{1}{2} \mathrm{dist}(z, \partial I)

    and hence {\mathrm{dist}(z, \partial I) \gtrsim |z-y|}. It thus follows right away that estimate {(\dagger_{1,N})} holds in this case as well.

2.2. Case \boxed{k \leq j}

We will consider only the contribution coming from the term of (5) with {\mathrm{dist}(z, \partial I_{+})} in the denominator, since the other term can be treated symmetrically. We have to bound

\displaystyle  2^{-(1-\alpha)(N' - N) |j-k|} \int_{I} G_{k}^{\theta}(y) \frac{2^j}{ (1 + 2^j \mathrm{dist}(z,\partial I_{+}))^{N}} \,dy.

We distinguish two further cases:

  1. \boxed{z \in 5I_{+}}: in this case we have for all {y \in I} that {|z-y| \lesssim |I|}. Thus we have (by the hypothesis that {\theta} in {z = x + \theta} belongs to {\Theta_{x,j}})

    \displaystyle \mathrm{dist}(z,\partial I_{+}) \geq \mathrm{dist}(z, 2^{-k-1} \mathbb{Z}) \geq A 2^{-k-1} 2^{-\alpha |j - k - 1|} \gtrsim_{A,\alpha} 2^{-\alpha|j-k|} |I| \gtrsim 2^{-\alpha|j-k|} |z-y|,

    and therefore

    \displaystyle  \frac{2^j}{ (1 + 2^j \mathrm{dist}(z,\partial I_{+}))^{N}} \lesssim_{N, A, \alpha} 2^{\alpha N |j-k|} \frac{2^j}{ (1 + 2^j |z-y|)^{N}}.

    [Of course this is not a good bound by itself, because we are losing a large power of 2^{|j-k|} here; however, the decay previously introduced is more than enough to compensate for this loss.]
    Using this bound on the quantity we have to estimate, we see that it is controlled by

    \displaystyle \begin{aligned} \lesssim_{N, N', A, \alpha} & 2^{-(1-\alpha)(N' - N) |j-k|} 2^{\alpha N |j-k|} \\ & \times \int_{I} G_{k}^{\theta}(y) \frac{2^j}{ (1 + 2^j |z-y|)^{N}} \,dy \\ = & 2^{-[(1-\alpha)N' - N]|j-k|} \int_{I} G_{k}^{\theta}(y) \phi^{N}_{j}(z-y) \,dy, \end{aligned}

    and therefore {(\dagger_{(1-\alpha)N' - N, N})} holds. Notice that by taking {N'} sufficiently larger than {N} we can make sure that the exponent {-[(1-\alpha)N' - N]|j-k|} is always negative.

  2. \boxed{z \not\in 5I_{+}}: in this case we have more simply that {\mathrm{dist}(z,\partial I_{+}) \geq 2|I|}, and therefore (for some {y_0 \in \partial I_{+}})

    \displaystyle \begin{aligned} \mathrm{dist}(z,\partial I_{+}) = & |z-y_0| \geq |z-y| - |y-y_0| \\ \geq &|z-y| - |I| \geq |z-y| - \frac{1}{2}\mathrm{dist}(z,\partial I_{+}) , \end{aligned}

    which means {\mathrm{dist}(z,\partial I_{+}) \gtrsim |z-y|}. This gives immediately

    \displaystyle \begin{aligned} 2^{-(1-\alpha)(N' - N) |j-k|} & \int_{I} G_{k}^{\theta}(y) \frac{2^j}{ (1 + 2^j \mathrm{dist}(z,\partial I_{+}))^{N}} \,dy \\ \lesssim & 2^{-(1-\alpha)(N' - N) |j-k|} \int_{I} G_{k}^{\theta}(y) \phi^{N}_{j}(z-y) \,dy, \end{aligned}

    and therefore {(\dagger_{(1-\alpha)(N' - N), N})} holds.

It is now clear that we can set for example {\alpha = 1/2} and {N' = 4N} and we will have

\displaystyle \begin{aligned} (1 - \alpha)(N' - N) = & 3N /2, \\ (1 - \alpha)N' - N = & N, \end{aligned}

both exponents being at least {1} (provided {N \geq 1}), and therefore we have shown that for any {N \geq 1} the estimate {(\dagger_{1,N})} is always true (with an appropriately large implicit constant).

2.3. Conclusion of the estimation

The conclusion of the estimation of {|\widetilde{\Delta}_j f(x)|} is therefore that for any {\theta \in \Theta_{x,j}} we have (with constant uniform in such {\theta})

\displaystyle |\widetilde{\Delta}_j f(x)| \lesssim_{N,A} \sum_{k \in \mathbb{N}} 2^{-|j-k|} G_{k}^{\theta} \ast \phi^{N}_{j}(x + \theta). \ \ \ \ \ \ \ (6)

Observe now that in defining \Theta_{x,j} we have not shaved off too much of the set of translations. We actually have {|\Theta_{x,j}| \sim 1}, because

\displaystyle \begin{aligned} \big|[-1/3, 1/3] \backslash \Theta_{x,j}\big| \leq & \sum_{k =0}^{j+1} 2 A 2^{-k} 2^{-\alpha |j-k|} \cdot \#\big((x + [-1/3, 1/3]) \cap 2^{-k} \mathbb{Z}\big) \\ \leq & 2A \Big(1 + \sum_{\ell \geq 0} 2^{-\alpha \ell}\Big) = 2A \Big(1 + \frac{1}{1 - 2^{-\alpha}}\Big); \end{aligned}

with {\alpha = 1/2} as above it suffices to take {A = \frac{1}{100}} to have that {|\Theta_{x,j}| \geq 1/2}.
We can now average the RHS of (6) over \theta \in \Theta_{x,j} (since the estimate is uniform in this parameter) and obtain

\displaystyle \begin{aligned} |\widetilde{\Delta}_j f(x)| \lesssim_{N} & \sum_{k \in \mathbb{N}} 2^{-|j-k|} \frac{1}{|\Theta_{x,j}|} \int_{\Theta_{x,j}} G_{k}^{\theta} \ast \phi^{N}_{j}(x + \theta) \, d\theta \\ = & \sum_{k \in \mathbb{N}} 2^{-|j-k|} \int \Big[ \frac{1}{|\Theta_{x,j}|} \int_{\Theta_{x,j}} G_{k}^{\theta} (y + \theta) \,d\theta \Big] \phi^{N}_{j}(x-y) \,dy. \end{aligned}

Since the {G_{k}^{\theta}} are non-negative and {|\Theta_{x,j}| \sim 1} we have that {|\Theta_{x,j}|^{-1} \int_{\Theta_{x,j}} G_{k}^{\theta} (y + \theta) \,d\theta \lesssim \int_{-1/3}^{1/3} G_{k}^{\theta} (y + \theta) \,d\theta}, and therefore if we define

\displaystyle \boxed{ F_j (y) := \sum_{k \in \mathbb{N}} 2^{-|j-k|} \int_{-1/3}^{1/3} G_{k}^{\theta}(y + \theta) \,d\theta }

then these are the desired functions (for j \geq 0) for the smooth-frequency version of the Tao-Wright lemma, because the above has just shown that

\displaystyle |\widetilde{\Delta}_j f(x)| \lesssim_N F_j \ast \phi^N_j (x),

and \phi^N_j \leq \phi_j pointwise for any N \geq 3/2, so that condition 1. of Lemma 1 holds.

3. Conclusion of the proof that Lemma 2 implies Lemma 1

We have so far constructed functions F_j that satisfy condition 1. of Lemma 1. It remains to verify that condition 2. of said lemma is also satisfied. This will be a simple consequence of the Cauchy-Schwarz and Minkowski inequalities.

Observe that by the Cauchy-Schwarz inequality

\displaystyle \begin{aligned} F_j (y) \leq & \Big(\sum_{k \in \mathbb{N}} 2^{-|j-k|}\Big)^{1/2} \Big(\sum_{k \in \mathbb{N}} 2^{-|j-k|} \Big[\int_{-1/3}^{1/3} G_{k}^{\theta}(y + \theta) \,d\theta \Big]^2 \Big)^{1/2} \\ \lesssim & \Big(\sum_{k \in \mathbb{N}} 2^{-|j-k|} \Big[\int_{-1/3}^{1/3} G_{k}^{\theta}(y + \theta) \,d\theta \Big]^2 \Big)^{1/2}. \end{aligned}
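Here we have used the elementary bound, valid for every j \geq 0,

\displaystyle \sum_{k \in \mathbb{N}} 2^{-|j-k|} \leq \sum_{m \in \mathbb{Z}} 2^{-|m|} = 3.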

Therefore we have

\displaystyle \begin{aligned} \Big(\sum_{j \geq 0} |F_j (y)|^2 \Big)^{1/2} \lesssim & \Big( \sum_{j \geq 0} \sum_{k \in \mathbb{N}} 2^{-|j-k|} \Big[\int_{-1/3}^{1/3} G_{k}^{\theta}(y + \theta) \,d\theta \Big]^2 \Big)^{1/2} \\ \lesssim & \Big( \sum_{k \in \mathbb{N}} \Big[\int_{-1/3}^{1/3} G_{k}^{\theta}(y + \theta) \,d\theta \Big]^2 \Big)^{1/2}. \end{aligned}
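The last inequality is simply an interchange of the order of summation: abbreviating a_k(y) := \int_{-1/3}^{1/3} G_{k}^{\theta}(y + \theta) \,d\theta, we have

\displaystyle \sum_{j \geq 0} \sum_{k \in \mathbb{N}} 2^{-|j-k|} a_k(y)^2 = \sum_{k \in \mathbb{N}} a_k(y)^2 \sum_{j \geq 0} 2^{-|j-k|} \leq 3 \sum_{k \in \mathbb{N}} a_k(y)^2.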

Integrating in dy the bound just obtained for \Big(\sum_{j \geq 0} |F_j (y)|^2 \Big)^{1/2}, and using Minkowski's inequality in the \ell^2 summation and the d\theta integral on the RHS, we have

\displaystyle \begin{aligned} \int \Big(\sum_{j \geq 0} |F_j (y)|^2 \Big)^{1/2} dy \lesssim & \int \Big( \sum_{k \in \mathbb{N}} \Big[\int_{-1/3}^{1/3} G_{k}^{\theta}(y + \theta) \,d\theta \Big]^2 \Big)^{1/2} \,dy \\ \leq & \int \int_{-1/3}^{1/3} \Big( \sum_{k \in \mathbb{N}} |G_{k}^{\theta}(y + \theta)|^2 \Big)^{1/2} \,d\theta\, dy \\ = & \int_{-1/3}^{1/3} \Big[ \int  \Big( \sum_{k \in \mathbb{N}} |G_{k}^{\theta}(y + \theta)|^2 \Big)^{1/2}\, dy \Big] \,d\theta \\ = & \int_{-1/3}^{1/3} \Big[ \int  \Big( \sum_{k \in \mathbb{N}} |G_{k}^{\theta}(y)|^2 \Big)^{1/2}\, dy \Big] \,d\theta, \end{aligned}

with the latter equalities by Fubini and translation invariance. By hypothesis (that is, by Lemma 2) the quantity in square brackets is bounded by \|g^\theta\|_{L(\log L)^{1/2}} = \|f\|_{L(\log L)^{1/2}}, and therefore we have 2. of Lemma 1 and the proof is concluded. \square

In the next post we will finally prove Lemma 2.
