Quadratically modulated Bilinear Hilbert transform

Here is a simple but surprising fact.

Recall that the Hilbert transform Hf(x) := p.v. \int f(x-t) \frac{dt}{t} is L^p \to L^p bounded for all {1 < p < \infty} (and even L^1 \to L^{1,\infty} bounded, of course). The quadratically modulated Hilbert transform is the operator

\displaystyle H_q f(x) := p.v. \int f(x-t) e^{-i t^2} \frac{dt}{t};

this operator is also known to be L^p \to L^p bounded for all {1 < p < \infty}, but the proof is not a corollary of the one for H: it is a different beast, requiring oscillatory integral techniques and almost orthogonality, and is due to Ricci and Stein (interestingly though, H_q is also L^1 \to L^{1,\infty} bounded, and this can indeed be obtained by a clever adaptation of Calderón-Zygmund theory due to Chanillo and Christ).

The bilinear Hilbert transform instead is the operator

\displaystyle BHT(f,g)(x) := p.v. \int f(x-t)g(x+t)\frac{dt}{t}

and it is known, thanks to the foundational work of Lacey and Thiele, to be L^p \times L^q \to L^r bounded at least in the range given by p,q>1, r>2/3, with the exponents satisfying the Hölder condition 1/p + 1/q = 1/r (this condition is necessary for boundedness, due to the scaling invariance of the operator). This operator has an interesting modulation invariance (corresponding to the fact that its bilinear multiplier is \mathrm{sgn}(\xi - \eta), which is invariant with respect to translations along the diagonal): indeed, if \mathrm{Mod}_{\theta} denotes the modulation operator \mathrm{Mod}_{\theta} f(x) := e^{- i \theta x} f(x), we have

\displaystyle BHT(\mathrm{Mod}_{\theta}f,\mathrm{Mod}_{\theta}g) = \mathrm{Mod}_{2 \theta} BHT(f,g);

it is this fact that suggests one should use time-frequency analysis to deal with this operator.
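For the record, the invariance is just a one-line computation with the definitions above: the two linear phases combine into a single phase that no longer depends on t and can be pulled out of the integral,

\displaystyle BHT(\mathrm{Mod}_{\theta}f,\mathrm{Mod}_{\theta}g)(x) = p.v. \int e^{-i\theta(x-t)}f(x-t)\, e^{-i\theta(x+t)}g(x+t)\frac{dt}{t} = e^{-2i\theta x}\, BHT(f,g)(x) = \big[\mathrm{Mod}_{2\theta} BHT(f,g)\big](x).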
Now, analogously to the linear case, one can consider the quadratically modulated bilinear Hilbert transform, given simply by

\displaystyle BHT_q(f,g)(x) := p.v. \int f(x-t)g(x+t) e^{-i t^2} \frac{dt}{t}.

One might be tempted to think, by analogy, that this operator is harder to bound than BHT – at least, I would naively think so at first sight. However, due to the particular structure of the bilinear Hilbert transform, the boundedness of BHT_q is a trivial corollary of that of BHT! Indeed, this is due to the elementary polynomial identity

\displaystyle (x+t)^2 + (x-t)^2 = 2x^2 + 2t^2;

thus if \mathrm{QMod}_{\theta} denotes the quadratic modulation operator \mathrm{QMod}_{\theta}f(x) = e^{-i \theta x^2} f(x) we have

\displaystyle \begin{aligned} BHT_q(f,g)(x) = & \int f(x-t)g(x+t) e^{-it^2} \frac{dt}{t} \\ = & \int f(x-t)g(x+t) e^{ix^2}e^{-i(x+t)^2/2}e^{-i(x-t)^2/2} \frac{dt}{t} \\ = & e^{ix^2}\int e^{-i(x-t)^2/2}f(x-t)e^{-i(x+t)^2/2}g(x+t)  \frac{dt}{t} \\ = & \big[ \mathrm{QMod}_{-1} BHT( \mathrm{QMod}_{1/2} f, \mathrm{QMod}_{1/2} g )\big](x). \end{aligned}
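Not that the bookkeeping above needs independent verification, but here is a tiny sympy sanity check of the polynomial identity and of the resulting factorization of the quadratic phase (sympy assumed available; this is just a sketch):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# the polynomial identity behind the trick
assert sp.expand((x + t)**2 + (x - t)**2) == 2*x**2 + 2*t**2

# exponents in the factorization e^{-it^2} = e^{ix^2} e^{-i(x+t)^2/2} e^{-i(x-t)^2/2}
lhs_exponent = -t**2
rhs_exponent = x**2 - (x + t)**2 / sp.Integer(2) - (x - t)**2 / sp.Integer(2)
assert sp.expand(lhs_exponent - rhs_exponent) == 0

print("exponent bookkeeping checks out")
```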

Of course this trick is limited to quadratic modulations, so for example already the cubic modulation of BHT

\displaystyle p.v. \int f(x-t)g(x+t) e^{-i t^3} \frac{dt}{t}

is non-trivial to bound (but the boundedness of the cubic modulation of the trilinear Hilbert transform would again be a trivial consequence of the boundedness of the trilinear Hilbert transform itself… too bad we don’t know if it is bounded at all!). Polynomial modulations of bilinear singular integrals (thus a bilinear analogue of the Ricci-Stein work) have been shown to be bounded by Christ, Li, Tao and Thiele in “On multilinear oscillatory integrals, nonsingular and singular”.

UPDATE: An interesting synchronicity just happened: today Dong, Maldague and Villano uploaded to the arXiv their paper “Special cases of power decay in multilinear oscillatory integrals”, in which they extend the work of Christ, Li, Tao and Thiele to some special cases that had been left out. Maybe I should check my email for the arXiv digest before posting next time.

Fine structure of some classical affine-invariant inequalities and near-extremizers (account of a talk by Michael Christ)

Pdf version here: link.

I’m currently in Bonn, as mentioned in the previous post, participating in the Trimester Program organized by the Hausdorff Institute of Mathematics – although my time here is almost over. It has been a very pleasant experience: Bonn is lovely, the studio flat they got me is incredibly nice, Germany won the World Cup (nice game, btw) and the talks were interesting. The second week was pretty busy, since it had all the main talks plus some unexpected talks in number theory which I attended. The week before that was more relaxed, but I followed a couple of talks then as well. Here I want to report on Christ’s talk about his work of the last few years, because I found it very interesting and because I had the opportunity to follow a second talk, more specifically about the Hausdorff-Young inequality, which helped me clarify some details I was confused about. If you get a chance, go to his talks, they’re really good.

What follows is an account of Christ’s talks – there are probably countless such accounts out there, but here’s another one. This is by no means original work: it stays very close to the talks themselves, and I’m writing it only as a way to understand the material better. I’ll stick to Christ’s notation too. Also, I’m afraid the bibliography won’t be very complete, but I have included his papers, and you can make your way to the other references from there.

1. Four classical inequalities and their extremizers

Prof. Christ introduced four famous apparently unrelated inequalities. These are

  • the Hausdorff-Young inequality: for all functions {f \in L^p (\mathbb{R}^d)}, with {1\leq p \leq 2},

    \displaystyle \boxed{\|\widehat{f}\|_{L^{p'}}\leq \|f\|_{L^p};} \ \ \ \ \ \ \ \ \ \ \text{(H-Y)}

  • the Young inequality for convolution: if {1+\frac{1}{q_3}=\frac{1}{q_1}+\frac{1}{q_2}} then

    \displaystyle \|f \ast g\|_{L^{q_3}} \leq \|f\|_{L^{q_1}}\|g\|_{L^{q_2}};

    for convenience, he put it in trilinear form

    \displaystyle \boxed{ |\left\langle f\ast g, h \right\rangle|\leq \|f\|_{L^{p_1}}\|g\|_{L^{p_2}}\|h\|_{L^{p_3}}; } \ \ \ \ \ \ \ \ \ \ \text{(Y)}

    notice that the exponents satisfy {\frac{1}{p_1}+\frac{1}{p_2}+\frac{1}{p_3}=2} (indeed {q_1=p_1}, and similarly for index 2, but {p_3 = q'_3}); a quick numerical sanity check of the discrete analogue of this inequality is sketched right after this list;

  • the Brunn-Minkowski inequality: for any two measurable sets {A,B \subset \mathbb{R}^d} of finite measure it is

    \displaystyle \boxed{ |A+B|^{1/d} \geq |A|^{1/d} + |B|^{1/d}; } \ \ \ \ \ \ \ \ \ \ \text{(B-M)}

  • the Riesz-Sobolev inequality: this is a rearrangement inequality, of the form

    \displaystyle \boxed{ \left\langle \chi_A \ast \chi_B, \chi_C \right\rangle \leq\left\langle \chi_{A^\ast} \ast \chi_{B^\ast}, \chi_{C^\ast} \right\rangle,} \ \ \ \ \ \ \ \ \ \ \text{(R-S)}

    where {A,B,C} are measurable sets and, given a set {E}, the notation {E^\ast} stands for the symmetrized set, i.e. the ball {B(0, c_d |E|^{1/d})}, where {c_d} is a constant chosen so that {|E|=|E^\ast|}: it’s the ball centered at the origin with the same volume as {E}.
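This is of course not a proof of anything, but here is the quick numerical sanity check of (Y) promised above, in its discrete incarnation on \mathbb{Z} with counting measure (where the same inequality holds with constant 1); numpy is assumed available, and the exponents and test data are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_norm(a, p):
    """The l^p norm of a finite sequence."""
    return np.sum(np.abs(a) ** p) ** (1.0 / p)

# exponents satisfying 1 + 1/q3 = 1/q1 + 1/q2 (here q1 = q2 = 4/3, q3 = 2)
q1, q2, q3 = 4.0 / 3.0, 4.0 / 3.0, 2.0
assert abs(1 + 1 / q3 - (1 / q1 + 1 / q2)) < 1e-12

for _ in range(1000):
    f = rng.random(rng.integers(1, 30))          # random nonnegative sequences
    g = rng.random(rng.integers(1, 30))
    lhs = lp_norm(np.convolve(f, g), q3)         # ||f * g||_{q3}
    rhs = lp_norm(f, q1) * lp_norm(g, q2)        # ||f||_{q1} ||g||_{q2}
    assert lhs <= rhs * (1 + 1e-12)

print("discrete Young's inequality held on all random samples")
```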

These inequalities share a large group of symmetries: indeed, they are all invariant w.r.t. the group of invertible affine transformations (which includes dilations and translations) – an uncommon feature. Moreover, for all of them the extremizers exist and have been characterized in the past. A natural question then arises:

Is it true that if {f} (or {E}, or {\chi_E} where appropriate) is close to realizing the equality, then {f} must also be close (in an appropriate sense) to an extremizer of the inequality?

Another way to put it is to think of these questions as being about the stability of the extremizers, and that’s why they are referred to as the fine structure of the inequalities. If proving the inequality is the first level of understanding it, answering the above question is the second level. As an example, answering the above question for (H-Y) led to a sharpened inequality. Christ’s work was motivated by the fact that nobody seemed to have addressed the question in the literature before, despite it being a very natural one to ask.


Ptolemaics meetings 4 & 5 & 6; pt I

These last ones have been quite interesting meetings, and I’m happy about how the whole thing is turning out. Sadly I’m very slow at typing and working out the ideas, so I have to cover three different meetings in one post. Since the notes are getting incredibly long, I’ll have to split them into at least two parts. I include the pdf version, in case it makes it any easier to read.

ptolemaics meeting 4 & 5 & 6 pt I

Let me finally get into the time-frequency analysis of the Walsh phase plane. I won’t include many proofs, as they are already well written in Hytönen’s notes (see previous post). My main interest here is the heuristic interpretation of them (disclaimer: you might think I’m bullshitting you at a certain point, but I’m probably not). Ideally, it would be very good to be able to track back the train of thought that went into Fefferman’s and Lacey-Thiele’s proofs.

Sorry if the pictures are shit; I haven’t learned how to draw them properly in LaTeX yet.

1. Brush up

Recall we have Walsh series for functions {f \in L^2(0,1)} defined by

\displaystyle W_N f(x) = \sum_{n=0}^{N}{\left\langle f,w_n\right\rangle w_n(x)},

the (Walsh-)Carleson operator here is thus

\displaystyle \mathcal{C}f(x) = \sup_{N\in \mathbb{N}}{|W_N f(x)|},

and in order to prove that {W_N f(x) \rightarrow f(x)} a.e. as {N\rightarrow +\infty} one can prove that

\displaystyle \|\mathcal{C}f\|_{L^{2,\infty}(0,1)} \lesssim \|f\|_{L^2(0,1)}.

There’s a general remark to be made at this point: the last inequality is equivalent to having

\displaystyle \left|\left\langle\mathcal{C}f, \chi_E\right\rangle\right| = \left|\int_{E}{\mathcal{C}f}\,dx\right| \lesssim |E|^{1/2}\|f\|_{L^2(0,1)}

for every measurable set {E} (of finite measure). Indeed, one direction follows from Hölder’s inequality in Lorentz spaces, since {\|\chi_E\|_{L^{2,1}} \sim |E|^{1/2}}; for the converse, it suffices to test the condition on the sets {E = \{\mathcal{C}f > \lambda\}}.
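As a toy illustration (not taken from the notes), here is a small numpy sketch of the Walsh-Paley functions {w_n} and of the partial sums {W_N f} on a dyadic grid. The indexing convention below (Rademacher functions r_k(x) = (-1)^{\lfloor 2^{k+1} x \rfloor} and the Walsh-Paley ordering) is an assumption of mine and may differ from the one used in Hytönen’s notes.

```python
import numpy as np

def walsh(n, x):
    """Walsh-Paley function w_n evaluated at points x in [0, 1)."""
    vals = np.ones_like(x)
    k = 0
    while n >> k:
        if (n >> k) & 1:
            # k-th Rademacher function: r_k(x) = (-1)^{floor(2^{k+1} x)}
            r_k = 1 - 2 * (np.floor(2 ** (k + 1) * x).astype(int) % 2)
            vals = vals * r_k
        k += 1
    return vals

K = 10                                   # dyadic resolution 2^{-K}
x = (np.arange(2 ** K) + 0.5) / 2 ** K   # midpoints of the dyadic intervals
f = x * (1 - x)                          # a test function on (0, 1)

def W(N, f, x):
    """Partial Walsh sum W_N f = sum_{n <= N} <f, w_n> w_n (midpoint-rule inner products)."""
    out = np.zeros_like(x)
    for n in range(N + 1):
        w_n = walsh(n, x)
        out += np.mean(f * w_n) * w_n    # <f, w_n> approximated on the dyadic grid
    return out

for N in (3, 15, 63, 255):
    print(N, np.max(np.abs(W(N, f, x) - f)))   # the error on the grid decreases with N
```

For N of the form 2^k - 1 the partial sum W_N f is just the conditional expectation of f on the dyadic intervals of length 2^{-k}, which is why the printed errors shrink as N grows.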
Continue reading

Ptolemaics meetings 2 & 3

I’m very far behind on this; I’ve got to admit I’m busier than last year and haven’t completely coped with it yet. Luckily we’re progressing slowly, in order to understand things better, and that gives me the opportunity to merge together the blog posts for meetings 2 & 3.

As I said in the previous post, the goal for now is to understand the proof of the {L^2\rightarrow L^{2,\infty}} boundedness of Carleson’s operator (through time-frequency analysis). As an introduction to the real thing we’ve started from the simpler case of the Walsh transform, or Walsh series, or Walsh phase plane, whatever you want to call it. It’s easier because all the nasty technicalities disappear while the ideas needed are already in there; that’s why it’s propaedeutic. We’re following Hytönen’s notes, as suggested by Tuomas (you can find them here: http://wiki.helsinki.fi/pages/viewpage.action?pageId=79564963). An alternative is Tao’s lecture notes (lecture 5 in particular) for the course Math254A W’01 (http://www.math.ucla.edu/~tao/254a.1.01w/), which are quite nice – as all of his stuff is. The main differences are that Hytönen proves every single statement and deals with the Walsh series (the analogue of Fourier series), while Tao deals with the Walsh transform (the analogue of the Fourier transform). Also, Hytönen then goes on to prove the full Euclidean case, while Tao doesn’t.

The Walsh operators are best described as operators on the real line endowed with a different field structure. One works with the field of formal Laurent series {\mathbb{Z}_2((X^{-1}))}, i.e. series with coefficients in {\mathbb{Z}_2} and only finitely many positive powers of {X}, which can be identified with the (binary expansions of the) nonnegative reals via

\displaystyle a_N \cdots a_0 . a_{-1} a_{-2} \cdots \equiv a_N X^N + \ldots + a_1 X + a_0 + \frac{a_{-1}}{X} + \frac{a_{-2}}{X^2} + \ldots .
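Just to make the “different field structure” a bit more concrete: addition in this field is coefficientwise addition mod 2, which under the identification above becomes digitwise addition of binary expansions without carrying. Here is a tiny Python sketch of that (the encoding through a fixed number of fractional bits is my own simplification, and it only handles dyadic rationals):

```python
# Digitwise mod-2 addition ("no carries") of binary expansions, i.e. addition in
# the Laurent series model above. Purely illustrative: dyadic rationals only.
FRAC_BITS = 16

def to_int(x):
    """Encode a dyadic rational with at most FRAC_BITS fractional digits as an integer."""
    n = round(x * 2 ** FRAC_BITS)
    assert abs(n - x * 2 ** FRAC_BITS) < 1e-9, "not a dyadic rational at this resolution"
    return n

def walsh_add(x, y):
    """Addition of binary expansions without carrying = bitwise XOR of the digits."""
    return (to_int(x) ^ to_int(y)) / 2 ** FRAC_BITS

# 0.75 = 0.11_2 and 0.625 = 0.101_2; their digitwise mod-2 sum is 0.011_2 = 0.375
print(walsh_add(0.75, 0.625))   # -> 0.375
print(walsh_add(0.5, 0.5))      # -> 0.0  (every element is its own additive inverse)
```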


Ptolemaics meeting #1

Together with some other PG students in the Harmonic Analysis working group, we’ve decided (it was Kevin’s idea originally) to set up a weekly meeting to learn about topics in harmonic analysis we don’t get to see otherwise (it works quite well as an excuse to drink beer, too). The topic we settled on arose pretty much by itself: it turned out that basically everybody was already interested in time-frequency analysis on their own, either through Carleson’s theorem or some other related stuff. So we decided to learn about time-frequency analysis.

Last Tuesday we had our first meeting: it was mainly aimed at discussing the arrangements to be made and what to read before the next meeting, but we sketched some motivational introduction (it was quite improvised, I’m afraid); see below. Also, it was Odysseas who came up with the name. I think it’s quite brilliant: Ptolemy was the first to introduce the systematic use of epicycles in astronomy, and – as the science historian Giovanni Schiaparelli noticed – epicycles were nothing but the first historical appearance of Fourier series. That’s why they offered such accurate predictions even though the theory was wrong: by adding a suitable number of terms you can describe orbits to any degree of precision. Thus, from Carleson’s result you can go all the way back to Ptolemy: therefore, Ptolemaics. Odysseas further added that Ptolemy’s first name was Claudius, like the Roman emperor who first began the effective conquest of Britain; but that’s another story.

I will incorporate below a post I was writing for this blog about convergence of Fourier series, so this will end up being quite long. Sorry about that; the next posts will probably be way shorter.

1. Fourier series trivia

First, some Fourier series trivia to brush up.

One wishes to consider approximations of functions (periodic with period 1) by means of trigonometric polynomials

\displaystyle \sum_{n=0}^{N}{\left(a_n \cos{2\pi n x} + b_n \sin{2\pi n x}\right)},

or, with a better notation,

\displaystyle \sum_{n=-N}^{N}{c_n e^{2\pi i n x}}.
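Since the whole point will be whether these partial sums converge back to the function, here is a small numpy sketch that computes the symmetric partial sums S_N f(x) = \sum_{|n|\le N} c_n e^{2\pi i n x} for a 1-periodic sawtooth and shows them converging at a point of continuity. The quadrature used for the coefficients and the test function are my own arbitrary choices.

```python
import numpy as np

def partial_fourier_sum(f, N, x, M=2 ** 12):
    """S_N f(x) = sum_{|n| <= N} c_n e^{2 pi i n x}, with the coefficients c_n
    approximated by an M-point rule on one period of the 1-periodic function f."""
    grid = np.arange(M) / M
    samples = f(grid)
    n = np.arange(-N, N + 1)
    # c_n ~ (1/M) * sum_j f(j/M) e^{-2 pi i n j / M}
    c = samples @ np.exp(-2j * np.pi * np.outer(grid, n)) / M
    return np.real(c @ np.exp(2j * np.pi * np.outer(n, x)))

f = lambda x: x - np.floor(x) - 0.5            # 1-periodic sawtooth, f(0.3) = -0.2
x = np.array([0.3])
for N in (4, 16, 64, 256):
    print(N, partial_fourier_sum(f, N, x)[0])  # approaches -0.2 as N grows
```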
