Oscillatory integrals I: single-variable phases

You might remember that I contributed some lecture notes on Littlewood-Paley theory to a masterclass, which I then turned into a series of three posts (I, II, III). I have also contributed lecture notes on some basic theory of oscillatory integrals, and I am going to do the same and share them here as a blog post in two parts. The presentation largely follows the one in Stein’s “Harmonic Analysis: Real-variable methods, orthogonality, and oscillatory integrals”, with input from Stein and Shakarchi’s “Functional Analysis: Introduction to Further Topics in Analysis”, from some lecture notes by Terry Tao for his 247B course, from a very interesting paper by Carbery, Christ and Wright, and from a number of other places that I would now have trouble tracking down.
In this first part we will discuss the theory of oscillatory integrals when the phase is a function of a single variable. There are extensive exercises included, which are to be considered part of the lecture notes; indeed, in order to keep the actual notes short and to engage the reader, I have turned many things into exercises. If you are interested in learning about oscillatory integrals, you should not skip them.
In the next post, we will study instead the case where the phases are functions of several variables.

0. Introduction

A large part of modern harmonic analysis is concerned with understanding cancellation phenomena happening between different contributions to a sum or integral. Loosely speaking, we want to know how much better we can do than if we had taken absolute values everywhere. A prototypical example of this is the oscillatory integral of the form

\displaystyle  \int e^{i \phi_\lambda (x)} \psi(x) dx.

Here {\psi}, called the amplitude, is usually understood to be “slowly varying” with respect to the real-valued {\phi_\lambda}, called the phase, where {\lambda} denotes a parameter or list of parameters and \phi'_\lambda gets larger as \lambda grows; for example {\phi_\lambda(x) = \lambda \phi(x)}. Thus the oscillatory behaviour is given mainly by the complex exponential {e^{i \phi_\lambda(x)}}.
Expressions of this form arise quite naturally in several problems, as we will see in Section 1, and typically one seeks to provide an upper bound on the absolute value of the integral above in terms of the parameters {\lambda}. Intuitively, as {\lambda} gets larger the phase {\phi_\lambda} changes faster and therefore {e^{i \phi_\lambda}} oscillates faster, producing more cancellation between the contributions of different intervals to the integral. We then expect the integral to decay as {\lambda} grows larger, and usually seek upper bounds of the form {|\lambda|^{-\alpha}}. Notice that if you take absolute values inside the integral above you just obtain {\|\psi\|_{L^1}}, a bound that does not decay in {\lambda} at all.
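This decay is easy to observe numerically. Here is a minimal sketch (not part of the original notes), assuming the Gaussian amplitude {\psi(x) = e^{-x^2}} and the quadratic phase {\phi_\lambda(x) = \lambda x^2}, for which the integral can be computed exactly: {\int e^{(i\lambda - 1)x^2} dx = \sqrt{\pi/(1 - i\lambda)}}, whose modulus is {\sqrt{\pi}\,(1+\lambda^2)^{-1/4} \sim \lambda^{-1/2}}.

```python
import numpy as np

def oscillatory_integral(lam, n=400_000, L=8.0):
    """Midpoint-rule approximation of I(lam) = ∫ e^{i lam x^2} e^{-x^2} dx."""
    dx = 2 * L / n
    x = -L + (np.arange(n) + 0.5) * dx
    return np.sum(np.exp((1j * lam - 1) * x**2)) * dx

# The modulus should match sqrt(pi) * (1 + lam^2)^(-1/4), i.e. decay
# like lam^(-1/2): quadrupling lam should roughly halve the integral.
for lam in (1.0, 4.0, 16.0, 64.0):
    num = abs(oscillatory_integral(lam))
    exact = np.sqrt(np.pi) * (1 + lam**2) ** (-0.25)
    print(f"lam = {lam:5.1f}   |I| = {num:.6f}   exact = {exact:.6f}")
```

Note how taking absolute values inside would instead give the constant {\|\psi\|_{L^1} = \sqrt{\pi}} for every {\lambda}.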
The main tool we will use is simply integration by parts. In the exercises you will also use a little basic complex analysis to obtain more precise information on certain special oscillatory integrals.

1. Motivation

In this section we shall showcase the appearance of oscillatory integrals in analysis with a couple of examples. The reader can find other interesting examples in the exercises.

1.1. Fourier transform of radial functions

Let {f : \mathbb{R}^d \rightarrow \mathbb{C}} be a radially symmetric function, that is, there exists a function {f_0: \mathbb{R}^{+} \rightarrow \mathbb{C}} such that {f(x) = f_0(|x|)} for every {x \in \mathbb{R}^d}. Let’s suppose for simplicity that {f\in L^1(\mathbb{R}^d)} (equivalently, that {f_0 \in L^1(\mathbb{R}, r^{d-1} dr)}), so that it has a well-defined Fourier transform. It is easy to see (by composing {f} with a rotation and using a change of variable in the integral defining {\widehat{f}}) that {\widehat{f}} must also be radially symmetric, that is, there must exist {g: \mathbb{R}^{+} \rightarrow \mathbb{C}} such that {\widehat{f}(\xi) = g(|\xi|)}; we want to understand its relationship with {f_0}. Therefore we write using polar coordinates

\displaystyle \begin{aligned} \widehat{f}(\xi) = & \int_{\mathbb{R}^d} f(x) e^{-2\pi i x \cdot \xi} dx  \\  = & \int_{0}^{\infty}\int_{\mathbb{S}^{d-1}} f_0(r) e^{-2\pi i r \omega\cdot \xi} r^{d-1} d\sigma_{d-1}(\omega) dr \\  = & \int_{0}^{\infty} f_0(r) r^{d-1} \Big(\int_{\mathbb{S}^{d-1}} e^{-2\pi i r \omega\cdot \xi} d\sigma_{d-1}(\omega)\Big) dr  \end{aligned}

where {d\sigma_{d-1}} denotes the surface measure on the unit {(d-1)}-dimensional sphere {\mathbb{S}^{d-1}} induced by the Lebesgue measure on the ambient space {\mathbb{R}^{d}}. By inspection, we see that the integral in brackets above is radially symmetric in {\xi}, and so if we define

\displaystyle  J(t) := \int_{\mathbb{S}^{d-1}} e^{-2\pi i t \omega\cdot \mathbf{e}_1} d\sigma_{d-1}(\omega),

with {\mathbf{e}_1 = (1, 0, \ldots, 0)}, we have

\displaystyle  \widehat{f}(\xi) = g(|\xi|) = \int_{0}^{\infty} f_0(r) r^{d-1} J(r|\xi|) dr. \ \ \ \ \ (1)

This is the relationship we were looking for: it allows one to calculate the Fourier transform of {f} directly from the radial information {f_0}.
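Formula (1) is easy to check numerically in a case where everything is explicit. The following sketch (not part of the original notes) assumes {d = 3}, where the spherical integral has the closed form {J(t) = 2\sin(2\pi t)/t}, and tests the formula on the Gaussian {f(x) = e^{-\pi |x|^2}}, which is its own Fourier transform.

```python
import numpy as np

# In d = 3 the spherical factor has the closed form J(t) = 2 sin(2 pi t) / t.
def J3(t):
    return 2.0 * np.sin(2 * np.pi * t) / t

# Right-hand side of formula (1) with f_0(r) = exp(-pi r^2) and d = 3,
# evaluated with a midpoint rule (midpoints keep r > 0).
def g(rho, n=200_000, R=8.0):
    dr = R / n
    r = (np.arange(n) + 0.5) * dr
    return np.sum(np.exp(-np.pi * r**2) * r**2 * J3(r * rho)) * dr

# Since exp(-pi |x|^2) is its own Fourier transform, g(rho) should
# reproduce exp(-pi rho^2).
for rho in (0.5, 1.0, 2.0):
    print(f"rho = {rho}   g(rho) = {g(rho):.8f}   exp(-pi rho^2) = {np.exp(-np.pi * rho**2):.8f}")
```

In general {J} is a Bessel-type function; the closed form above is special to three dimensions.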


Ptolemaics meetings 2 & 3

I’m very far behind with this; I’ve got to admit I’m busier than last year and haven’t fully coped with it yet. Luckily we’re progressing slowly, in order to understand things better, and that gives me the opportunity to merge the blog posts for meetings 2 & 3.

As I said in the previous post, the goal for now is to understand the proof of the {L^2\rightarrow L^{2,\infty}} boundedness of Carleson’s operator (through time-frequency analysis). As an introduction to the real thing, we’ve started from the simpler case of the Walsh transform, or Walsh series, or Walsh phase plane, whatever you want to call it. It’s easier because all the nasty technicalities disappear while the ideas needed are already there, which is why it is propaedeutic. We’re following Hytönen’s notes, as suggested by Tuomas (you can find them here: http://wiki.helsinki.fi/pages/viewpage.action?pageId=79564963). An alternative is Tao’s lecture notes (lecture 5 in particular) for his course Math254A W’ 01 (http://www.math.ucla.edu/~tao/254a.1.01w/), which are quite nice – as all of his stuff is. The main differences are that Hytönen proves every single statement, and that he deals with the Walsh series (the analogue of Fourier series) while Tao deals with the Walsh transform (the analogue of the Fourier transform). Also, Hytönen then goes on to prove the full Euclidean case, while Tao doesn’t.

The Walsh operators are best described as operators on the real line equipped with a different field structure. One works with {\mathbb{Z}_2((1/X))}, i.e. the formal Laurent series in {1/X} with coefficients in {\mathbb{Z}_2}, which can be identified with (the binary expansions of) the positive reals by

\displaystyle a_N \cdots a_0 . a_{-1} a_{-2} \cdots \equiv a_N X^N + \ldots + a_1 X + a_0 + \frac{a_{-1}}{X} + \frac{a_{-2}}{X^2} + \ldots .
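To make the identification concrete, here is a tiny sketch (the helper name `walsh_add` is mine, not from Hytönen’s notes): since coefficients live in {\mathbb{Z}_2}, adding two such series means adding binary digits mod 2, i.e. XORing the binary expansions with no carrying.

```python
def walsh_add(a, b, scale=16):
    """Add two dyadic rationals in the Walsh field: binary digits add
    mod 2 (bitwise XOR), with no carrying between positions.
    Fixed-point representation with resolution 2**-scale."""
    ia, ib = int(a * 2**scale), int(b * 2**scale)
    return (ia ^ ib) / 2**scale

# 1.5 = 1.10_2 and 1.25 = 1.01_2, so their Walsh sum is 0.11_2 = 0.75.
print(walsh_add(1.5, 1.25))    # 0.75
# Every element is its own additive inverse: x + x = 0.
print(walsh_add(0.625, 0.625)) # 0.0
```

This carry-free addition is what makes the Walsh model so much cleaner than the Fourier one: dyadic intervals interact with it perfectly.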


Ramsey numbers

Perhaps made famous by the work of Erdős, the Ramsey numbers are a very nice example of graph theory and combinatorics at work.

Notation here is somewhat nasty (well, it could be worse), so let us say that the complete graph on {n} vertices is denoted by {K_n}, the totally disconnected graph by {D_n} (i.e. the graph with {n} vertices and {0} edges), and, given a graph {G}, the expression {K_n \leq G } means that {G} has a subgraph on {n} vertices which is isomorphic to the complete graph (and similarly for {D_n}).

Definition 1 Given {k,l\in\mathbb{N}}, the Ramsey number {R(k,l)} is the minimum number such that any graph with at least {R(k,l)} vertices contains either a subgraph isomorphic to {K_k} or a subgraph isomorphic to {D_l}.

In order to generalize this notion readily, it’s better to reformulate it in terms of {2}-colorings:

Definition 2 Given {k,l\in\mathbb{N}}, the Ramsey number {R(k,l)} is the minimum number such that any {2}-coloring of the edges of {K_{R(k,l)}} (say in red and blue) has a red subgraph isomorphic to {K_k} or a blue subgraph isomorphic to {K_l}.

In this way the definition only requires notation for complete subgraphs. Notice that this definition makes the Ramsey numbers naturally symmetric, {R(k,l)=R(l,k)}; that trivially {R(1,k)=1} for every {k}, and {R(2,k)=k}; and finally that the numbers are increasing in each argument, i.e. {R(k,l)\leq R(k+1,l)}.
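The smallest nontrivial value, {R(3,3)=6}, is small enough to verify by brute force. The following sketch (my own, not from the post) checks all {2^{15}} edge {2}-colorings of {K_6} and then exhibits the classical pentagon/pentagram coloring of {K_5} with no monochromatic triangle.

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    """color maps each edge (i, j) with i < j to 0 or 1."""
    return any(
        color[(a, b)] == color[(b, c)] == color[(a, c)]
        for a, b, c in combinations(range(n), 3)
    )

# Every 2-coloring of the 15 edges of K_6 has a monochromatic triangle...
edges6 = list(combinations(range(6), 2))
assert all(
    has_mono_triangle(6, dict(zip(edges6, bits)))
    for bits in product((0, 1), repeat=len(edges6))
)

# ...while coloring the pentagon (differences 1, 4 mod 5) red and the
# pentagram blue in K_5 avoids monochromatic triangles, so R(3,3) = 6.
pentagon = {(i, j): 1 if (j - i) % 5 in (1, 4) else 0
            for i, j in combinations(range(5), 2)}
assert not has_mono_triangle(5, pentagon)
print("R(3,3) = 6 verified")
```

Of course this exhaustive approach dies quickly: already {R(5,5)} is unknown, which is precisely what makes these numbers interesting.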