# Oscillatory integrals I: single-variable phases

You might remember that I contributed some lecture notes on Littlewood-Paley theory to a masterclass, which I then turned into a series of three posts (I, II, III). I have also contributed lecture notes on some basic theory of oscillatory integrals, and I am going to do the same and share them here as a blog post in two parts. The presentation largely follows the one in Stein’s “Harmonic Analysis: Real-variable methods, orthogonality, and oscillatory integrals“, with inputs from Stein and Shakarchi’s “Functional Analysis: Introduction to Further Topics in Analysis“, from some lecture notes by Terry Tao for his 247B course, from a very interesting paper by Carbery, Christ and Wright and from a number of other places that I would now have trouble tracking down.
In this first part we will discuss the theory of oscillatory integrals when the phase is a function of a single variable. There are extensive exercises included that are to be considered part of the lecture notes; indeed, in order to keep the actual notes short and engage the reader, I have turned many things into exercises. If you are interested in learning about oscillatory integrals, you should not ignore them.
In the next post, we will study instead the case where the phases are functions of several variables.

0. Introduction

A large part of modern harmonic analysis is concerned with understanding cancellation phenomena happening between different contributions to a sum or integral. Loosely speaking, we want to know how much better we can do than if we had taken absolute values everywhere. A prototypical example of this is the oscillatory integral of the form

$\displaystyle \int e^{i \phi_\lambda (x)} \psi(x) dx.$

Here ${\psi}$, called the amplitude, is usually understood to be “slowly varying” with respect to the real-valued ${\phi_\lambda}$, called the phase, where ${\lambda}$ denotes a parameter or list of parameters and $\phi'_\lambda$ gets larger as $\lambda$ grows; for example ${\phi_\lambda(x) = \lambda \phi(x)}$. Thus the oscillatory behaviour is given mainly by the complex exponential ${e^{i \phi_\lambda(x)}}$.
Expressions of this form arise quite naturally in several problems, as we will see in Section 1, and typically one seeks to provide an upperbound on the absolute value of the integral above in terms of the parameters ${\lambda}$. Intuitively, as ${\lambda}$ gets larger the phase ${\phi_\lambda}$ changes faster and therefore ${e^{i \phi_\lambda}}$ oscillates faster, producing more cancellation between the contributions of different intervals to the integral. We expect then the integral to decay as ${\lambda}$ grows larger, and usually seek upperbounds of the form ${|\lambda|^{-\alpha}}$. Notice that if you take absolute values inside the integral above you just obtain ${\|\psi\|_{L^1}}$, a bound that does not decay in ${\lambda}$ at all.
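To see this decay in action before any theory, here is a quick numerical sketch (Python, with a crude midpoint rule; the phase ${\phi(t) = t^2}$ on ${[1,2]}$ and the sample values of ${\lambda}$ are arbitrary choices for illustration):

```python
import cmath

def osc_integral(lam, phi, a, b, n=20000):
    # crude midpoint-rule approximation of \int_a^b e^{i lam phi(t)} dt
    h = (b - a) / n
    return sum(cmath.exp(1j * lam * phi(a + (j + 0.5) * h)) for j in range(n)) * h

# phase phi(t) = t^2 on [1, 2]: here phi'(t) = 2t >= 2 never vanishes
for lam in (10, 100, 1000):
    val = abs(osc_integral(lam, lambda t: t * t, 1.0, 2.0))
    print(lam, val)  # taking absolute values inside would give 1 for every lam
```

The trivial bound (absolute values inside the integral) is ${b-a = 1}$ for every ${\lambda}$, while the computed values visibly decay like ${|\lambda|^{-1}}$.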
The main tool we will use is simply integration by parts. In the exercises you will also use a little basic complex analysis to obtain more precise information on certain special oscillatory integrals.

1. Motivation

In this section we shall showcase the appearance of oscillatory integrals in analysis with a couple of examples. The reader can find other interesting examples in the exercises.

1.1. Fourier transform of radial functions

Let ${f : \mathbb{R}^d \rightarrow \mathbb{C}}$ be a radially symmetric function, that is there exists a function ${f_0: \mathbb{R}^{+} \rightarrow \mathbb{C}}$ such that ${f(x) = f_0(|x|)}$ for every ${x \in \mathbb{R}^d}$. Let’s suppose for simplicity that ${f\in L^1(\mathbb{R}^d)}$ (equivalently, that ${f_0 \in L^1(\mathbb{R}, r^{d-1} dr)}$), so that it has a well-defined Fourier transform. It is easy to see (by composing ${f}$ with a rotation and using a change of variable in the integral defining ${\widehat{f}}$) that ${\widehat{f}}$ must also be radially symmetric, that is there must exist ${g: \mathbb{R}^{+} \rightarrow \mathbb{C}}$ such that ${\widehat{f}(\xi) = g(|\xi|)}$; we want to understand its relationship with ${f_0}$. Therefore we write using polar coordinates

\displaystyle \begin{aligned} \widehat{f}(\xi) = & \int_{\mathbb{R}^d} f(x) e^{-2\pi i x \cdot \xi} dx \\ = & \int_{0}^{\infty}\int_{\mathbb{S}^{d-1}} f_0(r) e^{-2\pi i r \omega\cdot \xi} r^{d-1} d\sigma_{d-1}(\omega) dr \\ = & \int_{0}^{\infty} f_0(r) r^{d-1} \Big(\int_{\mathbb{S}^{d-1}} e^{-2\pi i r \omega\cdot \xi} d\sigma_{d-1}(\omega)\Big) dr \end{aligned}

where ${d\sigma_{d-1}}$ denotes the surface measure on the unit ${(d-1)}$-dimensional sphere ${\mathbb{S}^{d-1}}$ induced by the Lebesgue measure on the ambient space ${\mathbb{R}^{d}}$. By inspection, we see that the integral in brackets above is radially symmetric in ${\xi}$, and so if we define

$\displaystyle J(t) := \int_{\mathbb{S}^{d-1}} e^{-2\pi i t \omega\cdot \mathbf{e}_1} d\sigma_{d-1}(\omega),$

with ${\mathbf{e}_1 = (1, 0, \ldots, 0)}$, we have

$\displaystyle \widehat{f}(\xi) = g(|\xi|) = \int_{0}^{\infty} f_0(r) r^{d-1} J(r|\xi|) dr. \ \ \ \ \ (1)$

This is the relationship we were looking for: it allows one to calculate the Fourier transform of ${f}$ directly from the radial information ${f_0}$.

Now we claim that the function ${J}$ is an example of oscillatory integral of the type mentioned at the beginning. Indeed, observe that the inner product ${\omega \cdot \mathbf{e}_1}$ depends only on the first component of ${\omega}$; thus we write ${\omega = (\cos \theta, \omega' \sin \theta )}$, with ${\theta}$ the angle between ${\omega}$ and ${\mathbf{e}_1}$ and ${\omega' \in \mathbb{S}^{d-2}}$. By factorising the spherical measure ${d\sigma_{d-1}}$ along the ${\mathbf{e}_1}$ axis, we can use this change of variables to write

\displaystyle \begin{aligned} J(t) =& \int_{\mathbb{S}^{d-2}} \int_{0}^{\pi} e^{-2\pi i t \cos\theta} (\sin\theta)^{d-2} d\theta d\sigma_{d-2}(\omega') \\ =& c_{d-2} \int_{0}^{\pi} e^{-2\pi i t \cos\theta} (\sin\theta)^{d-2} d\theta, \end{aligned}

because the integrand does not depend on ${\omega'}$ (here ${c_{d-2} = \int_{\mathbb{S}^{d-2}} d\sigma_{d-2}(\omega') = \sigma_{d-2}(\mathbb{S}^{d-2})}$). It is now trivial to match the last expression to the one for an oscillatory integral where the parameter is ${t}$. Later on we will see how to estimate it in terms of ${|t|}$ and show that ${J(t)}$ is bounded by a constant multiple of ${(1+|t|)^{-(d-1)/2}}$.

$J$ is a very well-known object in analysis: it’s a Bessel function, for which a rich collection of useful identities exists. However, I specifically want to use only integration-by-parts tools here and thus I will avoid any reference to the properties of Bessel functions – I will not even call them by their name (no offence to Bessel).
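To get a feel for ${J}$ numerically, here is a small sketch (Python; midpoint rule) in the case ${d=3}$, where the substitution ${u = \cos\theta}$ actually gives the closed form ${J(t) = 2\sin(2\pi t)/t}$, consistent with the claimed decay ${(1+|t|)^{-(d-1)/2} = (1+|t|)^{-1}}$:

```python
import cmath, math

def J(t, n=20000):
    # midpoint rule for c_1 * \int_0^pi e^{-2 pi i t cos(theta)} sin(theta) d(theta),
    # the case d = 3, with c_1 = sigma_1(S^1) = 2*pi
    h = math.pi / n
    s = sum(cmath.exp(-2j * math.pi * t * math.cos((j + 0.5) * h)) * math.sin((j + 0.5) * h)
            for j in range(n))
    return 2 * math.pi * s * h

for t in (0.7, 5.3, 40.1):
    exact = 2 * math.sin(2 * math.pi * t) / t  # closed form via u = cos(theta), d = 3
    print(t, abs(J(t) - exact))  # tiny discretisation error; note |J(t)| <= 2/t here
```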

The information we get from how fast ${J(t)}$ decays in terms of ${|t|}$ tells us something interesting. Indeed, recall that if ${f \in L^1(\mathbb{R}^d)}$ then its Fourier transform ${\widehat{f}}$ is continuous. In general, when ${1 < p \leq 2}$ and ${f \in L^p(\mathbb{R}^d)}$, we know by the Hausdorff-Young inequality that ${\widehat{f}}$ is in ${L^{p'}(\mathbb{R}^d)}$, but we don't know whether it is continuous or not. You can easily see this is generally not the case when ${p=2}$. However, for radial functions one can show continuity of the Fourier transform if the exponent ${p}$ is sufficiently close to 1! In particular

Proposition 1 If ${1 \leq p < \frac{2d}{d+1}}$ then the Fourier transform ${\widehat{f}}$ of any radial function ${f \in L^p(\mathbb{R}^d)}$ is continuous away from ${0}$.

This follows purely from the decay of the oscillatory integral ${J}$ and you will prove it in Exercise 1.

1.2. Counting the number of ways in which a number can be written as the sum of two squares

Consider the following diophantine-type problem: when can we write an integer ${n}$ as the sum of two squares, $n = x^2 + y^2$? If you fiddle around with examples a bit you might find that certain numbers have such a representation, say

$\displaystyle 5641 = 75^2 + 4^2,$

while others, such as the successor 5642, have no such representation. Studying this problem in more depth you might discover the Fibonacci identity

$\displaystyle (a^2 + b^2)(c^2+d^2)=(ac-bd)^2 + (ad+bc)^2,$

and deduce as a consequence that it suffices to study the case where ${n}$ is prime. If you went further down this road, you might learn that an odd prime ${p}$ can be written as $x^2+y^2$ if and only if $p \equiv 1 \mod 4$ (while ${2 = 1^2 + 1^2}$), and this gives the full answer for the problem: $n = x^2 + y^2$ has a solution if and only if in the prime factorisation of ${n}$ the prime factors that are congruent to $3 \mod 4$ appear an even number of times each.
However, after this, you might notice that some numbers have even more than one representation as a sum of two squares: for example,

$\displaystyle 5645 = 74^2 + 13^2 = 67^2 + 34^2.$

The next step would then be to try and count how many distinct solutions there are. It turns out that this can be done in terms of the exponents in the prime factorisation of ${n}$ (and the answer involves the Gaussian integers $\mathbb{Z}[i]$); however, this is not too informative, because factorising large integers is notoriously hard. Let us take a different, less algebraic approach instead.
We could generalise the question a little by considering sums of more than two squares. Letting ${k}$ be fixed and given an integer ${n \in \mathbb{N}}$, we define the ${k}$-th sum-of-squares function to be

$\displaystyle r_k(n) := \# \{ (x_1, \ldots, x_k) \in \mathbb{Z}^k \text{ such that } x_1^2 + \cdots + x_k^2 = n \};$

in words, ${r_k(n)}$ is the number of ways in which ${n}$ can be expressed as the sum of ${k}$ squares. Then we want to study the behaviour of the function ${r_k(n)}$ as ${n}$ grows. In the following however we will stick to the case ${k=2}$.

If you plot the graph of ${r_2}$ you will quickly realise it is not a very regular function (it’s sequence A004018 on the OEIS).

A natural approach to deal with this irregularity is to study the averaged sequence instead, which should behave better. This should help us answer the question “in how many ways can we expect to be able to write an arbitrary number ${n}$ as a sum of two squares?”. In formulas, we want to study the behaviour of ${\sum_{n=1}^{N} r_2(n)/N}$ as ${N}$ tends to ${\infty}$. The limit of the expression above is easy to find: the key is to notice that ${n = x^2 + y^2}$ means that the point ${(x,y) \in \mathbb{Z}^2}$ belongs to the circle of radius ${n^{1/2}}$ centred at ${(0,0)}$, and therefore to the ball of radius ${N^{1/2}}$ with the same centre. Therefore

$\displaystyle \sum_{n=1}^{N} r_2(n) = \#\{ (x,y) \in \mathbb{Z}^2 \text{ such that } x^2 + y^2 \leq N \},$

and a simple geometric argument shows that the limit of ${\frac{1}{N} \sum_{n=1}^{N} r_2(n)}$ is ${\pi}$ (see Exercise 2). The geometric argument actually says something more: namely, we can also give an upperbound on the error, that is

$\displaystyle \sum_{n=1}^{N} r_2(n) = \pi N + O(N^{1/2}).$

The error upperbound is significant because it is of smaller order than the main term (which grows like ${O(N)}$). The limit of the averaged sequence was very easy to find, so now we ask a more sophisticated question: what is the behaviour of the error term? How much can the sum deviate from ${\pi N}$? ${O(N^{1/2})}$ is what one would expect from a uniformly random behaviour, but it turns out that it is far from optimal, and we can do better. However, be warned that we do not know yet how much better we can do! It is still an open problem to establish the true asymptotic behaviour of the error term ${\mathcal{E}(N):= \sum_{n=1}^{N} r_2(n) - \pi N}$ (it is interesting to look at the behaviour of the averages at the dedicated MathWorld page).
Using oscillatory integral techniques, we can show the improved estimate

Proposition 2 For ${N}$ large enough, we have the error term estimate

$\displaystyle |\mathcal{E}(N)| = O(N^{1/3}).$

Proof: The idea is to use the Poisson Summation formula to reveal some interesting oscillation hidden in the problem. In the end, the proof will again rely on a good decay estimate for a certain oscillatory integral related to ${J}$.
Recall that, for ${f \in \mathcal{S}(\mathbb{R}^2)}$, that is a Schwartz function, the Poisson Summation formula says that

$\displaystyle \sum_{\boldsymbol{n} \in \mathbb{Z}^2} f(\boldsymbol{n}) = \sum_{\boldsymbol{n} \in \mathbb{Z}^2} \widehat{f}(\boldsymbol{n}).$

The quantity we are interested in is ${\Theta(N):=\sum_{\boldsymbol{n} \in \mathbb{Z}^2} \mathbf{1}_{B(0,N^{1/2})}(\boldsymbol{n})}$, but ${\mathbf{1}_{B(0,N^{1/2})}}$ (the characteristic function of a ball of radius ${N^{1/2}}$) is not in ${\mathcal{S}(\mathbb{R}^2)}$ – thus we have to regularise it. Let ${\delta>0}$ be a small parameter to be chosen later and let ${\varphi}$ be a bump function in ${C^\infty}$, supported in the unit ball and such that ${\int_{\mathbb{R}^2}\varphi(x)dx = 1}$; we let ${\varphi_\delta}$ denote the rescaled function ${\delta^{-2} \varphi (\delta^{-1}x)}$ (so that ${\int \varphi_{\delta} = 1}$ too). Then we define

$\displaystyle \chi_{N,\delta} := \mathbf{1}_{B(0,N^{1/2})} \ast \varphi_\delta,$

which is in ${\mathcal{S}(\mathbb{R}^2)}$ and is an approximation to ${\mathbf{1}_{B(0,N^{1/2})}}$. Applying the Poisson Summation formula to this function, we obtain

$\displaystyle \Theta_{\delta}(N):= \sum_{\boldsymbol{n} \in \mathbb{Z}^2} \chi_{N,\delta}(\boldsymbol{n}) = \widehat{\mathbf{1}}_{B(0,N^{1/2})}(0)\widehat{\varphi}(0) + \sum_{\boldsymbol{n} \neq 0 \in \mathbb{Z}^2} \widehat{\mathbf{1}}_{B(0,N^{1/2})}(\boldsymbol{n})\widehat{\varphi}(\delta\boldsymbol{n});$

the first term is easily evaluated to be exactly ${\pi N}$, our main term! Thus the second term is an error we have to control. Observe that, since ${\mathbf{1}_{B(0,N^{1/2})}}$ is radial, we have by (1) that

$\displaystyle \widehat{\mathbf{1}}_{B(0,N^{1/2})}(\boldsymbol{n}) = \int_{0}^{N^{1/2}} r J(r|\boldsymbol{n}|)dr.$

The function ${J}$ is not only an oscillatory integral, but it is itself oscillating, and with the techniques of the next section you will be able to show in Exercise 9 that

$\displaystyle \int_{0}^{R} r J(r)dr = O(R^{1/2}). \ \ \ \ \ (2)$

After a change of variables, estimate (2) shows that ${|\widehat{\mathbf{1}}_{B(0,N^{1/2})}(\boldsymbol{n})|\lesssim N^{1/4} |\boldsymbol{n}|^{-3/2}}$. In the range ${|\boldsymbol{n}|<\delta^{-1}}$ we have ${|\widehat{\varphi}(\delta\boldsymbol{n})|\lesssim 1}$ and therefore the above estimate gives us

$\displaystyle \sum_{|\boldsymbol{n}|< \delta^{-1}, \boldsymbol{n}\neq 0} |\widehat{\mathbf{1}}_{B(0,N^{1/2})}(\boldsymbol{n})\widehat{\varphi}(\delta\boldsymbol{n})| \lesssim N^{1/4} \sum_{|\boldsymbol{n}|< \delta^{-1}, \boldsymbol{n}\neq 0} |\boldsymbol{n}|^{-3/2} \sim N^{1/4} \delta^{-1/2}.$

In the range ${|\boldsymbol{n}|\geq \delta^{-1}}$ things are even better, since ${\widehat{\varphi}}$ decays fast, being in ${C^{\infty}}$; in particular, we have ${|\widehat{\varphi}(\delta\boldsymbol{n})|\lesssim (\delta|\boldsymbol{n}|)^{-1}}$ and thus, by the same argument as above, this range also contributes ${O(N^{1/4} \delta^{-1/2})}$. Summarising, we have proven that

$\displaystyle \Theta_\delta (N) = \pi N + O(N^{1/4} \delta^{-1/2}).$

Now, observe that

$\displaystyle \Theta_\delta ((N^{1/2}-\delta)^2) \leq \Theta(N) \leq \Theta_\delta ((N^{1/2}+\delta)^2); \ \ \ \ \ (3)$

this is because, as you can easily see by expanding the convolution,

$\displaystyle \chi_{(N^{1/2}-\delta)^2, \delta} \leq \mathbf{1}_{B(0,N^{1/2})} \leq \chi_{(N^{1/2}+\delta)^2, \delta}.$

Therefore we have, thanks to (3) and the fact that ${(N^{1/2}\pm \delta)^2 \approx N \pm 2 N^{1/2} \delta }$,

$\displaystyle \Theta(N) - \pi N = \mathcal{E}(N) = O(N^{1/4} \delta^{-1/2}) + O(N^{1/2} \delta),$

and if we optimize by choosing ${\delta = N^{-1/6}}$ we see that both terms above are ${O(N^{1/3})}$ and we are done. $\Box$
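It is instructive to check Proposition 2 empirically: the lattice-point count can be computed by brute force (Python sketch; note that ${\Theta(N)}$ below counts the origin too, so it equals ${1 + \sum_{n=1}^{N} r_2(n)}$):

```python
import math

def Theta(N):
    # number of integer points (x, y) with x^2 + y^2 <= N
    R = math.isqrt(N)
    return sum(2 * math.isqrt(N - x * x) + 1 for x in range(-R, R + 1))

for N in (10**3, 10**4, 10**5):
    E = Theta(N) - 1 - math.pi * N  # error term for sum_{n=1}^N r_2(n)
    print(N, round(E, 2), abs(E) / N ** (1 / 3))  # last column stays bounded
```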

2. Oscillatory integrals in one variable

In this section we present some techniques that allow one to estimate oscillatory integrals when one has a lowerbound for some of the derivatives of the phase. We will analyse objects of the form

$\displaystyle I(\lambda) := \int_{a}^{b} e^{i \lambda \phi(x)} dx$

and more generally

$\displaystyle I_\psi(\lambda) := \int_{a}^{b} e^{i \lambda \phi(x)} \psi(x) dx.$

Before we start, let’s consider the following heuristic. Suppose that we have functions ${f,g}$ and we know that ${g}$ is “slowly varying”, that is the derivative ${g'}$ is small. In order to find a good upperbound for an expression of the form ${|\int_a^b f(t)g(t) dt|}$, we could take advantage of the fact that ${g'}$ is small by using integration by parts to estimate instead the expression ${- \int_a^b F(t) g'(t) dt + (\text{boundary terms})}$, where ${F}$ is a primitive of ${f}$. What we gain is that now we are working with an integral for which we know one of the factors of the integrand, ${g'}$, is small. We will use this idea and variations thereof over and over.
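A minimal numerical illustration of this heuristic (Python; the choices ${f(t) = \cos(50t)}$ for the oscillating factor and ${g(t) = e^{-t}}$ for the slowly varying one are arbitrary, just for the sketch):

```python
import math

def midpoint(fn, a, b, n=4000):
    # crude midpoint-rule quadrature
    h = (b - a) / n
    return sum(fn(a + (j + 0.5) * h) for j in range(n)) * h

f = lambda t: math.cos(50 * t)       # oscillates fast
F = lambda t: math.sin(50 * t) / 50  # primitive of f, uniformly small
g = lambda t: math.exp(-t)           # slowly varying
gp = lambda t: -math.exp(-t)         # g'

direct = midpoint(lambda t: f(t) * g(t), 0.0, 1.0)
by_parts = F(1.0) * g(1.0) - F(0.0) * g(0.0) - midpoint(lambda t: F(t) * gp(t), 0.0, 1.0)
print(direct, by_parts)  # equal up to quadrature error, and both of size O(1/50)
```

The point of the integration by parts is visible in the second expression: every term carries the uniformly small factor ${F}$.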

2.1. Principle of non-stationary phase

We begin by considering the following case. Let ${\psi \in C^{\infty}(\mathbb{R})}$ have compact support in ${(a,b)}$ (so that in particular ${\psi(a)=\psi(b)=0}$) and suppose that the phase ${\phi \in C^\infty}$ satisfies ${\phi'(t) \neq 0}$ for all ${t \in [a,b]}$. We claim that in this case the integral ${I_\psi(\lambda)}$ decreases very fast in ${\lambda}$, in particular

Proposition 3 (Principle of non-stationary phase) Let ${\psi,\phi}$ be as above, that is ${\psi \in C^\infty_c ((a,b))}$ and ${\phi \in C^\infty}$ is such that ${\phi'\neq 0}$ on ${[a,b]}$. Then for every ${N >0}$ we have

$\displaystyle |I_\psi(\lambda)| \lesssim_{N,\psi,\phi} |\lambda|^{-N}.$

Remark 1 Notice that the bound given by the proposition above is only interesting when ${|\lambda|}$ is large. Indeed, when ${|\lambda|\leq 1}$ we can simply bound ${|I_\psi(\lambda)|\leq \|\psi\|_{L^1}\lesssim 1}$ by taking the absolute value inside the integral.

Proof: The proof is a simple integration-by-parts argument.
We want to use integration by parts a number of times. Notice that ${(e^{i\lambda \phi})' = i\lambda \phi' e^{i\lambda \phi}}$, so if we define the differential operator ${D}$ by

$\displaystyle Df(t) := \frac{1}{i \phi'(t)} \frac{df}{dt}$

we have ${\lambda^{-1} D(e^{i\lambda \phi})= e^{i\lambda \phi}}$. Notice ${D}$ is well-defined because ${\phi' \neq 0}$. Using integration by parts we then have

\displaystyle \begin{aligned} \int_{a}^{b} e^{i \lambda \phi(t)} \psi(t) dt = & \int_{a}^{b} \lambda^{-1} D(e^{i \lambda \phi(t)}) \psi(t) dt \\ = & \frac{e^{i \lambda \phi}\psi}{i \lambda \phi'} \bigg|_{a}^{b} + \int_{a}^{b} e^{i \lambda \phi(t)} \lambda^{-1} D^{\intercal}\psi(t) dt \\ = & \lambda^{-1} \int_{a}^{b} e^{i \lambda \phi(t)} D^{\intercal}\psi(t) dt, \end{aligned}

where the boundary term vanishes by the hypothesis on the support of ${\psi}$; here ${D^{\intercal}}$ denotes the transpose of the operator ${D}$, namely ${D^{\intercal}f(t) = - \frac{1}{i}\frac{d}{dt}\Big(\frac{f}{ \phi'}\Big)}$. By repeating the argument ${N}$ times we get

$\displaystyle I_{\psi}(\lambda) = \lambda^{-N} \int_{a}^{b} e^{i \lambda \phi} (D^{\intercal})^N(\psi) dt,$

and we conclude by taking absolute values (the resulting integral is finite). $\Box$

We can interpret the principle of non-stationary phase as saying that, for a generic phase ${\phi}$, the behaviour of the oscillatory integrals ${I_\psi(\lambda)}$ is determined by the points where ${\phi' = 0}$, because away from those points the integral contributes at most an error term ${O_N(|\lambda|^{-N})}$. In Section 2.3 below we will show how this heuristic can be made precise.
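Here is a small numerical experiment contrasting the two regimes (Python; the bump ${\psi(t) = e^{-1/(t(1-t))}}$ and the phases ${\phi(t)=t}$, non-stationary, versus ${\phi(t)=(t-1/2)^2}$, which has a stationary point inside the support, are arbitrary choices for illustration):

```python
import cmath, math

def bump(t):
    # smooth bump supported in (0, 1)
    return math.exp(-1.0 / (t * (1.0 - t))) if 0.0 < t < 1.0 else 0.0

def I_psi(lam, phi, n=40000):
    # midpoint rule for \int_0^1 e^{i lam phi(t)} bump(t) dt
    h = 1.0 / n
    return sum(cmath.exp(1j * lam * phi((j + 0.5) * h)) * bump((j + 0.5) * h)
               for j in range(n)) * h

for lam in (25, 100, 400):
    nonstat = abs(I_psi(lam, lambda t: t))            # phi' = 1 everywhere
    stat = abs(I_psi(lam, lambda t: (t - 0.5) ** 2))  # phi'(1/2) = 0
    print(lam, nonstat, stat)  # nonstat dies off much faster than stat
```

The stationary-phase column decays only like ${\lambda^{-1/2}}$ (as the results of the next sections predict), while the non-stationary one decays faster than any power.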

2.2. Van der Corput’s lemma

We will now consider more general situations than the non-stationary phase one.

We start by considering ${I(\lambda)}$, that is the case where ${\psi \equiv 1}$, and we assume that we have ${|\phi'(t)| > 1}$ for all ${t \in (a,b)}$. We make a further important assumption, namely that ${\phi'}$ is monotonic. Then we have

Proposition 4 If ${\phi}$ is such that ${\phi'}$ is monotonic and ${|\phi'|>1}$ on ${[a,b]}$, we have

$\displaystyle |I(\lambda)| \leq C |\lambda |^{-1}$

for an absolute constant ${C>0}$.

Some observations are in order, before we proceed to the proof:

1. First of all, the assumption that ${\phi'}$ is monotonic is fundamental: indeed, the statement is false otherwise! (prove this in Exercise 4; see also Exercise 5.)
2. Secondly, we cannot get a decay in ${\lambda}$ better than ${|\lambda|^{-1}}$, as the example ${\phi(t) = t}$ shows (you are cordially invited to do the calculation).
3. Thirdly, by a simple rescaling of the phase, we can see that the proposition is more general than it looks: if the lowerbound on ${\phi'}$ becomes more generally ${|\phi'|>\mu}$, then the estimate becomes ${|I(\lambda)|\leq C (\mu|\lambda|)^{-1}}$.
4. Finally, notice that the constant ${C}$ in the statement above depends neither on the phase ${\phi}$ nor on the interval ${[a,b]}$! In particular, if we allowed the constant to depend arbitrarily on the interval, we would trivially have that ${|I(\lambda)|\leq |a-b|}$, which is rather uninteresting.

Observations 2–4 are actually related, thanks to the scaling behaviour of the inequality. Indeed, if we ask for which values of ${\alpha}$ we can have the inequality

$\displaystyle |I(\lambda)|\leq C_{\phi} |\lambda|^{-\alpha}$

hold with a constant ${C_{\phi}}$ that is independent of ${(a,b)}$, then the answer is that necessarily ${\alpha=1}$. Prove this in Exercise 6.
Now we proceed with the proof.

Proof: We repeat the same integration-by-parts argument as before, except this time the boundary terms do not vanish. Thus we have, with ${D}$ the same differential operator as before, that

$\displaystyle \int_{a}^{b} e^{i \lambda \phi(t)}dt = \Big( \frac{e^{i \lambda \phi(b)}}{i \lambda \phi'(b)} - \frac{e^{i \lambda \phi(a)}}{i \lambda \phi'(a)} \Big) + \int_{a}^{b} e^{i \lambda \phi(t)} \lambda^{-1} D^{\intercal}(1) dt.$

Since ${|\phi'|>1}$, the boundary term is bounded by ${2/|\lambda|}$ simply by the triangle inequality. As for the other term, ${ \lambda^{-1}D^{\intercal}(1) = -(i\lambda)^{-1} d/dt(1/\phi')}$, and by taking absolute values inside the integral we have that it is bounded by

$\displaystyle \frac{1}{|\lambda|}\int_{a}^{b} \Big|\frac{d}{dt}\Big(\frac{1}{\phi'}\Big)\Big|dt.$

However, ${\phi'}$ is monotonic and therefore so is ${1/\phi'}$; this means that in the last integral the derivative is single-signed, and therefore we can take the absolute value outside (something normally prohibited!) and obtain that it equals

$\displaystyle \frac{1}{|\lambda|}\Big|\int_{a}^{b} \frac{d}{dt}\Big(\frac{1}{\phi'}\Big)dt \Big|.$

By the Fundamental Theorem of Calculus this is equal to ${|\lambda|^{-1}\big|\frac{1}{\phi'(b)} - \frac{1}{\phi'(a)}\big|}$, which is bounded by ${1/|\lambda|}$, thanks to the monotonicity assumption. Thus we have proven the proposition with ${C=3}$. $\Box$
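A quick numerical check of Proposition 4 (Python; the phase ${\phi(t) = 2t + t^3/3}$, which has ${\phi'(t) = 2 + t^2}$ monotonic and ${>1}$ on ${[0,1]}$, is an arbitrary choice for the sketch):

```python
import cmath

def I(lam, n=20000):
    # midpoint rule for \int_0^1 e^{i lam phi(t)} dt with phi(t) = 2t + t^3/3
    phi = lambda t: 2.0 * t + t ** 3 / 3.0  # phi'(t) = 2 + t^2: monotonic, > 1
    h = 1.0 / n
    return sum(cmath.exp(1j * lam * phi((j + 0.5) * h)) for j in range(n)) * h

for lam in (10, 100, 1000):
    print(lam, lam * abs(I(lam)))  # stays below the constant C = 3 from the proof
```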

The natural step after this is to investigate what happens when the condition ${|\phi'|>1}$ is violated but we still have lowerbounds for some higher derivatives. Indeed, we have

Theorem 5 (Van der Corput) Let ${k \geq 2}$ and let ${\phi \in C^k}$ be such that ${|\phi^{(k)}|>1}$ on ${(a,b)}$. Then

$\displaystyle |I(\lambda)|\leq C_k |\lambda|^{-1/k},$

where ${C_k}$ is a constant depending only on ${k}$.

Notice that ${|\lambda|^{-1/k}}$ decays slower than ${|\lambda|^{-1}}$ as ${\lambda \rightarrow \infty}$. This is to be expected, since the phase “slows down” near the zeros of ${\phi'}$ and hence there is less overall cancellation. It is indeed sharp, as the example ${\phi(t)=t^k}$ over ${[0,1]}$ shows (again, you are cordially invited to do the calculation – you can use complex integration and regularize the integral by introducing a factor of ${e^{-\varepsilon t^k}}$, then take ${\varepsilon \rightarrow 0}$ at the end; see also Exercise 13).
Again, it goes without saying that if we have ${|\phi^{(k)}|>\mu}$ instead, then the inequality becomes ${|I(\lambda)| \leq C_k (\mu |\lambda|)^{-1/k}}$. Moreover, the exponent ${1/k}$ is necessary for the inequality to hold with ${C_k}$ independent of the interval ${(a,b)}$ (see again Exercise 6).
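You can also watch the sharpness numerically (Python; midpoint rule for ${\phi(t) = t^k}$ on ${[0,1]}$ — the regularisation argument alluded to above shows that ${\lambda^{1/k} I(\lambda)}$ tends to ${\Gamma(1+\frac{1}{k}) e^{i\pi/(2k)}}$, so the rescaled modulus should settle near ${\Gamma(1+\frac{1}{k})}$):

```python
import cmath

def I_k(lam, k, n=60000):
    # midpoint rule for \int_0^1 e^{i lam t^k} dt
    h = 1.0 / n
    return sum(cmath.exp(1j * lam * ((j + 0.5) * h) ** k) for j in range(n)) * h

for k in (2, 3):
    for lam in (100.0, 2000.0):
        print(k, lam, lam ** (1.0 / k) * abs(I_k(lam, k)))
        # the rescaled modulus approaches a nonzero constant: genuine lam^{-1/k} decay
```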

Remark 2 You should have noticed that this time we are not making an explicit monotonicity assumption as in Proposition 4. However, we implicitly still are! Indeed, the fact that ${|\phi^{(k)}|>1}$ for some ${k\geq 2}$ implies that ${\phi^{(k-1)}}$ attains value zero in at most one point of ${(a,b)}$, and thus ${\phi^{(k-2)}}$ attains value zero in at most two points, and so on; iterating, we see that ${\phi'}$ has at most ${k-2}$ changes of monotonicity, or in other words we can partition ${(a,b)}$ into at most ${k-1}$ intervals such that ${\phi'}$ is monotonic on each interval.

Proof: The proof proceeds by induction on ${k}$. The case ${k=1}$ (with the additional assumption that ${\phi'}$ be monotonic) has been proven in Proposition 4. Notice that if ${|\phi''|>1}$ then ${\phi'}$ is monotonic.
Assume now that the statement is true for ${k-1}$. Since ${|\phi^{(k)}|>1}$, the ${(k-1)}$-th derivative ${\phi^{(k-1)}}$ can have at most one zero in ${(a,b)}$. Denote this zero by ${t_0}$ (if ${\phi^{(k-1)}}$ never vanishes, take ${t_0}$ to be the endpoint where ${|\phi^{(k-1)}|}$ is smallest; the argument below only simplifies) and split the integral as

$\displaystyle \int_{a}^{t_0 - \delta} + \int_{t_0 - \delta}^{t_0 + \delta} + \int_{t_0 + \delta}^{b}$

for some ${\delta>0}$ to be chosen later. In the interval ${(a,t_0 - \delta)}$ the function ${\phi^{(k-1)}}$ is never zero, but moreover we have by the assumption on its derivative ${\phi^{(k)}}$ that ${|\phi^{(k-1)}|>\delta}$; similarly on the interval ${(t_0 + \delta, b)}$. By inductive hypothesis we therefore have that

$\displaystyle \Big|\int_{a}^{t_0 - \delta}e^{i\lambda \phi(t)} dt \Big| + \Big|\int_{t_0 + \delta}^{b} e^{i\lambda \phi(t)} dt \Big| \leq 2C_{k-1} (\delta |\lambda|)^{-1/(k-1)}.$

As for the remaining integral, we just estimate trivially

$\displaystyle \Big|\int_{t_0 - \delta}^{t_0 + \delta}e^{i\lambda \phi(t)} dt \Big| \leq 2 \delta.$

Putting everything together, we have shown that ${|I(\lambda)| \leq 2C_{k-1} (\delta |\lambda|)^{-1/(k-1)} + 2 \delta}$, and by choosing ${\delta = |\lambda|^{-1/k}}$ this gives

$\displaystyle |I(\lambda)| \leq (2C_{k-1}+2)|\lambda|^{-1/k},$

proving the induction (with ${C_k = 2C_{k-1} + 2}$). $\Box$

Although the constant ${C_k}$ obtained with the above argument suffices for most (if not all) applications, it’s interesting to notice it is not optimal in ${k}$: solving the recursion ${C_k = 2C_{k-1} + 2}$ with ${C_1 = 3}$ gives ${C_k = 5 \cdot 2^{k-1} - 2}$, which grows exponentially. See Exercise 11 if you are interested in determining the correct behaviour of the optimal constant in ${k}$.

It is a simple matter of integrating by parts to extend Van der Corput’s lemma to the oscillatory integrals ${I_\psi(\lambda)}$ as well. We obtain

Corollary 6 Let ${\psi \in C^1}$ and let the phase ${\phi}$ satisfy ${|\phi^{(k)}|>1}$ in ${(a,b)}$ for some ${k\geq 1}$ (if ${k=1}$, assume additionally that ${\phi'}$ is monotonic). Then the inequality

$\displaystyle |I_{\psi}(\lambda)| \leq C'_k \Big[|\psi(b)| + \int_a^b |\psi'(t)|dt\Big] \cdot |\lambda|^{-1/k}$

holds, with ${C'_k > 0}$ an absolute constant depending only on ${k}$.

Thus the overall constant will depend on ${a,b}$ through the amplitude ${\psi}$ (though notice that if we assume ${\|\psi\|_{L^\infty} + \|\psi'\|_{L^1}}$ is finite, it is independent of ${a,b}$), but we have isolated its dependence in the term in square brackets. The constant is still independent of the phase ${\phi}$.

Proof: Let ${\Phi(t) := \int_{a}^{t} e^{i\lambda \phi(s)}ds}$, so that

$\displaystyle I_\psi (\lambda) = \int_{a}^{b} \Phi'(t) \psi(t) dt.$

Integrating by parts and taking absolute values we have that ${|I_\psi(\lambda)|}$ is bounded by

$\displaystyle |\Phi(b)\psi(b)| + \int_a^b |\Phi(t)| |\psi'(t)| dt.$

For the first term, by Theorem 5 (or Proposition 4 when ${k=1}$) we can estimate ${|\Phi(b)| \lesssim_k |\lambda|^{-1/k}}$. Similarly, the second term can then be estimated by ${C_k |\lambda|^{-1/k} \int_a^b |\psi'(t)| dt}$. Summing these contributions gives the stated bound. $\Box$

Notice the integration-by-parts trick we used here is different from the one used for Propositions 3, 4.

2.3. Method of stationary phase

Now we go back to the observation made at the end of Section 2.1. Recall that the portions of the integral ${I_\psi(\lambda)}$ where the phase satisfies ${\phi' \neq 0}$ contribute at most ${O_{\psi,N}(\lambda^{-N})}$ for arbitrary ${N>0}$ and thus can typically be treated as an error term. The behaviour of ${I_\psi(\lambda)}$ is therefore determined by the zeros of ${\phi'}$ and higher derivatives of the phase. In particular, one can perform a full asymptotic expansion of the oscillatory integral ${I_\psi(\lambda)}$. This is known as the method of stationary phase; it’s particularly useful in physics, where it prominently appears in the semi-classical approximation to Quantum Field Theory. We can state (one version of) the method as follows.

Theorem 7 (Method of stationary phase) Assume that ${\phi \in C^\infty}$. Let ${k\geq 2}$ and assume that

$\displaystyle \phi'(t_0) = \ldots = \phi^{(k-1)}(t_0) = 0 \qquad \text{ and that } \phi^{(k)}(t_0) \neq 0.$

If ${\psi \in C^{\infty}_c}$ is supported in a sufficiently small neighbourhood of ${t_0}$, then there exist coefficients ${a_j}$ for ${j \in \mathbb{N}}$ (each depending only on finitely many derivatives of ${\phi}$ and ${\psi}$ at ${t_0}$) such that

$\displaystyle I_\psi(\lambda) \simeq e^{i \lambda \phi(t_0)} \lambda^{-1/k} \sum_{j \in \mathbb{N}} a_j \lambda^{-j/k}, \ \ \ \ \ (4)$

where by ${\simeq}$ we mean that for all ${n>0}$ we have for the sum truncated at ${n-1}$ that

$\displaystyle \bigg|I_\psi(\lambda) - e^{i \lambda \phi(t_0)} \lambda^{-1/k} \sum_{j = 0}^{n-1} a_j \lambda^{-j/k}\bigg| \lesssim_{\psi,\phi,n} |\lambda|^{-n/k}$

as ${\lambda \rightarrow \infty}$, and moreover for all ${n, \ell > 0}$ we have

$\displaystyle \bigg|\Big(\frac{d}{d\lambda}\Big)^\ell \Big[ I_\psi(\lambda) - e^{i \lambda \phi(t_0)} \lambda^{-1/k} \sum_{j = 0}^{n-1} a_j \lambda^{-j/k}\Big]\bigg| \lesssim_{\psi,\phi,\ell,n} |\lambda|^{-\ell-n/k}.$

The coefficients ${a_j}$ can be determined explicitly – for example, when ${k=2}$ an explicit calculation of ${a_0}$ shows that the main term in ${I_\psi(\lambda)}$ is

$\displaystyle e^{i\lambda \phi(t_0)}\Big(\frac{2\pi}{-i \lambda \phi''(t_0)}\Big)^{1/2}\psi(t_0).$

This is excellent for a quick-and-dirty estimate of complicated integrals.
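As a quick test of this main-term formula, here is a numerical comparison (Python; the phase ${\phi(t)=t^2}$ with ${t_0=0}$, ${\phi''(t_0)=2}$, and the bump ${\psi(t)=e^{-1/(1-t^2)}}$ on ${(-1,1)}$ are arbitrary choices for the sketch):

```python
import cmath, math

def psi(t):
    # smooth bump supported in (-1, 1), with psi(0) = e^{-1}
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def I_psi(lam, n=100000):
    # midpoint rule for \int_{-1}^{1} e^{i lam t^2} psi(t) dt
    h = 2.0 / n
    return sum(cmath.exp(1j * lam * (-1.0 + (j + 0.5) * h) ** 2)
               * psi(-1.0 + (j + 0.5) * h) for j in range(n)) * h

def main_term(lam):
    # e^{i lam phi(t_0)} (2 pi / (-i lam phi''(t_0)))^{1/2} psi(t_0), with phi(t_0) = 0
    return cmath.sqrt(2.0 * math.pi / (-2j * lam)) * psi(0.0)

for lam in (50, 200, 800):
    rel = abs(I_psi(lam) - main_term(lam)) / abs(main_term(lam))
    print(lam, rel)  # relative error shrinks as lambda grows
```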
We will not prove the above theorem here since it would make these notes unnecessarily long, but if you fancy you can prove it yourself in Exercise 13 following the guidelines provided there.

Exercises

This is a (long) list of exercises meant to improve your grasp on the topic and provide you with useful ideas for the future – do the ones you like. Those that require a bit more work are marked by ${\bigstar}$‘s – they are not necessarily harder, just longer maybe. The ones unmarked are probably more important for a basic understanding though. In any case, hints are given further down the page. Since the exercises are meant to be a complement to the lectures, don’t be afraid to have a look at the hints.

Exercise 1 (${\bigstar}$) Let ${p}$ be such that ${1\leq p < 2d/(d+1)}$. Show, by using the decay estimate ${|J(t)| \lesssim (1 + |t|)^{-(d-1)/2}}$, that if ${f}$ is a radial function in ${L^p(\mathbb{R}^d)}$ and ${\mathbb{R}^d \ni \xi\neq 0}$, then the Fourier transform ${\widehat{f}}$ is continuous at ${\xi}$. (See hints for a walk-through)

Exercise 2 Show, using the geometric interpretation given in Section 1, that there exists some constant ${C>0}$ such that for all ${N>1}$

$\displaystyle \big|\sum_{n=1}^{N} r_2(n) - \pi N \big| \leq C N^{1/2}$

holds.

Exercise 3 If ${D}$ is a (linear) differential operator, its transpose is given by the linear operator ${D^{\intercal}}$ that satisfies

$\displaystyle \langle Df, g \rangle = \langle f, D^{\intercal} g \rangle$

for all test functions ${f,g \in C^\infty_c}$. While ${D}$ satisfies Leibniz’s rule

$\displaystyle D(fg) = g Df + f Dg,$

show that ${D^{\intercal}}$ in general does not.

Exercise 4 Show that Proposition 4 is false if the assumption that ${\phi'}$ is monotonic is dropped. In other words, untangling the quantifiers, construct a family of phases ${(\phi^{\lambda})_{\lambda}}$ such that ${|(\phi^{\lambda})'|>1}$ on ${(a,b)}$ for each ${\lambda}$, but ${|\lambda||I(\lambda; \phi^{\lambda})|}$ is not bounded as ${\lambda \rightarrow +\infty}$.

[To destroy cancellation in ${I(\lambda)}$, you should aim to make ${{(\phi^{\lambda})}'}$ oscillate at the scale ${|\lambda|^{-1}}$. Indeed, consider the following: looking only at the real part of the integral, we are trying to make ${\int_{0}^{1} \cos(2\pi \lambda \phi^{\lambda}(t))dt}$ large (and positive, say). The function ${\cos(2\pi \lambda x)}$ is positive for ${x}$ in ${\big[-\frac{1}{4\lambda},\frac{1}{4\lambda}\big] + \frac{1}{\lambda}\mathbb{Z}}$, and negative otherwise. If you look at the graph of ${\phi^{\lambda}}$ (and I encourage you to make a drawing of the argument that follows), you want it to spend as much time as possible in the horizontal bands ${\mathbb{R} \times \big(\big[-\frac{1}{4\lambda},\frac{1}{4\lambda}\big] + \frac{1}{\lambda}\mathbb{Z}\big)}$ and as little time as possible in the complement, that is in the bands ${\mathbb{R} \times \big(\big[\frac{1}{4\lambda},\frac{3}{4\lambda}\big] + \frac{1}{\lambda}\mathbb{Z}\big)}$, so that the positive contribution outweighs the negative one. To achieve this, ${(\phi^{\lambda})'}$ should be small when ${\phi^{\lambda}}$ is in the former bands (but not too small, since we still want ${|(\phi^{\lambda})'|>1}$), and quite large when in the latter bands. In particular, ${(\phi^{\lambda})'}$ will not be monotone and it will oscillate between two behaviours over an interval of length ${\sim \lambda^{-1}}$.]

Exercise 5 Show the following weaker version of Van der Corput’s lemma for ${k=1}$: assume that ${|\phi'|>\mu}$ but, instead of assuming that ${\phi'}$ is monotonic, we only assume that ${|\phi''| \leq M}$ on ${(a,b)}$; prove that

$\displaystyle |I(\lambda)| \lesssim \Big(\frac{1}{\mu} + \frac{M (b-a)}{\mu^2}\Big)|\lambda|^{-1}.$

Exercise 6 Let ${\phi}$ be a phase that satisfies ${|\phi^{(k)}|>1}$ on ${(a,b)}$ for some ${k\geq 1}$. Show that a necessary condition for the inequality

$\displaystyle |I(\lambda)|\leq C_{\phi} |\lambda|^{-\alpha}$

to hold with ${C_{\phi}}$ independent of ${[a,b]}$ is that ${\alpha=1/k}$.
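The scaling behind Exercise 6 can also be seen numerically: for ${\phi(t) = t^k}$ on ${(0,1)}$ the quantity ${\lambda^{1/k}|I(\lambda)|}$ stabilises as ${\lambda \rightarrow \infty}$ (for ${k=3}$ one can show the limit is ${|\int_0^\infty e^{is^3}ds| = \Gamma(4/3)}$, by rotating the contour). A sketch:

```python
import numpy as np

def I(lam, k=3, n=2_000_000):
    r"""Trapezoid-rule approximation of \int_0^1 e^{i lam t^k} dt."""
    t = np.linspace(0.0, 1.0, n + 1)
    f = np.exp(1j * lam * t**k)
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

# lam^{1/k} |I(lam)| should stabilise: the decay rate is exactly 1/k
vals = [lam ** (1 / 3) * abs(I(lam)) for lam in (1000.0, 8000.0)]
```

No slower rate ${\lambda^{-\alpha}}$ with ${\alpha > 1/k}$ could be compatible with this behaviour, which is the point of the exercise.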

Exercise 7 Show, using integration by parts (as many times as necessary) and Van der Corput’s lemma, that we have the estimate

$\displaystyle |J(t)| \lesssim_d (1 + |t|)^{-(d-1)/2},$

where ${J}$ is the function introduced in Section 1.
[This is not how this estimate is usually proved. There is indeed a shortcut, using certain identities for the function ${J}$, but it is interesting that simple integration by parts suffices.]
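For ${d=2}$ there is a classical closed form against which Exercise 7 can be checked: the ${\theta \mapsto \pi - \theta}$ symmetry kills the imaginary part, and one has ${J(r) = \int_0^\pi e^{-2\pi i r \cos\theta}\,d\theta = \pi J_0(2\pi r)}$ with ${J_0}$ the Bessel function. A sketch verifying both the identity and the ${(1+|t|)^{-1/2}}$ decay:

```python
import numpy as np
from scipy.special import j0

def J(r, n=20_000):
    r"""Trapezoid-rule approximation of \int_0^pi e^{-2 pi i r cos(theta)} dtheta."""
    theta = np.linspace(0.0, np.pi, n + 1)
    f = np.exp(-2j * np.pi * r * np.cos(theta))
    dt = theta[1] - theta[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

# spot-check the Bessel identity J(r) = pi * J_0(2 pi r) at one point
identity_gap = abs(J(3.7) - np.pi * j0(2 * np.pi * 3.7))

# decay: |J(r)| (1 + r)^{1/2} should stay bounded
rs = np.logspace(0, 3, 200)
sup = (np.abs(np.pi * j0(2 * np.pi * rs)) * np.sqrt(1 + rs)).max()
```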

Exercise 8 The Airy equation is the dispersive PDE

\displaystyle \begin{aligned} \partial_t u + \partial^3_x u & = 0, \\ u(0,x) & = f(x). \end{aligned}

By taking a Fourier transform in the spatial variable ${x}$, it is not hard to see that the solution to the Airy equation can be written formally as the convolution

$\displaystyle u(t,x) = \frac{1}{2\pi}\int f(y) \frac{1}{t^{1/3}}\mathrm{Ai}\Big(\frac{x-y}{t^{1/3}}\Big) dy,$

where ${\mathrm{Ai}(x)}$ denotes the Airy function

$\displaystyle \mathrm{Ai}(x) := \int_{\mathbb{R}} e^{i(\xi^3 + x \xi)} d\xi,$

which is clearly an oscillatory integral (provided the integral exists!). You will show that it satisfies the following estimates:

1. For ${x>1}$ we have superpolynomial decay, that is for every ${N>0}$ we have ${|\mathrm{Ai}(x)|\lesssim_N |x|^{-N}}$.
2. For ${-1 \leq x \leq 1}$ we have ${|\mathrm{Ai}(x)|\lesssim 1}$.
3. For ${x < -1}$ we have ${|\mathrm{Ai}(x)|\lesssim |x|^{-1/4}}$.

1. It will be useful to smoothly split ${\mathbb{R}}$ dyadically. Let ${\varphi}$ be a ${C^\infty}$ bump function supported in ${[-2,2]}$, with ${\varphi\equiv 1}$ on ${[-1,1]}$; define ${\psi(\xi) := \varphi(\xi/2) - \varphi(\xi)}$, and ${\psi_j(\xi) := \psi(2^{-j}\xi)}$. Show that ${\varphi(\xi) + \sum_{j \in \mathbb{N}} \psi_j(\xi)= 1}$ for every ${\xi}$ and that ${\psi_j}$ is supported in ${2^j < |\xi| < 2^{j+2}}$. We can make sense of the integral defining ${\mathrm{Ai}(x)}$ as the limit as ${n\rightarrow\infty}$ of

$\displaystyle \int e^{i(\xi^3 + x \xi)}\varphi(\xi) d\xi + \sum_{j=0}^{n} \int e^{i(\xi^3 + x \xi)}\psi_j(\xi) d\xi.$

2. Splitting the integral using the decomposition above, show 1) by the principle of non-stationary phase. In doing this, you will also prove that the limit above indeed exists pointwise.
3. Show 2) adapting the argument you just used for ii).
4. Show 3) by Corollary 6 (and the splitting).
5. Show the simple dispersive estimate

$\displaystyle \|u(\cdot,t)\|_{L^\infty(\mathbb{R},dx)} \lesssim |t|^{-1/3} \|f\|_{L^1(\mathbb{R})}.$

[The integration in the definition of ${\mathrm{Ai}(x)}$ is over all of ${\mathbb{R}}$ instead of a finite interval as in Section 2; that is why we introduced the splitting. More importantly, notice that the phase ${x\xi + \xi^3}$ is not of the form ${\lambda \phi}$, or at least so it appears at first sight…]
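Since the Airy function is available in standard libraries, the three estimates can be checked numerically. Mind the normalisation: with the standard ${\mathrm{Ai}_{\mathrm{std}}(y) = \frac{1}{2\pi}\int e^{i(t^3/3 + yt)}dt}$, the substitution ${\xi = 3^{-1/3}t}$ in our definition gives ${\mathrm{Ai}(x) = 2\pi\, 3^{-1/3}\, \mathrm{Ai}_{\mathrm{std}}(3^{-1/3}x)}$. A sketch:

```python
import numpy as np
from scipy.special import airy

def Ai(x):
    r"""\int e^{i(xi^3 + x xi)} d xi, via the standard Airy function:
    the substitution xi = 3^{-1/3} t gives 2 pi 3^{-1/3} Ai_std(3^{-1/3} x)."""
    c = 3.0 ** (-1.0 / 3.0)
    return 2.0 * np.pi * c * airy(c * x)[0]

decay_pos = abs(Ai(20.0))                                        # estimate 1.
bounded_mid = max(abs(Ai(x)) for x in np.linspace(-1, 1, 101))   # estimate 2.
xs = np.linspace(-100.0, -1.0, 1000)
decay_neg = (np.abs(Ai(xs)) * np.abs(xs) ** 0.25).max()          # estimate 3.
```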

Exercise 9 Show, using integration by parts and Van der Corput’s lemma, estimate (2); that is, when ${d=2}$ (thus ${J(r) = \int_{0}^{\pi} e^{-2 \pi i r \cos\theta} d\theta}$) show that we have for large ${R}$

$\displaystyle \Big|\int_{0}^{R} r J(r) dr \Big| \lesssim R^{1/2}.$

The point is that ${\int_{0}^{R} r J(r) dr}$ is itself an oscillatory integral (expanding ${J}$ and using Fubini shows this is the case).
[Notice this is a much better estimate than the one you would get from just plugging in the estimate ${|J(r)|\lesssim |r|^{-1/2}}$, which would give you ${O(R^{3/2})}$. There is therefore additional cancellation to be exploited.]
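For the record, combining the Bessel identity ${J(r) = \pi J_0(2\pi r)}$ with the classical formula ${\frac{d}{dx}\big[x J_1(x)\big] = x J_0(x)}$ gives the closed form ${\int_0^R r J(r)\,dr = \frac{R}{2} J_1(2\pi R)}$, which makes the ${O(R^{1/2})}$ bound transparent since ${|J_1(x)| \lesssim x^{-1/2}}$. A numerical confirmation:

```python
import numpy as np
from scipy.special import j0, j1

def int_rJ(R, n=500_000):
    r"""Trapezoid-rule approximation of \int_0^R r J(r) dr, J(r) = pi J_0(2 pi r)."""
    r = np.linspace(0.0, R, n + 1)
    f = np.pi * r * j0(2 * np.pi * r)
    dr = r[1] - r[0]
    return dr * (f.sum() - 0.5 * (f[0] + f[-1]))

R = 50.0
closed_form = (R / 2) * j1(2 * np.pi * R)   # from (d/dx)[x J_1(x)] = x J_0(x)
gap = abs(int_rJ(R) - closed_form)
ratio = abs(closed_form) / np.sqrt(R)       # the O(R^{1/2}) constant
```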

Exercise 10 An estimate of the form

$\displaystyle |\{t \in (a,b) \text{ s.t. } |\phi(t)|< \lambda\}| = O(\lambda^{\alpha})$

for some ${\alpha>0}$ is called a sublevel-set estimate. Notice that it is only interesting for small ${\lambda}$. These estimates are in many ways related to oscillatory integral estimates, as this exercise will show:

1. Show that Van der Corput’s lemma implies that, under the same hypotheses on ${\phi}$, for every ${\lambda>0}$ we have

$\displaystyle |\{t \in (a,b) \text{ s.t. } |\phi(t)|< \lambda\}| \lesssim_k \lambda^{1/k}. \ \ \ \ \ (6)$

We stress that the constant only depends on ${k}$.

2. Show conversely that if we assume that (6) holds, we can prove Van der Corput’s lemma by splitting the interval ${(a,b)}$ into ${\{t \in (a,b) \text{ s.t. } |\phi'(t)|<\theta\}}$ and its complement, where ${\theta}$ is a parameter to be optimized (Remark 2 might come in handy).
3. Now prove (6) directly by induction in ${k}$ (thus providing another proof of Van der Corput’s lemma, when combined with ii)).
4. (Only if you know about ${p}$-adics, otherwise ignore) Let ${p}$ be a prime and let ${\mathbb{Z}_p}$ be the ring of ${p}$-adic integers, with its non-archimedean valuation ${|\cdot|_p}$. If ${P}$ is a polynomial in ${\mathbb{Z}[X]}$, what does the sublevel-set

$\displaystyle \{ x \in \mathbb{Z}_p \text{ s.t. } |P(x)|_p < \lambda \}$

correspond to, in simpler terms? And what would an estimate like (6) mean in this context?

Exercise 11 (${\bigstar}$) The proof of Van der Corput’s lemma we have presented in these notes gave us a constant that is exponential in ${k}$; in particular, you can easily see that it gives ${C_k \sim 2^k}$ (Theorem 5). This behaviour is not sharp. Indeed, if ${B_k}$ denotes the smallest constant for which the inequality

$\displaystyle |I(\lambda)|\leq B_k |\lambda|^{-1/k}$

holds (under the hypotheses of Theorem 5), then we actually have that the constant grows linearly in ${k}$, namely ${B_k \sim k}$.
In this exercise you will obtain the optimal behaviour of the constant, showing that ${|I(\lambda)|\leq C k |\lambda|^{-1/k}}$. This will be achieved by using the same strategy as in ii)-iii) of Exercise 10, but with an improved dependency on ${k}$ for estimate (6).

1. Show that, if ${E}$ is a measurable subset of ${\mathbb{R}}$ with ${|E|>0}$, for any ${k}$ one can find ${x_0, \ldots, x_k \in E}$ such that for any ${j \in \{0,1,\ldots, k\}}$

$\displaystyle \prod_{i \;:\; i\neq j} |x_i - x_j| \geq \Big(\frac{|E|}{2e}\Big)^k.$

[hint: squeeze ${E}$ into an interval of length ${|E|}$, then do the obvious thing.]

2. Show the following generalisation of the mean-value theorem: let ${x_0 < x_1 < \cdots < x_k}$ and ${f \in C^k([x_0,x_k])}$; then there exists ${y \in (x_0, x_k)}$ such that

$\displaystyle f^{(k)}(y) = (-1)^k k! \sum_{j=0}^{k} \frac{f(x_j)}{\prod_{i \;:\; i\neq j} (x_i - x_j)}.$

[hint: use Lagrange interpolation at the ${x_j}$‘s to get a polynomial approximant, then apply Rolle’s theorem to the difference ${k}$ times.]

3. Show that if ${\phi}$ is such that ${\phi^{(k)}(t)>1}$ for all ${t \in \mathbb{R}}$, then we have the sublevel-set estimate (see Exercise 10)

$\displaystyle |\{t \in \mathbb{R} \text{ s.t. } |\phi(t)|<\lambda\}|\leq (2e)((k+1)!)^{1/k} \lambda^{1/k}.$

In particular, use i) and ii) above applied to ${E = \{t \in \mathbb{R} \text{ s.t. } |\phi(t)|<\lambda\}}$ and ${f = \phi}$.

4. Use Stirling’s approximation to show that ${((k+1)!)^{1/k} \sim k}$.
5. Prove Van der Corput’s lemma using the same proof as in ii) of Exercise 10 (using the sublevel-set estimate above that has an improved constant) to conclude that

$\displaystyle |I(\lambda)|\leq C k |\lambda|^{-1/k}.$

6. Show that ${B_k}$ cannot grow slower than ${k}$ [check a canonical example and use complex integration techniques; don’t expect a straightforward calculation though.]
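Regarding iv), Stirling’s approximation in fact pins down the constant: ${((k+1)!)^{1/k} = (1+o(1))\,k/e}$, so the sublevel-set estimate of iii) really does have linearly growing constant. A quick check via ${\log\Gamma}$ (to avoid overflowing the factorial):

```python
import math

def c(k):
    """((k+1)!)^{1/k}, computed via lgamma to avoid overflow."""
    return math.exp(math.lgamma(k + 2) / k)

# Stirling: ((k+1)!)^{1/k} = (1 + o(1)) * k / e, i.e. linear growth in k
ratios = [c(k) / (k / math.e) for k in (50, 200, 800)]
```

The ratios decrease towards ${1}$ (slowly, with a ${O(\log k / k)}$ correction in the exponent).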

Exercise 12 You might have noticed that the statement of Theorem 7 (the method of stationary phase) omits the case ${k=1}$. Indeed, in this case there can be no expansion in terms of powers of ${\lambda^{-1}}$, since the integral is ${O_N(\lambda^{-N})}$ for every ${N>0}$ (by non-stationary phase principle). However, in the case where the support of ${\psi}$ is not strictly contained in ${(a,b)}$ (and thus non-stationary phase does not apply), a similar statement holds. Prove an asymptotic expansion of the form

$\displaystyle \lambda^{-1}\sum_{j \in \mathbb{N}} \Big(a_j e^{i\lambda \phi(a)} + b_j e^{i\lambda \phi(b)} \Big) \lambda^{-j}$

for ${I_\psi(\lambda)}$, under the hypotheses that ${\phi' \neq 0}$ in ${(a,b)}$ and that ${\phi, \psi \in C^\infty}$ (in particular, in general ${\psi(a), \psi(b) \neq 0}$). Calculate explicitly the first few coefficients ${a_j,b_j}$.
[hint: just use integration by parts repeatedly.]

Exercise 13 (${\bigstar \bigstar}$) In this exercise you will prove the method of stationary phase as stated in Theorem 7. The proof will proceed in stages, and we can assume ${t_0 = 0}$ and ${\phi(0)=0}$ for simplicity.

1. Begin with the case ${k=2}$, which will already contain all the main ideas. Take ${\phi(t) = t^2}$ and ${\psi(t) = t^m e^{-t^2}}$ for some ${m>0}$ (which is not compactly supported, but still very concentrated around the origin); let ${a=-\infty}$ and ${b=+\infty}$. Show, using standard complex integration techniques, that ${I_\psi(\lambda) = \int_{-\infty}^{+\infty} e^{i \lambda t^2 } t^m e^{-t^2} dt}$ equals (fixing the principal branch of ${z^{-(m+1)/2}}$)

$\displaystyle (1 - i \lambda)^{-(m+1)/2}\int_{-\infty}^{+\infty} t^m e^{-t^2} dt,$

and argue by a power series expansion in ${\lambda^{-1}}$ that therefore it satisfies (4).

2. Now we keep the quadratic phase ${\phi(t) = t^2}$ but replace the gaussian factor ${e^{-t^2}}$ directly with a compactly supported function ${\eta}$. Thus, let ${\eta \in C^\infty_c}$ and let ${\psi(t) = t^m \eta(t)}$. Show, by splitting the region of integration smoothly into one close to ${0}$ and one away from ${0}$ (according to a well chosen parameter), that in this case

$\displaystyle |I_{t^m \eta}(\lambda)|\lesssim_{\eta,m} |\lambda|^{-(m+1)/2}.$

More precisely, let ${\varphi}$ denote a smooth bump function supported in ${[-1,1]}$ and decompose ${1 = \varphi(t/\delta) + (1 - \varphi(t/\delta))}$. For the region close to ${0}$ you can take the absolute value inside the integral and give a trivial estimate; for the region away from ${0}$, apply the integration by parts argument as in the proof of Proposition 3 ${N}$ times, where ${2N}$ is bigger than ${m+1}$, then take the absolute value inside and estimate the size of this integral. Finally, optimise in the parameter ${\delta}$.

3. Show, using the same arguments as above, that if ${g \in \mathcal{S}(\mathbb{R})}$ (that is, ${g}$ is a Schwartz function) is such that ${g\equiv 0}$ in a neighbourhood of ${0}$, then for any ${N>0}$ one has ${|I_g(\lambda)|=O_{N,g}(|\lambda|^{-N})}$. (The phase is still ${\phi(t)=t^2}$.)
4. Now, still in the case ${\phi(t)=t^2}$, we tackle the general ${\psi}$ case. Write

$\displaystyle \int e^{i\lambda t^2} \psi(t) dt = \int e^{i\lambda t^2} e^{-t^2} \big(e^{t^2} \psi(t)\big) \eta(t) dt,$

where ${\eta \in C^\infty_c}$ is such that ${\eta(t) = 1}$ for ${t}$ in the support of ${\psi}$. Perform a Taylor expansion of ${e^{t^2} \psi(t)}$ to degree ${n}$ and substitute into ${I_\psi(\lambda)}$. Then use i)-ii)-iii) to argue that (4) holds for each term you get out of this procedure.

5. Now consider the general ${k=2}$ case. The idea is to deform the phase to turn it into ${t^2}$ again. Find a diffeomorphism from a sufficiently small neighbourhood ${U}$ of ${0}$ that, by a change of variables, achieves this, and then conclude that (4) holds by iv).
6. Generalise the above to ${k>2}$. The substitute for i) will be the identity

$\displaystyle \int_{0}^{\infty} e^{i \lambda t^k} e^{-t^k} t^m dt = C_{k,m} (1 - i\lambda)^{-(m+1)/k},$

which is similarly proven by standard complex integration techniques.
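The identity in vi) is also easy to test numerically: the substitution ${s = u^k}$ (after rotating the contour) shows ${C_{k,m} = \Gamma\big(\frac{m+1}{k}\big)/k}$. A sketch for ${k=3}$, ${m=1}$:

```python
import math
import numpy as np

def lhs(lam, k, m, T=5.0, n=500_000):
    r"""Trapezoid-rule approximation of \int_0^infty e^{i lam t^k} e^{-t^k} t^m dt
    (truncated at T, by which point e^{-t^k} has long since died off)."""
    t = np.linspace(0.0, T, n + 1)
    f = np.exp(1j * lam * t**k - t**k) * t**m
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

k, m, lam = 3, 1, 2.0
# C_{k,m} = Gamma((m+1)/k) / k; the complex power uses the principal branch
rhs = (math.gamma((m + 1) / k) / k) * (1 - 1j * lam) ** (-(m + 1) / k)
gap = abs(lhs(lam, k, m) - rhs)
```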

Exercise 14 In these notes we have only considered amplitudes ${\psi}$ that are everywhere ${C^\infty}$. In applications however (see viii) below) more singular cases arise, and it is instructive to see that some of those cases can still be treated using the techniques here developed. This exercise will moreover show you that sometimes some cancellation can also arise from the amplitude.
Let ${P(t) = \sum_{j=1}^{d} c_j t^j}$ be a polynomial of degree ${d}$. You will show that for arbitrary ${0 < \varepsilon < R}$ one has

$\displaystyle \Big|\int_{\varepsilon < |t| < R} e^{i P(t)} \frac{dt}{t} \Big| \leq C_d$

with a constant that depends only on ${d}$ (and not on the coefficients of ${P}$!). Notice that the amplitude ${1/t}$ is only ${C^\infty}$ away from ${0}$, and has a bad singularity there. In the following, it will be really important that integration is over a set symmetric with respect to the origin such as ${\{t \in \mathbb{R} \; : \; \varepsilon < |t| < R\}= (-R,-\varepsilon)\cup(\varepsilon,R) }$ above.

1. Show by a change of variable that we can assume that ${c_d = 1}$ (at the price of changing ${\varepsilon, R}$; but these are arbitrary).
2. Show, using Corollary 6, that the portion of the integral over ${1 < |t| < R}$ is bounded by ${O_d(1)}$, so that it suffices to concentrate on the remaining part ${\varepsilon < |t| < 1}$ (notice that for ${|t| > 1}$ we cannot estimate the integral by taking absolute values, since the resulting quantity grows like ${\log R}$; we really need oscillatory integral estimates!).
3. Let ${P_k(t) := \sum_{j=1}^{k} c_j t^j}$ (thus ${P_d = P}$). Using the trivial estimate ${|e^{i\theta} - 1| \lesssim |\theta|}$ and taking absolute values, show that

$\displaystyle \Big|\int_{\varepsilon < |t| < 1} e^{i P(t)} \frac{dt}{t} - \int_{\varepsilon < |t| < 1} e^{i P_{d-1}(t)} \frac{dt}{t} \Big| \lesssim_d 1.$

4. Use steps i)-iii) repeatedly to reduce to the case of the integral ${\int_{\varepsilon < |t| < 1} e^{i c_1 t} \frac{dt}{t}}$.
5. It remains to show that the above integral is ${O_d(1)}$. Show that $\int_{a < |t| < b} \frac{dt}{t} = 0$ (this is a form of cancellation!). Therefore, you can arbitrarily subtract ${1}$ from the exponential term ${e^{i c_1 t}}$ above…
6. Split the integral as ${\int_{\varepsilon < |t| < 1/c_1} + \int_{1/c_1 < |t| < 1}}$ and apply again Corollary 6 to the second integral to show that it is ${O_d(1)}$.
7. It remains to estimate ${\int_{\varepsilon < |t| < 1/c_1} e^{i c_1 t} \frac{dt}{t}}$. Using v), show that it equals ${\int_{\varepsilon < |t| < 1/c_1} (e^{i c_1 t} - 1) \frac{dt}{t}}$ and estimate the latter by taking the absolute value and using the trivial estimate for ${|e^{i\theta}-1|}$. This concludes the proof.
8. Consider the operator

$\displaystyle f \mapsto \lim_{\substack{\varepsilon \rightarrow 0, \\ R \rightarrow \infty}} \int_{\varepsilon < |t| < R} f(x -t, y-t^2) \frac{dt}{t};$

this is known as the Hilbert transform along a parabola. Show that this operator is ${L^2(\mathbb{R}^2) \rightarrow L^2(\mathbb{R}^2)}$ bounded. [hint: use Plancherel and take a Fourier transform of the operator above; after some calculations, you should recognise a certain object… ]
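Steps v)-vii) can be illustrated numerically in the linear-phase case ${P(t) = c_1 t}$: by the odd symmetry of ${1/t}$ over the symmetric domain, the cosine part cancels exactly and the integral reduces to a sine integral, which is bounded uniformly in ${c_1, \varepsilon, R}$. A sketch using `scipy`'s ${\mathrm{Si}}$:

```python
import numpy as np
from scipy.special import sici

def osc_int(c, eps, R):
    r"""\int_{eps < |t| < R} e^{i c t} dt / t: the even (cosine) part cancels
    by symmetry, leaving 2i \int_eps^R sin(ct)/t dt = 2i (Si(cR) - Si(c*eps))."""
    return 2j * (sici(c * R)[0] - sici(c * eps)[0])

# uniformly bounded, no matter how extreme c, eps and R are
vals = [abs(osc_int(c, 1e-8, 1e8)) for c in (1e-3, 1.0, 1e6)]
```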

Exercise 15 (${\bigstar}$) The discrete counterparts to oscillatory integrals, which we haven’t discussed in these notes, are exponential sums, that is sums of the form

$\displaystyle \sum_{n=1}^{N} e^{2\pi i f(n)}$

(sometimes they are normalised by dividing by ${N}$). Such sums are ubiquitous in number theory, where they are used to encode all sorts of information, from the number of solutions to diophantine equations to the behaviour of the Riemann Zeta function.
It should not be too surprising that some of the techniques developed for oscillatory integrals transfer over to the treatment of exponential sums. Indeed, integration by parts has a very direct discrete counterpart in summation by parts. In this exercise you will prove some estimates for exponential sums in the spirit of the ones studied in these lectures. We will use a notational convention from number theory, namely we introduce the function ${e(x):= e^{2\pi i x}}$ so that ${e^{2\pi i f(n)} = e(f(n))}$.
You will prove an analogue of Proposition 4:

1. Show the trigonometric identity

$\displaystyle \frac{1}{1 - e^{i\theta}} = \frac{1}{2} + \frac{i}{2}\cot(\theta/2).$

2. Let ${g(n) = \frac{e(f(n))}{e(f(n)) - e(f(n-1))}}$; show by summation by parts that

$\displaystyle \sum_{n=1}^{N} e(f(n)) = \big[e(f(N))g(N) - e(f(0))g(1)\big] - \sum_{n=1}^{N-1}e(f(n))(g(n+1) - g(n)).$

3. Using i), show that

$\displaystyle g(n+1) - g(n) = (i/2)\big[\cot(\pi(f(n) - f(n-1))) - \cot(\pi(f(n+1) - f(n)))\big].$

4. So far we just did a bunch of algebra; now we start doing analysis. Assume that ${f'}$ is monotonic in the interval ${[0,N]}$ and that for some ${1/2 > \delta > 0}$ we have ${\mathrm{dist}(f'(x),\mathbb{Z}) > \delta}$ for all ${x\in [0,N]}$ (that is, the derivative always stays within two consecutive integers, never getting any closer than ${\delta}$ to any of them). Show that the sequence ${n \mapsto f(n) - f(n-1)}$ is also monotonic and that for some fixed integer ${k}$ we have for all ${n\in \{1,\ldots,N\}}$

$\displaystyle k + \delta \leq f(n) - f(n-1) \leq k+1 - \delta.$

5. Since ${\cot}$ is also monotonic in intervals of the form ${[\pi k, \pi(k+1)]}$, show that under the hypotheses in iv) on ${f}$ we have

$\displaystyle \Big|\sum_{n=1}^{N-1}e(f(n))(g(n+1) - g(n))\Big| \leq |g(N) - g(1)|,$

and deduce that the above is bounded by ${\lesssim \delta^{-1}}$. [hint: monotonicity allows you to take the absolute values from inside of a sum to outside of it; also, ${\cot(x) \leq 1/x}$ for ${x \in (0,\pi/2]}$.]

6. Put everything together (check those remaining terms) and conclude that you have shown: if ${f'}$ is monotonic on interval ${I}$ and ${\mathrm{dist}(f'(x),\mathbb{Z}) > \delta}$ for all ${x\in I}$, then

$\displaystyle \Big|\sum_{n \in I \cap \mathbb{Z}} e(f(n))\Big| \lesssim \delta^{-1}. \ \ \ \ \ (7)$

Compare this with Proposition 4 and appreciate the similarities between the respective proofs.
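For a sanity check of (7), take the linear phase ${f(n) = \alpha n}$: then ${f' \equiv \alpha}$ is (trivially) monotonic, ${\delta = \mathrm{dist}(\alpha, \mathbb{Z})}$, and here the geometric series gives the ${\lesssim \delta^{-1}}$ bound exactly, uniformly in ${N}$:

```python
import numpy as np

def exp_sum(alpha, N):
    """|sum_{n=1}^N e(alpha n)| for the linear phase f(n) = alpha n."""
    n = np.arange(1, N + 1)
    return abs(np.exp(2j * np.pi * alpha * n).sum())

delta = 0.3  # = dist(f', Z), since f'(x) = alpha = 0.3
sums = [exp_sum(0.3, N) for N in (10, 10_000, 1_000_000)]
```

(The exact geometric bound is ${1/\sin(\pi\delta) \leq (2\delta)^{-1}}$; the point of (7) is that curved phases obey the same kind of bound.)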

Now we move on to the 2nd derivative. You will prove an analogue of Van der Corput’s lemma, that is: if ${f\in C^2}$ is such that for all ${x \in I}$ we have

$\displaystyle \lambda < |f''(x)| < C \lambda$

(for some ${C>1}$), then

$\displaystyle \Big|\sum_{n \in I \cap \mathbb{Z}} e(f(n))\Big|\lesssim C |I| \lambda^{1/2} + \lambda^{-1/2}. \ \ \ \ \ (8)$

The strategy will be to partition ${I}$ into intervals where (7) above applies, and where it doesn’t to use trivial estimates, and finally optimize between the two.

1. Assume for convenience that ${f''}$ is positive and let ${I = [a,b]}$. Show that the range of ${f'}$ is ${J:= [f'(a), f'(b)]}$ and show that ${|J|\leq C\lambda |I|}$.
2. Partition ${J}$ into intervals close to integers and intervals that avoid integers. More precisely, let ${1/2> \delta>0}$ be a parameter to be chosen and define

\displaystyle \begin{aligned} A_k :=& [k-\delta, k+\delta] \cap J, \\ B_k :=& [k+\delta, k+1-\delta] \cap J. \end{aligned}

Show that ${A_k}$ and ${B_k}$ are not empty for at most ${O(C\lambda|I| + 1)}$ values of ${k}$ and that they partition ${J}$.

3. Since ${f'}$ is monotonic, we can partition ${I}$ into a disjoint union of intervals ${(f')^{-1}(A_k)}$ and ${(f')^{-1}(B_k)}$. Show that on the first ones we have trivially

$\displaystyle \Big|\sum_{n \in (f')^{-1}(A_k) \cap \mathbb{Z}} e(f(n))\Big|\lesssim \delta/ \lambda$

and on the second ones by (7) we have

$\displaystyle \Big|\sum_{n \in (f')^{-1}(B_k) \cap \mathbb{Z}} e(f(n))\Big|\lesssim \delta^{-1}.$

4. Now put everything together to show that we have

$\displaystyle \Big|\sum_{n \in I \cap \mathbb{Z}} e(f(n))\Big|\lesssim (C\lambda |I| + 1)\Big(\frac{\delta}{\lambda} + \frac{1}{\delta}\Big),$

and conclude estimate (8) by choosing an optimal ${\delta}$ (i.e. minimise ${\delta/\lambda + 1/\delta}$). Again, you should appreciate the similarities between this proof and the proof of Van der Corput’s lemma we have given in the notes.

5. An estimate is only interesting if it beats the trivial one, which in this case is ${|\sum_{n \in I\cap\mathbb{Z}} e(f(n))|\leq |I|}$ (by triangle inequality; here ${|I|> 1}$). For which range of values of ${\lambda}$ is estimate (8) interesting?
6. Consider truncations of the Riemann Zeta function along the critical line, that is sums of the form ${\sum_{n = A}^{B} n^{-1/2 - it}}$. Argue that in order to bound such sums, by summation by parts one can reduce to study sums of the form ${\sum_{n = A'}^{B'} n^{-it}}$ instead. As ${n^{-it} = e(-(t/2\pi) \log n)}$, verify that the estimates (7), (8) that you proved above are well suited to treat these last oscillatory sums. This is how bounds on the growth of ${\big|\zeta\big(\frac{1}{2} + it \big)\big|}$ as ${t \rightarrow \infty}$ are usually proven.
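As a final numerical illustration of (8), take the quadratic phase ${f(n) = \beta n^2}$, for which ${f'' \equiv 2\beta}$ (so ${C}$ can be taken close to ${1}$): the resulting Gauss-type sum is far below the trivial bound ${|I|}$ and consistent in size with (8). A sketch:

```python
import numpy as np

def quad_exp_sum(beta, N):
    """|sum_{n=1}^N e(beta n^2)|: phase f(n) = beta n^2, so f'' = 2 beta."""
    n = np.arange(1, N + 1, dtype=np.float64)
    return abs(np.exp(2j * np.pi * beta * n**2).sum())

N, beta = 10_000, 1e-3
lam = 2 * beta                      # f'' is exactly lam here, so C ~ 1 in (8)
S = quad_exp_sum(beta, N)
vdc_shape = N * np.sqrt(lam) + 1 / np.sqrt(lam)  # the right-hand side of (8)
```

Here (8) predicts ${\lesssim N\sqrt{2\beta} \approx 450}$ against the trivial bound ${N = 10^4}$, a genuine square-root gain.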

Hints

Hints for Exercise 1: Take the estimate

$\displaystyle |J(t)| \lesssim (1+|t|)^{-(d-1)/2} \ \ \ \ \ (10)$

for granted, and follow these steps:

1. Split the function ${f \in L^p(\mathbb{R}^d)}$ as ${f(x) = f_1(x) + f_2(x) := f(x) \mathbf{1}_{B(0,1)}(x) + f(x) (1 - \mathbf{1}_{B(0,1)}(x))}$, where ${\mathbf{1}_{B(0,1)}}$ denotes the characteristic function of the ball of radius ${1}$ centered at ${0}$. Argue that ${f_1}$ is in ${L^1(\mathbb{R}^d)}$ and therefore the contribution to ${\widehat{f}}$ given by ${\widehat{f_1}}$ is continuous at ${\xi \neq 0}$.
2. Let ${\rho = |\xi|}$ and write the radial Fourier transform ${g(\rho)}$ of ${f_2}$ according to equation (1); then in the integral use a change of variable and argue that it will suffice to show the continuity of the function ${\Phi(\rho):=\rho^d g(\rho)}$.
3. It suffices to show continuity of ${\Phi}$ from above and from below, so take ${\rho' > \rho}$ and show that you can write (abusing notation a little, we write ${f_2}$ for the radial part of ${f_2(x)}$ as well)

\displaystyle \begin{aligned} \Phi(\rho) - \Phi(\rho') = & \int_{\rho}^{\rho'} f_2\Big(\frac{s}{\rho}\Big) s^{d-1} J(s) ds \ \ \ \ \ \ \ \ \ \ \ \ \ (I)\\ & + \int_{\rho}^{+\infty} \Big(f_2\Big(\frac{s}{\rho}\Big) - f_2\Big(\frac{s}{\rho'}\Big)\Big) s^{d-1} J(s) ds. \ \ \ \ \ (II) \end{aligned}

4. Fixing ${\rho}$, use Hölder’s inequality and (10) (and the hypothesis on ${p}$) to show that the integral ${\int_{\rho}^{+\infty} |f_2\Big(\frac{s}{\rho}\Big)| s^{d-1} |J(s)| ds}$ is finite; hence deduce by standard arguments that term (I) tends to ${0}$ as ${\rho' \rightarrow \rho^{+}}$.
5. Using again Hölder’s inequality and (10), argue that term (II) also tends to ${0}$ as ${\rho' \rightarrow \rho^{+}}$ (you might want to do a change of variables ${s = \rho t}$ and then argue by the continuity of the ${L^p}$ norm with respect to dilations). Putting everything together, the proof is concluded.

Hints for Exercise 2: Prove upper- and lower-bounds for ${\sum_{n=1}^{N} r_2(n)}$; tile the plane into squares with centers in the ${\mathbb{Z}^2}$ lattice.

Hints for Exercise 3: Try with ${D = a(x) \frac{d}{dx}}$.

Hints for Exercise 4: Let ${a=0, b=1}$ for simplicity. Construct phases ${\phi_\lambda}$ with oscillatory first derivatives where the oscillations happen at scale ${\lambda^{-1}}$. A good choice would be ${\phi_\lambda(t) = 2\pi t + \lambda^{-1} \theta(\lambda t)}$ with ${\theta}$ a smooth periodic function of period ${1}$. Show that

$\displaystyle \lambda I(\lambda) = \lambda \int_{0}^{1} e^{i (2\pi t + \theta(t))}dt + O(1),$

and furthermore show that you can choose ${\theta}$ such that the integral above is ${\neq 0}$ and such that ${|{\phi_\lambda}'| > 1}$.

Hints for Exercise 5: Just adapt the proof of Proposition 4.

Hints for Exercise 6: Do a change of variable ${t = \theta s}$ in ${I(\lambda)}$, with ${\theta}$ a parameter. If the inequality is to be invariant with respect to this transformation, a certain condition involving ${\alpha}$ and ${k}$ will have to be met.

Hints for Exercise 7: Let ${J_d}$ be the function

$\displaystyle J_d(t) := \int_{0}^{\pi} e^{-2\pi i t \cos\theta} (\sin\theta)^{d-2} d\theta$

and show by integration by parts that we have the recurrence relation

$\displaystyle (2\pi i t)^2 J_d(t) = (d-3)(d-5) J_{d-4}(t) - (d-3)(d-4)J_{d-2}(t);$

iterate this to reduce to the case ${d=3,2}$. For ${J_3}$, compute the integral directly and get a bound of ${O(|t|^{-1})}$; for ${J_2}$, show by Van der Corput’s lemma that we have a bound of ${O(|t|^{-1/2})}$ (besides the trivial bound of ${O(1)}$). Notice that you will need to split the integral into 3 parts (say, ${\int_{0}^{\pi/4} + \int_{\pi/4}^{3\pi/4} + \int_{3\pi/4}^{\pi}}$) in order to apply Van der Corput, since there is not a uniform lower bound on ${\phi'}$ or ${\phi''}$ over the entire interval ${[0,\pi]}$.

Hints for Exercise 8: Find a rescaling of the ${\xi}$ variable that allows you to rewrite the phase as ${|x|^{3/2} \phi(\xi) = |x|^{3/2}(\mathrm{sgn}(x)\xi + \xi^3)}$, which is of the form we studied.

1. When ${x>1}$, ${\phi' \neq 0}$. Show that this implies ${|\int \exp(i|x|^{3/2} \phi(\xi)) \varphi(\xi) d\xi|\lesssim_N \min\{1, |x|^{- 3N/2}\}}$ by non-stationary phase. For the ${\psi_j(\xi) }$ terms instead, apply the proof of the non-stationary phase principle to each term, with ${D}$ the differential operator such that ${D(e^{i \phi(\xi)}) = e^{i\phi(\xi)}}$, to show that it contributes

$\displaystyle |x|^{-3N/2} \int |(D^{\intercal})^N \psi_j(\xi)| d\xi.$

To show that this is summable in ${j}$, show that one will have

$\displaystyle (D^{\intercal})^N \psi_j(\xi) = \sum_{k=0}^{N} c_{\phi,k,N}(\xi) 2^{-jk} \psi^{(k)}(2^{-j}\xi)$

for some (integrable) functions ${c_{\phi,k,N}}$ depending on ${\phi'}$ that one can compute explicitly, and conclude using the fact that ${\sum_{j \in \mathbb{N}} 2^{-jk} |\psi^{(k)}(2^{-j}\xi)| \lesssim_k 1}$.
If you cannot convince yourself that ${(D^{\intercal})^N \psi_j}$ can be put in that form, try this other way: show by induction in ${N}$ that every term that appears in the expansion of ${(D^{\intercal})^N \psi_j}$ is of the form ${(\psi_j)^{(k)}(\xi) \xi^{\ell} (3\xi^2+1)^{-m}}$ (up to some constant) and that moreover ${k - \ell + 2m > 1}$. Then show that ${\int |(\psi_j)^{(k)}(\xi) | |\xi|^{\ell} (3\xi^2+1)^{-m} d\xi \sim 2^j 2^{-(k - \ell + 2m)j}}$, which is summable in ${j}$.

2. When ${-1 < x < 1}$, ${\phi}$ might have critical points (if ${x<0}$), but only in the support of ${\varphi}$. Bound the ${\varphi}$ contribution by ${O(1)}$ trivially, and the contribution of the ${\psi_j}$ by the same argument above (since the non-stationary phase principle applies to them again).
3. When ${x < -1}$, ${\phi}$ certainly has two critical points (${\pm 1/\sqrt{3}}$) inside the support of ${\varphi}$. For the ${\psi_j}$ terms, argue as above that they contribute ${O_N(|x|^{-3N/2})}$ and can thus be ignored. For ${\varphi}$, isolate the critical points by splitting ${\varphi (\xi) = \varphi(100 \xi) + (\varphi(\xi) - \varphi(100 \xi))}$ and similarly split the integral. Notice the support of ${\varphi(100 \xi)}$ is ${[-1/50,1/50]}$ and thus does not contain any critical points. For the contribution ${\int \exp(i|x|^{3/2} \phi(\xi)) (\varphi(\xi) - \varphi(100 \xi)) d\xi}$, notice that ${|\phi''|\gtrsim 1}$ and use Corollary 6 with ${k=2}$.

Hints for Exercise 9: Here one needs to be a little clever: the point is to notice that the phase ${2\pi r \cos \theta}$ has first derivative non-zero around the problematic point ${\theta = \pi / 2}$. If ${\varphi}$ denotes a smooth bump function supported in ${[-\pi/4,\pi/4]}$, split ${J(r)}$ as ${\mathscr{J}_1(r) + \mathscr{J}_2(r)}$ where

$\displaystyle \mathscr{J}_1(r) := \int e^{-2\pi i r \cos\theta} \varphi(\theta - \pi/2) d\theta.$

Argue by the non-stationary phase principle that ${|\mathscr{J}_1(r)|\lesssim_N (1 + r)^{-N}}$ and that therefore ${\int_{0}^{R} r \mathscr{J}_1(r) dr = O(1)}$. For ${\mathscr{J}_2}$, use Fubini and integration by parts to show that

\displaystyle \begin{aligned} \int_{0}^{R} r \mathscr{J}_2(r) dr = -\int_{0}^{\pi} \Big(\frac{R e^{-2\pi i R \cos\theta}}{2 \pi i \cos\theta} + & \frac{e^{-2\pi i R \cos\theta}}{(2 \pi i \cos\theta)^2}-\frac{1}{(2 \pi i \cos\theta)^2}\Big) \\ & \times (1 - \varphi(\theta - \pi/2)) d\theta. \end{aligned}

Where ${1 - \varphi(\theta - \pi/2)}$ does not vanish, the second derivative of the phase ${\cos\theta}$ is bounded from below in absolute value; conclude therefore using Corollary 6 term by term.

Hints for Exercise 10:

1. You need to rewrite the measure of the sublevel set as a nice oscillatory integral. Show that ${|\{t \in (a,b) \;: \; |\phi(t)|<\lambda\}| = \int_{a}^{b} \mathbf{1}(|\phi(t)|/\lambda < 1) dt}$. Choose some smooth non-negative bump function ${\varphi}$ such that ${\mathbf{1}(|\phi(t)|/\lambda < 1) \leq \varphi(\phi(t)/\lambda)}$ for every ${t}$ and consequently replace the latter in the integral. Apply Fourier inversion to the term ${\varphi(\phi(t)/\lambda)}$, followed by Fubini, and apply Van der Corput's lemma to the resulting oscillatory integral.
2. Splitting ${(a,b)}$ as suggested, (6) takes care of ${|\int_{\{|\phi'|<\theta\}} \exp(i \lambda \phi(t))dt|}$ by taking the absolute value inside and ${\{|\phi'|\geq \theta\}}$ consists of a bounded number of intervals (how many? See Remark 2), for each of which one can apply Proposition 4 to the associated oscillatory integral. Finally, choose ${\theta}$ in order to optimize the resulting bound (that is, make it as small as possible).
3. The base case ${k=1}$ is easy. For the general ${k}$ case, use the same strategy as in ii): split ${(a,b)}$ into those points where ${|\phi'(t)|<\theta}$, to which the inductive hypothesis applies, and those where ${|\phi'(t)|\geq \theta}$, which consist of boundedly many intervals on each of which you can apply the base case.
4. What does it mean for ${x \in \mathbb{Z}_p}$ to have ${|x|_p \leq p^{-1}}$? And what does it mean for it to have ${|x|_p \leq p^{-2}}$? etc.

Hints for Exercise 11:

1. Squeezing ${E}$ into an interval can only make the left hand side of the inequality smaller, so it is fine to do. Once ${E}$ is an interval, just take uniformly distributed points inside it as your ${x_j}$'s and perform the simple calculation that results.
2. By Lagrange interpolation, you can find a polynomial ${P(X)}$ of degree ${k}$ such that ${P(x_j) = f(x_j)}$ for all ${j \in \{0,\ldots,k\}}$. Argue that ${P^{(k)} - f^{(k)}}$ must have a zero inside ${(x_0,x_k)}$. Then notice that ${P^{(k)}(X)}$ must be a constant, precisely ${k!}$ times the coefficient of ${X^k}$. Calculate that coefficient explicitly (the constraints ${P(x_j) = f(x_j)}$ give you a linear system of equations in the coefficients of ${P}$, with a nice Vandermonde matrix appearing; solving the system with Cramer’s rule is a breeze).
3. Part i) gives you the points ${x_j}$ and for each ${j}$ reversing the inequality one has ${\Big(\prod_{i \;:\; i\neq j} |x_i - x_j|\Big)^{-1} \leq (2e)^k |E|^{-k}}$. Then use ii) together with this inequality, the fact that ${|\phi^{(k)}|>1}$ and that ${|\phi(x_j)|<\lambda}$ for each ${j}$.
4. Remember Remark 2: how many intervals is ${\{|\phi'|>\theta\}}$ made of at most?
5. Use ${(a,b)=(0,1)}$ and ${\phi(t)= t^k}$. To do this you will need complex integration. Use a regularisation: evaluate instead ${\int_{0}^{1} e^{i t^k} e^{-\epsilon t^k} dt}$ and take ${\epsilon \rightarrow 0}$ at the end. To evaluate the integral, see it as the integral of the holomorphic function ${e^{z^k}}$ along the path ${\Gamma_0 := \{(i-\epsilon)^{1/k}t \text{ s.t. } t \in [0,1]\}}$ (fixing a branch of the ${k}$-th root). Complete the path ${\Gamma_0}$ to a well-chosen closed path ${\Gamma}$ that includes part of the real line and conclude using Cauchy’s integral theorem and some simple estimates on the decay of ${|e^{z^k}|}$ along certain directions of the complex plane.
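A quick numerical check of the sharpness claim in part v) (a sketch of mine, in Python with numpy; names and tolerances are not from the notes). Rescaling gives ${\int_0^1 e^{i\lambda t^k} dt = \lambda^{-1/k}\int_0^{\lambda^{1/k}} e^{is^k} ds}$, and the contour-shift argument shows the latter integral converges to ${\Gamma(1+1/k)\, e^{i\pi/(2k)}}$; for ${k=2}$ this limit is ${\frac{\sqrt{\pi}}{2} e^{i\pi/4}}$, which we can test directly:

```python
import numpy as np

def osc_integral(lam, k, n=2_000_000):
    # midpoint-rule approximation of I(lam) = \int_0^1 e^{i lam t^k} dt
    dt = 1.0 / n
    t = (np.arange(n) + 0.5) * dt
    return np.sum(np.exp(1j * lam * t**k)) * dt

# van der Corput predicts |I(lam)| ~ lam^{-1/k}; the contour argument of
# part v) pins down the constant: I(lam) * lam^{1/k} -> Gamma(1+1/k) e^{i pi/(2k)}
lam = 1.0e4
val = osc_integral(lam, 2) * lam**0.5
limit = (np.sqrt(np.pi) / 2) * np.exp(1j * np.pi / 4)  # Gamma(3/2) e^{i pi/4}
assert abs(val - limit) < 0.05  # tail of the Fresnel integral is O(lam^{-1/2})
```
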

Hints for Exercise 12: The first step is as in the proof of Proposition 4, and one has

$\displaystyle \int_{a}^{b} e^{i\lambda \phi(t)} \psi(t) dt = \Big(\frac{e^{i\lambda \phi(b)} \psi(b)}{i \lambda\phi'(b)} - \frac{e^{i\lambda \phi(a)} \psi(a)}{i \lambda\phi'(a)}\Big) - \int_{a}^{b}e^{i\lambda \phi(t)} \Big(\frac{\psi}{i \lambda\phi'}\Big)'(t) dt.$

Iterating this on the integral on the right hand side shows that it is of size ${O(\lambda^{-2})}$, and therefore the above gives you ${a_0, b_0}$. Keep applying integration by parts as many times as necessary.
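The first step of this expansion can be checked numerically (a sketch of mine, in Python with numpy; the model phase and amplitude are my choices, not from the notes). With the non-stationary phase ${\phi(t) = t}$ and ${\psi(t) = e^{-t^2}}$ on ${(0,1)}$, the integral minus the boundary terms should be ${O(\lambda^{-2})}$:

```python
import numpy as np

a, b = 0.0, 1.0
psi = lambda t: np.exp(-t**2)  # a smooth amplitude (my choice)

def I(lam, n=1_000_000):
    # midpoint-rule approximation of \int_a^b e^{i lam t} psi(t) dt
    dt = (b - a) / n
    t = a + (np.arange(n) + 0.5) * dt
    return np.sum(np.exp(1j * lam * t) * psi(t)) * dt

def boundary(lam):
    # the boundary terms from one integration by parts (phi(t) = t, phi' = 1)
    return (np.exp(1j * lam * b) * psi(b) - np.exp(1j * lam * a) * psi(a)) / (1j * lam)

# after one integration by parts the remainder is O(lam^{-2})
for lam in (50.0, 200.0, 800.0):
    assert abs(I(lam) - boundary(lam)) < 5.0 / lam**2
```
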

Hints for Exercise 13:

1. You should see the integral as the complex integral of (a multiple of) the holomorphic function ${z^m e^{-z^2}}$ along the path ${\Gamma_0 := (1 - i\lambda)^{1/2}\mathbb{R} \subset \mathbb{C}}$. Truncate to ${\Gamma^{+}_0 := \{(1 - i\lambda)^{1/2} t \; : \; t \in [0,R]\}}$ and complete this path to a closed path ${\Gamma}$ that includes the real interval ${[0,R]}$. Use Cauchy’s integral theorem and simple estimates on ${|z^m e^{-z^2}|}$ to conclude that as ${R \rightarrow \infty}$ this (plus an equal contribution from a similarly defined ${\Gamma^{-}_0}$) gives precisely what is claimed. The power series expansion of ${(1 - i\lambda)^{-(m+1)/2}}$ is standard: just write it as ${\lambda^{-(m+1)/2} (\lambda^{-1} - i)^{-(m+1)/2}}$ and expand the function ${(z - i)^{-(m+1)/2}}$ in the ${z}$ variable.
2. It is straightforward to show that ${\int |t|^m |\eta(t)|\varphi(t/\delta) dt \lesssim_{m,\eta} \delta^{m+1}}$. The other term looks a bit nasty, but luckily one does not have to make all the computations implied. Let ${\eta(t) (1 - \varphi(t/\delta)) =: \omega_\delta(t)}$ for convenience. Show that the support of ${\omega_\delta(t)}$ and all its derivatives is contained in ${\delta \lesssim |t| \lesssim 1}$. The integration by parts argument in the proof of Proposition 3 reduces matters to estimating ${\lambda^{-N} \int |(D^{\intercal})^N (t^m \omega_\delta (t))|dt}$ with ${Df(t) := (2 i t)^{-1} f'(t)}$. ${(D^{\intercal})^N}$ is too horrible to evaluate precisely (this is due to the fact that ${D^{\intercal}}$ is not a derivation, that is, it does not satisfy Leibniz’s rule; see Exercise 3), but we do not need to. Show by induction on ${N}$ that every term in the expansion of ${(D^{\intercal})^N f}$ is of the form ${f^{(j)} t^{-(2N - j)}}$ for ${j \in \{0,\ldots, N\}}$. Apply this to the function ${t^m \omega_\delta (t)}$ together with the general Leibniz’s rule (the binomial expansion of ${(fg)^{(j)}}$) to conclude that ${(D^{\intercal})^N (t^m \omega_\delta (t))}$ is of the form

$\displaystyle \sum_{k=0}^{m} \sum_{j=k}^{N} c_{j,k,N} \omega_\delta^{(j-k)}(t) t^{-(2N -m) + (j-k)}$

for some constants ${c_{j,k,N}}$ whose precise value does not concern us. Show that, by the support of ${\omega_\delta^{(j-k)}}$ and the fact that ${2N > m+1}$, the integral of each term is at most ${\lambda^{-N} \delta^{-(2N-m)+1}}$ (careful with ${\omega_\delta^{(j-k)}}$, when expanded this contains factors of ${\delta^{-\ell}}$ for ${\ell \leq j-k}$). Therefore the quantity to be estimated is controlled by ${\delta^{m+1} + \lambda^{-N} \delta^{-(2N-m)+1}}$, and it remains for you to choose a good ${\delta}$ that minimises this.

3. Just repeat the second part of the above with ${\delta\sim 1}$.
4. Taylor expansion gives you ${e^{-t^2} \psi(t) = P(t) + t^{n+1} R_n(t)}$ for some polynomial ${P}$ of degree ${n}$ and a remainder ${R_n(t)}$ that is well-behaved. Write ${\int \exp(i\lambda t^2) \exp(-t^2) P(t) \eta(t) dt}$ as ${\int \exp(i\lambda t^2) \exp(-t^2) P(t)dt - \int \exp(i\lambda t^2) \exp(-t^2) P(t) (1 -\eta(t)) dt}$; use i) on each monomial of ${P(t)}$ to deal with the first integral, and use iii) on the second integral to show it is an error term (show that ${\exp(-t^2) P(t) (1 -\eta(t))}$ is a Schwartz function).
For ${\int \exp(i\lambda t^2) t^{n+1} (\exp(-t^2) R_n(t) \eta(t)) dt}$, use ii) to show that it is also an error term. Choosing ${n}$ large enough gives the result.

5. Simply show that under the given hypotheses ${s = |\phi(t)|^{1/2}}$ defines a diffeomorphism. Finally, if ${t = \theta(s)}$ denotes the inverse diffeomorphism, show that ${\psi(\theta(s))\theta'(s)}$ satisfies the same properties as ${\psi}$. Performing the change of variable ${t \mapsto s}$ concludes the argument.
6. Repeat the proof you gave for i) essentially word for word and adapt the rest.
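The closed form in part i) is easy to test numerically (a sketch of mine, in Python with numpy; names and tolerances are not from the notes). Since ${e^{i\lambda t^2} e^{-t^2} = e^{-(1-i\lambda)t^2}}$, analytic continuation of the Gaussian moment formula gives, for even ${m}$, ${\int_{\mathbb{R}} t^m e^{-(1-i\lambda)t^2} dt = \Gamma\big(\tfrac{m+1}{2}\big) (1-i\lambda)^{-(m+1)/2}}$ with the principal branch:

```python
import numpy as np
from math import gamma

def moment_integral(m, lam, T=8.0, n=400001):
    # midpoint-rule approximation of \int_R t^m e^{-(1 - i lam) t^2} dt,
    # truncated to |t| <= T (the Gaussian tail beyond is negligible)
    dt = 2 * T / n
    t = -T + (np.arange(n) + 0.5) * dt
    return np.sum(t**m * np.exp(-(1 - 1j * lam) * t**2)) * dt

# claimed closed form for even m: Gamma((m+1)/2) * (1 - i lam)^{-(m+1)/2},
# principal branch (Re(1 - i lam) > 0, so this is unambiguous)
for m in (0, 2):
    for lam in (0.5, 3.0):
        closed = gamma((m + 1) / 2) * (1 - 1j * lam) ** (-(m + 1) / 2)
        assert abs(moment_integral(m, lam) - closed) < 1e-6
```
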

Hints for Exercise 14:

1. The important thing is that ${dt/t}$ is invariant with respect to changes of variables of the form ${t \mapsto \alpha t}$.
2. Since ${P}$ can be taken monic of degree ${d}$ (using the normalisation from i)), we have ${P^{(d)}(t) = d!}$. With ${\psi(t) = 1/t}$, the constant in Corollary 6 is ${\sim_d 1/R = O_d(1)}$.
3. ${e^{i P(t)} - e^{i P_{d-1}(t)} = e^{i P_{d-1}(t)} \big(e^{i t^d} - 1\big)}$ and ${\int_{0}^{1}\frac{|t|^d}{|t|}dt = O_d(1)}$.
4. Steps i)-ii)-iii) together have the effect of removing the top monomial from ${P}$. Repeat until there is only the last monomial ${c_1 t}$.
5. Here ${\lambda^{-1} = |c_1|^{-1}}$, but with ${\psi(t) = 1/t}$ the constant in Corollary 6 evaluates to ${|c_1|}$.
6. We can add or subtract ${0}$ for free to any quantity; bear in mind this is only relevant when ${|c_1|>1}$.
7. You can consider ${\varepsilon, R}$ fixed for the sake of the exercise. Calculate the Fourier transform of the expression and show by Fubini that it equals

$\displaystyle \widehat{f}(\xi,\eta) \int_{\varepsilon < |t| < R} e^{- 2\pi i (\xi t + \eta t^2)} \frac{dt}{t}.$

The phase ${\xi t + \eta t^2}$ is a polynomial, so apply what you just proved to get an ${L^\infty}$ bound for it. Conclude using Plancherel.

Hints for Exercise 15:

1. Just a boring high-school trigonometry exercise.
2. Write ${\sum_n e(f(n))}$ as ${\sum_{n} g(n) (e(f(n)) - e(f(n-1)))}$ and check that every term that appears in this sum appears exactly once in the right hand side (with the correct sign) and vice versa.
3. Write ${f(n) - f(n-1) = \int_{n-1}^{n} f'(s)ds}$.
4. Apply the triangle inequality to the left hand side, thus erasing the ${e(f(n))}$ factors. Then appeal to monotonicity to take the absolute values outside of the summation, and observe that the resulting expression is a telescoping sum, so only the first and last terms survive. Finally, use the trigonometric identity in i) to evaluate ${g(N), g(0)}$, and combine with the periodicity of ${\cot}$ and iv).
5. The remaining terms are bounded by ${|g(N)| + |g(1)|}$ and are dealt with as in the final part of v).
a. The bound on the length of ${J}$ is an immediate consequence of the mean value theorem.
b. ${A_k \cup B_k}$ has length ${1}$ and they are all disjoint.
c. The first sum runs over those ${n}$ such that ${|f'(n)-k|<\delta}$; since ${f'' \sim \lambda}$, there are at most ${O(\delta / \lambda)}$ such values of ${n}$. For the second sum, verify that the hypotheses of (7) apply, as in vi).
d. Summing over ${k}$ gives the factor of ${C\lambda|J| + 1}$ by part b).
e. It must be the case that both ${C |I| \lambda^{1/2} < |I|}$ and ${\lambda^{-1/2} < |I|}$ hold.
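Two of the elementary facts underlying these hints can be tested numerically (a sketch of mine, in Python with numpy; the particular identity and bound below are my guesses for the "high-school trigonometry" of part i), not quoted from the notes): the cotangent identity ${\frac{1}{1 - e(\theta)} = \frac12 + \frac{i}{2}\cot(\pi\theta)}$, and the geometric-sum bound ${|\sum_{n=1}^{N} e(n\theta)| \leq \frac{1}{2\|\theta\|}}$ where ${\|\theta\|}$ is the distance of ${\theta}$ to the nearest integer:

```python
import numpy as np

e = lambda x: np.exp(2j * np.pi * x)

# cotangent identity: 1/(1 - e(theta)) = 1/2 + (i/2) cot(pi theta)
for theta in (0.1, 0.37, 0.9):
    lhs = 1.0 / (1.0 - e(theta))
    rhs = 0.5 + 0.5j / np.tan(np.pi * theta)
    assert abs(lhs - rhs) < 1e-12

# geometric-sum bound for a linear phase, uniform in N
def dist_to_int(x):
    return abs(x - round(x))

for theta in (0.1, 0.26, 0.49):
    for N in (10, 1000):
        s = abs(sum(e(n * theta) for n in range(1, N + 1)))
        assert s <= 1.0 / (2 * dist_to_int(theta)) + 1e-9
```
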

Footnotes:
1: It is conjectured that for any ${\epsilon >0}$ the error ${\mathcal{E}(N)}$ is bounded by ${C_{\epsilon} N^{1/4 + \epsilon}}$ for some constant ${C_{\epsilon}>0}$. The estimate is certainly false with ${\epsilon=0}$.
2: Formally, the operator such that ${\langle Df, g\rangle = \langle f, D^{\intercal}g \rangle}$ for all ${f,g \in C^\infty_c}$.
3: Recall that a function ${f : \mathbb{R}\rightarrow\mathbb{R}}$ is monotonic non-decreasing if ${x < y \Rightarrow f(x) \leq f(y)}$, and monotonic non-increasing if ${x < y \Rightarrow f(x) \geq f(y)}$. A function is monotonic if it is either monotonic non-decreasing or monotonic non-increasing.
4: As the subscript indicates, the constant might depend on ${\phi}$ in principle. However, this will not be the case for us, at least for now.