The Hausdorff-Young inequality is one of the most fundamental results about the mapping properties of the Fourier transform: it says that

$\|\widehat{f}\|_{L^{p'}(\mathbb{R}^d)} \leq \|f\|_{L^p(\mathbb{R}^d)}$

for all $f \in L^p(\mathbb{R}^d)$ with $1 \leq p \leq 2$, where $p'$ is the conjugate exponent of $p$, given by $\frac{1}{p} + \frac{1}{p'} = 1$. It is important because it tells us that the Fourier transform maps $L^p$ continuously into $L^{p'}$, something which is not obvious when the exponent $p$ is not 1 or 2. When the underlying group is the torus $\mathbb{T}$, the corresponding Hausdorff-Young inequality is instead

$\|\widehat{f}\|_{\ell^{p'}(\mathbb{Z})} \leq \|f\|_{L^p(\mathbb{T})}.$
The optimal constant is actually less than 1 in general, and it has been calculated for $\mathbb{R}^d$ (and proven to be optimal) by Beckner, but this will not concern us here (if you want to find out what it is, take $f$ to be a Gaussian). In the notes on Littlewood-Paley theory we also saw (in Exercise 7) that the inequality is false for $p$ greater than 2, and we proved so using a randomisation trick enabled by Khintchine's inequality¹.
Today I would like to talk about how the Hausdorff-Young inequality (H-Y) is proven and how important (or not) interpolation theory is to this inequality. I won’t be saying anything new or important, and ultimately this detour into H-Y will take us nowhere; but I hope the ride will be enjoyable.
1. Classical proofs of the Hausdorff-Young inequality
When I was a lowly undergraduate, I was baffled by $L^p$-norm inequalities that were stated for exponents in a continuous range. I reasoned that – based on my very limited experience – most times one can only prove such inequalities for specific exponents; the number of instances in which one can prove an inequality for a whole range of exponents directly must be very scarce, I thought, depending heavily on the expressions having a nice structure. [What I had in mind were examples from linear analysis like Hölder's or Young's inequality and the like.]
It turns out that, in a sense, although very naive at the time, I was right: now that I have more experience I know that it is indeed true that most of the time we can only prove inequalities directly for very specific exponents! The piece of the puzzle that I was missing was the interpolation theory that allows one to take two different estimates and get "all the ones in between".
For example, the Hausdorff-Young inequality is commonly proven as follows: when $p = 2$, the inequality is actually an equality, commonly known as Plancherel's identity:

$\|\widehat{f}\|_{L^2} = \|f\|_{L^2}.$
When $p = 1$ instead, the inequality is a simple consequence of the triangle inequality for integrals and the fact that $|e^{-2\pi i x \cdot \xi}| = 1$:

$\|\widehat{f}\|_{L^\infty} \leq \|f\|_{L^1}.$
Appealing to the complex interpolation result commonly known as the Riesz-Thorin theorem, we can conclude (since the Fourier transform is a linear map) that the Fourier transform will also be bounded from $L^p$ into $L^q$ for all exponents $p$ and $q$ that are obtained by linear interpolation of the inverse exponents above: namely, for $\theta \in [0,1]$,

$\frac{1}{p} = \frac{1-\theta}{1} + \frac{\theta}{2}, \qquad \frac{1}{q} = \frac{1-\theta}{\infty} + \frac{\theta}{2},$

with the understanding that $\frac{1}{\infty} = 0$. One can see that $\frac{1}{p} + \frac{1}{q} = 1$, which simply means $q = p'$; moreover, we have $1 \leq p \leq 2$ and $2 \leq p' \leq \infty$, which are the inequalities above.
Notice that complex interpolation between inequalities with constants $C_0$ and $C_1$ yields an intermediate inequality with constant $C_0^{1-\theta} C_1^{\theta}$; in this particular case $C_0 = C_1 = 1$, and therefore we obtain constant 1 for all intermediate values of $p$ with this method.
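Since the exponent bookkeeping above is easy to get wrong, here is a throwaway numerical check of the relations between $\theta$, $p$ and $q$ (an illustration, nothing more):

```python
# Riesz-Thorin exponents for the Fourier transform:
# endpoints (p0, q0) = (1, ∞) and (p1, q1) = (2, 2).
for theta in (0.1, 0.25, 0.5, 0.75, 0.9):
    inv_p = (1 - theta) / 1 + theta / 2   # 1/p = (1-θ)/1 + θ/2
    inv_q = 0.0 + theta / 2               # 1/q = (1-θ)/∞ + θ/2 = θ/2
    assert abs(inv_p + inv_q - 1) < 1e-12  # q is the conjugate exponent p'
    p, q = 1 / inv_p, 1 / inv_q
    assert 1 <= p <= 2 <= q                # p sweeps [1,2] while q = p' sweeps [2,∞)
```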
However, one might argue that using complex interpolation is a bit too much. The Riesz-Thorin theorem is a consequence of Hadamard's three-lines lemma from complex analysis, which itself is a consequence of the maximum modulus principle, which says that a holomorphic/harmonic function attains its maximum at the boundary of the domain. So, complex interpolation rests on an elementary but very deep property of holomorphic functions, and one might argue that such a proof is not a fully real-variable proof of the Hausdorff-Young inequality, if we have to bring complex analysis into the mix. We can ask the following questions:
Q1: Is there a proof of the Hausdorff-Young inequality that does not use complex interpolation?
Q2: Even better, is there a proof of the Hausdorff-Young inequality that does not use any interpolation at all?
The first question has a trivial answer and a not-so-trivial one. The trivial answer is that, since $L^{p',p} \subseteq L^{p'}$ when $p \leq p'$, we can use real interpolation between Lorentz spaces instead of complex interpolation, and reach the same conclusion.
Recall that real interpolation between Lorentz spaces is an extension of Marcinkiewicz real interpolation to Lebesgue exponents that are not necessarily equal. In particular, it says that if one has a sub-linear operator $T$ such that $T$ is

$L^{p_0} \to L^{q_0,\infty} \text{ bounded and } L^{p_1} \to L^{q_1,\infty} \text{ bounded}$

(with $p_0 \neq p_1$ and $q_0 \neq q_1$), then for all $0 < \theta < 1$ and $1 \leq r \leq \infty$ the operator $T$ is also²

$L^{p_\theta, r} \to L^{q_\theta, r} \text{ bounded, where } \frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1}, \quad \frac{1}{q_\theta} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1}.$
When $p_\theta \leq q_\theta$ this implies the strong $L^{p_\theta} \to L^{q_\theta}$ boundedness of $T$, because by choosing $r = p_\theta$ and standard properties of Lorentz spaces one has

$\|Tf\|_{L^{q_\theta}} = \|Tf\|_{L^{q_\theta, q_\theta}} \lesssim \|Tf\|_{L^{q_\theta, p_\theta}} \lesssim \|f\|_{L^{p_\theta, p_\theta}} = \|f\|_{L^{p_\theta}}.$
In particular, we see that the above applies to the Fourier transform operator, and therefore one can conclude the Hausdorff-Young inequality. However, it should be noticed that while complex interpolation gives a clean constant $C_0^{1-\theta} C_1^{\theta}$ (where $C_0, C_1$ are the constants for the known inequalities we want to interpolate between), real interpolation gives instead a "polluted" constant of $C(\theta, p_0, p_1, q_0, q_1)\, C_0^{1-\theta} C_1^{\theta}$, that is a constant that is affected by a numerical factor that depends on the various exponents involved and crucially one that blows up near the endpoints, that is when $\theta \to 0$ or $\theta \to 1$.
Further remark: While the caveat about the polluted constants resulting from real interpolation is important in general, in the specific situation of the H-Y inequality one must not panic. Indeed, the Fourier transform tensorises nicely, and as a consequence one can remove whatever spurious constants are obtained by using the tensor power trick (see Tao's post about it).
This certainly constitutes a fully real-variable proof of the Hausdorff-Young inequality, but it relies on a more sophisticated real interpolation technique involving the theory of Lorentz spaces, instead of the simpler Marcinkiewicz real interpolation (which corresponds to the case $p_0 = q_0$ and $p_1 = q_1$ in the above). Someone who really cares for the simplest possible proof might ask whether we can't maybe do even better and use a simpler real interpolation technique (such as Marcinkiewicz's). Indeed, it turns out that we can, if we throw into the mix some rearrangement inequalities of Hardy, Littlewood and Paley! That would be the not-so-trivial answer mentioned before. More on this below, in the next section.
Regarding the second question, consider first the following elementary argument (which works not just for $\mathbb{R}^d$ and $\mathbb{T}^d$ but for generic abelian groups). It will prove Hausdorff-Young for some special exponents (besides the endpoints $p = 1, 2$ already considered) and was originally given by Young himself³.
Young's argument: Quite simply, we take an exponent $p$ such that $p'$ is even: say $p' = 2k$, so that $p = \frac{2k}{2k-1}$, where $k$ is an integer. We observe that

$\|\widehat{f}\|_{L^{p'}}^{p'} = \big\| (\widehat{f})^k \big\|_{L^2}^2,$

and since $(\widehat{f})^k = \widehat{f * \cdots * f}$ we have by Plancherel's identity that

$\big\| (\widehat{f})^k \big\|_{L^2}^2 = \| f * \cdots * f \|_{L^2}^2,$
where there are $k$ copies of $f$ in the convolution at the RHS. At this point we simply apply Young's convolution inequality many times: one can check that for generic functions $g_1, \ldots, g_k$ and exponents $q$ and $q_1, \ldots, q_k$ satisfying

$\frac{1}{q_1} + \ldots + \frac{1}{q_k} = \frac{1}{q} + k - 1,$

it holds that

$\| g_1 * \cdots * g_k \|_{L^q} \leq \|g_1\|_{L^{q_1}} \cdots \|g_k\|_{L^{q_k}}.$
Applied to $g_1 = \ldots = g_k = f$ with $q = 2$, this generalised Young's inequality says that

$\| f * \cdots * f \|_{L^2} \leq \|f\|_{L^p}^k$

for the exponent $p$ such that $\frac{k}{p} = \frac{1}{2} + k - 1$; but this exponent is precisely the one such that $p' = 2k$, as we wanted!
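One can test-drive Young's argument numerically on the cyclic group $\mathbb{Z}/N$ (a stand-in for the general abelian setting; the normalisations below, normalised counting measure on the group and counting measure on the dual, are a choice that makes Plancherel hold with constant 1):

```python
import cmath, random

def dft(f):
    # Fourier transform on Z/N w.r.t. the normalised counting measure:
    # fhat(k) = (1/N) Σ_x f(x) e^{-2πi kx/N}
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N)
                for x in range(N)) / N for k in range(N)]

def conv(f, g):
    # convolution w.r.t. the same normalised measure, so that (f*g)^ = fhat·ghat
    N = len(f)
    return [sum(f[y] * g[(x - y) % N] for y in range(N)) / N for x in range(N)]

def norm_group(f, p):
    # L^p norm w.r.t. the normalised counting measure on the group
    return (sum(abs(v) ** p for v in f) / len(f)) ** (1 / p)

def norm_dual(F, p):
    # l^p norm w.r.t. the counting measure on the dual
    return sum(abs(v) ** p for v in F) ** (1 / p)

random.seed(0)
N = 16
f = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
fhat = dft(f)

# Young's argument for k = 2 (p' = 4): ||fhat||_4^4 = ||(fhat)^2||_2^2 = ||f*f||_2^2
assert abs(norm_dual(fhat, 4) ** 4 - norm_group(conv(f, f), 2) ** 2) < 1e-9
# ... and the resulting Hausdorff-Young inequality for p = 4/3:
assert norm_dual(fhat, 4) <= norm_group(f, 4 / 3) + 1e-9
```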
This simple argument shows the Hausdorff-Young inequality for all the values of $p$ for which $p'$ is even, but it does not say anything about those other exponents in between. I am not aware of an argument that might prove H-Y on $\mathbb{R}^d$ or $\mathbb{T}^d$ for all exponents $1 \leq p \leq 2$ without using any real interpolation, but there is such a proof for H-Y on finite abelian groups! The argument involves the tensor power trick and is presented in Section 3 below.
2. A proof of Hausdorff-Young on $\mathbb{T}$ or $\mathbb{R}$ using simple REAL interpolation (and rearrangements)
I learned about this proof from my colleague Odysseas Bakas, who learned it from Zygmund’s “Trigonometric series” (the full reference is: Volume II, Chapter XII, section 5). It is based on an inequality for certain rearrangements that is commonly attributed to Paley, but Paley’s contribution was actually in extending the inequality below to general orthonormal systems. The proof of this version for the trigonometric system is originally due to Hardy and Littlewood instead – but I won’t be sticking to their presentation because it is quite different from today’s taste.
The proof itself only works for $\mathbb{T}$ or $\mathbb{R}$, but this is alright because once you have the one-dimensional case sorted, the higher dimensional cases of $\mathbb{T}^d$ and $\mathbb{R}^d$ follow inductively from Minkowski's inequality and the fact that the Fourier transform "tensorises". Indeed, assume that you have proven the inequality for dimensions $1$ and $d-1$; write $x \in \mathbb{R}^d$ as $x = (x_1, x')$ with $x' \in \mathbb{R}^{d-1}$, and let $\mathcal{F}_1$ denote the Fourier transform in the first component and $\mathcal{F}'$ the one in the last $d-1$ components (thus $\widehat{f} = \mathcal{F}_1 \mathcal{F}' f$). Then we have

$\|\widehat{f}\|_{L^{p'}(\mathbb{R}^d)} = \big\| \| \mathcal{F}_1 \mathcal{F}' f \|_{L^{p'}_{\xi_1}} \big\|_{L^{p'}_{\xi'}} \leq \big\| \| \mathcal{F}' f \|_{L^{p}_{x_1}} \big\|_{L^{p'}_{\xi'}} \leq \big\| \| \mathcal{F}' f \|_{L^{p'}_{\xi'}} \big\|_{L^{p}_{x_1}} \leq \big\| \| f \|_{L^{p}_{x'}} \big\|_{L^{p}_{x_1}} = \|f\|_{L^p(\mathbb{R}^d)}.$
The argument for $\mathbb{T}^d$ is entirely similar. Notice that we have been able to apply Minkowski's integral inequality because $p \leq p'$ – if $p$ were bigger than 2 we would have $p' < p$, the reverse inequality would hold and the argument would not work!
2.1. Proof of H-Y on $\mathbb{T}$ (with rearrangements)
We will next prove the H-Y inequality on $\mathbb{T}$ for convenience, and at the end of the section we will indicate the modifications needed to adapt the proof to $\mathbb{R}$. First we will prove the inequality of Paley(-Hardy-Littlewood) using Marcinkiewicz's real-interpolation, a fully real-variable method. In order to avoid confusion, we prove a preliminary version that is more straightforward: for $1 < p \leq 2$,

$\Big( \sum_{n \in \mathbb{Z}} |\widehat{f}(n)|^p\, (1+|n|)^{p-2} \Big)^{1/p} \lesssim_p \|f\|_{L^p(\mathbb{T})}. \qquad (1)$
When $p = 2$, inequality (1) is actually an equality, and more precisely Plancherel's identity – you can verify this by yourself.
While the inequality above does not (necessarily) hold when $p = 1$, we claim that a weak-type inequality does hold at this endpoint; this will enable us to conclude the inequality for the stated range of exponents by using Marcinkiewicz interpolation.
Notice indeed that the LHS of (1) is the $\ell^p(\mu)$ norm (that is, a weighted $\ell^p$ norm) of the sequence $\big( (1+|n|)\, \widehat{f}(n) \big)_{n \in \mathbb{Z}}$, where $\mu$ is the measure on $\mathbb{Z}$ given by $\mu(\{n\}) := (1+|n|)^{-2}$. What we claim therefore is that

$\lambda\, \mu\big( \{ n \in \mathbb{Z} \,:\, (1+|n|)\, |\widehat{f}(n)| > \lambda \} \big) \lesssim \|f\|_{L^1(\mathbb{T})} \qquad (2)$

uniformly in $\lambda > 0$. Notice that (2) can also be rephrased as the weak-type boundedness $L^1(\mathbb{T}) \to \ell^{1,\infty}(\mu)$ of the linear operator $Tf(n) := (1+|n|)\, \widehat{f}(n)$.
(2) is actually a very simple consequence of the trivial Hausdorff-Young inequality for $p = 1$, that is the fact that $\|\widehat{f}\|_{\ell^\infty(\mathbb{Z})} \leq \|f\|_{L^1(\mathbb{T})}$. Indeed, this implies that $\{ n : (1+|n|)\,|\widehat{f}(n)| > \lambda \} \subseteq \{ n : 1+|n| > \lambda / \|f\|_{L^1} \}$, and therefore the LHS of (2) is bounded by

$\lambda \sum_{n \,:\, 1+|n| > \lambda / \|f\|_{L^1}} (1+|n|)^{-2};$

clearly, the RHS of the last expression is comparable to

$\lambda \cdot \frac{\|f\|_{L^1}}{\lambda} = \|f\|_{L^1(\mathbb{T})},$

which is precisely the claim.
The proof is therefore concluded by appealing to Marcinkiewicz's real-interpolation theorem, applied to the linear operator $Tf(n) = (1+|n|)\, \widehat{f}(n)$, viewed as a map from functions on $(\mathbb{T}, dx)$ to sequences on $(\mathbb{Z}, \mu)$.
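The only numerical fact used at the endpoint, namely that the tail sums of $(1+|n|)^{-2}$ are comparable to $1/A$, can be checked directly (an illustration, with the sum over $\mathbb{Z}$ truncated):

```python
# The endpoint computation boils down to: the tail sum
#   Σ_{n in Z : 1+|n| > A} (1+|n|)^{-2}
# is comparable to 1/A. Truncate the sum over Z at |n| <= N (error ~ 2/N).
def tail(A, N=10**5):
    return sum((1 + abs(n)) ** -2 for n in range(-N, N + 1) if 1 + abs(n) > A)

for A in (10, 100, 1000):
    assert 1.0 < A * tail(A) < 3.0  # comparable to 1/A, with constant about 2
```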
Now we are able to approach the full Paley(-Hardy-Littlewood) inequality; its statement will require a small definition:
Definition: Let $(a_n)_{n \in \mathbb{Z}}$ be a sequence in $\mathbb{C}$ such that

$|a_n| \to 0 \quad \text{as } |n| \to \infty.$

We define the rearrangement $(a^*_n)_{n \in \mathbb{Z}}$ to be the sequence given by: the non-increasing rearrangement of $(|a_n|)_{n \geq 0}$, so that

$a^*_0 \geq a^*_1 \geq a^*_2 \geq \ldots$

when $n \geq 0$, and the non-increasing rearrangement of $(|a_n|)_{n < 0}$, so that

$a^*_{-1} \geq a^*_{-2} \geq a^*_{-3} \geq \ldots$

when $n$ < 0.
Notice that, according to the definition, we are rearranging the sequence separately in the positive and negative indices.
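A small sketch of this two-sided rearrangement in code (the dictionary representation of a finitely supported sequence is my own choice, for illustration):

```python
def rearrange(a):
    # Two-sided rearrangement of a sequence given as a dict {n: a_n}, n in Z:
    # the non-negative and the negative indices are rearranged SEPARATELY,
    # so that a*_0 >= a*_1 >= ...  and  a*_{-1} >= a*_{-2} >= ...
    pos = sorted((abs(v) for n, v in a.items() if n >= 0), reverse=True)
    neg = sorted((abs(v) for n, v in a.items() if n < 0), reverse=True)
    out = {n: v for n, v in enumerate(pos)}
    out.update({-(i + 1): v for i, v in enumerate(neg)})
    return out

a = {-2: 3.0, -1: -1.0, 0: 0.5, 1: -2.0, 2: 4.0}
astar = rearrange(a)
assert astar == {0: 4.0, 1: 2.0, 2: 0.5, -1: 3.0, -2: 1.0}
```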
Paley's inequality is as follows: for $1 < p \leq 2$,

$\Big( \sum_{n \in \mathbb{Z}} \big( (\widehat{f})^*_n \big)^p\, (1+|n|)^{p-2} \Big)^{1/p} \lesssim_p \|f\|_{L^p(\mathbb{T})}, \qquad (3)$

where $(\widehat{f})^*$ is the rearranged sequence of the Fourier coefficients of $f$.
The statement is virtually identical to the one for the preliminary version (1) above, save for the fact that we now have the rearranged Fourier coefficients at the LHS. What difference does this make? Well, in terms of the strength of the inequality, the latter is certainly stronger: indeed, it is not hard to see that the LHS of (3) is always LARGER than the one of (1). This is due to the fact that, since $p \leq 2$, the numerical coefficient $(1+|n|)^{p-2}$ is non-increasing in $|n|$, and therefore by rearranging the Fourier coefficients by order of magnitude we are concentrating the entire mass where these numerical coefficients are largest.
Moreover, another difference that must be appreciated is that we cannot prove (3) using word-by-word the same proof that we used for (1), because the operator $f \mapsto \big( (\widehat{f})^*_n \big)_{n \in \mathbb{Z}}$ is not even sublinear! However, a small trick will allow us to repeat essentially the same proof.
The trick mentioned above is simple: we linearise the LHS! Indeed, for any fixed $f$, there exists at least one permutation $\sigma$ of $\mathbb{Z}$ such that

$(\widehat{f})^*_n = |\widehat{f}(\sigma(n))| \quad \text{for every } n \in \mathbb{Z}$

(by permutation I mean a bijection of $\mathbb{Z}$ with itself). Since the LHS of (3) is a sum of positive quantities, we can always rearrange the summation; in particular, we have

$\sum_{n \in \mathbb{Z}} \big( (\widehat{f})^*_n \big)^p\, (1+|n|)^{p-2} = \sum_{n \in \mathbb{Z}} |\widehat{f}(\sigma(n))|^p\, (1+|n|)^{p-2}.$
It suffices therefore to prove that the linear operator $T_\sigma f(n) := (1+|n|)\, \widehat{f}(\sigma(n))$ satisfies

$\Big( \sum_{n \in \mathbb{Z}} |T_\sigma f(n)|^p\, (1+|n|)^{-2} \Big)^{1/p} \lesssim_p \|f\|_{L^p(\mathbb{T})}$

with a constant independent of $\sigma$, for then we can apply this inequality to the fixed function $f$ above; but this can now be done by repeating verbatim the proof of (1), since the specific weight is irrelevant for the real-interpolation argument ($\sigma$ being a bijection, both the Plancherel endpoint and the weak $(1,1)$ endpoint go through unchanged).
Now we can finally prove the Hausdorff-Young inequality; from the proof, it will be clear why we needed to bother with the rearrangements.
Proof of H-Y using Paley(-Hardy-Littlewood)'s inequality: We will show that $\|\widehat{f}\|_{\ell^{p'}(\mathbb{Z})}$ is always dominated by (a constant times) the LHS of (3).
Assume we have only non-negative frequencies, for ease of notation, and actually assume that the Fourier coefficients are non-negative as well, so that I don't have to write absolute values over and over (you can add them back in yourself without changing the argument one bit). We have

$\sum_{n \geq 0} \big( \widehat{f}^*_n \big)^p\, (1+n)^{p-2} \geq \sum_{j \geq 0} \; \sum_{2^j \leq n < 2^{j+1}} \big( \widehat{f}^*_n \big)^p\, (1+n)^{p-2},$

and by the fact that we have rearranged the Fourier coefficients (so that $\widehat{f}^*_n \geq \widehat{f}^*_{2^{j+1}}$ in each block) we have that the above is bounded from below by

$\sum_{j \geq 0} \big( \widehat{f}^*_{2^{j+1}} \big)^p\, 2^{j}\, 2^{(j+1)(p-2)} \gtrsim \sum_{j \geq 0} \big( \widehat{f}^*_{2^{j+1}} \big)^p\, 2^{(j+1)(p-1)}.$

Observe now this convenient little miracle: we always have $p - 1 = \frac{p}{p'}$, so that $2^{(j+1)(p-1)} = \big( 2^{(j+1)/p'} \big)^p$. But then, since $\ell^p \subseteq \ell^{p'}$ (which only holds because $p \leq p'$, which in turn is only true because $p \leq 2$), we have

$\sum_{j \geq 0} \big( \widehat{f}^*_{2^{j+1}}\, 2^{(j+1)/p'} \big)^p \geq \Big( \sum_{j \geq 0} \big( \widehat{f}^*_{2^{j+1}} \big)^{p'}\, 2^{j+1} \Big)^{p/p'}.$

It is easy to check – again because of the fact that $\widehat{f}^*$ is a non-increasing rearrangement – that the latter quantity $\sum_{j} \big( \widehat{f}^*_{2^{j+1}} \big)^{p'}\, 2^{j+1}$ is comparable to

$\sum_{n \geq 0} \widehat{f}(n)^{p'}$

(yes, without the rearrangement now), which is precisely the $p'$-th power of the LHS of the Hausdorff-Young inequality for $\mathbb{T}$! The proof of H-Y on $\mathbb{T}$ using only real-interpolation is now concluded.
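The embedding $\ell^p \subseteq \ell^{p'}$ for $p \leq p'$ (the monotonicity of $\ell^r$ norms in $r$) that powers the argument above is easy to test numerically (illustration only):

```python
import random

# Monotonicity of l^r norms: for a fixed sequence c and p <= p',
# we have ||c||_{l^{p'}} <= ||c||_{l^p}.
random.seed(3)
c = [random.random() for _ in range(50)]
norm = lambda a, r: sum(abs(x) ** r for x in a) ** (1 / r)
p, pprime = 1.5, 3.0
assert norm(c, pprime) <= norm(c, p) + 1e-12
```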
2.2. Proof of H-Y on $\mathbb{R}$ (with rearrangements)
Now, before we move to the next section, we need to address the modifications needed for when the group is $\mathbb{R}$ instead of $\mathbb{T}$. They are not too complicated, but some care is required because of some technicalities.
First of all, the inequality that takes the place of the preliminary (1) is

$\Big( \int_{\mathbb{R}} |\widehat{f}(\xi)|^p\, |\xi|^{p-2}\, d\xi \Big)^{1/p} \lesssim_p \|f\|_{L^p(\mathbb{R})} \qquad (1')$

in the same range $1 < p \leq 2$; the inequality that takes the place of (3) is then

$\Big( \int_0^\infty \big( (\widehat{f})^*(t) \big)^p\, t^{p-2}\, dt \Big)^{1/p} \lesssim_p \|f\|_{L^p(\mathbb{R})} \qquad (3')$

where $(\widehat{f})^*$ is the decreasing rearrangement of $\widehat{f}$, which for a given function $g$ is defined as

$g^*(t) := \inf\{ \lambda > 0 \,:\, |\{ \xi \in \mathbb{R} : |g(\xi)| > \lambda \}| \leq t \}$
(it is not obvious that this is the correct choice, though it is the first guess one might make; but the proof goes through in the end). See the post on Lorentz spaces about the properties of these rearrangements.
The proof of (1′) goes pretty much the same as the proof of (1): the operator $Tf(\xi) := |\xi|\, \widehat{f}(\xi)$ is linear and the inequality to be proven is its $L^p(\mathbb{R}) \to L^p(|\xi|^{-2}\, d\xi)$ boundedness; when $p = 2$ this is again Plancherel, and when $p = 1$ the weak inequality can be proven verbatim; Marcinkiewicz interpolation finishes the proof.
However, the proof of (3′) is a bit trickier. Indeed, we can no longer linearise the rearrangement!
It is FALSE in general that there exists a measurable function $\varphi$ such that

$g^*(t) = |g(\varphi(t))|.$
[Curiously, the reverse is true, in the sense that there always exists a measurable (and measure-preserving) function $\varsigma$ such that

$|g(\xi)| = g^*(\varsigma(\xi));$
unfortunately, this does not help us. See this paper of John V. Ryff for details.]
So, we cannot use the same strategy we used for $\mathbb{T}$, and we are stuck with the operator $f \mapsto (\widehat{f})^*$, which is not even sub-linear. However, a little reflection reveals that it is not too far from being sublinear! Indeed, as seen in the post on Lorentz spaces, the decreasing rearrangement of a sum of functions satisfies

$(g + h)^*(t_1 + t_2) \leq g^*(t_1) + h^*(t_2);$

applied to our operator above this shows that

$\big( \widehat{f + g} \big)^*(t) \leq (\widehat{f})^*(t/2) + (\widehat{g})^*(t/2);$

it is then a simple exercise to show that, for the particular measure spaces involved (that is the regular Lebesgue spaces and the weighted $L^p(t^{-2}\, dt)$), the Marcinkiewicz interpolation theorem continues to hold. This means that it will suffice to prove the endpoint inequalities for $p = 1, 2$ as before to conclude; but these are quite simple to prove. For $p = 2$ we use the fact that rearrangements satisfy $\|g^*\|_{L^2(0,\infty)} = \|g\|_{L^2(\mathbb{R})}$ and Plancherel; for the weak $(1,1)$ inequality we use the same argument as before, since $\|(\widehat{f})^*\|_{L^\infty} = \|\widehat{f}\|_{L^\infty} \leq \|f\|_{L^1}$ too.
It remains to show, in order to conclude, that $\|\widehat{f}\|_{L^{p'}(\mathbb{R})}$ is dominated by the LHS of (3′). If we allowed ourselves the use of the theory of Lorentz spaces, we'd see that the LHS of (3′) is nothing but the $L^{p',p}$ norm of $\widehat{f}$, and the desired inequality would be an immediate consequence of the fact that $L^{p',p} \subseteq L^{p',p'} = L^{p'}$ in the range $p \leq p'$; however, we are trying to avoid the use of anything too sophisticated, so we will use a direct argument instead. The argument works for a generic function $g$, so using the monotonicity of $g^*$ we write

$\int_0^s g^*(t)^p\, t^{p-2}\, dt \geq g^*(s)^p \int_0^s t^{p-2}\, dt = \frac{g^*(s)^p\, s^{p-1}}{p-1}$

for any $s > 0$, and observe that we can therefore compare $g^*(s)$ to the full weighted integral: denoting by $S$ the quantity $\int_0^\infty g^*(t)^p\, t^{p-2}\, dt$, the last expression above shows that

$g^*(s) \lesssim_p S^{1/p}\, s^{-1/p'}.$

Again, by the monotonicity of $g^*$ and the definition of $p'$ (which gives $-\frac{p'-p}{p'} = p-2$), we see that

$\int_0^\infty g^*(t)^{p'}\, dt = \int_0^\infty g^*(t)^{p'-p}\, g^*(t)^p\, dt \lesssim S^{(p'-p)/p} \int_0^\infty g^*(t)^p\, t^{p-2}\, dt = S^{p'/p},$

and this shows what we wanted if we take $g = \widehat{f}$, since $\|\widehat{f}\|_{L^{p'}(\mathbb{R})} = \|(\widehat{f})^*\|_{L^{p'}(0,\infty)}$. The adaptation of the proof of H-Y using rearrangements to $\mathbb{R}$ is concluded.
3. A proof of Hausdorff-Young in finite abelian groups without ANY interpolation (sort of)
This is something that I learned from my BSc advisor who pointed me to Terry Tao’s blog, specifically to his post on the tensor power trick. In particular, I will reproduce here Tao’s proof of the Hausdorff-Young inequality in the setting of a finite abelian group which does not use any interpolation (nor rearrangements). Technically speaking, we will still be performing something that acts roughly like an interpolation, but in the finite setting things simplify so much that our “interpolation step” will become so trivial that one can hardly call it interpolation anymore.
3.1. A quick primer on the Fourier transform on finite abelian groups
Before I give you the proof, it is best if I quickly review what the Fourier transform on a finite abelian group is. Let $G$ be this finite abelian group; then we must define its Pontryagin dual $\widehat{G}$, which is the set of all functions $\chi : G \to S^1$ (where by $S^1$ we mean the circle group, that is, the group of complex numbers of modulus 1) that are actually homomorphisms, i.e. that preserve the group operations: a function $\chi : G \to S^1$ is in $\widehat{G}$ if and only if

$\chi(x + y) = \chi(x)\, \chi(y)$

for all $x, y \in G$. The elements of $\widehat{G}$ are usually called characters. Observe that $\widehat{G}$ is more than just a set, namely, it is a finite abelian group itself: indeed, multiplication is commutative in $S^1$, so if $\chi_1, \chi_2$ are characters we can define the function $\chi_1 \chi_2(x) := \chi_1(x)\, \chi_2(x)$ and see that it is a homomorphism and hence a character as well⁴. The inverse of a character $\chi$ is easily seen to be the character $\overline{\chi}$, the complex conjugate, and the identity is the so-called trivial character $\chi_0 \equiv 1$.
A very important thing about the characters is that they are orthogonal to each other. Indeed, first of all, if we let $\mu$ denote the normalised counting measure on $G$, that is

$\mu(A) := \frac{|A|}{|G|} \quad \text{for } A \subseteq G,$
then we can see that if $\chi$ is a character, the integral

$\int_G \chi\, d\mu = \frac{1}{|G|} \sum_{x \in G} \chi(x)$

is a translation-invariant quantity, in the sense that for all $y \in G$ we must have

$\int_G \chi(x)\, d\mu(x) = \int_G \chi(x+y)\, d\mu(x) = \chi(y) \int_G \chi\, d\mu.$
As a consequence, either $\chi$ is the trivial character (in which case it can be seen that $\int_G \chi\, d\mu = 1$) or $\int_G \chi\, d\mu$ must be zero. This has the following consequence: since

$\int_G \chi_1 \overline{\chi_2}\, d\mu$

and $\chi_1 \overline{\chi_2}$ is a character, the above is always equal to $0$, unless $\chi_1 = \chi_2$ (so that $\chi_1 \overline{\chi_2}$ is the trivial character), in which case clearly $\int_G \chi_1 \overline{\chi_2}\, d\mu = 1$. This is precisely an orthogonality relation, and it is not hard to show that the characters form a basis of $L^2(G, d\mu)$ (this is the finite abelian case of the Peter-Weyl theorem).
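The orthogonality relations are easy to verify concretely for $G = \mathbb{Z}/N$, whose characters are $\chi_k(x) = e^{2\pi i kx/N}$ (a quick check, not part of the proof):

```python
import cmath

# Characters of G = Z/N are χ_k(x) = e^{2πi kx/N}. Verify the orthogonality
# relation ∫_G χ_k conj(χ_l) dμ = δ_{kl}, with μ the normalised counting measure.
N = 8
def chi(k, x):
    return cmath.exp(2j * cmath.pi * k * x / N)

for k in range(N):
    for l in range(N):
        inner = sum(chi(k, x) * chi(l, x).conjugate() for x in range(N)) / N
        assert abs(inner - (1.0 if k == l else 0.0)) < 1e-12
```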
The Fourier transform is then defined as one would expect, that is simply as the projection against the basis of characters: the Fourier transform of $f : G \to \mathbb{C}$ is the function $\widehat{f} : \widehat{G} \to \mathbb{C}$ defined by

$\widehat{f}(\chi) := \int_G f\, \overline{\chi}\, d\mu = \frac{1}{|G|} \sum_{x \in G} f(x)\, \overline{\chi(x)}.$
It is easy to verify that this Fourier transform corresponds very closely to the usual one on $\mathbb{R}^d$ or $\mathbb{T}^d$, in the sense that a number of standard identities hold for it:
- Translations become modulations and vice versa, under the action of the Fourier transform: that is, for $y \in G$ and $\eta \in \widehat{G}$,

$\widehat{f(\cdot + y)}(\chi) = \chi(y)\, \widehat{f}(\chi), \qquad \widehat{\eta\, f}(\chi) = \widehat{f}(\chi\, \overline{\eta});$
- for the convolution of functions

$f * g(x) := \int_G f(y)\, g(x - y)\, d\mu(y) = \frac{1}{|G|} \sum_{y \in G} f(y)\, g(x-y),$

we have that

$\widehat{f * g} = \widehat{f}\; \widehat{g};$
- The inverse Fourier transform of $F : \widehat{G} \to \mathbb{C}$ is

$F^{\vee}(x) := \sum_{\chi \in \widehat{G}} F(\chi)\, \chi(x);$

applied to $\widehat{f}$, this gives the Fourier inversion formula

$f(x) = \sum_{\chi \in \widehat{G}} \widehat{f}(\chi)\, \chi(x)$
(notice how trivial this statement is in the finite setting: identifying the functions with vectors in $\mathbb{C}^{|G|}$, this is simply saying that a function is the sum of its components in the specific orthonormal basis we are considering).
- Plancherel and Parseval's formulas hold, that is

$\sum_{\chi \in \widehat{G}} |\widehat{f}(\chi)|^2 = \frac{1}{|G|} \sum_{x \in G} |f(x)|^2$

(notice that on the LHS we have plain summation against the counting measure, that is non-normalised, unlike at the RHS) and

$\sum_{\chi \in \widehat{G}} \widehat{f}(\chi)\, \overline{\widehat{g}(\chi)} = \int_G f\, \overline{g}\, d\mu.$
- The trivial Hausdorff-Young inequality holds, that is

$\|\widehat{f}\|_{\ell^\infty(\widehat{G})} \leq \|f\|_{L^1(G, \mu)}.$
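All of the identities above can be verified numerically on $G = \mathbb{Z}/N$ (a sanity check of the normalisations, which are easy to get wrong; the function names are mine):

```python
import cmath, random

# Fourier transform on G = Z/N with the normalisations of this section:
# normalised counting measure on G, counting measure on the dual.
N = 12

def dft(f):
    # fhat(k) = (1/N) Σ_x f(x) e^{-2πi kx/N}
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N)
                for x in range(N)) / N for k in range(N)]

def idft(F):
    # inversion: f(x) = Σ_k F(k) e^{2πi kx/N}  (no 1/N here!)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * x / N)
                for k in range(N)) for x in range(N)]

def conv(f, g):
    # convolution w.r.t. the normalised measure
    return [sum(f[y] * g[(x - y) % N] for y in range(N)) / N for x in range(N)]

random.seed(1)
f = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
g = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
fhat, ghat = dft(f), dft(g)

# Fourier inversion formula
assert all(abs(a - b) < 1e-9 for a, b in zip(idft(fhat), f))
# Convolution theorem: (f*g)^ = fhat · ghat
assert all(abs(a - b * c) < 1e-9 for a, b, c in zip(dft(conv(f, g)), fhat, ghat))
# Plancherel: counting measure on the dual, normalised measure on the group
assert abs(sum(abs(v) ** 2 for v in fhat) - sum(abs(v) ** 2 for v in f) / N) < 1e-9
# Trivial Hausdorff-Young: ||fhat||_∞ <= ||f||_{L^1(μ)}
assert max(abs(v) for v in fhat) <= sum(abs(v) for v in f) / N + 1e-12
```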
That said, we are ready to prove H-Y for a finite abelian group without using interpolation (or, if we are being completely honest, using so little interpolation that it cannot even be called that).
3.2. Proof of H-Y in finite abelian groups without interpolation
In this subsection we will prove that for all $1 \leq p \leq 2$ we have

$\|\widehat{f}\|_{\ell^{p'}(\widehat{G})} \lesssim \|f\|_{L^p(G, \mu)}$

for any finite abelian group $G$, where the constant hidden in $\lesssim$ is independent of $G$.
First of all, we show that we can prove H-Y in the case that $f$ takes a simple form, namely we assume that for some $A > 0$ it holds that

$A\, \mathbf{1}_E(x) \leq |f(x)| \leq 2A\, \mathbf{1}_E(x)$

for some set $E \subseteq G$. In this case, we have for the Fourier transform that

$\|\widehat{f}\|_{\ell^2(\widehat{G})} = \|f\|_{L^2(G,\mu)} \leq 2A\, \mu(E)^{1/2}$
by Plancherel, and

$\|\widehat{f}\|_{\ell^\infty(\widehat{G})} \leq \|f\|_{L^1(G,\mu)} \leq 2A\, \mu(E)$
by the trivial H-Y inequality. By the logarithmic convexity of the $\ell^p$ norms (which is a very primitive form of interpolation, if you will) we then have that

$\|\widehat{f}\|_{\ell^{p'}} \leq \|\widehat{f}\|_{\ell^2}^{\theta}\; \|\widehat{f}\|_{\ell^\infty}^{1-\theta},$

where $\theta$ is such that $\frac{1}{p'} = \frac{\theta}{2}$. Hence $\theta = \frac{2}{p'}$ and a little algebra reveals that

$\|\widehat{f}\|_{\ell^{p'}(\widehat{G})} \lesssim A\, \mu(E)^{1 - \theta/2} = A\, \mu(E)^{1/p}. \qquad (4)$
However, at the same time we have by our hypotheses on $f$ that

$\|f\|_{L^p(G,\mu)} \geq A\, \mu(E)^{1/p},$

so (4) is precisely the H-Y inequality for the very special functions that we are working with.
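A quick numerical check of the log-convexity step and of (4) for an indicator-type function on $\mathbb{Z}/N$ (a sketch; all names and parameter values here are my own choices):

```python
import cmath, random

# f = A·1_E on G = Z/N: check log-convexity ||fhat||_{p'} <= ||fhat||_2^θ ||fhat||_∞^{1-θ}
# with θ = 2/p', and the resulting bound (4): ||fhat||_{p'} <= 2 A μ(E)^{1/p}.
random.seed(2)
N, A = 32, 3.0
E = set(random.sample(range(N), 10))
f = [A if x in E else 0.0 for x in range(N)]
fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N)
            for x in range(N)) / N for k in range(N)]

p = 1.5
pprime = p / (p - 1)        # = 3
theta = 2 / pprime
l2 = sum(abs(v) ** 2 for v in fhat) ** 0.5
linf = max(abs(v) for v in fhat)
lpp = sum(abs(v) ** pprime for v in fhat) ** (1 / pprime)
mu_E = len(E) / N

assert lpp <= l2 ** theta * linf ** (1 - theta) + 1e-9  # log-convexity
assert lpp <= 2 * A * mu_E ** (1 / p) + 1e-9            # the bound (4)
```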
In order to extend this result to general functions, we will decompose them in pieces like the above and use the triangle inequality to sum over all the pieces; this will introduce an inefficient constant that grows in $|G|$, which is bad, but we will use the fact that the Fourier transform tensorises to remove it with the tensor power trick.
Let then, normalising $\|f\|_{L^p(G,\mu)} = 1$, for $m \in \mathbb{Z}$

$E_m := \{ x \in G \,:\, 2^m < |f(x)| \leq 2^{m+1} \},$

and correspondingly define $f_m := f\, \mathbf{1}_{E_m}$. Each $f_m$ is a function of the type above (with $A = 2^m$). We observe first that when $2^m$ is sufficiently small then the piece $f_m$ does not contribute much at all. Indeed, if $2^m \leq |G|^{-1}$ we see that (by the basic inequality $\|a\|_{\ell^{p'}} \leq \|a\|_{\ell^2}$, which holds when $p' \geq 2$)

$\|\widehat{f_m}\|_{\ell^{p'}} \leq \|\widehat{f_m}\|_{\ell^2} = \|f_m\|_{L^2(G,\mu)} \leq 2^{m+1};$
summing over such $m$, the above contributes (by triangle inequality) at most $\sum_{m \,:\, 2^m \leq |G|^{-1}} 2^{m+1} \leq 4\, |G|^{-1} \lesssim \|f\|_{L^p}$. What this means is that we can forget about those indices $m$ such that $2^m \leq |G|^{-1}$, or equivalently $m \leq -\log_2 |G|$.
There are other indices that we can forget about (those where $2^m$ is very large), because the corresponding sets will be empty: indeed, we have that

$\|f\|_{L^\infty(G)} \leq |G|^{1/p}\, \|f\|_{L^p(G,\mu)} = |G|^{1/p} \leq |G|,$

and therefore the only values of $m$ that are relevant are those for which (say) $|m| \leq \log_2 |G|$.
In light of the above observations, our chosen decomposition of $f$ is as follows:

$f = \sum_{m \,:\, |m| \leq \log_2 |G|} f_m + f_{\mathrm{small}},$

where $f_{\mathrm{small}}$ satisfies $|f_{\mathrm{small}}| \leq |G|^{-1}$ pointwise. Then we have $\|\widehat{f_{\mathrm{small}}}\|_{\ell^{p'}} \lesssim \|f\|_{L^p}$ as seen above, and we have by (4) (each piece contributing $\lesssim \|f\|_{L^p}$) and triangle inequality that

$\|\widehat{f}\|_{\ell^{p'}(\widehat{G})} \leq \sum_{|m| \leq \log_2 |G|} \|\widehat{f_m}\|_{\ell^{p'}} + \|\widehat{f_{\mathrm{small}}}\|_{\ell^{p'}} \lesssim \log(|G|+1)\, \|f\|_{L^p(G,\mu)}, \qquad (5)$
which as anticipated is the H-Y inequality with a bad constant that grows with the cardinality $|G|$.
Now we will finally use the tensor power trick to remove this unfortunate logarithm. The first thing to notice is that we can make $G^n := G \times \cdots \times G$ into a group: the group operation is simply that of $G$ applied componentwise, so we can still denote it by $+$. The second thing to notice is that we also have $\widehat{G^n} = (\widehat{G})^n$, because the characters of $G^n$ are all necessarily of the form $\chi_1 \otimes \cdots \otimes \chi_n$ for $\chi_1, \ldots, \chi_n \in \widehat{G}$, where

$(\chi_1 \otimes \cdots \otimes \chi_n)(x_1, \ldots, x_n) := \chi_1(x_1) \cdots \chi_n(x_n)$

is the tensor product of the single-variable characters. If we denote by $f^{\otimes n}$ the tensor product of $f$ with itself $n$ times, then we can see that we have for its Fourier transform (on the group $G^n$)

$\widehat{f^{\otimes n}} = \big( \widehat{f} \big)^{\otimes n};$
in words, the Fourier transform of the tensored $f$ is the tensored Fourier transform of $f$ itself. Now, since $G^n$ is a finite abelian group, we have by (5) that

$\|\widehat{f^{\otimes n}}\|_{\ell^{p'}} \lesssim \log(|G^n|+1)\, \|f^{\otimes n}\|_{L^p(G^n)} \lesssim n\, \log(|G|+1)\, \|f^{\otimes n}\|_{L^p(G^n)}$

(notice the factor of $n$ that appeared). It is trivial to check that $\|f^{\otimes n}\|_{L^p(G^n)} = \|f\|_{L^p(G)}^n$ and that $\|\widehat{f^{\otimes n}}\|_{\ell^{p'}} = \|\widehat{f}\|_{\ell^{p'}}^n$, and therefore by taking $n$-th roots of the last inequality we have shown the improved version of (5) given by

$\|\widehat{f}\|_{\ell^{p'}(\widehat{G})} \leq \big( C\, n\, \log(|G|+1) \big)^{1/n}\, \|f\|_{L^p(G,\mu)}.$
However, nothing prevents us from taking $n$ arbitrarily large. We do so and see that $\big( C\, n\, \log(|G|+1) \big)^{1/n} \to 1$ as $n \to \infty$, and thus we have proven H-Y with constant $1$ – and without using essentially any interpolation!
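The limit that kills the logarithm is worth seeing numerically (with made-up values for the constant and for $\log |G|$):

```python
# If ||fhat||_{p'} <= C·n·log|G| · ||f||_p after tensoring n times, then taking
# n-th roots gives constant (C·n·log|G|)^{1/n}, which tends to 1 as n grows.
C, L = 10.0, 20.0   # made-up values for the constant C and for log|G|
vals = [(C * n * L) ** (1 / n) for n in (1, 10, 100, 10000)]
assert vals[0] == 200.0
assert all(a > b for a, b in zip(vals, vals[1:]))  # shrinking along this list
assert vals[-1] < 1.01                             # essentially constant 1
```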
Unfortunately, the trick only applies to the finite abelian case.
4. Why does this matter?
You might rightfully think that all of the above amounts to nothing. Why bother looking for other proofs of H-Y, specifically proofs that do not use complex interpolation or no interpolation at all? There’s nothing wrong with using complex interpolation after all, as the Riesz-Thorin theorem is rather easy to prove. Sure, some people have a fondness for finding the simplest possible proof of a theorem, but is the reward worth the effort given that there is nothing too complicated going on? All the proofs fit in a few pages, not 200 or so.
Well, as it turns out, there are situations in which interpolation is not available at all! One of these situations is that of Nonlinear Fourier Analysis, in which one can construct certain non-linear transforms (often called scattering transforms) that can do for certain special non-linear PDEs what the ordinary Fourier transform does for linear PDEs. Here I am referring to the fact that, such as for example with the free Schrödinger equation $i \partial_t u + \Delta u = 0$, one can take a Fourier transform in the spatial variable and turn the PDE into a collection of ODEs: in this instance, by taking the Fourier transform in $x$ we obtain for each fixed frequency $\xi$ the ODE $i \partial_t \widehat{u}(t,\xi) = 4\pi^2 |\xi|^2\, \widehat{u}(t,\xi)$ in the variable $t$, which has the solution $\widehat{u}(t,\xi) = e^{-4\pi^2 i |\xi|^2 t}\, \widehat{u_0}(\xi)$; Fourier inversion then gives the propagator formula $u(t,x) = \big( e^{-4\pi^2 i t |\xi|^2}\, \widehat{u_0} \big)^{\vee}(x)$, loosely speaking. Suitable nonlinear Fourier transforms / scattering transforms can do a similar job for PDEs such as the Korteweg-de Vries equation $\partial_t u + \partial_x^3 u = 6\, u\, \partial_x u$ and the cubic non-linear Schrödinger equation(s) $i \partial_t u + \Delta u = \pm |u|^2 u$.
These nonlinear Fourier transforms (I am deliberately avoiding defining them because they will be the subject of a future post) can be shown to satisfy estimates analogous to Plancherel and the trivial H-Y inequality, and therefore the question naturally arises as to whether they satisfy a full H-Y inequality for the exponents in between, $1 < p < 2$. However, the answer is far from trivial! Since the transforms are intrinsically non-linear objects, there is no known interpolation theory available for them. A fundamental technique that is the bread and butter of every respectable harmonic analyst is suddenly gone, and one is left with picking up the pieces.
Interestingly, the full H-Y inequality for (some of) these transforms has been proven, but the argument uses the Christ-Kiselev argument, something which is surprising in itself. You can find more details in these notes by Tao and Thiele, which are a go-to for the subject. A byproduct of the Christ-Kiselev argument, as we have experienced ourselves, is that one obtains a constant for the H-Y inequality that blows up as $p \to 2^-$, while on the other hand the inequality is true with a finite constant for $p = 2$. It is still an open problem whether this behaviour of the constant is real or not; intuition and some partial results (e.g. this by Kovač, Oliveira e Silva and Rupčić) suggest otherwise. This is a real-life situation in which the above questions Q1 and Q2 become very relevant: when there is no interpolation available, we are forced to do without, and one way to get there is to go back to the settings we are familiar with and try to do without in there too – hoping we can learn something useful or at least get some inspiration. This, to me, was justification enough to start pondering about these questions; of course, nothing I said in this post has any consequences for the nonlinear H-Y constant problem.
1: An interesting observation is that one can modify the construction in Exercise 7 and get rid of Khintchine's inequality, making the proof entirely elementary. What one does is take a function of the form $f(x) = \sum_{j=1}^{N} e^{2\pi i \xi_j \cdot x}\, \phi(x - x_j)$ with $\phi$ and $\widehat{\phi}$ both sufficiently concentrated around the origin. Taking the $x_j$ sufficiently distant from each other we can make the different terms of the sum essentially disjoint in space, so that $\|f\|_{L^p} \sim N^{1/p}$; taking the $\xi_j$ sufficiently distant from each other we can make the different terms of the sum essentially disjoint in frequency instead, so that $\|\widehat{f}\|_{L^{p'}} \sim N^{1/p'}$. Ultimately, the Hausdorff-Young inequality fails above 2 because it is possible for two functions to be more or less orthogonal both in space and in frequency, and because when $p > 2$ we have $\frac{1}{p'} > \frac{1}{p}$ and hence $N^{1/p'} \gg N^{1/p}$ as $N \to \infty$. [go back]
2: for the sake of clarity, let me restate that $\|f\|_{L^{p,q}}$ is given by

$\|f\|_{L^{p,q}} := \Big( \int_0^\infty \big( t^{1/p}\, f^*(t) \big)^q\, \frac{dt}{t} \Big)^{1/q},$

where $f^*$ denotes the decreasing rearrangement of $f$, and similarly for $q = \infty$, in which case one takes $\|f\|_{L^{p,\infty}} := \sup_{t > 0} t^{1/p}\, f^*(t)$. [go back]
3: It was the very first version of the Hausdorff-Young inequality to appear besides the endpoints, and it was Hausdorff that later extended it to the full range. [go back]
4: A more familiar notation can be obtained by observing that every character $\chi$ can be written as $\chi(x) = e^{2\pi i \vartheta(x)}$ for some function $\vartheta : G \to \mathbb{R}/\mathbb{Z}$, and noticing that $\vartheta$ is now an additive homomorphism and that the sum of two such $\vartheta$'s gives again rise to an additive homomorphism and hence a character. However, this is unnecessary. [go back]