Understand the details of the consequences of subconvexity mentioned in lecture (reference: (Michel 2007, sec. 5)):
Subconvexity vs. geometry of numbers.
Distinguishing modular forms.
Duke's theorem. Supersingular reduction of CM elliptic curves.
Quantum unique ergodicity. One exercise here is to understand how this follows from subconvexity in the special case of Eisenstein series, as in (Luo and Sarnak 1995, sec. 2).
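For orientation on the last item, here is a hedged schematic (the precise formula, with its gamma factors and constants, is in (Luo and Sarnak 1995, sec. 2); the factor \(t^{-1/2}\) and the power of \(\zeta\) below are only indicative). For a fixed Hecke–Maass cusp form \(\phi\), unfolding the Rankin–Selberg integral gives roughly \[\int_{{\mathop{\mathrm{SL}}}_2(\mathbb{Z}) \backslash \mathbb{H}} \phi(z)\, \bigl|E(z, \tfrac{1}{2} + it)\bigr|^2 \, \frac{dx\,dy}{y^2} \;\approx\; t^{-1/2}\,\frac{L(\tfrac{1}{2}, \phi)\, L(\tfrac{1}{2} + 2it, \phi)}{|\zeta(1+2it)|^{2}}.\] Since \(|\zeta(1+2it)|^{-1} \ll (\log t)^{O(1)}\), the convexity bound \(L(\tfrac{1}{2} + 2it, \phi) \ll t^{1/2+\varepsilon}\) just fails to give decay, while any subconvex bound \(\ll t^{1/2-\delta}\) forces the left-hand side to tend to \(0\), i.e., equidistribution of \(|E(\cdot, \tfrac{1}{2}+it)|^2\,dx\,dy/y^2\) tested against cusp forms; the incomplete Eisenstein series, which produce the \(\log t\) normalization, are handled separately.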
Some “classic” papers (non-exhaustive):
Bounds for Fourier coefficients of half-integral weight forms, with applications to quadratic forms: (Iwaniec 1987), (Duke 1988), (Duke, Friedlander, and Iwaniec 1997)
Subconvexity for \({\mathop{\mathrm{GL}}}_2\): (Duke, Friedlander, and Iwaniec 1993), (Duke, Friedlander, and Iwaniec 1994), (Duke, Friedlander, and Iwaniec 2001).
Moments and amplification via periods: (Venkatesh 2010), (Michel and Venkatesh 2010), (Iwaniec and Sarnak 1995), (Sarnak 1985).
Shifted convolution sums:
via \(\delta\)-symbol: (Duke, Friedlander, and Iwaniec 1993)
via periods: (Sarnak 2001), (Blomer and Harcos 2008), (Blomer, Jana, and Nelson 2024).
Papers emphasizing variation of the test vector: (Reznikov 2008; Bernstein and Reznikov 2010; Venkatesh 2010).
One exercise is to draw parallels, e.g., between
(Michel and Venkatesh 2010, Thm 5.1) and (Duke, Friedlander, and Iwaniec 1993),
(Venkatesh 2010, sec. 4) and (Duke, Friedlander, and Iwaniec 1994), or
(Michel and Venkatesh 2010, Thm 5.2) and (Duke, Friedlander, and Iwaniec 2001).
Another is to reprove some results using different methods, e.g., by working out a “classical” proof in the style of (Duke, Friedlander, and Iwaniec 2001) for subconvexity for Maass forms at special points, namely, for \(L(1/2 + i t_f, f)\) with \(f\) on \({\mathop{\mathrm{SL}}}_2(\mathbb{Z})\) of eigenvalue \(1/4 + t_f^2\), by estimating an amplified fourth moment, e.g., \[\sum_{f : t_f \in [T, T+1]} \left| \sum_{\ell \asymp L} c_{\ell} \lambda_f(\ell)\right|^2 \left| L(\tfrac{1}{2} + i t_f, f) \right|^4.\]
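To see why such a moment suffices, here is a hedged sketch of the standard amplification logic (the exponents are indicative; \(A\) and \(\eta\) below are placeholders, not taken from any particular paper). At the special point \(s = \tfrac{1}{2} + it_f\) the analytic conductor of \(L(s, f)\) drops to roughly \(T\) rather than \(T^2\), so the convexity bound is about \(T^{1/4+\varepsilon}\), and there are \(\asymp T\) forms with \(t_f \in [T, T+1]\). Suppose one can show \[\sum_{f : t_f \in [T, T+1]} \Bigl| \sum_{\ell \asymp L} c_{\ell} \lambda_f(\ell)\Bigr|^2 \bigl| L(\tfrac{1}{2} + i t_f, f) \bigr|^4 \;\ll\; T^{1+\varepsilon} L^{A}\] for some fixed \(A\) and all \(L \le T^{\eta}\). With \(L = 1\) (no amplifier) such a bound is Lindelöf on average and, after dropping all but one term, recovers only convexity. With an amplifier supported on primes \(p \asymp L^{1/2}\) and their squares, chosen via the Hecke relation \(\lambda_{f_0}(p)^2 - \lambda_{f_0}(p^2) = 1\) so that \(\bigl|\sum_{\ell} c_\ell \lambda_{f_0}(\ell)\bigr| \gg L^{1/2 - \varepsilon}\), positivity gives \[\bigl|L(\tfrac{1}{2} + it_{f_0}, f_0)\bigr|^4 \;\ll\; T^{1+\varepsilon} L^{A - 1 + \varepsilon},\] which beats the convexity bound as soon as \(A < 1\) and \(L\) is a fixed small power of \(T\).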
Study the proof of the convexity bound. There are three steps:
The Phragmén–Lindelöf convexity principle, to reduce estimates for \(\Re(s) = 1/2\) to estimates for \(\Re(s) = 1 + \varepsilon\) and \(\Re(s) = - \varepsilon\).
The functional equation, to reduce further to estimates for \(\Re(s) = 1 + \varepsilon\).
Establishing the necessary bounds for \(\Re(s) = 1 + \varepsilon\), for which see https://www.math.wsu.edu/faculty/scliu/papers/Convexity.pdf and references.
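As a concrete version of these steps (a sketch in standard notation, ignoring constants and possible poles; \(C = C(s)\) denotes the analytic conductor and \(\gamma(s)\) the gamma factor, so that \(\Lambda(s) = \gamma(s) L(s) = \epsilon\, \Lambda(1-s)\)): the edge bound gives \(L(1+\varepsilon+it) \ll_\varepsilon C^{\varepsilon}\); the functional equation in asymmetric form, \(L(s) = \epsilon\, \frac{\gamma(1-s)}{\gamma(s)}\, L(1-s)\), together with Stirling's estimate \(\bigl|\gamma(1-s)/\gamma(s)\bigr| \asymp C(s)^{1/2 - \Re(s)}\), transfers this to \(L(-\varepsilon+it) \ll C^{1/2+2\varepsilon}\) on the other edge; and Phragmén–Lindelöf (applied to a suitably modified function if \(L\) has poles) interpolates linearly in \(\Re(s)\), giving at the center \[L(\tfrac{1}{2} + it) \;\ll_\varepsilon\; C(\tfrac{1}{2}+it)^{\frac{1}{4} + \varepsilon}.\] Subconvexity means replacing the exponent \(\tfrac{1}{4}\) by \(\tfrac{1}{4} - \delta\) for some fixed \(\delta > 0\).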
Some recent papers, concerning subconvexity or related problems, that haven’t been fully explored (e.g., interpreted via integral representations):
\(\delta\)-method papers such as (Sharma 2022) and (Aggarwal, Leung, and Munshi 2022)
Higher moments over very large families, as in (Chandee and Li 2020a), (Chandee and Li 2020b)
Rankin–Selberg when the rank difference is larger than one, as in (Blomer, Li, and Miller 2019)
Higher rank subconvex bounds (Blomer and Buttcane 2020), (Marshall 2023), (Nelson 2023), (Nelson 2021), (Hu and Nelson 2023). There are many “exercises” implicit in these papers; for instance, a half-dozen are suggested in (Nelson 2023, Remark 1.4). Some other questions:
These have all proceeded via arithmetic amplification. Is it possible to succeed in some cases via “family shortening” (as in, e.g., (Sarnak 2001))? A natural case to try would be the \(t\)-aspect (see the sketch after this list). Some experiments with \({\mathop{\mathrm{GL}}}_2\) suggest this is difficult (see https://ultronozm.github.io/math/20230522T174726__shrinking-archimedean-families-second-moment-gl2.html).
“Purely horizontal” aspects remain open, e.g., twists by Dirichlet characters of prime conductor on \({\mathop{\mathrm{GL}}}_4\).
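To make the \(t\)-aspect version of the family-shortening question concrete, here is a hedged sketch (indicative exponents; \(\theta\) is a placeholder). For a fixed Hecke–Maass cusp form \(f\), convexity in the \(t\)-aspect is \(L(\tfrac{1}{2}+iT, f) \ll T^{1/2+\varepsilon}\). A mean-value bound over a short window, \[\int_{|t - T| \le H} \Bigl( \bigl|L(\tfrac{1}{2} + it, f)\bigr|^2 + \bigl|L'(\tfrac{1}{2} + it, f)\bigr|^2 \Bigr)\,dt \;\ll\; H\,T^{\varepsilon}, \qquad H = T^{\theta},\] yields the pointwise bound \(L(\tfrac{1}{2} + iT, f) \ll T^{\theta/2 + \varepsilon}\) by the standard device of bounding a point value by short averages of \(|L|^2\) and \(|L\,L'|\), and this is subconvex for any fixed \(\theta < 1\). “Family shortening” here would mean establishing such a bound by period methods, with \(\theta\) smaller than what the classical moment computations reach; the linked note records some experiments in this direction.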
(Extra credit) Create a song of thematic relevance to the lectures. Examples: