


Faraway Fourier Talks 2021-2022

The Norbert Wiener Center Online Seminar on Harmonic Analysis and Applications


When:

Mondays at 2 pm EST.

Where:

Zoom. For access to the Zoom link, please join the mailing list by entering your information here.

Recorded talks will become available on our YouTube channel.


The Faraway Fourier Talks will resume in the spring semester of 2022.

Scheduled and completed talks:

 Sept. 27th, 2021  Professor David Walnut (GMU, UMD)  Exponential bases for partitions of intervals
For a partition of [0,1] into intervals I_1, ..., I_n we prove the existence of a partition of ℤ into Λ_1, ..., Λ_n such that the complex exponential functions with frequencies in Λ_k form a Riesz basis for L^2(I_k), and furthermore, that for any J ⊆ {1, 2, ..., n}, the exponential functions with frequencies in ⋃_{j∈J} Λ_j form a Riesz basis for L^2(I) for any interval I with length |I| = Σ_{j∈J} |I_j|. The construction extends to infinite partitions of [0,1], but with size limitations on the subsets J ⊆ ℤ. The construction utilizes an interesting assortment of tools from analysis, probability, and number theory.
This is joint work with Shauna Revay (GMU and Novetta), and Goetz Pfander (Catholic University of Eichstaett-Ingolstadt).
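A simple special case may help illustrate the flavor of the statement (this example is not from the abstract): for the partition I_1 = [0, 1/2], I_2 = [1/2, 1], one may take Λ_1 = 2ℤ and Λ_2 = 2ℤ + 1. The exponentials with frequencies in 2ℤ form an orthogonal, hence Riesz, basis for L^2(I_1); those with frequencies in 2ℤ + 1 are modulations of the first family and form a basis for L^2(I_2); and choosing J = {1, 2} recovers the classical fact that the exponentials with integer frequencies form a basis for L^2[0,1].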
Concluded | Recording | Slides
 Oct. 4th, 2021  Professor Anne Gelb (Dartmouth)  Empirical Bayesian inference using joint sparsity
We develop a new empirical Bayesian inference algorithm for solving a linear inverse problem given multiple measurement vectors (MMV) of under-sampled and noisy observable data. Specifically, by exploiting the joint sparsity across the multiple measurements in the sparse domain of the underlying signal or image, we construct a new support-informed sparsity-promoting prior. While a variety of applications can be modeled using this framework, our prototypical example comes from synthetic aperture radar (SAR), in which data are acquired from neighboring aperture windows. Hence a good test case is to consider the observations modeled as noisy Fourier samples. Our numerical experiments demonstrate that using the support-informed sparse prior not only improves the accuracy of the recovery, but also reduces the uncertainty in the posterior when compared to standard sparsity-promoting priors.

This is joint work with Theresa Scarnati, formerly of the Air Force Research Lab at Wright-Patterson and now at Qualis Corporation in Huntsville, AL, and Jack Zhang, a recent bachelor's degree recipient at Dartmouth College now enrolled in the University of Minnesota's PhD program in mathematics.
Concluded | Recording
 Oct. 11th, 2021  Professor Zuowei Shen (National University of Singapore)  Deep Approximation via Deep Learning
The primary task of many applications is approximating/estimating a function through samples drawn from a probability distribution on the input space. Deep approximation is to approximate a function by compositions of many layers of simple functions, which can be viewed as a series of nested feature extractors. The key idea of a deep learning network is to convert the layers of compositions into layers of tunable parameters that can be adjusted through a learning process, so that the network achieves a good approximation with respect to the input data. In this talk, we shall discuss the mathematical theory behind this new approach and the approximation rates of deep networks; we will also discuss how this new approach differs from classic approximation theory, and how this new theory can be used to understand and design deep learning networks.
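For orientation (notation ours, not the speaker's), the compositional form referred to above can be written as
\[
f(x) \approx \phi_L \circ \phi_{L-1} \circ \cdots \circ \phi_1(x), \qquad \phi_\ell(x) = \sigma(W_\ell x + b_\ell),
\]
where each simple layer \phi_\ell has tunable parameters (W_\ell, b_\ell) that are adjusted during training.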
Concluded | Recording
 Oct. 18th, 2021  Professor Amit Singer (Princeton)  Wilson Statistics: Derivation, Generalization, and Applications to Cryo-EM
The power spectrum of proteins at high frequencies is remarkably well described by the flat Wilson statistics. Wilson statistics therefore plays a significant role in X-ray crystallography and more recently in cryo-EM. Specifically, modern computational methods for three-dimensional map sharpening and atomic modeling of macromolecules by single particle cryo-EM are based on Wilson statistics. In this talk we use certain results about the decay rate of the Fourier transform to provide the first rigorous mathematical derivation of Wilson statistics. The derivation pinpoints the regime of validity of Wilson statistics in terms of the size of the macromolecule. Moreover, the analysis naturally leads to generalizations of the statistics to covariance and higher order spectra. These in turn provide theoretical foundation for assumptions underlying the widespread Bayesian inference framework for three-dimensional refinement and for explaining the limitations of autocorrelation based methods in cryo-EM.
Concluded | Recording | Slides
 Oct. 25th, 2021  Professor John Klauder (University of Florida)  Expanding Quantum Field Theory Using Affine Quantization
Quantum field theory traditionally uses canonical quantization (CQ), which often fails, e.g., for φ^4_4. Affine quantization (AQ), which will be introduced, can solve a variety of problems that CQ cannot. AQ can even be used to solve certain models regarded as nonrenormalizable. The specific procedures of AQ lead to a novel Fourier transformation that illustrates how AQ can make a generous contribution to quantum field theory.
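For orientation (standard background, not part of the abstract): whereas CQ promotes the canonical pair (p, q) to operators (P, Q) with [Q, P] = iħ, AQ instead promotes the affine pair (d, q) = (pq, q), with q restricted in sign, to operators (D, Q), where D = (PQ + QP)/2 and [Q, D] = iħQ.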
Concluded | Recording | Slides
 Nov. 1st, 2021  Professor Thomas Strohmer (UC Davis)  Fighting Surveillance Capitalism with Mathematics
'Sharing is Caring', we are taught. However, in the Age of Surveillance Capitalism, a new economic system that pushes for relentless data capture and analysis, we had better think twice about what we share. As data sharing increasingly locks horns with data-privacy concerns, synthetic data are gaining traction as a potential solution to the aporetic conflict between privacy and utility. The goal of synthetic data is to create an as-realistic-as-possible dataset, one that not only maintains the nuances of the original data, but does so without risk of exposing sensitive information. As such, synthetic data can be instrumental in reestablishing the balance between the need for data that drives AI advances and the fundamental right to data protection for citizens and consumers. However, the road to privacy is paved with NP-hard problems! In this talk I will present three recent mathematical breakthroughs in the NP-hard challenge of creating, in a computationally efficient way, synthetic data that come with provable privacy and utility guarantees. We draw from a wide range of mathematical concepts, including Boolean Fourier analysis, duality, empirical processes, and microaggregation. For instance, we will see some surprising connections between theoretical probability and anonymization. I will also present the first noise-free method to achieve differential privacy and discuss applications of our approach for data analysis tasks arising in the Intensive Care Unit.
This is joint work with March Boedihardjo and Roman Vershynin.
Concluded | Recording
 Nov. 8th, 2021  Professor Robert Calderbank (Duke)  Climbing the Diagonal Clifford Hierarchy
Quantum computers are moving out of physics labs and becoming generally programmable. In this talk, we start from quantum algorithms like magic state distillation and Shor factoring that make essential use of diagonal logical gates. The difficulty of reliably implementing these gates in some quantum error correcting code (QECC) is measured by their level in the Clifford hierarchy, a mathematical framework that was defined by Gottesman and Chuang when introducing the teleportation model of quantum computation. We describe a method of working backwards from a target logical diagonal gate at some level in the Clifford hierarchy to a quantum error correcting code (CSS code) in which the target logical can be implemented reliably.
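For readers unfamiliar with the term (a standard definition, not part of the abstract): the Clifford hierarchy is defined recursively by letting C^(1) be the Pauli group and C^(k) = { U : U P U† ∈ C^(k-1) for every Pauli P }, so that C^(2) is the Clifford group and gates such as the T gate sit at the third level.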

This talk describes joint work with my graduate students Jingzhen Hu and Qingzhong Liang.
Concluded | Recording | Slides
 Nov. 15th, 2021  Professor Hans Feichtinger (Vienna)  Conceptual Harmonic Analysis: Tools and Goals
The Ubiquitous Role of BUPUs
For almost 14 years the speaker has been trying to promote the idea of ``CONCEPTUAL HARMONIC ANALYSIS'' as a way to combine, or rather reconcile, Abstract Harmonic Analysis (AHA) with Computational Harmonic Analysis (CHA) and much more. In particular, the long history of Fourier Analysis (by now 200 years!) has contributed to a diversification of methods and standards. This has led to the unpleasant situation that mathematicians, engineers, and physicists have their own notations, their own settings and habits, and numerical work is often only seen as a way to illustrate the continuous theory, or to simulate a problem in order to improve the heuristic basis for the proper development of a mathematical theory.

Going back to André Weil and Hans Reiter, one can say that the natural domain for Fourier Analysis is the setting of LCA groups. The same is true for time-frequency analysis and Gabor Analysis. But in the world of AHA we can discuss the analogy between different groups G. Once the dual group G^ has been identified, we can define the forward and inverse Fourier transform, define time-frequency shifts and the STFT, and discuss the reconstruction from samples (for band-limited functions, or from the STFT).

Obviously one expects that the FFT should be useful in computing, at least approximately, the Fourier transform of a nice function, or the convolution of two functions, or perhaps even of measures. We should motivate the approaches and ideally provide a guarantee (in the spirit of numerical integration methods) that the computations deliver good quantitative results. Ideally, the approach should avoid unnecessary technicalities (such as Lebesgue integration or Fréchet spaces such as S(R)), at least for the problems relevant for digital signal processing. Of course, suitable function spaces are required in order to express properly that computations deliver a good approximation of a given signal.

In the talk the speaker will report on attempts to rebuild Fourier Analysis over LCA groups (including R^d) from scratch. First, convolution of bounded measures is introduced via translation-invariant systems, and then the Fourier-Stieltjes transform, up to the convolution theorem. BUPUs (bounded uniform partitions of unity) play an important role here. As an intermediate goal the space S_0(G) is introduced, and finally the Banach Gelfand Triple (S_0, L_2, S_0*). Most spaces relevant for classical Fourier Analysis are then sandwiched between S_0 and S_0* and are isometrically invariant under time-frequency shifts.

Overall, the focus of the talk will be on alternative ways to provide a proper foundation for AHA; it will discuss non-standard function spaces (avoiding Lebesgue spaces as a starting point) and suggest an interpretation of signals as ``mild distributions'' (members of S_0*), i.e., signals having a bounded STFT. On the other hand, we need computational tools plus quantitative and constructive approximations of guaranteed quality.
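For readers unfamiliar with these objects, the standard definitions (stated here for R^d rather than a general LCA group; this note is not part of the abstract) are
\[
V_g f(x,\omega) = \int_{\mathbb{R}^d} f(t)\,\overline{g(t-x)}\,e^{-2\pi i \omega\cdot t}\,dt
\]
for a fixed non-zero window g (e.g., a Gaussian); S_0(R^d) consists of the functions f with V_g f ∈ L^1, and its dual S_0* consists of the mild distributions, i.e., those f whose STFT V_g f is bounded.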
Concluded | Recording
 Nov. 22nd, 2021  Professor Rama Chellappa (Johns Hopkins)  Design of Unbiased, Adaptive and Robust AI Systems
Over the last decade, algorithms and systems based on deep learning and other data-driven methods have contributed to the reemergence of Artificial Intelligence-based systems with applications in national security, defense, medicine, intelligent transportation, and many other domains. However, another AI winter may be lurking around the corner if challenges due to bias, domain shift and lack of robustness to adversarial attacks are not considered while designing the AI systems. In this talk, I will present our approach to bias mitigation and designing AI systems that are robust to domain shift and a variety of adversarial attacks.
Concluded | Recording | Slides
 Dec. 6th, 2021  Professor Dustin Mixon (OSU)  Three proofs of the Benedetto--Fickus theorem
In 2003, Benedetto and Fickus introduced a vivid intuition for an objective function called the frame potential, whose global minimizers are fundamental objects known today as unit norm tight frames. Their main result was that the frame potential exhibits no spurious local minimizers, suggesting local optimization as an approach to construct these objects. Local optimization has since become the workhorse of cutting-edge signal processing and machine learning, and accordingly, the community has identified a variety of techniques to study optimization landscapes. This talk applies some of these techniques to obtain three modern proofs of the Benedetto--Fickus theorem. Joint work with Tom Needham, Clayton Shonkwiler, and Soledad Villar.
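As a rough illustration of the local-optimization approach described above (a minimal sketch with arbitrary parameter choices, not the authors' code), one can minimize the frame potential FP(Phi) = Σ_{i,j} |⟨φ_i, φ_j⟩|^2 over unit-norm vectors by projected gradient descent:

import numpy as np

def frame_potential(Phi):
    # Phi has shape (d, N); columns are the frame vectors.
    G = Phi.T @ Phi              # matrix of pairwise inner products
    return np.sum(G ** 2)

def minimize_frame_potential(d=3, N=5, steps=5000, lr=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((d, N))
    Phi /= np.linalg.norm(Phi, axis=0)       # project columns onto the unit sphere
    for _ in range(steps):
        grad = 4 * Phi @ (Phi.T @ Phi)       # gradient of the frame potential
        Phi -= lr * grad
        Phi /= np.linalg.norm(Phi, axis=0)   # re-impose unit norm
    return Phi

Phi = minimize_frame_potential()
# For a unit norm tight frame with N = 5 vectors in R^3, the frame
# potential attains its lower bound N^2/d = 25/3.
print(frame_potential(Phi))

The Benedetto--Fickus theorem, that the frame potential has no spurious local minimizers, is what justifies expecting such a local search to reach this global bound.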
Concluded | Recording
 Dec. 13th, 2021  Professor Hrushikesh Mhaskar (Claremont Graduate University)  Learning without training
The fundamental problem of machine learning is often formulated as the problem of function approximation as follows. Starting with data of the form {(x_i, y_i)} sampled from an unknown joint distribution t, approximate f(x) = E_t (y|x). Since t is unknown, it is impossible to give a constructive method to find the minimizer of the generalization error, defined as the deviation of the model from the target function f in L^2(t). Instead, the estimation of this error is made independently of the actual construction, known as training, which is based on the minimization of another objective function. In this talk, we will point out some pitfalls of this paradigm, and describe our efforts to bypass this procedure and construct a “good” approximation to f directly from the data. While our construction is universal in the sense that it does not involve any assumptions on the target function, we will obtain probabilistic estimates on the pointwise deviation between our approximation and f under some smoothness assumption on f. The talk is mostly theoretical, but some proof-of-concept applications are discussed.
Concluded | Recording
 Dec. 20th, 2021  Professor Weilin Li (Courant Institute)  Function approximation with one-bit Bernstein and neural networks
The celebrated universal approximation theorems for neural networks typically state that every sufficiently nice function can be arbitrarily well approximated by a neural network with carefully chosen parameters. Motivated by recent questions regarding compression and overparameterization, we ask whether it is possible to represent any reasonable function with a neural network whose parameters are restricted to a small set of values, with the extreme case being one-bit {+1,-1} neural networks. We answer this question in the affirmative. One of our main innovations is a novel approximation result for linear combinations of multivariate Bernstein polynomials, with only +1 and -1 coefficients. Joint work with Sinan Gunturk.
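For context (univariate case, notation ours rather than the speaker's), the Bernstein polynomials of degree n are
\[
B_{k,n}(x) = \binom{n}{k} x^k (1-x)^{n-k}, \qquad k = 0, \dots, n,
\]
and the one-bit question concerns approximating a target function, possibly after rescaling, by combinations Σ_k ε_k B_{k,n}(x) with coefficients ε_k ∈ {+1, -1}; the talk treats the multivariate analogue.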
Concluded | Recording

Organizing Committee:
Radu Balan
Jacob Bedrossian
John Benedetto
Maria Cameron
Wojciech Czaja
Tom Goldstein
Vince Lyzinski


 

In cooperation with

SIAM

 

Now in Print!
Excursions in Harmonic Analysis:
The Fall Fourier Talks at the Norbert Wiener Center

Excursions in Harmonic Analysis, Volume 1
Excursions in Harmonic Analysis, Volume 2
Excursions in Harmonic Analysis, Volume 3
Excursions in Harmonic Analysis, Volume 4
Excursions in Harmonic Analysis, Volume 5