When:
Mondays at 11 a.m. PST / 2 p.m. EST / 8 p.m. CET
Where:
Zoom. For access to the Zoom link, please join the mailing list by entering your information here.
Recorded talks will be available in December on our YouTube channel.
Upcoming Talks:
January 25th, 2021 
Andrea Bertozzi (UCLA) 
Pseudo-Spectral Methods for High-Dimensional Data Analysis on Graphs 
I will speak about a general class of machine learning problems in which data lives on similarity graphs and the goal is to solve a penalized graph min-cut problem. Applications include semi-supervised learning, unsupervised learning, and modularity optimization, originally developed for community detection on networks but recast here as an unsupervised machine learning problem. These problems have a mathematical connection to total variation minimization in Euclidean space, and this analogy leads to a natural class of machine learning algorithms that mimic pseudospectral methods in nonlinear partial differential equations. The methods are graph analogues of geometric motion, e.g. motion by mean curvature and the MBO scheme that approximates those dynamics.
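The graph MBO scheme mentioned above can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: the function name `graph_mbo`, the toy weight matrix, and all parameters are assumptions. The scheme alternates graph heat diffusion (via the graph Laplacian) with pointwise thresholding, the graph analogue of threshold dynamics for motion by mean curvature.

```python
import numpy as np

def graph_mbo(W, known, labels, n_steps=20, dt=0.1):
    """Binary MBO scheme on a similarity graph: alternate heat diffusion
    (via the graph Laplacian) with thresholding to propagate labels."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W              # combinatorial Laplacian
    M = np.linalg.inv(np.eye(n) + dt * L)       # implicit diffusion step
    u = np.zeros(n)
    u[known] = labels
    for _ in range(n_steps):
        u = M @ u                               # diffuse
        u[known] = labels                       # re-impose known labels
        u = np.where(u >= 0, 1.0, -1.0)         # threshold
    return u

# Two weakly linked triangles; one labeled node per cluster.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.01
u = graph_mbo(W, known=[0, 3], labels=[1.0, -1.0])
```

In this toy example the two seed labels propagate through the strongly connected triangles while the weak cross-link keeps the clusters separate.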

February 1st, 2021 
Michael Lacey (Georgia Tech) 
February 8th, 2021 
Alfred Hero (University of Michigan) 
February 15th, 2021 
Akram Aldroubi (Vanderbilt University) 
February 22nd, 2021 
Pete Casazza (University of Missouri) 
March 1st, 2021 
Marcin Bownik (University of Oregon) 
March 8th, 2021 
Rodolfo Torres (University of California, Riverside) 
March 15th, 2021 
Rene Vidal (Johns Hopkins University) 
March 22nd, 2021 
Doug Cochran (Arizona State University) 
March 29th, 2021 
Chris Heil (Georgia Tech) 
April 5th, 2021 
Yurii Lyubarskii (Norwegian University of Science and Technology) 
April 12th, 2021 
Gestur Olafsson (Louisiana State University) 
April 19th, 2021 
Joel Tropp (Caltech) 
April 26th, 2021 
Anna Gilbert (Yale) 
May 3rd, 2021 
Afonso Bandeira (ETH Zurich) 
May 10th, 2021 
Min Wu (UMD) 
May 17th, 2021 
Darrin Speegle (Saint Louis University) 
May 24th, 2021 
Demetrio Labate (University of Houston) 
Previous Talks:
January 4th, 2021 
Qiyu Sun (University of Central Florida) 
Some Mathematical Problems in Graph Signal Processing 
Graph signal processing provides an innovative framework to handle data residing on various networks and many irregular domains. It is an emerging interdisciplinary field that merges algebraic and spectral graph theory with applied and computational harmonic analysis. In this talk, I will discuss some mathematical problems related to graph signal processing, with emphasis on phase retrieval and velocity field, filtering and inverse filtering, sampling and reconstruction, and distributed verification, implementation and optimization.
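The filtering and inverse-filtering theme above can be sketched with the graph Fourier transform. This is an illustrative example only, assuming a toy path graph and an assumed filter; none of it comes from the talk itself.

```python
import numpy as np

# Path graph on n nodes: adjacency matrix and Laplacian L = D - A.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier transform: eigenvectors of L play the role of sinusoids.
evals, U = np.linalg.eigh(L)

# A smooth signal plus noise, living on the graph's nodes.
x = np.sin(np.linspace(0, np.pi, n)) \
    + 0.3 * np.random.default_rng(1).standard_normal(n)

# Low-pass filter h(lambda) = 1/(1 + 2*lambda) applied spectrally.
h = 1.0 / (1.0 + 2.0 * evals)
x_filt = U @ (h * (U.T @ x))

# Inverse filtering recovers x exactly since h(lambda) > 0 everywhere.
x_rec = U @ ((U.T @ x_filt) / h)
```

Since the filter never vanishes on the spectrum of L, the inverse filter is well defined and `x_rec` equals `x` up to floating-point error; sampling and distributed implementation questions arise when such exact spectral computations are unavailable.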

December 21st, 2020 
Vivek Goyal (Boston University) 
One Click at a Time: Photon- and Electron-Level Modeling for Improved Imaging 
Detectors that are capable of sensing a single photon are no longer rare, yet the bulk of signal processing intuitions and methods have an implicit connection with Gaussian noise models. Particle-level modeling can lead to substantially different methods that sometimes perform dramatically better than classical methods. For example, using detectors with single-photon sensitivity enables lidar systems to form depth and reflectivity images at very long ranges. Initially, our interest was in exploiting inhomogeneous Poisson process models and the typical structure of natural scenes to achieve extremely high photon efficiency. However, modeling at the level of individual photons does not merely give advantages when signals are weak. It is also central to withstanding high levels of ambient light and mitigating the effects of detector dead time, which ordinarily create high bias in high-flux imaging. Our sensor signal processing advances thus potentially improve lidar performance in settings with very high dynamic range of optical flux, such as navigation of autonomous vehicles. Modeling of dead time can presumably improve many other applications of time-correlated single-photon counting. Furthermore, modeling at the level of individual incident particles and emitted secondary electrons leads to improvements in focused ion beam microscopy that apply uniformly over all dose levels.
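The dead-time bias mentioned above can be illustrated with a toy simulation. This is a hedged sketch under the standard non-paralyzable detector model; `detected_rate` and all parameter values are assumptions for illustration, not the speaker's methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def detected_rate(true_rate, dead_time, t_total=200.0):
    """Simulate a non-paralyzable photon detector: after each detection
    the detector is blind for `dead_time` seconds. Returns counts/second."""
    t, counts, ready_at = 0.0, 0, 0.0
    while True:
        t += rng.exponential(1.0 / true_rate)   # next photon arrival
        if t >= t_total:
            break
        if t >= ready_at:
            counts += 1
            ready_at = t + dead_time
    return counts / t_total

tau = 1e-3                                       # 1 ms dead time
for lam in (10.0, 100.0, 1000.0):
    obs = detected_rate(lam, tau)
    model = lam / (1.0 + lam * tau)              # classical dead-time formula
    print(f"true {lam:7.1f}/s  observed {obs:7.1f}/s  model {model:7.1f}/s")
```

At low flux the detector is nearly faithful, but at 1000 photons/s with 1 ms dead time roughly half the arrivals are missed, matching the classical rate-correction formula and illustrating why naive high-flux estimates are badly biased.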

December 14th, 2020 
Gil Strang (MIT) 
The Column-Row Factorization of a Matrix A = CR [Slides] 
Matrix factorizations like A = LU and A = USV^{T} have become the organizing principles of linear algebra. This expository paper develops a column-row factorization A = CR = (m × r)(r × n) for any matrix of rank r. The matrix C contains the first r independent columns of A: a basis for the column space. The matrix R contains the nonzero rows of the reduced row echelon form rref(A). Then R = [I F] P contains a matrix F that expresses the remaining n − r columns of A as combinations CF of the independent columns in C. When the independent columns don't all come first, P permutes those columns of I and F into their correct positions so that CR = [C CF] P produces A.
A = CR is an "interpolative decomposition" that includes r actual columns of A in C.
A more symmetric factorization A = C W^{−1} R^{*} also includes r actual rows of A in R^{*}.
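The A = CR construction above can be sketched numerically: compute rref(A), keep its nonzero rows as R, and take the pivot columns of A as C. This is a minimal sketch, assuming floating-point arithmetic up to a tolerance; `cr_factorization` is an illustrative name, not code from the talk.

```python
import numpy as np

def cr_factorization(A, tol=1e-10):
    """Column-row factorization A = C R for a matrix of rank r:
    C holds the first r independent columns of A, and R holds the
    nonzero rows of rref(A), so that C @ R reconstructs A."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()                     # reduce to rref via Gauss-Jordan
    pivots, row = [], 0
    for col in range(n):
        if row >= m:
            break
        p = row + np.argmax(np.abs(R[row:, col]))   # partial pivoting
        if abs(R[p, col]) < tol:
            continue                 # no pivot in this column
        R[[row, p]] = R[[p, row]]
        R[row] /= R[row, col]
        for i in range(m):
            if i != row:
                R[i] -= R[i, col] * R[row]
        pivots.append(col)
        row += 1
    C = A[:, pivots]                 # independent columns of A
    R = R[:row, :]                   # nonzero rows of rref(A)
    return C, R

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 1., 1.]])
C, R = cr_factorization(A)           # rank 2: C is 3x2, R is 2x3
assert np.allclose(C @ R, A)
```

Because rref(A) is unique, R is determined by A, and the pivot columns are exactly the first independent columns that the factorization calls for.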

November 30th, 2020 
Carlos Cabrelli (University of Buenos Aires) 
Frames by Operator Orbits 
I will review some results on the question of when the orbits {T^{j} g : j ∈ J, g ∈ G} of a bounded operator T acting on a Hilbert space H, where G is a subset of H, form a frame of H. I will also comment on recent advances. This is motivated by the Dynamical Sampling problem, which consists of recovering a time-evolving signal from its space-time samples.

November 23rd, 2020 
Tomaso Poggio (MIT) 
Deep Puzzles: Towards a Theoretical Understanding of Deep Learning 
Very recently, square loss has been observed to perform well in classification tasks with deep networks. However, a theoretical justification is lacking, unlike the cross-entropy case for which an asymptotic analysis is available. Here we discuss several observations on the dynamics of gradient flow under the square loss in ReLU networks. We show how convergence to a local minimum norm solution is expected when normalization techniques such as Batch Normalization (BN) or Weight Normalization (WN) are used, in a way which is similar to the behavior of linear degenerate networks under gradient descent (GD), though the reason for zero initial conditions is different. The main property of the minimizer that bounds its expected error is its norm: we prove that among all the interpolating solutions, the ones associated with smaller Frobenius norms of the weight matrices have better margin and better bounds on the expected classification error. The theory yields several predictions, including aspects of Donoho's Neural Collapse and the bias induced by BN on the weight matrices towards orthogonality.

November 16th, 2020 
Jean Pierre Gabardo (McMaster University) 
Factorization of positive definite functions through convolution and the Turán problem 
An open neighborhood U of 0 in Euclidean space is called symmetric if U = −U. Let PD(U) be the class of continuous positive definite functions supported on U and taking the value 1 at the origin. The Turán problem for U consists in computing the Turán constant of U, which is the supremum of the integrals of the functions in PD(U). Clearly, this problem can also be stated on any locally compact abelian group. In this talk, we will introduce the notion of "dual" Turán problem. In the case of a finite abelian group G, the Turán problem for a symmetric set S thus consists in maximizing the integral (which is just a finite sum) over G of the positive definite functions taking the value 1 at 0 and supported on S, while its dual is just the Turán problem for the set consisting of the complement of S together with the origin. We will show a surprising relationship between the maximizers of the Turán problem and those of the dual problem. In particular, their convolution product must be identically 1 on G. We then extend those results to Euclidean space by first finding an appropriate notion of dual Turán problem in this context. We will also point out an interesting connection between the Turán problem and frame theory by characterizing so-called Turán domains as domains admitting Parseval frames of (weighted) exponentials of a special kind.

November 9th, 2020 
Jill Pipher (Brown University) 
Boundary value problems for elliptic complex coefficient systems: the p-ellipticity condition 
Formulating and solving boundary value problems for divergence form real elliptic equations has been an active and productive area of research ever since the foundational work of De Giorgi, Nash, and Moser established Hölder continuity of solutions when the coefficients are merely bounded and measurable. The solutions to such real-valued equations share some important properties with harmonic functions: maximum principles, Harnack principles, and estimates up to the boundary that enable one to solve Dirichlet problems in the classical sense of nontangential convergence. Solutions to complex elliptic equations and elliptic systems do not necessarily share these good properties of continuity or maximum principles.
In joint work with M. Dindos, we introduced in 2017 a structural condition (p-ellipticity) on divergence form elliptic equations with complex-valued matrices which was inspired by a condition related to L^{p} contractivity due to Cialdea and Maz'ya. The p-ellipticity condition that generalizes Cialdea-Maz'ya was also simultaneously discovered by Carbonaro and Dragicevic, who used it to prove a bilinear embedding result. Subsequently, Feneuil, Mayboroda, and Zhao have used p-ellipticity to study well-posedness of a degenerate elliptic operator associated with domains with lower-dimensional boundary.
In this seminar, we discuss p-ellipticity for complex divergence form equations, and then describe recent work, joint with J. Li and M. Dindos, extending this condition to elliptic systems. In particular, we can give applications to solvability of Dirichlet problems for the Lamé systems.

November 2nd, 2020 
Ursula Molter (University of Buenos Aires) 
Riesz Bases of Exponentials and the Bohr Topology 
In this talk we address the question of which domains Ω ⊂ R^{d} of finite measure admit a Riesz basis of exponentials, that is, for which Ω there exists a discrete set B ⊂ R^{d} such that the exponentials E(B) = {e^{2πi b·ω} : b ∈ B} form a Riesz basis of L^{2}(Ω). Using the Bohr compactification of the integers, we show a necessary and sufficient condition to ensure that a multi-tile subset Ω of R^{d} of positive measure (but not necessarily bounded) admits a structured Riesz basis of exponentials for L^{2}(Ω). Here a set Ω ⊂ R^{d} is a k-multi-tile for Z^{d} if Σ_{λ ∈ Z^{d}} χ_{Ω}(ω − λ) = k for a.e. ω ∈ R^{d}.

October 26th, 2020 
Virginia Naibo (Kansas State University) 
Fractional Leibniz Rules: A Guided Tour 
The usual Leibniz rules express the derivative of a product of functions in terms of the derivatives of each of the factors. In an analogous sense, fractional Leibniz rules involve the concept of fractional derivative and provide estimates of the size and smoothness of a product of functions in terms of the size and smoothness of each of the factors. These bilinear estimates stem from the study of partial differential equations such as Euler, Navier-Stokes and Korteweg-de Vries. In this talk, I will present fractional Leibniz rules associated to bilinear pseudodifferential operators with homogeneous symbols, including Coifman-Meyer multipliers, and with symbols in the bilinear Hörmander classes. Through different approaches, the estimates will be discussed in the settings of weighted Lebesgue, Triebel-Lizorkin and Besov spaces.
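One representative instance of such a bilinear estimate, stated here as a sketch in its classical Lebesgue-space form (the precise hypotheses and function spaces in the talk may differ), is the Kato-Ponce inequality for the fractional derivative D^{s} = (−Δ)^{s/2}:

```latex
\|D^{s}(fg)\|_{L^{p}} \lesssim \|D^{s}f\|_{L^{p_1}}\,\|g\|_{L^{q_1}}
  + \|f\|_{L^{p_2}}\,\|D^{s}g\|_{L^{q_2}},
\qquad \frac{1}{p} = \frac{1}{p_1}+\frac{1}{q_1} = \frac{1}{p_2}+\frac{1}{q_2},
```

for suitable s > 0 and Hölder-compatible exponents: the fractional derivative of a product is controlled by putting the full derivative on one factor at a time, just as in the classical Leibniz rule.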
October 19th, 2020 
Ronald Coifman (Yale) 
Phase Unwinding Analysis: Nonlinear Fourier Transforms and Complex Dynamics 
Our goal here is to introduce recent developments in the analysis of highly oscillatory functions. In particular we will sketch methods extending conventional Fourier analysis, exploiting both the phases and amplitudes of holomorphic functions. The miracles of nonlinear complex holomorphic analysis, such as factorization and composition of functions, lead to new versions of holomorphic orthonormal bases, relating them to multiscale dynamical systems obtained by composing Blaschke factors.
We also remark that the phase of a Blaschke product is a one-layer neural net (with arctan as an activation sigmoid) and that the composition is a "deep neural net" whose depth is the number of compositions; our results provide a wealth of related libraries of orthogonal bases. We will also indicate a number of applications in medical signal processing, as well as in precision Doppler. Each droplet in the phase image represents a unit of a two-layer deep net and gives rise to an orthonormal basis of the Hardy space.

October 12th, 2020 
Alex Iosevich (University of Rochester) 
Finite Point Configurations and Applications to Frame Theory 
We are going to discuss some recent developments in the study of finite point configurations in sets of a given Hausdorff dimension. We shall also survey some applications of the finite point configuration machinery to problems of existence and nonexistence of exponential/Gabor bases and frames.

Organizing Committee:
Wojtek Czaja
Radu Balan
Jacob Bedrossian
John Benedetto
Vince Lyzinski
Thomas Goldstein
Ray Schram
